CLI tools
Learning how to use the command-line tools for OpenShift Container Platform
Chapter 1. OpenShift CLI (oc)
1.1. Getting started with the CLI
1.1.1. About the CLI
With the OpenShift Container Platform command-line interface (CLI), you can create applications and manage OpenShift Container Platform projects from a terminal. The CLI is ideal in situations where you:
- work directly with project source code.
- script OpenShift Container Platform operations.
- are restricted by bandwidth resources and cannot use the web console.
1.1.2. Installing the CLI
You can install the OpenShift CLI (oc) either by downloading the binary or by using an RPM.
1.1.2.1. Installing the CLI by downloading the binary
You can install the OpenShift CLI (oc) in order to interact with OpenShift Container Platform from a command-line interface. You can install oc on Linux, Windows, or macOS.
If you installed an earlier version of oc, you cannot use it to complete all of the commands in OpenShift Container Platform 4.4. Download and install the new version of oc.
1.1.2.1.1. Installing the CLI on Linux
You can install the OpenShift CLI (oc) binary on Linux by using the following procedure.
Procedure
- Navigate to the Infrastructure Provider page on the Red Hat OpenShift Cluster Manager site.
- Select your infrastructure provider, and, if applicable, your installation type.
- In the Command-line interface section, select Linux from the drop-down menu and click Download command-line tools.
- Unpack the archive:
$ tar xvzf <file>
- Place the oc binary in a directory that is on your PATH.
To check your PATH, execute the following command:
$ echo $PATH
After you install the CLI, it is available using the oc command:
$ oc <command>
1.1.2.1.2. Installing the CLI on Windows
You can install the OpenShift CLI (oc) binary on Windows by using the following procedure.
Procedure
- Navigate to the Infrastructure Provider page on the Red Hat OpenShift Cluster Manager site.
- Select your infrastructure provider, and, if applicable, your installation type.
- In the Command-line interface section, select Windows from the drop-down menu and click Download command-line tools.
- Unzip the archive with a ZIP program.
- Move the oc binary to a directory that is on your PATH.
To check your PATH, open the command prompt and execute the following command:
C:\> path
After you install the CLI, it is available using the oc command:
C:\> oc <command>
1.1.2.1.3. Installing the CLI on macOS
You can install the OpenShift CLI (oc) binary on macOS by using the following procedure.
Procedure
- Navigate to the Infrastructure Provider page on the Red Hat OpenShift Cluster Manager site.
- Select your infrastructure provider, and, if applicable, your installation type.
- In the Command-line interface section, select MacOS from the drop-down menu and click Download command-line tools.
- Unpack and unzip the archive.
- Move the oc binary to a directory on your PATH.
To check your PATH, open a terminal and execute the following command:
$ echo $PATH
After you install the CLI, it is available using the oc command:
$ oc <command>
1.1.2.2. Installing the CLI by using an RPM
For Red Hat Enterprise Linux (RHEL), you can install the OpenShift CLI (oc) as an RPM if you have an active OpenShift Container Platform subscription on your Red Hat account.
Prerequisites
- You must have root or sudo privileges.
Procedure
- Register with Red Hat Subscription Manager:
# subscription-manager register
- Pull the latest subscription data:
# subscription-manager refresh
- List the available subscriptions:
# subscription-manager list --available --matches '*OpenShift*'
- In the output for the previous command, find the pool ID for an OpenShift Container Platform subscription and attach the subscription to the registered system:
# subscription-manager attach --pool=<pool_id>
- Enable the repositories required by OpenShift Container Platform 4.4.
For Red Hat Enterprise Linux 8:
# subscription-manager repos --enable="rhocp-4.4-for-rhel-8-x86_64-rpms"
For Red Hat Enterprise Linux 7:
# subscription-manager repos --enable="rhel-7-server-ose-4.4-rpms"
- Install the openshift-clients package:
# yum install openshift-clients
After you install the CLI, it is available using the oc command:
$ oc <command>
1.1.3. Logging in to the CLI
You can log in to the oc CLI to access and manage your cluster.
Prerequisites
- You must have access to an OpenShift Container Platform cluster.
- You must have installed the CLI.
To access a cluster that is accessible only over an HTTP proxy server, you can set the HTTP_PROXY, HTTPS_PROXY, and NO_PROXY variables. These environment variables are respected by the oc CLI so that all communication with the cluster goes through the HTTP proxy.
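For example, you might export these variables in your shell before running oc commands; the proxy address and exclusion list shown here are placeholders:
$ export HTTP_PROXY=http://<proxy_host>:<proxy_port>
$ export HTTPS_PROXY=http://<proxy_host>:<proxy_port>
$ export NO_PROXY=<comma_separated_hosts_to_exclude>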
Procedure
- Log in to the CLI using the oc login command and enter the required information when prompted.
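For example, you can also specify your user name and the cluster API server explicitly; both values here are placeholders:
$ oc login https://api.<cluster_name>.<base_domain>:6443 -u <username>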
You can now create a project or issue other commands for managing your cluster.
1.1.4. Using the CLI
Review the following sections to learn how to complete common tasks using the CLI.
1.1.4.1. Creating a project
Use the oc new-project command to create a new project.
$ oc new-project my-project
Now using project "my-project" on server "https://openshift.example.com:6443".
1.1.4.2. Creating a new app
Use the oc new-app command to create a new application.
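For example, the following command creates an application from the remote Git repository that is used in examples later in this guide:
$ oc new-app https://github.com/sclorg/cakephp-ex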
1.1.4.3. Viewing pods
Use the oc get pods command to view the pods for the current project.
$ oc get pods -o wide
NAME                  READY   STATUS      RESTARTS   AGE     IP            NODE                           NOMINATED NODE
cakephp-ex-1-build    0/1     Completed   0          5m45s   10.131.0.10   ip-10-0-141-74.ec2.internal    <none>
cakephp-ex-1-deploy   0/1     Completed   0          3m44s   10.129.2.9    ip-10-0-147-65.ec2.internal    <none>
cakephp-ex-1-ktz97    1/1     Running     0          3m33s   10.128.2.11   ip-10-0-168-105.ec2.internal   <none>
1.1.4.4. Viewing pod logs
Use the oc logs command to view logs for a particular pod.
$ oc logs cakephp-ex-1-deploy
--> Scaling cakephp-ex-1 to 1
--> Success
1.1.4.5. Viewing the current project
Use the oc project command to view the current project.
$ oc project
Using project "my-project" on server "https://openshift.example.com:6443".
1.1.4.6. Viewing the status for the current project
Use the oc status command to view information about the current project, such as services, deployments, and build configs.
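For example:
$ oc status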
1.1.4.7. Listing supported API resources
Use the oc api-resources command to view the list of supported API resources on the server.
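For example:
$ oc api-resources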
1.1.5. Getting help
You can get help with CLI commands and OpenShift Container Platform resources in the following ways.
- Use oc help to get a list and description of all available CLI commands:
Example: Get general help for the CLI
$ oc help
- Use the --help flag to get help about a specific CLI command:
Example: Get help for the oc create command
$ oc create --help
- Use the oc explain command to view the description and fields for a particular resource:
Example: View documentation for the Pod resource
$ oc explain pods
1.1.6. Logging out of the CLI
You can log out of the CLI to end your current session.
- Use the oc logout command:
$ oc logout
Logged "user1" out on "https://openshift.example.com"
This deletes the saved authentication token from the server and removes it from your configuration file.
1.2. Configuring the CLI
1.2.1. Enabling tab completion
After you install the oc CLI tool, you can enable tab completion to automatically complete oc commands or suggest options when you press Tab.
Prerequisites
- You must have the oc CLI tool installed.
Procedure
The following procedure enables tab completion for Bash.
- Save the Bash completion code to a file:
$ oc completion bash > oc_bash_completion
- Copy the file to /etc/bash_completion.d/:
$ sudo cp oc_bash_completion /etc/bash_completion.d/
You can also save the file to a local directory and source it from your .bashrc file instead.
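For example, assuming you saved the completion file in your home directory, you might add a line like the following to your .bashrc (the path is illustrative):
source ~/oc_bash_completion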
Tab completion is enabled when you open a new terminal.
1.3. Extending the CLI with plug-ins
You can write and install plug-ins to build on the default oc commands, allowing you to perform new and more complex tasks with the OpenShift Container Platform CLI.
1.3.1. Writing CLI plug-ins
You can write a plug-in for the OpenShift Container Platform CLI in any programming language or script that allows you to write command-line commands. Note that you cannot use a plug-in to overwrite an existing oc command.
OpenShift CLI plug-ins are currently a Technology Preview feature. Technology Preview features are not supported with Red Hat production service level agreements (SLAs), might not be functionally complete, and Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
See the Red Hat Technology Preview features support scope for more information.
Procedure
This procedure creates a simple Bash plug-in that prints a message to the terminal when the oc foo command is issued.
- Create a file called oc-foo.
  When naming your plug-in file, keep the following in mind:
  - The file must begin with oc- or kubectl- in order to be recognized as a plug-in.
  - The file name determines the command that invokes the plug-in. For example, a plug-in with the file name oc-foo-bar can be invoked by a command of oc foo bar. You can also use underscores if you want the command to contain dashes. For example, a plug-in with the file name oc-foo_bar can be invoked by a command of oc foo-bar.
- Add the following contents to the file.
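A minimal Bash sketch that fits this procedure looks like the following; the echoed message and version string are illustrative:
#!/bin/bash

# Optional argument handling: report a version when asked.
if [[ "$1" == "version" ]]; then
    echo "1.0.0"
    exit 0
fi

# Default behavior: print a message to the terminal.
echo "I am a plug-in named oc-foo"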
After you install this plug-in for the OpenShift Container Platform CLI, it can be invoked using the oc foo command.
Additional resources
- Review the Sample plug-in repository for an example of a plug-in written in Go.
- Review the CLI runtime repository for a set of utilities to assist in writing plug-ins in Go.
1.3.2. Installing and using CLI plug-ins
After you write a custom plug-in for the OpenShift Container Platform CLI, you must install it to use the functionality that it provides.
OpenShift CLI plug-ins are currently a Technology Preview feature. Technology Preview features are not supported with Red Hat production service level agreements (SLAs), might not be functionally complete, and Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
See the Red Hat Technology Preview features support scope for more information.
Prerequisites
- You must have the oc CLI tool installed.
- You must have a CLI plug-in file that begins with oc- or kubectl-.
Procedure
- If necessary, update the plug-in file to be executable:
$ chmod +x <plugin_file>
- Place the file anywhere in your PATH, such as /usr/local/bin/:
$ sudo mv <plugin_file> /usr/local/bin/.
- Run oc plugin list to make sure that the plug-in is listed:
$ oc plugin list
The following compatible plugins are available:
  /usr/local/bin/<plugin_file>
If your plug-in is not listed here, verify that the file begins with oc- or kubectl-, is executable, and is on your PATH.
- Invoke the new command or option introduced by the plug-in.
For example, if you built and installed the kubectl-ns plug-in from the Sample plug-in repository, you can use the following command to view the current namespace:
$ oc ns
Note that the command to invoke the plug-in depends on the plug-in file name. For example, a plug-in with the file name of oc-foo-bar is invoked by the oc foo bar command.
1.4. Developer CLI commands
1.4.1. Basic CLI commands
1.4.1.1. explain
Display documentation for a certain resource.
Example: Display documentation for pods
$ oc explain pods
1.4.1.2. login
Log in to the OpenShift Container Platform server and save login information for subsequent use.
Example: Interactive login
$ oc login
Example: Log in specifying a user name
$ oc login -u user1
1.4.1.3. new-app
Create a new application by specifying source code, a template, or an image.
Example: Create a new application from a local Git repository
$ oc new-app .
Example: Create a new application from a remote Git repository
$ oc new-app https://github.com/sclorg/cakephp-ex
Example: Create a new application from a private remote repository
$ oc new-app https://github.com/youruser/yourprivaterepo --source-secret=yoursecret
1.4.1.4. new-project
Create a new project and switch to it as the default project in your configuration.
Example: Create a new project
$ oc new-project myproject
1.4.1.5. project
Switch to another project and make it the default in your configuration.
Example: Switch to a different project
$ oc project test-project
1.4.1.6. projects
Display information about the current active project and existing projects on the server.
Example: List all projects
$ oc projects
1.4.1.7. status
Show a high-level overview of the current project.
Example: Show the status of the current project
$ oc status
1.4.2. Build and Deploy CLI commands
1.4.2.1. cancel-build
Cancel a running, pending, or new build.
Example: Cancel a build
$ oc cancel-build python-1
Example: Cancel all pending builds from the python build config
$ oc cancel-build buildconfig/python --state=pending
1.4.2.2. import-image
Import the latest tag and image information from an image repository.
Example: Import the latest image information
$ oc import-image my-ruby
1.4.2.3. new-build
Create a new build config from source code.
Example: Create a build config from a local Git repository
$ oc new-build .
Example: Create a build config from a remote Git repository
$ oc new-build https://github.com/sclorg/cakephp-ex
1.4.2.4. rollback
Revert an application back to a previous deployment.
Example: Roll back to the last successful deployment
$ oc rollback php
Example: Roll back to a specific version
$ oc rollback php --to-version=3
1.4.2.5. rollout
Start a new rollout, view its status or history, or roll back to a previous revision of your application.
Example: Roll back to the last successful deployment
$ oc rollout undo deploymentconfig/php
Example: Start a new rollout for a deployment with its latest state
$ oc rollout latest deploymentconfig/php
1.4.2.6. start-build
Start a build from a build config or copy an existing build.
Example: Start a build from the specified build config
$ oc start-build python
Example: Start a build from a previous build
$ oc start-build --from-build=python-1
Example: Set an environment variable to use for the current build
$ oc start-build python --env=mykey=myvalue
1.4.2.7. tag
Tag existing images into image streams.
Example: Configure the ruby image’s latest tag to refer to the image for the 2.0 tag
$ oc tag ruby:latest ruby:2.0
1.4.3. Application management CLI commands
1.4.3.1. annotate
Update the annotations on one or more resources.
Example: Add an annotation to a route
$ oc annotate route/test-route haproxy.router.openshift.io/ip_whitelist="192.168.1.10"
Example: Remove the annotation from the route
$ oc annotate route/test-route haproxy.router.openshift.io/ip_whitelist-
1.4.3.2. apply
Apply a configuration to a resource by file name or standard in (stdin) in JSON or YAML format.
Example: Apply the configuration in pod.json to a pod
$ oc apply -f pod.json
1.4.3.3. autoscale
Autoscale a deployment or replication controller.
Example: Autoscale to a minimum of two and maximum of five pods
$ oc autoscale deploymentconfig/parksmap-katacoda --min=2 --max=5
1.4.3.4. create
Create a resource by file name or standard in (stdin) in JSON or YAML format.
Example: Create a pod using the content in pod.json
$ oc create -f pod.json
1.4.3.5. delete
Delete a resource.
Example: Delete a pod named parksmap-katacoda-1-qfqz4
$ oc delete pod/parksmap-katacoda-1-qfqz4
Example: Delete all pods with the app=parksmap-katacoda label
$ oc delete pods -l app=parksmap-katacoda
1.4.3.6. describe
Return detailed information about a specific object.
Example: Describe a deployment named example
$ oc describe deployment/example
Example: Describe all pods
$ oc describe pods
1.4.3.7. edit
Edit a resource.
Example: Edit a deployment using the default editor
$ oc edit deploymentconfig/parksmap-katacoda
Example: Edit a deployment using a different editor
$ OC_EDITOR="nano" oc edit deploymentconfig/parksmap-katacoda
Example: Edit a deployment in JSON format
$ oc edit deploymentconfig/parksmap-katacoda -o json
1.4.3.8. expose
Expose a service externally as a route.
Example: Expose a service
$ oc expose service/parksmap-katacoda
Example: Expose a service and specify the host name
$ oc expose service/parksmap-katacoda --hostname=www.my-host.com
1.4.3.9. get
Display one or more resources.
Example: List pods in the default namespace
$ oc get pods -n default
Example: Get details about the python deployment in JSON format
$ oc get deploymentconfig/python -o json
1.4.3.10. label
Update the labels on one or more resources.
Example: Update the python-1-mz2rf pod with the label status set to unhealthy
$ oc label pod/python-1-mz2rf status=unhealthy
1.4.3.11. scale
Set the desired number of replicas for a replication controller or a deployment.
Example: Scale the ruby-app deployment to three pods
$ oc scale deploymentconfig/ruby-app --replicas=3
1.4.3.12. secrets
Manage secrets in your project.
Example: Allow my-pull-secret to be used as an image pull secret by the default service account
$ oc secrets link default my-pull-secret --for=pull
1.4.3.13. serviceaccounts
Get a token assigned to a service account or create a new token or kubeconfig file for a service account.
Example: Get the token assigned to the default service account
$ oc serviceaccounts get-token default
1.4.3.14. set
Configure existing application resources.
Example: Set the name of a secret on a build config
$ oc set build-secret --source buildconfig/mybc mysecret
1.4.4. Troubleshooting and debugging CLI commands
1.4.4.1. attach
Attach the shell to a running container.
Example: Get output from the python container from pod python-1-mz2rf
$ oc attach python-1-mz2rf -c python
1.4.4.2. cp
Copy files and directories to and from containers.
Example: Copy a file from the python-1-mz2rf pod to the local file system
$ oc cp default/python-1-mz2rf:/opt/app-root/src/README.md ~/mydirectory/.
1.4.4.3. debug
Launch a command shell to debug a running application.
Example: Debug the python deployment
$ oc debug deploymentconfig/python
1.4.4.4. exec
Execute a command in a container.
Example: Execute the ls command in the python container from pod python-1-mz2rf
$ oc exec python-1-mz2rf -c python ls
1.4.4.5. logs
Retrieve the log output for a specific build, build config, deployment, or pod.
Example: Stream the latest logs from the python deployment
$ oc logs -f deploymentconfig/python
1.4.4.6. port-forward
Forward one or more local ports to a pod.
Example: Listen on port 8888 locally and forward to port 5000 in the pod
$ oc port-forward python-1-mz2rf 8888:5000
1.4.4.7. proxy
Run a proxy to the Kubernetes API server.
Example: Run a proxy to the API server on port 8011, serving static content from ./local/www/
$ oc proxy --port=8011 --www=./local/www/
1.4.4.8. rsh
Open a remote shell session to a container.
Example: Open a shell session on the first container in the python-1-mz2rf pod
$ oc rsh python-1-mz2rf
1.4.4.9. rsync
Copy the contents of a directory to or from a running pod container. Only changed files are copied, using the rsync command from your operating system.
Example: Synchronize files from a local directory with a pod directory
$ oc rsync ~/mydirectory/ python-1-mz2rf:/opt/app-root/src/
1.4.4.10. run
Create and run a particular image. By default, this creates a DeploymentConfig object to manage the created containers.
Example: Start an instance of the perl image with three replicas
$ oc run my-test --image=perl --replicas=3
1.4.4.11. wait
Wait for a specific condition on one or more resources.
This command is experimental and might change without notice.
Example: Wait for the python-1-mz2rf pod to be deleted
$ oc wait --for=delete pod/python-1-mz2rf
1.4.5. Advanced developer CLI commands
1.4.5.1. api-resources
Display the full list of API resources that the server supports.
Example: List the supported API resources
$ oc api-resources
1.4.5.2. api-versions
Display the full list of API versions that the server supports.
Example: List the supported API versions
$ oc api-versions
1.4.5.3. auth
Inspect permissions and reconcile RBAC roles.
Example: Check whether the current user can read pod logs
$ oc auth can-i get pods --subresource=log
Example: Reconcile RBAC roles and permissions from a file
$ oc auth reconcile -f policy.json
1.4.5.4. cluster-info
Display the address of the master and cluster services.
Example: Display cluster information
$ oc cluster-info
1.4.5.5. convert
Convert a YAML or JSON configuration file to a different API version and print to standard output (stdout).
Example: Convert pod.yaml to the latest version
$ oc convert -f pod.yaml
1.4.5.6. extract
Extract the contents of a config map or secret. Each key in the config map or secret is created as a separate file with the name of the key.
Example: Download the contents of the ruby-1-ca config map to the current directory
$ oc extract configmap/ruby-1-ca
Example: Print the contents of the ruby-1-ca config map to stdout
$ oc extract configmap/ruby-1-ca --to=-
1.4.5.7. idle
Idle scalable resources. An idled service automatically becomes unidled when it receives traffic, or it can be manually unidled by using the oc scale command.
Example: Idle the ruby-app service
$ oc idle ruby-app
1.4.5.8. image
Manage images in your OpenShift Container Platform cluster.
Example: Copy an image to another tag
$ oc image mirror myregistry.com/myimage:latest myregistry.com/myimage:stable
1.4.5.9. observe
Observe changes to resources and take action on them.
Example: Observe changes to services
$ oc observe services
1.4.5.10. patch
Update one or more fields of an object by using a strategic merge patch in JSON or YAML format.
Example: Update the spec.unschedulable field for node node1 to true
$ oc patch node/node1 -p '{"spec":{"unschedulable":true}}'
If you must patch a custom resource definition, you must include the --type merge option in the command.
1.4.5.11. policy
Manage authorization policies.
Example: Add the edit role to user1 for the current project
$ oc policy add-role-to-user edit user1
1.4.5.12. process
Process a template into a list of resources.
Example: Convert template.json to a resource list and pass to oc create
$ oc process -f template.json | oc create -f -
1.4.5.13. registry
Manage the integrated registry on OpenShift Container Platform.
Example: Display information about the integrated registry
$ oc registry info
1.4.5.14. replace
Modify an existing object based on the contents of the specified configuration file.
Example: Update a pod using the content in pod.json
$ oc replace -f pod.json
1.4.6. Settings CLI commands
1.4.6.1. completion
Output shell completion code for the specified shell.
Example: Display completion code for Bash
$ oc completion bash
1.4.6.2. config
Manage the client configuration files.
Example: Display the current configuration
$ oc config view
Example: Switch to a different context
$ oc config use-context test-context
1.4.6.3. logout
Log out of the current session.
Example: End the current session
$ oc logout
1.4.6.4. whoami
Display information about the current session.
Example: Display the currently authenticated user
$ oc whoami
1.4.7. Other developer CLI commands
1.4.7.1. help
Display general help information for the CLI and a list of available commands.
Example: Display available commands
$ oc help
Example: Display the help for the new-project command
$ oc help new-project
1.4.7.2. plugin
List the available plug-ins on the user’s PATH.
Example: List available plug-ins
$ oc plugin list
1.4.7.3. version
Display the oc client and server versions.
Example: Display version information
$ oc version
For cluster administrators, the OpenShift Container Platform server version is also displayed.
1.5. Administrator CLI commands
1.5.1. Cluster management CLI commands
1.5.1.1. inspect
Gather debugging information for a particular resource.
This command is experimental and might change without notice.
Example: Collect debugging data for the OpenShift API server cluster Operator
$ oc adm inspect clusteroperator/openshift-apiserver
1.5.1.2. must-gather
Bulk collect data about the current state of your cluster to debug issues.
This command is experimental and might change without notice.
Example: Gather debugging information
$ oc adm must-gather
1.5.1.3. top
Show usage statistics of resources on the server.
Example: Show CPU and memory usage for pods
$ oc adm top pods
Example: Show usage statistics for images
$ oc adm top images
1.5.2. Node management CLI commands
1.5.2.1. cordon
Mark a node as unschedulable. Manually marking a node as unschedulable blocks any new pods from being scheduled on the node, but does not affect existing pods on the node.
Example: Mark node1 as unschedulable
$ oc adm cordon node1
1.5.2.2. drain
Drain a node in preparation for maintenance.
Example: Drain node1
$ oc adm drain node1
1.5.2.3. node-logs
Display and filter node logs.
Example: Get logs for NetworkManager
$ oc adm node-logs --role master -u NetworkManager.service
1.5.2.4. taint
Update the taints on one or more nodes.
Example: Add a taint to dedicate a node for a set of users
$ oc adm taint nodes node1 dedicated=groupName:NoSchedule
Example: Remove the taints with key dedicated from node node1
$ oc adm taint nodes node1 dedicated-
1.5.2.5. uncordon
Mark a node as schedulable.
Example: Mark node1 as schedulable
$ oc adm uncordon node1
1.5.3. Security and policy CLI commands
1.5.3.1. certificate
Approve or reject certificate signing requests (CSRs).
Example: Approve a CSR
$ oc adm certificate approve csr-sqgzp
1.5.3.2. groups
Manage groups in your cluster.
Example: Create a new group
$ oc adm groups new my-group
1.5.3.3. new-project
Create a new project and specify administrative options.
Example: Create a new project using a node selector
$ oc adm new-project myproject --node-selector='type=user-node,region=east'
1.5.3.4. pod-network
Manage pod networks in the cluster.
Example: Isolate project1 and project2 from other non-global projects
$ oc adm pod-network isolate-projects project1 project2
1.5.3.5. policy
Manage roles and policies on the cluster.
Example: Add the edit role to user1 for all projects
$ oc adm policy add-cluster-role-to-user edit user1
Example: Add the privileged security context constraint to a service account
$ oc adm policy add-scc-to-user privileged -z myserviceaccount
1.5.4. Maintenance CLI commands
1.5.4.1. migrate
Migrate resources on the cluster to a new version or format, depending on the subcommand used.
Example: Perform an update of all stored objects
$ oc adm migrate storage
Example: Perform an update of only pods
$ oc adm migrate storage --include=pods
1.5.4.2. prune
Remove older versions of resources from the server.
Example: Prune older builds, including those whose build configs no longer exist
$ oc adm prune builds --orphans
1.5.5. Configuration CLI commands
1.5.5.1. create-bootstrap-policy-file
Create the default bootstrap policy.
Example: Create a file called policy.json with the default bootstrap policy
$ oc adm create-bootstrap-policy-file --filename=policy.json
1.5.5.2. create-bootstrap-project-template
Create a bootstrap project template.
Example: Output a bootstrap project template in YAML format to stdout
$ oc adm create-bootstrap-project-template -o yaml
1.5.5.3. create-error-template
Create a template for customizing the error page.
Example: Output a template for the error page to stdout
$ oc adm create-error-template
1.5.5.4. create-kubeconfig
Create a basic .kubeconfig file from client certificates.
Example: Create a .kubeconfig file with the provided client certificates
$ oc adm create-kubeconfig \
    --client-certificate=/path/to/client.crt \
    --client-key=/path/to/client.key \
    --certificate-authority=/path/to/ca.crt
1.5.5.5. create-login-template
Create a template for customizing the login page.
Example: Output a template for the login page to stdout
$ oc adm create-login-template
1.5.5.6. create-provider-selection-template
Create a template for customizing the provider selection page.
Example: Output a template for the provider selection page to stdout
$ oc adm create-provider-selection-template
1.5.6. Other Administrator CLI commands
1.5.6.1. build-chain
Output the inputs and dependencies of any builds.
Example: Output dependencies for the perl imagestream
$ oc adm build-chain perl
1.5.6.2. completion
Output shell completion code for the oc adm commands for the specified shell.
Example: Display oc adm completion code for Bash
$ oc adm completion bash
1.5.6.3. config
Manage the client configuration files. This command has the same behavior as the oc config command.
Example: Display the current configuration
$ oc adm config view
Example: Switch to a different context
$ oc adm config use-context test-context
1.5.6.4. release
Manage various aspects of the OpenShift Container Platform release process, such as viewing information about a release or inspecting the contents of a release.
Example: Generate a changelog between two releases and save to changelog.md
$ oc adm release info --changelog=/tmp/git \
    quay.io/openshift-release-dev/ocp-release:4.4.0-rc.7 \
    quay.io/openshift-release-dev/ocp-release:4.4.0 \
    > changelog.md
1.5.6.5. verify-image-signature
Verify the image signature of an image imported to the internal registry using the local public GPG key.
Example: Verify the nodejs image signature
$ oc adm verify-image-signature \
    sha256:2bba968aedb7dd2aafe5fa8c7453f5ac36a0b9639f1bf5b03f95de325238b288 \
    --expected-identity 172.30.1.1:5000/openshift/nodejs:latest \
    --public-key /etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release \
    --save
1.6. Usage of oc and kubectl commands
Kubernetes' command-line interface (CLI), kubectl, can be used to run commands against a Kubernetes cluster. Because OpenShift Container Platform is a certified Kubernetes distribution, you can use the supported kubectl binaries that ship with OpenShift Container Platform, or you can gain extended functionality by using the oc binary.
1.6.1. The oc binary
The oc binary offers the same capabilities as the kubectl binary, but it extends to natively support additional OpenShift Container Platform features, including:
Full support for OpenShift Container Platform resources
Resources such as DeploymentConfig, BuildConfig, Route, ImageStream, and ImageStreamTag objects are specific to OpenShift Container Platform distributions, and build upon standard Kubernetes primitives.
Authentication
The oc binary offers a built-in login command that allows authentication and enables you to work with OpenShift Container Platform projects, which map Kubernetes namespaces to authenticated users. See Understanding authentication for more information.
Additional commands
The additional command oc new-app, for example, makes it easier to get new applications started using existing source code or pre-built images. Similarly, the additional command oc new-project makes it easier to start a project that you can switch to as your default.
1.6.2. The kubectl binary
The kubectl binary is provided as a means to support existing workflows and scripts for new OpenShift Container Platform users coming from a standard Kubernetes environment, or for those who prefer to use the kubectl CLI. Existing users of kubectl can continue to use the binary to interact with Kubernetes primitives, with no changes required to the OpenShift Container Platform cluster.
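For example, existing scripts that call standard kubectl commands continue to work unchanged against an OpenShift Container Platform cluster:
$ kubectl get pods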
You can install the supported kubectl binary by following the steps to Install the CLI. The kubectl binary is included in the archive if you download the binary, or it is installed when you install the CLI by using an RPM.
For more information, see the kubectl documentation.
Chapter 2. Developer CLI (odo)
2.1. Understanding OpenShift Do
odo is a CLI tool for creating applications on OpenShift Container Platform and Kubernetes. odo allows developers to concentrate on creating applications without the need to administer a cluster itself. Creating deployment configurations, build configurations, service routes, and other OpenShift Container Platform or Kubernetes elements is all automated by odo.
Existing tools such as oc are more operations-focused and require a deep understanding of Kubernetes and OpenShift Container Platform concepts. odo abstracts away complex Kubernetes and OpenShift Container Platform concepts, allowing developers to focus on what is most important to them: code.
2.1.1. Key features
odo is designed to be simple and concise with the following key features:
- Simple syntax and design centered around concepts familiar to developers, such as projects, applications, and components.
- Completely client based. No additional server other than OpenShift Container Platform is required for deployment.
- Official support for Node.js and Java components.
- Partial compatibility with languages and frameworks such as Ruby, Perl, PHP, and Python.
- Detects changes to local code and deploys it to the cluster automatically, giving instant feedback to validate changes in real time.
- Lists all the available components and services from the cluster.
2.1.2. Core concepts
- Project
  A project is your source code, tests, and libraries organized in a separate single unit.
- Application
  An application is a program designed for end users. An application consists of multiple microservices or components that work individually to build the entire application. Examples of applications: a video game, a media player, a web browser.
- Component
  A component is a set of Kubernetes resources which host code or data. Each component can be run and deployed separately. Examples of components: Node.js, Perl, PHP, Python, Ruby.
- Service
  A service is software that your component links to or depends on. Examples of services: MariaDB, Jenkins, MySQL. In odo, services are provisioned from the OpenShift Service Catalog and must be enabled within your cluster.
2.1.2.1. Officially supported languages and corresponding container images
Language | Container image | Package manager
---|---|---
Node.js | | NPM
Java | | Maven, Gradle
2.1.2.1.1. Listing available container images
The list of available container images is sourced from the cluster’s internal container registry and external registries associated with the cluster.
To list the available components and associated container images for your cluster:
- Log in to the cluster with odo:
$ odo login -u developer -p developer
- List the available odo supported and unsupported components and corresponding container images:
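Assuming the odo 1.x catalog subcommand, the command looks like the following; the exact output depends on your cluster:
$ odo catalog list components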
The TAGS column in the output represents the available image versions, for example, 10 represents the rhoar-nodejs/nodejs-10 container image.
2.2. odo architecture
This section describes odo architecture and how odo manages resources on a cluster.
2.2.1. Developer setup
With odo you can create and deploy applications on OpenShift Container Platform clusters from a terminal. Code editor plug-ins use odo, which allows users to interact with OpenShift Container Platform clusters from their IDE terminals. Examples of plug-ins that use odo: VS Code OpenShift Connector, OpenShift Connector for Intellij, Codewind for Eclipse Che.
odo works on Windows, macOS, and Linux operating systems and from any terminal. odo provides autocompletion for bash and zsh command line shells.
odo supports Node.js and Java components.
2.2.2. OpenShift source-to-image
OpenShift Source-to-Image (S2I) is an open-source project which helps in building artifacts from source code and injecting these into container images. S2I produces ready-to-run images by building source code without the need of a Dockerfile. odo uses an S2I builder image for executing developer source code inside a container.
2.2.3. OpenShift cluster objects
2.2.3.1. Init Containers
Init containers are specialized containers that run before the application container starts and configure the necessary environment for the application containers to run. Init containers can have files that application images do not have, for example setup scripts. Init containers always run to completion, and the application container does not start if any of the init containers fails.
The pod created by odo executes two Init Containers:
- The copy-supervisord Init container.
- The copy-files-to-volume Init container.
2.2.3.1.1. copy-supervisord
The copy-supervisord Init container copies necessary files onto an emptyDir volume. The main application container utilizes these files from the emptyDir volume.
Files that are copied onto the emptyDir volume:
- Binaries:
  - go-init is a minimal init system. It runs as the first process (PID 1) inside the application container. go-init starts the SupervisorD daemon which runs the developer code. go-init is required to handle orphaned processes.
  - SupervisorD is a process control system. It watches over configured processes and ensures that they are running. It also restarts services when necessary. For odo, SupervisorD executes and monitors the developer code.
- Configuration files:
  - supervisor.conf is the configuration file necessary for the SupervisorD daemon to start.
- Scripts:
  - assemble-and-restart is an OpenShift S2I concept to build and deploy user-source code. The assemble-and-restart script first assembles the user source code inside the application container and then restarts SupervisorD for user changes to take effect.
  - run is an OpenShift S2I concept of executing the assembled source code. The run script executes the assembled code created by the assemble-and-restart script.
  - s2i-setup is a script that creates files and directories which are necessary for the assemble-and-restart and run scripts to execute successfully. The script is executed whenever the application container starts.
- Directories:
  - language-scripts: OpenShift S2I allows custom assemble and run scripts. A few language-specific custom scripts are present in the language-scripts directory. The custom scripts provide additional configuration to make odo debug work.
The emptyDir volume is mounted at the /opt/odo mount point for both the Init container and the application container.
2.2.3.1.2. copy-files-to-volume
The copy-files-to-volume Init container copies files that are in /opt/app-root in the S2I builder image onto the persistent volume. The volume is then mounted at the same location (/opt/app-root) in an application container.
Without the persistent volume on /opt/app-root, the data in this directory is lost when the persistent volume claim is mounted at the same location.
The PVC is mounted at the /mnt mount point inside the Init container.
2.2.3.2. Application container
The application container is the main container inside of which the user-source code executes.
The application container is mounted with two volumes:
- The emptyDir volume mounted at /opt/odo
- The persistent volume mounted at /opt/app-root
go-init is executed as the first process inside the application container. The go-init process then starts the SupervisorD daemon.
SupervisorD executes and monitors the user assembled source code. If the user process crashes, SupervisorD restarts it.
2.2.3.3. Persistent volumes and persistent volume claims
A persistent volume claim (PVC) is a volume type in Kubernetes which provisions a persistent volume. The life of a persistent volume is independent of a pod lifecycle. The data on the persistent volume persists across pod restarts.
The copy-files-to-volume Init container copies necessary files onto the persistent volume. The main application container utilizes these files at runtime for execution.
The naming convention of the persistent volume is <component_name>-s2idata.
Container | PVC mounted at
---|---
copy-files-to-volume | /mnt
Application container | /opt/app-root
2.2.3.4. emptyDir volume
An emptyDir volume is created when a pod is assigned to a node, and it exists as long as that pod is running on the node. If the container is restarted or moved, the content of the emptyDir is removed and the Init container restores the data back to the emptyDir. The emptyDir volume is initially empty.
The copy-supervisord Init container copies necessary files onto the emptyDir volume. These files are then utilized by the main application container at runtime for execution.
Container | emptyDir volume mounted at
---|---
copy-supervisord | /opt/odo
Application container | /opt/odo
2.2.3.5. Service
A service is a Kubernetes concept of abstracting the way of communicating with a set of pods.
odo creates a service for every application pod to make it accessible for communication.
2.2.4. odo push workflow
This section describes the odo push workflow. odo push deploys user code on an OpenShift Container Platform cluster with all the necessary OpenShift Container Platform resources.
Creating resources
If not already created, odo push creates the following OpenShift Container Platform resources:
- DeploymentConfig object:
  - Two init containers are executed: copy-supervisord and copy-files-to-volume. The init containers copy files onto the emptyDir and the PersistentVolume type of volumes respectively.
  - The application container starts. The first process in the application container is the go-init process with PID=1.
  - The go-init process starts the SupervisorD daemon.
    Note: The user application code has not been copied into the application container yet, so the SupervisorD daemon does not execute the run script.
- Service object
- Secret objects
- PersistentVolumeClaim object
Indexing files
- A file indexer indexes the files in the source code directory. The indexer traverses through the source code directories recursively and finds files which have been created, deleted, or renamed.
- A file indexer maintains the indexed information in an odo index file inside the .odo directory.
- If the odo index file is not present, it means that the file indexer is being executed for the first time, and it creates a new odo index JSON file. The odo index JSON file contains a file map: the relative file paths of the traversed files and the absolute paths of the changed and deleted files.
Pushing code
Local code is copied into the application container, usually under /tmp/src.
Executing assemble-and-restart
On a successful copy of the source code, the assemble-and-restart script is executed inside the running application container.
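Assuming a component has already been created in the current directory, the command that triggers this entire workflow is simply:
$ odo push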
2.3. Installing odo
The following section describes how to install odo on different platforms using the CLI.
Currently, odo does not support installation in a restricted network environment.
You can also find the URL to the latest binaries from the OpenShift Container Platform web console by clicking the ? icon in the upper-right corner and selecting Command Line Tools.
2.3.1. Installing odo on Linux
2.3.1.1. Binary installation
# curl -L https://mirror.openshift.com/pub/openshift-v4/clients/odo/latest/odo-linux-amd64 -o /usr/local/bin/odo
# chmod +x /usr/local/bin/odo
2.3.1.2. Tarball installation
# sh -c 'curl -L https://mirror.openshift.com/pub/openshift-v4/clients/odo/latest/odo-linux-amd64.tar.gz | gzip -d > /usr/local/bin/odo'
# chmod +x /usr/local/bin/odo
2.3.2. Installing odo on Windows
2.3.2.1. Binary installation
- Download the latest odo.exe file.
- Add the location of your odo.exe to your GOPATH/bin directory.
Setting the PATH variable for Windows 7/8
The following example demonstrates how to set up a path variable. Your binaries can be located in any location, but this example uses C:\go-bin as the location.
- Create a folder at C:\go-bin.
- Right click Start and click Control Panel.
- Select System and Security and then click System.
- From the menu on the left, select the Advanced systems settings and click the Environment Variables button at the bottom.
- Select Path from the Variable section and click Edit.
- Click New and type C:\go-bin into the field or click Browse and select the directory, and click OK.
Setting the PATH variable for Windows 10
Edit Environment Variables using search:
- Click Search and type env or environment.
- Select Edit environment variables for your account.
- Select Path from the Variable section and click Edit.
- Click New and type C:\go-bin into the field or click Browse and select the directory, and click OK.
2.3.3. Installing odo on macOS
2.3.3.1. Binary installation
# curl -L https://mirror.openshift.com/pub/openshift-v4/clients/odo/latest/odo-darwin-amd64 -o /usr/local/bin/odo
# chmod +x /usr/local/bin/odo
2.3.3.2. Tarball installation
# sh -c 'curl -L https://mirror.openshift.com/pub/openshift-v4/clients/odo/latest/odo-darwin-amd64.tar.gz | gzip -d > /usr/local/bin/odo'
# chmod +x /usr/local/bin/odo
2.4. Using odo in a restricted environment Copy linkLink copied to clipboard!
2.4.1. About odo in a restricted environment Copy linkLink copied to clipboard!
To run odo
in a disconnected cluster or a cluster provisioned in a restricted environment, you must ensure that a cluster administrator has created a cluster with a mirrored registry.
To start working in a disconnected cluster, you must first push the odo
init image to the registry of the cluster and then overwrite the odo
init image path using the ODO_BOOTSTRAPPER_IMAGE
environment variable.
After you push the odo
init image, you must mirror a supported builder image from the registry, overwrite a mirror registry and then create your application. A builder image is necessary to configure a runtime environment for your application and also contains the build tool needed to build your application, for example npm for Node.js or Maven for Java. A mirror registry contains all the necessary dependencies for your application.
2.4.2. Pushing the odo init image to the restricted cluster registry Copy linkLink copied to clipboard!
Depending on the configuration of your cluster and your operating system you can either push the odo
init image to a mirror registry or directly to an internal registry.
2.4.2.1. Prerequisites Copy linkLink copied to clipboard!
-
Install
oc
on the client operating system. -
Install
odo
on the client operating system. - Access to a restricted cluster with a configured internal registry or a mirror registry.
2.4.2.2. Pushing the odo init image to a mirror registry Copy linkLink copied to clipboard!
Depending on your operating system, you can push the odo
init image to a cluster with a mirror registry as follows:
2.4.2.2.1. Pushing the init image to a mirror registry on Linux Copy linkLink copied to clipboard!
Procedure
Use base64 to decode the root certification authority (CA) content of your mirror registry:
$ echo <content_of_additional_ca> | base64 -d > disconnect-ca.crt
Copy the decoded root CA certificate to the appropriate location:
sudo cp ./disconnect-ca.crt /etc/pki/ca-trust/source/anchors/<mirror-registry>.crt
$ sudo cp ./disconnect-ca.crt /etc/pki/ca-trust/source/anchors/<mirror-registry>.crt
Copy to Clipboard Copied! Toggle word wrap Toggle overflow Trust a CA in your client platform and log into the OpenShift Container Platform mirror registry:
sudo update-ca-trust enable && sudo systemctl daemon-reload && sudo systemctl restart docker && docker login <mirror-registry>:5000 -u <username> -p <password>
$ sudo update-ca-trust enable && sudo systemctl daemon-reload && sudo systemctl restart docker && docker login <mirror-registry>:5000 -u <username> -p <password>
Copy to Clipboard Copied! Toggle word wrap Toggle overflow Mirror the
odo
init image:oc image mirror registry.access.redhat.com/openshiftdo/odo-init-image-rhel7:<tag> <mirror-registry>:5000/openshiftdo/odo-init-image-rhel7:<tag>
$ oc image mirror registry.access.redhat.com/openshiftdo/odo-init-image-rhel7:<tag> <mirror-registry>:5000/openshiftdo/odo-init-image-rhel7:<tag>
Copy to Clipboard Copied! Toggle word wrap Toggle overflow Override the default
odo
init image path by setting theODO_BOOTSTRAPPER_IMAGE
environment variable:export ODO_BOOTSTRAPPER_IMAGE=<mirror-registry>:5000/openshiftdo/odo-init-image-rhel7:<tag>
$ export ODO_BOOTSTRAPPER_IMAGE=<mirror-registry>:5000/openshiftdo/odo-init-image-rhel7:<tag>
Copy to Clipboard Copied! Toggle word wrap Toggle overflow
2.4.2.2.2. Pushing the init image to a mirror registry on MacOS Copy linkLink copied to clipboard!
Procedure
Use base64 to decode the root certification authority (CA) content of your mirror registry:
$ echo <content_of_additional_ca> | base64 -d > disconnect-ca.crt
Copy the decoded root CA certificate to the appropriate location:
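The macOS command for this step is not preserved in this copy. A reasonable equivalent, which adds the certificate to the system keychain using the same command shown later in this guide for the internal-registry case, is:
$ sudo security add-trusted-cert -d -r trustRoot -k /Library/Keychains/System.keychain ./disconnect-ca.crt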
- Restart Docker using the Docker UI.
Run the following command:
docker login <mirror-registry>:5000 -u <username> -p <password>
$ docker login <mirror-registry>:5000 -u <username> -p <password>
Copy to Clipboard Copied! Toggle word wrap Toggle overflow
Mirror the
odo
init image:oc image mirror registry.access.redhat.com/openshiftdo/odo-init-image-rhel7:<tag> <mirror-registry>:5000/openshiftdo/odo-init-image-rhel7:<tag>
$ oc image mirror registry.access.redhat.com/openshiftdo/odo-init-image-rhel7:<tag> <mirror-registry>:5000/openshiftdo/odo-init-image-rhel7:<tag>
Copy to Clipboard Copied! Toggle word wrap Toggle overflow Override the default
odo
init image path by setting theODO_BOOTSTRAPPER_IMAGE
environment variable:export ODO_BOOTSTRAPPER_IMAGE=<mirror-registry>:5000/openshiftdo/odo-init-image-rhel7:<tag>
$ export ODO_BOOTSTRAPPER_IMAGE=<mirror-registry>:5000/openshiftdo/odo-init-image-rhel7:<tag>
Copy to Clipboard Copied! Toggle word wrap Toggle overflow
2.4.2.2.3. Pushing the init image to a mirror registry on Windows Copy linkLink copied to clipboard!
Procedure
Use base64 to decode the root certification authority (CA) content of your mirror registry:
PS C:\> echo <content_of_additional_ca> | base64 -d > disconnect-ca.crt
As an administrator, copy the decoded root CA certificate to the appropriate location by executing the following command:
PS C:\WINDOWS\system32> certutil -addstore -f "ROOT" disconnect-ca.crt
PS C:\WINDOWS\system32> certutil -addstore -f "ROOT" disconnect-ca.crt
Copy to Clipboard Copied! Toggle word wrap Toggle overflow Trust a CA in your client platform and log into the OpenShift Container Platform mirror registry:
- Restart Docker using the Docker UI.
Run the following command:
PS C:\WINDOWS\system32> docker login <mirror-registry>:5000 -u <username> -p <password>
PS C:\WINDOWS\system32> docker login <mirror-registry>:5000 -u <username> -p <password>
Copy to Clipboard Copied! Toggle word wrap Toggle overflow
Mirror the
odo
init image:PS C:\> oc image mirror registry.access.redhat.com/openshiftdo/odo-init-image-rhel7:<tag> <mirror-registry>:5000/openshiftdo/odo-init-image-rhel7:<tag>
PS C:\> oc image mirror registry.access.redhat.com/openshiftdo/odo-init-image-rhel7:<tag> <mirror-registry>:5000/openshiftdo/odo-init-image-rhel7:<tag>
Copy to Clipboard Copied! Toggle word wrap Toggle overflow Override the default
odo
init image path by setting theODO_BOOTSTRAPPER_IMAGE
environment variable:PS C:\> $env:ODO_BOOTSTRAPPER_IMAGE="<mirror-registry>:5000/openshiftdo/odo-init-image-rhel7:<tag>"
PS C:\> $env:ODO_BOOTSTRAPPER_IMAGE="<mirror-registry>:5000/openshiftdo/odo-init-image-rhel7:<tag>"
Copy to Clipboard Copied! Toggle word wrap Toggle overflow
2.4.2.3. Pushing the odo init image to an internal registry directly Copy linkLink copied to clipboard!
If your cluster allows images to be pushed to the internal registry directly, push the odo
init image to the registry as follows:
2.4.2.3.1. Pushing the init image directly on Linux Copy linkLink copied to clipboard!
Procedure
Enable the default route:
oc patch configs.imageregistry.operator.openshift.io cluster -p '{"spec":{"defaultRoute":true}}' --type='merge' -n openshift-image-registry
$ oc patch configs.imageregistry.operator.openshift.io cluster -p '{"spec":{"defaultRoute":true}}' --type='merge' -n openshift-image-registry
Copy to Clipboard Copied! Toggle word wrap Toggle overflow Get a wildcard route CA:
Use base64 to decode the root certification authority (CA) content of your mirror registry:
$ echo <tls.crt> | base64 -d > ca.crt
Copy to Clipboard Copied! Toggle word wrap Toggle overflow Trust a CA in your client platform:
sudo cp ca.crt /etc/pki/ca-trust/source/anchors/externalroute.crt && sudo update-ca-trust enable && sudo systemctl daemon-reload && sudo systemctl restart docker
$ sudo cp ca.crt /etc/pki/ca-trust/source/anchors/externalroute.crt && sudo update-ca-trust enable && sudo systemctl daemon-reload && sudo systemctl restart docker
Log into the internal registry:
$ oc get route -n openshift-image-registry
NAME            HOST/PORT         PATH   SERVICES         PORT    TERMINATION   WILDCARD
default-route   <registry_path>          image-registry   <all>   reencrypt     None
$ docker login <registry_path> -u kubeadmin -p $(oc whoami -t)
Push the odo init image:
$ docker pull registry.access.redhat.com/openshiftdo/odo-init-image-rhel7:<tag>
$ docker tag registry.access.redhat.com/openshiftdo/odo-init-image-rhel7:<tag> <registry_path>/openshiftdo/odo-init-image-rhel7:<tag>
$ docker push <registry_path>/openshiftdo/odo-init-image-rhel7:<tag>
Copy to Clipboard Copied! Toggle word wrap Toggle overflow Override the default
odo
init image path by setting theODO_BOOTSTRAPPER_IMAGE
environment variable:export ODO_BOOTSTRAPPER_IMAGE=<registry_path>/openshiftdo/odo-init-image-rhel7:1.0.1
$ export ODO_BOOTSTRAPPER_IMAGE=<registry_path>/openshiftdo/odo-init-image-rhel7:1.0.1
Copy to Clipboard Copied! Toggle word wrap Toggle overflow
2.4.2.3.2. Pushing the init image directly on MacOS Copy linkLink copied to clipboard!
Procedure
Enable the default route:
oc patch configs.imageregistry.operator.openshift.io cluster -p '{"spec":{"defaultRoute":true}}' --type='merge' -n openshift-image-registry
$ oc patch configs.imageregistry.operator.openshift.io cluster -p '{"spec":{"defaultRoute":true}}' --type='merge' -n openshift-image-registry
Copy to Clipboard Copied! Toggle word wrap Toggle overflow Get a wildcard route CA:
Use base64 to decode the root certification authority (CA) content of your mirror registry:
$ echo <tls.crt> | base64 -d > ca.crt
Copy to Clipboard Copied! Toggle word wrap Toggle overflow Trust a CA in your client platform:
sudo security add-trusted-cert -d -r trustRoot -k /Library/Keychains/System.keychain ca.crt
$ sudo security add-trusted-cert -d -r trustRoot -k /Library/Keychains/System.keychain ca.crt
Log into the internal registry:
$ oc get route -n openshift-image-registry
NAME            HOST/PORT         PATH   SERVICES         PORT    TERMINATION   WILDCARD
default-route   <registry_path>          image-registry   <all>   reencrypt     None
$ docker login <registry_path> -u kubeadmin -p $(oc whoami -t)
Push the odo init image:
$ docker pull registry.access.redhat.com/openshiftdo/odo-init-image-rhel7:<tag>
$ docker tag registry.access.redhat.com/openshiftdo/odo-init-image-rhel7:<tag> <registry_path>/openshiftdo/odo-init-image-rhel7:<tag>
$ docker push <registry_path>/openshiftdo/odo-init-image-rhel7:<tag>
Copy to Clipboard Copied! Toggle word wrap Toggle overflow Override the default
odo
init image path by setting theODO_BOOTSTRAPPER_IMAGE
environment variable:export ODO_BOOTSTRAPPER_IMAGE=<registry_path>/openshiftdo/odo-init-image-rhel7:1.0.1
$ export ODO_BOOTSTRAPPER_IMAGE=<registry_path>/openshiftdo/odo-init-image-rhel7:1.0.1
Copy to Clipboard Copied! Toggle word wrap Toggle overflow
2.4.2.3.3. Pushing the init image directly on Windows Copy linkLink copied to clipboard!
Procedure
Enable the default route:
PS C:\> oc patch configs.imageregistry.operator.openshift.io cluster -p '{"spec":{"defaultRoute":true}}' --type='merge' -n openshift-image-registry
PS C:\> oc patch configs.imageregistry.operator.openshift.io cluster -p '{"spec":{"defaultRoute":true}}' --type='merge' -n openshift-image-registry
Copy to Clipboard Copied! Toggle word wrap Toggle overflow Get a wildcard route CA:
Use base64 to decode the root certification authority (CA) content of your mirror registry:
PS C:\> echo <tls.crt> | base64 -d > ca.crt
Copy to Clipboard Copied! Toggle word wrap Toggle overflow As an administrator, trust a CA in your client platform by executing the following command:
PS C:\WINDOWS\system32> certutil -addstore -f "ROOT" ca.crt
PS C:\WINDOWS\system32> certutil -addstore -f "ROOT" ca.crt
Log into the internal registry:
PS C:\> oc get route -n openshift-image-registry
NAME            HOST/PORT         PATH   SERVICES         PORT    TERMINATION   WILDCARD
default-route   <registry_path>          image-registry   <all>   reencrypt     None
PS C:\> docker login <registry_path> -u kubeadmin -p $(oc whoami -t)
Push the odo init image:
PS C:\> docker pull registry.access.redhat.com/openshiftdo/odo-init-image-rhel7:<tag>
PS C:\> docker tag registry.access.redhat.com/openshiftdo/odo-init-image-rhel7:<tag> <registry_path>/openshiftdo/odo-init-image-rhel7:<tag>
PS C:\> docker push <registry_path>/openshiftdo/odo-init-image-rhel7:<tag>
Copy to Clipboard Copied! Toggle word wrap Toggle overflow Override the default
odo
init image path by setting theODO_BOOTSTRAPPER_IMAGE
environment variable:PS C:\> $env:ODO_BOOTSTRAPPER_IMAGE="<registry_path>/openshiftdo/odo-init-image-rhel7:<tag>"
PS C:\> $env:ODO_BOOTSTRAPPER_IMAGE="<registry_path>/openshiftdo/odo-init-image-rhel7:<tag>"
Copy to Clipboard Copied! Toggle word wrap Toggle overflow
2.4.3. Creating and deploying a component to the disconnected cluster Copy linkLink copied to clipboard!
After you push the init
image to a cluster with a mirrored registry, you must mirror a supported builder image for your application with the oc
tool, overwrite the mirror registry using the environment variable, and then create your component.
2.4.3.1. Prerequisites Copy linkLink copied to clipboard!
-
Install
oc
on the client operating system. -
Install
odo
on the client operating system. - Access to a restricted cluster with a configured internal registry or a mirror registry.
-
Push the
odo
init image to your cluster registry.
2.4.3.2. Mirroring a supported builder image Copy linkLink copied to clipboard!
To configure a runtime environment for your application and to use npm packages for Node.js dependencies or Maven packages for Java dependencies, you must mirror the respective builder image from the Red Hat registry to your private registry.
Procedure
Verify that the required images tag is not imported:
Copy to Clipboard Copied! Toggle word wrap Toggle overflow Mirror the supported image tag to the private registry:
oc image mirror registry.access.redhat.com/rhscl/nodejs-10-rhel7:<tag> <private_registry>/rhscl/nodejs-10-rhel7:<tag>
$ oc image mirror registry.access.redhat.com/rhscl/nodejs-10-rhel7:<tag> <private_registry>/rhscl/nodejs-10-rhel7:<tag>
Copy to Clipboard Copied! Toggle word wrap Toggle overflow Import the image:
oc tag <mirror-registry>:<port>/rhscl/nodejs-10-rhel7:<tag> nodejs-10-rhel7:latest --scheduled
$ oc tag <mirror-registry>:<port>/rhscl/nodejs-10-rhel7:<tag> nodejs-10-rhel7:latest --scheduled
Copy to Clipboard Copied! Toggle word wrap Toggle overflow You must periodically re-import the image. The
--scheduled
flag enables automatic re-import of the image.Verify that the images with the given tag have been imported:
Copy to Clipboard Copied! Toggle word wrap Toggle overflow
2.4.3.3. Overwriting the mirror registry Copy linkLink copied to clipboard!
To download npm packages for Node.js dependencies and Maven packages for Java dependencies from a private mirror registry, you must create and configure a mirror npm or Maven registry on the cluster. You can then overwrite the mirror registry on an existing component or when you create a new component.
Procedure
To overwrite the mirror registry on an existing component:
odo config set --env NPM_MIRROR=<npm_mirror_registry>
$ odo config set --env NPM_MIRROR=<npm_mirror_registry>
Copy to Clipboard Copied! Toggle word wrap Toggle overflow To overwrite the mirror registry when creating a component:
odo component create nodejs --env NPM_MIRROR=<npm_mirror_registry>
$ odo component create nodejs --env NPM_MIRROR=<npm_mirror_registry>
Copy to Clipboard Copied! Toggle word wrap Toggle overflow
2.4.3.4. Creating a Node.js application with odo Copy linkLink copied to clipboard!
To create a Node.js component, download the Node.js application and push the source code to your cluster with odo
.
Procedure
Change the current directory to the directory with your application:
cd <directory_name>
$ cd <directory_name>
Copy to Clipboard Copied! Toggle word wrap Toggle overflow Add a component of the type Node.js to your application:
odo create nodejs
$ odo create nodejs
Note: By default, the latest image is used. You can also explicitly specify an image version by using odo create openshift/nodejs:8.
Push the initial source code to the component:
odo push
$ odo push
Copy to Clipboard Copied! Toggle word wrap Toggle overflow Your component is now deployed to OpenShift Container Platform.
Create a URL and add an entry in the local configuration file as follows:
odo url create --port 8080
$ odo url create --port 8080
Copy to Clipboard Copied! Toggle word wrap Toggle overflow Push the changes. This creates a URL on the cluster.
odo push
$ odo push
Copy to Clipboard Copied! Toggle word wrap Toggle overflow List the URLs to check the desired URL for the component.
odo url list
$ odo url list
Copy to Clipboard Copied! Toggle word wrap Toggle overflow View your deployed application using the generated URL.
curl <url>
$ curl <url>
Copy to Clipboard Copied! Toggle word wrap Toggle overflow
2.5. Creating a single-component application with odo Copy linkLink copied to clipboard!
With odo
, you can create and deploy applications on clusters.
2.5.1. Prerequisites Copy linkLink copied to clipboard!
-
odo
is installed. - You have a running cluster. You can use CodeReady Containers (CRC) to deploy a local cluster quickly.
2.5.2. Creating a project Copy linkLink copied to clipboard!
Create a project to keep your source code, tests, and libraries organized in a separate single unit.
Procedure
Log in to an OpenShift Container Platform cluster:
odo login -u developer -p developer
$ odo login -u developer -p developer
Copy to Clipboard Copied! Toggle word wrap Toggle overflow Create a project:
odo project create myproject
$ odo project create myproject ✓ Project 'myproject' is ready for use ✓ New project created and now using project : myproject
Copy to Clipboard Copied! Toggle word wrap Toggle overflow
2.5.3. Creating a Node.js application with odo Copy linkLink copied to clipboard!
To create a Node.js component, download the Node.js application and push the source code to your cluster with odo
.
Procedure
Create a directory for your components:
mkdir my_components && cd my_components
$ mkdir my_components && cd my_components
Copy to Clipboard Copied! Toggle word wrap Toggle overflow Download the example Node.js application:
git clone https://github.com/openshift/nodejs-ex
$ git clone https://github.com/openshift/nodejs-ex
Copy to Clipboard Copied! Toggle word wrap Toggle overflow Change the current directory to the directory with your application:
cd <directory_name>
$ cd <directory_name>
Copy to Clipboard Copied! Toggle word wrap Toggle overflow Add a component of the type Node.js to your application:
odo create nodejs
$ odo create nodejs
Note: By default, the latest image is used. You can also explicitly specify an image version by using odo create openshift/nodejs:8.
Push the initial source code to the component:
odo push
$ odo push
Copy to Clipboard Copied! Toggle word wrap Toggle overflow Your component is now deployed to OpenShift Container Platform.
Create a URL and add an entry in the local configuration file as follows:
odo url create --port 8080
$ odo url create --port 8080
Copy to Clipboard Copied! Toggle word wrap Toggle overflow Push the changes. This creates a URL on the cluster.
odo push
$ odo push
Copy to Clipboard Copied! Toggle word wrap Toggle overflow List the URLs to check the desired URL for the component.
odo url list
$ odo url list
Copy to Clipboard Copied! Toggle word wrap Toggle overflow View your deployed application using the generated URL.
curl <url>
$ curl <url>
Copy to Clipboard Copied! Toggle word wrap Toggle overflow
2.5.4. Modifying your application code Copy linkLink copied to clipboard!
You can modify your application code and have the changes applied to your application on OpenShift Container Platform.
- Edit one of the layout files within the Node.js directory with your preferred text editor.
Update your component:
odo push
$ odo push
Copy to Clipboard Copied! Toggle word wrap Toggle overflow - Refresh your application in the browser to see the changes.
2.5.5. Adding storage to the application components Copy linkLink copied to clipboard!
Persistent storage keeps data available between restarts of odo. You can add storage to your components with the odo storage
command.
Procedure
Add storage to your components:
odo storage create nodestorage --path=/opt/app-root/src/storage/ --size=1Gi
$ odo storage create nodestorage --path=/opt/app-root/src/storage/ --size=1Gi
Copy to Clipboard Copied! Toggle word wrap Toggle overflow
Your component now has 1 GB storage.
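To confirm that the storage was added, you can list the storage attached to the component; the exact output format depends on the odo release:
$ odo storage list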
2.5.6. Adding a custom builder to specify a build image Copy linkLink copied to clipboard!
With OpenShift Container Platform, you can import a custom builder image and use it with odo to specify the build image for your application.
The following example demonstrates the successful import and use of the redhat-openjdk-18
image:
Prerequisites
- The OpenShift CLI (oc) is installed.
Procedure
Import the image into OpenShift Container Platform:
oc import-image openjdk18 \ --from=registry.access.redhat.com/redhat-openjdk-18/openjdk18-openshift \ --confirm
$ oc import-image openjdk18 \ --from=registry.access.redhat.com/redhat-openjdk-18/openjdk18-openshift \ --confirm
Copy to Clipboard Copied! Toggle word wrap Toggle overflow Tag the image to make it accessible to odo:
oc annotate istag/openjdk18:latest tags=builder
$ oc annotate istag/openjdk18:latest tags=builder
Copy to Clipboard Copied! Toggle word wrap Toggle overflow Deploy the image with odo:
odo create openjdk18 --git \ https://github.com/openshift-evangelists/Wild-West-Backend
$ odo create openjdk18 --git \ https://github.com/openshift-evangelists/Wild-West-Backend
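odo create writes only the local component configuration; as in the other examples in this guide, deploy the component by pushing it:
$ odo push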
2.5.7. Connecting your application to multiple services using OpenShift Service Catalog Copy linkLink copied to clipboard!
The OpenShift service catalog is an implementation of the Open Service Broker API (OSB API) for Kubernetes. You can use it to connect applications deployed in OpenShift Container Platform to a variety of services.
Prerequisites
- You have a running OpenShift Container Platform cluster.
- The service catalog is installed and enabled on your cluster.
Procedure
To list the services:
odo catalog list services
$ odo catalog list services
Copy to Clipboard Copied! Toggle word wrap Toggle overflow To use service catalog-related operations:
odo service <verb> <service_name>
$ odo service <verb> <service_name>
Copy to Clipboard Copied! Toggle word wrap Toggle overflow
2.5.8. Deleting an application Copy linkLink copied to clipboard!
Deleting an application will delete all components associated with the application.
Procedure
List the applications in the current project:
odo app list
$ odo app list The project '<project_name>' has the following applications: NAME app
Copy to Clipboard Copied! Toggle word wrap Toggle overflow List the components associated with the applications. These components will be deleted with the application:
odo component list
$ odo component list APP NAME TYPE SOURCE STATE app nodejs-nodejs-ex-elyf nodejs file://./ Pushed
Copy to Clipboard Copied! Toggle word wrap Toggle overflow Delete the application:
odo app delete <application_name>
$ odo app delete <application_name> ? Are you sure you want to delete the application: <application_name> from project: <project_name>
Copy to Clipboard Copied! Toggle word wrap Toggle overflow -
Confirm the deletion with
Y
. You can suppress the confirmation prompt using the-f
flag.
2.6. Creating a multicomponent application with odo Copy linkLink copied to clipboard!
odo
allows you to create a multicomponent application, modify it, and link its components in an easy and automated way.
This example describes how to deploy a multicomponent application - a shooter game. The application consists of a front-end Node.js component and a back-end Java component.
2.6.1. Prerequisites Copy linkLink copied to clipboard!
-
odo
is installed. - You have a running cluster. Developers can use CodeReady Containers (CRC) to deploy a local cluster quickly.
- Maven is installed.
2.6.2. Creating a project Copy linkLink copied to clipboard!
Create a project to keep your source code, tests, and libraries organized in a separate single unit.
Procedure
Log in to an OpenShift Container Platform cluster:
odo login -u developer -p developer
$ odo login -u developer -p developer
Copy to Clipboard Copied! Toggle word wrap Toggle overflow Create a project:
odo project create myproject
$ odo project create myproject ✓ Project 'myproject' is ready for use ✓ New project created and now using project : myproject
Copy to Clipboard Copied! Toggle word wrap Toggle overflow
2.6.3. Deploying the back-end component Copy linkLink copied to clipboard!
To create a Java component, import the Java builder image, download the Java application and push the source code to your cluster with odo
.
Procedure
Import
openjdk18
into the cluster:oc import-image openjdk18 \ --from=registry.access.redhat.com/redhat-openjdk-18/openjdk18-openshift --confirm
$ oc import-image openjdk18 \ --from=registry.access.redhat.com/redhat-openjdk-18/openjdk18-openshift --confirm
Copy to Clipboard Copied! Toggle word wrap Toggle overflow Tag the image as
builder
to make it accessible for odo:oc annotate istag/openjdk18:latest tags=builder
$ oc annotate istag/openjdk18:latest tags=builder
Copy to Clipboard Copied! Toggle word wrap Toggle overflow Run
odo catalog list components
to see the created image:
$ odo catalog list components
Odo Supported OpenShift Components:
NAME        PROJECT     TAGS
nodejs      openshift   10,8,8-RHOAR,latest
openjdk18   myproject   latest
Copy to Clipboard Copied! Toggle word wrap Toggle overflow Create a directory for your components:
mkdir my_components && cd my_components
$ mkdir my_components && cd my_components
Copy to Clipboard Copied! Toggle word wrap Toggle overflow Download the example back-end application:
git clone https://github.com/openshift-evangelists/Wild-West-Backend backend
$ git clone https://github.com/openshift-evangelists/Wild-West-Backend backend
Copy to Clipboard Copied! Toggle word wrap Toggle overflow Change directory to the back-end source directory and check that you have the correct files in the directory:
$ cd backend
$ ls
debug.sh  pom.xml  src
Build the back-end source files with Maven to create a JAR file:
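The build command is not preserved in this copy. Assuming the standard Maven packaging used in the binary examples later in this guide, the JAR referenced in the next step is produced with:
$ mvn package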
Copy to Clipboard Copied! Toggle word wrap Toggle overflow Create a component configuration of Java component-type named
backend
:odo create openjdk18 backend --binary target/wildwest-1.0.jar
$ odo create openjdk18 backend --binary target/wildwest-1.0.jar ✓ Validating component [1ms] Please use `odo push` command to create the component with source deployed
Now the configuration file config.yaml, which contains information about the component for deployment, is in the local directory of the back-end component.
Check the configuration settings of the back-end component in the config.yaml file, then push the component to the OpenShift Container Platform cluster, as shown below.
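The commands for these two steps are not preserved in this copy. Assuming the standard odo workflow used throughout this guide, view the configuration and then deploy the component:
$ odo config view
$ odo push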
Using odo push, OpenShift Container Platform creates a container to host the back-end component, deploys the container into a pod running on the OpenShift Container Platform cluster, and starts the backend component.
Validate:
The status of the action in odo:
$ odo log -f
2019-09-30 20:14:19.738  INFO 444 --- [           main] c.o.wildwest.WildWestApplication         : Starting WildWestApplication v1.0 on backend-app-1-9tnhc with PID 444 (/deployments/wildwest-1.0.jar started by jboss in /deployments)
Copy to Clipboard Copied! Toggle word wrap Toggle overflow The status of the back-end component:
odo list
$ odo list APP NAME TYPE SOURCE STATE app backend openjdk18 file://target/wildwest-1.0.jar Pushed
Copy to Clipboard Copied! Toggle word wrap Toggle overflow
2.6.4. Deploying the front-end component Copy linkLink copied to clipboard!
To create and deploy a front-end component, download the Node.js application and push the source code to your cluster with odo
.
Procedure
Download the example front-end application:
git clone https://github.com/openshift/nodejs-ex
$ git clone https://github.com/openshift/nodejs-ex
Copy to Clipboard Copied! Toggle word wrap Toggle overflow Change the current directory to the front-end directory:
cd <directory-name>
$ cd <directory-name>
Copy to Clipboard Copied! Toggle word wrap Toggle overflow List the contents of the directory to see that the front end is a Node.js application.
ls
$ ls assets bin index.html kwww-frontend.iml package.json package-lock.json playfield.png README.md server.js
Note: The front-end component is written in an interpreted language (Node.js); it does not need to be built.
Create a component configuration of Node.js component-type named
frontend
:odo create nodejs frontend
$ odo create nodejs frontend ✓ Validating component [5ms] Please use `odo push` command to create the component with source deployed
Push the component to a running container.
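The command is not preserved in this copy; as elsewhere in this guide, the component is pushed with:
$ odo push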
2.6.5. Linking both components Copy linkLink copied to clipboard!
Components running on the cluster need to be connected in order to interact. OpenShift Container Platform provides linking mechanisms to publish communication bindings from a program to its clients.
Procedure
List all the components that are running on the cluster:
odo list
$ odo list APP NAME TYPE SOURCE STATE app backend openjdk18 file://target/wildwest-1.0.jar Pushed app frontend nodejs file://./ Pushed
Copy to Clipboard Copied! Toggle word wrap Toggle overflow Link the current front-end component to the back end:
The configuration information of the back-end component is added to the front-end component and the front-end component restarts.
2.6.6. Exposing components to the public Copy linkLink copied to clipboard!
Procedure
Create an external URL for the application:
cd frontend odo url create frontend --port 8080
$ cd frontend $ odo url create frontend --port 8080 ✓ URL frontend created for component: frontend To create URL on the OpenShift cluster, use `odo push`
Apply the changes:
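The command is not preserved in this copy; the URL configuration is applied with the usual push:
$ odo push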
- Open the URL in a browser to view the application.
If an application requires permissions to the active service account to access the OpenShift Container Platform namespace and delete active pods, the following error may occur when looking at odo log
from the back-end component:
Message: Forbidden!Configured service account doesn’t have access. Service account may have been revoked
To resolve this error, add permissions for the service account role:
oc policy add-role-to-group view system:serviceaccounts -n <project> oc policy add-role-to-group edit system:serviceaccounts -n <project>
$ oc policy add-role-to-group view system:serviceaccounts -n <project>
$ oc policy add-role-to-group edit system:serviceaccounts -n <project>
Do not do this on a production cluster.
2.6.7. Modifying the running application Copy linkLink copied to clipboard!
Procedure
Change the local directory to the front-end directory:
cd ~/frontend
$ cd ~/frontend
Copy to Clipboard Copied! Toggle word wrap Toggle overflow Monitor the changes on the file system using:
odo watch
$ odo watch
Copy to Clipboard Copied! Toggle word wrap Toggle overflow Edit the
index.html
file to change the displayed name for the game.NoteA slight delay is possible before odo recognizes the change.
odo pushes the changes to the front-end component and prints its status to the terminal:
Copy to Clipboard Copied! Toggle word wrap Toggle overflow - Refresh the application page in the web browser. The new name is now displayed.
2.6.8. Deleting an application Copy linkLink copied to clipboard!
Deleting an application will delete all components associated with the application.
Procedure
List the applications in the current project:
odo app list
$ odo app list The project '<project_name>' has the following applications: NAME app
Copy to Clipboard Copied! Toggle word wrap Toggle overflow List the components associated with the applications. These components will be deleted with the application:
odo component list
$ odo component list APP NAME TYPE SOURCE STATE app nodejs-nodejs-ex-elyf nodejs file://./ Pushed
Copy to Clipboard Copied! Toggle word wrap Toggle overflow Delete the application:
odo app delete <application_name>
$ odo app delete <application_name> ? Are you sure you want to delete the application: <application_name> from project: <project_name>
Copy to Clipboard Copied! Toggle word wrap Toggle overflow -
Confirm the deletion with
Y
. You can suppress the confirmation prompt using the-f
flag.
2.7. Creating an application with a database Copy linkLink copied to clipboard!
This example describes how to deploy and connect a database to a front-end application.
2.7.1. Prerequisites Copy linkLink copied to clipboard!
-
odo
is installed. -
oc
client is installed. - You have a running cluster. Developers can use CodeReady Containers (CRC) to deploy a local cluster quickly.
- Service Catalog is enabled.
2.7.2. Creating a project Copy linkLink copied to clipboard!
Create a project to keep your source code, tests, and libraries organized in a separate single unit.
Procedure
Log in to an OpenShift Container Platform cluster:
odo login -u developer -p developer
$ odo login -u developer -p developer
Copy to Clipboard Copied! Toggle word wrap Toggle overflow Create a project:
odo project create myproject
$ odo project create myproject ✓ Project 'myproject' is ready for use ✓ New project created and now using project : myproject
Copy to Clipboard Copied! Toggle word wrap Toggle overflow
2.7.3. Deploying the front-end component Copy linkLink copied to clipboard!
To create and deploy a front-end component, download the Node.js application and push the source code to your cluster with odo
.
Procedure
Download the example front-end application:
git clone https://github.com/openshift/nodejs-ex
$ git clone https://github.com/openshift/nodejs-ex
Copy to Clipboard Copied! Toggle word wrap Toggle overflow Change the current directory to the front-end directory:
cd <directory-name>
$ cd <directory-name>
Copy to Clipboard Copied! Toggle word wrap Toggle overflow List the contents of the directory to see that the front end is a Node.js application.
ls
$ ls assets bin index.html kwww-frontend.iml package.json package-lock.json playfield.png README.md server.js
Note: The front-end component is written in an interpreted language (Node.js); it does not need to be built.
Create a component configuration of Node.js component-type named
frontend
:odo create nodejs frontend
$ odo create nodejs frontend ✓ Validating component [5ms] Please use `odo push` command to create the component with source deployed
Copy to Clipboard Copied! Toggle word wrap Toggle overflow Create a URL to access the frontend interface.
odo url create myurl
$ odo url create myurl ✓ URL myurl created for component: nodejs-nodejs-ex-pmdp
Copy to Clipboard Copied! Toggle word wrap Toggle overflow Push the component to the OpenShift Container Platform cluster.
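The command is not preserved in this copy; as in the previous sections, the component is deployed with:
$ odo push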
2.7.4. Deploying a database in interactive mode Copy linkLink copied to clipboard!
odo provides a command-line interactive mode which simplifies deployment.
Procedure
Run the interactive mode and answer the prompts:
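The interactive session output is not preserved in this copy. Assuming the interactive service creation mode, start it without arguments and answer the prompts for the service type, plan, and parameters:
$ odo service create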
Your password or username will be passed to the front-end application as environment variables.
2.7.5. Deploying a database manually Copy linkLink copied to clipboard!
List the available services:
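The command itself is not shown in this copy; as earlier in this guide, the available services are listed with:
$ odo catalog list services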
Choose the mongodb-persistent type of service and see the required parameters:
Pass the required parameters as flags and wait for the deployment of the database:
odo service create mongodb-persistent --plan default --wait -p DATABASE_SERVICE_NAME=mongodb -p MEMORY_LIMIT=512Mi -p MONGODB_DATABASE=sampledb -p VOLUME_CAPACITY=1Gi
$ odo service create mongodb-persistent --plan default --wait -p DATABASE_SERVICE_NAME=mongodb -p MEMORY_LIMIT=512Mi -p MONGODB_DATABASE=sampledb -p VOLUME_CAPACITY=1Gi
Copy to Clipboard Copied! Toggle word wrap Toggle overflow
2.7.6. Connecting the database to the front-end application Copy linkLink copied to clipboard!
Link the database to the front-end service:
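The link command is not preserved in this copy. A typical invocation, assuming the service instance created above is named mongodb-persistent, is:
$ odo link mongodb-persistent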
See the environment variables of the application and the database in the pod:
Open the URL in the browser and notice the database configuration in the bottom right:
odo url list
$ odo url list
2.7.7. Deleting an application Copy linkLink copied to clipboard!
Deleting an application will delete all components associated with the application.
Procedure
List the applications in the current project:
odo app list
$ odo app list The project '<project_name>' has the following applications: NAME app
Copy to Clipboard Copied! Toggle word wrap Toggle overflow List the components associated with the applications. These components will be deleted with the application:
odo component list
$ odo component list APP NAME TYPE SOURCE STATE app nodejs-nodejs-ex-elyf nodejs file://./ Pushed
Copy to Clipboard Copied! Toggle word wrap Toggle overflow Delete the application:
odo app delete <application_name>
$ odo app delete <application_name> ? Are you sure you want to delete the application: <application_name> from project: <project_name>
Copy to Clipboard Copied! Toggle word wrap Toggle overflow -
Confirm the deletion with
Y
. You can suppress the confirmation prompt using the-f
flag.
2.8. Creating applications by using devfiles Copy linkLink copied to clipboard!
Creating applications by using devfiles with `odo` is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information about the support scope of Red Hat Technology Preview features, see https://access.redhat.com/support/offerings/techpreview/.
2.8.1. About the devfile in odo Copy linkLink copied to clipboard!
The devfile is a portable file that describes your development environment. With the devfile, you can define a portable development environment without the need for reconfiguration.
With the devfile, you can describe your development environment, such as the source code, IDE tools, application runtimes, and predefined commands. To learn more about the devfile, see the devfile documentation.
With odo
, you can create components from the devfiles. When creating a component by using a devfile, odo
transforms the devfile into a workspace consisting of multiple containers that run on OpenShift Container Platform, Kubernetes, or Docker. odo
automatically uses the default devfile registry but users can add their own registries.
2.8.2. Creating a Java application by using a devfile Copy linkLink copied to clipboard!
2.8.3. Prerequisites Copy linkLink copied to clipboard!
-
You have installed
odo
. -
You must know your ingress domain cluster name. Contact your cluster administrator if you do not know it. For example,
apps-crc.testing
is the cluster domain name for Red Hat CodeReady Containers. You have enabled Experimental Mode in
odo
.-
To enable Experimental Mode in
odo
preferences, runodo preference set Experimental true
or use the environment variableodo config set --env ODO_EXPERIMENTAL=true
-
To enable Experimental Mode in
2.8.3.1. Creating a project Copy linkLink copied to clipboard!
Create a project to keep your source code, tests, and libraries organized in a separate single unit.
Procedure
Log in to an OpenShift Container Platform cluster:
odo login -u developer -p developer
$ odo login -u developer -p developer
Copy to Clipboard Copied! Toggle word wrap Toggle overflow Create a project:
odo project create myproject
$ odo project create myproject ✓ Project 'myproject' is ready for use ✓ New project created and now using project : myproject
Copy to Clipboard Copied! Toggle word wrap Toggle overflow
2.8.3.2. Listing available devfile components Copy linkLink copied to clipboard!
With odo
, you can display all the components that are available for you on the cluster. Components that are available depend on the configuration of your cluster.
Procedure
To list available devfile components on your cluster, run:
odo catalog list components
$ odo catalog list components
Copy to Clipboard Copied! Toggle word wrap Toggle overflow The output lists the available
odo
components:Copy to Clipboard Copied! Toggle word wrap Toggle overflow
2.8.3.3. Deploying a Java application using a devfile Copy linkLink copied to clipboard!
In this section, you will learn how to deploy a sample Java project that uses Maven and Java 8 JDK using a devfile.
Procedure
Create a directory to store the source code of your component:
mkdir <directory-name>
$ mkdir <directory-name>
Copy to Clipboard Copied! Toggle word wrap Toggle overflow Create a component configuration of Spring Boot component type named
myspring
and download its sample project:odo create java-spring-boot myspring --downloadSource
$ odo create java-spring-boot myspring --downloadSource
Copy to Clipboard Copied! Toggle word wrap Toggle overflow The previous command produces the following output:
Copy to Clipboard Copied! Toggle word wrap Toggle overflow The
odo create
command downloads the associateddevfile.yaml
file from the recorded devfile registries.List the contents of the directory to confirm that the devfile and the sample Java application were downloaded:
ls
$ ls
Copy to Clipboard Copied! Toggle word wrap Toggle overflow The previous command produces the following output:
README.md devfile.yaml pom.xml src
README.md devfile.yaml pom.xml src
Copy to Clipboard Copied! Toggle word wrap Toggle overflow Create a URL to access the deployed component:
odo url create --host apps-crc.testing
$ odo url create --host apps-crc.testing
Copy to Clipboard Copied! Toggle word wrap Toggle overflow The previous command produces the following output:
✓  URL myspring-8080.apps-crc.testing created for component: myspring
To apply the URL configuration changes, please use odo push
Note: You must use your cluster host domain name when creating the URL.
Push the component to the cluster:
odo push
$ odo push
Copy to Clipboard Copied! Toggle word wrap Toggle overflow The previous command produces the following output:
Copy to Clipboard Copied! Toggle word wrap Toggle overflow List the URLs of the component to verify that the component was pushed successfully:
odo url list
$ odo url list
Copy to Clipboard Copied! Toggle word wrap Toggle overflow The previous command produces the following output:
Found the following URLs for component myspring
NAME            URL                                      PORT   SECURE
myspring-8080   http://myspring-8080.apps-crc.testing    8080   false
Copy to Clipboard Copied! Toggle word wrap Toggle overflow View your deployed application by using the generated URL:
curl http://myspring-8080.apps-crc.testing
$ curl http://myspring-8080.apps-crc.testing
Copy to Clipboard Copied! Toggle word wrap Toggle overflow
2.9. Using sample applications Copy linkLink copied to clipboard!
odo
offers partial compatibility with any language or runtime listed within the OpenShift catalog of component types. For example:
For odo, Java and Node.js are the officially supported component types. Run odo catalog list components to verify the officially supported component types.
In order to access the component over the web, create a URL using odo url create
.
2.9.1. Examples from Git repositories Copy linkLink copied to clipboard!
2.9.1.1. httpd Copy linkLink copied to clipboard!
This example helps build and serve static content using httpd on CentOS 7. For more information about using this builder image, including OpenShift Container Platform considerations, see the Apache HTTP Server container image repository.
odo create httpd --git https://github.com/openshift/httpd-ex.git
$ odo create httpd --git https://github.com/openshift/httpd-ex.git
2.9.1.2. java Copy linkLink copied to clipboard!
This example helps build and run fat JAR Java applications on CentOS 7. For more information about using this builder image, including OpenShift Container Platform considerations, see the Java S2I Builder image.
odo create java --git https://github.com/spring-projects/spring-petclinic.git
$ odo create java --git https://github.com/spring-projects/spring-petclinic.git
2.9.1.3. nodejs Copy linkLink copied to clipboard!
Build and run Node.js applications on CentOS 7. For more information about using this builder image, including OpenShift Container Platform considerations, see the Node.js 8 container image.
odo create nodejs --git https://github.com/openshift/nodejs-ex.git
$ odo create nodejs --git https://github.com/openshift/nodejs-ex.git
2.9.1.4. perl Copy linkLink copied to clipboard!
This example helps build and run Perl applications on CentOS 7. For more information about using this builder image, including OpenShift Container Platform considerations, see the Perl 5.26 container image.
odo create perl --git https://github.com/openshift/dancer-ex.git
$ odo create perl --git https://github.com/openshift/dancer-ex.git
2.9.1.5. php Copy linkLink copied to clipboard!
This example helps build and run PHP applications on CentOS 7. For more information about using this builder image, including OpenShift Container Platform considerations, see the PHP 7.1 Docker image.
odo create php --git https://github.com/openshift/cakephp-ex.git
$ odo create php --git https://github.com/openshift/cakephp-ex.git
2.9.1.6. python Copy linkLink copied to clipboard!
This example helps build and run Python applications on CentOS 7. For more information about using this builder image, including OpenShift Container Platform considerations, see the Python 3.6 container image.
odo create python --git https://github.com/openshift/django-ex.git
$ odo create python --git https://github.com/openshift/django-ex.git
2.9.1.7. ruby Copy linkLink copied to clipboard!
This example helps build and run Ruby applications on CentOS 7. For more information about using this builder image, including OpenShift Container Platform considerations, see Ruby 2.5 container image.
odo create ruby --git https://github.com/openshift/ruby-ex.git
$ odo create ruby --git https://github.com/openshift/ruby-ex.git
2.9.1.8. wildfly Copy linkLink copied to clipboard!
This example helps build and run WildFly applications on CentOS 7. For more information about using this builder image, including OpenShift Container Platform considerations, see the Wildfly - CentOS Docker images for OpenShift.
odo create wildfly --git https://github.com/openshift/openshift-jee-sample.git
$ odo create wildfly --git https://github.com/openshift/openshift-jee-sample.git
2.9.2. Binary examples Copy linkLink copied to clipboard!
2.9.2.1. java Copy linkLink copied to clipboard!
Java can be used to deploy a binary artifact as follows:
git clone https://github.com/spring-projects/spring-petclinic.git cd spring-petclinic mvn package odo create java test3 --binary target/*.jar odo push
$ git clone https://github.com/spring-projects/spring-petclinic.git
$ cd spring-petclinic
$ mvn package
$ odo create java test3 --binary target/*.jar
$ odo push
2.9.2.2. wildfly Copy linkLink copied to clipboard!
WildFly can be used to deploy a binary application as follows:
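The original commands are not preserved in this copy. A minimal sketch, assuming a Maven project that produces a WAR artifact (the repository, directory, and artifact names below are placeholders), follows the same pattern as the Java binary example above:
$ git clone <git_repository_with_maven_war_project>
$ cd <project_directory>
$ mvn package
$ odo create wildfly --binary target/<artifact>.war
$ odo push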
2.10. Creating instances of services managed by Operators Copy linkLink copied to clipboard!
Creating instances of services managed by Operators in `odo` is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information about the support scope of Red Hat Technology Preview features, see https://access.redhat.com/support/offerings/techpreview/.
Operators are a method of packaging, deploying, and managing Kubernetes services. With odo
, you can create instances of services from the custom resource definitions (CRDs) provided by the Operators. You can then use these instances in your projects and link them to your components.
To create services from an Operator, you must ensure that the Operator has valid values defined in its metadata
to start the requested service. odo
uses the metadata.annotations.alm-examples
YAML file of an Operator to start the service. If this YAML has placeholder values or sample values, a service cannot start. You can modify the YAML file and start the service with the modified values. To learn how to modify YAML files and start services from it, see Creating services from YAML files.
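For example, you can inspect the alm-examples annotation on an installed Operator's cluster service version to see the CR definitions that odo uses; the CSV name below is the etcd Operator referenced later in this section:
$ oc get csv/etcdoperator.v0.9.4 -o jsonpath='{.metadata.annotations.alm-examples}'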
2.10.1. Prerequisites Copy linkLink copied to clipboard!
Install the
oc
CLI and log into the cluster.- Note that the configuration of the cluster determines the services available to you. To access the Operator services, a cluster administrator must install the respective Operator on the cluster first. To learn more, see Adding Operators to the cluster.
-
Install the
odo
CLI. -
Enable experimental mode. To enable experimental mode in
odo
, run:odo preference set Experimental true
or use the environment variableodo config set --env ODO_EXPERIMENTAL=true
2.10.2. Creating a project Copy linkLink copied to clipboard!
Create a project to keep your source code, tests, and libraries organized in a separate single unit.
Procedure
Log in to an OpenShift Container Platform cluster:
odo login -u developer -p developer
$ odo login -u developer -p developer
Copy to Clipboard Copied! Toggle word wrap Toggle overflow Create a project:
odo project create myproject
$ odo project create myproject ✓ Project 'myproject' is ready for use ✓ New project created and now using project : myproject
Copy to Clipboard Copied! Toggle word wrap Toggle overflow
2.10.3. Listing available services from the Operators installed on the cluster Copy linkLink copied to clipboard!
With odo
, you can display the list of the Operators installed on your cluster, and the services they provide.
To list the Operators installed in current project, run:
odo catalog list services
$ odo catalog list services
Copy to Clipboard Copied! Toggle word wrap Toggle overflow The command lists Operators and the CRDs. The output of the command shows the Operators installed on your cluster. For example:
Operators available in the cluster
NAME                        CRDs
etcdoperator.v0.9.4         EtcdCluster, EtcdBackup, EtcdRestore
mongodb-enterprise.v1.4.5   MongoDB, MongoDBUser, MongoDBOpsManager
Copy to Clipboard Copied! Toggle word wrap Toggle overflow etcdoperator.v0.9.4
is the Operator,EtcdCluster
,EtcdBackup
andEtcdRestore
are the CRDs provided by the Operator.
2.10.4. Creating a service from an Operator Copy linkLink copied to clipboard!
If an Operator has valid values defined in its metadata
to start the requested service, you can use the service with odo service create
.
Print the YAML of the service as a file on your local drive:
oc get csv/etcdoperator.v0.9.4 -o yaml
$ oc get csv/etcdoperator.v0.9.4 -o yaml
Copy to Clipboard Copied! Toggle word wrap Toggle overflow Verify that the values of the service are valid:
Copy to Clipboard Copied! Toggle word wrap Toggle overflow Start an
EtcdCluster
service from theetcdoperator.v0.9.4
Operator:odo service create etcdoperator.v0.9.4 --crd EtcdCluster
$ odo service create etcdoperator.v0.9.4 --crd EtcdCluster
Copy to Clipboard Copied! Toggle word wrap Toggle overflow Verify that a service has started:
oc get EtcdCluster
$ oc get EtcdCluster
Copy to Clipboard Copied! Toggle word wrap Toggle overflow
2.10.5. Creating services from YAML files Copy linkLink copied to clipboard!
If the YAML definition of the service or custom resource (CR) has invalid or placeholder data, you can use the --dry-run flag to get the YAML definition, specify the correct values, and start the service using the corrected YAML definition.
Printing and modifying the YAML used to start a service
odo provides the feature to print the YAML definition of the service or CR provided by the Operator before starting a service.
To display the YAML of the service, run:
odo service create <operator-name> --crd <cr-name> --dry-run
$ odo service create <operator-name> --crd <cr-name> --dry-run
For example, to print the YAML definition of
EtcdCluster
provided by theetcdoperator.v0.9.4
Operator, run:
odo service create etcdoperator.v0.9.4 --crd EtcdCluster --dry-run
$ odo service create etcdoperator.v0.9.4 --crd EtcdCluster --dry-run
The YAML is saved as the
etcd.yaml
file.
Modify the
etcd.yaml
file to reduce the etcd cluster from the pre-configured three pods to one, as in the sketch below:
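The following EtcdCluster definition is a minimal sketch of what a modified etcd.yaml file might contain. The apiVersion and version values are assumptions based on the etcd Operator example and are not taken from this document; the name my-etcd-cluster matches the verification step below.
apiVersion: etcd.database.coreos.com/v1beta2
kind: EtcdCluster
metadata:
  name: my-etcd-cluster
spec:
  size: 1        # one etcd pod instead of the pre-configured three
  version: 3.2.13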
Start a service from the YAML file:
odo service create --from-file etcd.yaml
$ odo service create --from-file etcd.yaml
Verify that the
EtcdCluster
service has started with one pod instead of the pre-configured three pods:
oc get pods | grep my-etcd-cluster
$ oc get pods | grep my-etcd-cluster
2.11. Debugging applications in odo Copy linkLink copied to clipboard!
Interactive debugging in odo is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information about the support scope of Red Hat Technology Preview features, see https://access.redhat.com/support/offerings/techpreview/.
With odo
, you can attach a debugger to remotely debug your application. This feature is only supported for NodeJS and Java components.
Components created with odo
run in the debug mode by default. A debugger agent runs on the component, on a specific port. To start debugging your application, you must start port forwarding and attach the local debugger bundled in your Integrated development environment (IDE).
2.11.1. Debugging an application Copy linkLink copied to clipboard!
You can debug your application in
with the odo debug
command.
Procedure
After an application is deployed, start the port forwarding for your component to debug the application:
odo debug port-forward
$ odo debug port-forward
- Attach the debugger bundled in your IDE to the component. Instructions vary depending on your IDE. To confirm that debug mode is enabled and that port forwarding is running before you attach, see the sketch after this step.
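The odo debug info command, described in the release notes later in this document, reports whether debug mode is enabled for the component, whether the port-forward process is running, and which ports are used; running it here is an optional check.
$ odo debug info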
2.11.2. Configuring debugging parameters Copy linkLink copied to clipboard!
You can specify a remote port with odo config
command and a local port with the odo debug
command.
Procedure
To set a remote port on which the debugging agent should run, run:
odo config set DebugPort 9292
$ odo config set DebugPort 9292
Note: You must redeploy your component for this value to be reflected on the component.
To set a local port to port forward, run:
odo debug port-forward --local-port 9292
$ odo debug port-forward --local-port 9292
Note: The local port value does not persist. You must provide it every time you need to change the port.
2.12. Managing environment variables Copy linkLink copied to clipboard!
odo
stores component-specific configurations and environment variables in the config
file. You can use the odo config
command to set, unset, and list environment variables for components without the need to modify the config
file.
2.12.1. Setting and unsetting environment variables Copy linkLink copied to clipboard!
Procedure
To set an environment variable in a component:
odo config set --env <variable>=<value>
$ odo config set --env <variable>=<value>
To unset an environment variable in a component:
odo config unset --env <variable>
$ odo config unset --env <variable>
To list all environment variables in a component:
odo config view
$ odo config view
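For example, the following sequence sets, lists, and removes a hypothetical MYSQL_USER variable; the variable name and value are placeholders, and running odo push afterwards is assumed to be how the change reaches the component on the cluster.
$ odo config set --env MYSQL_USER=admin
$ odo config view
$ odo config unset --env MYSQL_USER
$ odo push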
2.13. Configuring the odo CLI Copy linkLink copied to clipboard!
2.13.1. Using command completion Copy linkLink copied to clipboard!
Currently command completion is only supported for bash, zsh, and fish shells.
odo provides a smart completion of command parameters based on user input. For this to work, odo needs to integrate with the executing shell.
Procedure
To install command completion automatically:
Run:
odo --complete
$ odo --complete
-
Press
y
when prompted to install the completion hook.
-
To install the completion hook manually, add
complete -o nospace -C <full_path_to_your_odo_binary> odo
to your shell configuration file (see the sketch after this procedure). After any modification to your shell configuration file, restart your shell. To disable completion:
Run:
odo --uncomplete
$ odo --uncomplete
-
Press
y
when prompted to uninstall the completion hook.
Re-enable command completion if you either rename the odo executable or move it to a different directory.
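A minimal sketch of the manual completion hook added to a Bash configuration file; the binary path /usr/local/bin/odo is an assumption, so substitute the full path to your own odo binary.
# ~/.bashrc
# odo command completion hook (the binary path is an assumption)
complete -o nospace -C /usr/local/bin/odo odo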
2.13.2. Ignoring files or patterns Copy linkLink copied to clipboard!
You can configure a list of files or patterns to ignore by modifying the .odoignore
file in the root directory of your application. This applies to both odo push
and odo watch
.
If the .odoignore
file does not exist, the .gitignore
file is used instead for ignoring specific files and folders.
To ignore .git
files, any files with the .js
extension, and the folder tests
, add the following to either the .odoignore
or the .gitignore
file:
.git *.js tests/
.git
*.js
tests/
The .odoignore
file allows any glob expressions.
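As a sketch of the glob support, the following .odoignore entries extend the example above with hypothetical patterns; the additional file and directory names are placeholders.
.git
*.js
tests/
# Hypothetical additional patterns
*.log
node_modules/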
2.14. odo CLI reference Copy linkLink copied to clipboard!
2.14.1. Basic odo CLI commands Copy linkLink copied to clipboard!
2.14.1.1. app Copy linkLink copied to clipboard!
Perform application operations related to your OpenShift Container Platform project.
Example using app
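The original example block is not reproduced here. As a sketch, the following commands show typical app operations; the application name myapp is a placeholder.
# List all applications in the current project
odo app list

# Describe the application named 'myapp'
odo app describe myapp

# Delete the application named 'myapp'
odo app delete myapp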
2.14.1.2. catalog Copy linkLink copied to clipboard!
Perform catalog-related operations.
Example using catalog
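A sketch of typical catalog operations; the component type nodejs is used only as an example.
# List all component types available in the catalog
odo catalog list components

# List the services available from the installed Operators or the service catalog
odo catalog list services

# Search for a component type by name
odo catalog search component nodejs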
2.14.1.3. component Copy linkLink copied to clipboard!
Manage components of an application.
Example using component
Create a new component Create a local configuration and create all objects on the cluster
# Create a new component
odo component create
# Create a local configuration and create all objects on the cluster
odo component create --now
2.14.1.4. config Copy linkLink copied to clipboard!
Modify odo
specific settings within the config
file.
Example using config
Application | The name of the application that the component needs to be part of |
CPU | The minimum and maximum CPU a component can consume |
Ignore | Whether to consider the .odoignore file for push and watch |
MaxCPU | The maximum CPU a component can consume |
MaxMemory | The maximum memory a component can consume |
Memory | The minimum and maximum memory a component can consume |
MinCPU | The minimum CPU a component can consume |
MinMemory | The minimum memory a component is provided |
Name | The name of the component |
Ports | Ports to be opened in the component |
Project | The name of the project that the component is part of |
Ref | Git ref to use for creating component from git source |
SourceLocation | The path indicates the location of binary file or git source |
SourceType | Type of component source - git/binary/local |
Storage | Storage of the component |
Type | The type of component |
Url | The URL to access the component |
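As a sketch of how the parameters listed above are used, the following commands set hypothetical resource limits and then display the resulting configuration; the values are placeholders.
# Set the minimum and maximum memory for the component
odo config set MinMemory 256Mi
odo config set MaxMemory 1Gi

# Review the current configuration
odo config view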
2.14.1.5. create Copy linkLink copied to clipboard!
Create a configuration describing a component to be deployed on OpenShift Container Platform. If a component name is not provided, it is autogenerated.
By default, builder images are used from the current namespace. To explicitly supply a namespace, use: odo create namespace/name:version
. If a version is not specified, the version defaults to latest
.
Use odo catalog list
to see a full list of component types that can be deployed.
Example using create
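The original example block is not reproduced here. The following commands are a sketch based on the syntax described above; the component name mynodejs and the namespace myproject are placeholders.
# Create a component from the latest nodejs builder image in the current namespace
odo create nodejs

# Create a named component from a specific namespace and image version
odo create myproject/nodejs:10 mynodejs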
2.14.1.6. debug Copy linkLink copied to clipboard!
Debug a component.
2.14.1.7. delete Copy linkLink copied to clipboard!
Delete an existing component.
Example using delete
Delete component named 'frontend'.
# Delete component named 'frontend'.
odo delete frontend
odo delete frontend --all-apps
2.14.1.8. describe Copy linkLink copied to clipboard!
Describe the given component.
Example using describe
Describe nodejs component
# Describe nodejs component
odo describe nodejs
2.14.1.9. link Copy linkLink copied to clipboard!
Link a component to a service or component.
Example using link
Link adds the appropriate secret to the environment of the source component. The source component can then consume the entries of the secret as environment variables. If the source component is not provided, the current active component is assumed.
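A sketch of linking, assuming a service or component named 'backend' and a source component named 'frontend'; both names are placeholders.
# Link 'backend' to the current component
odo link backend

# Link 'backend' to the 'frontend' component explicitly
odo link backend --component frontend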
2.14.1.10. list Copy linkLink copied to clipboard!
List all the components in the current application.
Example using list
List all components in the application
# List all components in the application
odo list
2.14.1.11. log Copy linkLink copied to clipboard!
Retrieve the log for the given component.
Example using log
Get the logs for the nodejs component
# Get the logs for the nodejs component
odo log nodejs
2.14.1.12. login Copy linkLink copied to clipboard!
Log in to the cluster.
Example using login
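A sketch of logging in; the server URL is a placeholder, and the developer credentials match the example used earlier in this document.
# Log in to a cluster by specifying its server URL
odo login https://mycluster.example.com

# Log in with a user name and password
odo login -u developer -p developer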
2.14.1.13. logout Copy linkLink copied to clipboard!
Log out of the current OpenShift Container Platform session.
Example using logout
Log out
# Log out
odo logout
2.14.1.14. preference Copy linkLink copied to clipboard!
Modify odo
specific configuration settings within the global preference file.
Example using preference
By default, the path to the global preference file is ~/.odo/preference.yaml
and it is stored in the environment variable GLOBALODOCONFIG
. You can set up a custom path by setting the value of the environment variable to a new preference path, for example GLOBALODOCONFIG="new_path/preference.yaml".
NamePrefix | The default prefix is the current directory name. Use this value to set a default name prefix. |
Timeout | The timeout (in seconds) for OpenShift Container Platform server connection checks. |
UpdateNotification | Controls whether an update notification is shown. |
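As a sketch of how the parameters listed above are set, the following commands use placeholder values.
# Disable the update notification
odo preference set updatenotification false

# Set the server connection timeout to 10 seconds
odo preference set timeout 10

# Review the current preferences
odo preference view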
2.14.1.15. project Copy linkLink copied to clipboard!
Perform project operations.
Example using project
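A sketch of common project operations; the project name myproject is a placeholder.
# Create a new project
odo project create myproject

# List all available projects
odo project list

# Switch to an existing project
odo project set myproject

# Delete a project
odo project delete myproject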
2.14.1.16. push Copy linkLink copied to clipboard!
Push source code to a component.
Example using push
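A sketch of pushing source code; pushing a named component is shown on the assumption that push accepts a component name, as the other component commands in this reference do.
# Push the source code of the current component
odo push

# Push the source code of the component called 'frontend'
odo push frontend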
2.14.1.17. service Copy linkLink copied to clipboard!
Perform service catalog operations.
Example using service
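A sketch of service operations; the service name mysql-persistent is a placeholder, and the Operator-backed form shown earlier in this document (odo service create <operator-name> --crd <cr-name>) also applies here.
# List the services created in the current application
odo service list

# Create a new service instance from the catalog
odo service create mysql-persistent

# Delete a service instance
odo service delete mysql-persistent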
2.14.1.18. storage Copy linkLink copied to clipboard!
Perform storage operations.
Example using storage
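A sketch of storage operations; the storage name, mount path, and size are placeholders.
# Create storage and mount it into the component
odo storage create mystorage --path=/opt/app-root/src/storage/ --size=1Gi

# List the storage attached to the current component
odo storage list

# Delete the storage
odo storage delete mystorage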
2.14.1.19. unlink Copy linkLink copied to clipboard!
Unlink a component or a service.
For this command to be successful, the service or component must have been linked prior to the invocation using odo link
.
Example using unlink
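A sketch of unlinking, mirroring the link example above; the names 'backend' and 'frontend' are placeholders.
# Unlink 'backend' from the 'frontend' component
odo unlink backend --component frontend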
2.14.1.20. update Copy linkLink copied to clipboard!
Update the source code path of a component.
Example using update
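A sketch of updating the source of a component; the local path and the Git URL are placeholders, and the --local and --git flags are assumed from the SourceType values (git/binary/local) listed in the config reference above.
# Update the component to use a local directory as the source
odo update --local ./frontend

# Update the component to use a Git repository as the source
odo update --git https://github.com/openshift/nodejs-ex.git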
2.14.1.21. url Copy linkLink copied to clipboard!
Expose a component to the outside world.
Example using url
The URLs that are generated using this command can be used to access the deployed components from outside the cluster.
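A sketch of exposing a component; the URL name and port are placeholders.
# Create a URL for the current component, exposing port 8080
odo url create myurl --port 8080

# List the URLs of the current component
odo url list

# Delete the URL
odo url delete myurl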
2.14.1.22. utils Copy linkLink copied to clipboard!
Utilities for terminal commands and modifying odo configurations.
Example using utils
Bash terminal PS1 support
# Bash terminal PS1 support
source <(odo utils terminal bash)
# Zsh terminal PS1 support
source <(odo utils terminal zsh)
2.14.1.23. version Copy linkLink copied to clipboard!
Print the client version information.
Example using version
Print the client version of odo
# Print the client version of odo
odo version
2.14.1.24. watch Copy linkLink copied to clipboard!
odo watches for changes in the source directory and automatically updates the component when a change occurs.
Example using watch
Watch for changes in directory for current component
# Watch for changes in directory for current component
odo watch
# Watch for changes in directory for component called frontend
odo watch frontend
2.15. odo release notes Copy linkLink copied to clipboard!
2.15.1. Notable changes and improvements in odo Copy linkLink copied to clipboard!
A
--now
flag is added for odo create
, odo url
, and odo config
. Run
odo url create --now
to create the URL in the configuration and propagate all the changes to the cluster.
-
An
odo debug info
command is added. The command displays if the debug mode is enabled for a component, if the port-forward process is running, and information about ports used. -
The
odo push
output now shows the exact error from the OpenShift cluster on failure. -
A
--secure
flag is added for odo url
. It indicates whether a URL is secure. -
odo storage list
now displays the information about the storage state: Pushed
, NotPushed
, or Locally Deleted
. -
odo debug port-forward
now selects a port automatically if the default port (5858) is occupied. -
The
--all
flag is renamed to--all-apps
. - The default PVC size has been increased to 10GiB.
-
The JSON output for
odo storage list -o json
has been restructured. -
nodejs-8
and nodejs-10
images are no longer supported.
Experimental mode feature. By default, features under development or in experimental mode are hidden from the user.
To enable experimental mode, run:
odo config set --env ODO_EXPERIMENTAL=true
$ odo config set --env ODO_EXPERIMENTAL=true
or
odo preference set experimental true
$ odo preference set experimental true
To disable experimental mode, run:
odo config unset --env ODO_EXPERIMENTAL
$ odo config unset --env ODO_EXPERIMENTAL
or
odo preference set experimental false
$ odo preference set experimental false
-
odo
now supports Ingress to create URLs on Kubernetes.
2.15.2. Getting support Copy linkLink copied to clipboard!
For Documentation
If you find an error or have suggestions for improving the documentation, file an issue in Bugzilla. Choose the OpenShift Container Platform product type and the Documentation component type.
For Product
If you find an error, encounter a bug, or have suggestions for improving the functionality of odo
, file an issue in Bugzilla. Choose the Red Hat odo for OpenShift Container Platform product type.
Provide as many details in the issue description as possible.
2.15.3. Fixed issues Copy linkLink copied to clipboard!
-
Bug 1760575 The
odo app delete
command removes application components but not Services. -
Bug 1760577 The
odo push
command does not delete the OpenShift objects when the component name is changed.
2.15.4. Known issues Copy linkLink copied to clipboard!
-
Bug 1760574 A deleted namespace is listed in the
odo project get
command. -
Bug 1760586 The
odo delete
command starts an infinite loop after a project is deleted and a component name is set. -
Bug 1760588 The
odo service create
command crashes when run in Cygwin. -
Bug 1760590 In Git BASH for Windows, the
odo login -u developer
command does not hide a typed password when requested. -
Bug 1783188 In a disconnected cluster, the
odo component create
command throws an error…tag not found…
despite the component being listed in the catalog list. - Bug 1761440 It is not possible to create two Services of the same type in one project.
Bug 1821643
odo push
does not work on the .NET component tag 2.1+. Workaround: specify your .NET project file by running:
odo config set --env DOTNET_STARTUP_PROJECT=<path to your project file>
$ odo config set --env DOTNET_STARTUP_PROJECT=<path to your project file>
2.15.5. Technology Preview features in odo Copy linkLink copied to clipboard!
-
odo debug
is a feature that allows users to attach a local debugger to a component running in the Pod. To learn more, see Debugging applications in odo.
odo debug is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information about the support scope of Red Hat Technology Preview features, see https://access.redhat.com/support/offerings/techpreview/.
Devfile support. You can now create and deploy your applications using devfiles with
odo
. A devfile is a file that defines the development environment: environment variables, image, and so on. To access this feature, you must enable experimental mode with odo preference set experimental true
. To see the list of currently supported devfile components, run
odo catalog list components
Devfile support is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information about the support scope of Red Hat Technology Preview features, see https://access.redhat.com/support/offerings/techpreview/.
Chapter 3. Helm CLI Copy linkLink copied to clipboard!
3.1. Getting started with Helm 3 on OpenShift Container Platform Copy linkLink copied to clipboard!
3.1.1. Understanding Helm Copy linkLink copied to clipboard!
Helm is a command-line interface (CLI) tool that simplifies deployment of applications and services to OpenShift Container Platform clusters. Helm uses a packaging format called charts. A Helm chart is a collection of files that describes OpenShift Container Platform resources. A running instance of the chart in a cluster is called a release. A new release is created every time a chart is installed on the cluster.
3.1.1.1. Key features Copy linkLink copied to clipboard!
Helm provides the ability to:
- Search through a large collection of charts stored in the chart repository.
- Modify existing charts.
- Create your own charts with OpenShift Container Platform or Kubernetes resources.
- Package and share your applications as charts.
3.1.2. Installing Helm Copy linkLink copied to clipboard!
The following section describes how to install Helm on different platforms using the CLI.
You can also find the URL to the latest binaries from the OpenShift Container Platform web console by clicking the ? icon in the upper-right corner and selecting Command Line Tools.
Prerequisites
- You have installed Go, version 1.13 or higher.
3.1.2.1. On Linux Copy linkLink copied to clipboard!
Download the Helm binary and add it to your path:
curl -L https://mirror.openshift.com/pub/openshift-v4/clients/helm/latest/helm-linux-amd64 -o /usr/local/bin/helm
# curl -L https://mirror.openshift.com/pub/openshift-v4/clients/helm/latest/helm-linux-amd64 -o /usr/local/bin/helm
Copy to Clipboard Copied! Toggle word wrap Toggle overflow Make the binary file executable:
chmod +x /usr/local/bin/helm
# chmod +x /usr/local/bin/helm
Copy to Clipboard Copied! Toggle word wrap Toggle overflow Check the installed version:
helm version
$ helm version
version.BuildInfo{Version:"v3.0", GitCommit:"b31719aab7963acf4887a1c1e6d5e53378e34d93", GitTreeState:"clean", GoVersion:"go1.13.4"}
Copy to Clipboard Copied! Toggle word wrap Toggle overflow
3.1.2.2. On Windows 7/8 Copy linkLink copied to clipboard!
-
Download the latest
.exe
file and put it in a directory of your preference. - Right-click Start and click Control Panel.
- Select System and Security and then click System.
- From the menu on the left, select Advanced system settings and click Environment Variables at the bottom.
- Select Path from the Variable section and click Edit.
-
Click New and type the path to the folder with the
.exe
file into the field, or click Browse and select the directory, and click OK.
3.1.2.3. On Windows 10 Copy linkLink copied to clipboard!
-
Download the latest
.exe
file and put it in a directory of your preference. -
Click Search and type
env
orenvironment
. - Select Edit environment variables for your account.
- Select Path from the Variable section and click Edit.
- Click New and type the path to the directory with the .exe file into the field, or click Browse and select the directory, and click OK.
3.1.2.4. On MacOS Copy linkLink copied to clipboard!
Download the Helm binary and add it to your path:
curl -L https://mirror.openshift.com/pub/openshift-v4/clients/helm/latest/helm-darwin-amd64 -o /usr/local/bin/helm
# curl -L https://mirror.openshift.com/pub/openshift-v4/clients/helm/latest/helm-darwin-amd64 -o /usr/local/bin/helm
Copy to Clipboard Copied! Toggle word wrap Toggle overflow Make the binary file executable:
chmod +x /usr/local/bin/helm
# chmod +x /usr/local/bin/helm
Copy to Clipboard Copied! Toggle word wrap Toggle overflow Check the installed version:
helm version
$ helm version
version.BuildInfo{Version:"v3.0", GitCommit:"b31719aab7963acf4887a1c1e6d5e53378e34d93", GitTreeState:"clean", GoVersion:"go1.13.4"}
Copy to Clipboard Copied! Toggle word wrap Toggle overflow
3.1.3. Installing a Helm chart on an OpenShift Container Platform cluster Copy linkLink copied to clipboard!
Prerequisites
- You have a running OpenShift Container Platform cluster and you have logged into it.
- You have installed Helm.
Procedure
Create a new project:
oc new-project mysql
$ oc new-project mysql
Copy to Clipboard Copied! Toggle word wrap Toggle overflow Add a repository of Helm charts to your local Helm client:
helm repo add stable https://kubernetes-charts.storage.googleapis.com/
$ helm repo add stable https://kubernetes-charts.storage.googleapis.com/
"stable" has been added to your repositories
Copy to Clipboard Copied! Toggle word wrap Toggle overflow Update the repository:
helm repo update
$ helm repo update
Copy to Clipboard Copied! Toggle word wrap Toggle overflow Install an example MySQL chart:
helm install example-mysql stable/mysql
$ helm install example-mysql stable/mysql
Copy to Clipboard Copied! Toggle word wrap Toggle overflow Verify that the chart has installed successfully:
helm list
$ helm list
NAME            NAMESPACE   REVISION   UPDATED                                   STATUS     CHART         APP VERSION
example-mysql   mysql       1          2019-12-05 15:06:51.379134163 -0500 EST   deployed   mysql-1.5.0   5.7.27
Copy to Clipboard Copied! Toggle word wrap Toggle overflow
3.1.4. Creating a custom Helm chart on OpenShift Container Platform Copy linkLink copied to clipboard!
Procedure
Create a new project:
oc new-project nodejs-ex-k
$ oc new-project nodejs-ex-k
Copy to Clipboard Copied! Toggle word wrap Toggle overflow Download an example Node.js chart that contains OpenShift Container Platform objects:
git clone https://github.com/redhat-developer/redhat-helm-charts
$ git clone https://github.com/redhat-developer/redhat-helm-charts
Copy to Clipboard Copied! Toggle word wrap Toggle overflow Go to the directory with the sample chart:
cd redhat-helm-charts/alpha/nodejs-ex-k/
$ cd redhat-helm-charts/alpha/nodejs-ex-k/
Copy to Clipboard Copied! Toggle word wrap Toggle overflow Edit the
Chart.yaml
file and add a description of your chart:apiVersion: v2 name: nodejs-ex-k description: A Helm chart for OpenShift icon: https://static.redhat.com/libs/redhat/brand-assets/latest/corp/logo.svg
apiVersion: v2
name: nodejs-ex-k
description: A Helm chart for OpenShift
icon: https://static.redhat.com/libs/redhat/brand-assets/latest/corp/logo.svg
Verify that the chart is formatted properly:
helm lint
$ helm lint
[INFO] Chart.yaml: icon is recommended
1 chart(s) linted, 0 chart(s) failed
Copy to Clipboard Copied! Toggle word wrap Toggle overflow Install the chart:
cd .. helm install nodejs-chart nodejs-ex-k
$ cd ..
$ helm install nodejs-chart nodejs-ex-k
Copy to Clipboard Copied! Toggle word wrap Toggle overflow Verify that the chart has installed successfully:
helm list
$ helm list
NAME           NAMESPACE     REVISION   UPDATED                                   STATUS     CHART          APP VERSION
nodejs-chart   nodejs-ex-k   1          2019-12-05 15:06:51.379134163 -0500 EST   deployed   nodejs-0.1.0   1.16.0
Copy to Clipboard Copied! Toggle word wrap Toggle overflow
Chapter 4. Knative CLI (kn) for use with OpenShift Serverless Copy linkLink copied to clipboard!
kn
enables simple interaction with Knative components on OpenShift Container Platform.
You can enable Knative on OpenShift Container Platform by installing OpenShift Serverless. For more information, see the documentation on Getting started with OpenShift Serverless.
OpenShift Serverless cannot be installed using the kn
CLI. A cluster administrator must install the OpenShift Serverless Operator and set up the Knative components, as described in the Serverless applications documentation for OpenShift Container Platform.
4.1. Key features Copy linkLink copied to clipboard!
kn
is designed to make serverless computing tasks simple and concise. Key features of kn
include:
- Deploy serverless applications from the command line.
- Manage features of Knative Serving, such as services, revisions, and traffic-splitting.
Create and manage Knative Eventing components, such as event sources and triggers.
NoteKnative Eventing is currently available as a Technology Preview feature of OpenShift Serverless.
- Create a sink binding to connect existing Kubernetes applications and Knative services.
-
Extend
kn
with a flexible plugin architecture, similar to kubectl
. - Configure autoscaling parameters for Knative services.
- Scripted usage, such as waiting for the results of an operation, or deploying custom rollout and rollback strategies.
4.2. Installing kn Copy linkLink copied to clipboard!
For information about installing kn
for use with OpenShift Serverless, see the documentation on Installing the Knative CLI (kn
).
Chapter 5. Pipelines CLI (tkn) Copy linkLink copied to clipboard!
5.1. Installing tkn Copy linkLink copied to clipboard!
Use the tkn
CLI to manage OpenShift Pipelines from a terminal. The following section describes how to install tkn
on different platforms.
You can also find the URL to the latest binaries from the OpenShift Container Platform web console by clicking the ? icon in the upper-right corner and selecting Command Line Tools.
5.1.1. Installing OpenShift Pipelines CLI (tkn) on Linux Copy linkLink copied to clipboard!
For Linux distributions, you can download the CLI directly as a tar.gz
archive.
Procedure
- Download the CLI.
Unpack the archive:
tar xvzf <file>
$ tar xvzf <file>
Copy to Clipboard Copied! Toggle word wrap Toggle overflow -
Place the
tkn
binary in a directory that is on yourPATH
. To check your
PATH
, run:echo $PATH
$ echo $PATH
Copy to Clipboard Copied! Toggle word wrap Toggle overflow
5.1.2. Installing OpenShift Pipelines CLI (tkn) on Linux using an RPM Copy linkLink copied to clipboard!
For Red Hat Enterprise Linux (RHEL) version 8, you can install the OpenShift Pipelines CLI (tkn
) as an RPM.
Prerequisites
- You have an active OpenShift Container Platform subscription on your Red Hat account.
- You have root or sudo privileges on your local system.
Procedure
Register with Red Hat Subscription Manager:
subscription-manager register
# subscription-manager register
Copy to Clipboard Copied! Toggle word wrap Toggle overflow Pull the latest subscription data:
subscription-manager refresh
# subscription-manager refresh
Copy to Clipboard Copied! Toggle word wrap Toggle overflow List the available subscriptions:
subscription-manager list --available --matches '*pipelines*'
# subscription-manager list --available --matches '*pipelines*'
Copy to Clipboard Copied! Toggle word wrap Toggle overflow In the output for the previous command, find the pool ID for your OpenShift Container Platform subscription and attach the subscription to the registered system:
subscription-manager attach --pool=<pool_id>
# subscription-manager attach --pool=<pool_id>
Copy to Clipboard Copied! Toggle word wrap Toggle overflow Enable the repositories required by OpenShift Pipelines:
subscription-manager repos --enable="pipelines-1.0-for-rhel-8-x86_64-rpms"
# subscription-manager repos --enable="pipelines-1.0-for-rhel-8-x86_64-rpms"
Copy to Clipboard Copied! Toggle word wrap Toggle overflow Install the
openshift-pipelines-client
package:yum install openshift-pipelines-client
# yum install openshift-pipelines-client
Copy to Clipboard Copied! Toggle word wrap Toggle overflow
After you install the CLI, it is available using the tkn
command:
tkn version
$ tkn version
5.1.3. Installing OpenShift Pipelines CLI (tkn) on Windows Copy linkLink copied to clipboard!
For Windows, the tkn
CLI is provided as a zip
archive.
Procedure
- Download the CLI.
- Unzip the archive with a ZIP program.
-
Add the location of your
tkn.exe
file to yourPATH
environment variable. To check your
PATH
, open the command prompt and run the command:path
C:\> path
Copy to Clipboard Copied! Toggle word wrap Toggle overflow
5.1.4. Installing OpenShift Pipelines CLI (tkn) on macOS Copy linkLink copied to clipboard!
For macOS, the tkn
CLI is provided as a tar.gz
archive.
Procedure
- Download the CLI.
- Unpack and unzip the archive.
-
Move the
tkn
binary to a directory on your PATH. To check your
PATH
, open a terminal window and run:echo $PATH
$ echo $PATH
Copy to Clipboard Copied! Toggle word wrap Toggle overflow
5.2. Configuring the OpenShift Pipelines tkn CLI Copy linkLink copied to clipboard!
Configure the OpenShift Pipelines tkn
CLI to enable tab completion.
5.2.1. Enabling tab completion Copy linkLink copied to clipboard!
After you install the tkn
CLI, you can enable tab completion to automatically complete tkn
commands or suggest options when you press Tab.
Prerequisites
-
You must have the
tkn
CLI tool installed. -
You must have
bash-completion
installed on your local system.
Procedure
The following procedure enables tab completion for Bash.
Save the Bash completion code to a file:
tkn completion bash > tkn_bash_completion
$ tkn completion bash > tkn_bash_completion
Copy to Clipboard Copied! Toggle word wrap Toggle overflow Copy the file to
/etc/bash_completion.d/
:sudo cp tkn_bash_completion /etc/bash_completion.d/
$ sudo cp tkn_bash_completion /etc/bash_completion.d/
Copy to Clipboard Copied! Toggle word wrap Toggle overflow Alternatively, you can save the file to a local directory and source it from your
.bashrc
file instead.
Tab completion is enabled when you open a new terminal.
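The tkn completion command also supports zsh (see the completion reference below). The following is a rough sketch for zsh; it assumes that compinit is already initialized in your zsh configuration, and the file path is a placeholder.
$ tkn completion zsh > ~/.tkn_zsh_completion
$ echo 'source ~/.tkn_zsh_completion' >> ~/.zshrc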
5.3. OpenShift Pipelines tkn reference Copy linkLink copied to clipboard!
This section lists the basic tkn
CLI commands.
5.3.1. Basic syntax Copy linkLink copied to clipboard!
tkn [command or options] [arguments…]
5.3.2. Global options Copy linkLink copied to clipboard!
--help, -h
5.3.3. Utility commands Copy linkLink copied to clipboard!
5.3.3.1. tkn Copy linkLink copied to clipboard!
Parent command for tkn
CLI.
Example: Display all options
tkn
$ tkn
5.3.3.2. completion [shell] Copy linkLink copied to clipboard!
Print shell completion code which must be evaluated to provide interactive completion. Supported shells are bash
and zsh
.
Example: Completion code for bash
shell
tkn completion bash
$ tkn completion bash
5.3.3.3. version Copy linkLink copied to clipboard!
Print version information of the tkn
CLI.
Example: Check the tkn
version
tkn version
$ tkn version
5.3.4. Pipelines management commands Copy linkLink copied to clipboard!
5.3.4.1. pipeline Copy linkLink copied to clipboard!
Manage Pipelines.
Example: Display help
tkn pipeline --help
$ tkn pipeline --help
5.3.4.2. pipeline create Copy linkLink copied to clipboard!
Create a Pipeline.
Example: Create a Pipeline defined by the mypipeline.yaml
file in a namespace
tkn pipeline create -f mypipeline.yaml -n myspace
$ tkn pipeline create -f mypipeline.yaml -n myspace
5.3.4.3. pipeline delete Copy linkLink copied to clipboard!
Delete a Pipeline.
Example: Delete the mypipeline
Pipeline from a namespace
tkn pipeline delete mypipeline -n myspace
$ tkn pipeline delete mypipeline -n myspace
5.3.4.4. pipeline describe Copy linkLink copied to clipboard!
Describe a Pipeline.
Example: Describe mypipeline
Pipeline
tkn pipeline describe mypipeline
$ tkn pipeline describe mypipeline
5.3.4.5. pipeline list Copy linkLink copied to clipboard!
List Pipelines.
Example: Display a list of Pipelines
tkn pipeline list
$ tkn pipeline list
5.3.4.6. pipeline logs Copy linkLink copied to clipboard!
Display Pipeline logs for a specific Pipeline.
Example: Stream live logs for the mypipeline
Pipeline
tkn pipeline logs -f mypipeline
$ tkn pipeline logs -f mypipeline
5.3.4.7. pipeline start Copy linkLink copied to clipboard!
Start a Pipeline.
Example: Start mypipeline
Pipeline
tkn pipeline start mypipeline
$ tkn pipeline start mypipeline
5.3.5. PipelineRun commands Copy linkLink copied to clipboard!
5.3.5.1. pipelinerun Copy linkLink copied to clipboard!
Manage PipelineRuns.
Example: Display help
tkn pipelinerun -h
$ tkn pipelinerun -h
5.3.5.2. pipelinerun cancel Copy linkLink copied to clipboard!
Cancel a PipelineRun.
Example: Cancel the mypipelinerun
PipelineRun from a namespace
tkn pipelinerun cancel mypipelinerun -n myspace
$ tkn pipelinerun cancel mypipelinerun -n myspace
5.3.5.3. pipelinerun delete Copy linkLink copied to clipboard!
Delete a PipelineRun.
Example: Delete PipelineRuns from a namespace
tkn pipelinerun delete mypipelinerun1 mypipelinerun2 -n myspace
$ tkn pipelinerun delete mypipelinerun1 mypipelinerun2 -n myspace
5.3.5.4. pipelinerun describe Copy linkLink copied to clipboard!
Describe a PipelineRun.
Example: Describe the mypipelinerun
PipelineRun in a namespace
tkn pipelinerun describe mypipelinerun -n myspace
$ tkn pipelinerun describe mypipelinerun -n myspace
5.3.5.5. pipelinerun list Copy linkLink copied to clipboard!
List PipelineRuns.
Example: Display a list of PipelineRuns in a namespace
tkn pipelinerun list -n myspace
$ tkn pipelinerun list -n myspace
5.3.5.6. pipelinerun logs Copy linkLink copied to clipboard!
Display the logs of a PipelineRun.
Example: Display the logs of the mypipelinerun
PipelineRun with all tasks and steps in a namespace
tkn pipelinerun logs mypipelinerun -a -n myspace
$ tkn pipelinerun logs mypipelinerun -a -n myspace
5.3.6. Task management commands Copy linkLink copied to clipboard!
5.3.6.1. task Copy linkLink copied to clipboard!
Manage Tasks.
Example: Display help
tkn task -h
$ tkn task -h
5.3.6.2. task create Copy linkLink copied to clipboard!
Create a Task.
Example: Create a Task defined by the mytask.yaml
file in a namespace
tkn task create -f mytask.yaml -n myspace
$ tkn task create -f mytask.yaml -n myspace
5.3.6.3. task delete Copy linkLink copied to clipboard!
Delete a Task.
Example: Delete mytask1
and mytask2
Tasks from a namespace
tkn task delete mytask1 mytask2 -n myspace
$ tkn task delete mytask1 mytask2 -n myspace
5.3.6.4. task describe Copy linkLink copied to clipboard!
Describe a Task.
Example: Describe the mytask
Task in a namespace
tkn task describe mytask -n myspace
$ tkn task describe mytask -n myspace
5.3.6.5. task list Copy linkLink copied to clipboard!
List Tasks.
Example: List all the Tasks in a namespace
tkn task list -n myspace
$ tkn task list -n myspace
5.3.6.6. task logs Copy linkLink copied to clipboard!
Display Task logs.
Example: Display logs for the mytaskrun
TaskRun of the mytask
Task
tkn task logs mytask mytaskrun -n myspace
$ tkn task logs mytask mytaskrun -n myspace
5.3.6.7. task start Copy linkLink copied to clipboard!
Start a Task.
Example: Start the mytask
Task in a namespace
tkn task start mytask -s <ServiceAccountName> -n myspace
$ tkn task start mytask -s <ServiceAccountName> -n myspace
5.3.7. TaskRun commands Copy linkLink copied to clipboard!
5.3.7.1. taskrun Copy linkLink copied to clipboard!
Manage TaskRuns.
Example: Display help
tkn taskrun -h
$ tkn taskrun -h
5.3.7.2. taskrun cancel Copy linkLink copied to clipboard!
Cancel a TaskRun.
Example: Cancel the mytaskrun
TaskRun from a namespace
tkn taskrun cancel mytaskrun -n myspace
$ tkn taskrun cancel mytaskrun -n myspace
5.3.7.3. taskrun delete Copy linkLink copied to clipboard!
Delete a TaskRun.
Example: Delete mytaskrun1
and mytaskrun2
TaskRuns from a namespace
tkn taskrun delete mytaskrun1 mytaskrun2 -n myspace
$ tkn taskrun delete mytaskrun1 mytaskrun2 -n myspace
5.3.7.4. taskrun describe Copy linkLink copied to clipboard!
Describe a TaskRun.
Example: Describe the mytaskrun
TaskRun in a namespace
tkn taskrun describe mytaskrun -n myspace
$ tkn taskrun describe mytaskrun -n myspace
5.3.7.5. taskrun list Copy linkLink copied to clipboard!
List TaskRuns.
Example: List all TaskRuns in a namespace
tkn taskrun list -n myspace
$ tkn taskrun list -n myspace
5.3.7.6. taskrun logs Copy linkLink copied to clipboard!
Display TaskRun logs.
Example: Display live logs for the mytaskrun
TaskRun in a namespace
tkn taskrun logs -f mytaskrun -n myspace
$ tkn taskrun logs -f mytaskrun -n myspace
5.3.8. Condition management commands Copy linkLink copied to clipboard!
5.3.8.1. condition Copy linkLink copied to clipboard!
Manage Conditions.
Example: Display help
tkn condition --help
$ tkn condition --help
5.3.8.2. condition delete Copy linkLink copied to clipboard!
Delete a Condition.
Example: Delete the mycondition1
Condition from a namespace
tkn condition delete mycondition1 -n myspace
$ tkn condition delete mycondition1 -n myspace
5.3.8.3. condition describe Copy linkLink copied to clipboard!
Describe a Condition.
Example: Describe the mycondition1
Condition in a namespace
tkn condition describe mycondition1 -n myspace
$ tkn condition describe mycondition1 -n myspace
5.3.8.4. condition list Copy linkLink copied to clipboard!
List Conditions.
Example: List Conditions in a namespace
tkn condition list -n myspace
$ tkn condition list -n myspace
5.3.9. Pipeline Resource management commands Copy linkLink copied to clipboard!
5.3.9.1. resource Copy linkLink copied to clipboard!
Manage Pipeline Resources.
Example: Display help
tkn resource -h
$ tkn resource -h
5.3.9.2. resource create Copy linkLink copied to clipboard!
Create a Pipeline Resource.
Example: Create Pipeline Resource defined by the myresource.yaml
file in a namespace
tkn resource create -f myresource.yaml -n myspace
$ tkn resource create -f myresource.yaml -n myspace
5.3.9.3. resource delete Copy linkLink copied to clipboard!
Delete a Pipeline Resource.
Example: Delete the myresource
Pipeline Resource from a namespace
tkn resource delete myresource -n myspace
$ tkn resource delete myresource -n myspace
5.3.9.4. resource describe Copy linkLink copied to clipboard!
Describe a Pipeline Resource.
Example: Describe the myresource
Pipeline Resource
tkn resource describe myresource -n myspace
$ tkn resource describe myresource -n myspace
5.3.9.5. resource list Copy linkLink copied to clipboard!
List Pipeline Resources.
Example: List all Pipeline Resources in a namespace
tkn resource list -n myspace
$ tkn resource list -n myspace
5.3.10. ClusterTask management commands Copy linkLink copied to clipboard!
5.3.10.1. clustertask Copy linkLink copied to clipboard!
Manage ClusterTasks.
Example: Display help
tkn clustertask --help
$ tkn clustertask --help
5.3.10.2. clustertask delete Copy linkLink copied to clipboard!
Delete a ClusterTask resource in a cluster.
Example: Delete mytask1
and mytask2
ClusterTasks
tkn clustertask delete mytask1 mytask2
$ tkn clustertask delete mytask1 mytask2
5.3.10.3. clustertask describe Copy linkLink copied to clipboard!
Describe a ClusterTask.
Example: Describe the mytask1
ClusterTask
tkn clustertask describe mytask1
$ tkn clustertask describe mytask1
5.3.10.4. clustertask list Copy linkLink copied to clipboard!
List ClusterTasks.
Example: List ClusterTasks
tkn clustertask list
$ tkn clustertask list
5.3.10.5. clustertask start Copy linkLink copied to clipboard!
Start ClusterTasks.
Example: Start the mytask
ClusterTask
tkn clustertask start mytask
$ tkn clustertask start mytask
5.3.11. Trigger management commands Copy linkLink copied to clipboard!
5.3.11.1. eventlistener Copy linkLink copied to clipboard!
Manage EventListeners.
Example: Display help
tkn eventlistener -h
$ tkn eventlistener -h
5.3.11.2. eventlistener delete Copy linkLink copied to clipboard!
Delete an EventListener.
Example: Delete mylistener1
and mylistener2
EventListeners in a namespace
tkn eventlistener delete mylistener1 mylistener2 -n myspace
$ tkn eventlistener delete mylistener1 mylistener2 -n myspace
5.3.11.3. eventlistener describe Copy linkLink copied to clipboard!
Describe an EventListener.
Example: Describe the mylistener
EventListener in a namespace
tkn eventlistener describe mylistener -n myspace
$ tkn eventlistener describe mylistener -n myspace
5.3.11.4. eventlistener list Copy linkLink copied to clipboard!
List EventListeners.
Example: List all the EventListeners in a namespace
tkn eventlistener list -n myspace
$ tkn eventlistener list -n myspace
5.3.11.5. triggerbinding Copy linkLink copied to clipboard!
Manage TriggerBindings.
Example: Display TriggerBindings help
tkn triggerbinding -h
$ tkn triggerbinding -h
5.3.11.6. triggerbinding delete Copy linkLink copied to clipboard!
Delete a TriggerBinding.
Example: Delete mybinding1
and mybinding2
TriggerBindings in a namespace
tkn triggerbinding delete mybinding1 mybinding2 -n myspace
$ tkn triggerbinding delete mybinding1 mybinding2 -n myspace
5.3.11.7. triggerbinding describe Copy linkLink copied to clipboard!
Describe a TriggerBinding.
Example: Describe the mybinding
TriggerBinding in a namespace
tkn triggerbinding describe mybinding -n myspace
$ tkn triggerbinding describe mybinding -n myspace
5.3.11.8. triggerbinding list Copy linkLink copied to clipboard!
List TriggerBindings.
Example: List all the TriggerBindings in a namespace
tkn triggerbinding list -n myspace
$ tkn triggerbinding list -n myspace
5.3.11.9. triggertemplate Copy linkLink copied to clipboard!
Manage TriggerTemplates.
Example: Display TriggerTemplate help
tkn triggertemplate -h
$ tkn triggertemplate -h
5.3.11.10. triggertemplate delete Copy linkLink copied to clipboard!
Delete a TriggerTemplate.
Example: Delete mytemplate1
and mytemplate2
TriggerTemplates in a namespace
tkn triggertemplate delete mytemplate1 mytemplate2 -n myspace
$ tkn triggertemplate delete mytemplate1 mytemplate2 -n myspace
5.3.11.11. triggertemplate describe Copy linkLink copied to clipboard!
Describe a TriggerTemplate.
Example: Describe the mytemplate
TriggerTemplate in a namespace
tkn triggertemplate describe mytemplate -n myspace
$ tkn triggertemplate describe mytemplate -n myspace
5.3.11.12. triggertemplate list Copy linkLink copied to clipboard!
List TriggerTemplates.
Example: List all the TriggerTemplates in a namespace
tkn triggertemplate list -n myspace
$ tkn triggertemplate list -n myspace
5.3.11.13. clustertriggerbinding Copy linkLink copied to clipboard!
Manage ClusterTriggerBindings.
Example: Display ClusterTriggerBindings help
tkn clustertriggerbinding -h
$ tkn clustertriggerbinding -h
5.3.11.14. clustertriggerbinding delete Copy linkLink copied to clipboard!
Delete a ClusterTriggerBinding.
Example: Delete myclusterbinding1
and myclusterbinding2
ClusterTriggerBindings
tkn clustertriggerbinding delete myclusterbinding1 myclusterbinding2
$ tkn clustertriggerbinding delete myclusterbinding1 myclusterbinding2
5.3.11.15. clustertriggerbinding describe Copy linkLink copied to clipboard!
Describe a ClusterTriggerBinding.
Example: Describe the myclusterbinding
ClusterTriggerBinding
tkn clustertriggerbinding describe myclusterbinding
$ tkn clustertriggerbinding describe myclusterbinding
5.3.11.16. clustertriggerbinding list Copy linkLink copied to clipboard!
List ClusterTriggerBindings.
Example: List all ClusterTriggerBindings
tkn clustertriggerbinding list
$ tkn clustertriggerbinding list
Legal Notice
Copy linkLink copied to clipboard!
Copyright © 2025 Red Hat
OpenShift documentation is licensed under the Apache License 2.0 (https://www.apache.org/licenses/LICENSE-2.0).
Modified versions must remove all Red Hat trademarks.
Portions adapted from https://github.com/kubernetes-incubator/service-catalog/ with modifications by Red Hat.
Red Hat, Red Hat Enterprise Linux, the Red Hat logo, the Shadowman logo, JBoss, OpenShift, Fedora, the Infinity logo, and RHCE are trademarks of Red Hat, Inc., registered in the United States and other countries.
Linux® is the registered trademark of Linus Torvalds in the United States and other countries.
Java® is a registered trademark of Oracle and/or its affiliates.
XFS® is a trademark of Silicon Graphics International Corp. or its subsidiaries in the United States and/or other countries.
MySQL® is a registered trademark of MySQL AB in the United States, the European Union and other countries.
Node.js® is an official trademark of Joyent. Red Hat Software Collections is not formally related to or endorsed by the official Joyent Node.js open source or commercial project.
The OpenStack® Word Mark and OpenStack logo are either registered trademarks/service marks or trademarks/service marks of the OpenStack Foundation, in the United States and other countries and are used with the OpenStack Foundation’s permission. We are not affiliated with, endorsed or sponsored by the OpenStack Foundation, or the OpenStack community.
All other trademarks are the property of their respective owners.