Chapter 2. Installing CodeReady Workspaces on OpenShift v3
This section describes how to obtain the installation files for Red Hat CodeReady Workspaces and how to use them to deploy the product on an instance of OpenShift v3, such as Red Hat OpenShift Container Platform.
Prerequisites
Minimum hardware requirements
Minimum 5 GB RAM to run CodeReady Workspaces. The Red Hat Single Sign-On (Red Hat SSO) authorization server and the PostgreSQL database require extra RAM. CodeReady Workspaces uses RAM in the following distribution:
- The CodeReady Workspaces server: Approximately 750 MB
- Red Hat SSO: Approximately 1 GB
- PostgreSQL: Approximately 515 MB
- Workspaces: 2 GB of RAM per workspace. The total workspace RAM depends on the size of the workspace runtime(s) and the number of concurrent workspace pods.
Software requirements
- CodeReady Workspaces deployment script and configuration file
Container images required for deployment:
Important: The container images are now published in the registry.redhat.io registry and are available only from registry.redhat.io. For details, see Red Hat Container Registry Authentication.
To use this registry, you must be logged in to it.
- To authorize from the local Docker daemon, see Using Authentication.
- To authorize from an OpenShift cluster, see Allowing Pods to Reference Images from Other Secured Registries.
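For reference, a minimal login sketch for both cases follows; the secret name redhat-registry and the default service account used below are examples only, so adapt them to your environment:
# Log the local Docker daemon in to the registry
$ docker login registry.redhat.io
# In an OpenShift project: create a pull secret and link it to the service account that pulls images
$ oc create secret docker-registry redhat-registry \
    --docker-server=registry.redhat.io \
    --docker-username=<registry-username> \
    --docker-password=<registry-password-or-token>
$ oc secrets link default redhat-registry --for=pull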
It is not necessary to download any of the referenced images manually.
- All container images required for deployment are automatically downloaded by the CodeReady Workspaces deployment script.
- Stack images are automatically downloaded by CodeReady Workspaces when new workspaces are created.
- registry.redhat.io/codeready-workspaces/server-rhel8:1.2
- registry.redhat.io/codeready-workspaces/server-operator-rhel8:1.2
- registry.redhat.io/rhscl/postgresql-96-rhel7:1-40
- registry.redhat.io/redhat-sso-7/sso73-openshift:1.0-11
- registry.redhat.io/ubi8-minimal:8.0-127
Container images with preconfigured stacks for creating workspaces:
- registry.redhat.io/codeready-workspaces/stacks-java-rhel8:1.2
- registry.redhat.io/codeready-workspaces/stacks-node-rhel8:1.2
- registry.redhat.io/codeready-workspaces/stacks-php-rhel8:1.2
- registry.redhat.io/codeready-workspaces/stacks-python-rhel8:1.2
- registry.redhat.io/codeready-workspaces/stacks-dotnet-rhel8:1.2
- registry.redhat.io/codeready-workspaces/stacks-golang-rhel8:1.2
- registry.redhat.io/codeready-workspaces/stacks-cpp-rhel8:1.2
- registry.redhat.io/codeready-workspaces/stacks-node:1.2
Other
To be able to download the CodeReady Workspaces deployment script, you must register for the free Red Hat Developer Program, which allows you to agree to the license conditions of the product. For instructions on how to obtain the deployment script, see Section 2.2, “Downloading the CodeReady Workspaces deployment script”.
2.1. Downloading the Red Hat OpenShift Origin Client Tools
This procedure describes steps to obtain and unpack the archive with the Red Hat OpenShift Origin Client Tools.
The CodeReady Workspaces deployment and migration scripts require OpenShift Origin Client Tools 3.11. Later versions may be supported in the future, but because CodeReady Workspaces can be deployed to OpenShift Container Platform 4 and OpenShift Dedicated 4 through the embedded OperatorHub, a deployment script is no longer necessary on those platforms.
Procedure
Change to a temporary directory. Create it if necessary. For example:
$ mkdir ~/tmp
$ cd ~/tmp
- Download the archive with the oc file from: oc.tar.gz. Unpack the downloaded archive. The oc executable file is unpacked in your current directory:
$ tar xf oc.tar.gz && ./oc version
- Add the oc file to your path.
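For example, assuming the archive was unpacked in ~/tmp, one way to put oc on the path is:
$ sudo cp ~/tmp/oc /usr/local/bin/oc
$ oc version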
2.2. Downloading the CodeReady Workspaces deployment script
This procedure describes how to obtain and unpack the archive with the CodeReady Workspaces deployment shell script.
The CodeReady Workspaces deployment script uses the OpenShift Operator to deploy Red Hat Single Sign-On, the PostgreSQL database, and the CodeReady Workspaces server container images on an instance of Red Hat OpenShift Container Platform. The images are available in the Red Hat Container Catalog.
Procedure
Change to a temporary directory. Create it if necessary. For example:
$ mkdir ~/tmp
$ cd ~/tmp
- Download the archive with the deployment script and the custom-resource.yaml file using the browser with which you logged in to the Red Hat Developer Portal: codeready-workspaces-1.2.2.GA-operator-installer.tar.gz. Unpack the downloaded archive and change to the created directory:
$ tar xvf codeready-workspaces-1.2.2.GA-operator-installer.tar.gz \
  && cd codeready-workspaces-operator-installer/
Next steps
Continue by configuring and running the deployment script. See Section 2.3, “Running the CodeReady Workspaces deployment script”.
2.3. Running the CodeReady Workspaces deployment script
The CodeReady Workspaces deployment script uses command-line arguments and the custom-resource.yaml file to populate a set of configuration environment variables for the OpenShift Operator used for the actual deployment.
Prerequisites
- Downloaded and unpacked deployment script and the configuration file. See Section 2.2, “Downloading the CodeReady Workspaces deployment script”.
- A running instance of Red Hat OpenShift Container Platform 3.11 or OpenShift Dedicated 3.11. To install OpenShift Container Platform, see the Getting Started with OpenShift Container Platform guide.
- The OpenShift Origin Client Tools 3.11, oc, is in the path. See Section 2.1, “Downloading the Red Hat OpenShift Origin Client Tools”.
- The user is logged in to the OpenShift instance (using, for example, oc login; see the sketch after this list).
- CodeReady Workspaces is supported for use with Google Chrome 70.0.3538.110 (Official Build) (64-bit).
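A minimal sketch of logging in and confirming the active session (the master URL and username below are placeholders):
$ oc login https://openshift.example.com:8443 -u developer
$ oc whoami     # prints the current user
$ oc project    # prints the current project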
You need cluster-admin rights to successfully deploy CodeReady Workspaces using the deploy script. The following table lists the objects and the required permissions:

| Type of object | Name of the object that the installer creates | Description | Permission required |
| --- | --- | --- | --- |
| CRD | - | Custom Resource Definition - CheCluster | cluster-admin |
| CR | codeready | Custom Resource of the CheCluster type of object | cluster-admin. Alternatively, you can create a clusterrole. |
| ServiceAccount | codeready-operator | Operator uses this service account to reconcile CodeReady Workspaces objects | The edit role in a target namespace. |
| Role | codeready-operator | Scope of permissions for the operator-service account | cluster-admin |
| RoleBinding | codeready-operator | Assignment of a role to the service account | The edit role in a target namespace. |
| Deployment | codeready-operator | Deployment with the operator image in the template specification | The edit role in a target namespace. |
| ClusterRole | codeready-operator | ClusterRole allows you to create, update, and delete oAuthClients | cluster-admin |
| ClusterRoleBinding | ${NAMESPACE}-codeready-operator | ClusterRoleBinding allows you to create, update, and delete oAuthClients | cluster-admin |
| Role | secret-reader | Role allows you to read secrets in the router namespace | cluster-admin |
| RoleBinding | ${NAMESPACE}-codeready-operator | RoleBinding allows you to read secrets in the router namespace | cluster-admin |
By default, the operator-service account gets privileges to list, get, watch, create, update, and delete ingresses, routes, service accounts, roles, rolebindings, PVCs, deployments, configMaps, and secrets. It also has privileges to execute commands in pods (exec), watch events, and read pod logs in a target namespace.
With self-signed certificates support enabled, the operator-service account gets privileges to read secrets in an OpenShift router namespace.
With OpenShift OAuth enabled, the operator-service account gets privileges to get, list, create, update, and delete oAuthClients at a cluster scope.
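If the user who runs the deployment script does not have these rights yet, a cluster administrator can grant them; for example (the username developer is a placeholder):
$ oc adm policy add-cluster-role-to-user cluster-admin developer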
2.3.1. Deploying CodeReady Workspaces with default settings
Run the following command:
$ ./deploy.sh --deploy
Note: Run the ./deploy.sh --help command to get a list of all available arguments. For a description of all the options, see Section 2.6, “CodeReady Workspaces deployment script parameters”.
The following messages indicate that CodeReady Workspaces is being installed:
[INFO]: Welcome to CodeReady Workspaces Installer
[INFO]: Found oc client in PATH
[INFO]: Checking if you are currently logged in...
[INFO]: Active session found. Your current context is: workspaces/192-168-42-231:8443/admin
[INFO]: Creating operator service account
[INFO]: Service account already exists
[INFO]: Create service account roles
[INFO]: Role Binding already exists
[INFO]: Self-signed certificate support enabled
[INFO]: Adding extra privileges for an operator service account
[INFO]: Creating secret-reader role and rolebinding in namespace default
[INFO]: Role secret-reader already exists
[INFO]: Creating role binding to let operator get secrets in namespace default
[INFO]: Role binding codeready-operator already exists in namespace default
[INFO]: Creating custom resource definition
[INFO]: Creating Operator Deployment
[INFO]: Existing operator deployment found. It will be deleted
[INFO]: Waiting for the Operator deployment to be scaled to 1. Timeout 5 minutes
[INFO]: Codeready Workspaces operator successfully deployed
[INFO]: Creating Custom resource. This will initiate CodeReady Workspaces deployment
[INFO]: CodeReady is going to be deployed with the following settings:
[INFO]: TLS support: false
[INFO]: OpenShift oAuth: false
[INFO]: Self-signed certs: true
[INFO]: Waiting for CodeReady Workspaces to boot. Timeout: 1200 seconds
[INFO]: CodeReady Workspaces successfully deployed and is available at http://codeready-workspaces.192.168.42.231.nip.io
The CodeReady Workspaces successfully deployed and is available at <URL> message confirms that the deployment is successful.
- Open the OpenShift web console.
- In the My Projects pane, click workspaces.
Click Applications > Pods. The pods are shown running.
Figure 2.1. Pods for codeready shown running
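You can also verify the same pods from the command line. A minimal sketch, assuming CodeReady Workspaces was deployed into the workspaces project:
$ oc get pods -n workspaces
$ oc get routes -n workspaces   # lists the codeready and keycloak routes and their URLs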
2.3.2. Deploying CodeReady Workspaces with a self-signed certificate and OpenShift OAuth
To deploy CodeReady Workspaces with a self-signed certificate and OpenShift OAuth, run the following command:
$ ./deploy.sh --deploy --oauth
If you use the TLS mode with a self-signed certificate, ensure that your browser trusts the certificate. If it does not trust the certificate, the Authorization token is missed error is displayed on the login page and the running workspace may not work as intended.
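One way to obtain the certificate for importing it into the browser trust store is to read it from the router secret. The following is only a sketch; the secret name router-certs and the namespace default are assumptions based on a default OpenShift Container Platform 3.11 router installation:
# Export the router certificate (secret name and namespace are assumptions)
$ oc get secret router-certs -n default -o jsonpath='{.data.tls\.crt}' | base64 -d > router.crt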
2.3.3. Deploying CodeReady Workspaces with a public certificate
To deploy CodeReady Workspaces to a cluster configured with public certificates, run the following command:
$ ./deploy.sh --deploy --public-certs
2.3.4. Deploying CodeReady Workspaces with external Red Hat Single Sign-On
To deploy CodeReady Workspaces with an external Red Hat Single Sign-On (Red Hat SSO) instance, take the following steps:
Update the following values in the custom-resource.yaml file:
auth:
  externalIdentityProvider: 'true'                     1
  identityProviderURL: 'https://my-red-hat-sso.com'    2
  identityProviderRealm: 'myrealm'                     3
  identityProviderClientId: 'myClient'                 4
1. Instructs the operator whether to deploy a Red Hat SSO instance. When set to true, the operator skips deploying an instance and passes the provided connection details to the CodeReady Workspaces server.
2. Retrieved from the respective route or ingress unless explicitly specified in the CR (when the externalIdentityProvider variable is true).
3. Name of a Red Hat SSO realm. This realm is created when the externalIdentityProvider variable is false. Otherwise, it is passed to the CodeReady Workspaces server.
4. The ID of a Red Hat SSO client. This client is created when the externalIdentityProvider variable is false. Otherwise, it is passed to the CodeReady Workspaces server.
Run the deploy script:
$ ./deploy.sh --deploy
2.3.5. Deploying CodeReady Workspaces with external Red Hat SSO and PostgreSQL
The deploy script supports the following combinations of external Red Hat SSO and PostgreSQL:
- PostgreSQL and Red Hat SSO
- Red Hat SSO only
The deploy script does not currently support the combination of an external database with bundled Red Hat SSO. Provisioning of the database and of the Red Hat SSO realm and client happens only with bundled resources. If you are connecting your own database or Red Hat SSO, pre-create these resources yourself, for example as in the sketch below.
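A minimal sketch of pre-creating the PostgreSQL user and database used in the example below (the host postgres.example.com is a placeholder; the names myuser, mypass, and mydb match the example custom-resource.yaml values):
# Run against your own PostgreSQL server as a database administrator
$ psql -h postgres.example.com -U postgres -c "CREATE USER myuser WITH PASSWORD 'mypass';"
$ psql -h postgres.example.com -U postgres -c "CREATE DATABASE mydb OWNER myuser;"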
To deploy with the external PostgreSQL database and Red Hat SSO, take the following steps:
Update the following PostgreSQL database-related values in the custom-resource.yaml file:
database:
  externalDb: 'true'                      1
  chePostgresHostname: 'http://postgres'  2
  chePostgresPort: '5432'                 3
  chePostgresUser: 'myuser'               4
  chePostgresPassword: 'mypass'           5
  chePostgresDb: 'mydb'                   6
1. When set to true, the operator skips deploying PostgreSQL and passes the connection details of the existing database to the CodeReady Workspaces server. Otherwise, a PostgreSQL deployment is created.
2. The PostgreSQL database hostname that the CodeReady Workspaces server connects to. Defaults to postgres.
3. The PostgreSQL database port that the CodeReady Workspaces server connects to. Defaults to 5432.
4. The PostgreSQL user that the CodeReady Workspaces server uses when making a database connection. Defaults to pgche.
5. The password of a PostgreSQL user. Auto-generated when left blank.
6. The PostgreSQL database name that the CodeReady Workspaces server connects to. Defaults to dbche.
Update the following Red Hat SSO-related values in the custom-resource.yaml file:
auth:
  externalIdentityProvider: 'true'                     1
  identityProviderURL: 'https://my-red-hat-sso.com'    2
  identityProviderRealm: 'myrealm'                     3
  identityProviderClientId: 'myClient'                 4
1. Instructs the operator whether to deploy a Red Hat SSO instance. When set to true, the operator skips deploying an instance and passes the provided connection details to the CodeReady Workspaces server.
2. Retrieved from the respective route or ingress unless explicitly specified in the CR (when externalIdentityProvider is true).
3. Name of a Red Hat SSO realm. This realm is created when externalIdentityProvider is false. Otherwise, it is passed to the CodeReady Workspaces server.
4. ID of a Red Hat SSO client. This client is created when externalIdentityProvider is false. Otherwise, it is passed to the CodeReady Workspaces server.
Run the deploy script:
$ ./deploy.sh --deploy
Additional resources
- See Section 2.6, “CodeReady Workspaces deployment script parameters” for definitions of the deployment script parameters.
2.4. Viewing CodeReady Workspaces installation logs
You can view the installation logs in the terminal or from the OpenShift console.
2.4.1. Viewing CodeReady Workspaces installation logs in the terminal
To view the installation logs in the terminal, take the following steps:
To obtain the names of the pods, you must switch to the project where CodeReady Workspaces is installed:
$ oc get pods -n=<OpenShift-project-name>
Following is an example output.
NAME                                  READY   STATUS    RESTARTS   AGE
codeready-76d985c5d8-4sqmm            1/1     Running   2          1d
codeready-operator-54b58f8ff7-fc88p   1/1     Running   3          1d
keycloak-7668cdb5f5-ss29s             1/1     Running   2          1d
postgres-7d94b544dc-nmhwp             1/1     Running   1          1d
To view the logs for the pod, run:
$ oc logs <pod-name>
The following is an example output:
Deployment of web application archive [/home/jboss/codeready/tomcat/webapps/dashboard.war] has finished in [286] ms
2019-05-13 12:47:49,201[ost-startStop-1]  [INFO ] [o.a.c.startup.HostConfig 957]   - Deploying web application archive [/home/jboss/codeready/tomcat/webapps/swagger.war]
2019-05-13 12:47:49,318[ost-startStop-1]  [INFO ] [o.a.c.startup.HostConfig 1020]  - Deployment of web application archive [/home/jboss/codeready/tomcat/webapps/swagger.war] has finished in [117] ms
2019-05-13 12:47:49,320[ost-startStop-1]  [INFO ] [o.a.c.startup.HostConfig 957]   - Deploying web application archive [/home/jboss/codeready/tomcat/webapps/workspace-loader.war]
2019-05-13 12:47:49,397[ost-startStop-1]  [INFO ] [o.a.c.startup.HostConfig 1020]  - Deployment of web application archive [/home/jboss/codeready/tomcat/webapps/workspace-loader.war] has finished in [77] ms
2019-05-13 12:47:49,403[main]             [INFO ] [o.a.c.http11.Http11NioProtocol 588]  - Starting ProtocolHandler ["http-nio-8080"]
2019-05-13 12:47:49,419[main]             [INFO ] [o.a.catalina.startup.Catalina 700]   - Server startup in 31229 ms
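To keep following the server log as the deployment progresses, add the -f flag; for example, using the pod name from the listing above:
$ oc logs -f codeready-76d985c5d8-4sqmm -n=<OpenShift-project-name>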
2.4.2. Viewing CodeReady Workspaces installation logs in the OpenShift console
To view the installation logs in the OpenShift console, take the following steps:
- Navigate to the OpenShift web console.
- In the My Projects pane, click workspaces.
- Click Applications > Pods. Click the name of the pod for which you want to view the logs.
Click Logs and click Follow.
time="2019-05-15T08:50:04Z" level=info msg="Deployment 'codeready' successfully scaled to 1" time="2019-05-15T08:50:04Z" level=info msg="Updating codeready CR with Keycloak URL status: http://keycloak-workspaces.192.168.42.231.nip.io " time="2019-05-15T08:50:05Z" level=info msg="Custom resource codeready updated" time="2019-05-15T08:50:05Z" level=info msg="Updating codeready CR with status: CodeReady Workspaces server: Available" time="2019-05-15T08:50:05Z" level=info msg="Custom resource codeready updated" time="2019-05-15T08:50:05Z" level=info msg="Updating codeready CR with CodeReady Workspaces server URL: http://codeready-workspaces.192.168.42.231.nip.io " time="2019-05-15T08:50:05Z" level=info msg="Custom resource codeready updated" time="2019-05-15T08:50:05Z" level=info msg="CodeReady Workspaces is now available at: http://codeready-workspaces.192.168.42.231.nip.io "
2.5. Configuring CodeReady Workspaces to work behind a proxy server
This procedure describes how to configure CodeReady Workspaces for use in a deployment behind a proxy server. To access external resources (for example, to download Maven artifacts to build Java projects), change the workspace configuration.
Prerequisites
- OpenShift with a logged-in oc client.
- Deployment script. See Section 2.2, “Downloading the CodeReady Workspaces deployment script”.
Procedure
Update the following values in the custom-resource.yaml file:
apiVersion: org.eclipse.che/v1
kind: CheCluster
metadata:
  name: codeready
spec:
  server:
    cheFlavor: codeready
    cheImage: ${SERVER_IMAGE_NAME}
    cheImageTag: ${SERVER_IMAGE_TAG}
    tlsSupport: ${TLS_SUPPORT}
    selfSignedCert: ${SELF_SIGNED_CERT}
    proxyURL: 'http://172.19.20.128'                               1
    proxyPort: '3128'                                              2
    nonProxyHosts: 'localhost|172.30.0.1|*.172.19.20.240.nip.io'   3
    proxyUser: ''
    proxyPassword: ''
1. Replace http://172.19.20.128 with the protocol and hostname of your proxy server.
2. Replace 3128 with the port of your proxy server.
3. Replace 172.30.0.1 with the value of the $KUBERNETES_SERVICE_HOST environment variable (run echo $KUBERNETES_SERVICE_HOST in any container running in the cluster to obtain this value; a sketch follows the Important note below). You may also have to add a custom nonProxyHosts value as required by your network. In this example, this value is *.172.19.20.240.nip.io (the routing suffix of the OpenShift Container Platform installation).
Important:
- Use correct indentation as shown above.
- Use the bar sign (|) as the delimiter for multiple nonProxyHosts values.
- You may need to list the same wildcard and abbreviated nonProxyHosts values more than once. For example:
  nonProxyHosts: 'localhost | 127.0.0.1 | *.nip.io | .nip.io | *.example.com | .example.com'
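A minimal way to read the $KUBERNETES_SERVICE_HOST value from a running container (the pod name is only an example; any running pod in the cluster works):
$ oc exec codeready-operator-54b58f8ff7-fc88p -- sh -c 'echo $KUBERNETES_SERVICE_HOST'
172.30.0.1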
Run the following command:
$ ./deploy.sh --deploy
Additional resources
- See Section 2.6, “CodeReady Workspaces deployment script parameters” for definitions of the deployment script parameters.
2.6. CodeReady Workspaces deployment script parameters
The custom-resource.yaml file contains default values for the installation parameters. Parameters that take environment variables as values can be overridden from the command line. Not all installation parameters are available as flags.
Before running the deployment script in a fast mode, review the custom-resource.yaml file. Run the ./deploy.sh --help command to get a list of all available arguments.
The following is an annotated example of the custom-resource.yaml file with all available parameters:
Server settings:
server:
  cheFlavor: 'codeready'                     1
  cheImage: '${SERVER_IMAGE_NAME}'           2
  cheImageTag: '${SERVER_IMAGE_TAG}'         3
  tlsSupport: '${{TLS_SUPPORT}}'             4
  selfSignedCert: '${{SELF_SIGNED_CERT}}'    5
  proxyURL: ''                               6
  proxyPort: ''                              7
  nonProxyHosts: ''                          8
  proxyUser: ''                              9
  proxyPassword: ''                          10
1. Defaults to che. When set to codeready, CodeReady Workspaces is deployed. The difference is in the images, labels, and exec commands.
2. The server image used in the Che deployment.
3. The tag of the image used in the Che deployment.
4. TLS mode for Che. Ensure that you either have a public certificate or set the selfSignedCert environment variable to true. If you use the TLS mode with a self-signed certificate, ensure that your browser trusts the certificate. If it does not trust the certificate, the Authorization token is missed error is displayed on the login page and the running workspace may not work as intended.
5. When set to true, the operator attempts to get a secret in the OpenShift router namespace to add it to the Java trust store of the CodeReady Workspaces server. Requires cluster-administrator privileges for the operator service account.
6. The protocol and hostname of a proxy server. Automatically added as the JAVA_OPTS variable and the http(s)_proxy variables to the CodeReady Workspaces server and workspace containers.
7. The port of a proxy server.
8. A list of non-proxy hosts. Use | as a delimiter. Example: localhost|my.host.com|123.42.12.32.
9. The username for a proxy server.
10. The password for a proxy user.
Storage settings:
storage:
  pvcStrategy: 'common'    1
  pvcClaimSize: '1Gi'      2
1. The persistent volume claim strategy for the CodeReady Workspaces server. Can be common (all workspace PVCs in one volume), per-workspace (one PVC per workspace for all of its declared volumes), or unique (one PVC per declared volume). Defaults to common.
2. The size of a persistent volume claim for workspaces. Defaults to 1Gi.
Database settings:
database:
  externalDb: 'false'        1
  chePostgresHostName: ''    2
  chePostgresPort: ''        3
  chePostgresUser: ''        4
  chePostgresPassword: ''    5
  chePostgresDb: ''          6
1. When set to true, the operator skips deploying PostgreSQL and passes the connection details of the existing database to the CodeReady Workspaces server. Otherwise, a PostgreSQL deployment is created.
2. The PostgreSQL database hostname that the CodeReady Workspaces server connects to. Defaults to postgres.
3. The PostgreSQL database port that the CodeReady Workspaces server connects to. Defaults to 5432.
4. The PostgreSQL user that the CodeReady Workspaces server uses when making a database connection. Defaults to pgche.
5. The password of a PostgreSQL user. Auto-generated when left blank.
6. The PostgreSQL database name that the CodeReady Workspaces server connects to. Defaults to dbche.
Authentication settings:
auth:
  openShiftoAuth: '${{ENABLE_OPENSHIFT_OAUTH}}'    1
  externalIdentityProvider: 'false'                2
  identityProviderAdminUserName: 'admin'           3
  identityProviderPassword: 'admin'                4
  identityProviderURL: ''                          5
  identityProviderRealm: ''                        6
  identityProviderClientId: ''                     7
1. Instructs the operator to enable the OpenShift v3 identity provider in Red Hat SSO, create the respective oAuthClient, and configure the Che configMap accordingly.
2. Instructs the operator whether to deploy a Red Hat SSO instance. When set to true, the operator skips deploying an instance and passes the provided connection details to the CodeReady Workspaces server.
3. The desired username of the Red Hat SSO administrator (applicable only when the externalIdentityProvider variable is false).
4. The desired password of the Red Hat SSO administrator (applicable only when the externalIdentityProvider variable is false).
5. Retrieved from the respective route or ingress unless explicitly specified in the CR (when the externalIdentityProvider variable is true).
6. The name of a Red Hat SSO realm. This realm is created when the externalIdentityProvider variable is false. Otherwise, it is passed to the CodeReady Workspaces server.
7. The ID of a Red Hat SSO client. This client is created when the externalIdentityProvider variable is false. Otherwise, it is passed to the CodeReady Workspaces server.