Chapter 3. Installing CodeReady Workspaces
This section contains instructions to install Red Hat CodeReady Workspaces. The installation method depends on the target platform and the environment restrictions.
This section describes how to install CodeReady Workspaces using the CodeReady Workspaces Operator available in the OpenShift 4 web console.
Operators are a method of packaging, deploying, and managing an OpenShift application. They also provide the following:
- Repeatability of installation and upgrade.
- Constant health checks of every system component.
- Over-the-air (OTA) updates for OpenShift components and independent software vendor (ISV) content.
- A place to encapsulate knowledge from field engineers and spread it to all users.
Prerequisites
- An administrator account on a running instance of OpenShift 4.
The Red Hat CodeReady Workspaces Operator provides all the resources for running CodeReady Workspaces, such as PostgreSQL, RH-SSO, image registries, and the CodeReady Workspaces server, and it also configures all these services.
Prerequisites
- Access to the OpenShift web console on the cluster.
Procedure
- In the left panel, navigate to the Operators → OperatorHub page.
- In the Filter by keyword field, enter Red Hat CodeReady Workspaces.
- Click the Red Hat CodeReady Workspaces tile.
- In the Red Hat CodeReady Workspaces pop-up window, click the Install button.
- On the Install Operator page, click the Install button.
Verification steps
- To verify that the Red Hat CodeReady Workspaces Operator has installed correctly, in the left panel, navigate to the Operators → Installed Operators page.
- On the Installed Operators page, click the Red Hat CodeReady Workspaces name and navigate to the Details tab.
- In the ClusterServiceVersion details section, wait for the following messages:
  - Status: Succeeded
  - Status reason: install strategy completed with no errors
- Navigate to the Events tab and wait for the following message: install strategy completed with no errors.
Follow this procedure to install Red Hat CodeReady Workspaces with the default configuration. To modify the configuration, see Chapter 2, Configuring the CodeReady Workspaces installation.
Procedure
- Using the left panel, navigate to the Operators → Installed Operators page.
- On the Installed Operators page, click the Red Hat CodeReady Workspaces name.
- On the Operator details page, in the Details tab, click the Create instance link in the Provided APIs section. This opens the Create CheCluster page, which contains the configuration needed to create a CodeReady Workspaces instance, stored in the CheCluster Custom Resource.
- Create the codeready-workspaces cluster using the Create button at the end of the page, keeping the default values.
- On the Operator details page, in the Red Hat CodeReady Workspaces Cluster tab, click the codeready-workspaces link.
- Navigate to the codeready-workspaces instance using the link displayed under the Red Hat CodeReady Workspaces URL output.

Note: The installation might take more than 5 minutes. The URL appears when the Red Hat CodeReady Workspaces installation finishes.
Verification
- To verify the CodeReady Workspaces instance has installed correctly, navigate to the CodeReady Workspaces Cluster tab of the Operator details page. The CheClusters page displays the list of CodeReady Workspaces instances and their status.
- Click the codeready-workspaces CheCluster and navigate to the Details tab. Check the content of the following fields:
  - The Message field contains error messages. The expected content is None.
  - The Red Hat CodeReady Workspaces URL field contains the URL of the Red Hat CodeReady Workspaces instance. The URL appears when the deployment finishes successfully.
- Navigate to the Resources tab. View the list of resources assigned to the CodeReady Workspaces deployment and their status.
Additional resources
- https://access.redhat.com/documentation/en-us/red_hat_codeready_workspaces/2.14/html-single/end-user_guide/index#navigating-che-using-the-dashboard.adoc.
- https://access.redhat.com/documentation/en-us/red_hat_codeready_workspaces/2.14/html-single/administration_guide/index#viewing-operator-events.adoc.
This section describes how to install CodeReady Workspaces on OpenShift 4 with the crwctl CLI management tool.
Prerequisites
- An OpenShift cluster with an administrator account.
- oc is available. See Getting started with the OpenShift CLI. The oc version must match the OpenShift cluster version.
- You have logged in to OpenShift. See Logging in to the CLI.
- crwctl is available. See Section 3.3.1, “Installing the crwctl CLI management tool”.
Procedure
- Run the server:deploy command to create the CodeReady Workspaces instance:

  $ crwctl server:deploy -n openshift-workspaces
Verification steps
- The output of the server:deploy command ends with:

  Command server:deploy has completed successfully.

- Navigate to the CodeReady Workspaces cluster instance: https://codeready-<openshift_deployment_name>.<domain_name>.
3.3.1. Installing the crwctl CLI management tool
This section describes how to install crwctl, the CodeReady Workspaces CLI management tool.
Procedure
- Navigate to https://developers.redhat.com/products/codeready-workspaces/download.
- Download the CodeReady Workspaces CLI management tool archive for version 2.14.
- Extract the archive to a folder, such as $HOME/crwctl or /opt/crwctl.
- Run the crwctl executable from the extracted folder. In this example: $HOME/crwctl/bin/crwctl version.
- Optionally, add the bin folder to your $PATH, for example, PATH=$PATH:$HOME/crwctl/bin, to enable running crwctl without the full path specification.
Verification step
Running crwctl version displays the current version of the tool.
This section describes how to install CodeReady Workspaces on OpenShift 3 with the crwctl CLI management tool. The installation method uses the Operator and enables TLS (HTTPS).
Operators are a method of packaging, deploying, and managing an OpenShift application. They also provide the following:
- Repeatability of installation and upgrade.
- Constant health checks of every system component.
- Over-the-air (OTA) updates for OpenShift components and independent software vendor (ISV) content.
- A place to encapsulate knowledge from field engineers and spread it to all users.
This approach is only supported for use with OpenShift Container Platform and OpenShift Dedicated version 3.11, but it also works for newer versions of OpenShift Container Platform and OpenShift Dedicated, and it serves as a backup installation method for situations where the installation method using OperatorHub is not available.
Prerequisites
- Administrator rights on a running instance of OpenShift 3.11.
- An installation of the oc OpenShift 3.11 CLI management tool. See Installing the OpenShift 3.11 CLI.
- An installation of the crwctl management tool. See Section 3.3.1, “Installing the crwctl CLI management tool”.
- To apply settings that the main crwctl command-line parameters cannot set, prepare a configuration file operator-cr-patch.yaml that will override the default values in the CheCluster Custom Resource used by the Operator. See Chapter 2, Configuring the CodeReady Workspaces installation.
- Use the openshift-workspaces namespace as the default installation project.
- Configure OpenShift to pull images from registry.redhat.io. See Red Hat Container Registry Authentication.
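As an illustration, a minimal operator-cr-patch.yaml might override a single default in the CheCluster Custom Resource. This is only a sketch: tlsSupport is one spec.server field of the CRW 2.x CheCluster, and the exact fields you patch depend on your environment.

```yaml
# Hypothetical operator-cr-patch.yaml: overrides defaults in the CheCluster
# Custom Resource used by the Operator. The field shown (spec.server.tlsSupport)
# is part of the CRW 2.x CheCluster spec; the value is illustrative.
spec:
  server:
    tlsSupport: true
```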
Procedure
- Log in to OpenShift. See Basic Setup and Login.

  $ oc login

- Run the following command to verify that the version of the oc OpenShift CLI management tool is 3.11:

  $ oc version
  oc v3.11.0+0cbc58b

- Run the following command to create the CodeReady Workspaces instance in the default project called openshift-workspaces:

  $ crwctl server:deploy -p openshift
Verification steps
- The output of the previous command ends with:

  Command server:deploy has completed successfully.

- Navigate to the CodeReady Workspaces cluster instance: https://codeready-<openshift_deployment_name>.<domain_name>.
By default, Red Hat CodeReady Workspaces uses various external resources, mainly container images available in public registries.
To deploy CodeReady Workspaces in an environment where these external resources are not available (for example, on a cluster that is not exposed to the public Internet):
- Identify the image registry used by the OpenShift cluster, and ensure you can push to it.
- Push all the images needed for running CodeReady Workspaces to this registry.
- Configure CodeReady Workspaces to use the images that have been pushed to the registry.
- Proceed to the CodeReady Workspaces installation.
The procedure for installing CodeReady Workspaces in restricted environments differs depending on the installation method you use:
Notes on network connectivity in restricted environments
Restricted network environments range from a private subnet in a cloud provider to a separate network owned by a company, disconnected from the public Internet. Regardless of the network configuration, CodeReady Workspaces works provided that the Routes that are created for CodeReady Workspaces components (codeready-workspaces-server, identity provider, devfile and plugin registries) are accessible from inside the OpenShift cluster.
Take into account the network topology of the environment to determine how best to accomplish this. For example, on a network owned by a company or an organization, the network administrators must ensure that traffic bound from the cluster can be routed to Route hostnames. In other cases, for example, on AWS, create a proxy configuration allowing the traffic to leave the node to reach an external-facing Load Balancer.
When the restricted network involves a proxy, follow the instructions provided in Section 3.4.3, “Preparing CodeReady Workspaces Custom Resource for installing behind a proxy”.
Prerequisites
- A running OpenShift cluster. See the OpenShift Container Platform 4.3 documentation for instructions on how to install an OpenShift cluster on a restricted network.
- Access to the mirror registry used to install the disconnected OpenShift cluster in the restricted network. See the related OpenShift Container Platform 4.3 documentation about creating a mirror registry for installation in a restricted network.
On disconnected OpenShift 4 clusters running on restricted networks, an Operator can be successfully installed from OperatorHub only if it meets the additional requirements defined in Enabling your Operator for restricted network environments.
The CodeReady Workspaces operator meets these requirements and is therefore compatible with the official documentation about OLM on a restricted network.
Procedure
To install CodeReady Workspaces from OperatorHub:
- Build a redhat-operators catalog image. See Building an Operator catalog image.
- Configure OperatorHub to use this catalog image for Operator installations. See Configuring OperatorHub for restricted networks.
- Proceed to the CodeReady Workspaces installation as usual, as described in Section 3.1, “Installing CodeReady Workspaces on OpenShift 4 using OperatorHub”.
Use the CodeReady Workspaces CLI management tool to install CodeReady Workspaces on restricted networks if installation through OperatorHub is not available. This method is supported for OpenShift Container Platform 3.11.
Prerequisites
- A running OpenShift cluster. See the OpenShift Container Platform 3.11 documentation for instructions on how to install an OpenShift cluster.
3.4.2.1. Preparing a private registry
Prerequisites
- The oc tool is available.
- The skopeo tool, version 0.1.40 or later, is available.
- The podman tool is available.
- An image registry accessible from the OpenShift cluster and supporting the format of the V2 image manifest, schema version 2. Ensure you can push to it from a location that has, at least temporarily, access to the internet.
Table 3.1. Placeholders used in the following examples

| Placeholder | Description |
|---|---|
| <source-image> | Full coordinates of the source image, including registry, organization, and digest. |
| <target-registry> | Host name and port of the target container-image registry. |
| <target-organization> | Organization in the target container-image registry. |
| <target-image> | Image name and digest in the target container-image registry. |
| <user> | User name in the target container-image registry. |
| <password> | User password in the target container-image registry. |
Procedure
- Log in to the internal image registry:

  $ podman login --username <user> --password <password> <target-registry>

  Note: If you encounter an error like x509: certificate signed by unknown authority when attempting to push to the internal registry, try one of these workarounds:

  - Add the OpenShift cluster’s certificate to /etc/containers/certs.d/<target-registry>.
  - Add the registry as an insecure registry by adding the following lines to the Podman configuration file located at /etc/containers/registries.conf:

    [registries.insecure]
    registries = ['<target-registry>']
- Copy images without changing their digest. Repeat this step for every image in the following table:

  $ skopeo copy --all docker://<source-image> docker://<target-registry>/<target-organization>/<target-image>

  Table 3.2. Understanding the usage of the container images from the prefix or keyword they include in their name

  | Usage | Prefix or keyword |
  |---|---|
  | Essential | not stacks-, plugin-, or -openj9- |
  | Workspaces | stacks-, plugin- |
  | IBM Z and IBM Power Systems | -openj9- |

  Note: Images suffixed with openj9 are the Eclipse OpenJ9 image equivalents of the OpenJDK images used on x86_64. IBM Power Systems and IBM Z use Eclipse OpenJ9 from IBM Semeru for better performance on those systems. See IBM Semeru Runtimes.

  Table 3.3. Images to copy in the private registry

  | <source-image> | <target-image> |
  |---|---|
  | registry.redhat.io/codeready-workspaces/backup-rhel8@sha256:ea8b95650e7597bb406d0608835a4adb7464353cdd02e24a974f9008e842f154 | backup-rhel8@sha256:ea8b95650e7597bb406d0608835a4adb7464353cdd02e24a974f9008e842f154 |
  | registry.redhat.io/codeready-workspaces/configbump-rhel8@sha256:6f920f581cd54575ae032a95a5b7c06a280a44cfb698659b8e5c89adcf60ff6e | configbump-rhel8@sha256:6f920f581cd54575ae032a95a5b7c06a280a44cfb698659b8e5c89adcf60ff6e |
  | registry.redhat.io/codeready-workspaces/crw-2-rhel8-operator@sha256:d9484d6981f247aadd6d248b509d1016d590a3e63cc70db05c80688d40fb0d00 | crw-2-rhel8-operator@sha256:d9484d6981f247aadd6d248b509d1016d590a3e63cc70db05c80688d40fb0d00 |
  | registry.redhat.io/codeready-workspaces/dashboard-rhel8@sha256:e28e5d2e903d1bf43245b73ee4f430fa4ba870ed09749ef5490e3c38da83271e | dashboard-rhel8@sha256:e28e5d2e903d1bf43245b73ee4f430fa4ba870ed09749ef5490e3c38da83271e |
  | registry.redhat.io/codeready-workspaces/devfileregistry-rhel8@sha256:a837e71e12904d5016d1abb28b3adae26e06b72caa25b7919a41e1c2a00e5c3a | devfileregistry-rhel8@sha256:a837e71e12904d5016d1abb28b3adae26e06b72caa25b7919a41e1c2a00e5c3a |
  | registry.redhat.io/codeready-workspaces/idea-rhel8@sha256:073d1a803faac261bae15642e66213a54cf2cb96429254c1138b0eda3f182105 | idea-rhel8@sha256:073d1a803faac261bae15642e66213a54cf2cb96429254c1138b0eda3f182105 |
  | registry.redhat.io/codeready-workspaces/jwtproxy-rhel8@sha256:0ecfe3a467683d0a28f7b0736e884a3710438798ded46470451de63ec209380f | jwtproxy-rhel8@sha256:0ecfe3a467683d0a28f7b0736e884a3710438798ded46470451de63ec209380f |
  | registry.redhat.io/codeready-workspaces/machineexec-rhel8@sha256:1e25377fe0538ef380030a898fcfcff9493ff0bdbaa4db77d648cdcb0036816b | machineexec-rhel8@sha256:1e25377fe0538ef380030a898fcfcff9493ff0bdbaa4db77d648cdcb0036816b |
  | registry.redhat.io/codeready-workspaces/plugin-java11-openj9-rhel8@sha256:fc5e110243a8e30d23705897a1766de20ec637db4442d419ba05ace3b874c27f | plugin-java11-openj9-rhel8@sha256:fc5e110243a8e30d23705897a1766de20ec637db4442d419ba05ace3b874c27f |
  | registry.redhat.io/codeready-workspaces/plugin-java11-rhel8@sha256:2036cbb70aae5f5d507657bd4b820e340ee0bacf3d4b520d80dbd21aad85e13a | plugin-java11-rhel8@sha256:2036cbb70aae5f5d507657bd4b820e340ee0bacf3d4b520d80dbd21aad85e13a |
  | registry.redhat.io/codeready-workspaces/plugin-java8-openj9-rhel8@sha256:27fe438df6cfccdfb5d1e927cfa2f360b3bed3fbc409e923e68714a1ef586461 | plugin-java8-openj9-rhel8@sha256:27fe438df6cfccdfb5d1e927cfa2f360b3bed3fbc409e923e68714a1ef586461 |
  | registry.redhat.io/codeready-workspaces/plugin-java8-rhel8@sha256:f0ecc1812888611407c23ede1d3952dfb7b9bd597c336f22995cc4d8d9c23edd | plugin-java8-rhel8@sha256:f0ecc1812888611407c23ede1d3952dfb7b9bd597c336f22995cc4d8d9c23edd |
  | registry.redhat.io/codeready-workspaces/plugin-kubernetes-rhel8@sha256:5f40400fb032b419e90bb334c8748470eb50e9dc4662b487364e494ccf8a3f05 | plugin-kubernetes-rhel8@sha256:5f40400fb032b419e90bb334c8748470eb50e9dc4662b487364e494ccf8a3f05 |
  | registry.redhat.io/codeready-workspaces/plugin-openshift-rhel8@sha256:c4be840840349bb647e6ace19b519b8b3e9676da42bb094512be1fafd411ae37 | plugin-openshift-rhel8@sha256:c4be840840349bb647e6ace19b519b8b3e9676da42bb094512be1fafd411ae37 |
  | registry.redhat.io/codeready-workspaces/pluginbroker-artifacts-rhel8@sha256:bde2f4c7c21d7cd7d826d4f4bbd2ee9f31b2119e2d2aa10253592099598cf5ba | pluginbroker-artifacts-rhel8@sha256:bde2f4c7c21d7cd7d826d4f4bbd2ee9f31b2119e2d2aa10253592099598cf5ba |
  | registry.redhat.io/codeready-workspaces/pluginbroker-metadata-rhel8@sha256:457dd2db3d72cc1d823e1219d657ae32e3a9da26f7dd420e0185d1cbe872a792 | pluginbroker-metadata-rhel8@sha256:457dd2db3d72cc1d823e1219d657ae32e3a9da26f7dd420e0185d1cbe872a792 |
  | registry.redhat.io/codeready-workspaces/pluginregistry-rhel8@sha256:650a715a08927b11d78d8f520d0d9b623a3f9193eb98e1aed6eeebcaf4517b15 | pluginregistry-rhel8@sha256:650a715a08927b11d78d8f520d0d9b623a3f9193eb98e1aed6eeebcaf4517b15 |
  | registry.redhat.io/codeready-workspaces/server-rhel8@sha256:3843f4e3271d927cb1955bfa54b144729676988219dc21849a30a06c9aaab215 | server-rhel8@sha256:3843f4e3271d927cb1955bfa54b144729676988219dc21849a30a06c9aaab215 |
  | registry.redhat.io/codeready-workspaces/stacks-cpp-rhel8@sha256:fc621b59be72465ab82cfa293b5b190521eecfed9c353051a7e72592837891c1 | stacks-cpp-rhel8@sha256:fc621b59be72465ab82cfa293b5b190521eecfed9c353051a7e72592837891c1 |
  | registry.redhat.io/codeready-workspaces/stacks-dotnet-rhel8@sha256:88134d9fd6b7c81e237e6295183d59cfe3e546762315e93f4d6fb547ecdfaeba | stacks-dotnet-rhel8@sha256:88134d9fd6b7c81e237e6295183d59cfe3e546762315e93f4d6fb547ecdfaeba |
  | registry.redhat.io/codeready-workspaces/stacks-golang-rhel8@sha256:ef135a05399a4d5f58bcb059b6634498bee5adbbcf8ddb2956abf25819e82462 | stacks-golang-rhel8@sha256:ef135a05399a4d5f58bcb059b6634498bee5adbbcf8ddb2956abf25819e82462 |
  | registry.redhat.io/codeready-workspaces/stacks-php-rhel8@sha256:f2ee2cf24f649092568f932977193f585caac19ef23892968d0fe4dbc90f4a35 | stacks-php-rhel8@sha256:f2ee2cf24f649092568f932977193f585caac19ef23892968d0fe4dbc90f4a35 |
  | registry.redhat.io/codeready-workspaces/theia-endpoint-rhel8@sha256:128e281bceaccfcb3f9c3aebdd218b6bb6381f9c41cff2259eba47dd49d95c4d | theia-endpoint-rhel8@sha256:128e281bceaccfcb3f9c3aebdd218b6bb6381f9c41cff2259eba47dd49d95c4d |
  | registry.redhat.io/codeready-workspaces/theia-rhel8@sha256:928f5792cc39e6b7785f4f92ec0d6a5b9cd36fb285c1f72d12239beb05d8696e | theia-rhel8@sha256:928f5792cc39e6b7785f4f92ec0d6a5b9cd36fb285c1f72d12239beb05d8696e |
  | registry.redhat.io/codeready-workspaces/traefik-rhel8@sha256:93e8f8eed5b1c723213ab4bc538c04fe0d6e25fd66d26de1f8c632b701fe2eb8 | traefik-rhel8@sha256:93e8f8eed5b1c723213ab4bc538c04fe0d6e25fd66d26de1f8c632b701fe2eb8 |
  | registry.redhat.io/devworkspace/devworkspace-rhel8-operator@sha256:e68ec2fe7ac27e59641bdfc7794ae99fdfaa60e5b6d0cc0e3f20ab3f7a31bc11 | devworkspace/devworkspace-rhel8-operator@sha256:e68ec2fe7ac27e59641bdfc7794ae99fdfaa60e5b6d0cc0e3f20ab3f7a31bc11 |
  | registry.redhat.io/jboss-eap-7/eap-xp3-openj9-11-openshift-rhel8@sha256:44f82c43a730acbfb4ce2be81ca32197099c370eeb85cedbee3d1e89e9ac7684 | eap-xp3-openj9-11-openshift-rhel8@sha256:44f82c43a730acbfb4ce2be81ca32197099c370eeb85cedbee3d1e89e9ac7684 |
  | registry.redhat.io/jboss-eap-7/eap-xp3-openjdk11-openshift-rhel8@sha256:3875b2ee2826a6d8134aa3b80ac0c8b5ebc4a7f718335d76dfc3461b79f93d19 | eap-xp3-openjdk11-openshift-rhel8@sha256:3875b2ee2826a6d8134aa3b80ac0c8b5ebc4a7f718335d76dfc3461b79f93d19 |
  | registry.redhat.io/jboss-eap-7/eap74-openjdk8-openshift-rhel7@sha256:b4a113c4d4972d142a3c350e2006a2b297dc883f8ddb29a88db19c892358632d | eap74-openjdk8-openshift-rhel7@sha256:b4a113c4d4972d142a3c350e2006a2b297dc883f8ddb29a88db19c892358632d |
  | registry.redhat.io/openshift4/ose-kube-rbac-proxy@sha256:86e5fa1fa294987114be200890c2e516501e424aee0fb98ece25c95e7716295b | openshift4/ose-kube-rbac-proxy@sha256:86e5fa1fa294987114be200890c2e516501e424aee0fb98ece25c95e7716295b |
  | registry.redhat.io/openshift4/ose-oauth-proxy@sha256:30692aed2508e0576f9769fedb87333ab027babda774a870edfbdf2b3ecabed0 | openshift4/ose-oauth-proxy@sha256:30692aed2508e0576f9769fedb87333ab027babda774a870edfbdf2b3ecabed0 |
  | registry.redhat.io/rh-sso-7/sso74-openj9-openshift-rhel8@sha256:046d86f43fe0d22531505f3a7cf3050baa5967e6443ac226d5a8402d589fab13 | sso74-openj9-openshift-rhel8@sha256:046d86f43fe0d22531505f3a7cf3050baa5967e6443ac226d5a8402d589fab13 |
  | registry.redhat.io/rh-sso-7/sso74-openshift-rhel8@sha256:90a68849d9f739087cb045b62036cf4adcb4b63e7f1b1cabb12a6d6e3cc76cff | sso74-openshift-rhel8@sha256:90a68849d9f739087cb045b62036cf4adcb4b63e7f1b1cabb12a6d6e3cc76cff |
  | registry.redhat.io/rhel8/postgresql-13@sha256:487183263b25ff4a0d68e97f17756aa9600ca640b20804ca34f19718e471f647 | postgresql-13@sha256:487183263b25ff4a0d68e97f17756aa9600ca640b20804ca34f19718e471f647 |
  | registry.redhat.io/rhel8/postgresql-96@sha256:314747a4a64ac16c33ead6a34479dccf16b9a07abf440ea7eeef7cda4cd19e32 | postgresql-96@sha256:314747a4a64ac16c33ead6a34479dccf16b9a07abf440ea7eeef7cda4cd19e32 |
  | registry.redhat.io/rhscl/mongodb-36-rhel7@sha256:9f799d356d7d2e442bde9d401b720600fd9059a3d8eefea6f3b2ffa721c0dc73 | mongodb-36-rhel7@sha256:9f799d356d7d2e442bde9d401b720600fd9059a3d8eefea6f3b2ffa721c0dc73 |
  | registry.redhat.io/ubi8/ubi-minimal@sha256:c536d4c63253318fdfc1db499f8f4bb0881db7fbd6f3d1554b4d54c812f85cc7 | ubi8/ubi-minimal@sha256:c536d4c63253318fdfc1db499f8f4bb0881db7fbd6f3d1554b4d54c812f85cc7 |
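Copying every image in the table by hand is error-prone, so the copy step can be scripted. The helper below is a sketch, not part of crwctl or skopeo: target_image derives the default <target-image> coordinate (the part of the source reference after the final /), and copy_image wraps the skopeo copy invocation shown above. Note that a few targets in the table keep an organization prefix (for example openshift4/ose-oauth-proxy), so check the table before relying on the default.

```shell
#!/bin/sh
# Sketch only: the helper names below are hypothetical.

# Derive the default <target-image> coordinate from a full source reference:
# keep everything after the final '/', i.e. the image name plus its digest.
target_image() {
  echo "${1##*/}"
}

# Copy one image into the private registry without changing its digest.
copy_image() {
  source_ref="$1"; registry="$2"; org="$3"
  skopeo copy --all "docker://${source_ref}" \
    "docker://${registry}/${org}/$(target_image "${source_ref}")"
}

# Example invocation (requires network access and registry credentials):
# copy_image registry.redhat.io/codeready-workspaces/server-rhel8@sha256:3843f4e3271d927cb1955bfa54b144729676988219dc21849a30a06c9aaab215 \
#   my-registry.example.com:5000 crw
```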
Verification steps
- Verify the images have the same digests:

  $ skopeo inspect docker://<source-image>
  $ skopeo inspect docker://<target-registry>/<target-organization>/<target-image>
Additional resources
- To find the sources of the images list, see the values of the relatedImages attribute in the CodeReady Workspaces Operator ClusterServiceVersion sources.
When installing CodeReady Workspaces in a restricted environment using crwctl or OperatorHub, provide a CheCluster custom resource with additional information.
Procedure
- Download the default custom resource YAML file.
- Name the downloaded custom resource org_v1_che_cr.yaml. Keep it for further modification and usage.
Prerequisites
- All required images available in an image registry that is visible to the OpenShift cluster where CodeReady Workspaces is to be deployed. This is described in Section 3.4.2.1, “Preparing a private registry”, where the placeholders used in the following examples are also defined.
Procedure
- In the CheCluster Custom Resource, which is managed by the CodeReady Workspaces Operator, add the fields used to facilitate deploying an instance of CodeReady Workspaces in a restricted environment.
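As an illustration, the restricted-environment fields can be sketched in the CheCluster Custom Resource as follows. This is a sketch: airGapContainerRegistryHostname and airGapContainerRegistryOrganization are the spec.server fields the CRW 2.x Operator uses for air-gapped registries, and the placeholder values correspond to Section 3.4.2.1, “Preparing a private registry”.

```yaml
# Sketch of restricted-environment fields in the CheCluster Custom Resource.
# Substitute your own values for the placeholders.
apiVersion: org.eclipse.che/v1
kind: CheCluster
metadata:
  name: codeready-workspaces
spec:
  server:
    airGapContainerRegistryHostname: '<target-registry>'
    airGapContainerRegistryOrganization: '<target-organization>'
```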
This section describes how to start the CodeReady Workspaces installation in a restricted environment using the CodeReady Workspaces CLI management tool.
Prerequisites
- The CodeReady Workspaces CLI management tool is installed. See Section 3.3.1, “Installing the crwctl CLI management tool”.
- The oc tool is installed.
- Access to an OpenShift instance.
Procedure
- Log in to OpenShift Container Platform:

  $ oc login ${OPENSHIFT_API_URL} --username ${OPENSHIFT_USERNAME} \
      --password ${OPENSHIFT_PASSWORD}

- Install CodeReady Workspaces with a customized Custom Resource to add fields related to the restricted environment:

  $ crwctl server:start \
      --che-operator-image=<target-registry>/<target-organization>/crw-2-rhel8-operator:2.14 \
      --che-operator-cr-yaml=org_v1_che_cr.yaml
For slow systems or internet connections, add the --k8spodwaittimeout=1800000 flag to the crwctl server:start command to extend the Pod timeout period to 1800000 ms or longer.
This procedure describes how to provide necessary additional information to the CheCluster custom resource when installing CodeReady Workspaces behind a proxy.
Procedure
- In the CheCluster Custom Resource, which is managed by the CodeReady Workspaces Operator, add the fields used to facilitate deploying an instance of CodeReady Workspaces behind a proxy.
- In addition to those basic settings, the proxy configuration usually requires adding the host of the external OpenShift cluster API URL to the list of hosts that CodeReady Workspaces accesses without using the proxy.
- To retrieve this cluster API host, run the following command against the OpenShift cluster:

  $ oc whoami --show-server | sed 's#https://##' | sed 's#:.*$##'

- The corresponding field of the CheCluster Custom Resource is nonProxyHosts. If a host already exists in this field, use | as a delimiter to add the cluster API host:

  # [...]
  spec:
    server:
      nonProxyHosts: 'anotherExistingHost|<cluster api host>'
  # [...]
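The proxy fields to add to the CheCluster Custom Resource can be sketched as follows. This is a sketch: proxyURL, proxyPort, and nonProxyHosts are spec.server fields in the CRW 2.x CheCluster, and the host and port values shown are illustrative only.

```yaml
# Sketch of proxy-related fields in the CheCluster Custom Resource.
spec:
  server:
    proxyURL: 'http://proxy.example.com'   # hypothetical proxy host
    proxyPort: '3128'                      # hypothetical proxy port
    nonProxyHosts: 'anotherExistingHost|<cluster api host>'
```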