Chapter 2. Installing Dev Spaces
This section contains instructions to install Red Hat OpenShift Dev Spaces.
You can deploy only one instance of OpenShift Dev Spaces per cluster.
2.1. Installing Dev Spaces on OpenShift using CLI
You can install OpenShift Dev Spaces on OpenShift.
Prerequisites
- OpenShift Container Platform
- An active oc session with administrative permissions to the OpenShift cluster. See Getting started with the OpenShift CLI.
- dsc. See: Section 1.2, “Installing the dsc management tool”.
Procedure
Optional: If you previously deployed OpenShift Dev Spaces on this OpenShift cluster, ensure that the previous OpenShift Dev Spaces instance is removed:
$ dsc server:delete
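You can also confirm that no CheCluster custom resources remain on the cluster. This is an optional check, assuming the Red Hat OpenShift Dev Spaces Operator was installed previously; if the CheCluster custom resource definition is absent, the command reports an unknown resource type, which also means no instance exists:
$ oc get checlusters --all-namespaces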
Create the OpenShift Dev Spaces instance:
$ dsc server:deploy --platform openshift
Verification steps
Verify the OpenShift Dev Spaces instance status:
$ dsc server:status
Navigate to the OpenShift Dev Spaces cluster instance:
$ dsc dashboard:open
2.2. Installing Dev Spaces on OpenShift using the web console
If you have trouble installing OpenShift Dev Spaces on the command line, you can install it through the OpenShift web console.
Prerequisites
- An OpenShift web console session by a cluster administrator. See Accessing the web console.
- An active oc session with administrative permissions to the OpenShift cluster. See Getting started with the OpenShift CLI.
- For a repeat installation on the same OpenShift cluster: you uninstalled the previous OpenShift Dev Spaces instance according to Chapter 7, Uninstalling Dev Spaces.
Procedure
- In the Administrator view of the OpenShift web console, go to Operators → OperatorHub and search for Red Hat OpenShift Dev Spaces. Install the Red Hat OpenShift Dev Spaces Operator.
Caution: The Red Hat OpenShift Dev Spaces Operator depends on the Dev Workspace Operator. If you install the Red Hat OpenShift Dev Spaces Operator manually to a non-default namespace, ensure that the Dev Workspace Operator is also installed in the same namespace. This is required as the Operator Lifecycle Manager will attempt to install the Dev Workspace Operator as a dependency within the Red Hat OpenShift Dev Spaces Operator namespace, potentially resulting in two conflicting installations of the Dev Workspace Operator if the latter is installed in a different namespace.
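As a quick optional check, you can list the OLM subscriptions for both Operators and verify that the NAMESPACE column matches; the grep pattern below is an assumption about the subscription names and may need adjusting:
oc get subscriptions --all-namespaces | grep -E 'devspaces|devworkspace'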
- Create the openshift-devspaces project in OpenShift as follows:
oc create namespace openshift-devspaces
- Go to Operators → Installed Operators → Red Hat OpenShift Dev Spaces instance Specification → Create CheCluster → YAML view.
- In the YAML view, replace namespace: openshift-operators with namespace: openshift-devspaces, as sketched after this list, and then select Create.
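A rough sketch of the relevant part of the CheCluster YAML after the change is shown below. Only the namespace value comes from this procedure; the name and the rest of the generated specification are the defaults proposed by the web console and should be left unchanged:
apiVersion: org.eclipse.che/v2
kind: CheCluster
metadata:
  name: devspaces
  namespace: openshift-devspaces
# keep the remainder of the generated spec unchanged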
Verification
- In Red Hat OpenShift Dev Spaces instance Specification, go to devspaces, landing on the Details tab.
- Under Message, check that the value is None, which means there are no errors.
- Under Red Hat OpenShift Dev Spaces URL, wait until the URL of the OpenShift Dev Spaces instance appears, and then open the URL to check the OpenShift Dev Spaces dashboard.
- In the Resources tab, view the resources for the OpenShift Dev Spaces deployment and their status.
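As an alternative to the web console checks above, you can read the same URL from the CheCluster status on the command line. This assumes the instance is named devspaces and was created in the openshift-devspaces project, as in this procedure:
oc get checluster devspaces -n openshift-devspaces -o jsonpath='{.status.cheURL}'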
2.3. Installing Dev Spaces in a restricted environment
On an OpenShift cluster operating in a restricted network, public resources are not available.
However, deploying OpenShift Dev Spaces and running workspaces requires the following public resources:
- Operator catalog
- Container images
- Sample projects
To make these resources available, you can replace them with their copy in a registry accessible by the OpenShift cluster.
Prerequisites
- The OpenShift cluster has at least 64 GB of disk space.
- The OpenShift cluster is ready to operate on a restricted network, and the OpenShift control plane has access to the public internet. See About disconnected installation mirroring and Using Operator Lifecycle Manager on restricted networks.
- An active oc session with administrative permissions to the OpenShift cluster. See Getting started with the OpenShift CLI.
- An active oc registry session to the registry.redhat.io Red Hat Ecosystem Catalog. See: Red Hat Container Registry authentication.
- opm. See Installing the opm CLI.
- jq. See Downloading jq.
- podman. See Podman Installation Instructions.
- skopeo version 1.6 or higher. See Installing Skopeo.
- An active skopeo session with administrative access to the private Docker registry. See Authenticating to a registry and Mirroring images for a disconnected installation.
- dsc for OpenShift Dev Spaces version 3.11. See Section 1.2, “Installing the dsc management tool”.
Procedure
Download and execute the mirroring script to install a custom Operator catalog and mirror the related images: prepare-restricted-environment.sh.
$ bash prepare-restricted-environment.sh \
  --devworkspace_operator_index registry.redhat.io/redhat/redhat-operator-index:v4.14 \
  --devworkspace_operator_version "v0.25.0" \
  --prod_operator_index "registry.redhat.io/redhat/redhat-operator-index:v4.14" \
  --prod_operator_package_name "devspaces" \
  --prod_operator_bundle_name "devspacesoperator" \
  --prod_operator_version "v3.11.0" \
  --my_registry "<my_registry>" 1
1: The private Docker registry where the images will be mirrored.
Install OpenShift Dev Spaces with the configuration set in the che-operator-cr-patch.yaml file created during the previous step:
$ dsc server:deploy \
  --platform=openshift \
  --olm-channel stable \
  --catalog-source-name=devspaces-disconnected-install \
  --catalog-source-namespace=openshift-marketplace \
  --skip-devworkspace-operator \
  --che-operator-cr-patch-yaml=che-operator-cr-patch.yaml
- Allow incoming traffic from the OpenShift Dev Spaces namespace to all Pods in the user projects. See: Section 3.7.1, “Configuring network policies”.
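Section 3.7.1 describes the supported way to configure network policies. As a rough illustration only, a NetworkPolicy of the following shape, created in a user project, allows ingress from the openshift-devspaces namespace; the policy name, the placeholder project name, and the use of the kubernetes.io/metadata.name label are illustrative assumptions:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-from-openshift-devspaces
  namespace: <user_project>
spec:
  podSelector: {}
  ingress:
    - from:
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: openshift-devspaces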
2.3.1. Setting up an Ansible sample
Follow these steps to use an Ansible sample in restricted environments.
Prerequisites
- Microsoft Visual Studio Code - Open Source IDE
- A 64-bit x86 system.
Procedure
Mirror the following images:
quay.io/devspaces/ansible-creator-ee@sha256:3ff5d2d5f17c9c1e4a352d9922e27be09641647ac028a56845aaab6f6e3c7958
quay.io/devspaces/ansible-creator-ee@sha256:04c7aa48f34ab28dc21f36acfe472b249f29c24d1a52d98b2c8da75dd6587d79
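One way to mirror these images is with skopeo, which is already listed in the prerequisites of Section 2.3. The destination repository path and tag below are illustrative, and <my_registry> is the same private registry used earlier:
skopeo copy --all \
  docker://quay.io/devspaces/ansible-creator-ee@sha256:3ff5d2d5f17c9c1e4a352d9922e27be09641647ac028a56845aaab6f6e3c7958 \
  docker://<my_registry>/devspaces/ansible-creator-ee:latest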
Configure the cluster proxy to allow access to the following domains:
.ansible.com
.ansible-galaxy-ng.s3.dualstack.us-east-1.amazonaws.com
Support for the following IDE and CPU architectures is planned for a future release:
IDE
- JetBrains IntelliJ IDEA Community Edition IDE (Technology Preview)
CPU architectures
- IBM Power (ppc64le)
- IBM Z (s390x)
2.4. Installing Dev Spaces in the cloud
Deploy and run Red Hat OpenShift Dev Spaces in the cloud.
Prerequisites
- An OpenShift cluster to deploy OpenShift Dev Spaces on.
- dsc: The command line tool for Red Hat OpenShift Dev Spaces. See: Section 1.2, “Installing the dsc management tool”.
2.4.1. Deploying OpenShift Dev Spaces in the cloud
Follow the instructions below to start the OpenShift Dev Spaces Server in the cloud using the dsc tool.
2.4.2. Installing and setting up Dev Spaces on Microsoft Azure
Microsoft Azure is a cloud computing service for building, testing, deploying, and managing applications and services through Microsoft-managed data centers.
Follow the instructions below to install and enable OpenShift Dev Spaces on Microsoft Azure.
Prerequisites
- helm: The package manager for Kubernetes. See: Installing Helm.
- az: The Microsoft Azure CLI command line tool. See: How to install Microsoft Azure CLI.
- kubelogin: The credential plugin. See: How to install kubelogin.
2.4.2.1. Preparing Microsoft Azure for OpenShift Dev Spaces installation
Prepare Microsoft Azure for OpenShift Dev Spaces installation.
Procedure
Log in to Microsoft Azure:
az login
Create a resource group (to list the locations, use the az account list-locations command):
# Resource group name
ECLIPSE_CHE_RESOURCE_GROUP=eclipse-che
# Azure region
AZURE_REGION=centralus
az group create --name $ECLIPSE_CHE_RESOURCE_GROUP --location $AZURE_REGION
Create a cluster admins group:
# Azure Active Directory group name
AAD_GROUP_NAME=AKSAdmins
az ad group create --display-name $AAD_GROUP_NAME --mail-nickname $AAD_GROUP_NAME
Add the current user to the cluster admins group:
az ad group member add --group $AAD_GROUP_NAME \
  --member-id $(az ad signed-in-user show --query id --output tsv)
Create the Microsoft Entra integrated cluster:
# Azure Kubernetes Service cluster name
AKS_CLUSTER_NAME=eclipse-che
az aks create \
  --resource-group $ECLIPSE_CHE_RESOURCE_GROUP \
  --name $AKS_CLUSTER_NAME \
  --enable-aad \
  --aad-admin-group-object-ids $(az ad group list --query "[?displayName=='$AAD_GROUP_NAME'].id" --output tsv) \
  --generate-ssh-keys
Get the user credentials to access your cluster:
az aks get-credentials \
  --resource-group $ECLIPSE_CHE_RESOURCE_GROUP \
  --name $AKS_CLUSTER_NAME \
  --admin
Set kubelogin to use the Microsoft Azure CLI:
kubelogin convert-kubeconfig -l azurecli
View the pods in the cluster:
oc get pods --all-namespaces
Verification
- All pods are displayed in the Running state.
2.4.2.2. Installing NGINX Ingress Controller on Microsoft Azure Kubernetes Service
Use the following instructions to install the NGINX Ingress Controller on Microsoft Azure Kubernetes Service.
Procedure
Install NGINX Ingress Controller:
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo update
helm install ingress-nginx ingress-nginx/ingress-nginx \
  --wait \
  --create-namespace \
  --namespace ingress-nginx \
  --set controller.service.annotations."service\.beta\.kubernetes\.io/azure-load-balancer-health-probe-request-path"=/healthz
Wait for the external IP. Note that a <pending> status for the external IP is shown before the exact external IP address is displayed.
oc get services ingress-nginx-controller --namespace ingress-nginx

NAME                       TYPE           CLUSTER-IP   EXTERNAL-IP     PORT(S)                      AGE
ingress-nginx-controller   LoadBalancer   10.0.65.52   XX.XXX.XX.XXX   80:31104/TCP,443:32552/TCP   13m
2.4.2.3. Installing cert-manager on Microsoft Azure Kubernetes Service
Learn how to install cert-manager on Microsoft Azure Kubernetes Service.
Procedure
Install cert-manager:
helm repo add jetstack https://charts.jetstack.io
helm repo update
helm install cert-manager jetstack/cert-manager \
  --wait \
  --create-namespace \
  --namespace cert-manager \
  --set installCRDs=true
2.4.2.4. Configuring DNS on Microsoft Azure
Configure DNS on Microsoft Azure. Before you start, make sure you have a registered domain.
Prerequisites
- A registered domain.
Procedure
Define the domain name (replace the example value with your own domain):
export DOMAIN_NAME=azr.my-ide.cloud
Create a DNS zone:
az network dns zone create \
  --resource-group $ECLIPSE_CHE_RESOURCE_GROUP \
  --name $DOMAIN_NAME
Create a DNS record set:
az network dns record-set a add-record \
  --resource-group $ECLIPSE_CHE_RESOURCE_GROUP \
  --zone-name $DOMAIN_NAME \
  --record-set-name "*" \
  --ipv4-address $(oc get service -n ingress-nginx ingress-nginx-controller -o=jsonpath='{.status.loadBalancer.ingress[0].ip}')
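To confirm that the wildcard record was created, you can list the A record sets in the zone:
az network dns record-set a list \
  --resource-group $ECLIPSE_CHE_RESOURCE_GROUP \
  --zone-name $DOMAIN_NAME \
  --output table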
If you use a registrar such as GoDaddy, you will need to add the following two DNS records in your registrar and point them to the IP address of the ingress controller:
- type: A, name: @
- type: A, name: *
2.4.2.5. Creating Let’s Encrypt certificate for devspaces on Microsoft Azure
Follow these instructions to create a Let’s Encrypt certificate for OpenShift Dev Spaces on Microsoft Azure.
Procedure
Create a service principal:
CERT_MANAGER_SERVICE_PRINCIPAL_NAME=cert-manager-eclipse-che
CERT_MANAGER_SERVICE_PRINCIPAL_APP_ID=$(az ad sp create-for-rbac --name $CERT_MANAGER_SERVICE_PRINCIPAL_NAME --query "appId" --output tsv)
Give access to the DNS zone:
az role assignment create \
  --assignee $CERT_MANAGER_SERVICE_PRINCIPAL_APP_ID \
  --scope $(az network dns zone show --name $DOMAIN_NAME --resource-group $ECLIPSE_CHE_RESOURCE_GROUP --query "id" --output tsv) \
  --role "DNS Zone Contributor"
Create the openshift-devspaces namespace:
oc create namespace openshift-devspaces
Create a Service Account Secret:
oc create secret generic azuredns-config \
  --from-literal=clientSecret=$(az ad sp create-for-rbac --name $CERT_MANAGER_SERVICE_PRINCIPAL_NAME --query "password" --output tsv) \
  --namespace openshift-devspaces
Create the Issuer and replace MY_EMAIL_ADDRESS with a valid address:
oc apply -f - << EOF
apiVersion: cert-manager.io/v1
kind: Issuer
metadata:
  name: devspaces-letsencrypt
  namespace: openshift-devspaces
spec:
  acme:
    solvers:
      - dns01:
          azureDNS:
            clientID: $CERT_MANAGER_SERVICE_PRINCIPAL_APP_ID
            clientSecretSecretRef:
              name: azuredns-config
              key: clientSecret
            subscriptionID: $(az account show --query "id" --output tsv)
            tenantID: $(az account show --query "tenantId" --output tsv)
            resourceGroupName: $ECLIPSE_CHE_RESOURCE_GROUP
            hostedZoneName: $DOMAIN_NAME
    email: MY_EMAIL_ADDRESS
    privateKeySecretRef:
      name: letsencrypt
    server: https://acme-v02.api.letsencrypt.org/directory
EOF
Create the Certificate:
oc apply -f - << EOF
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: che-tls
  namespace: openshift-devspaces
spec:
  secretName: che-tls
  issuerRef:
    name: devspaces-letsencrypt
    kind: Issuer
  commonName: '$DOMAIN_NAME'
  dnsNames:
    - '$DOMAIN_NAME'
    - '*.$DOMAIN_NAME'
  usages:
    - server auth
    - digital signature
    - key encipherment
    - key agreement
    - data encipherment
EOF
If you use a registrar such as GoDaddy, you need to duplicate the following DNS record in your registrar:
- type: TXT, name: _acme-challenge
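To confirm that the DNS-01 challenge completed and the certificate was issued, you can wait for the Ready condition on the Certificate resource; the timeout value below is an arbitrary example:
oc wait certificate/che-tls --for=condition=Ready --timeout=10m -n openshift-devspaces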
2.4.2.6. Registering a client application in Microsoft Entra ID
Learn how to register a client application in Microsoft Entra ID.
Procedure
Create the application:
# Eclipse Che Application display name
ECLIPSE_CHE_APPLICATION_DISPLAY_NAME="Eclipse Che"
az ad app create \
  --display-name "$ECLIPSE_CHE_APPLICATION_DISPLAY_NAME" \
  --enable-access-token-issuance \
  --required-resource-accesses '[{"resourceAccess":[{"id":"34a47c2f-cd0d-47b4-a93c-2c41130c671c","type":"Scope"}],"resourceAppId":"6dae42f8-4368-4678-94ff-3960e28e3630"},{"resourceAccess":[{"id":"e1fe6dd8-ba31-4d61-89e7-88639da4683d","type":"Scope"}],"resourceAppId":"00000003-0000-0000-c000-000000000000"}]' \
  --optional-claims '{"accessToken":[{"additionalProperties":[],"essential":false,"name":"groups","source":null}]}' \
  --sign-in-audience AzureADMyOrg \
  --web-redirect-uris https://$DOMAIN_NAME/oauth/callback
Update the application group membership claims:
az ad app update \
  --id $(az ad app list --query "[?displayName=='$ECLIPSE_CHE_APPLICATION_DISPLAY_NAME'].id" --output tsv) \
  --set groupMembershipClaims=SecurityGroup
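To confirm the registration and look up the identifiers used later in the CheCluster patch, you can query the application by its display name; the output columns chosen here are only for convenience:
az ad app list \
  --query "[?displayName=='$ECLIPSE_CHE_APPLICATION_DISPLAY_NAME'].{appId:appId, objectId:id}" \
  --output table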
2.4.2.7. Installing OpenShift Dev Spaces on Microsoft Azure Kubernetes Service
Install OpenShift Dev Spaces on Microsoft Azure Kubernetes Service.
Procedure
Prepare a CheCluster patch YAML file:
cat > che-cluster-patch.yaml << EOF
spec:
  networking:
    auth:
      identityProviderURL: "https://sts.windows.net/$(az account show --query "tenantId" --output tsv)/v2.0/"
      identityToken: access_token
      oAuthClientName: $(az ad app list --query "[?displayName=='$ECLIPSE_CHE_APPLICATION_DISPLAY_NAME'].appId" --output tsv)
      oAuthSecret: $(az ad app credential reset --id $(az ad app list --query "[?displayName=='$ECLIPSE_CHE_APPLICATION_DISPLAY_NAME'].appId" --output tsv) --query "password" --output tsv)
      oAuthScope: openid email profile 6dae42f8-4368-4678-94ff-3960e28e3630/user.read
      gateway:
        deployment:
          containers:
            - env:
                - name: OAUTH2_PROXY_INSECURE_OIDC_ALLOW_UNVERIFIED_EMAIL
                  value: "true"
              name: oauth-proxy
  components:
    cheServer:
      extraProperties:
        CHE_OIDC_AUTH__SERVER__URL: "https://sts.windows.net/$(az account show --query "tenantId" --output tsv)/v2.0/"
        CHE_OIDC_EMAIL__CLAIM: unique_name
EOF
Deploy OpenShift Dev Spaces:
chectl server:deploy \
  --platform=k8s \
  --che-operator-cr-patch-yaml=che-cluster-patch.yaml \
  --skip-oidc-provider-check \
  --skip-cert-manager \
  --domain=$DOMAIN_NAME
Navigate to the OpenShift Dev Spaces cluster instance:
$ dsc dashboard:open
2.5. Finding the fully qualified domain name (FQDN)
You can get the fully qualified domain name (FQDN) of your organization’s instance of OpenShift Dev Spaces on the command line or in the OpenShift web console.
You can find the FQDN for your organization’s OpenShift Dev Spaces instance in the Administrator view of the OpenShift web console as follows: go to Operators → Installed Operators → Red Hat OpenShift Dev Spaces instance Specification → devspaces, and read the value under Red Hat OpenShift Dev Spaces URL.
Prerequisites
- An active oc session with administrative permissions to the OpenShift cluster. See Getting started with the OpenShift CLI.
- dsc. See Section 1.2, “Installing the dsc management tool”.
Procedure
Run the following command:
$ dsc server:status
Copy the Red Hat OpenShift Dev Spaces URL without the trailing /dashboard/.
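If you prefer not to parse the dsc output, the FQDN is also the host of the route exposed for the instance. Assuming the default openshift-devspaces project, list the routes and read the HOST/PORT column:
$ oc get routes -n openshift-devspaces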