
Chapter 2. Running Red Hat OpenShift Dev Spaces at scale


Even though Kubernetes has emerged as a powerful foundation for deploying and managing containerized workloads at scale, achieving scale with Cloud Development Environments (CDEs), particularly in the range of thousands of concurrent workspaces, presents significant challenges.

Such a scale imposes high infrastructure demands and introduces potential bottlenecks that can impact the performance and stability of the entire system. Addressing these challenges requires meticulous planning, strategic architectural choices, monitoring, and continuous optimization to ensure a seamless and efficient development experience for a large number of users.

Note

CDE workloads are complex to scale mainly because the underlying IDE solutions, such as Visual Studio Code - Open Source ("Code - OSS") or JetBrains Gateway, are designed as single-user applications, not as multitenant services.

Resource quantity and object maximums

While there is no strict limit on the number of resources in a Kubernetes cluster, there are certain considerations to keep in mind for large clusters.

Note

Learn more about Kubernetes scalability in the "Scalability, with Wojciech Tyczynski" episode of the Kubernetes Podcast.

OpenShift Container Platform, which is a certified distribution of Kubernetes, also provides a set of tested maximums for various resources, which can serve as an initial guideline for planning your environment:

Resource type              Tested maximum

Number of nodes            2,000
Number of pods             150,000
Number of pods per node    2,500
Number of namespaces       10,000
Number of services         10,000
Number of secrets          80,000
Number of config maps      90,000

Table 1: OpenShift Container Platform tested cluster maximums for various resources.

Note

You can find more details on the OpenShift Container Platform tested object maximums in the official documentation.

For example, it is generally not recommended to have more than 10,000 namespaces due to potential performance and management overhead. In Red Hat OpenShift Dev Spaces, each user is allocated a namespace. If you expect the user base to be large, consider spreading workloads across multiple "fit-for-purpose" clusters and potentially leveraging solutions for multi-cluster orchestration.

Resource requirements

When deploying Red Hat OpenShift Dev Spaces on Kubernetes, it is crucial to accurately calculate the resource requirements and determine the memory and CPU/GPU needs of each CDE in order to size the cluster correctly. In general, a CDE is limited by, and cannot be bigger than, the worker node it runs on. The resource requirements for CDEs can vary significantly based on the specific workloads and configurations. For example, a simple CDE might require only a few hundred megabytes of memory, while a more complex one may need several gigabytes of memory and multiple CPU cores.

Note

You can find more details about calculating resource requirements in the official documentation.
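As a rough sizing sketch, per-CDE requests and limits are typically declared in the workspace devfile. The values below are illustrative placeholders rather than recommendations; with limits like these, 1,000 concurrent CDEs could consume up to roughly 4 TiB of memory and 2,000 CPU cores at peak, before accounting for per-node overhead:

schemaVersion: 2.2.0
metadata:
  name: sized-workspace
components:
  - name: tools
    container:
      image: quay.io/devfile/universal-developer-image:ubi8-latest
      # Requests drive scheduling; limits cap a single CDE's footprint.
      memoryRequest: 512Mi
      memoryLimit: 4Gi
      cpuRequest: 500m
      cpuLimit: 2000m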

Using etcd

etcd is the primary datastore for Kubernetes cluster configuration and state, holding information about nodes, pods, services, and custom resources. As a distributed key-value store, etcd does not scale well past a certain threshold, and as its size grows, so does the load on the cluster, risking its stability.

Important

The default etcd size is 2 GB, and the recommended maximum is 8 GB. Exceeding the maximum limit can make the Kubernetes cluster unstable and unresponsive.
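One way to keep an eye on the etcd database size on OpenShift is to query etcd directly. The following sketch assumes cluster-admin access; the pod name etcd-master-0 is illustrative and will differ in your cluster:

# List the etcd member pods, then check the DB SIZE column for each member
oc get pods -n openshift-etcd -l app=etcd
oc rsh -n openshift-etcd etcd-master-0 etcdctl endpoint status --cluster -w table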

Object size as a factor

Not only the overall number of objects but also their size is a critical factor that can significantly impact etcd performance and stability. Each object stored in etcd consumes space, and as the number of objects increases, so does the overall size of the etcd database. The larger an object is, the more space it takes in etcd. For example, etcd can be overloaded by just a few thousand relatively big Kubernetes objects.

Important

Even though the data stored in a ConfigMap cannot exceed 1 MiB by design, a few thousand relatively big ConfigMap objects can overload etcd storage.

In the context of Red Hat OpenShift Dev Spaces, the Operator by default creates and manages the 'ca-certs-merged' ConfigMap, which contains the Certificate Authority (CA) bundle, in every user namespace. With a large number of TLS certificates in the cluster, this results in additional etcd usage.

To stop mounting the CA bundle ConfigMap under the /etc/pki/ca-trust/extracted/pem path, set the disableWorkspaceCaBundleMount property to true in the CheCluster Custom Resource. With this configuration, only custom certificates are mounted, under the /public-certs path:

spec:
  devEnvironments:
    trustedCerts:
      disableWorkspaceCaBundleMount: true
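For example, assuming the CheCluster resource is named devspaces and lives in the openshift-devspaces namespace (adjust both to your installation), you can set the property with a single patch:

oc patch checluster devspaces -n openshift-devspaces --type merge \
  -p '{"spec":{"devEnvironments":{"trustedCerts":{"disableWorkspaceCaBundleMount":true}}}}'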

DevWorkspace objects

For large Kubernetes deployments, particularly those involving a high number of custom resources such as DevWorkspace objects, which represent CDEs, etcd can become a significant performance bottleneck.

Important

Based on load testing with 6,000 DevWorkspace objects, etcd storage consumption was approximately 2.5 GB.

Starting from DevWorkspace Operator version 0.34.0, you can configure a pruner that automatically cleans up DevWorkspace objects that have not been in use for a certain period of time. To set up the pruner, configure the DevWorkspaceOperatorConfig object as follows:

apiVersion: controller.devfile.io/v1alpha1
kind: DevWorkspaceOperatorConfig
metadata:
  name: devworkspace-operator-config
  namespace: crw
config:
  workspace:
    cleanupCronJob:
      enabled: true
      dryRun: false
      retainTime: 2592000 # 1
      schedule: "0 0 1 * *" # 2

1 By default, a workspace that has not been started for more than 30 days (2,592,000 seconds) is marked for deletion.
2 By default, the pruner runs once per month.
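The pruner is implemented as a Kubernetes CronJob. Its exact name and namespace depend on the operator installation, so one way to confirm it was created and inspect its schedule is to search cluster-wide:

oc get cronjobs -A | grep -i workspace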

OLMConfig

When an Operator is installed by Operator Lifecycle Manager (OLM), a stripped-down copy of its ClusterServiceVersion (CSV) is created in every namespace the Operator is configured to watch. These stripped-down CSVs are known as "Copied CSVs" and communicate to users which controllers are actively reconciling resource events in a given namespace. On especially large clusters, where namespaces and installed Operators number in the hundreds or thousands, Copied CSVs consume an unsustainable amount of resources, such as OLM memory, etcd storage, and network bandwidth. To eliminate the CSVs copied to every namespace, configure the OLMConfig object accordingly:

apiVersion: operators.coreos.com/v1
kind: OLMConfig
metadata:
  name: cluster
spec:
  features:
    disableCopiedCSVs: true
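Because OpenShift ships with a cluster-scoped OLMConfig named cluster, it is usually easier to patch the existing object than to apply a new manifest:

oc patch olmconfig cluster --type merge -p '{"spec":{"features":{"disableCopiedCSVs":true}}}'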
Note

Additional information about the disableCopiedCSVs feature is available in its original enhancement proposal.

The primary impact of the disableCopiedCSVs property on etcd is reduced resource consumption. In clusters with a large number of namespaces and many cluster-wide Operators, the creation and maintenance of numerous Copied CSVs increases etcd storage usage and OLM memory consumption, and this overhead adds up quickly as namespaces and Operators multiply. Disabling Copied CSVs significantly reduces the amount of data stored in etcd and shrinks the memory footprint of OLM, since it no longer needs to maintain these additional resources, which improves the overall performance and responsiveness of the cluster.

Note

You can find more details about "Disabling Copied CSVs" in the official documentation.

Cluster Autoscaling

Although cluster autoscaling is a powerful Kubernetes feature, you cannot always fall back on it. Consider predictive scaling as well: analyze load data for your environment to detect daily or weekly usage patterns, and if your workloads show dramatic peaks throughout the day, provision worker nodes accordingly. For example, if the number of workspaces predictably increases during business hours and decreases during off-hours, you can adjust the number of worker nodes based on the expected load. This helps ensure that enough resources are available to handle the peak load while minimizing costs during off-peak hours.
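As a minimal sketch of such schedule-based provisioning on OpenShift, a CronJob can scale a MachineSet up before business hours. The MachineSet name worker-pool, the service account machineset-scaler, and the replica count are illustrative assumptions; a matching scale-down job would run in the evening:

apiVersion: batch/v1
kind: CronJob
metadata:
  name: scale-up-workday
  namespace: openshift-machine-api
spec:
  schedule: "0 7 * * 1-5" # weekdays at 07:00, before developers sign in
  jobTemplate:
    spec:
      template:
        spec:
          # Hypothetical service account with RBAC to scale MachineSets
          serviceAccountName: machineset-scaler
          restartPolicy: OnFailure
          containers:
            - name: scale
              image: registry.redhat.io/openshift4/ose-cli:latest
              command: ["oc", "scale", "machineset/worker-pool", "--replicas=10", "-n", "openshift-machine-api"]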

Note

Consider leveraging open-source solutions such as Karpenter for configuration and lifecycle management of the worker nodes. Karpenter can dynamically provision and optimize worker nodes based on the specific requirements of the workloads, helping to improve resource utilization and reduce costs.

Multi-cluster

By design, Red Hat OpenShift Dev Spaces is not multi-cluster aware, and you can only have one instance per cluster. However, you can run Red Hat OpenShift Dev Spaces in a multi-cluster environment by deploying Red Hat OpenShift Dev Spaces in each different cluster and using a load balancer or DNS-based routing to direct traffic to the appropriate instance based on the user’s location or other criteria. This approach can help improve performance and reliability by distributing the workload across multiple clusters and providing redundancy in case of cluster failures.

Figure 2.1. Scheme of a multi-cluster environment

You can test running OpenShift Dev Spaces in a multi-cluster setup by using the Developer Sandbox, a free trial environment by Red Hat with pre-configured tools and services. From an infrastructure perspective, the Developer Sandbox consists of multiple ROSA clusters. On each cluster, the productized version of Red Hat OpenShift Dev Spaces is installed and configured using Argo CD. Since the user base is spread across multiple clusters, workspaces.openshift.com is used as a single entry point to the productized Red Hat OpenShift Dev Spaces instances. You can find implementation details about the multi-cluster redirector in the following GitHub repository.

Important

The multi-cluster architecture of workspaces.openshift.com is part of the Developer Sandbox. It is a Developer Sandbox-specific solution that cannot be reused as-is in other environments. However, you can use it as a reference for implementing a similar solution well-tailored to your specific multicluster needs.
