Chapter 9. Quotas
9.1. Resource quotas per project
A resource quota, defined by a ResourceQuota object, provides constraints that limit aggregate resource consumption per project. It can limit the quantity of objects that can be created in a project by type, as well as the total amount of compute resources and storage that might be consumed by resources in that project.
This guide describes how resource quotas work, how cluster administrators can set and manage resource quotas on a per project basis, and how developers and cluster administrators can view them.
9.1.1. Resources managed by quotas
The following describes the set of compute resources and object types that can be managed by a quota.
A pod is in a terminal state if status.phase in (Failed, Succeeded) is true.
| Resource Name | Description |
|---|---|
| cpu | The sum of CPU requests across all pods in a non-terminal state cannot exceed this value. |
| memory | The sum of memory requests across all pods in a non-terminal state cannot exceed this value. |
| requests.cpu | The sum of CPU requests across all pods in a non-terminal state cannot exceed this value. |
| requests.memory | The sum of memory requests across all pods in a non-terminal state cannot exceed this value. |
| limits.cpu | The sum of CPU limits across all pods in a non-terminal state cannot exceed this value. |
| limits.memory | The sum of memory limits across all pods in a non-terminal state cannot exceed this value. |
| Resource Name | Description |
|---|---|
| requests.storage | The sum of storage requests across all persistent volume claims in any state cannot exceed this value. |
| persistentvolumeclaims | The total number of persistent volume claims that can exist in the project. |
| <storage-class-name>.storageclass.storage.k8s.io/requests.storage | The sum of storage requests across all persistent volume claims in any state that have a matching storage class cannot exceed this value. |
| <storage-class-name>.storageclass.storage.k8s.io/persistentvolumeclaims | The total number of persistent volume claims with a matching storage class that can exist in the project. |
| ephemeral-storage | The sum of local ephemeral storage requests across all pods in a non-terminal state cannot exceed this value. |
| requests.ephemeral-storage | The sum of ephemeral storage requests across all pods in a non-terminal state cannot exceed this value. |
| limits.ephemeral-storage | The sum of ephemeral storage limits across all pods in a non-terminal state cannot exceed this value. |
| Resource Name | Description |
|---|---|
| pods | The total number of pods in a non-terminal state that can exist in the project. |
| replicationcontrollers | The total number of ReplicationControllers that can exist in the project. |
| resourcequotas | The total number of resource quotas that can exist in the project. |
| services | The total number of services that can exist in the project. |
| services.loadbalancers | The total number of services of type LoadBalancer that can exist in the project. |
| services.nodeports | The total number of services of type NodePort that can exist in the project. |
| secrets | The total number of secrets that can exist in the project. |
| configmaps | The total number of ConfigMap objects that can exist in the project. |
| persistentvolumeclaims | The total number of persistent volume claims that can exist in the project. |
| openshift.io/imagestreams | The total number of imagestreams that can exist in the project. |
9.1.2. Quota scopes
Each quota can have an associated set of scopes. A quota only measures usage for a resource if it matches the intersection of enumerated scopes.
Adding a scope to a quota restricts the set of resources to which that quota can apply. Specifying a resource outside of the allowed set results in a validation error.
| Scope | Description |
|---|---|
| Terminating | Match pods where spec.activeDeadlineSeconds >= 0. |
| NotTerminating | Match pods where spec.activeDeadlineSeconds is nil. |
| BestEffort | Match pods that have best effort quality of service for either cpu or memory. |
| NotBestEffort | Match pods that do not have best effort quality of service for cpu and memory. |

A BestEffort scope restricts a quota to limiting the following resources:

- pods

A NotBestEffort scope restricts a quota to tracking the following resources:

- pods
- memory
- requests.memory
- limits.memory
- cpu
- requests.cpu
- limits.cpu
9.1.3. Quota enforcement
After a resource quota for a project is first created, the project restricts the ability to create any new resources that may violate a quota constraint until it has calculated updated usage statistics.
After a quota is created and usage statistics are updated, the project accepts the creation of new content. When you create or modify resources, your quota usage is incremented immediately upon the request to create or modify the resource.
When you delete a resource, your quota use is decremented during the next full recalculation of quota statistics for the project. A configurable amount of time determines how long it takes to reduce quota usage statistics to their current observed system value.
If project modifications exceed a quota usage limit, the server denies the action, and an appropriate error message is returned to the user explaining the quota constraint violated, and what their currently observed usage statistics are in the system.
9.1.4. Requests versus limits
When allocating compute resources, each container might specify a request and a limit value each for CPU, memory, and ephemeral storage. Quotas can restrict any of these values.
If the quota has a value specified for requests.cpu or requests.memory, then it requires that every incoming container make an explicit request for those resources. If the quota has a value specified for limits.cpu or limits.memory, then it requires that every incoming container specify an explicit limit for those resources.
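For example, with a quota that sets both request and limit values, every incoming container must declare all four values explicitly. The following is a minimal sketch of a pod spec that satisfies such a quota; the pod name and image are illustrative, not part of this guide:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: quota-aware-pod        # illustrative name
spec:
  containers:
  - name: app
    image: registry.example.com/app:latest   # illustrative image
    resources:
      requests:
        cpu: 100m              # charged against requests.cpu
        memory: 128Mi          # charged against requests.memory
      limits:
        cpu: 200m              # charged against limits.cpu
        memory: 256Mi          # charged against limits.memory
```

A pod that omits any of these values is rejected when the quota constrains the corresponding resource.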
9.1.5. Sample resource quota definitions
core-object-counts.yaml

```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: core-object-counts
spec:
  hard:
    configmaps: "10"             # 1
    persistentvolumeclaims: "4"  # 2
    replicationcontrollers: "20" # 3
    secrets: "10"                # 4
    services: "10"               # 5
    services.loadbalancers: "2"  # 6
```

1. The total number of ConfigMap objects that can exist in the project.
2. The total number of persistent volume claims (PVCs) that can exist in the project.
3. The total number of replication controllers that can exist in the project.
4. The total number of secrets that can exist in the project.
5. The total number of services that can exist in the project.
6. The total number of services of type LoadBalancer that can exist in the project.
openshift-object-counts.yaml

```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: openshift-object-counts
spec:
  hard:
    openshift.io/imagestreams: "10"  # 1
```

1. The total number of image streams that can exist in the project.
compute-resources.yaml

```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: compute-resources
spec:
  hard:
    pods: "4"             # 1
    requests.cpu: "1"     # 2
    requests.memory: 1Gi  # 3
    limits.cpu: "2"       # 4
    limits.memory: 2Gi    # 5
```

1. The total number of pods in a non-terminal state that can exist in the project.
2. Across all pods in a non-terminal state, the sum of CPU requests cannot exceed 1 core.
3. Across all pods in a non-terminal state, the sum of memory requests cannot exceed 1Gi.
4. Across all pods in a non-terminal state, the sum of CPU limits cannot exceed 2 cores.
5. Across all pods in a non-terminal state, the sum of memory limits cannot exceed 2Gi.
besteffort.yaml

```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: besteffort
spec:
  hard:
    pods: "1"
  scopes:
  - BestEffort
```
compute-resources-long-running.yaml

```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: compute-resources-long-running
spec:
  hard:
    pods: "4"            # 1
    limits.cpu: "4"      # 2
    limits.memory: "2Gi" # 3
  scopes:
  - NotTerminating       # 4
```

1. The total number of pods in a non-terminal state.
2. Across all pods in a non-terminal state, the sum of CPU limits cannot exceed this value.
3. Across all pods in a non-terminal state, the sum of memory limits cannot exceed this value.
4. Restricts the quota to only matching pods where spec.activeDeadlineSeconds is set to nil. Build pods fall under NotTerminating unless the RestartNever policy is applied.
compute-resources-time-bound.yaml

```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: compute-resources-time-bound
spec:
  hard:
    pods: "2"            # 1
    limits.cpu: "1"      # 2
    limits.memory: "1Gi" # 3
  scopes:
  - Terminating          # 4
```

1. The total number of pods in a terminating state.
2. Across all pods in a terminating state, the sum of CPU limits cannot exceed this value.
3. Across all pods in a terminating state, the sum of memory limits cannot exceed this value.
4. Restricts the quota to only matching pods where spec.activeDeadlineSeconds >= 0. For example, this quota charges for build or deployer pods, but not long running pods like a web server or database.
storage-consumption.yaml

```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: storage-consumption
spec:
  hard:
    persistentvolumeclaims: "10"                                    # 1
    requests.storage: "50Gi"                                        # 2
    gold.storageclass.storage.k8s.io/requests.storage: "10Gi"       # 3
    silver.storageclass.storage.k8s.io/requests.storage: "20Gi"     # 4
    silver.storageclass.storage.k8s.io/persistentvolumeclaims: "5"  # 5
    bronze.storageclass.storage.k8s.io/requests.storage: "0"        # 6
    bronze.storageclass.storage.k8s.io/persistentvolumeclaims: "0"  # 7
    requests.ephemeral-storage: 2Gi                                 # 8
    limits.ephemeral-storage: 4Gi                                   # 9
```

1. The total number of persistent volume claims in a project.
2. Across all persistent volume claims in a project, the sum of storage requested cannot exceed this value.
3. Across all persistent volume claims in a project, the sum of storage requested in the gold storage class cannot exceed this value.
4. Across all persistent volume claims in a project, the sum of storage requested in the silver storage class cannot exceed this value.
5. Across all persistent volume claims in a project, the total number of claims in the silver storage class cannot exceed this value.
6. Across all persistent volume claims in a project, the sum of storage requested in the bronze storage class cannot exceed this value. When this is set to 0, the bronze storage class cannot request storage.
7. Across all persistent volume claims in a project, the total number of claims in the bronze storage class cannot exceed this value. When this is set to 0, the bronze storage class cannot create claims.
8. Across all pods in a non-terminal state, the sum of ephemeral storage requests cannot exceed 2Gi.
9. Across all pods in a non-terminal state, the sum of ephemeral storage limits cannot exceed 4Gi.
9.1.6. Creating a quota
You can create a quota to constrain resource usage in a given project.
Procedure
- Define the quota in a file.
- Use the file to create the quota and apply it to a project:

  ```shell
  $ oc create -f <file> [-n <project_name>]
  ```

  For example:

  ```shell
  $ oc create -f core-object-counts.yaml -n demoproject
  ```
9.1.6.1. Creating object count quotas
You can create an object count quota for all standard namespaced resource types on OpenShift Container Platform, such as BuildConfig and DeploymentConfig objects.
When using a resource quota, an object is charged against the quota upon creation. These types of quotas are useful to protect against exhaustion of resources. The quota can only be created if there are enough spare resources within the project.
Procedure
To configure an object count quota for a resource:
Run the following command:
```shell
$ oc create quota <name> \
    --hard=count/<resource>.<group>=<quota>,count/<resource>.<group>=<quota>  # 1
```

1. The <resource> variable is the name of the resource, and <group> is the API group, if applicable. Use the oc api-resources command for a list of resources and their associated API groups.

For example:

```shell
$ oc create quota test \
    --hard=count/deployments.extensions=2,count/replicasets.extensions=4,count/pods=3,count/secrets=4
```

Example output

```
resourcequota "test" created
```

This example limits the listed resources to the hard limit in each project in the cluster.

Verify that the quota was created:

```shell
$ oc describe quota test
```

Example output

```
Name:                         test
Namespace:                    quota
Resource                      Used  Hard
--------                      ----  ----
count/deployments.extensions  0     2
count/pods                    0     3
count/replicasets.extensions  0     4
count/secrets                 0     4
```
9.1.6.2. Setting resource quota for extended resources
Overcommitment of resources is not allowed for extended resources, so you must specify requests and limits for the same extended resource in a quota. Currently, only quota items with the prefix requests. are allowed for extended resources. The following is an example scenario of how to set a resource quota for the GPU resource nvidia.com/gpu.
Procedure
Determine how many GPUs are available on a node in your cluster. For example:

```shell
# oc describe node ip-172-31-27-209.us-west-2.compute.internal | egrep 'Capacity|Allocatable|gpu'
```

Example output

```
                   openshift.com/gpu-accelerator=true
Capacity:
 nvidia.com/gpu:  2
Allocatable:
 nvidia.com/gpu:  2
  nvidia.com/gpu  0           0
```

In this example, 2 GPUs are available.

Set a quota in the nvidia namespace. In this example, the quota is 1:

```shell
# cat gpu-quota.yaml
```

Example output

```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: gpu-quota
  namespace: nvidia
spec:
  hard:
    requests.nvidia.com/gpu: 1
```

Create the quota:

```shell
# oc create -f gpu-quota.yaml
```

Example output

```
resourcequota/gpu-quota created
```

Verify that the namespace has the correct quota set:

```shell
# oc describe quota gpu-quota -n nvidia
```

Example output

```
Name:                    gpu-quota
Namespace:               nvidia
Resource                 Used  Hard
--------                 ----  ----
requests.nvidia.com/gpu  0     1
```

Define a pod that asks for a single GPU. The following example definition file is called gpu-pod.yaml:

```yaml
apiVersion: v1
kind: Pod
metadata:
  generateName: gpu-pod-
  namespace: nvidia
spec:
  restartPolicy: OnFailure
  containers:
  - name: rhel7-gpu-pod
    image: rhel7
    env:
    - name: NVIDIA_VISIBLE_DEVICES
      value: all
    - name: NVIDIA_DRIVER_CAPABILITIES
      value: "compute,utility"
    - name: NVIDIA_REQUIRE_CUDA
      value: "cuda>=5.0"
    command: ["sleep"]
    args: ["infinity"]
    resources:
      limits:
        nvidia.com/gpu: 1
```

Create the pod:

```shell
# oc create -f gpu-pod.yaml
```

Verify that the pod is running:

```shell
# oc get pods
```

Example output

```
NAME              READY     STATUS      RESTARTS   AGE
gpu-pod-s46h7     1/1       Running     0          1m
```

Verify that the quota Used counter is correct:

```shell
# oc describe quota gpu-quota -n nvidia
```

Example output

```
Name:                    gpu-quota
Namespace:               nvidia
Resource                 Used  Hard
--------                 ----  ----
requests.nvidia.com/gpu  1     1
```

Attempt to create a second GPU pod in the nvidia namespace. This is technically available on the node because it has 2 GPUs:

```shell
# oc create -f gpu-pod.yaml
```

Example output

```
Error from server (Forbidden): error when creating "gpu-pod.yaml": pods "gpu-pod-f7z2w" is forbidden: exceeded quota: gpu-quota, requested: requests.nvidia.com/gpu=1, used: requests.nvidia.com/gpu=1, limited: requests.nvidia.com/gpu=1
```

This Forbidden error message is expected because you have a quota of 1 GPU and this pod tried to allocate a second GPU, which exceeds its quota.
9.1.7. Viewing a quota
You can view usage statistics related to any hard limits defined in a project’s quota by navigating in the web console to the project’s Quota page.
You can also use the CLI to view quota details.
Procedure
Get the list of quotas defined in the project. For example, for a project called demoproject:

```shell
$ oc get quota -n demoproject
```

Example output

```
NAME                 AGE
besteffort           11m
compute-resources    2m
core-object-counts   29m
```

Describe the quota you are interested in, for example the core-object-counts quota:

```shell
$ oc describe quota core-object-counts -n demoproject
```

Example output

```
Name:                   core-object-counts
Namespace:              demoproject
Resource                Used  Hard
--------                ----  ----
configmaps              3     10
persistentvolumeclaims  0     4
replicationcontrollers  3     20
secrets                 9     10
services                2     10
```
9.1.8. Configuring explicit resource quotas
Configure explicit resource quotas in a project request template to apply specific resource quotas in new projects.
Prerequisites
- Access to the cluster as a user with the cluster-admin role.
- Install the OpenShift CLI (oc).
Procedure
Add a resource quota definition to a project request template:
If a project request template does not exist in a cluster:
Create a bootstrap project template and output it to a file called template.yaml:

```shell
$ oc adm create-bootstrap-project-template -o yaml > template.yaml
```

Add a resource quota definition to template.yaml. The following example defines a resource quota named storage-consumption. The definition must be added before the parameters: section in the template:

```yaml
- apiVersion: v1
  kind: ResourceQuota
  metadata:
    name: storage-consumption
    namespace: ${PROJECT_NAME}
  spec:
    hard:
      persistentvolumeclaims: "10"                                    # 1
      requests.storage: "50Gi"                                        # 2
      gold.storageclass.storage.k8s.io/requests.storage: "10Gi"       # 3
      silver.storageclass.storage.k8s.io/requests.storage: "20Gi"     # 4
      silver.storageclass.storage.k8s.io/persistentvolumeclaims: "5"  # 5
      bronze.storageclass.storage.k8s.io/requests.storage: "0"        # 6
      bronze.storageclass.storage.k8s.io/persistentvolumeclaims: "0"  # 7
```

1. The total number of persistent volume claims in a project.
2. Across all persistent volume claims in a project, the sum of storage requested cannot exceed this value.
3. Across all persistent volume claims in a project, the sum of storage requested in the gold storage class cannot exceed this value.
4. Across all persistent volume claims in a project, the sum of storage requested in the silver storage class cannot exceed this value.
5. Across all persistent volume claims in a project, the total number of claims in the silver storage class cannot exceed this value.
6. Across all persistent volume claims in a project, the sum of storage requested in the bronze storage class cannot exceed this value. When this value is set to 0, the bronze storage class cannot request storage.
7. Across all persistent volume claims in a project, the total number of claims in the bronze storage class cannot exceed this value. When this value is set to 0, the bronze storage class cannot create claims.
Create a project request template from the modified template.yaml file in the openshift-config namespace:

```shell
$ oc create -f template.yaml -n openshift-config
```

Note: To include the configuration as a kubectl.kubernetes.io/last-applied-configuration annotation, add the --save-config option to the oc create command.

By default, the template is called project-request.
If a project request template already exists within a cluster:
NoteIf you declaratively or imperatively manage objects within your cluster by using configuration files, edit the existing project request template through those files instead.
List templates in the openshift-config namespace:

```shell
$ oc get templates -n openshift-config
```

Edit an existing project request template:

```shell
$ oc edit template <project_request_template> -n openshift-config
```

Add a resource quota definition, such as the preceding storage-consumption example, into the existing template. The definition must be added before the parameters: section in the template.
If you created a project request template, reference it in the cluster’s project configuration resource:
Access the project configuration resource for editing:
By using the web console:
- Navigate to the Administration → Cluster Settings page.
- Click Configuration to view all configuration resources.
- Find the entry for Project and click Edit YAML.
By using the CLI:
Edit the project.config.openshift.io/cluster resource:

```shell
$ oc edit project.config.openshift.io/cluster
```

Update the spec section of the project configuration resource to include the projectRequestTemplate and name parameters. The following example references the default project request template name, project-request:

```yaml
apiVersion: config.openshift.io/v1
kind: Project
metadata:
  ...
spec:
  projectRequestTemplate:
    name: project-request
```
Verify that the resource quota is applied when projects are created:

Create a project:

```shell
$ oc new-project <project_name>
```

List the project's resource quotas:

```shell
$ oc get resourcequotas
```

Describe the resource quota in detail:

```shell
$ oc describe resourcequotas <resource_quota_name>
```
9.2. Resource quotas across multiple projects
A multi-project quota, defined by a ClusterResourceQuota object, allows quotas to be shared across multiple projects. Resources used in each selected project are aggregated, and that aggregate is used to limit resources across all the selected projects.
This guide describes how cluster administrators can set and manage resource quotas across multiple projects.
9.2.1. Selecting multiple projects during quota creation
When creating quotas, you can select multiple projects based on annotation selection, label selection, or both.
Procedure
To select projects based on annotations, run the following command:

```shell
$ oc create clusterquota for-user \
    --project-annotation-selector openshift.io/requester=<user_name> \
    --hard pods=10 \
    --hard secrets=20
```

This creates the following ClusterResourceQuota object:

```yaml
apiVersion: quota.openshift.io/v1
kind: ClusterResourceQuota
metadata:
  name: for-user
spec:
  quota:                                # 1
    hard:
      pods: "10"
      secrets: "20"
  selector:
    annotations:                        # 2
      openshift.io/requester: <user_name>
    labels: null                        # 3
status:
  namespaces:                           # 4
  - namespace: ns-one
    status:
      hard:
        pods: "10"
        secrets: "20"
      used:
        pods: "1"
        secrets: "9"
  total:                                # 5
    hard:
      pods: "10"
      secrets: "20"
    used:
      pods: "1"
      secrets: "9"
```

1. The ResourceQuotaSpec object that will be enforced over the selected projects.
2. A simple key-value selector for annotations.
3. A label selector that can be used to select projects.
4. A per-namespace map that describes current quota usage in each selected project.
5. The aggregate usage across all selected projects.

This multi-project quota document controls all projects requested by <user_name> using the default project request endpoint. You are limited to 10 pods and 20 secrets.

Similarly, to select projects based on labels, run this command:

```shell
$ oc create clusterresourcequota for-name \
    --project-label-selector=name=frontend \
    --hard=pods=10 --hard=secrets=20
```

This creates the following ClusterResourceQuota object definition:

```yaml
apiVersion: quota.openshift.io/v1
kind: ClusterResourceQuota
metadata:
  creationTimestamp: null
  name: for-name
spec:
  quota:
    hard:
      pods: "10"
      secrets: "20"
  selector:
    annotations: null
    labels:
      matchLabels:
        name: frontend
```
9.2.2. Viewing applicable cluster resource quotas
A project administrator is not allowed to create or modify the multi-project quota that limits his or her project, but the administrator is allowed to view the multi-project quota documents that are applied to his or her project. The project administrator can do this through the AppliedClusterResourceQuota resource.
Procedure
To view quotas applied to a project, run:

```shell
$ oc describe AppliedClusterResourceQuota
```

Example output

```
Name:               for-user
Namespace:          <none>
Created:            19 hours ago
Labels:             <none>
Annotations:        <none>
Label Selector:     <null>
AnnotationSelector: map[openshift.io/requester:<user-name>]
Resource            Used  Hard
--------            ----  ----
pods                1     10
secrets             9     20
```
9.2.3. Selection granularity
Because of the locking consideration when claiming quota allocations, the number of active projects selected by a multi-project quota is an important consideration. Selecting more than 100 projects under a single multi-project quota can have detrimental effects on API server responsiveness in those projects.