Chapter 2. Known issues
This section describes known issues in OpenShift sandboxed containers 1.8.
Increasing container CPU resource limits fails if CPUs are offline
Using container CPU resource limits to increase the number of available CPUs for a pod fails if the requested CPUs are offline. If the functionality is available, you can diagnose CPU resource issues by running the oc rsh <pod> command to access a pod and then running the lscpu command:
$ lscpu
Example output:
CPU(s):                  16
On-line CPU(s) list:     0-12,14,15
Off-line CPU(s) list:    13
The list of offline CPUs is unpredictable and can change from run to run.
To work around this problem, use a pod annotation to request additional CPUs as in the following example:
metadata:
  annotations:
    io.katacontainers.config.hypervisor.default_vcpus: "16"
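For context, a minimal pod manifest carrying this annotation might look like the following sketch. The pod name, image, and runtime class name are illustrative; use the runtime class configured in your cluster:
apiVersion: v1
kind: Pod
metadata:
  name: sandboxed-demo
  annotations:
    io.katacontainers.config.hypervisor.default_vcpus: "16"
spec:
  runtimeClassName: kata
  containers:
  - name: app
    image: registry.access.redhat.com/ubi9/ubi-minimal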
OpenShift sandboxed containers 1.7 and later do not work with OpenShift Container Platform 4.14 and older versions
You must upgrade to OpenShift Container Platform 4.15 or later before installing or upgrading the OpenShift sandboxed containers Operator. For more information, see OpenShift sandboxed containers operator 1.7 is not available and Upgrade to OSC 1.7.0 put running Peer Pods into ContainerCreating status in the Red Hat Knowledgebase.
Secure boot disabled by default for Confidential Containers on Azure
Secure boot is disabled by default for Confidential Containers on Azure. This is a security risk. To work around this problem, set ENABLE_SECURE_BOOT to true when you update the peer pods config map.
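One way to set the value, assuming the peer-pods-cm config map in the openshift-sandboxed-containers-operator namespace referenced elsewhere in this chapter, is a merge patch:
$ oc patch configmap peer-pods-cm \
    -n openshift-sandboxed-containers-operator \
    --type merge -p '{"data":{"ENABLE_SECURE_BOOT":"true"}}'
Depending on your setup, the peer pods components might need to be restarted to pick up the new value.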
PodVM image not deleted when deleting KataConfig on Azure
The pod VM image might not be deleted after you delete the KataConfig custom resource. To work around this problem, use the Azure CLI to check the pod VM gallery for the image and then delete the image if necessary.
Prerequisites
- You have installed and configured the Azure CLI tool.
Procedure
Obtain the Azure resource group of the pod VM by running the following command:
$ RESOURCE_GROUP=$(oc get cm -n openshift-sandboxed-containers-operator peer-pods-cm -o jsonpath='{.data.AZURE_RESOURCE_GROUP}')
List the images in this resource group by running the following command:
$ az image list -g ${RESOURCE_GROUP} --query '[].id' -o tsv
Expected output:
/subscriptions/<...>/resourceGroups/<...>/providers/Microsoft.Compute/images/podvm-image-0.0.2024112013
Delete each pod VM image by running the following command:
$ az image delete --ids /subscriptions/<...>/resourceGroups/<...>/providers/Microsoft.Compute/images/podvm-image-0.0.2024112013
For more information, see Configuring an Azure account.
Increasing the sizeLimit does not expand an ephemeral volume
You cannot use the sizeLimit parameter in the pod specification to expand ephemeral volumes because the volume size defaults to 50% of the memory assigned to the sandboxed container.
Workaround: Change the size by remounting the volume. For example, if the memory assigned to the sandboxed container is 6 GB and the ephemeral volume is mounted at /var/lib/containers, you can increase the size of this volume beyond the 3 GB default by running the following command:
$ mount -o remount,size=4G /var/lib/containers
The mount command must run inside the pod. You can include it in the pod manifest itself, as in the sketch below, or start a shell session in the pod by running oc rsh and execute the mount command there.
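To embed the remount in the manifest, one sketch is to run it before the main workload in the container command. The image and the final command are illustrative, and depending on your security context the container might need additional privileges to remount the volume:
spec:
  containers:
  - name: app
    image: registry.access.redhat.com/ubi9/ubi-minimal
    command: ["/bin/sh", "-c"]
    args:
    - mount -o remount,size=4G /var/lib/containers && exec sleep infinity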
Pod VM image builder on AWS leaves a snapshot behind
The pod VM image builder creates an AMI from a snapshot. Although the AMI is deleted during a proper uninstall, the snapshot itself is not deleted and requires manual deletion, for example with the AWS CLI as sketched below.
This issue occurs with all peer pods versions on AWS.
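A possible cleanup sketch using the AWS CLI: list your own snapshots whose description mentions the pod VM image, then delete the leftover one. The podvm description filter is an assumption; verify the snapshot before deleting it:
$ aws ec2 describe-snapshots --owner-ids self \
    --query 'Snapshots[?contains(Description, `podvm`)].[SnapshotId,Description]' \
    --output text
$ aws ec2 delete-snapshot --snapshot-id <snapshot_id>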
Jira:KATA-3478
Without proper KataConfig deletion before cluster decommission, active pod VMs could remain running
Without the peer pods feature, you can decommission a cluster without uninstalling the OpenShift sandboxed containers Operator. With peer pods, however, pods run outside the cluster worker nodes, on pod VM instances created for each peer pod. If the KataConfig custom resource is not deleted before the cluster is shut down, these pod VM instances are abandoned and never terminated.
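To avoid this, delete the KataConfig custom resource and wait for the deletion to complete before shutting down the cluster, for example:
$ oc delete kataconfig <kataconfig_name>
Deleting the KataConfig custom resource triggers cleanup of the pod VM instances created for peer pods.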
Jira:KATA-3480