
Chapter 7. Creating a virtual machine


7.1. Creating virtual machines from instance types

You can simplify virtual machine (VM) creation by using instance types, whether you use the OpenShift Container Platform web console or the CLI to create VMs.

7.1.1. About instance types

An instance type is a reusable object where you can define resources and characteristics to apply to new VMs. You can define custom instance types or use the ones that are included when you install OpenShift Virtualization.

To create a new instance type, you must first create a manifest, either manually or by using the virtctl CLI tool. You then create the instance type object by applying the manifest to your cluster.

OpenShift Virtualization provides two CRDs for configuring instance types:

  • A namespaced object: VirtualMachineInstancetype
  • A cluster-wide object: VirtualMachineClusterInstancetype

These objects use the same VirtualMachineInstancetypeSpec.

7.1.1.1. Required attributes

When you configure an instance type, you must define the cpu and memory attributes. Other attributes are optional.

Note

When you create a VM from an instance type, you cannot override any parameters defined in the instance type.

Because instance types require defined CPU and memory attributes, OpenShift Virtualization always rejects additional requests for these resources when creating a VM from an instance type.
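For example, a VM spec that both references an instance type and directly defines guest CPU topology would be rejected. A sketch of such a conflicting spec (the names are hypothetical):

```yaml
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: conflicting-vm
spec:
  instancetype:
    name: u1.medium
  template:
    spec:
      domain:
        cpu:
          sockets: 2  # conflicts with the CPU defined by the instance type; the VM is rejected
        devices: {}
```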

You can manually create an instance type manifest. For example:

apiVersion: instancetype.kubevirt.io/v1beta1
kind: VirtualMachineInstancetype
metadata:
  name: example-instancetype
spec:
  cpu:
    guest: 1
  memory:
    guest: 128Mi
  • spec.cpu.guest is a required field that specifies the number of vCPUs to allocate to the guest.
  • spec.memory.guest is a required field that specifies an amount of memory to allocate to the guest.
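Because both objects share the same VirtualMachineInstancetypeSpec, a cluster-wide instance type manifest differs only in its kind and in having no namespace. A minimal sketch with a hypothetical name:

```yaml
apiVersion: instancetype.kubevirt.io/v1beta1
kind: VirtualMachineClusterInstancetype
metadata:
  name: example-cluster-instancetype
spec:
  cpu:
    guest: 1
  memory:
    guest: 128Mi
```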

You can create an instance type manifest by using the virtctl CLI utility. For example:

$ virtctl create instancetype --cpu 2 --memory 256Mi

where:

--cpu <value>
Specifies the number of vCPUs to allocate to the guest. Required.
--memory <value>
Specifies an amount of memory to allocate to the guest. Required.
Tip

You can immediately create the object from the new manifest by running the following command:

$ virtctl create instancetype --cpu 2 --memory 256Mi | oc apply -f -

7.1.1.2. Optional attributes

In addition to the required cpu and memory attributes, you can include the following optional attributes in the VirtualMachineInstancetypeSpec:

annotations
List annotations to apply to the VM.
gpus
List vGPUs for passthrough.
hostDevices
List host devices for passthrough.
ioThreadsPolicy
Define an IO threads policy for managing dedicated disk access.
launchSecurity
Configure Secure Encrypted Virtualization (SEV).
nodeSelector
Specify node selectors to control the nodes where this VM is scheduled.
schedulerName
Define a custom scheduler to use for this VM instead of the default scheduler.
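As a sketch of how some of these optional attributes fit into a manifest (the GPU device name and node selector values are hypothetical):

```yaml
apiVersion: instancetype.kubevirt.io/v1beta1
kind: VirtualMachineInstancetype
metadata:
  name: example-gpu-instancetype
spec:
  cpu:
    guest: 4
  memory:
    guest: 8Gi
  gpus:                        # vGPUs for passthrough
    - name: gpu1
      deviceName: nvidia.com/GRID_T4-1Q
  nodeSelector:                # restrict scheduling to matching nodes
    kubernetes.io/arch: amd64
```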

7.1.1.3. Controller revisions

When you create a VM by using an instance type, a ControllerRevision object retains an immutable snapshot of the instance type object. This snapshot locks in resource-related characteristics defined in the instance type object, such as the required guest CPU and memory. The VM status also contains a reference to the ControllerRevision object.

This snapshot is essential for versioning, and ensures that the VM instance created when starting a VM does not change if the underlying instance type object is updated while the VM is running.

7.1.2. Pre-defined instance types

OpenShift Virtualization includes a set of pre-defined instance types called common-instancetypes. Some are specialized for specific workloads and others are workload-agnostic.

These instance type resources are named according to their series, version, and size. The size value follows the . delimiter and ranges from nano to 8xlarge.

Table 7.1. common-instancetypes series comparison
Use case | Series | Characteristics | vCPU to memory ratio | Example resource

Network

N

  • Hugepages
  • Dedicated CPU
  • Isolated emulator threads
  • Requires nodes capable of running DPDK workloads

1:2

n1.medium
  • 4 vCPUs
  • 4GiB Memory

Overcommitted

O

  • Overcommitted memory
  • Burstable CPU performance

1:4

o1.small
  • 1 vCPU
  • 2GiB Memory

Compute Exclusive

CX

  • Hugepages
  • Dedicated CPU
  • Isolated emulator threads
  • vNUMA

1:2

cx1.2xlarge
  • 8 vCPUs
  • 16GiB Memory

General Purpose

U

  • Burstable CPU performance

1:4

u1.medium
  • 1 vCPU
  • 4GiB Memory

Memory Intensive

M

  • Hugepages
  • Burstable CPU performance

1:8

m1.large
  • 2 vCPUs
  • 16GiB Memory
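A VM selects one of these pre-defined instance types by name. For example, the spec.instancetype stanza of a VM that uses u1.medium might look like the following fragment (the VM name is hypothetical):

```yaml
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: example-vm
spec:
  instancetype:
    # With no kind set, the cluster-wide VirtualMachineClusterInstancetype is assumed
    name: u1.medium
```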

7.1.3. Specifying an instance type or preference

You can specify an instance type, a preference, or both to define a set of workload sizing and runtime characteristics for reuse across multiple VMs.

You can specify instance types and preferences by using flags.

Prerequisites

  • You must have an instance type, preference, or both on the cluster.

Procedure

  1. To specify an instance type when creating a VM, use the --instancetype flag. To specify a preference, use the --preference flag. The following example includes both flags:

    $ virtctl create vm --instancetype <my_instancetype> --preference <my_preference>
  2. Optional: To specify a namespaced instance type or preference, include the kind in the value that you pass to the --instancetype or --preference flag. The namespaced instance type or preference must be in the same namespace in which you create the VM. The following example includes flags for a namespaced instance type and a namespaced preference:

    $ virtctl create vm --instancetype virtualmachineinstancetype/<my_instancetype> --preference virtualmachinepreference/<my_preference>

7.1.3.2. Inferring an instance type or preference

Inferring instance types, preferences, or both is enabled by default, and the inferFromVolumeFailure policy of the inferFromVolume attribute is set to Ignore. When inferring from the boot volume, errors are ignored, and the VM is created with the instance type and preference left unset.

However, when flags are applied, the inferFromVolumeFailure policy defaults to Reject. When inferring from the boot volume, errors result in the rejection of the creation of that VM.

You can use the --infer-instancetype and --infer-preference flags to infer which instance type, preference, or both to use to define the workload sizing and runtime characteristics of a VM.

Prerequisites

  • You have installed the virtctl tool.

Procedure

  • To explicitly infer instance types from the volume used to boot the VM, use the --infer-instancetype flag. To explicitly infer preferences, use the --infer-preference flag. The following command includes both flags:

    $ virtctl create vm --volume-import type:pvc,src:my-ns/my-pvc --infer-instancetype --infer-preference
  • To infer an instance type or preference from a volume other than the volume used to boot the VM, use the --infer-instancetype-from and --infer-preference-from flags to specify any of the virtual machine’s volumes. In the following example, the virtual machine boots from volume-a but infers the instance type and preference from volume-b.

    $ virtctl create vm \
      --volume-import=type:pvc,src:my-ns/my-pvc-a,name:volume-a \
      --volume-import=type:pvc,src:my-ns/my-pvc-b,name:volume-b \
      --infer-instancetype-from volume-b \
      --infer-preference-from volume-b

7.1.3.3. Setting the inferFromVolume labels

Use the following labels on your PVC, data source, or data volume to instruct the inference mechanism which instance type, preference, or both to use when trying to boot from a volume.

  • A cluster-wide instance type: instancetype.kubevirt.io/default-instancetype label.
  • A namespaced instance type: instancetype.kubevirt.io/default-instancetype-kind label. Defaults to VirtualMachineClusterInstancetype if left empty.
  • A cluster-wide preference: instancetype.kubevirt.io/default-preference label.
  • A namespaced preference: instancetype.kubevirt.io/default-preference-kind label. Defaults to VirtualMachineClusterPreference if left empty.

Prerequisites

  • You must have an instance type, preference, or both on the cluster.
  • You have installed the OpenShift CLI (oc).

Procedure

  • To apply a label to a data source, use oc label. The following command applies a label that points to a cluster-wide instance type:

    $ oc label DataSource foo instancetype.kubevirt.io/default-instancetype=<my_instancetype>
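Applied as manifest fields instead of with oc label, the same inference labels might look like the following sketch (the data source name and label values are hypothetical):

```yaml
apiVersion: cdi.kubevirt.io/v1beta1
kind: DataSource
metadata:
  name: foo
  labels:
    instancetype.kubevirt.io/default-instancetype: u1.medium
    instancetype.kubevirt.io/default-preference: rhel.9
spec:
  source:
    pvc:
      name: my-pvc
      namespace: my-ns
```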

7.1.4. Creating a VM from an instance type by using the web console

You can create a virtual machine (VM) from an instance type by using the OpenShift Container Platform web console. You can also use the web console to create a VM by copying an existing snapshot or by cloning a VM.

You can create a VM from a list of available bootable volumes. You can add Linux- or Windows-based volumes to the list.

Procedure

  1. In the web console, navigate to Virtualization → Catalog.

    The InstanceTypes tab opens by default.

    Note

    When configuring a downward-metrics device on an IBM Z® system that uses a VM preference, set the spec.preference.name value to rhel.9.s390x or another available preference with the format *.s390x.

  2. Heterogeneous clusters only: To filter the bootable volumes using the options provided, click Architecture.
  3. Select either of the following options:

    • Select a suitable bootable volume from the list. If the list is truncated, click the Show all button to display the entire list.

      Note

      The bootable volume table lists only those volumes in the openshift-virtualization-os-images namespace that have the instancetype.kubevirt.io/default-preference label.

      • Optional: Click the star icon to designate a bootable volume as a favorite. Starred bootable volumes appear first in the volume list.
    • Click Add volume to upload a new volume or to use an existing persistent volume claim (PVC), a volume snapshot, or a containerDisk volume. Click Save.

      Logos of operating systems that are not available in the cluster are shown at the bottom of the list. You can add a volume for the required operating system by clicking the Add volume link.

      In addition, there is a link to the Create a Windows bootable volume quick start. The same link appears in a popover if you hover the pointer over the question mark icon next to the Select volume to boot from line.

      Immediately after you install the environment or when the environment is disconnected, the list of volumes to boot from is empty. In that case, three operating system logos are displayed: Windows, RHEL, and Linux. You can add a new volume that meets your requirements by clicking the Add volume button.

  4. Click an instance type tile and select the resource size appropriate for your workload. You can select huge pages for Red Hat-provided instance types of the M and CX series. Huge page options are identified by names that end with 1gi.
  5. Optional: Choose the virtual machine details, including the VM’s name, that apply to the volume you are booting from:

    • For a Linux-based volume, follow these steps to configure SSH:

      1. If you have not already added a public SSH key to your project, click the edit icon beside Authorized SSH key in the VirtualMachine details section.
      2. Select one of the following options:

        • Use existing: Select a secret from the secrets list.
        • Add new: Follow these steps:

          1. Browse to the public SSH key file or paste the file in the key field.
          2. Enter the secret name.
          3. Optional: Select Automatically apply this key to any new VirtualMachine you create in this project.
      3. Click Save.
    • For a Windows volume, follow either of these sets of steps to configure sysprep options:

      • If you have not already added sysprep options for the Windows volume, follow these steps:

        1. Click the edit icon beside Sysprep in the VirtualMachine details section.
        2. Add the Autoattend.xml answer file.
        3. Add the Unattend.xml answer file.
        4. Click Save.
      • If you want to use existing sysprep options for the Windows volume, follow these steps:

        1. Click Attach existing sysprep.
        2. Enter the name of the existing sysprep Unattend.xml answer file.
        3. Click Save.
  6. Optional: If you are creating a Windows VM, you can mount a Windows driver disk:

    1. Click the Customize VirtualMachine button.
    2. On the VirtualMachine details page, click Storage.
    3. Select the Mount Windows drivers disk checkbox.
  7. Optional: Click View YAML & CLI to view the YAML file. Click CLI to view the CLI commands. You can also download or copy either the YAML file contents or the CLI commands.
  8. Click Create VirtualMachine.

Result

After the VM is created, you can monitor the status on the VirtualMachine details page.

7.1.5. Changing the instance type for a VM

As a cluster administrator or VM owner, you might want to change the instance type for an existing VM for the following reasons:

  • If a VM’s workload has increased, you might change the instance type to one with more CPU, more memory, or specific hardware resources, to prevent performance bottlenecks.
  • If you are using specialized workloads, you might switch to a different instance type to improve performance, as some instance types are optimized for specific use cases.

You can use the OpenShift Container Platform web console or the OpenShift CLI (oc) to change the instance type for an existing VM.

7.1.5.1. Changing the instance type of a VM by using the web console

You can change the instance type associated with a running virtual machine (VM) by using the web console. The change takes effect immediately.

Prerequisites

  • You created the VM by using an instance type.

Procedure

  1. In the OpenShift Container Platform web console, click Virtualization → VirtualMachines.
  2. Select a VM to open the VirtualMachine details page.
  3. Click the Configuration tab.
  4. On the Details tab, click the instance type text to open the Edit Instancetype dialog. For example, click 1 CPU | 2 GiB Memory.
  5. Edit the instance type by using the Series and Size lists.

    1. Select an item from the Series list to show the relevant sizes for that series. For example, select General Purpose.
    2. Select the VM’s new instance type from the Size list. For example, select medium: 1 CPUs, 4Gi Memory, which is available in the General Purpose series.
  6. Click Save.

Verification

  1. Click the YAML tab.
  2. Click Reload.
  3. Review the VM YAML to confirm that the instance type changed.

7.1.5.2. Changing the instance type of a VM by using the CLI

To change the instance type of a VM, change the name field in the VM spec. This triggers the update logic, which ensures that a new, immutable controller revision snapshot is taken of the new resource configuration.

Prerequisites

  • You have installed the OpenShift CLI (oc).
  • You created the VM by using an instance type, or have administrator privileges for the VM that you want to modify.

Procedure

  1. Stop the VM.
  2. Run the following command, and replace <vm_name> with the name of your VM, and <new_instancetype> with the name of the instance type you want to change to:

    $ oc patch vm/<vm_name> --type merge -p '{"spec":{"instancetype":{"name": "<new_instancetype>"}}}'

Verification

  • Check the controller revision reference in the updated VM status field. Run the following command and verify that the revision name is updated in the output:

    $ oc get vms/<vm_name> -o json | jq .status.instancetypeRef

    Example output:

    {
      "controllerRevisionRef": {
        "name": "vm-cirros-csmall-csmall-3e86e367-9cd7-4426-9507-b14c27a08671-2"
      },
      "kind": "VirtualMachineInstancetype",
      "name": "csmall"
    }
  • Optional: Check that the VM instance is running the new configuration defined in the latest controller revision. For example, if you updated the instance type to use 2 vCPUs instead of 1, run the following command and check the output:

    $ oc get vmi/<vm_name> -o json | jq .spec.domain.cpu

    Example output that verifies that the revision uses 2 vCPUs:

    {
      "cores": 1,
      "model": "host-model",
      "sockets": 2,
      "threads": 1
    }

7.2. Creating virtual machines from templates

You can create virtual machines (VMs) from Red Hat templates by using the OpenShift Container Platform web console.

7.2.1. About VM templates

You can use VM templates to help you easily create VMs.

Expedite creation with boot sources

You can expedite VM creation by using templates that have an available boot source. Templates with a boot source are labeled Available boot source if they do not have a custom label.

Templates without a boot source are labeled Boot source required. See Managing automatic boot source updates for details.

Customize before starting the VM

You can customize the disk source and VM parameters before you start the VM.

Note

If you copy a VM template with all its labels and annotations, your version of the template is marked as deprecated when a new version of the Scheduling, Scale, and Performance (SSP) Operator is deployed. You can remove this designation. See Removing a deprecated designation from a customized VM template by using the web console.

Single-node OpenShift
Due to differences in storage behavior, some templates are incompatible with single-node OpenShift. To ensure compatibility, do not set the evictionStrategy field for templates or VMs that use data volumes or storage profiles.
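For reference, evictionStrategy is a field of the VM template spec. On single-node OpenShift, leave it unset rather than configuring something like the following fragment:

```yaml
spec:
  template:
    spec:
      # Do not set this on single-node OpenShift for VMs that use
      # data volumes or storage profiles:
      evictionStrategy: LiveMigrate
```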

7.2.2. Creating a VM from a template

You can create a virtual machine (VM) from a template with an available boot source by using the OpenShift Container Platform web console. You can customize template or VM parameters, such as data sources, Cloud-init, or SSH keys, before you start the VM.

You can choose between two views in the web console to create the VM:

  • A virtualization-focused view, which provides a concise list of virtualization-related options at the top of the view
  • A general view, which provides access to the various web console options, including Virtualization

Procedure

  1. From the OpenShift Container Platform web console, choose your view:

    • For a virtualization-focused view, select Administrator → Virtualization → Catalog.
    • For a general view, navigate to Virtualization → Catalog.
  2. Click the Template catalog tab.
  3. Click the Boot source available checkbox to filter templates with boot sources. The catalog displays the default templates.
  4. Heterogeneous clusters only: To filter the search results to show templates associated with a particular architecture, click Architecture type.
  5. Click All templates to view the available templates for your filters.

    • To focus on particular templates, enter the keyword in the Filter by keyword field.
    • Choose a template project from the All projects dropdown menu, or view all projects.
  6. Click a template tile to view its details.

    • Optional: If you are using a Windows template, you can mount a Windows driver disk by selecting the Mount Windows drivers disk checkbox.
    • If you do not need to customize the template or VM parameters, click Quick create VirtualMachine to create a VM from the template.
    • If you need to customize the template or VM parameters, do the following:

      1. Click Customize VirtualMachine. The Customize and create VirtualMachine page displays the Overview, YAML, Scheduling, Environment, Network interfaces, Disks, Scripts, and Metadata tabs.
      2. Click the Scripts tab to edit the parameters that must be set before the VM boots, such as Cloud-init, SSH key, or Sysprep (Windows VM only).
      3. Optional: Click the Start this virtualmachine after creation (Always) checkbox.
      4. Click Create VirtualMachine.

        The VirtualMachine details page displays the provisioning status.

7.2.2.1. Removing a deprecated designation from a customized VM template by using the web console

You can customize an existing virtual machine (VM) template by modifying the VM or template parameters, such as data sources, cloud-init, or SSH keys, before you start the VM. If you customize a template by copying it and including all of its labels and annotations, the customized template is marked as deprecated when a new version of the Scheduling, Scale, and Performance (SSP) Operator is deployed.

You can remove the deprecated designation from the customized template.

Procedure

  1. Navigate to Virtualization → Templates in the web console.
  2. From the list of VM templates, click the template marked as deprecated.
  3. Click the pencil icon beside Labels.
  4. Remove the following two labels:

    • template.kubevirt.io/type: "base"
    • template.kubevirt.io/version: "version"
  5. Click Save.
  6. Click the pencil icon beside the number of existing Annotations.
  7. Remove the following annotation:

    • template.kubevirt.io/deprecated
  8. Click Save.

7.2.3.1. Creating a custom VM template in the web console

You can create a virtual machine template by editing a YAML file example in the OpenShift Container Platform web console.

Procedure

  1. In the web console, click Virtualization → Templates in the side menu.
  2. Optional: Use the Project drop-down menu to change the project associated with the new template. All templates are saved to the openshift project by default.
  3. Click Create Template.
  4. Specify the template parameters by editing the YAML file.
  5. Click Create.

    The template is displayed on the Templates page.

  6. Optional: Click Download to download and save the YAML file.

7.2.3.2. Enabling dedicated resources for a VM template

You can enable dedicated resources for a virtual machine (VM) template in the OpenShift Container Platform web console. VMs that are created from this template will be scheduled with dedicated resources.

Procedure

  1. In the OpenShift Container Platform web console, click Virtualization → Templates in the side menu.
  2. Select the template that you want to edit to open the Template details page.
  3. On the Scheduling tab, click the edit icon beside Dedicated Resources.
  4. Select Schedule this workload with dedicated resources (guaranteed policy).
  5. Click Save.

You can configure IBM® Secure Execution virtual machines (VMs) on IBM Z® and IBM® LinuxONE.

IBM® Secure Execution for Linux is an s390x security technology introduced with IBM® z15 and IBM® LinuxONE III. It protects the data of workloads that run in a KVM guest from being inspected or modified by the server environment.

Hardware administrators, KVM administrators, and KVM code cannot access data in an IBM® Secure Execution guest VM.

To enable IBM® Secure Execution virtual machines (VMs) on IBM Z® and IBM® LinuxONE on the compute nodes of your cluster, you must ensure that you meet the prerequisites and complete the following steps.

Prerequisites

  • Your cluster has logical partition (LPAR) nodes running on IBM® z15 or later, or IBM® LinuxONE III or later.
  • You have IBM® Secure Execution workloads available to run on the cluster.
  • You have installed the OpenShift CLI (oc).

Procedure

  1. To run IBM® Secure Execution VMs, you must add the prot_virt=1 kernel parameter for each compute node. To enable all compute nodes, create a file named secure-execution.yaml that contains the following machine config manifest:

    apiVersion: machineconfiguration.openshift.io/v1
    kind: MachineConfig
    metadata:
      name: secure-execution
      labels:
        machineconfiguration.openshift.io/role: worker
    spec:
      kernelArguments:
        - prot_virt=1

    where:

    prot_virt=1
    Specifies that the ultravisor can store memory security information.
  2. Apply the changes by running the following command:

    $ oc apply -f secure-execution.yaml

    The Machine Config Operator (MCO) applies the changes and reboots the nodes in a controlled rollout.

Before launching an IBM® Secure Execution VM on IBM Z® and IBM® LinuxONE, you must add the launchSecurity parameter to the VM manifest. Otherwise, the VM does not start correctly because it does not have access to the devices.

You can launch an IBM® Secure Execution VM on IBM Z® and IBM® LinuxONE by using the command-line interface.

To launch IBM® Secure Execution VMs, you must include the launchSecurity parameter in the VirtualMachine manifest. The rest of the VM manifest depends on your setup.

Procedure

  • Apply a VirtualMachine manifest similar to the following to the cluster:

    apiVersion: kubevirt.io/v1
    kind: VirtualMachine
    metadata:
      labels:
        kubevirt.io/vm: f41-se
      name: f41-se
    spec:
      runStrategy: Always
      template:
        metadata:
          labels:
            kubevirt.io/vm: f41-se
        spec:
          domain:
            launchSecurity: {}
            devices:
              disks:
              - disk:
                  bus: virtio
                name: rootfs
            machine:
              type: ""
            resources:
              requests:
                memory: 4Gi
          terminationGracePeriodSeconds: 0
          volumes:
            - name: rootfs
              dataVolume:
                name: f41-se

    where:

    spec.template.spec.domain.launchSecurity

    Enables hardware-based memory encryption for the guest.

    Note

    Because the memory of the VM is protected, you cannot live migrate IBM® Secure Execution VMs. The VMs can only be migrated offline.

You can launch an IBM® Secure Execution VM on IBM Z® and IBM® LinuxONE by using a common instance type.

Prerequisites

  • You have followed the procedure described in "Creating a VM from an instance type by using the web console" and performed the required steps.
  • You are using an IBM® Secure Execution enabled VM image.

Procedure

  1. Navigate to Virtualization → Catalog in the web console.
  2. Click the Customize VirtualMachine button.
  3. Click the YAML tab, and include the launchSecurity: {} parameter in the YAML.

    spec:
      template:
        spec:
          domain:
            launchSecurity: {}
  4. Click Save.
  5. Click Create VirtualMachine.

You can create a bootable and encrypted IBM Secure Execution VM image for Red Hat Enterprise Linux (RHEL) on IBM Z and IBM LinuxONE.

Prerequisites

  • You are using an IBM® Secure Execution enabled VM image.

Procedure

  1. On a trusted instance, create the install.ks kickstart file in the /var/lib/libvirt/images/ directory with the following content:

    text
    lang en_US.UTF-8
    keyboard us
    network --bootproto=dhcp
    rootpw --plaintext <password>
    timezone <>
    firewall --enabled
    selinux --enforcing
    bootloader --location=mbr
    reboot
    
    # Wipe and partition the disk
    clearpart --all --initlabel
    zerombr
    
    # /boot gets encrypted on post reboot
    part /boot --fstype ext4 --size=512 --label=boot
    # Root (/) is LUKS-encrypted
    part / --fstype xfs --size=3000 --pbkdf=pbkdf2 --encrypted --passphrase <passphrase>
    # SE (/se) Non Encrypted for encrypted boot image.
    part /se --fstype xfs --size=512 --label=se
    #Packages
    %packages
    @core
    dracut
    s390-tools
    %end
  2. Create a qcow2 disk image for the VM by running the following command:

    [trusted instance ~]$ qemu-img create -f qcow2 <path_to_qcow2_image> <size>G
  3. Run the virt-install command with the following parameters:

    [trusted instance ~]$ virt-install \
        --name <guest_vm_name> \
        --memory 4096 --vcpus 2 \
        --disk path=<path_to_qcow2_image>,format=qcow2,bus=virtio,cache=none \
        --location <path_to_os> \
        --initrd-inject=<path_to_kickstart_file> \
        --extra-args="inst.ks=file:/<kickstart_file_name> console=ttyS0 inst.text inst.noninteractive" \
        --os-variant=<os_variant> \
        --launchSecurity type=s390-pv \
        --graphics none
  4. Run the virsh start command to access the system console.
  5. Run the sudo -s command to gain root privileges.
  6. Generate keyfiles for the root and the boot partition by running the following commands:

    [secure guest ~]$ mkdir -p /etc/luks
    [secure guest ~]$ chmod 700 /etc/luks
    [secure guest ~]$ dd if=/dev/urandom of=/etc/luks/root_keyfile.bin bs=1024 count=4
    [secure guest ~]$ dd if=/dev/urandom of=/etc/luks/boot_keyfile.bin bs=1024 count=4
    [secure guest ~]$ cryptsetup luksAddKey <root_partition_device> /etc/luks/root_keyfile.bin --pbkdf pbkdf2
  7. Obtain the LUKS device name and UUID by running the following command:

    $ lsblk -f
  8. Rename the existing fstab file to /etc/fstab_bak.
  9. Create new crypttab and fstab files similar to the following examples:

    Example crypttab file:

    # <name>  <device>                                     <keyfile>                    <options>
    root      UUID=9cb04587-a670-458a-97eb-52fc0f4008ae    /etc/luks/root_keyfile.bin   luks

    Example fstab file:

    /dev/mapper/root  /  xfs  defaults  0 1
  10. Add the SE boot filesystem entry into the /etc/fstab file by running the following command:

    [secure guest ~]$ grep '/se' /etc/fstab_bak >> /etc/fstab
  11. Add entries to the initramfs by running the following commands:

    [secure guest ~]$ cat > /etc/dracut.conf.d/10-lukskey.conf <<'EOF'
    install_items+=" /etc/luks/root_keyfile.bin /etc/luks/boot_keyfile.bin "
    EOF
    [secure guest ~]$ dracut -f --regenerate-all
  12. Verify that the key files are present in initramfs by running the following command:

    [secure guest ~]$ lsinitrd /boot/initramfs-$(uname -r).img | grep -i luks
  13. Encrypt the /boot volume with LUKS.

    1. Change into the boot directory by running the following command:

      [secure guest ~]$ cd /boot
    2. Back up the existing boot volume content by running the following commands:

      [secure guest /boot ~]$ tar -cf /root/boot_backup.tar .
      [secure guest /boot ~]$ cd
      [secure guest ~]$ umount /boot
    3. Encrypt the boot volume by running the following commands:

      [secure guest ~]$ cryptsetup -q luksFormat <boot_partition> --key-file /etc/luks/boot_keyfile.bin
      [secure guest ~]$ cryptsetup luksOpen <boot_partition> boot --key-file /etc/luks/boot_keyfile.bin
    4. Create the file system by running the following command:

      [secure guest ~]$ mke2fs -t ext4 /dev/mapper/boot
    5. Obtain the boot UUID by running the following command:

      [secure guest ~]$ blkid -s UUID -o value <boot_partition>
    6. Add the boot partition with the key file to /etc/crypttab by running the following command:

      [secure guest ~]$ echo "boot <UUID> /etc/luks/boot_keyfile.bin luks" >> /etc/crypttab
    7. Add the mount entry to the fstab file by running the following command:

      [secure guest ~]$ echo "/dev/mapper/boot /boot ext4 defaults 1 2" >> /etc/fstab
    8. Mount the boot volume by running the following command:

      [secure guest ~]$ mount /dev/mapper/boot /boot
    9. Change into the boot directory by running the following command:

      [secure guest ~]$ cd /boot
    10. Restore the boot backup file by running the following command:

      [secure guest /boot ~]$ tar -xvf /root/boot_backup.tar
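    Substeps 5 to 7 above can be combined into a single shell sketch. This is a non-authoritative illustration: scratch files stand in for /etc/crypttab and /etc/fstab so the snippet is self-contained, and the UUID value is a placeholder for the real blkid output.

    ```shell
    # Sketch of substeps 5-7: look up the UUID, then append the crypttab
    # and fstab entries. Scratch files stand in for the real /etc files.
    crypttab="$(mktemp)"; fstab="$(mktemp)"
    # On the guest: uuid="$(blkid -s UUID -o value <boot_partition>)"
    uuid="0c1a2b3c-placeholder"
    echo "boot ${uuid} /etc/luks/boot_keyfile.bin luks" >> "${crypttab}"
    echo "/dev/mapper/boot /boot ext4 defaults 1 2" >> "${fstab}"
    cat "${crypttab}" "${fstab}"
    ```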
  14. Set up SSH key login for the local user and disable password login and root login.
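    One way to implement this step is an sshd drop-in file, sketched below. The drop-in file name is illustrative, and a scratch directory stands in for /etc/ssh/sshd_config.d/ so the snippet is self-contained.

    ```shell
    # Hypothetical sketch of step 14: disable password and root login
    # via an sshd drop-in. A scratch dir stands in for /etc/ssh/sshd_config.d/.
    conf_dir="$(mktemp -d)"
    cat > "${conf_dir}/50-hardening.conf" <<'EOF'
    PasswordAuthentication no
    PermitRootLogin no
    PubkeyAuthentication yes
    EOF
    # On the guest, also install the local user's public key (for example
    # with ssh-copy-id) and reload sshd afterwards: systemctl reload sshd
    grep -x 'PermitRootLogin no' "${conf_dir}/50-hardening.conf"
    ```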
  15. Apply security hardening to the VM.

    1. To disable login on consoles by disabling serial and virtual TTYs, run the following commands:

      [secure guest ~]$ mkdir -p /etc/systemd/system/serial-getty@.service.d
      [secure guest ~]$ echo -e "[Unit]\nConditionKernelCommandLine=allowlocallogin" | tee /etc/systemd/system/serial-getty@.service.d/disable.conf
      [secure guest ~]$ mkdir -p /etc/systemd/system/autovt@.service.d
      [secure guest ~]$ echo -e "[Unit]\nConditionKernelCommandLine=allowlocallogin" | tee /etc/systemd/system/autovt@.service.d/disable.conf
    2. Disable debug, emergency, and rescue shells by running the following commands:

      [secure guest ~]$ systemctl mask emergency.service
      [secure guest ~]$ systemctl mask emergency.target
      [secure guest ~]$ systemctl mask rescue.service
      [secure guest ~]$ systemctl mask rescue.target
    3. Disable the virtio-rng device by running the following command:

      [secure guest ~]$ echo "blacklist virtio-rng" | tee /etc/modprobe.d/virtio-rng.conf
  16. Enable IBM Secure Execution for the guest.

    1. Copy the current command line to a file by running the following command:

      [secure guest ~]$ cat /proc/cmdline > parmfile
    2. Append the following parameters to the parmfile:

      loglevel=0 systemd.show_status=0 panic=0 crashkernel=196M swiotlb=262144
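      Because /proc/cmdline is a single line, the parameters must be appended on that same line. A self-contained sketch (the initial cmdline content below is a placeholder):

      ```shell
      # Sketch: append the Secure Execution parameters to a one-line parmfile.
      # The starting content is a placeholder; on the guest the file was
      # created from /proc/cmdline in the previous substep.
      parmfile="$(mktemp)"
      printf 'root=/dev/mapper/root console=ttyS0' > "${parmfile}"
      sed -i 's/$/ loglevel=0 systemd.show_status=0 panic=0 crashkernel=196M swiotlb=262144/' "${parmfile}"
      cat "${parmfile}"
      ```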
    3. Generate the IBM Secure Execution image on the /se partition by running the following command:

      [secure guest ~]$ genprotimg -i <image> \
                                   -r <ramdisk> \
                                   -p <parmfile> \
                                   -k </path/to/host-key-doc.crt> \
                                   --cert <ibm_signkey>  \
                                   -o /se/secure-linux.img

      where:

      <image>
      Specifies the original guest kernel image.
      <ramdisk>
      Specifies the original initial RAM file system.
      <parmfile>
      Specifies the file that contains the kernel parameters.
      </path/to/host-key-doc.crt>
      Specifies the public host key document.
      <ibm_signkey>
      Specifies the IBM Z® signing-key certificate and the DigiCert intermediate certificate for the verification of the host key documents.
    4. Update the boot configuration by running the following command:

      [secure guest ~]$ zipl -i /se/secure-linux.img -t /se
    5. Reboot the VM by running the following command:

      [secure guest ~]$ reboot
    6. Verify that the guest VM is secure by running the following command:

      [secure guest ~]$ cat /sys/firmware/uv/prot_virt_guest

      Example output:

      1

      The value of this attribute is 1 for Linux instances that detect their environment as consistent with that of a secure host. For other instances, the value is 0.

7.4. Creating a license-compliant AWS EC2 Windows VM

If you are running Windows virtual machines (VMs) on OpenShift Container Platform hosts, such as AMD64 bare metal EC2 instances with Amazon Web Services (AWS) Windows License Included (LI) enabled, you must ensure that any VMs you create are compliant with licensing requirements.

When you configure your Windows VMs correctly, they activate automatically with the AWS Key Management Service (KMS), and run using optimized drivers for the underlying bare-metal hardware. Proper configuration also ensures that billing is correct.

If you do not configure your Windows VMs so that they are license-compliant, they might fail to activate, suffer degraded system performance due to sub-optimal CPU pinning, and risk failing a licensing audit.

You can create license-compliant Windows virtual machines (VMs) by enabling the dedicatedCpuPlacement attribute. This attribute is enabled by default on instance types from the d1 family. In the OpenShift Container Platform web console, you can create a compliant VM by selecting from a list of available bootable volumes.

Procedure

  1. In the OpenShift Container Platform web console, go to Virtualization → Catalog. The InstanceTypes tab opens by default.
  2. Click Add volume to create a Windows boot source. You can create a Windows boot source by uploading a new volume or by using an existing persistent volume claim (PVC), a volume snapshot, or a containerDisk volume.
  3. In the Volume metadata section, select a preference with a name that begins with windows and is followed by the Windows version of your choice. For example, windows.11.virtio. Click Save.
  4. Select a bootable volume from the list. If the list is truncated, click Show all to display the entire list. The bootable volume table contains the previously uploaded boot source.
  5. In the User provided tab, select an instance type with a name that begins with d1. For example, d1.2xmedium for a Windows 11 VM.
  6. Optional: You can mount a Windows driver disk by completing the following steps:

    1. Click Customize VirtualMachine.
    2. On the VirtualMachine details page, click Storage.
    3. Select the Mount Windows drivers disk checkbox.
  7. Click Create VirtualMachine.
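
The dedicated CPU placement that the d1 instance types provide can also be expressed directly in a VirtualMachine manifest. The following is a minimal, non-authoritative sketch: the VM name and CPU/memory sizing are illustrative, and the Windows boot source and preference selected in the procedure above are omitted for brevity.

```yaml
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: windows-11-vm                   # illustrative name
spec:
  runStrategy: Always
  template:
    spec:
      domain:
        cpu:
          cores: 2                      # illustrative sizing
          dedicatedCpuPlacement: true   # pins vCPUs to dedicated host CPUs
        memory:
          guest: 8Gi                    # illustrative sizing
        devices: {}
```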