Chapter 12. Performing post-upgrade actions

After you have completed the overcloud upgrade, you must perform some post-upgrade configuration to ensure that your environment is fully supported and ready for future operations.

Important

If you run additional overcloud commands after the upgrade from Red Hat OpenStack Platform 16.2 to 17.1, you must consider the following:

  • Overcloud commands that you run after the upgrade must include the YAML files that you created or updated during the upgrade process. For example, to provision overcloud nodes during a scale-up operation, use the /home/stack/tripleo-[stack]-baremetal-deploy.yaml file instead of the /home/stack/templates/overcloud-baremetal-deployed.yaml file, as shown in the example after this list.
  • Include all the options that you passed to the last run of the openstack overcloud upgrade prepare command, except for the system_upgrade.yaml file and the upgrades-environment.yaml file.
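
For example, a node provisioning command for a scale-up operation after the upgrade might look similar to the following sketch. The placeholders are illustrative; replace <stack_name> with your overcloud stack name and <baremetal_deployed> with the output environment file that you want to generate:

    (undercloud) $ openstack overcloud node provision \
        --stack <stack_name> \
        --output /home/stack/templates/<baremetal_deployed>.yaml \
        /home/stack/tripleo-<stack_name>-baremetal-deploy.yaml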

12.1. Upgrading the overcloud images

You must replace your current overcloud images with new versions. The new images ensure that the director can introspect and provision your nodes using the latest version of Red Hat OpenStack Platform software.

Note

You must use the new version of the overcloud images if you redeploy your overcloud. For more information on installing overcloud images, see Installing the overcloud images in Installing and managing Red Hat OpenStack Platform with director.

Prerequisites

  • You have upgraded the undercloud to the latest version.

Procedure

  1. Remove any existing images from the images directory in the stack user’s home directory (/home/stack/images):

    $ rm -rf ~/images/*
  2. Extract the archives:

    $ cd ~/images
    $ for i in /usr/share/rhosp-director-images/overcloud-full-latest-17.1.tar /usr/share/rhosp-director-images/ironic-python-agent-latest-17.1.tar; do tar -xvf $i; done
    $ cd ~
  3. Import the images into director:

    (undercloud) [stack@director images]$ openstack overcloud image upload --image-path /home/stack/images/ --update-existing

    The command completes the following tasks:

    • Converts the image format from QCOW to RAW.
    • Provides status updates about the upload of the image.
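
    Optionally, you can confirm that the new images are available to director. For example, in RHOSP 17.1 the uploaded images are typically stored in the /var/lib/ironic/images directory on the undercloud; adjust the path if your environment differs:

    (undercloud) $ ls -l /var/lib/ironic/images/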

12.2. Updating CPU pinning parameters

You must migrate the CPU pinning configuration from the NovaVcpuPinSet parameter to the following parameters after completing the upgrade to Red Hat OpenStack Platform 17.1:

NovaComputeCpuDedicatedSet
Sets the dedicated (pinned) CPUs.
NovaComputeCpuSharedSet
Sets the shared (unpinned) CPUs.
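
For example, on a Compute node that hosts both pinned and unpinned instances, you might set both parameters in the same environment file. The CPU ranges in this sketch are illustrative; use the cores that match the NUMA topology of your hosts:

    parameter_defaults:
      NovaComputeCpuDedicatedSet: 2-7
      NovaComputeCpuSharedSet: 0-1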

Procedure

  1. Log in to the undercloud as the stack user.
  2. If your Compute nodes support simultaneous multithreading (SMT) but you created instances with the hw:cpu_thread_policy=isolated policy, you must complete one of the following tasks:

    • Create a new flavor that does not set the hw:cpu_thread_policy thread policy and resize the instances with that flavor:

      1. Source your overcloud authentication file:

        $ source ~/overcloudrc
      2. Create a flavor with the default thread policy, prefer:

        (overcloud) $ openstack flavor create <flavor>
        Note

        When you resize an instance, you must use a new flavor. You cannot reuse the current flavor. For more information, see Resizing an instance in the Creating and managing instances guide.

      3. Convert the instances to use the new flavor:

        (overcloud) $ openstack server resize --flavor <flavor> <server>
        (overcloud) $ openstack server resize confirm <server>
      4. Repeat this step for all pinned instances that use the hw:cpu_thread_policy=isolated policy.
    • Migrate instances from the Compute node and disable SMT on the Compute node:

      1. Source your overcloud authentication file:

        $ source ~/overcloudrc
      2. Disable the Compute node from accepting new virtual machines:

        (overcloud) $ openstack compute service list
        (overcloud) $ openstack compute service set <hostname> nova-compute --disable
      3. Migrate all instances from the Compute node. For more information on instance migration, see Migrating virtual machine instances between Compute nodes.
      4. Reboot the Compute node and disable SMT in the BIOS of the Compute node.
      5. Boot the Compute node.
      6. Re-enable the Compute node:

        (overcloud) $ openstack compute service set <hostname> nova-compute --enable
  3. Source the stackrc file:

    $ source ~/stackrc
  4. Edit the environment file that contains the NovaVcpuPinSet parameter.
  5. Migrate the CPU pinning configuration from the NovaVcpuPinSet parameter to NovaComputeCpuDedicatedSet and NovaComputeCpuSharedSet:

    • Migrate the value of NovaVcpuPinSet to NovaComputeCpuDedicatedSet for hosts that were previously used for pinned instances.
    • Migrate the value of NovaVcpuPinSet to NovaComputeCpuSharedSet for hosts that were previously used for unpinned instances.
    • If there is no value set for NovaVcpuPinSet, then all Compute node cores should be assigned to either NovaComputeCpuDedicatedSet or NovaComputeCpuSharedSet, depending on the type of instances you intend to host on the nodes.

    For example, your previous environment file might contain the following pinning configuration:

    parameter_defaults:
      ...
      NovaVcpuPinSet: 1,2,3,5,6,7
      ...

    To migrate the configuration to a pinned configuration, set the NovaComputeCpuDedicatedSet parameter and unset the NovaVcpuPinSet parameter:

    parameter_defaults:
      ...
      NovaComputeCpuDedicatedSet: 1,2,3,5,6,7
      NovaVcpuPinSet: ""
      ...

    To migrate the configuration to an unpinned configuration, set the NovaComputeCpuSharedSet parameter and unset the NovaVcpuPinSet parameter:

    parameter_defaults:
      ...
      NovaComputeCpuSharedSet: 1,2,3,5,6,7
      NovaVcpuPinSet: ""
      ...
    Important

    Ensure that the configuration of either NovaComputeCpuDedicatedSet or NovaComputeCpuSharedSet matches the configuration that was previously defined in NovaVcpuPinSet. To change the configuration of either parameter, or to configure both NovaComputeCpuDedicatedSet and NovaComputeCpuSharedSet, ensure that the Compute nodes with the pinning configuration are not running any instances before you update the configuration.

  6. Save the file.
  7. Run the deployment command to update the overcloud with the new CPU pinning parameters.

    (undercloud) $ openstack overcloud deploy \
        --stack <stack_name> \
        --templates \
        ...
        -e /home/stack/templates/<compute_environment_file>.yaml \
        ...
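
    Optionally, you can verify the rendered configuration on a Compute node after the deployment completes. The following sketch assumes the default containerized configuration path and the tripleo-admin user; adjust both for your environment:

    $ ssh tripleo-admin@<compute_ip> "sudo grep -E 'cpu_(dedicated|shared)_set' /var/lib/config-data/puppet-generated/nova_libvirt/etc/nova/nova.conf"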

12.3. Updating the default machine type for hosts after an upgrade to RHOSP 17

The machine type of an instance is a virtual chipset that provides certain default devices, such as a PCIe graphics card or Ethernet controller. Cloud users can specify the machine type for their instances by using an image with the hw_machine_type metadata property that they require.
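
For example, a cloud user can request a specific machine type by setting the property on an image. The image name and machine type value in this sketch are illustrative:

    (overcloud) $ openstack image set --property hw_machine_type=q35 <image>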

Cloud administrators can use the Compute parameter NovaHWMachineType to configure each Compute node architecture with a default machine type to apply to instances hosted on that architecture. If the hw_machine_type image property is not provided when launching the instance, the default machine type for the host architecture is applied to the instance. Red Hat OpenStack Platform (RHOSP) 17 is based on RHEL 9. The pc-i440fx QEMU machine type is deprecated in RHEL 9; therefore, the default machine type for x86_64 instances that run on RHEL 9 has changed from pc to q35. Based on this change in RHEL 9, the default machine type value for x86_64 has also changed from pc in RHOSP 16 to q35 in RHOSP 17.

In RHOSP 16.2 and later, the Compute service records the machine type of an instance within the system metadata of the instance when it launches the instance. This means that you can change NovaHWMachineType during the lifetime of a RHOSP deployment without affecting the machine type of existing instances.

The Compute service records the machine type of instances that are not in a SHELVED_OFFLOADED state. Therefore, after an upgrade to RHOSP 17 you must manually record the machine type of instances that are in SHELVED_OFFLOADED state, and verify that all instances within the environment or specific cell have had a machine type recorded. After you have updated the system metadata for each instance with the machine types, you can update the NovaHWMachineType parameter to the RHOSP 17 default, q35, without affecting the machine type of existing instances.

Note

From RHOSP 17.0 onwards, Q35 is the default machine type. The Q35 machine type uses PCIe ports. You can manage the number of PCIe port devices by configuring the heat parameter NovaLibvirtNumPciePorts. The number of devices that can attach to a PCIe port is fewer than on instances running on previous versions. If you want to use more devices, you must use the hw_disk_bus=scsi or hw_scsi_model=virtio-scsi image property. For more information, see Metadata properties for virtual hardware.
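
For example, if your instances need to attach more devices, you can raise the number of PCIe ports in a Compute environment file, or set the SCSI-based image properties mentioned in the note. The port count and image name in this sketch are illustrative:

    parameter_defaults:
      NovaLibvirtNumPciePorts: 24

    (overcloud) $ openstack image set --property hw_disk_bus=scsi \
      --property hw_scsi_model=virtio-scsi <image>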

Procedure

  1. Log in to the undercloud as the stack user.
  2. Source the stackrc file.

    $ source ~/stackrc
  3. Log in to a Controller node as the heat-admin user:

    (undercloud)$ metalsmith list
    $ ssh heat-admin@<controller_ip>

    Replace <controller_ip> with the IP address of the Controller node.

  4. Retrieve the list of instances that have no machine type set:

    [heat-admin@<controller_ip> ~]$ sudo podman exec -i -u root nova_api \
      nova-manage libvirt list_unset_machine_type
  5. Check the NovaHWMachineType parameter in the nova-hw-machine-type-upgrade.yaml file for the default machine type for the instance host. The default value for the NovaHWMachineType parameter in RHOSP 16.2 is as follows:

    x86_64=pc-i440fx-rhel7.6.0,aarch64=virt-rhel7.6.0,ppc64=pseries-rhel7.6.0,ppc64le=pseries-rhel7.6.0

  6. Update the system metadata of each instance with the default instance machine type:

    [heat-admin@<controller_ip> ~]$ sudo podman exec -i -u root nova_api \
      nova-manage libvirt update_machine_type <instance_uuid> <machine_type>
    • Replace <instance_uuid> with the UUID of the instance.
    • Replace <machine_type> with the machine type to record for the instance.

      Warning

      If you set the machine type to something other than the machine type of the image on which the instance was booted, the existing instance might fail to boot.

  7. Confirm that the machine type is recorded for all instances:

    [heat-admin@<controller_ip> ~]$ sudo podman exec -i -u root nova_api \
      nova-status upgrade check

    This command returns a warning if an instance is found without a machine type. If you get this warning, repeat this procedure from step 4.

  8. Change the default value of NovaHWMachineType in a Compute environment file to x86_64=q35 and deploy the overcloud.
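
    For example, a minimal Compute environment file for this change might contain only the following; keep any additional architecture defaults that your environment requires, and include the file in your usual deployment command:

    parameter_defaults:
      NovaHWMachineType: x86_64=q35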

Verification

  1. Create an instance that has the default machine type:

    (overcloud)$ openstack server create --flavor <flavor> \
      --image <image> --network <network> \
      --wait defaultMachineTypeInstance
    • Replace <flavor> with the name or ID of a flavor for the instance.
    • Replace <image> with the name or ID of an image that does not set hw_machine_type.
    • Replace <network> with the name or ID of the network to connect the instance to.
  2. Verify that the instance machine type is set to the default value:

    [heat-admin@<controller_ip> ~]$ sudo podman exec -i -u root nova_api \
      nova-manage libvirt get_machine_type <instance_uuid>

    Replace <instance_uuid> with the UUID of the instance.

  3. Hard reboot an instance with a machine type of x86_64=pc-i440fx:

    (overcloud)$ openstack server reboot --hard <instance_uuid>

    Replace <instance_uuid> with the UUID of the instance.

  4. Verify that the instance machine type has not been changed:

    [heat-admin@<controller_ip> ~]$ sudo podman exec -i -u root nova_api \
      nova-manage libvirt get_machine_type <instance_uuid>

    Replace <instance_uuid> with the UUID of the instance.

12.4. Re-enabling fencing in the overcloud

Before you upgraded the overcloud, you disabled fencing as described in Disabling fencing in the overcloud. After you upgrade your environment, re-enable fencing to protect your data if a node fails.

Procedure

  1. Log in to the undercloud host as the stack user.
  2. Source the stackrc undercloud credentials file:

    $ source ~/stackrc
  3. Log in to a Controller node and run the Pacemaker command to re-enable fencing:

    $ ssh tripleo-admin@<controller_ip> "sudo pcs property set stonith-enabled=true"
    • Replace <controller_ip> with the IP address of a Controller node. You can find the IP addresses of your Controller nodes with the openstack server list command.
  4. In the fencing.yaml environment file, set the EnableFencing parameter to true.
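
Optionally, you can confirm that fencing is enabled in the cluster. The pcs syntax in this sketch applies to the pcs version shipped with RHEL 9; older versions use pcs property show instead:

    $ ssh tripleo-admin@<controller_ip> "sudo pcs property config stonith-enabled"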