Chapter 11. Performing a minor update of the RHOSP overcloud with director Operator

After you update the openstackclient pod, update the overcloud by running the overcloud and container image preparation deployments, updating your nodes, and running the overcloud update converge deployment. During a minor update, the control plane API is available.

A minor update of your Red Hat OpenStack Platform (RHOSP) environment involves updating the RPM packages and containers on the overcloud nodes. You might also need to update the configuration of some services. The data plane and control plane are fully available during the minor update. You must complete each of the following steps to update your RHOSP environment:

  1. Prepare your RHOSP environment for the minor update.
  2. Optional: Update the ovn-controller container.
  3. Update Controller nodes and composable nodes that contain Pacemaker services.
  4. Update Compute nodes.
  5. Update Red Hat Ceph Storage nodes.
  6. Update the Red Hat Ceph Storage cluster.
  7. Reboot the overcloud nodes.

11.1. Preparing director Operator for a minor update

To prepare your Red Hat OpenStack Platform (RHOSP) environment to perform a minor update with director Operator (OSPdO), complete the following tasks:

  1. Lock the RHOSP environment to a Red Hat Enterprise Linux (RHEL) release.
  2. Update RHOSP repositories.
  3. Update the container image preparation file.
  4. Disable fencing in the overcloud.

11.1.1. Locking the RHOSP environment to a RHEL release

Red Hat OpenStack Platform (RHOSP) 17.1 is supported on Red Hat Enterprise Linux (RHEL) 9.2. Before you perform the update, lock the overcloud repositories to the RHEL 9.2 release to avoid upgrading the operating system to a newer minor release.

Procedure

  1. Copy the overcloud subscription management environment file, rhsm.yaml, to openstackclient:

    $ oc cp rhsm.yaml openstackclient:/home/cloud-admin/rhsm.yaml
  2. Access the remote shell for the openstackclient pod:

    $ oc rsh openstackclient
  3. Open the rhsm.yaml file and check if your subscription management configuration includes the rhsm_release parameter. If the rhsm_release parameter is not present, add it and set it to 9.2:

    parameter_defaults:
      RhsmVars:
        ...
        rhsm_username: "myusername"
        rhsm_password: "p@55w0rd!"
        rhsm_org_id: "1234567"
        rhsm_pool_ids: "1a85f9223e3d5e43013e3d6e8ff506fd"
        rhsm_method: "portal"
        rhsm_release: "9.2"
  4. Save the rhsm.yaml file.
  5. Create a playbook named set_release.yaml that contains a task to lock the operating system version to RHEL 9.2 on all nodes:

    - hosts: all
      gather_facts: false
      tasks:
        - name: set release to 9.2
          command: subscription-manager release --set=9.2
          become: true
  6. Run the set_release.yaml playbook on the openstackclient pod:

    $ ansible-playbook -i /home/cloud-admin/ctlplane-ansible-inventory /home/cloud-admin/set_release.yaml --limit Controller,Compute

    Use the --limit option to apply the playbook to your RHOSP nodes only, for example the Controller and Compute roles. Do not run this playbook against Red Hat Ceph Storage nodes because you might have a different subscription for these nodes.

    Note

    To manually lock a node to a version, log in to the node and run the subscription-manager release command:

    $ sudo subscription-manager release --set=9.2
  7. Exit the remote shell for the openstackclient pod:

    $ exit
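If you want to confirm the release lock, you can check the configured release on the nodes. The following Ansible ad hoc command is a minimal, optional sketch that reuses the same inventory and host groups as the playbook and only reads the current setting:

    $ oc rsh openstackclient
    $ ansible -i /home/cloud-admin/ctlplane-ansible-inventory Controller,Compute -b -m command -a "subscription-manager release --show"
    $ exit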

11.1.2. Updating RHOSP repositories

Update your repositories to use Red Hat OpenStack Platform (RHOSP) 17.1.

Procedure

  1. Open the rhsm.yaml file and update the rhsm_repos parameter to the correct repository versions:

    parameter_defaults:
      RhsmVars:
        rhsm_repos:
          - rhel-9-for-x86_64-baseos-eus-rpms
          - rhel-9-for-x86_64-appstream-eus-rpms
          - rhel-9-for-x86_64-highavailability-eus-rpms
          - openstack-17.1-for-rhel-9-x86_64-rpms
          - fast-datapath-for-rhel-9-x86_64-rpms
  2. Save the rhsm.yaml file.
  3. Access the remote shell for the openstackclient pod:

    $ oc rsh openstackclient
  4. Create a playbook named update_rhosp_repos.yaml that contains a task to set the repositories to RHOSP 17.1 on all nodes:

    - hosts: all
      gather_facts: false
      tasks:
        - name: change osp repos
          command: subscription-manager repos --enable=openstack-17.1-for-rhel-9-x86_64-rpms
          become: true
  5. Run the update_rhosp_repos.yaml playbook on the openstackclient pod:

    $ ansible-playbook -i /home/cloud-admin/ctlplane-ansible-inventory /home/cloud-admin/update_rhosp_repos.yaml --limit Controller,Compute

    Use the --limit option to apply the playbook to your RHOSP nodes only, for example the Controller and Compute roles. Do not run this playbook against Red Hat Ceph Storage nodes because they use a different subscription.

  6. Create a playbook named update_ceph_repos.yaml that contains a task to set the repositories to RHOSP 17.1 on all Red Hat Ceph Storage nodes:

    - hosts: all
      gather_facts: false
      tasks:
        - name: change ceph repos
          command: subscription-manager repos --enable=openstack-17.1-deployment-tools-for-rhel-9-x86_64-rpms
          become: true
  7. Run the update_ceph_repos.yaml playbook on the openstackclient pod:

    $ ansible-playbook -i /home/cloud-admin/ctlplane-ansible-inventory /home/cloud-admin/update_ceph_repos.yaml --limit CephStorage

    Use the --limit option to apply the playbook to the Red Hat Ceph Storage nodes only.

  8. Exit the remote shell for the openstackclient pod:

    $ exit
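To spot-check that the correct repositories are enabled on a node, you can list the enabled repositories over SSH, following the same access pattern used elsewhere in this chapter. This is an optional, read-only check:

    $ oc rsh openstackclient
    $ ssh <controller-0.ctlplane> "sudo subscription-manager repos --list-enabled"
    $ exit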

11.1.3. Updating the container image preparation file

The container preparation file contains the ContainerImagePrepare parameter. You use this file to define the rules for obtaining container images for the overcloud.

Before you update your environment, check the file to ensure that you obtain the correct image versions.

Procedure

  1. Edit the container preparation file. The default name for this file is containers-prepare-parameter.yaml.
  2. Ensure the tag parameter is set to 17.1 for each rule set:

    parameter_defaults:
      ContainerImagePrepare:
      - push_destination: false
        set:
          ...
          tag: '17.1'
        tag_from_label: '{version}-{release}'
    Note

    If you do not want to use a specific tag for the update, such as 17.1 or 17.1.1, remove the tag key-value pair and specify tag_from_label only. The tag_from_label value uses the installed Red Hat OpenStack Platform (RHOSP) version to determine the value for the tag to use as part of the update process. An example rule set that relies on tag_from_label only follows this procedure.

  3. Save the containers-prepare-parameter.yaml file.
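As described in the note in step 2, you can remove the tag key-value pair and rely on tag_from_label only. The following rule set is an illustrative sketch; keep your own set values in place of the ellipsis:

    parameter_defaults:
      ContainerImagePrepare:
      - push_destination: false
        set:
          ...
        tag_from_label: '{version}-{release}'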

11.1.4. Disabling fencing in the overcloud

Before you update the overcloud, ensure that fencing is disabled.

If fencing is deployed in your environment, the overcloud might detect certain nodes as disabled during the Controller node update process and attempt fencing operations, which can cause unintended results.

If you have enabled fencing in the overcloud, you must temporarily disable fencing for the duration of the update.

Procedure

  1. Access the remote shell for the openstackclient pod:

    $ oc rsh openstackclient
  2. Log in to a Controller node and run the Pacemaker command to disable fencing:

    $ ssh <controller-0.ctlplane> "sudo pcs property set stonith-enabled=false"
    • Replace <controller-0.ctlplane> with the name of your Controller node.
  3. Exit the remote shell for the openstackclient pod:

    $ exit
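To confirm that fencing is now disabled, you can query the stonith-enabled cluster property from the openstackclient pod. The pcs property config subcommand shown here is the current form on RHEL 9; older pcs releases use pcs property show instead:

    $ oc rsh openstackclient
    $ ssh <controller-0.ctlplane> "sudo pcs property config | grep stonith-enabled"
    $ exit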

11.2. Running the overcloud update preparation for director Operator

To prepare the overcloud for the update process, generate an update prepare configuration, which creates updated Ansible playbooks and prepares the nodes for the update.

Procedure

  1. Create an OpenStackConfigGenerator resource called osconfiggenerator-update-prepare.yaml:

    $ cat <<EOF > osconfiggenerator-update-prepare.yaml
    apiVersion: osp-director.openstack.org/v1beta1
    kind: OpenStackConfigGenerator
    metadata:
      name: "update"
      namespace: openstack
    spec:
      gitSecret: git-secret
      enableFencing: false
      heatEnvs:
        - lifecycle/update-prepare.yaml
      heatEnvConfigMap: heat-env-config-update
      tarballConfigMap: tripleo-tarball-config-update
    EOF
  2. Apply the configuration:

    $ oc apply -f osconfiggenerator-update-prepare.yaml
  3. Wait until the update preparation process completes.
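The configuration generator runs asynchronously. One way to follow its progress is to inspect the OpenStackConfigGenerator resource and the pods in the openstack namespace; the exact status fields that are reported depend on your OSPdO version, so treat this as a sketch:

    $ oc get openstackconfiggenerator update -n openstack
    $ oc describe openstackconfiggenerator update -n openstack
    $ oc get pods -n openstack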

11.3. Updating the ovn-controller container on all overcloud servers

If you deployed your overcloud with the Modular Layer 2 Open Virtual Network mechanism driver (ML2/OVN), update the ovn-controller container to the latest Red Hat OpenStack Platform (RHOSP) 17.1 version. The update occurs on every overcloud server that runs the ovn-controller container.

Important

The following procedure updates the ovn-controller containers on Compute nodes before it updates the ovn-northd service on Controller nodes. If you accidentally update the ovn-northd service before following this procedure, you might not be able to reach your virtual machine instances or create new instances or virtual networks. The following procedure restores connectivity.

Procedure

  1. Create an OpenStackDeploy custom resource (CR) named osdeploy-ovn-update.yaml:

    apiVersion: osp-director.openstack.org/v1beta1
    kind: OpenStackDeploy
    metadata:
      name: ovn-update
    spec:
      configVersion: <config_version>
      configGenerator: update
      mode: externalUpdate
      advancedSettings:
        tags:
          - ovn
  2. Apply the updated configuration:

    $ oc apply -f osdeploy-ovn-update.yaml
  3. Wait until the ovn-controller container update completes.
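The <config_version> value in this and the following OpenStackDeploy CRs is the hash of a generated OpenStackConfigVersion resource. Assuming the OpenStackConfigVersion CRD that OSPdO provides, one way to find the latest hash is to sort the resources by creation time and use the most recent entry:

    $ oc get openstackconfigversion -n openstack --sort-by '{.metadata.creationTimestamp}'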

11.4. Updating all Controller nodes

Update all the Controller nodes to the latest Red Hat OpenStack Platform (RHOSP) 17.1 version.

Procedure

  1. Create an OpenStackDeploy custom resource (CR) named osdeploy-controller-update.yaml:

    apiVersion: osp-director.openstack.org/v1beta1
    kind: OpenStackDeploy
    metadata:
      name: controller-update
    spec:
      configVersion: <config_version>
      configGenerator: update
      mode: update
      advancedSettings:
        limit: Controller
  2. Apply the updated configuration:

    $ oc apply -f osdeploy-controller-update.yaml
  3. Wait until the Controller node update completes.
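For this and the other OpenStackDeploy CRs in this chapter, you can follow progress by checking the CR status and the deployment pods in the openstack namespace. Status fields and pod names vary between OSPdO versions, so this is only a sketch:

    $ oc get openstackdeploy controller-update -n openstack
    $ oc describe openstackdeploy controller-update -n openstack
    $ oc get pods -n openstack -w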

11.5. Updating all Compute nodes

Update all Compute nodes to the latest Red Hat OpenStack Platform (RHOSP) 17.1 version. To update Compute nodes, create an OpenStackDeploy custom resource (CR) with the limit: Compute option to restrict operations only to the Compute nodes.

Procedure

  1. Create an OpenStackDeploy CR named osdeploy-compute-update.yaml:

    apiVersion: osp-director.openstack.org/v1beta1
    kind: OpenStackDeploy
    metadata:
      name: compute-update
    spec:
      configVersion: <config_version>
      configGenerator: update
      mode: update
      advancedSettings:
        limit: Compute
  2. Apply the updated configuration:

    $ oc apply -f osdeploy-compute-update.yaml
  3. Wait until the Compute node update completes.

11.6. Updating all HCI Compute nodes

Update the Hyperconverged Infrastructure (HCI) Compute nodes to the latest Red Hat OpenStack Platform (RHOSP) 17.1 version. To update the HCI Compute nodes, create an OpenStackDeploy custom resource (CR) with the limit: ComputeHCI option to restrict operations to the HCI nodes only. You must also create an OpenStackDeploy CR with the mode: external-update and tags: ["ceph"] options to update the containerized Red Hat Ceph Storage cluster.

Procedure

  1. Create an OpenStackDeploy CR named osdeploy-computehci-update.yaml:

    apiVersion: osp-director.openstack.org/v1beta1
    kind: OpenStackDeploy
    metadata:
      name: computehci-update
    spec:
      configVersion: <config_version>
      configGenerator: update
      mode: update
      advancedSettings:
        limit: ComputeHCI
  2. Apply the updated configuration:

    $ oc apply -f osdeploy-computehci-update.yaml
  3. Wait until the ComputeHCI node update completes.
  4. Create an OpenStackDeploy CR named osdeploy-ceph-update.yaml:

    apiVersion: osp-director.openstack.org/v1beta1
    kind: OpenStackDeploy
    metadata:
      name: ceph-update
    spec:
      configVersion: <config_version>
      configGenerator: update
      mode: external-update
      advancedSettings:
        tags:
          - ceph
  5. Apply the updated configuration:

    $ oc apply -f osdeploy-ceph-update.yaml
  6. Wait until the Red Hat Ceph Storage node update completes.

11.7. Updating all Red Hat Ceph Storage nodes

Update the Red Hat Ceph Storage nodes to the latest Red Hat OpenStack Platform (RHOSP) 17.1 version.

Important

RHOSP 17.1 is supported on RHEL 9.2. However, hosts that are mapped to the CephStorage role update to the latest major RHEL release. For more information, see Red Hat Ceph Storage: Supported configurations.

Procedure

  1. Create an OpenStackDeploy custom resource (CR) named osdeploy-cephstorage-update.yaml:

    apiVersion: osp-director.openstack.org/v1beta1
    kind: OpenStackDeploy
    metadata:
      name: cephstorage-update
    spec:
      configVersion: <config_version>
      configGenerator: update
      mode: externalUpdate
      advancedSettings:
        limit: CephStorage
  2. Apply the updated configuration:

    $ oc apply -f osdeploy-cephstorage-update.yaml
  3. Wait until the Red Hat Ceph Storage node update completes.
  4. Create an OpenStackDeploy CR named osdeploy-ceph-update.yaml:

    apiVersion: osp-director.openstack.org/v1beta1
    kind: OpenStackDeploy
    metadata:
      name: ceph-update
    spec:
      configVersion: <config_version>
      configGenerator: update
      mode: externalUpdate
      advancedSettings:
        tags:
          - ceph
  5. Apply the updated configuration:

    $ oc apply -f osdeploy-ceph-update.yaml
  6. Wait until the Red Hat Ceph Storage node update completes.

11.8. Updating the Red Hat Ceph Storage cluster

Update the director-deployed Red Hat Ceph Storage cluster to the latest version that is compatible with Red Hat OpenStack Platform (RHOSP) 17.1 by using the cephadm Orchestrator.

Procedure

  1. Access the remote shell for the openstackclient pod:

    $ oc rsh openstackclient
  2. Log in to a Controller node:

    $ ssh <controller-0.ctlplane>
    • Replace <controller-0.ctlplane> with the name of your Controller node.
  3. Log into the cephadm shell:

    [cloud-admin@controller-0 ~]$ sudo cephadm shell
  4. Upgrade your Red Hat Ceph Storage cluster by using cephadm. For more information, see Upgrade a Red Hat Ceph Storage cluster using cephadm in the Red Hat Ceph Storage 6 Upgrade Guide. A minimal sketch of the relevant commands follows this procedure.
  5. Exit the remote shell for the openstackclient pod:

    $ exit
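As referenced in step 4, cephadm drives the upgrade through the ceph orch upgrade commands from within the cephadm shell. The image reference below is illustrative only; use the container image and the detailed procedure from the Red Hat Ceph Storage upgrade guide:

    [ceph: root@controller-0 /]# ceph orch upgrade start --image <ceph_container_image>
    [ceph: root@controller-0 /]# ceph orch upgrade status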

11.9. Performing online database updates

Some overcloud components require an online update or migration of their database tables. Online database updates apply to the following components:

  • Block Storage service (cinder)
  • Compute service (nova)

Procedure

  1. Create an OpenStackDeploy custom resource (CR) named osdeploy-online-migration.yaml:

    apiVersion: osp-director.openstack.org/v1beta1
    kind: OpenStackDeploy
    metadata:
      name: online-migration
    spec:
      configVersion: <config_version>
      configGenerator: update
      mode: external-update
      advancedSettings:
        tags:
          - online_upgrade
  2. Apply the updated configuration:

    $ oc apply -f osdeploy-online-migration.yaml
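If you want to confirm that no online data migrations remain after the deployment completes, you can run the corresponding manage commands on a Controller node. The container names nova_api and cinder_api are typical for director-deployed environments but might differ in yours, so treat this as an optional sketch:

    $ oc rsh openstackclient
    $ ssh <controller-0.ctlplane>
    [cloud-admin@controller-0 ~]$ sudo podman exec nova_api nova-manage db online_data_migrations
    [cloud-admin@controller-0 ~]$ sudo podman exec cinder_api cinder-manage db online_data_migrations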

11.10. Re-enabling fencing in the overcloud

To update to the latest Red Hat OpenStack Platform (RHOSP) 17.1, you must re-enable fencing in the overcloud.

Procedure

  1. Access the remote shell for the openstackclient pod:

    $ oc rsh openstackclient
  2. Log in to a Controller node and run the Pacemaker command to enable fencing:

    $ ssh <controller-0.ctlplane> "sudo pcs property set stonith-enabled=true"
    • Replace <controller-0.ctlplane> with the name of your Controller node.
  3. Exit the remote shell for the openstackclient pod:

    $ exit

11.11. Rebooting the overcloud

After you perform a minor Red Hat OpenStack Platform (RHOSP) update to the latest 17.1 version, reboot your overcloud. The reboot refreshes the nodes with any associated kernel, system-level, and container component updates. These updates provide performance and security benefits. Plan downtime to perform the reboot procedures.

Use the following guidance to understand how to reboot different node types:

11.11.1. Rebooting Controller and composable nodes

Reboot Controller nodes and standalone nodes based on composable roles, and exclude Compute nodes and Ceph Storage nodes.

Procedure

  1. Log in to the node that you want to reboot.
  2. Optional: If the node uses Pacemaker resources, stop the cluster:

    [tripleo-admin@overcloud-controller-0 ~]$ sudo pcs cluster stop
  3. Reboot the node:

    [tripleo-admin@overcloud-controller-0 ~]$ sudo reboot
  4. Wait until the node boots.

Verification

  1. Verify that the services are enabled.

    1. If the node uses Pacemaker services, check that the node has rejoined the cluster:

      [tripleo-admin@overcloud-controller-0 ~]$ sudo pcs status
    2. If the node uses Systemd services, check that all services are enabled:

      [tripleo-admin@overcloud-controller-0 ~]$ sudo systemctl status
    3. If the node uses containerized services, check that all containers on the node are active:

      [tripleo-admin@overcloud-controller-0 ~]$ sudo podman ps

11.11.2. Rebooting a Ceph Storage (OSD) cluster

Complete the following steps to reboot a cluster of Ceph Storage (OSD) nodes.

Prerequisites

  • On a Ceph Monitor or Controller node that is running the ceph-mon service, check that the Red Hat Ceph Storage cluster status is healthy and the pg status is active+clean:

    $ sudo cephadm shell -- ceph status

    If the Ceph cluster is healthy, it returns a status of HEALTH_OK.

    If the Ceph cluster status is unhealthy, it returns a status of HEALTH_WARN or HEALTH_ERR. For troubleshooting guidance, see the Red Hat Ceph Storage 5 Troubleshooting Guide or the Red Hat Ceph Storage 6 Troubleshooting Guide.

Procedure

  1. Log in to a Ceph Monitor or Controller node that is running the ceph-mon service, and disable Ceph Storage cluster rebalancing temporarily:

    $ sudo cephadm shell -- ceph osd set noout
    $ sudo cephadm shell -- ceph osd set norebalance
    Note

    If you have a multistack or distributed compute node (DCN) architecture, you must specify the Ceph cluster name when you set the noout and norebalance flags. For example: sudo cephadm shell -c /etc/ceph/<cluster>.conf -k /etc/ceph/<cluster>.client.keyring.

  2. Select the first Ceph Storage node that you want to reboot and log in to the node.
  3. Reboot the node:

    $ sudo reboot
  4. Wait until the node boots.
  5. Log in to the node and check the Ceph cluster status:

    $ sudo cephadm shell -- ceph status

    Check that the pgmap reports all pgs as normal (active+clean).

  6. Log out of the node, reboot the next node, and check its status. Repeat this process until you have rebooted all Ceph Storage nodes.
  7. When complete, log in to a Ceph Monitor or Controller node that is running the ceph-mon service and enable Ceph cluster rebalancing:

    $ sudo cephadm shell -- ceph osd unset noout
    $ sudo cephadm shell -- ceph osd unset norebalance
    Note

    If you have a multistack or distributed compute node (DCN) architecture, you must specify the Ceph cluster name when you unset the noout and norebalance flags. For example: sudo cephadm shell -c /etc/ceph/<cluster>.conf -k /etc/ceph/<cluster>.client.keyring

  8. Perform a final status check to verify that the cluster reports HEALTH_OK:

    $ sudo cephadm shell -- ceph status

11.11.3. Rebooting Compute nodes

To ensure minimal downtime of instances in your Red Hat OpenStack Platform environment, complete the steps in the Migrating instances workflow to migrate instances away from the Compute node that you want to reboot.

Migrating instances workflow

  1. Decide whether to migrate instances to another Compute node before rebooting the node.
  2. Select and disable the Compute node that you want to reboot so that it does not provision new instances.
  3. Migrate the instances to another Compute node.
  4. Reboot the empty Compute node.
  5. Enable the empty Compute node.

Prerequisites

  • Before you reboot the Compute node, you must decide whether to migrate instances to another Compute node while the node is rebooting.

    Review the list of migration constraints that you might encounter when you migrate virtual machine instances between Compute nodes. For more information, see Migration constraints in Configuring the Compute service for instance creation.

    Note

    If you have a Multi-RHEL environment, and you want to migrate virtual machines from a Compute node that is running RHEL 9.2 to a Compute node that is running RHEL 8.4, only cold migration is supported. For more information about cold migration, see Cold migrating an instance in Configuring the Compute service for instance creation.

  • If you cannot migrate the instances, you can set the following core template parameters to control the state of the instances after the Compute node reboots:

    NovaResumeGuestsStateOnHostBoot
    Determines whether to return instances to the same state on the Compute node after reboot. When set to False, the instances remain down and you must start them manually. The default value is False.
    NovaResumeGuestsShutdownTimeout
    Number of seconds to wait for an instance to shut down before rebooting. It is not recommended to set this value to 0. The default value is 300.

    For more information about overcloud parameters and their usage, see Overcloud parameters.
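For example, a minimal environment file sketch that resumes instances after the host boots and keeps the default shutdown timeout; the values shown are illustrative and follow the parameter_defaults format used elsewhere in this chapter:

    parameter_defaults:
      NovaResumeGuestsStateOnHostBoot: true
      NovaResumeGuestsShutdownTimeout: 300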

Procedure

  1. Log in to the undercloud as the stack user.
  2. Retrieve a list of your Compute nodes to identify the host name of the node that you want to reboot:

    (undercloud)$ source ~/overcloudrc
    (overcloud)$ openstack compute service list

    Identify the host name of the Compute node that you want to reboot.

  3. Disable the Compute service on the Compute node that you want to reboot:

    (overcloud)$ openstack compute service list
    (overcloud)$ openstack compute service set <hostname> nova-compute --disable
    • Replace <hostname> with the host name of your Compute node.
  4. List all instances on the Compute node:

    (overcloud)$ openstack server list --host <hostname> --all-projects
  5. Optional: To migrate the instances to another Compute node, complete the following steps:

    1. If you decide to migrate the instances to another Compute node, use one of the following commands:

      • To migrate the instance to a different host, run the following command:

        (overcloud) $ openstack server migrate <instance_id> --live <target_host> --wait
        • Replace <instance_id> with your instance ID.
        • Replace <target_host> with the host that you are migrating the instance to.
      • Let nova-scheduler automatically select the target host:

        (overcloud) $ nova live-migration <instance_id>
      • Live migrate all instances at once:

        $ nova host-evacuate-live <hostname>
        Note

        The nova command might cause some deprecation warnings, which are safe to ignore.

    2. Wait until migration completes.
    3. Confirm that the migration was successful:

      (overcloud) $ openstack server list --host <hostname> --all-projects
    4. Continue to migrate instances until none remain on the Compute node.
  6. Log in to the Compute node and reboot the node:

    [tripleo-admin@overcloud-compute-0 ~]$ sudo reboot
  7. Wait until the node boots.
  8. Re-enable the Compute node:

    $ source ~/overcloudrc
    (overcloud) $ openstack compute service set <hostname> nova-compute --enable
  9. Check that the Compute node is enabled:

    (overcloud) $ openstack compute service list

11.11.4. Validating RHOSP after the overcloud update

After you update your Red Hat OpenStack Platform (RHOSP) environment, validate your overcloud with the tripleo-validations playbooks.

For more information about validations, see Using the validation framework in Installing and managing Red Hat OpenStack Platform with director.

Procedure

  1. Log in to the undercloud host as the stack user.
  2. Source the stackrc undercloud credentials file:

    $ source ~/stackrc
  3. Run the validation:

    $ validation run -i ~/overcloud-deploy/<stack>/tripleo-ansible-inventory.yaml --group post-update
    • Replace <stack> with the name of the stack.

Verification

  1. To view the results of the validation report, see Viewing validation history in Installing and managing Red Hat OpenStack Platform with director.
Note

If a host is not found when you run a validation, the command reports the status as SKIPPED. A status of SKIPPED means that the validation was not executed, which is expected. Additionally, if a validation’s pass criteria are not met, the command reports the status as FAILED. A FAILED validation does not prevent you from using your updated RHOSP environment. However, a FAILED validation can indicate an issue with your environment.
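To review past results from the command line, the Validation Framework CLI keeps a history of runs. The subcommand names can differ between releases, so the following is a sketch; the linked documentation is authoritative:

    $ validation history list
    $ validation history get --full <run_uuid>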
