Chapter 3. Restoring the undercloud and control plane nodes
If your undercloud or control plane nodes become corrupted or if an error occurs during an update or upgrade, you can restore the undercloud or overcloud control plane nodes from a backup to their previous state. If the restore process fails to automatically restore the Galera cluster or nodes with colocated Ceph monitors, you can restore these components manually.
3.1. Preparing a control plane with colocated Ceph monitors for the restore process
Before you restore control plane nodes with colocated Ceph monitors, prepare your environment by creating a script that mounts the Ceph monitor backup file to the node file system and another script that ReaR uses to locate the backup file.
If you cannot back up the /var/lib/ceph directory, you must contact the Red Hat Technical Support team to rebuild the ceph-mon index.
Prerequisites
- You have created a backup of the undercloud node. For more information, see Section 1.7, “Creating a backup of the undercloud node”.
- You have created a backup of the control plane nodes. For more information, see Section 2.5, “Creating a backup of the control plane nodes”.
- You have access to the backup node.
- If you use an OVS bridge for your network interfaces, you have access to the network configuration information that you set in the NETWORKING_PREPARATION_COMMANDS parameter. For more information, see Section 1.6, “Configuring Open vSwitch (OVS) interfaces for backup”.
Procedure
- On each node that you want to restore, create the script /usr/share/rear/setup/default/011_backup_ceph.sh and add the following content:

  mount -t <file_type> <device_disk> /mnt/local
  cd /mnt/local
  [ -d "var/lib/ceph" ] && tar cvfz /tmp/ceph.tar.gz var/lib/ceph --xattrs --xattrs-include='.' --acls
  cd /
  umount <device_disk>

  Replace <file_type> and <device_disk> with the type and location of the backup file. Normally, the file type is xfs and the location is /dev/vda2.
- On the same node, create the script /usr/share/rear/wrapup/default/501_restore_ceph.sh and add the following content:

  if [ -f "/tmp/ceph.tar.gz" ]; then
    rm -rf /mnt/local/var/lib/ceph/*
    tar xvC /mnt/local -f /tmp/ceph.tar.gz var/lib/ceph --xattrs --xattrs-include='.'
  fi
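The two hook scripts above archive and unpack /var/lib/ceph with extended attributes preserved. A self-contained sketch of the same tar round-trip, using temporary directories instead of the real backup disk (all paths here are illustrative only):

```shell
# Create a stand-in source tree with a dummy Ceph monitor file.
src=$(mktemp -d)
dst=$(mktemp -d)
mkdir -p "$src/var/lib/ceph/mon"
echo "monmap" > "$src/var/lib/ceph/mon/store.db"

# Archive with extended attributes and ACLs, as in 011_backup_ceph.sh
# (-C replaces the cd into the mounted backup disk).
tar cfz /tmp/ceph-demo.tar.gz -C "$src" var/lib/ceph --xattrs --acls

# Unpack into a fresh root, as in 501_restore_ceph.sh.
tar xf /tmp/ceph-demo.tar.gz -C "$dst" var/lib/ceph --xattrs
cat "$dst/var/lib/ceph/mon/store.db"
```

In the real scripts the source is the mounted backup device and the destination is the recovered file system under /mnt/local.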
3.2. Restoring the undercloud node
You can restore the undercloud node to its previous state using the backup ISO image that you created with ReaR. You can find the backup ISO images on the backup node. Burn the bootable ISO image to a DVD or download it to the undercloud node through Integrated Lights-Out (iLO) remote access.
Prerequisites
- You have created a backup of the undercloud node. For more information, see Section 1.7, “Creating a backup of the undercloud node”.
- You have access to the backup node.
- If you use an OVS bridge for your network interfaces, you have access to the network configuration information that you set in the NETWORKING_PREPARATION_COMMANDS parameter. For more information, see Section 1.6, “Configuring Open vSwitch (OVS) interfaces for backup”.
Procedure
- Power off the undercloud node. Ensure that the undercloud node is powered off completely before you proceed.
- Boot the undercloud node with the backup ISO image.
- When the Relax-and-Recover boot menu displays, select Recover <undercloud_node>. Replace <undercloud_node> with the name of your undercloud node.

  Note: If your system uses UEFI, select the Relax-and-Recover (no Secure Boot) option.
- Log in as the root user and restore the node. The following message displays:

  Welcome to Relax-and-Recover. Run "rear recover" to restore your system!
  RESCUE <undercloud_node>:~ # rear recover

  When the undercloud node restoration process completes, the console displays the following message:

  Finished recovering your system
  Exiting rear recover
  Running exit tasks
- Power off the node:

  RESCUE <undercloud_node>:~ # poweroff

  On boot up, the node resumes its previous state.
3.3. Restoring the control plane nodes
If an error occurs during an update or upgrade, you can restore the control plane nodes to their previous state using the backup ISO image that you created with ReaR.
To restore the control plane, you must restore all control plane nodes to ensure state consistency.
You can find the backup ISO images on the backup node. Burn the bootable ISO image to a DVD or download it to the undercloud node through Integrated Lights-Out (iLO) remote access.
Red Hat supports backups of Red Hat OpenStack Platform with native SDNs, such as Open vSwitch (OVS) and the default Open Virtual Network (OVN). For information about third-party SDNs, refer to the third-party SDN documentation.
Prerequisites
- You have created a backup of the control plane nodes. For more information, see Section 2.5, “Creating a backup of the control plane nodes”.
- You have access to the backup node.
- If you use an OVS bridge for your network interfaces, you have access to the network configuration information that you set in the NETWORKING_PREPARATION_COMMANDS parameter. For more information, see Section 2.4, “Configuring Open vSwitch (OVS) interfaces for backup”.
Procedure
- Power off each control plane node. Ensure that the control plane nodes are powered off completely before you proceed.
- Boot each control plane node with the corresponding backup ISO image.
- When the Relax-and-Recover boot menu displays, on each control plane node, select Recover <control_plane_node>. Replace <control_plane_node> with the name of the corresponding control plane node.

  Note: If your system uses UEFI, select the Relax-and-Recover (no Secure Boot) option.
- On each control plane node, log in as the root user and restore the node. The following message displays:

  Welcome to Relax-and-Recover. Run "rear recover" to restore your system!
  RESCUE <control_plane_node>:~ # rear recover

  When the control plane node restoration process completes, the console displays the following message:

  Finished recovering your system
  Exiting rear recover
  Running exit tasks
- When the command line console is available, restore the config-drive partition of each control plane node:

  # once completed, restore the config-drive partition (which is ISO9660)
  RESCUE <control_plane_node>:~ # dd if=/mnt/local/mnt/config-drive of=<config_drive_partition>
- Power off the node:

  RESCUE <control_plane_node>:~ # poweroff
- Set the boot sequence to the normal boot device. On boot up, the node resumes its previous state.
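The dd step above writes the saved config-drive image back to its partition byte for byte. A toy illustration of the same pattern on scratch files (the paths here are hypothetical; the real command targets a block device):

```shell
# Stand-in for the saved config-drive image.
printf 'config-2 metadata' > /tmp/config-drive.img

# Byte-for-byte copy, as dd does when restoring the partition.
dd if=/tmp/config-drive.img of=/tmp/config-drive.restored status=none

# Verify the copy is identical to the source.
cmp -s /tmp/config-drive.img /tmp/config-drive.restored && echo "identical"
```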
- To ensure that the services are running correctly, check the status of pacemaker. Log in to a Controller node as the root user and enter the following command:

  # pcs status
- To view the status of the overcloud, use the OpenStack Integration Test Suite (tempest). For more information, see Validating your OpenStack cloud with the Integration Test Suite (tempest).
Troubleshooting
- Clear resource alarms that are displayed by pcs status by running the following command:

  # pcs resource cleanup
- Clear STONITH fencing action errors that are displayed by pcs status by running the following commands:

  # pcs resource cleanup
  # pcs stonith history cleanup
3.4. Restoring the Galera cluster manually
If the Galera cluster does not restore as part of the restoration procedure, you must restore Galera manually.
In this procedure, you must perform some steps on one Controller node. Ensure that you perform these steps on the same Controller node as you go through the procedure.
Procedure
- On Controller-0, retrieve the Galera cluster virtual IP:

  $ sudo hiera -c /etc/puppet/hiera.yaml mysql_vip
- Disable the database connections through the virtual IP on all Controller nodes:

  $ sudo iptables -I INPUT -p tcp --destination-port 3306 -d $MYSQL_VIP -j DROP
- On Controller-0, retrieve the MySQL root password:

  $ sudo hiera -c /etc/puppet/hiera.yaml mysql::server::root_password
- On Controller-0, set the Galera resource to unmanaged mode:

  $ sudo pcs resource unmanage galera-bundle
- Stop the MySQL containers on all Controller nodes:

  $ sudo podman container stop $(sudo podman container ls --all --format "{{.Names}}" --filter=name=galera-bundle)
- Move the current MySQL data directory to a backup location on all Controller nodes:

  $ sudo mv /var/lib/mysql /var/lib/mysql-save
- Create the new directory /var/lib/mysql on all Controller nodes.
- Start the MySQL containers on all Controller nodes:
  $ sudo podman container start $(sudo podman container ls --all --format "{{ .Names }}" --filter=name=galera-bundle)
- Create the MySQL database on all Controller nodes:

  $ sudo podman exec -i $(sudo podman container ls --all --format "{{ .Names }}" \
    --filter=name=galera-bundle) bash -c "mysql_install_db --datadir=/var/lib/mysql --user=mysql --log_error=/var/log/mysql/mysql_init.log"
- Start the database on all Controller nodes:

  $ sudo podman exec $(sudo podman container ls --all --format "{{ .Names }}" \
    --filter=name=galera-bundle) bash -c "mysqld_safe --skip-networking --wsrep-on=OFF --log-error=/var/log/mysql/mysql_safe.log" &
- Move the .my.cnf Galera configuration file on all Controller nodes:

  $ sudo podman exec $(sudo podman container ls --all --format "{{ .Names }}" \
    --filter=name=galera-bundle) bash -c "mv /root/.my.cnf /root/.my.cnf.bck"
- Reset the Galera root password on all Controller nodes:

  $ sudo podman exec $(sudo podman container ls --all --format "{{ .Names }}" \
    --filter=name=galera-bundle) bash -c "mysql -uroot -e'use mysql;update user set password=PASSWORD(\"$ROOTPASSWORD\")where User=\"root\";flush privileges;'"
- Restore the .my.cnf Galera configuration file inside the Galera container on all Controller nodes:

  $ sudo podman exec $(sudo podman container ls --all --format "{{ .Names }}" \
    --filter=name=galera-bundle) bash -c "mv /root/.my.cnf.bck /root/.my.cnf"
- On Controller-0, copy the backup database files to /var/lib/mysql:

  $ sudo cp $BACKUP_FILE /var/lib/mysql
  $ sudo cp $BACKUP_GRANT_FILE /var/lib/mysql

  Note: The path to these files is /home/heat-admin/.
- On Controller-0, restore the MySQL database:

  $ sudo podman exec $(sudo podman container ls --all --format "{{ .Names }}" \
    --filter=name=galera-bundle) bash -c "mysql -u root -p$ROOT_PASSWORD < \"/var/lib/mysql/$BACKUP_FILE\""
  $ sudo podman exec $(sudo podman container ls --all --format "{{ .Names }}" \
    --filter=name=galera-bundle) bash -c "mysql -u root -p$ROOT_PASSWORD < \"/var/lib/mysql/$BACKUP_GRANT_FILE\""
- Shut down the databases on all Controller nodes:

  $ sudo podman exec $(sudo podman container ls --all --format "{{ .Names }}" \
    --filter=name=galera-bundle) bash -c "mysqladmin shutdown"
- On Controller-0, start the bootstrap node:

  $ sudo podman exec $(sudo podman container ls --all --format "{{ .Names }}" --filter=name=galera-bundle) \
    /usr/bin/mysqld_safe --pid-file=/var/run/mysql/mysqld.pid --socket=/var/lib/mysql/mysql.sock --datadir=/var/lib/mysql \
    --log-error=/var/log/mysql/mysql_cluster.log --user=mysql --open-files-limit=16384 \
    --wsrep-cluster-address=gcomm:// &
- Verification: On Controller-0, check the status of the cluster:

  $ sudo podman exec $(sudo podman container ls --all --format "{{ .Names }}" \
    --filter=name=galera-bundle) bash -c "clustercheck"

  Ensure that the following message is displayed: “Galera cluster node is synced”. Otherwise, you must recreate the node.
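Most commands in this procedure select the Galera container with command substitution: podman container ls prints the matching container name, and $(...) hands it to the outer command. A self-contained sketch of that pattern, with a stub podman function standing in for the real CLI (an assumption, so the example runs anywhere):

```shell
# Stub standing in for the real podman CLI (assumption for illustration):
# the real command prints the names of containers matching --filter=name=.
podman() {
  echo "galera-bundle-podman-0"
}

# The $(...) substitution captures the container name and feeds it
# to the outer command, exactly as in the procedure steps.
name=$(podman container ls --all --format "{{.Names}}" --filter=name=galera-bundle)
echo "acting on container: $name"
```

With the real CLI, the substitution expands to the galera-bundle container name on each node, which is why the same one-liner works unchanged on every Controller.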
- On Controller-0, retrieve the cluster address from the configuration:

  $ sudo podman exec $(sudo podman container ls --all --format "{{ .Names }}" \
    --filter=name=galera-bundle) bash -c "grep wsrep_cluster_address /etc/my.cnf.d/galera.cnf" | awk '{print $3}'
- On each of the remaining Controller nodes, start the database and validate the cluster:

  Start the database:

  $ sudo podman exec $(sudo podman container ls --all --format "{{ .Names }}" \
    --filter=name=galera-bundle) /usr/bin/mysqld_safe --pid-file=/var/run/mysql/mysqld.pid --socket=/var/lib/mysql/mysql.sock \
    --datadir=/var/lib/mysql --log-error=/var/log/mysql/mysql_cluster.log --user=mysql --open-files-limit=16384 \
    --wsrep-cluster-address=$CLUSTER_ADDRESS &

  Check the status of the MySQL cluster:

  $ sudo podman exec $(sudo podman container ls --all --format "{{ .Names }}" \
    --filter=name=galera-bundle) bash -c "clustercheck"

  Ensure that the following message is displayed: “Galera cluster node is synced”. Otherwise, you must recreate the node.
- Stop the MySQL container on all Controller nodes:

  $ sudo podman exec $(sudo podman container ls --all --format "{{ .Names }}" --filter=name=galera-bundle) \
    /usr/bin/mysqladmin -u root shutdown
- On all Controller nodes, remove the firewall rule that blocks database connections through the virtual IP address:

  $ sudo iptables -D INPUT -p tcp --destination-port 3306 -d $MYSQL_VIP -j DROP
- Restart the MySQL container on all Controller nodes:

  $ sudo podman container restart $(sudo podman container ls --all --format "{{ .Names }}" --filter=name=galera-bundle)
- Restart the clustercheck container on all Controller nodes:

  $ sudo podman container restart $(sudo podman container ls --all --format "{{ .Names }}" --filter=name=clustercheck)
- On Controller-0, set the Galera resource to managed mode:

  $ sudo pcs resource manage galera-bundle
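The cluster-address step above pipes the grep match through awk to keep only the third whitespace-separated field. The same extraction, shown on a sample galera.cnf line (the host names are placeholders):

```shell
# Sample line in the format that grep returns from /etc/my.cnf.d/galera.cnf.
line='wsrep_cluster_address = gcomm://ctrl0,ctrl1,ctrl2'

# awk splits on whitespace; field 3 is the value after "key = ".
cluster_address=$(echo "$line" | awk '{print $3}')
echo "$cluster_address"
```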
Verification
- To ensure that services are running correctly, check the status of pacemaker:

  $ sudo pcs status
- To view the status of the overcloud, use the OpenStack Integration Test Suite (tempest). For more information, see Validating your OpenStack cloud with the Integration Test Suite (tempest).
- If you suspect an issue with a particular node, check the state of the cluster with clustercheck:

  $ sudo podman exec clustercheck /usr/bin/clustercheck
3.5. Restoring the undercloud node database manually
If the undercloud database does not restore as part of the undercloud restore process, you can restore the database manually. You can only restore the database if you previously created a standalone database backup.
Prerequisites
- You have created a standalone backup of the undercloud database. For more information, see Section 1.5, “Creating a standalone database backup of the undercloud nodes”.
Procedure
- Log in to the director undercloud node as the root user. Stop all tripleo services:

  [root@director ~]# systemctl stop tripleo_*
- Ensure that no containers are running on the server by entering the following command:

  [root@director ~]# podman ps

  If any containers are running, enter the following command to stop the containers:

  [root@director ~]# podman stop <container_name>
- Create a backup of the current /var/lib/mysql directory and then delete the directory:

  [root@director ~]# cp -a /var/lib/mysql /var/lib/mysql_bck
  [root@director ~]# rm -rf /var/lib/mysql
- Recreate the database directory and set the SELinux attributes for the new directory.
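The commands for recreating the database directory are not reproduced in this document. As an illustrative sketch only, recreating a data directory with restrictive permissions might look like the following, shown on a temporary path; the real step also needs the database container user as owner and an SELinux container file context (for example via chown and chcon), with IDs and contexts that depend on your release:

```shell
# Illustrative only: a temporary path stands in for /var/lib/mysql.
dir=$(mktemp -d)/mysql
mkdir -p "$dir"

# Restrict access to the directory owner and group.
chmod 0750 "$dir"

# Show the resulting mode in octal.
stat -c '%a' "$dir"
```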
- Create a local tag for the mariadb image. Replace <image_id> and <undercloud.ctlplane.example.com> with the values applicable in your environment:

  [root@director ~]# podman images | grep mariadb
  <undercloud.ctlplane.example.com>:8787/rh-osbs/rhosp16-openstack-mariadb   16.2_20210322.1   <image_id>   3 weeks ago   718 MB

  [root@director ~]# podman tag <image_id> mariadb

  [root@director ~]# podman images | grep maria
  localhost/mariadb                                                          latest            <image_id>   3 weeks ago   718 MB
  <undercloud.ctlplane.example.com>:8787/rh-osbs/rhosp16-openstack-mariadb   16.2_20210322.1   <image_id>   3 weeks ago   718 MB
- Initialize the /var/lib/mysql directory with the container:

  [root@director ~]# podman run --net=host -v /var/lib/mysql:/var/lib/mysql localhost/mariadb mysql_install_db --datadir=/var/lib/mysql --user=mysql
- Copy the database backup file that you want to import to the database:
  [root@director ~]# cp /root/undercloud-all-databases.sql /var/lib/mysql
- Start the database service to import the data:

  [root@director ~]# podman run --net=host -dt -v /var/lib/mysql:/var/lib/mysql localhost/mariadb /usr/libexec/mysqld
- Import the data and configure the max_allowed_packet parameter. Log in to the container and configure it.
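The configuration commands for this step are not shown in this document. Inside the container they presumably resemble the following sketch, which raises max_allowed_packet before replaying the dump; the stub mysql function, the 16 MB value, and the dump path are assumptions, labeled as such so the sketch is self-contained and runnable:

```shell
# Stub standing in for the real mysql client (assumption for illustration);
# in the container you would run these against the live daemon instead.
mysql() { echo "mysql $*"; }

# Raise max_allowed_packet so large dump rows import without errors.
mysql -u root -e "set global max_allowed_packet = 16777216;"

# Replay the backup dump copied into /var/lib/mysql earlier.
mysql -u root -e "source /var/lib/mysql/undercloud-all-databases.sql"

# Reload the grant tables after the import.
mysql -u root -e "flush privileges"
```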
- Stop the container:
  [root@director ~]# podman stop <container_id>
- Check that no containers are running:

  [root@director ~]# podman ps
  CONTAINER ID  IMAGE  COMMAND  CREATED  STATUS  PORTS  NAMES
  [root@director ~]#
- Restart all tripleo services:

  [root@director ~]# systemctl start multi-user.target