Appendix G. Manually Upgrading from Red Hat Ceph Storage 2 to 3
You can upgrade the Ceph Storage Cluster from version 2 to 3 in a rolling fashion while the cluster is running. Upgrade each node in the cluster sequentially, proceeding to the next node only after the previous node is done.
Red Hat recommends upgrading the Ceph components in the following order:
- Monitor nodes
- OSD nodes
- Ceph Object Gateway nodes
- All other Ceph client nodes
Red Hat Ceph Storage 3 introduces a new daemon, Ceph Manager (ceph-mgr). Install ceph-mgr after upgrading the Monitor nodes.
Two methods are available to upgrade Red Hat Ceph Storage 2 to 3:
- Using Red Hat’s Content Delivery Network (CDN)
- Using a Red Hat provided ISO image file
After upgrading the storage cluster, you might see a health warning about the CRUSH map using legacy tunables. For details, see the CRUSH Tunables section in the Storage Strategies guide for Red Hat Ceph Storage 3.
Red Hat recommends that all Ceph clients run the same version as the Ceph storage cluster.
Prerequisites
If the cluster you want to upgrade contains Ceph Block Device images that use the exclusive-lock feature, ensure that all Ceph Block Device users have permissions to blacklist clients:

ceph auth caps client.<ID> mon 'allow r, allow command "osd blacklist"' osd '<existing-OSD-user-capabilities>'
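For example, for a hypothetical client named client.rbd-user whose existing OSD capabilities are allow rwx pool=rbd (both the client ID and the OSD capabilities here are illustrative; substitute your own):

# ceph auth caps client.rbd-user mon 'allow r, allow command "osd blacklist"' osd 'allow rwx pool=rbd'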
Upgrading Monitor Nodes
This section describes steps to upgrade a Ceph Monitor node to a later version. Because there must be an odd number of Monitors, the storage cluster keeps quorum while you upgrade one Monitor at a time.
Procedure
Do the following steps on each Monitor node in the storage cluster. Upgrade only one Monitor node at a time.
If you installed Red Hat Ceph Storage 2 by using software repositories, disable the repositories:
If the following lines exist in the /etc/apt/sources.list or /etc/apt/sources.list.d/ceph.list files, comment out the online repositories for Red Hat Ceph Storage 2 by adding a hash sign (#) to the beginning of the line:

deb https://<customer_name>:<customer_password>@rhcs.download.redhat.com/ubuntu/2-updates/Installer
deb https://<customer_name>:<customer_password>@rhcs.download.redhat.com/ubuntu/2-updates/Tools

Remove the following files from the /etc/apt/sources.list.d/ directory:

Installer.list
Tools.list
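If you would rather comment the entries out non-interactively, a sed one-liner along these lines works; the target file path is an assumption, so point it at whichever file holds your Red Hat Ceph Storage 2 entries:

# Prefix every RHCS 2 'deb .../ubuntu/2-updates/...' line with a hash sign, in place
$ sudo sed -i 's|^deb \(.*rhcs.download.redhat.com/ubuntu/2-updates.*\)$|# deb \1|' /etc/apt/sources.list.d/ceph.list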
Enable the Red Hat Ceph Storage 3 Monitor repository:
$ sudo bash -c 'umask 0077; echo deb https://customername:customerpasswd@rhcs.download.redhat.com/3-updates/MON $(lsb_release -sc) main | tee /etc/apt/sources.list.d/MON.list'
$ sudo bash -c 'wget -O - https://www.redhat.com/security/fd431d51.txt | apt-key add -'
$ sudo apt-get update

As root, stop the Monitor process:

Syntax
$ sudo stop ceph-mon id=<monitor_host_name>

Example

$ sudo stop ceph-mon id=node1

As root, update the ceph-mon package:

$ sudo apt-get update
$ sudo apt-get dist-upgrade
$ sudo apt-get install ceph-mon

Verify the latest Red Hat version is installed:
$ dpkg -s ceph-base | grep Version
Version: 12.2.1-40redhat1xenial
As root, update the owner and group permissions:

Syntax

# chown -R <owner>:<group> <path_to_directory>

Example

# chown -R ceph:ceph /var/lib/ceph/mon
# chown -R ceph:ceph /var/log/ceph
# chown -R ceph:ceph /var/run/ceph
# chown -R ceph:ceph /etc/ceph

Note: If the Ceph Monitor node is colocated with an OpenStack Controller node, then the Glance and Cinder keyring files must be owned by glance and cinder respectively. For example:

# ls -l /etc/ceph/
...
-rw-------. 1 glance glance 64 <date> ceph.client.glance.keyring
-rw-------. 1 cinder cinder 64 <date> ceph.client.cinder.keyring
...

Remove packages that are no longer needed:
$ sudo apt-get purge ceph ceph-osd

As root, replay device events from the kernel:

# udevadm trigger

As root, enable the ceph-mon process:

$ sudo systemctl enable ceph-mon.target
$ sudo systemctl enable ceph-mon@<monitor_host_name>

As root, reboot the Monitor node:

# shutdown -r now

Once the Monitor node is up, check the health of the Ceph storage cluster before moving to the next Monitor node:

# ceph -s
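If you script the rolling upgrade, you can wait for the cluster to settle before moving to the next Monitor. A minimal sketch (the retry count and sleep interval are arbitrary assumptions):

# Poll for up to 5 minutes until the cluster reports HEALTH_OK again
for i in $(seq 1 60); do
    ceph health | grep -q HEALTH_OK && break
    sleep 5
done
ceph quorum_status --format json-pretty    # confirm the upgraded Monitor rejoined quorum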
G.1. Manually installing Ceph Manager
Usually, the Ansible automation utility installs the Ceph Manager daemon (ceph-mgr) when you deploy the Red Hat Ceph Storage cluster. However, if you do not use Ansible to manage Red Hat Ceph Storage, you can install Ceph Manager manually. Red Hat recommends colocating the Ceph Manager and Ceph Monitor daemons on the same node.
Prerequisites
- A working Red Hat Ceph Storage cluster
- root or sudo access
- The Red Hat Ceph Storage 3 MON repository enabled
- Ports 6800-7300 open on the public network if a firewall is used
Procedure
Run the following commands on the node where ceph-mgr will be deployed, as the root user or with the sudo utility.
Install the ceph-mgr package:

[user@node1 ~]$ sudo apt-get install ceph-mgr

Create the /var/lib/ceph/mgr/ceph-hostname/ directory:

mkdir /var/lib/ceph/mgr/ceph-hostname

Replace hostname with the host name of the node where the ceph-mgr daemon will be deployed, for example:

[user@node1 ~]$ sudo mkdir /var/lib/ceph/mgr/ceph-node1

In the newly created directory, create an authentication key for the ceph-mgr daemon:

[user@node1 ~]$ sudo ceph auth get-or-create mgr.`hostname -s` mon 'allow profile mgr' osd 'allow *' mds 'allow *' -o /var/lib/ceph/mgr/ceph-node1/keyring
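To confirm the key was created, you can read it back; the mgr.node1 name assumes the example host used above:

[user@node1 ~]$ sudo ceph auth get mgr.node1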
Change the owner and group of the /var/lib/ceph/mgr/ directory to ceph:ceph:

[user@node1 ~]$ sudo chown -R ceph:ceph /var/lib/ceph/mgr

Enable the ceph-mgr target:

[user@node1 ~]$ sudo systemctl enable ceph-mgr.target

Enable and start the ceph-mgr instance:

systemctl enable ceph-mgr@hostname
systemctl start ceph-mgr@hostname

Replace hostname with the host name of the node where the ceph-mgr daemon will be deployed, for example:

[user@node1 ~]$ sudo systemctl enable ceph-mgr@node1
[user@node1 ~]$ sudo systemctl start ceph-mgr@node1

Verify that the ceph-mgr daemon started successfully:

ceph -s

The output will include a line similar to the following one under the services: section:

mgr: node1(active)
- Install more ceph-mgr daemons to serve as standby daemons that become active if the current active daemon fails.
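Once standby daemons are installed, the mgr line in the ceph -s output also lists them; for example, with hypothetical standby nodes node2 and node3:

mgr: node1(active), standbys: node2, node3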
Upgrading OSD Nodes
This section describes steps to upgrade a Ceph OSD node to a later version.
Prerequisites
When upgrading an OSD node, some placement groups will become degraded because the OSD might be down or restarting. To prevent Ceph from starting the recovery process, on a Monitor node, set the noout and norebalance OSD flags:
[root@monitor ~]# ceph osd set noout
[root@monitor ~]# ceph osd set norebalance
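You can confirm both flags took effect before upgrading the first OSD node; while they are set, ceph health reports them:

[root@monitor ~]# ceph health
HEALTH_WARN noout,norebalance flag(s) set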
Procedure
Do the following steps on each OSD node in the storage cluster. Upgrade only one OSD node at a time. If an ISO-based installation was performed for Red Hat Ceph Storage 2.3, then skip this first step.
As root, disable the Red Hat Ceph Storage 2 repositories:

If the following lines exist in the /etc/apt/sources.list or /etc/apt/sources.list.d/ceph.list files, comment out the online repositories for Red Hat Ceph Storage 2 by adding a hash sign (#) to the beginning of the line:

deb https://<customer_name>:<customer_password>@rhcs.download.redhat.com/ubuntu/2-updates/Installer
deb https://<customer_name>:<customer_password>@rhcs.download.redhat.com/ubuntu/2-updates/Tools

Remove the following files from the /etc/apt/sources.list.d/ directory:

Installer.list
Tools.list

Note: Remove any reference to Red Hat Ceph Storage 2 in the APT source files.
Enable the Red Hat Ceph Storage 3 OSD repository:

$ sudo bash -c 'umask 0077; echo deb https://customername:customerpasswd@rhcs.download.redhat.com/3-updates/OSD $(lsb_release -sc) main | tee /etc/apt/sources.list.d/OSD.list'
$ sudo bash -c 'wget -O - https://www.redhat.com/security/fd431d51.txt | apt-key add -'
$ sudo apt-get update

As root, stop any running OSD process:

Syntax

$ sudo stop ceph-osd id=<osd_id>

Example

$ sudo stop ceph-osd id=0

As root, update the ceph-osd package:

$ sudo apt-get update
$ sudo apt-get dist-upgrade
$ sudo apt-get install ceph-osd

Verify the latest Red Hat version is installed:
$ dpkg -s ceph-base | grep Version
Version: 12.2.1-40redhat1xenial
As root, update the owner and group permissions on the newly created directory and files:

Syntax

# chown -R <owner>:<group> <path_to_directory>

Example

# chown -R ceph:ceph /var/lib/ceph/osd
# chown -R ceph:ceph /var/log/ceph
# chown -R ceph:ceph /var/run/ceph
# chown -R ceph:ceph /etc/ceph

Note: Using the following find command might quicken the process of changing ownership by running the chown command in parallel on a Ceph storage cluster with a large number of disks:

# find /var/lib/ceph/osd -maxdepth 1 -mindepth 1 -print | xargs -P12 -n1 chown -R ceph:ceph

Remove packages that are no longer needed:
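The -P12 option runs twelve chown processes in parallel; as a variation, you can match the parallelism to the CPU count instead (a sketch, assuming GNU xargs and the nproc utility are available):

# find /var/lib/ceph/osd -maxdepth 1 -mindepth 1 -print | xargs -P"$(nproc)" -n1 chown -R ceph:ceph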
$ sudo apt-get purge ceph ceph-mon

Note: The ceph package is now a meta-package. Only the ceph-mon package is needed on the Monitor nodes, only the ceph-osd package is needed on the OSD nodes, and only the ceph-radosgw package is needed on the RADOS Gateway nodes.

As root, replay device events from the kernel:

# udevadm trigger

As root, enable the ceph-osd process:

$ sudo systemctl enable ceph-osd.target
$ sudo systemctl enable ceph-osd@<osd_id>
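If the node hosts several OSDs, a small shell loop saves repeating the enable command per ID (a sketch, assuming the default /var/lib/ceph/osd/ceph-<osd_id> data directory layout):

for dir in /var/lib/ceph/osd/ceph-*; do
    sudo systemctl enable ceph-osd@"${dir##*-}"    # strip the path prefix, keeping only the OSD ID
done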
As root, reboot the OSD node:

# shutdown -r now

Move to the next OSD node.
Note: If the noout and norebalance flags are set, the storage cluster is in HEALTH_WARN state:

$ ceph health
HEALTH_WARN noout,norebalance flag(s) set
Once you are done upgrading the Ceph Storage Cluster, unset the previously set OSD flags and verify the storage cluster status.
After all OSD nodes have been upgraded, unset the noout and norebalance flags on a Monitor node:

# ceph osd unset noout
# ceph osd unset norebalance
In addition, execute the ceph osd require-osd-release <release> command. This command ensures that no more OSDs with Red Hat Ceph Storage 2.3 can be added to the storage cluster. If you do not run this command, the storage status will be HEALTH_WARN.
# ceph osd require-osd-release luminous
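You can confirm the setting afterwards; the OSD map records it as a require_osd_release field:

# ceph osd dump | grep require_osd_release
require_osd_release luminous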
Additional Resources
- To expand the storage capacity by adding new OSDs to the storage cluster, see the Add an OSD section in the Administration Guide for Red Hat Ceph Storage 3.
Upgrading the Ceph Object Gateway Nodes
This section describes steps to upgrade a Ceph Object Gateway node to a later version.
Red Hat recommends backing up the system before proceeding with these upgrade procedures.
Prerequisites
- Red Hat recommends putting a Ceph Object Gateway behind a load balancer, such as HAProxy. If you use a load balancer, remove the Ceph Object Gateway from the load balancer once no requests are being served.
If you use a custom name for the region pool, specified in the rgw_region_root_pool parameter, add the rgw_zonegroup_root_pool parameter to the [global] section of the Ceph configuration file. Set the value of rgw_zonegroup_root_pool to be the same as rgw_region_root_pool, for example:

[global]
rgw_zonegroup_root_pool = .us.rgw.root
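With both parameters present, the [global] section would look like this (.us.rgw.root is the example pool name from above):

[global]
rgw_region_root_pool = .us.rgw.root
rgw_zonegroup_root_pool = .us.rgw.root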
Procedure
Do the following steps on each Ceph Object Gateway node in the storage cluster. Upgrade only one node at a time.
If you used online repositories to install Red Hat Ceph Storage, disable the Red Hat Ceph Storage 2 repositories.
Comment out the following lines in the /etc/apt/sources.list and /etc/apt/sources.list.d/ceph.list files:

# deb https://<customer_name>:<customer_password>@rhcs.download.redhat.com/ubuntu/2-updates/Installer
# deb https://<customer_name>:<customer_password>@rhcs.download.redhat.com/ubuntu/2-updates/Tools

Remove the following files from the /etc/apt/sources.list.d/ directory:

# rm /etc/apt/sources.list.d/Installer.list
# rm /etc/apt/sources.list.d/Tools.list
Enable the Red Hat Ceph Storage 3 Tools repository:

$ sudo bash -c 'umask 0077; echo deb https://customername:customerpasswd@rhcs.download.redhat.com/3-updates/Tools $(lsb_release -sc) main | tee /etc/apt/sources.list.d/Tools.list'
$ sudo bash -c 'wget -O - https://www.redhat.com/security/fd431d51.txt | apt-key add -'
$ sudo apt-get update

Stop the Ceph Object Gateway process (ceph-radosgw):

$ sudo stop radosgw id=rgw.<hostname>

Replace <hostname> with the name of the Ceph Object Gateway host, for example gateway-node:

$ sudo stop radosgw id=rgw.gateway-node

Update the ceph-radosgw package:

$ sudo apt-get update
$ sudo apt-get dist-upgrade
$ sudo apt-get install radosgw

Change the owner and group permissions on the newly created /var/lib/ceph/radosgw/ and /var/log/ceph/ directories and their content to ceph:

# chown -R ceph:ceph /var/lib/ceph/radosgw
# chown -R ceph:ceph /var/log/ceph

Remove packages that are no longer needed:
$ sudo apt-get purge ceph

Note: The ceph package is now a meta-package. Only the ceph-mon, ceph-osd, and ceph-radosgw packages are required on the Monitor, OSD, and Ceph Object Gateway nodes respectively.

Enable the ceph-radosgw process:

$ sudo systemctl enable ceph-radosgw.target
$ sudo systemctl enable ceph-radosgw@rgw.<hostname>

Replace <hostname> with the name of the Ceph Object Gateway host, for example gateway-node:

$ sudo systemctl enable ceph-radosgw.target
$ sudo systemctl enable ceph-radosgw@rgw.gateway-node

Reboot the Ceph Object Gateway node:

# shutdown -r now

- If you use a load balancer, add the Ceph Object Gateway node back to the load balancer (see the check below).
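Before adding the node back to the load balancer, you can check that the gateway is serving requests again. The port is an assumption; use whatever port your rgw_frontends setting binds. An anonymous GET on the root returns a ListAllMyBucketsResult XML document when the gateway is healthy:

$ curl http://gateway-node:8080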
Upgrading a Ceph Client Node
Ceph clients are:
- Ceph Block Devices
- OpenStack Nova compute nodes
- QEMU/KVM hypervisors
- Any custom application that uses the Ceph client-side libraries
Red Hat recommends that all Ceph clients run the same version as the Ceph storage cluster.
Prerequisites
- Stop all I/O requests against a Ceph client node while upgrading the packages to prevent unexpected errors from occurring
Procedure
If you installed Red Hat Ceph Storage 2 clients by using software repositories, disable the repositories:
If the following lines exist in the /etc/apt/sources.list or /etc/apt/sources.list.d/ceph.list files, comment out the online repositories for Red Hat Ceph Storage 2 by adding a hash sign (#) to the beginning of the line:

deb https://<customer_name>:<customer_password>@rhcs.download.redhat.com/ubuntu/2-updates/Installer
deb https://<customer_name>:<customer_password>@rhcs.download.redhat.com/ubuntu/2-updates/Tools

Remove the following files from the /etc/apt/sources.list.d/ directory:

Installer.list
Tools.list

Note: Remove any reference to Red Hat Ceph Storage 2 in the APT source files.
On the client node, enable the Red Hat Ceph Storage 3 Tools repository:

$ sudo bash -c 'umask 0077; echo deb https://customername:customerpasswd@rhcs.download.redhat.com/3-updates/Tools $(lsb_release -sc) main | tee /etc/apt/sources.list.d/Tools.list'
$ sudo bash -c 'wget -O - https://www.redhat.com/security/fd431d51.txt | apt-key add -'
$ sudo apt-get update

On the client node, update the ceph-common package:

$ sudo apt-get install ceph-common
Restart any application that depends on the Ceph client-side libraries after upgrading the ceph-common package.
If you are upgrading OpenStack Nova compute nodes that have running QEMU/KVM instances or use a dedicated QEMU/KVM client, stop and start the QEMU/KVM instance because restarting the instance does not work in this case.
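For a guest managed directly with libvirt, a stop and start is a full power-off followed by a fresh start, not a reboot; the domain name instance-0001 is a placeholder, and Nova-managed instances should instead be stopped and started through Nova:

$ sudo virsh shutdown instance-0001    # graceful power-off; not 'virsh reboot'
$ sudo virsh start instance-0001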