Upgrading OpenStack
Upgrading Red Hat Enterprise Linux OpenStack Platform
Chapter 1. Introduction
1.1. Upgrade Methods Comparison
| Method | Description | Pros | Cons |
|---|---|---|---|
| All at Once | In this method, you take down all of the OpenStack services at the same time, do the upgrade, then bring all services back up after the upgrade process is complete. For more information, see Chapter 3, Upgrade OpenStack All at Once. | This upgrade process is simple. Because everything is down, no orchestration is required. Although services will be down, VM workloads can be kept running if there is no requirement to move from one version of Red Hat Enterprise Linux to another (that is, from v7.0 to v7.1). | All of your services will be unavailable at the same time. In a large environment, the upgrade can result in potentially extensive downtime while you wait for database-schema upgrades to complete. Downtime can be mitigated by proper dry runs of your upgrade procedure on a test environment, as well as by scheduling a specific downtime window for the upgrade. |
| Service by Service with Live Compute Upgrade | This method is a variation of the service-by-service upgrade, with a change in how the Compute service is upgraded. With this method, you can take advantage of Red Hat Enterprise Linux OpenStack Platform 7 features that allow you to run older compute nodes in parallel with upgraded compute nodes. For more information, see Chapter 4, Upgrade OpenStack by Updating Each Service Individually, with Live Compute. | This method minimizes interruptions to your Compute service, with only a few minutes for the smaller services, and a longer migration interval for the workloads moving to newly-upgraded Compute hosts. Existing workloads can run indefinitely, and you do not need to wait for a database migration. | Additional hardware resources may be required to bring up the Red Hat Enterprise Linux OpenStack Platform 7 (Kilo) Compute nodes. |
- Ensure you have subscribed to the correct channels for this release on all hosts.
- The upgrade will involve some service interruptions.
- Running instances will not be affected by the upgrade process unless you either reboot a Compute node or explicitly shut down an instance.
- To upgrade OpenStack Networking, you will need to set the correct `libvirt_vif_driver` in `/etc/nova/nova.conf`, as the old hybrid driver is now deprecated. To do so, run the following on your Compute API host:

```
# openstack-config --set /etc/nova/nova.conf \
  DEFAULT libvirt_vif_driver nova.virt.libvirt.vif.LibvirtGenericVIFDriver
```
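  To confirm the option was set as expected, you can read the value back with the same utility (an optional check; `openstack-config` also supports `--get`):

```
# openstack-config --get /etc/nova/nova.conf DEFAULT libvirt_vif_driver
nova.virt.libvirt.vif.LibvirtGenericVIFDriver
```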
Warning
The following upgrade paths are not supported:
- Upgrading any Beta release of Red Hat Enterprise Linux OpenStack Platform to any supported release (for example, 6 or 7).
- Upgrading Compute Networking (nova-networking) to OpenStack Networking (neutron) in Red Hat Enterprise Linux OpenStack Platform 7. The only supported networking upgrade is between versions of OpenStack Networking (neutron) from Red Hat Enterprise Linux OpenStack Platform version 6 to version 7.
Chapter 2. Prerequisites
Warning
- Red Hat Enterprise Linux OpenStack Platform 4 (Havana) -- rhel-6-server-openstack-4.0-rpms
- Red Hat Enterprise Linux OpenStack Platform 5 (Icehouse) -- rhel-7-server-openstack-5.0-rpms
- Red Hat Enterprise Linux OpenStack Platform 6 (Juno) -- rhel-7-server-openstack-6.0-rpms
Note
The Red Hat Common channel provides packages that Red Hat Enterprise Linux OpenStack Platform depends on, such as `cloud-init`. Enable it on all hosts:

```
# subscription-manager repos --enable=rhel-7-server-rh-common-rpms
```
2.1. Configure Content Delivery Network (CDN) Channels

Use `subscription-manager` to enable and disable the correct channels.

To enable a channel:

```
# subscription-manager repos --enable=[reponame]
```

To disable a channel:

```
# subscription-manager repos --disable=[reponame]
```
The following tables outline the channels for Red Hat Enterprise Linux 7.

| Channel | Repository Name |
|---|---|
| Red Hat Enterprise Linux 7 Server (RPMS) | rhel-7-server-rpms |
| Red Hat OpenStack 7.0 for Server 7 (RPMS) | rhel-7-server-openstack-7.0-rpms |
| Red Hat Enterprise Linux 7 Server - RH Common (RPMs) | rhel-7-server-rh-common-rpms |
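For example, all three channels can be enabled in a single invocation, because `subscription-manager` accepts repeated `--enable` options:

```
# subscription-manager repos --enable=rhel-7-server-rpms \
  --enable=rhel-7-server-openstack-7.0-rpms \
  --enable=rhel-7-server-rh-common-rpms
```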
The following table outlines optional channels for Red Hat Enterprise Linux 7.

| Channel | Repository Name |
|---|---|
| Red Hat Enterprise Linux 7 Server - Optional | rhel-7-server-optional-rpms |
The following table outlines the channels for the Red Hat Enterprise Linux OpenStack Platform Director.

| Channel | Repository Name |
|---|---|
| Red Hat Enterprise Linux OpenStack Platform Director 7.0 (RPMs) | rhel-7-server-openstack-7.0-director-rpms |
| Red Hat Enterprise Linux 7 Server (RPMS) | rhel-7-server-rpms |
| Red Hat Software Collections RPMs for Red Hat Enterprise Linux 7 Server | rhel-server-rhscl-7-rpms |
| Images on CDN (Optional) | rhel-7-server-openstack-7.0-files |
| Red Hat Enterprise Linux OpenStack Platform 7.0 Operational Tools | rhel-7-server-openstack-7.0-optools-rpms |
The following table outlines the channels you must disable to ensure Red Hat Enterprise Linux OpenStack Platform 7 functions correctly.

| Channel | Repository Name |
|---|---|
| Red Hat CloudForms Management Engine | "cf-me-*" |
| Red Hat CloudForms Tools for RHEL 6 | "rhel-6-server-cf-*" |
| Red Hat Enterprise Virtualization | "rhel-6-server-rhev*" |
| Red Hat Enterprise Linux 6 Server - Extended Update Support | "*-eus-rpms" |
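All four patterns can be disabled in one pass; recent versions of `subscription-manager` accept wildcards and repeated `--disable` options:

```
# subscription-manager repos --disable="cf-me-*" \
  --disable="rhel-6-server-cf-*" \
  --disable="rhel-6-server-rhev*" \
  --disable="*-eus-rpms"
```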
Chapter 3. Upgrade OpenStack All at Once
3.1. Upgrade All OpenStack Services Simultaneously
Procedure 3.1. Upgrading OpenStack components on a host

- Install the yum repository for Red Hat Enterprise Linux OpenStack Platform 7 (Kilo).
- Check that the `openstack-utils` package is installed:

```
# yum install openstack-utils
```

- Take down all OpenStack services on all the nodes. This step depends on how your services are distributed among your nodes.
  - In a non-HA environment: to stop all the OpenStack services running on a node, log in to the node and run:

```
# openstack-service stop
```
  - In an HA environment:
    - To stop all the OpenStack services running on a node, log in to the node and run:

```
# openstack-service stop
```

    - Disable all Pacemaker-managed resources by setting the `stop-all-resources` property on the cluster. Run the following on a single member of your Pacemaker cluster:

```
# pcs property set stop-all-resources=true
```

      Then wait until the output of `pcs status` shows that all resources have stopped.
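      Rather than re-running `pcs status` by hand, a small polling loop can do the waiting (a sketch; it assumes that a resource still running is the only thing reported as Started in the output):

```
# while pcs status | grep -q 'Started'; do sleep 10; done
```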
- Perform a complete upgrade of all packages, and then flush expired tokens in the Identity service (this might decrease the time required to synchronize the database):

```
# yum upgrade
# keystone-manage token_flush
```
- Perform the necessary configuration updates on each of your services.
  - Identity service
    In the RHEL OpenStack Platform 7 (Kilo) release, the location of the token persistence backends has changed. You need to update the `driver` option in the `[token]` section of `keystone.conf`. To do this, replace any instance of `keystone.token.backends` with `keystone.token.persistence.backends`:

```
# sed -i 's/keystone.token.backends/keystone.token.persistence.backends/g' \
  /etc/keystone/keystone.conf
```

    Package updates may include new `systemd` unit files, so confirm that `systemd` is aware of any updated files:

```
# systemctl daemon-reload
```
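    To confirm the substitution took effect, a quick check of the file (illustrative):

```
# grep -n 'keystone.token' /etc/keystone/keystone.conf
```

    Any remaining reference to `keystone.token.backends` in the output points to a line the `sed` command missed.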
  - OpenStack Networking service
    Once you have completed upgrading the OpenStack Networking service, you need to edit the rootwrap `dhcp.filters` configuration file. To do so, in the `/usr/share/neutron/rootwrap/dhcp.filters` file, replace the value of `dnsmasq`. For example, replace:

```
dnsmasq: EnvFilter, env, root, CONFIG_FILE=, NETWORK_ID=, dnsmasq
```

    with:

```
dnsmasq: CommandFilter, dnsmasq, root
```
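    The same edit can be scripted as a one-liner (a sketch; keep a backup of the original file):

```
# cp /usr/share/neutron/rootwrap/dhcp.filters{,.orig}
# sed -i 's/^dnsmasq: EnvFilter.*/dnsmasq: CommandFilter, dnsmasq, root/' \
  /usr/share/neutron/rootwrap/dhcp.filters
```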
- Upgrade the database schema for each service that uses the database. To do so, log in to the node hosting the service and run:

```
# openstack-db --service SERVICENAME --update
```

  Use the service's project name as the SERVICENAME. For example, to upgrade the database schema of the Identity service:

```
# openstack-db --service keystone --update
```

  Table 3.1. Project name of each OpenStack service that uses the database

  | Service | Project name |
  |---|---|
  | Identity | keystone |
  | Block Storage | cinder |
  | Image Service | glance |
  | Compute | nova |
  | Networking | neutron |
  | Orchestration | heat |

  Certain services require additional database maintenance as part of the Juno to Kilo upgrade that is not covered by the `openstack-db` command (a scripted variant of the per-service updates appears after these notes):
  - Identity service
    Earlier versions of the installer may not have configured your system to automatically purge expired Keystone tokens. It is possible that your token table has a large number of expired entries, which can dramatically increase the time it takes to complete the database schema upgrade. You can alleviate this problem by running the following command before beginning the Keystone database upgrade process:

```
# keystone-manage token_flush
```

    This will flush expired tokens from the database. You should arrange to run this command periodically (for example, daily) using `cron`.
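    For instance, a `cron` drop-in along these lines schedules a daily flush (the file name and schedule are illustrative):

```
# cat <<'EOF' > /etc/cron.d/keystone-token-flush
0 4 * * * keystone /usr/bin/keystone-manage token_flush >/dev/null 2>&1
EOF
```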
  - Compute
    After fully upgrading to Kilo (that is, all nodes running Kilo), you should start a background migration of flavor information. Kilo conductor nodes will do this on the fly when necessary, but the rest of the idle data needs to be migrated in the background. Run the following command as the `nova` user:

```
# runuser -u nova -- nova-manage db migrate_flavor_data
```
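  If every service runs on this host, the per-service schema updates described above can also be scripted (a sketch; trim the list to the services you actually deploy):

```
for svc in keystone cinder glance nova neutron heat; do
    openstack-db --service "$svc" --update
done
```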
- Review the resulting configuration files. The upgraded packages will have installed `.rpmnew` files appropriate to the Red Hat Enterprise Linux OpenStack Platform 7 version of the service.
  New versions of OpenStack services may deprecate certain configuration options. You should also review your OpenStack logs for any deprecation warnings, because these may cause problems during a future upgrade. For more information on the new, updated, and deprecated configuration options for each service, see the Configuration Reference available from the Red Hat Enterprise Linux OpenStack Platform Documentation Suite.
- If the package upgrades you performed require a reboot (for example, if a new kernel was installed as part of the upgrade), reboot the affected hosts now while the OpenStack services are still disabled.
- In a non-HA environment: to restart the OpenStack services, run the following command on each node:

```
# openstack-service start
```

- In an HA environment:
  - Allow Pacemaker to restart your resources by resetting the `stop-all-resources` property. On a single member of your Pacemaker cluster, run:

```
# pcs property set stop-all-resources=false
```

    Then wait until the output of `pcs status` shows that all resources have started (this may take several minutes).
  - Restart OpenStack services on the compute nodes. On each compute node, run:

```
# openstack-service start
```
Chapter 4. Upgrade OpenStack by Updating Each Service Individually, with Live Compute
4.1. Upgrading OpenStack by Updating Each Service Individually, with Live Compute in a Non-HA Environment
- Pre-upgrade tasks: on all of your hosts:
  - Install the yum repository for Red Hat Enterprise Linux OpenStack Platform 7 (Kilo).
  - Upgrade the `openstack-selinux` package, if available:

```
# yum upgrade openstack-selinux
```

    This is necessary to ensure that the upgraded services will run correctly on a system with SELinux enabled.
- Upgrade each of your services: the following steps provide specific instructions for each service, and the order in which they should be upgraded.
  - Identity (keystone)
    Earlier versions of the installer may not have configured your system to automatically purge expired Keystone tokens, so it is possible that your token table has a large number of expired entries. This can dramatically increase the time it takes to complete the database schema upgrade.
    To flush expired tokens from the database and alleviate the problem, the `keystone-manage` command can be used before running the Identity database upgrade. You can arrange to run this command periodically (for example, daily) using `cron`.
    On your Identity host, run:

```
# openstack-service stop keystone
# yum -d1 -y upgrade \*keystone\*
# keystone-manage token_flush
# openstack-db --service keystone --update
# openstack-service start keystone
```

  - Object Storage (swift)
    On your Object Storage hosts, run:

```
# openstack-service stop swift
# yum -d1 -y upgrade \*swift\*
# openstack-service start swift
```

  - Image Service (glance)
    On your Image Service host, run:

```
# openstack-service stop glance
# yum -d1 -y upgrade \*glance\*
# openstack-db --service glance --update
# openstack-service start glance
```

  - Block Storage (cinder)
    On your Block Storage host, run:

```
# openstack-service stop cinder
# yum -d1 -y upgrade \*cinder\*
# openstack-db --service cinder --update
# openstack-service start cinder
```

  - Orchestration (heat)
    On your Orchestration host, run:

```
# openstack-service stop heat
# yum -d1 -y upgrade \*heat\*
# openstack-db --service heat --update
# openstack-service start heat
```

  - Telemetry (ceilometer)
    - On all nodes hosting Telemetry component services, run:

```
# openstack-service stop ceilometer
# yum -d1 -y upgrade \*ceilometer\*
```

    - On the controller node, where the database is installed, run:

```
# ceilometer-dbsync
```

      This command allows you to configure MySQL as a back-end for the Telemetry service. For a list of Telemetry component services, refer to Launch the Telemetry API and Agents.
    - After completing the package upgrade, restart the Telemetry service by running the following command on all nodes hosting Telemetry component services:

```
# openstack-service start ceilometer
```
  - Compute (nova)
    - If you are performing a rolling upgrade of your compute hosts, you need to set explicit API version limits to ensure compatibility between your Juno and Kilo environments. Before starting Kilo controller or compute services, you need to set the `compute` option in the `[upgrade_levels]` section of `nova.conf` to `juno`:

```
# crudini --set /etc/nova/nova.conf upgrade_levels compute juno
```

      You need to make this change on your controllers and on your compute hosts. You should undo this operation after upgrading all of your compute hosts to OpenStack Kilo.
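      Applied across several machines, that might look like the following (a sketch; the hostnames are placeholders for your own controllers and compute hosts):

```
for host in controller0 compute0 compute1; do    # hypothetical hostnames
    ssh "$host" crudini --set /etc/nova/nova.conf upgrade_levels compute juno
done
```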
    - On your Compute host, run:

```
# openstack-service stop nova
# yum -d1 -y upgrade \*nova\*
# openstack-db --service nova --update
```

    - After fully upgrading to Kilo (that is, all nodes are running Kilo), you should start a background migration of flavor information. Kilo conductor nodes will do this on the fly when necessary, but the rest of the idle data needs to be migrated in the background. Run the following command as the `nova` user:

```
# runuser -u nova -- nova-manage db migrate_flavor_data
```

    - After you have upgraded all of your hosts to Kilo, you will want to remove the API limits configured in the previous step. On all of your hosts:

```
# crudini --del /etc/nova/nova.conf upgrade_levels compute
```

    - Restart the Compute service on all the compute hosts and controllers:

```
# openstack-service start nova
```
  - OpenStack Networking (neutron)
    - On your OpenStack Networking host, run:

```
# openstack-service stop neutron
# yum -d1 -y upgrade \*neutron\*
# openstack-db --service neutron --update
```

    - Once you have completed upgrading the OpenStack Networking service, you need to edit the rootwrap `dhcp.filters` configuration file. To do so, in the `/usr/share/neutron/rootwrap/dhcp.filters` file, replace the value of `dnsmasq`. For example, replace:

```
dnsmasq: EnvFilter, env, root, CONFIG_FILE=, NETWORK_ID=, dnsmasq
```

      with:

```
dnsmasq: CommandFilter, dnsmasq, root
```

    - Restart the OpenStack Networking service:

```
# openstack-service start neutron
```
  - Dashboard (horizon)
    On your Dashboard host, run:

```
# yum -y upgrade \*horizon\* \*openstack-dashboard\*
# yum -d1 -y upgrade \*horizon\* \*python-django\*
# systemctl restart httpd
```
- Post-upgrade tasks:
  - After completing all of your individual service upgrades, you should perform a complete package upgrade on all of your systems:

```
# yum upgrade
```

    This will ensure that all packages are up-to-date. You may want to schedule a restart of your OpenStack hosts at a future date in order to ensure that all running processes are using updated versions of the underlying binaries.
  - Review the resulting configuration files. The upgraded packages will have installed `.rpmnew` files appropriate to the Red Hat Enterprise Linux OpenStack Platform 7 version of the service.
    New versions of OpenStack services may deprecate certain configuration options. You should also review your OpenStack logs for any deprecation warnings, because these may cause problems during a future upgrade. For more information on the new, updated, and deprecated configuration options for each service, see the Configuration Reference available from the Red Hat Enterprise Linux OpenStack Platform Documentation Suite.
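    A quick way to locate the configuration files that need review is to search for the `.rpmnew` (and related `.rpmsave`) files left behind by the upgrade, for example:

```
# find /etc -name '*.rpmnew' -o -name '*.rpmsave'
```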
4.2. Upgrading OpenStack by Updating Each Service Individually, with Live Compute in an HA Environment
- Pre-upgrade tasks: on all of your hosts:
  - If you are running Puppet as configured by Staypuft, you must disable it:

```
# systemctl stop puppet
# systemctl disable puppet
```

    This ensures that the Staypuft-configured Puppet will not revert changes made as part of the upgrade process.
  - Install the yum repository for Red Hat Enterprise Linux OpenStack Platform 7 (Kilo).
  - Manually upgrade all the Python packages:

```
# yum upgrade python*
```

  - Upgrade the `openstack-selinux` package, if available:

```
# yum upgrade openstack-selinux
```

    This is necessary to ensure that the upgraded services will run correctly on a system with SELinux enabled.
- Service upgrades: upgrade each of your services. The following is a reasonable order in which to perform the upgrades on your controllers.
  Upgrade MariaDB: perform the following steps on each host running MariaDB. Complete the steps on one host before starting the process on another host.
  - Stop the service from running on the local node:

```
# pcs resource ban galera-master $(crm_node -n)
```

  - Wait until `pcs status` shows that the service is no longer running on the local node. This may take a few minutes. The local node will first transition to slave mode:

```
Master/Slave Set: galera-master [galera]
    Masters: [ pcmk-mac525400aeb753 pcmk-mac525400bab8ae ]
    Slaves: [ pcmk-mac5254004bd62f ]
```

    It will eventually transition to stopped:

```
Master/Slave Set: galera-master [galera]
    Masters: [ pcmk-mac525400aeb753 pcmk-mac525400bab8ae ]
    Stopped: [ pcmk-mac5254004bd62f ]
```

  - Upgrade the relevant packages:

```
# yum upgrade '*mariadb*' '*galera*'
```

  - Allow Pacemaker to schedule the `galera` resource on the local node:

```
# pcs resource clear galera-master
```

  - Wait until `pcs status` shows that the galera resource is running on the local node as a master. The output from `pcs status` should include something like:

```
Master/Slave Set: galera-master [galera]
    Masters: [ pcmk-mac5254004bd62f pcmk-mac525400aeb753 pcmk-mac525400bab8ae ]
```
  Upgrade MongoDB:
  - Remove the `mongod` resource from Pacemaker's control:

```
# pcs resource unmanage mongod-clone
```

  - Stop the service on all of your controllers. On each controller, run:

```
# systemctl stop mongod
```

  - Upgrade the relevant packages:

```
# yum upgrade 'mongodb*' 'python-pymongo*'
```

  - Reload `systemd` to account for updated unit files:

```
# systemctl daemon-reload
```

  - Restart the `mongod` service on your controllers by running, on each controller:

```
# systemctl start mongod
```

  - Clean up the resource:

```
# pcs resource cleanup mongod-clone
```

  - Return the resource to Pacemaker control:

```
# pcs resource manage mongod-clone
```

  - Wait until the output of `pcs status` shows that the above resources are running.
  Upgrade the Identity service (keystone):
  - Remove the Identity service from Pacemaker's control:

```
# pcs resource unmanage keystone-clone
```

  - Stop the Identity service by running the following on each of your controllers:

```
# systemctl stop openstack-keystone
```

  - Upgrade the relevant packages:

```
# yum upgrade 'openstack-keystone*' 'python-keystone*'
```

  - Reload `systemd` to account for updated unit files:

```
# systemctl daemon-reload
```

  - In the RHEL OpenStack Platform 7 (Kilo) release, the location of the token persistence backends has changed. You need to update the `driver` option in the `[token]` section of `keystone.conf`. To do this, replace any instance of `keystone.token.backends` with `keystone.token.persistence.backends`:

```
# sed -i 's/keystone.token.backends/keystone.token.persistence.backends/g' \
  /etc/keystone/keystone.conf
```

  - Earlier versions of the installer may not have configured your system to automatically purge expired Keystone tokens, so it is possible that your token table has a large number of expired entries. This can dramatically increase the time it takes to complete the database schema upgrade. To flush expired tokens from the database and alleviate the problem, run `keystone-manage` before the Identity database upgrade (you can arrange to run this command periodically, for example daily, using `cron`):

```
# keystone-manage token_flush
```

  - Update the Identity service database schema:

```
# openstack-db --service keystone --update
```

  - Restart the service by running the following on each of your controllers:

```
# systemctl start openstack-keystone
```

  - Clean up the Identity service using Pacemaker:

```
# pcs resource cleanup keystone-clone
```

  - Return the resource to Pacemaker control:

```
# pcs resource manage keystone-clone
```

  - Wait until the output of `pcs status` shows that the above resources are running.
  Upgrade the Image service (glance):
  - Stop the Image service resources in Pacemaker:

```
# pcs resource disable glance-registry-clone
# pcs resource disable glance-api-clone
```

  - Wait until the output of `pcs status` shows that both services have stopped running.
  - Upgrade the relevant packages:

```
# yum upgrade 'openstack-glance*' 'python-glance*'
```

  - Reload `systemd` to account for updated unit files:

```
# systemctl daemon-reload
```

  - Update the Image service database schema:

```
# openstack-db --service glance --update
```

  - Clean up the Image service using Pacemaker:

```
# pcs resource cleanup glance-api-clone
# pcs resource cleanup glance-registry-clone
```

  - Restart the Image service resources in Pacemaker:

```
# pcs resource enable glance-api-clone
# pcs resource enable glance-registry-clone
```

  - Wait until the output of `pcs status` shows that the above resources are running.
  Upgrade the Block Storage service (cinder):
  - Stop all Block Storage service resources in Pacemaker:

```
# pcs resource disable cinder-api-clone
# pcs resource disable cinder-scheduler-clone
# pcs resource disable cinder-volume
```

  - Wait until the output of `pcs status` shows that the above services have stopped running.
  - Upgrade the relevant packages:

```
# yum upgrade 'openstack-cinder*' 'python-cinder*'
```

  - Reload `systemd` to account for updated unit files:

```
# systemctl daemon-reload
```

  - Update the Block Storage service database schema:

```
# openstack-db --service cinder --update
```

  - Clean up the Block Storage service using Pacemaker:

```
# pcs resource cleanup cinder-volume
# pcs resource cleanup cinder-scheduler-clone
# pcs resource cleanup cinder-api-clone
```

  - Restart all Block Storage service resources in Pacemaker:

```
# pcs resource enable cinder-volume
# pcs resource enable cinder-scheduler-clone
# pcs resource enable cinder-api-clone
```

  - Wait until the output of `pcs status` shows that the above resources are running.
  Upgrade Orchestration (heat):
  - Stop Orchestration resources in Pacemaker:

```
# pcs resource disable heat-api-clone
# pcs resource disable heat-api-cfn-clone
# pcs resource disable heat-api-cloudwatch-clone
# pcs resource disable heat
```

  - Wait until the output of `pcs status` shows that the above services have stopped running.
  - Upgrade the relevant packages:

```
# yum upgrade 'openstack-heat*' 'python-heat*'
```

  - Reload `systemd` to account for updated unit files:

```
# systemctl daemon-reload
```

  - Update the Orchestration database schema:

```
# openstack-db --service heat --update
```

  - Clean up the Orchestration service using Pacemaker:

```
# pcs resource cleanup heat
# pcs resource cleanup heat-api-cloudwatch-clone
# pcs resource cleanup heat-api-cfn-clone
# pcs resource cleanup heat-api-clone
```

  - Restart Orchestration resources in Pacemaker:

```
# pcs resource enable heat
# pcs resource enable heat-api-cloudwatch-clone
# pcs resource enable heat-api-cfn-clone
# pcs resource enable heat-api-clone
```

  - Wait until the output of `pcs status` shows that the above resources are running.
  Upgrade Telemetry (ceilometer):
  - Stop all Telemetry resources in Pacemaker.
  - Wait until the output of `pcs status` shows that the above services have stopped running.
  - Upgrade the relevant packages:

```
# yum upgrade 'openstack-ceilometer*' 'python-ceilometer*'
```

  - Reload `systemd` to account for updated unit files:

```
# systemctl daemon-reload
```

  - If you are using the MySQL backend for Telemetry, update the Telemetry database schema:

```
# openstack-db --service ceilometer --update
```

    Note: This step is not necessary if you are using the MongoDB backend.
  - Clean up the Telemetry service using Pacemaker.
  - Restart all Telemetry resources in Pacemaker.
  - Wait until the output of `pcs status` shows that the above resources are running.
  Upgrade Compute (nova):
  - Stop all Compute resources in Pacemaker:

```
# pcs resource disable openstack-nova-novncproxy-clone
# pcs resource disable openstack-nova-consoleauth-clone
# pcs resource disable openstack-nova-conductor-clone
# pcs resource disable openstack-nova-api-clone
# pcs resource disable openstack-nova-scheduler-clone
```

  - Wait until the output of `pcs status` shows that the above services have stopped running.
  - Upgrade the relevant packages:

```
# yum upgrade 'openstack-nova*' 'python-nova*'
```

  - Reload `systemd` to account for updated unit files:

```
# systemctl daemon-reload
```

  - Update the Compute database schema:

```
# openstack-db --service nova --update
```

    After fully upgrading to Kilo (that is, all nodes are running Kilo), you should start a background migration of flavor information. Kilo conductor nodes will do this on the fly when necessary, but the rest of the idle data needs to be migrated in the background. Run the following command as the `nova` user:

```
# runuser -u nova -- nova-manage db migrate_flavor_data
```

  - If you are performing a rolling upgrade of your compute hosts, you need to set explicit API version limits to ensure compatibility between your Juno and Kilo environments. Before starting Kilo controller or compute services, you need to set the `compute` option in the `[upgrade_levels]` section of `nova.conf` to `juno`:

```
# crudini --set /etc/nova/nova.conf upgrade_levels compute juno
```

    You will need to first unmanage the Compute resources by running `pcs resource unmanage` on one of your controllers:

```
# pcs resource unmanage openstack-nova-novncproxy-clone
# pcs resource unmanage openstack-nova-consoleauth-clone
# pcs resource unmanage openstack-nova-conductor-clone
# pcs resource unmanage openstack-nova-api-clone
# pcs resource unmanage openstack-nova-scheduler-clone
```

    Restart all the services on all controllers:

```
# openstack-service restart nova
```

    You should return control to Pacemaker after upgrading all of your compute hosts to OpenStack Kilo:

```
# pcs resource manage openstack-nova-scheduler-clone
# pcs resource manage openstack-nova-api-clone
# pcs resource manage openstack-nova-conductor-clone
# pcs resource manage openstack-nova-consoleauth-clone
# pcs resource manage openstack-nova-novncproxy-clone
```

  - Clean up all Compute resources in Pacemaker:

```
# pcs resource cleanup openstack-nova-scheduler-clone
# pcs resource cleanup openstack-nova-api-clone
# pcs resource cleanup openstack-nova-conductor-clone
# pcs resource cleanup openstack-nova-consoleauth-clone
# pcs resource cleanup openstack-nova-novncproxy-clone
```

  - Restart all Compute resources in Pacemaker:

```
# pcs resource enable openstack-nova-scheduler-clone
# pcs resource enable openstack-nova-api-clone
# pcs resource enable openstack-nova-conductor-clone
# pcs resource enable openstack-nova-consoleauth-clone
# pcs resource enable openstack-nova-novncproxy-clone
```

  - Wait until the output of `pcs status` shows that the above resources are running.
  Upgrade OpenStack Networking (neutron):
  - Prevent Pacemaker from triggering the OpenStack Networking cleanup scripts:

```
# pcs resource unmanage neutron-ovs-cleanup-clone
# pcs resource unmanage neutron-netns-cleanup-clone
```

  - Stop OpenStack Networking resources in Pacemaker:

```
# pcs resource disable neutron-server-clone
# pcs resource disable neutron-openvswitch-agent-clone
# pcs resource disable neutron-dhcp-agent-clone
# pcs resource disable neutron-l3-agent-clone
# pcs resource disable neutron-metadata-agent-clone
```

  - Upgrade the relevant packages:

```
# yum upgrade 'openstack-neutron*' 'python-neutron*'
```

  - Install packages for the advanced OpenStack Networking services enabled in the `neutron.conf` file, for example, `openstack-neutron-vpnaas`, `openstack-neutron-fwaas`, and `openstack-neutron-lbaas`:

```
# yum install openstack-neutron-vpnaas
# yum install openstack-neutron-fwaas
# yum install openstack-neutron-lbaas
```

    Installing these packages will create the corresponding configuration files.
  - For the VPNaaS and LBaaS service entries in the `neutron.conf` file, copy the `service_provider` entries to the corresponding `neutron-*aas.conf` file located in `/etc/neutron`, and comment out these entries in the `neutron.conf` file. For the FWaaS service entry, the `service_provider` parameters should remain in the `neutron.conf` file.
  - On every node that runs the LBaaS agents, install the `openstack-neutron-lbaas` package:

```
# yum install openstack-neutron-lbaas
```

  - Reload `systemd` to account for updated unit files:

```
# systemctl daemon-reload
```

  - Update the OpenStack Networking database schema:

```
# openstack-db --service neutron --update
```

  - Once you have completed upgrading the OpenStack Networking service, you need to edit the rootwrap `dhcp.filters` configuration file. To do so, in the `/usr/share/neutron/rootwrap/dhcp.filters` file, replace the value of `dnsmasq`. For example, replace:

```
dnsmasq: EnvFilter, env, root, CONFIG_FILE=, NETWORK_ID=, dnsmasq
```

    with:

```
dnsmasq: CommandFilter, dnsmasq, root
```

  - Clean up OpenStack Networking resources in Pacemaker:

```
# pcs resource cleanup neutron-metadata-agent-clone
# pcs resource cleanup neutron-l3-agent-clone
# pcs resource cleanup neutron-dhcp-agent-clone
# pcs resource cleanup neutron-openvswitch-agent-clone
# pcs resource cleanup neutron-server-clone
```

  - Restart OpenStack Networking resources in Pacemaker:

```
# pcs resource enable neutron-metadata-agent-clone
# pcs resource enable neutron-l3-agent-clone
# pcs resource enable neutron-dhcp-agent-clone
# pcs resource enable neutron-openvswitch-agent-clone
# pcs resource enable neutron-server-clone
```

  - Return the cleanup agents to Pacemaker control:

```
# pcs resource manage neutron-ovs-cleanup-clone
# pcs resource manage neutron-netns-cleanup-clone
```

  - Wait until the output of `pcs status` shows that the above resources are running.
  Upgrade the Dashboard (horizon):
  - Stop the Dashboard resource in Pacemaker:

```
# pcs resource disable horizon-clone
```

  - Wait until the output of `pcs status` shows that the service has stopped running.
  - Upgrade the relevant packages:

```
# yum upgrade httpd 'openstack-dashboard*' 'python-django*'
```

  - Reload `systemd` to account for updated unit files:

```
# systemctl daemon-reload
```

  - Correct the Dashboard configuration.
    Fix the Apache configuration: the `openstack-dashboard` package installs the `/etc/httpd/conf.d/openstack-dashboard.conf` file, but the Staypuft installer replaces this with the `/etc/httpd/conf.d/15-horizon_vhost.conf` file. After upgrading horizon, you will have the following configuration files:
    - 15-horizon_vhost.conf
    - openstack-dashboard.conf
    - openstack-dashboard.conf.rpmnew
    Ensure you make the following changes:
    - Remove the `openstack-dashboard.conf.rpmnew` file:

```
# rm openstack-dashboard.conf.rpmnew
```

    - Modify the `15-horizon_vhost.conf` file by replacing:

```
Alias /static "/usr/share/openstack-dashboard/static"
```

      with:

```
Alias /dashboard/static "/usr/share/openstack-dashboard/static"
```
    Fix the Dashboard configuration: the `openstack-dashboard` package installs the `/etc/openstack-dashboard/local_settings` file. After an upgrade, you will find the following configuration files:
    - /etc/openstack-dashboard/local_settings
    - /etc/openstack-dashboard/local_settings.rpmnew
    Ensure you make the following changes:
    - Back up your existing `local_settings` file:

```
# cp local_settings local_settings.old
```

    - Rename the `local_settings.rpmnew` file to `local_settings`:

```
# mv local_settings.rpmnew local_settings
```

    - Replace the following configuration options with the corresponding values from your `local_settings.old` file (a sketch for extracting the old values follows these Dashboard steps):
      - ALLOWED_HOSTS
      - SECRET_KEY
      - CACHES
      - OPENSTACK_KEYSTONE_URL
    - Restart the web server on all your controllers to apply all changes:

```
# service httpd restart
```

  - Clean up the Dashboard resource in Pacemaker:

```
# pcs resource cleanup horizon-clone
```

  - Restart the Dashboard resource in Pacemaker:

```
# pcs resource enable horizon-clone
```

  - Wait until the output of `pcs status` shows that the above resource is running.
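  As noted above, the old settings can be pulled out of the backup for comparison (illustrative; `CACHES` usually spans several lines, so only its first line will match):

```
# grep -nE '^(ALLOWED_HOSTS|SECRET_KEY|CACHES|OPENSTACK_KEYSTONE_URL)' \
  /etc/openstack-dashboard/local_settings.old
```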
  Upgrade the Compute hosts (nova): on each compute host:
  - Stop all OpenStack services on the host:

```
# openstack-service stop
```

  - Upgrade all packages:

```
# yum upgrade
```

  - If you are performing a rolling upgrade of your compute hosts, you need to set explicit API version limits to ensure compatibility between your Juno and Kilo environments. Before starting Kilo controller or compute services, you need to set the `compute` option in the `[upgrade_levels]` section of `nova.conf` to `juno`:

```
# crudini --set /etc/nova/nova.conf upgrade_levels compute juno
```

    You need to make this change on your controllers and on your compute hosts.
  - Start all OpenStack services on the host:

```
# openstack-service start
```

  - After you have upgraded all of your hosts to Kilo, you will want to remove the API limits configured in the previous step. On all of your hosts:

```
# crudini --del /etc/nova/nova.conf upgrade_levels compute
```
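  To verify that the version pin is gone on a given host, `crudini --get` can be used; it exits non-zero when the option is absent (an optional check):

```
# crudini --get /etc/nova/nova.conf upgrade_levels compute || echo "pin removed"
```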
  Post-upgrade tasks:
  - After completing all of your individual service upgrades, you should perform a complete package upgrade on all of your systems:

```
# yum upgrade
```

    This will ensure that all packages are up-to-date. You may want to schedule a restart of your OpenStack hosts at a future date in order to ensure that all running processes are using updated versions of the underlying binaries.
  - Review the resulting configuration files. The upgraded packages will have installed `.rpmnew` files appropriate to the Red Hat Enterprise Linux OpenStack Platform 7 version of the service.
    New versions of OpenStack services may deprecate certain configuration options. You should also review your OpenStack logs for any deprecation warnings, because these may cause problems during a future upgrade. For more information on the new, updated, and deprecated configuration options for each service, see the Configuration Reference available from the Red Hat Enterprise Linux OpenStack Platform Documentation Suite.