Chapter 7. Performing Tasks after Overcloud Creation
This chapter describes tasks that you perform after creating your overcloud.
7.1. Creating the Overcloud Tenant Network
The overcloud requires a Tenant network for instances. Source the overcloudrc file and create an initial Tenant network in Neutron. For example:
$ source ~/overcloudrc
$ openstack network create default
$ openstack subnet create default --network default --gateway 172.20.1.1 --subnet-range 172.20.0.0/16
This creates a basic Neutron network called default. The overcloud automatically assigns IP addresses from this network using an internal DHCP mechanism.
Confirm the created network with openstack network list:
$ openstack network list
+-----------------------+-------------+--------------------------------------+
| id                    | name        | subnets                              |
+-----------------------+-------------+--------------------------------------+
| 95fadaa1-5dda-4777... | default     | 7e060813-35c5-462c-a56a-1c6f8f4f332f |
+-----------------------+-------------+--------------------------------------+
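Optionally, you can also check the subnet details, such as the gateway and address range, with the subnet show command. A minimal check, assuming the default subnet created above:
$ openstack subnet show default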
7.2. Creating the Overcloud External Network
You need to create the External network on the overcloud so that you can assign floating IP addresses to instances.
Using a Native VLAN
This procedure assumes a dedicated interface or native VLAN for the External network.
Source the overcloudrc file and create an External network in Neutron. For example:
$ source ~/overcloudrc
$ openstack network create public --external --provider-network-type flat --provider-physical-network datacentre
$ openstack subnet create public --network public --dhcp --allocation-pool start=10.1.1.51,end=10.1.1.250 --gateway 10.1.1.1 --subnet-range 10.1.1.0/24
In this example, you create a network with the name public. The overcloud requires this specific name for the default floating IP pool. This is also important for the validation tests in Section 7.6, “Validating the Overcloud”.
This command also maps the network to the datacentre physical network. By default, datacentre maps to the br-ex bridge. Leave this option as the default unless you have used custom neutron settings during the overcloud creation.
Using a Non-Native VLAN
If you are not using the native VLAN, assign the network to a VLAN using the following commands:
$ source ~/overcloudrc
$ openstack network create public --external --provider-network-type vlan --provider-physical-network datacentre --provider-segment 104
$ openstack subnet create public --network public --dhcp --allocation-pool start=10.1.1.51,end=10.1.1.250 --gateway 10.1.1.1 --subnet-range 10.1.1.0/24
The --provider-segment value sets the provider:segmentation_id attribute, which defines the VLAN to use. In this case, the VLAN ID is 104.
Confirm the created network with openstack network list:
$ openstack network list
+-----------------------+-------------+--------------------------------------+
| id                    | name        | subnets                              |
+-----------------------+-------------+--------------------------------------+
| d474fe1f-222d-4e32... | public      | 01c5f621-1e0f-4b9d-9c30-7dc59592a52f |
+-----------------------+-------------+--------------------------------------+
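With the public network in place, floating IP addresses can later be allocated from this pool and attached to instances with openstack server add floating ip. A minimal illustration of allocating and listing a floating IP (no instance is required to allocate the address):
$ openstack floating ip create public
$ openstack floating ip list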
7.3. Creating Additional Floating IP Networks
Floating IP networks can use any bridge, not just br-ex, as long as you meet the following conditions:
- NeutronExternalNetworkBridge is set to "''" in your network environment file.
- You have mapped the additional bridge during deployment. For example, to map a new bridge called br-floating to the floating physical network:
$ openstack overcloud deploy --templates \
  -e /usr/share/openstack-tripleo-heat-templates/environments/network-isolation.yaml \
  -e ~/templates/network-environment.yaml \
  --neutron-bridge-mappings datacentre:br-ex,floating:br-floating
Create the Floating IP network after creating the overcloud:
$ openstack network create ext-net --external --provider-physical-network floating --provider-network-type vlan --provider-segment 105
$ openstack subnet create ext-subnet --network ext-net --dhcp --allocation-pool start=10.1.2.51,end=10.1.2.250 --gateway 10.1.2.1 --subnet-range 10.1.2.0/24
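To confirm that neutron accepted the additional physical network mapping, you can inspect the provider attributes of the new network; for example:
$ openstack network show ext-net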
7.4. Creating the Overcloud Provider Network
A provider network is a network attached physically to a network that exists outside of the deployed overcloud. This can be an existing infrastructure network or a network that provides external access directly to instances through routing instead of floating IPs.
When creating a provider network, you associate it with a physical network, which uses a bridge mapping. This is similar to floating IP network creation. You add the provider network to both the Controller and the Compute nodes because the Compute nodes attach VM virtual network interfaces directly to the attached network interface.
For example, if the desired provider network is a VLAN on the br-ex bridge, use the following command to add a provider network on VLAN 201:
$ openstack network create provider_network --provider-physical-network datacentre --provider-network-type vlan --provider-segment 201 --share
This command creates a shared network. Instead of specifying --share, you can specify a tenant so that only that tenant can access the network. If you mark a provider network as external, only the operator may create ports on that network.
Add a subnet to a provider network if you want neutron to provide DHCP services to the tenant instances:
$ openstack subnet create provider-subnet --network provider_network --dhcp --allocation-pool start=10.9.101.50,end=10.9.101.100 --gateway 10.9.101.254 --subnet-range 10.9.101.0/24
Other networks might require access externally through the provider network. In this situation, create a new router so that other networks can route traffic through the provider network:
$ openstack router create external
$ openstack router set --external-gateway provider_network external
Attach other networks to this router. For example, if you had a subnet called subnet1, you can attach it to the router with the following command:
$ openstack router add subnet external subnet1
This adds subnet1 to the routing table and allows traffic using subnet1 to route to the provider network.
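You can verify the router configuration, including its external gateway and attached interfaces; for example (the --router filter for port list depends on your client version):
$ openstack router show external
$ openstack port list --router external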
7.5. Creating a basic Overcloud flavor
Validation steps in this guide assume that your installation contains flavors. If you have not already created at least one flavor, use the following commands to create a basic set of default flavors that have a range of storage and processing capability:
$ openstack flavor create m1.tiny --ram 512 --disk 0 --vcpus 1
$ openstack flavor create m1.smaller --ram 1024 --disk 0 --vcpus 1
$ openstack flavor create m1.small --ram 2048 --disk 10 --vcpus 1
$ openstack flavor create m1.medium --ram 3072 --disk 10 --vcpus 2
$ openstack flavor create m1.large --ram 8192 --disk 10 --vcpus 4
$ openstack flavor create m1.xlarge --ram 8192 --disk 10 --vcpus 8
Command options
- ram
  Use the ram option to define the maximum RAM for the flavor.
- disk
  Use the disk option to define the hard disk space for the flavor.
- vcpus
  Use the vcpus option to define the quantity of virtual CPUs for the flavor.
Use openstack flavor create --help to learn more about the openstack flavor create command.
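To confirm that the flavors were created with the expected values, list them; for example:
$ openstack flavor list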
7.6. Validating the Overcloud
The overcloud uses the OpenStack Integration Test Suite (tempest) tool set to conduct a series of integration tests. This section provides information on preparations for running the integration tests. For full instruction on using the OpenStack Integration Test Suite, see the OpenStack Integration Test Suite Guide.
Before Running the Integration Test Suite
If running this test from the undercloud, ensure that the undercloud host has access to the overcloud’s Internal API network. For example, add a temporary VLAN on the undercloud host to access the Internal API network (ID: 201) using the 172.16.0.201/24 address:
$ source ~/stackrc
$ sudo ovs-vsctl add-port br-ctlplane vlan201 tag=201 -- set interface vlan201 type=internal
$ sudo ip link set dev vlan201 up
$ sudo ip addr add 172.16.0.201/24 dev vlan201
Before running the OpenStack Integration Test Suite, check that the heat_stack_owner role exists in your overcloud:
$ source ~/overcloudrc
$ openstack role list
+----------------------------------+------------------+
| ID                               | Name             |
+----------------------------------+------------------+
| 6226a517204846d1a26d15aae1af208f | swiftoperator    |
| 7c7eb03955e545dd86bbfeb73692738b | heat_stack_owner |
+----------------------------------+------------------+
If the role does not exist, create it:
$ openstack role create heat_stack_owner
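If you need to assign the heat_stack_owner role to a particular user, you can add it with the role add command. In this sketch, the demo user and project names are only examples:
$ openstack role add --project demo --user demo heat_stack_owner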
After Running the Integration Test Suite
After completing the validation, remove any temporary connections to the overcloud’s Internal API. In this example, use the following commands to remove the previously created VLAN on the undercloud:
$ source ~/stackrc
$ sudo ovs-vsctl del-port vlan201
7.7. Fencing the Controller Nodes
Fencing is the process of isolating a node to protect a cluster and its resources. Without fencing, a faulty node can cause data corruption in a cluster.
The director uses Pacemaker to provide a highly available cluster of Controller nodes. Pacemaker uses a process called STONITH (Shoot-The-Other-Node-In-The-Head) to help fence faulty nodes. By default, STONITH is disabled on your cluster and requires manual configuration so that Pacemaker can control the power management of each node in the cluster.
Log in to each node as the heat-admin user from the stack user on the director. The overcloud creation automatically copies the stack user’s SSH key to each node’s heat-admin user.
Verify that you have a running cluster with pcs status:
$ sudo pcs status
Cluster name: openstackHA
Last updated: Wed Jun 24 12:40:27 2015
Last change: Wed Jun 24 11:36:18 2015
Stack: corosync
Current DC: lb-c1a2 (2) - partition with quorum
Version: 1.1.12-a14efad
3 Nodes configured
141 Resources configured
Verify that STONITH is disabled with pcs property show:
$ sudo pcs property show
Cluster Properties:
 cluster-infrastructure: corosync
 cluster-name: openstackHA
 dc-version: 1.1.12-a14efad
 have-watchdog: false
 stonith-enabled: false
The Controller nodes contain a set of fencing agents for the various power management devices the director supports. This includes:
| Device         | Type                                                                                                                                      |
| fence_ipmilan  | The Intelligent Platform Management Interface (IPMI)                                                                                      |
|                | Dell Remote Access Controller (DRAC)                                                                                                      |
|                | Integrated Lights-Out (iLO)                                                                                                               |
|                | Cisco UCS - For more information, see Configuring Cisco Unified Computing System (UCS) Fencing on an OpenStack High Availability Environment |
|                | Libvirt and SSH                                                                                                                           |
The rest of this section uses the IPMI agent (fence_ipmilan) as an example.
View a full list of IPMI options that Pacemaker supports:
$ sudo pcs stonith describe fence_ipmilan
Each node requires configuration of IPMI devices to control the power management. This involves adding a stonith device to Pacemaker for each node. Use the following commands for the cluster:
The second command in each example prevents the node from asking to fence itself.
For Controller node 0:
$ sudo pcs stonith create my-ipmilan-for-controller-0 fence_ipmilan pcmk_host_list=overcloud-controller-0 ipaddr=192.0.2.205 login=admin passwd=p@55w0rd! lanplus=1 cipher=1 op monitor interval=60s
$ sudo pcs constraint location my-ipmilan-for-controller-0 avoids overcloud-controller-0
For Controller node 1:
$ sudo pcs stonith create my-ipmilan-for-controller-1 fence_ipmilan pcmk_host_list=overcloud-controller-1 ipaddr=192.0.2.206 login=admin passwd=p@55w0rd! lanplus=1 cipher=1 op monitor interval=60s
$ sudo pcs constraint location my-ipmilan-for-controller-1 avoids overcloud-controller-1
For Controller node 2:
$ sudo pcs stonith create my-ipmilan-for-controller-2 fence_ipmilan pcmk_host_list=overcloud-controller-2 ipaddr=192.0.2.207 login=admin passwd=p@55w0rd! lanplus=1 cipher=1 op monitor interval=60s
$ sudo pcs constraint location my-ipmilan-for-controller-2 avoids overcloud-controller-2
Run the following command to see all stonith resources:
$ sudo pcs stonith show
Run the following command to see a specific stonith resource:
$ sudo pcs stonith show [stonith-name]
Finally, enable fencing by setting the stonith-enabled property to true:
$ sudo pcs property set stonith-enabled=true
Verify the property:
$ sudo pcs property show
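If you want to test a fence device, pcs can trigger a manual fence of a node. Note that this power-cycles the target node, so only run it when the cluster can tolerate losing that node; the node name below is only an example:
$ sudo pcs stonith fence overcloud-controller-1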
7.8. Modifying the Overcloud Environment
Sometimes you might want to modify the overcloud to add features or change the way it operates. To modify the overcloud, make modifications to your custom environment files and Heat templates, then rerun the openstack overcloud deploy command from your initial overcloud creation. For example, if you created an overcloud using Section 5.6, “Creating the Overcloud with the CLI Tools”, you would rerun the following command:
$ openstack overcloud deploy --templates \
  -e /usr/share/openstack-tripleo-heat-templates/environments/network-isolation.yaml \
  -e ~/templates/network-environment.yaml \
  -e ~/templates/storage-environment.yaml \
  --control-scale 3 --compute-scale 3 --ceph-storage-scale 3 \
  --control-flavor control --compute-flavor compute --ceph-storage-flavor ceph-storage \
  --ntp-server pool.ntp.org \
  --neutron-network-type vxlan --neutron-tunnel-types vxlan
The director checks the overcloud stack in Heat, and then updates each item in the stack with the environment files and Heat templates. It does not recreate the overcloud; instead, it changes the existing overcloud.
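While the update runs, you can watch the stack status from the undercloud; the overcloud stack shows UPDATE_IN_PROGRESS and then UPDATE_COMPLETE when the change finishes. For example:
$ source ~/stackrc
$ openstack stack list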
To include a new environment file, add it to the openstack overcloud deploy command with a -e option. For example:
$ openstack overcloud deploy --templates \
  -e /usr/share/openstack-tripleo-heat-templates/environments/network-isolation.yaml \
  -e ~/templates/network-environment.yaml \
  -e ~/templates/storage-environment.yaml \
  -e ~/templates/new-environment.yaml \
  --control-scale 3 --compute-scale 3 --ceph-storage-scale 3 \
  --control-flavor control --compute-flavor compute --ceph-storage-flavor ceph-storage \
  --ntp-server pool.ntp.org \
  --neutron-network-type vxlan --neutron-tunnel-types vxlan
This includes the new parameters and resources from the environment file into the stack.
Avoid making manual modifications to the overcloud’s configuration, because the director might overwrite these modifications later.
7.9. Importing Virtual Machines into the Overcloud
Use the following procedure if you have an existing OpenStack environment and want to migrate its virtual machines to your Red Hat OpenStack Platform environment.
Create a new image by taking a snapshot of a running server and download the image.
$ openstack server image create instance_name --name image_name
$ openstack image save image_name --file exported_vm.qcow2
Upload the exported image into the overcloud and launch a new instance.
$ openstack image create imported_image --file exported_vm.qcow2 --disk-format qcow2 --container-format bare
$ openstack server create imported_instance --key-name default --flavor m1.demo --image imported_image --nic net-id=net_id
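You can verify the uploaded image and the new instance with the standard listing commands; for example:
$ openstack image list
$ openstack server list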
You must copy each VM disk from the existing OpenStack environment into the new Red Hat OpenStack Platform environment. Snapshots that use QCOW will lose their original layering system.
7.10. Running Ansible Automation
The director provides the ability to run Ansible-based automation on your OpenStack Platform environment. The director uses the tripleo-ansible-inventory command to generate a dynamic inventory of nodes in your environment.
The dynamic inventory tool only includes the undercloud and the default controller and compute overcloud nodes. Other roles are not supported.
To view a dynamic inventory of nodes, run the tripleo-ansible-inventory command after sourcing stackrc:
$ source ~/stackrc
$ tripleo-ansible-inventory --list
The --list option provides details on all hosts.
This outputs the dynamic inventory in a JSON format:
{"overcloud": {"children": ["controller", "compute"], "vars": {"ansible_ssh_user": "heat-admin"}}, "controller": ["192.0.2.2"], "undercloud": {"hosts": ["localhost"], "vars": {"overcloud_horizon_url": "http://192.0.2.4:80/dashboard", "overcloud_admin_password": "abcdefghijklm12345678", "ansible_connection": "local"}}, "compute": ["192.0.2.3"]}
To execute Ansible automations on your environment, run the ansible command and include the full path of the dynamic inventory tool using the -i option. For example:
$ ansible [HOSTS] -i /bin/tripleo-ansible-inventory [OTHER OPTIONS]
- Exchange [HOSTS] for the type of hosts to use. For example:
  - controller for all Controller nodes
  - compute for all Compute nodes
  - overcloud for all overcloud child nodes, i.e. controller and compute
  - undercloud for the undercloud
  - "*" for all nodes
- Exchange [OTHER OPTIONS] for the additional Ansible options. Some useful options include:
  - --ssh-extra-args='-o StrictHostKeyChecking=no' to bypass confirmation on host key checking.
  - -u [USER] to change the SSH user that executes the Ansible automation. The default SSH user for the overcloud is automatically defined using the ansible_ssh_user parameter in the dynamic inventory. The -u option overrides this parameter.
  - -m [MODULE] to use a specific Ansible module. The default is command, which executes Linux commands.
  - -a [MODULE_ARGS] to define arguments for the chosen module.
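For example, a minimal ad hoc run that combines these options to check the uptime of all Controller nodes (the uptime command is only an illustration):
$ ansible controller -i /bin/tripleo-ansible-inventory -m command -a "uptime" --ssh-extra-args='-o StrictHostKeyChecking=no'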
Ansible automation on the overcloud falls outside the standard overcloud stack. This means that subsequent runs of the openstack overcloud deploy command might override Ansible-based configuration for OpenStack Platform services on overcloud nodes.
7.11. Protecting the Overcloud from Removal
To avoid accidental removal of the overcloud with the heat stack-delete overcloud command, Heat contains a set of policies to restrict certain actions. Edit the /etc/heat/policy.json file and find the following parameter:
"stacks:delete": "rule:deny_stack_user"
Change it to:
"stacks:delete": "rule:deny_everybody"
Save the file.
This prevents removal of the overcloud with the heat client. To allow removal of the overcloud, revert the policy to the original value.
7.12. Removing the Overcloud
The whole overcloud can be removed when desired.
Delete any existing overcloud:
$ openstack stack delete overcloud
Remove the overcloud plan from the director:
$ openstack overcloud plan delete overcloud
Confirm the deletion of the overcloud:
$ openstack stack list
Deletion takes a few minutes.
Once the removal completes, follow the standard steps in the deployment scenarios to recreate your overcloud.