Chapter 4. Test the deployment
4.1. Perform a basic test
The basic test verifies that instances can ping each other. It also checks SSH access through the floating IP. This example describes how to perform the test from the undercloud.
This procedure involves a large number of individual steps. For convenience, it is divided into smaller parts, but you must follow all steps in the given order.
In this setup, a flat network is used to create the _External_ network, and _VXLAN_ is used for the _Tenant_ networks. _VLAN External_ networks and _VLAN Tenant_ networks are also supported, depending on the desired deployment.
4.1.1. Create a new network for testing
Source the credentials to access the overcloud:
$ source /home/stack/overcloudrc
Create an external neutron network to access the instance from outside the overcloud:
$ openstack network create --external --project service --provider-network-type flat --provider-physical-network datacentre external
Create the corresponding neutron subnet for the new external network that you created in the previous step:
$ openstack subnet create --project service --no-dhcp --network external --gateway 192.168.37.1 --allocation-pool start=192.168.37.200,end=192.168.37.220 --subnet-range 192.168.37.0/24 external-subnet
Download the cirros image that you want to use to create overcloud instances:
$ wget http://download.cirros-cloud.net/0.3.4/cirros-0.3.4-x86_64-disk.img
Upload the cirros image to glance on the overcloud:
$ openstack image create cirros --public --file ./cirros-0.3.4-x86_64-disk.img --disk-format qcow2 --container-format bare
Create an m1.tiny flavor to use for overcloud instances:
$ openstack flavor create m1.tiny --ram 512 --disk 1 --public
Create a VXLAN tenant network to host the instances:
$ openstack network create net_test --provider-network-type=vxlan --provider-segment 100
Create a subnet for the tenant network that you created in the previous step:
$ openstack subnet create --network net_test --subnet-range 123.123.123.0/24 test
Find and store the ID of the tenant network:
$ net_mgmt_id=$(openstack network list | grep net_test | awk '{print $2}')
Create an instance cirros1 and attach it to the net_test network and the SSH security group:
$ openstack server create --flavor m1.tiny --image cirros --nic net-id=$net_mgmt_id --security-group SSH --key-name RDO_KEY --availability-zone nova:overcloud-novacompute-0.localdomain cirros1
Create a second instance called cirros2, also attached to the net_test network and the SSH security group:
$ openstack server create --flavor m1.tiny --image cirros --nic net-id=$net_mgmt_id --security-group SSH --key-name RDO_KEY --availability-zone nova:overcloud-novacompute-0.localdomain cirros2
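The network-ID step above greps the whole `openstack network list` table, which can mismatch if another network name contains net_test. Below is a minimal sketch of a stricter extraction, run here against a canned table (the UUID is illustrative); on a live cloud, `openstack network show net_test -f value -c id` is an even more direct alternative:

```shell
# Canned `openstack network list` output; values are illustrative.
sample_output='+--------------------------------------+----------+---------+
| ID                                   | Name     | Subnets |
+--------------------------------------+----------+---------+
| 1a2b3c4d-0000-1111-2222-333344445555 | net_test |         |
+--------------------------------------+----------+---------+'

# Match the Name column exactly so that a name such as net_test2 cannot match.
net_mgmt_id=$(printf '%s\n' "$sample_output" |
  awk -F'|' '$3 ~ /^ *net_test *$/ {gsub(/ /, "", $2); print $2}')
echo "$net_mgmt_id"
```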
4.1.2. Set up networking in the test environment
Find and store the ID of the admin project:
$ admin_project_id=$(openstack project list | grep admin | awk '{print $2}')
Find and store the default security group of the admin project:
$ admin_sec_group_id=$(openstack security group list | grep $admin_project_id | awk '{print $2}')
Add a rule to the admin default security group to allow ingress ICMP traffic:
$ openstack security group rule create $admin_sec_group_id --protocol icmp --ingress
Add a rule to the admin default security group to allow egress ICMP traffic:
$ openstack security group rule create $admin_sec_group_id --protocol icmp --egress
Add a rule to the admin default security group to allow ingress SSH traffic:
$ openstack security group rule create $admin_sec_group_id --protocol tcp --dst-port 22 --ingress
Add a rule to the admin default security group to allow egress SSH traffic:
$ openstack security group rule create $admin_sec_group_id --protocol tcp --dst-port 22 --egress
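The four rule-creation steps follow one pattern, so they can be scripted as a loop. This is a dry-run sketch: the `openstack` shell function below is a stub that only prints each command it would run, and the group ID is a placeholder, so you can review the generated calls before running them against a real cloud:

```shell
# Stub standing in for the real OpenStack CLI: print instead of execute.
openstack() { echo "openstack $*"; }

admin_sec_group_id="deadbeef-0000-1111-2222-333344445555"  # placeholder ID

# ICMP needs no port; SSH is TCP port 22. Apply each rule in both directions.
rules_out=$(
  for direction in ingress egress; do
    openstack security group rule create "$admin_sec_group_id" --protocol icmp "--$direction"
    openstack security group rule create "$admin_sec_group_id" --protocol tcp --dst-port 22 "--$direction"
  done
)
echo "$rules_out"
```

Remove the stub function to execute the commands for real.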
4.1.3. Test the connectivity
- From horizon, you should be able to access the novnc console for an instance. Use the password from overcloudrc to log in to horizon as admin. The default credentials for cirros images are username cirros and password cubswin:). From the novnc console, verify that the instance receives a DHCP address:
$ ip addr show
Note: You can also run nova console-log <instance id> from the undercloud to verify that the instance receives a DHCP address.
- Repeat steps 1 and 2 for all other instances.
- From one instance, attempt to ping the other instances. This validates basic Tenant network connectivity in the overcloud.
- Verify that you can reach other instances by using a Floating IP.
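The DHCP check in step 2 can be scripted by parsing `ip addr show` output. The sketch below runs against a canned output sample (interface and address are illustrative), so the parsing logic can be tested away from the instance:

```shell
# Canned `ip addr show` output from a cirros instance; values are illustrative.
sample='2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1400 qdisc pfifo_fast qlen 1000
    link/ether fa:16:3e:aa:bb:cc brd ff:ff:ff:ff:ff:ff
    inet 123.123.123.5/24 brd 123.123.123.255 scope global eth0'

# Extract the first IPv4 address; fail if none was assigned.
addr=$(printf '%s\n' "$sample" | awk '/inet /{sub(/\/.*/, "", $2); print $2; exit}')
if [ -n "$addr" ]; then
  echo "instance has address $addr"
else
  echo "no DHCP address assigned" >&2
  exit 1
fi
```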
4.1.4. Create devices
Create a floating IP on the external network to associate with the cirros1 instance:
$ openstack floating ip create external
Create a router to handle NAT between the floating IP and the cirros1 tenant IP:
$ openstack router create test
Set the gateway of the router to be the external network:
$ openstack router set test --external-gateway external
Add an interface to the router attached to the tenant network:
$ openstack router add subnet test test
Find and store the floating IP that you created in step 1:
$ floating_ip=$(openstack floating ip list | head -n -1 | grep -Eo '[0-9]+\.[0-9]+\.[0-9]+\.[0-9]+')
Associate the floating IP with the cirros1 instance:
$ openstack server add floating ip cirros1 $floating_ip
From a node that has external network access, attempt to log in to the instance:
$ ssh cirros@$floating_ip
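Before attempting the ssh login, it can help to poll until the instance actually answers on port 22. The loop below is a sketch whose reachability check is stubbed so the retry logic can be tested offline; on a real node, replace try_ssh_port with something like `nc -z -w 2 "$floating_ip" 22` (assuming nc is available):

```shell
floating_ip="192.168.37.201"  # placeholder; use the IP stored in $floating_ip above

# Stub that succeeds on the third call; on a real node use, for example:
#   try_ssh_port() { nc -z -w 2 "$floating_ip" 22; }
tries=0
try_ssh_port() {
  tries=$((tries + 1))
  [ "$tries" -ge 3 ]
}

# Retry with a capped number of attempts.
i=0
until try_ssh_port; do
  i=$((i + 1))
  if [ "$i" -ge 10 ]; then
    echo "instance $floating_ip not reachable on port 22" >&2
    exit 1
  fi
  sleep 0  # use a real delay such as `sleep 5` against a live cloud
done
echo "port 22 reachable after $tries checks"
```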
4.2. Perform advanced tests
After you deploy OpenDaylight, you can test several components of the configuration and deployment. To test specific parts of the installation, follow the relevant procedures; each procedure is described separately.
You must perform the procedures on the overcloud nodes.
4.2.1. Connect to overcloud nodes
To connect to the overcloud nodes and ensure that they are operating correctly, complete the following steps:
Procedure
- Log in to the undercloud.
Source the undercloud credentials:
$ source /home/stack/stackrc
List all instances:
$ openstack server list
- Choose the required instance and note its IP address in the list.
Connect to the machine using the IP address that you obtained in the previous step:
$ ssh heat-admin@<IP from step 4>
Switch to superuser:
$ sudo -i
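The Networks column of `openstack server list` reports addresses as `<network>=<IP>`. The following sketch extracts the ctlplane address from one canned row (the UUID and IP are illustrative), which is handy when scripting the ssh step:

```shell
# One canned row of `openstack server list` output; values are illustrative.
row='| 9cc6aeb7-09e1-4b4f-b2f1-b6f9ea2c2c2c | overcloud-controller-0 | ACTIVE | ctlplane=192.168.24.10 | overcloud-full |'

# Pull the IP that follows "ctlplane=".
node_ip=$(printf '%s\n' "$row" | sed -n 's/.*ctlplane=\([0-9.]*\).*/\1/p')
echo "$node_ip"
```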
4.2.2. Test OpenDaylight
To test that OpenDaylight is operating correctly, you must verify that the service is operational and that the specified features are loaded correctly.
Procedure
- Log in to the overcloud node running OpenDaylight as a superuser, or to an OpenDaylight node running in a custom role.
Verify that the OpenDaylight Controller is running on all Controller nodes:
# docker ps | grep opendaylight
2363a99d514a 192.168.24.1:8787/rhosp13/openstack-opendaylight:latest "kolla_start" 4 hours ago Up 4 hours (healthy) opendaylight_api
Verify that HAProxy is properly configured to listen on port 8081.
Use the HAProxy IP to connect to the karaf account. The karaf password is karaf:
karaf:ssh -p 8101 karaf@localhost
# ssh -p 8101 karaf@localhostCopy to Clipboard Copied! Toggle word wrap Toggle overflow List the installed features.
# feature:list -i | grep odl-netvirt-openstack
If there is an x in the third column of the list, the feature is correctly installed.
Verify that the API is operational:
# web:list | grep neutron
This API endpoint is set in /etc/neutron/plugins/ml2/ml2_conf.ini and is used by neutron to communicate with OpenDaylight.
Verify that the VXLAN tunnels between the nodes are up:
# vxlan:show
To test that the REST API is responding correctly, list the modules that are using it:
# curl -u "admin:admin" http://localhost:8081/restconf/modules
The output will be similar to this shortened example:
{"modules":{"module":[{"name":"netty-event-executor","revision":"2013-11-12","namespace":"urn:opendaylight:params:xml:ns:yang:controller:netty:eventexecutor"},{"name" ...
List the REST streams that use the host internal_API IP:
# curl -u "admin:admin" http://localhost:8081/restconf/streams
You get similar output:
{"streams":{}}
Run the following command with the host internal_API IP to verify that NetVirt is operational:
# curl -u "admin:admin" http://localhost:8081/restconf/operational/network-topology:network-topology/topology/netvirt:1
The following output confirms that NetVirt is operational:
{"topology":[{"topology-id":"netvirt:1"}]}
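The NetVirt check can be wrapped in a small script that fails loudly when the topology is absent. In this sketch the response is canned so the check is testable offline; on a controller, populate resp with the curl command from the step above instead:

```shell
# Canned response; on a controller obtain it with:
#   resp=$(curl -s -u "admin:admin" http://localhost:8081/restconf/operational/network-topology:network-topology/topology/netvirt:1)
resp='{"topology":[{"topology-id":"netvirt:1"}]}'

if printf '%s' "$resp" | grep -q '"topology-id":"netvirt:1"'; then
  echo "NetVirt topology present"
else
  echo "NetVirt topology missing" >&2
  exit 1
fi
```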
4.2.3. Test Open vSwitch
To validate Open vSwitch, connect to one of the Compute nodes and verify that it is properly configured and connected to OpenDaylight.
Procedure
- Connect to one of the Compute nodes in the overcloud as a superuser.
List the Open vSwitch settings:
# ovs-vsctl show
Note the multiple Managers in the output.
- Verify that the tcp manager points to the IP of the node where OpenDaylight is running.
- Verify that the Managers show is_connected: true to ensure that connectivity to OpenDaylight from OVS is established and uses the OVSDB protocol.
- Verify that each bridge (other than br-int) exists and corresponds to the NIC template used for deployment with the Compute role.
- Verify that the tcp connection corresponds to the IP where the OpenDaylight service is running.
- Verify that the bridge br-int shows is_connected: true and that an OpenFlow protocol connection to OpenDaylight is established.
More information
- OpenDaylight creates the br-int bridge automatically.
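The Manager and br-int checks above can be partially automated by grepping `ovs-vsctl show` output. The sketch below runs against a canned excerpt (the IPs are illustrative) and simply counts how many endpoints report is_connected: true:

```shell
# Canned excerpt of `ovs-vsctl show` output; addresses are illustrative.
sample='Manager "ptcp:6639:127.0.0.1"
Manager "tcp:172.17.1.18:6640"
    is_connected: true
Bridge br-int
    Controller "tcp:172.17.1.18:6653"
        is_connected: true'

# Both the OVSDB Manager and the br-int OpenFlow Controller should be connected.
connected=$(printf '%s\n' "$sample" | grep -c 'is_connected: true')
echo "connected endpoints: $connected"
```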
4.2.4. Verify the Open vSwitch configuration on Compute nodes
- Connect to a Compute node as a superuser.
List the Open vSwitch configuration settings:
# ovs-vsctl list open_vswitch
Read the output.
- Verify that the value of the other_config option has the correct local_ip set for the local interface that connects to the Tenant network through VXLAN tunnels.
- Verify that the provider_mappings value under the other_config option matches the value in the OpenDaylightProviderMappings heat template parameter. This configuration maps the neutron logical networks to the corresponding physical interfaces.
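The other_config checks can also be scripted. This sketch parses a canned other_config line (the values are illustrative); on a live node, `ovs-vsctl get Open_vSwitch . other_config:local_ip` reads the value directly:

```shell
# Canned line from `ovs-vsctl list open_vswitch`; values are illustrative.
line='other_config        : {local_ip="172.17.2.18", provider_mappings="datacentre:br-ex"}'

# Extract the quoted values for local_ip and provider_mappings.
local_ip=$(printf '%s\n' "$line" | sed -n 's/.*local_ip="\([^"]*\)".*/\1/p')
mappings=$(printf '%s\n' "$line" | sed -n 's/.*provider_mappings="\([^"]*\)".*/\1/p')
echo "local_ip=$local_ip provider_mappings=$mappings"
```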
4.2.5. Verify neutron configuration
Procedure
- Connect to the superuser account on one of the Controller role nodes.
- Ensure that the /etc/neutron/neutron.conf file contains service_plugins=odl-router_v2,trunk.
- Ensure that the /etc/neutron/plugin.ini file contains the required ml2 configuration.
On one of the overcloud controllers, verify that the neutron agents are running properly:
# openstack network agent list
Verify that the admin_state_up value for both the Metadata and DHCP agents is True.
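The ml2 configuration referenced in step 3 typically resembles the following sketch. All values here are illustrative assumptions rather than taken from this document: the url host should be the InternalAPI Virtual IP Address mentioned under More information, and the credentials and driver lists depend on your deployment:

```ini
[ml2]
# opendaylight_v2 is the networking-odl mechanism driver (assumed setup).
mechanism_drivers = opendaylight_v2

[ml2_odl]
# <InternalAPI-VIP> is a placeholder; credentials vary by deployment.
url = http://<InternalAPI-VIP>:8081/controller/nb/v2/neutron
username = admin
password = admin
```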
More information
- The IP in the plugin.ini file, mentioned in step 3, should be the InternalAPI Virtual IP Address (VIP).
- No Open vSwitch agent or L3 agent is listed in the output of step 5; this is the desired state, because both are now managed by OpenDaylight.