Chapter 4. Test the deployment
4.1. Perform a basic test
The basic test verifies that instances can ping each other and that an instance is reachable over SSH through a Floating IP. This example describes how you can perform the test from the undercloud.
This procedure requires you to follow a large number of individual steps; for convenience, it is divided into smaller parts. However, you must follow the steps in the given order.
In this setup, a flat network is used to create the _External_ network, and _VXLAN_ is used for the _Tenant_ networks. _VLAN External_ networks and _VLAN Tenant_ networks are also supported, depending on the desired deployment.
4.1.1. Create a new network for testing
Source the credentials to access the overcloud:
$ source /home/stack/overcloudrc
Create an external neutron network that will be used to access the instance from outside of the overcloud:
$ openstack network create --external --project service --provider-network-type flat --provider-physical-network datacentre external
Create the corresponding neutron subnet for the new external network (created in the previous step):
$ openstack subnet create --project service --no-dhcp --network external --gateway 192.168.37.1 --allocation-pool start=192.168.37.200,end=192.168.37.220 --subnet-range 192.168.37.0/24 external-subnet
Download the cirros image to be used for creating overcloud instances:
$ wget http://download.cirros-cloud.net/0.3.4/cirros-0.3.4-x86_64-disk.img
Upload the cirros image into glance on the overcloud:
$ openstack image create cirros --public --file ./cirros-0.3.4-x86_64-disk.img --disk-format qcow2 --container-format bare
Create a tiny flavor to use for overcloud instances:
$ openstack flavor create m1.tiny --ram 512 --disk 1 --public
Create a tenant network based on VXLAN to host the instances:
$ openstack network create net_test --provider-network-type=vxlan --provider-segment 100
Create a subnet for the tenant network (created in the previous step):
$ openstack subnet create --network net_test --subnet-range 123.123.123.0/24 test
Find and store the ID of the tenant network:
$ net_mgmt_id=$(openstack network list | grep net_test | awk '{print $2}')
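If you prefer not to parse table output with grep and awk, an equivalent way to capture the ID is the client's value formatter; this optional alternative uses only standard openstack client options:
$ net_mgmt_id=$(openstack network show net_test -f value -c id)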
Create an instance called cirros1 and attach it to the net_test network:
$ openstack server create --flavor m1.tiny --image cirros --nic net-id=$net_mgmt_id cirros1
Create a second instance called cirros2, also attached to the net_test network:
$ openstack server create --flavor m1.tiny --image cirros --nic net-id=$net_mgmt_id cirros2
4.1.2. Set up networking in the test environment
Find and store the ID of the admin project:
$ admin_project_id=$(openstack project list | grep admin | awk '{print $2}')
Find and store the admin project’s default security group:
$ admin_sec_group_id=$(openstack security group list | grep $admin_project_id | awk '{print $2}')
Add a rule to the admin default security group to allow ICMP traffic ingress:
$ openstack security group rule create $admin_sec_group_id --protocol icmp --ingress
Add a rule to the admin default security group to allow ICMP traffic egress:
$ openstack security group rule create $admin_sec_group_id --protocol icmp --egress
Add a rule to the admin default security group to allow SSH traffic ingress:
$ openstack security group rule create $admin_sec_group_id --protocol tcp --dst-port 22 --ingress
Add a rule to the admin default security group to allow SSH traffic egress:
$ openstack security group rule create $admin_sec_group_id --protocol tcp --dst-port 22 --egress
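To confirm that all four rules are in place before testing, you can list the rules of the security group; this is an optional check using a standard openstack client command:
$ openstack security group rule list $admin_sec_group_id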
4.1.3. Test the connectivity
- From horizon, you should be able to access the novnc console for an instance. Use the password from overcloudrc to log in to horizon as admin. The default login for cirros images is the user name cirros, with cubswin:) as the password. From the novnc console, verify that the instance received a DHCP address:
$ ip addr show
Note: Another method of doing this is by running nova console-log <instance id> from the undercloud, which will show whether a DHCP lease was obtained.
- Repeat the previous steps for all other instances.
- From one instance, attempt to ping the other instances. This will validate the basic Tenant network connectivity in the overcloud.
- Verify that you can reach other instances by using a Floating IP.
4.1.4. Create devices
Create a floating IP on the external network, to be associated with the cirros1 instance:
$ openstack floating ip create external
Create a router to handle NAT between the floating IP and the cirros1 tenant IP:
$ openstack router create test
Set the gateway of the router to be the external network:
$ openstack router set test --external-gateway external
Add an interface to the router, attached to the tenant network:
$ openstack router add subnet test test
Find and store the floating IP created earlier in this procedure:
$ floating_ip=$(openstack floating ip list | head -n -1 | grep -Eo '[0-9]+\.[0-9]+\.[0-9]+\.[0-9]+')
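As an optional alternative that avoids the pattern matching, you can select the column directly; this assumes only one floating IP exists at this point:
$ floating_ip=$(openstack floating ip list -f value -c "Floating IP Address")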
Associate the floating IP with the cirros1 instance:
$ openstack server add floating ip cirros1 $floating_ip
From a node that has external network access, attempt to log in to the instance:
$ ssh cirros@$floating_ip
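Once logged in, you can complete the basic test by pinging the other instance across the tenant network. The address below is a placeholder; substitute the fixed IP that cirros2 actually received on the 123.123.123.0/24 subnet:
$ ping -c 4 123.123.123.4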
4.2. Perform advanced tests
You can check several components of the OpenDaylight configuration and deployment after the deployment completes. To test specific parts of the installation, follow the procedures below; each procedure is described separately.
Perform these procedures on the overcloud nodes.
4.2.1. Connect to overcloud nodes
This procedure lets you connect to the overcloud nodes and test that they are up and running.
Procedure
- Log in to the undercloud.
Source the undercloud credentials:
$ source /home/stack/stackrc
List all instances:
$ openstack server list
- Choose the required instance and note its IP address in the list.
Connect to the machine. You will use the IP address from the list above:
$ ssh heat-admin@<IP from the previous step>
Switch to superuser:
$ sudo -i
4.2.2. Test OpenDaylight
To test that OpenDaylight is working, verify that the service is up and that the required features are correctly loaded.
Procedure
- As a superuser, log in to the overcloud node running OpenDaylight, or to an OpenDaylight node running in a custom role.
Verify that the OpenDaylight controller is running on all controller nodes:
# docker ps | grep opendaylight
2363a99d514a 192.168.24.1:8787/rhosp12/openstack-opendaylight:latest "kolla_start" 4 hours ago Up 4 hours (healthy) opendaylight_api
Verify that HAProxy is properly configured to listen on port 8081:
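As an illustrative sketch only, the opendaylight section of haproxy.cfg looks roughly like the following; the bind address, server name, and IPs are placeholders for your InternalAPI VIP and controller addresses:
listen opendaylight
  bind 172.17.0.10:8081 transparent
  mode http
  balance source
  server overcloud-controller-0 172.17.0.22:8081 check fall 5 inter 2000 rise 2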
Use the HAProxy IP to connect to the karaf account:
# ssh -p 8101 karaf@localhost
List the installed features.
# feature:list -i | grep odl-netvirt-openstack
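Assuming a typical installation, the matching line looks roughly like the following; the version and repository columns are placeholders for whatever your deployment reports:
odl-netvirt-openstack | 0.5.3-Carbon | x | odl-netvirt-0.5.3-Carbon | OpenDaylight :: NetVirt :: OpenStack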
If there is an x in the third column of the list, as generated during the procedure, then the feature is correctly installed.
Verify that the API is up and running:
# web:list | grep neutron
This API endpoint is set in /etc/neutron/plugins/ml2/ml2_conf.ini and used by neutron to communicate with OpenDaylight.
Verify that the VXLAN tunnels between the nodes are up:
# vxlan:show
To test that the REST API is responding correctly, you can list the modules that are using it.
# curl -u "admin:admin" http://localhost:8181/restconf/modules
The output will be similar (the example has been shortened):
{"modules":{"module":[{"name":"netty-event-executor","revision":"2013-11-12","namespace":"urn:opendaylight:params:xml:ns:yang:controller:netty:eventexecutor"},{"name" ...
{"modules":{"module":[{"name":"netty-event-executor","revision":"2013-11-12","namespace":"urn:opendaylight:params:xml:ns:yang:controller:netty:eventexecutor"},{"name" ...
List the REST streams using the host internal_API IP.
# curl -u "admin:admin" http://localhost:8181/restconf/streams
You get a similar output:
{"streams":{}}
{"streams":{}}
Enter the following command using the host internal_API IP to verify that NetVirt is ready and running:
# curl -u "admin:admin" http://localhost:8181/restconf/operational/network-topology:network-topology/topology/netvirt:1
The following output confirms it:
{"topology":[{"topology-id":"netvirt:1"}]}
{"topology":[{"topology-id":"netvirt:1"}]}
4.2.3. Test Open vSwitch
In order to validate Open vSwitch, connect to one of the Compute nodes and verify that it is properly configured and connected to OpenDaylight.
Procedure
- Connect to one of the Compute nodes in the overcloud as a superuser.
List the Open vSwitch settings.
# ovs-vsctl show
Notice the multiple Managers in the output (lines 2 and 3 in the example below).
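A representative ovs-vsctl show listing looks roughly like the following; the UUID and IP addresses are placeholders, and the two Manager lines together with the is_connected flags are the parts that the checks below refer to:
6b003fb3-07ad-4b32-b3e1-9b8b4e7c2e21
    Manager "ptcp:6639:127.0.0.1"
    Manager "tcp:172.17.1.16:6640"
        is_connected: true
    Bridge br-int
        Controller "tcp:172.17.1.16:6653"
            is_connected: true
        fail_mode: secure
        Port br-int
            Interface br-int
                type: internal
    ovs_version: "2.7.0"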
- Verify that the tcp manager points to the IP of the node where OpenDaylight is running.
- Verify that the Managers show is_connected: true, to ensure that connectivity to OpenDaylight from OVS is established and uses the OVSDB protocol.
- Verify that each bridge (other than br-int) exists and matches the NIC template used for deployment with the Compute role.
- Verify that the tcp connection corresponds to the IP where the OpenDaylight service is running.
- Verify that the bridge br-int shows is_connected: true and that an OpenFlow protocol connection to OpenDaylight is established.
More information
- The br-int bridge is created automatically by OpenDaylight.
4.2.4. Verify the Open vSwitch configuration on Compute nodes
- Connect to a Compute node as a superuser.
List the Open vSwitch configuration settings.
# ovs-vsctl list open_vswitch
Read the output. It will be similar to this example:
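An illustrative ovs-vsctl list open_vswitch record, with placeholder UUID and addresses, might look like the following; the other_config line is the one that the next two checks examine:
_uuid               : 4b624d8f-a7af-4f0f-b56a-b8cfabf7635d
...
other_config        : {local_ip="172.17.1.21", provider_mappings="datacentre:br-ex"}
ovs_version         : "2.7.0"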
- Verify that the value of the other_config option has the correct local_ip set for the local interface that connects to the Tenant network through VXLAN tunnels.
set for the local interface that connects to the Tenant network through VXLAN tunnels. -
Verify that the
provider_mappings
value under theother_config
option matches the value given in the OpenDaylightProviderMappings heat template parameter. This configuration maps the neutron logical networks to corresponding physical interfaces.
4.2.5. Verify neutron configuration
Procedure
- Connect to the superuser account on one of the controller role nodes.
- Make sure that the file /etc/neutron/neutron.conf contains service_plugins=odl-router_v2,trunk.
- Check that the file /etc/neutron/plugin.ini contains the following ml2 configuration:
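The exact values depend on your deployment; assuming a typical OpenDaylight setup, the block looks roughly like the sketch below, where the IP in the url is a placeholder for the InternalAPI VIP and the credentials are whatever was set at deploy time:
[ml2]
mechanism_drivers = opendaylight_v2

[ml2_odl]
password = admin
username = admin
url = http://172.17.1.18:8081/controller/nb/v2/neutron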
On one of the overcloud controllers, verify that neutron agents are running properly:
# openstack network agent list
Verify that both the Metadata and DHCP agents are in the up state (the admin_state_up option is True).
More information
- The IP in the plugin.ini file, mentioned above, should be the InternalAPI Virtual IP Address (VIP).
- Note that there is no Open vSwitch agent, nor L3 agent, listed in the agent list output; this is the desired state, as both are now managed by OpenDaylight.