
Chapter 5. Test the deployment


5.1. Perform a basic test

The basic test verifies that instances can ping each other and that an instance can be reached over SSH through a floating IP. This example describes how to perform the test from the undercloud.

This procedure involves a large number of individual steps; for convenience, it is divided into smaller parts. However, you must follow the steps in the given order.

Note

In this setup, a flat network is used to create the External network, and VXLAN is used for the Tenant networks. VLAN External networks and VLAN Tenant networks are also supported, depending on the desired deployment.

5.1.1. Create a new network for testing

1. Source the credentials to access the overcloud:

$ source /home/stack/overcloudrc

2. Create an external neutron network that will be used to access the instance from outside of the overcloud:

$ openstack network create --external --project $(openstack project show service | grep id | awk '{ print $4 }') --provider-network-type flat --provider-physical-network external external

3. Create the corresponding neutron subnet for the new external network (created in the previous step):

$ openstack subnet create  --project $(openstack project show service | grep id | awk '{ print $4 }') --no-dhcp --network external --gateway 192.168.37.1 --allocation-pool start=192.168.37.200,end=192.168.37.220 --subnet-range 192.168.37.0/24 external-subnet

4. Download the cirros image to be used for creating overcloud instances:

$ wget http://download.cirros-cloud.net/0.3.4/cirros-0.3.4-x86_64-disk.img

5. Upload the cirros image into glance on the overcloud:

$ openstack image create cirros --public --file ./cirros-0.3.4-x86_64-disk.img --disk-format qcow2 --container-format bare

6. Create a tiny flavor to use for overcloud instances:

$ openstack flavor create m1.tiny --ram 512 --disk 1 --public

7. Create a VXLAN-based tenant network to host the instances:

$ openstack network create net_test --provider-network-type=vxlan --provider-segment 100

8. Create a subnet for the tenant network (created in the previous step):

$ openstack subnet create --network net_test --subnet-range 123.123.123.0/24 test

9. Find and store the ID of the tenant network:

$ net_mgmt_id=$(openstack network list | grep net_test | awk '{print $2}')

10. Create an instance called cirros1 and attach it to the net_test network:

$ openstack server create --flavor m1.tiny --image cirros --nic net-id=$net_mgmt_id cirros1

11. Create a second instance called cirros2, also attached to the net_test network:

$ openstack server create --flavor m1.tiny --image cirros --nic net-id=$net_mgmt_id cirros2
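
Optionally, to confirm that both instances reach the ACTIVE state before you continue, you can list the servers, for example:

$ openstack server list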

5.1.2. Set up networking in the test environment

1. Find and store the ID of the admin project:

$ admin_project_id=$(openstack project list | grep admin | awk '{print $2}')

2. Find and store the admin project’s default security group:

$ admin_sec_group_id=$(openstack security group list | grep $admin_project_id | awk '{print $2}')

3. Add a rule to the admin default security group to allow ICMP traffic ingress:

$ openstack security group rule create $admin_sec_group_id --protocol icmp --ingress

4. Add a rule to the admin default security group to allow ICMP traffic egress:

$ openstack security group rule create $admin_sec_group_id --protocol icmp --egress

5. Add a rule to the admin default security group to allow SSH traffic ingress:

$ openstack security group rule create $admin_sec_group_id --protocol tcp --dst-port 22 --ingress

6. Add a rule to the admin default security group to allow SSH traffic egress:

$ openstack security group rule create $admin_sec_group_id --protocol tcp --dst-port 22 --egress
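
Optionally, you can verify that the four rules were added to the default security group, for example:

$ openstack security group rule list $admin_sec_group_id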

5.1.3. Test the connectivity

1. From horizon, you should be able to access the novnc console for an instance. Use the password from overcloudrc to log in to horizon as the admin user. The default credentials for cirros images are the user name cirros and the password cubswin:).

2. From the novnc console, verify that the instance received a DHCP address:

$ ip addr show
Note

Another method is to run nova console-log <instance id> from the undercloud, which shows whether a DHCP lease was obtained.
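
For example, for the cirros1 instance created earlier:

$ nova console-log cirros1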

3. Repeat steps 1 and 2 for all other instances.

4. From one instance, attempt to ping the other instances. This validates basic Tenant network connectivity in the overcloud (see the example after this list).

5. Verify that you can reach other instances by using a Floating IP.
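
For example, from the console of cirros1, a minimal connectivity check against the second instance can look like the following; replace the placeholder with the address that ip addr show reports on cirros2:

$ ping -c 4 <IP of cirros2>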

5.1.4. Create devices

1. Create a floating IP on the external network to be associated with the cirros1 instance:

$ openstack floating ip create external

2. Create a router to handle NAT between the floating IP and the tenant IP of cirros1:

$ openstack router create test

3. Set the gateway of the router to be the external network:

$ neutron router-gateway-set test external

4. Add an interface to the router, attached to the tenant network:

$ neutron router-interface-add test test

5. Find and store the floating IP created in step 1 of this procedure:

$ floating_ip=$(openstack floating ip list | head -n -1 | grep -Eo '[0-9]+\.[0-9]+\.[0-9]+\.[0-9]+')

6. Associate the floating IP with the cirros1 instance:

$ openstack server add floating ip cirros1 $floating_ip

7. From a node that has external network access, attempt to log in to the instance:

$ ssh cirros@$floating_ip
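
Optionally, before you log in, you can confirm that the floating IP is attached to cirros1, for example by showing only the addresses column:

$ openstack server show cirros1 -c addresses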

5.2. Perform advanced tests

You can check several components of the OpenDaylight configuration and deployment after the deployment completes. To test specific parts of the installation, follow the relevant procedures. Each procedure is described separately.

Perform the procedures on the overcloud nodes.

5.2.1. Connect to overcloud nodes

  1. Log in to the undercloud.
  2. Source the undercloud credentials:

    $ source /home/stack/stackrc
  3. List all instances:

    $ nova list
  4. Choose the required instance and note its IP address in the list (see the example after this procedure).
  5. Connect to the node, using the IP address from the list above:

    $ ssh heat-admin@<IP from step 3>
  6. Switch to superuser:

    $ sudo -i
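
For example, to show only the Controller nodes in step 3 (assuming the default overcloud-controller-* host names), you can filter the list:

    $ nova list | grep controller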

5.2.2. Test OpenDaylight

To test that OpenDaylight is working, verify that the service is running and that the required features are correctly loaded.

  1. Log in to the node as the superuser.
  2. Verify that the OpenDaylight service is active:

    # systemctl status opendaylight
  3. Verify that HAProxy is properly configured to listen on port 8081:

    # grep -A7 opendaylight /etc/haproxy/haproxy.cfg
  4. Connect to the karaf account:

    $ ssh -p 8101 karaf@localhost
  5. List the installed features.

    $ feature:list | grep odl-netvirt-openstack
  6. Verify that the API is up and running.

    # web:list | grep neutron
  7. Verify that the VXLAN tunnels between the nodes are up:

    # vxlan:show
  8. To test that the REST API is responding correctly, you can list the modules that are using it.

    # curl -u "admin:admin" http://localhost:8181/restconf/modules

    The output will be similar to the following (the example has been shortened):

    {"modules":{"module":[{"name":"netty-event-executor","revision":"2013-11-12","namespace":"urn:opendaylight:params:xml:ns:yang:controller:netty:eventexecutor"},{"name" ...
  9. You can list the REST streams.

    # curl -u "admin:admin" http://localhost:8181/restconf/streams

    You will get something like this:

    {"streams":{}}
  10. Enter the following command to verify that NetVirt is ready and running.

    # curl -u "admin:admin" http://localhost:8181/restconf/operational/network-topology:network-topology/topology/netvirt:1

    The following output confirms that NetVirt is ready:

    {"topology":[{"topology-id":"netvirt:1"}]}

More information

  • Step 3: As mentioned earlier, OpenDaylight does not yet run in HA mode, so the service is active on only one node.
  • Step 5: If the third column of the list generated during the procedure contains an x, the feature is correctly installed.
  • Step 6: This API endpoint is set in /etc/neutron/plugins/ml2/ml2_conf.ini and is used by neutron to communicate with OpenDaylight (see the example below).
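
For example, to see the endpoint that neutron is configured to use, you can grep that file on a Controller node; this assumes the same ml2_odl section shown in the plugin.ini example in Section 5.2.5:

    # grep -A4 '\[ml2_odl\]' /etc/neutron/plugins/ml2/ml2_conf.ini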

5.2.3. Test Open vSwitch

To validate Open vSwitch, connect to one of the Compute nodes and verify that it is properly configured and connected to OpenDaylight.

  1. Connect to one of the Compute nodes in the overcloud as a superuser.
  2. List the Open vSwitch settings.

    # ovs-vsctl show
  3. Notice multiple Managers in the output (lines 2 and 3 in the example).

    4b624d8f-a7af-4f0f-b56a-b8cfabf7635d
        Manager "ptcp:6639:127.0.0.1"
        Manager "tcp:192.0.2.4:6640"
            is_connected: true
        Bridge br-ex
            Port br-ex
                Interface br-ex
                    type: internal
            Port "eth2"
                Interface "eth2"
            Port br-ex-int-patch
                Interface br-ex-int-patch
                    type: patch
                    options: {peer=br-ex-patch}
        Bridge br-int
            Controller "tcp:192.0.2.4:6653"
                is_connected: true
            fail_mode: secure
            Port br-int
                Interface br-int
                    type: internal
            Port br-ex-patch
                Interface br-ex-patch
                    type: patch
                    options: {peer=br-ex-int-patch}
        ovs_version: "2.5.0"
  4. Verify that the tcp manager points to the IP of the node where OpenDaylight is running (see the example after this list).
  5. Verify that the Managers show is_connected: true, which confirms that connectivity to OpenDaylight from OVS is established over the OVSDB protocol.
  6. Verify that each bridge (other than br-int) exists and matches the NIC template used for deployment with the Compute role.
  7. Verify that the tcp connection corresponds to the IP where the OpenDaylight service is running.
  8. Verify that the bridge br-int shows is_connected: true and an OpenFlow protocol connection to OpenDaylight is established.
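
If you prefer a more targeted check than reading the full ovs-vsctl show output, you can also query the manager and controller entries directly; br-int is the integration bridge shown in the example above:

    # ovs-vsctl get-manager
    # ovs-vsctl get-controller br-int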

More information

  • The br-int bridge is created automatically by OpenDaylight.

5.2.4. Verify the Open vSwitch configuration on Compute nodes

  1. Connect to a Compute node as a superuser.
  2. List the Open vSwitch configuration settings.

    # ovs-vsctl list open_vswitch
  3. Read the output. It will be similar to this example.

    _uuid               : 4b624d8f-a7af-4f0f-b56a-b8cfabf7635d
    bridges             : [11127421-3bcc-4f9a-9040-ff8b88486508, 350135a4-4627-4e1b-8bef-56a1e4249bef]
    cur_cfg             : 7
    datapath_types      : [netdev, system]
    db_version          : "7.12.1"
    external_ids        : {system-id="b8d16d0b-a40a-47c8-a767-e118fe22759e"}
    iface_types         : [geneve, gre, internal, ipsec_gre, lisp, patch, stt, system, tap, vxlan]
    manager_options     : [c66f2e87-4724-448a-b9df-837d56b9f4a9, defec179-720e-458e-8875-ea763a0d8909]
    next_cfg            : 7
    other_config        : {local_ip="11.0.0.30", provider_mappings="datacentre:br-ex"}
    ovs_version         : "2.5.0"
    ssl                 : []
    statistics          : {}
    system_type         : RedHatEnterpriseServer
    system_version      : "7.3-Maipo"
  4. Verify that the value of the other_config option has the correct local_ip set for the local interface that connects to the Tenant network through VXLAN tunnels.
  5. Verify that the provider_mappings value under the other_config option matches the value given in the OpenDaylightProviderMappings heat template parameter. This configuration maps the neutron logical networks to corresponding physical interfaces.
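
To display only this column, you can also query it directly, for example:

    # ovs-vsctl get Open_vSwitch . other_config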

5.2.5. Verify neutron configuration

  1. Connect as the superuser to one of the Controller nodes.
  2. Make sure that the file /etc/neutron/neutron.conf contains service_plugins=odl-router_v2 (see the grep example after this list).
  3. Check that the file /etc/neutron/plugin.ini contains the following ml2 configuration:

    [ml2]
    mechanism_drivers=opendaylight_v2
    
    [ml2_odl]
    password=admin
    username=admin
    url=http://192.0.2.9:8081/controller/nb/v2/neutron
  4. On one of the overcloud controllers, verify that neutron agents are running properly.

    # openstack network agent list
  5. Verify that both the Metadata and DHCP agents are in the up state (the admin_state_up option is True):

    +--------------------------------------+----------------+--------------------------+-------------------+-------+----------------+------------------------+
    | id                                   | agent_type     | host                     | availability_zone | alive | admin_state_up | binary                 |
    +--------------------------------------+----------------+--------------------------+-------------------+-------+----------------+------------------------+
    | 3be198c5-b3aa-4d0e-abb4-51b29db3af47 | Metadata agent | controller-0.localdomain |                   | :-)   | True           | neutron-metadata-agent |
    | 79579d47-dd7d-4ef3-9614-cd2f736043f3 | DHCP agent     | controller-0.localdomain | nova              | :-)   | True           | neutron-dhcp-agent     |
    +--------------------------------------+----------------+--------------------------+-------------------+-------+----------------+------------------------+
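
A quick way to perform the checks in steps 2 and 3 is to grep the configuration files directly, for example:

    # grep service_plugins /etc/neutron/neutron.conf
    # grep -A5 '\[ml2_odl\]' /etc/neutron/plugin.ini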

More information

  • The IP in the plugin.ini, mentioned in step 3, should be the InternalAPI Virtual IP Address (VIP).
  • Note that neither an Open vSwitch agent nor an L3 agent is listed in the output of step 5. This is the desired state, because both are now managed by OpenDaylight.