Chapter 4. Test the deployment


4.1. Perform a basic test

The basic test verifies that instances can ping each other. The test also checks SSH access through a floating IP. This example describes how to perform the test from the undercloud.

This procedure consists of many individual steps; for convenience, it is divided into smaller parts. However, you must follow all of the steps in the following order.

Note
In this setup, a flat network is used to create the _External_ network, and _VXLAN_ is used for the _Tenant_ networks. _VLAN External_ networks and _VLAN Tenant_ networks are also supported, depending on the desired deployment.

4.1.1. Create a new network for testing

  1. Source the credentials to access the overcloud:

    $ source /home/stack/overcloudrc
  2. Create an external neutron network to access the instance from outside the overcloud:

    $ openstack network create --external --project service --provider-network-type flat --provider-physical-network datacentre external
  3. Create the corresponding neutron subnet for the external network that you created in the previous step:

    $ openstack subnet create  --project service --no-dhcp --network external --gateway 192.168.37.1 --allocation-pool start=192.168.37.200,end=192.168.37.220 --subnet-range 192.168.37.0/24 external-subnet
  4. Download the cirros image that you want to use to create overcloud instances:

    $ wget http://download.cirros-cloud.net/0.3.4/cirros-0.3.4-x86_64-disk.img
  5. Upload the cirros image to glance on the overcloud:

    $ openstack image create cirros --public --file ./cirros-0.3.4-x86_64-disk.img --disk-format qcow2 --container-format bare
  6. Create a tiny flavor to use for overcloud instances:

    $ openstack flavor create m1.tiny --ram 512 --disk 1 --public
  7. Create a VXLAN tenant network to host the instances:

    $ openstack network create net_test --provider-network-type=vxlan --provider-segment 100
  8. Create a subnet for the tenant network that you created in the previous step:

    $ openstack subnet create --network net_test --subnet-range 123.123.123.0/24 test
  9. Find and store the ID of the tenant network:

    $ net_mgmt_id=$(openstack network list | grep net_test | awk '{print $2}')
  10. Create an instance cirros1 and attach it to the net_test network and SSH security group:

    $ openstack server create --flavor m1.tiny --image cirros --nic net-id=$net_mgmt_id --security-group SSH --key-name RDO_KEY --availability-zone nova:overcloud-novacompute-0.localdomain cirros1
  11. Create a second instance called cirros2, also attached to the net_test network and SSH security group:

    $ openstack server create --flavor m1.tiny --image cirros --nic net-id=$net_mgmt_id --security-group SSH --key-name RDO_KEY --availability-zone nova:overcloud-novacompute-0.localdomain cirros2
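The pipeline in step 9 depends only on the table layout, so it can be sanity-checked offline. The following is a minimal sketch against made-up sample output; on a real overcloud, `openstack network list -f value -c ID -c Name` avoids the table parsing entirely:

```shell
# Illustrative `openstack network list` output; the ID below is made up.
sample_output='+--------------------------------------+----------+
| ID                                   | Name     |
+--------------------------------------+----------+
| 5b8c3f3d-0d27-4a54-93fe-9d1f1a2b3c4d | net_test |
+--------------------------------------+----------+'

# The grep/awk pattern from step 9: field 2 of the matching row is the ID.
net_mgmt_id=$(echo "$sample_output" | grep net_test | awk '{print $2}')
echo "$net_mgmt_id"
```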

4.1.2. Set up networking in the test environment

  1. Find and store the ID of the admin project:

    $ admin_project_id=$(openstack project list | grep admin | awk '{print $2}')
  2. Find and store the default security group of the admin project:

    $ admin_sec_group_id=$(openstack security group list | grep $admin_project_id | awk '{print $2}')
  3. Add a rule to the admin default security group to allow ICMP traffic ingress:

    $ openstack security group rule create $admin_sec_group_id --protocol icmp --ingress
  4. Add a rule to the admin default security group to allow ICMP traffic egress:

    $ openstack security group rule create $admin_sec_group_id --protocol icmp --egress
  5. Add a rule to the admin default security group to allow SSH traffic ingress:

    $ openstack security group rule create $admin_sec_group_id --protocol tcp --dst-port 22 --ingress
  6. Add a rule to the admin default security group to allow SSH traffic egress:

    $ openstack security group rule create $admin_sec_group_id --protocol tcp --dst-port 22 --egress
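Because the four rule-creation commands above differ only in protocol and direction, they can be generated in a loop. This sketch only prints the commands it would run (the security group ID is a placeholder); drop the echo to execute them against a live overcloud:

```shell
# Placeholder; on a live overcloud, set this as in steps 1-2.
admin_sec_group_id='<sec-group-id>'

# Print one rule-creation command per protocol/direction pair.
cmds=$(for rule in 'icmp --ingress' 'icmp --egress' \
                   'tcp --dst-port 22 --ingress' 'tcp --dst-port 22 --egress'; do
  echo openstack security group rule create "$admin_sec_group_id" --protocol $rule
done)
echo "$cmds"
```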

4.1.3. Test the connectivity

  1. From horizon, you can access the noVNC console for an instance. Use the password from overcloudrc to log in to horizon as the admin user. The default credentials for cirros images are username cirros and password cubswin:).
  2. From the novnc console, verify that the instance receives a DHCP address:

    $ ip addr show
    Note

    You can also run the command nova console-log <instance id> from the undercloud to verify that the instance receives a DHCP address.

  3. Repeat steps 1 and 2 for all other instances.
  4. From one instance, attempt to ping the other instances. This validates basic Tenant network connectivity in the overcloud.
  5. Verify that you can reach other instances by using a Floating IP.

4.1.4. Create devices

  1. Create a floating IP on the external network to associate with the cirros1 instance:

    $ openstack floating ip create external
  2. Create a router to handle NAT between the floating IP and cirros1 tenant IP:

    $ openstack router create test
  3. Set the gateway of the router to be the external network:

    $ openstack router set test --external-gateway external
  4. Add an interface to the router attached to the tenant network:

    $ openstack router add subnet test test
  5. Find and store the floating IP that you created in step 1:

    $ floating_ip=$(openstack floating ip list | head -n -1 | grep -Eo '[0-9]+\.[0-9]+\.[0-9]+\.[0-9]+')
  6. Associate the floating IP with the cirros1 instance:

    $ openstack server add floating ip cirros1 $floating_ip
  7. From a node that has external network access, attempt to log in to the instance:

    $ ssh cirros@$floating_ip
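The `grep -Eo` pattern in step 5 prints every IPv4 address in the table, which also matches the fixed IP once an instance is associated. A safer sketch selects the Floating IP Address column explicitly; the table below is made-up sample output, and on a live overcloud `openstack floating ip list -f value -c "Floating IP Address"` returns the value directly:

```shell
# Illustrative `openstack floating ip list` output; addresses are made up.
sample='+--------------------------------------+---------------------+------------------+
| ID                                   | Floating IP Address | Fixed IP Address |
+--------------------------------------+---------------------+------------------+
| 9c1d2e3f-aaaa-bbbb-cccc-ddddeeeeffff | 192.168.37.201      | 123.123.123.5    |
+--------------------------------------+---------------------+------------------+'

# Take column 3 (Floating IP Address) of the first data row.
floating_ip=$(echo "$sample" | awk -F'|' '/[0-9]+\./ {gsub(/ /, "", $3); print $3; exit}')
echo "$floating_ip"
```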

4.2. Perform advanced tests

After you deploy OpenDaylight, you can test several components of its configuration and deployment. To test specific parts of the installation, follow the relevant procedures. Each procedure is described separately.

You must perform the procedures on the overcloud nodes.

4.2.1. Connect to overcloud nodes

To connect to the overcloud nodes and ensure that they are operating correctly, complete the following steps:

Procedure

  1. Log in to the undercloud.
  2. Source the undercloud credentials:

      $ source /home/stack/stackrc
  3. List all instances:

      $ openstack server list
  4. Choose the required instance and note the instance IP address in the list.
  5. Connect to the machine with the IP address from the list that you obtain in the previous step:

      $ ssh heat-admin@<IP from step 4>
  6. Switch to superuser:

      $ sudo -i

4.2.2. Test OpenDaylight

To test that OpenDaylight is operating correctly, you must verify that the service is operational and that the specified features are loaded correctly.

Procedure

  1. Log in to the overcloud node running OpenDaylight as a superuser, or to an OpenDaylight node running in a custom role.
  2. Verify that the OpenDaylight Controller is running on all Controller nodes:

    # docker ps | grep opendaylight
    2363a99d514a        192.168.24.1:8787/rhosp13/openstack-opendaylight:latest         "kolla_start"            4 hours ago         Up 4 hours (healthy)                       opendaylight_api
  3. Verify that HAProxy is properly configured to listen on port 8081:

    # docker exec -it haproxy-bundle-docker-0 grep -A7 opendaylight /etc/haproxy/haproxy.cfg
    listen opendaylight
      bind 172.17.0.10:8081 transparent
      bind 192.168.24.10:8081 transparent
      mode http
      balance source
      server overcloud-controller-0.internalapi.localdomain 172.17.0.22:8081 check fall 5 inter 2000 rise 2
      server overcloud-controller-1.internalapi.localdomain 172.17.0.12:8081 check fall 5 inter 2000 rise 2
      server overcloud-controller-2.internalapi.localdomain 172.17.0.13:8081 check fall 5 inter 2000 rise 2
  4. Connect to the karaf console. The password for the karaf account is karaf:

    # ssh -p 8101 karaf@localhost
  5. List the installed features.

    # feature:list -i | grep odl-netvirt-openstack

    If the third column of the output contains an x, the feature is installed correctly.

  6. Verify that the API is operational.

      # web:list | grep neutron

    This API endpoint is set in /etc/neutron/plugins/ml2/ml2_conf.ini and is used by neutron to communicate with OpenDaylight.

  7. Verify that VXLAN tunnels between the nodes are up.

    # vxlan:show
  8. To test that the REST API is responding correctly, list the modules that are using it.

    # curl -u "admin:admin" http://localhost:8081/restconf/modules

    The output is similar to the following example (shortened for brevity):

    {"modules":{"module":[{"name":"netty-event-executor","revision":"2013-11-12","namespace":"urn:opendaylight:params:xml:ns:yang:controller:netty:eventexecutor"},{"name" ...
  9. List the REST streams that use the host internal_API IP.

    # curl -u "admin:admin" http://localhost:8081/restconf/streams

    The output is similar to the following:

    {"streams":{}}
  10. Run the following command with host internal_API IP to verify that NetVirt is operational:

    # curl -u "admin:admin" http://localhost:8081/restconf/operational/network-topology:network-topology/topology/netvirt:1

    The following output confirms that NetVirt is operational:

    {"topology":[{"topology-id":"netvirt:1"}]}
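The checks in steps 8 to 10 lend themselves to a small scripted health check that looks for the expected topology ID in the REST response. The following is a sketch that uses the expected payload from step 10 as a stand-in; on a live controller, populate response with the curl command from that step:

```shell
# Stand-in for the REST response; on a live controller use:
#   response=$(curl -su "admin:admin" http://localhost:8081/restconf/operational/network-topology:network-topology/topology/netvirt:1)
response='{"topology":[{"topology-id":"netvirt:1"}]}'

# Presence of the netvirt:1 topology ID indicates that NetVirt is operational.
if echo "$response" | grep -q '"topology-id":"netvirt:1"'; then
  status='NetVirt operational'
else
  status='NetVirt check failed'
fi
echo "$status"
```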

4.2.3. Test Open vSwitch

To validate Open vSwitch, connect to one of the Compute nodes and verify that it is properly configured and connected to OpenDaylight.

Procedure

  1. Connect to one of the Compute nodes in the overcloud as a superuser.
  2. List the Open vSwitch settings.

    # ovs-vsctl show
  3. Note that the output shows multiple Managers. In this example, lines 2 and 3 show the Managers.

        6b003705-48fc-4534-855f-344327d36f2a
            Manager "ptcp:6639:127.0.0.1"
            Manager "tcp:172.17.1.16:6640"
                is_connected: true
            Bridge br-ex
                fail_mode: standalone
                Port br-ex-int-patch
                    Interface br-ex-int-patch
                        type: patch
                        options: {peer=br-ex-patch}
                Port br-ex
                    Interface br-ex
                        type: internal
                Port "eth2"
                    Interface "eth2"
            Bridge br-isolated
                fail_mode: standalone
                Port "eth1"
                    Interface "eth1"
                Port "vlan50"
                    tag: 50
                    Interface "vlan50"
                        type: internal
                Port "vlan30"
                    tag: 30
                    Interface "vlan30"
                        type: internal
                Port br-isolated
                    Interface br-isolated
                        type: internal
                Port "vlan20"
                    tag: 20
                    Interface "vlan20"
                        type: internal
            Bridge br-int
                Controller "tcp:172.17.1.16:6653"
                    is_connected: true
                fail_mode: secure
                Port br-ex-patch
                    Interface br-ex-patch
                        type: patch
                        options: {peer=br-ex-int-patch}
                Port "tun02d236d8248"
                    Interface "tun02d236d8248"
                        type: vxlan
                        options: {key=flow, local_ip="172.17.2.18", remote_ip="172.17.2.20"}
                Port br-int
                    Interface br-int
                        type: internal
                Port "tap1712898f-15"
                    Interface "tap1712898f-15"
            ovs_version: "2.7.0"
  4. Verify that the tcp manager points to the IP of the node where OpenDaylight is running.
  5. Verify that the Managers show is_connected: true to ensure that connectivity to OpenDaylight from OVS is established and uses the OVSDB protocol.
  6. Verify that each bridge (other than br-int) exists and corresponds to the NIC template used for deployment with the Compute role.
  7. Verify that the tcp connection corresponds to the IP where the OpenDaylight service is running.
  8. Verify that the bridge br-int shows is_connected: true and that an OpenFlow protocol connection to OpenDaylight is established.

More information

  • OpenDaylight creates the br-int bridge automatically.
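The two connectivity checks above (steps 5 and 8) can be reduced to counting is_connected: true lines: a healthy node shows one for the OVSDB Manager and one for the br-int OpenFlow Controller. The following sketch runs against an abbreviated sample; on a Compute node, capture the real output instead:

```shell
# Abbreviated sample of `ovs-vsctl show`; on a node, use show=$(ovs-vsctl show).
show='Manager "tcp:172.17.1.16:6640"
    is_connected: true
Bridge br-int
    Controller "tcp:172.17.1.16:6653"
        is_connected: true'

# Expect two connected channels: OVSDB (Manager) and OpenFlow (Controller).
connected=$(echo "$show" | grep -c 'is_connected: true')
echo "$connected"
```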

4.2.4. Verify the Open vSwitch configuration on Compute nodes

  1. Connect to a Compute node as a superuser.
  2. List the Open vSwitch configuration settings.

    # ovs-vsctl list open_vswitch
  3. Read the output. It is similar to the following example:

     _uuid               : 4b624d8f-a7af-4f0f-b56a-b8cfabf7635d
     bridges             : [11127421-3bcc-4f9a-9040-ff8b88486508, 350135a4-4627-4e1b-8bef-56a1e4249bef]
     cur_cfg             : 7
     datapath_types      : [netdev, system]
     db_version          : "7.12.1"
     external_ids        : {system-id="b8d16d0b-a40a-47c8-a767-e118fe22759e"}
     iface_types         : [geneve, gre, internal, ipsec_gre, lisp, patch, stt, system, tap, vxlan]
     manager_options     : [c66f2e87-4724-448a-b9df-837d56b9f4a9, defec179-720e-458e-8875-ea763a0d8909]
     next_cfg            : 7
     other_config        : {local_ip="11.0.0.30", provider_mappings="datacentre:br-ex"}
     ovs_version         : "2.7.0"
     ssl                 : []
     statistics          : {}
     system_type         : RedHatEnterpriseServer
     system_version      : "7.4-Maipo"
  4. Verify that the value of the other_config option has the correct local_ip set for the local interface that connects to the Tenant network through VXLAN tunnels.
  5. Verify that the provider_mappings value under the other_config option matches the value in the OpenDaylightProviderMappings heat template parameter. This configuration maps the neutron logical networks to corresponding physical interfaces.
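The two values checked in steps 4 and 5 can be pulled out of the other_config map directly. The following sketch parses the sample value shown above; on a live node, `ovs-vsctl get Open_vSwitch . other_config:local_ip` returns the value without any parsing:

```shell
# other_config as printed by `ovs-vsctl list open_vswitch` (sample values).
other_config='{local_ip="11.0.0.30", provider_mappings="datacentre:br-ex"}'

# Extract each quoted value from the map.
local_ip=$(echo "$other_config" | sed -n 's/.*local_ip="\([^"]*\)".*/\1/p')
mappings=$(echo "$other_config" | sed -n 's/.*provider_mappings="\([^"]*\)".*/\1/p')
echo "$local_ip $mappings"
```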

4.2.5. Verify neutron configuration

Procedure

  1. Connect to the superuser account on one of the Controller role nodes.
  2. Ensure that the /etc/neutron/neutron.conf file contains service_plugins=odl-router_v2,trunk.
  3. Ensure that the /etc/neutron/plugin.ini file contains the following ml2 configuration:

    [ml2]
    mechanism_drivers=opendaylight_v2
    
    [ml2_odl]
    password=admin
    username=admin
    url=http://192.0.2.9:8081/controller/nb/v2/neutron
  4. On one of the overcloud controllers, verify that neutron agents are running properly.

    # openstack network agent list
  5. Verify that the admin_state_up value for both the Metadata and DHCP agents is True:

    +--------------------------------------+----------------+--------------------------+-------------------+-------+----------------+------------------------+
    | id                                   | agent_type     | host                     | availability_zone | alive | admin_state_up | binary                 |
    +--------------------------------------+----------------+--------------------------+-------------------+-------+----------------+------------------------+
    | 3be198c5-b3aa-4d0e-abb4-51b29db3af47 | Metadata agent | controller-0.localdomain |                   | :-)   | True           | neutron-metadata-agent |
    | 79579d47-dd7d-4ef3-9614-cd2f736043f3 | DHCP agent     | controller-0.localdomain | nova              | :-)   | True           | neutron-dhcp-agent     |
    +--------------------------------------+----------------+--------------------------+-------------------+-------+----------------+------------------------+

More information

  • The IP address in plugin.ini, mentioned in step 3, must be the InternalAPI Virtual IP address (VIP).
  • No Open vSwitch agent or L3 agent is listed in the output of step 5. This is the desired state, because OpenDaylight now manages both.
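The file checks in steps 2 and 3 can be scripted as greps for the expected lines. The following sketch uses inline stand-ins for the two files; on a Controller node, point the greps at /etc/neutron/neutron.conf and /etc/neutron/plugin.ini instead:

```shell
# Inline stand-ins for the two files; on a node, grep the real files instead.
neutron_conf='service_plugins=odl-router_v2,trunk'
plugin_ini='[ml2]
mechanism_drivers=opendaylight_v2'

echo "$neutron_conf" | grep -q '^service_plugins=odl-router_v2,trunk' \
  && neutron_ok='service_plugins OK'
echo "$plugin_ini" | grep -q '^mechanism_drivers=opendaylight_v2' \
  && ml2_ok='mechanism_drivers OK'
echo "$neutron_ok $ml2_ok"
```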