IPv6 Networking for the Overcloud
Configuring an Overcloud to Use IPv6 Networking
Abstract
Chapter 1. Introduction
Red Hat OpenStack Platform director creates a cloud environment called the Overcloud. By default, the Overcloud uses Internet Protocol version 4 (IPv4) to configure the service endpoints. However, the Overcloud also supports Internet Protocol version 6 (IPv6) endpoints, which is useful for organizations that support IPv6 infrastructure. This guide provides information and a configuration example for using IPv6 in your Overcloud.
1.1. Defining IPv6 Networking
IPv6 is the latest version of the Internet Protocol standard. The Internet Engineering Task Force (IETF) developed IPv6 to combat the exhaustion of IP addresses under the widely used IPv4 standard. IPv6 differs from IPv4 in several ways, including:
- Large IP Address Range
- The IPv6 range is much larger than the IPv4 range.
- Better End-to-End Connectivity
- The larger IP range provides better end-to-end connectivity due to less reliance on network address translation.
- No Broadcasting
- IPv6 does not support traditional IP broadcasting. Instead, IPv6 uses multicasting to send packets to applicable hosts in a hierarchical manner.
- Stateless Address Autoconfiguration (SLAAC)
- IPv6 provides features for automatically configuring IP addresses and detecting duplicate addresses on a network. This reduces the reliance on a DHCP server to assign addresses.
IPv6 uses 128 bits (represented as eight groups of four hexadecimal digits, each group encoding 16 bits) to define addresses, while IPv4 uses only 32 bits (represented as four decimal numbers, each encoding 8 bits). For example, a representation of an IPv4 address (192.168.0.1) looks like this:
Bits | Representation |
---|---|
11000000 | 192 |
10101000 | 168 |
00000000 | 0 |
00000001 | 1 |
For an IPv6 address (2001:db8:88ec:9fb3::1), the representation looks like this:
Bits | Representation |
---|---|
0010 0000 0000 0001 | 2001 |
0000 1101 1011 1000 | 0db8 |
1000 1000 1110 1100 | 88ec |
1001 1111 1011 0011 | 9fb3 |
0000 0000 0000 0000 | 0000 |
0000 0000 0000 0000 | 0000 |
0000 0000 0000 0000 | 0000 |
0000 0000 0000 0001 | 0001 |
Notice you can also represent IPv6 addresses without leading zeros in each bit group and omit one consecutive run of all-zero bit groups (replacing it with ::) once per IP address. In our example, you can represent the 0db8 bit grouping as just db8 and omit the three sets of 0000 bit groups, which shortens the representation from 2001:0db8:88ec:9fb3:0000:0000:0000:0001 to 2001:db8:88ec:9fb3::1. For more information, see "RFC 5952: A Recommendation for IPv6 Address Text Representation".
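If you want to verify a shortened form, the Python `ipaddress` module applies the RFC 5952 rules for you. This quick check is illustrative and not part of the original procedure:

$ python3 -c "import ipaddress; print(ipaddress.ip_address('2001:0db8:88ec:9fb3:0000:0000:0000:0001'))"
2001:db8:88ec:9fb3::1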
Subnetting in IPv6
Similar to IPv4, an IPv6 address uses a bit mask to define the address prefix as its network. For example, if you apply a /64 bit mask to our sample IP address (e.g. 2001:db8:88ec:9fb3::1/64), the bit mask acts as a prefix that defines the first 64 bits (2001:db8:88ec:9fb3) as the network. The remaining bits (0000:0000:0000:0001) define the host.
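You can check the network portion of a prefixed address the same way. Again, this is an illustrative check rather than part of the original text:

$ python3 -c "import ipaddress; print(ipaddress.ip_interface('2001:db8:88ec:9fb3::1/64').network)"
2001:db8:88ec:9fb3::/64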
IPv6 also uses some special address types, including:
- Loopback
- The loopback address is used for internal communication within the host. This address is always ::1/128.
- Link Local
- A link local address is an IP address valid within a particular network segment. IPv6 requires each network device to have a link local address, which uses the fe80::/10 prefix. In practice, these addresses usually carry the fe80::/64 prefix.
- Unique local
- A unique local address is intended for local communication. These addresses use a fc00::/7 prefix.
- Multicast
- Hosts use multicast addresses to join multicast groups. These addresses use a ff00::/8 prefix. For example, FF02::1 is a multicast group for all nodes on the network and FF02::2 is a multicast group for all routers.
- Global Unicast
- These addresses are usually reserved for public IP addresses. These addresses use a 2000::/3 prefix.
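As an illustrative check (assuming an interface named eth0, which is not part of the original text), you can view a host's link local addresses and ping the all-nodes multicast group:

$ ip -6 addr show scope link
$ ping6 -c 2 -I eth0 ff02::1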
1.2. Using IPv6 in Red Hat OpenStack Platform
Red Hat OpenStack Platform director provides a method for mapping OpenStack services to isolated networks. These networks include:
- Internal API
- Storage
- Storage Management
- Project (tenant) Networks (Neutron VLAN mode)
- External
For more information about these network traffic types, see the Director Installation and Usage guide.
Red Hat OpenStack Platform director also provides methods to use IPv6 communication for these networks. This means the required OpenStack services, databases, and other related services use IPv6 addresses to communicate. This also applies to environments using a high availability solution involving multiple Controller nodes. This helps organizations integrate Red Hat OpenStack Platform with their IPv6 infrastructure.
Use the following table as a guide for what networks support IPv6 in Red Hat OpenStack Platform:
Network Type | Dual Stack (IPv4/v6) | Single Stack (IPv6) | Single Stack (IPv4) | Notes |
---|---|---|---|---|
Internal API | | Yes | Yes | |
Storage | | Yes | Yes | |
Storage Management | | Yes | Yes | |
Project Networks | Yes | Yes | Yes | |
Project Network Endpoints | Yes | Yes | Yes | This refers to the IP address of the network hosting the project network tunnels, not the project networks themselves. IPv6 for network endpoints supports only VXLAN and Geneve. Generic routing encapsulation (GRE) is not yet supported. |
External - Public API (and Horizon) | | Yes | Yes | |
External - Floating IPs | Yes | | Yes | IPv6 uses Global Unicast Addresses (GUAs) instead of NAT and floating IP addresses. The Networking (neutron) service expects the IPv6 addressing between project networks to use GUAs, with no overlap in GUAs across the project networks, and therefore can be routed without NAT. With dual stack (IPv4/v6), you can use floating IP addresses only to reach the IP addresses on IPv4 subnets. |
Provider Networks | Yes | Yes | Yes | IPv6 support is dependent on the project operating system. |
Provisioning (PXE/DHCP) | | | Yes | Interfaces on this network are IPv4 only. |
IPMI or other BMC | | | Yes | RHOSP communicates with baseboard management controller (BMC) interfaces over the Provisioning network, which is IPv4. If BMC interfaces support dual stack IPv4 or IPv6, tools that are not part of RHOSP can use IPv6 to communicate with the BMCs. |
Overcloud Provisioning network | | | | The Provisioning network used for ironic in the overcloud. |
Overcloud Cleaning network | | | | The isolated network used to clean a machine before it is ready for reuse. |
1.3. Setting Requirements
This guide acts as supplementary information for the Director Installation and Usage guide. This means the same requirements specified in Director Installation and Usage also apply to this guide. Implement these requirements as necessary.
This guide also requires the following:
- An Undercloud host with the Red Hat OpenStack Platform director installed. See the Director Installation and Usage guide.
- Your network supports IPv6-native VLANs as well as IPv4-native VLANs. Both will be used in the deployment.
1.4. Defining the Scenario
The scenario for this guide is to create an Overcloud with an isolated network that uses IPv6. The guide aims to achieve this objective through network isolation configured using Heat templates and environment files. This scenario also provides certain variants to these Heat templates and environment files to demonstrate specific differences in configuration.
In this scenario, the Undercloud still uses IPv4 connectivity for PXE boot, introspection, deployment, and other services.
This guide uses a scenario similar to the Advanced Overcloud scenario in the Director Installation and Usage guide. The main difference is the omission of the Ceph Storage nodes.
For more information about this scenario, see the Director Installation and Usage guide.
This guide uses the 2001:DB8::/32 IPv6 prefix for documentation purposes, as defined in RFC 3849. Make sure to substitute these example addresses with IPv6 addresses from your own network.
Chapter 2. Configuring the Overcloud before Creation
The following chapter provides the configuration required before running the openstack overcloud deploy
command. This includes preparing nodes for provisioning, configuring an IPv6 address on the Undercloud, and creating a network environment file that defines the IPv6 parameters for the Overcloud.
2.1. Initializing the Stack User
Log into the director host as the stack
user and run the following command to initialize your director configuration:
$ source ~/stackrc
This sets up environment variables containing authentication details to access the director’s CLI tools.
2.2. Configuring an IPv6 Address on the Undercloud
The Undercloud requires access to the Overcloud’s Public API, which is on the External network. To accomplish this, the Undercloud host requires an IPv6 address on the interface accessing the External network.
The Provisioning network still requires IPv4 connectivity for every node. The Undercloud and the Overcloud nodes use this network for PXE boot, introspection, and deployment. In addition, the nodes use this network to access DNS and NTP services over IPv4.
Native VLAN or Dedicated Interface
If the Undercloud uses a native VLAN or a dedicated interface attached to the External network, use the `ip` command to add an IPv6 address to the interface. In this example, the dedicated interface is `eth0`:
$ sudo ip link set dev eth0 up; sudo ip addr add 2001:db8::1/64 dev eth0
Trunked VLAN Interface
If the Undercloud uses a trunked VLAN on the same interface as the control plane bridge (br-ctlplane
) to access the External network, create a new VLAN interface, attach it to the control plane, and add an IPv6 address to the VLAN. For example, our scenario uses 100 for the External network’s VLAN ID:
$ sudo ovs-vsctl add-port br-ctlplane vlan100 tag=100 -- set interface vlan100 type=internal
$ sudo ip link set dev vlan100 up; sudo ip addr add 2001:db8::1/64 dev vlan100
Confirming the IPv6 Address
Confirm the addition of the IPv6 address with the ip
command:
$ ip addr
The IPv6 address appears on the chosen interface.
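For example, on the trunked VLAN interface from our scenario, the output includes an `inet6` entry similar to the following (illustrative, trimmed output):

$ ip addr show dev vlan100
    ...
    inet6 2001:db8::1/64 scope global
       valid_lft forever preferred_lft forever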
Setting a Persistent IPv6 Address
In addition to the above, you might want to make the IPv6 address permanent. In this case, modify or create the appropriate interface file in `/etc/sysconfig/network-scripts/` (in our example, either `ifcfg-eth0` or `ifcfg-vlan100`). Include the following lines:
IPV6INIT=yes
IPV6ADDR=2001:db8::1/64
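For the dedicated interface case, a minimal `ifcfg-eth0` sketch might look like the following. The `ONBOOT` and `BOOTPROTO` values are assumptions for a statically configured interface and are not part of the original steps:

DEVICE=eth0
ONBOOT=yes
BOOTPROTO=none
IPV6INIT=yes
IPV6ADDR=2001:db8::1/64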
For more information, see How do I configure a network interface for IPv6? on the Red Hat Customer Portal.
2.3. Setting up your Environment
This section uses a cut-down version of the process from Configuring Basic Overcloud Requirements with the CLI Tools in the Director Installation and Usage guide.
Use the following workflow to set up your environment:
- Create a node definition template and register blank nodes in the director.
- Inspect the hardware of all nodes.
- Manually tag nodes into roles.
- Create flavors and tag them into roles.
2.3.1. Registering Nodes
A node definition template (instackenv.json
) is a JSON format file and contains the hardware and power management details for registering nodes. For example:
{ "nodes":[ { "mac":[ "bb:bb:bb:bb:bb:bb" ], "cpu":"4", "memory":"6144", "disk":"40", "arch":"x86_64", "pm_type":"pxe_ipmitool", "pm_user":"admin", "pm_password":"p@55w0rd!", "pm_addr":"192.0.2.205" }, { "mac":[ "cc:cc:cc:cc:cc:cc" ], "cpu":"4", "memory":"6144", "disk":"40", "arch":"x86_64", "pm_type":"pxe_ipmitool", "pm_user":"admin", "pm_password":"p@55w0rd!", "pm_addr":"192.0.2.206" }, { "mac":[ "dd:dd:dd:dd:dd:dd" ], "cpu":"4", "memory":"6144", "disk":"40", "arch":"x86_64", "pm_type":"pxe_ipmitool", "pm_user":"admin", "pm_password":"p@55w0rd!", "pm_addr":"192.0.2.207" }, { "mac":[ "ee:ee:ee:ee:ee:ee" ], "cpu":"4", "memory":"6144", "disk":"40", "arch":"x86_64", "pm_type":"pxe_ipmitool", "pm_user":"admin", "pm_password":"p@55w0rd!", "pm_addr":"192.0.2.208" } { "mac":[ "ff:ff:ff:ff:ff:ff" ], "cpu":"4", "memory":"6144", "disk":"40", "arch":"x86_64", "pm_type":"pxe_ipmitool", "pm_user":"admin", "pm_password":"p@55w0rd!", "pm_addr":"192.0.2.209" } { "mac":[ "gg:gg:gg:gg:gg:gg" ], "cpu":"4", "memory":"6144", "disk":"40", "arch":"x86_64", "pm_type":"pxe_ipmitool", "pm_user":"admin", "pm_password":"p@55w0rd!", "pm_addr":"192.0.2.210" } ] }
The Provisioning network uses IPv4 addresses. The IPMI addresses must also be IPv4 addresses, and they must either be directly attached or reachable through routing over the Provisioning network.
After creating the template, save the file to the stack user’s home directory (/home/stack/instackenv.json
), then import it into the director. Use the following command to accomplish this:
$ openstack overcloud node import ~/instackenv.json
This imports the template and registers each node from the template into the director.
Assign the kernel and ramdisk images to all nodes:
$ openstack overcloud node configure
The nodes are now registered and configured in the director.
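To confirm the registration, you can list the nodes and check their power and provisioning states. This is a quick check, not part of the original steps:

$ openstack baremetal node list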
2.3.2. Inspecting the Hardware of Nodes
After registering the nodes, inspect the hardware attributes of each node. Run the following command:
$ openstack overcloud node introspect --all-manageable
The nodes must be in the manageable
state. Make sure this process runs to completion. This process usually takes 15 minutes for bare metal nodes.
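If any node is not yet in the `manageable` state, you can move it there before introspection. For example, assuming `<NODE_UUID>` is a UUID from your node listing:

$ openstack baremetal node manage <NODE_UUID>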
2.3.3. Manually Tagging the Nodes
After registering and inspecting the hardware of each node, tag them into specific profiles. These profile tags match your nodes to flavors, and in turn the flavors are assigned to a deployment role.
Retrieve a list of your nodes to identify their UUIDs:
$ ironic node-list
To manually tag a node to a specific profile, add a profile option to the properties/capabilities
parameter for each node. For example, to tag three nodes to use a controller profile and one node to use a compute profile, use the following commands:
$ ironic node-update 1a4e30da-b6dc-499d-ba87-0bd8a3819bc0 add properties/capabilities='profile:control,boot_option:local'
$ ironic node-update 6faba1a9-e2d8-4b7c-95a2-c7fbdc12129a add properties/capabilities='profile:control,boot_option:local'
$ ironic node-update 5e3b2f50-fcd9-4404-b0a2-59d79924b38e add properties/capabilities='profile:control,boot_option:local'
$ ironic node-update 484587b2-b3b3-40d5-925b-a26a2fa3036f add properties/capabilities='profile:compute,boot_option:local'
$ ironic node-update d010460b-38f2-4800-9cc4-d69f0d067efe add properties/capabilities='profile:compute,boot_option:local'
$ ironic node-update d930e613-3e14-44b9-8240-4f3559801ea6 add properties/capabilities='profile:compute,boot_option:local'
The addition of the `profile:compute` and `profile:control` options tags the nodes into their respective profiles.
As an alternative to manual tagging, use automatic profile tagging to tag larger numbers of nodes based on benchmarking data.
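The workflow in this section also calls for creating flavors and tagging them to roles. The Director Installation and Usage guide covers this in full; the following is a minimal sketch for the control profile, assuming the RAM, disk, and vCPU values from our node definition template (repeat with `profile:compute` for the compute flavor):

$ openstack flavor create --id auto --ram 6144 --disk 40 --vcpus 4 control
$ openstack flavor set --property "capabilities:boot_option"="local" --property "capabilities:profile"="control" control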
2.4. Configuring the Network
This section examines the network configuration for the Overcloud. This includes isolating services onto specific networks and configuring the Overcloud with our IPv6 options.
2.4.1. Configuring Composable Network Details
1. Copy the default `network_data` file:

   $ cp /usr/share/openstack-tripleo-heat-templates/network_data.yaml /home/stack/.

2. Edit the local copy of the `network_data.yaml` file and modify the parameters to suit your IPv6 networking requirements. For example, the External network contains the following default network details:

   - name: External
     vip: true
     name_lower: external
     vlan: 10
     ipv6: true
     ipv6_subnet: '2001:db8:fd00:1000::/64'
     ipv6_allocation_pools: [{'start': '2001:db8:fd00:1000::10', 'end': '2001:db8:fd00:1000:ffff:ffff:ffff:fffe'}]
     gateway_ipv6: '2001:db8:fd00:1000::1'
   - `name` is the only mandatory value; however, you can also use `name_lower` to normalize names for readability. For example, changing `InternalApi` to `internal_api`.
   - `vip: true` creates a virtual IP address (VIP) on the new network, with the remaining parameters setting the defaults for the new network.
   - `ipv6` defines whether to enable IPv6.
   - `ipv6_subnet`, `ipv6_allocation_pools`, and `gateway_ipv6` set the default IPv6 subnet and IP range for the network.
3. Include the custom `network_data` file with your deployment using the `-n` option, as shown in the sketch after this list. Without the `-n` option, the deployment command uses the default network details.
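For example, a sketch of how the `-n` option fits into the deployment command (Chapter 3 shows the full command for this scenario):

$ openstack overcloud deploy --templates -n /home/stack/network_data.yaml [ADDITIONAL OPTIONS]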
2.4.2. Network Isolation
The overcloud assigns services to the provisioning network by default. However, Red Hat OpenStack Platform director can divide overcloud network traffic into isolated networks. These networks are defined in a file that you include in the deployment command line, named `network_data.yaml` by default.
Note: IPv6 VXLAN is not supported for Tenant networks.
When services are listening on networks using IPv6 addresses, you must provide parameter defaults to indicate the service is running on an IPv6 network. The network each service runs on is defined by the file network/service_net_map.yaml
, and may be overridden by declaring parameter defaults for individual ServiceNetMap
entries. These services require the parameter default to be set in an environment file:
parameter_defaults:
  # Enable IPv6 for Ceph.
  CephIPv6: True
  # Enable IPv6 for Corosync. This is required when Corosync is using an IPv6 IP in the cluster.
  CorosyncIPv6: True
  # Enable IPv6 for MongoDB. This is required when MongoDB is using an IPv6 IP.
  MongoDbIPv6: True
  # Enable various IPv6 features in Nova.
  NovaIPv6: True
  # Enable IPv6 environment for RabbitMQ.
  RabbitIPv6: True
  # Enable IPv6 environment for Memcached.
  MemcachedIPv6: True
  # Enable IPv6 environment for MySQL.
  MysqlIPv6: True
  # Enable IPv6 environment for Manila.
  ManilaIPv6: True
  # Enable IPv6 environment for Redis.
  RedisIPv6: True
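To move an individual service to a different network, override its `ServiceNetMap` entry in the same environment file. The following sketch pins the MySQL service to the Internal API network; the entry name follows the convention used in `network/service_net_map.yaml`:

parameter_defaults:
  ServiceNetMap:
    MysqlNetwork: internal_api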
The environments/network-isolation.j2.yaml
file in the director’s core Heat templates is a Jinja2 file that defines all ports and VIPs for each IPv6 network in your composable network file. When rendered, it results in a network-isolation.yaml
file in the same location with the full resource registry.
2.4.3. Configuring Interfaces
The Overcloud requires a set of network interface templates. The director contains a set of Jinja2-based Heat templates, which render based on your network_data
file:
NIC directory | Description | Environment file |
---|---|---|
`single-nic-vlans` | Single NIC (`nic1`) with control plane and VLANs attached to the default Open vSwitch bridge | `environments/net-single-nic-with-vlans.j2.yaml` |
`single-nic-linux-bridge-vlans` | Single NIC (`nic1`) with control plane and VLANs attached to the default Linux bridge | `environments/net-single-nic-linux-bridge-with-vlans.j2.yaml` |
`multiple-nics` | Control plane attached to `nic1`, with each separate network attached to a sequential NIC | `environments/net-multiple-nics.j2.yaml` |
`bond-with-vlans` | Control plane attached to `nic1`, with a default Open vSwitch bond and VLANs attached to the bond | `environments/net-bond-with-vlans.j2.yaml` |
For this example, we use the `single-nic-vlans` template collection.
2.4.4. Configuring the IPv6 Isolated Network
The default Heat template collection contains a Jinja2-based environment file for the default networking configuration. This file is `environments/network-environment.j2.yaml`. When rendered with our `network_data` file, it results in a standard YAML file called `network-environment.yaml`. Some parts of this file might require overrides, which is why you should create your own custom `network-environment.yaml` file. For this scenario, create a custom environment file (`/home/stack/network-environment.yaml`) with the following details:
parameter_defaults:
  DnsServers: ["8.8.8.8","8.8.4.4"]
  ControlPlaneDefaultRoute: 192.0.2.1
  ControlPlaneSubnetCidr: "24"
  EC2MetadataIp: 192.0.2.1
The parameter_defaults
section contains the customization for certain services that remain on IPv4.
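If you also need to override the IPv6 ranges that the rendered `network-environment.yaml` file defines, you can extend the same custom file. The parameter names below are assumptions based on the names that `network-environment.j2.yaml` typically renders for the External network; verify them against your rendered file before use:

parameter_defaults:
  # Assumed example: override the External network IPv6 range.
  ExternalNetCidr: '2001:db8:fd00:1000::/64'
  ExternalAllocationPools: [{'start': '2001:db8:fd00:1000::10', 'end': '2001:db8:fd00:1000:ffff:ffff:ffff:fffe'}]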
2.5. Completing Overcloud Configuration
This completes the necessary steps to configure an IPv6-based Overcloud. The next chapter uses the openstack overcloud deploy
command to create the Overcloud using the configuration from this chapter.
Chapter 3. Creating the Overcloud
The creation of an Overcloud that uses IPv6 networking requires additional arguments for the openstack overcloud deploy
command. For example:
$ openstack overcloud deploy --templates \
  -e /usr/share/openstack-tripleo-heat-templates/environments/network-isolation.yaml \
  -e /usr/share/openstack-tripleo-heat-templates/environments/net-single-nic-with-vlans.yaml \
  -e /home/stack/network-environment.yaml \
  --ntp-server pool.ntp.org \
  [ADDITIONAL OPTIONS]
The above command uses the following options:
- `--templates` - Creates the Overcloud from the default Heat template collection.
- `-e /usr/share/openstack-tripleo-heat-templates/environments/network-isolation.yaml` - Adds an additional environment file to the Overcloud deployment. In this case, it is the environment file that initializes network isolation for IPv6.
- `-e /usr/share/openstack-tripleo-heat-templates/environments/net-single-nic-with-vlans.yaml` - Adds an additional environment file to the Overcloud deployment. In this case, it is the environment file that initializes the network interface configuration for each node.
- `-e /home/stack/network-environment.yaml` - Adds an additional environment file to the Overcloud deployment. In this case, it includes overrides related to IPv6. Ensure that `network_data.yaml` includes the setting `ipv6: true`. Previous versions of Red Hat OpenStack Platform director included two routes: one for IPv6 on the External network (default) and one for IPv4 on the Control Plane. To use both default routes, ensure that the controller definition in `roles_data.yaml` contains both networks in `default_route_networks` (for example, `default_route_networks: ['External', 'ControlPlane']`).
- `--ntp-server pool.ntp.org` - Sets our NTP server.
The Overcloud creation process begins and the director provisions your nodes. This process takes some time to complete. To view the status of the Overcloud creation, open a separate terminal as the stack
user and run:
$ source ~/stackrc
$ heat stack-list --show-nested
3.1. Accessing the Overcloud
The director generates a script to configure and help authenticate interactions with your Overcloud from the director host. The director saves this file (overcloudrc
) in your stack
user’s home directory. Run the following command to use this file:
$ source ~/overcloudrc
This loads the necessary environment variables to interact with your Overcloud from the director host’s CLI. To return to interacting with the director’s host, run the following command:
$ source ~/stackrc
Chapter 4. Configuring the Overcloud after Creation
The creation process results in a fully operational Overcloud with IPv6 networking. However, the Overcloud requires some post-creation configuration.
4.1. Creating the Overcloud Project Network
The Overcloud requires an IPv6-based Project network for instances. Source the `overcloudrc` file and create an initial Project network in `neutron`. For example:
$ source ~/overcloudrc
$ openstack network create default --external --provider-physical-network datacentre --provider-network-type vlan --provider-segment 101
$ openstack subnet create default --subnet-range 2001:db8:fd00:6000::/64 --ipv6-address-mode slaac --ipv6-ra-mode slaac --ip-version 6 --network default
This creates a basic `neutron` network called `default`. Confirm the created network with the `network list` and `subnet list` commands:
$ openstack network list $ openstack subnet list
4.2. Creating the Overcloud Public Network
This scenario configured the node interfaces to use the External network. However, you still need to create this network on the Overcloud so that you can provide network access to instances.
$ openstack network create public --external --provider-physical-network datacentre --provider-network-type vlan --provider-segment 100
$ openstack subnet create public --network public --subnet-range 2001:db8:0:2::/64 --ip-version 6 --gateway 2001:db8::1 --allocation-pool start=2001:db8:0:2::2,end=2001:db8:0:2::ffff --ipv6-address-mode slaac --ipv6-ra-mode slaac
This creates a network called `public` that provides an allocation pool of over 65,000 IPv6 addresses for our instances.
Create a router to route instance traffic to the External network.
$ openstack router create public-router
$ openstack router set public-router --external-gateway public
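As a quick check (not part of the original procedure), confirm the router's gateway attachment:

$ openstack router show public-router -c external_gateway_info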
Chapter 5. Conclusion
This concludes the creation and configuration of an IPv6-based Overcloud. For general Overcloud post-creation functions, see the Director Installation and Usage guide.