Chapter 2. Configuring the Overcloud before Creation
The following chapter provides the configuration required before running the openstack overcloud deploy command. This includes preparing nodes for provisioning, configuring an IPv6 address on the Undercloud, and creating a network environment file that defines the IPv6 parameters for the Overcloud.
2.1. Initializing the Stack User
Log in to the director host as the stack user and run the following command to initialize your director configuration:
$ source ~/stackrc
This sets up environment variables containing authentication details to access the director’s CLI tools.
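To verify that the environment is initialized, you can check for the OS_ authentication variables that the file exports; the exact set depends on your director version:

$ env | grep OS_

The output includes values such as OS_AUTH_URL and OS_USERNAME.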
2.2. Configuring an IPv6 Address on the Undercloud
The Undercloud requires access to the Overcloud’s Public API, which is on the External network. To accomplish this, the Undercloud host requires an IPv6 address on the interface accessing the External network.
The Provisioning network still requires IPv4 connectivity for every node. The Undercloud and the Overcloud nodes use this network for PXE boot, introspection, and deployment. In addition, the nodes use this network to access DNS and NTP services over IPv4.
Native VLAN or Dedicated Interface
If the Undercloud uses a native VLAN or a dedicated interface attached to the External network, use the ip command to add an IPv6 address to the interface. In this example, the dedicated interface is eth0:
$ sudo ip link set dev eth0 up; sudo ip addr add 2001:db8::1/64 dev eth0
Trunked VLAN Interface
If the Undercloud uses a trunked VLAN on the same interface as the control plane bridge (br-ctlplane) to access the External network, create a new VLAN interface, attach it to the control plane, and add an IPv6 address to the VLAN. For example, our scenario uses 100 for the External network's VLAN ID:
$ sudo ovs-vsctl add-port br-ctlplane vlan100 tag=100 -- set interface vlan100 type=internal
$ sudo ip link set dev vlan100 up; sudo ip addr add 2001:db8::1/64 dev vlan100
Confirming the IPv6 Address
Confirm the addition of the IPv6 address with the ip command:
$ ip addr
The IPv6 address appears on the chosen interface.
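For example, with the trunked VLAN configuration above, the relevant part of the output looks similar to the following excerpt (interface index and flags vary by system):

vlan100: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500
    inet6 2001:db8::1/64 scope global
       valid_lft forever preferred_lft forever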
Setting a Persistent IPv6 Address
In addition to the above, you might want to make the IPv6 address permanent. In this case, modify or create the appropriate interface file in /etc/sysconfig/network-scripts/ (in our example, either ifcfg-eth0 or ifcfg-vlan100). Include the following lines:
IPV6INIT=yes
IPV6ADDR=2001:db8::1/64
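For the dedicated-interface case, a minimal ifcfg-eth0 might look like the following sketch; the ONBOOT and BOOTPROTO lines are shown as assumptions for a statically addressed interface and are not part of this procedure:

DEVICE=eth0
ONBOOT=yes
BOOTPROTO=none
IPV6INIT=yes
IPV6ADDR=2001:db8::1/64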
For more information, see How do I configure a network interface for IPv6? on the Red Hat Customer Portal.
2.3. Setting up your Environment
This section uses a condensed version of the process from Configuring Basic Overcloud Requirements with the CLI Tools in the Director Installation and Usage guide.
Use the following workflow to set up your environment:
- Create a node definition template and register blank nodes in the director.
- Inspect hardware of all nodes.
- Manually tag nodes into roles.
- Create flavors and tag them into roles.
2.3.1. Registering Nodes
A node definition template (instackenv.json) is a JSON-format file that contains the hardware and power management details for registering nodes. For example:
{ "nodes":[ { "mac":[ "bb:bb:bb:bb:bb:bb" ], "cpu":"4", "memory":"6144", "disk":"40", "arch":"x86_64", "pm_type":"pxe_ipmitool", "pm_user":"admin", "pm_password":"p@55w0rd!", "pm_addr":"192.0.2.205" }, { "mac":[ "cc:cc:cc:cc:cc:cc" ], "cpu":"4", "memory":"6144", "disk":"40", "arch":"x86_64", "pm_type":"pxe_ipmitool", "pm_user":"admin", "pm_password":"p@55w0rd!", "pm_addr":"192.0.2.206" }, { "mac":[ "dd:dd:dd:dd:dd:dd" ], "cpu":"4", "memory":"6144", "disk":"40", "arch":"x86_64", "pm_type":"pxe_ipmitool", "pm_user":"admin", "pm_password":"p@55w0rd!", "pm_addr":"192.0.2.207" }, { "mac":[ "ee:ee:ee:ee:ee:ee" ], "cpu":"4", "memory":"6144", "disk":"40", "arch":"x86_64", "pm_type":"pxe_ipmitool", "pm_user":"admin", "pm_password":"p@55w0rd!", "pm_addr":"192.0.2.208" } { "mac":[ "ff:ff:ff:ff:ff:ff" ], "cpu":"4", "memory":"6144", "disk":"40", "arch":"x86_64", "pm_type":"pxe_ipmitool", "pm_user":"admin", "pm_password":"p@55w0rd!", "pm_addr":"192.0.2.209" } { "mac":[ "gg:gg:gg:gg:gg:gg" ], "cpu":"4", "memory":"6144", "disk":"40", "arch":"x86_64", "pm_type":"pxe_ipmitool", "pm_user":"admin", "pm_password":"p@55w0rd!", "pm_addr":"192.0.2.210" } ] }
The Provisioning network uses IPv4 addresses. The IPMI addresses must also be IPv4 addresses, and they must either be directly attached or reachable through routing over the Provisioning network.
After creating the template, save the file to the stack user's home directory (/home/stack/instackenv.json), then import it into the director. Use the following command to accomplish this:
$ openstack overcloud node import ~/instackenv.json
This imports the template and registers each node from the template with the director.
Assign the kernel and ramdisk images to all nodes:
$ openstack overcloud node configure
The nodes are now registered and configured in the director.
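To verify the registration, you can list the nodes with the Bare Metal service client, for example:

$ openstack baremetal node list

Each registered node appears with its power state and provisioning state.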
2.3.2. Inspecting the Hardware of Nodes
After registering the nodes, inspect the hardware attributes of each node with the following command:
$ openstack overcloud node introspect --all-manageable
The nodes must be in the manageable state. Make sure this process runs to completion. This process usually takes 15 minutes for bare metal nodes.
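To monitor the progress, you can follow the introspection service logs in a separate terminal; the unit names below assume a standard undercloud installation:

$ sudo journalctl -l -u openstack-ironic-inspector -u openstack-ironic-inspector-dnsmasq -f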
2.3.3. Manually Tagging the Nodes
After registering and inspecting the hardware of each node, tag them into specific profiles. These profile tags match your nodes to flavors, and in turn the flavors are assigned to a deployment role.
Retrieve a list of your nodes to identify their UUIDs:
$ ironic node-list
To manually tag a node to a specific profile, add a profile option to the properties/capabilities parameter for each node. For example, to tag three nodes to use a controller profile and three nodes to use a compute profile, use the following commands:
$ ironic node-update 1a4e30da-b6dc-499d-ba87-0bd8a3819bc0 add properties/capabilities='profile:control,boot_option:local'
$ ironic node-update 6faba1a9-e2d8-4b7c-95a2-c7fbdc12129a add properties/capabilities='profile:control,boot_option:local'
$ ironic node-update 5e3b2f50-fcd9-4404-b0a2-59d79924b38e add properties/capabilities='profile:control,boot_option:local'
$ ironic node-update 484587b2-b3b3-40d5-925b-a26a2fa3036f add properties/capabilities='profile:compute,boot_option:local'
$ ironic node-update d010460b-38f2-4800-9cc4-d69f0d067efe add properties/capabilities='profile:compute,boot_option:local'
$ ironic node-update d930e613-3e14-44b9-8240-4f3559801ea6 add properties/capabilities='profile:compute,boot_option:local'
The addition of the profile:compute and profile:control options tags the nodes into their respective profiles.
As an alternative to manual tagging, you can use automatic profile tagging to tag larger numbers of nodes based on benchmarking data.
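Whichever method you use, you can confirm the resulting assignments afterwards, for example:

$ openstack overcloud profiles list

The output shows each node with its current profile.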
2.4. Configuring the Network
This section examines the network configuration for the Overcloud. This includes isolating service traffic onto specific networks and configuring the Overcloud with our IPv6 options.
2.4.1. Configuring Composable Network Details
1. Copy the default network_data file:

$ cp /usr/share/openstack-tripleo-heat-templates/network_data.yaml /home/stack/.

2. Edit the local copy of the network_data.yaml file and modify the parameters to suit your IPv6 networking requirements. For example, the External network contains the following default network details:

- name: External
  vip: true
  name_lower: external
  vlan: 10
  ipv6: true
  ipv6_subnet: '2001:db8:fd00:1000::/64'
  ipv6_allocation_pools: [{'start': '2001:db8:fd00:1000::10', 'end': '2001:db8:fd00:1000:ffff:ffff:ffff:fffe'}]
  gateway_ipv6: '2001:db8:fd00:1000::1'
- name is the only mandatory value; however, you can also use name_lower to normalize names for readability. For example, changing InternalApi to internal_api.
- vip: true creates a virtual IP address (VIP) on the new network, with the remaining parameters setting the defaults for the new network.
- ipv6 defines whether to enable IPv6.
- ipv6_subnet, ipv6_allocation_pools, and gateway_ipv6 set the default IPv6 subnet, IP range, and gateway for the network.
3. Include the custom network_data file with your deployment using the -n option, as shown in the example below. Without the -n option, the deployment command uses the default network details.
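For example, a deployment command that picks up the custom network_data file might begin as follows; the complete command for this scenario appears in the next chapter:

$ openstack overcloud deploy --templates -n /home/stack/network_data.yaml ...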
2.4.2. Network Isolation
The overcloud assigns services to the provisioning network by default. However, Red Hat OpenStack Platform director can divide overcloud network traffic into isolated networks. These networks are defined in a file that you include in the deployment command line, by default named network_data.yaml.
When services are listening on networks using IPv6 addresses, you must provide parameter defaults to indicate that the service is running on an IPv6 network. The network each service runs on is defined by the file network/service_net_map.yaml, and may be overridden by declaring parameter defaults for individual ServiceNetMap entries. The following services require the parameter default to be set in an environment file:
parameter_defaults:
  # Enable IPv6 for Ceph.
  CephIPv6: True
  # Enable IPv6 for Corosync. This is required when Corosync is using an IPv6 IP in the cluster.
  CorosyncIPv6: True
  # Enable IPv6 for MongoDB. This is required when MongoDB is using an IPv6 IP.
  MongoDbIPv6: True
  # Enable various IPv6 features in Nova.
  NovaIPv6: True
  # Enable IPv6 environment for RabbitMQ.
  RabbitIPv6: True
  # Enable IPv6 environment for Memcached.
  MemcachedIPv6: True
  # Enable IPv6 environment for MySQL.
  MysqlIPv6: True
  # Enable IPv6 environment for Manila.
  ManilaIPv6: True
  # Enable IPv6 environment for Redis.
  RedisIPv6: True
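If a service needs to run on a different network than its default in network/service_net_map.yaml, you can override its individual ServiceNetMap entry in the same environment file. A minimal sketch, assuming the MysqlNetwork key from service_net_map.yaml and the internal_api network from this scenario:

parameter_defaults:
  ServiceNetMap:
    # Pin the MySQL service to the internal_api network.
    MysqlNetwork: internal_api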
The environments/network-isolation.j2.yaml file in the director's core Heat templates is a Jinja2 file that defines all ports and VIPs for each IPv6 network in your composable network file. When rendered, it results in a network-isolation.yaml file in the same location with the full resource registry.
2.4.3. Configuring Interfaces
The Overcloud requires a set of network interface templates. The director contains a set of Jinja2-based Heat templates, which render based on your network_data file:
NIC directory | Description | Environment file
---|---|---
single-nic-vlans | Single NIC (nic1) with control plane and VLANs attached to the default Open vSwitch bridge | environments/net-single-nic-with-vlans-v6.yaml
single-nic-linux-bridge-vlans | Single NIC (nic1) with control plane and VLANs attached to the default Linux bridge | environments/net-single-nic-linux-bridge-with-vlans-v6.yaml
bond-with-vlans | Control plane attached to nic1. Default Open vSwitch bridge with bonded NIC configuration (nic2 and nic3) and VLANs attached | environments/net-bond-with-vlans-v6.yaml
multiple-nics | Control plane attached to nic1. Assigns each sequential NIC to each network defined in the network_data file | environments/net-multiple-nics-v6.yaml
For this example, we use the single-nic-vlans template collection.
2.4.4. Configuring the IPv6 Isolated Network
The default Heat template collection contains a Jinja2-based environment file for the default networking configuration. This file is environments/network-environment.j2.yaml. When rendered with our network_data file, it results in a standard YAML file called network-environment.yaml. Some parts of this file might require overrides, which is why you should create your own custom network-environment.yaml file. For this scenario, create a custom environment file (/home/stack/network-environment.yaml) with the following details:
parameter_defaults:
  DnsServers: ["8.8.8.8","8.8.4.4"]
  ControlPlaneDefaultRoute: 192.0.2.1
  ControlPlaneSubnetCidr: "24"
  EC2MetadataIp: 192.0.2.1
The parameter_defaults section contains the customization for certain services that remain on IPv4.
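Taken together, the files created in this chapter are passed to the deployment command with -n and -e options. The following sketch previews the general form; the exact invocation, including any additional environment files, is covered in the next chapter:

$ openstack overcloud deploy --templates \
  -n /home/stack/network_data.yaml \
  -e /usr/share/openstack-tripleo-heat-templates/environments/network-isolation.yaml \
  -e /usr/share/openstack-tripleo-heat-templates/environments/net-single-nic-with-vlans-v6.yaml \
  -e /home/stack/network-environment.yaml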
2.5. Completing Overcloud Configuration
This completes the necessary steps to configure an IPv6-based Overcloud. The next chapter uses the openstack overcloud deploy command to create the Overcloud using the configuration from this chapter.