6.3. Controlling Node Placement
The default behavior for the director is to randomly select nodes for each role, usually based on their profile tag. However, the director also provides the ability to define specific node placement. This is useful to:
- Assign specific node IDs, e.g. controller-0, controller-1, etc.
- Assign custom hostnames
- Assign specific IP addresses
- Assign specific Virtual IP addresses
Note
Manually setting predictable IP addresses, virtual IP addresses, and ports for a network alleviates the need for allocation pools. However, it is recommended to retain allocation pools for each network to ease the scaling of new nodes. Make sure that any statically defined IP addresses fall outside the allocation pools. For more information on setting allocation pools, see Section 6.2.2, “Creating a Network Environment File”.
6.3.1. Assigning Specific Node IDs
This procedure assigns a node ID to each specific node. Examples of node IDs include controller-0, controller-1, compute-0, compute-1, and so forth.
The first step is to assign the ID as a per-node capability that the Nova scheduler matches on deployment. For example:
ironic node-update <id> replace properties/capabilities='node:controller-0,boot_option:local'
This assigns the capability node:controller-0 to the node. Repeat this pattern using a unique continuous index, starting from 0, for all nodes. Make sure all nodes for a given role (Controller, Compute, or each of the storage roles) are tagged in the same way, or else the Nova scheduler will not match the capabilities correctly.
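For example, a sketch of tagging three Controller nodes and two Compute nodes, using placeholder node IDs; note that the index restarts at 0 for each role:
$ ironic node-update <controller-node-0-id> replace properties/capabilities='node:controller-0,boot_option:local'
$ ironic node-update <controller-node-1-id> replace properties/capabilities='node:controller-1,boot_option:local'
$ ironic node-update <controller-node-2-id> replace properties/capabilities='node:controller-2,boot_option:local'
$ ironic node-update <compute-node-0-id> replace properties/capabilities='node:compute-0,boot_option:local'
$ ironic node-update <compute-node-1-id> replace properties/capabilities='node:compute-1,boot_option:local'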
The next step is to create a Heat environment file (for example, scheduler_hints_env.yaml) that uses scheduler hints to match the capabilities for each node. For example:
parameter_defaults:
  ControllerSchedulerHints:
    'capabilities:node': 'controller-%index%'
To use these scheduler hints, include the scheduler_hints_env.yaml environment file with the overcloud deploy command during Overcloud creation.
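For example, assuming the file was saved to the stack user's ~/templates directory:
$ openstack overcloud deploy --templates -e ~/templates/scheduler_hints_env.yaml [OTHER OPTIONS]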
The same approach is possible for each role via these parameters:
- ControllerSchedulerHints for Controller nodes.
- NovaComputeSchedulerHints for Compute nodes.
- BlockStorageSchedulerHints for Block Storage nodes.
- ObjectStorageSchedulerHints for Object Storage nodes.
- CephStorageSchedulerHints for Ceph Storage nodes.
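A single environment file can carry hints for several roles at once. For example, a sketch covering Controller, Compute, and Ceph Storage nodes; each value must match the node: capabilities tagged in the previous step, and the ceph-storage-%index% value here is illustrative:
parameter_defaults:
  ControllerSchedulerHints:
    'capabilities:node': 'controller-%index%'
  NovaComputeSchedulerHints:
    'capabilities:node': 'compute-%index%'
  CephStorageSchedulerHints:
    'capabilities:node': 'ceph-storage-%index%'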
Note
Node placement takes priority over profile matching. To avoid scheduling failures, use the default baremetal flavor for deployment and not the flavors designed for profile matching (compute, control, etc). For example:
$ openstack overcloud deploy ... --control-flavor baremetal --compute-flavor baremetal ...
6.3.2. Assigning Custom Hostnames
In combination with the node ID configuration in Section 6.3.1, “Assigning Specific Node IDs”, the director can also assign a specific custom hostname to each node. This is useful when you need to define where a system is located (e.g. rack2-row12), match an inventory identifier, or other situations where a custom hostname is desired.
To customize node hostnames, use the HostnameMap parameter in an environment file, such as the scheduler_hints_env.yaml file from Section 6.3.1, “Assigning Specific Node IDs”. For example:
parameter_defaults:
  ControllerSchedulerHints:
    'capabilities:node': 'controller-%index%'
  NovaComputeSchedulerHints:
    'capabilities:node': 'compute-%index%'
  HostnameMap:
    overcloud-controller-0: overcloud-controller-prod-123-0
    overcloud-controller-1: overcloud-controller-prod-456-0
    overcloud-controller-2: overcloud-controller-prod-789-0
    overcloud-compute-0: overcloud-compute-prod-abc-0
Define the HostnameMap in the parameter_defaults section. In each mapping, the first value is the original hostname that Heat defines using the HostnameFormat parameters (e.g. overcloud-controller-0), and the second value is the desired custom hostname for that node (e.g. overcloud-controller-prod-123-0).
Using this method in combination with the node ID placement ensures each node has a custom hostname.
6.3.3. Assigning Predictable IPs
For further control over the resulting environment, the director can also assign specific IPs to Overcloud nodes on each network. Use the environments/ips-from-pool-all.yaml environment file in the core Heat template collection. Copy this file to the stack user's templates directory.
$ cp /usr/share/openstack-tripleo-heat-templates/environments/ips-from-pool-all.yaml ~/templates/.
There are two major sections in the ips-from-pool-all.yaml file.
The first is a set of resource_registry references that override the defaults. These tell the director to use a specific IP for a given port on a node type. Modify each resource to use the absolute path of its respective template. For example:
OS::TripleO::Controller::Ports::ExternalPort: /usr/share/openstack-tripleo-heat-templates/network/ports/external_from_pool.yaml
OS::TripleO::Controller::Ports::InternalApiPort: /usr/share/openstack-tripleo-heat-templates/network/ports/internal_api_from_pool.yaml
OS::TripleO::Controller::Ports::StoragePort: /usr/share/openstack-tripleo-heat-templates/network/ports/storage_from_pool.yaml
OS::TripleO::Controller::Ports::StorageMgmtPort: /usr/share/openstack-tripleo-heat-templates/network/ports/storage_mgmt_from_pool.yaml
OS::TripleO::Controller::Ports::TenantPort: /usr/share/openstack-tripleo-heat-templates/network/ports/tenant_from_pool.yaml
The default configuration sets all networks on all node types to use pre-assigned IPs. To allow a particular network or node type to use default IP assignment instead, simply remove the resource_registry entries related to that node type or network from the environment file.
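For example, a sketch: to let Controller nodes take their tenant network addresses from the regular allocation pool again, comment out (or delete) only that entry and leave the others in place:
# Tenant network reverts to default pool-based assignment for Controllers:
# OS::TripleO::Controller::Ports::TenantPort: /usr/share/openstack-tripleo-heat-templates/network/ports/tenant_from_pool.yaml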
The second section is parameter_defaults, where the actual IP addresses are assigned. Each node type has an associated parameter:
- ControllerIPs for Controller nodes.
- NovaComputeIPs for Compute nodes.
- CephStorageIPs for Ceph Storage nodes.
- BlockStorageIPs for Block Storage nodes.
- SwiftStorageIPs for Object Storage nodes.
Each parameter is a map of network names to a list of addresses. Each network must have at least as many addresses as there will be nodes on that network. The director assigns addresses in order: the first node of each type receives the first address on each respective list, the second node receives the second address on each list, and so forth.
For example, if an Overcloud will contain three Ceph Storage nodes, the CephStorageIPs parameter might look like:
CephStorageIPs:
  storage:
    - 172.16.1.100
    - 172.16.1.101
    - 172.16.1.102
  storage_mgmt:
    - 172.16.3.100
    - 172.16.3.101
    - 172.16.3.102
The first Ceph Storage node receives two addresses: 172.16.1.100 and 172.16.3.100. The second receives 172.16.1.101 and 172.16.3.101, and the third receives 172.16.1.102 and 172.16.3.102. The same pattern applies to the other node types.
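The other parameters follow the same shape. For example, a sketch for three Controller nodes with hypothetical internal_api addresses; a real deployment would list addresses for every network the Controllers participate in:
ControllerIPs:
  internal_api:
    - 172.16.0.10
    - 172.16.0.11
    - 172.16.0.12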
Make sure the chosen IP addresses fall outside the allocation pools for each network defined in your network environment file (see Section 6.2.2, “Creating a Network Environment File”). For example, make sure the internal_api assignments fall outside of the InternalApiAllocationPools range. This avoids conflicts with any IPs chosen automatically. Likewise, make sure the IP assignments do not conflict with the VIP configuration, either for standard predictable VIP placement (see Section 6.3.4, “Assigning Predictable Virtual IPs”) or external load balancing (see Section 6.5, “Configuring External Load Balancing”).
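As a sketch with hypothetical ranges, if the network environment file starts the internal_api pool at 172.16.0.20, the static addresses from the example above (172.16.0.10 to 172.16.0.12) fall safely outside it:
parameter_defaults:
  # Static node IPs and VIPs must sit outside this range:
  InternalApiAllocationPools: [{'start': '172.16.0.20', 'end': '172.16.0.200'}]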
To apply this configuration during a deployment, include the environment file with the openstack overcloud deploy command. If using network isolation (see Section 6.2, “Isolating Networks”), include this file after the network-isolation.yaml file. For example:
$ openstack overcloud deploy --templates \
  -e /usr/share/openstack-tripleo-heat-templates/environments/network-isolation.yaml \
  -e ~/templates/ips-from-pool-all.yaml \
  [OTHER OPTIONS]
6.3.4. Assigning Predictable Virtual IPs
In addition to defining predictable IP addresses for each node, the director also provides a similar ability to define predictable Virtual IPs (VIPs) for clustered services. To accomplish this, edit the network environment file from Section 6.2.2, “Creating a Network Environment File” and add the VIP parameters in the parameter_defaults section:
parameter_defaults:
  ...
  ControlFixedIPs: [{'ip_address':'192.168.201.101'}]
  InternalApiVirtualFixedIPs: [{'ip_address':'172.16.0.9'}]
  PublicVirtualFixedIPs: [{'ip_address':'10.1.1.9'}]
  StorageVirtualFixedIPs: [{'ip_address':'172.18.0.9'}]
  StorageMgmtVirtualFixedIPs: [{'ip_address':'172.19.0.9'}]
  RedisVirtualFixedIPs: [{'ip_address':'172.16.0.8'}]
Select these IPs from outside of their respective allocation pool ranges. For example, select an IP address for InternalApiVirtualFixedIPs that is not within the InternalApiAllocationPools range.
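Continuing the hypothetical internal_api layout from the sketch in Section 6.3.3, “Assigning Predictable IPs”, both the VIP and the static node addresses sit below the start of the pool:
parameter_defaults:
  InternalApiAllocationPools: [{'start': '172.16.0.20', 'end': '172.16.0.200'}]
  # VIP at .9 and static node IPs at .10-.12 stay outside the pool above:
  InternalApiVirtualFixedIPs: [{'ip_address':'172.16.0.9'}]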