Chapter 8. Provisioning bare metal nodes before deploying the overcloud
This feature is available in this release as a Technology Preview, and therefore is not fully supported by Red Hat. It should only be used for testing, and should not be deployed in a production environment. For more information about Technology Preview features, see Scope of Coverage Details.
The overcloud deployment process contains two primary operations:
- Provisioning nodes
- Deploying the overcloud
You can mitigate some of the risk involved with this process and identify points of failure more efficiently if you separate these operations into distinct processes:
- Provision your bare metal nodes.
  - Create a node definition file in YAML format.
  - Run the provisioning command, including the node definition file.
- Deploy your overcloud.
  - Run the deployment command, including the heat environment file that the provisioning command generates.
The provisioning process provisions your nodes and generates a heat environment file that contains various node specifications, including node count, predictive node placement, custom images, and custom NICs. When you deploy your overcloud, include this file in the deployment command.
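For example, the separated workflow uses two commands that are covered in detail later in this chapter; the ellipses stand in for the other options and environment files in your deployment:
(undercloud)$ openstack overcloud node provision \
  --stack stack \
  --output ~/overcloud-baremetal-deployed.yaml \
  ~/overcloud-baremetal-deploy.yaml
(undercloud)$ openstack overcloud deploy \
  ... \
  -e ~/overcloud-baremetal-deployed.yaml \
  ...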
You cannot combine pre-provisioned nodes with director-provisioned nodes.
8.1. Registering nodes for the overcloud
Director requires a node definition template that specifies the hardware and power management details of your nodes. You can create this template in JSON format, nodes.json, or YAML format, nodes.yaml.
Procedure
Create a template named nodes.json or nodes.yaml that lists your nodes. Use the following JSON and YAML template examples to understand how to structure your node definition template:
Example JSON template
{
    "nodes": [
        {
            "name": "node01",
            "ports": [
                {
                    "address": "aa:aa:aa:aa:aa:aa",
                    "physical_network": "ctlplane",
                    "local_link_connection": {
                        "switch_id": "52:54:00:00:00:00",
                        "port_id": "p0"
                    }
                }
            ],
            "cpu": "4",
            "memory": "6144",
            "disk": "40",
            "arch": "x86_64",
            "pm_type": "ipmi",
            "pm_user": "admin",
            "pm_password": "p@55w0rd!",
            "pm_addr": "192.168.24.205"
        },
        {
            "name": "node02",
            "ports": [
                {
                    "address": "bb:bb:bb:bb:bb:bb",
                    "physical_network": "ctlplane",
                    "local_link_connection": {
                        "switch_id": "52:54:00:00:00:00",
                        "port_id": "p0"
                    }
                }
            ],
            "cpu": "4",
            "memory": "6144",
            "disk": "40",
            "arch": "x86_64",
            "pm_type": "ipmi",
            "pm_user": "admin",
            "pm_password": "p@55w0rd!",
            "pm_addr": "192.168.24.206"
        }
    ]
}
Example YAML template
nodes:
- name: "node01"
  ports:
  - address: "aa:aa:aa:aa:aa:aa"
    physical_network: ctlplane
    local_link_connection:
      switch_id: "52:54:00:00:00:00"
      port_id: p0
  cpu: 4
  memory: 6144
  disk: 40
  arch: "x86_64"
  pm_type: "ipmi"
  pm_user: "admin"
  pm_password: "p@55w0rd!"
  pm_addr: "192.168.24.205"
- name: "node02"
  ports:
  - address: "bb:bb:bb:bb:bb:bb"
    physical_network: ctlplane
    local_link_connection:
      switch_id: "52:54:00:00:00:00"
      port_id: p0
  cpu: 4
  memory: 6144
  disk: 40
  arch: "x86_64"
  pm_type: "ipmi"
  pm_user: "admin"
  pm_password: "p@55w0rd!"
  pm_addr: "192.168.24.206"
This template contains the following attributes:
- name
  The logical name for the node.
- ports
  The port to access the specific IPMI device. You can define the following optional port attributes:
  - address: The MAC address for the network interface on the node. Use only the MAC address for the Provisioning NIC of each system.
  - physical_network: The physical network that is connected to the Provisioning NIC.
  - local_link_connection: If you use IPv6 provisioning and LLDP does not correctly populate the local link connection during introspection, you must include fake data with the switch_id and port_id fields in the local_link_connection parameter. For more information on how to include fake data, see Using director introspection to collect bare metal node hardware information.
- cpu
  (Optional) The number of CPUs on the node.
- memory
  (Optional) The amount of memory in MB.
- disk
  (Optional) The size of the hard disk in GB.
- arch
  (Optional) The system architecture.
  Important: When building a multi-architecture cloud, the arch key is mandatory to distinguish nodes using x86_64 and ppc64le architectures.
- pm_type
  The power management driver that you want to use. This example uses the IPMI driver (ipmi).
  Note: IPMI is the preferred supported power management driver. For more information about supported power management types and their options, see Power management drivers. If these power management drivers do not work as expected, use IPMI for your power management.
- pm_user; pm_password
  The IPMI username and password.
- pm_addr
  The IP address of the IPMI device.
After you create the template, run the following commands to verify the formatting and syntax:
$ source ~/stackrc
(undercloud)$ openstack overcloud node import --validate-only ~/nodes.json
Important: You must also include the --http-boot /var/lib/ironic/tftpboot/ option for multi-architecture nodes.
Save the file to the home directory of the stack user (/home/stack/nodes.json).
Import the template to director to register each node from the template:
(undercloud)$ openstack overcloud node import ~/nodes.json
Note: If you use UEFI boot mode, you must also set the boot mode on each node. If you introspect your nodes without setting UEFI boot mode, the nodes boot in legacy mode. For more information, see Setting the boot mode to UEFI boot mode.
Wait for the node registration and configuration to complete. When complete, confirm that director has successfully registered the nodes:
(undercloud)$ openstack baremetal node list
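The command lists each registered node. The following output is illustrative only; the UUIDs, names, and states depend on your environment:
+--------------------------------------+--------+---------------+-------------+--------------------+-------------+
| UUID                                 | Name   | Instance UUID | Power State | Provisioning State | Maintenance |
+--------------------------------------+--------+---------------+-------------+--------------------+-------------+
| b9d8f349-8b9c-4d4e-9f62-2c6d1e8f3a10 | node01 | None          | power off   | manageable         | False       |
| 6a2f1c3e-4d5b-4e6f-8a7b-9c0d1e2f3a4b | node02 | None          | power off   | manageable         | False       |
+--------------------------------------+--------+---------------+-------------+--------------------+-------------+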
8.2. Creating an inventory of the bare-metal node hardware
Director needs the hardware inventory of the nodes in your Red Hat OpenStack Platform (RHOSP) deployment for profile tagging, benchmarking, and manual root disk assignment.
You can provide the hardware inventory to director by using one of the following methods:
- Automatic: You can use director’s introspection process, which collects the hardware information from each node. This process boots an introspection agent on each node. The introspection agent collects hardware data from the node and sends the data back to director. Director stores the hardware data in the Object Storage service (swift) running on the undercloud node.
- Manual: You can manually configure a basic hardware inventory for each bare metal machine. This inventory is stored in the Bare Metal Provisioning service (ironic) and is used to manage and deploy the bare-metal machines.
You must use director’s automatic introspection process if you use derive_params.yaml for your overcloud, which requires introspection data to be present. For more information on derive_params.yaml, see Workflows and derived parameters.
The director automatic introspection process provides the following advantages over the manual method for setting the Bare Metal Provisioning service ports:
- Introspection records all of the connected ports in the hardware information, including the port to use for PXE boot if it is not already configured in nodes.yaml.
- Introspection sets the local_link_connection attribute for each port if the attribute is discoverable using LLDP. When you use the manual method, you must configure local_link_connection for each port when you register the nodes.
- Introspection sets the physical_network attribute for the Bare Metal Provisioning service ports when deploying a spine-and-leaf or DCN architecture.
8.2.1. Using director introspection to collect bare metal node hardware information
After you register a physical machine as a bare metal node, you can automatically add its hardware details and create ports for each of its Ethernet MAC addresses by using director introspection.
As an alternative to automatic introspection, you can manually provide director with the hardware information for your bare metal nodes. For more information, see Manually configuring bare metal node hardware information.
Prerequisites
- You have registered the bare-metal nodes for your overcloud.
Procedure
- Log in to the undercloud host as the stack user.
- Source the stackrc undercloud credentials file:
  $ source ~/stackrc
Run the pre-introspection validation group to check the introspection requirements:
(undercloud)$ openstack tripleo validator run --group pre-introspection
- Review the results of the validation report.
Optional: Review detailed output from a specific validation:
(undercloud)$ openstack tripleo validator show run --full <validation>
Replace <validation> with the UUID of the specific validation from the report that you want to review.
Important: A FAILED validation does not prevent you from deploying or running Red Hat OpenStack Platform. However, a FAILED validation can indicate a potential issue with a production environment.
Inspect the hardware attributes of each node. You can inspect the hardware attributes of all nodes, or specific nodes:
Inspect the hardware attributes of all nodes:
(undercloud)$ openstack overcloud node introspect --all-manageable --provide
- Use the --all-manageable option to introspect only the nodes that are in a managed state. In this example, all nodes are in a managed state.
- Use the --provide option to reset all nodes to an available state after introspection.
Inspect the hardware attributes of specific nodes:
(undercloud)$ openstack overcloud node introspect --provide <node1> [node2] [noden]
- Use the --provide option to reset all the specified nodes to an available state after introspection.
- Replace <node1>, [node2], and all nodes up to [noden] with the UUID of each node that you want to introspect.
Monitor the introspection progress logs in a separate terminal window:
(undercloud)$ sudo tail -f /var/log/containers/ironic-inspector/ironic-inspector.log
Important: Ensure that the introspection process runs to completion. Introspection usually takes 15 minutes for bare metal nodes. However, incorrectly sized introspection networks can cause it to take much longer, which can result in the introspection failing.
Optional: If you have configured your undercloud for bare metal provisioning over IPv6, you must also check that LLDP has set the local_link_connection for Bare Metal Provisioning service (ironic) ports:
(undercloud)$ openstack baremetal port list --long -c UUID -c "Node UUID" -c "Local Link Connection"
If the Local Link Connection field is empty for the port on your bare metal node, you must populate the local_link_connection value manually with fake data. The following example sets the fake switch ID to 52:54:00:00:00:00, and the fake port ID to p0:
(undercloud)$ openstack baremetal port set <port_uuid> \
  --local-link-connection switch_id=52:54:00:00:00:00 \
  --local-link-connection port_id=p0
Verify that the Local Link Connection field contains the fake data:
(undercloud)$ openstack baremetal port list --long -c UUID -c "Node UUID" -c "Local Link Connection"
After the introspection completes, all nodes change to an available state.
8.2.2. Manually configuring bare-metal node hardware information
After you register a physical machine as a bare metal node, you can manually add its hardware details and create bare-metal ports for each of its Ethernet MAC addresses. You must create at least one bare-metal port before deploying the overcloud.
As an alternative to manual introspection, you can use the automatic director introspection process to collect the hardware information for your bare metal nodes. For more information, see Using director introspection to collect bare metal node hardware information.
Prerequisites
- You have registered the bare-metal nodes for your overcloud.
- You have configured local_link_connection for each port on the registered nodes in nodes.json. For more information, see Registering nodes for the overcloud.
Procedure
- Log in to the undercloud host as the stack user.
- Source the stackrc undercloud credentials file:
  $ source ~/stackrc
Set the boot option to local for each registered node by adding boot_option:local to the capabilities of the node:
(undercloud)$ openstack baremetal node set \
  --property capabilities="boot_option:local" <node>
- Replace <node> with the ID of the bare metal node.
Specify the deploy kernel and deploy ramdisk for the node driver:
(undercloud)$ openstack baremetal node set <node> \
  --driver-info deploy_kernel=<kernel_file> \
  --driver-info deploy_ramdisk=<initramfs_file>
- Replace <node> with the ID of the bare metal node.
- Replace <kernel_file> with the path to the .kernel image, for example, file:///var/lib/ironic/httpboot/agent.kernel.
- Replace <initramfs_file> with the path to the .initramfs image, for example, file:///var/lib/ironic/httpboot/agent.ramdisk.
Update the node properties to match the hardware specifications on the node:
(undercloud)$ openstack baremetal node set <node> \
  --property cpus=<cpu> \
  --property memory_mb=<ram> \
  --property local_gb=<disk> \
  --property cpu_arch=<arch>
- Replace <node> with the ID of the bare metal node.
- Replace <cpu> with the number of CPUs.
- Replace <ram> with the RAM in MB.
- Replace <disk> with the disk size in GB.
- Replace <arch> with the architecture type.
Optional: Specify the IPMI cipher suite for each node:
(undercloud)$ openstack baremetal node set <node> \
  --driver-info ipmi_cipher_suite=<version>
- Replace <node> with the ID of the bare metal node.
- Replace <version> with the cipher suite version to use on the node. Set to one of the following valid values:
  - 3 - The node uses the AES-128 with SHA1 cipher suite.
  - 17 - The node uses the AES-128 with SHA256 cipher suite.
Optional: If you have multiple disks, set the root device hints to inform the deploy ramdisk which disk to use for deployment:
(undercloud)$ openstack baremetal node set <node> \
  --property root_device='{"<property>": "<value>"}'
- Replace <node> with the ID of the bare metal node.
- Replace <property> and <value> with details about the disk that you want to use for deployment, for example root_device='{"size": "128"}'.
RHOSP supports the following properties:
- model (String): Device identifier.
- vendor (String): Device vendor.
- serial (String): Disk serial number.
- hctl (String): Host:Channel:Target:Lun for SCSI.
- size (Integer): Size of the device in GB.
- wwn (String): Unique storage identifier.
- wwn_with_extension (String): Unique storage identifier with the vendor extension appended.
- wwn_vendor_extension (String): Unique vendor storage identifier.
- rotational (Boolean): True for a rotational device (HDD), otherwise false (SSD).
- name (String): The name of the device, for example: /dev/sdb1. Use this property only for devices with persistent names.
Note: If you specify more than one property, the device must match all of those properties. For an illustrative command that uses one of these hints, see the sketch after this list.
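For example, the following sketch selects the deployment disk by its serial number. The serial value shown is illustrative only; use the serial number that the disk on your node reports:
(undercloud)$ openstack baremetal node set <node> \
  --property root_device='{"serial": "61866da04f380d001ea4e13c12e36ad6"}'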
Inform the Bare Metal Provisioning service of the node network card by creating a port with the MAC address of the NIC on the provisioning network:
(undercloud)$ openstack baremetal port create --node <node_uuid> <mac_address>
- Replace <node_uuid> with the unique ID of the bare metal node.
- Replace <mac_address> with the MAC address of the NIC used to PXE boot.
Validate the configuration of the node:
(undercloud)$ openstack baremetal node validate <node>
+------------+--------+---------------------------------------------+
| Interface  | Result | Reason                                      |
+------------+--------+---------------------------------------------+
| boot       | False  | Cannot validate image information for node  |
|            |        | a02178db-1550-4244-a2b7-d7035c743a9b        |
|            |        | because one or more parameters are missing  |
|            |        | from its instance_info. Missing are:        |
|            |        | ['ramdisk', 'kernel', 'image_source']       |
| console    | None   | not supported                               |
| deploy     | False  | Cannot validate image information for node  |
|            |        | a02178db-1550-4244-a2b7-d7035c743a9b        |
|            |        | because one or more parameters are missing  |
|            |        | from its instance_info. Missing are:        |
|            |        | ['ramdisk', 'kernel', 'image_source']       |
| inspect    | None   | not supported                               |
| management | True   |                                             |
| network    | True   |                                             |
| power      | True   |                                             |
| raid       | True   |                                             |
| storage    | True   |                                             |
+------------+--------+---------------------------------------------+
The validation output Result indicates the following:
- False: The interface has failed validation. If the reason provided includes missing the instance_info parameters ['ramdisk', 'kernel', 'image_source'], this might be because the Compute service populates those missing parameters at the beginning of the deployment process, so they have not been set at this point. If you are using a whole disk image, you might need to set only image_source to pass the validation.
- True: The interface has passed validation.
- None: The interface is not supported for your driver.
8.3. Provisioning bare metal nodes
Create a new YAML file ~/overcloud-baremetal-deploy.yaml, define the quantity and attributes of the bare metal nodes that you want to deploy, and assign overcloud roles to these nodes. The provisioning process creates a heat environment file that you can include in your openstack overcloud deploy command.
Prerequisites
- The undercloud is installed. For more information, see Installing director.
- The bare metal nodes are introspected and available for provisioning and deployment. For more information, see Registering nodes for the overcloud and Creating an inventory of the bare metal node hardware.
Procedure
Source the stackrc undercloud credential file:
$ source ~/stackrc
Create a new ~/overcloud-baremetal-deploy.yaml file and define the node count for each role that you want to provision. For example, to provision three Controller nodes and three Compute nodes, use the following syntax:
- name: Controller
  count: 3
- name: Compute
  count: 3
In the ~/overcloud-baremetal-deploy.yaml file, define any predictive node placements, custom NICs, or other attributes that you want to assign to your nodes. For example, use the following example syntax to provision three Controller nodes on nodes node00, node01, and node02, and three Compute nodes on node04, node05, and node06:
- name: Controller
  count: 3
  instances:
  - hostname: overcloud-controller-0
    name: node00
  - hostname: overcloud-controller-1
    name: node01
  - hostname: overcloud-controller-2
    name: node02
- name: Compute
  count: 3
  instances:
  - hostname: overcloud-novacompute-0
    name: node04
  - hostname: overcloud-novacompute-1
    name: node05
  - hostname: overcloud-novacompute-2
    name: node06
You can also override the default parameter values with the defaults parameter to avoid manual node definitions for each node entry:
- name: Controller
  count: 3
  defaults:
    nics:
      network: custom-network
      subnet: custom-subnet
  instances:
  - hostname: overcloud-controller-0
    name: node00
  ...
For more information about the parameters, attributes, and values that you can use in your node definition file, see Bare metal node provisioning attributes.
Optional: By default, the provisioning process uses the overcloud-full image. You can use the image attribute to define a custom image for use either with all nodes for a role, or for specific node instances:
- name: Controller
  count: 3
  instances:
  - hostname: overcloud-controller-0
    name: node00
    image:
      href: overcloud-custom
Note: Bare-metal nodes with a root (/) partition larger than 2 TiB need to use a whole disk image. For more information on whole disk images, see Creating whole-disk images.
Run the provisioning command, specifying the ~/overcloud-baremetal-deploy.yaml file and defining an output file with the --output option:
(undercloud)$ openstack overcloud node provision \
  --stack stack \
  --output ~/overcloud-baremetal-deployed.yaml \
  ~/overcloud-baremetal-deploy.yaml
The provisioning process generates a heat environment file with the name that you specify in the --output option. This file contains your node definitions. When you deploy the overcloud, include this file in the deployment command.
In a separate terminal, monitor your nodes to verify that they provision successfully. The provisioning process changes the node state from available to active:
(undercloud)$ watch openstack baremetal node list
Use the metalsmith tool to obtain a unified view of your nodes, including allocations and neutron ports:
(undercloud)$ metalsmith list
You can also use the openstack baremetal allocation command to verify association of nodes to hostnames:
(undercloud)$ openstack baremetal allocation list
When your nodes are provisioned successfully, you can deploy the overcloud. For more information, see Configuring a basic overcloud with pre-provisioned nodes.
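For example, a deployment command that includes the generated environment file might look similar to the following; the ellipses stand in for the other options and environment files in your deployment:
(undercloud)$ openstack overcloud deploy \
  ... \
  -e /usr/share/openstack-tripleo-heat-templates/environments/deployed-server-environment.yaml \
  -e ~/overcloud-baremetal-deployed.yaml \
  --deployed-server \
  --disable-validations \
  ...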
8.4. Scaling up bare metal nodes
To increase the count of bare metal nodes in an existing overcloud, increment the node count in the ~/overcloud-baremetal-deploy.yaml file and redeploy the overcloud.
Prerequisites
- A successful undercloud installation. For more information, see Installing director.
- A successful overcloud deployment. For more information, see Configuring a basic overcloud with pre-provisioned nodes.
- Bare metal nodes introspected and available for provisioning and deployment. For more information, see Registering nodes for the overcloud and Creating an inventory of the bare-metal node hardware.
Procedure
Source the stackrc undercloud credential file:
$ source ~/stackrc
Edit the ~/overcloud-baremetal-deploy.yaml file that you used to provision your bare metal nodes, and increment the count parameter for the roles that you want to scale up. For example, if your overcloud contains three Compute nodes, use the following snippet to increase the Compute node count to 10:
- name: Controller
  count: 3
- name: Compute
  count: 10
You can also add predictive node placement with the instances parameter. For more information about the parameters and attributes that are available, see Bare metal node provisioning attributes.
Run the provisioning command, specifying the ~/overcloud-baremetal-deploy.yaml file and defining an output file with the --output option:
(undercloud)$ openstack overcloud node provision \
  --stack stack \
  --output ~/overcloud-baremetal-deployed.yaml \
  ~/overcloud-baremetal-deploy.yaml
Monitor the provisioning progress with the openstack baremetal node list command.
Deploy the overcloud, including the ~/overcloud-baremetal-deployed.yaml file that the provisioning command generates, along with any other environment files relevant to your deployment:
(undercloud)$ openstack overcloud deploy \
  ... \
  -e /usr/share/openstack-tripleo-heat-templates/environments/deployed-server-environment.yaml \
  -e ~/overcloud-baremetal-deployed.yaml \
  --deployed-server \
  --disable-validations \
  ...
8.5. Scaling down bare metal nodes
Tag the nodes that you want to delete from the stack in the ~/overcloud-baremetal-deploy.yaml file, redeploy the overcloud, and then include this file in the openstack overcloud node delete command with the --baremetal-deployment option.
Prerequisites
- A successful undercloud installation. For more information, see Chapter 4, Installing director on the undercloud.
- A successful overcloud deployment. For more information, see Chapter 9, Configuring a basic overcloud with pre-provisioned nodes.
- At least one bare metal node that you want to remove from the stack.
Procedure
Source the stackrc undercloud credential file:
$ source ~/stackrc
Edit the ~/overcloud-baremetal-deploy.yaml file that you used to provision your bare metal nodes, and decrement the count parameter for the roles that you want to scale down. You must also define the following attributes for each node that you want to remove from the stack:
- The name of the node.
- The hostname that is associated with the node.
- The attribute provisioned: false.
For example, to remove the node overcloud-controller-1 from the stack, include the following snippet in your ~/overcloud-baremetal-deploy.yaml file:
- name: Controller
  count: 2
  instances:
  - hostname: overcloud-controller-0
    name: node00
  - hostname: overcloud-controller-1
    name: node01
    # Removed from cluster due to disk failure
    provisioned: false
  - hostname: overcloud-controller-2
    name: node02
Run the provisioning command, specifying the ~/overcloud-baremetal-deploy.yaml file and defining an output file with the --output option:
(undercloud)$ openstack overcloud node provision \
  --stack stack \
  --output ~/overcloud-baremetal-deployed.yaml \
  ~/overcloud-baremetal-deploy.yaml
Redeploy the overcloud and include the ~/overcloud-baremetal-deployed.yaml file that the provisioning command generates, along with any other environment files relevant to your deployment:
(undercloud)$ openstack overcloud deploy \
  ... \
  -e /usr/share/openstack-tripleo-heat-templates/environments/deployed-server-environment.yaml \
  -e ~/overcloud-baremetal-deployed.yaml \
  --deployed-server \
  --disable-validations \
  ...
After you redeploy the overcloud, the nodes that you define with the provisioned: false attribute are no longer present in the stack. However, these nodes are still running in a provisioned state.
Note: If you want to remove a node from the stack temporarily, you can deploy the overcloud with the attribute provisioned: false and then redeploy the overcloud with the attribute provisioned: true to return the node to the stack.
Run the openstack overcloud node delete command, including the ~/overcloud-baremetal-deploy.yaml file with the --baremetal-deployment option:
(undercloud)$ openstack overcloud node delete \
  --stack stack \
  --baremetal-deployment ~/overcloud-baremetal-deploy.yaml
Note: Do not include the nodes that you want to remove from the stack as command arguments in the openstack overcloud node delete command.
8.6. Bare metal node provisioning attributes
Use the following tables to understand the parameters, attributes, and values that are available for you to use when you provision bare metal nodes with the openstack overcloud node provision command.
Parameter | Value |
---|---|
name | Mandatory role name |
count | The number of nodes that you want to provision for this role. The default value is 1. |
defaults | A dictionary of default values for instances entry properties. Values that you set for a specific node in the instances parameter override these defaults. |
instances | A dictionary of values that you can use to specify attributes for specific nodes. For more information about supported properties in the instances parameter, see the instances and defaults parameters table later in this section. |
hostname_format | Overrides the default hostname format for this role. The default format uses the lower case role name. For example, the default format for the Controller role is %stackname%-controller-%index%. Only the Compute role does not follow the role name rule; the Compute default format is %stackname%-novacompute-%index%. |
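As a sketch of overriding the default format, assuming the %stackname% and %index% substitutions described in the table, a role entry might set hostname_format as follows:
- name: Controller
  count: 3
  hostname_format: '%stackname%-ctrl-%index%'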
Example syntax
In the following example, the name refers to the logical name of the node, and the hostname refers to the generated hostname, which is derived from the overcloud stack name, the role, and an incrementing index. All Controller servers use a default custom image overcloud-full-custom and are on predictive nodes. One of the Compute servers is placed predictively on node04 with custom hostname overcloud-compute-special, and the other 99 Compute servers are on nodes allocated automatically from the pool of available nodes:
- name: Controller
  count: 3
  defaults:
    image:
      href: file:///var/lib/ironic/images/overcloud-full-custom.qcow2
  instances:
  - hostname: overcloud-controller-0
    name: node00
  - hostname: overcloud-controller-1
    name: node01
  - hostname: overcloud-controller-2
    name: node02
- name: Compute
  count: 100
  instances:
  - hostname: overcloud-compute-special
    name: node04
Parameter | Value |
---|---|
hostname | If the hostname complies with the hostname_format pattern, then other properties apply to the node allocated to this hostname. Otherwise, you can use a custom hostname for this node. |
name | The name of the node that you want to provision. |
image | Details of the image that you want to provision onto the node. For more information about supported properties in the image parameter, see the image parameters table later in this section. |
capabilities | Selection criteria to match the node capabilities. |
nics | List of dictionaries that represent requested NICs. For more information about supported properties in the nics parameter, see the NIC parameters table later in this section. |
profile | Selection criteria to use Advanced Profile Matching. |
provisioned | Boolean to determine whether this node is provisioned or unprovisioned. The default value is true. |
resource_class | Selection criteria to match the resource class of the node. The default value is baremetal. |
root_size_gb | Size of the root partition in GiB. The default value is 49. |
swap_size_mb | Size of the swap partition in MiB. |
traits | A list of traits as selection criteria to match the node traits. |
Example syntax
In the following example, all Controller servers use a default custom image, overcloud-full-custom. The Controller server overcloud-controller-0 is placed predictively on node00 and has custom root and swap sizes. The other two Controller servers are on nodes allocated automatically from the pool of available nodes, and have default root and swap sizes:
- name: Controller
  count: 3
  defaults:
    image:
      href: file:///var/lib/ironic/images/overcloud-full-custom.qcow2
  instances:
  - hostname: overcloud-controller-0
    name: node00
    root_size_gb: 140
    swap_size_mb: 600
Parameter | Value |
---|---|
href | Glance image reference or URL of the root partition or whole disk image. URL schemes supported are file://, http://, and https://. If the value is not a valid URL, it must be a valid glance image reference. |
checksum | When the href is a URL, this value must be the SHA512 checksum of the root partition or whole disk image. |
kernel | Glance image reference or URL of the kernel image. Use this property only for partition images. |
ramdisk | Glance image reference or URL of the ramdisk image. Use this property only for partition images. |
Example syntax
In the following example, all three Controller servers are on nodes allocated automatically from the pool of available nodes. All Controller servers in this environment use a default custom image overcloud-full-custom:
- name: Controller
  count: 3
  defaults:
    image:
      href: file:///var/lib/ironic/images/overcloud-full-custom.qcow2
      checksum: 1582054665
      kernel: file:///var/lib/ironic/images/overcloud-full-custom.vmlinuz
      ramdisk: file:///var/lib/ironic/images/overcloud-full-custom.initrd
Parameter | Value |
---|---|
fixed_ip | The specific IP address that you want to use for this NIC. |
network | The neutron network where you want to create the port for this NIC. |
subnet | The neutron subnet where you want to create the port for this NIC. |
port | Existing Neutron port to use instead of creating a new port. |
Example syntax
In the following example, all three Controller servers are on nodes allocated automatically from the pool of available nodes. All Controller servers in this environment use a default custom image overcloud-full-custom and have specific networking requirements:
- name: Controller
  count: 3
  defaults:
    image:
      href: file:///var/lib/ironic/images/overcloud-full-custom.qcow2
    nics:
      network: custom-network
      subnet: custom-subnet