Appendix B. Power Management Drivers
Although IPMI is the main method that the director uses for power management control, the director also supports other power management types. This appendix lists the supported power management types and the options for each. Use these power management settings when you register nodes, as described in Section 6.1, “Registering Nodes for the Overcloud”.
B.1. Redfish
Redfish is a standard RESTful API for IT infrastructure, developed by the Distributed Management Task Force (DMTF).
- pm_type
  - Set this option to redfish.
- pm_user; pm_password
  - The Redfish username and password.
- pm_addr
  - The IP address of the Redfish controller.
- pm_system_id
  - The canonical path to the system resource. This path should include the root service, version, and the path/unique ID for the system. For example: /redfish/v1/Systems/CX34R87.
- redfish_verify_ca
  - If the Redfish service in your baseboard management controller (BMC) is not configured to use a valid TLS certificate signed by a recognized certificate authority (CA), the Redfish client in ironic fails to connect to the BMC. Set the redfish_verify_ca option to false to mute the error. However, be aware that disabling certificate verification compromises the access security of your BMC.
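For example, a Redfish node entry in the instackenv.json registration file might look like the following sketch; the name, address, credentials, and system path are illustrative values, not defaults:

{
  "name": "node-0",
  "pm_type": "redfish",
  "pm_user": "admin",
  "pm_password": "p455w0rd!",
  "pm_addr": "192.168.24.50",
  "pm_system_id": "/redfish/v1/Systems/CX34R87",
  "mac": ["11:11:11:11:11:11"]
}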
B.2. Dell Remote Access Controller (DRAC)
DRAC is an interface that provides out-of-band remote management features including power management and server monitoring.
- pm_type
  - Set this option to idrac.
- pm_user; pm_password
  - The DRAC username and password.
- pm_addr
  - The IP address of the DRAC host.
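For example, an iDRAC node entry in instackenv.json might look like the following sketch; all values are illustrative:

{
  "name": "node-1",
  "pm_type": "idrac",
  "pm_user": "root",
  "pm_password": "p455w0rd!",
  "pm_addr": "192.168.24.51",
  "mac": ["22:22:22:22:22:22"]
}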
 
B.3. Integrated Lights-Out (iLO)
iLO from Hewlett-Packard is an interface that provides out-of-band remote management features including power management and server monitoring.
- pm_type
  - Set this option to ilo.
- pm_user; pm_password
  - The iLO username and password.
- pm_addr
  - The IP address of the iLO interface.

- To enable this driver, add ilo to the enabled_hardware_types option in your undercloud.conf and rerun openstack undercloud install. The director also requires an additional set of utilities for iLO. Install the python-proliantutils package and restart the openstack-ironic-conductor service:

  $ sudo yum install python-proliantutils
  $ sudo systemctl restart openstack-ironic-conductor.service

- HP nodes must have a minimum iLO firmware version of 1.85 (May 13, 2015) for successful introspection. The director has been successfully tested with nodes using this iLO firmware version.
- Using a shared iLO port is not supported.
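An iLO node entry in instackenv.json might look like the following sketch; all values are illustrative:

{
  "name": "node-2",
  "pm_type": "ilo",
  "pm_user": "Administrator",
  "pm_password": "p455w0rd!",
  "pm_addr": "192.168.24.52",
  "mac": ["33:33:33:33:33:33"]
}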
 
B.4. Cisco Unified Computing System (UCS)
Cisco UCS is being deprecated and will be removed from Red Hat OpenStack Platform (RHOSP) 16.0.
UCS from Cisco is a data center platform that unites compute, network, storage access, and virtualization resources. This driver focuses on the power management for bare metal systems connected to the UCS.
- pm_type
  - Set this option to cisco-ucs-managed.
- pm_user; pm_password
  - The UCS username and password.
- pm_addr
  - The IP address of the UCS interface.
- pm_service_profile
  - The UCS service profile to use. Usually takes the format of org-root/ls-[service_profile_name]. For example:

    "pm_service_profile": "org-root/ls-Nova-1"

- To enable this driver, add cisco-ucs-managed to the enabled_hardware_types option in your undercloud.conf and rerun openstack undercloud install. The director also requires an additional set of utilities for UCS. Install the python-UcsSdk package and restart the openstack-ironic-conductor service:

  $ sudo yum install python-UcsSdk
  $ sudo systemctl restart openstack-ironic-conductor.service
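A UCS node entry in instackenv.json might look like the following sketch; the service profile name and other values are illustrative:

{
  "name": "node-3",
  "pm_type": "cisco-ucs-managed",
  "pm_user": "admin",
  "pm_password": "p455w0rd!",
  "pm_addr": "192.168.24.53",
  "pm_service_profile": "org-root/ls-Nova-1",
  "mac": ["44:44:44:44:44:44"]
}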
B.5. Fujitsu Integrated Remote Management Controller (iRMC)
Fujitsu’s iRMC is a Baseboard Management Controller (BMC) with integrated LAN connection and extended functionality. This driver focuses on the power management for bare metal systems connected to the iRMC.
iRMC S4 or higher is required.
- pm_type
  - Set this option to irmc.
- pm_user; pm_password
  - The username and password for the iRMC interface.
- pm_addr
  - The IP address of the iRMC interface.
- pm_port (Optional)
  - The port to use for iRMC operations. The default is 443.
- pm_auth_method (Optional)
  - The authentication method for iRMC operations. Use either basic or digest. The default is basic.
- pm_client_timeout (Optional)
  - Timeout (in seconds) for iRMC operations. The default is 60 seconds.
- pm_sensor_method (Optional)
  - Sensor data retrieval method. Use either ipmitool or scci. The default is ipmitool.

- To enable this driver, add irmc to the enabled_hardware_types option in your undercloud.conf and rerun openstack undercloud install. The director also requires an additional set of utilities if you enabled SCCI as the sensor method. Install the python-scciclient package and restart the openstack-ironic-conductor service:

  $ sudo yum install python-scciclient
  $ sudo systemctl restart openstack-ironic-conductor.service
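An iRMC node entry in instackenv.json might look like the following sketch; values are illustrative, and you can omit the optional parameters to use their defaults:

{
  "name": "node-4",
  "pm_type": "irmc",
  "pm_user": "admin",
  "pm_password": "p455w0rd!",
  "pm_addr": "192.168.24.54",
  "pm_port": "443",
  "pm_auth_method": "basic",
  "mac": ["55:55:55:55:55:55"]
}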
B.6. Virtual Baseboard Management Controller (VBMC)
The director can use virtual machines as nodes on a KVM host. It controls their power management through emulated IPMI devices. This allows you to use the standard IPMI parameters from Section 6.1, “Registering Nodes for the Overcloud” but for virtual nodes.
This option uses virtual machines instead of bare metal nodes. This means it is available for testing and evaluation purposes only. It is not recommended for Red Hat OpenStack Platform enterprise environments.
Configuring the KVM Host
1. On the KVM host, enable the OpenStack Platform repository and install the python2-virtualbmc package:

   $ sudo subscription-manager repos --enable=rhel-7-server-openstack-13-rpms
   $ sudo yum install -y python2-virtualbmc

2. Create a virtual baseboard management controller (BMC) for each virtual machine using the vbmc command. For example, to create a BMC for virtual machines named Node01 and Node02, define the port for access to each BMC and set the authentication details:

   $ vbmc add Node01 --port 6230 --username admin --password PASSWORD
   $ vbmc add Node02 --port 6231 --username admin --password PASSWORD

3. Open the corresponding ports on the host:

   $ sudo firewall-cmd --zone=public \
       --add-port=6230/udp \
       --add-port=6231/udp

4. Make the changes persistent:

   $ sudo firewall-cmd --runtime-to-permanent

5. Verify that your changes are applied to the firewall settings and the ports are open:

   $ sudo firewall-cmd --list-all

   Note: Use a different port for each virtual machine. Port numbers lower than 1025 require root privileges in the system.

6. Start each of the BMCs that you created:

   $ vbmc start Node01
   $ vbmc start Node02

   Note: You must repeat this step after rebooting the KVM host.

7. To verify that you can manage the nodes using ipmitool, display the power status of a remote node:

   $ ipmitool -I lanplus -U admin -P PASSWORD -H 127.0.0.1 -p 6231 power status
   Chassis Power is off
Registering Nodes
					Use the following parameters in your /home/stack/instackenv.json node registration file:
				
- pm_type
  - Set this option to ipmi.
- pm_user; pm_password
  - Specify the IPMI username and password for the node’s virtual BMC device.
- pm_addr
  - Specify the IP address of the KVM host that contains the node.
- pm_port
  - Specify the port to access the specific node on the KVM host.
- mac
  - Specify a list of MAC addresses for the network interfaces on the node. Use only the MAC address for the Provisioning NIC of each system.
 
For example, a registration file for the Node01 and Node02 virtual machines from the previous procedure might look like the following sketch; the KVM host address and MAC values are illustrative:
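{
  "nodes": [
    {
      "name": "Node01",
      "pm_type": "ipmi",
      "pm_user": "admin",
      "pm_password": "PASSWORD",
      "pm_addr": "192.168.0.1",
      "pm_port": "6230",
      "mac": ["aa:aa:aa:aa:aa:aa"]
    },
    {
      "name": "Node02",
      "pm_type": "ipmi",
      "pm_user": "admin",
      "pm_password": "PASSWORD",
      "pm_addr": "192.168.0.1",
      "pm_port": "6231",
      "mac": ["bb:bb:bb:bb:bb:bb"]
    }
  ]
}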
Migrating Existing Nodes
You can migrate existing nodes from the deprecated pxe_ssh driver to the new virtual BMC method. The following command is a sketch that sets a node named Node01 to use the ipmi driver and its parameters; the address, port, and credentials are illustrative:
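$ openstack baremetal node set Node01 \
    --driver ipmi \
    --driver-info ipmi_address=192.168.0.1 \
    --driver-info ipmi_port=6230 \
    --driver-info ipmi_username="admin" \
    --driver-info ipmi_password="PASSWORD"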
				
B.7. Red Hat Virtualization
This driver provides control over virtual machines in Red Hat Virtualization through its RESTful API.
- pm_type
  - Set this option to staging-ovirt.
- pm_user; pm_password
  - The username and password for your Red Hat Virtualization environment. The username also includes the authentication provider. For example: admin@internal.
- pm_addr
  - The IP address of the Red Hat Virtualization REST API.
- pm_vm_name
  - The name of the virtual machine to control.
- mac
  - A list of MAC addresses for the network interfaces on the node. Use only the MAC address for the Provisioning NIC of each system.
 
To enable this driver, complete the following steps:
1. Add staging-ovirt to the enabled_hardware_types option in your undercloud.conf file:

   enabled_hardware_types = ipmi,staging-ovirt

2. Install the python-ovirt-engine-sdk4.x86_64 package:

   $ sudo yum install python-ovirt-engine-sdk4

3. Run the openstack undercloud install command:

   $ openstack undercloud install
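For example, a staging-ovirt node entry in instackenv.json might look like the following sketch; the address, credentials, and virtual machine name are illustrative:

{
  "name": "node-5",
  "pm_type": "staging-ovirt",
  "pm_user": "admin@internal",
  "pm_password": "p455w0rd!",
  "pm_addr": "192.168.24.55",
  "pm_vm_name": "overcloud-controller-0",
  "mac": ["66:66:66:66:66:66"]
}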
B.8. Fake Driver
This driver provides a method to use bare metal devices without power management. This means that the director does not control the registered bare metal devices, and you must control power manually at certain points in the introspection and deployment processes.
This option is available for testing and evaluation purposes only. It is not recommended for Red Hat OpenStack Platform enterprise environments.
- pm_type
  - Set this option to fake_pxe.

- This driver does not use any authentication details because it does not control power management.
- To enable this driver, add fake_pxe to the enabled_drivers option in your undercloud.conf and rerun openstack undercloud install.
- In your instackenv.json node inventory file, set pm_type to fake_pxe for the nodes that you want to manage manually.
- When performing introspection on nodes, manually power on the nodes after running the openstack overcloud node introspect command.
- When performing overcloud deployment, check the node status with the ironic node-list command. Wait until the node status changes from deploying to deploy wait-callback, and then manually power on the nodes.
- After the overcloud provisioning process completes, reboot the nodes. To check the completion of provisioning, check the node status with the ironic node-list command, wait until the node status changes to active, then manually reboot all overcloud nodes.
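For reference, a minimal fake_pxe node entry in instackenv.json might look like the following sketch; the MAC address is illustrative and no power management credentials are required:

{
  "name": "node-6",
  "pm_type": "fake_pxe",
  "mac": ["77:77:77:77:77:77"]
}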