Director Installation and Usage
An end-to-end scenario on using Red Hat OpenStack Platform director to create an OpenStack cloud
Abstract
Chapter 1. Introduction
Figure 1.1. Basic Layout of Undercloud and Overcloud
1.1. Undercloud
- Environment planning - The Undercloud provides planning functions for users to assign Red Hat OpenStack Platform roles, including Compute, Controller, and various storage roles.
- Bare metal system control - The Undercloud uses the Intelligent Platform Management Interface (IPMI) of each node for power management control and a PXE-based service to discover hardware attributes and install OpenStack to each node. This provides a method to provision bare metal systems as OpenStack nodes.
- Orchestration - The Undercloud provides and reads a set of YAML templates to create an OpenStack environment.
- OpenStack Bare Metal (ironic) and OpenStack Compute (nova) - Manages bare metal nodes.
- OpenStack Networking (neutron) and Open vSwitch - Controls networking for bare metal nodes.
- OpenStack Image Service (glance) - Stores images that are written to bare metal machines.
- OpenStack Orchestration (heat) and Puppet - Provides orchestration of nodes and configuration of nodes after the director writes the Overcloud image to disk.
- OpenStack Telemetry (ceilometer) - Performs monitoring and data collection.
- OpenStack Identity (keystone) - Provides authentication and authorization for the director's components.
- MariaDB - The database back end for the director.
- RabbitMQ - Messaging queue for the director's components.
1.2. Overcloud
- Controller - Nodes that provide administration, networking, and high availability for the OpenStack environment. The recommended OpenStack environment uses three of these nodes together in a high availability cluster. A default Controller node contains the following components: horizon, keystone, nova API, neutron server, Open vSwitch, glance, cinder volume, cinder API, swift storage, swift proxy, heat engine, heat API, ceilometer, MariaDB, RabbitMQ. The Controller also uses Pacemaker and Galera for high availability services.
- Compute - These nodes provide computing resources for the OpenStack environment. You can add more Compute nodes to scale out your environment over time. A default Compute node contains the following components: nova Compute, nova KVM, ceilometer agent, Open vSwitch.
- Storage - Nodes that provide storage for the OpenStack environment. This includes nodes for:
- Ceph Storage nodes - Used to form storage clusters. Each node contains a Ceph Object Storage Daemon (OSD). In addition, the director installs Ceph Monitor onto the Controller nodes in situations where it deploys Ceph Storage nodes.
- Block storage (cinder) - Used as external block storage for HA Controller nodes. This node contains the following components: cinder volume, ceilometer agent, Open vSwitch.
- Object storage (swift) - These nodes provide an external storage layer for OpenStack Object Storage (swift). The Controller nodes access these nodes through the Swift proxy. This node contains the following components: swift storage, ceilometer agent, Open vSwitch.
1.3. High Availability
- Pacemaker - Pacemaker is a cluster resource manager. Pacemaker manages and monitors the availability of OpenStack components across all nodes in the cluster.
- HAProxy - Provides load balancing and proxy services to the cluster.
- Galera - Replicates the Red Hat OpenStack Platform database across the cluster.
- Memcached - Provides database caching.
Note
1.4. Ceph Storage
Chapter 2. Requirements
Note
2.1. Environment Requirements
Minimum Requirements
- 1 host machine for the Red Hat OpenStack Platform director
- 1 host machine for a Red Hat OpenStack Platform Compute node
- 1 host machine for a Red Hat OpenStack Platform Controller node
Recommended Requirements
- 1 host machine for the Red Hat OpenStack Platform director
- 3 host machines for Red Hat OpenStack Platform Compute nodes
- 3 host machines for Red Hat OpenStack Platform Controller nodes in a cluster
- 3 host machines for Red Hat Ceph Storage nodes in a cluster
- It is recommended to use bare metal systems for all nodes. At minimum, the Compute nodes require bare metal systems.
- All Overcloud bare metal systems require an Intelligent Platform Management Interface (IPMI). This is because the director controls the power management.
2.2. Undercloud Requirements
- An 8-core 64-bit x86 processor with support for the Intel 64 or AMD64 CPU extensions.
- A minimum of 16 GB of RAM.
- A minimum of 40 GB of available disk space. Make sure to leave at least 10 GB free space before attempting an Overcloud deployment or update. This free space accommodates image conversion and caching during the node provisioning process.
- A minimum of 2 x 1 Gbps Network Interface Cards. However, it is recommended to use a 10 Gbps interface for Provisioning network traffic, especially if provisioning a large number of nodes in your Overcloud environment.
- Red Hat Enterprise Linux 7.2 or later installed as the host operating system.
Important
2.3. Networking Requirements
- Provisioning network - This is a private network the director uses to provision and manage the Overcloud nodes. The Provisioning network provides DHCP and PXE boot functions to help discover bare metal systems for use in the Overcloud. This network must use a native VLAN on a trunked interface so that the director serves PXE boot and DHCP requests. This is also the network you use to control power management through Intelligent Platform Management Interface (IPMI) on all Overcloud nodes.
- External network - A separate network for remote connectivity to all nodes. The interface connecting to this network requires a routable IP address, either defined statically, or dynamically through an external DHCP service.
- A typical minimal Overcloud network configuration can include:
- Single NIC configuration - One NIC for the Provisioning network on the native VLAN and tagged VLANs that use subnets for the different Overcloud network types.
- Dual NIC configuration - One NIC for the Provisioning network and the other NIC for the External network.
- Dual NIC configuration - One NIC for the Provisioning network on the native VLAN and the other NIC for tagged VLANs that use subnets for the different Overcloud network types.
- Multiple NIC configuration - Each NIC uses a subnet for a different Overcloud network type.
- Additional physical NICs can be used for isolating individual networks, creating bonded interfaces, or for delegating tagged VLAN traffic.
- If using VLANs to isolate your network traffic types, use a switch that supports 802.1Q standards to provide tagged VLANs.
- During the Overcloud creation, you will refer to NICs using a single name across all Overcloud machines. Ideally, you should use the same NIC on each Overcloud node for each respective network to avoid confusion. For example, use the primary NIC for the Provisioning network and the secondary NIC for the OpenStack services.
- Make sure the Provisioning network NIC is not the same NIC used for remote connectivity on the director machine. The director installation creates a bridge using the Provisioning NIC, which drops any remote connections. Use the External NIC for remote connections to the director system.
- The Provisioning network requires an IP range that fits your environment size. Use the following guidelines to determine the total number of IP addresses to include in this range:
- Include at least one IP address per node connected to the Provisioning network.
- If planning a high availability configuration, include an extra IP address for the virtual IP of the cluster.
- Include additional IP addresses within the range for scaling the environment.
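For example, a hypothetical high availability deployment with three Controller, three Compute, and three Ceph Storage nodes needs at least 9 node addresses plus 1 virtual IP for the cluster, so a Provisioning range of roughly 15 to 20 addresses leaves reasonable headroom for scaling.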
Note
Duplicate IP addresses should be avoided on the Provisioning network. For more information, see Section 11.4, “Troubleshooting IP Address Conflicts on the Provisioning Network”.
Note
For more information on planning your IP address usage, for example, for storage, provider, and tenant networks, see the Networking Guide.
- Set all Overcloud systems to PXE boot off the Provisioning NIC, and disable PXE boot on the External NIC (and any other NICs on the system). Also ensure that the Provisioning NIC has PXE boot at the top of the boot order, ahead of hard disks and CD/DVD drives.
- All Overcloud bare metal systems require an Intelligent Platform Management Interface (IPMI) connected to the Provisioning network, as this allows the director to control the power management of each node.
- Make a note of the following details for each Overcloud system: the MAC address of the Provisioning NIC, the IP address of the IPMI NIC, IPMI username, and IPMI password. This information will be useful later when setting up the Overcloud nodes.
- If an instance needs to be accessible from the external internet, you can allocate a floating IP address from a public network and associate it with the instance. The instance retains its private IP, and network traffic uses NAT to traverse to the floating IP address. Note that a floating IP address can only be assigned to a single instance at a time. However, the floating IP address is reserved for use by a single tenant, which can associate or disassociate it from a particular instance as required. This configuration exposes your infrastructure to the external internet, so make sure you follow suitable security practices.
- To mitigate the risk of network loops in Open vSwitch, only a single interface or a single bond may be a member of a given bridge. If you require multiple bonds or interfaces, you can configure multiple bridges.
Important
- Use network segmentation to mitigate network movement and isolate sensitive data; a flat network is much less secure.
- Restrict services access and ports to a minimum.
- Ensure proper firewall rules and password usage.
- Ensure that SELinux is enabled.
2.4. Overcloud Requirements
Note
2.4.1. Compute Node Requirements
- Processor
- 64-bit x86 processor with support for the Intel 64 or AMD64 CPU extensions, and the AMD-V or Intel VT hardware virtualization extensions enabled. It is recommended this processor has a minimum of 4 cores.
- Memory
- A minimum of 6 GB of RAM. Add additional RAM to this requirement based on the amount of memory that you intend to make available to virtual machine instances.
- Disk Space
- A minimum of 40 GB of available disk space.
- Network Interface Cards
- A minimum of one 1 Gbps Network Interface Card, although it is recommended to use at least two NICs in a production environment. Use additional network interface cards for bonded interfaces or to delegate tagged VLAN traffic.
- Intelligent Platform Management Interface (IPMI)
- Each Compute node requires IPMI functionality on the server's motherboard.
2.4.2. Controller Node Requirements
- Processor
- 64-bit x86 processor with support for the Intel 64 or AMD64 CPU extensions.
- Memory
- A minimum of 32 GB of RAM for each Controller node. For optimal performance, it is recommended to use 64 GB for each Controller node.
Important
The amount of recommended memory depends on the number of CPU cores. A greater number of CPU cores requires more memory. For more information on measuring memory requirements, see "Red Hat OpenStack Platform Hardware Requirements for Highly Available Controllers" on the Red Hat Customer Portal.
- Disk Space
- A minimum of 40 GB of available disk space.
- Network Interface Cards
- A minimum of 2 x 1 Gbps Network Interface Cards. Use additional network interface cards for bonded interfaces or to delegate tagged VLAN traffic.
- Intelligent Platform Management Interface (IPMI)
- Each Controller node requires IPMI functionality on the server's motherboard.
2.4.3. Ceph Storage Node Requirements
- Processor
- 64-bit x86 processor with support for the Intel 64 or AMD64 CPU extensions.
- Memory
- Memory requirements depend on the amount of storage space. Ideally, use at minimum 1 GB of memory per 1 TB of hard disk space.
- Disk Space
- Storage requirements depend on the amount of memory. Ideally, use at minimum 1 GB of memory per 1 TB of hard disk space.
- Disk Layout
- The recommended Red Hat Ceph Storage node configuration requires a disk layout similar to the following:
- /dev/sda - The root disk. The director copies the main Overcloud image to the disk.
- /dev/sdb - The journal disk. This disk divides into partitions for Ceph OSD journals. For example, /dev/sdb1, /dev/sdb2, /dev/sdb3, and onward. The journal disk is usually a solid state drive (SSD) to aid with system performance.
- /dev/sdc and onward - The OSD disks. Use as many disks as necessary for your storage requirements.
This guide contains the necessary instructions to map your Ceph Storage disks into the director.
- Network Interface Cards
- A minimum of one 1 Gbps Network Interface Card, although it is recommended to use at least two NICs in a production environment. Use additional network interface cards for bonded interfaces or to delegate tagged VLAN traffic. It is recommended to use a 10 Gbps interface for storage nodes, especially if creating an OpenStack Platform environment that serves a high volume of traffic.
- Intelligent Platform Management Interface (IPMI)
- Each Ceph node requires IPMI functionality on the server's motherboard.
Important
# parted [device] mklabel gpt
2.5. Repository Requirements
| Name | Repository | Description of Requirement |
|---|---|---|
| Red Hat Enterprise Linux 7 Server (RPMs) | rhel-7-server-rpms | Base operating system repository. |
| Red Hat Enterprise Linux 7 Server - Extras (RPMs) | rhel-7-server-extras-rpms | Contains Red Hat OpenStack Platform dependencies. |
| Red Hat Enterprise Linux 7 Server - RH Common (RPMs) | rhel-7-server-rh-common-rpms | Contains tools for deploying and configuring Red Hat OpenStack Platform. |
| Red Hat Satellite Tools for RHEL 7 Server RPMs x86_64 | rhel-7-server-satellite-tools-6.1-rpms | Tools for managing hosts with Red Hat Satellite 6. |
| Red Hat Enterprise Linux High Availability (for RHEL 7 Server) (RPMs) | rhel-ha-for-rhel-7-server-rpms | High availability tools for Red Hat Enterprise Linux. Used for Controller node high availability. |
| Red Hat Enterprise Linux OpenStack Platform 8 director for RHEL 7 (RPMs) | rhel-7-server-openstack-8-director-rpms | Red Hat OpenStack Platform director repository. Also provides some tools for use on director-deployed Overclouds. |
| Red Hat Enterprise Linux OpenStack Platform 8 for RHEL 7 (RPMs) | rhel-7-server-openstack-8-rpms | Core Red Hat OpenStack Platform repository. |
| Red Hat Ceph Storage OSD 1.3 for Red Hat Enterprise Linux 7 Server (RPMs) | rhel-7-server-rhceph-1.3-osd-rpms | (For Ceph Storage Nodes) Repository for the Ceph Storage Object Storage Daemon. Installed on Ceph Storage nodes. |
| Red Hat Ceph Storage MON 1.3 for Red Hat Enterprise Linux 7 Server (RPMs) | rhel-7-server-rhceph-1.3-mon-rpms | (For Ceph Storage Nodes) Repository for the Ceph Storage Monitor daemon. Installed on Controller nodes in OpenStack environments using Ceph Storage nodes. |
Note
Chapter 3. Planning your Overcloud
3.1. Planning Node Deployment Roles
- Controller
- Provides key services for controlling your environment. This includes the dashboard (horizon), authentication (keystone), image storage (glance), networking (neutron), orchestration (heat), and high availability services.
Note
Environments with one node can be used for testing purposes. Environments with two nodes or more than three nodes are not supported.
- Compute
- A physical server that acts as a hypervisor, and provides the processing capabilities required for running virtual machines in the environment. A basic Red Hat OpenStack Platform environment requires at least one Compute node.
- Ceph-Storage
- A host that provides Red Hat Ceph Storage. Additional Ceph Storage hosts scale into a cluster. This deployment role is optional.
- Cinder-Storage
- A host that provides external block storage for OpenStack's cinder service. This deployment role is optional.
- Swift-Storage
- A host that provides external object storage for OpenStack's Swift service. This deployment role is optional.
| | Controller | Compute | Ceph-Storage | Swift-Storage | Cinder-Storage | Total |
|---|---|---|---|---|---|---|
| Small Overcloud | 1 | 1 | - | - | - | 2 |
| Medium Overcloud | 1 | 3 | - | - | - | 4 |
| Medium Overcloud with additional Object and Block storage | 1 | 3 | - | 1 | 1 | 6 |
| Medium Overcloud with High Availability | 3 | 3 | - | - | - | 6 |
| Medium Overcloud with High Availability and Ceph Storage | 3 | 3 | 3 | - | - | 9 |
3.2. Planning Networks
| Network Type | Description | Used By |
|---|---|---|
| IPMI | Network used for power management of nodes. This network is predefined before the installation of the Undercloud. | All nodes |
| Provisioning | The director uses this network traffic type to deploy new nodes over PXE boot and orchestrate the installation of OpenStack Platform on the Overcloud bare metal servers. This network is predefined before the installation of the Undercloud. | All nodes |
| Internal API | The Internal API network is used for communication between the OpenStack services using API communication, RPC messages, and database communication. | Controller, Compute, Cinder Storage, Swift Storage |
| Tenant | Neutron provides each tenant with their own networks using either VLAN segregation (where each tenant network is a network VLAN), or tunneling (through VXLAN or GRE). Network traffic is isolated within each tenant network. Each tenant network has an IP subnet associated with it, and network namespaces mean that multiple tenant networks can use the same address range without causing conflicts. | Controller, Compute |
| Storage | Block Storage, NFS, iSCSI, and others. Ideally, this would be isolated to an entirely separate switch fabric for performance reasons. | All nodes |
| Storage Management | OpenStack Object Storage (swift) uses this network to synchronize data objects between participating replica nodes. The proxy service acts as the intermediary interface between user requests and the underlying storage layer. The proxy receives incoming requests and locates the necessary replica to retrieve the requested data. Services that use a Ceph backend connect over the Storage Management network, since they do not interact with Ceph directly but rather use the frontend service. Note that the RBD driver is an exception, as this traffic connects directly to Ceph. | Controller, Ceph Storage, Cinder Storage, Swift Storage |
| External | Hosts the OpenStack Dashboard (horizon) for graphical system management, the public APIs for OpenStack services, and performs SNAT for incoming traffic destined for instances. If the external network uses private IP addresses (as per RFC-1918), then further NAT must be performed for traffic originating from the internet. | Controller |
| Floating IP | Allows incoming traffic to reach instances using 1-to-1 IP address mapping between the floating IP address and the IP address actually assigned to the instance in the tenant network. If hosting the Floating IPs on a VLAN separate from External, you can trunk the Floating IP VLAN to the Controller nodes and add the VLAN through Neutron after Overcloud creation. This provides a means to create multiple Floating IP networks attached to multiple bridges. The VLANs are trunked but are not configured as interfaces. Instead, neutron creates an OVS port with the VLAN segmentation ID on the chosen bridge for each Floating IP network. | Controller |
| Management | Provides access for system administration functions such as SSH access, DNS traffic, and NTP traffic. This network also acts as a gateway for non-Controller nodes. | All nodes |
Note
- Internal API
- Storage
- Storage Management
- Tenant Networks
- External
- Management
In this example, each Overcloud node uses two network interfaces (nic2 and nic3) in a bond to deliver these networks over their respective VLANs. Meanwhile, each Overcloud node communicates with the Undercloud over the Provisioning network through a native VLAN using nic1.
Figure 3.1. Example VLAN Topology using Bonded Interfaces
| | Mappings | Total Interfaces | Total VLANs |
|---|---|---|---|
| Flat Network with External Access | Network 1 - Provisioning, Internal API, Storage, Storage Management, Tenant Networks; Network 2 - External, Floating IP (mapped after Overcloud creation) | 2 | 2 |
| Isolated Networks | Network 1 - Provisioning; Network 2 - Internal API; Network 3 - Tenant Networks; Network 4 - Storage; Network 5 - Storage Management; Network 6 - Management; Network 7 - External, Floating IP (mapped after Overcloud creation) | 3 (includes 2 bonded interfaces) | 7 |
3.3. Planning Storage
- Ceph Storage Nodes
- The director creates a set of scalable storage nodes using Red Hat Ceph Storage. The Overcloud uses these nodes for:
- Images - Glance manages images for VMs. Images are immutable. OpenStack treats images as binary blobs and downloads them accordingly. You can use glance to store images in a Ceph Block Device.
- Volumes - Cinder volumes are block devices. OpenStack uses volumes to boot VMs, or to attach volumes to running VMs. OpenStack manages volumes using Cinder services. You can use Cinder to boot a VM using a copy-on-write clone of an image.
- Guest Disks - Guest disks are guest operating system disks. By default, when you boot a virtual machine with nova, its disk appears as a file on the filesystem of the hypervisor (usually under /var/lib/nova/instances/<uuid>/). It is possible to boot every virtual machine inside Ceph directly without using cinder, which is advantageous because it allows you to perform maintenance operations easily with the live-migration process. Additionally, if your hypervisor dies it is also convenient to trigger nova evacuate and run the virtual machine elsewhere almost seamlessly.
Important
If you want to boot virtual machines in Ceph (ephemeral backend or boot from volume), the glance image format must be RAW. Ceph does not support other image formats such as QCOW2 or VMDK for hosting a virtual machine disk. See the Red Hat Ceph Storage Architecture Guide for additional information.
- Swift Storage Nodes
- The director creates an external object storage node. This is useful in situations where you need to scale or replace controller nodes in your Overcloud environment but need to retain object storage outside of a high availability cluster.
Chapter 4. Installing the Undercloud
4.1. Creating a Director Installation User
The director installation process requires a non-root user to execute commands. Create a user named stack and set a password:
[root@director ~]# useradd stack
[root@director ~]# passwd stack # specify a password
Disable password requirements for this user when using sudo:
echo "stack ALL=(root) NOPASSWD:ALL" | tee -a /etc/sudoers.d/stack chmod 0440 /etc/sudoers.d/stack
[root@director ~]# echo "stack ALL=(root) NOPASSWD:ALL" | tee -a /etc/sudoers.d/stack
[root@director ~]# chmod 0440 /etc/sudoers.d/stack
Switch to the new stack user:
[root@director ~]# su - stack
[stack@director ~]$
Continue the rest of the director installation as the stack user.
4.2. Creating Directories for Templates and Images
$ mkdir ~/images
$ mkdir ~/templates
4.3. Setting the Hostname for the System
$ hostname # Checks the base hostname
$ hostname -f # Checks the long hostname (FQDN)
Use hostnamectl to set a hostname:
$ sudo hostnamectl set-hostname manager.example.com
$ sudo hostnamectl set-hostname --transient manager.example.com
The director also requires an entry for the system's hostname and base name in /etc/hosts. For example, if the system is named manager.example.com, then /etc/hosts requires an entry like:
127.0.0.1 manager.example.com manager localhost localhost.localdomain localhost4 localhost4.localdomain4
4.4. Registering your System
Procedure 4.1. Subscribing to the Required Channels Using Subscription Manager
- Register your system with the Content Delivery Network, entering your Customer Portal user name and password when prompted:
$ sudo subscription-manager register
- Find the entitlement pool for the Red Hat OpenStack Platform director:
$ sudo subscription-manager list --available --all
- Use the pool ID located in the previous step to attach the Red Hat OpenStack Platform 8 entitlements:
$ sudo subscription-manager attach --pool=pool_id
- Disable all default repositories, and then enable the required Red Hat Enterprise Linux repositories:
$ sudo subscription-manager repos --disable=*
$ sudo subscription-manager repos --enable=rhel-7-server-rpms --enable=rhel-7-server-extras-rpms --enable=rhel-7-server-openstack-8-rpms --enable=rhel-7-server-openstack-8-director-rpms --enable=rhel-7-server-rh-common-rpms
These repositories contain packages the director installation requires.
Important
Only enable the repositories listed above. Additional repositories can cause package and software conflicts. Do not enable any additional repositories.
- Perform an update on your system to make sure you have the latest base system packages:
$ sudo yum update -y
$ sudo reboot
4.5. Installing the Director Packages
[stack@director ~]$ sudo yum install -y python-tripleoclient
4.6. Configuring the Director
The director installation relies on certain settings in a template located in the stack user's home directory as undercloud.conf.
Red Hat provides a basic template to help determine the required settings for your installation. Copy this template to the stack user's home directory:
$ cp /usr/share/instack-undercloud/undercloud.conf.sample ~/undercloud.conf
- local_ip
- The IP address defined for the director's Provisioning NIC. This is also the IP address the director uses for its DHCP and PXE boot services. Leave this value as the default 192.0.2.1/24 unless you are using a different subnet for the Provisioning network, for example, if it conflicts with an existing IP address or subnet in your environment.
- network_gateway
- The gateway for the Overcloud instances. This is the Undercloud host, which forwards traffic to the External network. Leave this as the default 192.0.2.1 unless you are either using a different IP address for the director or want to directly use an external gateway.
Note
The director's configuration script also automatically enables IP forwarding using the relevant sysctl kernel parameter.
- undercloud_public_vip
- The IP address defined for the director's Public API. Use an IP address on the Provisioning network that does not conflict with any other IP addresses or address ranges. For example, 192.0.2.2. The director configuration attaches this IP address to its software bridge as a routed IP address, which uses the /32 netmask.
- undercloud_admin_vip
- The IP address defined for the director's Admin API. Use an IP address on the Provisioning network that does not conflict with any other IP addresses or address ranges. For example, 192.0.2.3. The director configuration attaches this IP address to its software bridge as a routed IP address, which uses the /32 netmask.
- undercloud_service_certificate
- The location and filename of the certificate for OpenStack SSL communication. Ideally, you obtain this certificate from a trusted certificate authority. Otherwise generate your own self-signed certificate using the guidelines in Appendix A, SSL/TLS Certificate Configuration. These guidelines also contain instructions on setting the SELinux context for your certificate, whether self-signed or from an authority.
- local_interface
- The chosen interface for the director's Provisioning NIC. This is also the device the director uses for its DHCP and PXE boot services. Change this value to your chosen device. To see which device is connected, use the ip addr command. For example, this is the result of an ip addr command:
In this example, the External NIC uses eth0 and the Provisioning NIC uses eth1, which is currently not configured. In this case, set the local_interface to eth1. The configuration script attaches this interface to a custom bridge defined with the inspection_interface parameter.
- network_cidr
- The network that the director uses to manage Overcloud instances. This is the Provisioning network. Leave this as the default 192.0.2.0/24 unless you are using a different subnet for the Provisioning network.
- masquerade_network
- Defines the network that will masquerade for external access. This provides the Provisioning network with a degree of network address translation (NAT) so that it has external access through the director. Leave this as the default (192.0.2.0/24) unless you are using a different subnet for the Provisioning network.
- dhcp_start, dhcp_end
- The start and end of the DHCP allocation range for Overcloud nodes. Ensure this range contains enough IP addresses to allocate your nodes.
- inspection_interface
- The bridge the director uses for node introspection. This is a custom bridge that the director configuration creates. The LOCAL_INTERFACE attaches to this bridge. Leave this as the default br-ctlplane.
- inspection_iprange
- A range of IP addresses that the director's introspection service uses during the PXE boot and provisioning process. Use comma-separated values to define the start and end of this range. For example, 192.0.2.100,192.0.2.120. Make sure this range contains enough IP addresses for your nodes and does not conflict with the range for dhcp_start and dhcp_end.
- inspection_extras
- Defines whether to enable extra hardware collection during the inspection process. Requires the python-hardware or python-hardware-detect package on the introspection image.
- inspection_runbench
- Runs a set of benchmarks during node introspection. Set to true to enable. This option is necessary if you intend to perform benchmark analysis when inspecting the hardware of registered nodes. See Appendix C, Automatic Profile Tagging for more details.
- undercloud_debug
- Sets the log level of Undercloud services to DEBUG. Set this value to true to enable.
- enable_tempest
- Defines whether to install the validation tools. The default is set to false, but you can enable them by setting this to true.
- ipxe_deploy
- Defines whether to use iPXE or standard PXE. The default is true, which enables iPXE. Set to false to use standard PXE. For more information, see "Changing from iPXE to PXE in Red Hat OpenStack Platform director" on the Red Hat Customer Portal.
- store_events
- Defines whether to store events in Ceilometer on the Undercloud.
- undercloud_db_password, undercloud_admin_token, undercloud_admin_password, undercloud_glance_password, etc
- The remaining parameters are the access details for all of the director's services. No change is required for the values. The director's configuration script automatically generates these values if blank in undercloud.conf. You can retrieve all values after the configuration script completes.
Important
The configuration file examples for these parameters use <None> as a placeholder value. Setting these values to <None> leads to a deployment error.
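The following is a minimal undercloud.conf sketch based on the parameters described above. The [DEFAULT] section name and the DHCP range values are assumptions taken from the sample file; adjust every value to your environment.
[DEFAULT]
local_ip = 192.0.2.1/24
network_gateway = 192.0.2.1
undercloud_public_vip = 192.0.2.2
undercloud_admin_vip = 192.0.2.3
local_interface = eth1
network_cidr = 192.0.2.0/24
masquerade_network = 192.0.2.0/24
dhcp_start = 192.0.2.5
dhcp_end = 192.0.2.24
inspection_interface = br-ctlplane
inspection_iprange = 192.0.2.100,192.0.2.120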
$ openstack undercloud install
This command launches the director's configuration script, which installs additional packages and configures its services according to the settings in undercloud.conf. This script takes several minutes to complete.
The configuration script generates two files when complete:
- undercloud-passwords.conf - A list of all passwords for the director's services.
- stackrc - A set of initialization variables to help you access the director's command line tools.
To initialize the stack user to use the command line tools, run the following command:
$ source ~/stackrc
4.7. Obtaining Images for Overcloud Nodes
- An introspection kernel and ramdisk - Used for bare metal system introspection over PXE boot.
- A deployment kernel and ramdisk - Used for system provisioning and deployment.
- An Overcloud kernel, ramdisk, and full image - A base Overcloud system that is written to the node's hard disk.
Obtain these images from the rhosp-director-images and rhosp-director-images-ipa packages:
$ sudo yum install rhosp-director-images rhosp-director-images-ipa
Copy the images to the images directory in the stack user's home directory (/home/stack/images):
$ cp /usr/share/rhosp-director-images/overcloud-full-latest-8.0.tar ~/images/.
$ cp /usr/share/rhosp-director-images/ironic-python-agent-latest-8.0.tar ~/images/.
$ cd ~/images
$ for tarfile in *.tar; do tar -xf $tarfile; done
$ openstack overcloud image upload --image-path /home/stack/images/
This uploads the following images into the director: bm-deploy-kernel, bm-deploy-ramdisk, overcloud-full, overcloud-full-initrd, and overcloud-full-vmlinuz. These are the images for deployment and the Overcloud. The script also installs the introspection images on the director's PXE server.
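To verify the upload, list the images registered in the director's Image service (a simple check; the exact output columns depend on your client version):
$ openstack image list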
This list does not show the introspection PXE images (discovery-ramdisk.*). The director copies these files to /httpboot.
4.8. Setting a Nameserver on the Undercloud's Neutron Subnet
Overcloud nodes require a nameserver so that they can resolve hostnames through DNS. For a standard Overcloud without network isolation, the nameserver is defined using the Undercloud's neutron subnet. Use the following commands to define the nameserver for the environment:
$ neutron subnet-list
$ neutron subnet-update [subnet-uuid] --dns-nameserver [nameserver-ip]
Important
If isolating service traffic onto separate networks, the Overcloud nodes use the DnsServer parameter in your network environment templates. This is covered in the advanced configuration scenario in Section 6.2.2, “Creating a Network Environment File”.
4.9. Backing Up the Undercloud
4.10. Completing the Undercloud Configuration
Chapter 5. Configuring Basic Overcloud Requirements
Workflow
- Create a node definition template and register blank nodes in the director.
- Inspect hardware of all nodes.
- Tag nodes into roles.
- Define additional node properties.
Requirements
- The director node created in Chapter 4, Installing the Undercloud
- A set of bare metal machines for your nodes. The number of nodes required depends on the type of Overcloud you intend to create (see Section 3.1, “Planning Node Deployment Roles” for information on Overcloud roles). These machines also must comply with the requirements set for each node type. For these requirements, see Section 2.4, “Overcloud Requirements”. These nodes do not require an operating system. The director copies a Red Hat Enterprise Linux 7 image to each node.
- One network connection for our Provisioning network, which is configured as a native VLAN. All nodes must connect to this network and comply with the requirements set in Section 2.3, “Networking Requirements”. For the examples in this chapter, we use 192.0.2.0/24 as the Provisioning subnet with the following IP address assignments:
Table 5.1. Provisioning Network IP Assignments

| Node Name | IP Address | MAC Address | IPMI IP Address |
|---|---|---|---|
| Director | 192.0.2.1 | aa:aa:aa:aa:aa:aa | |
| Controller | DHCP defined | bb:bb:bb:bb:bb:bb | 192.0.2.205 |
| Compute | DHCP defined | cc:cc:cc:cc:cc:cc | 192.0.2.206 |

- All other network types use the Provisioning network for OpenStack services. However, you can create additional networks for other network traffic types. For more information, see Section 6.2, “Isolating Networks”.
5.1. Registering Nodes for the Overcloud
The director requires a node definition template. This file (instackenv.json) uses the JSON format and contains the hardware and power management details for your nodes.
- pm_type
- The power management driver to use. This example uses the IPMI driver (pxe_ipmitool).
- pm_user, pm_password
- The IPMI username and password.
- pm_addr
- The IP address of the IPMI device.
- mac
- (Optional) A list of MAC addresses for the network interfaces on the node. Use only the MAC address for the Provisioning NIC of each system.
- cpu
- (Optional) The number of CPUs on the node.
- memory
- (Optional) The amount of memory in MB.
- disk
- (Optional) The size of the hard disk in GB.
- arch
- (Optional) The system architecture.
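The following is a minimal instackenv.json sketch for a single node using these fields. The top-level nodes list follows the director's sample format, and the IPMI address, MAC address, credentials, and hardware values are placeholders taken from the example values in this guide:
{
  "nodes": [
    {
      "pm_type": "pxe_ipmitool",
      "pm_user": "admin",
      "pm_password": "p@55w0rd!",
      "pm_addr": "192.0.2.205",
      "mac": ["bb:bb:bb:bb:bb:bb"],
      "cpu": "4",
      "memory": "6144",
      "disk": "40",
      "arch": "x86_64"
    }
  ]
}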
Note
After creating the template, save the file to the stack user's home directory (/home/stack/instackenv.json), then import it into the director using the following command:
$ openstack baremetal import --json ~/instackenv.json
$ openstack baremetal configure boot
$ ironic node-list
5.2. Inspecting the Hardware of Nodes
Note
$ openstack baremetal introspection bulk start
$ sudo journalctl -l -u openstack-ironic-inspector -u openstack-ironic-inspector-dnsmasq -u openstack-ironic-conductor -f
Important
$ ironic node-set-maintenance [NODE UUID] true
$ openstack baremetal introspection start [NODE UUID]
$ ironic node-set-maintenance [NODE UUID] false
5.3. Tagging Nodes into Profiles
The default profile flavors compute, control, swift-storage, ceph-storage, and block-storage are created during Undercloud installation and are usable without modification in most environments.
Note
To tag a node into a specific profile, add a profile option to the properties/capabilities parameter for that node. For example, to tag your nodes to use Controller and Compute profiles respectively, use the following commands:
$ ironic node-update 58c3d07e-24f2-48a7-bbb6-6843f0e8ee13 add properties/capabilities='profile:compute,boot_option:local'
$ ironic node-update 1a4e30da-b6dc-499d-ba87-0bd8a3819bc0 add properties/capabilities='profile:control,boot_option:local'
The addition of the profile:compute and profile:control options tags the two nodes into each respective profile.
The commands also set the boot_option:local parameter, which defines the boot mode for each node.
Important
$ openstack overcloud profiles list
5.4. Defining the Root Disk for Nodes
- model (String): Device identifier.
- vendor (String): Device vendor.
- serial (String): Disk serial number.
- wwn (String): Unique storage identifier.
- hctl (String): Host:Channel:Target:Lun for SCSI.
- size (Integer): Size of the device in GB.
$ mkdir swift-data
$ cd swift-data
$ export IRONIC_DISCOVERD_PASSWORD=`sudo grep admin_password /etc/ironic-inspector/inspector.conf | awk '! /^#/ {print $NF}'`
$ for node in $(ironic node-list | awk '!/UUID/ {print $2}'); do swift -U service:ironic -K $IRONIC_DISCOVERD_PASSWORD download ironic-inspector inspector_data-$node; done
These commands download each inspector_data object from introspection. All objects use the node UUID as part of the object name:
$ for node in $(ironic node-list | awk '!/UUID/ {print $2}'); do echo "NODE: $node" ; cat inspector_data-$node | jq '.inventory.disks' ; echo "-----" ; done
For this example, the root device is the disk with WD-000000000002 as the serial number. This requires a change to the root_device parameter for the node definition:
$ ironic node-update 97e3f7b3-5629-473e-a187-2193ebe0b5c7 add properties/root_device='{"serial": "WD-000000000002"}'
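Any of the other hints listed earlier can be used in the same way. For example, a hypothetical selection by disk size, where the node UUID and the size value are placeholders:
$ ironic node-update <node-uuid> add properties/root_device='{"size": 100}'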
Note
Important
Do not use name to set the root disk as this value can change when the node boots.
5.5. Completing Basic Configuration
- Customize your environment using the advanced configuration steps. See Chapter 6, Configuring Advanced Customizations for the Overcloud for more information.
- Or deploy a basic Overcloud. See Chapter 7, Creating the Overcloud for more information.
Important
Chapter 6. Configuring Advanced Customizations for the Overcloud
Note
6.1. Understanding Heat Templates
6.1.1. Heat Templates
- Parameters - These are settings passed to heat, which provides a way to customize a stack, and any default values for parameters without passed values. These are defined in the parameters section of a template.
- Resources - These are the specific objects to create and configure as part of a stack. OpenStack contains a set of core resources that span across all components. These are defined in the resources section of a template.
- Output - These are values passed from heat after the stack's creation. You can access these values either through the heat API or client tools. These are defined in the output section of a template.
For example, a template can use the resource type OS::Nova::Server to create an instance called my_instance with a particular flavor, image, and key. The stack can return the value of instance_name, which is called My Cirros Instance.
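A minimal sketch of such a template follows, assuming placeholder flavor, image, and key parameters around the my_instance resource and instance_name output described above:
heat_template_version: 2013-05-23

description: A basic template sketch that creates a single instance.

parameters:
  key_name:
    type: string
    default: default_key
  flavor:
    type: string
    default: m1.small
  image:
    type: string
    default: cirros

resources:
  my_instance:
    type: OS::Nova::Server
    properties:
      name: My Cirros Instance
      image: {get_param: image}
      flavor: {get_param: flavor}
      key_name: {get_param: key_name}

outputs:
  instance_name:
    description: The name of the running instance.
    value: {get_attr: [my_instance, name]}
The director's own Overcloud templates nest many such stacks; the command that follows lists stacks, including nested stacks.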
$ heat stack-list --show-nested
6.1.2. Environment Files
- Resource Registry - This section defines custom resource names, linked to other heat templates. This essentially provides a method to create custom resources that do not exist within the core resource collection. These are defined in the resource_registry section of an environment file.
- Parameters - These are common settings you apply to the top-level template's parameters. For example, if you have a template that deploys nested stacks, such as resource registry mappings, the parameters only apply to the top-level template and not templates for the nested resources. Parameters are defined in the parameters section of an environment file.
- Parameter Defaults - These parameters modify the default values for parameters in all templates. For example, if you have a Heat template that deploys nested stacks, such as resource registry mappings, the parameter defaults apply to all templates: the top-level template and those defining all nested resources. The parameter defaults are defined in the parameter_defaults section of an environment file.
Important
Use parameter_defaults instead of parameters when creating custom environment files for your Overcloud. This is so the parameters apply to all stack templates for the Overcloud.
For example, an environment file (my_env.yaml) might be included when creating a stack from a certain Heat template (my_template.yaml). The my_env.yaml file creates a new resource type called OS::Nova::Server::MyServer. The myserver.yaml file is a Heat template file that provides an implementation for this resource type that overrides any built-in ones. You can include the OS::Nova::Server::MyServer resource in your my_template.yaml file.
The MyIP parameter applies only to the main Heat template that deploys along with this environment file. In this example, it only applies to the parameters in my_template.yaml.
The NetworkName parameter default applies to both the main Heat template (in this example, my_template.yaml) and the templates associated with resources included in the main template, such as the OS::Nova::Server::MyServer resource and its myserver.yaml template in this example.
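A minimal sketch reconstructing the my_env.yaml described above; the IP and network values are placeholders:
resource_registry:
  OS::Nova::Server::MyServer: myserver.yaml

parameters:
  MyIP: 192.168.0.1

parameter_defaults:
  NetworkName: my_network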
6.1.3. Core Overcloud Heat Templates
The director uses a core Heat template collection for the Overcloud. This collection is stored in /usr/share/openstack-tripleo-heat-templates.
- overcloud.yaml - This is the main template file used to create the Overcloud environment.
- overcloud-resource-registry-puppet.yaml - This is the main environment file used to create the Overcloud environment. It provides a set of configurations for Puppet modules stored on the Overcloud image. After the director writes the Overcloud image to each node, heat starts the Puppet configuration for each node using the resources registered in this environment file.
- environments - A directory that contains example environment files to apply to your Overcloud deployment.
6.2. Isolating Networks
- Network 1 - Provisioning
- Network 2 - Internal API
- Network 3 - Tenant Networks
- Network 4 - Storage
- Network 5 - Storage Management
- Network 6 - Management
- Network 7 - External and Floating IP (mapped after Overcloud creation)
| Network Type | Subnet | VLAN |
|---|---|---|
| Internal API | 172.16.0.0/24 | 201 |
| Tenant | 172.17.0.0/24 | 202 |
| Storage | 172.18.0.0/24 | 203 |
| Storage Management | 172.19.0.0/24 | 204 |
| Management | 172.20.0.0/24 | 205 |
| External / Floating IP | 10.1.1.0/24 | 100 |
6.2.1. Creating Custom Interface Templates
- /usr/share/openstack-tripleo-heat-templates/network/config/single-nic-vlans - Directory containing templates for single NIC with VLANs configuration on a per role basis.
- /usr/share/openstack-tripleo-heat-templates/network/config/bond-with-vlans - Directory containing templates for bonded NIC configuration on a per role basis.
- /usr/share/openstack-tripleo-heat-templates/network/config/multiple-nics - Directory containing templates for multiple NIC configuration using one NIC per role.
- /usr/share/openstack-tripleo-heat-templates/network/config/single-nic-linux-bridge-vlans - Directory containing templates for single NIC with VLANs configuration on a per role basis and using a Linux bridge instead of an Open vSwitch bridge.
For this example, copy the bonded NIC templates from /usr/share/openstack-tripleo-heat-templates/network/config/bond-with-vlans.
$ cp -r /usr/share/openstack-tripleo-heat-templates/network/config/bond-with-vlans ~/templates/nic-configs
Each template in this collection contains parameters, resources, and output sections. For this example, you would only edit the resources section. Each resources section begins with the following:
This is a request for the os-apply-config command and os-net-config subcommand to configure the network properties for a node. The network_config section contains your custom interface configuration arranged in a sequence based on type, which includes the following:
- interface
- Defines a single network interface. The configuration defines each interface using either the actual interface name ("eth0", "eth1", "enp0s25") or a set of numbered interfaces ("nic1", "nic2", "nic3").
- type: interface
  name: nic2
- vlan
- Defines a VLAN. Use the VLAN ID and subnet passed from the parameters section.
- type: vlan
  vlan_id: {get_param: ExternalNetworkVlanID}
  addresses:
    - ip_netmask: {get_param: ExternalIpSubnet}
- ovs_bond
- Defines a bond in Open vSwitch to join two or more interfaces together. This helps with redundancy and increases bandwidth.
- ovs_bridge
- Defines a bridge in Open vSwitch, which connects multiple interface, ovs_bond and vlan objects together.
- linux_bond
- Defines a Linux bond that joins two or more interfaces together. This helps with redundancy and increases bandwidth. Make sure to include the kernel-based bonding options in the bonding_options parameter. For more information on Linux bonding options, see 4.5.1. Bonding Module Directives in the Red Hat Enterprise Linux 7 Networking Guide.
- linux_bridge
- Defines a Linux bridge, which connects multiple interface, linux_bond and vlan objects together.
For example, the /home/stack/templates/nic-configs/controller.yaml template uses the following network_config:
Note
This template defines a bridge (usually the external bridge named br-ex) and creates a bonded interface called bond1 from two numbered interfaces: nic2 and nic3. The bridge also contains a number of tagged VLAN devices, which use bond1 as a parent device. The template also includes an interface that connects back to the director (nic1).
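A simplified sketch of such a network_config follows. It assumes the parameter names used by the bond-with-vlans examples (ExternalNetworkVlanID, ExternalIpSubnet, BondInterfaceOvsOptions, and the analogous InternalApi parameters); the real controller.yaml covers additional networks and routes:
network_config:
  - type: interface
    name: nic1
    use_dhcp: false
    # The real template also assigns the Provisioning (control plane) IP and routes here.
  - type: ovs_bridge
    name: br-ex
    members:
      - type: ovs_bond
        name: bond1
        ovs_options: {get_param: BondInterfaceOvsOptions}
        members:
          - type: interface
            name: nic2
            primary: true
          - type: interface
            name: nic3
      - type: vlan
        device: bond1
        vlan_id: {get_param: ExternalNetworkVlanID}
        addresses:
          - ip_netmask: {get_param: ExternalIpSubnet}
      - type: vlan
        device: bond1
        vlan_id: {get_param: InternalApiNetworkVlanID}
        addresses:
          - ip_netmask: {get_param: InternalApiIpSubnet}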
The network parameters in this template are obtained using the get_param function. You would define these in an environment file you create specifically for your networks.
Important
For example, consider an unused interface (nic4) that does not use any IP assignments for OpenStack services but still uses DHCP and/or a default route. To avoid network conflicts, remove any unused interfaces from ovs_bridge devices and disable the DHCP and default route settings:
- type: interface
name: nic4
use_dhcp: false
defroute: false
6.2.2. Creating a Network Environment File
The core Heat template collection contains example network environment files; each corresponds to one of the network interface configuration directories in /usr/share/openstack-tripleo-heat-templates/network/config/:
- /usr/share/openstack-tripleo-heat-templates/environments/net-single-nic-with-vlans.yaml - Example environment file for single NIC with VLANs configuration in the single-nic-vlans network interface directory. Environment files for disabling the External network (net-single-nic-with-vlans-no-external.yaml) or enabling IPv6 (net-single-nic-with-vlans-v6.yaml) are also available.
- /usr/share/openstack-tripleo-heat-templates/environments/net-bond-with-vlans.yaml - Example environment file for bonded NIC configuration in the bond-with-vlans network interface directory. Environment files for disabling the External network (net-bond-with-vlans-no-external.yaml) or enabling IPv6 (net-bond-with-vlans-v6.yaml) are also available.
- /usr/share/openstack-tripleo-heat-templates/environments/net-multiple-nics.yaml - Example environment file for a multiple NIC configuration in the multiple-nics network interface directory. An environment file for enabling IPv6 (net-multiple-nics-v6.yaml) is also available.
- /usr/share/openstack-tripleo-heat-templates/environments/net-single-nic-linux-bridge-with-vlans.yaml - Example environment file for single NIC with VLANs configuration using a Linux bridge instead of an Open vSwitch bridge, which uses the single-nic-linux-bridge-vlans network interface directory.
This scenario uses a modified version of the /usr/share/openstack-tripleo-heat-templates/environments/net-bond-with-vlans.yaml file. Copy this file to the stack user's templates directory.
$ cp /usr/share/openstack-tripleo-heat-templates/environments/net-bond-with-vlans.yaml /home/stack/templates/network-environment.yaml
The resource_registry section contains modified links to the custom network interface templates for each node role. See Section 6.2.1, “Creating Custom Interface Templates”.
The parameter_defaults section contains a list of parameters that define the network options for each network type. For a full reference of these options, see Appendix F, Network Environment Options.
The BondInterfaceOvsOptions option provides options for our bonded interface using nic2 and nic3. For more information on bonding options, see Appendix G, Open vSwitch Bonding Options.
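A trimmed sketch of such a network environment file, assuming the subnets and VLAN IDs from the planning table in Section 6.2 and the parameter names from the net-bond-with-vlans example; only two networks are shown, and the allocation pools and bond mode are placeholders:
resource_registry:
  OS::TripleO::Controller::Net::SoftwareConfig: /home/stack/templates/nic-configs/controller.yaml
  OS::TripleO::Compute::Net::SoftwareConfig: /home/stack/templates/nic-configs/compute.yaml

parameter_defaults:
  InternalApiNetCidr: 172.16.0.0/24
  InternalApiNetworkVlanID: 201
  InternalApiAllocationPools: [{'start': '172.16.0.10', 'end': '172.16.0.200'}]
  ExternalNetCidr: 10.1.1.0/24
  ExternalNetworkVlanID: 100
  ExternalAllocationPools: [{'start': '10.1.1.10', 'end': '10.1.1.50'}]
  BondInterfaceOvsOptions: "bond_mode=active-backup"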
Important
6.2.3. Assigning OpenStack Services to Isolated Networks
You can reassign services to different network types by adding parameters to your network environment file (/home/stack/templates/network-environment.yaml). The ServiceNetMap parameter determines the network types used for each service.
For example, setting a service's entry to storage places that service on the Storage network instead of the Storage Management network. This means you only need to define a set of parameter_defaults for the Storage network and not the Storage Management network.
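A hedged illustration of such an override in the network environment file; the service keys shown (SwiftMgmtNetwork, CephClusterNetwork) are assumptions based on the default map, and older template collections may require supplying the full ServiceNetMap rather than a partial override:
parameter_defaults:
  ServiceNetMap:
    SwiftMgmtNetwork: storage    # assumed default: storage_mgmt
    CephClusterNetwork: storage  # assumed default: storage_mgmt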
6.2.4. Selecting Networks to Deploy
The resource_registry section of the environment file for networks and ports does not ordinarily need to be changed. The list of networks can be changed if only a subset of the networks is desired.
Note
When specifying a custom subset of networks, do not include environments/network-isolation.yaml on the deployment command line. Instead, specify all the networks and ports in the network environment file.
The environment file contains resource registry entries for the OS::TripleO::Network::* resources. By default these resources point at a noop.yaml file that does not create any networks. By pointing these resources at the YAML files for each network, you enable the creation of these networks.
For example, to skip creating the Storage Management network, storage_mgmt.yaml could be replaced with noop.yaml:
By using noop.yaml, no network or ports are created, so the services on the Storage Management network would default to the Provisioning network. This can be changed in the ServiceNetMap in order to move the Storage Management services to another network, such as the Storage network.
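A minimal sketch of the substitution described above; the path to noop.yaml is an assumption based on the core template collection layout:
resource_registry:
  # Point the Storage Management network at noop.yaml so the network is not created
  OS::TripleO::Network::StorageMgmt: /usr/share/openstack-tripleo-heat-templates/network/noop.yaml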
6.3. Controlling Node Placement
- Assign specific node IDs e.g.
controller-0,controller-1, etc - Assign custom hostnames
- Assign specific IP addresses
- Assign specific Virtual IP addresses
Note
6.3.1. Assigning Specific Node IDs
This process assigns specific node IDs such as controller-0, controller-1, compute-0, compute-1, and so forth. The first step is to assign the ID as a per-node capability that the Nova scheduler matches on deployment:
ironic node-update <id> replace properties/capabilities='node:controller-0,boot_option:local'
This assigns the capability node:controller-0 to the node. Repeat this pattern using a unique continuous index, starting from 0, for all nodes. Make sure all nodes for a given role (Controller, Compute, or each of the storage roles) are tagged in the same way or else the Nova scheduler will not match the capabilities correctly.
The next step is to create a Heat environment file (for example, scheduler_hints_env.yaml) that uses scheduler hints to match the capabilities for each node. For example:
parameter_defaults:
ControllerSchedulerHints:
'capabilities:node': 'controller-%index%'
To use these scheduler hints, include the scheduler_hints_env.yaml environment file with the overcloud deploy command during Overcloud creation.
- ControllerSchedulerHints for Controller nodes.
- NovaComputeSchedulerHints for Compute nodes.
- BlockStorageSchedulerHints for Block Storage nodes.
- ObjectStorageSchedulerHints for Object Storage nodes.
- CephStorageSchedulerHints for Ceph Storage nodes.
Note
Use the default baremetal flavor for deployment and not the flavors designed for profile matching (compute, control, and so on). For example:
$ openstack overcloud deploy ... --control-flavor baremetal --compute-flavor baremetal ...
6.3.2. Assigning Custom Hostnames
Custom hostnames are useful when you need to define where a system is deployed (for example, rack2-row12), match an inventory identifier, or in other situations where a custom hostname is desired.
To define custom hostnames, use the HostnameMap parameter in an environment file, such as the scheduler_hints_env.yaml file from Section 6.3.1, “Assigning Specific Node IDs”. For example:
Define the HostnameMap in the parameter_defaults section. Set the first value of each mapping to the original hostname that Heat defines using the HostnameFormat parameters (e.g. overcloud-controller-0), and set the second value to the desired custom hostname for that node (e.g. overcloud-controller-prod-123-0).
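A minimal sketch of such a mapping, using the example hostnames from the paragraph above:
parameter_defaults:
  HostnameMap:
    overcloud-controller-0: overcloud-controller-prod-123-0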
6.3.3. Assigning Predictable IPs
To assign predictable IP addresses to nodes on each network, use the environments/ips-from-pool-all.yaml environment file in the core Heat template collection. Copy this file to the stack user's templates directory.
$ cp /usr/share/openstack-tripleo-heat-templates/environments/ips-from-pool-all.yaml ~/templates/.
There are two major sections in the ips-from-pool-all.yaml file.
The first is a set of resource_registry references that override the defaults. These tell the director to use a specific IP for a given port on a node type. Modify each resource to use the absolute path of its respective template. For example:
If you are not using pre-assigned IPs for a certain node type or network, remove the resource_registry entries related to that node type or network from the environment file.
The second section is parameter_defaults, where the actual IP addresses are assigned. Each node type has an associated parameter:
- ControllerIPs for Controller nodes.
- NovaComputeIPs for Compute nodes.
- CephStorageIPs for Ceph Storage nodes.
- BlockStorageIPs for Block Storage nodes.
- SwiftStorageIPs for Object Storage nodes.
Make sure the chosen IP addresses fall outside the corresponding allocation pools. For example, make sure the internal_api assignments fall outside of the InternalApiAllocationPools range. This avoids conflicts with any IPs chosen automatically. Likewise, make sure the IP assignments do not conflict with the VIP configuration, either for standard predictable VIP placement (see Section 6.3.4, “Assigning Predictable Virtual IPs”) or external load balancing (see Section 6.5, “Configuring External Load Balancing”).
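For illustration only, a parameter_defaults entry that pre-assigns Internal API addresses to three Controller nodes might look like the following; the addresses are placeholders and must sit outside your InternalApiAllocationPools range:
parameter_defaults:
  ControllerIPs:
    internal_api:
    # One address per Controller node, by index
    - 172.16.0.251
    - 172.16.0.252
    - 172.16.0.253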
To apply this configuration during a deployment, include the environment file with the openstack overcloud deploy command. If using network isolation (see Section 6.2, “Isolating Networks”), include this file after the network-isolation.yaml file. For example:
$ openstack overcloud deploy --templates -e /usr/share/openstack-tripleo-heat-templates/environments/network-isolation.yaml -e ~/templates/ips-from-pool-all.yaml [OTHER OPTIONS]
6.3.4. Assigning Predictable Virtual IPs
To assign predictable Virtual IPs (VIPs) for Overcloud services, add the relevant VIP parameters to your network environment file's parameter_defaults section:
Select these VIPs from outside of their respective allocation pool ranges. For example, select an IP address for InternalApiVirtualFixedIPs that is not within the InternalApiAllocationPools range.
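As a sketch, with a placeholder address chosen outside the Internal API allocation pool, the parameter might be set as follows:
parameter_defaults:
  # Placeholder VIP outside InternalApiAllocationPools
  InternalApiVirtualFixedIPs: [{'ip_address':'172.16.0.9'}]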
6.4. Configuring Containerized Compute Nodes
Important
- docker.yaml - The main environment file for configuring containerized Compute nodes.
- docker-network.yaml - The environment file for containerized Compute node networking without network isolation.
- docker-network-isolation.yaml - The environment file for containerized Compute nodes using network isolation.
6.4.1. Examining the Containerized Compute Environment File (docker.yaml)
The docker.yaml file is the main environment file for the containerized Compute node configuration. It includes the following entries in the resource_registry:
resource_registry:
OS::TripleO::ComputePostDeployment: ../docker/compute-post.yaml
OS::TripleO::NodeUserData: ../docker/firstboot/install_docker_agents.yaml
- OS::TripleO::NodeUserData
- Provides a Heat template that uses custom configuration on first boot. In this case, it installs the openstack-heat-docker-agents container on the Compute nodes when they first boot. This container provides a set of initialization scripts to configure the containerized Compute node and Heat hooks to communicate with the director.
- OS::TripleO::ComputePostDeployment
- Provides a Heat template with a set of post-configuration resources for Compute nodes. This includes a software configuration resource that provides a set of tags to Puppet. These tags define the Puppet modules to pass to the openstack-heat-docker-agents container.
The docker.yaml file includes a parameter called NovaImage that replaces the standard overcloud-full image with a different image (atomic-image) when provisioning Compute nodes. See Section 6.4.2, “Uploading the Atomic Host Image” for instructions on uploading this new image.
The docker.yaml file also includes a parameter_defaults section that defines the Docker registry and images to use for the Compute node services. You can modify this section to use a local registry instead of the default registry (registry.access.redhat.com). See Section 6.4.3, “Using a Local Registry” for instructions on configuring a local registry.
6.4.2. Uploading the Atomic Host Image
The director requires an image called atomic-image in its image store. This is because the Compute node requires this image for the base OS during the provisioning phase of the Overcloud creation.
Download the Red Hat Enterprise Linux Atomic Host cloud image and save it to the images subdirectory in the stack user's home directory.
Import the image into the director's image store as the stack user:
$ glance image-create --name atomic-image --file ~/images/rhel-atomic-cloud-7.2-12.x86_64.qcow2 --disk-format qcow2 --container-format bare
6.4.3. Using a Local Registry
To use a local registry, make a copy of the docker.yaml environment file in the templates subdirectory:
$ cp /usr/share/openstack-tripleo-heat-templates/environments/docker.yaml ~/templates/.
Edit the copied file and modify the resource_registry entries to use absolute paths:
resource_registry:
OS::TripleO::ComputePostDeployment: /usr/share/openstack-tripleo-heat-templates/docker/compute-post.yaml
OS::TripleO::NodeUserData: /usr/share/openstack-tripleo-heat-templates/docker/firstboot/install_docker_agents.yaml
Set DockerNamespace in parameter_defaults to your registry URL. Also set DockerNamespaceIsRegistry to true. For example:
parameter_defaults:
DockerNamespace: registry.example.com:8787/registry.access.redhat.com
DockerNamespaceIsRegistry: true
6.4.4. Including Environment Files in the Overcloud Deployment
Include the main environment file (docker.yaml) and the network environment file (docker-network.yaml) for the containerized Compute nodes along with the openstack overcloud deploy command. For example:
$ openstack overcloud deploy --templates -e /usr/share/openstack-tripleo-heat-templates/environments/docker.yaml -e /usr/share/openstack-tripleo-heat-templates/environments/docker-network.yaml [OTHER OPTIONS] ...
When using network isolation, include the network environment file for containerized Compute nodes with network isolation (docker-network-isolation.yaml) instead. Add these files before the network isolation files from Section 6.2, “Isolating Networks”. For example:
openstack overcloud deploy --templates -e /usr/share/openstack-tripleo-heat-templates/environments/docker.yaml -e /usr/share/openstack-tripleo-heat-templates/environments/docker-network-isolation.yaml -e /usr/share/openstack-tripleo-heat-templates/environments/net-single-nic-with-vlans.yaml -e /usr/share/openstack-tripleo-heat-templates/environments/network-isolation.yaml [OTHER OPTIONS] ...
6.5. Configuring External Load Balancing
6.6. Configuring IPv6 Networking
6.7. Configuring NFS Storage
The director's Heat template collection contains environment files in /usr/share/openstack-tripleo-heat-templates/environments/. These environment templates help with custom configuration of some of the supported features in a director-created Overcloud. This includes an environment file to help configure storage. This file is located at /usr/share/openstack-tripleo-heat-templates/environments/storage-environment.yaml. Copy this file to the stack user's template directory.
$ cp /usr/share/openstack-tripleo-heat-templates/environments/storage-environment.yaml ~/templates/.
- CinderEnableIscsiBackend
- Enables the iSCSI backend. Set to false.
- CinderEnableRbdBackend
- Enables the Ceph Storage backend. Set to false.
- CinderEnableNfsBackend
- Enables the NFS backend. Set to true.
- NovaEnableRbdBackend
- Enables Ceph Storage for Nova ephemeral storage. Set to false.
- GlanceBackend
- Define the back end to use for Glance. Set to file to use file-based storage for images. The Overcloud will save these files in a mounted NFS share for Glance.
- CinderNfsMountOptions
- The NFS mount options for the volume storage.
- CinderNfsServers
- The NFS share to mount for volume storage. For example, 192.168.122.1:/export/cinder.
- GlanceFilePcmkManage
- Enables Pacemaker to manage the share for image storage. If disabled, the Overcloud stores images in the Controller node's file system. Set to true.
- GlanceFilePcmkFstype
- Defines the file system type that Pacemaker uses for image storage. Set to nfs.
- GlanceFilePcmkDevice
- The NFS share to mount for image storage. For example, 192.168.122.1:/export/glance.
- GlanceFilePcmkOptions
- The NFS mount options for the image storage.
Important
Include the option context=system_u:object_r:glance_var_lib_t:s0 in the GlanceFilePcmkOptions parameter to allow glance access to the /var/lib directory. Without this SELinux context, glance fails to write to the mount point.
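The following is a minimal sketch of how these parameters might look when set for NFS; the share paths and mount options are example values only:
parameter_defaults:
  CinderEnableIscsiBackend: false
  CinderEnableRbdBackend: false
  CinderEnableNfsBackend: true
  NovaEnableRbdBackend: false
  GlanceBackend: 'file'
  CinderNfsMountOptions: 'rw,sync'
  CinderNfsServers: '192.168.122.1:/export/cinder'
  GlanceFilePcmkManage: true
  GlanceFilePcmkFstype: 'nfs'
  GlanceFilePcmkDevice: '192.168.122.1:/export/glance'
  # Include the SELinux context noted in the Important box above
  GlanceFilePcmkOptions: 'rw,sync,context=system_u:object_r:glance_var_lib_t:s0'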
6.8. Configuring Ceph Storage
- Creating an Overcloud with its own Ceph Storage Cluster
- The director can create a Ceph Storage Cluster during the creation of the Overcloud. The director creates a set of Ceph Storage nodes that use the Ceph OSD to store the data. In addition, the director installs the Ceph Monitor service on the Overcloud's Controller nodes. This means if an organization creates an Overcloud with three highly available Controller nodes, the Ceph Monitor also becomes a highly available service.
- Integrating an Existing Ceph Storage Cluster into an Overcloud
- If you already have an existing Ceph Storage Cluster, you can integrate this during an Overcloud deployment. This means you manage and scale the cluster outside of the Overcloud configuration.
6.9. Configuring Third Party Storage
- Dell Storage Center
- Deploys a single Dell Storage Center back end for the Block Storage (cinder) service. The environment file is located at /usr/share/openstack-tripleo-heat-templates/environments/cinder-dellsc-config.yaml. See the Dell Storage Center Back End Guide for full configuration information.
- Dell EqualLogic
- Deploys a single Dell EqualLogic back end for the Block Storage (cinder) service. The environment file is located at /usr/share/openstack-tripleo-heat-templates/environments/cinder-eqlx-config.yaml. See the Dell EqualLogic Back End Guide for full configuration information.
- NetApp Block Storage
- Deploys a NetApp storage appliance as a back end for the Block Storage (cinder) service. The environment file is located at /usr/share/openstack-tripleo-heat-templates/environments/cinder-netapp-config.yaml. See the NetApp Block Storage Back End Guide for full configuration information.
6.10. Configuring the Overcloud Time Zone
Set the time zone for the Overcloud using the TimeZone parameter in an environment file. If you leave the TimeZone parameter blank, the Overcloud defaults to UTC time.
For example, to set the time zone to Japan, examine the contents of /usr/share/zoneinfo to locate a suitable entry:
$ ls /usr/share/zoneinfo/
Africa Asia Canada Cuba EST GB GMT-0 HST iso3166.tab Kwajalein MST NZ-CHAT posix right Turkey UTC Zulu
America Atlantic CET EET EST5EDT GB-Eire GMT+0 Iceland Israel Libya MST7MDT Pacific posixrules ROC UCT WET
Antarctica Australia Chile Egypt Etc GMT Greenwich Indian Jamaica MET Navajo Poland PRC ROK Universal W-SU
Arctic Brazil CST6CDT Eire Europe GMT0 Hongkong Iran Japan Mexico NZ Portugal PST8PDT Singapore US zone.tab
Japan is an individual time zone file in this result, but Africa is a directory containing additional time zone files:
$ ls /usr/share/zoneinfo/Africa/
Abidjan Algiers Bamako Bissau Bujumbura Ceuta Dar_es_Salaam El_Aaiun Harare Kampala Kinshasa Lome Lusaka Maseru Monrovia Niamey Porto-Novo Tripoli
Accra Asmara Bangui Blantyre Cairo Conakry Djibouti Freetown Johannesburg Khartoum Lagos Luanda Malabo Mbabane Nairobi Nouakchott Sao_Tome Tunis
Addis_Ababa Asmera Banjul Brazzaville Casablanca Dakar Douala Gaborone Juba Kigali Libreville Lubumbashi Maputo Mogadishu Ndjamena Ouagadougou Timbuktu Windhoek
To set the time zone to Japan, add the TimeZone parameter to an environment file (for example, timezone.yaml):
parameter_defaults:
TimeZone: 'Japan'
$ openstack overcloud deploy --templates -e timezone.yaml
6.11. Enabling SSL/TLS on the Overcloud
Note
Enabling SSL/TLS
Copy the enable-tls.yaml environment file from the Heat template collection:
$ cp -r /usr/share/openstack-tripleo-heat-templates/environments/enable-tls.yaml ~/templates/.
parameter_defaults:
- SSLCertificate:
- Copy the contents of the certificate file into the SSLCertificate parameter (see the sketch after this list).
Important
The certificate authority contents require the same indentation level for all new lines.
- SSLKey:
- Copy the contents of the private key into the SSLKey parameter (see the sketch after this list).
Important
The private key contents require the same indentation level for all new lines.
- EndpointMap:
- The EndpointMap contains a mapping of the services using HTTPS and HTTP communication. If using DNS for SSL communication, leave this section with the defaults. However, if using an IP address for the SSL certificate's common name (see Appendix A, SSL/TLS Certificate Configuration), replace all instances of CLOUDNAME with IP_ADDRESS. Use the following command to accomplish this:
$ sed -i 's/CLOUDNAME/IP_ADDRESS/' ~/templates/enable-tls.yaml
Important
Do not substitute IP_ADDRESS or CLOUDNAME for actual values. Heat replaces these variables with the appropriate value during the Overcloud creation.
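For reference, a minimal sketch of the expected structure in enable-tls.yaml is shown below; the certificate and key bodies are placeholders, not real material:
parameter_defaults:
  SSLCertificate: |
    -----BEGIN CERTIFICATE-----
    MIIDgzCCAmugAwIBAgIJAKk46qw6ncJa
    ...
    -----END CERTIFICATE-----
  SSLKey: |
    -----BEGIN RSA PRIVATE KEY-----
    MIIEowIBAAKCAQEAqVw8lnQ9RbeI1EdL
    ...
    -----END RSA PRIVATE KEY-----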
resource_registry:
- OS::TripleO::NodeTLSData:
- Change the resource path for OS::TripleO::NodeTLSData: to an absolute path:
resource_registry:
  OS::TripleO::NodeTLSData: /usr/share/openstack-tripleo-heat-templates/puppet/extraconfig/tls/tls-cert-inject.yaml
Injecting a Root Certificate
Copy the inject-trust-anchor.yaml environment file from the heat template collection:
$ cp -r /usr/share/openstack-tripleo-heat-templates/environments/inject-trust-anchor.yaml ~/templates/.
parameter_defaults:
- SSLRootCertificate:
- Copy the contents of the root certificate authority file into the SSLRootCertificate parameter.
Important
The certificate authority contents require the same indentation level for all new lines.
resource_registry:
- OS::TripleO::NodeTLSCAData:
- Change the resource path for OS::TripleO::NodeTLSCAData: to an absolute path:
resource_registry:
  OS::TripleO::NodeTLSCAData: /usr/share/openstack-tripleo-heat-templates/puppet/extraconfig/tls/ca-inject.yaml
Configuring DNS Endpoints
Create an environment file (~/templates/cloudname.yaml) to define the hostname of the Overcloud's endpoints. Use the following parameters:
parameter_defaults:
- CloudName:
- The DNS hostname of the Overcloud endpoints.
- DnsServers:
- A list of DNS servers to use. The configured DNS servers must contain an entry for the configured CloudName that matches the IP address of the Public API.
parameter_defaults:
CloudName: overcloud.example.com
DnsServers: ["10.0.0.1"]
Adding Environment Files During Overcloud Creation
The deployment command (openstack overcloud deploy) in Chapter 7, Creating the Overcloud uses the -e option to add environment files. Add the environment files from this section in the following order:
- The environment file to enable SSL/TLS (enable-tls.yaml)
- The environment file to set the DNS hostname (cloudname.yaml)
- The environment file to inject the root certificate authority (inject-trust-anchor.yaml)
$ openstack overcloud deploy --templates [...] -e /home/stack/templates/enable-tls.yaml -e ~/templates/cloudname.yaml -e ~/templates/inject-trust-anchor.yaml
6.12. Registering the Overcloud
Method 1 - Command Line
The deployment command (openstack overcloud deploy) uses a set of options to define your registration details. The table in Section 7.1, “Setting Overcloud Parameters” contains these options and their descriptions. Include these options when running the deployment command in Chapter 7, Creating the Overcloud. For example:
# openstack overcloud deploy --templates --rhel-reg --reg-method satellite --reg-sat-url http://example.satellite.com --reg-org MyOrg --reg-activation-key MyKey --reg-force [...]
Method 2 - Environment File
$ cp -r /usr/share/openstack-tripleo-heat-templates/extraconfig/pre_deploy/rhel-registration ~/templates/.
Edit ~/templates/rhel-registration/environment-rhel-registration.yaml and modify the following values to suit your registration method and details.
- rhel_reg_method
- Choose the registration method. Either portal, satellite, or disable.
- rhel_reg_type
- The type of unit to register. Leave blank to register as a system.
- rhel_reg_auto_attach
- Automatically attach compatible subscriptions to this system. Set to true to enable.
- rhel_reg_service_level
- The service level to use for auto attachment.
- rhel_reg_release
- Use this parameter to set a release version for auto attachment. Leave blank to use the default from Red Hat Subscription Manager.
- rhel_reg_pool_id
- The subscription pool ID to use. Use this if not auto-attaching subscriptions.
- rhel_reg_sat_url
- The base URL of the Satellite server to register Overcloud nodes. Use the Satellite's HTTP URL and not the HTTPS URL for this parameter. For example, use http://satellite.example.com and not https://satellite.example.com. The Overcloud creation process uses this URL to determine whether the server is a Red Hat Satellite 5 or Red Hat Satellite 6 server. If a Red Hat Satellite 6 server, the Overcloud obtains the katello-ca-consumer-latest.noarch.rpm file, registers with subscription-manager, and installs katello-agent. If a Red Hat Satellite 5 server, the Overcloud obtains the RHN-ORG-TRUSTED-SSL-CERT file and registers with rhnreg_ks.
- rhel_reg_server_url
- The hostname of the subscription service to use. The default is the Customer Portal Subscription Management service, subscription.rhn.redhat.com. If this option is not used, the system is registered with Customer Portal Subscription Management. The subscription server URL uses the form https://hostname:port/prefix.
- rhel_reg_base_url
- The hostname of the content delivery server to use to receive updates. The default is https://cdn.redhat.com. Since Satellite 6 hosts its own content, the URL must be used for systems registered with Satellite 6. The base URL for content uses the form https://hostname:port/prefix.
- rhel_reg_org
- The organization to use for registration.
- rhel_reg_environment
- The environment to use within the chosen organization.
- rhel_reg_repos
- A comma-separated list of repositories to enable. See Section 2.5, “Repository Requirements” for repositories to enable.
- rhel_reg_activation_key
- The activation key to use for registration.
- rhel_reg_user, rhel_reg_password
- The username and password for registration. If possible, use activation keys for registration.
- rhel_reg_machine_name
- The machine name. Leave blank to use the hostname of the node.
- rhel_reg_force
- Set to true to force your registration options, for example, when re-registering nodes.
- rhel_reg_sat_repo
- The repository containing Red Hat Satellite 6's management tools, such as katello-agent. For example, rhel-7-server-satellite-tools-6.1-rpms.
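As a sketch, a Satellite 6 registration using an activation key might set the following values in environment-rhel-registration.yaml; the URL, organization, key, and repository are placeholders:
parameter_defaults:
  rhel_reg_method: "satellite"
  rhel_reg_sat_url: "http://satellite.example.com"
  rhel_reg_org: "MyOrg"
  rhel_reg_activation_key: "MyKey"
  rhel_reg_sat_repo: "rhel-7-server-satellite-tools-6.1-rpms"
  rhel_reg_force: "true"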
The deployment command (openstack overcloud deploy) in Chapter 7, Creating the Overcloud uses the -e option to add environment files. Add both ~/templates/rhel-registration/environment-rhel-registration.yaml and ~/templates/rhel-registration/rhel-registration-resource-registry.yaml. For example:
$ openstack overcloud deploy --templates [...] -e /home/stack/templates/rhel-registration/environment-rhel-registration.yaml -e /home/stack/templates/rhel-registration/rhel-registration-resource-registry.yaml
Important
Registration is set as the OS::TripleO::NodeExtraConfig Heat resource. This means you can only use this resource for registration. See Section 6.14, “Customizing Overcloud Pre-Configuration” for more information.
6.13. Customizing Configuration on First Boot
The director performs configuration on all nodes upon the initial creation of the Overcloud using cloud-init, which you can call using the OS::TripleO::NodeUserData resource type.
In this example, create a Heat template (/home/stack/templates/nameserver.yaml) that runs a script to append each node's resolv.conf with a specific nameserver. Use the OS::Heat::MultipartMime resource type to send the configuration script. A sketch of such a template follows.
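A minimal sketch of the template, using 192.168.1.1 as a placeholder nameserver, might look like the following; the exact contents of the original template may differ:
heat_template_version: 2014-10-16

description: >
  Appends a nameserver to resolv.conf on first boot

resources:
  userdata:
    type: OS::Heat::MultipartMime
    properties:
      parts:
      - config: {get_resource: nameserver_config}

  nameserver_config:
    type: OS::Heat::SoftwareConfig
    properties:
      config: |
        #!/bin/bash
        echo "nameserver 192.168.1.1" >> /etc/resolv.conf

outputs:
  # NodeUserData expects the MultipartMime resource as the stack output
  OS::stack_id:
    value: {get_resource: userdata}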
Next, create an environment file (/home/stack/templates/firstboot.yaml) that registers your Heat template as the OS::TripleO::NodeUserData resource type.
resource_registry:
OS::TripleO::NodeUserData: /home/stack/templates/nameserver.yaml
$ openstack overcloud deploy --templates -e /home/stack/templates/firstboot.yaml
The -e option applies the environment file to the Overcloud stack.
Important
You can only register OS::TripleO::NodeUserData to one Heat template. Subsequent usage overrides the Heat template to use.
6.14. Customizing Overcloud Pre-Configuration
- OS::TripleO::ControllerExtraConfigPre
- Additional configuration applied to Controller nodes before the core Puppet configuration.
- OS::TripleO::ComputeExtraConfigPre
- Additional configuration applied to Compute nodes before the core Puppet configuration.
- OS::TripleO::CephStorageExtraConfigPre
- Additional configuration applied to CephStorage nodes before the core Puppet configuration.
- OS::TripleO::NodeExtraConfig
- Additional configuration applied to all node roles before the core Puppet configuration.
In this example, create a Heat template (/home/stack/templates/nameserver.yaml) that runs a script to append each node's resolv.conf with a variable nameserver. The template defines the resources described below; a sketch follows the descriptions.
- ExtraPreConfig
- This defines a software configuration. In this example, we define a Bash script and Heat replaces _NAMESERVER_IP_ with the value stored in the nameserver_ip parameter.
- ExtraPreDeployments
- This executes a software configuration, which is the software configuration from the ExtraPreConfig resource. Note the following:
- The server parameter is provided by the parent template and is mandatory in templates for this hook.
- input_values contains a parameter called deploy_identifier, which stores the DeployIdentifier from the parent template. This parameter provides a timestamp to the resource for each deployment update. This ensures the resource reapplies on subsequent overcloud updates.
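A minimal sketch reconstructing what such a pre-configuration template might look like follows; the property names are assumptions based on the descriptions above, so verify them against your Heat version before use:
heat_template_version: 2014-10-16

description: >
  Appends a variable nameserver to resolv.conf before core configuration

parameters:
  server:
    type: string
  nameserver_ip:
    type: string
  DeployIdentifier:
    type: string

resources:
  ExtraPreConfig:
    type: OS::Heat::SoftwareConfig
    properties:
      group: script
      config:
        str_replace:
          template: |
            #!/bin/sh
            echo "nameserver _NAMESERVER_IP_" >> /etc/resolv.conf
          params:
            _NAMESERVER_IP_: {get_param: nameserver_ip}

  ExtraPreDeployments:
    type: OS::Heat::SoftwareDeployment
    properties:
      config: {get_resource: ExtraPreConfig}
      server: {get_param: server}
      input_values:
        deploy_identifier: {get_param: DeployIdentifier}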
Next, create an environment file (/home/stack/templates/pre_config.yaml) that registers your Heat template as the OS::TripleO::NodeExtraConfig resource type.
$ openstack overcloud deploy --templates -e /home/stack/templates/pre_config.yaml
Important
You can only register OS::TripleO::NodeExtraConfig to one Heat template. Subsequent usage overrides the Heat template to use.
6.15. Customizing Overcloud Post-Configuration
The OS::TripleO::NodeExtraConfigPost resource applies configuration using the standard OS::Heat::SoftwareConfig types. This applies additional configuration after the main Overcloud configuration completes.
In this example, create a Heat template (/home/stack/templates/nameserver.yaml) that runs a script to append each node's resolv.conf with a variable nameserver. The template defines the resources described below; a sketch follows the descriptions.
- ExtraConfig
- This defines a software configuration. In this example, we define a Bash script and Heat replaces _NAMESERVER_IP_ with the value stored in the nameserver_ip parameter.
- ExtraDeployments
- This executes a software configuration, which is the software configuration from the ExtraConfig resource. Note the following:
- The servers parameter is provided by the parent template and is mandatory in templates for this hook.
- input_values contains a parameter called deploy_identifier, which stores the DeployIdentifier from the parent template. This parameter provides a timestamp to the resource for each deployment update. This ensures the resource reapplies on subsequent overcloud updates.
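A minimal sketch of the post-configuration variant follows; note the servers parameter and the grouped OS::Heat::SoftwareDeployments resource, which are assumptions based on the descriptions above:
heat_template_version: 2014-10-16

description: >
  Appends a variable nameserver to resolv.conf after core configuration

parameters:
  servers:
    type: json
  nameserver_ip:
    type: string
  DeployIdentifier:
    type: string

resources:
  ExtraConfig:
    type: OS::Heat::SoftwareConfig
    properties:
      group: script
      config:
        str_replace:
          template: |
            #!/bin/sh
            echo "nameserver _NAMESERVER_IP_" >> /etc/resolv.conf
          params:
            _NAMESERVER_IP_: {get_param: nameserver_ip}

  ExtraDeployments:
    type: OS::Heat::SoftwareDeployments
    properties:
      config: {get_resource: ExtraConfig}
      servers: {get_param: servers}
      input_values:
        deploy_identifier: {get_param: DeployIdentifier}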
Next, create an environment file (/home/stack/templates/post_config.yaml) that registers your Heat template as the OS::TripleO::NodeExtraConfigPost resource type.
$ openstack overcloud deploy --templates -e /home/stack/templates/post_config.yaml
Important
You can only register OS::TripleO::NodeExtraConfigPost to one Heat template. Subsequent usage overrides the Heat template to use.
6.16. Customizing Puppet Configuration Data
- ExtraConfig
- Configuration to add to all nodes.
- controllerExtraConfig
- Configuration to add to all Controller nodes.
- NovaComputeExtraConfig
- Configuration to add to all Compute nodes.
- BlockStorageExtraConfig
- Configuration to add to all Block Storage nodes.
- ObjectStorageExtraConfig
- Configuration to add to all Object Storage nodes
- CephStorageExtraConfig
- Configuration to add to all Ceph Storage nodes
Add the hieradata to an environment file's parameter_defaults section. For example, to increase the reserved memory for Compute hosts to 1024 MB and set the VNC keymap to Japanese:
parameter_defaults:
NovaComputeExtraConfig:
nova::compute::reserved_host_memory: 1024
nova::compute::vnc_keymap: ja
Include this environment file when running openstack overcloud deploy.
Important
6.17. Applying Custom Puppet Configuration
In this example, apply a custom motd to each node. The first step is to create a Heat template (/home/stack/templates/custom_puppet_config.yaml) that launches the Puppet configuration.
This template includes /home/stack/templates/motd.pp and passes it to nodes for configuration. The motd.pp file itself contains the Puppet classes to install and configure motd. A sketch of such a template follows.
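A sketch of what such a template might contain follows; it assumes the group: puppet configuration hook and the motd.pp path mentioned above:
heat_template_version: 2014-10-16

description: >
  Runs extra Puppet configuration to set a custom motd

parameters:
  servers:
    type: json

resources:
  ExtraPuppetConfig:
    type: OS::Heat::SoftwareConfig
    properties:
      group: puppet
      config:
        get_file: /home/stack/templates/motd.pp

  ExtraPuppetDeployments:
    type: OS::Heat::SoftwareDeployments
    properties:
      config: {get_resource: ExtraPuppetConfig}
      servers: {get_param: servers}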
Next, create an environment file (/home/stack/templates/puppet_post_config.yaml) that registers your Heat template as the OS::TripleO::NodeExtraConfigPost resource type.
resource_registry:
OS::TripleO::NodeExtraConfigPost: /home/stack/templates/custom_puppet_config.yaml
$ openstack overcloud deploy --templates -e /home/stack/templates/puppet_post_config.yaml
This applies the Puppet configuration from motd.pp to all nodes in the Overcloud.
6.18. Using Customized Core Heat Templates
Copy the Heat template collection in /usr/share/openstack-tripleo-heat-templates to the stack user's templates directory:
$ cp -r /usr/share/openstack-tripleo-heat-templates ~/templates/my-overcloud
When running openstack overcloud deploy, use the --templates option to specify your local template directory. This occurs later in this scenario (see Chapter 7, Creating the Overcloud).
Note
The director uses the default template location (/usr/share/openstack-tripleo-heat-templates) if you specify the --templates option without a directory.
Important
Red Hat provides updates to the Heat template collection over subsequent releases. Using a modified template collection can lead to a divergence between your custom copy and the original copy in /usr/share/openstack-tripleo-heat-templates. Red Hat recommends using the configuration methods from the previous sections instead of modifying the Heat template collection. If you do create a copy of the template collection, track changes to the templates with a version control system such as git.
Chapter 7. Creating the Overcloud
The final stage in creating your OpenStack environment is to run the openstack overcloud deploy command to create it. Before running this command, you should familiarize yourself with key options and how to include custom environment files. This chapter discusses the openstack overcloud deploy command and the options associated with it.
Warning
Do not run openstack overcloud deploy as a background process. The Overcloud creation might hang in mid-deployment if started as a background process.
7.1. Setting Overcloud Parameters
The following table lists the additional parameters for the openstack overcloud deploy command.
| Parameter | Description | Example |
|---|---|---|
| --templates [TEMPLATES] | The directory containing the Heat templates to deploy. If blank, the command uses the default template location at /usr/share/openstack-tripleo-heat-templates/ | ~/templates/my-overcloud |
| --stack STACK | The name of the stack to create or update | overcloud |
| -t [TIMEOUT], --timeout [TIMEOUT] | Deployment timeout in minutes | 240 |
| --control-scale [CONTROL_SCALE] | The number of Controller nodes to scale out | 3 |
| --compute-scale [COMPUTE_SCALE] | The number of Compute nodes to scale out | 3 |
| --ceph-storage-scale [CEPH_STORAGE_SCALE] | The number of Ceph Storage nodes to scale out | 3 |
| --block-storage-scale [BLOCK_STORAGE_SCALE] | The number of Cinder nodes to scale out | 3 |
| --swift-storage-scale [SWIFT_STORAGE_SCALE] | The number of Swift nodes to scale out | 3 |
| --control-flavor [CONTROL_FLAVOR] | The flavor to use for Controller nodes | control |
| --compute-flavor [COMPUTE_FLAVOR] | The flavor to use for Compute nodes | compute |
| --ceph-storage-flavor [CEPH_STORAGE_FLAVOR] | The flavor to use for Ceph Storage nodes | ceph-storage |
| --block-storage-flavor [BLOCK_STORAGE_FLAVOR] | The flavor to use for Cinder nodes | cinder-storage |
| --swift-storage-flavor [SWIFT_STORAGE_FLAVOR] | The flavor to use for Swift storage nodes | swift-storage |
| --neutron-flat-networks [NEUTRON_FLAT_NETWORKS] | (DEPRECATED) Defines the flat networks to configure in neutron plugins. Defaults to "datacentre" to permit external network creation | datacentre |
| --neutron-physical-bridge [NEUTRON_PHYSICAL_BRIDGE] | (DEPRECATED) An Open vSwitch bridge to create on each hypervisor. This defaults to "br-ex". Typically, this should not need to be changed | br-ex |
| --neutron-bridge-mappings [NEUTRON_BRIDGE_MAPPINGS] | (DEPRECATED) The logical to physical bridge mappings to use. Defaults to mapping the external bridge on hosts (br-ex) to a physical name (datacentre). You would use this for the default floating network | datacentre:br-ex |
| --neutron-public-interface [NEUTRON_PUBLIC_INTERFACE] | (DEPRECATED) Defines the interface to bridge onto br-ex for network nodes | nic1, eth0 |
| --neutron-network-type [NEUTRON_NETWORK_TYPE] | (DEPRECATED) The tenant network type for Neutron | gre or vxlan |
| --neutron-tunnel-types [NEUTRON_TUNNEL_TYPES] | (DEPRECATED) The tunnel types for the Neutron tenant network. To specify multiple values, use a comma separated string | 'vxlan' 'gre,vxlan' |
| --neutron-tunnel-id-ranges [NEUTRON_TUNNEL_ID_RANGES] | (DEPRECATED) Ranges of GRE tunnel IDs to make available for tenant network allocation | 1:1000 |
| --neutron-vni-ranges [NEUTRON_VNI_RANGES] | (DEPRECATED) Ranges of VXLAN VNI IDs to make available for tenant network allocation | 1:1000 |
| --neutron-disable-tunneling | (DEPRECATED) Disables tunneling in case you aim to use a VLAN segmented network or flat network with Neutron | |
| --neutron-network-vlan-ranges [NEUTRON_NETWORK_VLAN_RANGES] | (DEPRECATED) The Neutron ML2 and Open vSwitch VLAN mapping range to support. Defaults to permitting any VLAN on the 'datacentre' physical network | datacentre:1:1000 |
| --neutron-mechanism-drivers [NEUTRON_MECHANISM_DRIVERS] | (DEPRECATED) The mechanism drivers for the neutron tenant network. Defaults to "openvswitch". To specify multiple values, use a comma-separated string | 'openvswitch,l2population' |
| --libvirt-type [LIBVIRT_TYPE] | Virtualization type to use for hypervisors | kvm,qemu |
| --ntp-server [NTP_SERVER] | Network Time Protocol (NTP) server to use to synchronize time. You can also specify multiple NTP servers in a comma-separated list, for example: --ntp-server 0.centos.pool.org,1.centos.pool.org. For a high availability cluster deployment, it is essential that your controllers are consistently referring to the same time source. Note that a typical environment might already have a designated NTP time source with established practices. | pool.ntp.org |
| --no-proxy [NO_PROXY] | Defines custom values for the environment variable no_proxy, which excludes certain domain extensions from proxy communication | |
| --overcloud-ssh-user OVERCLOUD_SSH_USER | Defines the SSH user to access the Overcloud nodes. Normally SSH access occurs through the heat-admin user. | ocuser |
| -e [EXTRA HEAT TEMPLATE], --extra-template [EXTRA HEAT TEMPLATE] | Extra environment files to pass to the Overcloud deployment. Can be specified more than once. Note that the order of environment files passed to the openstack overcloud deploy command is important. For example, parameters from each sequential environment file override the same parameters from earlier environment files. | -e ~/templates/my-config.yaml |
| --validation-errors-fatal | The Overcloud creation process performs a set of pre-deployment checks. This option exits if any errors occur from the pre-deployment checks. It is advisable to use this option as any errors can cause your deployment to fail. | |
| --validation-warnings-fatal | The Overcloud creation process performs a set of pre-deployment checks. This option exits if any non-critical warnings occur from the pre-deployment checks. | |
| --dry-run | Performs a validation check on the Overcloud but does not actually create the Overcloud. | |
| --rhel-reg | Register Overcloud nodes to the Customer Portal or Satellite 6 | |
| --reg-method | Registration method to use for the overcloud nodes | satellite for Red Hat Satellite 6 or Red Hat Satellite 5, portal for Customer Portal |
| --reg-org [REG_ORG] | Organization to use for registration | |
| --reg-force | Register the system even if it is already registered | |
| --reg-sat-url [REG_SAT_URL] | The base URL of the Satellite server to register Overcloud nodes. Use the Satellite's HTTP URL and not the HTTPS URL for this parameter. For example, use http://satellite.example.com and not https://satellite.example.com. The Overcloud creation process uses this URL to determine whether the server is a Red Hat Satellite 5 or Red Hat Satellite 6 server. If a Red Hat Satellite 6 server, the Overcloud obtains the katello-ca-consumer-latest.noarch.rpm file, registers with subscription-manager, and installs katello-agent. If a Red Hat Satellite 5 server, the Overcloud obtains the RHN-ORG-TRUSTED-SSL-CERT file and registers with rhnreg_ks. | |
| --reg-activation-key [REG_ACTIVATION_KEY] | Activation key to use for registration | |
Note
$ openstack help overcloud deploy
7.2. Including Environment Files in Overcloud Creation
The -e option includes an environment file to customize your Overcloud. You can include as many environment files as necessary. However, the order of the environment files is important as the parameters and resources defined in subsequent environment files take precedence. Use the following list as an example of the environment file order:
- Any network isolation files, including the initialization file (environments/network-isolation.yaml) from the heat template collection and then your custom NIC configuration file. See Section 6.2, “Isolating Networks” for more information on network isolation.
- Any external load balancing environment files.
- Any storage environment files such as Ceph Storage, NFS, iSCSI, etc.
- Any environment files for Red Hat CDN or Satellite registration.
- Any other custom environment files.
Any environment files added using the -e option become part of your Overcloud's stack definition. The director requires these environment files for re-deployment and post-deployment functions in Chapter 8, Performing Tasks after Overcloud Creation. Failure to include these files can result in damage to your Overcloud.
- Modify parameters in the custom environment files and Heat templates
- Run the openstack overcloud deploy command again with the same environment files
Important
Keep track of the options and environment files passed to the openstack overcloud deploy command for later use, for example, in a script such as deploy-overcloud.sh.
7.3. Overcloud Creation Example
$ openstack overcloud deploy --templates -e /usr/share/openstack-tripleo-heat-templates/environments/network-isolation.yaml -e ~/templates/network-environment.yaml -e ~/templates/storage-environment.yaml --control-scale 3 --compute-scale 3 --ceph-storage-scale 3 --control-flavor control --compute-flavor compute --ceph-storage-flavor ceph-storage --ntp-server pool.ntp.org
- --templates - Creates the Overcloud using the Heat template collection in /usr/share/openstack-tripleo-heat-templates.
- -e /usr/share/openstack-tripleo-heat-templates/environments/network-isolation.yaml - The -e option adds an additional environment file to the Overcloud deployment. In this case, it is an environment file that initializes network isolation configuration.
- -e ~/templates/network-environment.yaml - The -e option adds an additional environment file to the Overcloud deployment. In this case, it is the network environment file from Section 6.2.2, “Creating a Network Environment File”.
- -e ~/templates/storage-environment.yaml - The -e option adds an additional environment file to the Overcloud deployment. In this case, it is a custom environment file that initializes our storage configuration.
- --control-scale 3 - Scale the Controller nodes to three.
- --compute-scale 3 - Scale the Compute nodes to three.
- --ceph-storage-scale 3 - Scale the Ceph Storage nodes to three.
- --control-flavor control - Use a specific flavor for the Controller nodes.
- --compute-flavor compute - Use a specific flavor for the Compute nodes.
- --ceph-storage-flavor ceph-storage - Use a specific flavor for the Ceph Storage nodes.
- --ntp-server pool.ntp.org - Use an NTP server for time synchronization. This is useful for keeping the Controller node cluster in synchronization.
7.4. Monitoring the Overcloud Creation
To monitor progress of the Overcloud creation, log in to the director as the stack user and run:
$ source ~/stackrc # Initializes the stack user to use the CLI commands
$ heat stack-list --show-nested
The heat stack-list --show-nested command shows the current stage of the Overcloud creation.
7.5. Accessing the Overcloud
The director generates a credentials file, overcloudrc, in the stack user's home directory. Run the following command to use this file:
$ source ~/overcloudrc
$ source ~/stackrc
Each Overcloud node also contains a user called heat-admin. The stack user has SSH access to this user on each node. To access a node over SSH, find the IP address of the desired node:
$ nova list
Then connect to the node using the heat-admin user and the node's IP address:
$ ssh heat-admin@192.0.2.23
7.6. Completing the Overcloud Creation
Chapter 8. Performing Tasks after Overcloud Creation
8.1. Creating the Overcloud Tenant Network
Source the overcloudrc file and create an initial Tenant network in Neutron. For example:
$ source ~/overcloudrc
$ neutron net-create default
$ neutron subnet-create --name default --gateway 172.20.1.1 default 172.20.0.0/16
This creates a basic Neutron network called default. The Overcloud automatically assigns IP addresses from this network using an internal DHCP mechanism.
Confirm the created network with neutron net-list:
8.2. Creating the Overcloud External Network
Using a Native VLAN
Source the overcloudrc file and create an External network in Neutron. For example:
$ source ~/overcloudrc
$ neutron net-create nova --router:external --provider:network_type flat --provider:physical_network datacentre
$ neutron subnet-create --name nova --enable_dhcp=False --allocation-pool=start=10.1.1.51,end=10.1.1.250 --gateway=10.1.1.1 nova 10.1.1.0/24
This creates a network called nova. The Overcloud requires this specific name for the default floating IP pool. This is also important for the validation tests in Section 8.5, “Validating the Overcloud”.
The command also maps the network to the datacentre physical network. As a default, datacentre maps to the br-ex bridge. Leave this option as the default unless you have used custom neutron settings during the Overcloud creation.
Using a Non-Native VLAN
$ source ~/overcloudrc
$ neutron net-create nova --router:external --provider:network_type vlan --provider:physical_network datacentre --provider:segmentation_id 104
$ neutron subnet-create --name nova --enable_dhcp=False --allocation-pool=start=10.1.1.51,end=10.1.1.250 --gateway=10.1.1.1 nova 10.1.1.0/24
provider:segmentation_id value defines the VLAN to use. In this case, you can use 104.
Confirm the created network with neutron net-list:
8.3. Creating Additional Floating IP Networks
Floating IP networks are not limited to br-ex; you can map them to any bridge, as long as you meet the following conditions:
- NeutronExternalNetworkBridge is set to "''" in your network environment file.
- You have mapped the additional bridge during deployment. For example, to map a new bridge called br-floating to the floating physical network:
$ openstack overcloud deploy --templates -e /usr/share/openstack-tripleo-heat-templates/environments/network-isolation.yaml -e ~/templates/network-environment.yaml --neutron-bridge-mappings datacentre:br-ex,floating:br-floating
$ neutron net-create ext-net --router:external --provider:physical_network floating --provider:network_type vlan --provider:segmentation_id 105
$ neutron subnet-create --name ext-subnet --enable_dhcp=False --allocation-pool start=10.1.2.51,end=10.1.2.250 --gateway 10.1.2.1 ext-net 10.1.2.0/24
8.4. Creating the Overcloud Provider Network
$ neutron net-create --provider:physical_network datacentre --provider:network_type vlan --provider:segmentation_id 201 --shared provider_network
$ neutron subnet-create --name provider-subnet --enable_dhcp=True --allocation-pool start=10.9.101.50,end=10.9.101.100 --gateway 10.9.101.254 provider_network 10.9.101.0/24
8.5. Validating the Overcloud
$ source ~/stackrc
$ sudo ovs-vsctl add-port br-ctlplane vlan201 tag=201 -- set interface vlan201 type=internal
$ sudo ip l set dev vlan201 up; sudo ip addr add 172.16.0.201/24 dev vlan201
Make sure the heat_stack_owner role exists in your Overcloud. If the role does not exist, create it:
$ keystone role-create --name heat_stack_owner
Set up a tempest directory in the stack user's home directory and install a local version of the Tempest suite:
$ mkdir ~/tempest
$ cd ~/tempest
$ /usr/share/openstack-tempest-liberty/tools/configure-tempest-directory
The Overcloud creation process created a file named ~/tempest-deployer-input.conf. This file provides a set of Tempest configuration options relevant to your Overcloud. Run the following command to use this file to configure Tempest:
$ tools/config_tempest.py --deployer-input ~/tempest-deployer-input.conf --debug --create identity.uri $OS_AUTH_URL identity.admin_password $OS_PASSWORD --network-id d474fe1f-222d-4e32-9242-cd1fefe9c14b
The $OS_AUTH_URL and $OS_PASSWORD environment variables use values set from the overcloudrc file sourced previously. The --network-id is the UUID of the external network created in Section 8.2, “Creating the Overcloud External Network”.
Important
http_proxy environment variable to use a proxy for command line operations.
$ tools/run-tests.sh
Note
The full Tempest test suite can take hours to complete. Alternatively, run part of the tests using the '.*smoke' option.
$ tools/run-tests.sh '.*smoke'
Each test runs against the Overcloud, and any failures are recorded in the tempest.log file generated in the same directory. For example, the output might show the following failed test:
{2} tempest.api.compute.servers.test_servers.ServersTestJSON.test_create_specify_keypair [18.305114s] ... FAILED
For more details about this failure, search for ServersTestJSON:test_create_specify_keypair in the log:
Note
-A 4 option shows the next four lines, which are usually the request header and body and response header and body.
After completing the validation, remove the temporary VLAN interface from the director:
$ source ~/stackrc
$ sudo ovs-vsctl del-port vlan201
8.6. Fencing the Controller Nodes
Note
Log in to each node as the heat-admin user from the stack user on the director. The Overcloud creation automatically copies the stack user's SSH key to each node's heat-admin user.
Check the status of the cluster with pcs status:
Check that stonith is disabled with pcs property show:
| Device | Type |
|---|---|
| fence_ipmilan | The Intelligent Platform Management Interface (IPMI) |
| fence_idrac, fence_drac5 | Dell Remote Access Controller (DRAC) |
| fence_ilo | Integrated Lights-Out (iLO) |
| fence_ucs | Cisco UCS - For more information, see Configuring Cisco Unified Computing System (UCS) Fencing on an OpenStack High Availability Environment |
| fence_xvm, fence_virt | Libvirt and SSH |
The rest of this section uses the IPMI agent (fence_ipmilan) as an example. View the full list of fence_ipmilan options with the following command:
$ sudo pcs stonith describe fence_ipmilan
Add a stonith device to Pacemaker for each node. Use the following commands for the cluster:
Note
$ sudo pcs stonith create my-ipmilan-for-controller-0 fence_ipmilan pcmk_host_list=overcloud-controller-0 ipaddr=192.0.2.205 login=admin passwd=p@55w0rd! lanplus=1 cipher=1 op monitor interval=60s
$ sudo pcs constraint location my-ipmilan-for-controller-0 avoids overcloud-controller-0
$ sudo pcs stonith create my-ipmilan-for-controller-1 fence_ipmilan pcmk_host_list=overcloud-controller-1 ipaddr=192.0.2.206 login=admin passwd=p@55w0rd! lanplus=1 cipher=1 op monitor interval=60s
$ sudo pcs constraint location my-ipmilan-for-controller-1 avoids overcloud-controller-1
$ sudo pcs stonith create my-ipmilan-for-controller-2 fence_ipmilan pcmk_host_list=overcloud-controller-2 ipaddr=192.0.2.207 login=admin passwd=p@55w0rd! lanplus=1 cipher=1 op monitor interval=60s
$ sudo pcs constraint location my-ipmilan-for-controller-2 avoids overcloud-controller-2
$ sudo pcs stonith show
$ sudo pcs stonith show [stonith-name]
Finally, set the stonith property to true:
$ sudo pcs property set stonith-enabled=true
$ sudo pcs property show
8.7. Modifying the Overcloud Environment
To modify the Overcloud, rerun the openstack overcloud deploy command from your initial Overcloud creation. For example, if you created an Overcloud using Chapter 7, Creating the Overcloud, you would rerun the following command:
$ openstack overcloud deploy --templates -e /usr/share/openstack-tripleo-heat-templates/environments/network-isolation.yaml -e ~/templates/network-environment.yaml -e ~/templates/storage-environment.yaml --control-scale 3 --compute-scale 3 --ceph-storage-scale 3 --control-flavor control --compute-flavor compute --ceph-storage-flavor ceph-storage --ntp-server pool.ntp.org
The director checks the overcloud stack in heat, and then updates each item in the stack with the environment files and heat templates. It does not recreate the Overcloud, but rather changes the existing Overcloud.
To include a new environment file, add it to the openstack overcloud deploy command with a -e option. For example:
$ openstack overcloud deploy --templates -e /usr/share/openstack-tripleo-heat-templates/environments/network-isolation.yaml -e ~/templates/network-environment.yaml -e ~/templates/storage-environment.yaml -e ~/templates/new-environment.yaml --control-scale 3 --compute-scale 3 --ceph-storage-scale 3 --control-flavor control --compute-flavor compute --ceph-storage-flavor ceph-storage --ntp-server pool.ntp.org
Important
8.8. Importing Virtual Machines into the Overcloud
$ nova image-create instance_name image_name
$ glance image-download image_name --file exported_vm.qcow2
$ glance image-create --name imported_image --file exported_vm.qcow2 --disk-format qcow2 --container-format bare
$ nova boot --poll --key-name default --flavor m1.demo --image imported_image --nic net-id=net_id imported
Important
8.9. Migrating VMs from an Overcloud Compute Node
The Compute nodes share an SSH key that gives the nova user access to other Compute nodes during the migration process. The director creates this key automatically.
Important
This feature requires the openstack-tripleo-heat-templates-0.8.14-29.el7ost package or later versions.
Procedure 8.1. Migrating Instances off the Compute Node
- From the director, source the overcloudrc file and obtain a list of the current nova services:
$ source ~/stack/overcloudrc
$ nova service-list
- Disable the nova-compute service on the node you intend to migrate:
$ nova service-disable [hostname] nova-compute
This prevents new instances from being scheduled on it.
- Begin the process of migrating instances off the node:
$ nova host-servers-migrate [hostname]
- The current status of the migration process can be retrieved with the command:
$ nova migration-list
- When migration of each instance completes, its state in nova will change to VERIFY_RESIZE. This gives you an opportunity to confirm that the migration completed successfully, or to roll it back. To confirm the migration, use the command:
$ nova resize-confirm [server-name]
When all migrations are complete, enable the nova-compute service again:
$ nova service-enable [hostname] nova-compute
8.10. Protecting the Overcloud from Removal
To avoid accidental removal of the Overcloud with the heat stack-delete overcloud command, Heat contains a set of policies to restrict certain actions. Edit /etc/heat/policy.json and find the following parameter:
"stacks:delete": "rule:deny_stack_user"
"stacks:delete": "rule:deny_stack_user"
"stacks:delete": "rule:deny_everybody"
"stacks:delete": "rule:deny_everybody"
Save the file. This prevents removal of the Overcloud with the heat client. To allow removal of the Overcloud, revert the policy to the original value.
8.11. Removing the Overcloud
Procedure 8.2. Removing the Overcloud
- Delete any existing Overcloud:
$ heat stack-delete overcloud
- Confirm the deletion of the Overcloud:
$ heat stack-list
Deletion takes a few minutes.
Chapter 9. Scaling and Replacing Nodes
Warning
| Node Type | Scale Up? | Scale Down? | Notes |
|---|---|---|---|
| Controller | N | N | |
| Compute | Y | Y | |
| Ceph Storage Nodes | Y | N | You must have at least 1 Ceph Storage node from the initial Overcloud creation. |
| Block Storage Nodes | N | N | |
| Object Storage Nodes | Y | Y | Requires manual ring management, which is described in Section 9.6, “Replacing Object Storage Nodes”. |
Important
9.1. Adding Compute or Ceph Storage Nodes
Create a new JSON file (newnodes.json) containing the details of the new nodes to register:
$ openstack baremetal import --json newnodes.json
$ ironic node-list
$ ironic node-set-maintenance [NODE UUID] true
$ openstack baremetal introspection start [NODE UUID]
$ ironic node-set-maintenance [NODE UUID] false
$ ironic node-update [NODE UUID] add properties/capabilities='profile:compute,boot_option:local'
Obtain the UUIDs of the bm-deploy-kernel and bm-deploy-ramdisk images:
Set these UUIDs for the new node's deploy_kernel and deploy_ramdisk settings:
$ ironic node-update [NODE UUID] add driver_info/deploy_kernel='09b40e3d-0382-4925-a356-3a4b4f36b514'
$ ironic node-update [NODE UUID] add driver_info/deploy_ramdisk='765a46af-4417-4592-91e5-a300ead3faf6'
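If you need to look up these image UUIDs first, a command along these lines lists them. This is only a sketch and assumes the default deployment image names (bm-deploy-kernel and bm-deploy-ramdisk):
$ glance image-list | grep bm-deploy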
To scale the Overcloud, run openstack overcloud deploy again with the desired number of nodes for a role. For example, to scale to 5 Compute nodes:
$ openstack overcloud deploy --templates --compute-scale 5 [OTHER_OPTIONS]
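As a sketch of what [OTHER_OPTIONS] might look like in practice, a scale-out command that reuses the environment files from the original deployment could resemble the following. The file names here are hypothetical placeholders for whichever environment files you passed during the initial deployment:
$ openstack overcloud deploy --templates --compute-scale 5 -e ~/templates/network-environment.yaml -e ~/templates/storage-environment.yaml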
Important
9.2. Removing Compute Nodes
Important
Before removing a Compute node, disable its nova-compute service on the Overcloud so that the node stops scheduling new instances:
$ source ~/stack/overcloudrc
$ nova service-list
$ nova service-disable [hostname] nova-compute
$ source ~/stack/stackrc
Removing nodes requires an update to the overcloud stack in the director using the local template files. First identify the UUID of the Overcloud stack:
$ heat stack-list
Identify the UUIDs of the nodes to delete:
$ nova list
Then run the following command to delete the nodes from the stack:
$ openstack overcloud node delete --stack [STACK_UUID] --templates -e [ENVIRONMENT_FILE] [NODE1_UUID] [NODE2_UUID] [NODE3_UUID]
Important
If you passed any extra environment files when you created the Overcloud, pass them again here using the -e or --environment-file option to avoid making undesired manual changes to the Overcloud.
Important
Make sure the openstack overcloud node delete command runs to completion before you continue. Use the openstack stack list command and check that the overcloud stack has reached an UPDATE_COMPLETE status.
Finally, remove the node's Compute service from the Overcloud:
$ source ~/stack/overcloudrc
$ nova service-list
$ nova service-delete [service-id]
$ source ~/stack/stackrc
And remove the node's Open vSwitch agent:
$ source ~/stack/overcloudrc
$ neutron agent-list
$ neutron agent-delete [openvswitch-agent-id]
$ source ~/stack/stackrc
9.3. Replacing Compute Nodes
- Migrate the workload off the existing Compute node and shut down the node. See Section 8.9, “Migrating VMs from an Overcloud Compute Node” for this process.
- Remove the Compute node from the Overcloud. See Section 9.2, “Removing Compute Nodes” for this process.
- Scale out the Overcloud with a new Compute node. See Chapter 9, Scaling and Replacing Nodes for this process.
9.4. Replacing Controller Nodes
Replacing a Controller node uses the openstack overcloud deploy command to update the Overcloud with a request to replace a controller node. Note that this process is not completely automatic; during the Overcloud stack update process, the openstack overcloud deploy command will at some point report a failure and halt the Overcloud stack update. At this point, the process requires some manual intervention before the openstack overcloud deploy process can continue.
Important
9.4.1. Preliminary Checks
- Check the current status of the overcloud stack on the Undercloud:
$ source stackrc
$ heat stack-list --show-nested
The overcloud stack and its subsequent child stacks should have either a CREATE_COMPLETE or UPDATE_COMPLETE status.
- Perform a backup of the Undercloud databases.
- Check that your Undercloud contains 10 GB of free storage to accommodate image caching and conversion when provisioning the new node (see the example check after this list).
- Check the status of Pacemaker on the running Controller nodes. For example, if 192.168.0.47 is the IP address of a running Controller node, use the following command to get the Pacemaker status:
$ ssh heat-admin@192.168.0.47 'sudo pcs status'
The output should show all services running on the existing nodes and stopped on the failed node.
- Check the following parameters on each node of the Overcloud's MariaDB cluster:
wsrep_local_state_comment: Synced
wsrep_cluster_size: 2
Use the following command to check these parameters on each running Controller node (respectively using 192.168.0.47 and 192.168.0.46 for IP addresses):
$ for i in 192.168.0.47 192.168.0.46 ; do echo "*** $i ***" ; ssh heat-admin@$i "sudo mysql --exec=\"SHOW STATUS LIKE 'wsrep_local_state_comment'\" ; sudo mysql --exec=\"SHOW STATUS LIKE 'wsrep_cluster_size'\""; done
- Check the RabbitMQ status. For example, if 192.168.0.47 is the IP address of a running Controller node, use the following command to get the status:
ssh heat-admin@192.168.0.47 "sudo rabbitmqctl cluster_status"
$ ssh heat-admin@192.168.0.47 "sudo rabbitmqctl cluster_status"Copy to Clipboard Copied! Toggle word wrap Toggle overflow Therunning_nodeskey should only show the two available nodes and not the failed node. - Disable fencing, if enabled. For example, if 192.168.0.47 is the IP address of a running Controller node, use the following command to disable fencing:
ssh heat-admin@192.168.0.47 "sudo pcs property set stonith-enabled=false"
$ ssh heat-admin@192.168.0.47 "sudo pcs property set stonith-enabled=false"Copy to Clipboard Copied! Toggle word wrap Toggle overflow Check the fencing status with the following command:ssh heat-admin@192.168.0.47 "sudo pcs property show stonith-enabled"
$ ssh heat-admin@192.168.0.47 "sudo pcs property show stonith-enabled"Copy to Clipboard Copied! Toggle word wrap Toggle overflow - Check the
nova-computeservice on the director node:sudo systemctl status openstack-nova-compute nova hypervisor-list
$ sudo systemctl status openstack-nova-compute $ nova hypervisor-listCopy to Clipboard Copied! Toggle word wrap Toggle overflow The output should show all non-maintenance mode nodes asup. - Make sure all Undercloud services are running:
sudo systemctl list-units httpd\* mariadb\* neutron\* openstack\* openvswitch\* rabbitmq\*
$ sudo systemctl list-units httpd\* mariadb\* neutron\* openstack\* openvswitch\* rabbitmq\*Copy to Clipboard Copied! Toggle word wrap Toggle overflow
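As a quick way to perform the free space check mentioned in the list above, a command such as the following shows the available space on the Undercloud's file systems. This is only a sketch; inspect whichever mount points hold /var and /home in your partition layout:
$ df -h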
9.4.2. Node Replacement
Identify the index of the node to remove. The node index is the suffix on the instance name in the nova list output.
In this example, the aim is to remove the overcloud-controller-1 node and replace it with overcloud-controller-3. First, set the node into maintenance mode so the director does not reprovision the failed node. Correlate the instance ID from nova list with the node ID from ironic node-list:
[stack@director ~]$ ironic node-set-maintenance da3a8d19-8a59-4e9d-923a-6a336fe10284 true
Tag the new node with the control profile:
[stack@director ~]$ ironic node-update 75b25e9a-948d-424a-9b3b-f0ef70a6eacf add properties/capabilities='profile:control,boot_option:local'
Create a new environment file (~/templates/remove-controller.yaml) that defines the node index to remove:
parameters:
  ControllerRemovalPolicies:
    [{'resource_list': ['1']}]
Important
If replacing the node with index 0, update the index references in your local copy of the overcloud.yaml file before starting the replacement:
$ sed -i "s/resource\.0/resource.1/g" ~/templates/my-overcloud/overcloud.yaml
Note
parameter_defaults:
  ExtraConfig:
    pacemaker::corosync::settle_tries: 5
After making these changes, run the deployment command again, this time including the remove-controller.yaml environment file:
[stack@director ~]$ openstack overcloud deploy --templates --control-scale 3 -e ~/templates/remove-controller.yaml [OTHER OPTIONS]
Important
The -e ~/templates/remove-controller.yaml option is only required once in this instance. This is because the node removal process happens only once and should not run on subsequent deployments.
You can check the status of the Overcloud stack while the director removes the old node and creates the new one:
[stack@director ~]$ heat stack-list --show-nested
Important
The stack update might cause the RHELUnregistrationDeployment resource to hang due to the removed Controller node being unavailable. If this occurs, send a signal to the resource using the following commands:
# heat resource-list -n 5 -f name=RHELUnregistrationDeployment overcloud
# heat resource-signal [STACK_NAME] RHELUnregistrationDeployment
Replace [STACK_NAME] with the removed Controller's substack. For example, overcloud-Controller-yfbet6xh6oov-1-f5v5pmcfvv2k-NodeExtraConfig-zuiny44lei3w for Controller node 1.
When the stack update reaches the ControllerNodesPostDeployment stage, the Overcloud stack will time out and halt with an UPDATE_FAILED error at ControllerLoadBalancerDeployment_Step1. This is expected behavior and manual intervention is required as per the next section.
9.4.3. Manual Intervention
During the ControllerNodesPostDeployment stage, wait until the Overcloud stack times out and halts with an UPDATE_FAILED error at ControllerLoadBalancerDeployment_Step1. This is because some Puppet modules do not support node replacement. This point in the process requires some manual intervention. Follow these configuration steps:
- Get a list of IP addresses for the Controller nodes (for example, from the nova list output).
- Check the nodeid value of the removed node in the /etc/corosync/corosync.conf file on an existing node. For example, the existing node is overcloud-controller-0 at 192.168.0.47:
[stack@director ~]$ ssh heat-admin@192.168.0.47 "sudo cat /etc/corosync/corosync.conf"
This displays a nodelist that contains the ID for the removed node (overcloud-controller-1). Note the nodeid value of the removed node for later. In this example, it is 2.
- Delete the failed node from the Corosync configuration on each node and restart Corosync. For this example, log into overcloud-controller-0 and overcloud-controller-2 and run the following commands:
[stack@director] ssh heat-admin@192.168.201.47 "sudo pcs cluster localnode remove overcloud-controller-1"
[stack@director] ssh heat-admin@192.168.201.47 "sudo pcs cluster reload corosync"
[stack@director] ssh heat-admin@192.168.201.46 "sudo pcs cluster localnode remove overcloud-controller-1"
[stack@director] ssh heat-admin@192.168.201.46 "sudo pcs cluster reload corosync"
- Log into one of the remaining nodes and delete the node from the cluster with the crm_node command:
[stack@director] ssh heat-admin@192.168.201.47
[heat-admin@overcloud-controller-0 ~]$ sudo crm_node -R overcloud-controller-1 --force
Stay logged into this node.
- Delete the failed node from the RabbitMQ cluster:
[heat-admin@overcloud-controller-0 ~]$ sudo rabbitmqctl forget_cluster_node rabbit@overcloud-controller-1
- Delete the failed node from MongoDB. First, find the IP address for the node's Internal API connection:
[heat-admin@overcloud-controller-0 ~]$ sudo netstat -tulnp | grep 27017
tcp        0      0 192.168.0.47:27017    0.0.0.0:*    LISTEN     13415/mongod
Check that the node is the primary replica set. If the current node is not the primary, use the IP address of the node indicated in the primary key. Connect to MongoDB on the primary node and check the status of the MongoDB cluster:
tripleo:PRIMARY> rs.status()
Identify the node using the _id key and remove the failed node using the name key. In this case, we remove Node 1, which has 192.168.0.45:27017 for name:
tripleo:PRIMARY> rs.remove('192.168.0.45:27017')
Important
You must run the command against the PRIMARY replica set. If you see the following message:
"replSetReconfig command must be sent to the current replica set primary."
Log back into MongoDB on the node designated as PRIMARY.
Note
The following output is normal when removing the failed node's replica set:
2016-05-07T03:57:19.541+0000 DBClientCursor::init call() failed
2016-05-07T03:57:19.543+0000 Error: error doing query: failed at src/mongo/shell/query.js:81
2016-05-07T03:57:19.545+0000 trying reconnect to 192.168.0.47:27017 (192.168.0.47) failed
2016-05-07T03:57:19.547+0000 reconnect 192.168.0.47:27017 (192.168.0.47) ok
Exit MongoDB:
tripleo:PRIMARY> exit
- Update the list of nodes in the Galera cluster:
[heat-admin@overcloud-controller-0 ~]$ sudo pcs resource update galera wsrep_cluster_address=gcomm://overcloud-controller-0,overcloud-controller-3,overcloud-controller-2
- Configure the Galera cluster check on the new node. Copy /etc/sysconfig/clustercheck from the existing node to the same location on the new node.
- Configure the root user's Galera access on the new node. Copy /root/.my.cnf from the existing node to the same location on the new node.
- Add the new node to the cluster:
[heat-admin@overcloud-controller-0 ~]$ sudo pcs cluster node add overcloud-controller-3
- Check the /etc/corosync/corosync.conf file on each node. If the nodeid of the new node is the same as the removed node, update the value to a new nodeid value. For example, if the entry for the new node (overcloud-controller-3) uses the same nodeid as the removed node, update it to an unused node ID value:
node {
  ring0_addr: overcloud-controller-3
  nodeid: 4
}
Update this nodeid value on each Controller node's /etc/corosync/corosync.conf file, including the new node.
- Restart the Corosync service on the existing nodes only. For example, on overcloud-controller-0:
[heat-admin@overcloud-controller-0 ~]$ sudo pcs cluster reload corosync
And on overcloud-controller-2:
[heat-admin@overcloud-controller-2 ~]$ sudo pcs cluster reload corosync
Do not run this command on the new node.
- Start the new Controller node:
[heat-admin@overcloud-controller-0 ~]$ sudo pcs cluster start overcloud-controller-3
- Enable the keystone service on the new node. Copy the /etc/keystone directory from a remaining node to the director host:
[heat-admin@overcloud-controller-0 ~]$ sudo -i
[root@overcloud-controller-0 ~]$ scp -r /etc/keystone stack@192.168.0.1:~/.
Log in to the new Controller node. Remove the /etc/keystone directory from the new Controller node and copy the keystone files from the director host. Then edit /etc/keystone/keystone.conf and set the admin_bind_host and public_bind_host parameters to the new Controller node's IP addresses. To find these IP addresses, use the ip addr command and look for the IP address within the following networks:
admin_bind_host - Provisioning network
public_bind_host - Internal API network
Note
These networks might differ if you deployed the Overcloud using a custom ServiceNetMap parameter. For example, if the Provisioning network uses the 192.168.0.0/24 subnet and the Internal API uses the 172.17.0.0/24 subnet, use the following commands to find the node's IP addresses on those networks:
[root@overcloud-controller-3 ~]$ ip addr | grep "192\.168\.0\..*/24"
[root@overcloud-controller-3 ~]$ ip addr | grep "172\.17\.0\..*/24"
- Enable and restart some services through Pacemaker. The cluster is currently in maintenance mode and you need to temporarily disable it to enable the services. For example:
[heat-admin@overcloud-controller-3 ~]$ sudo pcs property set maintenance-mode=false --wait
- Wait until the Galera service starts on all nodes:
[heat-admin@overcloud-controller-3 ~]$ sudo pcs status | grep galera -A1
Master/Slave Set: galera-master [galera]
    Masters: [ overcloud-controller-0 overcloud-controller-2 overcloud-controller-3 ]
If need be, perform a cleanup on the new node:
[heat-admin@overcloud-controller-3 ~]$ sudo pcs resource cleanup galera --node overcloud-controller-3
- Wait until the Keystone service starts on all nodes:
[heat-admin@overcloud-controller-3 ~]$ sudo pcs status | grep keystone -A1
Clone Set: openstack-keystone-clone [openstack-keystone]
    Started: [ overcloud-controller-0 overcloud-controller-2 overcloud-controller-3 ]
If need be, perform a cleanup on the new node:
[heat-admin@overcloud-controller-3 ~]$ sudo pcs resource cleanup openstack-keystone-clone --node overcloud-controller-3
- Switch the cluster back into maintenance mode:
[heat-admin@overcloud-controller-3 ~]$ sudo pcs property set maintenance-mode=true --wait
With the manual configuration complete, run the Overcloud deployment command again to continue the stack update:
[stack@director ~]$ openstack overcloud deploy --templates --control-scale 3 [OTHER OPTIONS]
Important
The remove-controller.yaml file is no longer needed and should not be included in this or any subsequent deployment command.
9.4.4. Finalizing Overcloud Services
After the stack update completes, some services might require a cleanup. Log into one of the remaining Controller nodes and clean up any stopped Pacemaker resources:
[heat-admin@overcloud-controller-0 ~]$ for i in `sudo pcs status|grep -B2 Stop |grep -v "Stop\|Start"|awk -F"[" '/\[/ {print substr($NF,0,length($NF)-1)}'`; do echo $i; sudo pcs resource cleanup $i; done
Perform a final status check to make sure the services are running correctly:
[heat-admin@overcloud-controller-0 ~]$ sudo pcs status
Note
If any services remain failed, use the pcs resource cleanup command to restart them after resolving the underlying issue.
Re-enable fencing if you disabled it during the preliminary checks:
[heat-admin@overcloud-controller-0 ~]$ sudo pcs property set stonith-enabled=true
Then exit the Controller node:
[heat-admin@overcloud-controller-0 ~]$ exit
9.4.5. Finalizing Overcloud Network Agents
Source the overcloudrc file so that you can interact with the Overcloud. Check your routers to make sure the L3 agents are properly hosting the routers in your Overcloud environment. In this example, we use a router with the name r1:
[stack@director ~]$ source ~/overcloudrc
[stack@director ~]$ neutron l3-agent-list-hosting-router r1
This list might still show the removed node instead of the new node. To replace it, list all L3 agents and identify the UUIDs of the agent on the new node and the agent on the removed node:
[stack@director ~]$ neutron agent-list | grep "neutron-l3-agent"
Add the router to the agent on the new node and remove the router from the agent on the removed node. For example:
[stack@director ~]$ neutron l3-agent-router-add fd6b3d6e-7d8c-4e1a-831a-4ec1c9ebb965 r1
[stack@director ~]$ neutron l3-agent-router-remove b40020af-c6dd-4f7a-b426-eba7bac9dbc2 r1
Check the router again to confirm that only active agents host it:
[stack@director ~]$ neutron l3-agent-list-hosting-router r1
Delete any remaining Neutron agents that still point to the removed node. List them and delete them by UUID:
[stack@director ~]$ neutron agent-list -F id -F host | grep overcloud-controller-1
| ddae8e46-3e8e-4a1b-a8b3-c87f13c294eb | overcloud-controller-1.localdomain |
[stack@director ~]$ neutron agent-delete ddae8e46-3e8e-4a1b-a8b3-c87f13c294eb
9.4.6. Finalizing Compute Services
Source the overcloudrc file so that you can interact with the Overcloud. Check the compute services for the removed node:
[stack@director ~]$ source ~/overcloudrc
[stack@director ~]$ nova service-list | grep "overcloud-controller-1.localdomain"
Remove each of these services by ID. For example, if the nova-scheduler service for overcloud-controller-1.localdomain has an ID of 5, run the following command:
[stack@director ~]$ nova service-delete 5
Also check the openstack-nova-consoleauth service on the new node:
[stack@director ~]$ nova service-list | grep consoleauth
If the service is not running, log into a Controller node and restart it:
[stack@director] ssh heat-admin@192.168.201.47
[heat-admin@overcloud-controller-0 ~]$ pcs resource restart openstack-nova-consoleauth
9.4.7. Conclusion
Important
9.5. Replacing Ceph Storage Nodes
9.6. Replacing Object Storage Nodes
- Update the Overcloud with the new Object Storage nodes and prevent Director from creating the ring files.
- Manually add/remove the nodes to the cluster using swift-ring-builder.
Create an environment file (for example, ~/templates/swift-ring-prevent.yaml) with the following content:
parameter_defaults:
  SwiftRingBuild: false
  RingBuild: false
  ObjectStorageCount: 3
The SwiftRingBuild and RingBuild parameters define whether the Overcloud automatically builds the ring files for Object Storage and Controller nodes respectively. The ObjectStorageCount defines how many Object Storage nodes are in our environment. In this situation, we scale from 2 to 3 nodes.
Include the swift-ring-prevent.yaml file with the rest of your Overcloud's environment files as part of the openstack overcloud deploy command:
$ openstack overcloud deploy --templates [ENVIRONMENT_FILES] -e swift-ring-prevent.yaml
Note
Note
On the new Object Storage node, create and set ownership for a directory for each storage device. For example, for a device named d1:
$ sudo mkdir -p /srv/node/d1
$ sudo chown -R swift:swift /srv/node/d1
Note
Log into a Controller node as the heat-admin user and then change to the superuser. For example, given a Controller node with an IP address of 192.168.201.24:
$ ssh heat-admin@192.168.201.24
$ sudo -i
Copy the /etc/swift/*.builder files from the Controller node to the new Object Storage node's /etc/swift/ directory. If necessary, transfer the files to the director host:
[root@overcloud-controller-0 ~]# scp /etc/swift/*.builder stack@192.1.2.1:~/.
Then transfer the files from the director host to the new Object Storage node:
[stack@director ~]$ scp ~/*.builder heat-admin@192.1.2.24:~/.
Log into the new Object Storage node as the heat-admin user and then change to the superuser. For example, given an Object Storage node with an IP address of 192.168.201.29:
$ ssh heat-admin@192.168.201.29
$ sudo -i
Copy the files to the /etc/swift directory:
# cp /home/heat-admin/*.builder /etc/swift/.
Add the new node to the account, container, and object rings:
# swift-ring-builder /etc/swift/account.builder add zX-IP:6002/d1 weight
# swift-ring-builder /etc/swift/container.builder add zX-IP:6001/d1 weight
# swift-ring-builder /etc/swift/object.builder add zX-IP:6000/d1 weight
Replace the following values in all three commands:
- zX: Replace X with the corresponding integer of a specified zone (for example, z1 for Zone 1).
- IP: The IP that the account, container, and object services use to listen on. This should match the IP address of each storage node; specifically, the value of bind_ip in the DEFAULT sections of /etc/swift/object-server.conf, /etc/swift/account-server.conf, and /etc/swift/container-server.conf.
- weight: Describes the relative weight of the device in comparison to other devices. This is usually 100.
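As a worked example, assuming the new node listens on 192.168.201.29, belongs to zone 1, uses device d1, and takes the usual weight of 100 (all of these values are illustrative and will differ in your environment), the commands would look like this:
# swift-ring-builder /etc/swift/account.builder add z1-192.168.201.29:6002/d1 100
# swift-ring-builder /etc/swift/container.builder add z1-192.168.201.29:6001/d1 100
# swift-ring-builder /etc/swift/object.builder add z1-192.168.201.29:6000/d1 100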
Note
To check the existing values for the current nodes, run swift-ring-builder on the ring files alone:
# swift-ring-builder /etc/swift/account.builder
Next, remove the node you aim to replace from the account, container, and object rings, replacing IP with the node's IP address:
# swift-ring-builder /etc/swift/account.builder remove IP
# swift-ring-builder /etc/swift/container.builder remove IP
# swift-ring-builder /etc/swift/object.builder remove IP
Rebalance the partitions across all devices in each ring:
# swift-ring-builder /etc/swift/account.builder rebalance
# swift-ring-builder /etc/swift/container.builder rebalance
# swift-ring-builder /etc/swift/object.builder rebalance
Change the ownership of all /etc/swift/ contents to the root user and swift group:
# chown -R root:swift /etc/swift
Restart the openstack-swift-proxy service:
# systemctl restart openstack-swift-proxy.service
At this point, the ring (*.ring.gz) and builder (*.builder) files are up to date on the new node. Copy them to /etc/swift/ on the Controller nodes and the existing Object Storage nodes (except for the node to remove). If necessary, transfer the files to the director host first:
[root@overcloud-objectstorage-2 swift]# scp *.builder stack@192.1.2.1:~/
[root@overcloud-objectstorage-2 swift]# scp *.ring.gz stack@192.1.2.1:~/
Then copy the files to /etc/swift/ on each node.
On each node, change the ownership of all /etc/swift/ contents to the root user and swift group:
# chown -R root:swift /etc/swift
Next, edit the environment file and reduce the ObjectStorageCount to omit the old node. In this case, we reduce from 3 to 2:
parameter_defaults:
  SwiftRingBuild: false
  RingBuild: false
  ObjectStorageCount: 2
Create a new environment file (remove-object-node.yaml) to identify and remove the old Object Storage node. In this case, we remove overcloud-objectstorage-1:
parameter_defaults:
  ObjectStorageRemovalPolicies:
    [{'resource_list': ['1']}]
Include both environment files with the deployment command:
$ openstack overcloud deploy --templates -e swift-ring-prevent.yaml -e remove-object-node.yaml ...
Chapter 10. Rebooting the Overcloud
- If rebooting all nodes in one role, it is advisable to reboot each node individually. This helps retain services for that role during the reboot.
- If rebooting all nodes in your OpenStack Platform environment, use the following list to guide the reboot order:
Recommended Node Reboot Order
- Reboot the director
- Reboot Controller nodes
- Reboot Ceph Storage nodes
- Reboot Compute nodes
- Reboot Object Storage nodes
10.1. Rebooting the Director
- Reboot the node:
$ sudo reboot
- Wait until the node boots.
- Check that all services on the director are running:
$ sudo systemctl list-units "openstack*" "neutron*" "openvswitch*"
- Verify that the director can still manage the environment and that the Overcloud and its nodes are present:
$ source ~/stackrc
$ nova list
$ ironic node-list
$ heat stack-list
10.2. Rebooting Controller Nodes
- Select a node to reboot. Log into it and reboot it:
$ sudo reboot
The remaining Controller Nodes in the cluster retain the high availability services during the reboot.
- Wait until the node boots.
- Log into the node and check the cluster status:
$ sudo pcs status
The node rejoins the cluster.
Note
If any services fail after the reboot, run sudo pcs resource cleanup, which cleans the errors and sets the state of each resource to Started. If any errors persist, contact Red Hat and request guidance and assistance.
- Log out of the node, select the next Controller Node to reboot, and repeat this procedure until you have rebooted all Controller Nodes.
10.3. Rebooting Ceph Storage Nodes
- Select the first Ceph Storage node to reboot and log into it.
- Disable Ceph Storage cluster rebalancing temporarily:
$ sudo ceph osd set noout
$ sudo ceph osd set norebalance
- Reboot the node:
$ sudo reboot
- Wait until the node boots.
- Log into the node and check the cluster status:
$ sudo ceph -s
Check that the pgmap reports all pgs as normal (active+clean).
- Log out of the node, reboot the next node, and check its status. Repeat this process until you have rebooted all Ceph Storage nodes.
- When complete, enable cluster rebalancing again:
$ sudo ceph osd unset noout
$ sudo ceph osd unset norebalance
- Perform a final status check to make sure the cluster reports HEALTH_OK:
$ sudo ceph status
10.4. Rebooting Compute Nodes
- Select a Compute node to reboot
- Migrate its instances to another Compute node
- Reboot the empty Compute node
List all Compute nodes and their UUIDs:
$ source ~/stackrc
$ nova list | grep "compute"
- From the undercloud, select a Compute Node to reboot and disable it:
$ source ~/overcloudrc
$ nova service-list
$ nova service-disable [hostname] nova-compute
- List all instances on the Compute node:
$ nova list --host [hostname]
- Select a second Compute Node to act as the target host for migrating instances. This host needs enough resources to host the migrated instances. From the undercloud, migrate each instance from the disabled host to the target host:
$ nova live-migration [instance-name] [target-hostname]
$ nova migration-list
$ nova resize-confirm [instance-name]
- Repeat this step until you have migrated all instances from the Compute Node.
Important
- Log into the Compute Node and reboot it:
$ sudo reboot
- Wait until the node boots.
- Enable the Compute Node again:
$ source ~/overcloudrc
$ nova service-enable [hostname] nova-compute
- Select the next node to reboot.
10.5. Rebooting Object Storage Nodes
- Select an Object Storage node to reboot. Log into it and reboot it:
$ sudo reboot
- Wait until the node boots.
- Log into the node and check the status:
$ sudo systemctl list-units "openstack-swift*"
- Log out of the node and repeat this process on the next Object Storage node.
Chapter 11. Troubleshooting Director Issues
- The /var/log directory contains logs for many common OpenStack Platform components as well as logs for standard Red Hat Enterprise Linux applications.
- The journald service provides logs for various components. Note that ironic uses two units: openstack-ironic-api and openstack-ironic-conductor. Likewise, ironic-inspector uses two units as well: openstack-ironic-inspector and openstack-ironic-inspector-dnsmasq. Use both units for each respective component. For example:
$ sudo journalctl -u openstack-ironic-inspector -u openstack-ironic-inspector-dnsmasq
- ironic-inspector also stores the ramdisk logs in /var/log/ironic-inspector/ramdisk/ as gz-compressed tar files. Filenames contain the date, time, and IPMI address of the node. Use these logs for diagnosing introspection issues.
11.1. Troubleshooting Node Registration
Issues with node registration usually arise from incorrect node details. In these cases, use ironic to fix problems with the registered node data. Here are a few examples:
Procedure 11.1. Fixing an Incorrect MAC Address
- Find out the assigned port UUID:
$ ironic node-port-list [NODE UUID]
- Update the MAC address:
$ ironic port-update [PORT UUID] replace address=[NEW MAC]
Procedure 11.2. Fix an Incorrect IPMI Address
- Run the following command:
$ ironic node-update [NODE UUID] replace driver_info/ipmi_address=[NEW IPMI ADDRESS]
11.2. Troubleshooting Hardware Introspection
The introspection service (ironic-inspector) times out after a default 1 hour period if the discovery ramdisk provides no response. Sometimes this might indicate a bug in the discovery ramdisk, but usually it happens due to an environment misconfiguration, particularly BIOS boot settings.
Errors with Starting Node Introspection
Normally, run introspection through the openstack baremetal introspection command, which acts as an umbrella command for ironic's services. However, if running the introspection directly with ironic-inspector, it might fail to discover nodes in the AVAILABLE state, which is meant for deployment and not for discovery. Change the node status to the MANAGEABLE state before discovery:
$ ironic node-set-provision-state [NODE UUID] manage
When discovery completes, change back to the AVAILABLE state before provisioning:
$ ironic node-set-provision-state [NODE UUID] provide
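If you need to move many nodes between states at once, a small shell loop can apply the change to every registered node. This is only a sketch; it assumes the default ironic node-list table layout, where the UUID is the second whitespace-separated field. Swap manage for provide to move the nodes back:
$ for node in $(ironic node-list | awk '{print $2}' | grep -v UUID); do ironic node-set-provision-state $node manage; done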
Introspected node is not booting in PXE
ironic-inspector adds the MAC address of the node to the Undercloud firewall's ironic-inspector chain. This allows the node to boot over PXE. To verify the correct configuration, run the following command:
$ sudo iptables -L
The output resembles the following:
Chain ironic-inspector (1 references)
target     prot opt source               destination
DROP       all  --  anywhere             anywhere             MAC xx:xx:xx:xx:xx:xx
ACCEPT     all  --  anywhere             anywhere
If the MAC address is not there, the most common cause is a corruption in the ironic-inspector cache, which is in an SQLite database. To fix it, delete the SQLite file:
$ sudo rm /var/lib/ironic-inspector/inspector.sqlite
Then recreate the cache and restart the service:
$ sudo ironic-inspector-dbsync --config-file /etc/ironic-inspector/inspector.conf upgrade
$ sudo systemctl restart openstack-ironic-inspector
Stopping the Discovery Process
Currently, ironic-inspector does not provide a direct means for stopping discovery. The recommended path is to wait until the process times out. If necessary, change the timeout setting in /etc/ironic-inspector/inspector.conf to another timeout period in minutes.
Procedure 11.3. Stopping the Discovery Process
- Change the power state of each node to off:
$ ironic node-set-power-state [NODE UUID] off
- Remove the ironic-inspector cache and restart it:
$ rm /var/lib/ironic-inspector/inspector.sqlite
$ sudo systemctl restart openstack-ironic-inspector
- Resynchronize the ironic-inspector cache:
$ sudo ironic-inspector-dbsync --config-file /etc/ironic-inspector/inspector.conf upgrade
Accessing the Introspection Ramdisk
- Provide a temporary password to the openssl passwd -1 command to generate an MD5 hash. For example:
$ openssl passwd -1 mytestpassword
$1$enjRSyIw$/fYUpJwr6abFy/d.koRgQ/
- Edit the /httpboot/inspector.ipxe file, find the line starting with kernel, and append the rootpwd parameter and the MD5 hash. For example:
kernel http://192.2.0.1:8088/agent.kernel ipa-inspection-callback-url=http://192.168.0.1:5050/v1/continue ipa-inspection-collectors=default,extra-hardware,logs systemd.journald.forward_to_console=yes BOOTIF=${mac} ipa-debug=1 ipa-inspection-benchmarks=cpu,mem,disk rootpwd="$1$enjRSyIw$/fYUpJwr6abFy/d.koRgQ/" selinux=0
Alternatively, you can append the sshkey parameter with your public SSH key.
Note
Quotation marks are required for both the rootpwd and sshkey parameters.
- Start the introspection and find the IP address from either the arp command or the DHCP logs:
$ arp
$ sudo journalctl -u openstack-ironic-inspector-dnsmasq
- SSH as a root user with the temporary password or the SSH key:
$ ssh root@192.0.2.105
Checking the Introspection Storage
The director uses OpenStack Object Storage (swift) to save the hardware data obtained during introspection. If this service is not running, introspection can fail. Check all services related to OpenStack Object Storage to ensure the service is running:
$ sudo systemctl list-units openstack-swift*
11.3. Troubleshooting Overcloud Creation
- Orchestration (heat and nova services)
- Bare Metal Provisioning (ironic service)
- Post-Deployment Configuration (Puppet)
11.3.1. Orchestration
openstack overcloud deploy.
11.3.2. Bare Metal Provisioning
Check ironic to see all registered nodes and their current status with the ironic node-list command:
- Review the Provision State and Maintenance columns in the resulting table. Check for the following:
- An empty table, or fewer nodes than you expect
- Maintenance is set to True
- Provision State is set to manageable
This usually indicates an issue with the registration or discovery processes. For example, if Maintenance sets itself to True automatically, the nodes are usually using the wrong power management credentials.
- If Provision State is available, then the problem occurred before bare metal deployment has even started.
- If Provision State is active and Power State is power on, the bare metal deployment has finished successfully. This means that the problem occurred during the post-deployment configuration step.
- If Provision State is wait call-back for a node, the bare metal provisioning process has not yet finished for this node. Wait until this status changes; otherwise, connect to the virtual console of the failed node and check the output.
- If Provision State is error or deploy failed, then bare metal provisioning has failed for this node. Check the bare metal node's details:
$ ironic node-show [NODE UUID]
Look for the last_error field, which contains the error description. If the error message is vague, you can use logs to clarify it:
$ sudo journalctl -u openstack-ironic-conductor -u openstack-ironic-api
- If you see wait timeout error and the node Power State is power on, connect to the virtual console of the failed node and check the output.
11.3.3. Post-Deployment Configuration
Procedure 11.4. Diagnosing Post-Deployment Configuration Issues
- List all the resources from the Overcloud stack to see which one failed:
$ heat resource-list overcloud
This shows a table of all resources and their states. Look for any resources with a CREATE_FAILED state.
- Show the failed resource:
$ heat resource-show overcloud [FAILED RESOURCE]
Check for any information in the resource_status_reason field that can help your diagnosis.
- Use the nova command to see the IP addresses of the Overcloud nodes:
$ nova list
Log in as the heat-admin user to one of the deployed nodes. For example, if the stack's resource list shows the error occurred on a Controller node, log in to a Controller node. The heat-admin user has sudo access.
$ ssh heat-admin@192.0.2.14
- Check the os-collect-config log for a possible reason for the failure:
$ sudo journalctl -u os-collect-config
- In some cases, nova fails to deploy the node entirely. This situation would be indicated by a failed OS::Heat::ResourceGroup for one of the Overcloud role types. Use nova to see the failure in this case:
$ nova list
$ nova show [SERVER ID]
The most common error shown references the error message No valid host was found. See Section 11.5, “Troubleshooting "No Valid Host Found" Errors” for details on troubleshooting this error. In other cases, look at the following log files for further troubleshooting:
/var/log/nova/*
/var/log/heat/*
/var/log/ironic/*
- Use the SOS toolset, which gathers information about system hardware and configuration. Use this information for diagnostic purposes and debugging. SOS is commonly used to help support technicians and developers. SOS is useful on both the Undercloud and Overcloud. Install the sos package:
$ sudo yum install sos
Generate a report:
$ sudo sosreport --all-logs
| Step | Description |
|---|---|
| ControllerLoadBalancerDeployment_Step1 | Initial load balancing software configuration, including Pacemaker, RabbitMQ, Memcached, Redis, and Galera. |
| ControllerServicesBaseDeployment_Step2 | Initial cluster configuration, including Pacemaker configuration, HAProxy, MongoDB, Galera, Ceph Monitor, and database initialization for OpenStack Platform services. |
| ControllerRingbuilderDeployment_Step3 | Initial ring build for OpenStack Object Storage (swift). |
| ControllerOvercloudServicesDeployment_Step4 | Configuration of all OpenStack Platform services (nova, neutron, cinder, sahara, ceilometer, heat, horizon, aodh, gnocchi). |
| ControllerOvercloudServicesDeployment_Step5 | Configure service start up settings in Pacemaker, including constraints to determine service start up order and service start up parameters. |
| ControllerOvercloudServicesDeployment_Step6 | Final pass of the Overcloud configuration. |
11.4. Troubleshooting IP Address Conflicts on the Provisioning Network
Procedure 11.5. Identify active IP addresses
- Install nmap:
# yum install nmap
- Use nmap to scan the IP address range for active addresses. This example scans the 192.0.2.0/24 range; replace this with the IP subnet of the Provisioning network (using CIDR bitmask notation):
# nmap -sn 192.0.2.0/24
- Review the output of the nmap scan. For example, you should see the IP address(es) of the Undercloud, and any other hosts that are present on the subnet. If any of the active IP addresses conflict with the IP ranges in undercloud.conf, you will need to either change the IP address ranges or free up the IP addresses before introspecting or deploying the Overcloud nodes.
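For reference, the output of an nmap -sn scan generally looks like the following. The addresses and host count here are illustrative only and will differ in your environment:
Nmap scan report for 192.0.2.1
Host is up (0.00057s latency).
Nmap scan report for 192.0.2.2
Host is up (0.00048s latency).
Nmap done: 256 IP addresses (2 hosts up) scanned in 2.45 seconds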
11.5. Troubleshooting "No Valid Host Found" Errors 링크 복사링크가 클립보드에 복사되었습니다!
Sometimes /var/log/nova/nova-conductor.log contains the following error:
NoValidHost: No valid host was found. There are not enough hosts available.
This means the nova scheduler could not find a bare metal node suitable for booting the new instance. In this case, check the following:
- Make sure introspection succeeds for you. Otherwise check that each node contains the required ironic node properties. For each node:
$ ironic node-show [NODE UUID]
Check that the properties JSON field has valid values for the keys cpus, cpu_arch, memory_mb, and local_gb.
- Check that the nova flavor used does not exceed the ironic node properties above for a required number of nodes:
$ nova flavor-show [FLAVOR NAME]
- Check that sufficient nodes are in the available state according to ironic node-list. Nodes in the manageable state usually indicate a failed introspection.
- Check that the nodes are not in maintenance mode. Use ironic node-list to check. A node automatically changing to maintenance mode usually means incorrect power credentials. Check them and then remove maintenance mode:
$ ironic node-set-maintenance [NODE UUID] off
- If you are using the Automated Health Check (AHC) tools to perform automatic node tagging, check that you have enough nodes corresponding to each flavor/profile. Check the capabilities key in the properties field from ironic node-show. For example, a node tagged for the Compute role should contain profile:compute.
- It takes some time for node information to propagate from ironic to nova after introspection. The director's tool usually accounts for this. However, if you performed some steps manually, there might be a short period of time when nodes are not available to nova. Use the following command to check the total resources in your system:
$ nova hypervisor-stats
11.6. Troubleshooting the Overcloud after Creation
11.6.1. Overcloud Stack Modifications
Sometimes you might need to perform modifications to the overcloud stack through the director. Examples of stack modifications include:
- Scaling Nodes
- Removing Nodes
- Replacing Nodes
If a modification to the overcloud stack fails, diagnose the problem through the Overcloud heat stack. In particular, use the following commands to help identify problematic resources:
heat stack-list --show-nested - Lists all stacks. The --show-nested option displays all child stacks and their respective parent stacks. This command helps identify the point where a stack failed.
heat resource-list overcloud - Lists all resources in the overcloud stack and their current states. This helps identify which resource is causing failures in the stack. You can trace this resource failure to its respective parameters and configuration in the heat template collection and the Puppet modules.
heat event-list overcloud - Lists all events related to the overcloud stack in chronological order. This includes the initiation, completion, and failure of all resources in the stack. This helps identify points of resource failure.
11.6.2. Controller Service Failures
The Pacemaker configuration system (pcs) command is a tool that manages a Pacemaker cluster. Run this command on a Controller node in the cluster to perform configuration and monitoring functions. Here are a few commands to help troubleshoot Overcloud services on a high availability cluster:
pcs status - Provides a status overview of the entire cluster, including enabled resources, failed resources, and online nodes.
pcs resource show - Shows a list of resources and their respective nodes.
pcs resource disable [resource] - Stops a particular resource.
pcs resource enable [resource] - Starts a particular resource.
pcs cluster standby [node] - Places a node in standby mode. The node is no longer available in the cluster. This is useful for performing maintenance on a specific node without affecting the cluster.
pcs cluster unstandby [node] - Removes a node from standby mode. The node becomes available in the cluster again.
For further troubleshooting, check the log files for the affected services in /var/log/.
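For example, to take a single Controller out of service for maintenance and then return it to the cluster, you might run commands like the following. The node name here is only an illustration:
$ sudo pcs cluster standby overcloud-controller-1
$ sudo pcs cluster unstandby overcloud-controller-1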
11.6.3. Compute Service Failures
- View the status of the service using the following systemd function:
$ sudo systemctl status openstack-nova-compute.service
Likewise, view the systemd journal for the service using the following command:
$ sudo journalctl -u openstack-nova-compute.service
- The primary log file for Compute nodes is /var/log/nova/nova-compute.log. If issues occur with Compute node communication, this log file is usually a good place to start a diagnosis.
- If performing maintenance on the Compute node, migrate the existing instances from the host to an operational Compute node, then disable the node. See Section 8.9, “Migrating VMs from an Overcloud Compute Node” for more information on node migrations.
11.6.4. Ceph Storage Service Failures
11.7. Tuning the Undercloud
- The OpenStack Authentication service (keystone) uses a token-based system for access to other OpenStack services. After a certain period, the database accumulates many unused tokens. It is recommended that you create a cron job to flush the token table in the database. For example, to flush the token table at 4 a.m. each day:
0 04 * * * /bin/keystone-manage token_flush
- Heat stores a copy of all template files in its database's raw_template table each time you run openstack overcloud deploy. The raw_template table retains all past templates and grows in size. To remove unused templates from the raw_template table, create a daily cron job that clears unused templates that have existed in the database for longer than a day:
0 04 * * * /bin/heat-manage purge_deleted -g days 1
- The openstack-heat-engine and openstack-heat-api services might consume too many resources at times. If so, set max_resources_per_stack=-1 in /etc/heat/heat.conf and restart the heat services:
$ sudo systemctl restart openstack-heat-engine openstack-heat-api
- Sometimes the director might not have enough resources to perform concurrent node provisioning. The default is 10 nodes at the same time. To reduce the number of concurrent nodes, set the max_concurrent_builds parameter in /etc/nova/nova.conf to a value less than 10 and restart the nova services:
$ sudo systemctl restart openstack-nova-api openstack-nova-scheduler
- Edit the /etc/my.cnf.d/server.cnf file. Some recommended values to tune include the following (see the example configuration after this list):
- max_connections: Number of simultaneous connections to the database. The recommended value is 4096.
- innodb_additional_mem_pool_size: The size in bytes of a memory pool the database uses to store data dictionary information and other internal data structures. The default is usually 8M and an ideal value is 20M for the Undercloud.
- innodb_buffer_pool_size: The size in bytes of the buffer pool, the memory area where the database caches table and index data. The default is usually 128M and an ideal value is 1000M for the Undercloud.
- innodb_flush_log_at_trx_commit: Controls the balance between strict ACID compliance for commit operations and the higher performance that is possible when commit-related I/O operations are rearranged and done in batches. Set to 1.
- innodb_lock_wait_timeout: The length of time in seconds a database transaction waits for a row lock before giving up. Set to 50.
- innodb_max_purge_lag: This variable controls how to delay INSERT, UPDATE, and DELETE operations when purge operations are lagging. Set to 10000.
- innodb_thread_concurrency: The limit of concurrent operating system threads. Ideally, provide at least two threads for each CPU and disk resource. For example, if using a quad-core CPU and a single disk, use 10 threads.
- Ensure that heat has enough workers to perform an Overcloud creation. Usually, this depends on how many CPUs the Undercloud has. To manually set the number of workers, edit the /etc/heat/heat.conf file, set the num_engine_workers parameter to the number of workers you need (ideally 4), and restart the heat engine:
$ sudo systemctl restart openstack-heat-engine
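The following excerpt shows what the database tuning values described in the list above might look like in /etc/my.cnf.d/server.cnf. It is only a sketch that collects the recommended values from this section; merge them into your existing [mysqld] section rather than replacing the whole file:
[mysqld]
max_connections = 4096
innodb_additional_mem_pool_size = 20M
innodb_buffer_pool_size = 1000M
innodb_flush_log_at_trx_commit = 1
innodb_lock_wait_timeout = 50
innodb_max_purge_lag = 10000
# Example value for a quad-core CPU with a single disk
innodb_thread_concurrency = 10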
11.8. Important Logs for Undercloud and Overcloud
| Information | Undercloud or Overcloud | Log Location |
|---|---|---|
| General director services | Undercloud | /var/log/nova/*, /var/log/heat/*, /var/log/ironic/* |
| Introspection | Undercloud | /var/log/ironic/*, /var/log/ironic-inspector/* |
| Provisioning | Undercloud | /var/log/ironic/* |
| Cloud-Init Log | Overcloud | /var/log/cloud-init.log |
| Overcloud Configuration (Summary of Last Puppet Run) | Overcloud | /var/lib/puppet/state/last_run_summary.yaml |
| Overcloud Configuration (Report from Last Puppet Run) | Overcloud | /var/lib/puppet/state/last_run_report.yaml |
| Overcloud Configuration (All Puppet Reports) | Overcloud | /var/lib/puppet/reports/overcloud-*/* |
| General Overcloud services | Overcloud | /var/log/ceilometer/*, /var/log/ceph/*, /var/log/cinder/*, /var/log/glance/*, /var/log/heat/*, /var/log/horizon/*, /var/log/httpd/*, /var/log/keystone/*, /var/log/libvirt/*, /var/log/neutron/*, /var/log/nova/*, /var/log/openvswitch/*, /var/log/rabbitmq/*, /var/log/redis/*, /var/log/swift/* |
| High availability log | Overcloud | /var/log/pacemaker.log |
Appendix A. SSL/TLS Certificate Configuration
Creating a Certificate Authority
Run the following commands to create a key and certificate pair to act as your certificate authority:
$ openssl genrsa -out ca.key.pem 4096
$ openssl req -key ca.key.pem -new -x509 -days 7300 -extensions v3_ca -out ca.crt.pem
The openssl req command asks for certain details about your authority. Enter these details.
This creates a certificate authority file called ca.crt.pem. Copy this file to each client that aims to access your Red Hat OpenStack Platform environment and run the following command to add it to the certificate authority trust bundle:
$ sudo cp ca.crt.pem /etc/pki/ca-trust/source/anchors/
$ sudo update-ca-trust extract
Creating an SSL/TLS Certificate
Copy the default OpenSSL configuration file to your working directory:
$ cp /etc/pki/tls/openssl.cnf .
Edit the copied openssl.cnf file and set the SSL parameters to use for the director. Examples of the types of parameters to modify include:
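The following is a sketch of the relevant excerpts of an edited openssl.cnf. The section layout follows the stock OpenSSL configuration, and the 192.168.0.1 address, domain names, and location defaults are placeholders to replace with your own values:
[req]
distinguished_name = req_distinguished_name
req_extensions = v3_req

[req_distinguished_name]
countryName_default = AU
stateOrProvinceName_default = Queensland
localityName_default = Brisbane
organizationalUnitName_default = Red Hat
commonName = Common Name
commonName_default = 192.168.0.1
commonName_max = 64

[v3_req]
# Extensions to add to a certificate request
basicConstraints = CA:FALSE
keyUsage = nonRepudiation, digitalSignature, keyEncipherment
subjectAltName = @alt_names

[alt_names]
IP.1 = 192.168.0.1
DNS.1 = 192.168.0.1
DNS.2 = instack.localdomain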
Important
In particular, set the commonName_default to the IP address, or fully qualified domain name if using one, of the Public API:
- For the Undercloud, use the undercloud_public_vip parameter in undercloud.conf. If using a fully qualified domain name for this IP address, use the domain name instead.
- For the Overcloud, use the IP address for the Public API, which is the first address for the ExternalAllocationPools parameter in your network isolation environment file. If using a fully qualified domain name for this IP address, use the domain name instead.
Include the same Public API IP address as an IP entry in the alt_names section. If also using DNS, include the hostname for the server as DNS entries in the same section. For more information about openssl.cnf, run man openssl.cnf.
Run the following commands to generate the key (server.key.pem), the certificate signing request (server.csr.pem), and the signed certificate (server.crt.pem):
$ openssl genrsa -out server.key.pem 2048
$ openssl req -config openssl.cnf -key server.key.pem -new -out server.csr.pem
$ sudo openssl ca -config openssl.cnf -extensions v3_req -days 3650 -in server.csr.pem -out server.crt.pem -cert ca.crt.pem
Important
The openssl req command asks for several details for the certificate, including the Common Name. Make sure the Common Name is set to the IP address of the Public API for the Undercloud or Overcloud (depending on which certificate set you are creating). The openssl.cnf file should use this IP address as a default value.
Using the Certificate with the Undercloud
Combine the certificate and key into a single file:
$ cat server.crt.pem server.key.pem > undercloud.pem
This creates an undercloud.pem file for use with the undercloud_service_certificate option in the undercloud.conf file. This file also requires a special SELinux context so that the HAProxy tool can read it. Use the following example as a guide:
$ sudo mkdir /etc/pki/instack-certs
$ sudo cp ~/undercloud.pem /etc/pki/instack-certs/.
$ sudo semanage fcontext -a -t etc_t "/etc/pki/instack-certs(/.*)?"
$ sudo restorecon -R /etc/pki/instack-certs
Add the certificate authority to the Undercloud's list of trusted Certificate Authorities:
$ sudo cp ca.crt.pem /etc/pki/ca-trust/source/anchors/
$ sudo update-ca-trust extract
Add the undercloud.pem file location to the undercloud_service_certificate option in the undercloud.conf file. For example:
undercloud_service_certificate = /etc/pki/instack-certs/undercloud.pem
Using the Certificate with the Overcloud
Use the certificate with the enable-tls.yaml file from Section 6.11, “Enabling SSL/TLS on the Overcloud”.
Appendix B. Power Management Drivers
B.1. Dell Remote Access Controller (DRAC)
- pm_type - Set this option to pxe_drac.
- pm_user, pm_password - The DRAC username and password.
- pm_addr - The IP address of the DRAC host.
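For illustration, a node entry in the instackenv.json registration file (described earlier in this guide) might use these options as follows. This is a sketch; the address, credentials, MAC, and hardware values are placeholders:
{
  "nodes": [
    {
      "pm_type": "pxe_drac",
      "pm_user": "admin",
      "pm_password": "p@55w0rd!",
      "pm_addr": "192.0.2.205",
      "mac": ["aa:bb:cc:dd:ee:ff"],
      "cpu": "4",
      "memory": "6144",
      "disk": "40",
      "arch": "x86_64"
    }
  ]
}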
B.2. Integrated Lights-Out (iLO)
- pm_type - Set this option to pxe_ilo.
- pm_user, pm_password - The iLO username and password.
- pm_addr - The IP address of the iLO interface.
Additional Notes
- Edit the /etc/ironic/ironic.conf file and add pxe_ilo to the enabled_drivers option to enable this driver.
- The director also requires an additional set of utilities for iLO. Install the python-proliantutils package and restart the openstack-ironic-conductor service:
$ sudo yum install python-proliantutils
$ sudo systemctl restart openstack-ironic-conductor.service
- HP nodes must have a 2015 firmware version for successful introspection. The director has been successfully tested with nodes using firmware version 1.85 (May 13 2015).
- Using a shared iLO port is not supported.
B.3. Cisco Unified Computing System (UCS)
- pm_type - Set this option to pxe_ucs.
- pm_user, pm_password - The UCS username and password.
- pm_addr - The IP address of the UCS interface.
- pm_service_profile - The UCS service profile to use. Usually takes the format of org-root/ls-[service_profile_name]. For example:
"pm_service_profile": "org-root/ls-Nova-1"
Additional Notes
- Edit the /etc/ironic/ironic.conf file and add pxe_ucs to the enabled_drivers option to enable this driver.
- The director also requires an additional set of utilities for UCS. Install the python-UcsSdk package and restart the openstack-ironic-conductor service:
$ sudo yum install python-UcsSdk
$ sudo systemctl restart openstack-ironic-conductor.service
B.4. Fujitsu Integrated Remote Management Controller (iRMC)
Important
- pm_type - Set this option to pxe_irmc.
- pm_user, pm_password - The username and password for the iRMC interface.
- pm_addr - The IP address of the iRMC interface.
- pm_port (Optional) - The port to use for iRMC operations. The default is 443.
- pm_auth_method (Optional) - The authentication method for iRMC operations. Use either basic or digest. The default is basic.
- pm_client_timeout (Optional) - Timeout (in seconds) for iRMC operations. The default is 60 seconds.
- pm_sensor_method (Optional) - Sensor data retrieval method. Use either ipmitool or scci. The default is ipmitool.
Additional Notes
- Edit the /etc/ironic/ironic.conf file and add pxe_irmc to the enabled_drivers option to enable this driver.
- The director also requires an additional set of utilities if you enabled SCCI as the sensor method. Install the python-scciclient package and restart the openstack-ironic-conductor service:
$ sudo yum install python-scciclient
$ sudo systemctl restart openstack-ironic-conductor.service
B.5. SSH and Virsh
Important
- pm_type - Set this option to pxe_ssh.
- pm_user, pm_password - The SSH username and contents of the SSH private key. The private key must be on one line with new lines replaced with escape characters (\n). For example:
-----BEGIN RSA PRIVATE KEY-----\nMIIEogIBAAKCAQEA .... kk+WXt9Y=\n-----END RSA PRIVATE KEY-----
Add the SSH public key to the libvirt server's authorized_keys collection.
- pm_addr - The IP address of the virsh host.
Additional Notes
- The server hosting libvirt requires an SSH key pair, with the private key set as the pm_password attribute and the public key added to the libvirt server's authorized_keys collection.
- Ensure the chosen pm_user has full access to the libvirt environment.
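For illustration, the power management portion of a registration entry for this driver might look like the following sketch; the username, key, and address are placeholders, and the private key is abbreviated:
"pm_type": "pxe_ssh",
"pm_user": "stack",
"pm_password": "-----BEGIN RSA PRIVATE KEY-----\n....\n-----END RSA PRIVATE KEY-----",
"pm_addr": "192.0.2.100"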
B.6. Fake PXE Driver
Important
- pm_type - Set this option to fake_pxe.
Additional Notes
- This driver does not use any authentication details because it does not control power management.
- Edit the /etc/ironic/ironic.conf file and add fake_pxe to the enabled_drivers option to enable this driver. Restart the baremetal services after editing the file:
$ sudo systemctl restart openstack-ironic-api openstack-ironic-conductor
- When performing introspection on nodes, manually power on the nodes after running the openstack baremetal introspection bulk start command.
- When performing Overcloud deployment, check the node status with the ironic node-list command. Wait until the node status changes from deploying to deploy wait-callback and then manually power on the nodes.
- After the Overcloud provisioning process completes, reboot the nodes. To check the completion of provisioning, check the node status with the ironic node-list command, wait until the node status changes to active, then manually reboot all Overcloud nodes.
Appendix C. Automatic Profile Tagging
- The policies can identify and isolate underperforming or unstable nodes from use in the Overcloud.
- The policies can define whether to automatically tag nodes into specific profiles.
Description
"description": "A new rule for my node tagging policy"
Conditions
- field - Defines the field to evaluate.
- op - Defines the operation to use for the evaluation. This includes the following:
  - eq - Equal to
  - ne - Not equal to
  - lt - Less than
  - gt - Greater than
  - le - Less than or equal to
  - ge - Greater than or equal to
  - in-net - Checks that an IP address is in a given network
  - matches - Requires a full match against a given regular expression
  - contains - Requires a value to contain a given regular expression
  - is-empty - Checks that the field is empty
- invert - Boolean value to define whether to invert the result of the evaluation.
- multiple - Defines the evaluation to use if multiple results exist. This includes:
  - any - Requires any result to match
  - all - Requires all results to match
  - first - Requires the first result to match
- value - Defines the value in the evaluation. If the field and operation result in the value, the condition returns a true result. If not, the condition returns false.
Actions
Each action contains an action key and additional keys depending on the value of action:
- fail - Fails the introspection. Requires a message parameter for the failure message.
- set-attribute - Sets an attribute on an Ironic node. Requires a path field, which is the path to an Ironic attribute (e.g. /driver_info/ipmi_address), and a value to set.
- set-capability - Sets a capability on an Ironic node. Requires name and value fields, which are the name and the value for a new capability accordingly. The existing value for this same capability is replaced. For example, use this to define node profiles.
- extend-attribute - The same as set-attribute but treats the existing value as a list and appends value to it. If the optional unique parameter is set to True, nothing is added if the given value is already in a list.
Policy File Example
The following is an example JSON file (rules.json) with the introspection rules to apply:
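A minimal sketch of such a file, assuming the ironic-inspector rule format; it implements the three rules described in the list that follows, and the failure message text is a placeholder:
[
    {
        "description": "Fail introspection for nodes with too little memory",
        "conditions": [
            {"op": "lt", "field": "memory_mb", "value": 4096}
        ],
        "actions": [
            {"action": "fail", "message": "Memory too low, expected at least 4 GiB"}
        ]
    },
    {
        "description": "Assign the swift-storage profile to nodes with large disks",
        "conditions": [
            {"op": "ge", "field": "local_gb", "value": 1024}
        ],
        "actions": [
            {"action": "set-capability", "name": "profile", "value": "swift-storage"}
        ]
    },
    {
        "description": "Offer compute and control profiles for medium-sized disks",
        "conditions": [
            {"op": "ge", "field": "local_gb", "value": 40},
            {"op": "lt", "field": "local_gb", "value": 1024}
        ],
        "actions": [
            {"action": "set-capability", "name": "compute_profile", "value": "1"},
            {"action": "set-capability", "name": "control_profile", "value": "1"},
            {"action": "set-capability", "name": "profile", "value": null}
        ]
    }
]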
This example applies three rules:
- Fail introspection if memory is lower than 4096 MiB. Such rules can be applied to exclude nodes that should not become part of your cloud.
- Nodes with a hard drive size of 1 TiB and bigger are assigned the swift-storage profile unconditionally.
- Nodes with a hard drive size less than 1 TiB but more than 40 GiB can be either Compute or Controller nodes. We assign two capabilities (compute_profile and control_profile) so that the openstack overcloud profiles match command can later make the final choice. For that to work, we remove the existing profile capability, otherwise it will have priority.
Note
Setting the profile capability always overrides the existing value. However, [PROFILE]_profile capabilities are ignored for nodes with an existing profile capability.
Importing Policy Files
Import the policy file into the director with the following command:
$ openstack baremetal introspection rule import rules.json
Then run the introspection process:
$ openstack baremetal introspection bulk start
After introspection completes, check the nodes and their assigned profiles:
$ openstack overcloud profiles list
To delete all introspection rules, run the following command:
$ openstack baremetal introspection rule purge
Matching Nodes to Roles
Use the openstack overcloud profiles match command to specify how many nodes to assign to a certain role. For example, to automatically match three Controller nodes, three Compute nodes, and three Ceph Storage nodes, use the following command:
$ openstack overcloud profiles match --control-flavor control --control-scale 3 --compute-flavor compute --compute-scale 3 --ceph-storage-flavor ceph-storage --ceph-storage-scale 3
Automatic Profile Tagging Properties
Automatic Profile Tagging evaluates the following node properties for the field attribute of each condition:
| Property | Description |
|---|---|
| memory_mb | The amount of memory for the node in MB. |
| cpus | The total number of cores for the node's CPUs. |
| cpu_arch | The architecture of the node's CPUs. |
| local_gb | The total storage space of the node's root disk. See Section 5.4, “Defining the Root Disk for Nodes” for more information about setting the root disk for a node. |
Appendix D. Network Interface Parameters
Interface options:

| Option | Default | Description |
|---|---|---|
| name | | Name of the interface |
| use_dhcp | False | Use DHCP to get an IP address |
| use_dhcpv6 | False | Use DHCP to get a v6 IP address |
| addresses | | A sequence of IP addresses assigned to the interface |
| routes | | A sequence of routes assigned to the interface |
| mtu | 1500 | The maximum transmission unit (MTU) of the connection |
| primary | False | Defines the interface as the primary interface |
| defroute | True | Use this interface as the default route |
| persist_mapping | False | Write the device alias configuration instead of the system names |
| dhclient_args | None | Arguments to pass to the DHCP client |
| dns_servers | None | List of DNS servers to use for the interface |
VLAN options:

| Option | Default | Description |
|---|---|---|
| vlan_id | | The VLAN ID |
| device | | The VLAN's parent device to attach the VLAN. For example, use this parameter to attach the VLAN to a bonded interface device. |
| use_dhcp | False | Use DHCP to get an IP address |
| use_dhcpv6 | False | Use DHCP to get a v6 IP address |
| addresses | | A sequence of IP addresses assigned to the VLAN |
| routes | | A sequence of routes assigned to the VLAN |
| mtu | 1500 | The maximum transmission unit (MTU) of the connection |
| primary | False | Defines the VLAN as the primary interface |
| defroute | True | Use this interface as the default route |
| persist_mapping | False | Write the device alias configuration instead of the system names |
| dhclient_args | None | Arguments to pass to the DHCP client |
| dns_servers | None | List of DNS servers to use for the VLAN |
OVS bond options:

| Option | Default | Description |
|---|---|---|
| name | | Name of the bond |
| use_dhcp | False | Use DHCP to get an IP address |
| use_dhcpv6 | False | Use DHCP to get a v6 IP address |
| addresses | | A sequence of IP addresses assigned to the bond |
| routes | | A sequence of routes assigned to the bond |
| mtu | 1500 | The maximum transmission unit (MTU) of the connection |
| primary | False | Defines the interface as the primary interface |
| members | | A sequence of interface objects to use in the bond |
| ovs_options | | A set of options to pass to OVS when creating the bond |
| ovs_extra | | A set of options to set as the OVS_EXTRA parameter in the bond's network configuration file |
| defroute | True | Use this interface as the default route |
| persist_mapping | False | Write the device alias configuration instead of the system names |
| dhclient_args | None | Arguments to pass to the DHCP client |
| dns_servers | None | List of DNS servers to use for the bond |
OVS bridge options:

| Option | Default | Description |
|---|---|---|
| name | | Name of the bridge |
| use_dhcp | False | Use DHCP to get an IP address |
| use_dhcpv6 | False | Use DHCP to get a v6 IP address |
| addresses | | A sequence of IP addresses assigned to the bridge |
| routes | | A sequence of routes assigned to the bridge |
| mtu | 1500 | The maximum transmission unit (MTU) of the connection |
| members | | A sequence of interface, VLAN, and bond objects to use in the bridge |
| ovs_options | | A set of options to pass to OVS when creating the bridge |
| ovs_extra | | A set of options to set as the OVS_EXTRA parameter in the bridge's network configuration file |
| defroute | True | Use this interface as the default route |
| persist_mapping | False | Write the device alias configuration instead of the system names |
| dhclient_args | None | Arguments to pass to the DHCP client |
| dns_servers | None | List of DNS servers to use for the bridge |
Linux bond options:

| Option | Default | Description |
|---|---|---|
| name | | Name of the bond |
| use_dhcp | False | Use DHCP to get an IP address |
| use_dhcpv6 | False | Use DHCP to get a v6 IP address |
| addresses | | A sequence of IP addresses assigned to the bond |
| routes | | A sequence of routes assigned to the bond |
| mtu | 1500 | The maximum transmission unit (MTU) of the connection |
| primary | False | Defines the interface as the primary interface |
| members | | A sequence of interface objects to use in the bond |
| bonding_options | | A set of options when creating the bond. For more information on Linux bonding options, see 4.5.1. Bonding Module Directives in the Red Hat Enterprise Linux 7 Networking Guide. |
| defroute | True | Use this interface as the default route |
| persist_mapping | False | Write the device alias configuration instead of the system names |
| dhclient_args | None | Arguments to pass to the DHCP client |
| dns_servers | None | List of DNS servers to use for the bond |
Linux bridge options:

| Option | Default | Description |
|---|---|---|
| name | | Name of the bridge |
| use_dhcp | False | Use DHCP to get an IP address |
| use_dhcpv6 | False | Use DHCP to get a v6 IP address |
| addresses | | A sequence of IP addresses assigned to the bridge |
| routes | | A sequence of routes assigned to the bridge |
| mtu | 1500 | The maximum transmission unit (MTU) of the connection |
| members | | A sequence of interface, VLAN, and bond objects to use in the bridge |
| defroute | True | Use this interface as the default route |
| persist_mapping | False | Write the device alias configuration instead of the system names |
| dhclient_args | None | Arguments to pass to the DHCP client |
| dns_servers | None | List of DNS servers to use for the bridge |
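To show how these options combine in a network interface template, the following is a minimal sketch of a network_config section defining an OVS bridge that contains an OVS bond of two numbered NICs and a tagged VLAN. The get_param names are assumptions based on the standard network isolation parameters and templates; adjust them to your environment:
network_config:
  - type: ovs_bridge
    name: br-ex
    use_dhcp: false
    members:
      - type: ovs_bond
        name: bond1
        ovs_options: {get_param: BondInterfaceOvsOptions}
        members:
          - type: interface
            name: nic2
            primary: true
          - type: interface
            name: nic3
      - type: vlan
        device: bond1
        vlan_id: {get_param: ExternalNetworkVlanID}
        addresses:
          - ip_netmask: {get_param: ExternalIpSubnet}
        routes:
          - ip_netmask: 0.0.0.0/0
            next_hop: {get_param: ExternalInterfaceDefaultRoute}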
Appendix E. Network Interface Template Examples
E.1. Configuring Interfaces
The examples in this appendix use a set of numbered interfaces (nic1, nic2, etc.) instead of named interfaces (eth0, eno2, etc.). For example, one host might have interfaces em1 and em2, while another has eno1 and eno2, but you can refer to both hosts' NICs as nic1 and nic2.
The numbered interfaces map to the named interface types in the following order:
- ethX interfaces, such as eth0, eth1, etc. These are usually onboard interfaces.
- enoX interfaces, such as eno0, eno1, etc. These are usually onboard interfaces.
- enX interfaces, sorted alphanumerically, such as enp3s0, enp3s1, ens3, etc. These are usually add-on interfaces.
For example, if some hosts have four interfaces and others have six, use nic1 to nic4 and only plug four cables on each host.
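As a brief sketch, an interface entry that uses this numbered scheme might look like the following; the InternalApiIpSubnet parameter is an assumption based on the standard network isolation parameters:
- type: interface
  name: nic2
  use_dhcp: false
  addresses:
    - ip_netmask: {get_param: InternalApiIpSubnet}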
E.2. Configuring Routes and Default Routes
Set defroute=no for interfaces other than the one using the default route.
For example, you might want a DHCP interface (nic3) to be the default route. Use the following YAML to disable the default route on another DHCP interface (nic2):
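A sketch of the two interface entries, using the defroute boolean from the interface options in Appendix D:
# No default route on this DHCP interface (nic2)
- type: interface
  name: nic2
  use_dhcp: true
  defroute: false
# Use this DHCP interface (nic3) as the default route
- type: interface
  name: nic3
  use_dhcp: true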
Note
The defroute parameter only applies to routes obtained through DHCP.
E.3. Using the Native VLAN for Floating IPs
Neutron uses a default empty string for its external bridge mapping, which maps the physical interface to br-int instead of using br-ex directly. This model allows multiple Floating IP networks using either VLANs or multiple physical connections.
Set the NeutronExternalNetworkBridge parameter in the parameter_defaults section of your network isolation environment file:
parameter_defaults:
  # Set to "br-ex" when using floating IPs on the native VLAN
  NeutronExternalNetworkBridge: "''"
With the Floating IP network on the native VLAN of br-ex, you can use the External network for Floating IPs in addition to the horizon dashboard and Public APIs.
E.4. Using the Native VLAN on a Trunked Interface
Note
E.5. Configuring Jumbo Frames
Note
Appendix F. Network Environment Options
| Parameter | Description | Example |
|---|---|---|
| InternalApiNetCidr | The network and subnet for the Internal API network | 172.17.0.0/24 |
| StorageNetCidr | The network and subnet for the Storage network | |
| StorageMgmtNetCidr | The network and subnet for the Storage Management network | |
| TenantNetCidr | The network and subnet for the Tenant network | |
| ExternalNetCidr | The network and subnet for the External network | |
| InternalApiAllocationPools | The allocation pool for the Internal API network in a tuple format | [{'start': '172.17.0.10', 'end': '172.17.0.200'}] |
| StorageAllocationPools | The allocation pool for the Storage network in a tuple format | |
| StorageMgmtAllocationPools | The allocation pool for the Storage Management network in a tuple format | |
| TenantAllocationPools | The allocation pool for the Tenant network in a tuple format | |
| ExternalAllocationPools | The allocation pool for the External network in a tuple format | |
| InternalApiNetworkVlanID | The VLAN ID for the Internal API network | 200 |
| StorageNetworkVlanID | The VLAN ID for the Storage network | |
| StorageMgmtNetworkVlanID | The VLAN ID for the Storage Management network | |
| TenantNetworkVlanID | The VLAN ID for the Tenant network | |
| ExternalNetworkVlanID | The VLAN ID for the External network | |
| ExternalInterfaceDefaultRoute | The gateway IP address for the External network | 10.1.2.1 |
| ControlPlaneDefaultRoute | Gateway router for the Provisioning network (or Undercloud IP) | ControlPlaneDefaultRoute: 192.0.2.254 |
| ControlPlaneSubnetCidr | CIDR subnet mask length for the Provisioning network | ControlPlaneSubnetCidr: 24 |
| EC2MetadataIp | The IP address of the EC2 metadata server. Generally the IP of the Undercloud. | EC2MetadataIp: 192.0.2.1 |
| DnsServers | Define the DNS servers for the Overcloud nodes. Include a maximum of two. | DnsServers: ["8.8.8.8","8.8.4.4"] |
| BondInterfaceOvsOptions | The options for bonding interfaces | BondInterfaceOvsOptions: "bond_mode=balance-tcp" |
| NeutronFlatNetworks | Defines the flat networks to configure in neutron plugins. Defaults to "datacentre" to permit external network creation. | NeutronFlatNetworks: "datacentre" |
| NeutronExternalNetworkBridge | An Open vSwitch bridge to create on each hypervisor. This defaults to "br-ex". Set to "br-ex" if using floating IPs on the native VLAN on bridge br-ex. Typically, this should not need to be changed. | NeutronExternalNetworkBridge: "br-ex" |
| NeutronBridgeMappings | The logical to physical bridge mappings to use. Defaults to mapping the external bridge on hosts (br-ex) to a physical name (datacentre). You would use this for the default floating network. | NeutronBridgeMappings: "datacentre:br-ex" |
| NeutronPublicInterface | Defines the interface to bridge onto br-ex for network nodes | NeutronPublicInterface: "eth0" |
| NeutronNetworkType | The tenant network type for neutron | NeutronNetworkType: "vxlan" |
| NeutronTunnelTypes | The tunnel types for the neutron tenant network. To specify multiple values, use a comma-separated string. | NeutronTunnelTypes: 'gre,vxlan' |
| NeutronTunnelIdRanges | Ranges of GRE tunnel IDs to make available for tenant network allocation | NeutronTunnelIdRanges: "1:1000" |
| NeutronVniRanges | Ranges of VXLAN VNI IDs to make available for tenant network allocation | NeutronVniRanges: "1:1000" |
| NeutronEnableTunnelling | Defines whether to enable or disable tunneling if you aim to use a VLAN-segmented network or flat network with neutron. Defaults to enabled. | |
| NeutronNetworkVLANRanges | The neutron ML2 and Open vSwitch VLAN mapping range to support. Defaults to permitting any VLAN on the 'datacentre' physical network. | NeutronNetworkVLANRanges: "datacentre:1:1000" |
| NeutronMechanismDrivers | The mechanism drivers for the neutron tenant network. Defaults to "openvswitch". To specify multiple values, use a comma-separated string. | NeutronMechanismDrivers: 'openvswitch,l2population' |
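As an illustration, a network isolation environment file might set a subset of these parameters as follows. This is a sketch rather than a complete file, and the CIDRs, pools, VLAN IDs, and addresses are placeholder values:
parameter_defaults:
  InternalApiNetCidr: 172.17.0.0/24
  InternalApiAllocationPools: [{'start': '172.17.0.10', 'end': '172.17.0.200'}]
  InternalApiNetworkVlanID: 200
  ExternalNetCidr: 10.1.2.0/24
  ExternalAllocationPools: [{'start': '10.1.2.10', 'end': '10.1.2.50'}]
  ExternalInterfaceDefaultRoute: 10.1.2.1
  ExternalNetworkVlanID: 100
  ControlPlaneDefaultRoute: 192.0.2.254
  ControlPlaneSubnetCidr: "24"
  EC2MetadataIp: 192.0.2.1
  DnsServers: ["8.8.8.8", "8.8.4.4"]
  NeutronNetworkType: "vxlan"
  BondInterfaceOvsOptions: "bond_mode=balance-tcp"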
Appendix G. Open vSwitch Bonding Options
BondInterfaceOvsOptions:
  "bond_mode=balance-tcp"
Important
| Option | Description |
|---|---|
| bond_mode=balance-tcp | This mode performs load balancing by taking layer 2 to layer 4 data into consideration (for example, destination MAC address, IP address, and TCP port). In addition, balance-tcp requires that LACP be configured on the switch. This mode is similar to mode 4 bonds used by the Linux bonding driver. balance-tcp is recommended when possible, as LACP provides the highest resiliency for link failure detection, and supplies additional diagnostic information about the bond. The recommended option is to configure balance-tcp with LACP. This setting attempts to configure LACP, but falls back to active-backup if LACP cannot be negotiated with the physical switch. |
| bond_mode=balance-slb | Balances flows based on source MAC address and output VLAN, with periodic rebalancing as traffic patterns change. Bonding with balance-slb allows a limited form of load balancing without the remote switch's knowledge or cooperation. SLB assigns each source MAC and VLAN pair to a link and transmits all packets from that MAC and VLAN through that link. This mode uses a simple hashing algorithm based on source MAC address and VLAN number, with periodic rebalancing as traffic patterns change. This mode is similar to mode 2 bonds used by the Linux bonding driver. This mode is used when the switch is configured with bonding but is not configured to use LACP (static instead of dynamic bonds). |
| bond_mode=active-backup | This mode offers active/standby failover where the standby NIC resumes network operations when the active connection fails. Only one MAC address is presented to the physical switch. This mode does not require any special switch support or configuration, and works when the links are connected to separate switches. This mode does not provide load balancing. |
| lacp=[active\|passive\|off] | Controls the Link Aggregation Control Protocol (LACP) behavior. Only certain switches support LACP. If your switch does not support LACP, use bond_mode=balance-slb or bond_mode=active-backup. Do not use LACP with OVS-based bonds, as this configuration is problematic and unsupported. Instead, consider using bond_mode=balance-slb as a replacement for this functionality. In addition, you can still use LACP with Linux bonding. For the technical details behind this requirement, see BZ#1267291. |
| other-config:lacp-fallback-ab=true | Sets the LACP behavior to switch to bond_mode=active-backup as a fallback. |
| other_config:lacp-time=[fast\|slow] | Set the LACP heartbeat to 1 second (fast) or 30 seconds (slow). The default is slow. |
| other_config:bond-detect-mode=[miimon\|carrier] | Set the link detection to use miimon heartbeats (miimon) or monitor carrier (carrier). The default is carrier. |
| other_config:bond-miimon-interval=100 | If using miimon, set the heartbeat interval in milliseconds. |
| other_config:bond_updelay=1000 | Number of milliseconds a link must be up to be activated to prevent flapping. |
| other_config:bond-rebalance-interval=10000 | Milliseconds between rebalancing flows between bond members. Set to zero to disable. |
Important
Appendix H. Revision History
| Revision History | |
|---|---|
| Revision 8.0-0 | Tue Nov 24 2015 |