8.2. Install a Compute Node
8.2.1. Install the Compute Service Packages
- openstack-nova-api
- Provides the OpenStack Compute API service. At least one node in the environment must host an instance of the API service. This must be the node pointed to by the Identity service endpoint definition for the Compute service.
- openstack-nova-compute
- Provides the OpenStack Compute service.
- openstack-nova-conductor
- Provides the Compute conductor service. The conductor handles database requests made by Compute nodes, ensuring that individual Compute nodes do not require direct database access. At least one node in each environment must act as a Compute conductor.
- openstack-nova-scheduler
- Provides the Compute scheduler service. The scheduler handles scheduling of requests made to the API across the available Compute resources. At least one node in each environment must act as a Compute scheduler.
- python-cinderclient
- Provides client utilities for accessing storage managed by the Block Storage service. This package is not required if you do not intend to attach block storage volumes to your instances or you intend to manage such volumes using a service other than the Block Storage service.
# yum install -y openstack-nova-api openstack-nova-compute \
    openstack-nova-conductor openstack-nova-scheduler \
    python-cinderclient
8.2.2. Create the Compute Service Database
All steps in this procedure must be performed on the database server, while logged in as the root user.
Procedure 8.3. Creating the Compute Service Database
- Connect to the database service:
# mysql -u root -p
- Create the nova database:
mysql> CREATE DATABASE nova;
- Create a nova database user and grant the user access to the nova database:
mysql> GRANT ALL ON nova.* TO 'nova'@'%' IDENTIFIED BY 'PASSWORD';
mysql> GRANT ALL ON nova.* TO 'nova'@'localhost' IDENTIFIED BY 'PASSWORD';
Replace PASSWORD with a secure password that will be used to authenticate with the database server as this user.
- Flush the database privileges to ensure that they take effect immediately:
mysql> FLUSH PRIVILEGES;
- Exit the mysql client:
mysql> quit
8.2.3. Configure the Compute Service Database Connection
The database connection string used by the Compute service is defined in the /etc/nova/nova.conf file. It must be updated to point to a valid database server before starting the service.

The database connection string only needs to be set on nodes hosting the conductor service (openstack-nova-conductor). Compute nodes communicate with the conductor using the messaging infrastructure; the conductor orchestrates communication with the database. As a result, individual Compute nodes do not require direct access to the database. There must be at least one instance of the conductor service in any Compute environment.

All steps in this procedure must be performed on the server or servers hosting the conductor service, while logged in as the root user.
Procedure 8.4. Configuring the Compute Service SQL Database Connection
- Set the value of the sql_connection configuration key:
# openstack-config --set /etc/nova/nova.conf \
   DEFAULT sql_connection mysql://USER:PASS@IP/DB
Replace the following values:
  - Replace USER with the Compute service database user name, usually nova.
  - Replace PASS with the password of the database user.
  - Replace IP with the IP address or host name of the database server.
  - Replace DB with the name of the Compute service database, usually nova.
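Before writing the key, it can help to assemble and sanity-check the connection string. The following is a minimal sketch using invented example credentials (DB_PASS and DB_HOST are placeholders, not values from this guide):

```shell
# Compose the sql_connection value from its parts before passing it to
# openstack-config. All values here are examples; substitute your own.
DB_USER="nova"
DB_PASS="novadbpass"       # example only
DB_HOST="192.0.2.10"       # IP or host name of the database server
DB_NAME="nova"

SQL_CONNECTION="mysql://${DB_USER}:${DB_PASS}@${DB_HOST}/${DB_NAME}"

# Sanity-check the assembled string against the expected
# mysql://USER:PASS@IP/DB form before using it.
echo "${SQL_CONNECTION}" | grep -Eq '^mysql://[^:]+:[^@]+@[^/]+/.+$' \
    && echo "sql_connection looks well formed: ${SQL_CONNECTION}"
```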
8.2.4. Create the Compute Service Identity Records
Create and configure Identity service records required by the Compute service. These entries assist other OpenStack services attempting to locate and access the functionality provided by the Compute service.

This procedure assumes that you have already created an administrative user account and a services tenant.

Perform this procedure on the Identity service server, or on any machine onto which you have copied the keystonerc_admin file and on which the keystone command-line utility is installed.
Procedure 8.5. Creating Identity Records for the Compute Service
- Set up the shell to access keystone as the administrative user:
# source ~/keystonerc_admin
- Create the compute user:
[(keystone_admin)]# keystone user-create --name compute --pass PASSWORD
+----------+----------------------------------+
| Property |              Value               |
+----------+----------------------------------+
|  email   |                                  |
| enabled  |               True               |
|    id    | 96cd855e5bfe471ce4066794bbafb615 |
|   name   |             compute              |
| username |             compute              |
+----------+----------------------------------+
Replace PASSWORD with a secure password that will be used by the Compute service when authenticating with the Identity service.
- Link the compute user and the admin role together within the context of the services tenant:
[(keystone_admin)]# keystone user-role-add --user compute --role admin --tenant services
- Create the compute service entry:
[(keystone_admin)]# keystone service-create --name compute \
    --type compute \
    --description "OpenStack Compute Service"
+-------------+----------------------------------+
|   Property  |              Value               |
+-------------+----------------------------------+
| description |    OpenStack Compute Service     |
|   enabled   |               True               |
|      id     | 8dea97f5ee254b309c1792d2bd821e59 |
|     name    |             compute              |
|     type    |             compute              |
+-------------+----------------------------------+
- Create the compute endpoint entry:
[(keystone_admin)]# keystone endpoint-create \
    --service compute \
    --publicurl "http://IP:8774/v2/%(tenant_id)s" \
    --adminurl "http://IP:8774/v2/%(tenant_id)s" \
    --internalurl "http://IP:8774/v2/%(tenant_id)s" \
    --region 'RegionOne'
Replace IP with the IP address or host name of the system hosting the Compute API service.
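The %(tenant_id)s token in the endpoint URLs is a template placeholder that the Compute API expands with the caller's tenant ID at request time. As a rough illustration, the following sketch performs the same substitution for a hypothetical tenant (IP and TENANT_ID are invented example values):

```shell
# The %(tenant_id)s token is a template placeholder; the Compute API
# substitutes the requesting tenant's ID. Example values only.
IP="192.0.2.20"                               # host of the Compute API service
TENANT_ID="b5f0f5e2a6f54c4aa2fa73c8a2d93f1e"  # hypothetical tenant ID
TEMPLATE="http://${IP}:8774/v2/%(tenant_id)s"

# Expand the placeholder the way the service would for this tenant.
URL=$(printf '%s' "$TEMPLATE" | sed "s/%(tenant_id)s/${TENANT_ID}/")
echo "$URL"
```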
8.2.5. Configure Compute Service Authentication
All steps in this procedure must be performed on each server hosting Compute services, while logged in as the root user.
Procedure 8.6. Configuring the Compute Service to Authenticate Through the Identity Service
- Set the authentication strategy to keystone:
# openstack-config --set /etc/nova/nova.conf \
   DEFAULT auth_strategy keystone
- Set the Identity service host that the Compute service must use:
# openstack-config --set /etc/nova/api-paste.ini \
   filter:authtoken auth_host IP
Replace IP with the IP address or host name of the server hosting the Identity service.
- Set the Compute service to authenticate as the correct tenant:
# openstack-config --set /etc/nova/api-paste.ini \
   filter:authtoken admin_tenant_name services
Replace services with the name of the tenant created for the use of the Compute service. Examples in this guide use services.
- Set the Compute service to authenticate using the compute administrative user account:
# openstack-config --set /etc/nova/api-paste.ini \
   filter:authtoken admin_user compute
- Set the Compute service to use the correct compute administrative user account password:
# openstack-config --set /etc/nova/api-paste.ini \
   filter:authtoken admin_password PASSWORD
Replace PASSWORD with the password set when the compute user was created.
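If the commands above succeed, the [filter:authtoken] section of /etc/nova/api-paste.ini should end up containing entries along these lines (shown with the placeholder values from the procedure; any existing lines in the section are left untouched):

```ini
[filter:authtoken]
auth_host = IP
admin_tenant_name = services
admin_user = compute
admin_password = PASSWORD
```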
8.2.6. Configure the Firewall to Allow Compute Service Traffic
Connections to virtual machine consoles, whether direct or through the proxy, are received on ports 5900 to 5999. Connections to the Compute API service are received on port 8774. The firewall on the service node must be configured to allow network traffic on these ports. All steps in this procedure must be performed on each Compute node, while logged in as the root user.
Procedure 8.7. Configuring the Firewall to Allow Compute Service Traffic
- Open the /etc/sysconfig/iptables file in a text editor.
- Add an INPUT rule allowing TCP traffic on ports in the range 5900 to 5999. The new rule must appear before any INPUT rules that REJECT traffic:
-A INPUT -p tcp -m multiport --dports 5900:5999 -j ACCEPT
- Add an INPUT rule allowing TCP traffic on port 8774. The new rule must appear before any INPUT rules that REJECT traffic:
-A INPUT -p tcp -m multiport --dports 8774 -j ACCEPT
- Save the changes to the /etc/sysconfig/iptables file.
- Restart the iptables service to ensure that the change takes effect:
# systemctl restart iptables.service
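The ordering requirement above (ACCEPT rules before any REJECT rule) can also be scripted. The following is a minimal sketch that inserts the two rules into a scratch copy of an iptables rules file just before the first REJECT line; the sample file contents and paths are invented for illustration, not taken from a real node:

```shell
# Insert the Compute ACCEPT rules before the first REJECT rule, working
# on a scratch copy of an iptables rules file (contents are an example).
WORK=$(mktemp -d)
cat > "$WORK/iptables" <<'EOF'
-A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
-A INPUT -p tcp -m state --state NEW -m tcp --dport 22 -j ACCEPT
-A INPUT -j REJECT --reject-with icmp-host-prohibited
EOF

# awk prints the new ACCEPT rules immediately before the first REJECT line.
awk '/-j REJECT/ && !done {
        print "-A INPUT -p tcp -m multiport --dports 5900:5999 -j ACCEPT"
        print "-A INPUT -p tcp -m multiport --dports 8774 -j ACCEPT"
        done = 1
     }
     { print }' "$WORK/iptables" > "$WORK/iptables.new"

cat "$WORK/iptables.new"
```

Review the generated file before copying it over /etc/sysconfig/iptables on a real node.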
8.2.7. Configure the Compute Service to Use SSL
Use the following options in the nova.conf file to configure SSL.
| Configuration Option | Description |
|---|---|
| enabled_ssl_apis | A list of APIs with enabled SSL. |
| ssl_ca_file | The CA certificate file to use to verify connecting clients. |
| ssl_cert_file | The SSL certificate of the API server. |
| ssl_key_file | The SSL private key of the API server. |
| tcp_keepidle | Sets the value of TCP_KEEPIDLE in seconds for each server socket. Defaults to 600. |
8.2.8. Configure RabbitMQ Message Broker Settings for the Compute Service
All steps in this procedure must be performed on each server hosting the Compute service, while logged in as the root user.
Procedure 8.8. Configuring the Compute Service to use the RabbitMQ Message Broker
- Set RabbitMQ as the RPC back end:
# openstack-config --set /etc/nova/nova.conf \
   DEFAULT rpc_backend rabbit
- Set the Compute service to connect to the RabbitMQ host:
# openstack-config --set /etc/nova/nova.conf \
   DEFAULT rabbit_host RABBITMQ_HOST
Replace RABBITMQ_HOST with the IP address or host name of the message broker.
- Set the message broker port to 5672:
# openstack-config --set /etc/nova/nova.conf \
   DEFAULT rabbit_port 5672
- Set the RabbitMQ user name and password created for the Compute service when RabbitMQ was configured:
# openstack-config --set /etc/nova/nova.conf \
   DEFAULT rabbit_userid nova
# openstack-config --set /etc/nova/nova.conf \
   DEFAULT rabbit_password NOVA_PASS
Replace nova and NOVA_PASS with the RabbitMQ user name and password created for the Compute service.
- When RabbitMQ was launched, the nova user was granted read and write permissions to all resources: specifically, through the virtual host /. Configure the Compute service to connect to this virtual host:
# openstack-config --set /etc/nova/nova.conf \
   DEFAULT rabbit_virtual_host /
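Taken together, the commands above leave settings like the following in the [DEFAULT] section of /etc/nova/nova.conf (placeholders shown exactly as in the procedure):

```ini
[DEFAULT]
rpc_backend = rabbit
rabbit_host = RABBITMQ_HOST
rabbit_port = 5672
rabbit_userid = nova
rabbit_password = NOVA_PASS
rabbit_virtual_host = /
```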
8.2.9. Enable SSL Communication Between the Compute Service and the Message Broker
Procedure 8.9. Enabling SSL Communication Between the Compute Service and the RabbitMQ Message Broker
- Enable SSL communication with the message broker:
# openstack-config --set /etc/nova/nova.conf \
   DEFAULT rabbit_use_ssl True
# openstack-config --set /etc/nova/nova.conf \
   DEFAULT kombu_ssl_certfile /path/to/client.crt
# openstack-config --set /etc/nova/nova.conf \
   DEFAULT kombu_ssl_keyfile /path/to/clientkeyfile.key
Replace the following values:
  - Replace /path/to/client.crt with the absolute path to the exported client certificate.
  - Replace /path/to/clientkeyfile.key with the absolute path to the exported client key file.
- If your certificates were signed by a third-party Certificate Authority (CA), you must also run the following command:
# openstack-config --set /etc/nova/nova.conf \
   DEFAULT kombu_ssl_ca_certs /path/to/ca.crt
Replace /path/to/ca.crt with the absolute path to the CA file provided by the third-party CA (see Section 2.3.4, "Enable SSL on the RabbitMQ Message Broker" for more information).
8.2.10. Configure Resource Overcommitment
- The default CPU overcommit ratio is 16. This means that up to 16 virtual cores can be assigned to a node for each physical core.
- The default memory overcommit ratio is 1.5. This means that instances can be assigned to a physical node if the total instance memory usage is less than 1.5 times the amount of physical memory available.
Use the cpu_allocation_ratio and ram_allocation_ratio directives in /etc/nova/nova.conf to change these default settings.
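As a worked example of these defaults, consider a hypothetical node with 8 physical cores and 32768 MB of RAM (figures invented for illustration):

```shell
# Capacity arithmetic under the default overcommit ratios, for a
# hypothetical node with 8 physical cores and 32768 MB of RAM.
PHYSICAL_CORES=8
PHYSICAL_RAM_MB=32768
CPU_ALLOCATION_RATIO=16     # default
RAM_ALLOCATION_RATIO_X10=15 # default is 1.5; scaled by 10 for integer math

VCPU_CAPACITY=$((PHYSICAL_CORES * CPU_ALLOCATION_RATIO))
RAM_CAPACITY_MB=$((PHYSICAL_RAM_MB * RAM_ALLOCATION_RATIO_X10 / 10))

echo "Schedulable vCPUs: $VCPU_CAPACITY"      # 8 * 16 = 128
echo "Schedulable RAM:   $RAM_CAPACITY_MB MB" # 32768 * 1.5 = 49152
```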
8.2.11. Reserve Host Resources
You can reserve memory and disk resources so that they are always available to the host, using the following directives in /etc/nova/nova.conf:
- reserved_host_memory_mb. Defaults to 512MB.
- reserved_host_disk_mb. Defaults to 0MB.
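A quick sketch of the effect of the default memory reservation, using an invented node size:

```shell
# Memory left for instances after the default host reservation,
# for a hypothetical node with 16384 MB of RAM.
TOTAL_RAM_MB=16384
RESERVED_HOST_MEMORY_MB=512   # reserved_host_memory_mb default
AVAILABLE_RAM_MB=$((TOTAL_RAM_MB - RESERVED_HOST_MEMORY_MB))
echo "RAM available to instances: $AVAILABLE_RAM_MB MB"  # 16384 - 512 = 15872
```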
8.2.12. Configure Compute Networking
8.2.12.1. Compute Networking Overview
When OpenStack Networking is in use, the nova-network service must not run. Instead, all network-related decisions are delegated to the OpenStack Networking service.

The use of nova-manage and nova to manage networks or IP addressing, including both fixed and floating IPs, is not supported with OpenStack Networking.

Important
It is strongly recommended to uninstall nova-network and reboot any physical nodes that were running nova-network before using these nodes to run OpenStack Networking. Problems can arise from inadvertently running the nova-network process while using the OpenStack Networking service; for example, a previously running nova-network could push down stale firewall rules.
8.2.12.2. Update the Compute Configuration
All steps in this procedure must be performed on each Compute node, while logged in as the root user.
Procedure 8.10. Updating the Connection and Authentication Settings of Compute Nodes
- Modify the network_api_class configuration key to indicate that OpenStack Networking is in use:
# openstack-config --set /etc/nova/nova.conf \
   DEFAULT network_api_class nova.network.neutronv2.api.API
- Set the Compute service to use the endpoint of the OpenStack Networking API:
# openstack-config --set /etc/nova/nova.conf \
   neutron url http://IP:9696/
Replace IP with the IP address or host name of the server hosting the OpenStack Networking API service.
- Set the name of the tenant used by the OpenStack Networking service. Examples in this guide use services:
# openstack-config --set /etc/nova/nova.conf \
   neutron admin_tenant_name services
- Set the name of the OpenStack Networking administrative user:
# openstack-config --set /etc/nova/nova.conf \
   neutron admin_username neutron
- Set the password associated with the OpenStack Networking administrative user:
# openstack-config --set /etc/nova/nova.conf \
   neutron admin_password PASSWORD
- Set the URL associated with the Identity service endpoint:
# openstack-config --set /etc/nova/nova.conf \
   neutron admin_auth_url http://IP:35357/v2.0
Replace IP with the IP address or host name of the server hosting the Identity service.
- Enable the metadata proxy and configure the metadata proxy secret:
# openstack-config --set /etc/nova/nova.conf \
   neutron service_metadata_proxy true
# openstack-config --set /etc/nova/nova.conf \
   neutron metadata_proxy_shared_secret METADATA_SECRET
Replace METADATA_SECRET with the string that the metadata proxy will use to secure communication.
- Enable the use of OpenStack Networking security groups:
# openstack-config --set /etc/nova/nova.conf \
   DEFAULT security_group_api neutron
- Set the firewall driver to nova.virt.firewall.NoopFirewallDriver:
# openstack-config --set /etc/nova/nova.conf \
   DEFAULT firewall_driver nova.virt.firewall.NoopFirewallDriver
This must be done when OpenStack Networking security groups are in use.
- Open the /etc/sysctl.conf file in a text editor, and add or edit the following kernel networking parameters:
net.ipv4.ip_forward = 1
net.ipv4.conf.all.rp_filter = 0
net.ipv4.conf.default.rp_filter = 0
net.bridge.bridge-nf-call-arptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
- Load the updated kernel parameters:
# sysctl -p
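Before loading the parameters, you can check that a sysctl configuration file contains all six required keys. This sketch runs against a scratch copy rather than the real /etc/sysctl.conf (the file path and contents here are examples):

```shell
# Verify that the required kernel parameters are present in a sysctl
# configuration file, working on a scratch copy.
WORK=$(mktemp -d)
cat > "$WORK/sysctl.conf" <<'EOF'
net.ipv4.ip_forward = 1
net.ipv4.conf.all.rp_filter = 0
net.ipv4.conf.default.rp_filter = 0
net.bridge.bridge-nf-call-arptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF

MISSING=0
for key in net.ipv4.ip_forward \
           net.ipv4.conf.all.rp_filter \
           net.ipv4.conf.default.rp_filter \
           net.bridge.bridge-nf-call-arptables \
           net.bridge.bridge-nf-call-ip6tables \
           net.bridge.bridge-nf-call-iptables; do
    # Each required key must appear at the start of a line.
    grep -q "^${key} " "$WORK/sysctl.conf" || { echo "missing: $key"; MISSING=1; }
done
[ "$MISSING" -eq 0 ] && echo "all required kernel parameters present"
```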
8.2.12.3. Configure the L2 Agent
Each Compute node must run an instance of the L2 agent appropriate to the networking plug-in that is in use.
8.2.12.4. Configure Virtual Interface Plugging
When nova-compute creates an instance, it must 'plug' each of the vNICs associated with the instance into an OpenStack Networking controlled virtual switch. Compute must also inform the virtual switch of the OpenStack Networking port identifier associated with each vNIC.

A generic virtual interface driver, nova.virt.libvirt.vif.LibvirtGenericVIFDriver, is provided in Red Hat OpenStack Platform. This driver relies on OpenStack Networking being able to return the type of virtual interface binding required. The following plug-ins support this operation:
- Linux Bridge
- Open vSwitch
- NEC
- BigSwitch
- CloudBase Hyper-V
- Brocade
Use the openstack-config command to set the value of the vif_driver configuration key appropriately:

# openstack-config --set /etc/nova/nova.conf \
   libvirt vif_driver \
   nova.virt.libvirt.vif.LibvirtGenericVIFDriver
Important
- If running Open vSwitch with security groups enabled, use the Open vSwitch specific driver, nova.virt.libvirt.vif.LibvirtHybridOVSBridgeDriver, instead of the generic driver.
- For Linux Bridge environments, you must add the following to the /etc/libvirt/qemu.conf file to ensure that the virtual machine launches properly:

user = "root"
group = "root"
cgroup_device_acl = [
   "/dev/null", "/dev/full", "/dev/zero",
   "/dev/random", "/dev/urandom",
   "/dev/ptmx", "/dev/kvm", "/dev/kqemu",
   "/dev/rtc", "/dev/hpet", "/dev/net/tun",
]
8.2.13. Populate the Compute Service Database
Procedure 8.11. Populating the Compute Service Database
- Log in to a system hosting an instance of the openstack-nova-conductor service.
- Switch to the nova user:
# su nova -s /bin/sh
- Initialize and populate the database identified in /etc/nova/nova.conf:
$ nova-manage db sync
8.2.14. Launch the Compute Services
Procedure 8.12. Launching Compute Services
- Libvirt requires that the messagebus service be enabled and running. Start the service:
# systemctl start messagebus.service
- The Compute service requires that the libvirtd service be enabled and running. Start the service and configure it to start at boot time:
# systemctl start libvirtd.service
# systemctl enable libvirtd.service
- Start the API service on each system that is hosting an instance of it. Note that each API instance should either have its own endpoint defined in the Identity service database or be pointed to by a load balancer that is acting as the endpoint. Start the service and configure it to start at boot time:
# systemctl start openstack-nova-api.service
# systemctl enable openstack-nova-api.service
- Start the scheduler on each system that is hosting an instance of it. Start the service and configure it to start at boot time:
# systemctl start openstack-nova-scheduler.service
# systemctl enable openstack-nova-scheduler.service
- Start the conductor on each system that is hosting an instance of it. Note that it is recommended not to run this service on every Compute node, because doing so eliminates the security benefit of restricting direct database access from the Compute nodes. Start the service and configure it to start at boot time:
# systemctl start openstack-nova-conductor.service
# systemctl enable openstack-nova-conductor.service
- Start the Compute service on every system that is intended to host virtual machine instances. Start the service and configure it to start at boot time:
# systemctl start openstack-nova-compute.service
# systemctl enable openstack-nova-compute.service
- Depending on your environment configuration, you may also need to start the following services:
- openstack-nova-cert
- The X509 certificate service, required if you intend to use the EC2 API to the Compute service.
Note
To use the EC2 API to the Compute service, you must set the options in the nova.conf configuration file. For more information, see the Configuring the EC2 API section in the Red Hat OpenStack Platform Configuration Reference Guide. This document is available from the following link:
- openstack-nova-network
- The Nova networking service. Note that you must not start this service if you have installed and configured, or intend to install and configure, OpenStack Networking.
- openstack-nova-objectstore
- The Nova object storage service. It is recommended that the Object Storage service (Swift) is used for new deployments.
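The repeated start/enable pairs for the four core Compute services can be generated in a loop. This sketch only prints the systemctl commands for review rather than executing them, since running them requires systemd and the installed services:

```shell
# Generate the start/enable command pairs for the core Compute services.
# The commands are printed for review, not executed.
SERVICES="openstack-nova-api openstack-nova-scheduler \
          openstack-nova-conductor openstack-nova-compute"

COMMANDS=$(for svc in $SERVICES; do
    printf 'systemctl start %s.service\n' "$svc"
    printf 'systemctl enable %s.service\n' "$svc"
done)
echo "$COMMANDS"
```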