Installation Reference
Installation Reference for Red Hat OpenStack Platform
Abstract
This guide serves as a reference for installing and configuring the individual components of a Red Hat OpenStack Platform environment.
Chapter 1. Introduction
Red Hat OpenStack Platform is implemented as a collection of interacting services. This guide covers the installation of the following components:
- The MariaDB Database Service
- The RabbitMQ Message Broker
- The Identity Service
- The Object Storage Service
- The Image Service
- The Block Storage Service
- OpenStack Networking
- The Compute Service
- The Orchestration Service
- The Dashboard
- The Data Processing Service
- The Telemetry Service
- The Time-Series-as-a-Service
- The File Share Service (Technology Preview)
- The Database-as-a-Service (Technology Preview)
1.1. Subscribe to the Required Channels
Procedure 1.1. Subscribing to the Required Channels
- Register your system with the Content Delivery Network, entering your Customer Portal user name and password when prompted:
#
subscription-manager register
- Obtain detailed information about the Red Hat OpenStack Platform subscription available to you:
#
subscription-manager list --available --matches '*OpenStack Platform*'
This command should print output similar to the following:
+-------------------------------------------+
    Available Subscriptions
+-------------------------------------------+
Subscription Name:   Red Hat Enterprise Linux OpenStack Platform, Standard (2-sockets)
Provides:            Red Hat Beta
                     ...
                     Red Hat OpenStack
                     ...
SKU:                 ABC1234
Contract:            12345678
Pool ID:             0123456789abcdef0123456789abcdef
Provides Management: No
Available:           Unlimited
Suggested:           1
Service Level:       Standard
Service Type:        L1-L3
Subscription Type:   Stackable
Ends:                12/31/2099
System Type:         Virtual
- Use the Pool ID printed by this command to attach the Red Hat OpenStack Platform entitlement:
#
subscription-manager attach --pool=POOL_ID
Replace POOL_ID with the Pool ID value from the previous step.
- Disable any irrelevant channels and enable the required channels:
#
subscription-manager repos --disable=* \
--enable=rhel-7-server-rpms \
--enable=rhel-7-server-openstack-8-rpms \
--enable=rhel-7-server-rh-common-rpms \
--enable=rhel-7-server-extras-rpms
- Run the yum update command and reboot to ensure that the most up-to-date packages, including the kernel, are installed and running:
#
yum update
#
reboot
You can run the yum repolist command to confirm the repository configuration again at any time.
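For reference, yum repolist prints one line per enabled repository; after the steps above, only the four repositories enabled earlier should appear. An illustrative listing (repository labels and package counts shown here are placeholders, not real output):
repo id                            repo name                                              status
rhel-7-server-extras-rpms          Red Hat Enterprise Linux 7 Server - Extras (RPMs)      N
rhel-7-server-openstack-8-rpms     Red Hat OpenStack Platform 8 (RPMs)                    N
rhel-7-server-rh-common-rpms       Red Hat Enterprise Linux 7 Server - RH Common (RPMs)   N
rhel-7-server-rpms                 Red Hat Enterprise Linux 7 Server (RPMs)               N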
1.2. Installation Prerequisites Checklists
Note
The procedures in this guide require:
- root access to the host machine (to install components and perform other administrative tasks such as updating the firewall).
- Administrative access to the Identity service.
- Administrative access to the database (ability to add both databases and users).
Item | Description | Value/Verified |
---|---|---|
Hardware requirements | Hardware requirements must be verified. | Yes / No |
Operating system | Red Hat Enterprise Linux 7.1 Server | Yes / No |
Red Hat subscription | You must have a subscription that entitles your systems to receive the required updates. | Yes / No |
Administrative access on all installation machines | Almost all procedures in this guide must be performed as the root user, so you must have root access. | Yes / No |
Red Hat subscription user name and password | You must know the Red Hat subscription user name and password. | |
Machine addresses | You must know the IP address or host name of the server or servers on which any OpenStack components and supporting software will be installed. | Provide host addresses for the following services: |
Item | Description | Value |
---|---|---|
Host access | The system hosting the Identity service must have access to the following components: | Verify whether the system has access to the following components: |
SSL certificates | If you are using external SSL certificates, you must know where the database and certificates are located, and have access to them. | Yes / No |
LDAP information | If you are using LDAP, you must have administrative access to configure a new directory server schema. | Yes / No |
Connections | The system hosting the Identity service must have a connection to all other OpenStack services. | Yes / No |
Item | Description | Value |
---|---|---|
File system | Red Hat currently supports the XFS and ext4 file systems for object storage; one of these must be available. | |
Mount point | The /srv/node mount point must be available. | Yes / No |
Connections | The system hosting the Object Storage service requires a connection to the Identity service. | Yes / No |
Item | Description | Value |
---|---|---|
Back-end storage | The Image service supports a number of storage back ends. You must decide on one of the following: | Storage type: |
Connections | The server hosting the Image service must have a connection to the Identity service, the dashboard service, and the Compute services. The server must also have access to the Object Storage service if it is using Object Storage as its back end. | Yes / No |
Item | Description | Value |
---|---|---|
Back-end storage | The Block Storage service supports a number of storage back ends. You must decide on one of the following: | Storage type: |
Connections | The server hosting the Block Storage service must have a connection to the Identity service, the dashboard service, and the Compute services. | Yes / No |
Item | Description | Value |
---|---|---|
Plug-in agents | In addition to the standard OpenStack Networking components, a number of plug-in agents are also available that implement various networking mechanisms. You must decide which of these apply to your network and install them. | Circle the appropriate plug-in: |
Connections | The server hosting OpenStack Networking must have a connection to the Identity service, the dashboard service, and the Compute services. | Yes / No |
Item | Description | Value |
---|---|---|
Hardware virtualization support | The Compute service requires hardware virtualization support. | Yes / No |
VNC client | The Compute service supports Virtual Network Computing (VNC) console access to instances through a web browser. You must decide whether this will be provided to your users. | Yes / No |
Resources: CPU and memory | OpenStack supports overcommitting of CPU and memory resources on Compute nodes: | Decide: |
Resources: host | You can reserve resources for the host, to prevent a given amount of memory and disk resources from being automatically assigned to other resources on the host. | Decide: |
libvirt version | You must know the version of libvirt that you are using in order to configure Virtual Interface Plugging. | Version: |
Connections | The server or servers hosting the Compute service must have a connection to all other OpenStack services. | Yes / No |
Item | Description | Value |
---|---|---|
Host software | The system hosting the dashboard service must have the following packages already installed: | Yes / No |
Connections | The system hosting the dashboard service must have a connection to all other OpenStack services. | Yes / No |
Chapter 2. Prerequisites
This chapter explains how to configure each node to use iptables to provide firewall capabilities. It also explains how to install the database service and message broker used by all components in the Red Hat OpenStack Platform environment. The MariaDB database service provides the tools to create and access the databases required for each component. The RabbitMQ message broker allows internal communication between the components. Messages can be sent from and received by any component that is configured to use the message broker.
2.1. Configure the Firewall
Configure the server or servers hosting each component to use iptables. This involves disabling the Network Manager service, and configuring the server to use the firewall capabilities provided by iptables instead of those provided by firewalld. All further firewall configuration in this document uses iptables.
2.1.1. Disable Network Manager
OpenStack Networking does not work on systems that have the Network Manager service enabled. All of the following steps must be performed on each applicable server while logged in as the root user. This includes the server that will host OpenStack Networking, all network nodes, and all Compute nodes.
Procedure 2.1. Disabling the Network Manager Service
- Verify whether Network Manager is currently enabled:
#
systemctl status NetworkManager.service | grep Active:
- The system displays an error if the Network Manager service is not currently installed. If this error is displayed, no further action is required to disable the Network Manager service.
- The system displays Active: active (running) if Network Manager is running, or Active: inactive (dead) if it is not. If Network Manager is inactive, no further action is required.
- If Network Manager is running, stop it and then disable it:
#
systemctl stop NetworkManager.service
#
systemctl disable NetworkManager.service
- Open each interface configuration file on the system in a text editor. Interface configuration files are found in the /etc/sysconfig/network-scripts/ directory and have names in the format ifcfg-X, where X is replaced by the name of the interface. Valid interface names include eth0, p1p5, and em1. To ensure that the standard network service takes control of the interfaces and automatically activates them on boot, confirm that the following keys are set in each interface configuration file, or add them manually:
NM_CONTROLLED=no
ONBOOT=yes
- Start the standard network service:
#
systemctl start network.service
- Configure the network service to start at boot time:
#
systemctl enable network.service
2.1.2. Disable the firewalld Service
Disable the firewalld service for Compute and OpenStack Networking nodes, and enable the iptables service.
Procedure 2.2. Disabling the firewalld Service
- Install the iptables service:
#
yum install iptables-services
- Review the iptables rules defined in /etc/sysconfig/iptables:
Note
You can review your current firewalld configuration:
#
firewall-cmd --list-all
- When you are satisfied with the iptables rules, disable firewalld:
#
systemctl disable firewalld.service
- Stop the firewalld service and start the iptables services:
#
systemctl stop firewalld.service; systemctl start iptables.service; systemctl start ip6tables.service
- Configure the iptables services to start at boot time:
#
systemctl enable iptables.service
#
systemctl enable ip6tables.service
2.2. Install the Database Server
2.2.1. Install the MariaDB Database Packages
- mariadb-galera-server
- Provides the MariaDB database service.
- mariadb-galera-common
- Provides the MariaDB service shared files. This package is installed as a dependency of the mariadb-galera-server package.
- galera
- Installs the Galera wsrep (Write Set REPlication) provider. This package is installed as a dependency of the mariadb-galera-server package.
#
yum install mariadb-galera-server
2.2.2. Configure the Firewall to Allow Database Traffic
All steps in this procedure must be performed on the server hosting the database service, while logged in as the root user.
Procedure 2.3. Configuring the Firewall to Allow Database Traffic
- Open the /etc/sysconfig/iptables file in a text editor.
- Add an INPUT rule allowing TCP traffic on port 3306 to the file. The new rule must appear before any INPUT rules that REJECT traffic:
-A INPUT -p tcp -m multiport --dports 3306 -j ACCEPT
- Save the changes to the /etc/sysconfig/iptables file.
- Restart the iptables service to ensure that the change takes effect:
#
systemctl restart iptables.service
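Because iptables evaluates rules from top to bottom, the ACCEPT rule must precede any REJECT rule. A sketch of how the relevant portion of /etc/sysconfig/iptables might look after the edit (the surrounding rules are illustrative Red Hat Enterprise Linux defaults, not part of the original procedure):
-A INPUT -m state --state RELATED,ESTABLISHED -j ACCEPT
-A INPUT -p tcp -m multiport --dports 3306 -j ACCEPT
-A INPUT -j REJECT --reject-with icmp-host-prohibited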
2.2.3. Start the Database Service
All steps in this procedure must be performed on the server hosting the database service, while logged in as the root user.
Procedure 2.4. Starting the Database Service
- Start the mariadb service:
#
systemctl start mariadb.service
- Configure the mariadb service to start at boot time:
#
systemctl enable mariadb.service
2.2.4. Configure the Database Administrator Account
By default, MariaDB creates a database user named root that provides access to the MariaDB service from the machine on which the MariaDB service was installed. You must set a password for this account to secure access to the server hosting the MariaDB service. You must also enable access to the MariaDB service from machines other than the machine on which the MariaDB server is installed. It is also recommended that you remove the anonymous user and test database that are created during installation.
Procedure 2.5. Configuring the Database Administrator Account
- Log in to the machine on which the MariaDB service is installed.
- Use the mysql_secure_installation command to set the root password, allow remote root login, and remove the anonymous user account and test database:
#
mysql_secure_installation
Note
To change the database root password after it has been set, run mysqladmin with the -p option followed immediately (with no space) by the old password:
#
mysqladmin -u root -pOLDPASS password NEWPASS
2.2.5. Test Connectivity
2.2.5.1. Test Local Connectivity
Procedure 2.6. Testing Local Connectivity
- Connect to the database service, replacing USER with the user name with which to connect:
#
mysql -u USER -p
- Enter the password of the database user when prompted.
Enter password:
2.2.5.2. Test Remote Connectivity
Procedure 2.7. Testing Remote Connectivity
- Install the MySQL client tools:
#
yum install mysql
- Connect to the database service, replacing USER with the database user name and HOST with the IP address or host name of the server hosting the database service:
#
mysql -u USER -h HOST -p
- Enter the password of the database user when prompted:
Enter password:
2.3. Install the Message Broker
The RabbitMQ message broker provides messaging for the following OpenStack services:
- Block Storage service
- Compute service
- OpenStack Networking
- Orchestration service
- Image service
- Telemetry service
2.3.1. Install the RabbitMQ Message Broker Package
Install the rabbitmq-server package:
#
yum install rabbitmq-server
2.3.2. Configure the Firewall for Message Broker Traffic
The message broker uses port 5672. All steps in this procedure must be performed on the server hosting the messaging service, while logged in as the root user.
Procedure 2.8. Configuring the Firewall for Message Broker Traffic
- Open the /etc/sysconfig/iptables file in a text editor.
- Add an INPUT rule allowing incoming connections on port 5672. The new rule must appear before any INPUT rules that REJECT traffic:
-A INPUT -p tcp -m tcp --dport 5672 -j ACCEPT
- Save the changes to the /etc/sysconfig/iptables file.
- Restart the iptables service for the firewall changes to take effect:
#
systemctl restart iptables.service
2.3.3. Launch and Configure the RabbitMQ Message Broker
Procedure 2.9. Launching and Configuring the RabbitMQ Message Broker for Use with OpenStack
- Launch the rabbitmq-server service and configure it to start at boot time:
#
systemctl start rabbitmq-server.service
#
systemctl enable rabbitmq-server.service
- When the rabbitmq-server package is installed, a guest user with a default guest password is automatically created for the RabbitMQ service. Red Hat strongly advises that you change this default password, especially if you have IPv6 available. With IPv6, RabbitMQ may be accessible from outside the network. Change the default guest password:
#
rabbitmqctl change_password guest NEW_RABBITMQ_PASS
Replace NEW_RABBITMQ_PASS with a more secure password.
Replace NEW_RABBITMQ_PASS with a more secure password. - Create a RabbitMQ user account for the Block Storage service, the Compute service, OpenStack Networking, the Orchestration service, the Image service, and the Telemetry service:
#
rabbitmqctl add_user cinder CINDER_PASS
#
rabbitmqctl add_user nova NOVA_PASS
#
rabbitmqctl add_user neutron NEUTRON_PASS
#
rabbitmqctl add_user heat HEAT_PASS
#
rabbitmqctl add_user glance GLANCE_PASS
#
rabbitmqctl add_user ceilometer CEILOMETER_PASS
Replace CINDER_PASS, NOVA_PASS, NEUTRON_PASS, HEAT_PASS, GLANCE_PASS, and CEILOMETER_PASS with secure passwords for each service.
Replace CINDER_PASS, NOVA_PASS, NEUTRON_PASS, HEAT_PASS, GLANCE_PASS, and CEILOMETER_PASS with secure passwords for each service. - Grant each of these RabbitMQ users read and write permissions to all resources:
#
rabbitmqctl set_permissions cinder ".*" ".*" ".*"
#
rabbitmqctl set_permissions nova ".*" ".*" ".*"
#
rabbitmqctl set_permissions neutron ".*" ".*" ".*"
#
rabbitmqctl set_permissions heat ".*" ".*" ".*"
#
rabbitmqctl set_permissions glance ".*" ".*" ".*"
#
rabbitmqctl set_permissions ceilometer ".*" ".*" ".*"
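As an optional check (not part of the original procedure), you can confirm the accounts and their permissions with the standard rabbitmqctl listing subcommands:
#
rabbitmqctl list_users
#
rabbitmqctl list_permissions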
2.3.4. Enable SSL on the RabbitMQ Message Broker
The RabbitMQ message broker has built-in support for SSL. Create the certificates required for secure communication, and enable SSL support through the /etc/rabbitmq/rabbitmq.config configuration file.
Procedure 2.10. Enabling SSL on the RabbitMQ Message Broker
- Create a directory in which to store the required certificates:
#
mkdir /etc/pki/rabbitmq
- Choose a secure certificate password and store it in a file within the /etc/pki/rabbitmq directory:
#
echo SSL_RABBITMQ_PW > /etc/pki/rabbitmq/certpw
Replace SSL_RABBITMQ_PW with a certificate password. This password will be used later for further securing the necessary certificates.
Replace SSL_RABBITMQ_PW with a certificate password. This password will be used later for further securing the necessary certificates. - Set the permissions for the certificate directory and password file:
#
chmod 700 /etc/pki/rabbitmq
#
chmod 600 /etc/pki/rabbitmq/certpw
- Create the certificate database files (*.db) in the /etc/pki/rabbitmq directory, using the password in the /etc/pki/rabbitmq/certpw file:
#
certutil -N -d /etc/pki/rabbitmq -f /etc/pki/rabbitmq/certpw
- For a production environment, it is recommended that you use a reputable third-party Certificate Authority (CA) to sign your certificates. Create a Certificate Signing Request (CSR) for a third-party CA:
#
certutil -R -d /etc/pki/rabbitmq -s "CN=RABBITMQ_HOST" \
-a -f /etc/pki/rabbitmq/certpw > RABBITMQ_HOST.csr
Replace RABBITMQ_HOST with the IP or host name of the server hosting the RabbitMQ message broker. This command produces a CSR named RABBITMQ_HOST.csr and a key file (keyfile.key). The key file will be used later when configuring the RabbitMQ message broker to use SSL.
Note
Some CAs may require additional values other than "CN=RABBITMQ_HOST".
- Provide RABBITMQ_HOST.csr to your third-party CA for signing. Your CA should provide you with a signed certificate (server.crt) and a CA file (ca.crt). Add these files to your certificate database:
#
certutil -A -d /etc/pki/rabbitmq -n RABBITMQ_HOST -f /etc/pki/rabbitmq/certpw \
-t u,u,u -a -i /path/to/server.crt
#
certutil -A -d /etc/pki/rabbitmq -n "Your CA certificate" \
-f /etc/pki/rabbitmq/certpw -t CT,C,C -a -i /path/to/ca.crt
- Configure the RabbitMQ message broker to use the certificate files for secure communications. Open the /etc/rabbitmq/rabbitmq.config configuration file in a text editor, and edit the rabbit section as follows:
- Find the line that reads:
%% {ssl_listeners, [5671]},
Uncomment the setting by removing the percent signs:
{ssl_listeners, [5671]},
- Scroll down to the line that reads:
%% {ssl_options, [{cacertfile, "/path/to/testca/cacert.pem"},
Replace this line and the next few lines, which comprise the ssl_options section, with the following content:
{ssl_options, [{cacertfile, "/path/to/ca.crt"},
               {certfile, "/path/to/server.crt"},
               {keyfile, "/path/to/keyfile.key"},
               {verify, verify_peer},
               {versions, ['tlsv1.2','tlsv1.1',tlsv1]},
               {fail_if_no_peer_cert, false}]}
- Replace /path/to/ca.crt with the absolute path to the CA certificate.
- Replace /path/to/server.crt with the absolute path to the signed certificate.
- Replace /path/to/keyfile.key with the absolute path to the key file.
- Disable SSLv3 by editing the rabbitmq.config file to include support for only specific TLS encryption versions:
{rabbit, [
  {ssl_options, [{versions, ['tlsv1.2','tlsv1.1',tlsv1]}]}
]}
- Restart the RabbitMQ service for the change to take effect:
# systemctl restart rabbitmq-server.service
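As a quick verification (an optional step, not part of the original procedure), you can confirm that the SSL listener is accepting connections with the openssl s_client tool:
#
openssl s_client -connect RABBITMQ_HOST:5671
The output should include the certificate chain added to the certificate database above.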
2.3.5. Export an SSL Certificate for Clients
When SSL is enabled on the message broker, clients need a copy of the SSL certificate to verify the connection. Export the client certificate and its private key from the server's certificate database, then convert the PKCS#12 file to PEM format:
#
pk12util -o <p12exportfile> -n <certname> -d <certdir> -w <p12filepwfile>
#
openssl pkcs12 -in <p12exportfile> -out <clcertname> -nodes -clcerts -passin pass:<p12pw>
For more information, see the pk12util and openssl manual pages.
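For illustration, using the certificate database and names from the previous section (the output file names here are hypothetical):
#
pk12util -o rabbitmq-client.p12 -n RABBITMQ_HOST -d /etc/pki/rabbitmq -w /etc/pki/rabbitmq/certpw
#
openssl pkcs12 -in rabbitmq-client.p12 -out rabbitmq-client.pem -nodes -clcerts -passin pass:SSL_RABBITMQ_PW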
2.4. Network Time Protocol
Use the Network Time Protocol (NTP) on each system in your environment to keep the clocks of all OpenStack services synchronized.
Important
Configure every node to use the same set of time sources. Inconsistent clocks between nodes can cause token validation and scheduling failures.
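A minimal sketch of one way to configure this on Red Hat Enterprise Linux 7, assuming the ntp package and an illustrative time source (the server name is a placeholder):
#
yum install ntp
Add your time sources to /etc/ntp.conf, for example:
server 0.rhel.pool.ntp.org iburst
Then start the service and configure it to start at boot time:
#
systemctl start ntpd.service
#
systemctl enable ntpd.service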
Chapter 3. Install the Identity Service
3.1. Install the Identity Service Packages
- openstack-keystone
- Provides the OpenStack Identity service.
- openstack-utils
- Provides supporting utilities to assist with a number of tasks, including the editing of configuration files.
- openstack-selinux
- Provides OpenStack-specific SELinux policy modules.
#
yum install -y openstack-keystone \
openstack-utils \
openstack-selinux
3.2. Create the Identity Database
All steps in this procedure must be performed on the database server, while logged in as the root user.
Procedure 3.1. Creating the Identity Service Database
- Connect to the database service:
#
mysql -u root -p
- Create the keystone database:
mysql> CREATE DATABASE keystone;
- Create a keystone database user and grant the user access to the keystone database:
mysql> GRANT ALL ON keystone.* TO 'keystone'@'%' IDENTIFIED BY 'PASSWORD';
mysql> GRANT ALL ON keystone.* TO 'keystone'@'localhost' IDENTIFIED BY 'PASSWORD';
Replace PASSWORD with a secure password that will be used to authenticate with the database server as this user.
- Flush the database privileges to ensure that they take effect immediately:
mysql> FLUSH PRIVILEGES;
- Exit the mysql client:
mysql> quit
3.3. Configure the Identity Service
3.3.1. Configure the Identity Service Database Connection
The database connection string used by the Identity service is defined in the /etc/keystone/keystone.conf file. It must be updated to point to a valid database server before starting the service.
All steps in this procedure must be performed on the server hosting the Identity service, while logged in as the root user.
Procedure 3.2. Configuring the Identity Service SQL Database Connection
- Set the value of the connection configuration key:
#
openstack-config --set /etc/keystone/keystone.conf \
sql connection mysql://USER:PASS@IP/DB
Replace the following values:
- Replace USER with the Identity service database user name, usually keystone.
- Replace PASS with the password of the database user.
- Replace IP with the IP address or host name of the database server.
- Replace DB with the name of the Identity service database, usually keystone.
Important
The IP address or host name specified in the connection configuration key must match the IP address or host name to which the Identity service database user was granted access when creating the Identity service database. Moreover, if the database is hosted locally and you granted permissions to 'localhost' when creating the database, you must enter 'localhost'.
3.3.2. Set the Identity Service Administration Token
All steps in this procedure must be performed on the server hosting the Identity service, while logged in as the root user.
Procedure 3.3. Setting the Identity Service Administration Token
- Generate an initial service token and save it in the OS_SERVICE_TOKEN environment variable:
#
export OS_SERVICE_TOKEN=$(openssl rand -hex 10)
- Store the value of the administration token in a file for future use:
#
echo $OS_SERVICE_TOKEN > ~/ks_admin_token
- Set the value of the admin_token configuration key to that of the newly created token:
#
openstack-config --set /etc/keystone/keystone.conf \
DEFAULT admin_token $OS_SERVICE_TOKEN
Note
Expired tokens accumulate in the database over time. It is recommended that you flush them periodically, for example from a cron job:
#
keystone-manage token_flush
3.3.3. Configure the Public Key Infrastructure
3.3.3.1. Public Key Infrastructure Overview
The Identity service generates tokens, which are cryptographically signed documents that users and other services use for authentication. The certificates and keys used to sign tokens can be generated using the keystone-manage pki_setup command. It is, however, possible to manually create and sign the required certificates using a third party certificate authority. If using third party certificates the Identity service configuration must be manually updated to point to the certificates and supporting files.
The configuration keys relevant to PKI setup appear in the [signing] section of the /etc/keystone/keystone.conf configuration file. These keys are:
- ca_certs
- Specifies the location of the certificate for the authority that issued the certificate denoted by the certfile configuration key. The default value is /etc/keystone/ssl/certs/ca.pem.
- ca_key
- Specifies the key of the certificate authority that issued the certificate denoted by the certfile configuration key. The default value is /etc/keystone/ssl/certs/cakey.pem.
- ca_password
- Specifies the password, if applicable, required to open the certificate authority file. The default action if no value is specified is not to use a password.
- certfile
- Specifies the location of the certificate that must be used to verify tokens. The default value of /etc/keystone/ssl/certs/signing_cert.pem is used if no value is specified.
- keyfile
- Specifies the location of the private key that must be used when signing tokens. The default value of /etc/keystone/ssl/private/signing_key.pem is used if no value is specified.
- token_format
- Specifies the algorithm to use when generating tokens. Possible values are UUID and PKI. The default value is PKI.
3.3.3.2. Create the Public Key Infrastructure Files
All steps in this procedure must be performed on the server hosting the Identity service, while logged in as the root user.
Procedure 3.4. Creating the PKI Files to be Used by the Identity Service
- Run the keystone-manage pki_setup command:
#
keystone-manage pki_setup \
--keystone-user keystone \
--keystone-group keystone
- Ensure that the keystone user owns the /var/log/keystone/ and /etc/keystone/ssl/ directories:
#
chown -R keystone:keystone /var/log/keystone \
/etc/keystone/ssl/
3.3.3.3. Configure the Identity Service to Use Public Key Infrastructure Files
After generating the PKI files, configure the Identity service to use them. Set the following configuration keys in the /etc/keystone/keystone.conf file:
#
openstack-config --set /etc/keystone/keystone.conf \
signing token_format PKI
#
openstack-config --set /etc/keystone/keystone.conf \
signing certfile /etc/keystone/ssl/certs/signing_cert.pem
#
openstack-config --set /etc/keystone/keystone.conf \
signing keyfile /etc/keystone/ssl/private/signing_key.pem
#
openstack-config --set /etc/keystone/keystone.conf \
signing ca_certs /etc/keystone/ssl/certs/ca.pem
#
openstack-config --set /etc/keystone/keystone.conf \
signing key_size 1024
#
openstack-config --set /etc/keystone/keystone.conf \
signing valid_days 3650
#
openstack-config --set /etc/keystone/keystone.conf \
signing ca_password None
You can also update these values directly by editing the /etc/keystone/keystone.conf file.
3.3.4. Configure the Firewall to Allow Identity Service Traffic
All steps in this procedure must be performed on the server hosting the Identity service, while logged in as the root user.
Procedure 3.5. Configuring the Firewall to Allow Identity Service Traffic
- Open the /etc/sysconfig/iptables file in a text editor.
- Add an INPUT rule allowing TCP traffic on ports 5000 and 35357 to the file. The new rule must appear before any INPUT rules that REJECT traffic:
-A INPUT -p tcp -m multiport --dports 5000,35357 -j ACCEPT
- Save the changes to the /etc/sysconfig/iptables file.
- Restart the iptables service to ensure that the change takes effect:
#
systemctl restart iptables.service
3.3.5. Populate the Identity Service Database
Procedure 3.6. Populating the Identity Service Database
- Log in to the system hosting the Identity service.
- Switch to the keystone user, and initialize and populate the database identified in /etc/keystone/keystone.conf:
#
su keystone -s /bin/sh -c "keystone-manage db_sync"
3.3.6. Limit the Number of Entities in a Collection
Procedure 3.7. Limiting the Number of Entities in a Collection
- Open the /etc/keystone/keystone.conf file in a text editor.
- Set a global value using list_limit in the [DEFAULT] section.
- Optionally override the global value with a specific limit in individual sections. For example:
[assignment]
list_limit = 100
If a response to a list_{entity} call has been truncated, the response status code will still be 200 (OK), but the truncated attribute in the collection will be set to true.
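For illustration, a truncated collection might be flagged like this (a hypothetical, abbreviated JSON response):
{
    "users": [ ... ],
    "truncated": true
}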
3.4. Start the Identity Service
All steps in this procedure must be performed on the server hosting the Identity service, while logged in as the root user.
Procedure 3.8. Launching the Identity Service
- Start the openstack-keystone service:
#
systemctl start openstack-keystone.service
- Configure the openstack-keystone service to start at boot time:
#
systemctl enable openstack-keystone.service
3.5. Create an Administrator Account
Procedure 3.9. Creating an Administrator Account
- Set the OS_SERVICE_TOKEN environment variable to the value of the administration token. This is done by reading the token file created when setting the administration token:
#
export OS_SERVICE_TOKEN=`cat ~/ks_admin_token`
- Set the OS_SERVICE_ENDPOINT environment variable to point to the server hosting the Identity service:
#
export OS_SERVICE_ENDPOINT="http://IP:35357/v2.0"
Replace IP with the IP address or host name of your Identity server.
Replace IP with the IP address or host name of your Identity server. - Create an
admin
user:#
keystone user-create --name admin --pass PASSWORD
+----------+-----------------------------------+ | Property | Value | +----------+-----------------------------------+ | email | | | enabled | True | | id | 94d659c3c9534095aba5f8475c87091a | | name | admin | | tenantId | | +----------+-----------------------------------+Replace PASSWORD with a secure password for the account. - Create an
admin
role:#
keystone role-create --name admin
+----------+----------------------------------+ | Property | Value | +----------+----------------------------------+ | id | 78035c5d3cd94e62812d6d37551ecd6a | | name | admin | +----------+----------------------------------+ - Create an
admin
tenant:#
keystone tenant-create --name admin
+-------------+----------------------------------+ | Property | Value | +-------------+----------------------------------+ | description | | | enabled | True | | id | 6f8e3e36c4194b86b9a9b55d4b722af3 | | name | admin | +-------------+----------------------------------+ - Link the
admin
user and theadmin
role together in the context of theadmin
tenant:#
keystone user-role-add --user admin --role admin --tenant admin
- The newly-created
admin
account will be used for future management of the Identity service. To facilitate authentication, create akeystonerc_admin
file in a secure location such as the home directory of theroot
user.Add these lines to the file to set the environment variables that will be used for authentication:unset OS_SERVICE_TOKEN unset OS_SERVICE_ENDPOINT export OS_USERNAME=admin export OS_TENANT_NAME=admin export OS_PASSWORD=PASSWORD export OS_AUTH_URL=http://IP:35357/v2.0/ export PS1='[\u@\h \W(keystone_admin)]\$ '
Replace PASSWORD with the password of theadmin
user, and replace IP with the IP address or host name of the Identity server. - Load the environment variables used for authentication:
#
source ~/keystonerc_admin
3.6. Create the Identity Service Endpoint
The Identity service endpoint must now be defined so that client applications can locate the service. All steps in this procedure must be performed on the Identity server, while logged in as the root user.
Procedure 3.10. Creating the Identity Service Endpoint
- Set up the shell to access Keystone as the admin user:
#
source ~/keystonerc_admin
- Set the OS_SERVICE_TOKEN environment variable to the administration token. This is done by reading the token file created when setting the administration token:
[(keystone_admin)]#
export OS_SERVICE_TOKEN=`cat ~/ks_admin_token`
- Set the OS_SERVICE_ENDPOINT environment variable to point to the server hosting the Identity service:
[(keystone_admin)]#
export OS_SERVICE_ENDPOINT='http://IP:35357/v2.0'
Replace IP with the IP address or host name of the Identity server.
- Create a service entry for the Identity service:
[(keystone_admin)]#
keystone service-create --name=keystone --type=identity \
--description="Keystone Identity service"
+-------------+----------------------------------+
| Property    | Value                            |
+-------------+----------------------------------+
| description | Keystone Identity service        |
| enabled     | True                             |
| id          | a8bff1db381f4751bd8ac126464511ae |
| name        | keystone                         |
| type        | identity                         |
+-------------+----------------------------------+
- Create an endpoint entry for the v2.0 API Identity service:
[(keystone_admin)]#
keystone endpoint-create \
--service keystone \
--publicurl 'https://IP:443/v2.0' \
--adminurl 'https://IP:443/v2.0' \
--internalurl 'https://IP:5000/v2.0' \
--region 'RegionOne'
+-------------+----------------------------------+
| Property    | Value                            |
+-------------+----------------------------------+
| adminurl    | https://IP:443/v2.0              |
| id          | 1295011fdc874a838f702518e95a0e13 |
| internalurl | https://IP:5000/v2.0             |
| publicurl   | https://IP:443/v2.0              |
| region      | RegionOne                        |
| service_id  | ID                               |
+-------------+----------------------------------+
Replace IP with the IP address or host name of the Identity server.
Note
By default, the endpoint is created in the default region, RegionOne. If you need to specify a different region when creating an endpoint, use the --region argument.
3.6.1. Service Regions
Each service cataloged in the Identity service is identified by its region, which typically represents a geographical location, and its endpoint. Endpoints are created in the default region, RegionOne, unless otherwise specified.
Specify the --region argument when adding service endpoints:
[(keystone_admin)]#
keystone endpoint-create --region 'RegionOne' \
--service SERVICENAME \
--publicurl PUBLICURL \
--adminurl ADMINURL \
--internalurl INTERNALURL
Example 3.1. Endpoints Within Discrete Regions
In the following example, the APAC and EMEA regions share an Identity server (identity.example.com) endpoint, while providing region-specific Compute API endpoints:
$
keystone endpoint-list
+---------+--------+------------------------------------------------------+
| id | region | publicurl |
+---------+--------+------------------------------------------------------+
| 0d8b... | APAC | http://identity.example.com:5000/v3 |
| 769f... | EMEA | http://identity.example.com:5000/v3 |
| 516c... | APAC | http://nova-apac.example.com:8774/v2/%(tenant_id)s |
| cf7e... | EMEA | http://nova-emea.example.com:8774/v2/%(tenant_id)s |
+---------+--------+------------------------------------------------------+
3.7. Create a Regular User Account
Procedure 3.11. Creating a Regular User Account
- Set up the shell to access keystone as the administrative user:
#
source ~/keystonerc_admin
- Create a tenant:
[(keystone_admin)]#
keystone tenant-create --name TENANT
+-------------+----------------------------------+
| Property    | Value                            |
+-------------+----------------------------------+
| description |                                  |
| enabled     | True                             |
| id          | 6f8e3e36c4194b86b9a9b55d4b722af3 |
| name        | TENANT                           |
+-------------+----------------------------------+
Replace TENANT with a name for the tenant.
- Create a regular user:
[(keystone_admin)]#
keystone user-create --name USER --tenant TENANT --pass PASSWORD
+----------+----------------------------------+
| Property | Value                            |
+----------+----------------------------------+
| email    |                                  |
| enabled  | True                             |
| id       | b8275d7494dd4c9cb3f69967a11f9765 |
| name     | USER                             |
| tenantId | 6f8e3e36c4194b86b9a9b55d4b722af3 |
| username | USER                             |
+----------+----------------------------------+
Replace USER with a user name for the account. Replace TENANT with the tenant name that you used in the previous step. Replace PASSWORD with a secure password for the account.
Note
The user is associated with Identity's default _member_ role automatically, thanks to the --tenant option.
- To facilitate authentication, create a keystonerc_user file in a secure location (for example, the home directory of the root user). Set the following environment variables to be used for authentication:
export OS_USERNAME=USER
export OS_TENANT_NAME=TENANT
export OS_PASSWORD=PASSWORD
export OS_AUTH_URL=http://IP:5000/v2.0/
export PS1='[\u@\h \W(keystone_user)]\$ '
Replace USER, TENANT, and PASSWORD with the values specified during tenant and user creation. Replace IP with the IP address or host name of the Identity server.
3.8. Create the Services Tenant
Note
Each OpenStack service requires a service user for authenticating with the Identity service. These service users are grouped together in a single services tenant, which is created in the following procedure.
Procedure 3.12. Creating the Services Tenant
- Set up the shell to access keystone as the administrative user:
#
source ~/keystonerc_admin
- Create the services tenant:
[(keystone_admin)]#
keystone tenant-create --name services --description "Services Tenant"
+-------------+----------------------------------+
| Property    | Value                            |
+-------------+----------------------------------+
| description | Services Tenant                  |
| enabled     | True                             |
| id          | 7e193e36c4194b86b9a9b55d4b722af3 |
| name        | services                         |
+-------------+----------------------------------+
Note
To verify that the services tenant was created successfully, list all tenants:
[(keystone_admin)]#
keystone tenant-list
3.9. Validate the Identity Service Installation
Verify that the Identity service installation is functioning correctly. All steps in this procedure must be performed on the Identity server or on another server in the environment, and the logged-in user must have access to the keystonerc_admin and keystonerc_user files containing the environment variables required to authenticate as the administrative user and a regular user, respectively. Also, the system must have the following already installed: httpd, mod_wsgi, and mod_ssl (for security purposes).
Procedure 3.13. Validating the Identity Service Installation
- Set up the shell to access keystone as the administrative user:
#
source ~/keystonerc_admin
- List the users defined in the system:
[(keystone_admin)]#
keystone user-list
+----------------------------------+-------+---------+-------+
| id                               | name  | enabled | email |
+----------------------------------+-------+---------+-------+
| 94d659c3c9534095aba5f8475c87091a | admin | True    |       |
| b8275d7494dd4c9cb3f69967a11f9765 | USER  | True    |       |
+----------------------------------+-------+---------+-------+
The list of users defined in the system is displayed. If the list is not displayed, there is an issue with the installation.
- If the message returned indicates a permissions or authorization issue, check that the administrative user account, tenant, and role were created properly. Also ensure that the three objects are linked correctly:
Unable to communicate with identity service: {"error": {"message": "You are not authorized to perform the requested action: admin_required", "code": 403, "title": "Not Authorized"}}. (HTTP 403)
- If the message returned indicates a connectivity issue, verify that the openstack-keystone service is running and that the firewall service is configured to allow connections on ports 5000 and 35357:
Authorization Failed: [Errno 111] Connection refused
- Set up the shell to access keystone as the regular Identity service user:
#
source ~/keystonerc_user
- Attempt to list the users defined in the system:
[(keystone_user)]#
keystone user-list
Unable to communicate with identity service: {"error": {"message": "You are not authorized to perform the requested action: admin_required", "code": 403, "title": "Not Authorized"}}. (HTTP 403)
An error message is displayed indicating that the user is Not Authorized to run the command. If the error message is not displayed, but the user list appears instead, then the regular user account was incorrectly attached to the admin role.
[(keystone_user)]#
keystone token-get
+-----------+----------------------------------+
| Property  | Value                            |
+-----------+----------------------------------+
| expires   | 2013-05-07T13:00:24Z             |
| id        | 5f6e089b24d94b198c877c58229f2067 |
| tenant_id | f7e8628768f2437587651ab959fbe239 |
| user_id   | 8109f0e3deaf46d5990674443dcf7db7 |
+-----------+----------------------------------+
3.9.1. Troubleshoot Identity Client (keystone) Connectivity Problems
If the Identity client (keystone) is unable to contact the Identity service, it returns an error:
Unable to communicate with identity service: [Errno 113] No route to host. (HTTP 400)
- Identity service is down
- On the system hosting the Identity service, check the service status:
#
openstack-status | grep keystone
openstack-keystone: active
If the service is not running, log in as the root user and start it:
#
service openstack-keystone start
- Firewall is not configured properly
- The firewall might not be configured to allow TCP traffic on ports 5000 and 35357. See Section 3.3.4, “Configure the Firewall to Allow Identity Service Traffic” for instructions on how to correct this.
- On the server hosting the Identity service, check that the endpoints are defined correctly.
Procedure 3.14. Verifying Identity Service Endpoints
- Obtain the administration token:
#
grep admin_token /etc/keystone/keystone.conf
admin_token = 0292d404a88c4f269383ff28a3839ab4
admin_token = 0292d404a88c4f269383ff28a3839ab4 - Unset any pre-defined Identity service-related environment variables:
#
unset OS_USERNAME OS_TENANT_NAME OS_PASSWORD OS_AUTH_URL
- Use the administration token and endpoint to authenticate with the Identity service. Confirm that the Identity service endpoint is correct:
#
keystone --os-token TOKEN \
--os-endpoint ENDPOINT \
endpoint-list
Replace TOKEN with the ID of the administration token. Replace ENDPOINT with the administration endpoint: http://IP:35357/v2.0.
Verify that the listed publicurl, internalurl, and adminurl for the Identity service are correct. In particular, ensure that the IP addresses and port numbers listed within each endpoint are correct and reachable over the network.
#
keystone --os-token=TOKEN \
--os-endpoint=ENDPOINT \
endpoint-delete ID
Replace TOKEN and ENDPOINT with the values identified previously. Replace ID with the identity of the endpoint to remove, as listed by the endpoint-list action.
Chapter 4. Install the Object Service
4.1. Object Storage Service Requirements
- Supported Filesystems
- The Object Storage service stores objects in filesystems. Currently, XFS and ext4 are supported. Your filesystem must be mounted with Extended Attributes (xattr) enabled. It is recommended that you use XFS. Configure this in /etc/fstab:
Example 4.1. Sample /etc/fstab Entry for One XFS Storage Disk
/dev/sdb1 /srv/node/d1 xfs inode64,noatime,nodiratime 0 0
Note
Extended Attributes are already enabled on XFS by default. As such, you do not need to specify user_xattr in your /etc/fstab entry.
- Acceptable Mountpoints
- The Object Storage service expects devices to be mounted at /srv/node/.
4.2. Configure rsyncd
To ensure replication can take place, configure rsyncd for your filesystems before you install and configure the Object Storage service. The following procedure must be performed on each storage node, while logged in as the root user. The procedure assumes that at least two XFS storage disks have been mounted on each storage node.
Example 4.2. Sample /etc/fstab Entry for Two XFS Storage Disks
/dev/sdb1 /srv/node/d1 xfs inode64,noatime,nodiratime 0 0
/dev/sdb2 /srv/node/d2 xfs inode64,noatime,nodiratime 0 0
Procedure 4.1. Configuring rsyncd
- Copy addresses from the controller's /etc/hosts file, and add storage node IP addresses. Also ensure that all nodes have all addresses in their /etc/hosts file.
file. - Install the rsync and xinetd packages:
#
yum install rsync xinetd
- Open the /etc/rsyncd.conf file in a text editor, and add the following lines:
## assumes 'swift' has been used as the Object Storage user/group
uid = swift
gid = swift
log file = /var/log/rsyncd.log
pid file = /var/run/rsyncd.pid
## address on which the rsync daemon listens
address = LOCAL_MGT_NETWORK_IP

[account]
max connections = 2
path = /srv/node/
read only = false
write only = no
list = yes
incoming chmod = 0644
outgoing chmod = 0644
lock file = /var/lock/account.lock

[container]
max connections = 2
path = /srv/node/
read only = false
write only = no
list = yes
incoming chmod = 0644
outgoing chmod = 0644
lock file = /var/lock/container.lock

[object]
max connections = 2
path = /srv/node/
read only = false
write only = no
list = yes
incoming chmod = 0644
outgoing chmod = 0644
lock file = /var/lock/object.lock
Note
Multiple account, container, and object sections can be used.
- Open the /etc/xinetd.d/rsync file, and add the following lines (note that server_args must point to the /etc/rsyncd.conf file created in the previous step):
service rsync
{
    port = 873
    disable = no
    socket_type = stream
    protocol = tcp
    wait = no
    user = root
    group = root
    groups = yes
    server = /usr/bin/rsync
    bind = LOCAL_MGT_NETWORK_IP
    server_args = --daemon --config /etc/rsyncd.conf
}
- Start the xinetd service, and configure it to start at boot time:
#
systemctl start xinetd.service
#
systemctl enable xinetd.service
4.3. Install the Object Storage Service Packages
Primary OpenStack Object Storage Packages
- openstack-swift-proxy
- Proxies requests for objects.
- openstack-swift-object
- Stores data objects of up to 5GB.
- openstack-swift-container
- Maintains a database that tracks all of the objects in each container.
- openstack-swift-account
- Maintains a database that tracks all of the containers in each account.
OpenStack Object Storage Dependencies
- openstack-swift
- Contains code common to the specific services.
- openstack-swift-plugin-swift3
- The swift3 plugin for OpenStack Object Storage.
- memcached
- Soft dependency of the proxy server, caches authenticated clients rather than making them reauthorize at every interaction.
- openstack-utils
- Provides utilities for configuring OpenStack.
- python-swiftclient
- Provides the swift command-line tool.
Procedure 4.2. Installing the Object Storage Service Packages
- Install the required packages:
#
yum install -y openstack-swift-proxy \
openstack-swift-object \
openstack-swift-container \
openstack-swift-account \
openstack-utils \
memcached \
python-swiftclient
4.4. Configure the Object Storage Service
4.4.1. Create the Object Storage Service Identity Records
Create and configure Identity service records for the Object Storage service. This procedure assumes that you have already created an administrative user account and a services tenant; for more information, see Section 3.5, “Create an Administrator Account” and Section 3.8, “Create the Services Tenant”.
Perform this procedure on the Identity server, or on any machine onto which you have copied the keystonerc_admin file and on which the keystone command-line utility is installed.
Procedure 4.3. Creating Identity Records for the Object Storage Service
- Set up the shell to access keystone as the administrative user:
#
source ~/keystonerc_admin
- Create the swift user:
[(keystone_admin)]#
keystone user-create --name swift --pass PASSWORD
+----------+----------------------------------+
| Property | Value                            |
+----------+----------------------------------+
| email    |                                  |
| enabled  | True                             |
| id       | e1765f70da1b4432b54ced060139b46a |
| name     | swift                            |
| username | swift                            |
+----------+----------------------------------+
Replace PASSWORD with a secure password that will be used by the Object Storage service when authenticating with the Identity service.
- Link the swift user and the admin role together within the context of the services tenant:
[(keystone_admin)]#
keystone user-role-add --user swift --role admin --tenant services
- Create the swift Object Storage service entry:
[(keystone_admin)]#
keystone service-create --name swift --type object-store \
--description "Swift Storage Service"
+-------------+----------------------------------+
| Property    | Value                            |
+-------------+----------------------------------+
| description | Swift Storage Service            |
| enabled     | True                             |
| id          | 9e0156e9965241e7a7d9c839884f9c01 |
| name        | swift                            |
| type        | object-store                     |
+-------------+----------------------------------+
- Create the swift endpoint entry:
[(keystone_admin)]#
keystone endpoint-create \
--service swift \
--publicurl 'http://IP:8080/v1/AUTH_%(tenant_id)s' \
--adminurl 'http://IP:8080/v1' \
--internalurl 'http://IP:8080/v1/AUTH_%(tenant_id)s' \
--region 'RegionOne'
Replace IP with the IP address or fully qualified domain name of the server hosting the Object Storage Proxy service.
4.4.2. Configure the Object Storage Service Storage Nodes
All devices that will be used for object storage must be formatted ext4 or XFS, and mounted under the /srv/node/ directory. All of the services that will run on a given node must be enabled, and their ports opened.
Procedure 4.4. Configuring the Object Storage Service Storage Nodes
- Format your devices using the ext4 or XFS filesystem. Ensure that xattrs are enabled.
- Add your devices to the /etc/fstab file to ensure that they are mounted under /srv/node/ at boot time. Use the blkid command to find your device's unique ID, and mount the device using its unique ID.
Note
If using ext4, ensure that extended attributes are enabled by mounting the filesystem with the user_xattr option. (In XFS, extended attributes are enabled by default.)
- Configure the firewall to open the TCP ports used by each service running on each node. By default, the account service uses port 6202, the container service uses port 6201, and the object service uses port 6200.
- Open the /etc/sysconfig/iptables file in a text editor.
- Add an INPUT rule allowing TCP traffic on the ports used by the account, container, and object services. The new rule must appear before any reject-with icmp-host-prohibited rule:
-A INPUT -p tcp -m multiport --dports 6200,6201,6202,873 -j ACCEPT
- Save the changes to the /etc/sysconfig/iptables file.
- Restart the iptables service for the firewall changes to take effect:
#
systemctl restart iptables.service
- Change the owner of the contents of /srv/node/ to swift:swift:
#
chown -R swift:swift /srv/node/
- Set the SELinux context correctly for all directories under /srv/node/:
#
restorecon -R /srv
- Add a hash prefix to the /etc/swift/swift.conf file:
#
openstack-config --set /etc/swift/swift.conf swift-hash swift_hash_path_prefix \
$(openssl rand -hex 10)
- Add a hash suffix to the /etc/swift/swift.conf file:
#
openstack-config --set /etc/swift/swift.conf swift-hash swift_hash_path_suffix \
$(openssl rand -hex 10)
- Set the IP address that the storage services will listen on. Run the following commands for every service on every node in your Object Storage cluster:
#
openstack-config --set /etc/swift/object-server.conf \
DEFAULT bind_ip NODE_IP_ADDRESS
#
openstack-config --set /etc/swift/account-server.conf \
DEFAULT bind_ip NODE_IP_ADDRESS
#
openstack-config --set /etc/swift/container-server.conf \
DEFAULT bind_ip NODE_IP_ADDRESS
Replace NODE_IP_ADDRESS with the IP address of the node you are configuring.
- Copy /etc/swift/swift.conf from the node you are currently configuring to all of your Object Storage service nodes.
Important
The /etc/swift/swift.conf file must be identical on all of your Object Storage service nodes.
- Start the services that will run on the node:
#
systemctl start openstack-swift-account.service
#
systemctl start openstack-swift-container.service
#
systemctl start openstack-swift-object.service
- Configure the services to start at boot time:
#
systemctl enable openstack-swift-account.service
#
systemctl enable openstack-swift-container.service
#
systemctl enable openstack-swift-object.service
4.4.3. Configure the Object Storage Service Proxy Service
The Object Storage proxy service determines to which node gets and puts are directed.
Procedure 4.5. Configuring the Object Storage Service Proxy Service
- Update the configuration file for the proxy server with the correct authentication details for the appropriate service user:
#
openstack-config --set /etc/swift/proxy-server.conf \
filter:authtoken auth_host IP
#
openstack-config --set /etc/swift/proxy-server.conf \
filter:authtoken admin_tenant_name services
#
openstack-config --set /etc/swift/proxy-server.conf \
filter:authtoken admin_user swift
#
openstack-config --set /etc/swift/proxy-server.conf \
filter:authtoken admin_password PASSWORD
Replace the following values:
- Replace IP with the IP address or host name of the Identity server.
- Replace services with the name of the tenant that was created for the Object Storage service (previous examples set this to services).
- Replace swift with the name of the service user that was created for the Object Storage service (previous examples set this to swift).
- Replace PASSWORD with the password associated with the service user.
- Start the memcached and openstack-swift-proxy services:
#
systemctl start memcached.service
#
systemctl start openstack-swift-proxy.service
- Configure the memcached and openstack-swift-proxy services to start at boot time:
#
systemctl enable memcached.service
#
systemctl enable openstack-swift-proxy.service
- Allow incoming connections to the server hosting the Object Storage proxy service. Open the /etc/sysconfig/iptables file in a text editor, and add an INPUT rule allowing TCP traffic on port 8080. The new rule must appear before any INPUT rules that REJECT traffic:
-A INPUT -p tcp -m multiport --dports 8080 -j ACCEPT
Important
This rule allows communication from all remote hosts to the system hosting the Swift proxy on port 8080. For information regarding the creation of more restrictive firewall rules, see the Red Hat Enterprise Linux Security Guide.
- Restart the iptables service to ensure that the change takes effect:
#
systemctl restart iptables.service
4.4.4. Object Storage Service Rings
Rings determine where data is stored in a cluster of storage nodes. Ring files are generated using the swift-ring-builder tool; one ring file is required for each of the object, container, and account services.
4.4.5. Build Object Storage Service Ring Files
Ring File Parameter | Description |
---|---|
part_power | 2^part_power = partition count. The partition count is rounded up after calculation. |
replica_count | The number of times that your data will be replicated in the cluster. |
min_part_hours | Minimum number of hours before a partition can be moved. This parameter increases availability of data by not moving more than one copy of a given data item within that min_part_hours amount of time. |
zone | Used when adding devices to rings (optional). Zones are a flexible abstraction, where each zone should be as separated from other zones as possible in your deployment. You can use a zone to represent sites, cabinets, nodes, or even devices. |
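To illustrate the sizing calculation (the numbers here are hypothetical): for a cluster expected to grow to 10 disks with a target of roughly 100 partitions per disk, the desired partition count is 10 × 100 = 1000. The nearest power of two that is at least 1000 is 2^10 = 1024, so part_power would be 10.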
Procedure 4.6. Building Object Storage Service Ring Files
- Build one ring for each service. Provide a builder file, a partition power, a replica count, and the minimum hours between partition reassignment:
#
swift-ring-builder /etc/swift/object.builder create part_power replica_count min_part_hours
#
swift-ring-builder /etc/swift/container.builder create part_power replica_count min_part_hours
#
swift-ring-builder /etc/swift/account.builder create part_power replica_count min_part_hours
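For example, using the hypothetical sizing above (a part_power of 10, three replicas, and a minimum of one hour between partition moves), the first command would be:
#
swift-ring-builder /etc/swift/object.builder create 10 3 1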
- When the rings are created, add devices to the account ring:
#
swift-ring-builder /etc/swift/account.builder add zX-SERVICE_IP:6202/dev_mountpt part_count
Replace the following values:
- Replace X with the corresponding integer of a specified zone (for example, z1 would correspond to Zone One).
- Replace SERVICE_IP with the IP on which the account, container, and object services should listen. This IP should match the bind_ip value set during the configuration of the Object Storage service storage nodes.
- Replace dev_mountpt with the /srv/node subdirectory under which your device is mounted.
- Replace part_count with the partition count you used to calculate your partition power.
Note
Repeat this step for each device (on each node in the cluster) you want added to the ring.
- Add each device to both the container and object rings:
#
swift-ring-builder /etc/swift/container.builder add zX-SERVICE_IP:6201/dev_mountpt part_count
#
swift-ring-builder /etc/swift/object.builder add zX-SERVICE_IP:6200/dev_mountpt part_count
Replace the variables with the same ones used in the previous step.
Note
Repeat these commands for each device (on each node in the cluster) you want added to the ring.
- Distribute the partitions across the devices in the ring:
#
swift-ring-builder /etc/swift/account.builder rebalance
#
swift-ring-builder /etc/swift/container.builder rebalance
#
swift-ring-builder /etc/swift/object.builder rebalance
- Check to see that you now have three ring files in the /etc/swift directory:
#
ls /etc/swift/*gz
The files should be listed as follows:
/etc/swift/account.ring.gz
/etc/swift/container.ring.gz
/etc/swift/object.ring.gz
- Restart the openstack-swift-proxy service:
#
systemctl restart openstack-swift-proxy.service
- Ensure that all files in the /etc/swift/ directory, including those that you have just created, are owned by the root user and the swift group:
Important
All mount points must be owned by root; all roots of mounted file systems must be owned by swift. Before running the following command, ensure that all devices are already mounted and owned by root.
#
chown -R root:swift /etc/swift
- Copy each ring builder file to each node in the cluster, storing them under /etc/swift/.
4.5. Validate the Object Storage Service Installation
Perform the following steps on the Identity server, or on any machine onto which you have copied the keystonerc_admin file and on which the python-swiftclient package is installed.
Procedure 4.7. Validating the Object Storage Service Installation
- On the proxy server node, turn on debug level logging:
#
openstack-config --set /etc/swift/proxy-server.conf DEFAULT log_level debug
- Restart the rsyslog service and the openstack-swift-proxy service:
#
systemctl restart rsyslog.service
#
systemctl restart openstack-swift-proxy.service
- Set up the shell to access Keystone as the administrative user:
#
source ~/keystonerc_admin
- Ensure that you can connect to the proxy server:
[(keystone_admin)]#
swift list
Message from syslogd@example-swift-01 at Jun 14 02:46:00 ...
135 proxy-server Server reports support for api versions: v3.0, v2.0
- Upload some files to your Object Storage service nodes:
[(keystone_admin)]#
head -c 1024 /dev/urandom > data1.file ; swift upload c1 data1.file
[(keystone_admin)]#
head -c 1024 /dev/urandom > data2.file ; swift upload c1 data2.file
[(keystone_admin)]#
head -c 1024 /dev/urandom > data3.file ; swift upload c1 data3.file
- List the objects stored in the Object Storage service cluster:
[(keystone_admin)]#
swift list
[(keystone_admin)]#
swift list c1
data1.file
data2.file
data3.file
Chapter 5. Install the Image Service
5.1. Image Service Requirements
To install the Image service, you must have the following credentials and information:
- The root credentials and IP address of the server hosting the MariaDB database service
- The administrative user credentials and endpoint URL of the Identity service
5.2. Install the Image Service Packages
- openstack-glance
- Provides the OpenStack Image service.
- openstack-utils
- Provides supporting utilities to assist with a number of tasks, including the editing of configuration files.
- openstack-selinux
- Provides OpenStack-specific SELinux policy modules.
#
yum install -y openstack-glance openstack-utils openstack-selinux
5.3. Create the Image Service Database
All steps in this procedure must be performed on the database server, while logged in as the root user.
Procedure 5.1. Creating the Image Service Database
- Connect to the database service:
#
mysql -u root -p
- Create the glance database:
mysql> CREATE DATABASE glance;
- Create a glance database user and grant the user access to the glance database:
mysql> GRANT ALL ON glance.* TO 'glance'@'%' IDENTIFIED BY 'PASSWORD';
mysql> GRANT ALL ON glance.* TO 'glance'@'localhost' IDENTIFIED BY 'PASSWORD';
Replace PASSWORD with a secure password that will be used to authenticate with the database server as this user.
- Flush the database privileges to ensure that they take effect immediately:
mysql> FLUSH PRIVILEGES;
- Exit the mysql client:
mysql> quit
5.4. Configure the Image Service
- Configure the Identity service for Image service authentication (create database entries, set connection strings, and update configuration files).
- Configure the disk-image storage back end (this guide uses the Object Storage service).
- Configure the firewall for Image service access.
- Configure TLS/SSL.
- Populate the Image service database.
5.4.1. Configure the Image Service Database Connection
The database connection string used by the Image service is defined in the /etc/glance/glance-api.conf and /etc/glance/glance-registry.conf files. It must be updated to point to a valid database server before starting the service.
All steps in this procedure must be performed on the server hosting the Image service, while logged in as the root user.
Procedure 5.2. Configuring the Image Service SQL Database Connection
- Set the value of the sql_connection configuration key in the glance-api.conf file:
#
openstack-config --set /etc/glance/glance-api.conf \
DEFAULT sql_connection mysql://USER:PASS@IP/DB
Replace the following values:
- Replace USER with the Image service database user name, usually glance.
- Replace PASS with the password of the database user.
- Replace IP with the IP address or host name of the server hosting the database service.
- Replace DB with the name of the Image service database, usually glance.
- Set the value of the
sql_connection
configuration key in theglance-registry.conf
file:#
openstack-config --set /etc/glance/glance-registry.conf \
DEFAULT sql_connection mysql://USER:PASS@IP/DB
Replace USER, PASS, IP, and DB with the same values used in the previous step.
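For example, with purely illustrative values (user glance, password Secret123, database host 192.0.2.10, and database glance), the resulting setting would look like this; substitute the values for your own environment:
# openstack-config --set /etc/glance/glance-api.conf \
DEFAULT sql_connection mysql://glance:Secret123@192.0.2.10/glance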
Important
5.4.2. Create the Image Service Identity Records
services
tenant. For more information, see:
keystonerc_admin
file and on which the keystone command-line utility is installed.
Procedure 5.3. Creating Identity Records for the Image Service
- Set up the shell to access Keystone as the admin user:
#
source ~/keystonerc_admin
- Create the
glance
user:[(keystone_admin)]#
keystone user-create --name glance --pass PASSWORD
+----------+----------------------------------+ | Property | Value | +----------+----------------------------------+ | email | | | enabled | True | | id | 8091eaf121b641bf84ce73c49269d2d1 | | name | glance | | username | glance | +----------+----------------------------------+Replace PASSWORD with a secure password that will be used by the Image Service when authenticating with the Identity service. - Link the
glance
user and theadmin
role together within the context of theservices
tenant:[(keystone_admin)]#
keystone user-role-add --user glance --role admin --tenant services
- Create the
glance
Image service entry:[(keystone_admin)]#
keystone service-create --name glance \
--type image \
--description "Glance Image Service"
+-------------+----------------------------------+ | Property | Value | +-------------+----------------------------------+ | description | Glance Image Service | | enabled | True | | id | 7461b83f96bd497d852fb1b85d7037be | | name | glance | | type | image | +-------------+----------------------------------+ - Create the
glance
endpoint entry:[(keystone_admin)]#
keystone endpoint-create \
--service glance \
--publicurl 'http://IP:9292' \
--adminurl 'http://IP:9292' \
--internalurl 'http://IP:9292' \
--region 'RegionOne'
Replace IP with the IP address or host name of the server hosting the Image service.
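To confirm that the records were created correctly, you can list the services and endpoints known to the Identity service; a quick check using the same client:
[(keystone_admin)]# keystone service-list
[(keystone_admin)]# keystone endpoint-list
The glance service and its three URLs should appear in the output.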
5.4.3. Configure Image Service Authentication
root
user.
Procedure 5.4. Configuring the Image Service to Authenticate through the Identity Service
- Configure the
glance-api
service:#
openstack-config --set /etc/glance/glance-api.conf \
paste_deploy flavor keystone
#
openstack-config --set /etc/glance/glance-api.conf \
keystone_authtoken auth_host IP
#
openstack-config --set /etc/glance/glance-api.conf \
keystone_authtoken auth_port 35357
#
openstack-config --set /etc/glance/glance-api.conf \
keystone_authtoken auth_protocol http
#
openstack-config --set /etc/glance/glance-api.conf \
keystone_authtoken admin_tenant_name services
#
openstack-config --set /etc/glance/glance-api.conf \
keystone_authtoken admin_user glance
#
openstack-config --set /etc/glance/glance-api.conf \
keystone_authtoken admin_password PASSWORD
- Configure the
glance-registry
service:#
openstack-config --set /etc/glance/glance-registry.conf \
paste_deploy flavor keystone
#
openstack-config --set /etc/glance/glance-registry.conf \
keystone_authtoken auth_host IP
#
openstack-config --set /etc/glance/glance-registry.conf \
keystone_authtoken auth_port 35357
#
openstack-config --set /etc/glance/glance-registry.conf \
keystone_authtoken auth_protocol http
#
openstack-config --set /etc/glance/glance-registry.conf \
keystone_authtoken admin_tenant_name services
#
openstack-config --set /etc/glance/glance-registry.conf \
keystone_authtoken admin_user glance
#
openstack-config --set /etc/glance/glance-registry.conf \
keystone_authtoken admin_password PASSWORD
- Replace IP with the IP address or host name of the Identity server.
- Replace services with the name of the tenant that was created for the use of the Image service (previous examples set this to
services
). - Replace glance with the name of the service user that was created for the Image service (previous examples set this to
glance
). - Replace PASSWORD with the password associated with the service user.
5.4.4. Use the Object Storage Service for Image Storage
file
) for its storage back end; however, either of the following storage back ends can be used to store uploaded disk images:
file
- Local file system of the Image server (/var/lib/glance/images/
directory)swift
- OpenStack Object Storage service
Note
openstack-config
command; however, you can also manually update the /etc/glance/glance-api.conf
file. If manually updating the file, ensure that the default_store
parameter is set to the correct back end (for example, 'default_store=rbd
'), and update the parameters in that back end's section (for example, under 'RBD Store Options
').
Procedure 5.5. Configuring the Image Service to use the Object Storage Service
- Set the
default_store
configuration key toswift
:#
openstack-config --set /etc/glance/glance-api.conf \
DEFAULT default_store swift
- Set the
swift_store_auth_address
configuration key to the public endpoint for the Identity service:#
openstack-config --set /etc/glance/glance-api.conf \
DEFAULT swift_store_auth_address http://IP:5000/v2.0/
- Add the container for storing images in the Object Storage service:
#
openstack-config --set /etc/glance/glance-api.conf \
DEFAULT swift_store_create_container_on_put True
- Set the
swift_store_user
configuration key, in the format TENANT:USER, to contain the tenant and user to use for authentication:#
openstack-config --set /etc/glance/glance-api.conf \
DEFAULT swift_store_user services:swift
- If you followed the instructions in this guide to deploy Object Storage, replace these values with the
services
tenant and theswift
user respectively (as shown in the command example above). - If you did not follow the instructions in this guide to deploy Object Storage, replace these values with the appropriate Object Storage tenant and user for your environment.
- Set the
swift_store_key
configuration key to the password that was set for theswift
user when deploying the Object Storage service:#
openstack-config --set /etc/glance/glance-api.conf \
DEFAULT swift_store_key PASSWORD
5.4.5. Configure the Firewall to Allow Image Service Traffic
9292
. All steps in this procedure must be performed on the server hosting the Image service, while logged in as the root
user.
Procedure 5.6. Configuring the Firewall to Allow Image Service Traffic
- Open the
/etc/glance/glance-api.conf
file in a text editor, and remove any comment characters preceding the following parameters:
bind_host = 0.0.0.0
bind_port = 9292
- Open the
/etc/sysconfig/iptables
file in a text editor. - Add an INPUT rule allowing TCP traffic on port
9292
. The new rule must appear before any INPUT rules that REJECT traffic:-A INPUT -p tcp -m multiport --dports 9292 -j ACCEPT
- Save the changes to the
/etc/sysconfig/iptables
file. - Restart the
iptables
service to ensure that the change takes effect:#
systemctl restart iptables.service
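To verify that the new rule is loaded, you can list the active INPUT rules and filter for the Image service port; a minimal check:
# iptables -L INPUT -n --line-numbers | grep 9292
The ACCEPT rule should appear before any REJECT rules.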
5.4.6. Configure RabbitMQ Message Broker Settings for the Image Service
root
user.
Procedure 5.7. Configuring the Image Service (glance) to Use the RabbitMQ Message Broker
- Set RabbitMQ as the notifier:
#
openstack-config --set /etc/glance/glance-api.conf \
DEFAULT notification_driver messaging
- Set the name of the RabbitMQ host:
#
openstack-config --set /etc/glance/glance-api.conf \
DEFAULT rabbit_host RABBITMQ_HOST
Replace RABBITMQ_HOST with the IP address or host name of the message broker. - Set the message broker port to
5672
:#
openstack-config --set /etc/glance/glance-api.conf \
DEFAULT rabbit_port 5672
- Set the RabbitMQ user name and password created for the Image service when RabbitMQ was configured:
#
openstack-config --set /etc/glance/glance-api.conf \
DEFAULT rabbit_userid glance
#
openstack-config --set /etc/glance/glance-api.conf \
DEFAULT rabbit_password GLANCE_PASS
Replaceglance
and GLANCE_PASS with the RabbitMQ user name and password created for the Image service. - When RabbitMQ was launched, the
glance
user was granted read and write permissions to all resources: specifically, through the virtual host/
. Configure the Image service to connect to this virtual host:#
openstack-config --set /etc/glance/glance-api.conf \
DEFAULT rabbit_virtual_host /
5.4.7. Configure the Image Service to Use SSL
glance-api.conf
file to configure SSL.
Configuration Option | Description
---|---
cert_file | The path to the certificate file to use when starting the API server securely.
key_file | The path to the private key file to use when starting the API server securely.
ca_file | The path to the CA certificate file to use to verify connecting clients.
5.4.8. Populate the Image Service Database
Procedure 5.8. Populating the Image Service Database
- Log in to the system hosting the Image service.
- Switch to the
glance
user:#
su glance -s /bin/sh
- Initialize and populate the database identified in
/etc/glance/glance-api.conf
and/etc/glance/glance-registry.conf
:$
glance-manage db_sync
5.4.9. Enable Image Loading Through the Local File System
Note
Procedure 5.9. Configuring Image and Compute Services to Send and Receive Images through the Local File System
- Create a JSON document that exposes the Image file system metadata required by
openstack-nova-compute
. - Configure the Image service to use the JSON document.
- Configure
openstack-nova-compute
to use the file system metadata provided by the Image service.
5.4.9.1. Configure the Image Service to Provide Images Through the Local File System
openstack-nova-compute
service.
Procedure 5.10. Configuring the Image Service to Expose Local File System Metadata to the Compute Service
- Determine the mount point of the file system used by the Image service:
#
df
Filesystem 1K-blocks Used Available Use% Mounted on /dev/sda3 51475068 10905752 37947876 23% / devtmpfs 2005504 0 2005504 0% /dev tmpfs 2013248 668 2012580 1% /dev/shmFor example, if the Image service uses the/dev/sda3
file system, its corresponding mount point is/
. - Create a unique ID for the mount point:
#
uuidgen
ad5517ae-533b-409f-b472-d82f91f41773Note the output of theuuidgen
, as this will be used in the next step. - Create a file with the
.json
extension. - Open the file in a text editor, and add the following information:
{ "id": "UID", "mountpoint": "MOUNTPT" }
Replace the following values:- Replace UID with the unique ID created in the previous step.
- Replace MOUNTPT with the mount point of the Image service's file system, as determined in the first step.
- Configure the Image service to use this JSON file:
#
openstack-config --set /etc/glance/glance-api.conf \
DEFAULT show_multiple_locations True
#
openstack-config --set /etc/glance/glance-api.conf \
DEFAULT filesystem_store_metadata_file JSON_PATH
Replace JSON_PATH with the full path to the JSON file. - Restart the Image service (if it is already running):
#
systemctl restart openstack-glance-registry.service
#
systemctl restart openstack-glance-api.service
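Putting the pieces together, the following sketch uses the mount point / and the UUID from the earlier uuidgen output, with the JSON document stored at the hypothetical path /etc/glance/filesystem_store_metadata.json; all values are illustrative:
# cat /etc/glance/filesystem_store_metadata.json
{
  "id": "ad5517ae-533b-409f-b472-d82f91f41773",
  "mountpoint": "/"
}
# openstack-config --set /etc/glance/glance-api.conf \
DEFAULT filesystem_store_metadata_file /etc/glance/filesystem_store_metadata.json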
5.4.9.2. Configure the Compute Service to Use Local File System Metadata
openstack-nova-compute
to load images from the local file system.
Procedure 5.11. Configuring the Compute Service to use File System Metadata Provided by the Image Service
- Configure
openstack-nova-compute
to enable the use of direct URLs that have thefile://
scheme:#
openstack-config --set /etc/nova/nova.conf \
DEFAULT allowed_direct_url_schemes file
- Create an entry for the Image service's file system:
#
openstack-config --set /etc/nova/nova.conf \
image_file_url filesystems FSENTRY
Replace FSENTRY with a name to assign to the Image service's file system. - Open the
.json
file used by the Image service to expose its local file-system metadata. The information in this file will be used in the next step. - Associate the entry for Image service's file system to the file system metadata exposed by the Image service:
#
openstack-config --set /etc/nova/nova.conf \
image_file_url:FSENTRY id UID
#
openstack-config --set /etc/nova/nova.conf \
image_file_url:FSENTRY mountpoint MOUNTPT
Replace the following values:- Replace UID with the unique ID used by the Image service. In the
.json
file used by the Image service, the UID is the"id"
value. - Replace MOUNTPT with the mount point used by the Image service's file system. In the
.json
file used by the Image service, the MOUNTPT is the"mountpoint"
value.
5.5. Launch the Image API and Registry Services
Start the glance-api and glance-registry services, and configure each service to start at boot time:
#
systemctl start openstack-glance-registry.service
#
systemctl start openstack-glance-api.service
#
systemctl enable openstack-glance-registry.service
#
systemctl enable openstack-glance-api.service
5.6. Validate the Image Service Installation
5.6.1. Obtain a Test Disk Image
Procedure 5.12. Downloading a Test Disk Image
- Go to https://access.redhat.com, and log in to the Red Hat Customer Portal using your customer account details.
- Click Downloads in the menu bar.
- Click A-Z to sort the product downloads alphabetically.
- Click Red Hat Enterprise Linux to access the Product Downloads page.
- Click the KVM Guest Image download link.
5.6.2. Upload a Disk Image
Important
virt-sysprep
command on all Linux-based virtual machine images prior to uploading them to the Image service. The virt-sysprep
command reinitializes a disk image in preparation for use in a virtual environment. Default operations include the removal of SSH keys, removal of persistent MAC addresses, and removal of user accounts.
virt-sysprep
command is provided by the Red Hat Enterprise Linux libguestfs-tools package. Install the package, and reinitialize the disk image:
#
yum install -y libguestfs-tools
#
virt-sysprep --add FILE
virt-sysprep
manual page.
Procedure 5.13. Uploading a Disk Image to the Image Service
- Set up the shell to access keystone as a configured user (an administrative account is not required):
#
source ~/keystonerc_userName
- Import the disk image:
[(keystone_userName)]#
glance image-create --name "NAME" \
--is-public IS_PUBLIC \
--disk-format DISK_FORMAT \
--container-format CONTAINER_FORMAT \
--file IMAGE
Replace the following values:- Replace NAME with a name by which users will refer to the disk image.
- Replace IS_PUBLIC with either
true
orfalse
:true
- All users are able to view and use the image.false
- Only administrators are able to view and use the image.
- Replace DISK_FORMAT with the disk image's format. Valid values include: aki, ami, ari, iso, qcow2, raw, vdi, vhd, and vmdk. If the format of the virtual machine disk image is unknown, use the qemu-img info command to try to identify it.
- Replace CONTAINER_FORMAT with the container format of the image. The container format is bare unless the image is packaged in a file format, such as ovf or ami, that includes additional metadata related to the image.
--location
parameter instead of the--file
parameter. Note that you must also specify the--copy-from
argument to copy the image into the object store, otherwise the image will be accessed remotely each time it is required.
For more information about theglance image-create
syntax, see the help page:[(keystone_userName)]#
glance help image-create
Note the unique identifier for the image in the output of the command above. - Verify that your image was successfully uploaded:
[(keystone_userName)]#
glance image-show IMAGE_ID
+------------------+--------------------------------------+ | Property | Value | +------------------+--------------------------------------+ | checksum | 2f81976cae15c16ef0010c51e3a6c163 | | container_format | bare | | created_at | 2013-01-25T14:45:48 | | deleted | False | | disk_format | qcow2 | | id | 0ce782c6-0d3e-41df-8fd5-39cd80b31cd9 | | is_public | True | | min_disk | 0 | | min_ram | 0 | | name | RHEL 6.6 | | owner | b1414433c021436f97e9e1e4c214a710 | | protected | False | | size | 25165824 | | status | active | | updated_at | 2013-01-25T14:45:50 | +------------------+--------------------------------------+Replace IMAGE_ID with the unique identifier for the image.
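As a consolidated example, uploading the KVM Guest Image downloaded earlier might look like the following; the file name rhel-guest-image.qcow2 and the image name are hypothetical, and a qcow2 image in a bare container is assumed:
[(keystone_userName)]# glance image-create --name "RHEL KVM Guest Image" \
--is-public true \
--disk-format qcow2 \
--container-format bare \
--file rhel-guest-image.qcow2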
Chapter 6. Install the Block Storage Service
6.1. Install the Block Storage Service Packages
- openstack-cinder
- Provides the Block Storage services and associated configuration files.
- openstack-utils
- Provides supporting utilities to assist with a number of tasks including the editing of configuration files.
- openstack-selinux
- Provides OpenStack specific SELinux policy modules.
- device-mapper-multipath
- Provides tools to manage multipath devices using device-mapper. These tools are necessary for proper block storage operations.
#
yum install -y openstack-cinder openstack-utils openstack-selinux device-mapper-multipath
6.2. Create the Block Storage Service Database
root
user.
Procedure 6.1. Creating the Block Storage Services Database
- Connect to the database service:
#
mysql -u root -p
- Create the cinder database:
mysql>
CREATE DATABASE cinder;
- Create a cinder database user and grant the user access to the cinder database:
mysql>
GRANT ALL ON cinder.* TO 'cinder'@'%' IDENTIFIED BY 'PASSWORD';
mysql>
GRANT ALL ON cinder.* TO 'cinder'@'localhost' IDENTIFIED BY 'PASSWORD';
Replace PASSWORD with a secure password that will be used to authenticate with the database server as this user.
- Flush the database privileges to ensure that they take effect immediately:
mysql>
FLUSH PRIVILEGES;
- Exit the mysql client:
mysql>
quit
6.3. Configure the Block Storage Service
6.3.1. Configure the Block Storage Service Database Connection
/etc/cinder/cinder.conf
file. It must be updated to point to a valid database server before starting the service.
sql_connection
configuration key on each system hosting Block Storage services:
#
openstack-config --set /etc/cinder/cinder.conf \
DEFAULT sql_connection mysql://USER:PASS@IP/DB
- Replace USER with the Block Storage service database user name, usually
cinder
. - Replace PASS with the password of the database user.
- Replace IP with the IP address or host name of the server hosting the database service.
- Replace DB with the name of the Block Storage service database, usually
cinder
.
Important
6.3.2. Create the Block Storage Service Identity Records
services
tenant. For more information, see:
keystonerc_admin
file and on which the keystone command-line utility is installed.
Procedure 6.2. Creating Identity Records for the Block Storage Service
- Set up the shell to access Keystone as the administrative user:
#
source ~/keystonerc_admin
- Create the
cinder
user:[(keystone_admin)]#
keystone user-create --name cinder --pass PASSWORD
+----------+----------------------------------+ | Property | Value | +----------+----------------------------------+ | email | | | enabled | True | | id | e1765f70da1b4432b54ced060139b46a | | name | cinder | | username | cinder | +----------+----------------------------------+Replace PASSWORD with a secure password that will be used by the Block Storage service when authenticating with the Identity service. - Link the
cinder
user and theadmin
role together within the context of theservices
tenant:[(keystone_admin)]#
keystone user-role-add --user cinder --role admin --tenant services
- Create the
cinder
andcinderv2
Block Storage service entries:[(keystone_admin)]#
keystone service-create --name cinder \
--type volume \
--description "Cinder Volume Service"
+-------------+----------------------------------+ | Property | Value | +-------------+----------------------------------+ | description | Cinder Volume Service | | enabled | True | | id | dfde7878671e484c9e581a3eb9b63e66 | | name | cinder | | type | volume | +-------------+----------------------------------+[(keystone_admin)]#
keystone service-create --name cinderv2 \
--type volumev2 \
--description "Cinder Volume Service v2"
+-------------+----------------------------------+ | Property | Value | +-------------+----------------------------------+ | description | Cinder Volume Service v2 | | enabled | True | | id | 42318fdec1926f57643ca7b1e40b78df | | name | cinderv2 | | type | volumev2 | +-------------+----------------------------------+ - Create the
cinder
endpoint entry:[(keystone_admin)]#
keystone endpoint-create \
--service cinder \
--publicurl 'http://IP:8776/v1/%(tenant_id)s' \
--adminurl 'http://IP:8776/v1/%(tenant_id)s' \
--internalurl 'http://IP:8776/v1/%(tenant_id)s' \
--region 'RegionOne'
[(keystone_admin)]#
keystone endpoint-create \
--service cinderv2 \
--publicurl 'http://IP:8776/v2/%(tenant_id)s' \
--adminurl 'http://IP:8776/v2/%(tenant_id)s' \
--internalurl 'http://IP:8776/v2/%(tenant_id)s' \
--region 'RegionOne'
Replace IP with the IP address or host name of the server hosting the Block Storage API service (openstack-cinder-api
). To install and run multiple instances of the API service, repeat this step for the IP address or host name of each instance.
6.3.3. Configure Block Storage Service Authentication
root
user.
Procedure 6.3. Configuring the Block Storage Service to Authenticate Through the Identity Service
- Set the authentication strategy to
keystone
:#
openstack-config --set /etc/cinder/cinder.conf \
DEFAULT auth_strategy keystone
- Set the Identity service host that the Block Storage services must use:
#
openstack-config --set /etc/cinder/cinder.conf \
keystone_authtoken auth_host IP
Replace IP with the IP address or host name of the server hosting the Identity service. - Set the Block Storage services to authenticate as the correct tenant:
#
openstack-config --set /etc/cinder/cinder.conf \
keystone_authtoken admin_tenant_name services
Replace services with the name of the tenant created for the use of OpenStack Networking. Examples in this guide useservices
. - Set the Block Storage services to authenticate using the
cinder
administrative user account:#
openstack-config --set /etc/cinder/cinder.conf \
keystone_authtoken admin_user cinder
- Set the Block Storage services to use the correct
cinder
administrative user account password:#
openstack-config --set /etc/cinder/cinder.conf \
keystone_authtoken admin_password PASSWORD
Replace PASSWORD with the password set when thecinder
user was created.
6.3.4. Configure the Firewall to Allow Block Storage Service Traffic
root
user.
Procedure 6.4. Configuring the Firewall to Allow Block Storage Service Traffic
- Open the
/etc/sysconfig/iptables
file in a text editor. - Add an INPUT rule allowing TCP traffic on ports
3260
and8776
to the file. The new rule must appear before any INPUT rules that REJECT traffic:-A INPUT -p tcp -m multiport --dports 3260,8776 -j ACCEPT
- Save the changes to the
/etc/sysconfig/iptables
file. - Restart the
iptables
service to ensure that the change takes effect:#
systemctl restart iptables.service
6.3.5. Configure the Block Storage Service to Use SSL
cinder.conf
file to configure SSL.
Configuration Option | Description
---|---
backlog | The number of backlog requests with which to configure the socket.
tcp_keepidle | Sets the value of TCP_KEEPIDLE in seconds for each server socket.
ssl_ca_file | The CA certificate file to use to verify connecting clients.
ssl_cert_file | The certificate file to use when starting the server securely.
ssl_key_file | The private key file to use when starting the server securely.
6.3.6. Configure RabbitMQ Message Broker Settings for the Block Storage Service
root
user.
Procedure 6.5. Configuring the Block Storage Service to use the RabbitMQ Message Broker
- Set RabbitMQ as the RPC back end:
#
openstack-config --set /etc/cinder/cinder.conf \
DEFAULT rpc_backend cinder.openstack.common.rpc.impl_kombu
- Set the name of the RabbitMQ host:
#
openstack-config --set /etc/cinder/cinder.conf \
DEFAULT rabbit_host RABBITMQ_HOST
Replace RABBITMQ_HOST with the IP address or host name of the message broker. - Set the message broker port to
5672
:#
openstack-config --set /etc/cinder/cinder.conf \
DEFAULT rabbit_port 5672
- Set the RabbitMQ username and password created for the Block Storage service when RabbitMQ was configured:
#
openstack-config --set /etc/cinder/cinder.conf \
DEFAULT rabbit_userid cinder
#
openstack-config --set /etc/cinder/cinder.conf \
DEFAULT rabbit_password CINDER_PASS
Replacecinder
and CINDER_PASS with the RabbitMQ user name and password created for the Block Storage service. - When RabbitMQ was launched, the
cinder
user was granted read and write permissions to all resources: specifically, through the virtual host/
. Configure the Block Storage service to connect to this virtual host:#
openstack-config --set /etc/cinder/cinder.conf \
DEFAULT rabbit_virtual_host /
6.3.7. Enable SSL Communication Between the Block Storage Service and the Message Broker
- Enable SSL communication with the message broker:
#
openstack-config --set /etc/cinder/cinder.conf \
DEFAULT rabbit_use_ssl True
#
openstack-config --set /etc/cinder/cinder.conf \
DEFAULT kombu_ssl_certfile /path/to/client.crt
#
openstack-config --set /etc/cinder/cinder.conf \
DEFAULT kombu_ssl_keyfile /path/to/clientkeyfile.key
Replace the following values:- Replace /path/to/client.crt with the absolute path to the exported client certificate.
- Replace /path/to/clientkeyfile.key with the absolute path to the exported client key file.
- If your certificates were signed by a third-party Certificate Authority (CA), you must also run the following command:
#
openstack-config --set /etc/cinder/cinder.conf \
DEFAULT kombu_ssl_ca_certs /path/to/ca.crt
Replace /path/to/ca.crt with the absolute path to the CA file provided by the third-party CA (see Section 2.3.4, “Enable SSL on the RabbitMQ Message Broker” for more information).
6.3.8. Populate the Block Storage Database
Important
Procedure 6.6. Populating the Block Storage Service Database
- Log in to the system hosting one of the Block Storage services.
- Switch to the
cinder
user:#
su cinder -s /bin/sh
- Initialize and populate the database identified in
/etc/cinder/cinder.conf
:$
cinder-manage db sync
6.3.9. Increase the Throughput of the Block Storage API Service
The Block Storage API service (openstack-cinder-api) runs in one process. This limits the number of API requests that the Block Storage service can process at any given time. In a production environment, you should increase the Block Storage API throughput by allowing openstack-cinder-api
to run in as many processes as the machine capacity allows.
The osapi_volume_workers configuration option allows you to specify the number of API service workers (or OS processes) to launch for openstack-cinder-api. To set the number of workers, run the following command on the openstack-cinder-api host:
#
openstack-config --set /etc/cinder/cinder.conf \
DEFAULT osapi_volume_workers CORES
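Replace CORES with the number of worker processes to launch. For example, to start one worker per processor core, you can substitute the output of the nproc utility; a minimal sketch that assumes the host's full core count should be used:
# openstack-config --set /etc/cinder/cinder.conf \
DEFAULT osapi_volume_workers $(nproc)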
6.4. Configure the Volume Service
6.4.1. Block Storage Driver Support
The volume service (openstack-cinder-volume) requires access to suitable block storage. Red Hat OpenStack Platform provides volume drivers for the following supported block storage types:
- Red Hat Ceph
- LVM/iSCSI
- ThinLVM
- NFS
- NetApp
- Dell EqualLogic
- Dell Storage Center
6.4.2. Configure OpenStack Block Storage to Use an LVM Storage Back End
openstack-cinder-volume
service can make use of a volume group attached directly to the server on which the service runs. This volume group must be created exclusively for use by the Block Storage service, and the configuration updated to point to the name of the volume group.
openstack-cinder-volume
service, while logged in as the root
user.
Procedure 6.7. Configuring openstack-cinder-volume to Use LVM Storage as a Back End
- Create a physical volume:
#
pvcreate DEVICE
Physical volume "DEVICE" successfully createdReplace DEVICE with the path to a valid, unused, device. For example:#
pvcreate /dev/sdX
- Create a volume group:
#
vgcreate cinder-volumes DEVICE
Volume group "cinder-volumes" successfully createdReplace DEVICE with the path to the device used when creating the physical volume. Optionally replace cinder-volumes with an alternative name for the new volume group. - Set the
volume_group
configuration key to the name of the volume group created in the previous step:#
openstack-config --set /etc/cinder/cinder.conf \
DEFAULT volume_group cinder-volumes
- Ensure that the correct volume driver for accessing LVM storage is in use by setting the
volume_driver
configuration key tocinder.volume.drivers.lvm.LVMISCSIDriver
:#
openstack-config --set /etc/cinder/cinder.conf \
DEFAULT volume_driver cinder.volume.drivers.lvm.LVMISCSIDriver
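Before starting the volume service, you can confirm that the physical volume and volume group exist using the standard LVM reporting tools; a quick check:
# pvs
# vgs cinder-volumes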
6.4.3. Configure the SCSI Target Daemon
The openstack-cinder-volume service uses a SCSI target daemon for mounting storage. You must install a SCSI target daemon on each server hosting an instance of the openstack-cinder-volume
service, while logged in as the root
user.
Procedure 6.8. Configure a SCSI Target Daemon
- Install the targetcli package:
#
yum install targetcli
- Launch the
target
daemon and configure it to start at boot time:#
systemctl start target.service
#
systemctl enable target.service
- Configure the volume service to use the
lioadm
iSCSI target user-land tool:#
openstack-config --set /etc/cinder/cinder.conf \
DEFAULT iscsi_helper lioadm
- Set the IP address on which the iSCSI daemon must listen (ISCSI_IP):
#
openstack-config --set /etc/cinder/cinder.conf \
DEFAULT iscsi_ip_address ISCSI_IP
Replace ISCSI_IP with the IP address or host name of the server hosting the openstack-cinder-volume service.
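To confirm that the target daemon is running and responding, you can check the service status and list the (initially empty) target configuration; a minimal check:
# systemctl status target.service
# targetcli ls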
6.5. Launch the Block Storage Services
- The API service (
openstack-cinder-api
). - The scheduler service (
openstack-cinder-scheduler
). - The volume service (
openstack-cinder-volume
).
Procedure 6.9. Launching Block Storage Services
- Log in as the
root
user on each server where you intend to run the API, start the API service, and configure it to start at boot time:#
systemctl start openstack-cinder-api.service
#
systemctl enable openstack-cinder-api.service
- Log in as the
root
user on each server where you intend to run the scheduler, start the scheduler service, and configure it to start at boot time:#
systemctl start openstack-cinder-scheduler.service
#
systemctl enable openstack-cinder-scheduler.service
- Log in as the
root
user on each server to which Block Storage has been attached, start the volume service, and configure it to start at boot time:#
systemctl start openstack-cinder-volume.service
#
systemctl enable openstack-cinder-volume.service
6.6. Validate the Block Storage Service Installation
6.6.1. Validate the Block Storage Service Installation Locally
root
user or a user with access to the keystonerc_admin
file. Copy the keystonerc_admin
file to the system before proceeding.
Procedure 6.10. Validating the Block Storage Service Installation Locally
- Populate the environment variables used for identifying and authenticating the administrative user:
#
source ~/keystonerc_admin
- Verify that no errors are returned in the output of this command:
#
cinder list
- Create a volume:
#
cinder create SIZE
Replace SIZE with the size of the volume to create in Gigabytes (GB). - Remove the volume:
#
cinder delete ID
Replace ID with the identifier returned when the volume was created.
6.6.2. Validate the Block Storage Service Installation Remotely
root
user or a user with access to the keystonerc_admin
file. Copy the keystonerc_admin
file to the system before proceeding.
Procedure 6.11. Validating the Block Storage Service Installation Remotely
- Install the python-cinderclient package:
#
yum install -y python-cinderclient
- Populate the environment variables used for identifying and authenticating the administrative user:
#
source ~/keystonerc_admin
- Verify that no errors are returned in the output of this command:
#
cinder list
- Create a volume:
#
cinder create SIZE
Replace SIZE with the size of the volume to create in gigabytes (GB). - Remove the volume:
#
cinder delete ID
Replace ID with the identifier returned when the volume was created.
Chapter 7. Install OpenStack Networking
7.1. Install the OpenStack Networking Packages
- openstack-neutron
- Provides OpenStack Networking and associated configuration files.
- openstack-neutron-PLUGIN
- Provides an OpenStack Networking plug-in. Replace PLUGIN with one of the recommended plug-ins (
ml2
,openvswitch
, orlinuxbridge
).Note
The monolithic Open vSwitch and linuxbridge plug-ins have been deprecated and will be removed in a future release; their functionality has instead been reimplemented as ML2 mechanisms. - openstack-utils
- Provides supporting utilities to assist with a number of tasks, including the editing of configuration files.
- openstack-selinux
- Provides OpenStack-specific SELinux policy modules.
#
yum install -y openstack-neutron \
openstack-neutron-PLUGIN \
openstack-utils \
openstack-selinux
ml2
,openvswitch
, or linuxbridge
to determine which plug-in is installed.
7.2. Configure OpenStack Networking
Important
vif_plugging_is_fatal
option is commented out in the [DEFAULT]
section of the /etc/nova/nova.conf
file, and defaults to True
. This option controls whether instances should fail to boot if VIF plugging fails. Similarly, the notify_nova_on_port_status_changes
and notify_nova_on_port_data_changes
options are commented out in the [DEFAULT]
section of the /etc/neutron/neutron.conf
file, and default to False
. These options control whether notifications should be sent to nova on port status or data changes. However, this combination of values can prevent instances from booting. To allow instances to boot correctly, set all of these options to either True
or False
. To set True
, run the following commands:
#
openstack-config --set /etc/nova/nova.conf \
DEFAULT vif_plugging_is_fatal True
#
openstack-config --set /etc/neutron/neutron.conf \
DEFAULT notify_nova_on_port_status_changes True
#
openstack-config --set /etc/neutron/neutron.conf \
DEFAULT notify_nova_on_port_data_changes True
False
, run the following commands instead:
#
openstack-config --set /etc/nova/nova.conf \
DEFAULT vif_plugging_is_fatal False
#
openstack-config --set /etc/neutron/neutron.conf \
DEFAULT notify_nova_on_port_status_changes False
#
openstack-config --set /etc/neutron/neutron.conf \
DEFAULT notify_nova_on_port_data_changes False
7.2.1. Set the OpenStack Networking Plug-in
Note
neutron.conf
by their nominated short names, instead of their lengthy class names. For example:
core_plugin = neutron.plugins.ml2.plugin:Ml2Plugin
will become:
core_plugin = ml2
Short name | Class name |
---|---|
bigswitch | neutron.plugins.bigswitch.plugin:NeutronRestProxyV2 |
brocade | neutron.plugins.brocade.NeutronPlugin:BrocadePluginV2 |
cisco | neutron.plugins.cisco.network_plugin:PluginV2 |
embrane | neutron.plugins.embrane.plugins.embrane_ovs_plugin:EmbraneOvsPlugin |
hyperv | neutron.plugins.hyperv.hyperv_neutron_plugin:HyperVNeutronPlugin |
linuxbridge | neutron.plugins.linuxbridge.lb_neutron_plugin:LinuxBridgePluginV2 |
midonet | neutron.plugins.midonet.plugin:MidonetPluginV2 |
ml2 | neutron.plugins.ml2.plugin:Ml2Plugin |
mlnx | neutron.plugins.mlnx.mlnx_plugin:MellanoxEswitchPlugin |
nec | neutron.plugins.nec.nec_plugin:NECPluginV2 |
openvswitch | neutron.plugins.openvswitch.ovs_neutron_plugin:OVSNeutronPluginV2 |
plumgrid | neutron.plugins.plumgrid.plumgrid_plugin.plumgrid_plugin:NeutronPluginPLUMgridV2 |
ryu | neutron.plugins.ryu.ryu_neutron_plugin:RyuNeutronPluginV2 |
vmware | neutron.plugins.vmware.plugin:NsxPlugin |
service_plugins
option accepts a comma-delimited list of multiple service plugins.
Short name | Class name |
---|---|
dummy | neutron.tests.unit.dummy_plugin:DummyServicePlugin |
router | neutron.services.l3_router.l3_router_plugin:L3RouterPlugin |
firewall | neutron.services.firewall.fwaas_plugin:FirewallPlugin |
lbaas | neutron.services.loadbalancer.plugin:LoadBalancerPlugin |
metering | neutron.services.metering.metering_plugin:MeteringPlugin |
7.2.1.1. Enable the ML2 Plug-in
neutron-server
service.
Procedure 7.1. Enabling the ML2 Plug-in
- Create a symbolic link to direct OpenStack Networking to the
ml2_conf.ini
file:#
ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini
- Set the tenant network type. Supported values are
gre
,local
,vlan
, andvxlan
. The default value islocal
, but this is not recommended for enterprise deployments:#
openstack-config --set /etc/neutron/plugin.ini \
ml2 tenant_network_types TYPE
Replace TYPE with the tenant network type. - If you chose
flat
orvlan
networking, you must also map physical networks to VLAN ranges:#
openstack-config --set /etc/neutron/plugin.ini \
ml2 network_vlan_ranges NAME:START:END
Replace the following values:- Replace NAME with the name of the physical network.
- Replace START with the VLAN identifier that starts the range.
- Replace END with the VLAN identifier that ends the range.
Multiple ranges can be specified using a comma-delimited list, for example:physnet1:1000:2999,physnet2:3000:3999
- Set the driver types. Supported values are
local
,flat
,vlan
,gre
, andvxlan
:#
openstack-config --set /etc/neutron/plugin.ini \
ml2 type_drivers TYPE
Replace TYPE with the driver type. Specify multiple drivers using a comma-delimited list. - Set the mechanism drivers. Available values are
openvswitch
,linuxbridge
, andl2population
:#
openstack-config --set /etc/neutron/plugin.ini \
ml2 mechanism_drivers TYPE
Replace TYPE with the mechanism driver type. Specify multiple mechanism drivers using a comma-delimited list. - Enable L2 population:
#
openstack-config --set /etc/neutron/plugin.ini \
agent l2_population True
- Set the firewall driver in the
/etc/neutron/plugins/ml2/openvswitch_agent.ini
file or the /etc/neutron/plugins/ml2/linuxbridge_agent.ini
file, depending on which plug-in agent you are using:
Open vSwitch Firewall Driver
#
openstack-config --set /etc/neutron/plugins/ml2/openvswitch_agent.ini \
securitygroup firewall_driver neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver
Linux Bridge Firewall Driver
#
openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini \
securitygroup firewall_driver neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
- Enable the ML2 plug-in and the L3 router:
#
openstack-config --set /etc/neutron/neutron.conf \
DEFAULT core_plugin ml2
#
openstack-config --set /etc/neutron/neutron.conf \
DEFAULT service_plugins router
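As a consolidated illustration of Procedure 7.1, the following sketch uses hypothetical values: VXLAN tenant networks, the vxlan, flat, and vlan type drivers, and the openvswitch and l2population mechanism drivers. Substitute the types and drivers appropriate to your deployment:
# openstack-config --set /etc/neutron/plugin.ini \
ml2 tenant_network_types vxlan
# openstack-config --set /etc/neutron/plugin.ini \
ml2 type_drivers vxlan,flat,vlan
# openstack-config --set /etc/neutron/plugin.ini \
ml2 mechanism_drivers openvswitch,l2population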
7.2.1.2. Enable the Open vSwitch Plug-in
neutron-server
service.
Note
Procedure 7.2. Enabling the Open vSwitch Plug-in
- Create a symbolic link to direct OpenStack Networking to the
openvswitch_agent.ini
file:#
ln -s /etc/neutron/plugins/ml2/openvswitch_agent.ini \
/etc/neutron/plugin.ini
- Set the tenant network type. Supported values are
gre
,local
,vlan
, andvxlan
. The default value islocal
, but this is not recommended for enterprise deployments:#
openstack-config --set /etc/neutron/plugin.ini \
OVS tenant_network_type TYPE
Replace TYPE with the tenant network type. - If you chose
flat
orvlan
networking, you must also map physical networks to VLAN ranges:#
openstack-config --set /etc/neutron/plugin.ini \
OVS network_vlan_ranges NAME:START:END
Replace the following values:- Replace NAME with the name of the physical network.
- Replace START with the VLAN identifier that starts the range.
- Replace END with the VLAN identifier that ends the range.
Multiple ranges can be specified using a comma-delimited list, for example:physnet1:1000:2999,physnet2:3000:3999
- Set the firewall driver:
#
openstack-config --set /etc/neutron/plugin.ini \
securitygroup firewall_driver neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver
- Enable the Open vSwitch plug-in:
#
openstack-config --set /etc/neutron/neutron.conf \
DEFAULT core_plugin openvswitch
7.2.1.3. Enable the Linux Bridge Plug-in
neutron-server
service.
Note
Procedure 7.3. Enabling the Linux Bridge Plug-in
- Create a symbolic link to direct OpenStack Networking to the
linuxbridge_agent.ini
file:#
ln -s /etc/neutron/plugins/ml2/linuxbridge_agent.ini \
/etc/neutron/plugin.ini
- Set the tenant network type. Supported values are
flat
,vlan
, andlocal
. The default islocal
, but this is not recommended for enterprise deployments:#
openstack-config --set /etc/neutron/plugin.ini \
VLAN tenant_network_type TYPE
Replace TYPE with the chosen tenant network type. - If you chose
flat
orvlan
networking, you must also map physical networks to VLAN ranges:#
openstack-config --set /etc/neutron/plugin.ini \
LINUX_BRIDGE network_vlan_ranges NAME:START:END
- Replace NAME with the name of the physical network.
- Replace START with the VLAN identifier that starts the range.
- Replace END with the VLAN identifier that ends the range.
Multiple ranges can be specified using a comma-delimited list, for example:physnet1:1000:2999,physnet2:3000:3999
- Set the firewall driver:
#
openstack-config --set /etc/neutron/plugin.ini \
securitygroup firewall_driver neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
- Enable the Linux Bridge plug-in:
#
openstack-config --set /etc/neutron/neutron.conf \
DEFAULT core_plugin linuxbridge
7.2.2. Create the OpenStack Networking Database
root
user, and prior to starting the neutron-server
service.
Procedure 7.4. Creating the OpenStack Networking Database
- Connect to the database service:
#
mysql -u root -p
- Create the database with one of the following names:
- If you are using the ML2 plug-in, the recommended database name is neutron_ml2.
- If you are using the Open vSwitch plug-in, the recommended database name is ovs_neutron.
- If you are using the Linux Bridge plug-in, the recommended database name is neutron_linux_bridge.
This example creates the ML2 neutron_ml2 database:
mysql>
CREATE DATABASE neutron_ml2 character set utf8;
- Create a neutron database user and grant the user access to the neutron_ml2 database:
mysql>
GRANT ALL ON neutron_ml2.* TO 'neutron'@'%' IDENTIFIED BY 'PASSWORD';
mysql>
GRANT ALL ON neutron_ml2.* TO 'neutron'@'localhost' IDENTIFIED BY 'PASSWORD';
Replace PASSWORD with a secure password that will be used to authenticate with the database server as this user.
- Flush the database privileges to ensure that they take effect immediately:
mysql>
FLUSH PRIVILEGES;
- Exit the mysql client:
mysql>
quit
7.2.3. Configure the OpenStack Networking Database Connection
/etc/neutron/plugin.ini
file. It must be updated to point to a valid database server before starting the service. All steps in this procedure must be performed on the server hosting OpenStack Networking, while logged in as the root
user.
Procedure 7.5. Configuring the OpenStack Networking SQL Database Connection
- Set the value of the sql_connection configuration key:
#
openstack-config --set /etc/neutron/plugin.ini \
DATABASE sql_connection mysql://USER:PASS@IP/DB
Replace the following values:- Replace USER with the OpenStack Networking database user name, usually
neutron
. - Replace PASS with the password of the database user.
- Replace IP with the IP address or host name of the database server.
- Replace DB with the name of the OpenStack Networking database.
Important
The IP address or host name specified in the connection configuration key must match the IP address or host name to which the OpenStack Networking database user was granted access when creating the OpenStack Networking database. Moreover, if the database is hosted locally and you granted permissions to 'localhost' when creating the database, you must enter 'localhost'. - Upgrade the OpenStack Networking database schema:
#
neutron-db-manage --config-file /usr/share/neutron/neutron-dist.conf \
--config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugin.ini upgrade head
7.2.4. Create the OpenStack Networking Identity Records
services
tenant. For more information, see:
keystonerc_admin
file and on which the keystone command-line utility is installed.
Procedure 7.6. Creating Identity Records for OpenStack Networking
- Set up the shell to access Keystone as the administrative user:
#
source ~/keystonerc_admin
- Create the
neutron
user:[(keystone_admin)]#
keystone user-create --name neutron --pass PASSWORD
+----------+----------------------------------+ | Property | Value | +----------+----------------------------------+ | email | | | enabled | True | | id | 1df18bcd14404fa9ad954f9d5eb163bc | | name | neutron | | username | neutron | +----------+----------------------------------+Replace PASSWORD with a secure password that will be used by OpenStack Networking when authenticating with the Identity service. - Link the
neutron
user and theadmin
role together within the context of theservices
tenant:[(keystone_admin)]#
keystone user-role-add --user neutron --role admin --tenant services
- Create the
neutron
OpenStack Networking service entry:[(keystone_admin)]#
keystone service-create --name neutron \
--type network \
--description "OpenStack Networking"
+-------------+----------------------------------+ | Property | Value | +-------------+----------------------------------+ | description | OpenStack Networking | | enabled | True | | id | 134e815915f442f89c39d2769e278f9b | | name | neutron | | type | network | +-------------+----------------------------------+ - Create the
neutron
endpoint entry:[(keystone_admin)]#
keystone endpoint-create \
--service neutron \
--publicurl 'http://IP:9696' \
--adminurl 'http://IP:9696' \
--internalurl 'http://IP:9696' \
--region 'RegionOne'
Replace IP with the IP address or host name of the server that will act as the OpenStack Networking node.
7.2.5. Configure OpenStack Networking Authentication
root
user.
Procedure 7.7. Configuring the OpenStack Networking Service to Authenticate through the Identity Service
- Set the authentication strategy to
keystone
:#
openstack-config --set /etc/neutron/neutron.conf \
DEFAULT auth_strategy keystone
- Set the Identity service host that OpenStack Networking must use:
#
openstack-config --set /etc/neutron/neutron.conf \
keystone_authtoken auth_host IP
Replace IP with the IP address or host name of the server hosting the Identity service. - Set OpenStack Networking to authenticate as the correct tenant:
#
openstack-config --set /etc/neutron/neutron.conf \
keystone_authtoken admin_tenant_name services
Replace services with the name of the tenant created for the use of OpenStack Networking. Examples in this guide useservices
. - Set OpenStack Networking to authenticate using the
neutron
administrative user account:#
openstack-config --set /etc/neutron/neutron.conf \
keystone_authtoken admin_user neutron
- Set OpenStack Networking to use the correct
neutron
administrative user account password:#
openstack-config --set /etc/neutron/neutron.conf \
keystone_authtoken admin_password PASSWORD
Replace PASSWORD with the password set when theneutron
user was created.
7.2.6. Configure the Firewall to Allow OpenStack Networking Traffic
9696
. The firewall on the OpenStack Networking node must be configured to allow network traffic on this port. All steps in this procedure must be performed on the server hosting OpenStack Networking, while logged in as the root
user.
Procedure 7.8. Configuring the Firewall to Allow OpenStack Networking Traffic
- Open the
/etc/sysconfig/iptables
file in a text editor. - Add an INPUT rule allowing TCP traffic on port
9696
. The new rule must appear before any INPUT rules that REJECT traffic:-A INPUT -p tcp -m multiport --dports 9696 -j ACCEPT
- Save the changes to the
/etc/sysconfig/iptables
file. - Restart the
iptables
service to ensure that the change takes effect:#
systemctl restart iptables.service
7.2.7. Configure RabbitMQ Message Broker Settings for OpenStack Networking
root
user.
Procedure 7.9. Configuring the OpenStack Networking Service to use the RabbitMQ Message Broker
- Set RabbitMQ as the RPC back end:
#
openstack-config --set /etc/neutron/neutron.conf \
DEFAULT rpc_backend neutron.openstack.common.rpc.impl_kombu
- Set OpenStack Networking to connect to the RabbitMQ host:
#
openstack-config --set /etc/neutron/neutron.conf \
DEFAULT rabbit_host RABBITMQ_HOST
Replace RABBITMQ_HOST with the IP address or host name of the message broker. - Set the message broker port to
5672
:#
openstack-config --set /etc/neutron/neutron.conf \
DEFAULT rabbit_port 5672
- Set the RabbitMQ user name and password created for OpenStack Networking when RabbitMQ was configured:
#
openstack-config --set /etc/neutron/neutron.conf \
DEFAULT rabbit_userid neutron
#
openstack-config --set /etc/neutron/neutron.conf \
DEFAULT rabbit_password NEUTRON_PASS
Replaceneutron
and NEUTRON_PASS with the RabbitMQ user name and password created for OpenStack Networking. - When RabbitMQ was launched, the
neutron
user was granted read and write permissions to all resources: specifically, through the virtual host/
. Configure the Networking service to connect to this virtual host:#
openstack-config --set /etc/neutron/neutron.conf \
DEFAULT rabbit_virtual_host /
7.2.8. Enable SSL Communication Between OpenStack Networking and the Message Broker
Procedure 7.10. Enabling SSL Communication Between OpenStack Networking and the RabbitMQ Message Broker
- Enable SSL communication with the message broker:
#
openstack-config --set /etc/neutron/neutron.conf \
DEFAULT rabbit_use_ssl True
#
openstack-config --set /etc/neutron/neutron.conf \
DEFAULT kombu_ssl_certfile /path/to/client.crt
#
openstack-config --set /etc/neutron/neutron.conf \
DEFAULT kombu_ssl_keyfile /path/to/clientkeyfile.key
Replace the following values:- Replace /path/to/client.crt with the absolute path to the exported client certificate.
- Replace /path/to/clientkeyfile.key with the absolute path to the exported client key file.
- If your certificates were signed by a third-party Certificate Authority (CA), you must also run the following command:
#
openstack-config --set /etc/neutron/neutron.conf \
DEFAULT kombu_ssl_ca_certs /path/to/ca.crt
Replace /path/to/ca.crt with the absolute path to the CA file provided by the third-party CA (see Section 2.3.4, “Enable SSL on the RabbitMQ Message Broker” for more information).
7.2.9. Configure OpenStack Networking to Communicate with the Compute Service
Procedure 7.11. Configuring OpenStack Networking to Communicate with the Compute Service
- Set OpenStack Networking to connect to the Compute controller node:
#
openstack-config --set /etc/neutron/neutron.conf \
DEFAULT nova_url http://CONTROLLER_IP:8774/v2
Replace CONTROLLER_IP with the IP address or host name of the Compute controller node. - Set the user name, password, and tenant for the
nova
user:#
openstack-config --set /etc/neutron/neutron.conf \
DEFAULT nova_admin_username nova
#
openstack-config --set /etc/neutron/neutron.conf \
DEFAULT nova_admin_tenant_id TENANT_ID
#
openstack-config --set /etc/neutron/neutron.conf \
DEFAULT nova_admin_password PASSWORD
Replace TENANT_ID with the unique identifier of the tenant created for the use of the Compute service. Replace PASSWORD with the password set when thenova
user was created. - Set OpenStack Networking to connect to the Compute controller node in an administrative context:
#
openstack-config --set /etc/neutron/neutron.conf \
DEFAULT nova_admin_auth_url http://CONTROLLER_IP:35357/v2.0
Replace CONTROLLER_IP with the IP address or host name of the Compute controller node. - Set OpenStack Networking to use the correct region for the Compute controller node:
#
openstack-config --set /etc/neutron/neutron.conf \
DEFAULT nova_region_name RegionOne
7.2.10. Launch OpenStack Networking
Start the neutron-server service and configure it to start at boot time:
#
systemctl start neutron-server.service
#
systemctl enable neutron-server.service
Important
force_gateway_on_subnet
configuration key to True
in the /etc/neutron/neutron.conf
file.
7.3. Configure the DHCP Agent
root
user.
Procedure 7.12. Configuring the DHCP Agent
- Configure the DHCP agent to use the Identity service for authentication.
- Set the authentication strategy to
keystone
:#
openstack-config --set /etc/neutron/dhcp_agent.ini \
DEFAULT auth_strategy keystone
- Set the Identity service host that the DHCP agent must use:
#
openstack-config --set /etc/neutron/dhcp_agent.ini \
keystone_authtoken auth_host IP
Replace IP with the IP address or host name of the server hosting the Identity service. - Set the DHCP agent to authenticate as the correct tenant:
#
openstack-config --set /etc/neutron/dhcp_agent.ini \
keystone_authtoken admin_tenant_name services
Replaceservices
with the name of the tenant created for the use of OpenStack Networking. Examples in this guide useservices
. - Set the DHCP agent to authenticate using the
neutron
administrative user account:#
openstack-config --set /etc/neutron/dhcp_agent.ini \
keystone_authtoken admin_user neutron
- Set the DHCP agent to use the correct
neutron
administrative user account password:#
openstack-config --set /etc/neutron/dhcp_agent.ini \
keystone_authtoken admin_password PASSWORD
Replace PASSWORD with the password set when theneutron
user was created.
- Set the interface driver in the
/etc/neutron/dhcp_agent.ini
file based on the OpenStack Networking plug-in being used. If you are using ML2, select either driver. Use the command that applies to the plug-in used in your environment:
Open vSwitch Interface Driver
#
openstack-config --set /etc/neutron/dhcp_agent.ini \
DEFAULT interface_driver neutron.agent.linux.interface.OVSInterfaceDriver
Linux Bridge Interface Driver
#
openstack-config --set /etc/neutron/dhcp_agent.ini \
DEFAULT interface_driver \
neutron.agent.linux.interface.BridgeInterfaceDriver
- Start the
neutron-dhcp-agent
service and configure it to start at boot time:#
systemctl start neutron-dhcp-agent.service
#
systemctl enable neutron-dhcp-agent.service
7.4. Create an External Network
Two methods exist for connecting instances to an external network. The first method, which involves attaching instances to the external bridge (br-ex) directly, is only supported when the Open vSwitch plug-in (or its functionality, implemented through ML2) is in use. The second method, which is supported by the ML2 plug-in, the Open vSwitch plug-in, and the Linux Bridge plug-in, is to use an external provider network.
keystonerc_admin
file containing the authentication details of the Identity service administrative user.
Procedure 7.13. Creating and Configuring an External Network
- Set up the shell to access Keystone as the administrative user:
#
source ~/keystonerc_admin
- Create a new provider network:
[(keystone_admin)]#
neutron net-create EXTERNAL_NAME \
--router:external \
--provider:network_type TYPE \
--provider:physical_network PHYSNET \
--provider:segmentation_id VLAN_TAG
Replace the following values:- Replace EXTERNAL_NAME with a name for the new external network provider.
- Replace TYPE with the type of provider network to use. Supported values are
flat
(for flat networks),vlan
(for VLAN networks), andlocal
(for local networks). - Replace PHYSNET with a name for the physical network. This is not applicable if you intend to use a local network type. PHYSNET must match one of the values defined under
bridge_mappings
in the/etc/neutron/plugin.ini
file. - Replace VLAN_TAG with the VLAN tag that will be used to identify network traffic. The VLAN tag specified must have been defined by the network administrator. If the
network_type
was set to a value other thanvlan
, this parameter is not required.
Take note of the unique external network identifier returned; this is required in subsequent steps. - Create a new subnet for the external provider network:
[(keystone_admin)]#
neutron subnet-create --gateway GATEWAY \
--allocation-pool start=IP_RANGE_START,end=IP_RANGE_END \
--disable-dhcp EXTERNAL_NAME EXTERNAL_CIDR
Replace the following values:- Replace GATEWAY with the IP address or hostname of the system that will act as the gateway for the new subnet. This address must be within the block of IP addresses specified by EXTERNAL_CIDR, but outside of the block of IP addresses specified by the range started by IP_RANGE_START and ended by IP_RANGE_END.
- Replace IP_RANGE_START with the IP address that denotes the start of the range of IP addresses within the new subnet from which floating IP addresses will be allocated.
- Replace IP_RANGE_END with the IP address that denotes the end of the range of IP addresses within the new subnet from which floating IP addresses will be allocated.
- Replace EXTERNAL_NAME with the name of the external network the subnet is to be associated with. This must match the name that was provided to the
net-create
action in the previous step. - Replace EXTERNAL_CIDR with the Classless Inter-Domain Routing (CIDR) representation of the block of IP addresses the subnet represents. An example is
192.168.100.0/24
. The block of IP addresses specified by the range started by IP_RANGE_START and ended by IP_RANGE_END must fall within the block of IP addresses specified by EXTERNAL_CIDR.
Take note of the unique subnet identifier returned; this is required in subsequent steps. - Create a new router:
[(keystone_admin)]#
neutron router-create NAME
Replace NAME with a name for the new router. Take note of the unique router identifier returned; this is required in subsequent steps, and when configuring the L3 agent. - Link the router to the external provider network:
[(keystone_admin)]#
neutron router-gateway-set ROUTER NETWORK
Replace ROUTER with the unique identifier of the router, and replace NETWORK with the unique identifier of the external provider network. - Link the router to each private network subnet:
[(keystone_admin)]#
neutron router-interface-add ROUTER SUBNET
Replace ROUTER with the unique identifier of the router, and replace SUBNET with the unique identifier of a private network subnet. Perform this step for each existing private network subnet to which to link the router.
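For illustration, the following is a minimal end-to-end sketch of this procedure. It assumes a flat provider network and uses hypothetical names and addresses (ext-net, ext-router, physnet1, and the 192.168.100.0/24 block); substitute values appropriate to your environment:
[(keystone_admin)]#
neutron net-create ext-net --router:external \
--provider:network_type flat --provider:physical_network physnet1
[(keystone_admin)]#
neutron subnet-create --gateway 192.168.100.1 \
--allocation-pool start=192.168.100.20,end=192.168.100.100 \
--disable-dhcp ext-net 192.168.100.0/24
[(keystone_admin)]#
neutron router-create ext-router
[(keystone_admin)]#
neutron router-gateway-set ext-router ext-net
Because the network type is flat, no --provider:segmentation_id is required. The neutron client accepts the names shown here as well as the unique identifiers returned by each command.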
7.5. Configure the Plug-in Agent
7.5.1. Configure the Open vSwitch Plug-in Agent
Note
Procedure 7.14. Configuring the Open vSwitch Plug-in Agent
- Start the
openvswitch
service:#
systemctl start openvswitch.service
- Configure the
openvswitch
service to start at boot time:#
systemctl enable openvswitch.service
- Each host running the Open vSwitch agent requires an Open vSwitch bridge called
br-int
, which is used for private network traffic. This bridge is created automatically.Warning
Thebr-int
bridge is required for the agent to function correctly. Once created, do not remove or otherwise modify thebr-int
bridge. - Set the value of the
bridge_mappings
configuration key to a comma-separated list of physical networks and the network bridges associated with them:#
openstack-config --set /etc/neutron/plugins/ml2/openvswitch_agent.ini \
ovs bridge_mappings PHYSNET:BRIDGE
Replace PHYSNET with the name of a physical network, and replace BRIDGE with the name of the network bridge. A worked example follows this procedure. - Start the
neutron-openvswitch-agent
service:#
systemctl start neutron-openvswitch-agent.service
- Configure the
neutron-openvswitch-agent
service to start at boot time:#
systemctl enable neutron-openvswitch-agent.service
- Configure the
neutron-ovs-cleanup
service to start at boot time. This service ensures that the OpenStack Networking agents maintain full control over the creation and management of tap devices:#
systemctl enable neutron-ovs-cleanup.service
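As a worked example of the bridge_mappings step above, assuming a single physical network named physnet1 attached to the bridge br-eth1 (both names hypothetical):
#
openstack-config --set /etc/neutron/plugins/ml2/openvswitch_agent.ini \
ovs bridge_mappings physnet1:br-eth1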
7.5.2. Configure the Linux Bridge Plug-in Agent
Procedure 7.15. Configuring the Linux Bridge Plug-in Agent
- Set the value of the
physical_interface_mappings
configuration key to a comma-separated list of physical networks and the VLAN ranges associated with them that are available for allocation to tenant networks:#
openstack-config --set /etc/neutron/plugin.ini \
LINUX_BRIDGE physical_interface_mappings PHYSNET:VLAN_START:VLAN_END
Replace the following values:- Replace PHYSNET with the name of a physical network.
- Replace VLAN_START with an identifier indicating the start of the VLAN range.
- Replace VLAN_END with an identifier indicating the end of the VLAN range. A worked example follows this procedure.
- Start the
neutron-linuxbridge-agent
service:#
systemctl start neutron-linuxbridge-agent.service
- Configure the
neutron-linuxbridge-agent
service to start at boot time:#
systemctl enable neutron-linuxbridge-agent.service
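As a worked example of the mapping step above, using the format shown in the procedure with a hypothetical physical network physnet1 and a VLAN range of 1000 to 2999:
#
openstack-config --set /etc/neutron/plugin.ini \
LINUX_BRIDGE physical_interface_mappings physnet1:1000:2999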
7.6. Configure the L3 Agent
All steps in this procedure must be performed on the server hosting OpenStack Networking, while logged in as the root user.
Procedure 7.16. Configuring the L3 Agent
- Configure the L3 agent to use the Identity service for authentication.
- Set the authentication strategy to
keystone
:#
openstack-config --set /etc/neutron/metadata_agent.ini \
DEFAULT auth_strategy keystone
- Set the Identity service host that the L3 agent must use:
#
openstack-config --set /etc/neutron/metadata_agent.ini \
keystone_authtoken auth_host IP
Replace IP with the IP address or host name of the server hosting the Identity service. - Set the L3 agent to authenticate as the correct tenant:
#
openstack-config --set /etc/neutron/metadata_agent.ini \
keystone_authtoken admin_tenant_name services
Replace services with the name of the tenant created for the use of OpenStack Networking. Examples in this guide use services. - Set the L3 agent to authenticate using the
neutron
administrative user account:#
openstack-config --set /etc/neutron/metadata_agent.ini \
keystone_authtoken admin_user neutron
- Set the L3 agent to use the correct
neutron
administrative user account password:#
openstack-config --set /etc/neutron/metadata_agent.ini \
keystone_authtoken admin_password PASSWORD
Replace PASSWORD with the password set when theneutron
user was created. - If the
neutron-metadata-agent
service and thenova-metadata-api
service are not installed on the same server, set the address of thenova-metadata-api
service:#
openstack-config --set /etc/neutron/metadata_agent.ini \
DEFAULT nova_metadata_ip IP
Replace IP with the IP address of the server hosting thenova-metadata-api
service.
- Set the interface driver in the
/etc/neutron/l3_agent.ini
file based on the OpenStack Networking plug-in being used. Use the command that applies to the plug-in used in your environment:Open vSwitch Interface Driver
#
openstack-config --set /etc/neutron/l3_agent.ini \
DEFAULT interface_driver neutron.agent.linux.interface.OVSInterfaceDriver
Linux Bridge Interface Driver
#
openstack-config --set /etc/neutron/l3_agent.ini \
DEFAULT interface_driver neutron.agent.linux.interface.BridgeInterfaceDriver
- The L3 agent connects to external networks using either an external bridge or an external provider network. When using the Open vSwitch plug-in, either approach is supported. When using the Linux Bridge plug-in, only the use of an external provider network is supported. Set up the option that is most appropriate for your environment.
Using an External Bridge
Create and configure an external bridge and configure OpenStack Networking to use it. Perform these steps on each system hosting an instance of the L3 agent.- Create the external bridge,
br-ex
:#
ovs-vsctl add-br br-ex
- Ensure that the
br-ex
device persists on reboot by creating a/etc/sysconfig/network-scripts/ifcfg-br-ex
file, and adding the following lines:
DEVICE=br-ex
DEVICETYPE=ovs
TYPE=OVSBridge
ONBOOT=yes
BOOTPROTO=none
- Ensure that the L3 agent will use the external bridge:
#
openstack-config --set /etc/neutron/l3_agent.ini \
DEFAULT external_network_bridge br-ex
Using a Provider Network
To connect the L3 agent to external networks using a provider network, you must first have created the provider network. You must also have created a subnet and router to associate with it. The unique identifier of the router is required to complete these steps.Set the value of theexternal_network_bridge
configuration key to be blank. This ensures that the L3 agent does not attempt to use an external bridge:#
openstack-config --set /etc/neutron/l3_agent.ini \
DEFAULT external_network_bridge ""
- Start the
neutron-l3-agent
service and configure it to start at boot time:#
systemctl start neutron-l3-agent.service
#
systemctl enable neutron-l3-agent.service
- The OpenStack Networking metadata agent allows virtual machine instances to communicate with the Compute metadata service. It runs on the same hosts as the L3 agent. Start the
neutron-metadata-agent
service and configure it to start at boot time:#
systemctl start neutron-metadata-agent.service
#
systemctl enable neutron-metadata-agent.service
- The
leastrouter
scheduler tracks the number of routers assigned to each L3 agent, and schedules each new router to the L3 agent with the fewest routers. This differs from the ChanceScheduler behavior, which randomly selects from the candidate pool of L3 agents.- Enable the
leastrouter
scheduler:#
openstack-config --set /etc/neutron/neutron.conf \
DEFAULT router_scheduler_driver neutron.scheduler.l3_agent_scheduler.LeastRoutersScheduler
- Set up the shell to access keystone as the administrative user:
#
source ~/keystonerc_admin
- The router is scheduled once connected to a network. Unschedule the router:
[(keystone_admin)]#
neutron l3-agent-router-remove L3_NODE_ID ROUTER_ID
Replace L3_NODE_ID with the unique identifier of the agent on which the router is currently hosted, and replace ROUTER_ID with the unique identifier of the router. - Assign the router:
[(keystone_admin)]#
neutron l3-agent-router-add L3_NODE_ID ROUTER_ID
Replace L3_NODE_ID with the unique identifier of the agent on which the router is to be assigned, and replace ROUTER_ID with the unique identifier of the router.
7.7. Validate the OpenStack Networking Installation
Procedure 7.17. Validate the OpenStack Networking Installation
On All Nodes
- Verify that the customized Red Hat Enterprise Linux kernel intended for use with Red Hat OpenStack Platform is running:
#
uname --kernel-release
2.6.32-358.6.2.openstack.el6.x86_64
If the kernel release value returned does not contain the string openstack, update the kernel and reboot the system. - Ensure that the installed IP utilities support network namespaces:
#
ip netns
If an error indicating that the argument is not recognized or supported is returned, update the system using yum.
On Service Nodes
- Ensure that the
neutron-server
service is running:#
openstack-status | grep neutron-server
neutron-server: active
On Network Nodes
Ensure that the following services are running:- DHCP agent (
neutron-dhcp-agent
) - L3 agent (
neutron-l3-agent
) - Plug-in agent, if applicable (
neutron-openvswitch-agent
orneutron-linuxbridge-agent
) - Metadata agent (
neutron-metadata-agent
)
#
openstack-status | grep SERVICENAME
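For example, to confirm the DHCP agent on a network node (sample output, assuming the agent is running):
#
openstack-status | grep neutron-dhcp-agent
neutron-dhcp-agent: active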
7.7.1. Troubleshoot OpenStack Networking Issues
- Debugging Networking Device
- Use the
ip a
command to display all the physical and virtual devices. - Use the
ovs-vsctl show
command to display the interfaces and bridges in a virtual switch. - Use the
ovs-dpctl show
command to show datapaths on the switch.
- Tracking Networking Packets
- Check where packets are not getting through:
#
tcpdump -n -i INTERFACE -e -w FILENAME
Replace INTERFACE with the name of the network interface to check. The interface name can be the name of the bridge or host Ethernet device.
The -e flag ensures that the link-level header is printed (in which the vlan tag will appear). The -w flag is optional. Use it if you want to write the output to a file. If not, the output is written to the standard output (stdout). For more information about tcpdump, see its manual page.
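For example, to capture traffic on a hypothetical external bridge br-ex and write it to /tmp/external.pcap for later analysis:
#
tcpdump -n -i br-ex -e -w /tmp/external.pcap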
- Debugging Network Namespaces
- Use the
ip netns list
command to list all known network namespaces. - Show routing tables inside specific namespaces:
#
ip netns exec NAMESPACE_ID bash
#
route -n
Starting a bash shell with the ip netns exec command allows subsequent commands to be invoked without prefixing them with ip netns exec.
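For example, to inspect the routing table inside a router namespace (router namespaces are named qrouter-<router-id>; the identifier below is hypothetical):
#
ip netns list
qrouter-62ed467e-0421-43eb-bd09-28e9f70a60d8
#
ip netns exec qrouter-62ed467e-0421-43eb-bd09-28e9f70a60d8 route -n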
Chapter 8. Install the Compute Service
8.1. Install a Compute VNC Proxy
8.1.1. Install the Compute VNC Proxy Packages
The Compute VNC proxy provides browser-based access to instance consoles (through the openstack-nova-novncproxy service) and access for traditional VNC clients (through the openstack-nova-xvpvncproxy service). All steps in this procedure must be performed while logged in as the root user.
Procedure 8.1. Installing the Compute VNC proxy packages
- Install the VNC proxy utilities and the console authentication service:
- Install the openstack-nova-novncproxy package using the
yum
command:#
yum install -y openstack-nova-novncproxy
- Install the openstack-nova-console package using the
yum
command:#
yum install -y openstack-nova-console
8.1.2. Configure the Firewall to Allow Compute VNC Proxy Traffic
The openstack-nova-novncproxy service listens on TCP port 6080 and the openstack-nova-xvpvncproxy service listens on TCP port 6081. The firewall must be configured to allow network traffic on these ports. All steps in this procedure must be performed while logged in as the root user.
Procedure 8.2. Configuring the firewall to allow Compute VNC proxy traffic
- Edit the
/etc/sysconfig/iptables
file and add the following on a new line underneath the -A INPUT -i lo -j ACCEPT line and before any -A INPUT -j REJECT rules:-A INPUT -m state --state NEW -m tcp -p tcp --dport 6080 -j ACCEPT
- Save the file and exit the editor.
- Similarly, when using the
openstack-nova-xvpvncproxy
service, enable traffic on TCP port 6081 with the following on a new line in the same location:-A INPUT -m state --state NEW -m tcp -p tcp --dport 6081 -j ACCEPT
Restart the iptables service as the root user to apply the changes:
# service iptables restart
# iptables-save
8.1.3. Configure the VNC Proxy Service
The /etc/nova/nova.conf file holds the following VNC options:
- vnc_enabled - Default is true.
- vncserver_listen - The IP address to which VNC services will bind.
- vncserver_proxyclient_address - The IP address of the compute host used by proxies to connect to instances.
- novncproxy_base_url - The browser address where clients connect to instances.
- novncproxy_port - The port listening for browser VNC connections. Default is 6080.
- xvpvncproxy_port - The port to bind for traditional VNC clients. Default is 6081.
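As a sketch of how these options might be set, assuming a compute host at the hypothetical address 192.0.2.11 and a proxy host at 192.0.2.5, and using the openstack-config convention from this guide:
#
openstack-config --set /etc/nova/nova.conf DEFAULT vncserver_listen 0.0.0.0
#
openstack-config --set /etc/nova/nova.conf DEFAULT vncserver_proxyclient_address 192.0.2.11
#
openstack-config --set /etc/nova/nova.conf DEFAULT novncproxy_base_url http://192.0.2.5:6080/vnc_auto.html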
As the root
user, use the service
command to start the console authentication service:
#
service openstack-nova-consoleauth start
chkconfig
command to permanently enable the service:
#
chkconfig openstack-nova-consoleauth on
root
user, use the service
command on the nova node to start the browser-based service:
#
service openstack-nova-novncproxy start
chkconfig
command to permanently enable the service:
#
chkconfig openstack-nova-novncproxy on
8.1.4. Configure Live Migration
8.1.4.1. General Requirements
- Access to the cloud environment on the command line as an administrator (all steps in this procedure are carried out on the command line). To execute commands, first load the user's authentication variables:
#
source ~/keystonerc_admin
- Both source and destination nodes must be located in the same subnet, and have the same processor type.
- All compute servers (controller and nodes) must be able to perform name resolution with each other.
- The UID and GID of the Compute service and libvirt users must be identical between compute nodes.
- The compute nodes must be using KVM with libvirt.
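A quick way to verify the UID and GID requirement, assuming the standard nova and qemu user names, is to compare the output of the id command on each compute node; the numeric values reported must be identical on every node:
#
id nova
#
id qemu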
8.1.4.2. Multipathing Requirements
Live migration with multipath storage requires consistent multipath device naming between nodes. Disable user-friendly names and restart multipathd by running these commands on both source and destination nodes:
#
mpathconf --enable --user_friendly_names n
#
service multipathd restart
8.1.5. Access Instances with the Compute VNC Proxy
Browse to the novncproxy_base_url address configured in the /etc/nova/nova.conf file to access instance consoles.

Figure 8.1. VNC instance access
8.2. Install a Compute Node
8.2.1. Install the Compute Service Packages
- openstack-nova-api
- Provides the OpenStack Compute API service. At least one node in the environment must host an instance of the API service. This must be the node pointed to by the Identity service endpoint definition for the Compute service.
- openstack-nova-compute
- Provides the OpenStack Compute service.
- openstack-nova-conductor
- Provides the Compute conductor service. The conductor handles database requests made by Compute nodes, ensuring that individual Compute nodes do not require direct database access. At least one node in each environment must act as a Compute conductor.
- openstack-nova-scheduler
- Provides the Compute scheduler service. The scheduler handles scheduling of requests made to the API across the available Compute resources. At least one node in each environment must act as a Compute scheduler.
- python-cinderclient
- Provides client utilities for accessing storage managed by the Block Storage service. This package is not required if you do not intend to attach block storage volumes to your instances or you intend to manage such volumes using a service other than the Block Storage service.
#
yum install -y openstack-nova-api openstack-nova-compute \
openstack-nova-conductor openstack-nova-scheduler \
python-cinderclient
Note
8.2.2. Create the Compute Service Database
All steps in this procedure must be performed on the database server, while logged in as the root user.
Procedure 8.3. Creating the Compute Service Database
- Connect to the database service:
#
mysql -u root -p
- Create the
nova
database:mysql>
CREATE DATABASE nova; - Create a
nova
database user and grant the user access to thenova
database:mysql>
GRANT ALL ON nova.* TO 'nova'@'%' IDENTIFIED BY 'PASSWORD';mysql>
GRANT ALL ON nova.* TO 'nova'@'localhost' IDENTIFIED BY 'PASSWORD';Replace PASSWORD with a secure password that will be used to authenticate with the database server as this user. - Flush the database privileges to ensure that they take effect immediately:
mysql>
FLUSH PRIVILEGES; - Exit the
mysql
client:mysql>
quit
8.2.3. Configure the Compute Service Database Connection
The database connection string used by the Compute service is defined in the /etc/nova/nova.conf file. It must be updated to point to a valid database server before starting the service.
Database operations on Compute nodes are handled by the conductor service (openstack-nova-conductor). Compute nodes communicate with the conductor using the messaging infrastructure; the conductor orchestrates communication with the database. As a result, individual Compute nodes do not require direct access to the database. There must be at least one instance of the conductor service in any Compute environment.
All steps in this procedure must be performed on the server hosting the Compute service, while logged in as the root user.
Procedure 8.4. Configuring the Compute Service SQL Database Connection
- Set the value of the
sql_connection
configuration key:#
openstack-config --set /etc/nova/nova.conf \
DEFAULT sql_connection mysql://USER:PASS@IP/DB
Replace the following values:- Replace USER with the Compute service database user name, usually
nova
. - Replace PASS with the password of the database user.
- Replace IP with the IP address or host name of the database server.
- Replace DB with the name of the Compute service database, usually
nova
.
Important
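For example, with the default user and database names and a hypothetical database server at 192.0.2.10, the resulting setting would be:
#
openstack-config --set /etc/nova/nova.conf \
DEFAULT sql_connection mysql://nova:PASSWORD@192.0.2.10/nova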
8.2.4. Create the Compute Service Identity Records
The Compute service requires Identity service records, including a service user in the services tenant. The Identity service must be installed and the services tenant must exist before you proceed. Perform this procedure on a system to which you have copied the keystonerc_admin file and on which the keystone command-line utility is installed.
Procedure 8.5. Creating Identity Records for the Compute Service
- Set up the shell to access keystone as the administrative user:
#
source ~/keystonerc_admin
- Create the
compute
user:[(keystone_admin)]#
keystone user-create --name compute --pass PASSWORD
+----------+----------------------------------+
| Property |              Value               |
+----------+----------------------------------+
|  email   |                                  |
| enabled  |               True               |
|    id    | 96cd855e5bfe471ce4066794bbafb615 |
|   name   |             compute              |
| username |             compute              |
+----------+----------------------------------+
Replace PASSWORD with a secure password that will be used by the Compute service when authenticating with the Identity service.
compute
user and theadmin
role together within the context of theservices
tenant:[(keystone_admin)]#
keystone user-role-add --user compute --role admin --tenant services
- Create the
compute
service entry:[(keystone_admin)]#
keystone service-create --name compute \
--type compute \
--description "OpenStack Compute Service"
+-------------+----------------------------------+
|   Property  |              Value               |
+-------------+----------------------------------+
| description |    OpenStack Compute Service     |
|   enabled   |               True               |
|      id     | 8dea97f5ee254b309c1792d2bd821e59 |
|     name    |             compute              |
|     type    |             compute              |
+-------------+----------------------------------+
- Create the
compute
endpoint entry:[(keystone_admin)]#
keystone endpoint-create \
--service compute \
--publicurl "http://IP:8774/v2/%(tenant_id)s" \
--adminurl "http://IP:8774/v2/%(tenant_id)s" \
--internalurl "http://IP:8774/v2/%(tenant_id)s" \
--region 'RegionOne'
Replace IP with the IP address or host name of the system hosting the Compute API service.
8.2.5. Configure Compute Service Authentication
root
user.
Procedure 8.6. Configuring the Compute Service to Authenticate Through the Identity Service
- Set the authentication strategy to
keystone
:#
openstack-config --set /etc/nova/nova.conf \
DEFAULT auth_strategy keystone
- Set the Identity service host that the Compute service must use:
#
openstack-config --set /etc/nova/api-paste.ini \
filter:authtoken auth_host IP
Replace IP with the IP address or host name of the server hosting the Identity service. - Set the Compute service to authenticate as the correct tenant:
#
openstack-config --set /etc/nova/api-paste.ini \
filter:authtoken admin_tenant_name services
Replace services with the name of the tenant created for the use of the Compute service. Examples in this guide use services. - Set the Compute service to authenticate using the
compute
administrative user account:#
openstack-config --set /etc/nova/api-paste.ini \
filter:authtoken admin_user compute
- Set the Compute service to use the correct
compute
administrative user account password:#
openstack-config --set /etc/nova/api-paste.ini \
filter:authtoken admin_password PASSWORD
Replace PASSWORD with the password set when thecompute
user was created.
8.2.6. Configure the Firewall to Allow Compute Service Traffic
Connections to virtual machine consoles, whether direct or through the proxy, are received on ports 5900 to 5999. Connections to the Compute API service are received on port 8774. The firewall on the service node must be configured to allow network traffic on these ports. All steps in this procedure must be performed on each Compute node, while logged in as the root user.
Procedure 8.7. Configuring the Firewall to Allow Compute Service Traffic
- Open the
/etc/sysconfig/iptables
file in a text editor. - Add an INPUT rule allowing TCP traffic on ports in the ranges
5900
to5999
. The new rule must appear before any INPUT rules that REJECT traffic:-A INPUT -p tcp -m multiport --dports 5900:5999 -j ACCEPT
- Add an INPUT rule allowing TCP traffic on port
8774
. The new rule must appear before any INPUT rules that REJECT traffic:-A INPUT -p tcp -m multiport --dports 8774 -j ACCEPT
- Save the changes to the
/etc/sysconfig/iptables
file. - Restart the
iptables
service to ensure that the change takes effect:#
systemctl restart iptables.service
8.2.7. Configure the Compute Service to Use SSL
Use the following options in the nova.conf file to configure SSL.
Configuration Option | Description
---|---
enabled_ssl_apis | A list of APIs with enabled SSL.
ssl_ca_file | The CA certificate file to use to verify connecting clients.
ssl_cert_file | The SSL certificate of the API server.
ssl_key_file | The SSL private key of the API server.
tcp_keepidle | Sets the value of TCP_KEEPIDLE in seconds for each server socket. Defaults to 600.
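A minimal sketch of enabling SSL for the Compute API using these options, assuming hypothetical certificate and key paths:
#
openstack-config --set /etc/nova/nova.conf DEFAULT enabled_ssl_apis osapi_compute
#
openstack-config --set /etc/nova/nova.conf DEFAULT ssl_cert_file /etc/pki/tls/certs/nova.crt
#
openstack-config --set /etc/nova/nova.conf DEFAULT ssl_key_file /etc/pki/tls/private/nova.key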
8.2.8. Configure RabbitMQ Message Broker Settings for the Compute Service
All steps in this procedure must be performed on the server hosting the Compute service, while logged in as the root user.
Procedure 8.8. Configuring the Compute Service to use the RabbitMQ Message Broker
- Set RabbitMQ as the RPC back end:
#
openstack-config --set /etc/nova/nova.conf \
DEFAULT rpc_backend rabbit
- Set the Compute service to connect to the RabbitMQ host:
#
openstack-config --set /etc/nova/nova.conf \
DEFAULT rabbit_host RABBITMQ_HOST
Replace RABBITMQ_HOST with the IP address or host name of the message broker. - Set the message broker port to
5672
:#
openstack-config --set /etc/nova/nova.conf \
DEFAULT rabbit_port 5672
- Set the RabbitMQ user name and password created for the Compute service when RabbitMQ was configured:
#
openstack-config --set /etc/nova/nova.conf \
DEFAULT rabbit_userid nova
#
openstack-config --set /etc/nova/nova.conf \
DEFAULT rabbit_password NOVA_PASS
Replace nova
and NOVA_PASS with the RabbitMQ user name and password created for the Compute service. - When RabbitMQ was launched, the
nova
user was granted read and write permissions to all resources: specifically, through the virtual host/
. Configure the Compute service to connect to this virtual host:#
openstack-config --set /etc/nova/nova.conf \
DEFAULT rabbit_virtual_host /
8.2.9. Enable SSL Communication Between the Compute Service and the Message Broker
Procedure 8.9. Enabling SSL Communication Between the Compute Service and the RabbitMQ Message Broker
- Enable SSL communication with the message broker:
#
openstack-config --set /etc/nova/nova.conf \
DEFAULT rabbit_use_ssl True
#
openstack-config --set /etc/nova/nova.conf \
DEFAULT kombu_ssl_certfile /path/to/client.crt
#
openstack-config --set /etc/nova/nova.conf \
DEFAULT kombu_ssl_keyfile /path/to/clientkeyfile.key
Replace the following values:- Replace /path/to/client.crt with the absolute path to the exported client certificate.
- Replace /path/to/clientkeyfile.key with the absolute path to the exported client key file.
- If your certificates were signed by a third-party Certificate Authority (CA), you must also run the following command:
#
openstack-config --set /etc/nova/nova.conf \
DEFAULT kombu_ssl_ca_certs /path/to/ca.crt
Replace /path/to/ca.crt with the absolute path to the CA file provided by the third-party CA (see Section 2.3.4, “Enable SSL on the RabbitMQ Message Broker” for more information).
8.2.10. Configure Resource Overcommitment
Important
- The default CPU overcommit ratio is 16. This means that up to 16 virtual cores can be assigned to a node for each physical core.
- The default memory overcommit ratio is 1.5. This means that instances can be assigned to a physical node if the total instance memory usage is less than 1.5 times the amount of physical memory available.
Use the cpu_allocation_ratio and ram_allocation_ratio directives in /etc/nova/nova.conf to change these default settings.
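For example, to set the ratios explicitly (the values shown are the defaults; substitute values appropriate to your workloads):
#
openstack-config --set /etc/nova/nova.conf DEFAULT cpu_allocation_ratio 16.0
#
openstack-config --set /etc/nova/nova.conf DEFAULT ram_allocation_ratio 1.5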
8.2.11. Reserve Host Resources
You can reserve host memory and disk resources so that they are always available to the host itself, regardless of instance load. To reserve resources, set the following directives in /etc/nova/nova.conf:
- reserved_host_memory_mb. Defaults to 512MB.
- reserved_host_disk_mb. Defaults to 0MB.
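For example, to reserve 2048MB of memory and 10240MB of disk for the host (hypothetical values; tune them to your hardware):
#
openstack-config --set /etc/nova/nova.conf DEFAULT reserved_host_memory_mb 2048
#
openstack-config --set /etc/nova/nova.conf DEFAULT reserved_host_disk_mb 10240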
8.2.12. Configure Compute Networking
8.2.12.1. Compute Networking Overview
When using OpenStack Networking, the nova-network service must not run. Instead, all network-related decisions are delegated to the OpenStack Networking service.
Using nova-manage and nova to manage networks or IP addressing, including both fixed and floating IPs, is not supported with OpenStack Networking.
Important
Uninstall nova-network and reboot any physical nodes that were running nova-network before using these nodes to run OpenStack Networking. Problems can arise from inadvertently running the nova-network process while using the OpenStack Networking service; for example, a previously running nova-network could push down stale firewall rules.
8.2.12.2. Update the Compute Configuration
All steps in this procedure must be performed on each Compute node, while logged in as the root user.
Procedure 8.10. Updating the Connection and Authentication Settings of Compute Nodes
- Modify the
network_api_class
configuration key to indicate that OpenStack Networking is in use:#
openstack-config --set /etc/nova/nova.conf \
DEFAULT network_api_class nova.network.neutronv2.api.API
- Set the Compute service to use the endpoint of the OpenStack Networking API:
#
openstack-config --set /etc/nova/nova.conf \
neutron url http://IP:9696/
Replace IP with the IP address or host name of the server hosting the OpenStack Networking API service. - Set the name of the tenant used by the OpenStack Networking service. Examples in this guide use services:
#
openstack-config --set /etc/nova/nova.conf \
neutron admin_tenant_name services
- Set the name of the OpenStack Networking administrative user:
#
openstack-config --set /etc/nova/nova.conf \
neutron admin_username neutron
- Set the password associated with the OpenStack Networking administrative user:
#
openstack-config --set /etc/nova/nova.conf \
neutron admin_password PASSWORD
- Set the URL associated with the Identity service endpoint:
#
openstack-config --set /etc/nova/nova.conf \
neutron admin_auth_url http://IP:35357/v2.0
Replace IP with the IP address or host name of the server hosting the Identity service. - Enable the metadata proxy and configure the metadata proxy secret:
#
openstack-config --set /etc/nova/nova.conf \
neutron service_metadata_proxy true
#
openstack-config --set /etc/nova/nova.conf \
neutron metadata_proxy_shared_secret METADATA_SECRET
Replace METADATA_SECRET with the string that the metadata proxy will use to secure communication. - Enable the use of OpenStack Networking security groups:
#
openstack-config --set /etc/nova/nova.conf \
DEFAULT security_group_api neutron
- Set the firewall driver to
nova.virt.firewall.NoopFirewallDriver
:#
openstack-config --set /etc/nova/nova.conf \
DEFAULT firewall_driver nova.virt.firewall.NoopFirewallDriver
This must be done when OpenStack Networking security groups are in use. - Open the
/etc/sysctl.conf
file in a text editor, and add or edit the following kernel networking parameters:
net.ipv4.ip_forward = 1
net.ipv4.conf.all.rp_filter = 0
net.ipv4.conf.default.rp_filter = 0
net.bridge.bridge-nf-call-arptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
- Load the updated kernel parameters:
#
sysctl -p
8.2.12.3. Configure the L2 Agent
8.2.12.4. Configure Virtual Interface Plugging
When nova-compute creates an instance, it must 'plug' each of the vNICs associated with the instance into an OpenStack Networking controlled virtual switch. Compute must also inform the virtual switch of the OpenStack Networking port identifier associated with each vNIC.
A generic virtual interface driver, nova.virt.libvirt.vif.LibvirtGenericVIFDriver, is provided in Red Hat OpenStack Platform. This driver relies on OpenStack Networking being able to return the type of virtual interface binding required. The following plug-ins support this operation:
- Linux Bridge
- Open vSwitch
- NEC
- BigSwitch
- CloudBase Hyper-V
- Brocade
Use the openstack-config command to set the value of the vif_driver configuration key appropriately:
#
openstack-config --set /etc/nova/nova.conf \
libvirt vif_driver \
nova.virt.libvirt.vif.LibvirtGenericVIFDriver
Important
- If running Open vSwitch with security groups enabled, use the Open vSwitch specific driver,
nova.virt.libvirt.vif.LibvirtHybridOVSBridgeDriver
, instead of the generic driver. - For Linux Bridge environments, you must add the following to the
/etc/libvirt/qemu.conf
file to ensure that the virtual machine launches properly:
user = "root"
group = "root"
cgroup_device_acl = [
   "/dev/null", "/dev/full", "/dev/zero",
   "/dev/random", "/dev/urandom",
   "/dev/ptmx", "/dev/kvm", "/dev/kqemu",
   "/dev/rtc", "/dev/hpet", "/dev/net/tun",
]
8.2.13. Populate the Compute Service Database
Important
Procedure 8.11. Populating the Compute Service Database
- Log in to a system hosting an instance of the
openstack-nova-conductor
service. - Switch to the
nova
user:#
su nova -s /bin/sh
- Initialize and populate the database identified in
/etc/nova/nova.conf
:$
nova-manage db sync
8.2.14. Launch the Compute Services
Procedure 8.12. Launching Compute Services
- Libvirt requires that the
messagebus
service be enabled and running. Start the service:#
systemctl start messagebus.service
- The Compute service requires that the
libvirtd
service be enabled and running. Start the service and configure it to start at boot time:#
systemctl start libvirtd.service
#
systemctl enable libvirtd.service
- Start the API service on each system that is hosting an instance of it. Note that each API instance should either have its own endpoint defined in the Identity service database or be pointed to by a load balancer that is acting as the endpoint. Start the service and configure it to start at boot time:
#
systemctl start openstack-nova-api.service
#
systemctl enable openstack-nova-api.service
- Start the scheduler on each system that is hosting an instance of it. Start the service and configure it to start at boot time:
#
systemctl start openstack-nova-scheduler.service
#
systemctl enable openstack-nova-scheduler.service
- Start the conductor on each system that is hosting an instance of it. Note that it is recommended that this service not run on every Compute node, because doing so would eliminate the security benefits of restricting direct database access from the Compute nodes. Start the service and configure it to start at boot time:
#
systemctl start openstack-nova-conductor.service
#
systemctl enable openstack-nova-conductor.service
- Start the Compute service on every system that is intended to host virtual machine instances. Start the service and configure it to start at boot time:
#
systemctl start openstack-nova-compute.service
#
systemctl enable openstack-nova-compute.service
- Depending on your environment configuration, you may also need to start the following services:
openstack-nova-cert
- The X509 certificate service, required if you intend to use the EC2 API to the Compute service.
Note
To use the EC2 API to the Compute service, you must set the options in the nova.conf configuration file. For more information, see the Configuring the EC2 API section in the Red Hat OpenStack Platform Configuration Reference Guide.
- The Nova networking service. Note that you must not start this service if you have installed and configured, or intend to install and configure, OpenStack Networking.
openstack-nova-objectstore
- The Nova object storage service. It is recommended that the Object Storage service (Swift) is used for new deployments.
Chapter 9. Install the Orchestration Service
9.1. Install the Orchestration Service Packages
- openstack-heat-api
- Provides the OpenStack-native REST API to the Orchestration engine service.
- openstack-heat-api-cfn
- Provides the AWS CloudFormation-compatible API to the Orchestration engine service.
- openstack-heat-common
- Provides components common to all Orchestration services.
- openstack-heat-engine
- Provides the OpenStack API for launching templates and submitting events back to the API.
- openstack-heat-api-cloudwatch
- Provides the AWS CloudWatch-compatible API to the Orchestration engine service.
- heat-cfntools
- Provides the tools required on
heat
-provisioned cloud instances. - python-heatclient
- Provides a Python API and command-line script, both of which make up a client for the Orchestration API service.
- openstack-utils
- Provides supporting utilities to assist with a number of tasks, including the editing of configuration files.
#
yum install -y openstack-heat-* python-heatclient openstack-utils
9.2. Configure the Orchestration Service
- Configure a database for the Orchestration service.
- Bind each Orchestration API service to a corresponding IP address.
- Create and configure the Orchestration service Identity records.
- Configure how Orchestration services authenticate with the Identity service.
9.2.1. Create the Orchestration Service Database
The database connection string used by the Orchestration service is defined in the /etc/heat/heat.conf file. It must be updated to point to a valid database server before the service is started. All steps in this procedure must be performed on the database server, while logged in as the root user.
Procedure 9.1. Configuring the Orchestration Service Database
- Connect to the database service:
#
mysql -u root -p
- Create the
heat
database:mysql>
CREATE DATABASE heat; - Create a database user named
heat
and grant the user access to theheat
database:mysql>
GRANT ALL ON heat.* TO 'heat'@'%' IDENTIFIED BY 'PASSWORD';mysql>
GRANT ALL ON heat.* TO 'heat'@'localhost' IDENTIFIED BY 'PASSWORD';Replace PASSWORD with a secure password that will be used to authenticate with the database server as this user. - Flush the database privileges to ensure that they take effect immediately:
mysql>
FLUSH PRIVILEGES; - Exit the mysql client:
mysql>
quit - Set the value of the
sql_connection
configuration key:#
openstack-config --set /etc/heat/heat.conf \
DEFAULT sql_connection mysql://heat:PASSWORD@IP/heat
Replace the following values:- Replace PASSWORD with the password of the
heat
database user. - Replace IP with the IP address or host name of the database server.
- As the
heat
user, sync the database:#
runuser -s /bin/sh heat -c "heat-manage db_sync"
Important
9.2.2. Restrict the Bind Addresses of Each Orchestration API Service
You can restrict which IP address each Orchestration API service listens on by changing the bind_host setting of each Orchestration API service. This setting controls which IP address a service should use for incoming connections. Set the bind_host setting for each Orchestration API service:
#
openstack-config --set /etc/heat/heat.conf \
heat_api bind_host IP
#
openstack-config --set /etc/heat/heat.conf \
heat_api_cfn bind_host IP
#
openstack-config --set /etc/heat/heat.conf \
heat_api_cloudwatch bind_host IP
Replace IP with the IP address that the corresponding API service should use for incoming connections.
9.2.3. Create the Orchestration Service Identity Records
The Orchestration service requires Identity service records, including a service user in the services tenant. The Identity service must be installed and the services tenant must exist before you proceed. Perform this procedure on a system to which you have copied the keystonerc_admin file and on which the keystone command-line utility is installed.
Procedure 9.2. Creating Identity Records for the Orchestration Service
- Set up the shell to access Keystone as the administrative user:
#
source ~/keystonerc_admin
- Create the
heat
user:[(keystone_admin)]#
keystone user-create --name heat --pass PASSWORD
+----------+----------------------------------+
| Property |              Value               |
+----------+----------------------------------+
|  email   |                                  |
| enabled  |               True               |
|    id    | 96cd855e5bfe471ce4066794bbafb615 |
|   name   |               heat               |
| username |               heat               |
+----------+----------------------------------+
Replace PASSWORD with a password that will be used by the Orchestration service when authenticating with the Identity service.
heat
user and theadmin
role together within the context of theservices
tenant:[(keystone_admin)]#
keystone user-role-add --user heat --role admin --tenant services
- Create the
heat
andheat-cfn
service entries:[(keystone_admin)]#
keystone service-create --name heat \
--type orchestration
#
keystone service-create --name heat-cfn \
--type cloudformation
- Create endpoint entries for the
heat
service and theheat-cfn
service:[(keystone_admin)]#
keystone endpoint-create \
--service heat-cfn \
--publicurl 'HEAT_CFN_IP:8000/v1' \
--adminurl 'HEAT_CFN_IP:8000/v1' \
--internalurl 'HEAT_CFN_IP:8000/v1' \
--region 'RegionOne'
[(keystone_admin)]#
keystone endpoint-create \
--service heat \
--publicurl 'HEAT_IP:8004/v1/%(tenant_id)s' \
--adminurl 'HEAT_IP:8004/v1/%(tenant_id)s' \
--internalurl 'HEAT_IP:8004/v1/%(tenant_id)s' \
--region 'RegionOne'
Replace the following values:- Replace HEAT_CFN_IP with the IP or host name of the system hosting the
heat-cfn
service. - Replace HEAT_IP with the IP or host name of the system hosting the
heat
service.
Important
Include thehttp://
prefix for HEAT_CFN_IP and HEAT_IP values.
9.2.3.1. Create the Required Identity Domain for the Orchestration Service
The Orchestration service requires an Identity domain in which to create projects and users associated with heat stacks. Using a separate domain allows for separation between the instances and the user deploying the stack. This allows regular users without administrative rights to deploy heat stacks that require such credentials.
Procedure 9.3. Creating an Identity Service Domain for the Orchestration Service
- Obtain the administrative token used by the Identity service. This token is the value of the
admin_token
configuration key in the/etc/keystone/keystone.conf
file of the Identity server:#
cat /etc/keystone/keystone.conf | grep admin_token
admin_token = 0292d404a88c4f269383ff28a3839ab4
The administrative token is used to perform all actions requiring administrative credentials. - Install the python-openstackclient package on the Red Hat Enterprise Linux 7.1 host you will use to create and configure the domain:
#
yum install python-openstackclient
Run the rest of the steps in this procedure from the Red Hat Enterprise Linux 7.1 host. - Create the
heat
domain:#
openstack --os-token ADMIN_TOKEN --os-url=IDENTITY_IP:5000/v3 \
--os-identity-api-version=3 domain create heat \
--description "Owns users and projects created by heat"
Replace the following values:- Replace ADMIN_TOKEN with the administrative token.
- Replace IDENTITY_IP with the IP or host name of the server hosting the Identity service.
This command returns the domain ID of theheat
domain. This ID (HEAT_DOMAIN_ID) is used in the next step. - Create a user named
heat_domain_admin
that can have administrative rights within theheat
domain:#
openstack --os-token ADMIN_TOKEN --os-url=IDENTITY_IP:5000/v3 \
--os-identity-api-version=3 user create heat_domain_admin \
--password PASSWORD \
--domain HEAT_DOMAIN_ID \
--description "Manages users and projects created by heat"
Replace PASSWORD with a password for this user. This command returns a user ID (DOMAIN_ADMIN_ID), which is used in the next step. - Grant the
heat_domain_admin
user administrative rights within theheat
domain:#
openstack --os-token ADMIN_TOKEN --os-url=IDENTITY_IP:5000/v3 \
--os-identity-api-version=3 role add --user DOMAIN_ADMIN_ID \
--domain HEAT_DOMAIN_ID admin
- On the server hosting the Orchestration service, configure the service to use the
heat
domain and user:#
openstack-config --set /etc/heat/heat.conf \
DEFAULT stack_domain_admin_password DOMAIN_PASSWORD
#
openstack-config --set /etc/heat/heat.conf \
DEFAULT stack_domain_admin heat_domain_admin
#
openstack-config --set /etc/heat/heat.conf \
DEFAULT stack_user_domain HEAT_DOMAIN_ID
9.2.4. Configure Orchestration Service Authentication
root
user.
Procedure 9.4. Configuring the Orchestration Service to Authenticate Through the Identity Service
- Set the Orchestration services to authenticate as the correct tenant:
#
openstack-config --set /etc/heat/heat.conf \
keystone_authtoken admin_tenant_name services
Replace services with the name of the tenant created for the use of the Orchestration service. Examples in this guide use services. - Set the Orchestration services to authenticate using the
heat
administrative user account:#
openstack-config --set /etc/heat/heat.conf \
keystone_authtoken admin_user heat
- Set the Orchestration services to use the correct
heat
administrative user account password:#
openstack-config --set /etc/heat/heat.conf \
keystone_authtoken admin_password PASSWORD
Replace PASSWORD with the password set when theheat
user was created. - Set the Identity service host that the Orchestration services must use:
#
openstack-config --set /etc/heat/heat.conf \
keystone_authtoken service_host KEYSTONE_HOST
#
openstack-config --set /etc/heat/heat.conf \
keystone_authtoken auth_host KEYSTONE_HOST
#
openstack-config --set /etc/heat/heat.conf \
keystone_authtoken auth_uri http://KEYSTONE_HOST:35357/v2.0
#
openstack-config --set /etc/heat/heat.conf \
keystone_authtoken keystone_ec2_uri http://KEYSTONE_HOST:35357/v2.0
Replace KEYSTONE_HOST with the IP address or host name of the server hosting the Identity service. If the Identity service is hosted on the same system, use 127.0.0.1
. - Configure the
heat-api-cfn
andheat-api-cloudwatch
service host names to which virtual machine instances will connect:#
openstack-config --set /etc/heat/heat.conf \
DEFAULT heat_metadata_server_url HEAT_CFN_HOST:8000
#
openstack-config --set /etc/heat/heat.conf \
DEFAULT heat_waitcondition_server_url HEAT_CFN_HOST:8000/v1/waitcondition
#
openstack-config --set /etc/heat/heat.conf \
DEFAULT heat_watch_server_url HEAT_CLOUDWATCH_HOST:8003
Replace the following values:- Replace HEAT_CFN_HOST with the IP address or host name of the server hosting the
heat-api-cfn
service. - Replace HEAT_CLOUDWATCH_HOST with the IP address or host name of the server hosting the
heat-api-cloudwatch
service.
Important
Even if all services are hosted on the same system, do not use 127.0.0.1
for either service host name. This IP address refers to the local host of each instance, and would therefore prevent the instance from reaching the actual service. - Application templates use wait conditions and signaling for orchestration. Define the Identity role for users that should receive progress data. By default, this role is
heat_stack_user
:#
openstack-config --set /etc/heat/heat.conf \
DEFAULT heat_stack_user_role heat_stack_user
9.2.5. Configure RabbitMQ Message Broker Settings for the Orchestration Service
root
user.
Procedure 9.5. Configuring the Orchestration Service to use the RabbitMQ Message Broker
- Set RabbitMQ as the RPC back end:
#
openstack-config --set /etc/heat/heat.conf \
DEFAULT rpc_backend heat.openstack.common.rpc.impl_kombu
- Set the Orchestration service to connect to the RabbitMQ host:
#
openstack-config --set /etc/heat/heat.conf \
DEFAULT rabbit_host RABBITMQ_HOST
Replace RABBITMQ_HOST with the IP address or host name of the message broker. - Set the message broker port to
5672
:#
openstack-config --set /etc/heat/heat.conf \
DEFAULT rabbit_port 5672
- Set the RabbitMQ user name and password created for the Orchestration service when RabbitMQ was configured:
#
openstack-config --set /etc/heat/heat.conf \
DEFAULT rabbit_userid heat
#
openstack-config --set /etc/heat/heat.conf \
DEFAULT rabbit_password HEAT_PASS
Replace heat
and HEAT_PASS with the RabbitMQ user name and password created for the Orchestration service. - When RabbitMQ was launched, the
heat
user was granted read and write permissions to all resources: specifically, through the virtual host/
. Configure the Orchestration service to connect to this virtual host:#
openstack-config --set /etc/heat/heat.conf \
DEFAULT rabbit_virtual_host /
9.2.6. Enable SSL Communication Between the Orchestration Service and the Message Broker
Procedure 9.6. Enabling SSL Communication Between the Orchestration Service and the RabbitMQ Message Broker
- Enable SSL communication with the message broker:
#
openstack-config --set /etc/heat/heat.conf \
DEFAULT rabbit_use_ssl True
#
openstack-config --set /etc/heat/heat.conf \
DEFAULT kombu_ssl_certfile /path/to/client.crt
#
openstack-config --set /etc/heat/heat.conf \
DEFAULT kombu_ssl_keyfile /path/to/clientkeyfile.key
Replace the following values:- Replace /path/to/client.crt with the absolute path to the exported client certificate.
- Replace /path/to/clientkeyfile.key with the absolute path to the exported client key file.
- If your certificates were signed by a third-party Certificate Authority (CA), you must also run the following command:
#
openstack-config --set /etc/heat/heat.conf \
DEFAULT kombu_ssl_ca_certs /path/to/ca.crt
Replace /path/to/ca.crt with the absolute path to the CA file provided by the third-party CA (see Section 2.3.4, “Enable SSL on the RabbitMQ Message Broker” for more information).
9.3. Launch the Orchestration Service
Procedure 9.7. Launching Orchestration Services
- Start the Orchestration API service, and configure it to start at boot time:
#
systemctl start openstack-heat-api.service
#
systemctl enable openstack-heat-api.service
- Start the Orchestration AWS CloudFormation-compatible API service, and configure it to start at boot time:
#
systemctl start openstack-heat-api-cfn.service
#
systemctl enable openstack-heat-api-cfn.service
- Start the Orchestration AWS CloudWatch-compatible API service, and configure it to start at boot time:
#
systemctl start openstack-heat-api-cloudwatch.service
#
systemctl enable openstack-heat-api-cloudwatch.service
- Start the Orchestration API service for launching templates and submitting events back to the API, and configure it to start at boot time:
#
systemctl start openstack-heat-engine.service
#
systemctl enable openstack-heat-engine.service
9.4. Deploy a Stack Using Orchestration Templates
The Orchestration engine service uses templates (defined in .template files) to launch instances, IPs, volumes, or other types of stacks. The heat utility is a command-line interface that allows you to create, configure, and launch stacks.
Note
Sample templates are provided by the openstack-heat-templates package, which can be installed as follows:
#
yum install -y openstack-heat-templates
Instances launched from Orchestration templates communicate with the openstack-heat-api-cfn service. Such instances must be able to communicate with the openstack-heat-api-cloudwatch service and the openstack-heat-api-cfn service. The IPs and ports used by these services are the values set in the /etc/heat/heat.conf file as heat_metadata_server_url and heat_watch_server_url.
To allow this communication, the firewall must allow traffic on the ports used by openstack-heat-api-cloudwatch (8003), openstack-heat-api-cfn (8000), and openstack-heat-api (8004).
Procedure 9.8. Deploying a Stack Using Orchestration Templates
- Open the
/etc/sysconfig/iptables
file in a text editor. - Add the following INPUT rules to allow TCP traffic on ports
8003
,8000
, and8004
:-A INPUT -i BR -p tcp --dport 8003 -j ACCEPT -A INPUT -i BR -p tcp --dport 8000 -j ACCEPT -A INPUT -p tcp -m multiport --dports 8004 -j ACCEPT
Replace BR with the interface of the bridge used by the instances launched from Orchestration templates. Do not include the -i BR parameter in the INPUT rules if you are not using nova-network, or if the Orchestration service and nova-compute are not hosted on the same server.
/etc/sysconfig/iptables
file. - Restart the
iptables
service for the firewall changes to take effect:#
systemctl restart iptables.service
- Launch an application:
#
heat stack-create STACKNAME \
--template-file=PATH_TEMPLATE \
--parameters="PARAMETERS"
Replace the following values:- Replace STACKNAME with the name to assign to the stack. This name will appear when you run the
heat stack-list
command. - Replace PATH_TEMPLATE with the path to your
.template
file. - Replace PARAMETERS with a semicolon-delimited list of stack creation parameters to use. Supported parameters are defined in the template file itself.
9.5. Integrate Telemetry and Orchestration Services
Telemetry alarms can be used to monitor resources created with the heat stack-create command. To enable this, the Orchestration service must be installed and configured accordingly (see Section 12.1, “Overview of Telemetry Service Deployment” for more information). Ensure that the following line is present in the resource_registry section of /etc/heat/environment.d/default.yaml:
"AWS::CloudWatch::Alarm": "file:///etc/heat/templates/AWS_CloudWatch_Alarm.yaml"
Chapter 10. Install the Dashboard
10.1. Dashboard Service Requirements
- The httpd, mod_wsgi, and mod_ssl packages must be installed (for security purposes):
#
yum install -y mod_wsgi httpd mod_ssl
- The system must have a connection to the Identity service, as well as to the other OpenStack API services (Compute, Block Storage, Object Storage, Image, and Networking services).
- You must know the URL of the Identity service endpoint.
10.2. Install the Dashboard Packages
Note
It is recommended that you use memcached as the session store.
- openstack-dashboard
- Provides the OpenStack dashboard service.
If you are using memcached, the following packages must also be installed:
- memcached
- Memory-object caching system that speeds up dynamic web applications by alleviating database load.
- python-memcached
- Python interface to the
memcached
daemon.
Procedure 10.1. Installing the Dashboard Packages
- If required, install the
memcached
object caching system:#
yum install -y memcached python-memcached
- Install the dashboard package:
#
yum install -y openstack-dashboard
10.3. Launch the Apache Web Service
The dashboard is served by the httpd service. Start the service, and configure it to start at boot time:
#
systemctl start httpd.service
#
systemctl enable httpd.service
10.4. Configure the Dashboard
10.4.1. Configure Connections and Logging
Configure connections and logging for the dashboard in the /etc/openstack-dashboard/local_settings file (sample files are available in the Configuration Reference Guide at https://access.redhat.com/site/documentation/en-US/Red_Hat_Enterprise_Linux_OpenStack_Platform):
Procedure 10.2. Configuring Connections and Logging for the Dashboard
- Set the
ALLOWED_HOSTS
parameter with a comma-separated list of host/domain names that the application can serve. For example:ALLOWED_HOSTS = ['horizon.example.com', 'localhost', '192.168.20.254', ]
- Update the
CACHES
settings with the memcached values:
SESSION_ENGINE = 'django.contrib.sessions.backends.cache'
CACHES = {
    'default': {
        'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
        'LOCATION': 'memcacheURL:port',
    }
}
Replace the following values:- Replace memcacheURL with the IP address of the host on which
memcached
was installed. - Replace port with the value from the
PORT
parameter in the/etc/sysconfig/memcached
file.
- Specify the host URL for the Identity service endpoint. For example:
OPENSTACK_KEYSTONE_URL="127.0.0.1"
- Update the dashboard's time zone:
TIME_ZONE="UTC"
The time zone can also be updated using the dashboard GUI. - To ensure the configuration changes take effect, restart the Apache service:
#
systemctl restart httpd.service
Note
The HORIZON_CONFIG dictionary contains all the settings for the dashboard. Whether or not a service appears in the dashboard depends on the Service Catalog configuration in the Identity service.
Note
It is recommended that you use the django-secure module to ensure that most of the recommended practices and modern browser protection mechanisms are enabled. For more information, see http://django-secure.readthedocs.org/en/latest/ (django-secure).
10.4.2. Configure the Dashboard to Use HTTPS
Procedure 10.3. Configuring the Dashboard to use HTTPS
- Open the
/etc/openstack-dashboard/local_settings
file in a text editor, and uncomment the following parameters:
SECURE_PROXY_SSL_HEADER = ('HTTP_X_FORWARDED_PROTOCOL', 'https')
CSRF_COOKIE_SECURE = True
SESSION_COOKIE_SECURE = True
The latter two settings instruct the browser to only send dashboard cookies over HTTPS connections, ensuring that sessions will not work over HTTP. - Open the
/etc/httpd/conf/httpd.conf
file in a text editor, and add the following line:NameVirtualHost *:443
- Open the
/etc/httpd/conf.d/openstack-dashboard.conf
file in a text editor.- Delete the following lines:
WSGIDaemonProcess dashboard
WSGIProcessGroup dashboard
WSGISocketPrefix run/wsgi
WSGIScriptAlias /dashboard /usr/share/openstack-dashboard/openstack_dashboard/wsgi/django.wsgi
Alias /static /usr/share/openstack-dashboard/static/
<Directory /usr/share/openstack-dashboard/openstack_dashboard/wsgi>
  <IfModule mod_deflate.c>
    SetOutputFilter DEFLATE
    <IfModule mod_headers.c>
      # Make sure proxies don't deliver the wrong content
      Header append Vary User-Agent env=!dont-vary
    </IfModule>
  </IfModule>
  Order allow,deny
  Allow from all
</Directory>
<Directory /usr/share/openstack-dashboard/static>
  <IfModule mod_expires.c>
    ExpiresActive On
    ExpiresDefault "access 6 month"
  </IfModule>
  <IfModule mod_deflate.c>
    SetOutputFilter DEFLATE
  </IfModule>
  Order allow,deny
  Allow from all
</Directory>
RedirectMatch permanent ^/$ https://xxx.xxx.xxx.xxx:443/dashboard
- Add the following lines:
WSGIDaemonProcess dashboard
WSGIProcessGroup dashboard
WSGISocketPrefix run/wsgi
LoadModule ssl_module modules/mod_ssl.so
<VirtualHost *:80>
  ServerName openstack.example.com
  RedirectPermanent / https://openstack.example.com/
</VirtualHost>
<VirtualHost *:443>
  ServerName openstack.example.com
  SSLEngine On
  SSLCertificateFile /etc/httpd/SSL/openstack.example.com.crt
  SSLCACertificateFile /etc/httpd/SSL/openstack.example.com.crt
  SSLCertificateKeyFile /etc/httpd/SSL/openstack.example.com.key
  SetEnvIf User-Agent ".*MSIE.*" nokeepalive ssl-unclean-shutdown
  WSGIScriptAlias / /usr/share/openstack-dashboard/openstack_dashboard/wsgi/django.wsgi
  WSGIDaemonProcess horizon user=apache group=apache processes=3 threads=10
  RedirectPermanent /dashboard https://openstack.example.com
  Alias /static /usr/share/openstack-dashboard/static/
  <Directory /usr/share/openstack-dashboard/openstack_dashboard/wsgi>
    Order allow,deny
    Allow from all
  </Directory>
</VirtualHost>
<Directory /usr/share/openstack-dashboard/static>
  <IfModule mod_expires.c>
    ExpiresActive On
    ExpiresDefault "access 6 month"
  </IfModule>
  <IfModule mod_deflate.c>
    SetOutputFilter DEFLATE
  </IfModule>
  Order allow,deny
  Allow from all
</Directory>
RedirectMatch permanent ^/$ /dashboard/
In the new configuration, Apache listens on port 443 and redirects all non-secured requests to the HTTPS protocol. The <VirtualHost *:443>
section defines the required options for this protocol, including private key, public key, and certificates. - Restart the Apache service and the
memcached
service:#
systemctl restart httpd.service
#
systemctl restart memcached.service
10.4.3. Change the Default Role for the Dashboard
The dashboard uses a default role named _member_, which is created automatically by the Identity service. This is adequate for regular users. If you choose to create a different role and set the dashboard to use this role, you must create this role in the Identity service prior to using the dashboard, then configure the dashboard to use it.
Perform this procedure on a system to which you have copied the keystonerc_admin file and on which the keystone command-line utility is installed.
Procedure 10.4. Changing the Default Role for the Dashboard
- Set up the shell to access keystone as the administrative user:
#
source ~/keystonerc_admin
- Create the new role:
[(keystone_admin)]#
keystone role-create --name NEW_ROLE
+----------+----------------------------------+
| Property |              Value               |
+----------+----------------------------------+
|    id    | 8261ac4eabcc4da4b01610dbad6c038a |
|   name   |             NEW_ROLE             |
+----------+----------------------------------+
Replace NEW_ROLE with a name for the role.
/etc/openstack-dashboard/local_settings
file in a text editor, and change the value of the following parameter:OPENSTACK_KEYSTONE_DEFAULT_ROLE = 'NEW_ROLE'
Replace NEW_ROLE with the name of the role you created in the previous step. - Restart the Apache service for the change to take effect:
#
systemctl restart httpd.service
10.4.4. Configure SELinux
If SELinux is configured in 'Enforcing' mode, you must configure SELinux to allow network connections from the httpd service to the Identity server. This is also recommended if SELinux is configured in 'Permissive' mode.
Procedure 10.5. Configuring SELinux to Allow Connections from the Apache Service
- Check the status of SELinux on the system:
#
getenforce
- If the resulting value is 'Enforcing' or 'Permissive', allow connections between the
httpd
service and the Identity service:#
setsebool -P httpd_can_network_connect on
10.4.5. Configure the Dashboard Firewall
The httpd service and the dashboard support both HTTP and HTTPS connections. All steps in this procedure must be performed on the server hosting the httpd service, while logged in as the root user.
Note
Procedure 10.6. Configuring the Firewall to Allow Dashboard Traffic
- Open the
/etc/sysconfig/iptables
configuration file in a text editor:- To allow incoming connections using only HTTPS, add the following firewall rule:
-A INPUT -p tcp --dport 443 -j ACCEPT
- To allow incoming connections using both HTTP and HTTPS, add the following firewall rule:
-A INPUT -p tcp -m multiport --dports 80,443 -j ACCEPT
- Restart the
iptables
service for the changes to take effect:#
systemctl restart iptables.service
Important
10.5. Validate Dashboard Installation
To validate the installation, browse to the dashboard from a client machine using one of the following addresses:
- HTTPS
https://HOSTNAME/dashboard/
- HTTP
http://HOSTNAME/dashboard/

Figure 10.1. Dashboard Login Screen
Chapter 11. Install the Data Processing Service
11.1. Install the Data Processing Service Packages
#
yum install openstack-sahara-api openstack-sahara-engine
These packages provide the Data Processing API service (openstack-sahara-api) and the Data Processing engine service (openstack-sahara-engine).
11.2. Configure the Data Processing Service
- Configure the Data Processing service database connection.
- Configure the Data Processing API service to authenticate with the Identity service.
- Configure the firewall to allow service traffic for the Data Processing service (through port
8386
).
11.2.1. Create the Data Processing Service Database
The database connection string used by the Data Processing service is defined in the /etc/sahara/sahara.conf
file. It must be updated to point to a valid database server before starting the Data Processing API service (openstack-sahara-api
).
Procedure 11.1. Creating and Configuring a Database for the Data Processing API Service
- Connect to the database service:
#
mysql -u root -p
- Create the
sahara
database:mysql>
CREATE DATABASE sahara; - Create a
sahara
database user and grant the user access to thesahara
database:mysql>
GRANT ALL ON sahara.* TO 'sahara'@'%' IDENTIFIED BY 'PASSWORD';mysql>
GRANT ALL ON sahara.* TO 'sahara'@'localhost' IDENTIFIED BY 'PASSWORD';Replace PASSWORD with a secure password that will be used to authenticate with the database server as this user. - Exit the
mysql
client:mysql>
quit - Set the value of the
sql_connection
configuration key:#
openstack-config --set /etc/sahara/sahara.conf \
database connection mysql://sahara:PASSWORD@IP/sahara
Replace the following values:- Replace PASS with the password of the database user.
- Replace IP with the IP address or host name of the server hosting the database service.
- Configure the schema of the
sahara
database:#
sahara-db-manage --config-file /etc/sahara/sahara.conf upgrade head
Important
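As a quick check that the schema was created, you can list the tables in the sahara database, authenticating with the sahara user and password created above:
# mysql -u sahara -p -e 'SHOW TABLES;' sahara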
11.2.2. Create the Data Processing Service Identity Records
Create and configure the Identity service records required by the Data Processing service. This procedure assumes that you have already created an administrative user account and a services tenant.
All steps in this procedure must be performed on a machine to which the keystonerc_admin file has been copied and on which the keystone command-line utility is installed.
Procedure 11.2. Creating Identity Records for the Data Processing Service
- Set up the shell to access keystone as the administrative user:
#
source ~/keystonerc_admin
- Create the
sahara
user:[(keystone_admin)]#
keystone user-create --name sahara --pass PASSWORD
Replace PASSWORD with a password that will be used by the Data Processing service when authenticating with the Identity service. - Link the
sahara
user and theadmin
role together within the context of theservices
tenant:[(keystone_admin)]#
keystone user-role-add --user sahara --role admin --tenant services
- Create the
sahara
service entry:[(keystone_admin)]#
keystone service-create --name sahara \
--type data-processing \
--description "OpenStack Data Processing"
- Create the
sahara
endpoint entry:[(keystone_admin)]#
keystone endpoint-create \
--service sahara \
--publicurl 'http://SAHARA_HOST:8386/v1.1/%(tenant_id)s' \
--adminurl 'http://SAHARA_HOST:8386/v1.1/%(tenant_id)s' \
--internalurl 'http://SAHARA_HOST:8386/v1.1/%(tenant_id)s' \
--region 'RegionOne'
Replace SAHARA_HOST with the IP address or fully qualified domain name of the server hosting the Data Processing service.Note
By default, the endpoint is created in the default region,RegionOne
. This is a case-sensitive value. To specify a different region when creating an endpoint, use the--region
argument to provide it.See Section 3.6.1, “Service Regions” for more information.
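To confirm the records were created correctly, you can list the registered services and endpoints (a verification step, not part of the original procedure):
[(keystone_admin)]# keystone service-list
[(keystone_admin)]# keystone endpoint-list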
11.2.3. Configure Data Processing Service Authentication
openstack-sahara-api
) to use the Identity service for authentication. All steps in this procedure must be performed on the server hosting the Data Processing API service, while logged in as the root
user.
Procedure 11.3. Configuring the Data Processing API Service to Authenticate through the Identity Service
- Set the Identity service host that the Data Processing API service must use:
#
openstack-config --set /etc/sahara/sahara.conf \
keystone_authtoken auth_uri http://IP:5000/v2.0/
#
openstack-config --set /etc/sahara/sahara.conf \
keystone_authtoken identity_uri http://IP:35357
Replace IP with the IP address of the server hosting the Identity service. - Set the Data Processing API service to authenticate as the correct tenant:
#
openstack-config --set /etc/sahara/sahara.conf \
keystone_authtoken admin_tenant_name services
Replace services with the name of the tenant created for the use of the Data Processing service. Examples in this guide useservices
. - Set the Data Processing API service to authenticate using the
sahara
administrative user account:#
openstack-config --set /etc/sahara/sahara.conf \
keystone_authtoken admin_user sahara
- Set the Data Processing API service to use the correct
sahara
administrative user account password:#
openstack-config --set /etc/sahara/sahara.conf \
keystone_authtoken admin_password PASSWORD
Replace PASSWORD with the password set when thesahara
user was created.
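You can read back any of these settings with openstack-config --get to confirm they were written correctly, for example:
# openstack-config --get /etc/sahara/sahara.conf keystone_authtoken admin_user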
11.2.4. Configure the Firewall to Allow OpenStack Data Processing Service Traffic
The Data Processing service receives connections on port 8386. The firewall on the service node must be configured to allow network traffic on this port. All steps in this procedure must be performed on the server hosting the Data Processing service, while logged in as the root
user.
Procedure 11.4. Configuring the Firewall to Allow Data Processing Service Traffic
- Open the
/etc/sysconfig/iptables
file in a text editor. - Add an INPUT rule allowing TCP traffic on port
8386
. The new rule must appear before any INPUT rules that REJECT traffic:-A INPUT -p tcp -m multiport --dports 8386 -j ACCEPT
- Save the changes to the
/etc/sysconfig/iptables
file. - Restart the
iptables
service to ensure that the change takes effect:#
systemctl restart iptables.service
11.3. Configure and Launch the Data Processing Service
Procedure 11.5. Launching the Data Processing Service
- If your OpenStack deployment uses OpenStack Networking (
neutron
), you must configure the Data Processing service accordingly:#
openstack-config --set /etc/sahara/sahara.conf \
DEFAULT use_neutron true
- Start the Data Processing services and configure them to start at boot time:
#
systemctl start openstack-sahara-api.service
#
systemctl start openstack-sahara-engine.service
#
systemctl enable openstack-sahara-api.service
#
systemctl enable openstack-sahara-engine.service
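To verify that both services started cleanly, check their status:
# systemctl status openstack-sahara-api.service openstack-sahara-engine.service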
Chapter 12. Install the Telemetry Service
12.1. Overview of Telemetry Service Deployment
The Telemetry service is composed of an API service, several openstack-ceilometer agents, and two alarm services. The API service (provided by the openstack-ceilometer-api package) runs on one or more central management servers to provide access to the Telemetry database.
Note
mongod
is the only database service supported by the Telemetry service.
- The Central agent (provided by openstack-ceilometer-central) runs on a central management server to poll public REST APIs for utilization statistics about resources that are not visible (either through notifications or from the hypervisor layer).
- The Collector (provided by openstack-ceilometer-collector) runs on one or more central management servers to receive notifications on resource usage. The Collector also parses resource usage statistics and saves them as datapoints in the Telemetry database.
- The Compute agent (provided by openstack-ceilometer-compute) runs on each Compute service node to poll for instance utilization statistics. You must install and configure the Compute service before installing the openstack-ceilometer-compute package on any node.
- The Evaluator (provided by ceilometer-alarm-evaluator) triggers state transitions on alarms.
- The Notifier (provided by ceilometer-alarm-notifier) executes required actions when alarms are triggered.
Each of the Telemetry components requires the following settings:
- Authentication, including the Identity service tokens and the Telemetry secret
- The database connection string, for connecting to the Telemetry database
These settings are defined in /etc/ceilometer/ceilometer.conf. As such, components deployed on the same host share the same settings. If Telemetry components are deployed on multiple hosts, you must replicate any authentication changes to these hosts by copying the ceilometer.conf file to each host after applying the new settings.
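For example, a minimal way to replicate the settings, assuming root SSH access and NODE as a placeholder for each additional Telemetry host, is to copy the file directly:
# scp /etc/ceilometer/ceilometer.conf root@NODE:/etc/ceilometer/ceilometer.conf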
12.2. Install the Telemetry Service Packages
The Telemetry service requires the following packages:
- mongodb
- Provides the MongoDB database service. The Telemetry service uses MongoDB as its back-end data repository.
- mongodb-server
- Provides the MongoDB server software, MongoDB sharding server software, default configuration files, and init scripts.
- openstack-ceilometer-api
- Provides the
ceilometer
API service. - openstack-ceilometer-central
- Provides the Central
ceilometer
agent. - openstack-ceilometer-collector
- Provides the
ceilometer
Collector agent. - openstack-ceilometer-common
- Provides components common to all
ceilometer
services. - openstack-ceilometer-compute
- Provides the
ceilometer
agent that must run on each Compute node. - openstack-ceilometer-alarm
- Provides the
ceilometer
alarm notification and evaluation services. - openstack-ceilometer-notification
- Provides the
ceilometer
Notification agent. This agent provides metrics to the Collector agent from different OpenStack services. - python-ceilometer
- Provides the
ceilometer
Python library. - python-ceilometerclient
- Provides the
ceilometer
command-line tool and a Python API (specifically, theceilometerclient
module).
#
yum install -y mongodb-server mongodb openstack-ceilometer-* python-ceilometer python-ceilometerclient
12.3. Configure the MongoDB Back End and Create the Telemetry Database
The Telemetry service uses MongoDB as its back-end data repository. Before starting the mongod service, optionally configure mongod to run with the --smallfiles parameter. This parameter configures MongoDB to use a smaller default data file and journal size. MongoDB will limit the size of each data file, creating and writing to a new one when it reaches 512 MB.
Procedure 12.1. Configuring the MongoDB Back End and Creating the Telemetry Database
- Optionally configure
mongod
to run with the--smallfiles
parameter. Open the/etc/sysconfig/mongod
file in a text editor, and add the following line:OPTIONS="--smallfiles /etc/mongodb.conf"
MongoDB uses the parameters specified in theOPTIONS
section whenmongod
launches. - Start the MongoDB service:
#
systemctl start mongod.service
- If the database must be accessed from a server other than its local host, open the
/etc/mongod.conf
file in a text editor, and update thebind_ip
with the IP address of your MongoDB server:bind_ip = MONGOHOST
- Open the
/etc/sysconfig/iptables
file in a text editor and add an INPUT rule allowing TCP traffic on port27017
. The new rule must appear before any INPUT rules that REJECT traffic:-A INPUT -p tcp -m multiport --dports 27017 -j ACCEPT
- Restart the
iptables
service to ensure that the change takes effect:#
systemctl restart iptables.service
- Create a database for the Telemetry service:
#
mongo --host MONGOHOST --eval '
db = db.getSiblingDB("ceilometer");
db.addUser({user: "ceilometer",
pwd: "MONGOPASS",
roles: [ "readWrite", "dbAdmin" ]})'
This also creates a database user namedceilometer
. Replace MONGOHOST with the IP address or host name of the server hosting the MongoDB database. Replace MONGOPASS with a password for theceilometer
user.
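You can verify the new credentials by authenticating against the ceilometer database directly, supplying the same MONGOHOST and MONGOPASS values, for example:
# mongo --host MONGOHOST -u ceilometer -p MONGOPASS ceilometer --eval 'db.getName()'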
12.4. Configure the Telemetry Service Database Connection
The database connection string used by the Telemetry service is defined in the /etc/ceilometer/ceilometer.conf
file. It must be updated to point to a valid database server before starting the Telemetry API service (openstack-ceilometer-api
), Notification agent (openstack-ceilometer-notification
), and Collector agent (openstack-ceilometer-collector
).
All steps in this procedure must be performed on the server hosting the openstack-ceilometer-api
service and the openstack-ceilometer-collector
service, while logged in as the root
user.
Procedure 12.2. Configuring the Telemetry Service Database Connection
- Set the database connection string:
#
openstack-config --set /etc/ceilometer/ceilometer.conf \
database connection mongodb://ceilometer:MONGOPASS@MONGOHOST/ceilometer
Replace the following values:- Replace MONGOPASS with the password of the
ceilometer
user; it is required by the Telemetry service to log in to the database server. Supply these credentials only when required by the database server (for example, when the database server is hosted on another system or node). - Replace MONGOHOST with the IP address or host name and the port of the server hosting the database service.
If MongoDB is hosted locally on the same host, use the following database connection string:mongodb://localhost:27017/ceilometer
12.5. Create the Telemetry Identity Records
Create and configure the Identity service records required by the Telemetry service. This procedure assumes that you have already created an administrative user account and a services tenant.
All steps in this procedure must be performed on a machine to which the keystonerc_admin file has been copied and on which the keystone command-line utility is installed.
Procedure 12.3. Creating Identity Records for the Telemetry Service
- Set up the shell to access keystone as the administrative user:
#
source ~/keystonerc_admin
- Create the
ceilometer
user:[(keystone_admin)]#
keystone user-create --name ceilometer \
--pass PASSWORD \
--email CEILOMETER_EMAIL
Replace the following values:- Replace PASSWORD with the password that will be used by the Telemetry service when authenticating with the Identity service.
- Replace CEILOMETER_EMAIL with the email address used by the Telemetry service.
- Create the
ResellerAdmin
role:[(keystone_admin)]#
keystone role-create --name ResellerAdmin
- Link the
ceilometer
user and theResellerAdmin
role together within the context of theservices
tenant:[(keystone_admin)]#
keystone user-role-add --user ceilometer \
--role ResellerAdmin \
--tenant services
- Link the
ceilometer
user and theadmin
role together within the context of theservices
tenant:[(keystone_admin)]#
keystone user-role-add --user ceilometer \
--role admin \
--tenant services
- Create the
ceilometer
service entry:[(keystone_admin)]#
keystone service-create --name ceilometer \
--type metering \
--description "OpenStack Telemetry Service"
+-------------+----------------------------------+
| Property    | Value                            |
+-------------+----------------------------------+
| description | OpenStack Telemetry Service      |
| enabled     | True                             |
| id          | a511aea8bc1264641f4dff1db38751br |
| name        | ceilometer                       |
| type        | metering                         |
+-------------+----------------------------------+
- Create the
ceilometer
endpoint entry:[(keystone_admin)]#
keystone endpoint-create \
--service ceilometer \
--publicurl 'http://IP:8777' \
--adminurl 'http://IP:8777' \
--internalurl 'http://IP:8777' \
--region 'RegionOne'
Replace IP with the IP address or host name of the server hosting the Telemetry service.Note
By default, the endpoint is created in the default region,RegionOne
. This is a case-sensitive value. To specify a different region when creating an endpoint, use the--region
argument to provide it.See Section 3.6.1, “Service Regions” for more information.
12.6. Configure Telemetry Service Authentication
openstack-ceilometer-api
) to use the Identity service for authentication. All steps in this procedure must be performed on the server hosting the Telemetry API service, while logged in as the root
user.
Procedure 12.4. Configuring the Telemetry Service to Authenticate Through the Identity Service
- Set the Identity service host that the Telemetry API service must use:
#
openstack-config --set /etc/ceilometer/ceilometer.conf \
keystone_authtoken auth_host IP
Replace IP with the IP address or host name of the server hosting the Identity service. - Set the authentication port that the Telemetry API service must use:
#
openstack-config --set /etc/ceilometer/ceilometer.conf \
keystone_authtoken auth_port PORT
Replace PORT with the authentication port used by the Identity service, usually35357
. - Set the Telemetry API service to use the
http
protocol for authenticating:#
openstack-config --set /etc/ceilometer/ceilometer.conf \
keystone_authtoken auth_protocol http
- Set the Telemetry API service to authenticate as the correct tenant:
#
openstack-config --set /etc/ceilometer/ceilometer.conf \
keystone_authtoken admin_tenant_name services
Replace services with the name of the tenant created for the use of the Telemetry service. Examples in this guide useservices
. - Set the Telemetry service to authenticate using the
ceilometer
administrative user account:#
openstack-config --set /etc/ceilometer/ceilometer.conf \
keystone_authtoken admin_user ceilometer
- Set the Telemetry service to use the correct
ceilometer
administrative user account password:#
openstack-config --set /etc/ceilometer/ceilometer.conf \
keystone_authtoken admin_password PASSWORD
Replace PASSWORD with the password set when theceilometer
user was created. - The Telemetry secret is a string used to help secure communication between all components of the Telemetry service across multiple hosts (for example, between the Collector agent and a Compute node agent). Set the Telemetry secret:
#
openstack-config --set /etc/ceilometer/ceilometer.conf \
publisher_rpc metering_secret SECRET
Replace SECRET with the string that all Telemetry service components should use to sign and verify messages that are sent or received over AMQP. - Configure the service endpoints to be used by the Central agent, Compute agents, and Evaluator on the host where each component is deployed:
#
openstack-config --set /etc/ceilometer/ceilometer.conf \
DEFAULT os_auth_url http://IP:35357/v2.0
#
openstack-config --set /etc/ceilometer/ceilometer.conf \
DEFAULT os_username ceilometer
#
openstack-config --set /etc/ceilometer/ceilometer.conf \
DEFAULT os_tenant_name services
#
openstack-config --set /etc/ceilometer/ceilometer.conf \
DEFAULT os_password PASSWORD
Replace the following values:- Replace IP with the IP address or host name of the server hosting the Identity service.
- Replace PASSWORD with the password set when the
ceilometer
user was created.
12.7. Configure the Firewall to Allow Telemetry Service Traffic
The Telemetry API service receives connections on port 8777. The firewall on the service node must be configured to allow network traffic on this port. All steps in this procedure must be performed on the server hosting the Telemetry service, while logged in as the root
user.
Procedure 12.5. Configuring the Firewall to Allow Telemetry Service Traffic
- Open the
/etc/sysconfig/iptables
file in a text editor. - Add an INPUT rule allowing TCP traffic on port
8777
. The new rule must appear before any INPUT rules that REJECT traffic:-A INPUT -p tcp -m multiport --dports 8777 -j ACCEPT
- Save the changes to the
/etc/sysconfig/iptables
file. - Restart the
iptables
service to ensure that the change takes effect:#
systemctl restart iptables.service
12.8. Configure RabbitMQ Message Broker Settings for the Telemetry Service
All steps in this procedure must be performed on the server hosting the Telemetry service, while logged in as the root
user.
Procedure 12.6. Configuring the Telemetry Service to Use the RabbitMQ Message Broker
- Set RabbitMQ as the RPC back end:
#
openstack-config --set /etc/ceilometer/ceilometer.conf \
DEFAULT rpc_backend rabbit
- Set the Telemetry service to connect to the RabbitMQ host:
#
openstack-config --set /etc/ceilometer/ceilometer.conf \
DEFAULT rabbit_host RABBITMQ_HOST
Replace RABBITMQ_HOST with the IP address or host name of the message broker. - Set the message broker port to
5672
:#
openstack-config --set /etc/ceilometer/ceilometer.conf \
DEFAULT rabbit_port 5672
- Set the RabbitMQ user name and password created for the Telemetry service when RabbitMQ was configured:
#
openstack-config --set /etc/ceilometer/ceilometer.conf \
DEFAULT rabbit_userid ceilometer
#
openstack-config --set /etc/ceilometer/ceilometer.conf \
DEFAULT rabbit_password CEILOMETER_PASS
Replaceceilometer
and CEILOMETER_PASS with the RabbitMQ user name and password created for the Telemetry service. - When RabbitMQ was launched, the
ceilometer
user was granted read and write permissions to all resources: specifically, through the virtual host/
. Configure the Telemetry service to connect to this virtual host:#
openstack-config --set /etc/ceilometer/ceilometer.conf \
DEFAULT rabbit_virtual_host /
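If you want to confirm the broker-side permissions, rabbitmqctl can list them for the ceilometer user; run this on the host where RabbitMQ is installed:
# rabbitmqctl list_user_permissions ceilometer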
12.9. Configure the Compute Node
The Telemetry service monitors each Compute node by means of the Compute agent (openstack-ceilometer-compute) installed on that node. You can configure a node's Compute agent by replicating the
/etc/ceilometer/ceilometer.conf
file from another host whose Telemetry components have already been configured.
Procedure 12.7. Enabling Notifications on a Compute Node
- Install the packages openstack-ceilometer-compute, python-ceilometer and python-ceilometerclient on the node:
#
yum install openstack-ceilometer-compute python-ceilometer python-ceilometerclient
- Enable auditing on the node:
#
openstack-config --set /etc/nova/nova.conf \
DEFAULT instance_usage_audit True
- Configure the audit frequency:
#
openstack-config --set /etc/nova/nova.conf \
DEFAULT instance_usage_audit_period hour
- Configure what type of state changes should trigger a notification:
#
openstack-config --set /etc/nova/nova.conf \
DEFAULT notify_on_state_change vm_and_task_state
- Set the node to use the correct notification drivers. Open the
/etc/nova/nova.conf
file in a text editor, and add the following lines in theDEFAULT
section:
notification_driver = messagingv2
notification_driver = ceilometer.compute.nova_notifier
The Compute node requires two different notification drivers, which are defined using the same configuration key. You cannot useopenstack-config
to set these values. - Start the Compute agent:
#
systemctl start openstack-ceilometer-compute.service
- Configure the agent to start at boot time:
#
systemctl enable openstack-ceilometer-compute.service
- Restart the
openstack-nova-compute
service to apply all changes to/etc/nova/nova.conf
:#
systemctl restart openstack-nova-compute.service
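Once the agent is running and the Compute service has been restarted, you can confirm that samples are arriving by listing the available meters from a host with the keystonerc_admin credentials loaded; meters may take a few polling intervals to appear:
[(keystone_admin)]# ceilometer meter-list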
12.10. Configure Monitored Services
The Telemetry service can also monitor the Image service, the Block Storage service, the Object Storage service, and OpenStack Networking. Install the python-ceilometer and python-ceilometerclient packages on each service node:
#
yum install python-ceilometer python-ceilometerclient
Note
- Image service (
glance
) #
openstack-config --set /etc/glance/glance-api.conf \
DEFAULT notifier_strategy NOTIFYMETHOD
Replace NOTIFYMETHOD with a notification queue:rabbit
(to use arabbitmq
queue) orqpid
(to use aqpid
message queue).- Block Storage service (
cinder
) #
openstack-config --set /etc/cinder/cinder.conf \
DEFAULT notification_driver messagingv2
#
openstack-config --set /etc/cinder/cinder.conf \
DEFAULT rpc_backend cinder.openstack.common.rpc.impl_kombu
#
openstack-config --set /etc/cinder/cinder.conf \
DEFAULT control_exchange cinder
- Object Storage service (
swift
) - The Telemetry service collects samples from the Object Storage service (
swift
) through theResellerAdmin
role that was created when configuring the required Identity records for Telemetry. You must also configure the Object Storage service to process traffic fromceilometer
.- Open the
/etc/swift/proxy-server.conf
file in a text editor, and add or update the following lines:[filter:ceilometer] use = egg:ceilometer#swift [pipeline:main] pipeline = healthcheck cache authtoken keystoneauth ceilometer proxy-server
- Add the
swift
user to theceilometer
group:#
usermod -a -G ceilometer swift
- Allow the Object Storage service to output logs to
/var/log/ceilometer/swift-proxy-server.log
:#
touch /var/log/ceilometer/swift-proxy-server.log
#
chown ceilometer:ceilometer /var/log/ceilometer/swift-proxy-server.log
#
chmod 664 /var/log/ceilometer/swift-proxy-server.log
- OpenStack Networking (
neutron
) - Telemetry supports the use of labels for distinguishing IP ranges. Enable OpenStack Networking integration with Telemetry:
#
openstack-config --set /etc/neutron/neutron.conf \
DEFAULT notification_driver messagingv2
12.11. Launch the Telemetry API and Agents
On each host where a Telemetry component is installed, start the corresponding service and configure it to start at boot time, replacing SERVICENAME with the name of each service listed below:
#
systemctl start SERVICENAME.service
#
systemctl enable SERVICENAME.service
- openstack-ceilometer-compute
- openstack-ceilometer-central
- openstack-ceilometer-collector
- openstack-ceilometer-api
- openstack-ceilometer-alarm-evaluator
- openstack-ceilometer-alarm-notifier
- openstack-ceilometer-notification
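As a convenience, a shell loop can start and enable the services in one pass; include only the services actually installed on the host in question (for example, openstack-ceilometer-compute runs only on Compute nodes):
# for svc in openstack-ceilometer-central openstack-ceilometer-collector \
   openstack-ceilometer-api openstack-ceilometer-alarm-evaluator \
   openstack-ceilometer-alarm-notifier openstack-ceilometer-notification; do
   systemctl start $svc.service
   systemctl enable $svc.service
 done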
Chapter 13. Install Time-Series-Database-as-a-Service
Time-Series-Database-as-a-Service (gnocchi) is a multi-tenant metrics and resource database. It is designed to store metrics at a very large scale while providing access to metrics and resource information to operators and users.
Note
Time-Series-Database-as-a-Service is built around two drivers:
- storage
- The
storage
driver is responsible for storing measures of created metrics. It receives timestamps and values and computes aggregations according to the defined archive policies. - indexer
- The
indexer
driver is responsible for storing the index of all resources, along with their types and their properties. Time-Series-Database-as-a-Service only knows resource types from the OpenStack project, but it also provides a generic type so you can create basic resources and handle the resource properties yourself. The indexer
is also responsible for linking resources with metrics.
13.1. Install the Time-Series-Database-as-a-Service Packages
- openstack-gnocchi-api
- Provides the main OpenStack Time-Series-Database-as-a-Service API
- openstack-gnocchi-carbonara
- Provides the OpenStack Time-Series-Database-as-a-Service carbonara
- openstack-gnocchi-doc
- Provides the OpenStack Time-Series-Database-as-a-Service documentation
- openstack-gnocchi-indexer-sqlalchemy
- Provides the OpenStack Time-Series-Database-as-a-Service indexer SQLAlchemy
- openstack-gnocchi-statsd
- Provides the OpenStack Time-Series-Database-as-a-Service statsd daemon
- python-gnocchi
- Provides the OpenStack Time-Series-Database-as-a-Service Python libraries
#
yum install openstack-gnocchi\* -y
13.2. Initialize Time-Series-Database-as-a-Service
After installing the packages, initialize the database schema used by the indexer:
#
gnocchi-dbsync
13.3. Configure Time-Series-Database-as-a-Service
By default, the Time-Series-Database-as-a-Service configuration file (/etc/gnocchi/gnocchi.conf) has no settings configured. You must manually add and configure each setting as required.
- In the
[DEFAULT]
section, enable logging and verbose output:
[DEFAULT]
debug = true
verbose = true
- In the
[api]
section, set the number of API workers:
[api]
workers = 1
- In the
[database]
section, set the backend to sqlalchemy:
[database]
backend = sqlalchemy
- In the
[indexer]
section, configure the SQL database, supplying the user name, password, and IP address of the database server:
[indexer]
url = mysql://USER_NAME:PASSWORD@192.0.2.10/gnocchi?charset=utf8
Note
The database must be created before starting gnocchi-api. See Section 13.4, "Create Time-Series-Database-as-a-Service Database".
- In the
[keystone_authtoken]
section, update the authentication parameters. For example:
[keystone_authtoken]
auth_uri = http://192.0.2.7:5000/v2.0
signing_dir = /var/cache/gnocchi
auth_host = 192.0.2.7
auth_port = 35357
auth_protocol = http
identity_uri = http://192.0.2.7:35357/
admin_user = admin
admin_password = 5179f4d3c5b1a4c51269cad2a23dbf336513efeb
admin_tenant_name = admin
- In the
[statsd]
section, include the following parameter values:
[statsd]
resource_id = RESOURCE_ID
user_id = USER_ID
project_id = PROJECT_ID
archive_policy_name = low
flush_delay = 5
Replace the values forRESOURCE_ID
,USER_ID
, andPROJECT_ID
with values for your deployment.
- In the
[storage]
section, manually add the coordination_url and file_basepath parameters, and set the driver value to file:
[storage]
coordination_url = file:///var/lib/gnocchi/locks
driver = file
file_basepath = /var/lib/gnocchi
- Restart the
gnocchi
service to ensure that the change takes effect:#
systemctl restart openstack-gnocchi-api.service
#
systemctl restart openstack-gnocchi-metricd.service
#
systemctl restart openstack-gnocchi-statsd.service
13.4. Create Time-Series-Database-as-a-Service Database
All steps in this procedure must be performed on the database server, while logged in as the root
user.
- Connect to the database service:
#
mysql -u root -p
- Create the Time-Series-Database-as-a-Service database:
mysql>
CREATE DATABASE gnocchi;
- Create a Time-Series-Database-as-a-Service database user and grant the user access to the Time-Series-Database-as-a-Service database:
mysql>
GRANT ALL ON gnocchi.* TO 'gnocchi'@'%' IDENTIFIED BY 'PASSWORD';
mysql>
GRANT ALL ON gnocchi.* TO 'gnocchi'@'localhost' IDENTIFIED BY 'PASSWORD';
ReplacePASSWORD
with a secure password that will be used to authenticate with the database server as this user. - Flush the database privileges to ensure that they take effect immediately:
mysql>
FLUSH PRIVILEGES;
- Exit the
mysql
client:mysql>
quit
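To confirm that the new user and database are usable, you can authenticate as the gnocchi user and list the visible databases, for example:
# mysql -u gnocchi -p -e 'SHOW DATABASES;'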
13.5. Set Time-Series-Database-as-a-Service as the Backend for Telemetry Service
When Time-Series-Database-as-a-Service is configured as the back end for the Telemetry service, the gnocchi dispatcher posts the meters onto the TDSaaS back end. To enable the
gnocchi
dispatcher, add the following configuration settings to the /etc/ceilometer/ceilometer.conf
file:
[DEFAULT]
dispatcher = gnocchi

[dispatcher_gnocchi]
filter_project = gnocchi_swift
filter_service_activity = True
archive_policy = low
url = http://localhost:8041
The url setting in the above configuration is a TDSaaS endpoint URL and depends on your deployment.
Note
When the gnocchi dispatcher is enabled, Ceilometer API calls return a 410 with an empty result. Use the TDSaaS API instead to access the data.
Restart the openstack-ceilometer-api service to ensure that the change takes effect:
#
systemctl restart openstack-ceilometer-api.service
Chapter 15. Install Database-as-a-Service (Technology Preview)
Warning
Warning
15.1. Database-as-a-Service Requirements
Before installing the Database-as-a-Service packages, make sure the following requirements are met:
- Update the admin user's password:
#
keystone user-password-update --pass ADMIN_PASSWORD admin
- Update the
/root/keystonerc_admin
file with the new password:
export OS_USERNAME=admin
export OS_TENANT_NAME=admin
export OS_PASSWORD=ADMIN_PASSWORD
export OS_AUTH_URL=http://KEYSTONE_IP:5000/v2.0/
export PS1='[\u@\h \W(keystone_admin)]\$ '
Replace KEYSTONE_IP with the IP address or host name of the server hosting the Identity service.
- Load the environment variables and make sure the
admin
user has theadmin
role in theservices
tenant:#
source keystonerc_admin
~(keystone_admin)]#
keystone user-role-add --user admin --tenant services --role admin
~(keystone_admin)]#
keystone user-role-list --user admin --tenant services
+----------------------------------+-------+----------------------------------+----------------------------------+
| id                               | name  | user_id                          | tenant_id                        |
+----------------------------------+-------+----------------------------------+----------------------------------+
| 4501ce8328324ef5bf1ed93ceb5494e6 | admin | 4db867e819ad40e4bf79681bae269084 | 70cd02c84f86471b8dd934db46fb484f |
+----------------------------------+-------+----------------------------------+----------------------------------+
15.2. Install the Database-as-a-Service Packages
- openstack-trove-api
- Provides the main OpenStack Database-as-a-Service API.
- openstack-trove-conductor
- Provides the OpenStack Database-as-a-Service conductor service.
- openstack-trove-guestagent
- Provides the OpenStack Database-as-a-Service guest agent service.
- openstack-trove-taskmanager
- Provides the OpenStack Database-as-a-Service task manager service.
- openstack-trove-images
- Provides the OpenStack Database-as-a-Service image creation tool.
- python-trove
- Provides the OpenStack Database-as-a-Service Python library.
- python-troveclient
- Provides a client for the Database-as-a-Service API.
#
yum install openstack-trove\*
15.3. Configure Database-as-a-Service
- Create a keystone user and add role for the Database-as-a-Service:
[root@rhosp-trove ~(keystone_admin)]#
keystone user-create --name trove --pass trove --email trove@localhost --tenant services
+----------+----------------------------------+
| Property | Value                            |
+----------+----------------------------------+
| email    | trove@localhost                  |
| enabled  | True                             |
| id       | eb4b3ea5808247dc926406220b8b271e |
| name     | trove                            |
| tenantId | 70cd02c84f86471b8dd934db46fb484f |
| username | trove                            |
+----------+----------------------------------+
[root@rhosp-trove ~(keystone_admin)]#
keystone user-role-add --user trove --tenant services --role admin
[root@rhosp-trove ~(keystone_admin)]#
keystone user-role-list --user trove --tenant services
+----------------------------------+----------+----------------------------------+----------------------------------+
| id                               | name     | user_id                          | tenant_id                        |
+----------------------------------+----------+----------------------------------+----------------------------------+
| 9fe2ff9ee4384b1894a90878d3e92bab | _member_ | eb4b3ea5808247dc926406220b8b271e | 70cd02c84f86471b8dd934db46fb484f |
| 4501ce8328324ef5bf1ed93ceb5494e6 | admin    | eb4b3ea5808247dc926406220b8b271e | 70cd02c84f86471b8dd934db46fb484f |
+----------------------------------+----------+----------------------------------+----------------------------------+
- Optionally, set up verbose debug information in all configuration files:
[root@rhosp-trove ~(keystone_admin)]#
for conf_file in {trove,trove-conductor,trove-taskmanager,trove-guestagent}; do
>openstack-config --set /etc/trove/$conf_file.conf DEFAULT verbose True;
>openstack-config --set /etc/trove/$conf_file.conf DEFAULT debug True;
>done
- Create the
api-paste.ini
file (if not present):[root@rhosp-trove ~(keystone_admin)]#
cp /usr/share/trove/trove-dist-paste.ini /etc/trove/api-paste.ini
- Update keystone authtoken in
api-paste.ini
:
[filter:authtoken]
paste.filter_factory = keystoneclient.middleware.auth_token:filter_factory
auth_uri = http://127.0.0.1:35357/
identity_uri = http://127.0.0.1:35357/
admin_password = TROVE_PASSWORD
admin_user = trove
admin_tenant_name = services
[root@rhosp-trove trove(keystone_admin)]#
openstack-config --set trove.conf DEFAULT api_paste_config /etc/trove/api-paste.ini
- Update
trove.conf
with the same information asapi-paste.ini
:[root@rhosp-trove trove(keystone_admin)]#
openstack-config --set trove.conf keystone_authtoken auth_uri http://127.0.0.1:35357/
[root@rhosp-trove trove(keystone_admin)]#
openstack-config --set trove.conf keystone_authtoken identity_uri http://127.0.0.1:35357/
[root@rhosp-trove trove(keystone_admin)]#
openstack-config --set trove.conf keystone_authtoken admin_password TROVE_PASSWORD
[root@rhosp-trove trove(keystone_admin)]#
openstack-config --set trove.conf keystone_authtoken admin_user trove
[root@rhosp-trove trove(keystone_admin)]#
openstack-config --set trove.conf keystone_authtoken admin_tenant_name services
- Set up
nova_proxy
information introve-taskmanager.conf
. This needs to be the actual admin user as the Database-as-a-Service will use this user's credentials to issue nova commands:[root@rhosp-trove trove(keystone_admin)]#
openstack-config --set trove-taskmanager.conf DEFAULT nova_proxy_admin_user admin
[root@rhosp-trove trove(keystone_admin)]#
openstack-config --set trove-taskmanager.conf DEFAULT nova_proxy_admin_password ADMIN_PASSWORD
[root@rhosp-trove trove(keystone_admin)]#
openstack-config --set trove-taskmanager.conf DEFAULT nova_proxy_admin_tenant_name services
- Update the configuration files with RabbitMQ host information:
[root@rhosp-trove trove(keystone_admin)]#
cat /etc/rabbitmq/rabbitmq.config
% This file managed by Puppet
% Template Path: rabbitmq/templates/rabbitmq.config
[
{rabbit, [
{default_user, <<"guest">>},
{default_pass, <<"RABBITMQ_GUEST_PASSWORD">>}
]},
[root@rhosp-trove trove(keystone_admin)]#
for conf_file in trove.conf trove-taskmanager.conf trove-conductor.conf ; do
>openstack-config --set /etc/trove/$conf_file DEFAULT rabbit_host 127.0.0.1;
>openstack-config --set /etc/trove/$conf_file DEFAULT rabbit_password RABBITMQ_GUEST_PASSWORD;
>done
- Add service URLs to all the configuration files:
[root@rhosp-trove trove(keystone_admin)]#
for conf_file in trove.conf trove-taskmanager.conf trove-conductor.conf ; do
>openstack-config --set /etc/trove/$conf_file DEFAULT trove_auth_url http://127.0.0.1:5000/v2.0
>openstack-config --set /etc/trove/$conf_file DEFAULT nova_compute_url http://127.0.0.1:8774/v2
>openstack-config --set /etc/trove/$conf_file DEFAULT cinder_url http://127.0.0.1:8776/v1
>openstack-config --set /etc/trove/$conf_file DEFAULT swift_url http://127.0.0.1:8080/v1/AUTH_
>openstack-config --set /etc/trove/$conf_file DEFAULT sql_connection mysql://trove:trove@127.0.0.1/trove
>openstack-config --set /etc/trove/$conf_file DEFAULT notifier_queue_hostname 127.0.0.1
>done
Note that the commands above add a MySQL connection that does not work yet; those permissions are added next. - Update the task manager configuration with cloud-init information:
[root@rhosp-trove trove(keystone_admin)]#
openstack-config --set /etc/trove/trove-taskmanager.conf DEFAULT cloud_init_location /etc/trove/cloudinit
[root@rhosp-trove trove(keystone_admin)]#
openstack-config --set /etc/trove/trove-taskmanager.conf DEFAULT taskmanager_manager trove.taskmanager.manager.Manager
[root@rhosp-trove trove(keystone_admin)]#
mkdir /etc/trove/cloudinit
- Update
trove.conf
with the default datastore (database type), and set the name of the OpenStack Networking network to which instances will be attached. In this case, that network was namedprivate
:[root@rhosp-trove trove(keystone_admin)]#
openstack-config --set /etc/trove/trove.conf DEFAULT default_datastore mysql
[root@rhosp-trove trove(keystone_admin)]#
openstack-config --set /etc/trove/trove.conf DEFAULT add_addresses True
[root@rhosp-trove trove(keystone_admin)]#
openstack-config --set /etc/trove/trove.conf DEFAULT network_label_regex ^private$
- Create the Database-as-a-Service database and add permissions for the
trove
user:[root@rhosp-trove trove(keystone_admin)]#
mysql -u root
MariaDB [(none)]>create database trove;
Query OK, 1 row affected (0.00 sec)
MariaDB [(none)]> grant all on trove.* to trove@'localhost' identified by 'TROVE_PASSWORD';
Query OK, 0 rows affected (0.00 sec)
MariaDB [(none)]> grant all on trove.* to trove@'%' identified by 'TROVE_PASSWORD';
Query OK, 0 rows affected (0.00 sec)
- Populate the new database and create the initial datastore:
[root@rhosp-trove trove(keystone_admin)]#
trove-manage db_sync
[root@rhosp-trove trove(keystone_admin)]#
trove-manage datastore_update mysql ''
- Create the cloud-init file that will be used with an image.
Note
When an instance is created by the Database-as-a-Service, it will use whateverimage_id
you have set in the database to build the instance. Additionally, based on the datastore specified, it will also now look in/etc/trove/cloudinit/
for a.cloudinit
file to attach as user data. For example, if you choosemysql
as the datastore for a new instance, nova will look for amysql.cloudinit
file in/etc/trove/cloudinit/
to attach as a user-data script. This is used to register and install MySQL at build time.Create the/etc/trove/cloudinit/mysql.cloudinit
file with the following content, replacing each occurrence of PASSWORD with a suitable password, RHN_USERNAME, RHN_PASSWORD, and POOL_ID with your Red Hat credentials and subscription pool ID, and host SSH public key with the key for passwordless SSH login:
#!/bin/bash
sed -i'.orig' -e's/without-password/yes/' /etc/ssh/sshd_config
echo "PASSWORD" | passwd --stdin cloud-user
echo "PASSWORD" | passwd --stdin root
systemctl restart sshd
subscription-manager register --username=RHN_USERNAME --password=RHN_PASSWORD
subscription-manager attach --pool POOL_ID
subscription-manager repos --disable=*
subscription-manager repos --enable=rhel-7-server-optional-rpms
subscription-manager repos --enable=rhel-7-server-rpms
subscription-manager repos --enable=rhel-server-rhscl-7-rpms
yum install -y openstack-trove-guestagent mysql55
cat << EOF > /etc/trove/trove-guestagent.conf
rabbit_host = 172.1.0.12
rabbit_password = RABBITMQ_GUEST_PASSWORD
nova_proxy_admin_user = admin
nova_proxy_admin_pass = ADMIN_PASSWORD
nova_proxy_admin_tenant_name = services
trove_auth_url = http://172.1.0.12:35357/v2.0
control_exchange = trove
EOF
echo "host SSH public key" >> /root/.ssh/authorized_keys
echo "host SSH public key" >> /home/cloud-user/.ssh/authorized_keys
systemctl stop trove-guestagent
systemctl enable trove-guestagent
systemctl start trove-guestagent
Note
The above is written as a bash script, which is supported bycloud-init
. This can also be done usingcloud-init
's YAML-style layout. - Upload a cloud image, specified as the parameter of the
--file
option, using glance:[root@rhosp-trove trove(keystone_admin)]#
glance image-create --name rhel7 \
>--file image.qcow2 \
>--disk_format qcow2 \
>--container_format bare \
>--is-public True \
>--owner trove
[root@rhosp-trove trove(keystone_admin)]#
glance image-list
+--------------------------------------+--------+-------------+------------------+-----------+--------+
| ID                                   | Name   | Disk Format | Container Format | Size      | Status |
+--------------------------------------+--------+-------------+------------------+-----------+--------+
| b88fa633-7219-4b80-87fa-300840575f91 | cirros | qcow2       | bare             | 13147648  | active |
| 9bd48cdf-52b4-4463-8ce7-ce81f44205ae | rhel7  | qcow2       | bare             | 435639808 | active |
+--------------------------------------+--------+-------------+------------------+-----------+--------+
- Update the Database-as-a-Service database with a reference to the Red Hat Enterprise Linux 7 image; use the ID from the output of the previous command:
[root@rhosp-trove trove(keystone_admin)]#
trove-manage --config-file=/etc/trove/trove.conf datastore_version_update \
>mysql mysql-5.5 mysql 9bd48cdf-52b4-4463-8ce7-ce81f44205ae mysql55 1
Note
The syntax is:trove-manage datastore_version_update datastore version_name manager image_id packages active
- Create the Database-as-a-Service service using keystone to make OpenStack aware of its presence:
[root@rhosp-trove trove(keystone_admin)]#
keystone service-create --name trove \
>--type database \
>--description "OpenStack DBaaS"
+-------------+----------------------------------+
| Property    | Value                            |
+-------------+----------------------------------+
| description | OpenStack DBaaS                  |
| enabled     | True                             |
| id          | 01394626309446beb9bec68c3bc781b4 |
| name        | trove                            |
| type        | database                         |
+-------------+----------------------------------+
- Add URL endpoints for the Database-as-a-Service API; use the ID from the output of the previous command as the parameter of the
--service-id
option:[root@rhosp-trove trove(keystone_admin)]#
keystone endpoint-create \
>--service-id 01394626309446beb9bec68c3bc781b4 \
>--publicurl 'http://127.0.0.1:8779/v1.0/%(tenant_id)s' \
>--internalurl 'http://127.0.0.1:8779/v1.0/%(tenant_id)s' \
>--adminurl 'http://127.0.0.1:8779/v1.0/%(tenant_id)s' \
>--region RegionOne
- Start the three Database-as-a-Service services and enable them to start at boot:
[root@rhosp-trove trove(keystone_admin)]#
systemctl start openstack-trove-{api,taskmanager,conductor}
[root@rhosp-trove trove(keystone_admin)]#
systemctl enable openstack-trove-{api,taskmanager,conductor}
ln -s '/usr/lib/systemd/system/openstack-trove-api.service' '/etc/systemd/system/multi-user.target.wants/openstack-trove-api.service'
ln -s '/usr/lib/systemd/system/openstack-trove-taskmanager.service' '/etc/systemd/system/multi-user.target.wants/openstack-trove-taskmanager.service'
ln -s '/usr/lib/systemd/system/openstack-trove-conductor.service' '/etc/systemd/system/multi-user.target.wants/openstack-trove-conductor.service'
Important
Runsystemctl status openstack-trove-{api,taskmanager,conductor}
to make sure these services have started properly. If they have failed due to an error with/var/log/trove
, you can run these commands to solve the issue:[root@rhosp-trove trove(keystone_admin)]#
chown -R trove:trove /var/log/trove
[root@rhosp-trove trove(keystone_admin)]#
systemctl restart openstack-trove-{api,taskmanager,conductor}
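As a final check that the API is reachable through the new endpoint, you can list the available datastores with the trove client (run with the keystonerc_admin environment loaded); the mysql datastore created earlier should appear:
[root@rhosp-trove trove(keystone_admin)]# trove datastore-list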
Appendix A. Revision History
Revision 8.0.0-1    Tue 19 Apr 2016
Legal Notice
- "Copyright © 2013, 2014, 2015 OpenStack FoundationLicensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.Except where otherwise noted, this document is licensed under Creative Commons Attribution ShareAlike 3.0 License http://creativecommons.org/licenses/by-sa/3.0/legalcode
1801 Varsity Drive
Raleigh, NC 27606-2072 USA
Phone: +1 919 754 3700
Phone: 888 733 4281
Fax: +1 919 754 3701