Chapter 4. Configuration
This chapter explores how to provide additions to the OpenStack Puppet modules. This includes some basic guidelines on developing Puppet modules.
4.1. Learning Puppet Basics
The following sections provide a few basics to help you understand Puppet’s syntax and the structure of a Puppet module.
4.1.1. Examining the Anatomy of a Puppet Module
Before contributing to the OpenStack modules, we need to understand the components that make up a Puppet module.
- Manifests
Manifests are files that contain code to define a set of resources and their attributes. A resource is any configurable part of a system. Examples of resources include packages, services, files, users and groups, SELinux configuration, SSH key authentication, cron jobs, and more. A manifest defines each required resource using a set of key-value pairs for its attributes. For example:
package { 'httpd':
  ensure => installed,
}
This declaration checks if the httpd package is installed. If not, the manifest executes yum and installs it. Manifests are located in the manifests directory of a module. Puppet modules also use a tests directory for test manifests. These manifests are used to test certain classes contained in your official manifests.
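Any configurable part of the system can be modeled with the same key-value pattern. As a minimal sketch, a resource declaration for a user might look like the following (the user name deployer is an assumption for illustration):

```puppet
# Hypothetical user resource; 'deployer' is an illustrative name
user { 'deployer':
  ensure => present,
  shell  => '/bin/bash',
}
```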
- Classes
Classes act as a method for unifying multiple resources in a manifest. For example, if installing and configuring an HTTP server, you might create a class with three resources: one to install the HTTP server packages, one to configure the HTTP server, and one to start or enable the server. You can also refer to classes from other modules, which applies their configuration. For example, if you had to configure an application that also required a web server, you could refer to the previously mentioned class for the HTTP server.
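As a minimal sketch, referring to a class from another module might look like the following (the class names mymodule::httpd and myapp::web are assumptions for illustration):

```puppet
# Hypothetical application class that reuses an HTTP server class
class myapp::web {
  include mymodule::httpd   # applies that class's configuration here
}
```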
- Static Files
Modules can contain static files that Puppet can copy to certain locations on your system. These locations, and other attributes such as permissions, are defined through file resource declarations in manifests.
Static files are located in the files directory of a module.
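For instance, a file resource declaration that copies a static file from the module's files directory might look like the following (the paths and file name are assumptions for illustration):

```puppet
# Copies files/welcome.conf from the module to the target system
file { '/etc/httpd/conf.d/welcome.conf':
  ensure => file,
  source => 'puppet:///modules/mymodule/welcome.conf',
  owner  => 'root',
  group  => 'root',
  mode   => '0644',
}
```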
- Templates
Sometimes configuration files require custom content. In this situation, you would create a template instead of a static file. Like static files, templates are defined in manifests and copied to locations on a system. The difference is that templates allow Ruby expressions to define customized content and variable input. For example, if you wanted to configure httpd with a customizable port, then the template for the configuration file would include:
Listen <%= @httpd_port %>
The httpd_port variable in this case is defined in the manifest that references this template. Templates are located in the templates directory of a module.
- Plugins
Plugins extend beyond the core functionality of Puppet. For example, you can use plugins to define custom facts, custom resources, or new functions. A database administrator might need a resource type for PostgreSQL databases, which could help populate PostgreSQL with a set of new databases after installing PostgreSQL. As a result, the database administrator need only create a Puppet manifest that ensures PostgreSQL installs and the databases are created afterwards.
Plugins are located in the lib directory of a module. This includes a set of subdirectories depending on the plugin type. For example:
- /lib/facter - Location for custom facts.
- /lib/puppet/type - Location for custom resource type definitions, which outline the key-value pairs for attributes.
- /lib/puppet/provider - Location for custom resource providers, which are used in conjunction with resource type definitions to control resources.
- /lib/puppet/parser/functions - Location for custom functions.
4.1.2. Installing a Service
Some software requires package installation. This is one function a Puppet module can perform, using a resource declaration that defines the configuration for a certain package.
For example, to install the httpd package through the mymodule module, you would add the following content to a Puppet manifest in the mymodule module:
class mymodule::httpd {
  package { 'httpd':
    ensure => installed,
  }
}
This code defines a subclass of mymodule called httpd, then defines a package resource declaration for the httpd package. The ensure => installed attribute tells Puppet to check if the package is installed. If it is not installed, Puppet executes yum to install it.
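To apply this class to a system, you would declare it somewhere in your catalog. As a minimal sketch (the node name is an assumption for illustration):

```puppet
# Hypothetical node definition in site.pp that applies the class
node 'webserver.example.com' {
  include mymodule::httpd
}
```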
4.1.3. Starting and Enabling a Service
After installing a package, you might want to start the service. Use another resource declaration called service. This requires editing the manifest with the following content:
class mymodule::httpd {
  package { 'httpd':
    ensure => installed,
  }
  service { 'httpd':
    ensure  => running,
    enable  => true,
    require => Package["httpd"],
  }
}
This achieves the following:
- The ensure => running attribute checks if the service is running. If not, Puppet starts it.
- The enable => true attribute sets the service to run when the system boots.
- The require => Package["httpd"] attribute defines an ordering relationship between one resource declaration and another. In this case, it ensures the httpd service starts after the httpd package installs. This creates a dependency between the service and its respective package.
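The same ordering can also be expressed with Puppet's resource chaining arrows, which some modules use instead of the require attribute. A minimal sketch:

```puppet
# Equivalent dependency using a chaining arrow instead of 'require'
package { 'httpd':
  ensure => installed,
}
-> service { 'httpd':
  ensure => running,
  enable => true,
}
```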
4.1.4. Configuring a Service
The previous two steps show how to install and enable a service through Puppet. However, you might want to provide some custom configuration to the service. In our example, the HTTP server already provides some default configuration in /etc/httpd/conf/httpd.conf, which provides a web host on port 80. This section adds some extra configuration to provide an additional web host on a user-specified port.
For this to occur, you use a template file to store the HTTP configuration file. This is because the user-defined port requires variable input. In the module’s templates directory, you would add a file called myserver.conf.erb with the following contents:
Listen <%= @httpd_port %>
NameVirtualHost *:<%= @httpd_port %>
<VirtualHost *:<%= @httpd_port %>>
  DocumentRoot /var/www/myserver/
  ServerName <%= @fqdn %>
  <Directory "/var/www/myserver/">
    Options All Indexes FollowSymLinks
    Order allow,deny
    Allow from all
  </Directory>
</VirtualHost>
This template follows the standard syntax for Apache web server configuration. The only difference is the inclusion of Ruby escape characters to inject variables from our module. For example, httpd_port, which we use to specify the web server port. Notice also the inclusion of fqdn, a variable that stores the fully qualified domain name of the system. This is known as a system fact. System facts are collected from each system prior to generating each respective system’s Puppet catalog. Puppet uses the facter command to gather these system facts, and you can also run facter to view a list of these facts.
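For the template's @httpd_port reference to resolve, the manifest that uses the template must define that variable; fqdn needs no definition because facter supplies it automatically. A minimal sketch (the port value is an assumption for illustration):

```puppet
# Hypothetical: defining the variable the ERB template reads as @httpd_port
class mymodule::httpd {
  $httpd_port = '8080'
  # ...package, service, and file resources as shown in this section...
}
```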
After saving this file, you would add the resource to the module’s Puppet manifest:
class mymodule::httpd {
  package { 'httpd':
    ensure => installed,
  }
  service { 'httpd':
    ensure  => running,
    enable  => true,
    require => Package["httpd"],
  }
  file { '/etc/httpd/conf.d/myserver.conf':
    ensure  => file,
    content => template("mymodule/myserver.conf.erb"),
    require => Package["httpd"],
    notify  => Service["httpd"],
  }
  file { "/var/www/myserver":
    ensure => "directory",
  }
}
This achieves the following:
- We add a file resource declaration for the server configuration file (/etc/httpd/conf.d/myserver.conf). The content for this file is the myserver.conf.erb template we created earlier. We also check that the httpd package is installed before adding this file.
- We also add a second file resource declaration. This one creates a directory (/var/www/myserver) for our web server.
- We also add a relationship between the configuration file and the httpd service using the notify => Service["httpd"] attribute. This checks our configuration file for any changes. If the file has changed, Puppet restarts the service.
4.2. Obtaining OpenStack Puppet Modules
The Red Hat OpenStack Platform uses the official OpenStack Puppet modules, which you obtain from the openstack group on GitHub. Navigate your browser to https://github.com/openstack and in the filters section search for puppet. All Puppet modules use the prefix puppet-.
For this example, we will examine the official OpenStack Block Storage (cinder) module, which you can clone using the following command:
$ git clone https://github.com/openstack/puppet-cinder.git
This creates a clone of the Puppet module for Cinder.
4.3. Adding Configuration for a Puppet Module
The OpenStack modules primarily aim to configure the core service. Most also contain additional manifests to configure additional services, sometimes known as backends, agents, or plugins. For example, the cinder module contains a directory called backends, which contains configuration options for different storage devices including NFS, iSCSI, Red Hat Ceph Storage, and others.
For example, the manifests/backends/nfs.pp file contains the following configuration:
define cinder::backend::nfs (
  $volume_backend_name  = $name,
  $nfs_servers          = [],
  $nfs_mount_options    = undef,
  $nfs_disk_util        = undef,
  $nfs_sparsed_volumes  = undef,
  $nfs_mount_point_base = undef,
  $nfs_shares_config    = '/etc/cinder/shares.conf',
  $nfs_used_ratio       = '0.95',
  $nfs_oversub_ratio    = '1.0',
  $extra_options        = {},
) {
  file { $nfs_shares_config:
    content => join($nfs_servers, "\n"),
    require => Package['cinder'],
    notify  => Service['cinder-volume'],
  }

  cinder_config {
    "${name}/volume_backend_name":  value => $volume_backend_name;
    "${name}/volume_driver":        value => 'cinder.volume.drivers.nfs.NfsDriver';
    "${name}/nfs_shares_config":    value => $nfs_shares_config;
    "${name}/nfs_mount_options":    value => $nfs_mount_options;
    "${name}/nfs_disk_util":        value => $nfs_disk_util;
    "${name}/nfs_sparsed_volumes":  value => $nfs_sparsed_volumes;
    "${name}/nfs_mount_point_base": value => $nfs_mount_point_base;
    "${name}/nfs_used_ratio":       value => $nfs_used_ratio;
    "${name}/nfs_oversub_ratio":    value => $nfs_oversub_ratio;
  }

  create_resources('cinder_config', $extra_options)
}
This achieves the following:
- The define statement creates a defined type called cinder::backend::nfs. A defined type is similar to a class; the main difference is that Puppet evaluates a defined type multiple times. For example, you might require multiple NFS backends, and as such the configuration requires a separate evaluation for each NFS share.
- The next few lines define the parameters in this configuration and their default values. The default values are overwritten if the user passes new values to the cinder::backend::nfs defined type.
- The file resource declaration calls for the creation of a file. This file contains a list of our NFS shares, and the name for this file is defined in the parameters ($nfs_shares_config = '/etc/cinder/shares.conf'). Note the additional attributes:
  - The content attribute creates a list using the $nfs_servers parameter.
  - The require attribute ensures that the cinder package is installed.
  - The notify attribute tells the cinder-volume service to restart.
- The cinder_config resource declaration uses a plugin from the lib/puppet/ directory in the module. This plugin adds configuration to the /etc/cinder/cinder.conf file. Each line in this resource adds a configuration option to the relevant section in the cinder.conf file. For example, if the $name parameter is mynfs, then the following attributes:

  "${name}/volume_backend_name": value => $volume_backend_name;
  "${name}/volume_driver":       value => 'cinder.volume.drivers.nfs.NfsDriver';
  "${name}/nfs_shares_config":   value => $nfs_shares_config;

  would save the following to the cinder.conf file:

  [mynfs]
  volume_backend_name=mynfs
  volume_driver=cinder.volume.drivers.nfs.NfsDriver
  nfs_shares_config=/etc/cinder/shares.conf
- The create_resources function converts a hash into a set of resources. In this case, the manifest converts the $extra_options hash into a set of additional configuration options for the backend. This provides a flexible method to add further configuration options not included in the manifest’s core parameters.
This shows the importance of including a manifest to configure your hardware’s OpenStack driver. The manifest provides a simple method for the director to include configuration options relevant to your hardware. This acts as a main integration point for the director to configure your Overcloud to use your hardware.
4.4. Adding Hiera Data to Puppet Configuration
Puppet contains a tool called Hiera, which acts as a key-value system that provides node-specific configuration. These keys and their values are usually stored in files located in /etc/puppet/hieradata. The /etc/puppet/hiera.yaml file defines the order in which Puppet reads the files in the hieradata directory.
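A minimal hiera.yaml might look like the following (the hierarchy entries are assumptions for illustration; the actual file on your system may differ):

```yaml
# Hypothetical hiera.yaml: sources earlier in the hierarchy take precedence
---
:backends:
  - yaml
:hierarchy:
  - "%{::hostname}"
  - common
:yaml:
  :datadir: /etc/puppet/hieradata
```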
When configuring the Overcloud, Puppet uses this data to overwrite the default values for certain Puppet classes. For example, the default NFS mount options for cinder::backend::nfs in puppet-cinder are undefined:
$nfs_mount_options = undef,
However, you can create your own manifest that calls the cinder::backend::nfs defined type and replaces this option with Hiera data:
cinder::backend::nfs { $cinder_nfs_backend:
  nfs_mount_options => hiera('cinder_nfs_mount_options'),
}
This means the nfs_mount_options parameter uses the Hiera data value from the cinder_nfs_mount_options key:
cinder_nfs_mount_options: rsize=8192,wsize=8192
Alternatively, you can use the Hiera data to overwrite the cinder::backend::nfs::nfs_mount_options parameter directly so that it applies to all evaluations of the NFS configuration. For example:
cinder::backend::nfs::nfs_mount_options: rsize=8192,wsize=8192
The above Hiera data overwrites this parameter on each evaluation of cinder::backend::nfs.