Chapter 8. Integration Points
This chapter explores the specific points for integrating with the director. It examines specific OpenStack components and their relationship to director and Overcloud integration. This section is not an exhaustive description of all OpenStack integration, but it should give you enough information to start integrating hardware and software with Red Hat OpenStack Platform.
8.1. Bare Metal Provisioning (Ironic)
The OpenStack Bare Metal Provisioning (Ironic) component is used within the director to control the power state of the nodes. The director uses a set of back-end drivers to interface with specific bare metal power controllers. These drivers are the key to enabling hardware- and vendor-specific extensions and capabilities. The most common driver is the IPMI driver (pxe_ipmitool), which controls the power state for any server that supports the Intelligent Platform Management Interface (IPMI).
Integration with Ironic starts in the upstream OpenStack community. Ironic drivers accepted upstream are automatically included in the core Red Hat OpenStack Platform product and the director by default. However, they might not be supported, depending on certification requirements.
Hardware drivers must undergo continuous integration testing to ensure their continued functionality. For information on third party driver testing and suitability, please see the OpenStack community page on Ironic Testing.
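For a sense of what an Ironic driver looks like, the following is a minimal sketch of how a classic Ironic driver composes hardware interfaces. The module paths follow the upstream ironic/drivers layout, but the class name is hypothetical and the exact base classes and interface modules vary between Ironic releases, so treat this as illustrative rather than a definitive implementation:

    # A hypothetical vendor driver combining PXE deployment with IPMI
    # power control. Module paths follow the upstream ironic/drivers layout.
    from ironic.drivers import base
    from ironic.drivers.modules import ipmitool
    from ironic.drivers.modules import pxe

    class PXEExampleVendorDriver(base.BaseDriver):
        """PXE deploy with IPMI power management (illustrative only)."""
        def __init__(self):
            self.power = ipmitool.IPMIPower()  # controls node power state
            self.deploy = pxe.PXEDeploy()      # handles PXE-based deployment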
Upstream Repositories:
- OpenStack: https://github.com/openstack/ironic
Upstream Blueprints:
- Launchpad: http://launchpad.net/ironic
Puppet Module:
- OpenStack: https://github.com/openstack/puppet-ironic
Bugzilla components:
- openstack-ironic
- python-ironicclient
- python-ironic-oscplugin
- openstack-ironic-discoverd
- openstack-puppet-modules
- openstack-tripleo-heat-templates
Integration Notes:
- The upstream project contains drivers in the ironic/drivers directory.
- The director performs a bulk registration of nodes defined in a JSON file. The os-cloud-config tool (https://github.com/openstack/os-cloud-config/) parses this file to determine the node registration details and performs the registration. This means the os-cloud-config tool, specifically the nodes.py file, requires support for your driver. A sample node definition file is shown at the end of this section.
- The director is automatically configured to use Ironic, which means the Puppet configuration requires little to no modification. However, if your driver is included with Ironic, you must still enable it in the /etc/ironic/ironic.conf file: edit this file and add your driver to the enabled_drivers parameter. For example:

      enabled_drivers=pxe_ipmitool,pxe_ssh,pxe_drac

  This allows Ironic to use the specified drivers from the drivers directory.
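The node definition file mentioned above typically follows the instackenv.json format. The following is a minimal sketch; the pm_* values and hardware details are placeholders, and the exact fields your driver requires depend on its support in os-cloud-config:

    {
      "nodes": [
        {
          "pm_type": "pxe_ipmitool",
          "pm_addr": "192.168.24.10",
          "pm_user": "admin",
          "pm_password": "password",
          "mac": ["52:54:00:aa:bb:cc"],
          "cpu": "4",
          "memory": "8192",
          "disk": "40",
          "arch": "x86_64"
        }
      ]
    }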
8.2. Networking (Neutron)
OpenStack Networking (Neutron) provides the ability to create a network architecture within your cloud environment. The project provides several integration points for Software Defined Networking (SDN) vendors. These integration points usually fall into two categories: plugins and agents.
A plugin allows extension and customization of pre-existing Neutron functions. Vendors can write plugins to ensure interoperability between Neutron and certified software and hardware. Most vendors should aim to develop a driver for Neutron’s Modular Layer 2 (ml2) plugin, which provides a modular backend for integrating your own drivers.
An agent provides a specific network function. The main Neutron server (and its plugins) communicate with Neutron agents. Existing examples include agents for DHCP, Layer 3 support, and bridging support.
For both plugins and agents, you can either:
- Include them for distribution as part of the OpenStack Platform solution, or
- Add them to the Overcloud images after OpenStack Platform’s distribution.
It is recommended to analyze the functionality of existing plugins and agents so you can determine how to integrate your own certified hardware and software. In particular, aim to first develop a driver as part of the ml2 plugin.
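For example, a minimal ml2 mechanism driver sketch might look like the following. The MechanismDriver base class and its precommit/postcommit hooks come from the upstream ml2 driver API, but the vendor-specific behavior here is hypothetical:

    # A hypothetical ml2 mechanism driver that would sync ports to a
    # vendor controller. Only a subset of the driver API hooks is shown.
    from neutron.plugins.ml2 import driver_api as api

    class ExampleMechanismDriver(api.MechanismDriver):
        def initialize(self):
            # Called once when the Neutron server starts; set up any
            # connection to the vendor backend here.
            pass

        def create_port_precommit(self, context):
            # Runs inside the database transaction; validate the request.
            pass

        def create_port_postcommit(self, context):
            # Runs after the transaction commits; push the port to the
            # vendor backend.
            pass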
Upstream Repositories:
- OpenStack: https://github.com/openstack/neutron
Upstream Blueprints:
- Launchpad: http://launchpad.net/neutron
Puppet Module:
- OpenStack: https://github.com/openstack/puppet-neutron
Bugzilla components:
- openstack-neutron
- python-neutronclient
- openstack-puppet-modules
- openstack-tripleo-heat-templates
Integration Notes:
- The upstream neutron project contains several integration points:
  - The plugins are located in neutron/plugins/
  - The ml2 plugin drivers are located in neutron/plugins/ml2/drivers/
  - The agents are located in neutron/agent/
- Since the OpenStack Liberty release, many of the vendor-specific ml2 plugins have been moved into their own repositories beginning with networking-. For example, the Cisco-specific plugins are located in https://github.com/openstack/networking-cisco.
- The puppet-neutron repository also contains separate directories for configuring these integration points:
  - The plugin configuration is located in manifests/plugins/
  - The ml2 plugin driver configuration is located in manifests/plugins/ml2/
  - The agent configuration is located in manifests/agents/
- The puppet-neutron repository contains numerous additional libraries for configuration functions. For example, the neutron_plugin_ml2 library adds a function to add attributes to the ml2 plugin configuration file, as shown in the example below.
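For example, a Puppet manifest might use the neutron_plugin_ml2 resource type to set attributes in the ml2 configuration file. The section and option names for the vendor driver shown here are hypothetical:

    # Enable a hypothetical vendor mechanism driver alongside Open vSwitch.
    neutron_plugin_ml2 { 'ml2/mechanism_drivers':
      value => 'openvswitch,example_vendor',
    }

    # Set a hypothetical vendor-specific option in the same file.
    neutron_plugin_ml2 { 'ml2_example_vendor/controller_ip':
      value => '192.168.24.50',
    }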
8.3. Block Storage (Cinder)
OpenStack Block Storage (Cinder) provides an API that interacts with block storage devices, which OpenStack uses to create volumes. For example, Cinder provides virtual storage devices for instances. Cinder provides a core set of drivers to support different storage hardware and protocols. For example, some of the core drivers include support for NFS, iSCSI, and Red Hat Ceph Storage. Vendors can include drivers to support additional certified hardware.
Vendors have two main options with the drivers and configuration they develop:
- Include them for distribution as part of the OpenStack Platform solution, or
- Add them to the Overcloud images after OpenStack Platform’s distribution.
It is recommended to analyze the functionality of existing drivers so you can determine how to integrate your own certified hardware and software.
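As a starting point, the following is a minimal sketch of a Cinder volume driver, assuming the upstream cinder.volume.driver.VolumeDriver base class. Only a small subset of the driver interface is shown, and the backend logic is hypothetical:

    # A hypothetical volume driver for an example storage backend.
    from cinder.volume import driver

    class ExampleVolumeDriver(driver.VolumeDriver):
        def create_volume(self, volume):
            # Allocate a volume of volume['size'] GB on the backend.
            pass

        def delete_volume(self, volume):
            # Remove the backing storage for the volume.
            pass

        def initialize_connection(self, volume, connector):
            # Return connection details (for example, iSCSI target
            # information) that let the host attach the volume.
            pass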
Upstream Repositories:
- OpenStack: https://github.com/openstack/cinder
Upstream Blueprints:
- Launchpad: http://launchpad.net/cinder
Puppet Module:
- OpenStack: https://github.com/openstack/puppet-cinder
Bugzilla components:
- openstack-cinder
- python-cinderclient
- openstack-puppet-modules
- openstack-tripleo-heat-templates
Integration Notes:
- The upstream cinder repository contains the drivers in cinder/volume/drivers/.
- The puppet-cinder repository contains two main directories for driver configuration:
  - The manifests/backend directory contains a set of defined types that configure the drivers.
  - The manifests/volume directory contains a set of classes to configure a default block storage device.
- The puppet-cinder repository contains a library called cinder_config to add attributes to the Cinder configuration files, as shown in the example below.
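For example, a manifest might use the cinder_config resource type to set backend options in cinder.conf. The backend section name and driver path used here are hypothetical:

    # Point a hypothetical backend section at an example vendor driver.
    cinder_config { 'example_backend/volume_driver':
      value => 'cinder.volume.drivers.example.ExampleVolumeDriver',
    }
    cinder_config { 'example_backend/volume_backend_name':
      value => 'example_backend',
    }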
8.4. Image Storage (Glance)
OpenStack Image Storage (Glance) provides an API that interacts with storage types to provide storage for images. Glance provides a core set of drivers to support different storage hardware and protocols. For example, the core drivers include support for file, OpenStack Object Storage (Swift), OpenStack Block Storage (Cinder), and Red Hat Ceph Storage. Vendors can include drivers to support additional certified hardware.
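For example, enabling the Ceph (rbd) driver in glance-api.conf might look like the following snippet. The option names reflect the glance_store configuration section in recent releases, and the pool name is an assumption:

    [glance_store]
    stores = file,http,rbd
    default_store = rbd
    # Assumed Ceph pool for images
    rbd_store_pool = images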
Upstream Repositories:
- OpenStack: https://github.com/openstack/glance
- GitHub: https://github.com/openstack/glance_store
Upstream Blueprints:
- Launchpad: http://launchpad.net/glance
Puppet Module:
- OpenStack: https://github.com/openstack/puppet-glance
Bugzilla components:
- openstack-glance
- python-glanceclient
- openstack-puppet-modules
- openstack-tripleo-heat-templates
Integration Notes:
- Adding a vendor-specific driver is usually not necessary because Glance can use Cinder, which contains its own integration points, to manage image storage.
- The upstream glance_store repository contains the drivers in glance_store/_drivers.
- The puppet-glance repository contains the driver configuration in the manifests/backend directory.
- The puppet-glance repository contains a library called glance_api_config to add attributes to the Glance configuration files, as shown in the example below.
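For example, a manifest might use the glance_api_config resource type to set options in glance-api.conf; the value shown is illustrative:

    # Make Ceph (rbd) the default image store in glance-api.conf.
    glance_api_config { 'glance_store/default_store':
      value => 'rbd',
    }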
8.5. OpenShift-on-OpenStack
OpenStack Platform aims to support OpenShift-on-OpenStack deployments. Although the partner integration for OpenShift is outside the scope of this document, you can find more information at the "Red Hat OpenShift Partners" page.