Chapter 1. Overview


This guide describes how to set up federation in a high availability Red Hat OpenStack Platform director environment, using a Red Hat Single Sign-On (RH-SSO) server for authentication services.

1.1. Operational Goals

By following this guide, your OpenStack deployment’s authentication service will be federated with RH-SSO, and will include the following characteristics:

  • Federation will be based on Security Assertion Markup Language (SAML).
  • The Identity Provider (IdP) is RH-SSO, and will be situated externally to the Red Hat OpenStack Platform deployment.
  • The RH-SSO IdP uses Red Hat Identity Management (IdM) as the federated user backing store. As a result, users and groups are managed in IdM, and RH-SSO will reference the user and group information that is stored in IdM.
  • Your IdM users will be authorized to access OpenStack when they are added to the IdM group: openstack-users.
  • OpenStack Keystone will have a group named federated_users. Members of the federated_users group will have the Member role, which grants them permission to access the project (example commands for creating these groups and the role assignment appear after this list).
  • During the federated authentication process, members of the IdM group openstack-users are mapped into the OpenStack group federated_users. As a result, an IdM user will need to be a member of the openstack-users group in order to access OpenStack; if the user is not a member of the IdM group openstack-users, then authentication will fail.
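
The following is a minimal sketch, assuming an IdM server with the ipa tools available and a Keystone v3 environment, of how these groups and the role assignment might be created. The user jdoe and the project demo-project are hypothetical placeholders that you should replace with your own values:

# On the IdM server: create the authorization group and add a user
# (jdoe is a placeholder)
$ ipa group-add openstack-users --desc="Users authorized for OpenStack"
$ ipa group-add-member openstack-users --users=jdoe

# Against Keystone: create the federated_users group and grant it the
# Member role on a project (demo-project is a placeholder)
$ openstack group create federated_users
$ openstack role add --group federated_users --project demo-project Member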

1.2. Assumptions

This guide makes the following assumptions about your deployment:

  • An RH-SSO server is present, and you either have administrative privileges on the server, or the RH-SSO administrator has created a realm for you and given you administrative privileges on that realm. Since federated IdPs are external by definition, the RH-SSO server is assumed to be external to the Red Hat OpenStack Platform director overcloud.
  • An IdM server is present, and also external to the Red Hat OpenStack Platform director overcloud where users and groups are managed. RH-SSO will use IdM as its User Federation backing store.
  • The OpenStack deployment is based on Red Hat OpenStack Platform director.
  • The Red Hat OpenStack Platform director overcloud installation uses high availability (HA) features.
  • Only the Red Hat OpenStack Platform director overcloud will have federation enabled; the undercloud is not federated.
  • TLS encryption is used for all external communication.
  • All nodes have a Fully Qualified Domain Name (FQDN).
  • HAProxy terminates TLS front-end connections, and servers running behind HAProxy do not use TLS.
  • Pacemaker is used to manage some of the overcloud services, including HAProxy.
  • Red Hat OpenStack Platform director has an overcloud deployed.
  • You are able to SSH into the undercloud and overcloud nodes.
  • The examples described in the Keystone Federation Configuration Guide will be followed.
  • On the undercloud-0 node, you will install the helper files into the home directory of the stack user, and work in the stack user home directory.
  • On the controller-0 node, you will install the helper files into the home directory of the heat-admin user, and work in the heat-admin user home directory.

1.3. Prerequisites

  • The RH-SSO server has been configured and is external to the Red Hat OpenStack Platform director overcloud.
  • The IdM deployment is external to the Red Hat OpenStack Platform director overcloud.
  • Red Hat OpenStack Platform director has an overcloud deployed.

Reinstall mod_auth_mellon

If mod_auth_mellon was previously installed on your controller nodes (perhaps because it was included in a base image used to instantiate the controller nodes), you might need to reinstall it. This is a consequence of the way in which Puppet manages Apache modules: the Puppet Apache class removes any Apache configuration files that are not under Puppet’s control. Note that Apache will not start if these files have been removed, and it will raise errors about unknown Mellon files. At the time of this writing, mod_auth_mellon remains outside of Puppet’s control. See Section 4.14, “Prevent Puppet From Deleting Unmanaged HTTPD Files” for information on how to prevent Puppet from removing Apache configuration files.

To check whether Puppet has removed any of the files belonging to the mod_auth_mellon RPM, you can verify the mod_auth_mellon package, for example:

$ rpm -qV mod_auth_mellon
missing   c /var/lib/config-data/puppet-generated/keystone/etc/httpd/conf.d/auth_mellon.conf
missing   c /var/lib/config-data/puppet-generated/keystone/etc/httpd/conf.modules.d/10-auth_mellon.conf

If RPM indicates these configuration files are absent, then Puppet has removed them. You can then restore the files:

$ sudo dnf reinstall mod_auth_mellon
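
After reinstalling, you can run the same verification query again; if rpm produces no output, the previously missing configuration files have been restored:

$ rpm -qV mod_auth_mellon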

For more information, see BZ#1434875 and BZ#1497718.

1.4. Accessing the OpenStack Nodes

  1. As the root user, SSH into the node hosting the OpenStack deployment. For example:

    $ ssh root@xxx
  2. SSH into the undercloud node:

    $ ssh undercloud-0
  3. Become the stack user:

    $ su - stack
  4. Source the overcloud configuration to enable the required OpenStack environment variables:

    $ source overcloudrc
    Note

    Currently, Red Hat OpenStack Platform director sets up Keystone to use the Keystone v2 API, but you will be using the Keystone v3 API. Later in this guide, you will create an overcloudrc.v3 file. From that point on, you should use the v3 version of the overcloudrc file. See Section 4.8, “Use the Keystone Version 3 API” for more information.

After sourcing overcloudrc, you can issue commands using the openstack command-line tool, which will operate against the overcloud (even though you are currently still logged in to an undercloud node). If you need to directly access one of the overcloud nodes, you can SSH to it as the heat-admin user. For example:

$ ssh heat-admin@controller-0
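
To confirm that commands are reaching the overcloud after sourcing overcloudrc, you can run a simple read-only query; for example:

$ openstack endpoint list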

1.5. Understanding High Availability

Detailed information on high availability can be found in the High Availability Deployment and Usage guide.

  • Red Hat OpenStack Platform director distributes redundant copies of various OpenStack services across the overcloud deployment. These redundant services are deployed on the overcloud controller nodes, with director naming these nodes controller-0, controller-1, controller-2, and so on, depending on how many controller nodes Red Hat OpenStack Platform director has configured.
  • The IP addresses of the controller nodes are private to the overcloud and are not externally visible. This is because the services running on the controller nodes are HAProxy back-end servers. There is one publicly visible IP address for the set of controller nodes; this is HAProxy’s front end. When a request arrives for a service on the public IP address, HAProxy selects a back-end server to service the request.
  • The overcloud is organized as a high availability cluster. Pacemaker manages the cluster, performs health checks, and can fail over to another cluster resource if the resource stops functioning. Pacemaker is also aware of how to correctly start and stop resources.

1.5.1. HAProxy Overview

HAProxy serves a similar role to Pacemaker, as it also performs health checks on the back-end servers and only forwards requests to functioning back-end servers. There is a copy of HAProxy running on all controller nodes.

Although there are N copies of HAProxy running, only one is actually fielding requests at any given time; this active HAProxy instance is managed by Pacemaker. This approach helps prevent conflicts that could arise if multiple HAProxy instances tried to distribute requests across the back-ends at the same time. If Pacemaker detects that HAProxy has failed, it reassigns the front-end IP address to a different HAProxy instance, which then becomes the controlling HAProxy instance. You might think of it as high availability for high availability. The instances of HAProxy that are kept in reserve by Pacemaker are running, but they never see an incoming connection, because Pacemaker has configured the networking so that connections only route to the active HAProxy instance.

1.5.2. Managing Pacemaker Services

Services that are managed by Pacemaker must not be managed by systemctl on a controller node. Use the Pacemaker pcs command instead, for example: sudo pcs resource restart haproxy-clone. You can determine the resource name using the Pacemaker status command: sudo pcs status. This prints a result similar to the following:

Clone Set: haproxy-clone [haproxy]
Started: [ controller-1 ]
Stopped: [ controller-0 ]
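
For example, to restart the HAProxy resource under Pacemaker control and then confirm its state, you might run the following; the resource name haproxy-clone matches the status output above:

$ sudo pcs resource restart haproxy-clone
$ sudo pcs status | grep -A 2 haproxy-clone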

1.5.3. Using the Configuration Script

Many of the steps in this guide require the execution of complicated commands, so to make that task easier (and to allow for repeatability) all the commands have been gathered into a master shell script called configure-federation. Each individual step can be executed by passing the name of the step to configure-federation. The list of possible commands can be seen by using the help option (-h or --help).

Note

You can find the script here: Chapter 6, The configure-federation file

When the configure-federation script executes, it can be useful to know exactly what each command will be after variable substitution occurs. The script supports two options for this, demonstrated in the example after this list:

  • -n is dry-run mode: nothing will be modified, the exact operation will instead be written to stdout.
  • -v is verbose mode: the exact operation will be written to stdout just prior to executing it. This is useful for logging.
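
For example, you can preview the exact command a step would run with -n, or log it while executing with -v. The step name step-name below is a placeholder; use the -h option to list the real step names:

$ ./configure-federation -n step-name
$ ./configure-federation -v step-name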

1.5.4. Site-specific Values

Certain values used in this guide are site-specific. Including those site-specific values directly in this guide could be confusing, and could be a source of errors for anyone attempting to replicate these steps. To address this, any site-specific values referenced in this guide are in the form of a variable. The variable name starts with a dollar sign ($) and is all-caps with a prefix of FED_. For example, the URL used to access the RH-SSO server would be: $FED_RHSSO_URL

Note

You can find the variables file here: Chapter 7, The fed_variables file

Site-specific values can always be identified by searching for $FED_. Site-specific values used by the configure-federation script are gathered into the file fed_variables. You will need to edit this file to suit your deployment.
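
As an illustration, entries in fed_variables take the form of shell variable assignments. The values below are hypothetical, and apart from FED_RHSSO_URL the variable names shown here are illustrative placeholders:

# Hypothetical example values; replace with your own site-specific values
FED_RHSSO_URL="https://rhsso.example.com"
FED_RHSSO_REALM="openstack"
FED_KEYSTONE_HOST="overcloud.example.com"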

1.6. Using a Proxy or SSL terminator

When a server is behind a proxy, the environment it sees is different from what the client sees as the public identity of the server. A back-end server may have a different hostname, listen on a different port, or use a different protocol than what a client sees on the front side of the proxy. For many web apps this is not a major problem. Typically most problems occur when a server has to generate a self-referential URL (perhaps because it will redirect the client to a different URL on the same server). The URL the server generates must match the public address and port as seen by the client.

Authentication protocols are especially sensitive to the host, port, and protocol (for example, HTTP/HTTPS), because they often need to assure that a request was targeted at a specific server, on a specific port, and on a secure transport. Proxies can interfere with this vital information, because by definition a proxy transforms a request received on its public front end before dispatching it to a non-public server in the back end. Similarly, responses from the non-public back-end server sometimes need adjustment so that they appear to come from the public front end of the proxy.

There are various approaches to solving this problem. Because SAML is sensitive to host, port, and protocol information, and because you are configuring SAML behind a high availability proxy (HAProxy), you must deal with these issues or your configuration will likely fail (often in cryptic ways).

1.6.1. Hostname and Port Considerations

The host and port details are used in multiple contexts:

  • The host and port in the URL used by the client.
  • The Host HTTP header inserted into the HTTP request (as derived from the client URL host).
  • The host name of the front-facing proxy the client connects to. This is actually the FQDN of the IP address that the proxy is listening on.
  • The host and port of the back-end server which actually handled the client request.
  • The virtual host and port of the server that actually handled the client request.

It is important to understand how each of these values is used; otherwise, there is a risk that the wrong host and port are used, with the result that the authentication protocols may fail because they cannot validate the parties involved in the transaction.

You can begin by considering the back-end server handling the request, because this is where the host and port are evaluated, and where most of the problems can occur:

The back-end server needs to know:

  • The URL of the request (including host and port).
  • Its own host and port.

Apache supports virtual name hosting, which allows a single server to host multiple domains. For example, a server running on example.com might service requests for both example.com and example-2.com, with these being virtual host names. Virtual hosts in Apache are configured inside a server configuration block, for example:

<VirtualHost *:80>
  ServerName example.com
</VirtualHost>

When Apache receives a request, it gathers the host information from the Host HTTP header, and then tries to match the host to the ServerName in its collection of virtual hosts.

The ServerName directive defines the request scheme, hostname, and port that the server uses to identify itself. The behavior of the ServerName directive is modified by the UseCanonicalName directive. When UseCanonicalName is enabled, Apache will use the hostname and port specified in the ServerName directive to construct the canonical name for the server. This name is used in all self-referential URLs, and for the values of SERVER_NAME and SERVER_PORT in CGIs. If UseCanonicalName is Off, Apache will form self-referential URLs using the hostname and port supplied by the client, if any are supplied.

If no port is specified in the ServerName, then the server uses the port from the incoming request. For optimal reliability and predictability, you should specify an explicit hostname and port using the ServerName directive. If no ServerName is specified, the server attempts to deduce the host by first asking the operating system for the system host name, and if that fails, performing a reverse lookup for an IP address present on the system. Consequently, this will produce the wrong host information when the server is behind a proxy; therefore, the use of the ServerName directive is essential.

The Apache ServerName documentation is clear concerning the need to fully specify the scheme, host, and port in the ServerName directive when the server is behind a proxy. It states:

Sometimes, the server runs behind a device that processes SSL, such as a reverse proxy, load balancer or SSL offload appliance. When this is the case, specify the https:// scheme and the port number to which the clients connect in the ServerName directive to make sure that the server generates the correct self-referential URLs.
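
Putting this together, a minimal sketch of a back-end virtual host behind a TLS-terminating proxy might look like the following; the hostname and port are placeholders for the public address and port that clients actually use:

<VirtualHost *:80>
  # Public identity as seen by clients, not the back-end address
  ServerName https://overcloud.example.com:13000
  # Build self-referential URLs from ServerName rather than the request
  UseCanonicalName On
</VirtualHost>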

When proxies are in effect, they use X-Forwarded-* HTTP headers to allow the entity processing the request to recognize that the request was forwarded, and what the original values were before they were forwarded. The Red Hat OpenStack Platform director HAProxy configuration sets the X-Forwarded-Proto HTTP header based on whether the front connection used SSL/TLS or not, using this configuration:

http-request set-header X-Forwarded-Proto https if { ssl_fc }
http-request set-header X-Forwarded-Proto http if !{ ssl_fc }

However, Apache does not interpret this header, so responsibility falls to another component to process it properly. In the situation where HAProxy terminates SSL prior to the back-end server processing the request, it is irrelevant that the X-Forwarded-Proto HTTP header is set to HTTPS, because Apache does not use the header when an extension module (such as mellon) asks for the protocol scheme of the request. This is why it is essential to have the ServerName directive include the scheme://host:port and that UseCanonicalName is enabled; otherwise, Apache extension modules such as mod_auth_mellon will not function properly behind a proxy.

With regard to web apps hosted by Apache behind a proxy, it is the web app’s (or rather the web app framework’s) responsibility to process the forwarded header. Consequently, apps handle the protocol scheme of a forwarded request differently than Apache extension modules do. Since Dashboard (horizon) is a Django web app, it is Django’s responsibility to process the X-Forwarded-Proto header. This issue arises with the origin query parameter used by horizon during authentication. Horizon adds an origin query parameter to the keystone URL it invokes to perform authentication. The origin parameter is used by horizon to redirect back to the original resource.

The origin parameter generated by horizon may incorrectly specify HTTP as the scheme instead of HTTPS, despite the fact that horizon is running with HTTPS enabled. This occurs because horizon calls the function build_absolute_uri() to form the origin parameter, and it is entirely up to Django to identify the scheme, because build_absolute_uri() is ultimately implemented by Django. You can force Django to process the X-Forwarded-Proto header by using a special configuration directive. This is covered in the Django SECURE_PROXY_SSL_HEADER documentation.

You can enable this setting by uncommenting this line in /var/lib/config-data/puppet-generated/horizon/etc/openstack-dashboard/local_settings:

#SECURE_PROXY_SSL_HEADER = ('HTTP_X_FORWARDED_PROTO', 'https')
Note

Note that Django prefixes the header with "HTTP_", and converts hyphens to underscores.

After uncommenting this line, the origin parameter will correctly use the HTTPS scheme. However, even when the ServerName directive includes the HTTPS scheme, the Django call build_absolute_uri() will not use the HTTPS scheme. So for Django you must use the SECURE_PROXY_SSL_HEADER override; simply specifying the scheme in the ServerName directive will not work. It is important to note that Apache extension modules and web apps process the request scheme of a forwarded request differently, requiring that both the ServerName and X-Forwarded-Proto HTTP header techniques be used.
