
Chapter 4. Configure OpenStack for Federation


4.1. Determine the IP Address and FQDN Settings

The following nodes require an assigned Fully-Qualified Domain Name (FQDN):

  • The host running the Dashboard (horizon).
  • The host running the Identity Service (keystone), referenced in this guide as $FED_KEYSTONE_HOST. Note that more than one host will run a service in a high-availability environment, so the IP address is not a host address but rather the IP address bound to the service.
  • The host running RH-SSO.
  • The host running IdM.

The Red Hat OpenStack Platform director deployment does not configure DNS or assign FQDNs to the nodes; however, the authentication protocols (and TLS) require the use of FQDNs. As a result, you must determine the external public IP address of the overcloud. Note that you need the IP address of the overcloud, which is not the same as the IP address allocated to an individual node in the overcloud, such as controller-0 or controller-1.

You will need the external public IP address of the overcloud because IP addresses are assigned to a high availability cluster, instead of an individual node. Pacemaker and HAProxy work together to provide the appearance of a single IP address; this IP address is entirely distinct from the individual IP address of any given node in the cluster. As a result, the correct way to think about the IP address of an OpenStack service is not in terms of which node that service is running on, but rather to consider the effective IP address that the cluster is advertising for that service (for example, the VIP).
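If you have access to the overcloud, you can see this arrangement directly: director-deployed clusters manage the VIPs as Pacemaker IPaddr2 resources, so listing them on any controller shows which node currently holds each address. A minimal check (the resource name and output shown are illustrative):

$ ssh heat-admin@controller-0
$ sudo pcs status | grep -i 'ip-'
  ip-10.0.0.101  (ocf::heartbeat:IPaddr2):  Started controller-0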

4.1.1. Retrieve the IP address

Because DNS is not being used, you will need to determine the overcloud's public IP address and later assign a name to it yourself (see Section 4.1.2). There are two ways to determine the IP address:

  1. Red Hat OpenStack Platform director uses one common public IP address for all OpenStack services, and separates those services on that single public IP address by port number; if you know the public IP address of one service in the OpenStack cluster, then you know all of them (although that does not also tell you the port number of a service). You can examine the Keystone URL in the overcloudrc file located in the ~stack home directory on the undercloud; a quick extraction sketch appears at the end of this section. For example:

    export OS_AUTH_URL=https://10.0.0.101:13000/v2.0

    This tells you that the public keystone IP address is 10.0.0.101 and that keystone is available on port 13000. By extension, all other OpenStack services are also available on the 10.0.0.101 IP address with their own unique port number.

  2. However, the more accurate way of determining the IP addresses and port numbers is to examine the HAProxy configuration file (/var/lib/config-data/puppet-generated/haproxy/etc/haproxy/haproxy.cfg), which is located on each of the overcloud nodes. The haproxy.cfg file is an identical copy on each of the overcloud controller nodes. This is essential because Pacemaker assigns one controller node the responsibility of running HAProxy for the cluster; in the event of an HAProxy failure, Pacemaker reassigns a different overcloud controller to run HAProxy. No matter which controller node is currently running HAProxy, it must act identically; therefore the haproxy.cfg files must be identical.

    1. To examine the haproxy.cfg file, SSH into one of the cluster’s controller nodes and review /var/lib/config-data/puppet-generated/haproxy/etc/haproxy/haproxy.cfg. As noted above it does not matter which controller node you select.
    2. The haproxy.cfg file is divided into sections, each beginning with a listen statement followed by the name of the service. Immediately inside the service section are one or more bind statements; these are the front-end IP addresses, some of which are public and others internal to the cluster. The server lines are the back-end IP addresses where the service is actually running; there should be one server line for each controller node in the cluster.
    3. To determine the public IP address and port of the service from the multiple bind entries in the section:

      Red Hat OpenStack Platform director lists the public IP address as the first bind entry. In addition, the public IP address should support TLS, so the bind entry will have the ssl keyword. The IP address should also match the IP address set in OS_AUTH_URL in the overcloudrc file. For example, here is a sample keystone_public section from a haproxy.cfg:

      listen keystone_public
        bind 10.0.0.101:13000 transparent ssl crt /etc/pki/tls/private/overcloud_endpoint.pem
        bind 172.17.1.19:5000 transparent
        mode http
        http-request set-header X-Forwarded-Proto https if { ssl_fc }
        http-request set-header X-Forwarded-Proto http if !{ ssl_fc }
        option forwardfor
        redirect scheme https code 301 if { hdr(host) -i 10.0.0.101 } !{ ssl_fc }
        rsprep ^Location:\ http://(.*) Location:\ https://\1
        server controller-0.internalapi.localdomain 172.17.1.13:5000 check fall 5 inter 2000 rise 2 cookie controller-0.internalapi.localdomain
        server controller-1.internalapi.localdomain 172.17.1.22:5000 check fall 5 inter 2000 rise 2 cookie controller-1.internalapi.localdomain
    4. The first bind line has the ssl keyword, and the IP address matches that of OS_AUTH_URL in the overcloudrc file. As a result, you can be confident that keystone is publicly accessed at the IP address 10.0.0.101 on port 13000.
    5. The second bind line is internal to the cluster, and is used by other OpenStack services running in the cluster (note that it does not use TLS because it is not public).
    6. The mode http setting indicates that the protocol in use is HTTP, which allows HAProxy to examine HTTP headers, among other tasks.
    7. The X-Forwarded-Proto lines:

      http-request set-header X-Forwarded-Proto https if { ssl_fc }
      http-request set-header X-Forwarded-Proto http if !{ ssl_fc }

      These settings require particular attention and are covered in more detail in Section 4.1.2, “Set the Host Variables and Name the Host”. They guarantee that the X-Forwarded-Proto HTTP header will be set and seen by the back-end server. In many cases the back-end server needs to know whether the client was using HTTPS; however, HAProxy terminates TLS, so the back-end server sees the connection as non-TLS. The X-Forwarded-Proto HTTP header is a mechanism that allows the back-end server to identify which protocol the client was actually using, instead of which protocol the request arrived on. It is essential that a client not be able to set the X-Forwarded-Proto HTTP header itself, because that would allow the client to maliciously spoof that the protocol was HTTPS. The proxy can either delete the header when it is received from the client, or forcefully set it, thereby mitigating any malicious use by the client. This is why X-Forwarded-Proto is always set to one of https or http.

      The X-Forwarded-For HTTP header is used to track the client, allowing the back-end server to identify the requesting client rather than seeing every request as coming from the proxy. This option causes the X-Forwarded-For HTTP header to be inserted into the request:

      option forwardfor

      See Section 4.1.2, “Set the Host Variables and Name the Host” for more information on forwarded protocols, redirects, and ServerName, among other topics.

    8. The following line will confirm that only HTTPS is used on the public IP address:

      redirect scheme https code 301 if { hdr(host) -i 10.0.0.101 } !{ ssl_fc }

      This setting checks whether the request was received on the public IP address (for example, 10.0.0.101) without HTTPS; if so, it performs a 301 redirect and sets the scheme to HTTPS.

    9. HTTP servers (such as Apache) often generate self-referential URLs for redirect purposes. This redirect location must indicate the correct protocol, but if the server is behind a TLS terminator, it will assume its redirection URL should be HTTP, not HTTPS. This line checks whether a Location header in the response uses the HTTP scheme and, if so, rewrites it to use the HTTPS scheme:

      rsprep ^Location:\ http://(.*) Location:\ https://\1
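As referenced in step 1, if you only need the public host and port, a quick extraction from overcloudrc is possible. This is a minimal sketch that assumes the OS_AUTH_URL format shown earlier (https://host:port/version):

$ source ~/overcloudrc
$ echo $OS_AUTH_URL | awk -F'[/:]' '{print "host:", $4, "port:", $5}'
host: 10.0.0.101 port: 13000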

4.1.2. Set the Host Variables and Name the Host

You will need to determine the IP address and port to use. In this example the IP address is 10.0.0.101 and the port is 13000.

  1. This value can be confirmed in overcloudrc:

    export OS_AUTH_URL=https://10.0.0.101:13000/v2.0
  2. And in the keystone_public section of the haproxy.cfg file:

    bind 10.0.0.101:13000 transparent ssl crt /etc/pki/tls/private/overcloud_endpoint.pem
  3. You must also give the IP address an FQDN. This example uses overcloud.localdomain. Because DNS is not being used, put the IP address in the /etc/hosts file:

    10.0.0.101 overcloud.localdomain # FQDN of the external VIP
    Note

    Red Hat OpenStack Platform director is expected to have already configured the hosts files on the overcloud nodes, but you may need to add the host entry on any external hosts that participate.

  4. The $FED_KEYSTONE_HOST and $FED_KEYSTONE_HTTPS_PORT must be set in the fed_variables file. Using the above example values:

    FED_KEYSTONE_HOST="overcloud.localdomain"
    FED_KEYSTONE_HTTPS_PORT=13000

Because Mellon is running on the Apache server that hosts keystone, the Mellon host:port and keystone host:port values will match.

Note

If you run hostname on one of the controller nodes it will likely be similar to this: controller-0.localdomain, but note that this is its internal cluster name, not its public name. You will instead need to use the public IP address.
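Before continuing, it is worth confirming that the name resolves and that keystone answers over TLS at the FQDN. A quick smoke test using the example values (curl's -k flag skips certificate verification, so use it only for this check):

$ getent hosts overcloud.localdomain
10.0.0.101      overcloud.localdomain
$ curl -sk https://overcloud.localdomain:13000/v3/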

4.2. Install Helper Files on undercloud-0

  1. Copy the configure-federation and fed_variables files into the ~stack home directory on undercloud-0. You will have created these files as part of Section 1.5.3, “Using the Configuration Script”.

4.3. Set your Deployment Variables

  1. The file fed_variables contains variables specific to your federation deployment. These variables are referenced in this guide as well as in the configure-federation helper script. Each site-specific federation variable is prefixed with FED_ and (when used as a variable) will use the $ variable syntax, such as $FED_. Make sure every FED_ variable in fed_variables is provided a value.
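For illustration, a fed_variables excerpt using this chapter's example values might look like the following. The keystone values come from Section 4.1.2; the RH-SSO values are hypothetical placeholders to be replaced with your own:

FED_KEYSTONE_HOST="overcloud.localdomain"
FED_KEYSTONE_HTTPS_PORT=13000
FED_RHSSO_FQDN="rhsso.example.com"          # hypothetical
FED_RHSSO_IP_ADDR="10.0.0.12"               # hypothetical
FED_RHSSO_URL="https://rhsso.example.com"   # adjust for your RH-SSO listener
FED_RHSSO_ADMIN_PASSWORD="..."              # your RH-SSO admin password
FED_RHSSO_REALM="openstack"                 # hypothetical realm name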

4.4. Copy the Helper Files From undercloud-0 to controller-0

  1. Copy the configure-federation and the edited fed_variables from the ~stack home directory on undercloud-0 to the ~heat-admin home directory on controller-0. For example:

    $ scp configure-federation fed_variables heat-admin@controller-0:/home/heat-admin
Note

You can use the configure-federation script to perform the above step: $ ./configure-federation copy-helper-to-controller

4.5. Initialize the Working Environment on the undercloud

  1. On the undercloud node, as the stack user, create the fed_deployment directory. This location will be the file stash. For example:

    $ su - stack
    $ mkdir fed_deployment
Note

You can use the configure-federation script to perform the above step: $ ./configure-federation initialize

4.6. Initialize the Working Environment on controller-0

  1. From the undercloud node, SSH into the controller-0 node as the heat-admin user and create the fed_deployment directory. This location will be the file stash. For example:

    $ ssh heat-admin@controller-0
    $ mkdir fed_deployment
Note

You can use the configure-federation script to perform the above step. From the controller-0 node: $ ./configure-federation initialize

4.7. Install mod_auth_mellon on Each Controller Node

  1. From the undercloud node, SSH into the controller-n node as the heat-admin user and install mod_auth_mellon. For example:

    $ ssh heat-admin@controller-n # replace n with controller number
    $ sudo dnf reinstall mod_auth_mellon
Note

If mod_auth_mellon is already installed on the controller nodes, you may need to reinstall it. See the Reinstall mod_auth_mellon note for more details.

Note

You can use the configure-federation script to perform the above step: $ ./configure-federation install-mod-auth-mellon

4.8. Use the Keystone Version 3 API

Before you can use the openstack command line client to administer the overcloud, you will need to configure certain parameters. Normally this is done by sourcing an rc file within your shell session, which sets the required environment variables. Red Hat OpenStack Platform director will have created an overcloudrc file for this purpose in the home directory of the stack user on the undercloud-0 node. By default, the overcloudrc file is set to use the v2 keystone API; however, federation requires the v3 keystone API. As a result, you need to create a new rc file that uses the v3 keystone API.

  1. For example:

    $ source overcloudrc
    $ NEW_OS_AUTH_URL=`echo $OS_AUTH_URL | sed 's!v2.0!v3!'`
  2. Create the overcloudrc.v3 file using a heredoc. The escaped \$ sequences are deliberately deferred until the new file is sourced, while the unescaped variables are expanded now from the values sourced in step 1:

      $ cat > overcloudrc.v3 <<EOF
      for key in \$( set | sed 's!=.*!!g' | grep -E '^OS_') ; do unset \$key ; done
      export OS_AUTH_URL=$NEW_OS_AUTH_URL
      export OS_USERNAME=$OS_USERNAME
      export OS_PASSWORD=$OS_PASSWORD
      export OS_USER_DOMAIN_NAME=Default
      export OS_PROJECT_DOMAIN_NAME=Default
      export OS_PROJECT_NAME=$OS_TENANT_NAME
      export OS_IDENTITY_API_VERSION=3
      EOF
    Note

    You can use the configure-federation script to perform the above step: $ ./configure-federation create-v3-rcfile

  3. From this point forward, to work with the overcloud you will use the overcloudrc.v3 file:

    $ ssh undercloud-0
    $ su - stack
    $ source overcloudrc.v3
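As a quick sanity check that the v3 rc file works, request a token; openstack token issue should print a token record rather than an authentication error:

$ openstack token issue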

4.9. Add the RH-SSO FQDN to Each Controller

The mellon service will be running on each controller node and configured to connect to the RH-SSO IdP.

  1. If the FQDN of the RH-SSO IdP is not resolvable through DNS then you will have to manually add the FQDN to the /etc/hosts file on all controller nodes (after the Heat Hosts section):

    $ ssh heat-admin@controller-n
    $ sudo vi /etc/hosts
    
    # Add this line (substituting the variables) outside the Heat-managed
    # section, which is marked by the following comments:
    # HEAT_HOSTS_START - Do not edit manually within this section!
    ...
    # HEAT_HOSTS_END
    $FED_RHSSO_IP_ADDR $FED_RHSSO_FQDN
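A minimal sketch for applying this from the undercloud in one pass, assuming three controllers named controller-0 through controller-2 (adjust the names and count for your deployment). Appending with tee -a places the entry at the end of the file, after the HEAT_HOSTS_END marker:

$ source fed_variables
$ for i in 0 1 2; do
    ssh heat-admin@controller-$i \
      "echo '$FED_RHSSO_IP_ADDR $FED_RHSSO_FQDN' | sudo tee -a /etc/hosts"
  done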

4.10. Install and Configure Mellon on the Controller Node

The keycloak-httpd-client-install tool performs many of the steps needed to configure mod_auth_mellon and have it authenticate against the RH-SSO IdP. The tool should be run on the node where mellon will run, which in this case means on the overcloud controllers protecting Keystone.

Note that this is a high availability deployment, and as such there will be multiple overcloud controller nodes, each running identical copies. As a result, the mellon setup will need to be replicated across each controller node. You will approach this by installing and configuring mellon on controller-0, gathering all the configuration files that the keycloak-httpd-client-install tool created into an archive (for example, a tar file), and then letting swift copy the archive over to each controller and unarchive the files there.

  1. Run the RH-SSO client installation:

      $ ssh heat-admin@controller-0
      $ sudo dnf -y install keycloak-httpd-client-install
      $ sudo keycloak-httpd-client-install \
       --client-originate-method registration \
       --mellon-https-port $FED_KEYSTONE_HTTPS_PORT \
       --mellon-hostname $FED_KEYSTONE_HOST  \
       --mellon-root /v3 \
       --keycloak-server-url $FED_RHSSO_URL  \
       --keycloak-admin-password  $FED_RHSSO_ADMIN_PASSWORD \
       --app-name v3 \
       --keycloak-realm $FED_RHSSO_REALM \
       -l "/v3/auth/OS-FEDERATION/websso/mapped" \
       -l "/v3/auth/OS-FEDERATION/identity_providers/rhsso/protocols/mapped/websso" \
       -l "/v3/OS-FEDERATION/identity_providers/rhsso/protocols/mapped/auth"
    Note

    You can use configure-federation script to perform the above step: $ ./configure-federation client-install

  2. After the client installation completes, you should see output similar to this:

      [Step  1] Connect to Keycloak Server
      [Step  2] Create Directories
      [Step  3] Set up template environment
      [Step  4] Set up Service Provider X509 Certificiates
      [Step  5] Build Mellon httpd config file
      [Step  6] Build Mellon SP metadata file
      [Step  7] Query realms from Keycloak server
      [Step  8] Create realm on Keycloak server
      [Step  9] Query realm clients from Keycloak server
      [Step 10] Get new initial access token
      [Step 11] Creating new client using registration service
      [Step 12] Enable saml.force.post.binding
      [Step 13] Add group attribute mapper to client
      [Step 14] Add Redirect URIs to client
      [Step 15] Retrieve IdP metadata from Keycloak server
      [Step 16] Completed Successfully

4.11. Edit the Mellon Configuration

Additional mellon configuration is required for your deployment. Because you will be using a list of groups during the IdP-assertion-to-keystone mapping phase, the keystone mapping engine expects lists to be in a certain format: one value with items separated by a semicolon (;). As a result, you must configure mellon so that when it receives multiple values for an attribute, it combines them into a single semicolon-separated value. This mellon directive addresses that:

MellonMergeEnvVars On ";"
  1. To configure this setting in your deployment:

    $ vi /var/lib/config-data/puppet-generated/keystone/etc/httpd/conf.d/v3_mellon_keycloak_openstack.conf
  2. Locate the <Location /v3> block and add a line to it. For example:

      <Location /v3>
          ...
          MellonMergeEnvVars On ";"
      </Location>
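For illustration, with this directive in place an assertion that carries two group values, openstack-users and developers (hypothetical), would be surfaced to keystone as a single merged environment variable of roughly this shape:

MELLON_groups=openstack-users;developers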

4.12. Create an Archive of the Generated Configuration Files

The mellon configuration needs to be replicated across all controller nodes, so you will create an archive of the files that allows you to install the exact same file contents on each controller node. The archive will be stored in the ~heat-admin/fed_deployment subdirectory.

  1. Create the compressed tar archive:

    $ mkdir fed_deployment
    $ tar -cvzf fed_deployment/rhsso_config.tar.gz \
      --exclude '*.orig' \
      --exclude '*~' \
      /var/lib/config-data/puppet-generated/keystone/etc/httpd/saml2 \
      /var/lib/config-data/puppet-generated/keystone/etc/httpd/conf.d/v3_mellon_keycloak_openstack.conf
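    If you want to confirm what was captured before copying the archive off the node, listing its contents is a quick check:

    $ tar -tzf fed_deployment/rhsso_config.tar.gz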
Note

You can use the configure-federation script to perform the above step: $ ./configure-federation create-sp-archive

4.13. Retrieve the Mellon Configuration Archive

  1. On the undercloud-0 node, fetch the archive you just created and extract the files, as you will need to access some of the data in subsequent steps (for example, the entityID of the RH-SSO IdP):

    $ scp heat-admin@controller-0:/home/heat-admin/fed_deployment/rhsso_config.tar.gz ~/fed_deployment
    $ tar -C fed_deployment -xvf fed_deployment/rhsso_config.tar.gz
Note

You can use the configure-federation script to perform the above step: $ ./configure-federation fetch-sp-archive

4.14. Prevent Puppet From Deleting Unmanaged HTTPD Files

By default, the Puppet Apache module purges any files in the Apache configuration directories that it is not managing. This is a reasonable precaution, as it prevents Apache from operating with any configuration other than the one enforced by Puppet. However, it conflicts with the manual configuration of mellon in the HTTPD configuration directories. When the apache::purge_configs flag is enabled (which it is by default), Puppet deletes files belonging to the mod_auth_mellon RPM when the mod_auth_mellon RPM is installed, and also deletes the configuration files generated by keycloak-httpd-client-install when it is run. Until the mellon files are under Puppet control, you will have to disable the apache::purge_configs flag.

You may also want to check whether the mod_auth_mellon configuration files have already been removed by a previous run of overcloud_deploy; see Reinstall mod_auth_mellon for more information.

Note

Disabling the apache::purge_configs flag opens the controller nodes to vulnerabilities. Do not forget to re-enable it when Puppet adds support for managing mellon.

To override the apache::purge_configs flag, create a Puppet file containing the override and add the override file to the list of Puppet files used when overcloud_deploy.sh is run.

  1. Create the file fed_deployment/puppet_override_apache.yaml and add this content:

      parameter_defaults:
        ControllerExtraConfig:
          apache::purge_configs: false
  2. Add the file near the end of the overcloud_deploy.sh script. It should be the last -e argument. For example:

      -e /home/stack/fed_deployment/puppet_override_apache.yaml \
      --log-file overcloud_deployment_14.log &> overcloud_install.log
Note

You can use the configure-federation script to perform the above step: $ ./configure-federation puppet-override-apache

4.15. Configure Keystone for Federation

This guide uses keystone domains, which require some extra configuration. If enabled, the keystone Puppet module can perform this extra configuration step.

  1. In one of the Puppet YAML files, add the following:

    keystone::using_domain_config: true

Some additional values must be set in /etc/keystone/keystone.conf to enable federation:

  • auth:methods
  • federation:trusted_dashboard
  • federation:sso_callback_template
  • federation:remote_id_attribute

An explanation of these configuration settings and their suggested values:

  • auth:methods - A list of allowed authentication methods. By default the list is: ['external', 'password', 'token', 'oauth1']. You will need to enable SAML using the mapped method, so this value should be: external,password,token,oauth1,mapped.
  • federation:trusted_dashboard - A list of trusted dashboard hosts. Before accepting a Single Sign-On request to return a token, the origin host must be a member of this list. This configuration option may be repeated for multiple values. You must set this in order to use web-based SSO flows. For this deployment the value would be: https://$FED_KEYSTONE_HOST/dashboard/auth/websso/ Note that the host is $FED_KEYSTONE_HOST only because Red Hat OpenStack Platform director co-locates keystone and horizon on the same host. If horizon is running on a different host than keystone, you will need to adjust accordingly.
  • federation:sso_callback_template - The absolute path to an HTML file used as a Single Sign-On callback handler. This page is expected to redirect the user from keystone back to a trusted dashboard host by form-encoding a token in a POST request. Keystone's default value should be sufficient for most deployments: /etc/keystone/sso_callback_template.html
  • federation:remote_id_attribute - The value used to obtain the entity ID of the Identity Provider. For mod_auth_mellon you will use MELLON_IDP. Note that this is set in the mellon configuration file using the MellonIdP IDP directive.
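For reference, these settings correspond to keystone.conf entries of roughly the following shape, using this guide's example values (Puppet writes the actual file; this sketch is only to show where the values land):

    [auth]
    methods = external,password,token,oauth1,mapped

    [federation]
    trusted_dashboard = https://$FED_KEYSTONE_HOST/dashboard/auth/websso/
    sso_callback_template = /etc/keystone/sso_callback_template.html
    remote_id_attribute = MELLON_IDP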

    1. Create the fed_deployment/puppet_override_keystone.yaml file with this content:

      parameter_defaults:
        ControllerExtraConfig:
          keystone::using_domain_config: true
          keystone::config::keystone_config:
            identity/domain_configurations_from_database:
              value: true
            auth/methods:
              value: external,password,token,oauth1,mapped
            federation/trusted_dashboard:
              value: https://$FED_KEYSTONE_HOST/dashboard/auth/websso/
            federation/sso_callback_template:
              value: /etc/keystone/sso_callback_template.html
            federation/remote_id_attribute:
              value: MELLON_IDP
    2. Towards the end of the overcloud_deploy.sh script, add the file you just created. It should be the last -e argument. For example:

      -e /home/stack/fed_deployment/puppet_override_keystone.yaml \
      --log-file overcloud_deployment_14.log &> overcloud_install.log
Note

You can use the configure-federation script to perform the above step: $ ./configure-federation puppet-override-keystone

4.16. Deploy the Mellon Configuration Archive

You will use swift artifacts to install the mellon configuration files on each controller node. For example:

$ source ~/stackrc
$ upload-swift-artifacts -f fed_deployment/rhsso_config.tar.gz
Note

You can use the configure-federation script to perform the above step: $ ./configure-federation deploy-mellon-configuration

4.17. Redeploy the Overcloud

In earlier steps you made changes to the Puppet YAML configuration files and swift artifacts. These changes can now be applied using this command:

$ ./overcloud_deploy.sh
Note

In later steps, other configuration changes will be made to the overcloud controller nodes. Re-running Puppet using the overcloud_deploy.sh script may overwrite some of these changes. You should avoid applying the Puppet configuration from this point forward to avoid losing any manual edits that were made to the configuration files on the overcloud controller nodes.

4.18. Use Proxy Persistence for Keystone on Each Controller

With high availability, any one of the multiple back-end servers can be expected to field a request. Because of the number of redirections used by SAML, and the fact that each of those redirections involves state information, it is vital that the same server processes all the transactions. In addition, a session will be established by mod_auth_mellon. Currently mod_auth_mellon is not capable of sharing its state information across multiple servers, so you must configure HAProxy to always direct requests from a client to the same server each time.

HAProxy can bind a client to the same server using either affinity or persistence. This article on HAProxy Sticky Sessions provides valuable background material.

The difference between the two is that affinity pins a client request to a single server using information from a layer below the application layer, while persistence uses application-layer information to bind a client to a single server (a sticky session). The main advantage of persistence over affinity is that it is much more accurate.

Persistence is implemented through the use of cookies. The HAProxy cookie directive names the cookie that will be used for persistence, along with parameters controlling its use. The HAProxy server directive has a cookie option that sets the value of the cookie, which should be set to the name of the server. If an incoming request does not have a cookie identifying the back-end server, then HAProxy selects a server based on its configured balancing algorithm. HAProxy ensures that the cookie is set to the name of the selected server in the response. If the incoming request has a cookie identifying a back-end server then HAProxy automatically selects that server to handle the request.

  1. To enable persistence in the keystone_public block of the /var/lib/config-data/puppet-generated/haproxy/etc/haproxy/haproxy.cfg configuration file, add this line:

    cookie SERVERID insert indirect nocache

    This setting states that SERVERID will be the name of the persistence cookie.

  2. Next, you must edit each server line and add cookie <server-name> as an additional option. For example:

    server controller-0 cookie controller-0
    server controller-1 cookie controller-1

Note that the other parts of the server directive have been omitted for clarity.
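Putting both changes together, the keystone_public section from the earlier example would look roughly like this (abridged to the persistence-related lines):

listen keystone_public
  bind 10.0.0.101:13000 transparent ssl crt /etc/pki/tls/private/overcloud_endpoint.pem
  ...
  cookie SERVERID insert indirect nocache
  server controller-0.internalapi.localdomain 172.17.1.13:5000 check fall 5 inter 2000 rise 2 cookie controller-0.internalapi.localdomain
  server controller-1.internalapi.localdomain 172.17.1.22:5000 check fall 5 inter 2000 rise 2 cookie controller-1.internalapi.localdomain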

4.19. Create Federated Resources

You might recall from the introduction that you are going to follow the federation example in the Create keystone groups and assign roles section of the keystone federation documentation.

  1. Perform the following steps on the undercloud node as the stack user (after sourcing the overcloudrc.v3 file):

    $ openstack domain create federated_domain
    $ openstack project create  --domain federated_domain federated_project
    $ openstack group create federated_users --domain federated_domain
    $ openstack role add --group federated_users --group-domain federated_domain --domain federated_domain _member_
    $ openstack role add --group federated_users --group-domain federated_domain --project federated_project _member_
Note

You can use the configure-federation script to perform the above step: $ ./configure-federation create-federated-resources

4.20. Create the Identity Provider in OpenStack

The IdP needs to be registered in keystone, which creates a binding between the entityID in the SAML assertion and the name of the IdP in keystone.

You will need to locate the entityID of the RH-SSO IdP. This value is located in the IdP metadata which was obtained when keycloak-httpd-client-install was run. The IdP metadata is stored in the /var/lib/config-data/puppet-generated/keystone/etc/httpd/saml2/v3_keycloak_$FED_RHSSO_REALM_idp_metadata.xml file. In an earlier step you retrieved the mellon configuration archive and extracted it to the fed_deployment work area. As a result, you can find the IdP metadata in fed_deployment/var/lib/config-data/puppet-generated/keystone/etc/httpd/saml2/v3_keycloak_$FED_RHSSO_REALM_idp_metadata.xml. In the IdP metadata file, you will find an <EntityDescriptor> element with an entityID attribute. You need the value of the entityID attribute; for example purposes, this guide assumes it has been stored in the $FED_IDP_ENTITY_ID variable. You can name your IdP rhsso, which is assigned to the variable $FED_OPENSTACK_IDP_NAME. For example:
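One way to pull the entityID out of the metadata with standard tools is sketched below; the path assumes the extracted archive layout described above, and the grep/cut pipeline simply isolates the attribute value:

$ FED_IDP_ENTITY_ID=$(grep -o 'entityID="[^"]*"' \
    fed_deployment/var/lib/config-data/puppet-generated/keystone/etc/httpd/saml2/v3_keycloak_${FED_RHSSO_REALM}_idp_metadata.xml \
    | head -1 | cut -d'"' -f2)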

$ openstack identity provider create --remote-id $FED_IDP_ENTITY_ID $FED_OPENSTACK_IDP_NAME
Note

You can use the configure-federation script to perform the above step: $ ./configure-federation openstack-create-idp

4.21. Create the Mapping File and Upload to Keystone

Keystone performs a mapping to translate the IdP's SAML assertion into a format that keystone can understand. The mapping is performed by keystone's mapping engine and is based on a set of mapping rules that are bound to the IdP.

  1. These are the mapping rules used in this example (as described in the introduction):

    [
        {
            "local": [
                {
                    "user": {
                        "name": "{0}"
                    },
                    "group": {
                        "domain": {
                            "name": "federated_domain"
                        },
                        "name": "federated_users"
                    }
                }
            ],
            "remote": [
                {
                    "type": "MELLON_NAME_ID"
                },
                {
                    "type": "MELLON_groups",
                    "any_one_of": ["openstack-users"]
                }
            ]
        }
    ]

This mapping file contains only one rule. Rules are divided into two parts: local and remote. The mapping engine works by iterating over the list of rules until one matches, and then executing it. A rule is considered a match only if all the conditions in the remote part of the rule match. In this example the remote conditions specify:

  1. The assertion must contain a value called MELLON_NAME_ID.
  2. The assertion must contain a value called MELLON_groups, and at least one of the groups in the group list must be openstack-users.

If the rule matches, then:

  1. The keystone user name will be assigned the value from MELLON_NAME_ID.
  2. The user will be assigned to the keystone group federated_users in the federated_domain domain.

In summary, if the IdP successfully authenticates the user, and the IdP asserts that user belongs to the group openstack-users, then keystone will allow that user to access OpenStack with the privileges bound to the federated_users group in keystone.
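If you want to exercise the rules before uploading them, keystone ships an offline tester, keystone-manage mapping_engine, which you can run wherever the keystone code is available (for example, inside the keystone container). A sketch, where assertion.txt is a hypothetical file mimicking the mellon-provided values:

$ cat > assertion.txt <<EOF
MELLON_NAME_ID: jdoe
MELLON_groups: openstack-users;developers
EOF
$ keystone-manage mapping_engine --rules fed_deployment/mapping_rhsso_saml2.json --input assertion.txt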

4.21.1. Create the mapping

  1. To create the mapping in keystone, create a file containing the mapping rules and then upload it into keystone, giving it a reference name. Create the mapping file in the fed_deployment directory (for example, in fed_deployment/mapping_${FED_OPENSTACK_IDP_NAME}_saml2.json), and assign the name $FED_OPENSTACK_MAPPING_NAME to the mapping rules. For example:

    $ openstack mapping create --rules fed_deployment/mapping_rhsso_saml2.json $FED_OPENSTACK_MAPPING_NAME
Note

You can use the configure-federation script to perform the above procedure as two steps:

$ ./configure-federation create-mapping
$ ./configure-federation openstack-create-mapping
  • create-mapping - creates the mapping file.
  • openstack-create-mapping - performs the upload of the file.

4.22. Create a Keystone Federation Protocol

  1. Keystone uses the Mapped protocol to bind an IdP to a mapping. To establish this binding:

    $ openstack federation protocol create \
    --identity-provider $FED_OPENSTACK_IDP_NAME \
    --mapping $FED_OPENSTACK_MAPPING_NAME \
    mapped
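To confirm the binding, you can list the protocols attached to the IdP; the output should show the mapped protocol associated with your mapping name:

$ openstack federation protocol list --identity-provider $FED_OPENSTACK_IDP_NAME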
Note

You can use the configure-federation script to perform the above step: $ ./configure-federation openstack-create-protocol

4.23. Fully-Qualify the Keystone Settings

  1. On each controller node, edit /var/lib/config-data/puppet-generated/keystone/etc/httpd/conf.d/10-keystone_wsgi_main.conf to confirm that the ServerName directive inside the VirtualHost block includes the HTTPS scheme, the public hostname, and the public port. You must also enable the UseCanonicalName directive. For example:

    <VirtualHost>
      ServerName https://$FED_KEYSTONE_HOST:$FED_KEYSTONE_HTTPS_PORT
      UseCanonicalName On
      ...
    </VirtualHost>
Note

Be sure to substitute the $FED_ variables with the values specific to your deployment.

4.24. Configure Horizon to Use Federation

  1. On each controller node, edit /var/lib/config-data/puppet-generated/horizon/etc/openstack-dashboard/local_settings and make sure the following configuration values are set:

    OPENSTACK_KEYSTONE_URL = "https://$FED_KEYSTONE_HOST:$FED_KEYSTONE_HTTPS_PORT/v3"
    OPENSTACK_KEYSTONE_DEFAULT_ROLE = "_member_"
    WEBSSO_ENABLED = True
    WEBSSO_INITIAL_CHOICE = "mapped"
    WEBSSO_CHOICES = (
        ("mapped", _("RH-SSO")),
        ("credentials", _("Keystone Credentials")),
    )
Note

Be sure to substitute the $FED_ variables with the values specific to your deployment.

4.25. Configure Horizon to Use the X-Forwarded-Proto HTTP Header

  1. On each controller node, edit /var/lib/config-data/puppet-generated/horizon/etc/openstack-dashboard/local_settings and uncomment the line:

    #SECURE_PROXY_SSL_HEADER = ('HTTP_X_FORWARDED_PROTO', 'https')
Note

You must restart the horizon container for configuration changes to take effect.
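On podman-based deployments that restart might look like the following; the container name horizon is an assumption, so verify it first:

$ sudo podman ps --format '{{.Names}}' | grep -i horizon
$ sudo podman restart horizon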
