Chapter 3. Integrate with IdM using novajoin


Novajoin allows you to enroll your nodes with Red Hat Identity Management (IdM) as part of the deployment process. As a result, you can integrate IdM features with your OpenStack deployment, including identities, Kerberos credentials, and access controls.

Note

IdM enrollment through novajoin is currently only available for the undercloud and overcloud nodes. Novajoin integration for overcloud instances is expected to be supported in a later release.

3.1. Install and configure novajoin in the undercloud

3.1.1. Add the undercloud to the CA

Before deploying the overcloud, you must add the undercloud to the Certificate Authority (CA):

  1. On the undercloud node, install the python-novajoin package:

    $ sudo yum install python-novajoin
  2. On the undercloud node, run the novajoin-ipa-setup script, adjusting the values to suit your deployment:

    $ sudo /usr/libexec/novajoin-ipa-setup \
        --principal admin \
        --password <IdM admin password> \
        --server <IdM server hostname> \
        --realm <overcloud cloud domain (in upper case)> \
        --domain <overcloud cloud domain> \
        --hostname <undercloud hostname> \
        --precreate

    In the following section, you will use the resulting One-Time Password (OTP) to enroll the undercloud.
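
    For example, a filled-in invocation for a deployment that uses the lab.local cloud domain might look like the following. The IdM server and undercloud hostnames shown here are illustrative; substitute the values for your environment:

    $ sudo /usr/libexec/novajoin-ipa-setup \
        --principal admin \
        --password <IdM admin password> \
        --server idm-0.lab.local \
        --realm LAB.LOCAL \
        --domain lab.local \
        --hostname undercloud-0.lab.local \
        --precreate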

3.1.2. Add the undercloud to IdM

This procedure registers the undercloud with IdM and configures novajoin. Configure the following settings in undercloud.conf (within the [DEFAULT] section):

  1. The novajoin service is disabled by default. To enable it:

    [DEFAULT]
    enable_novajoin = true
  2. Set a One-Time Password (OTP) to register the undercloud node with IdM:

    ipa_otp = <otp>
  3. Ensure that the overcloud domain name served by the neutron DHCP server matches the IdM domain (your Kerberos realm in lowercase):

    overcloud_domain_name = <domain>
  4. Set the appropriate hostname for the undercloud:

    undercloud_hostname = <undercloud FQDN>
  5. Set IdM as the nameserver for the undercloud:

    undercloud_nameservers = <IdM IP>
  6. For larger environments, you will need to review the novajoin connection timeout values. In undercloud.conf, add a reference to a new file called undercloud-timeout.yaml:

    hieradata_override = /home/stack/undercloud-timeout.yaml

    Add the following options to undercloud-timeout.yaml. You can specify the timeout value in seconds, for example, 5:

    nova::api::vendordata_dynamic_connect_timeout: <timeout value>
    nova::api::vendordata_dynamic_read_timeout: <timeout value>
  7. Save the undercloud.conf file. A consolidated example of these settings is shown after this procedure.
  8. Run the undercloud deployment command to apply the changes to your existing undercloud:

    $ openstack undercloud install
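
Taken together, the settings from steps 1 to 6 produce a [DEFAULT] section in undercloud.conf similar to the following (the placeholders are the same as in the steps above):

    [DEFAULT]
    enable_novajoin = true
    ipa_otp = <otp>
    overcloud_domain_name = <domain>
    undercloud_hostname = <undercloud FQDN>
    undercloud_nameservers = <IdM IP>
    hieradata_override = /home/stack/undercloud-timeout.yaml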

Verification

  1. Check the keytab files for a key entry for the undercloud:

     [root@undercloud-0 ~]# klist -kt
     Keytab name: FILE:/etc/krb5.keytab
     KVNO Timestamp           Principal
     ---- ------------------- ------------------------------------------------------
        1 04/28/2020 12:22:06 host/undercloud-0.redhat.local@REDHAT.LOCAL
        1 04/28/2020 12:22:06 host/undercloud-0.redhat.local@REDHAT.LOCAL
    
    
     [root@undercloud-0 ~]# klist -kt /etc/novajoin/krb5.keytab
     Keytab name: FILE:/etc/novajoin/krb5.keytab
     KVNO Timestamp           Principal
     ---- ------------------- ------------------------------------------------------
        1 04/28/2020 12:22:26 nova/undercloud-0.redhat.local@REDHAT.LOCAL
        1 04/28/2020 12:22:26 nova/undercloud-0.redhat.local@REDHAT.LOCAL
  2. Test the system /etc/krb5.keytab file with the host principal:

     [root@undercloud-0 ~]# kinit -k
     [root@undercloud-0 ~]# klist
     Ticket cache: KEYRING:persistent:0:0
     Default principal: host/undercloud-0.redhat.local@REDHAT.LOCAL
    
     Valid starting       Expires              Service principal
     05/04/2020 10:34:30  05/05/2020 10:34:30  krbtgt/REDHAT.LOCAL@REDHAT.LOCAL
    
     [root@undercloud-0 ~]# kdestroy
     Other credential caches present, use -A to destroy all
  3. Test the novajoin /etc/novajoin/krb5.keytab file with the nova principal:

     [root@undercloud-0 ~]# kinit -kt /etc/novajoin/krb5.keytab 'nova/undercloud-0.redhat.local@REDHAT.LOCAL'
     [root@undercloud-0 ~]# klist
     Ticket cache: KEYRING:persistent:0:0
     Default principal: nova/undercloud-0.redhat.local@REDHAT.LOCAL
    
     Valid starting       Expires              Service principal
     05/04/2020 10:39:14  05/05/2020 10:39:14  krbtgt/REDHAT.LOCAL@REDHAT.LOCAL

3.2. Install and configure novajoin in the overcloud

These sections describe how to register an overcloud node with IdM.

3.2.1. Configure overcloud DNS

For automatic detection of your IdM environment and easier enrollment, consider using IdM as your DNS server:

  1. Connect to your undercloud:

    $ source ~/stackrc
  2. Configure the control plane subnet to use IdM as the DNS name server:

    $ openstack subnet set ctlplane-subnet --dns-nameserver <idm_server_address>
  3. Set the DnsServers parameter in an environment file to use your IdM server:

    parameter_defaults:
      DnsServers: ["<idm_server_address>"]

    This parameter is usually defined in a custom network-environment.yaml file.
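
To confirm the DNS setting from step 2 before you deploy, you can query the subnet again; the -c option limits the output to the dns_nameservers field:

    $ openstack subnet show ctlplane-subnet -c dns_nameservers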

3.2.2. Configure overcloud to use novajoin

  1. To enable IdM integration, create a copy of the /usr/share/openstack-tripleo-heat-templates/environments/predictable-placement/custom-domain.yaml environment file:

    $ cp /usr/share/openstack-tripleo-heat-templates/environments/predictable-placement/custom-domain.yaml \
      /home/stack/templates/custom-domain.yaml
  2. Edit the /home/stack/templates/custom-domain.yaml environment file and set the CloudDomain and CloudName* values to suit your deployment. For example:

    parameter_defaults:
      CloudDomain: lab.local
      CloudName: overcloud.lab.local
      CloudNameInternal: overcloud.internalapi.lab.local
      CloudNameStorage: overcloud.storage.lab.local
      CloudNameStorageManagement: overcloud.storagemgmt.lab.local
      CloudNameCtlplane: overcloud.ctlplane.lab.local
  3. Include the following environment files in the overcloud deployment process:

    • /usr/share/openstack-tripleo-heat-templates/environments/ssl/enable-internal-tls.yaml
    • /usr/share/openstack-tripleo-heat-templates/environments/ssl/tls-everywhere-endpoints-dns.yaml
    • /home/stack/templates/custom-domain.yaml

      For example:

      openstack overcloud deploy \
        --templates \
         -e /usr/share/openstack-tripleo-heat-templates/environments/ssl/enable-internal-tls.yaml \
         -e /usr/share/openstack-tripleo-heat-templates/environments/ssl/tls-everywhere-endpoints-dns.yaml \
         -e /home/stack/templates/custom-domain.yaml

      As a result, the deployed overcloud nodes will be automatically enrolled with IdM.

  4. The previous steps configure TLS only for the internal endpoints. For the external endpoints, you can add TLS in the usual way with the /usr/share/openstack-tripleo-heat-templates/environments/ssl/enable-tls.yaml environment file, which you must copy and modify to add your custom certificate and key (see the sketch after this procedure). Consequently, your openstack overcloud deploy command would be similar to this:

    openstack overcloud deploy \
      --templates \
      -e /usr/share/openstack-tripleo-heat-templates/environments/ssl/enable-internal-tls.yaml \
      -e /usr/share/openstack-tripleo-heat-templates/environments/ssl/tls-everywhere-endpoints-dns.yaml \
      -e /home/stack/templates/custom-domain.yaml \
      -e /home/stack/templates/enable-tls.yaml
  5. Alternatively, you can use IdM to issue your public certificates. In that case, include the /usr/share/openstack-tripleo-heat-templates/environments/services/haproxy-public-tls-certmonger.yaml environment file. For example:

    openstack overcloud deploy \
      --templates \
       -e /usr/share/openstack-tripleo-heat-templates/environments/ssl/enable-internal-tls.yaml \
       -e /usr/share/openstack-tripleo-heat-templates/environments/ssl/tls-everywhere-endpoints-dns.yaml \
       -e /home/stack/templates/custom-domain.yaml \
       -e /usr/share/openstack-tripleo-heat-templates/environments/services/haproxy-public-tls-certmonger.yaml
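
For step 4, the copy of enable-tls.yaml that you modify carries the external certificate and key in the SSLCertificate and SSLKey parameters. A minimal sketch of the relevant parameter_defaults, assuming you paste in your own PEM-encoded material, looks like this:

    parameter_defaults:
      SSLCertificate: |
        -----BEGIN CERTIFICATE-----
        <your certificate content>
        -----END CERTIFICATE-----
      SSLKey: |
        -----BEGIN RSA PRIVATE KEY-----
        <your private key content>
        -----END RSA PRIVATE KEY-----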

3.3. Validate a node in IdM

  1. Locate an overcloud node in IdM and confirm that the host entry includes Keytab: True:

    $ ipa host-show overcloud-node-01
      Host name: overcloud-node-01.lab.local
      Principal name: host/overcloud-node-01.lab.local@LAB.LOCAL
      Principal alias: host/overcloud-node-01.lab.local@LAB.LOCAL
      SSH public key fingerprint: <snip>
      Password: False
      Keytab: True
      Managed by: overcloud-node-01.lab.local
  2. SSH to the node and confirm that SSSD can resolve IdM users. For example, to query an IdM user named susan:

    $ id susan
    uid=1108400007(susan) gid=1108400007(susan) groups=1108400007(susan)
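
    Optionally, you can also confirm that a host keytab was provisioned on the node, mirroring the undercloud check earlier in this chapter:

    $ sudo klist -kt /etc/krb5.keytab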

3.4. Configure DNS entries for Novajoin

If you use the haproxy-public-tls-certmonger.yaml template to issue public certificates for endpoints, you must manually create DNS entries for the VIP endpoints that novajoin uses:

  1. Identify the overcloud networks. These are typically defined in /home/stack/virt/network/network-environment.yaml:

    parameter_defaults:
        ControlPlaneDefaultRoute: 192.168.24.1
        ExternalAllocationPools:
        -   end: 10.0.0.149
            start: 10.0.0.101
        InternalApiAllocationPools:
        -   end: 172.17.1.149
            start: 172.17.1.10
        StorageAllocationPools:
        -   end: 172.17.3.149
            start: 172.17.3.10
        StorageMgmtAllocationPools:
        -   end: 172.17.4.149
            start: 172.17.4.10
  2. Create a list of virtual IP addresses (VIPs) for each overcloud network, for example, in /home/stack/virt/public_vip.yaml:

    parameter_defaults:
        ControlFixedIPs: [{'ip_address':'192.168.24.101'}]
        PublicVirtualFixedIPs: [{'ip_address':'10.0.0.101'}]
        InternalApiVirtualFixedIPs: [{'ip_address':'172.17.1.101'}]
        StorageVirtualFixedIPs: [{'ip_address':'172.17.3.101'}]
        StorageMgmtVirtualFixedIPs: [{'ip_address':'172.17.4.101'}]
        RedisVirtualFixedIPs: [{'ip_address':'172.17.1.102'}]
  3. Add DNS entries to IdM for each of the VIPs. You may also need to create new zones. The following example demonstrates DNS record and zone creation for IdM:

    ipa dnsrecord-add lab.local overcloud --a-rec 10.0.0.101
    ipa dnszone-add ctlplane.lab.local
    ipa dnsrecord-add ctlplane.lab.local overcloud --a-rec 192.168.24.101
    ipa dnszone-add internalapi.lab.local
    ipa dnsrecord-add internalapi.lab.local overcloud --a-rec 172.17.1.101
    ipa dnszone-add storage.lab.local
    ipa dnsrecord-add storage.lab.local overcloud --a-rec 172.17.3.101
    ipa dnszone-add storagemgmt.lab.local
    ipa dnsrecord-add storagemgmt.lab.local overcloud --a-rec 172.17.4.101
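
After you add the records, you can confirm that the VIP names resolve against the IdM DNS server. For example, using dig (the IdM server address is a placeholder for your environment):

    $ dig +short overcloud.lab.local @<idm_server_address>
    $ dig +short overcloud.ctlplane.lab.local @<idm_server_address>
    $ dig +short overcloud.internalapi.lab.local @<idm_server_address>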