
Chapter 14. Installing RHEL Identity Management in a controlled environment


Learn how to install a basic Identity Management (IdM) environment in RHEL for testing prior to production deployment. You install the software by using an Ansible playbook, which makes the installation convenient and repeatable.

Prerequisites

  • A system running Red Hat Enterprise Linux (RHEL) with 16 GB of RAM or more.
  • A RHEL subscription.

Terminology and assumptions

  • root is the account on the managed nodes that is used to perform the actions defined in the Ansible playbooks.
  • controller.idm.example.com is the name of the Ansible control node, that is, the host on which the Ansible playbooks are stored and run.
  • server.idm.example.com, replica.idm.example.com, and client.idm.example.com are the managed nodes on which Identity Management in RHEL is installed and configured.
  • The control node and the managed nodes are running on virtual machines. All these virtual machines are installed on one physical system that runs RHEL.

14.1. Installing RHEL on virtual machines for IdM as a proof of concept

Learn how to install RHEL on your virtual machines so that you can later install an IdM cluster on them using the ansible-freeipa RPM collection.

Prerequisites

  • You have downloaded the latest ISO image of RHEL 8, 9, or 10 from the Red Hat Customer Portal.

Procedure

  1. Use the ISO image to install a new VM for the controller system. For details, see Interactively installing RHEL from installation media. During the installation, pay attention to the following:

    1. If you are using the Virtual Machine Manager (VMM) to install your VMs, name the hosts controller, server, replica, and client, so that you can match the names in the VMM UI to the host names on the CLI.
    2. Reserve at least 4 GB of RAM on the VMs on which you are installing an IdM server and replica. 1 GB is enough for a client system.
    3. Reserve 20 GB for the storage on the IdM server and IdM replica.
    4. Select Install, not Test and Install.
    5. Create a local ansible user on the controller during the installation.
    6. Set an easy-to-remember password for the ansible user, for example 12345.
    7. In the Root password section, enter an easy-to-remember password, for example 1234.
    8. Check the Allow root SSH login with password check box.
  2. After the installation is complete, configure the host name for the controller VM:

    1. On the controller VM CLI, enter nmtui.
    2. Using the Down Arrow key, select Set system hostname.
    3. In the newly opened window, enter controller.idm.example.com.

      The host name must be a fully qualified domain name, such as controller.idm.example.com. For more information, see Host name and DNS requirements for IdM in Installing Identity Management.

    4. Using the Down and Right Arrow keys, select OK.
    5. Confirm the new host name by clicking OK again.
    6. Back in the main nmtui screen, select OK and then Quit by using the Down and Right Arrow keys.
    7. [Optional] To verify the host name, use the hostname utility on the system:

      # hostname
      controller.idm.example.com

      The output of hostname must not be localhost or localhost6.

  3. Repeat the previous steps for all the other VMs: server, replica, and client.
  4. Configure the systems so that they can log in to one another by using host names instead of IP addresses:

    1. On the controller CLI, enter:

      # ip a
      1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
          link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
          inet 127.0.0.1/8 scope host lo
             valid_lft forever preferred_lft forever
          inet6 ::1/128 scope host
             valid_lft forever preferred_lft forever
      2: enp1s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
          link/ether 52:54:00:b7:e6:ac brd ff:ff:ff:ff:ff:ff
          inet 192.168.122.86/24 brd 192.168.122.255 scope global dynamic noprefixroute enp1s0
             valid_lft 3106sec preferred_lft 3106sec
          inet6 fe80::5054:ff:feb7:e6ac/64 scope link noprefixroute
             valid_lft forever preferred_lft forever

      Note the IP address that starts with 192.168, in this example 192.168.122.86.

    2. Do the same on all the other virtual hosts.
    3. On controller, add the host names and IP addresses of all the virtual systems to the /etc/hosts file. The file can look as follows:

      127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
      ::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
      192.168.122.86 controller.idm.example.com controller
      192.168.122.42 server.idm.example.com server
      192.168.122.103 replica.idm.example.com replica
      192.168.122.200 client.idm.example.com client
    4. Update the /etc/hosts file on your physical system in the same way. For a scripted alternative to steps 2 through 4, see the sketch after this procedure.
  5. Ensure that the operating system on controller is up to date:

    1. SSH from your local system to the root account on controller:

      your-physical-system]$ ssh root@controller
    2. Register the controller virtual machine with Red Hat’s subscription management service:

      # subscription-manager register --username <your_user_name> --password <your_password>
    3. Ensure that you are using the latest packages:

      # yum update
    4. Repeat the previous steps for all the other VMs.
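
If you prefer to script steps 2 through 4 rather than stepping through nmtui on every VM, the following minimal sketch achieves the same result non-interactively. The host name and addresses are the example values used above; substitute the values from your environment, and run the hostnamectl command on each VM with that VM's own fully qualified domain name:

    # hostnamectl set-hostname controller.idm.example.com
    # cat >> /etc/hosts << 'EOF'
    > 192.168.122.86 controller.idm.example.com controller
    > 192.168.122.42 server.idm.example.com server
    > 192.168.122.103 replica.idm.example.com replica
    > 192.168.122.200 client.idm.example.com client
    > EOF

Append the same /etc/hosts entries on every VM and on your physical system so that all hosts can resolve one another.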

Verification

  • Test connectivity between your physical system and one of the virtual systems by using its fully qualified domain name (FQDN) or short name. A loop that checks all four systems follows this example:

    your-physical-system]$ ping controller
    PING controller.idm.example.com (192.168.122.86) 56(84) bytes of data.
    64 bytes from controller.idm.example.com (192.168.122.86): icmp_seq=1 ttl=64 time=0.353 ms
    64 bytes from controller.idm.example.com (192.168.122.86): icmp_seq=2 ttl=64 time=0.398 ms
    64 bytes from controller.idm.example.com (192.168.122.86): icmp_seq=3 ttl=64 time=0.453 ms
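
    To check connectivity to all four systems in one pass, a short shell loop such as the following sketch may be convenient. It relies on the /etc/hosts entries that you created earlier; each reachable system reports OK:

    your-physical-system]$ for host in controller server replica client; do
    > ping -c 1 "$host" > /dev/null && echo "$host: OK" || echo "$host: unreachable"
    > done
    controller: OK
    server: OK
    replica: OK
    client: OK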

14.2. Preparing the control node for installing IdM using Ansible playbooks

Learn how to prepare the Ansible control node for installing and configuring IdM on the managed nodes.

Procedure

  1. On the controller system, as the ansible user, create an SSH key pair:

    [ansible@controller]$ ssh-keygen
    Generating public/private rsa key pair.
    Enter file in which to save the key (/home/ansible/.ssh/id_rsa):
    Enter passphrase (empty for no passphrase): [Enter]
    Enter same passphrase again: [Enter]
    ...

    Use the suggested default location for the key file. As this is a PoC environment, leave the passphrase empty.

  2. Create the ~/.ansible.cfg file with the following content:

    [defaults]
    inventory = /home/ansible/inventory
    remote_user = root
    Note

    Settings in the ~/.ansible.cfg file have a higher priority and override settings from the global /etc/ansible/ansible.cfg file.

    With these settings, Ansible performs the following actions:

    • Manages hosts in the specified inventory file.
    • Uses the account set in the remote_user parameter when it establishes SSH connections to managed nodes.
  3. Create an ~/inventory file in INI or YAML format that lists the host names of managed hosts and the values for the required installation variables:

    [ipaserver]
    server.idm.example.com
    
    [ipaserver:vars]
    ipaserver_domain=idm.example.com
    ipaserver_realm=IDM.EXAMPLE.COM
    ipaserver_setup_dns=True
    ipaserver_auto_forwarders=True
    ipaadmin_password=Secret123
    ipadm_password=Secret123
    
    [ipareplicas]
    replica.idm.example.com
    
    [ipareplicas:vars]
    ipareplica_setup_dns=true
    ipareplica_auto_forwarders=true
    ipaadmin_password=Secret123
    ipareplica_servers=server.idm.example.com
    
    [ipaclients]
    client.idm.example.com
    
    [ipaclients:vars]
    ipaadmin_password=Secret123
    ipaclient_domain=idm.example.com
    ipaclient_configure_dns_resolver=true
    ipaclient_dns_servers=192.168.122.1
    ipaclient_servers=server.idm.example.com
  4. Create an install-cluster.yml file with the following content:

    ---
    - name: Play to configure IPA server
      hosts: ipaserver
      become: true
      roles:
      - role: freeipa.ansible_freeipa.ipaserver
        state: present
    
    - name: Play to configure IPA clients with username/password
      hosts: ipaclients
      become: true
      roles:
      - role: freeipa.ansible_freeipa.ipaclient
        state: present
    
    - name: Play to configure IPA replicas
      hosts: ipareplicas
      serial: 1
      become: true
      roles:
      - role: freeipa.ansible_freeipa.ipareplica
        state: present

    The playbook contains three plays:

    • The first one installs the primary IdM server.
    • The second one installs an IdM client.
    • The third one installs an IdM replica. The serial: 1 directive instructs Ansible to deploy only one replica at a time against the same IdM server.
  5. Using root privileges, install the ansible-freeipa collection:

    [root@controller]# dnf install ansible-freeipa
    [...]
    Transaction Summary
    ========================================================================================================================================================================
    Install  11 Packages
    Total download size: 9.8 M
    Installed size: 42 M
    Is this ok [y/N]: y
    [...]
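
Optionally, before you prepare the managed nodes, you can confirm that Ansible picks up the configuration created in this procedure. The following commands are standard Ansible utilities; the exact output depends on your Ansible and ansible-freeipa versions, but it should resemble the following:

    [ansible@controller]$ ansible --version | grep 'config file'
      config file = /home/ansible/.ansible.cfg
    [ansible@controller]$ ansible-inventory --graph
    @all:
      |--@ipaclients:
      |  |--client.idm.example.com
      |--@ipareplicas:
      |  |--replica.idm.example.com
      |--@ipaserver:
      |  |--server.idm.example.com
      |--@ungrouped:
    [ansible@controller]$ ansible-galaxy collection list | grep freeipa
    freeipa.ansible_freeipa [...]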

14.3. Preparing the managed nodes for installing IdM using Ansible playbooks

Learn how to prepare your virtual machines as Ansible managed nodes so that they can be used for the installation of an IdM deployment.

Procedure

  1. Install the control node's SSH public key onto the root account on the server managed node:

    1. Log in to the control node as root, and copy the SSH public key to the root account on server:

      [root@controller]$ ssh-copy-id root@server.idm.example.com
      /usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/home/ansible/.ssh/id_rsa.pub"
      The authenticity of host 'server.idm.example.com (192.168.122.42)' can't be established.
      ECDSA key fingerprint is SHA256:9bZ33GJNODK3zbNhybokN/6Mq7hu3vpBXDrCxe7NAvo.
    2. When prompted, connect by entering yes:

      Are you sure you want to continue connecting (yes/no/[fingerprint])? yes
      /usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
      /usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
    3. When prompted, enter the password of root on server:

      root@server.idm.example.com's password: 1234
      Number of key(s) added: 1
      Now try logging into the machine, with:   "ssh 'root@server.idm.example.com'"
      and check to make sure that only the key(s) you wanted were added.
    4. Verify the SSH connection by remotely executing a command on server:

      [root@controller]$ ssh root@server.idm.example.com whoami
      root
  2. Repeat these steps on the other managed nodes, replica and client. A scripted alternative follows this procedure.
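
As an alternative to repeating the ssh-copy-id dialog by hand, a loop such as this sketch distributes the key to all three managed nodes. You are still prompted once for each node's root password, and ssh-copy-id skips keys that are already installed, so including server again is harmless:

    [root@controller]$ for host in server replica client; do ssh-copy-id root@${host}.idm.example.com; done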

Verification

  1. Verify that you can execute commands from the control node on all managed nodes:

    [root@controller]$ ansible all -m ping
    BECOME password:
    client.idm.example.com | SUCCESS => {
        "ansible_facts": {
            "discovered_interpreter_python": "/usr/bin/python3"
        },
        "changed": false,
        "ping": "pong"
    }
    server.idm.example.com | SUCCESS => {
        "ansible_facts": {
            "discovered_interpreter_python": "/usr/bin/python3"
        },
        "changed": false,
        "ping": "pong"
    }
    replica.idm.example.com | SUCCESS => {
        "ansible_facts": {
            "discovered_interpreter_python": "/usr/bin/python3"
        },
        "changed": false,
        "ping": "pong"
    }

    The hard-coded all group dynamically contains all hosts listed in the inventory file.

  2. Verify that privilege escalation works correctly. Use the Ansible command module to run the whoami utility on all managed nodes:

    [root@controller]$ ansible all -m command -a whoami
    BECOME password: <password>
    client.idm.example.com | CHANGED | rc=0 >>
    root
    server.idm.example.com | CHANGED | rc=0 >>
    root
    replica.idm.example.com | CHANGED | rc=0 >>
    root

    If the command returns root, privilege escalation to root on the managed nodes is configured correctly.

14.4. Installing an IdM cluster in a virtual machine

Learn how to install the IdM primary server, client, and replica on your virtual machines by using a single Ansible command on the control node.

Procedure

  • Install the IdM cluster:

    [root@controller]$ ansible-playbook -i inventory -vv install-cluster.yml
Important

If you encounter recurring errors when installing the server, client, or replica, it is best to wipe the host and perform a clean reinstallation rather than troubleshoot a failed setup.
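
Verification

  • After the playbook completes, you can run a basic smoke test against the new deployment. The following sketch uses standard IdM command-line tools on the server; the admin password is the ipaadmin_password value from your inventory file, Secret123 in this example:

    [root@controller]$ ssh root@server.idm.example.com
    [root@server]# kinit admin
    Password for admin@IDM.EXAMPLE.COM:
    [root@server]# ipa host-find
    [...]

    If the deployment succeeded, the output of ipa host-find includes server.idm.example.com, replica.idm.example.com, and client.idm.example.com, which confirms that all three systems are enrolled in the IdM domain.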
