16.4. Installer-provisioned post-installation configuration


After successfully deploying an installer-provisioned cluster, consider the following post-installation procedures.

16.4.1. Optional: Configuring NTP for disconnected clusters

OpenShift Container Platform installs the chrony Network Time Protocol (NTP) service on the cluster nodes. Use the following procedure to configure NTP servers on the control plane nodes and configure worker nodes as NTP clients of the control plane nodes after a successful deployment.

OpenShift Container Platform nodes must agree on a date and time to run properly. Configuring worker nodes to retrieve the date and time from the NTP servers on the control plane nodes enables the installation and operation of clusters that are not connected to a routable network and therefore do not have access to a higher stratum NTP server.

Procedure

  1. Create a Butane config, 99-master-chrony-conf-override.bu, including the contents of the chrony.conf file for the control plane nodes.

    Note

    See "Creating machine configs with Butane" for information about Butane.

    Butane config example

    variant: openshift
    version: 4.12.0
    metadata:
      name: 99-master-chrony-conf-override
      labels:
        machineconfiguration.openshift.io/role: master
    storage:
      files:
        - path: /etc/chrony.conf
          mode: 0644
          overwrite: true
          contents:
            inline: |
              # Use public servers from the pool.ntp.org project.
              # Please consider joining the pool (https://www.pool.ntp.org/join.html).
    
              # The Machine Config Operator manages this file
              server openshift-master-0.<cluster-name>.<domain> iburst 1
              server openshift-master-1.<cluster-name>.<domain> iburst
              server openshift-master-2.<cluster-name>.<domain> iburst
    
              stratumweight 0
              driftfile /var/lib/chrony/drift
              rtcsync
              makestep 10 3
              bindcmdaddress 127.0.0.1
              bindcmdaddress ::1
              keyfile /etc/chrony.keys
              commandkey 1
              generatecommandkey
              noclientlog
              logchange 0.5
              logdir /var/log/chrony
    
              # Configure the control plane nodes to serve as local NTP servers
              # for all worker nodes, even if they are not in sync with an
              # upstream NTP server.
    
              # Allow NTP client access from the local network.
              allow all
              # Serve time even if not synchronized to a time source.
              local stratum 3 orphan

    1
    You must replace <cluster-name> with the name of the cluster and replace <domain> with the fully qualified domain name.
  2. Use Butane to generate a MachineConfig object file, 99-master-chrony-conf-override.yaml, containing the configuration to be delivered to the control plane nodes:

    $ butane 99-master-chrony-conf-override.bu -o 99-master-chrony-conf-override.yaml
  3. Create a Butane config, 99-worker-chrony-conf-override.bu, including the contents of the chrony.conf file for the worker nodes that references the NTP servers on the control plane nodes.

    Butane config example

    variant: openshift
    version: 4.12.0
    metadata:
      name: 99-worker-chrony-conf-override
      labels:
        machineconfiguration.openshift.io/role: worker
    storage:
      files:
        - path: /etc/chrony.conf
          mode: 0644
          overwrite: true
          contents:
            inline: |
              # The Machine Config Operator manages this file.
              server openshift-master-0.<cluster-name>.<domain> iburst 1
              server openshift-master-1.<cluster-name>.<domain> iburst
              server openshift-master-2.<cluster-name>.<domain> iburst
    
              stratumweight 0
              driftfile /var/lib/chrony/drift
              rtcsync
              makestep 10 3
              bindcmdaddress 127.0.0.1
              bindcmdaddress ::1
              keyfile /etc/chrony.keys
              commandkey 1
              generatecommandkey
              noclientlog
              logchange 0.5
              logdir /var/log/chrony

    1
    You must replace <cluster-name> with the name of the cluster and replace <domain> with the fully qualified domain name.
  4. Use Butane to generate a MachineConfig object file, 99-worker-chrony-conf-override.yaml, containing the configuration to be delivered to the worker nodes:

    $ butane 99-worker-chrony-conf-override.bu -o 99-worker-chrony-conf-override.yaml
  5. Apply the 99-master-chrony-conf-override.yaml policy to the control plane nodes.

    $ oc apply -f 99-master-chrony-conf-override.yaml

    Example output

    machineconfig.machineconfiguration.openshift.io/99-master-chrony-conf-override created

  6. Apply the 99-worker-chrony-conf-override.yaml policy to the worker nodes.

    $ oc apply -f 99-worker-chrony-conf-override.yaml

    Example output

    machineconfig.machineconfiguration.openshift.io/99-worker-chrony-conf-override created

  7. Check the status of the applied NTP settings.

    $ oc describe machineconfigpool
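
As an optional follow-up, you can also confirm on a worker node that chrony now uses the control plane nodes as time sources. This check is a suggestion rather than part of the documented procedure, and it assumes that you replace <node_name> with the name of one of your worker nodes:

    $ oc debug node/<node_name> -- chroot /host chronyc sources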

16.4.2. Enabling a provisioning network after installation

The Assisted Installer and installer-provisioned installation for bare metal clusters provide the ability to deploy a cluster without a provisioning network. This capability is for scenarios such as proof-of-concept clusters or deploying exclusively with Redfish virtual media when each node’s baseboard management controller is routable via the baremetal network.

You can enable a provisioning network after installation using the Cluster Baremetal Operator (CBO).
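
Before you begin, you can optionally check the current value of the provisioningNetwork setting. This is a minimal check, not part of the documented procedure, and it assumes the default resource name provisioning-configuration, which also appears in the example later in this section:

    $ oc get provisioning provisioning-configuration -o jsonpath='{.spec.provisioningNetwork}'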

Prerequisites

  • A dedicated physical network must exist, connected to all worker and control plane nodes.
  • You must isolate the native, untagged physical network.
  • The network cannot have a DHCP server when the provisioningNetwork configuration setting is set to Managed.
  • You can omit the provisioningInterface setting in OpenShift Container Platform 4.10 to use the bootMACAddress configuration setting.

Procedure

  1. If you plan to set the provisioningInterface setting, first identify the provisioning interface name for the cluster nodes, for example, eth0 or eno1.
  2. Enable the Preboot eXecution Environment (PXE) on the provisioning network interface of the cluster nodes.
  3. Retrieve the current state of the provisioning network and save it to a provisioning custom resource (CR) file:

    $ oc get provisioning -o yaml > enable-provisioning-nw.yaml
  4. Modify the provisioning CR file:

    $ vim ~/enable-provisioning-nw.yaml

    Scroll down to the provisioningNetwork configuration setting and change it from Disabled to Managed. Then, add the provisioningIP, provisioningNetworkCIDR, provisioningDHCPRange, provisioningInterface, and watchAllNameSpaces configuration settings after the provisioningNetwork setting. Provide appropriate values for each setting.

    apiVersion: v1
    items:
    - apiVersion: metal3.io/v1alpha1
      kind: Provisioning
      metadata:
        name: provisioning-configuration
      spec:
        provisioningNetwork: 1
        provisioningIP: 2
        provisioningNetworkCIDR: 3
        provisioningDHCPRange: 4
        provisioningInterface: 5
        watchAllNameSpaces: 6
    1
    The provisioningNetwork is one of Managed, Unmanaged, or Disabled. When set to Managed, Metal3 manages the provisioning network and the CBO deploys the Metal3 pod with a configured DHCP server. When set to Unmanaged, the system administrator configures the DHCP server manually.
    2
    The provisioningIP is the static IP address that the DHCP server and ironic use to provision the network. This static IP address must be within the provisioning subnet, and outside of the DHCP range. If you configure this setting, it must have a valid IP address even if the provisioning network is Disabled. The static IP address is bound to the metal3 pod. If the metal3 pod fails and moves to another server, the static IP address also moves to the new server.
    3
    The Classless Inter-Domain Routing (CIDR) address. If you configure this setting, it must have a valid CIDR address even if the provisioning network is Disabled. For example: 192.168.0.1/24.
    4
    The DHCP range. This setting is only applicable to a Managed provisioning network. Omit this configuration setting if the provisioning network is Disabled. For example: 192.168.0.64, 192.168.0.253.
    5
    The NIC name for the provisioning interface on cluster nodes. The provisioningInterface setting is only applicable to Managed and Unmanaged provisioning networks. Omit the provisioningInterface configuration setting if the provisioning network is Disabled. Omit the provisioningInterface configuration setting to use the bootMACAddress configuration setting instead.
    6
    Set this setting to true if you want metal3 to watch namespaces other than the default openshift-machine-api namespace. The default value is false.
  5. Save the changes to the provisioning CR file.
  6. Apply the provisioning CR file to the cluster:

    $ oc apply -f enable-provisioning-nw.yaml
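
When the provisioning network is set to Managed, the CBO redeploys the Metal3 pod with a configured DHCP server. As an optional check, not part of the documented procedure, you can confirm that the metal3 pod is running in the openshift-machine-api namespace:

    $ oc get pods -n openshift-machine-api | grep metal3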

16.4.3. Configuring an external load balancer

You can configure an OpenShift Container Platform cluster to use an external load balancer in place of the default load balancer.

Prerequisites

  • On your load balancer, TCP over ports 6443, 443, and 80 must be available to any users of your system.
  • Load balance the API port, 6443, between each of the control plane nodes.
  • Load balance the application ports, 443 and 80, between all of the compute nodes.
  • On your load balancer, port 22623, which is used to serve Ignition startup configurations to nodes, is not exposed outside of the cluster.
  • Your load balancer must be able to access every machine in your cluster. Methods to allow this access include:

    • Attaching the load balancer to the cluster’s machine subnet.
    • Attaching floating IP addresses to machines that use the load balancer.
Important

External load balancing services and the control plane nodes must run on the same L2 network, and on the same VLAN when using VLANs to route traffic between the load balancing services and the control plane nodes.

Procedure

  1. Enable access to the cluster from your load balancer on ports 6443, 443, and 80.

    As an example, note this HAProxy configuration. An optional syntax check for a configuration like this is shown after this procedure:

    A section of a sample HAProxy configuration

    ...
    listen my-cluster-api-6443
        bind 0.0.0.0:6443
        mode tcp
        balance roundrobin
        server my-cluster-master-2 192.0.2.2:6443 check
        server my-cluster-master-0 192.0.2.3:6443 check
        server my-cluster-master-1 192.0.2.1:6443 check
    listen my-cluster-apps-443
            bind 0.0.0.0:443
            mode tcp
            balance roundrobin
            server my-cluster-worker-0 192.0.2.6:443 check
            server my-cluster-worker-1 192.0.2.5:443 check
            server my-cluster-worker-2 192.0.2.4:443 check
    listen my-cluster-apps-80
            bind 0.0.0.0:80
            mode tcp
            balance roundrobin
            server my-cluster-worker-0 192.0.2.7:80 check
            server my-cluster-worker-1 192.0.2.9:80 check
            server my-cluster-worker-2 192.0.2.8:80 check

  2. Add records to your DNS server for the cluster API and apps over the load balancer. For example:

    <load_balancer_ip_address> api.<cluster_name>.<base_domain>
    <load_balancer_ip_address> apps.<cluster_name>.<base_domain>
  3. From a command line, use curl to verify that the external load balancer and DNS configuration are operational.

    1. Verify that the cluster API is accessible:

      $ curl https://<loadbalancer_ip_address>:6443/version --insecure

      If the configuration is correct, you receive a JSON object in response:

      {
        "major": "1",
        "minor": "11+",
        "gitVersion": "v1.11.0+ad103ed",
        "gitCommit": "ad103ed",
        "gitTreeState": "clean",
        "buildDate": "2019-01-09T06:44:10Z",
        "goVersion": "go1.10.3",
        "compiler": "gc",
        "platform": "linux/amd64"
      }
    2. Verify that cluster applications are accessible:

      Note

      You can also verify application accessibility by opening the OpenShift Container Platform console in a web browser.

      $ curl http://console-openshift-console.apps.<cluster_name>.<base_domain> -I -L --insecure

      If the configuration is correct, you receive an HTTP response:

      HTTP/1.1 302 Found
      content-length: 0
      location: https://console-openshift-console.apps.<cluster-name>.<base domain>/
      cache-control: no-cache

      HTTP/1.1 200 OK
      referrer-policy: strict-origin-when-cross-origin
      set-cookie: csrf-token=39HoZgztDnzjJkq/JuLJMeoKNXlfiVv2YgZc09c3TBOBU4NI6kDXaJH1LdicNhN1UsQWzon4Dor9GWGfopaTEQ==; Path=/; Secure
      x-content-type-options: nosniff
      x-dns-prefetch-control: off
      x-frame-options: DENY
      x-xss-protection: 1; mode=block
      date: Tue, 17 Nov 2020 08:42:10 GMT
      content-type: text/html; charset=utf-8
      set-cookie: 1e2670d92730b515ce3a1bb65da45062=9b714eb87e93cf34853e87a92d6894be; path=/; HttpOnly; Secure; SameSite=None
      cache-control: private
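
If you use HAProxy as your load balancer, as in the example in step 1, you can optionally validate the configuration file syntax before reloading the service. This is a general HAProxy check rather than an OpenShift Container Platform requirement, and it assumes that the configuration is stored at the default path /etc/haproxy/haproxy.cfg:

    $ haproxy -c -f /etc/haproxy/haproxy.cfg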