Chapter 6. Installer-provisioned postinstallation configuration
After successfully deploying an installer-provisioned cluster, consider the following postinstallation procedures.
6.1. Optional: Configuring NTP for disconnected clusters
OpenShift Container Platform installs the chrony Network Time Protocol (NTP) service on the cluster nodes. Use the following procedure to configure NTP servers on the control plane nodes and configure compute nodes as NTP clients of the control plane nodes after a successful deployment.

OpenShift Container Platform nodes must agree on a date and time to run properly. Because compute nodes retrieve the date and time from the NTP servers on the control plane nodes, you can install and operate clusters that are not connected to a routable network and that therefore do not have access to a higher stratum NTP server.
Procedure
Install Butane on your installation host by using the following command:
$ sudo dnf -y install butane
Create a Butane config, 99-master-chrony-conf-override.bu, including the contents of the chrony.conf file for the control plane nodes.
Note: See "Creating machine configs with Butane" for information about Butane.
Butane config example
variant: openshift
version: 4.16.0
metadata:
  name: 99-master-chrony-conf-override
  labels:
    machineconfiguration.openshift.io/role: master
storage:
  files:
  - path: /etc/chrony.conf
    mode: 0644
    overwrite: true
    contents:
      inline: |
        # Use public servers from the pool.ntp.org project.
        # Please consider joining the pool (https://www.pool.ntp.org/join.html).

        # The Machine Config Operator manages this file
        server openshift-master-0.<cluster-name>.<domain> iburst
        server openshift-master-1.<cluster-name>.<domain> iburst
        server openshift-master-2.<cluster-name>.<domain> iburst

        stratumweight 0
        driftfile /var/lib/chrony/drift
        rtcsync
        makestep 10 3
        bindcmdaddress 127.0.0.1
        bindcmdaddress ::1
        keyfile /etc/chrony.keys
        commandkey 1
        generatecommandkey
        noclientlog
        logchange 0.5
        logdir /var/log/chrony

        # Configure the control plane nodes to serve as local NTP servers
        # for all compute nodes, even if they are not in sync with an
        # upstream NTP server.

        # Allow NTP client access from the local network.
        allow all

        # Serve time even if not synchronized to a time source.
        local stratum 3 orphan

You must replace <cluster-name> with the name of the cluster and <domain> with the fully qualified domain name.
Use Butane to generate a MachineConfig object file, 99-master-chrony-conf-override.yaml, containing the configuration to be delivered to the control plane nodes:
$ butane 99-master-chrony-conf-override.bu -o 99-master-chrony-conf-override.yaml
Create a Butane config, 99-worker-chrony-conf-override.bu, including the contents of the chrony.conf file for the compute nodes that references the NTP servers on the control plane nodes.
Butane config example
variant: openshift
version: 4.16.0
metadata:
  name: 99-worker-chrony-conf-override
  labels:
    machineconfiguration.openshift.io/role: worker
storage:
  files:
  - path: /etc/chrony.conf
    mode: 0644
    overwrite: true
    contents:
      inline: |
        # The Machine Config Operator manages this file.
        server openshift-master-0.<cluster-name>.<domain> iburst
        server openshift-master-1.<cluster-name>.<domain> iburst
        server openshift-master-2.<cluster-name>.<domain> iburst

        stratumweight 0
        driftfile /var/lib/chrony/drift
        rtcsync
        makestep 10 3
        bindcmdaddress 127.0.0.1
        bindcmdaddress ::1
        keyfile /etc/chrony.keys
        commandkey 1
        generatecommandkey
        noclientlog
        logchange 0.5
        logdir /var/log/chrony

You must replace <cluster-name> with the name of the cluster and <domain> with the fully qualified domain name.
Use Butane to generate a MachineConfig object file, 99-worker-chrony-conf-override.yaml, containing the configuration to be delivered to the compute nodes:
$ butane 99-worker-chrony-conf-override.bu -o 99-worker-chrony-conf-override.yaml
Apply the 99-master-chrony-conf-override.yaml policy to the control plane nodes:
$ oc apply -f 99-master-chrony-conf-override.yaml
Example output
machineconfig.machineconfiguration.openshift.io/99-master-chrony-conf-override created
Apply the 99-worker-chrony-conf-override.yaml policy to the compute nodes:
$ oc apply -f 99-worker-chrony-conf-override.yaml
Example output
machineconfig.machineconfiguration.openshift.io/99-worker-chrony-conf-override created
Check the status of the applied NTP settings:
$ oc describe machineconfigpool
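Optionally, you can confirm the time sources on an individual compute node by running the chronyc client through a debug shell. The following command is a minimal verification sketch, not part of the original procedure; replace <node-name> with the name of a compute node in your cluster:

$ oc debug node/<node-name> -- chroot /host chronyc sources

If the override is active, the control plane node host names appear as the configured time sources in the output.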
6.2. Enabling a provisioning network after installation
The assisted installer and installer-provisioned installation for bare metal clusters provide the ability to deploy a cluster without a provisioning network. This capability is for scenarios such as proof-of-concept clusters or deploying exclusively with Redfish virtual media when each node’s baseboard management controller is routable via the baremetal network.
You can enable a provisioning network after installation by using the Cluster Baremetal Operator (CBO).
Prerequisites
- A dedicated physical network must exist, connected to all worker and control plane nodes.
- You must isolate the native, untagged physical network.
- The network cannot have a DHCP server when the provisioningNetwork configuration setting is set to Managed.
- You can omit the provisioningInterface setting in OpenShift Container Platform 4.10 to use the bootMACAddress configuration setting.
Procedure
- When setting the provisioningInterface setting, first identify the provisioning interface name for the cluster nodes, for example, eth0 or eno1.
- Enable the Preboot eXecution Environment (PXE) on the provisioning network interface of the cluster nodes.

Retrieve the current state of the provisioning network and save it to a provisioning custom resource (CR) file:
$ oc get provisioning -o yaml > enable-provisioning-nw.yaml
Modify the provisioning CR file:
$ vim ~/enable-provisioning-nw.yaml
Scroll down to the provisioningNetwork configuration setting and change it from Disabled to Managed. Then, add the provisioningIP, provisioningNetworkCIDR, provisioningDHCPRange, provisioningInterface, and watchAllNameSpaces configuration settings after the provisioningNetwork setting. Provide appropriate values for each setting.

apiVersion: v1
items:
- apiVersion: metal3.io/v1alpha1
  kind: Provisioning
  metadata:
    name: provisioning-configuration
  spec:
    provisioningNetwork: # 1
    provisioningIP: # 2
    provisioningNetworkCIDR: # 3
    provisioningDHCPRange: # 4
    provisioningInterface: # 5
    watchAllNameSpaces: # 6

1. The provisioningNetwork is one of Managed, Unmanaged, or Disabled. When set to Managed, Metal3 manages the provisioning network and the CBO deploys the Metal3 pod with a configured DHCP server. When set to Unmanaged, the system administrator configures the DHCP server manually.
2. The provisioningIP is the static IP address that the DHCP server and ironic use to provision the network. This static IP address must be within the provisioning subnet, and outside of the DHCP range. If you configure this setting, it must have a valid IP address even if the provisioning network is Disabled. The static IP address is bound to the metal3 pod. If the metal3 pod fails and moves to another server, the static IP address also moves to the new server.
3. The Classless Inter-Domain Routing (CIDR) address. If you configure this setting, it must have a valid CIDR address even if the provisioning network is Disabled. For example: 192.168.0.1/24.
4. The DHCP range. This setting is only applicable to a Managed provisioning network. Omit this configuration setting if the provisioning network is Disabled. For example: 192.168.0.64, 192.168.0.253.
5. The NIC name for the provisioning interface on cluster nodes. The provisioningInterface setting is only applicable to Managed and Unmanaged provisioning networks. Omit the provisioningInterface configuration setting if the provisioning network is Disabled. Omit the provisioningInterface configuration setting to use the bootMACAddress configuration setting instead.
6. Set this setting to true if you want metal3 to watch namespaces other than the default openshift-machine-api namespace. The default value is false.
- Save the changes to the provisioning CR file.
Apply the provisioning CR file to the cluster:
$ oc apply -f enable-provisioning-nw.yaml
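Optionally, verify that the change took effect. The following commands are a minimal sketch, not part of the original procedure; they print the provisioningNetwork value from the provisioning-configuration CR and list the metal3 pod that the CBO manages in the openshift-machine-api namespace:

$ oc get provisioning provisioning-configuration -o jsonpath='{.spec.provisioningNetwork}{"\n"}'

$ oc get pods -n openshift-machine-api | grep metal3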
6.3. Creating a manifest object that includes a customized br-ex bridge
As an alternative to using the configure-ovs.sh shell script to set a br-ex bridge on a bare-metal platform, you can create a NodeNetworkConfigurationPolicy (NNCP) custom resource (CR) that includes an NMState configuration file. The Kubernetes NMState Operator uses the NMState configuration file to create a customized br-ex bridge network configuration on each node in your cluster.
After creating the NodeNetworkConfigurationPolicy CR, copy content from the NMState configuration file that was created during cluster installation into the NNCP CR. An incomplete NNCP CR file means that the network policy described in the file cannot be applied to nodes in the cluster.
This feature supports the following tasks:
- Modifying the maximum transmission unit (MTU) for your cluster.
- Modifying attributes of a different bond interface, such as MIImon (Media Independent Interface Monitor), bonding mode, or Quality of Service (QoS).
- Updating DNS values.
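As an illustration of the NMState syntax involved, the following fragment shows how a DNS update is expressed under the desiredState field of an NNCP. The fragment is a sketch only and is not taken from this procedure; the server address 192.0.2.10 is a placeholder that you must replace with your own DNS server:

# Illustrative NMState fragment for the desiredState section of an NNCP.
# 192.0.2.10 is a placeholder DNS server address.
dns-resolver:
  config:
    server:
    - 192.0.2.10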
Consider the following use cases for creating a manifest object that includes a customized br-ex bridge:
- You want to make postinstallation changes to the bridge, such as changing the Open vSwitch (OVS) or OVN-Kubernetes br-ex bridge network. The configure-ovs.sh shell script does not support making postinstallation changes to the bridge.
- You want to deploy the bridge on a different interface than the interface available on a host or server IP address.
- You want to make advanced configurations to the bridge that are not possible with the configure-ovs.sh shell script. Using the script for these configurations might result in the bridge failing to connect multiple network interfaces and to facilitate data forwarding between the interfaces.
Prerequisites
- You set a customized br-ex by using the alternative method to configure-ovs.
- You installed the Kubernetes NMState Operator.
Procedure
Create a NodeNetworkConfigurationPolicy (NNCP) CR and define a customized br-ex bridge network configuration. Depending on your needs, ensure that you set a masquerade IP for either the ipv4.address.ip, ipv6.address.ip, or both parameters. Always include a masquerade IP address in the NNCP CR, and this address must match an in-use IP address block.
Important: As a postinstallation task, you can configure most parameters for a customized br-ex bridge that you defined in an existing NNCP CR, except for the primary IP address of the customized br-ex bridge. If you want to convert your single-stack cluster network to a dual-stack cluster network, you can add or change a secondary IPv6 address in the NNCP CR, but the existing primary IP address cannot be changed.
Example of an NNCP CR that sets IPv6 and IPv4 masquerade IP addresses
apiVersion: nmstate.io/v1
kind: NodeNetworkConfigurationPolicy
metadata:
  name: worker-0-br-ex # 1
spec:
  nodeSelector:
    kubernetes.io/hostname: worker-0
  desiredState:
    interfaces:
    - name: enp2s0 # 2
      type: ethernet # 3
      state: up # 4
      ipv4:
        enabled: false # 5
      ipv6:
        enabled: false
    - name: br-ex
      type: ovs-bridge
      state: up
      ipv4:
        enabled: false
        dhcp: false
      ipv6:
        enabled: false
        dhcp: false
      bridge:
        options:
          mcast-snooping-enable: true
        port:
        - name: enp2s0 # 6
        - name: br-ex
    - name: br-ex
      type: ovs-interface
      state: up
      copy-mac-from: enp2s0
      ipv4:
        enabled: true
        dhcp: true
        auto-route-metric: 48 # 7
        address:
        - ip: "169.254.169.2"
          prefix-length: 29
      ipv6:
        enabled: true
        dhcp: true
        auto-route-metric: 48
        address:
        - ip: "fd69::2"
          prefix-length: 125
# ...

1. Name of the policy.
2. Name of the interface.
3. The type of ethernet.
4. The requested state for the interface after creation.
5. Disables IPv4 and IPv6 in this example.
6. The node NIC to which the bridge is attached.
7. Set the parameter to 48 to ensure the br-ex default route always has the highest precedence (lowest metric). This configuration prevents routing conflicts with any other interfaces that are automatically configured by the NetworkManager service.
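After you define the NNCP CR, one way to create it on the cluster and track its rollout is with the oc client and the nncp and nnce resource short names that the Kubernetes NMState Operator provides. This is a sketch only; the file name worker-0-br-ex.yaml is a placeholder for wherever you saved the CR:

$ oc apply -f worker-0-br-ex.yaml

$ oc get nncp worker-0-br-ex

$ oc get nnce

The nnce output shows the per-node enactment status, which indicates whether the br-ex policy was successfully applied to each matching node.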
Next steps
- Scaling compute nodes to apply the manifest object that includes a customized br-ex bridge to each compute node that exists in your cluster. For more information, see "Expanding the cluster" in the Additional resources section.
Additional resources