Advanced networking
Specialized and advanced networking topics in OpenShift Container Platform
Abstract
Chapter 1. Verifying connectivity to an endpoint
The Cluster Network Operator (CNO) runs a controller, the connectivity check controller, that performs a connection health check between resources within your cluster. By reviewing the results of the health checks, you can diagnose connection problems or eliminate network connectivity as the cause of an issue that you are investigating.
1.1. Connection health checks that are performed
To verify that cluster resources are reachable, a TCP connection is made to each of the following cluster API services:
- Kubernetes API server service
- Kubernetes API server endpoints
- OpenShift API server service
- OpenShift API server endpoints
- Load balancers
To verify that services and service endpoints are reachable on every node in the cluster, a TCP connection is made to each of the following targets:
- Health check target service
- Health check target endpoints
1.2. Implementation of connection health checks
The connectivity check controller orchestrates connection verification checks in your cluster. The results for the connection tests are stored in PodNetworkConnectivityCheck objects in the openshift-network-diagnostics namespace. Connection tests are performed every minute in parallel.
The Cluster Network Operator (CNO) deploys several resources to the cluster to send and receive connectivity health checks:
- Health check source
  - This program deploys in a single pod replica set managed by a Deployment object. The program consumes PodNetworkConnectivityCheck objects and connects to the spec.targetEndpoint specified in each object.
- Health check target
  - A pod deployed as part of a daemon set on every node in the cluster. The pod listens for inbound health checks. The presence of this pod on every node allows for the testing of connectivity to each node.
You can configure the nodes which network connectivity sources and targets run on with a node selector. Additionally, you can specify permissible tolerations for source and target pods. The configuration is defined in the singleton cluster custom resource of the Network API in the config.openshift.io/v1 API group.
Pod scheduling occurs after you have updated the configuration. Therefore, you must apply node labels that you intend to use in your selectors before updating the configuration. Labels applied after updating your network connectivity check pod placement are ignored.
Refer to the default configuration in the following YAML:
Default configuration for connectivity source and target pods
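As a rough sketch, assuming the networkDiagnostics stanza of the config.openshift.io/v1 Network resource and default placements that select Linux nodes and let target pods tolerate all taints, the default configuration looks approximately like this. The numbered comments correspond to the callouts that follow.

apiVersion: config.openshift.io/v1
kind: Network
metadata:
  name: cluster
spec:
  networkDiagnostics:        # 1
    mode: "All"              # 2
    sourcePlacement:         # 3
      nodeSelector:
        kubernetes.io/os: linux
    targetPlacement:         # 4
      nodeSelector:
        kubernetes.io/os: linux
      tolerations:
      - operator: Exists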
- 1: Specifies the network diagnostics configuration. If a value is not specified or an empty object is specified, and spec.disableNetworkDiagnostics=true is set in the network.operator.openshift.io custom resource named cluster, network diagnostics are disabled. If set, this value overrides spec.disableNetworkDiagnostics=true.
- 2: Specifies the diagnostics mode. The value can be the empty string, All, or Disabled. The empty string is equivalent to specifying All.
- 3: Optional: Specifies a selector for connectivity check source pods. You can use the nodeSelector and tolerations fields to further specify the sourceNode pods. These fields are optional for both source and target pods; you can omit them, use both, or use only one of them.
- 4: Optional: Specifies a selector for connectivity check target pods. You can use the nodeSelector and tolerations fields to further specify the targetNode pods. These fields are optional for both source and target pods; you can omit them, use both, or use only one of them.
1.3. Configuring pod connectivity check placement
As a cluster administrator, you can configure which nodes the connectivity check pods run on by modifying the network.config.openshift.io object named cluster.
Prerequisites
- Install the OpenShift CLI (oc).
Procedure
- Edit the connectivity check configuration by entering the following command:

  $ oc edit network.config.openshift.io cluster

- In the text editor, update the networkDiagnostics stanza to specify the node selectors that you want for the source and target pods. A minimal example stanza is sketched after these steps.
- Save your changes and exit the text editor.
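For illustration, a networkDiagnostics stanza that pins source and target pods to labeled nodes might look like the following sketch; the checkNodes label and its groupA and groupB values are hypothetical placeholders:

spec:
  networkDiagnostics:
    mode: "All"
    sourcePlacement:
      nodeSelector:
        checkNodes: groupA
    targetPlacement:
      nodeSelector:
        checkNodes: groupB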
Verification
- Verify that the source and target pods are running on the intended nodes by entering the following command:
$ oc get pods -n openshift-network-diagnostics -o wide
Example output
1.4. PodNetworkConnectivityCheck object fields
The PodNetworkConnectivityCheck object fields are described in the following tables.
| Field | Type | Description |
|---|---|---|
| metadata.name | string | The name of the object in the following format: <source>-to-<target>. |
| metadata.namespace | string | The namespace that the object is associated with. This value is always openshift-network-diagnostics. |
| spec.sourcePod | string | The name of the pod where the connection check originates. |
| spec.targetEndpoint | string | The target of the connection check, such as an API server endpoint. |
| spec.tlsClientCert | object | Configuration for the TLS certificate to use. |
| spec.tlsClientCert.name | string | The name of the TLS certificate used, if any. The default value is an empty string. |
| status | object | An object representing the condition of the connection test and logs of recent connection successes and failures. |
| status.conditions | array | The latest status of the connection check and any previous statuses. |
| status.failures | array | Connection test logs from unsuccessful attempts. |
| status.outages | array | Connection test logs covering the time periods of any outages. |
| status.successes | array | Connection test logs from successful attempts. |
The following table describes the fields for objects in the status.conditions array:
| Field | Type | Description |
|---|---|---|
| lastTransitionTime | Metav1 Time | The time that the condition of the connection transitioned from one status to another. |
| message | string | The details about the last transition in a human readable format. |
| reason | string | The last status of the transition in a machine readable format. |
| status | string | The status of the condition. |
| type | string | The type of the condition. |
The following table describes the fields for objects in the status.outages array:
| Field | Type | Description |
|---|---|---|
| end | Metav1 Time | The timestamp from when the connection failure is resolved. |
| endLogs | array | Connection log entries, including the log entry related to the successful end of the outage. |
| message | string | A summary of outage details in a human readable format. |
| start | Metav1 Time | The timestamp from when the connection failure is first detected. |
| startLogs | array | Connection log entries, including the original failure. |
1.4.1. Connection log fields
The fields for a connection log entry are described in the following table. The object is used in the following fields:
- status.failures[]
- status.successes[]
- status.outages[].startLogs[]
- status.outages[].endLogs[]
| Field | Type | Description |
|---|---|---|
| latency | Metav1 Duration | Records the duration of the action. |
| message | string | Provides the status in a human readable format. |
| reason | string | Provides the reason for the status in a machine readable format. The value is one of TCPConnect, TCPConnectError, DNSResolve, or DNSError. |
| success | boolean | Indicates if the log entry is a success or failure. |
| time | Metav1 Time | The start time of the connection check. |
1.5. Verifying network connectivity for an endpoint
As a cluster administrator, you can verify the connectivity of an endpoint, such as an API server, load balancer, service, or pod, and verify that network diagnostics is enabled.
Prerequisites
- Install the OpenShift CLI (oc).
- Access to the cluster as a user with the cluster-admin role.
Procedure
- Confirm that network diagnostics are enabled by entering the following command:

  $ oc get network.config.openshift.io cluster -o yaml

  Example output

- List the current PodNetworkConnectivityCheck objects by entering the following command:

  $ oc get podnetworkconnectivitycheck -n openshift-network-diagnostics

  Example output

- View the connection test logs:
  - From the output of the previous command, identify the endpoint that you want to review the connectivity logs for.
  - View the object by entering the following command:

    $ oc get podnetworkconnectivitycheck <name> \
        -n openshift-network-diagnostics -o yaml

    where <name> specifies the name of the PodNetworkConnectivityCheck object.

    Example output
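As a rough illustration of the shape of the output, a trimmed PodNetworkConnectivityCheck object might resemble the following sketch; the object name, endpoint, timestamps, latencies, and reason values shown are illustrative only:

apiVersion: controlplane.operator.openshift.io/v1alpha1
kind: PodNetworkConnectivityCheck
metadata:
  name: network-check-source-to-kubernetes-apiserver-service-cluster
  namespace: openshift-network-diagnostics
spec:
  sourcePod: network-check-source-7c88f6d9f-hmg2f
  targetEndpoint: 172.30.0.1:443
  tlsClientCert:
    name: ""
status:
  conditions:
  - lastTransitionTime: "2024-01-01T00:00:00Z"
    message: 'kubernetes-apiserver-service-cluster: tcp connection to 172.30.0.1:443 succeeded'
    reason: TCPConnectSuccess
    status: "True"
    type: Reachable
  successes:
  - latency: 2.241775ms
    message: 'kubernetes-apiserver-service-cluster: tcp connection to 172.30.0.1:443 succeeded'
    reason: TCPConnect
    success: true
    time: "2024-01-01T00:00:00Z"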
Chapter 2. Changing the MTU for the cluster network
As a cluster administrator, you can change the maximum transmission unit (MTU) for the cluster network after cluster installation. This change is disruptive as cluster nodes must be rebooted to finalize the MTU change.
2.1. About the cluster MTU
During installation, the cluster network MTU is set automatically based on the primary network interface MTU of cluster nodes. You do not usually need to override the detected MTU.
You might want to change the MTU of the cluster network for one of the following reasons:
- The MTU detected during cluster installation is not correct for your infrastructure.
- Your cluster infrastructure now requires a different MTU, such as from the addition of nodes that need a different MTU for optimal performance.
Only the OVN-Kubernetes network plugin supports changing the MTU value.
2.1.1. Service interruption considerations
When you initiate a maximum transmission unit (MTU) change on your cluster, the following effects might impact service availability:
- At least two rolling reboots are required to complete the migration to a new MTU. During this time, some nodes are not available as they restart.
- Specific applications deployed to the cluster with shorter timeout intervals than the absolute TCP timeout interval might experience disruption during the MTU change.
2.1.2. MTU value selection
When planning your maximum transmission unit (MTU) migration, there are two related but distinct MTU values to consider.
- Hardware MTU: This MTU value is set based on the specifics of your network infrastructure.
- Cluster network MTU: This MTU value is always less than your hardware MTU to account for the cluster network overlay overhead. The specific overhead is determined by your network plugin. For OVN-Kubernetes, the overhead is 100 bytes.
If your cluster requires different MTU values for different nodes, you must subtract the overhead value for your network plugin from the lowest MTU value that is used by any node in your cluster. For example, if some nodes in your cluster have an MTU of 9001, and some have an MTU of 1500, you must set this value to 1400.
To avoid selecting an MTU value that is not acceptable by a node, verify the maximum MTU value (maxmtu) that is accepted by the network interface by using the ip -d link command.
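For example, you can run the command from a debug shell on a node and note the maxmtu value reported for the interface; <node_name> and <interface> are placeholders:

$ oc debug node/<node_name> -- chroot /host ip -d link show <interface>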
2.1.3. How the migration process works
The following table summarizes the migration process by segmenting between the user-initiated steps in the process and the actions that the migration performs in response.
| User-initiated steps | OpenShift Container Platform activity |
|---|---|
| Set the following values in the Cluster Network Operator configuration: spec.migration.mtu.network.from, spec.migration.mtu.network.to, and spec.migration.mtu.machine.to. | Cluster Network Operator (CNO): Confirms that each field is set to a valid value. If the values provided are valid, the CNO writes out a new temporary configuration with the MTU for the cluster network set to the value of the mtu.network.to field. Machine Config Operator (MCO): Performs a rolling reboot of each node in the cluster. |
| Reconfigure the MTU of the primary network interface for the nodes on the cluster. You can use one of the following methods to accomplish this: a DHCP server option, a kernel command line with PXE, or a machine config. | N/A |
| Set the mtu value for the network plugin in the Cluster Network Operator configuration and set spec.migration to null. | Machine Config Operator (MCO): Performs a rolling reboot of each node in the cluster with the new MTU configuration. |
2.2. Changing the cluster network MTU
As a cluster administrator, you can increase or decrease the maximum transmission unit (MTU) for your cluster.
You cannot roll back an MTU value for nodes during the MTU migration process, but you can roll back the value after the MTU migration process completes.
The migration is disruptive and nodes in your cluster might be temporarily unavailable as the MTU update takes effect.
The following procedures describe how to change the cluster network MTU by using machine configs, Dynamic Host Configuration Protocol (DHCP), or an ISO image. If you use either the DHCP or ISO approaches, you must refer to configuration artifacts that you kept after installing your cluster to complete the procedure.
Prerequisites
- You have installed the OpenShift CLI (oc).
- You have access to the cluster using an account with cluster-admin permissions.
- You have identified the target MTU for your cluster. The MTU for the OVN-Kubernetes network plugin must be set to 100 less than the lowest hardware MTU value in your cluster.
- If your nodes are physical machines, ensure that the cluster network and the connected network switches support jumbo frames.
- If your nodes are virtual machines (VMs), ensure that the hypervisor and the connected network switches support jumbo frames.
2.2.1. Checking the current cluster MTU value
Use the following procedure to obtain the current maximum transmission unit (MTU) for the cluster network.
Procedure
To obtain the current MTU for the cluster network, enter the following command:
$ oc describe network.config cluster

Example output
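The relevant portion of a typical result generally includes a line similar to the following sketch, where the value shown is illustrative:

Status:
  Cluster Network MTU: 1400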
2.2.2. Preparing your hardware MTU configuration
Many ways exist to configure the hardware maximum transmission unit (MTU) for your cluster nodes. The following examples show only the most common methods. Verify the correctness of your infrastructure MTU. Select your preferred method for configuring your hardware MTU in the cluster nodes.
Procedure
Prepare your configuration for the hardware MTU:
If your hardware MTU is specified with DHCP, update your DHCP configuration such as with the following dnsmasq configuration:
dhcp-option-force=26,<mtu>

where:
- <mtu>: Specifies the hardware MTU for the DHCP server to advertise.
- If your hardware MTU is specified with a kernel command line with PXE, update that configuration accordingly.
If your hardware MTU is specified in a NetworkManager connection configuration, complete the following steps. This approach is the default for OpenShift Container Platform if you do not explicitly specify your network configuration with DHCP, a kernel command line, or some other method. Your cluster nodes must all use the same underlying network configuration for the following procedure to work unmodified.
Find the primary network interface by entering the following command:
$ oc debug node/<node_name> -- chroot /host nmcli -g connection.interface-name c show ovs-if-phys0

where:
- <node_name>: Specifies the name of a node in your cluster.

Create the following NetworkManager configuration in the <interface>-mtu.conf file:

[connection-<interface>-mtu]
match-device=interface-name:<interface>
ethernet.mtu=<mtu>

where:
- <interface>: Specifies the primary network interface name.
- <mtu>: Specifies the new hardware MTU value.
2.2.3. Creating MachineConfig objects
Use the following procedure to create the MachineConfig objects.
Procedure
Create two MachineConfig objects, one for the control plane nodes and another for the worker nodes in your cluster:

- Create the following Butane config in the control-plane-interface.bu file. A minimal sketch of this config is shown after these steps.

  Note: The Butane version that you specify in the config file must match the OpenShift Container Platform version and always end in 0. For example, 4.20.0. See "Creating machine configs with Butane" for information about Butane.

- Create the following Butane config in the worker-interface.bu file:

  Note: The Butane version that you specify in the config file must match the OpenShift Container Platform version and always end in 0. For example, 4.20.0. See "Creating machine configs with Butane" for information about Butane.
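A minimal sketch of the control plane Butane config, assuming the openshift Butane variant and the 99-<interface>-mtu.conf path that is verified later in this procedure, might look like the following; the worker-interface.bu file differs only in its metadata name and the machineconfiguration.openshift.io/role: worker label:

variant: openshift
version: 4.20.0
metadata:
  name: 01-control-plane-interface
  labels:
    machineconfiguration.openshift.io/role: master
storage:
  files:
    - path: /etc/NetworkManager/conf.d/99-<interface>-mtu.conf
      mode: 0600
      contents:
        inline: |
          [connection-<interface>-mtu]
          match-device=interface-name:<interface>
          ethernet.mtu=<mtu>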
Create MachineConfig objects from the Butane configs by running the following command:

$ for manifest in control-plane-interface worker-interface; do
    butane --files-dir . $manifest.bu > $manifest.yaml
  done

Warning: Do not apply these machine configs until explicitly instructed later in this procedure. Applying these machine configs now causes a loss of stability for the cluster.
2.2.4. Beginning the MTU migration
Use the following procedure to start the MTU migration.
Procedure
To begin the MTU migration, specify the migration configuration by entering the following command. The Machine Config Operator performs a rolling reboot of the nodes in the cluster in preparation for the MTU change.
$ oc patch Network.operator.openshift.io cluster --type=merge --patch \
  '{"spec": { "migration": { "mtu": { "network": { "from": <overlay_from>, "to": <overlay_to> } , "machine": { "to" : <machine_to> } } } } }'

where:
- <overlay_from>: Specifies the current cluster network MTU value.
- <overlay_to>: Specifies the target MTU for the cluster network. This value is set relative to the value of <machine_to>. For OVN-Kubernetes, this value must be 100 less than the value of <machine_to>.
- <machine_to>: Specifies the MTU for the primary network interface on the underlying host network.

For example:

$ oc patch Network.operator.openshift.io cluster --type=merge --patch \
  '{"spec": { "migration": { "mtu": { "network": { "from": 1400, "to": 9000 } , "machine": { "to" : 9100} } } } }'

As the Machine Config Operator updates machines in each machine config pool, the Operator reboots each node one by one. You must wait until all the nodes are updated. Check the machine config pool status by entering the following command:

$ oc get machineconfigpools

A successfully updated node has the following status: UPDATED=true, UPDATING=false, DEGRADED=false.

Note: By default, the Machine Config Operator updates one machine per pool at a time, causing the total time the migration takes to increase with the size of the cluster.
2.2.5. Verifying the machine configuration
Use the following procedure to verify the machine configuration.
Procedure
Confirm the status of the new machine configuration on the hosts:
To list the machine configuration state and the name of the applied machine configuration, enter the following command:
$ oc describe node | egrep "hostname|machineconfig"

Example output

kubernetes.io/hostname=master-0
machineconfiguration.openshift.io/currentConfig: rendered-master-c53e221d9d24e1c8bb6ee89dd3d8ad7b
machineconfiguration.openshift.io/desiredConfig: rendered-master-c53e221d9d24e1c8bb6ee89dd3d8ad7b
machineconfiguration.openshift.io/reason:
machineconfiguration.openshift.io/state: Done

Verify that the following statements are true:
- The value of the machineconfiguration.openshift.io/state field is Done.
- The value of the machineconfiguration.openshift.io/currentConfig field is equal to the value of the machineconfiguration.openshift.io/desiredConfig field.
To confirm that the machine config is correct, enter the following command:
$ oc get machineconfig <config_name> -o yaml | grep ExecStart

where:
- <config_name>: Specifies the name of the machine config from the machineconfiguration.openshift.io/currentConfig field.
The machine config must include the following update to the systemd configuration:
ExecStart=/usr/local/bin/mtu-migration.sh
2.2.6. Applying the new hardware MTU value
Use the following procedure to apply the new hardware maximum transmission unit (MTU) value.
Procedure
Update the underlying network interface MTU value:
- If you are specifying the new MTU with a NetworkManager connection configuration, enter the following command. The Machine Config Operator automatically performs a rolling reboot of the nodes in your cluster.

  $ for manifest in control-plane-interface worker-interface; do
      oc create -f $manifest.yaml
    done

- If you are specifying the new MTU with a DHCP server option or a kernel command line and PXE, make the necessary changes for your infrastructure.
As the Machine Config Operator updates machines in each machine config pool, the Operator reboots each node one by one. You must wait until all the nodes are updated. Check the machine config pool status by entering the following command:
$ oc get machineconfigpools

A successfully updated node has the following status: UPDATED=true, UPDATING=false, DEGRADED=false.

Note: By default, the Machine Config Operator updates one machine per pool at a time, causing the total time the migration takes to increase with the size of the cluster.
Confirm the status of the new machine configuration on the hosts:
To list the machine configuration state and the name of the applied machine configuration, enter the following command:
$ oc describe node | egrep "hostname|machineconfig"

Example output

kubernetes.io/hostname=master-0
machineconfiguration.openshift.io/currentConfig: rendered-master-c53e221d9d24e1c8bb6ee89dd3d8ad7b
machineconfiguration.openshift.io/desiredConfig: rendered-master-c53e221d9d24e1c8bb6ee89dd3d8ad7b
machineconfiguration.openshift.io/reason:
machineconfiguration.openshift.io/state: Done

Verify that the following statements are true:
- The value of the machineconfiguration.openshift.io/state field is Done.
- The value of the machineconfiguration.openshift.io/currentConfig field is equal to the value of the machineconfiguration.openshift.io/desiredConfig field.
To confirm that the machine config is correct, enter the following command:
$ oc get machineconfig <config_name> -o yaml | grep path:

where:
- <config_name>: Specifies the name of the machine config from the machineconfiguration.openshift.io/currentConfig field.
If the machine config is successfully deployed, the previous output contains the /etc/NetworkManager/conf.d/99-<interface>-mtu.conf file path and the ExecStart=/usr/local/bin/mtu-migration.sh line.
2.2.7. Finalizing the MTU migration
Use the following procedure to finalize the MTU migration.
Procedure
To finalize the MTU migration, enter the following command for the OVN-Kubernetes network plugin:
$ oc patch Network.operator.openshift.io cluster --type=merge --patch \
  '{"spec": { "migration": null, "defaultNetwork":{ "ovnKubernetesConfig": { "mtu": <mtu> }}}}'

where:
- <mtu>: Specifies the new cluster network MTU that you specified with <overlay_to>.
After finalizing the MTU migration, each machine config pool node is rebooted one by one. You must wait until all the nodes are updated. Check the machine config pool status by entering the following command:
$ oc get machineconfigpools

A successfully updated node has the following status: UPDATED=true, UPDATING=false, DEGRADED=false.
Verification
To get the current MTU for the cluster network, enter the following command:
$ oc describe network.config cluster

Get the current MTU for the primary network interface of a node:
To list the nodes in your cluster, enter the following command:
$ oc get nodes

To obtain the current MTU setting for the primary network interface on a node, enter the following command:
$ oc adm node-logs <node> -u ovs-configuration | grep configure-ovs.sh | grep mtu | grep <interface> | head -1

where:
- <node>: Specifies a node from the output of the previous step.
- <interface>: Specifies the primary network interface name for the node.
Example output
ens3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 8051
Chapter 3. Using the Stream Control Transmission Protocol (SCTP)
As a cluster administrator, you can use the Stream Control Transmission Protocol (SCTP) on a bare-metal cluster.
3.1. Support for SCTP on OpenShift Container Platform
As a cluster administrator, you can enable SCTP on the hosts in the cluster. On Red Hat Enterprise Linux CoreOS (RHCOS), the SCTP module is disabled by default.
SCTP is a reliable, message-based protocol that runs on top of an IP network.
When enabled, you can use SCTP as a protocol with pods, services, and network policy. A Service object must be defined with the type parameter set to either the ClusterIP or NodePort value.
3.1.1. Example configurations using SCTP protocol
You can configure a pod or service to use SCTP by setting the protocol parameter to the SCTP value in the pod or service object.
In the following example, a pod is configured to use SCTP:
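A minimal sketch of such a pod, in which the webserver name, image, and the 30100 port value are placeholders:

apiVersion: v1
kind: Pod
metadata:
  name: webserver
  labels:
    app: webserver
spec:
  containers:
    - name: webserver
      image: registry.access.redhat.com/ubi9/ubi
      command: ["sleep", "infinity"]
      ports:
        - containerPort: 30100
          name: sctpserver
          protocol: SCTP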
In the following example, a service is configured to use SCTP:
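A minimal sketch of such a service, reusing the placeholder names from the previous pod sketch:

apiVersion: v1
kind: Service
metadata:
  name: sctpserver
spec:
  type: ClusterIP
  selector:
    app: webserver
  ports:
    - name: sctpserver
      protocol: SCTP
      port: 30100
      targetPort: 30100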
In the following example, a NetworkPolicy object is configured to apply to SCTP network traffic on port 80 from any pods with a specific label:
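A minimal sketch of such a NetworkPolicy; the role: web label is a hypothetical placeholder for the specific label:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-sctp-on-http-port
spec:
  podSelector: {}
  ingress:
    - from:
        - podSelector:
            matchLabels:
              role: web
      ports:
        - protocol: SCTP
          port: 80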
3.2. Enabling Stream Control Transmission Protocol (SCTP)
As a cluster administrator, you can load and enable the blacklisted SCTP kernel module on worker nodes in your cluster.
Prerequisites
- Install the OpenShift CLI (oc).
- Access to the cluster as a user with the cluster-admin role.
Procedure
- Create a file named load-sctp-module.yaml that contains the following YAML definition. A minimal sketch of this MachineConfig object is shown after this procedure.
- To create the MachineConfig object, enter the following command:

  $ oc create -f load-sctp-module.yaml

- Optional: To watch the status of the nodes while the Machine Config Operator applies the configuration change, enter the following command. When the status of a node transitions to Ready, the configuration update is applied.

  $ oc get nodes
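A minimal sketch of the load-sctp-module.yaml MachineConfig object referenced in the first step, assuming worker nodes and an Ignition config that blanks the SCTP blacklist file and loads the module at boot:

apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  name: load-sctp-module
  labels:
    machineconfiguration.openshift.io/role: worker
spec:
  config:
    ignition:
      version: 3.2.0
    storage:
      files:
        - path: /etc/modprobe.d/sctp-blacklist.conf
          mode: 0644
          overwrite: true
          contents:
            source: data:,
        - path: /etc/modules-load.d/sctp-load.conf
          mode: 0644
          overwrite: true
          contents:
            source: data:,sctp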
3.3. Verifying Stream Control Transmission Protocol (SCTP) is enabled
You can verify that SCTP is working on a cluster by creating a pod with an application that listens for SCTP traffic, associating it with a service, and then connecting to the exposed service.
Prerequisites
- Access to the internet from the cluster to install the nc package.
- Install the OpenShift CLI (oc).
- Access to the cluster as a user with the cluster-admin role.
Procedure
Create a pod that starts an SCTP listener:
- Create a file named sctp-server.yaml that defines a pod with the following YAML:
- Create the pod by entering the following command:

  $ oc create -f sctp-server.yaml
Create a service for the SCTP listener pod.
- Create a file named sctp-service.yaml that defines a service with the following YAML:
- To create the service, enter the following command:

  $ oc create -f sctp-service.yaml
Create a pod for the SCTP client.
- Create a file named sctp-client.yaml with the following YAML:
- To create the Pod object, enter the following command:

  $ oc apply -f sctp-client.yaml
Run an SCTP listener on the server.
To connect to the server pod, enter the following command:
$ oc rsh sctpserver

To start the SCTP listener, enter the following command:

$ nc -l 30102 --sctp
Connect to the SCTP listener on the server.
- Open a new terminal window or tab in your terminal program.
- Obtain the IP address of the sctpservice service. Enter the following command:

  $ oc get services sctpservice -o go-template='{{.spec.clusterIP}}{{"\n"}}'

- To connect to the client pod, enter the following command:

  $ oc rsh sctpclient

- To start the SCTP client, enter the following command. Replace <cluster_IP> with the cluster IP address of the sctpservice service.

  # nc <cluster_IP> 30102 --sctp
Chapter 4. Associating secondary interfaces metrics to network attachments
Administrators can use the pod_network_name_info metric to classify and monitor secondary network interfaces. The metric does this by adding a label that identifies the interface type, typically based on the associated NetworkAttachmentDefinition resource.
4.1. Extending secondary network metrics for monitoring
Secondary devices, or interfaces, are used for different purposes. Metrics from secondary network interfaces need to be classified to allow for effective aggregation and monitoring.
Exposed metrics contain the interface but do not specify where the interface originates. This is workable when there are no additional interfaces. However, relying on interface names alone becomes problematic when secondary interfaces are added because it is difficult to identify their purpose and use their metrics effectively.
When adding secondary interfaces, their names depend on the order in which they are added. Secondary interfaces can belong to distinct networks that can each serve a different purpose.
With pod_network_name_info it is possible to extend the current metrics with additional information that identifies the interface type. In this way, it is possible to aggregate the metrics and to add specific alarms to specific interface types.
The network type is generated from the name of the NetworkAttachmentDefinition resource, which distinguishes different secondary network classes. For example, different interfaces belonging to different networks or using different CNIs use different network attachment definition names.
4.2. Network Metrics Daemon
The Network Metrics Daemon is a daemon component that collects and publishes network related metrics.
The kubelet is already publishing network related metrics you can observe. These metrics are:
- container_network_receive_bytes_total
- container_network_receive_errors_total
- container_network_receive_packets_total
- container_network_receive_packets_dropped_total
- container_network_transmit_bytes_total
- container_network_transmit_errors_total
- container_network_transmit_packets_total
- container_network_transmit_packets_dropped_total
The labels in these metrics contain, among others:
- Pod name
- Pod namespace
- Interface name (such as eth0)
These metrics work well until new interfaces are added to the pod, for example via Multus, as it is not clear what the interface names refer to.
The interface label refers to the interface name, but it is not clear what that interface is meant for. In case of many different interfaces, it would be impossible to understand what network the metrics you are monitoring refer to.
This is addressed by introducing the new pod_network_name_info described in the following section.
4.3. Metrics with network name
The Network Metrics daemonset publishes a pod_network_name_info gauge metric, with a fixed value of 0.
Example of pod_network_name_info
pod_network_name_info{interface="net0",namespace="namespacename",network_name="nadnamespace/firstNAD",pod="podname"} 0
The network name label is produced using the annotation added by Multus. It is the concatenation of the namespace the network attachment definition belongs to, plus the name of the network attachment definition.
The new metric alone does not provide much value, but combined with the network related container_network_* metrics, it offers better support for monitoring secondary networks.
Using a PromQL query like the following, it is possible to get a new metric containing the value and the network name retrieved from the k8s.v1.cni.cncf.io/network-status annotation:
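A representative sketch of such a query, which attaches the network_name label to the receive byte counter (the same pattern applies to the other container_network_* metrics):

(container_network_receive_bytes_total) + on(namespace, pod) group_left(network_name) (pod_network_name_info)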
Chapter 5. BGP routing
5.1. About BGP routing
This feature provides native Border Gateway Protocol (BGP) routing capabilities for the cluster.
If you are using the MetalLB Operator and there are existing FRRConfiguration CRs in the metallb-system namespace created by cluster administrators or third-party cluster components other than the MetalLB Operator, you must ensure that they are copied to the openshift-frr-k8s namespace or that those third-party cluster components use the new namespace. For more information, see Migrating FRR-K8s resources.
5.1.1. About Border Gateway Protocol (BGP) routing
OpenShift Container Platform supports BGP routing through FRRouting (FRR), a free, open source internet routing protocol suite for Linux, UNIX, and similar operating systems. FRR-K8s is a Kubernetes-based daemon set that exposes a subset of the FRR API in a Kubernetes-compliant manner. As a cluster administrator, you can use the FRRConfiguration custom resource (CR) to access FRR services.
5.1.1.1. Supported platforms
BGP routing is supported on the following infrastructure types:
- Bare metal
BGP routing requires that you have properly configured BGP for your network provider. Outages or misconfigurations of your network provider might cause disruptions to your cluster network.
5.1.1.2. Considerations for use with the MetalLB Operator
The MetalLB Operator is installed as an add-on to the cluster. Deployment of the MetalLB Operator automatically enables FRR-K8s as an additional routing capability provider and uses the FRR-K8s daemon installed by this feature.
Before upgrading to 4.18, any existing FRRConfiguration in the metallb-system namespace not managed by the MetalLB operator (added by a cluster administrator or any other component) needs to be copied to the openshift-frr-k8s namespace manually, creating the namespace if necessary.
If you are using the MetalLB Operator and there are existing FRRConfiguration CRs in the metallb-system namespace created by cluster administrators or third-party cluster components other than MetalLB Operator, you must:
- Ensure that these existing FRRConfiguration CRs are copied to the openshift-frr-k8s namespace.
- Ensure that the third-party cluster components use the new namespace for the FRRConfiguration CRs that they create.
5.1.1.3. Cluster Network Operator configuration
The Cluster Network Operator API exposes the following API field to configure BGP routing:
- spec.additionalRoutingCapabilities: Enables deployment of the FRR-K8s daemon for the cluster, which can be used independently of route advertisements. When enabled, the FRR-K8s daemon is deployed on all nodes.
5.1.1.4. BGP routing custom resources
The following custom resources are used to configure BGP routing:
- FRRConfiguration: This custom resource defines the FRR configuration for BGP routing. This CR is namespaced.
5.1.2. Configuring the FRRConfiguration CRD
The following section provides reference examples that use the FRRConfiguration custom resource (CR).
5.1.2.1. The routers field
You can use the routers field to configure multiple routers, one for each Virtual Routing and Forwarding (VRF) resource. For each router, you must define the Autonomous System Number (ASN).
You can also define a list of Border Gateway Protocol (BGP) neighbors to connect to, as in the following example:
Example FRRConfiguration CR
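A minimal sketch of such a CR, assuming the frrk8s.metallb.io/v1beta1 API and placeholder ASN and neighbor address values:

apiVersion: frrk8s.metallb.io/v1beta1
kind: FRRConfiguration
metadata:
  name: test
  namespace: openshift-frr-k8s
spec:
  bgp:
    routers:
    - asn: 64512
      neighbors:
      - address: 172.18.0.5
        asn: 64512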
5.1.2.2. The toAdvertise field
By default, FRR-K8s does not advertise the prefixes configured as part of a router configuration. In order to advertise them, you use the toAdvertise field.
You can advertise a subset of the prefixes, as in the following example:
Example FRRConfiguration CR
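A minimal sketch of such a CR; the prefixes and peer values are placeholders, and the numbered comment corresponds to the callout that follows:

apiVersion: frrk8s.metallb.io/v1beta1
kind: FRRConfiguration
metadata:
  name: test
  namespace: openshift-frr-k8s
spec:
  bgp:
    routers:
    - asn: 64512
      prefixes:
      - 192.168.2.0/24
      - 192.169.2.0/24
      neighbors:
      - address: 172.18.0.5
        asn: 64512
        toAdvertise:
          allowed:
            prefixes:   # 1
            - 192.168.2.0/24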
- 1: Advertises a subset of prefixes.
The following example shows you how to advertise all of the prefixes:
Example FRRConfiguration CR
- 1: Advertises all prefixes.
5.1.2.3. The toReceive field
By default, FRR-K8s does not process any prefixes advertised by a neighbor. You can use the toReceive field to process such addresses.
You can configure for a subset of the prefixes, as in this example:
Example FRRConfiguration CR
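A minimal sketch of such a CR, assuming placeholder prefixes and an optional ge/le mask-length range on the second prefix:

apiVersion: frrk8s.metallb.io/v1beta1
kind: FRRConfiguration
metadata:
  name: test
  namespace: openshift-frr-k8s
spec:
  bgp:
    routers:
    - asn: 64512
      neighbors:
      - address: 172.18.0.5
        asn: 64512
        toReceive:
          allowed:
            prefixes:
            - prefix: 192.168.1.0/24
            - prefix: 192.169.2.0/24
              ge: 25
              le: 28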
The following example configures FRR to handle all the prefixes announced:
Example FRRConfiguration CR
5.1.2.4. The bgp field
You can use the bgp field to define various BFD profiles and associate them with a neighbor. In the following example, BFD backs up the BGP session and FRR can detect link failures:
Example FRRConfiguration CR
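A minimal sketch of such a CR, with a placeholder BFD profile name associated with the neighbor:

apiVersion: frrk8s.metallb.io/v1beta1
kind: FRRConfiguration
metadata:
  name: test
  namespace: openshift-frr-k8s
spec:
  bgp:
    bfdProfiles:
    - name: defaultprofile
    routers:
    - asn: 64512
      neighbors:
      - address: 172.18.0.5
        asn: 64512
        bfdProfile: defaultprofile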
5.1.2.5. The nodeSelector field
By default, FRR-K8s applies the configuration to all nodes where the daemon is running. You can use the nodeSelector field to specify the nodes to which you want to apply the configuration. For example:
Example FRRConfiguration CR
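A minimal sketch of such a CR; the node label used in the selector is a hypothetical placeholder:

apiVersion: frrk8s.metallb.io/v1beta1
kind: FRRConfiguration
metadata:
  name: test
  namespace: openshift-frr-k8s
spec:
  bgp:
    routers:
    - asn: 64512
      neighbors:
      - address: 172.18.0.5
        asn: 64512
  nodeSelector:
    matchLabels:
      node.example.com/bgp: "enabled"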
5.1.2.6. The interface field
You can use the interface field to configure unnumbered BGP peering by using the following example configuration:
Example FRRConfiguration CR
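A minimal sketch of such a CR; the eth0 interface name is a placeholder, and the numbered comment corresponds to the callout that follows:

apiVersion: frrk8s.metallb.io/v1beta1
kind: FRRConfiguration
metadata:
  name: test
  namespace: openshift-frr-k8s
spec:
  bgp:
    routers:
    - asn: 64512
      neighbors:
      - asn: 64512
        interface: eth0   # 1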
- 1: Activates unnumbered BGP peering.
To use the interface field, you must establish a point-to-point, layer 2 connection between the two BGP peers. You can use unnumbered BGP peering with IPv4, IPv6, or dual-stack, but you must enable IPv6 RAs (Router Advertisements). Each interface is limited to one BGP connection.
If you use this field, you cannot specify a value in the spec.bgp.routers.neighbors.address field.
The fields for the FRRConfiguration custom resource are described in the following table:
| Field | Type | Description |
|---|---|---|
|
|
| Specifies the routers that FRR is to configure (one per VRF). |
|
|
| The Autonomous System Number (ASN) to use for the local end of the session. |
|
|
|
Specifies the ID of the |
|
|
| Specifies the host vrf used to establish sessions from this router. |
|
|
| Specifies the neighbors to establish BGP sessions with. |
|
|
|
Specifies the ASN to use for the remote end of the session. If you use this field, you cannot specify a value in the |
|
|
|
Detects the ASN to use for the remote end of the session without explicitly setting it. Specify |
|
|
|
Specifies the IP address to establish the session with. If you use this field, you cannot specify a value in the |
|
|
| Specifies the interface name to use when establishing a session. Use this field to configure unnumbered BGP peering. There must be a point-to-point, layer 2 connection between the two BGP peers. You can use unnumbered BGP peering with IPv4, IPv6, or dual-stack, but you must enable IPv6 RAs (Router Advertisements). Each interface is limited to one BGP connection. |
|
|
| Specifies the port to dial when establishing the session. Defaults to 179. |
|
|
|
Specifies the password to use for establishing the BGP session. |
|
|
|
Specifies the name of the authentication secret for the neighbor. The secret must be of type "kubernetes.io/basic-auth", and in the same namespace as the FRR-K8s daemon. The key "password" stores the password in the secret. |
|
|
| Specifies the requested BGP hold time, per RFC4271. Defaults to 180s. |
|
|
|
Specifies the requested BGP keepalive time, per RFC4271. Defaults to |
|
|
| Specifies how long BGP waits between connection attempts to a neighbor. |
|
|
| Indicates if the BGPPeer is multi-hops away. |
|
|
| Specifies the name of the BFD Profile to use for the BFD session associated with the BGP session. If not set, the BFD session is not set up. |
|
|
| Represents the list of prefixes to advertise to a neighbor, and the associated properties. |
|
|
| Specifies the list of prefixes to advertise to a neighbor. This list must match the prefixes that you define in the router. |
|
|
|
Specifies the mode to use when handling the prefixes. You can set to |
|
|
| Specifies the prefixes associated with an advertised local preference. You must specify the prefixes associated with a local preference in the prefixes allowed to be advertised. |
|
|
| Specifies the prefixes associated with the local preference. |
|
|
| Specifies the local preference associated with the prefixes. |
|
|
| Specifies the prefixes associated with an advertised BGP community. You must include the prefixes associated with a local preference in the list of prefixes that you want to advertise. |
|
|
| Specifies the prefixes associated with the community. |
|
|
| Specifies the community associated with the prefixes. |
|
|
| Specifies the prefixes to receive from a neighbor. |
|
|
| Specifies the information that you want to receive from a neighbor. |
|
|
| Specifies the prefixes allowed from a neighbor. |
|
|
|
Specifies the mode to use when handling the prefixes. When set to |
|
|
| Disables MP BGP to prevent it from separating IPv4 and IPv6 route exchanges into distinct BGP sessions. |
|
|
| Specifies all prefixes to advertise from this router instance. |
|
|
| Specifies the list of bfd profiles to use when configuring the neighbors. |
|
|
| The name of the BFD Profile to be referenced in other parts of the configuration. |
|
|
|
Specifies the minimum interval at which this system can receive control packets, in milliseconds. Defaults to |
|
|
|
Specifies the minimum transmission interval, excluding jitter, that this system wants to use to send BFD control packets, in milliseconds. Defaults to |
|
|
| Configures the detection multiplier to determine packet loss. To determine the connection loss-detection timer, multiply the remote transmission interval by this value. |
|
|
|
Configures the minimal echo receive transmission-interval that this system can handle, in milliseconds. Defaults to |
|
|
| Enables or disables the echo transmission mode. This mode is disabled by default, and not supported on multihop setups. |
|
|
| Mark session as passive. A passive session does not attempt to start the connection and waits for control packets from peers before it begins replying. |
|
|
| For multihop sessions only. Configures the minimum expected TTL for an incoming BFD control packet. |
|
|
| Limits the nodes that attempt to apply this configuration. If specified, only those nodes whose labels match the specified selectors attempt to apply the configuration. If it is not specified, all nodes attempt to apply this configuration. |
|
|
| Defines the observed state of FRRConfiguration. |
5.2. Enabling BGP routing
As a cluster administrator, you can enable OVN-Kubernetes Border Gateway Protocol (BGP) routing support for your cluster.
5.2.1. Enabling Border Gateway Protocol (BGP) routing
As a cluster administrator, you can enable Border Gateway Protocol (BGP) routing support for your cluster on bare-metal infrastructure.
If you are using BGP routing in conjunction with the MetalLB Operator, the necessary BGP routing support is enabled automatically. You do not need to manually enable BGP routing support.
5.2.1.1. Enabling BGP routing support
As a cluster administrator, you can enable BGP routing support for your cluster.
Prerequisites
- You have installed the OpenShift CLI (oc).
- You are logged in to the cluster as a user with the cluster-admin role.
- The cluster is installed on compatible infrastructure.
Procedure
To enable a dynamic routing provider, enter the following command:
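Based on the spec.additionalRoutingCapabilities field described in "About BGP routing", a sketch of the patch that enables the FRR provider looks like the following:

$ oc patch Network.operator.openshift.io/cluster --type=merge -p '{ "spec": { "additionalRoutingCapabilities": { "providers": ["FRR"] } } }'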
5.3. Disabling BGP routing
As a cluster administrator, you can disable OVN-Kubernetes Border Gateway Protocol (BGP) routing support for your cluster.
5.3.1. Disabling Border Gateway Protocol (BGP) routing
As a cluster administrator, you can disable Border Gateway Protocol (BGP) routing support for your cluster on bare-metal infrastructure.
5.3.1.1. Disabling BGP routing support
As a cluster administrator, you can disable BGP routing support for your cluster.
Prerequisites
- You have installed the OpenShift CLI (oc).
- You are logged in to the cluster as a user with the cluster-admin role.
- The cluster is installed on compatible infrastructure.
Procedure
To disable dynamic routing, enter the following command:
$ oc patch Network.operator.openshift.io/cluster --type=merge -p '{ "spec": { "additionalRoutingCapabilities": null } }'
5.4. Migrating FRR-K8s resources
All user-created FRR-K8s custom resources (CRs) in the metallb-system namespace under OpenShift Container Platform 4.17 and earlier releases must be migrated to the openshift-frr-k8s namespace. As a cluster administrator, complete the steps in this procedure to migrate your FRR-K8s custom resources.
5.4.1. Migrating FRR-K8s resources
You can migrate the FRR-K8s FRRConfiguration custom resources from the metallb-system namespace to the openshift-frr-k8s namespace.
Prerequisites
- You have installed the OpenShift CLI (oc).
- You are logged in to the cluster as a user with the cluster-admin role.
Procedure
When upgrading from an earlier version of OpenShift Container Platform with the MetalLB Operator deployed, you must manually migrate your custom FRRConfiguration configurations from the metallb-system namespace to the openshift-frr-k8s namespace. To move these CRs, enter the following commands:

- To create the openshift-frr-k8s namespace, enter the following command:

  $ oc create namespace openshift-frr-k8s

- To automate the migration, create a shell script named migrate.sh that copies each FRRConfiguration CR into the new namespace. A minimal sketch of such a script is shown after this procedure.
- To execute the migration, run the following command:

  $ bash migrate.sh
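A minimal sketch of what migrate.sh might contain, assuming the jq tool is available to strip server-managed metadata before re-creating each CR in the new namespace:

#!/bin/bash
# Copy each FRRConfiguration CR from metallb-system to openshift-frr-k8s.
for name in $(oc get frrconfigurations.frrk8s.metallb.io -n metallb-system -o name); do
  oc get "${name}" -n metallb-system -o json |
    jq 'del(.metadata.resourceVersion, .metadata.uid, .metadata.creationTimestamp) | .metadata.namespace = "openshift-frr-k8s"' |
    oc create -f -
done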
Verification
To confirm that the migration succeeded, run the following command:
$ oc get frrconfigurations.frrk8s.metallb.io -n openshift-frr-k8s
After the migration is complete, you can remove the FRRConfiguration custom resources from the metallb-system namespace.
Chapter 6. Route advertisements
6.1. About route advertisements
This feature provides route advertisement capabilities for the OVN-Kubernetes network plugin. A Border Gateway Protocol (BGP) provider is required. For more information, see About BGP routing.
6.1.1. Advertise cluster network routes with Border Gateway Protocol
With route advertisements enabled, the OVN-Kubernetes network plugin supports advertising network routes for the default pod network and cluster user-defined (CUDN) networks to the provider network, including EgressIPs, and importing routes from the provider network to the default pod network and CUDNs. From the provider network, IP addresses advertised from the default pod network and CUDNs can be reached directly.
For example, you can import routes to the default pod network so that you no longer need to manually configure routes on each node. Previously, you might have set the routingViaHost parameter to true in the Network custom resource (CR) for your cluster and then manually configured routes on each node to approximate a similar configuration. With route advertisements enabled, you can accomplish the same result with the routingViaHost parameter set to false, without manually configuring routes on each node.
Route reflectors on the provider network are supported and can reduce the number of BGP connections required to advertise routes on large networks.
If you use EgressIPs with route advertisements enabled, the layer 3 provider network is aware of EgressIP failovers. This means that you can locate cluster nodes that host EgressIPs on different layer 2 segments. Previously, only the layer 2 provider network was aware of failovers, which required all egress nodes to be on the same layer 2 segment.
6.1.1.1. Supported platforms
Advertising routes with border gateway protocol (BGP) is supported on the bare-metal infrastructure type.
6.1.1.2. Infrastructure requirements
To use route advertisements, you must have configured BGP for your network infrastructure. Outages or misconfigurations of your network infrastructure might cause disruptions to your cluster network.
6.1.1.3. Compatibility with other networking features
Route advertisements support the following OpenShift Container Platform Networking features:
- Multiple external gateways (MEG)
- MEG is not supported with this feature.
- EgressIPs
Supports the use and advertisement of EgressIPs. The node where an egress IP address resides advertises the EgressIP. An egress IP address must be on the same layer 2 network subnet as the egress node. The following limitations apply:
- Advertising EgressIPs from a cluster user-defined network (CUDN) operating in layer 2 mode is not supported.
- Advertising EgressIPs for a network that has both egress IP addresses assigned to the primary network interface and egress IP addresses assigned to additional network interfaces is impractical. All EgressIPs are advertised on all of the BGP sessions of the selected FRRConfiguration instances, regardless of whether these sessions are established over the same interface that the EgressIP is assigned to or not, potentially leading to unwanted advertisements.
- Services
- Works with the MetalLB Operator to advertise services to the provider network.
- Egress service
- Full support.
- Egress firewall
- Full support.
- Egress QoS
- Full support.
- Network policies
- Full support.
- Direct pod ingress
- Full support for the default cluster network and cluster user-defined (CUDN) networks.
6.1.1.4. Considerations for use with the MetalLB Operator
The MetalLB Operator is installed as an add-on to the cluster. Deployment of the MetalLB Operator automatically enables FRR-K8s as an additional routing capability provider. This feature and the MetalLB Operator use the same FRR-K8s deployment.
6.1.1.5. Considerations for naming cluster user-defined networks (CUDNs)
When referencing a VRF device in a FRRConfiguration CR, the VRF name is the same as the CUDN name for VRF names that are less than or equal to 15 characters. It is recommended to use a VRF name no longer than 15 characters so that the VRF name can be inferred from the CUDN name.
6.1.1.6. BGP routing custom resources
The following custom resources (CRs) are used to configure route advertisements with BGP:
- RouteAdvertisements: This CR defines the advertisements for BGP routing. From this CR, the OVN-Kubernetes controller generates a FRRConfiguration object that configures the FRR daemon to advertise cluster network routes. This CR is cluster scoped.
- FRRConfiguration: This CR is used to define BGP peers and to configure route imports from the provider network into the cluster network. Before you apply RouteAdvertisements objects, at least one FRRConfiguration object must be defined to configure the BGP peers. This CR is namespaced.
6.1.1.7. OVN-Kubernetes controller generation of FRRConfiguration objects
An FRRConfiguration object is generated for each network and node selected by a RouteAdvertisements CR with the appropriate advertised prefixes that apply to each node. The OVN-Kubernetes controller checks whether the RouteAdvertisements-CR-selected nodes are a subset of the nodes that are selected by the RouteAdvertisements-CR-selected FRR configurations.
Any filtering or selection of prefixes to receive is not considered in FRRConfiguration objects that are generated from RouteAdvertisements CRs. Configure any prefixes to receive in other FRRConfiguration objects. OVN-Kubernetes imports routes from the VRF into the appropriate network.
6.1.1.8. Cluster Network Operator configuration
The Cluster Network Operator (CNO) API exposes several fields to configure route advertisements:
- spec.additionalRoutingCapabilities.providers: Specifies an additional routing provider, which is required to advertise routes. The only supported value is FRR, which enables deployment of the FRR-K8S daemon for the cluster. When enabled, the FRR-K8S daemon is deployed on all nodes.
- spec.defaultNetwork.ovnKubernetesConfig.routeAdvertisements: Enables route advertisements for the default cluster network and CUDN networks. The spec.additionalRoutingCapabilities field must be set to FRR to enable this feature.
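The following snippet is a minimal sketch of where these fields sit in the Network operator configuration. It assumes the cluster-scoped network.operator.openshift.io CR named cluster and the Enabled value for routeAdvertisements:
apiVersion: operator.openshift.io/v1
kind: Network
metadata:
  name: cluster
spec:
  additionalRoutingCapabilities:
    providers:
    - FRR                            # deploys the FRR-K8S daemon on all nodes
  defaultNetwork:
    ovnKubernetesConfig:
      routeAdvertisements: Enabled   # requires the FRR provider above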
6.1.2. RouteAdvertisements object configuration
You can define a RouteAdvertisements object, which is cluster scoped, with the following properties.
The fields for the RouteAdvertisements custom resource (CR) are described in the following table:
| Field | Type | Description |
|---|---|---|
|
|
|
Specifies the name of the |
|
|
|
Specifies an array that can contain a list of different types of networks to advertise. Supports only the |
|
|
|
Determines which |
|
|
| Specifies which networks to advertise among default cluster network and cluster user defined networks (CUDNs). |
|
|
|
Limits the advertisements to selected nodes. When |
|
|
|
Determines which router to advertise the routes in. Routes are advertised on the routers associated with this virtual routing and forwarding (VRF) target, as specified on the selected |
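As an illustrative sketch only, a minimal RouteAdvertisements manifest that combines these fields might look like the following. The API version and selector shapes are assumptions rather than a definitive schema:
apiVersion: k8s.ovn.org/v1
kind: RouteAdvertisements
metadata:
  name: default                   # cluster-scoped resource
spec:
  advertisements:
  - PodNetwork                    # advertise the pod subnets
  networkSelectors:
  - networkSelectionType: DefaultNetwork
  nodeSelector: {}                # empty selector: advertise from all nodes
  frrConfigurationSelector: {}    # empty selector: use all selected FRRConfiguration objects
  # targetVRF omitted: routes are advertised over the default VRF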
6.1.3. Examples of advertising pod IP addresses with BGP
The following examples describe several configurations for advertising pod IP addresses and EgressIPs with Border Gateway Protocol (BGP). The external network border router has the 172.18.0.5 IP address. These configurations assume that you have configured an external route reflector that can relay routes to all nodes on the cluster network.
6.1.3.1. Advertising the default cluster network
In this scenario, the default cluster network is exposed to the external network so that pod IP addresses and EgressIPs are advertised to the provider network.
This scenario relies upon the following FRRConfiguration object:
FRRConfiguration CR
When the OVN-Kubernetes controller sees this RouteAdvertisements CR, it generates further FRRConfiguration objects based on the selected ones that configure the FRR daemon to advertise the routes for the default cluster network.
An example of a FRRConfiguration CR generated by OVN-Kubernetes
In the example generated FRRConfiguration object, <default_network_host_subnet> is the subnet of the default cluster network that is advertised to the provider network.
6.1.3.2. Advertising pod IPs from a cluster user-defined network over BGP
In this scenario, the blue cluster user-defined network (CUDN) is exposed to the external network so that the network’s pod IP addresses and EgressIPs are advertised to the provider network.
This scenario relies upon the following FRRConfiguration object:
FRRConfiguration CR
With this FRRConfiguration object, routes are imported from the 172.18.0.5 neighbor into the default VRF and are available to the default cluster network.
The CUDNs are advertised over the default VRF as illustrated in the following diagram:
- Red CUDN
  - A VRF named red associated with a CUDN named red
  - A subnet of 10.0.0.0/24
- Blue CUDN
  - A VRF named blue associated with a CUDN named blue
  - A subnet of 10.0.1.0/24
In this configuration, two separate CUDNs are defined. The red network covers the 10.0.0.0/24 subnet and the blue network covers the 10.0.1.0/24 subnet. The red and blue networks are labeled as export: true.
The following RouteAdvertisements CR describes the configuration for the red and blue tenants:
RouteAdvertisements CR for the red and blue tenants
When the OVN-Kubernetes controller sees this RouteAdvertisements CR, it generates further FRRConfiguration objects based on the selected ones that configure the FRR daemon to advertise the routes. The following example is of one such configuration object, with the number of FRRConfiguration objects created depending on the node and networks selected.
An example of a FRRConfiguration CR generated by OVN-Kubernetes
The generated FRRConfiguration object configures the subnet 10.0.1.0/24, which belongs to network blue, to be imported into the default VRF and advertised to the 172.18.0.5 neighbor. An FRRConfiguration object is generated for each network and node selected by a RouteAdvertisements CR with the appropriate prefixes that apply to each node.
When the targetVRF field is omitted, the routes are leaked and advertised over the default VRF. Additionally, routes that were imported to the default VRF after the definition of the initial FRRConfiguration object are also imported into the blue VRF.
6.1.3.3. Advertising pod IPs from a cluster user-defined network over BGP with VPN
In this scenario, a VLAN interface is attached to the VRF device associated with the blue network. This setup provides a VRF-lite design, where FRR-K8S is used to advertise the blue network only over the corresponding BGP session on the blue network VRF/VLAN link to the next-hop Provider Edge (PE) router. The red tenant uses the same configuration. The blue and red networks are labeled as export: true.
This scenario does not support the use of EgressIPs.
The following diagram illustrates this configuration:
- Red CUDN
  - A VRF named red associated with a CUDN named red
  - A VLAN interface attached to the VRF device and connected to the external PE router
  - An assigned subnet of 10.0.2.0/24
- Blue CUDN
  - A VRF named blue associated with a CUDN named blue
  - A VLAN interface attached to the VRF device and connected to the external PE router
  - An assigned subnet of 10.0.1.0/24
This approach is available only when you set routingViaHost=true in the ovnKubernetesConfig.gatewayConfig specification of the OVN-Kubernetes network plugin.
In the following configuration, an additional FRRConfiguration CR configures peering with the PE router on the blue and red VLANs:
FRRConfiguration CR manually configured for BGP VPN setup
The following RouteAdvertisements CR describes the configuration for the blue and red tenants:
RouteAdvertisements CR for the blue and red tenants
In the RouteAdvertisements CR, the targetVRF is set to auto so that advertisements occur within the VRF device that corresponds to the individual networks that are selected. In this scenario, the pod subnet for blue is advertised over the blue VRF device, and the pod subnet for red is advertised over the red VRF device. Additionally, each BGP session imports routes to only the corresponding CUDN VRF as defined by the initial FRRConfiguration object.
When the OVN-Kubernetes controller sees this RouteAdvertisements CR, it generates further FRRConfiguration objects based on the selected ones that configure the FRR daemon to advertise the routes for the blue and red tenants.
FRRConfiguration CR generated by OVN-Kubernetes for blue and red tenants
In this scenario, any filtering or selection of routes to receive must be done in the FRRConfiguration CR that defines peering relationships.
6.2. Enabling route advertisements
As a cluster administrator, you can configure additional route advertisements for your cluster. You must use the OVN-Kubernetes network plugin.
6.2.1. Enabling route advertisements
As a cluster administrator, you can enable additional routing support for your cluster.
Prerequisites
- You have installed the OpenShift CLI (oc).
- You are logged in to the cluster as a user with the cluster-admin role.
- The cluster is installed on compatible infrastructure.
Procedure
To enable a routing provider and additional route advertisements, enter the following command:
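The patch itself is not reproduced here. A sketch of an equivalent merge patch, based on the CNO fields described in "Cluster Network Operator configuration" and assuming the Enabled value, might look like this:
$ oc patch network.operator.openshift.io cluster --type=merge -p '{"spec":{"additionalRoutingCapabilities":{"providers":["FRR"]},"defaultNetwork":{"ovnKubernetesConfig":{"routeAdvertisements":"Enabled"}}}}'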
6.3. Disabling route advertisements
As a cluster administrator, you can disable additional route advertisements for your cluster.
6.3.1. Disabling route advertisements
As a cluster administrator, you can disable additional route advertisements for your cluster.
Prerequisites
- You have installed the OpenShift CLI (oc).
- You are logged in to the cluster as a user with the cluster-admin role.
- The cluster is installed on compatible infrastructure.
Procedure
To disable additional routing support, enter the following command:
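The command is not reproduced here. A sketch of an equivalent merge patch, assuming the Disabled value for routeAdvertisements, might look like this:
$ oc patch network.operator.openshift.io cluster --type=merge -p '{"spec":{"defaultNetwork":{"ovnKubernetesConfig":{"routeAdvertisements":"Disabled"}}}}'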
6.4. Example route advertisements setup
As a cluster administrator, you can configure the following example route advertisements setup for your cluster. This configuration is intended as a sample that demonstrates how to configure route advertisements.
6.4.1. Sample route advertisements setup
As a cluster administrator, you can enable Border Gateway Protocol (BGP) routing support for your cluster. This configuration is intended as a sample that demonstrates how to configure route advertisements. The configuration uses route reflection rather than a full mesh setup.
BGP routing is supported only on bare-metal infrastructure.
Prerequisites
- You installed the OpenShift CLI (oc).
- You are logged in to the cluster as a user with cluster-admin privileges.
- The cluster is installed on bare-metal infrastructure.
- You have a bare-metal system with access to the cluster where you plan to run the FRR daemon container.
Procedure
Confirm that the RouteAdvertisements feature gate is enabled by running the following command:
$ oc get featuregate -oyaml | grep -i routeadvertisement
Example output
- name: RouteAdvertisements
Configure the Cluster Network Operator (CNO) by running the following command:
It might take a few minutes for the CNO to restart all nodes.
Get the IP addresses of the nodes by running the following command:
$ oc get node -owide
Get the default pod network of each node by running the following command:
$ oc get node <node_name> -o=jsonpath={.metadata.annotations.k8s\\.ovn\\.org/node-subnets}
Example output
{"default":["10.129.0.0/23"],"ns1.udn-network-primary-layer3":["10.150.6.0/24"]}
On the bare-metal hypervisor, get the IP address for the external FRR container to use by running the following command:
$ ip -j -d route get <a cluster node's IP> | jq -r '.[] | .dev' | xargs ip -d -j address show | jq -r '.[] | .addr_info[0].local'
Create a frr.conf file for FRR that includes each node's IP address, as shown in the following example:
Example frr.conf configuration file
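The file contents are not reproduced here. A minimal sketch that peers with three hypothetical node IP addresses as route-reflector clients, with an assumed AS number of 64512, might look like the following:
router bgp 64512
 ! one neighbor statement per cluster node IP address
 neighbor 192.168.111.20 remote-as 64512
 neighbor 192.168.111.21 remote-as 64512
 neighbor 192.168.111.22 remote-as 64512
 address-family ipv4 unicast
  neighbor 192.168.111.20 route-reflector-client
  neighbor 192.168.111.21 route-reflector-client
  neighbor 192.168.111.22 route-reflector-client
 exit-address-family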
Create a file named daemons that includes the following content:
Example daemons configuration file
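The contents are not reproduced here. A minimal sketch of the standard FRR daemons file that enables only the BGP daemon might look like the following:
# enable only the BGP daemon; zebra and the management daemons start by default
bgpd=yes
ospfd=no
ospf6d=no
ripd=no
ripngd=no
isisd=no
pimd=no
ldpd=no
nhrpd=no
eigrpd=no
babeld=no
sharpd=no
pbrd=no
bfdd=no
fabricd=no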
Save both the frr.conf and daemons files in the same directory, such as /tmp/frr.
Create an external FRR container by running the following command:
$ sudo podman run -d --privileged --network host --rm --ulimit core=-1 --name frr --volume /tmp/frr:/etc/frr quay.io/frrouting/frr:9.1.0
Create the following FRRConfiguration and RouteAdvertisements configurations:
Create a receive_all.yaml file that includes the following content:
Example receive_all.yaml configuration file
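The contents are not reproduced here. A sketch of an FRRConfiguration object that peers with the external FRR container and receives all of its routes might look like the following, where the namespace, AS number, and neighbor address are assumptions:
apiVersion: frrk8s.metallb.io/v1beta1
kind: FRRConfiguration
metadata:
  name: receive-all
  namespace: openshift-frr-k8s
spec:
  bgp:
    routers:
    - asn: 64512
      neighbors:
      - address: 192.168.111.1      # IP address of the external FRR container
        asn: 64512
        toReceive:
          allowed:
            mode: all               # import every route the neighbor advertises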
Create a ra.yaml file that includes the following content:
Example ra.yaml configuration file
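The contents are not reproduced here. A sketch of a RouteAdvertisements object that advertises the default cluster network pod subnets might look like the following; the field values are assumptions:
apiVersion: k8s.ovn.org/v1
kind: RouteAdvertisements
metadata:
  name: default
spec:
  networkSelectors:
  - networkSelectionType: DefaultNetwork
  advertisements:
  - PodNetwork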
Apply the receive_all.yaml and ra.yaml files by running the following command:
$ for f in receive_all.yaml ra.yaml; do oc apply -f $f; done
Verification
Verify that the configurations were applied:
Verify that the FRRConfiguration configurations were created by running the following command:
$ oc get frrconfiguration -A
Verify that the RouteAdvertisements configurations were created by running the following command:
$ oc get ra -A
Example output
NAME      STATUS
default   Accepted
Get the external FRR container ID by running the following command:
$ sudo podman ps | grep frr
Example output
22cfc713890e quay.io/frrouting/frr:9.1.0 /usr/lib/frr/dock... 5 hours ago Up 5 hours ago frr
Use the container ID that you obtained in the previous step to check the BGP neighbor and routes in the external FRR container's vtysh session. Run the following command:
$ sudo podman exec -it <container_id> vtysh -c "show ip bgp"
Find the frr-k8s pod for each cluster node by running the following command:
$ oc -n openshift-frr-k8s get pod -owide
From the OpenShift Container Platform cluster, check BGP routes on the cluster node's frr-k8s pod in the FRR container by running the following command:
$ oc -n openshift-frr-k8s -c frr rsh frr-k8s-86wmq
Check the IP routes from the cluster node by running the following command:
sh-5.1# vtysh
Example output
Hello, this is FRRouting (version 8.5.3). Copyright 1996-2005 Kunihiro Ishiguro, et al.
Check the IP routes by running the following command:
worker-2# show ip bgp
From the OpenShift Container Platform cluster, debug the node by running the following command:
$ oc debug node/<node_name>
Example output
Temporary namespace openshift-debug-lbtgh is created for debugging node... Starting pod/worker-2-debug-zrg4v ... To use host binaries, run `chroot /host` Pod IP: 192.168.111.25 If you don't see a command prompt, try pressing enter.
Confirm that the BGP routes are being advertised by running the following command:
sh-5.1# ip route show | grep bgp
Chapter 7. Using PTP hardware
7.1. About PTP in OpenShift cluster nodes
Precision Time Protocol (PTP) is used to synchronize clocks in a network. When used in conjunction with hardware support, PTP is capable of sub-microsecond accuracy, and is more accurate than Network Time Protocol (NTP).
If your openshift-sdn cluster with PTP uses the User Datagram Protocol (UDP) for hardware time stamping and you migrate to the OVN-Kubernetes plugin, the hardware time stamping cannot be applied to primary interface devices, such as an Open vSwitch (OVS) bridge. As a result, UDP version 4 configurations cannot work with a br-ex interface.
You can configure linuxptp services and use PTP-capable hardware in OpenShift Container Platform cluster nodes.
Use the OpenShift Container Platform web console or OpenShift CLI (oc) to install PTP by deploying the PTP Operator. The PTP Operator creates and manages the linuxptp services and provides the following features:
- Discovery of the PTP-capable devices in the cluster.
- Management of the configuration of linuxptp services.
- Notification of PTP clock events that negatively affect the performance and reliability of your application with the PTP Operator cloud-event-proxy sidecar.
The PTP Operator works with PTP-capable devices on clusters provisioned only on bare-metal infrastructure.
7.1.1. Elements of a PTP domain
PTP is used to synchronize multiple nodes connected in a network, with clocks for each node. The clocks synchronized by PTP are organized in a leader-follower hierarchy. The hierarchy is created and updated automatically by the best master clock (BMC) algorithm, which runs on every clock. Follower clocks are synchronized to leader clocks, and follower clocks can themselves be the source for other downstream clocks.
Figure 7.1. PTP nodes in the network
The three primary types of PTP clocks are described below.
- Grandmaster clock
- The grandmaster clock provides standard time information to other clocks across the network and ensures accurate and stable synchronization. It writes time stamps and responds to time requests from other clocks. Grandmaster clocks synchronize to a Global Navigation Satellite System (GNSS) time source. The grandmaster clock is the authoritative source of time in the network and is responsible for providing time synchronization to all other devices.
- Boundary clock
- The boundary clock has ports in two or more communication paths and can be a source and a destination to other destination clocks at the same time. The boundary clock works as a destination clock upstream. The destination clock receives the timing message, adjusts for delay, and then creates a new source time signal to pass down the network. The boundary clock produces a new timing packet that is still correctly synced with the source clock and can reduce the number of connected devices reporting directly to the source clock.
- Ordinary clock
- The ordinary clock has a single port connection that can play the role of source or destination clock, depending on its position in the network. The ordinary clock can read and write timestamps.
7.1.1.1. Advantages of PTP over NTP
One of the main advantages that PTP has over NTP is the hardware support present in various network interface controllers (NIC) and network switches. The specialized hardware allows PTP to account for delays in message transfer and improves the accuracy of time synchronization. To achieve the best possible accuracy, it is recommended that all networking components between PTP clocks are PTP hardware enabled.
Hardware-based PTP provides optimal accuracy, since the NIC can timestamp the PTP packets at the exact moment they are sent and received. Compare this to software-based PTP, which requires additional processing of the PTP packets by the operating system.
Before enabling PTP, ensure that NTP is disabled for the required nodes. You can disable the chrony time service (chronyd) using a MachineConfig custom resource. For more information, see Disabling chrony time service.
7.1.2. Overview of linuxptp and gpsd in OpenShift Container Platform nodes
OpenShift Container Platform uses the PTP Operator with linuxptp and gpsd packages for high precision network synchronization. The linuxptp package provides tools and daemons for PTP timing in networks. Cluster hosts with Global Navigation Satellite System (GNSS) capable NICs use gpsd to interface with GNSS clock sources.
The linuxptp package includes the ts2phc, pmc, ptp4l, and phc2sys programs for system clock synchronization.
- ts2phc
ts2phc synchronizes the PTP hardware clock (PHC) across PTP devices with a high degree of precision. ts2phc is used in grandmaster clock configurations. It receives the precision timing signal from a high-precision clock source, such as a Global Navigation Satellite System (GNSS). GNSS provides an accurate and reliable source of synchronized time for use in large distributed networks. GNSS clocks typically provide time information with a precision of a few nanoseconds.
The ts2phc system daemon sends timing information from the grandmaster clock to other PTP devices in the network by reading time information from the grandmaster clock and converting it to PHC format. PHC time is used by other devices in the network to synchronize their clocks with the grandmaster clock.
- pmc
pmc implements a PTP management client according to IEEE standard 1588. pmc provides basic management access for the ptp4l system daemon. pmc reads from standard input and sends the output over the selected transport, printing any replies it receives.
- ptp4l
ptp4l implements the PTP boundary clock and ordinary clock and runs as a system daemon. ptp4l does the following:
- Synchronizes the PHC to the source clock with hardware time stamping
- Synchronizes the system clock to the source clock with software time stamping
- phc2sys
phc2sys synchronizes the system clock to the PHC on the network interface controller (NIC). The phc2sys system daemon continuously monitors the PHC for timing information. When it detects a timing error, phc2sys corrects the system clock.
The gpsd package includes the ubxtool, gpspipe, and gpsd programs for GNSS clock synchronization with the host clock.
- ubxtool
The ubxtool CLI allows you to communicate with a u-blox GPS system. The ubxtool CLI uses the u-blox binary protocol to communicate with the GPS.
- gpspipe
gpspipe connects to gpsd output and pipes it to stdout.
- gpsd
gpsd is a service daemon that monitors one or more GPS or AIS receivers connected to the host.
7.1.3. Overview of GNSS timing for PTP grandmaster clocks
OpenShift Container Platform supports receiving precision PTP timing from Global Navigation Satellite System (GNSS) sources and grandmaster clocks (T-GM) in the cluster.
OpenShift Container Platform supports PTP timing from GNSS sources with Intel E810 Westport Channel NICs only.
Figure 7.2. Overview of Synchronization with GNSS and T-GM
- Global Navigation Satellite System (GNSS)
GNSS is a satellite-based system used to provide positioning, navigation, and timing information to receivers around the globe. In PTP, GNSS receivers are often used as a highly accurate and stable reference clock source. These receivers receive signals from multiple GNSS satellites, allowing them to calculate precise time information. The timing information obtained from GNSS is used as a reference by the PTP grandmaster clock.
By using GNSS as a reference, the grandmaster clock in the PTP network can provide highly accurate timestamps to other devices, enabling precise synchronization across the entire network.
- Digital Phase-Locked Loop (DPLL)
- DPLL provides clock synchronization between different PTP nodes in the network. DPLL compares the phase of the local system clock signal with the phase of the incoming synchronization signal, for example, PTP messages from the PTP grandmaster clock. The DPLL continuously adjusts the local clock frequency and phase to minimize the phase difference between the local clock and the reference clock.
7.1.3.1. Handling leap second events in GNSS-synced PTP grandmaster clocks
A leap second is a one-second adjustment that is occasionally applied to Coordinated Universal Time (UTC) to keep it synchronized with International Atomic Time (TAI). UTC leap seconds are unpredictable. Internationally agreed leap seconds are listed in leap-seconds.list. This file is regularly updated by the International Earth Rotation and Reference Systems Service (IERS). An unhandled leap second can have a significant impact on far edge RAN networks. It can cause the far edge RAN application to immediately disconnect voice calls and data sessions.
7.1.4. About PTP and clock synchronization error events
Cloud native applications such as virtual RAN (vRAN) require access to notifications about hardware timing events that are critical to the functioning of the overall network. PTP clock synchronization errors can negatively affect the performance and reliability of your low-latency application, for example, a vRAN application running in a distributed unit (DU).
Loss of PTP synchronization is a critical error for a RAN network. If synchronization is lost on a node, the radio might be shut down and the network Over the Air (OTA) traffic might be shifted to another node in the wireless network. Fast event notifications mitigate against workload errors by allowing cluster nodes to communicate PTP clock sync status to the vRAN application running in the DU.
Event notifications are available to vRAN applications running on the same DU node. A publish/subscribe REST API passes events notifications to the messaging bus. Publish/subscribe messaging, or pub-sub messaging, is an asynchronous service-to-service communication architecture where any message published to a topic is immediately received by all of the subscribers to the topic.
The PTP Operator generates fast event notifications for every PTP-capable network interface. The consumer application can subscribe to PTP events by using the PTP events REST API v2.
PTP fast event notifications are available for network interfaces configured to use PTP ordinary clocks, PTP grandmaster clocks, or PTP boundary clocks.
7.1.5. 2-card E810 NIC configuration reference
OpenShift Container Platform supports single and dual-NIC Intel E810 hardware for PTP timing in grandmaster clocks (T-GM) and boundary clocks (T-BC).
- Dual NIC grandmaster clock
You can use a cluster host that has dual-NIC hardware as PTP grandmaster clock. One NIC receives timing information from the global navigation satellite system (GNSS). The second NIC receives the timing information from the first using the SMA1 Tx/Rx connections on the E810 NIC faceplate. The system clock on the cluster host is synchronized from the NIC that is connected to the GNSS satellite.
Dual NIC grandmaster clocks are a feature of distributed RAN (D-RAN) configurations where the Remote Radio Unit (RRU) and Baseband Unit (BBU) are located at the same radio cell site. D-RAN distributes radio functions across multiple sites, with backhaul connections linking them to the core network.
Figure 7.3. Dual NIC grandmaster clock
Note
In a dual-NIC T-GM configuration, a single ts2phc program operates on two PTP hardware clocks (PHCs), one for each NIC.
- Dual NIC boundary clock
For 5G telco networks that deliver mid-band spectrum coverage, each virtual distributed unit (vDU) requires connections to 6 radio units (RUs). To make these connections, each vDU host requires 2 NICs configured as boundary clocks.
Dual NIC hardware allows you to connect each NIC to the same upstream leader clock with separate ptp4l instances for each NIC feeding the downstream clocks.
- Highly available system clock with dual-NIC boundary clocks
You can configure Intel E810-XXVDA4 Salem channel dual-NIC hardware as dual PTP boundary clocks that provide timing for a highly available system clock. This configuration is useful when you have multiple time sources on different NICs. High availability ensures that the node does not lose timing synchronization if one of the two timing sources is lost or disconnected.
Each NIC is connected to the same upstream leader clock. Highly available boundary clocks use multiple PTP domains to synchronize with the target system clock. When a T-BC is highly available, the host system clock can maintain the correct offset even if one or more ptp4l instances syncing the NIC PHC clock fail. If any single SFP port or cable failure occurs, the boundary clock stays in sync with the leader clock.
Boundary clock leader source selection is done using the A-BMCA algorithm. For more information, see ITU-T recommendation G.8275.1.
7.1.6. Using dual-port NICs to improve redundancy for PTP ordinary clocks
OpenShift Container Platform supports single-port networking interface cards (NICs) as ordinary clocks for PTP timing. To improve redundancy, you can configure a dual-port NIC with one port as active and the other as standby.
Configuring linuxptp services as an ordinary clock with dual-port NIC redundancy is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.
In this configuration, the ports in a dual-port NIC operate as follows:
- The active port functions as an ordinary clock in the Following port state.
- The standby port remains in the Listening port state.
- If the active port fails, the standby port transitions to active to ensure continued PTP timing synchronization.
- If both ports become faulty, the clock state moves to the HOLDOVER state, then the FREERUN state when the holdover timeout expires, before resyncing to a leader clock.
7.1.6.1. Hardware requirements
You can configure PTP ordinary clocks with added redundancy on x86_64 or AArch64 architecture nodes.
For x86_64 architecture nodes, the nodes must feature dual-port NICs that support PTP and expose a single PTP hardware clock (PHC) per NIC, such as the Intel E810.
For AArch64 architecture nodes, you can use the following dual-port NICs only:
- NVIDIA ConnectX-7 series
- NVIDIA BlueField-3 series, in NIC mode
- You must configure the NVIDIA BlueField-3 series DPU in NIC mode before configuring the interface as an ordinary clock with improved redundancy. For further information about configuring NIC mode, see NIC Mode for BlueField-3 (NVIDIA documentation), BlueField Management (NVIDIA documentation), and Configuring NIC Mode on BlueField-3 from Host BIOS HII UEFI Menu (NVIDIA documentation).
- You must restart the card after changing to NIC mode. For more information about restarting the card, see NVIDIA BlueField Reset and Reboot Procedures (NVIDIA documentation).
- Use the latest supported NVIDIA drivers and firmware to ensure proper PTP support and to expose a single PHC per NIC.
7.1.7. 3-card Intel E810 PTP grandmaster clock
OpenShift Container Platform supports cluster hosts with 3 Intel E810 NICs as PTP grandmaster clocks (T-GM).
- 3-card grandmaster clock
You can use a cluster host that has 3 NICs as PTP grandmaster clock. One NIC receives timing information from the global navigation satellite system (GNSS). The second and third NICs receive the timing information from the first by using the SMA1 Tx/Rx connections on the E810 NIC faceplate. The system clock on the cluster host is synchronized from the NIC that is connected to the GNSS satellite.
3-card NIC grandmaster clocks can be used for distributed RAN (D-RAN) configurations where the Radio Unit (RU) is connected directly to the distributed unit (DU) without a front haul switch, for example, if the RU and DU are located in the same radio cell site. D-RAN distributes radio functions across multiple sites, with backhaul connections linking them to the core network.
Figure 7.4. 3-card Intel E810 PTP grandmaster clock
Note
In a 3-card T-GM configuration, a single ts2phc process reports as 3 ts2phc instances in the system.
7.2. Configuring PTP devices
The PTP Operator adds the NodePtpDevice.ptp.openshift.io custom resource definition (CRD) to OpenShift Container Platform.
When installed, the PTP Operator searches your cluster for Precision Time Protocol (PTP) capable network devices on each node. The Operator creates and updates a NodePtpDevice custom resource (CR) object for each node that provides a compatible PTP-capable network device.
Network interface controller (NIC) hardware with built-in PTP capabilities sometimes requires a device-specific configuration. You can use hardware-specific NIC features for supported hardware with the PTP Operator by configuring a plugin in the PtpConfig custom resource (CR). The linuxptp-daemon service uses the named parameters in the plugin stanza to start linuxptp processes, ptp4l and phc2sys, based on the specific hardware configuration.
In OpenShift Container Platform 4.20, the Intel E810 NIC is supported with a PtpConfig plugin.
7.2.1. Installing the PTP Operator using the CLI
As a cluster administrator, you can install the Operator by using the CLI.
Prerequisites
- A cluster installed on bare-metal hardware with nodes that have hardware that supports PTP.
- Install the OpenShift CLI (oc).
- Log in as a user with cluster-admin privileges.
Procedure
Create a namespace for the PTP Operator.
Save the following YAML in the ptp-namespace.yaml file:
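The YAML is not reproduced here. A minimal sketch of the Namespace object might look like the following, where the annotation and labels beyond the name are assumptions:
apiVersion: v1
kind: Namespace
metadata:
  name: openshift-ptp
  annotations:
    workload.openshift.io/allowed: management
  labels:
    name: openshift-ptp
    openshift.io/cluster-monitoring: "true"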
Create the Namespace CR:
$ oc create -f ptp-namespace.yaml
Create an Operator group for the PTP Operator.
Save the following YAML in the ptp-operatorgroup.yaml file:
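The YAML is not reproduced here. A sketch of the OperatorGroup object, with an assumed name, might look like the following:
apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: ptp-operators
  namespace: openshift-ptp
spec:
  targetNamespaces:
  - openshift-ptp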
Create the OperatorGroup CR:
$ oc create -f ptp-operatorgroup.yaml
Subscribe to the PTP Operator.
Save the following YAML in the ptp-sub.yaml file:
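The YAML is not reproduced here. A sketch of the Subscription object might look like the following, where the channel, source, and subscription name are assumptions that you adjust for your catalog:
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: ptp-operator-subscription
  namespace: openshift-ptp
spec:
  channel: "stable"
  name: ptp-operator
  source: redhat-operators
  sourceNamespace: openshift-marketplace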
Create the Subscription CR:
$ oc create -f ptp-sub.yaml
To verify that the Operator is installed, enter the following command:
$ oc get csv -n openshift-ptp -o custom-columns=Name:.metadata.name,Phase:.status.phase
Example output
Name                   Phase
4.20.0-202301261535    Succeeded
7.2.2. Installing the PTP Operator by using the web console
As a cluster administrator, you can install the PTP Operator by using the web console.
You have to create the namespace and Operator group as mentioned in the previous section.
Procedure
Install the PTP Operator using the OpenShift Container Platform web console:
- In the OpenShift Container Platform web console, click Ecosystem → Software Catalog.
- Choose PTP Operator from the list of available Operators, and then click Install.
- On the Install Operator page, under A specific namespace on the cluster select openshift-ptp. Then, click Install.
Optional: Verify that the PTP Operator installed successfully:
- Switch to the Ecosystem → Installed Operators page.
Ensure that PTP Operator is listed in the openshift-ptp project with a Status of InstallSucceeded.
Note
During installation an Operator might display a Failed status. If the installation later succeeds with an InstallSucceeded message, you can ignore the Failed message.
If the Operator does not appear as installed, to troubleshoot further:
- Go to the Ecosystem → Installed Operators page and inspect the Operator Subscriptions and Install Plans tabs for any failure or errors under Status.
- Go to the Workloads → Pods page and check the logs for pods in the openshift-ptp project.
7.2.3. Discovering PTP-capable network devices in your cluster
Identify PTP-capable network devices that exist in your cluster so that you can configure them.
Prerequisites
- You installed the PTP Operator.
Procedure
To return a complete list of PTP-capable network devices in your cluster, run the following command:
$ oc get NodePtpDevice -n openshift-ptp -o yaml
7.2.4. Configuring linuxptp services as a grandmaster clock
You can configure the linuxptp services (ptp4l, phc2sys, ts2phc) as grandmaster clock (T-GM) by creating a PtpConfig custom resource (CR) that configures the host NIC.
The ts2phc utility allows you to synchronize the system clock with the PTP grandmaster clock so that the node can stream precision clock signal to downstream PTP ordinary clocks and boundary clocks.
Use the following example PtpConfig CR as the basis to configure linuxptp services as T-GM for an Intel Westport Channel E810-XXVDA4T network interface.
To configure PTP fast events, set appropriate values for ptp4lOpts, ptp4lConf, and ptpClockThreshold. ptpClockThreshold is used only when events are enabled. See "Configuring the PTP fast event notifications publisher" for more information.
Prerequisites
- For T-GM clocks in production environments, install an Intel E810 Westport Channel NIC in the bare-metal cluster host.
- Install the OpenShift CLI (oc).
- Log in as a user with cluster-admin privileges.
- Install the PTP Operator.
Procedure
Create the PtpConfig CR. For example:
Depending on your requirements, use one of the following T-GM configurations for your deployment. Save the YAML in the grandmaster-clock-ptp-config.yaml file:
Example 7.1. PTP grandmaster clock configuration for E810 NIC
Note
For E810 Westport Channel NICs, set the value for ts2phc.nmea_serialport to /dev/gnss0.
Create the CR by running the following command:
$ oc create -f grandmaster-clock-ptp-config.yaml
Verification
Check that the PtpConfig profile is applied to the node.
Get the list of pods in the openshift-ptp namespace by running the following command:
$ oc get pods -n openshift-ptp -o wide
Example output
NAME                           READY   STATUS    RESTARTS   AGE     IP             NODE
linuxptp-daemon-74m2g          3/3     Running   3          4d15h   10.16.230.7    compute-1.example.com
ptp-operator-5f4f48d7c-x7zkf   1/1     Running   1          4d15h   10.128.1.145   compute-1.example.com
Check that the profile is correct. Examine the logs of the linuxptp daemon that corresponds to the node you specified in the PtpConfig profile. Run the following command:
$ oc logs linuxptp-daemon-74m2g -n openshift-ptp -c linuxptp-daemon-container
7.2.4.1. Configuring linuxptp services as a grandmaster clock for 2 E810 NICs
You can configure the linuxptp services (ptp4l, phc2sys, ts2phc) as a grandmaster clock (T-GM) for 2 E810 NICs by creating a PtpConfig custom resource (CR) that configures the NICs.
You can configure the linuxptp services as a T-GM for the following E810 NICs:
- Intel E810-XXVDA4T Westport Channel NIC
- Intel E810-CQDA2T Logan Beach NIC
For distributed RAN (D-RAN) use cases, you can configure PTP for 2 NICs as follows:
- NIC 1 is synced to the global navigation satellite system (GNSS) time source.
- NIC 2 is synced to the 1PPS timing output provided by NIC 1. This configuration is provided by the PTP hardware plugin in the PtpConfig CR.
The 2-card PTP T-GM configuration uses one instance of ptp4l and one instance of ts2phc. The ptp4l and ts2phc programs are each configured to operate on two PTP hardware clocks (PHCs), one for each NIC. The host system clock is synchronized from the NIC that is connected to the GNSS time source.
Use the following example PtpConfig CR as the basis to configure linuxptp services as T-GM for dual Intel E810 network interfaces.
To configure PTP fast events, set appropriate values for ptp4lOpts, ptp4lConf, and ptpClockThreshold. ptpClockThreshold is used only when events are enabled. See "Configuring the PTP fast event notifications publisher" for more information.
Prerequisites
- For T-GM clocks in production environments, install two Intel E810 NICs in the bare-metal cluster host.
- Install the OpenShift CLI (oc).
- Log in as a user with cluster-admin privileges.
- Install the PTP Operator.
Procedure
Create the PtpConfig CR. For example:
Save the following YAML in the grandmaster-clock-ptp-config-dual-nics.yaml file:
Example 7.2. PTP grandmaster clock configuration for dual E810 NICs
Note
Set the value for ts2phc.nmea_serialport to /dev/gnss0.
Create the CR by running the following command:
$ oc create -f grandmaster-clock-ptp-config-dual-nics.yaml
Verification
Check that the PtpConfig profile is applied to the node.
Get the list of pods in the openshift-ptp namespace by running the following command:
$ oc get pods -n openshift-ptp -o wide
Example output
NAME                           READY   STATUS    RESTARTS   AGE     IP             NODE
linuxptp-daemon-74m2g          3/3     Running   3          4d15h   10.16.230.7    compute-1.example.com
ptp-operator-5f4f48d7c-x7zkf   1/1     Running   1          4d15h   10.128.1.145   compute-1.example.com
Check that the profile is correct. Examine the logs of the linuxptp daemon that corresponds to the node you specified in the PtpConfig profile. Run the following command:
$ oc logs linuxptp-daemon-74m2g -n openshift-ptp -c linuxptp-daemon-container
7.2.4.2. Configuring linuxptp services as a grandmaster clock for 3 E810 NICs
You can configure the linuxptp services (ptp4l, phc2sys, ts2phc) as a grandmaster clock (T-GM) for 3 E810 NICs by creating a PtpConfig custom resource (CR) that configures the NICs.
You can configure the linuxptp services as a T-GM with 3 NICs for the following E810 NICs:
- Intel E810-XXVDA4T Westport Channel NIC
- Intel E810-CQDA2T Logan Beach NIC
For distributed RAN (D-RAN) use cases, you can configure PTP for 3 NICs as follows:
- NIC 1 is synced to the Global Navigation Satellite System (GNSS)
- NICs 2 and 3 are synced to NIC 1 with 1PPS faceplate connections
Use the following example PtpConfig CRs as the basis to configure linuxptp services as a 3-card Intel E810 T-GM.
Prerequisites
- For T-GM clocks in production environments, install 3 Intel E810 NICs in the bare-metal cluster host.
- Install the OpenShift CLI (oc).
- Log in as a user with cluster-admin privileges.
- Install the PTP Operator.
Procedure
Create the PtpConfig CR. For example:
Save the following YAML in the three-nic-grandmaster-clock-ptp-config.yaml file:
Example 7.3. PTP grandmaster clock configuration for 3 E810 NICs
Note
Set the value for ts2phc.nmea_serialport to /dev/gnss0.
Create the CR by running the following command:
$ oc create -f three-nic-grandmaster-clock-ptp-config.yaml
Verification
Check that the PtpConfig profile is applied to the node.
Get the list of pods in the openshift-ptp namespace by running the following command:
$ oc get pods -n openshift-ptp -o wide
Example output
NAME                           READY   STATUS    RESTARTS   AGE     IP             NODE
linuxptp-daemon-74m3q          3/3     Running   3          4d15h   10.16.230.7    compute-1.example.com
ptp-operator-5f4f48d7c-x6zkn   1/1     Running   1          4d15h   10.128.1.145   compute-1.example.com
Check that the profile is correct. Run the following command, and examine the logs of the linuxptp daemon that corresponds to the node you specified in the PtpConfig profile:
$ oc logs linuxptp-daemon-74m3q -n openshift-ptp -c linuxptp-daemon-container
7.2.5. Grandmaster clock PtpConfig configuration reference
The following reference information describes the configuration options for the PtpConfig custom resource (CR) that configures the linuxptp services (ptp4l, phc2sys, ts2phc) as a grandmaster clock.
| PtpConfig CR field | Description |
|---|---|
|
|
Specify an array of
The plugin mechanism allows the PTP Operator to do automated hardware configuration. For the Intel Westport Channel NIC or the Intel Logan Beach NIC, when the |
|
|
Specify system configuration options for the |
|
|
Specify the required configuration to start |
|
| Specify the maximum amount of time to wait for the transmit (TX) timestamp from the sender before discarding the data. |
|
| Specify the JBOD boundary clock time delay value. This value is used to correct the time values that are passed between the network time devices. |
|
|
Specify system config options for the Note
Ensure that the network interface listed here is configured as grandmaster and is referenced as required in the |
|
|
Configure the scheduling policy for |
|
|
Set an integer value from 1-65 to configure FIFO priority for |
|
|
Optional. If |
|
|
Sets the configuration for the
|
|
|
Set options for the |
|
|
Specify an array of one or more |
|
|
Specify the |
|
|
Specify the |
|
|
Specify |
|
|
Set |
|
|
Set |
7.2.5.1. Grandmaster clock class sync state reference
The following table describes the PTP grandmaster clock (T-GM) gm.ClockClass states. Clock class states categorize T-GM clocks based on their accuracy and stability with regard to the Primary Reference Time Clock (PRTC) or other timing source.
Holdover specification is the amount of time a PTP clock can maintain synchronization without receiving updates from the primary time source.
| Clock class state | Description |
|---|---|
|
|
T-GM clock is connected to a PRTC in |
|
|
T-GM clock is in |
|
|
T-GM clock is in |
For more information, see "Phase/time traceability information", ITU-T G.8275.1/Y.1369.1 Recommendations.
7.2.5.2. Intel E810 NIC hardware configuration reference
Use this information to understand how to use the Intel E810 hardware plugin to configure the E810 network interface as PTP grandmaster clock. Hardware pin configuration determines how the network interface interacts with other components and devices in the system. The Intel E810 NIC has four connectors for external 1PPS signals: SMA1, SMA2, U.FL1, and U.FL2.
| Hardware pin | Recommended setting | Description |
|---|---|---|
|
|
|
Disables the |
|
|
|
Disables the |
|
|
|
Disables the |
|
|
|
Disables the |
You can set the pin configuration on the Intel E810 NIC by using the spec.profile.plugins.e810.pins parameters as shown in the following example:
pins:
  <interface_name>:
    <connector_name>: <function> <channel_number>
Where:
<function>: Specifies the role of the pin. The following values are associated with the pin role:
- 0: Disabled
- 1: Rx (Receive timestamping)
- 2: Tx (Transmit timestamping)
<channel_number>: A number associated with the physical connector. The following channel numbers are associated with the physical connectors:
- 1: SMA1 or U.FL1
- 2: SMA2 or U.FL2
Examples:
- 0 1: Disables the pin mapped to SMA1 or U.FL1.
- 1 2: Assigns the Rx function to SMA2 or U.FL2.
SMA1 and U.FL1 connectors share channel one. SMA2 and U.FL2 connectors share channel two.
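As an illustration of this format, the following stanza disables all four connectors, matching the recommended grandmaster settings described in the table above. The interface name is hypothetical:
pins:
  ens787f1:           # hypothetical E810 interface name
    SMA1: "0 1"       # disabled, channel 1
    U.FL1: "0 1"      # disabled, channel 1
    SMA2: "0 2"       # disabled, channel 2
    U.FL2: "0 2"      # disabled, channel 2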
Set spec.profile.plugins.e810.ublxCmds parameters to configure the GNSS clock in the PtpConfig custom resource (CR).
You must configure an offset value to compensate for T-GM GPS antenna cable signal delay. To configure the optimal T-GM antenna offset value, make precise measurements of the GNSS antenna cable signal delay. Red Hat cannot assist in this measurement or provide any values for the required delay offsets.
Each of these ublxCmds stanzas corresponds to a configuration that is applied to the host NIC by using ubxtool commands. For example:
- 1
- Measured T-GM antenna delay offset in nanoseconds. To get the required delay offset value, you must measure the cable delay using external test equipment.
The following table describes the equivalent ubxtool commands:
| ubxtool command | Description |
|---|---|
|
|
Enables antenna voltage control, allows antenna status to be reported in the |
|
| Enables the antenna to receive GPS signals. |
|
| Configures the antenna to receive signal from the Galileo GPS satellite. |
|
| Disables the antenna from receiving signal from the GLONASS GPS satellite. |
|
| Disables the antenna from receiving signal from the BeiDou GPS satellite. |
|
| Disables the antenna from receiving signal from the SBAS GPS satellite. |
|
| Configures the GNSS receiver survey-in process to improve its initial position estimate. This can take up to 24 hours to achieve an optimal result. |
|
| Runs a single automated scan of the hardware and reports on the NIC state and configuration settings. |
7.2.5.3. Dual E810 NIC configuration reference
Use this information to understand how to use the Intel E810 hardware plugin to configure a pair of E810 network interfaces as PTP grandmaster clock (T-GM).
Before you configure the dual-NIC cluster host, you must connect the two NICs with an SMA1 cable using the 1PPS faceplate connections.
When you configure a dual-NIC T-GM, you need to compensate for the 1PPS signal delay that occurs when you connect the NICs using the SMA1 connection ports. Various factors such as cable length, ambient temperature, and component and manufacturing tolerances can affect the signal delay. To compensate for the delay, you must calculate the specific value that you use to offset the signal delay.
| PtpConfig field | Description |
|---|---|
|
| Configure the E810 hardware pins using the PTP Operator E810 hardware plugin.
|
|
|
Use the |
|
|
Set the value of |
Each value in the spec.profile.plugins.e810.pins list follows the <function> <channel_number> format.
Where:
<function>: Specifies the pin role. The following values are associated with the pin role:
- 0: Disabled
- 1: Receive (Rx) – for 1PPS IN
- 2: Transmit (Tx) – for 1PPS OUT
<channel_number>: A number associated with the physical connector. The following channel numbers are associated with the physical connectors:
- 1: SMA1 or U.FL1
- 2: SMA2 or U.FL2
Examples:
- 2 1: Enables 1PPS OUT (Tx) on SMA1.
- 1 1: Enables 1PPS IN (Rx) on SMA1.
The PTP Operator passes these values to the Intel E810 hardware plugin and writes them to the sysfs pin configuration interface on each NIC.
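For illustration only, using the same pins format with hypothetical interface names, the 1PPS link between the two NICs described above might be expressed as follows:
pins:
  ens787f1:           # NIC 1, connected to GNSS (hypothetical name)
    SMA1: "2 1"       # 1PPS OUT (Tx) on SMA1
  ens817f1:           # NIC 2 (hypothetical name)
    SMA1: "1 1"       # 1PPS IN (Rx) on SMA1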
7.2.5.4. 3-card E810 NIC configuration reference
Use this information to understand how to configure 3 E810 NICs as PTP grandmaster clock (T-GM).
Before you configure the 3-card cluster host, you must connect the 3 NICs by using the 1PPS faceplate connections. The primary NIC 1PPS_out outputs feed the other 2 NICs.
When you configure a 3-card T-GM, you need to compensate for the 1PPS signal delay that occurs when you connect the NICs by using the SMA1 connection ports. Various factors such as cable length, ambient temperature, and component and manufacturing tolerances can affect the signal delay. To compensate for the delay, you must calculate the specific value that you use to offset the signal delay.
| PtpConfig field | Description |
|---|---|
|
| Configure the E810 hardware pins with the PTP Operator E810 hardware plugin.
|
|
|
Use the |
|
|
Set the value of |
7.2.6. Holdover in a grandmaster clock with GNSS as the source
Holdover allows the grandmaster (T-GM) clock to maintain synchronization performance when the global navigation satellite system (GNSS) source is unavailable. During this period, the T-GM clock relies on its internal oscillator and holdover parameters to reduce timing disruptions.
You can define the holdover behavior by configuring the following holdover parameters in the PTPConfig custom resource (CR):
MaxInSpecOffset
Specifies the maximum allowed offset in nanoseconds. If the T-GM clock exceeds the MaxInSpecOffset value, it transitions to the FREERUN state (clock class state gm.ClockClass 248).
LocalHoldoverTimeout
Specifies the maximum duration, in seconds, for which the T-GM clock remains in the holdover state before transitioning to the FREERUN state.
LocalMaxHoldoverOffSet
Specifies the maximum offset, in nanoseconds, that the T-GM clock can reach during the holdover state.
If the MaxInSpecOffset value is less than the LocalMaxHoldoverOffset value, and the T-GM clock exceeds the maximum offset value, the T-GM clock transitions from the holdover state to the FREERUN state.
If the LocalMaxHoldoverOffSet value is less than the MaxInSpecOffset value, the holdover timeout occurs before the clock reaches the maximum offset. To resolve this issue, set the MaxInSpecOffset field and the LocalMaxHoldoverOffset field to the same value.
For information about clock class states, see "Grandmaster clock class sync state reference" document.
The T-GM clock uses the holdover parameters LocalMaxHoldoverOffSet and LocalHoldoverTimeout to calculate the slope. Slope is the rate at which the phase offset changes over time. It is measured in nanoseconds per second, where the set value indicates how much the offset increases over a given time period.
The T-GM clock uses the slope value to predict and compensate for time drift, so reducing timing disruptions during holdover. The T-GM clock uses the following formula to calculate the slope:
Slope = localMaxHoldoverOffSet / localHoldoverTimeout

For example, if the LocalHoldOverTimeout parameter is set to 60 seconds, and the LocalMaxHoldoverOffset parameter is set to 3000 nanoseconds, the slope is calculated as follows:

Slope = 3000 nanoseconds / 60 seconds = 50 nanoseconds per second
The T-GM clock reaches the maximum offset in 60 seconds.
The phase offset is converted from picoseconds to nanoseconds. As a result, the calculated phase offset during holdover is expressed in nanoseconds, and the resulting slope is expressed in nanoseconds per second.
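These parameters are set in the E810 plugin settings of the grandmaster PtpConfig CR. The following fragment is a minimal sketch that uses the values from the example calculation; the surrounding profile is omitted and the exact placement can differ in your reference configuration:
plugins:
  e810:
    settings:
      LocalHoldoverTimeout: 60       # seconds the T-GM stays in holdover before FREERUN
      LocalMaxHoldoverOffSet: 3000   # maximum offset in nanoseconds during holdover
      MaxInSpecOffset: 3000          # offset in nanoseconds beyond which the T-GM transitions to FREERUN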
The following figure illustrates the holdover behavior in a T-GM clock with GNSS as the source:
Figure 7.5. Holdover in a T-GM clock with GNSS as the source
The GNSS signal is lost, causing the T-GM clock to enter the HOLDOVER mode. The T-GM clock maintains time accuracy by using its internal clock.
The GNSS signal is restored and the T-GM clock re-enters the LOCKED mode. When the GNSS signal is restored, the T-GM clock re-enters the LOCKED mode only after all dependent components in the synchronization chain, such as ts2phc offset, digital phase-locked loop (DPLL) phase offset, and GNSS offset, reach a stable LOCKED mode.
The GNSS signal is lost again, and the T-GM clock re-enters the HOLDOVER mode. The time error begins to increase.
The time error exceeds the MaxInSpecOffset threshold due to prolonged loss of traceability.
The GNSS signal is restored, and the T-GM clock resumes synchronization. The time error starts to decrease.
The time error decreases and falls back within the MaxInSpecOffset threshold.
7.2.7. Applying unassisted holdover for boundary clocks and time slave clocks
The unassisted holdover feature enables an Intel E810-XXVDA4T Network Interface Card (NIC), configured as either a PTP boundary clock (T-BC) or a PTP time slave clock (T-TSC), to maintain highly accurate time synchronization even when the upstream timing signal is lost. This is achieved by relying on the NIC’s internal oscillator to enter a stable, controlled drift state.
The ts2phc service monitors the ptp4l instance bound to the timing receiver (TR) port. If, for example, the TR port stops operating as the time receiver, the upstream grandmaster clock (T-GM) deteriorates in quality, or the link disconnects, the system enters holdover mode and reconfigures itself dynamically.
Applying unassisted holdover for T-BC and T-TSC is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.
Prerequisites
-
Install the OpenShift CLI (
oc). -
Log in as a user with
cluster-adminprivileges. - Install the PTP Operator.
- An Intel E810-XXVDA4T NIC.
Procedure
Configure the triple-port T-BC NIC. See the following example, where the
PtpConfig resource contains two profiles: one for the time transmitter ports (00-tbc-tt) and one that configures all the hardware, the TR port, and the ts2phc and phc2sys processes:
- 1 2 3
- All TT ports have the
masterOnlyset to 1. - 4
- The
phc2sysOptssetting in the TR profile specifies the upstream portens4f1as the source of the node time synchronization. - 5
- The TR profile contains the hardware plugin section.
- 6
- The interconnections section in the hardware plugin has three NICs:
ens4f0, ens1f0, and ens8f0. The leading NIC, ens4f0, is the only one with the gnssInput field, set to false, and the upstreamPort field that specifies the TR port. It also has a list of phaseOutputConnectors, SMA1 and SMA2. The following NICs have the inputConnector field. Set the time receiver NIC ens4f0 and the specific TR port, that is, upstreamPort: ens4f1, for both T-BC and T-TSC configurations. - 7
- The
ts2phcconfiguration contains thedomainNumberof the upstream PTP domain. - 8
- The
ts2phcconfiguration contains theuds_address. Its value is not important because the daemon patches it with the correct address. - 9
- The
ts2phcconfiguration must include all NICs participating in this setup (ens4f0,ens1f0, andens8f0). - 10
ts2phcOpts sets the source as generic with -s generic and automatic with -a. The last option, --ts2phc.rh_external_pps 1, configures it to operate with an external phase source, the digital phase-locked loop (DPLL).
Note: In the single-NIC case, disable all pins, or enable the outputs if you use them for 1PPS measurements.
To adapt this configuration for T-TSC operation, remove the 00-tbc-tt profile and adjust the ts2phcConf section to list only the TR NIC.
Verification
To get the T-BC status, run the following command:
oc -n openshift-ptp logs ds/linuxptp-daemon -c linuxptp-daemon-container --since=1s -f | grep T-BC
$ oc -n openshift-ptp logs ds/linuxptp-daemon -c linuxptp-daemon-container --since=1s -f | grep T-BC
Example output
T-BC[1760525446]:[ts2phc.1.config] ens4f0 offset 1 T-BC-STATUS s2 T-BC[1760525447]:[ts2phc.1.config] ens4f0 offset 1 T-BC-STATUS s2 T-BC[1760525448]:[ts2phc.1.config] ens4f0 offset -1 T-BC-STATUS s2
T-BC[1760525446]:[ts2phc.1.config] ens4f0 offset 1 T-BC-STATUS s2
T-BC[1760525447]:[ts2phc.1.config] ens4f0 offset 1 T-BC-STATUS s2
T-BC[1760525448]:[ts2phc.1.config] ens4f0 offset -1 T-BC-STATUS s2
This status is reported every second: s2 indicates that the clock is locked, s1 indicates that holdover is active, and s0 indicates that the clock is unlocked.
7.2.8. Configuring dynamic leap seconds handling for PTP grandmaster clocks
The PTP Operator container image includes the latest leap-seconds.list file that is available at the time of release. You can configure the PTP Operator to automatically update the leap second file by using Global Positioning System (GPS) announcements.
Leap second information is stored in an automatically generated ConfigMap resource named leap-configmap in the openshift-ptp namespace. The PTP Operator mounts the leap-configmap resource as a volume in the linuxptp-daemon pod that is accessible by the ts2phc process.
If the GPS satellite broadcasts new leap second data, the PTP Operator updates the leap-configmap resource with the new data. The ts2phc process picks up the changes automatically.
The following procedure is provided as a reference. The 4.20 version of the PTP Operator enables automatic leap second management by default.
Prerequisites
-
You have installed the OpenShift CLI (
oc). -
You have logged in as a user with
cluster-adminprivileges. - You have installed the PTP Operator and configured a PTP grandmaster clock (T-GM) in the cluster.
Procedure
Configure automatic leap second handling in the
phc2sysOptssection of thePtpConfigCR. Set the following options:phc2sysOpts: -r -u 0 -m -N 8 -R 16 -S 2 -s ens2f0 -n 24
phc2sysOpts: -r -u 0 -m -N 8 -R 16 -S 2 -s ens2f0 -n 241 Copy to Clipboard Copied! Toggle word wrap Toggle overflow NotePreviously, the T-GM required an offset adjustment in the
phc2sysconfiguration (-O -37) to account for historical leap seconds. This is no longer needed.Configure the Intel e810 NIC to enable periodical reporting of
NAV-TIMELSmessages by the GPS receiver in thespec.profile.plugins.e810.ublxCmdssection of thePtpConfigCR. For example:- args: #ubxtool -P 29.20 -p CFG-MSG,1,38,248 - "-P" - "29.20" - "-p" - "CFG-MSG,1,38,248"- args: #ubxtool -P 29.20 -p CFG-MSG,1,38,248 - "-P" - "29.20" - "-p" - "CFG-MSG,1,38,248"Copy to Clipboard Copied! Toggle word wrap Toggle overflow
Verification
Validate that the configured T-GM is receiving
NAV-TIMELSmessages from the connected GPS. Run the following command:oc -n openshift-ptp -c linuxptp-daemon-container exec -it $(oc -n openshift-ptp get pods -o name | grep daemon) -- ubxtool -t -p NAV-TIMELS -P 29.20
$ oc -n openshift-ptp -c linuxptp-daemon-container exec -it $(oc -n openshift-ptp get pods -o name | grep daemon) -- ubxtool -t -p NAV-TIMELS -P 29.20Copy to Clipboard Copied! Toggle word wrap Toggle overflow Example output
Copy to Clipboard Copied! Toggle word wrap Toggle overflow Validate that the
leap-configmapresource has been successfully generated by the PTP Operator and is up to date with the latest version of the leap-seconds.list. Run the following command:oc -n openshift-ptp get configmap leap-configmap -o jsonpath='{.data.<node_name>}'$ oc -n openshift-ptp get configmap leap-configmap -o jsonpath='{.data.<node_name>}'1 Copy to Clipboard Copied! Toggle word wrap Toggle overflow Example output
Copy to Clipboard Copied! Toggle word wrap Toggle overflow
7.2.9. Configuring linuxptp services as a boundary clock
You can configure the linuxptp services (ptp4l, phc2sys) as a boundary clock by creating a PtpConfig custom resource (CR) object.
Use the following example PtpConfig CR as the basis to configure linuxptp services as the boundary clock for your particular hardware and environment. This example CR does not configure PTP fast events. To configure PTP fast events, set appropriate values for ptp4lOpts, ptp4lConf, and ptpClockThreshold. ptpClockThreshold is used only when events are enabled. See "Configuring the PTP fast event notifications publisher" for more information.
Prerequisites
-
Install the OpenShift CLI (
oc). -
Log in as a user with
cluster-adminprivileges. - Install the PTP Operator.
Procedure
Create the following
PtpConfigCR, and then save the YAML in theboundary-clock-ptp-config.yamlfile.Example PTP boundary clock configuration
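The full reference CR is not reproduced here. The following minimal sketch, with illustrative interface names and values, shows the overall shape of a boundary clock PtpConfig CR; the table that follows describes each field:
apiVersion: ptp.openshift.io/v1
kind: PtpConfig
metadata:
  name: boundary-clock
  namespace: openshift-ptp
spec:
  profile:
  - name: boundary-clock
    ptp4lOpts: "-2"
    phc2sysOpts: "-a -r -n 24"
    ptpSchedulingPolicy: SCHED_FIFO
    ptpSchedulingPriority: 10
    ptp4lConf: |
      [ens1f0]                  # <interface_1>: receives the synchronization clock
      masterOnly 0
      [ens1f3]                  # <interface_2>: sends the synchronization clock
      masterOnly 1
      [global]
      tx_timestamp_timeout 50   # recommended for Intel Columbiaville 800 Series NICs
      boundary_clock_jbod 0
  recommend:
  - profile: boundary-clock
    priority: 4
    match:
    - nodeLabel: "node-role.kubernetes.io/worker"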
Table 7.7. PTP boundary clock CR configuration options CR field Description nameThe name of the
PtpConfigCR.profileSpecify an array of one or more
profileobjects.nameSpecify the name of a profile object which uniquely identifies a profile object.
ptp4lOptsSpecify system config options for the
ptp4lservice. The options should not include the network interface name-i <interface>and service config file-f /etc/ptp4l.confbecause the network interface name and the service config file are automatically appended.ptp4lConfSpecify the required configuration to start
ptp4las boundary clock. For example,ens1f0synchronizes from a grandmaster clock andens1f3synchronizes connected devices.<interface_1>The interface that receives the synchronization clock.
<interface_2>The interface that sends the synchronization clock.
tx_timestamp_timeoutFor Intel Columbiaville 800 Series NICs, set
tx_timestamp_timeoutto50.boundary_clock_jbodFor Intel Columbiaville 800 Series NICs, ensure
boundary_clock_jbodis set to0. For Intel Fortville X710 Series NICs, ensureboundary_clock_jbodis set to1.phc2sysOptsSpecify system config options for the
phc2sysservice. If this field is empty, the PTP Operator does not start thephc2sysservice.ptpSchedulingPolicyScheduling policy for ptp4l and phc2sys processes. Default value is
SCHED_OTHER. UseSCHED_FIFOon systems that support FIFO scheduling.ptpSchedulingPriorityInteger value from 1-65 used to set FIFO priority for
ptp4landphc2sysprocesses whenptpSchedulingPolicyis set toSCHED_FIFO. TheptpSchedulingPriorityfield is not used whenptpSchedulingPolicyis set toSCHED_OTHER.ptpClockThresholdOptional. If
ptpClockThresholdis not present, default values are used for theptpClockThresholdfields.ptpClockThresholdconfigures how long after the PTP master clock is disconnected before PTP events are triggered.holdOverTimeoutis the time value in seconds before the PTP clock event state changes toFREERUNwhen the PTP master clock is disconnected. ThemaxOffsetThresholdandminOffsetThresholdsettings configure offset values in nanoseconds that compare against the values forCLOCK_REALTIME(phc2sys) or master offset (ptp4l). When theptp4lorphc2sysoffset value is outside this range, the PTP clock state is set toFREERUN. When the offset value is within this range, the PTP clock state is set toLOCKED.recommendSpecify an array of one or more
recommendobjects that define rules on how theprofileshould be applied to nodes..recommend.profileSpecify the
.recommend.profileobject name defined in theprofilesection..recommend.prioritySpecify the
prioritywith an integer value between0and99. A larger number gets lower priority, so a priority of99is lower than a priority of10. If a node can be matched with multiple profiles according to rules defined in thematchfield, the profile with the higher priority is applied to that node..recommend.matchSpecify
.recommend.matchrules withnodeLabelornodeNamevalues..recommend.match.nodeLabelSet
nodeLabelwith thekeyof thenode.Labelsfield from the node object by using theoc get nodes --show-labelscommand. For example,node-role.kubernetes.io/worker..recommend.match.nodeNameSet
nodeNamewith the value of thenode.Namefield from the node object by using theoc get nodescommand. For example,compute-1.example.com.Create the CR by running the following command:
oc create -f boundary-clock-ptp-config.yaml
$ oc create -f boundary-clock-ptp-config.yamlCopy to Clipboard Copied! Toggle word wrap Toggle overflow
Verification
Check that the
PtpConfigprofile is applied to the node.Get the list of pods in the
openshift-ptpnamespace by running the following command:oc get pods -n openshift-ptp -o wide
$ oc get pods -n openshift-ptp -o wideCopy to Clipboard Copied! Toggle word wrap Toggle overflow Example output
NAME READY STATUS RESTARTS AGE IP NODE linuxptp-daemon-4xkbb 1/1 Running 0 43m 10.1.196.24 compute-0.example.com linuxptp-daemon-tdspf 1/1 Running 0 43m 10.1.196.25 compute-1.example.com ptp-operator-657bbb64c8-2f8sj 1/1 Running 0 43m 10.129.0.61 control-plane-1.example.com
NAME READY STATUS RESTARTS AGE IP NODE linuxptp-daemon-4xkbb 1/1 Running 0 43m 10.1.196.24 compute-0.example.com linuxptp-daemon-tdspf 1/1 Running 0 43m 10.1.196.25 compute-1.example.com ptp-operator-657bbb64c8-2f8sj 1/1 Running 0 43m 10.129.0.61 control-plane-1.example.comCopy to Clipboard Copied! Toggle word wrap Toggle overflow Check that the profile is correct. Examine the logs of the
linuxptpdaemon that corresponds to the node you specified in thePtpConfigprofile. Run the following command:oc logs linuxptp-daemon-4xkbb -n openshift-ptp -c linuxptp-daemon-container
$ oc logs linuxptp-daemon-4xkbb -n openshift-ptp -c linuxptp-daemon-containerCopy to Clipboard Copied! Toggle word wrap Toggle overflow Example output
Copy to Clipboard Copied! Toggle word wrap Toggle overflow
7.2.9.1. Configuring linuxptp services as boundary clocks for dual-NIC hardware
You can configure the linuxptp services (ptp4l, phc2sys) as boundary clocks for dual-NIC hardware by creating a PtpConfig custom resource (CR) object for each NIC.
Dual-NIC hardware allows you to connect each NIC to the same upstream leader clock, with a separate ptp4l instance for each NIC feeding the downstream clocks.
Prerequisites
-
Install the OpenShift CLI (
oc). -
Log in as a user with
cluster-adminprivileges. - Install the PTP Operator.
Procedure
Create two separate
PtpConfigCRs, one for each NIC, using the reference CR in "Configuring linuxptp services as a boundary clock" as the basis for each CR. For example:Create
boundary-clock-ptp-config-nic1.yaml, specifying values forphc2sysOpts:Copy to Clipboard Copied! Toggle word wrap Toggle overflow - 1
- Specify the required interfaces to start
ptp4las a boundary clock. For example,ens5f0synchronizes from a grandmaster clock andens5f1synchronizes connected devices. - 2
- Required
phc2sysOptsvalues.-mprints messages tostdout. Thelinuxptp-daemonDaemonSetparses the logs and generates Prometheus metrics.
Create
boundary-clock-ptp-config-nic2.yaml, removing thephc2sysOptsfield altogether to disable thephc2sysservice for the second NIC:Copy to Clipboard Copied! Toggle word wrap Toggle overflow - 1
- Specify the required interfaces to start
ptp4las a boundary clock on the second NIC.
NoteYou must completely remove the
phc2sysOptsfield from the secondPtpConfigCR to disable thephc2sysservice on the second NIC.
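For example, the profile section of the second CR might look like the following minimal sketch; the interface names are illustrative and phc2sysOpts is deliberately absent:
spec:
  profile:
  - name: boundary-clock-nic2
    ptp4lOpts: "-2"
    # no phc2sysOpts field: phc2sys runs only for the profile of the first NIC
    ptp4lConf: |
      [ens7f0]      # receives the synchronization clock on the second NIC
      masterOnly 0
      [ens7f1]      # sends the synchronization clock to downstream devices
      masterOnly 1
      [global]
      tx_timestamp_timeout 50
      boundary_clock_jbod 0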
Create the dual-NIC
PtpConfigCRs by running the following commands:Create the CR that configures PTP for the first NIC:
oc create -f boundary-clock-ptp-config-nic1.yaml
$ oc create -f boundary-clock-ptp-config-nic1.yamlCopy to Clipboard Copied! Toggle word wrap Toggle overflow Create the CR that configures PTP for the second NIC:
oc create -f boundary-clock-ptp-config-nic2.yaml
$ oc create -f boundary-clock-ptp-config-nic2.yamlCopy to Clipboard Copied! Toggle word wrap Toggle overflow
Verification
Check that the PTP Operator has applied the
PtpConfigCRs for both NICs. Examine the logs for thelinuxptpdaemon corresponding to the node that has the dual-NIC hardware installed. For example, run the following command:oc logs linuxptp-daemon-cvgr6 -n openshift-ptp -c linuxptp-daemon-container
$ oc logs linuxptp-daemon-cvgr6 -n openshift-ptp -c linuxptp-daemon-containerCopy to Clipboard Copied! Toggle word wrap Toggle overflow Example output
ptp4l[80828.335]: [ptp4l.1.config] master offset 5 s2 freq -5727 path delay 519 ptp4l[80828.343]: [ptp4l.0.config] master offset -5 s2 freq -10607 path delay 533 phc2sys[80828.390]: [ptp4l.0.config] CLOCK_REALTIME phc offset 1 s2 freq -87239 delay 539
ptp4l[80828.335]: [ptp4l.1.config] master offset 5 s2 freq -5727 path delay 519 ptp4l[80828.343]: [ptp4l.0.config] master offset -5 s2 freq -10607 path delay 533 phc2sys[80828.390]: [ptp4l.0.config] CLOCK_REALTIME phc offset 1 s2 freq -87239 delay 539Copy to Clipboard Copied! Toggle word wrap Toggle overflow
7.2.9.2. Configuring linuxptp as a highly available system clock for dual-NIC Intel E810 PTP boundary clocks
You can configure the linuxptp services ptp4l and phc2sys as a highly available (HA) system clock for dual PTP boundary clocks (T-BC).
The highly available system clock uses multiple time sources from dual-NIC Intel E810 Salem channel hardware configured as two boundary clocks. Two boundary clock instances participate in the HA setup, each with its own configuration profile. You connect each NIC to the same upstream leader clock, with a separate ptp4l instance for each NIC feeding the downstream clocks.
Create two PtpConfig custom resource (CR) objects that configure the NICs as T-BC and a third PtpConfig CR that configures high availability between the two NICs.
You set phc2sysOpts options once in the PtpConfig CR that configures HA. Set the phc2sysOpts field to an empty string in the PtpConfig CRs that configure the two NICs. This prevents individual phc2sys processes from being set up for the two profiles.
The third PtpConfig CR configures a highly available system clock service. The CR sets the ptp4lOpts field to an empty string to prevent the ptp4l process from running. The CR adds profiles for the ptp4l configurations under the spec.profile.ptpSettings.haProfiles key and passes the kernel socket path of those profiles to the phc2sys service. When a ptp4l failure occurs, the phc2sys service switches to the backup ptp4l configuration. When the primary profile becomes active again, the phc2sys service reverts to the original state.
Ensure that you set spec.recommend.priority to the same value for all three PtpConfig CRs that you use to configure HA.
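The following minimal sketch illustrates the shape of the third, HA PtpConfig CR. The profile name, priority, and node selector are illustrative; the haProfiles value must match the metadata.name fields of the two NIC CRs:
apiVersion: ptp.openshift.io/v1
kind: PtpConfig
metadata:
  name: ptp-config-for-ha
  namespace: openshift-ptp
spec:
  profile:
  - name: phc2sys-ha
    ptp4lOpts: ""                 # empty string so that no ptp4l process is started
    phc2sysOpts: "-a -r -n 24"
    ptpSchedulingPolicy: SCHED_FIFO
    ptpSchedulingPriority: 10
    ptpSettings:
      haProfiles: "ha-ptp-config-nic1,ha-ptp-config-nic2"   # metadata.name of the two NIC CRs
  recommend:
  - profile: phc2sys-ha
    priority: 4                   # same priority as in the two NIC CRs
    match:
    - nodeLabel: "node-role.kubernetes.io/worker"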
Prerequisites
-
Install the OpenShift CLI (
oc). -
Log in as a user with
cluster-adminprivileges. - Install the PTP Operator.
- Configure a cluster node with Intel E810 Salem channel dual-NIC.
Procedure
Create two separate
PtpConfigCRs, one for each NIC, using the CRs in "Configuring linuxptp services as boundary clocks for dual-NIC hardware" as a reference for each CR.Create the
ha-ptp-config-nic1.yamlfile, specifying an empty string for thephc2sysOptsfield. For example:Copy to Clipboard Copied! Toggle word wrap Toggle overflow - 1
- Specify the required interfaces to start
ptp4las a boundary clock. For example,ens5f0synchronizes from a grandmaster clock andens5f1synchronizes connected devices. - 2
- Set
phc2sysOptswith an empty string. These values are populated from thespec.profile.ptpSettings.haProfilesfield of thePtpConfigCR that configures high availability.
Apply the
PtpConfigCR for NIC 1 by running the following command:oc create -f ha-ptp-config-nic1.yaml
$ oc create -f ha-ptp-config-nic1.yamlCopy to Clipboard Copied! Toggle word wrap Toggle overflow Create the
ha-ptp-config-nic2.yamlfile, specifying an empty string for thephc2sysOptsfield. For example:Copy to Clipboard Copied! Toggle word wrap Toggle overflow Apply the
PtpConfigCR for NIC 2 by running the following command:oc create -f ha-ptp-config-nic2.yaml
$ oc create -f ha-ptp-config-nic2.yamlCopy to Clipboard Copied! Toggle word wrap Toggle overflow
Create the
PtpConfigCR that configures the HA system clock. For example:Create the
ptp-config-for-ha.yamlfile. SethaProfilesto match themetadata.namefields that are set in thePtpConfigCRs that configure the two NICs. For example:haProfiles: ha-ptp-config-nic1,ha-ptp-config-nic2Copy to Clipboard Copied! Toggle word wrap Toggle overflow - 1
- Set the
ptp4lOpts field to an empty string. If it is not empty, the ptp4l process starts with a critical error.
ImportantDo not apply the high availability
PtpConfigCR before thePtpConfigCRs that configure the individual NICs.Apply the HA
PtpConfigCR by running the following command:oc create -f ptp-config-for-ha.yaml
$ oc create -f ptp-config-for-ha.yamlCopy to Clipboard Copied! Toggle word wrap Toggle overflow
Verification
Verify that the PTP Operator has applied the
PtpConfigCRs correctly. Perform the following steps:Get the list of pods in the
openshift-ptpnamespace by running the following command:oc get pods -n openshift-ptp -o wide
$ oc get pods -n openshift-ptp -o wideCopy to Clipboard Copied! Toggle word wrap Toggle overflow Example output
NAME READY STATUS RESTARTS AGE IP NODE linuxptp-daemon-4xkrb 1/1 Running 0 43m 10.1.196.24 compute-0.example.com ptp-operator-657bbq64c8-2f8sj 1/1 Running 0 43m 10.129.0.61 control-plane-1.example.com
NAME READY STATUS RESTARTS AGE IP NODE linuxptp-daemon-4xkrb 1/1 Running 0 43m 10.1.196.24 compute-0.example.com ptp-operator-657bbq64c8-2f8sj 1/1 Running 0 43m 10.129.0.61 control-plane-1.example.comCopy to Clipboard Copied! Toggle word wrap Toggle overflow NoteThere should be only one
linuxptp-daemonpod.Check that the profile is correct by running the following command. Examine the logs of the
linuxptpdaemon that corresponds to the node you specified in thePtpConfigprofile.oc logs linuxptp-daemon-4xkrb -n openshift-ptp -c linuxptp-daemon-container
$ oc logs linuxptp-daemon-4xkrb -n openshift-ptp -c linuxptp-daemon-containerCopy to Clipboard Copied! Toggle word wrap Toggle overflow Example output
Copy to Clipboard Copied! Toggle word wrap Toggle overflow
7.2.10. Configuring linuxptp services as an ordinary clock
You can configure linuxptp services (ptp4l, phc2sys) as an ordinary clock by creating a PtpConfig custom resource (CR) object.
Use the following example PtpConfig CR as the basis to configure linuxptp services as an ordinary clock for your particular hardware and environment. This example CR does not configure PTP fast events. To configure PTP fast events, set appropriate values for ptp4lOpts, ptp4lConf, and ptpClockThreshold. ptpClockThreshold is required only when events are enabled. See "Configuring the PTP fast event notifications publisher" for more information.
Prerequisites
-
Install the OpenShift CLI (
oc). -
Log in as a user with
cluster-adminprivileges. - Install the PTP Operator.
Procedure
Create the following
PtpConfigCR, and then save the YAML in theordinary-clock-ptp-config.yamlfile.Example PTP ordinary clock configuration
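The full reference CR is not reproduced here. The following minimal sketch, with illustrative values, shows the overall shape of an ordinary clock PtpConfig CR; the table that follows describes each field:
apiVersion: ptp.openshift.io/v1
kind: PtpConfig
metadata:
  name: ordinary-clock
  namespace: openshift-ptp
spec:
  profile:
  - name: ordinary-clock
    interface: ens787f1           # network interface used by ptp4l
    ptp4lOpts: "-2 -s"            # -2 selects IEEE 802.3 transport; -s runs ptp4l as a time receiver only
    phc2sysOpts: "-a -r -n 24"
    ptpSchedulingPolicy: SCHED_FIFO
    ptpSchedulingPriority: 10
  recommend:
  - profile: ordinary-clock
    priority: 0                   # 0 for an ordinary clock, as described in the table
    match:
    - nodeLabel: "node-role.kubernetes.io/worker"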
Table 7.8. PTP ordinary clock CR configuration options CR field Description nameThe name of the
PtpConfigCR.profileSpecify an array of one or more
profileobjects. Each profile must be uniquely named.interfaceSpecify the network interface to be used by the
ptp4lservice, for exampleens787f1.ptp4lOptsSpecify system config options for the
ptp4lservice, for example-2to select the IEEE 802.3 network transport. The options should not include the network interface name-i <interface>and service config file-f /etc/ptp4l.confbecause the network interface name and the service config file are automatically appended. Append--summary_interval -4to use PTP fast events with this interface.phc2sysOptsSpecify system config options for the
phc2sysservice. If this field is empty, the PTP Operator does not start thephc2sysservice. For Intel Columbiaville 800 Series NICs, setphc2sysOptsoptions to-a -r -m -n 24 -N 8 -R 16.-mprints messages tostdout. Thelinuxptp-daemonDaemonSetparses the logs and generates Prometheus metrics.ptp4lConfSpecify a string that contains the configuration to replace the default
/etc/ptp4l.conffile. To use the default configuration, leave the field empty.tx_timestamp_timeoutFor Intel Columbiaville 800 Series NICs, set
tx_timestamp_timeoutto50.boundary_clock_jbodFor Intel Columbiaville 800 Series NICs, set
boundary_clock_jbodto0.ptpSchedulingPolicyScheduling policy for
ptp4landphc2sysprocesses. Default value isSCHED_OTHER. UseSCHED_FIFOon systems that support FIFO scheduling.ptpSchedulingPriorityInteger value from 1-65 used to set FIFO priority for
ptp4landphc2sysprocesses whenptpSchedulingPolicyis set toSCHED_FIFO. TheptpSchedulingPriorityfield is not used whenptpSchedulingPolicyis set toSCHED_OTHER.ptpClockThresholdOptional. If
ptpClockThresholdis not present, default values are used for theptpClockThresholdfields.ptpClockThresholdconfigures how long after the PTP master clock is disconnected before PTP events are triggered.holdOverTimeoutis the time value in seconds before the PTP clock event state changes toFREERUNwhen the PTP master clock is disconnected. ThemaxOffsetThresholdandminOffsetThresholdsettings configure offset values in nanoseconds that compare against the values forCLOCK_REALTIME(phc2sys) or master offset (ptp4l). When theptp4lorphc2sysoffset value is outside this range, the PTP clock state is set toFREERUN. When the offset value is within this range, the PTP clock state is set toLOCKED.recommendSpecify an array of one or more
recommendobjects that define rules on how theprofileshould be applied to nodes..recommend.profileSpecify the
.recommend.profileobject name defined in theprofilesection..recommend.prioritySet
.recommend.priorityto0for ordinary clock..recommend.matchSpecify
.recommend.matchrules withnodeLabelornodeNamevalues..recommend.match.nodeLabelSet
nodeLabelwith thekeyof thenode.Labelsfield from the node object by using theoc get nodes --show-labelscommand. For example,node-role.kubernetes.io/worker..recommend.match.nodeNameSet
nodeNamewith the value of thenode.Namefield from the node object by using theoc get nodescommand. For example,compute-1.example.com.Create the
PtpConfigCR by running the following command:oc create -f ordinary-clock-ptp-config.yaml
$ oc create -f ordinary-clock-ptp-config.yamlCopy to Clipboard Copied! Toggle word wrap Toggle overflow
Verification
Check that the
PtpConfigprofile is applied to the node.Get the list of pods in the
openshift-ptpnamespace by running the following command:oc get pods -n openshift-ptp -o wide
$ oc get pods -n openshift-ptp -o wideCopy to Clipboard Copied! Toggle word wrap Toggle overflow Example output
NAME READY STATUS RESTARTS AGE IP NODE linuxptp-daemon-4xkbb 1/1 Running 0 43m 10.1.196.24 compute-0.example.com linuxptp-daemon-tdspf 1/1 Running 0 43m 10.1.196.25 compute-1.example.com ptp-operator-657bbb64c8-2f8sj 1/1 Running 0 43m 10.129.0.61 control-plane-1.example.com
NAME READY STATUS RESTARTS AGE IP NODE linuxptp-daemon-4xkbb 1/1 Running 0 43m 10.1.196.24 compute-0.example.com linuxptp-daemon-tdspf 1/1 Running 0 43m 10.1.196.25 compute-1.example.com ptp-operator-657bbb64c8-2f8sj 1/1 Running 0 43m 10.129.0.61 control-plane-1.example.comCopy to Clipboard Copied! Toggle word wrap Toggle overflow Check that the profile is correct. Examine the logs of the
linuxptpdaemon that corresponds to the node you specified in thePtpConfigprofile. Run the following command:oc logs linuxptp-daemon-4xkbb -n openshift-ptp -c linuxptp-daemon-container
$ oc logs linuxptp-daemon-4xkbb -n openshift-ptp -c linuxptp-daemon-containerCopy to Clipboard Copied! Toggle word wrap Toggle overflow Example output
Copy to Clipboard Copied! Toggle word wrap Toggle overflow
7.2.10.1. Intel Columbiaville E800 series NIC as PTP ordinary clock reference
The following table describes the changes that you must make to the reference PTP configuration to use Intel Columbiaville E800 series NICs as ordinary clocks. Make the changes in a PtpConfig custom resource (CR) that you apply to the cluster.
| PTP configuration | Recommended setting |
|---|---|
| phc2sysOpts | -a -r -m -n 24 -N 8 -R 16 |
| tx_timestamp_timeout | 50 |
| boundary_clock_jbod | 0 |
For phc2sysOpts, -m prints messages to stdout. The linuxptp-daemon DaemonSet parses the logs and generates Prometheus metrics.
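As a minimal sketch, the settings from the table map into a PtpConfig profile as follows; the interface name and the rest of the profile are illustrative:
profile:
- name: ordinary-clock
  interface: ens787f1
  phc2sysOpts: "-a -r -m -n 24 -N 8 -R 16"   # recommended phc2sysOpts for E800 series NICs
  ptp4lConf: |
    [global]
    tx_timestamp_timeout 50
    boundary_clock_jbod 0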
7.2.10.2. Configuring linuxptp services as an ordinary clock with dual-port NIC redundancy
You can configure linuxptp services (ptp4l, phc2sys) as an ordinary clock with dual-port NIC redundancy by creating a PtpConfig custom resource (CR) object. In a dual-port NIC configuration for an ordinary clock, if one port fails, the standby port takes over, maintaining PTP timing synchronization.
Configuring linuxptp services as an ordinary clock with dual-port NIC redundancy is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.
Prerequisites
-
Install the OpenShift CLI (
oc). -
Log in as a user with
cluster-adminprivileges. - Install the PTP Operator.
- Check the hardware requirements for using your dual-port NIC as an ordinary clock with added redundancy. For further information, see "Using dual-port NICs to improve redundancy for PTP ordinary clocks".
Procedure
Create the following
PtpConfigCR, and then save the YAML in theoc-dual-port-ptp-config.yamlfile.Example PTP ordinary clock dual-port configuration
Copy to Clipboard Copied! Toggle word wrap Toggle overflow - 1
- Specify the system config options for the
ptp4lservice. - 2
- Specify the interface configuration for the
ptp4l service. In this example, setting masterOnly 0 for the ens3f2 and ens3f3 interfaces enables both ports on the ens3 NIC to run as leader or follower clocks. In combination with the slaveOnly 1 specification, this configuration ensures that one port operates as the active ordinary clock, and the other port operates as a standby ordinary clock in the Listening port state.
- Configures
ptp4lto run as an ordinary clock only.
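The ptp4lConf sections that the callouts describe look approximately like the following sketch; the interface names are the ones used in this example:
ptp4lConf: |
  [ens3f2]
  masterOnly 0         # the port can participate, but slaveOnly below keeps it a time receiver
  [ens3f3]
  masterOnly 0
  [global]
  slaveOnly 1          # both ports operate as time receivers only
  clock_type OC        # run ptp4l as an ordinary clock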
Create the
PtpConfigCR by running the following command:oc create -f oc-dual-port-ptp-config.yaml
$ oc create -f oc-dual-port-ptp-config.yamlCopy to Clipboard Copied! Toggle word wrap Toggle overflow
Verification
Check that the
PtpConfigprofile is applied to the node.Get the list of pods in the
openshift-ptpnamespace by running the following command:oc get pods -n openshift-ptp -o wide
$ oc get pods -n openshift-ptp -o wideCopy to Clipboard Copied! Toggle word wrap Toggle overflow Example output
NAME READY STATUS RESTARTS AGE IP NODE linuxptp-daemon-4xkbb 1/1 Running 0 43m 10.1.196.24 compute-0.example.com linuxptp-daemon-tdspf 1/1 Running 0 43m 10.1.196.25 compute-1.example.com ptp-operator-657bbb64c8-2f8sj 1/1 Running 0 43m 10.129.0.61 control-plane-1.example.com
NAME READY STATUS RESTARTS AGE IP NODE linuxptp-daemon-4xkbb 1/1 Running 0 43m 10.1.196.24 compute-0.example.com linuxptp-daemon-tdspf 1/1 Running 0 43m 10.1.196.25 compute-1.example.com ptp-operator-657bbb64c8-2f8sj 1/1 Running 0 43m 10.129.0.61 control-plane-1.example.comCopy to Clipboard Copied! Toggle word wrap Toggle overflow Check that the profile is correct. Examine the logs of the
linuxptpdaemon that corresponds to the node you specified in thePtpConfigprofile. Run the following command:oc logs linuxptp-daemon-4xkbb -n openshift-ptp -c linuxptp-daemon-container
$ oc logs linuxptp-daemon-4xkbb -n openshift-ptp -c linuxptp-daemon-containerCopy to Clipboard Copied! Toggle word wrap Toggle overflow Example output
Copy to Clipboard Copied! Toggle word wrap Toggle overflow
7.2.11. Configuring FIFO priority scheduling for PTP hardware
In telco or other deployment types that require low latency performance, PTP daemon threads run in a constrained CPU footprint alongside the rest of the infrastructure components. By default, PTP threads run with the SCHED_OTHER policy. Under high load, these threads might not get the scheduling latency they require for error-free operation.
To mitigate against potential scheduling latency errors, you can configure the PTP Operator linuxptp services to allow threads to run with a SCHED_FIFO policy. If SCHED_FIFO is set for a PtpConfig CR, then ptp4l and phc2sys will run in the parent container under chrt with a priority set by the ptpSchedulingPriority field of the PtpConfig CR.
Setting ptpSchedulingPolicy is optional, and is only required if you are experiencing latency errors.
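For reference, the two fields sit directly in the profile, as in the following minimal sketch; the profile name is illustrative:
spec:
  profile:
  - name: ptp-fifo-profile          # illustrative profile name
    ptpSchedulingPolicy: SCHED_FIFO # run ptp4l and phc2sys under chrt with the FIFO policy
    ptpSchedulingPriority: 65       # FIFO priority, an integer between 1 and 65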
Procedure
Edit the
PtpConfigCR profile:oc edit PtpConfig -n openshift-ptp
$ oc edit PtpConfig -n openshift-ptpCopy to Clipboard Copied! Toggle word wrap Toggle overflow Change the
ptpSchedulingPolicyandptpSchedulingPriorityfields:Copy to Clipboard Copied! Toggle word wrap Toggle overflow -
Save and exit to apply the changes to the
PtpConfigCR.
Verification
Get the name of the
linuxptp-daemonpod and corresponding node where thePtpConfigCR has been applied:oc get pods -n openshift-ptp -o wide
$ oc get pods -n openshift-ptp -o wideCopy to Clipboard Copied! Toggle word wrap Toggle overflow Example output
NAME READY STATUS RESTARTS AGE IP NODE linuxptp-daemon-gmv2n 3/3 Running 0 1d17h 10.1.196.24 compute-0.example.com linuxptp-daemon-lgm55 3/3 Running 0 1d17h 10.1.196.25 compute-1.example.com ptp-operator-3r4dcvf7f4-zndk7 1/1 Running 0 1d7h 10.129.0.61 control-plane-1.example.com
NAME READY STATUS RESTARTS AGE IP NODE linuxptp-daemon-gmv2n 3/3 Running 0 1d17h 10.1.196.24 compute-0.example.com linuxptp-daemon-lgm55 3/3 Running 0 1d17h 10.1.196.25 compute-1.example.com ptp-operator-3r4dcvf7f4-zndk7 1/1 Running 0 1d7h 10.129.0.61 control-plane-1.example.comCopy to Clipboard Copied! Toggle word wrap Toggle overflow Check that the
ptp4lprocess is running with the updatedchrtFIFO priority:oc -n openshift-ptp logs linuxptp-daemon-lgm55 -c linuxptp-daemon-container|grep chrt
$ oc -n openshift-ptp logs linuxptp-daemon-lgm55 -c linuxptp-daemon-container|grep chrtCopy to Clipboard Copied! Toggle word wrap Toggle overflow Example output
I1216 19:24:57.091872 1600715 daemon.go:285] /bin/chrt -f 65 /usr/sbin/ptp4l -f /var/run/ptp4l.0.config -2 --summary_interval -4 -m
I1216 19:24:57.091872 1600715 daemon.go:285] /bin/chrt -f 65 /usr/sbin/ptp4l -f /var/run/ptp4l.0.config -2 --summary_interval -4 -mCopy to Clipboard Copied! Toggle word wrap Toggle overflow
7.2.12. Configuring PTP log reduction
The linuxptp-daemon generates logs that you can use for debugging purposes. In telco or other deployment types that feature a limited storage capacity, these logs can add to the storage demand. Currently, the default logging rate is high, causing logs to rotate out in under 24 hours, which makes it difficult to track changes and identify problems.
You can achieve basic log reduction by configuring the PtpConfig custom resource (CR) to exclude log messages that report the master offset value. The master offset log message reports the difference between the clock of the current node and the master clock in nanoseconds. However, with this method, there is no summary status of filtered logs.

The enhanced log reduction feature allows you to configure the logging rate of PTP logs. You can set a specific logging rate, which can help reduce the volume of logs generated by the linuxptp-daemon while still retaining essential information for troubleshooting. With the enhanced log reduction feature, you can also specify a threshold so that offset logs are still displayed when the offset exceeds that threshold.
7.2.12.1. Configuring log filtering for PTP
Modify the PtpConfig custom resource (CR) to configure basic log filtering and exclude log messages that report the master offset value.
Prerequisites
-
Install the OpenShift CLI (
oc). -
Log in as a user with
cluster-adminprivileges. - Install the PTP Operator.
Procedure
Edit the
PtpConfigCR:oc edit PtpConfig -n openshift-ptp
$ oc edit PtpConfig -n openshift-ptpCopy to Clipboard Copied! Toggle word wrap Toggle overflow In
spec.profile, add theptpSettings.logReducespecification and set the value totrue:Copy to Clipboard Copied! Toggle word wrap Toggle overflow NoteFor debugging purposes, you can revert this specification to
Falseto include the master offset messages.-
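The relevant fragment of the profile looks like the following minimal sketch; the profile name is a placeholder:
spec:
  profile:
  - name: <profile_name>
    ptpSettings:
      logReduce: "true"     # exclude master offset log messages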
Save and exit to apply the changes to the
PtpConfigCR.
Verification
Get the name of the
linuxptp-daemonpod and corresponding node where thePtpConfigCR has been applied:oc get pods -n openshift-ptp -o wide
$ oc get pods -n openshift-ptp -o wideCopy to Clipboard Copied! Toggle word wrap Toggle overflow Example output
NAME READY STATUS RESTARTS AGE IP NODE linuxptp-daemon-gmv2n 3/3 Running 0 1d17h 10.1.196.24 compute-0.example.com linuxptp-daemon-lgm55 3/3 Running 0 1d17h 10.1.196.25 compute-1.example.com ptp-operator-3r4dcvf7f4-zndk7 1/1 Running 0 1d7h 10.129.0.61 control-plane-1.example.com
NAME READY STATUS RESTARTS AGE IP NODE linuxptp-daemon-gmv2n 3/3 Running 0 1d17h 10.1.196.24 compute-0.example.com linuxptp-daemon-lgm55 3/3 Running 0 1d17h 10.1.196.25 compute-1.example.com ptp-operator-3r4dcvf7f4-zndk7 1/1 Running 0 1d7h 10.129.0.61 control-plane-1.example.comCopy to Clipboard Copied! Toggle word wrap Toggle overflow Verify that master offset messages are excluded from the logs by running the following command:
oc -n openshift-ptp logs <linux_daemon_container> -c linuxptp-daemon-container | grep "master offset"
$ oc -n openshift-ptp logs <linux_daemon_container> -c linuxptp-daemon-container | grep "master offset"1 Copy to Clipboard Copied! Toggle word wrap Toggle overflow - 1
- <linux_daemon_container> is the name of the
linuxptp-daemonpod, for examplelinuxptp-daemon-gmv2n.
When you configure the
logReducespecification, this command does not report any instances ofmaster offsetin the logs of thelinuxptpdaemon.
7.2.12.2. Configuring enhanced PTP log reduction
Basic log reduction effectively filters out frequent logs. However, if you want a periodic summary of the filtered logs, use the enhanced log reduction feature.
Prerequisites
-
Install the OpenShift CLI (
oc). -
Log in as a user with
cluster-adminprivileges. - Install the PTP Operator.
Procedure
Edit the
PtpConfigcustom resource (CR):oc edit PtpConfig -n openshift-ptp
$ oc edit PtpConfig -n openshift-ptpCopy to Clipboard Copied! Toggle word wrap Toggle overflow Add the
ptpSettings.logReducespecification in thespec.profilesection, and set the value toenhanced:Copy to Clipboard Copied! Toggle word wrap Toggle overflow Optional: Configure the interval for summary logs and a threshold in nanoseconds for the master offset logs. For example, to set the interval to 60 seconds and the threshold to 100 nanoseconds, add the
ptpSettings.logReducespecification in thespec.profilesection and set the value toenhanced 60s 100.Copy to Clipboard Copied! Toggle word wrap Toggle overflow - 1
- By default, the
linuxptp-daemonis configured to generate summary logs every 30 seconds if no value is specified. In the example configuration, the daemon generates summary logs every 60 seconds and a threshold of 100 nanoseconds for the master offset logs is set. This means the daemon only produces summary logs at the specified interval. However, if your clock’s offset from the master exceeds plus or minus 100 nanoseconds, that specific log entry is recorded.
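Putting the interval and threshold together, the relevant profile fragment is, as a minimal sketch with a placeholder profile name:
spec:
  profile:
  - name: <profile_name>
    ptpSettings:
      logReduce: "enhanced 60s 100"   # summary every 60 seconds; log offsets larger than +/- 100 ns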
Optional: To set the interval without a master offset threshold, configure the
logReducefield toenhanced 60sin the YAML.Copy to Clipboard Copied! Toggle word wrap Toggle overflow -
Save and exit to apply the changes to the
PtpConfigCR.
Verification
Get the name of the
linuxptp-daemonpod and the corresponding node where thePtpConfigCR is applied by running the following commandoc get pods -n openshift-ptp -o wide
$ oc get pods -n openshift-ptp -o wideCopy to Clipboard Copied! Toggle word wrap Toggle overflow Example output
NAME READY STATUS RESTARTS AGE IP NODE linuxptp-daemon-gmv2n 3/3 Running 0 1d17h 10.1.196.24 compute-0.example.com linuxptp-daemon-lgm55 3/3 Running 0 1d17h 10.1.196.25 compute-1.example.com ptp-operator-3r4dcvf7f4-zndk7 1/1 Running 0 1d7h 10.129.0.61 control-plane-1.example.com
NAME READY STATUS RESTARTS AGE IP NODE linuxptp-daemon-gmv2n 3/3 Running 0 1d17h 10.1.196.24 compute-0.example.com linuxptp-daemon-lgm55 3/3 Running 0 1d17h 10.1.196.25 compute-1.example.com ptp-operator-3r4dcvf7f4-zndk7 1/1 Running 0 1d7h 10.129.0.61 control-plane-1.example.comCopy to Clipboard Copied! Toggle word wrap Toggle overflow Verify that master offset messages are excluded from the logs by running the following command:
oc -n openshift-ptp logs <linux_daemon_container> -c linuxptp-daemon-container | grep "master offset"
$ oc -n openshift-ptp logs <linux_daemon_container> -c linuxptp-daemon-container | grep "master offset"1 Copy to Clipboard Copied! Toggle word wrap Toggle overflow - 1
- <linux_daemon_container> is the name of the
linuxptp-daemonpod, for example,linuxptp-daemon-gmv2n.
7.2.13. Troubleshooting common PTP Operator issues
Troubleshoot common problems with the PTP Operator by performing the following steps.
Prerequisites
-
Install the OpenShift Container Platform CLI (
oc). -
Log in as a user with
cluster-adminprivileges. - Install the PTP Operator on a bare-metal cluster with hosts that support PTP.
Procedure
Check the Operator and operands are successfully deployed in the cluster for the configured nodes.
oc get pods -n openshift-ptp -o wide
$ oc get pods -n openshift-ptp -o wideCopy to Clipboard Copied! Toggle word wrap Toggle overflow Example output
NAME READY STATUS RESTARTS AGE IP NODE linuxptp-daemon-lmvgn 3/3 Running 0 4d17h 10.1.196.24 compute-0.example.com linuxptp-daemon-qhfg7 3/3 Running 0 4d17h 10.1.196.25 compute-1.example.com ptp-operator-6b8dcbf7f4-zndk7 1/1 Running 0 5d7h 10.129.0.61 control-plane-1.example.com
NAME READY STATUS RESTARTS AGE IP NODE linuxptp-daemon-lmvgn 3/3 Running 0 4d17h 10.1.196.24 compute-0.example.com linuxptp-daemon-qhfg7 3/3 Running 0 4d17h 10.1.196.25 compute-1.example.com ptp-operator-6b8dcbf7f4-zndk7 1/1 Running 0 5d7h 10.129.0.61 control-plane-1.example.comCopy to Clipboard Copied! Toggle word wrap Toggle overflow NoteWhen the PTP fast event bus is enabled, the number of ready
linuxptp-daemonpods is3/3. If the PTP fast event bus is not enabled,2/2is displayed.Check that supported hardware is found in the cluster.
oc -n openshift-ptp get nodeptpdevices.ptp.openshift.io
$ oc -n openshift-ptp get nodeptpdevices.ptp.openshift.ioCopy to Clipboard Copied! Toggle word wrap Toggle overflow Example output
Copy to Clipboard Copied! Toggle word wrap Toggle overflow Check the available PTP network interfaces for a node:
oc -n openshift-ptp get nodeptpdevices.ptp.openshift.io <node_name> -o yaml
$ oc -n openshift-ptp get nodeptpdevices.ptp.openshift.io <node_name> -o yamlCopy to Clipboard Copied! Toggle word wrap Toggle overflow where:
- <node_name>
Specifies the node you want to query, for example,
compute-0.example.com.Example output
Copy to Clipboard Copied! Toggle word wrap Toggle overflow
Check that the PTP interface is successfully synchronized to the primary clock by accessing the
linuxptp-daemonpod for the corresponding node.Get the name of the
linuxptp-daemonpod and corresponding node you want to troubleshoot by running the following command:oc get pods -n openshift-ptp -o wide
$ oc get pods -n openshift-ptp -o wideCopy to Clipboard Copied! Toggle word wrap Toggle overflow Example output
NAME READY STATUS RESTARTS AGE IP NODE linuxptp-daemon-lmvgn 3/3 Running 0 4d17h 10.1.196.24 compute-0.example.com linuxptp-daemon-qhfg7 3/3 Running 0 4d17h 10.1.196.25 compute-1.example.com ptp-operator-6b8dcbf7f4-zndk7 1/1 Running 0 5d7h 10.129.0.61 control-plane-1.example.com
NAME READY STATUS RESTARTS AGE IP NODE linuxptp-daemon-lmvgn 3/3 Running 0 4d17h 10.1.196.24 compute-0.example.com linuxptp-daemon-qhfg7 3/3 Running 0 4d17h 10.1.196.25 compute-1.example.com ptp-operator-6b8dcbf7f4-zndk7 1/1 Running 0 5d7h 10.129.0.61 control-plane-1.example.comCopy to Clipboard Copied! Toggle word wrap Toggle overflow Remote shell into the required
linuxptp-daemoncontainer:oc rsh -n openshift-ptp -c linuxptp-daemon-container <linux_daemon_container>
$ oc rsh -n openshift-ptp -c linuxptp-daemon-container <linux_daemon_container>Copy to Clipboard Copied! Toggle word wrap Toggle overflow where:
- <linux_daemon_container>
-
is the container you want to diagnose, for example
linuxptp-daemon-lmvgn.
In the remote shell connection to the
linuxptp-daemoncontainer, use the PTP Management Client (pmc) tool to diagnose the network interface. Run the followingpmccommand to check the sync status of the PTP device, for exampleptp4l.pmc -u -f /var/run/ptp4l.0.config -b 0 'GET PORT_DATA_SET'
# pmc -u -f /var/run/ptp4l.0.config -b 0 'GET PORT_DATA_SET'Copy to Clipboard Copied! Toggle word wrap Toggle overflow Example output when the node is successfully synced to the primary clock
Copy to Clipboard Copied! Toggle word wrap Toggle overflow
For GNSS-sourced grandmaster clocks, verify that the in-tree NIC ice driver is correct by running the following command, for example:
oc rsh -n openshift-ptp -c linuxptp-daemon-container linuxptp-daemon-74m2g ethtool -i ens7f0
$ oc rsh -n openshift-ptp -c linuxptp-daemon-container linuxptp-daemon-74m2g ethtool -i ens7f0Copy to Clipboard Copied! Toggle word wrap Toggle overflow Example output
driver: ice version: 5.14.0-356.bz2232515.el9.x86_64 firmware-version: 4.20 0x8001778b 1.3346.0
driver: ice version: 5.14.0-356.bz2232515.el9.x86_64 firmware-version: 4.20 0x8001778b 1.3346.0Copy to Clipboard Copied! Toggle word wrap Toggle overflow For GNSS-sourced grandmaster clocks, verify that the
linuxptp-daemoncontainer is receiving signal from the GNSS antenna. If the container is not receiving the GNSS signal, the/dev/gnss0file is not populated. To verify, run the following command:oc rsh -n openshift-ptp -c linuxptp-daemon-container linuxptp-daemon-jnz6r cat /dev/gnss0
$ oc rsh -n openshift-ptp -c linuxptp-daemon-container linuxptp-daemon-jnz6r cat /dev/gnss0Copy to Clipboard Copied! Toggle word wrap Toggle overflow Example output
$GNRMC,125223.00,A,4233.24463,N,07126.64561,W,0.000,,300823,,,A,V*0A $GNVTG,,T,,M,0.000,N,0.000,K,A*3D $GNGGA,125223.00,4233.24463,N,07126.64561,W,1,12,99.99,98.6,M,-33.1,M,,*7E $GNGSA,A,3,25,17,19,11,12,06,05,04,09,20,,,99.99,99.99,99.99,1*37 $GPGSV,3,1,10,04,12,039,41,05,31,222,46,06,50,064,48,09,28,064,42,1*62
$GNRMC,125223.00,A,4233.24463,N,07126.64561,W,0.000,,300823,,,A,V*0A $GNVTG,,T,,M,0.000,N,0.000,K,A*3D $GNGGA,125223.00,4233.24463,N,07126.64561,W,1,12,99.99,98.6,M,-33.1,M,,*7E $GNGSA,A,3,25,17,19,11,12,06,05,04,09,20,,,99.99,99.99,99.99,1*37 $GPGSV,3,1,10,04,12,039,41,05,31,222,46,06,50,064,48,09,28,064,42,1*62Copy to Clipboard Copied! Toggle word wrap Toggle overflow
7.2.14. Getting the DPLL firmware version for the CGU in an Intel 800 series NIC
You can get the digital phase-locked loop (DPLL) firmware version for the Clock Generation Unit (CGU) in an Intel 800 series NIC by opening a debug shell to the cluster node and querying the NIC hardware.
Prerequisites
-
You have installed the OpenShift CLI (
oc). -
You have logged in as a user with
cluster-adminprivileges. - You have installed an Intel 800 series NIC in the cluster host.
- You have installed the PTP Operator on a bare-metal cluster with hosts that support PTP.
Procedure
Start a debug pod by running the following command:
oc debug node/<node_name>
$ oc debug node/<node_name>Copy to Clipboard Copied! Toggle word wrap Toggle overflow where:
- <node_name>
- Is the node where you have installed the Intel 800 series NIC.
Check the CGU firmware version in the NIC by using the
devlinktool and the bus and device name where the NIC is installed. For example, run the following command:devlink dev info <bus_name>/<device_name> | grep cgu
sh-4.4# devlink dev info <bus_name>/<device_name> | grep cguCopy to Clipboard Copied! Toggle word wrap Toggle overflow where:
- <bus_name>
-
Is the bus where the NIC is installed. For example,
pci. - <device_name>
-
Is the NIC device name. For example,
0000:51:00.0.
Example output
cgu.id 36 fw.cgu 8032.16973825.6021
cgu.id 36 fw.cgu 8032.16973825.6021
Note: The firmware version has a leading nibble and 3 octets for each part of the version number. The number 16973825 in binary is 0001 0000 0011 0000 0000 0000 0001. Use the binary value to decode the firmware version. For example:
Table 7.10. DPLL firmware version
| Binary part | Decimal value |
|---|---|
| 0001 | 1 |
| 0000 0011 | 3 |
| 0000 0000 | 0 |
| 0000 0001 | 1 |
7.2.15. Collecting PTP Operator data
You can use the oc adm must-gather command to collect information about your cluster, including features and objects associated with the PTP Operator.
Prerequisites
-
You have access to the cluster as a user with the
cluster-adminrole. -
You have installed the OpenShift CLI (
oc). - You have installed the PTP Operator.
Procedure
To collect PTP Operator data with
must-gather, you must specify the PTP Operatormust-gatherimage.oc adm must-gather --image=registry.redhat.io/openshift4/ptp-must-gather-rhel9:v4.20
$ oc adm must-gather --image=registry.redhat.io/openshift4/ptp-must-gather-rhel9:v4.20Copy to Clipboard Copied! Toggle word wrap Toggle overflow
7.3. Developing PTP events consumer applications with the REST API v2
When developing consumer applications that make use of Precision Time Protocol (PTP) events on a bare-metal cluster node, you deploy your consumer application in a separate application pod. The consumer application subscribes to PTP events by using the PTP events REST API v2.
The following information provides general guidance for developing consumer applications that use PTP events. A complete events consumer application example is outside the scope of this information.
7.3.1. About the PTP fast event notifications framework
Use the Precision Time Protocol (PTP) fast event REST API v2 to subscribe cluster applications to PTP events that the bare-metal cluster node generates.
The fast events notifications framework uses a REST API for communication. The PTP events REST API v2 is based on the O-RAN O-Cloud Notification API Specification for Event Consumers 4.0 that is available from O-RAN ALLIANCE Specifications.
7.3.2. Retrieving PTP events with the PTP events REST API v2
Applications subscribe to PTP events by using an O-RAN v4 compatible REST API in the producer-side cloud event proxy sidecar. The cloud-event-proxy sidecar container can access the same resources as the primary application container without using any of the resources of the primary application and with no significant latency.
Figure 7.6. Overview of consuming PTP fast events from the PTP event producer REST API v2
-
Event is generated on the cluster host -
The
linuxptp-daemonprocess in the PTP Operator-managed pod runs as a KubernetesDaemonSetand manages the variouslinuxptpprocesses (ptp4l,phc2sys, and optionally for grandmaster clocks,ts2phc). Thelinuxptp-daemonpasses the event to the UNIX domain socket. -
Event is passed to the cloud-event-proxy sidecar -
The PTP plugin reads the event from the UNIX domain socket and passes it to the
cloud-event-proxysidecar in the PTP Operator-managed pod.cloud-event-proxydelivers the event from the Kubernetes infrastructure to Cloud-Native Network Functions (CNFs) with low latency. -
Event is published -
The
cloud-event-proxysidecar in the PTP Operator-managed pod processes the event and publishes the event by using the PTP events REST API v2. -
Consumer application requests a subscription and receives the subscribed event -
The consumer application sends an API request to the producer
cloud-event-proxysidecar to create a PTP events subscription. Once subscribed, the consumer application listens to the address specified in the resource qualifier and receives and processes the PTP events.
7.3.3. Configuring the PTP fast event notifications publisher
To start using PTP fast event notifications for a network interface in your cluster, you must enable the fast event publisher in the PTP Operator PtpOperatorConfig custom resource (CR) and configure ptpClockThreshold values in a PtpConfig CR that you create.
Prerequisites
-
You have installed the OpenShift Container Platform CLI (
oc). -
You have logged in as a user with
cluster-adminprivileges. - You have installed the PTP Operator.
Procedure
Modify the default PTP Operator config to enable PTP fast events.
Save the following YAML in the
ptp-operatorconfig.yamlfile:Copy to Clipboard Copied! Toggle word wrap Toggle overflow - 1
- Enable PTP fast event notifications by setting
enableEventPublishertotrue.
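If you are creating the file from scratch, the default PtpOperatorConfig CR with the publisher enabled looks approximately like the following sketch; the node selector is an assumption for a typical worker-node deployment:
apiVersion: ptp.openshift.io/v1
kind: PtpOperatorConfig
metadata:
  name: default
  namespace: openshift-ptp
spec:
  daemonNodeSelector:
    node-role.kubernetes.io/worker: ""
  ptpEventConfig:
    enableEventPublisher: true     # enable the PTP fast event notifications publisher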
Update the
PtpOperatorConfigCR:oc apply -f ptp-operatorconfig.yaml
$ oc apply -f ptp-operatorconfig.yamlCopy to Clipboard Copied! Toggle word wrap Toggle overflow
Create a
PtpConfigcustom resource (CR) for the PTP enabled interface, and set the required values forptpClockThresholdandptp4lOpts. The following YAML illustrates the required values that you must set in thePtpConfigCR:Copy to Clipboard Copied! Toggle word wrap Toggle overflow - 1
1. Append --summary_interval -4 to use PTP fast events.
2. Required phc2sysOpts values. -m prints messages to stdout. The linuxptp-daemon DaemonSet parses the logs and generates Prometheus metrics.
3. Specify a string that contains the configuration to replace the default /etc/ptp4l.conf file. To use the default configuration, leave the field empty.
4. Optional. If the ptpClockThreshold stanza is not present, default values are used for the ptpClockThreshold fields. The stanza shows default ptpClockThreshold values. The ptpClockThreshold values configure how long after the PTP master clock is disconnected before PTP events are triggered. holdOverTimeout is the time value in seconds before the PTP clock event state changes to FREERUN when the PTP master clock is disconnected. The maxOffsetThreshold and minOffsetThreshold settings configure offset values in nanoseconds that compare against the values for CLOCK_REALTIME (phc2sys) or master offset (ptp4l). When the ptp4l or phc2sys offset value is outside this range, the PTP clock state is set to FREERUN. When the offset value is within this range, the PTP clock state is set to LOCKED.
7.3.4. PTP events REST API v2 consumer application reference
PTP event consumer applications require the following features:
- A web service running with a POST handler to receive the cloud native PTP events JSON payload
- A createSubscription function to subscribe to the PTP events producer
- A getCurrentState function to poll the current state of the PTP events producer
The following example Go snippets illustrate these requirements:
Example PTP events consumer server function in Go
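A minimal sketch of such a web service, assuming the consumer serves the /event path on port 9043 to match the EndpointUri used in the subscription payloads later in this chapter:

package main

import (
	"io"
	"log"
	"net/http"
)

// handleEvent receives the cloud native PTP events JSON payload that the
// producer POSTs to the EndpointUri registered in the subscription.
func handleEvent(w http.ResponseWriter, r *http.Request) {
	if r.Method != http.MethodPost {
		w.WriteHeader(http.StatusMethodNotAllowed)
		return
	}
	defer r.Body.Close()
	body, err := io.ReadAll(r.Body)
	if err != nil {
		w.WriteHeader(http.StatusBadRequest)
		return
	}
	log.Printf("received PTP event: %s", string(body))
	w.WriteHeader(http.StatusNoContent)
}

func main() {
	http.HandleFunc("/event", handleEvent)
	// Port 9043 matches the EndpointUri port used in the example subscription payloads.
	log.Fatal(http.ListenAndServe(":9043", nil))
}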
Example PTP events createSubscription function in Go
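A minimal sketch of a createSubscription function. The producer service URL follows the pattern described in this chapter, the EndpointUri matches the reference consumer service, and the 201 Created check reflects the subscription status codes listed in the API reference:

package consumer

import (
	"bytes"
	"encoding/json"
	"fmt"
	"net/http"
)

// createSubscription registers the consumer EndpointUri with the PTP events
// producer for the given resource address, for example
// "/cluster/node/<node_name>/sync/ptp-status/lock-state".
func createSubscription(resourceAddress string) error {
	// 1: Replace <node_name> with the FQDN of the node that is generating the PTP events.
	const subscriptionURL = "http://ptp-event-publisher-service-<node_name>.openshift-ptp.svc.cluster.local:9043/api/ocloudNotifications/v2/subscriptions"
	const endpointURI = "http://consumer-events-subscription-service.cloud-events.svc.cluster.local:9043/event"

	payload, err := json.Marshal(map[string]string{
		"EndpointUri":     endpointURI,
		"ResourceAddress": resourceAddress,
	})
	if err != nil {
		return err
	}
	resp, err := http.Post(subscriptionURL, "application/json", bytes.NewReader(payload))
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	// 201 Created indicates that the subscription is created.
	if resp.StatusCode != http.StatusCreated {
		return fmt.Errorf("subscription request failed: %s", resp.Status)
	}
	return nil
}

For example, calling createSubscription("/cluster/node/<node_name>/sync/ptp-status/lock-state") subscribes the consumer to lock-state events.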
1. Replace <node_name> with the FQDN of the node that is generating the PTP events. For example, compute-1.example.com.
Example PTP events consumer getCurrentState function in Go
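A minimal sketch of a getCurrentState function that polls the CurrentState endpoint described in the API reference; the base URL and error handling are illustrative:

package consumer

import (
	"fmt"
	"io"
	"net/http"
)

// getCurrentState polls the producer REST API for the current state of the
// given resource, for example
// "/cluster/node/<node_name>/sync/ptp-status/lock-state".
func getCurrentState(resourceAddress string) (string, error) {
	// 1: Replace <node_name> with the FQDN of the node that is generating the PTP events.
	const producerBaseURL = "http://ptp-event-publisher-service-<node_name>.openshift-ptp.svc.cluster.local:9043/api/ocloudNotifications/v2"

	resp, err := http.Get(producerBaseURL + resourceAddress + "/CurrentState")
	if err != nil {
		return "", err
	}
	defer resp.Body.Close()
	body, err := io.ReadAll(resp.Body)
	if err != nil {
		return "", err
	}
	if resp.StatusCode != http.StatusOK {
		return "", fmt.Errorf("CurrentState request failed: %s", resp.Status)
	}
	return string(body), nil
}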
1. Replace <node_name> with the FQDN of the node that is generating the PTP events. For example, compute-1.example.com.
7.3.5. Reference event consumer deployment and service CRs using PTP events REST API v2
Use the following example PTP event consumer custom resources (CRs) as a reference when deploying your PTP events consumer application for use with the PTP events REST API v2.
Reference cloud event consumer namespace
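A minimal namespace sketch; the name matches the cloud-events namespace used by the other reference CRs in this section:

apiVersion: v1
kind: Namespace
metadata:
  name: cloud-events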
Reference cloud event consumer deployment
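A sketch of the consumer deployment. The deployment name and service account match the other reference CRs and the verification commands in this chapter; the labels, container name, and image are placeholders for your own consumer application:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: cloud-consumer-deployment
  namespace: cloud-events
  labels:
    app: consumer
spec:
  replicas: 1
  selector:
    matchLabels:
      app: consumer
  template:
    metadata:
      labels:
        app: consumer
    spec:
      serviceAccountName: consumer-sa
      containers:
      - name: cloud-event-consumer
        image: <cloud-event-consumer-image>   # placeholder: your consumer application image
        ports:
        - containerPort: 9043                 # matches the EndpointUri port in the subscription payloads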
Reference cloud event consumer service account
apiVersion: v1
kind: ServiceAccount
metadata:
name: consumer-sa
namespace: cloud-events
Reference cloud event consumer service
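A sketch of the consumer service. The service name, namespace, and port match the EndpointUri used in the subscription payloads later in this chapter; the selector is an assumption that matches the labels in the deployment sketch above:

apiVersion: v1
kind: Service
metadata:
  name: consumer-events-subscription-service
  namespace: cloud-events
spec:
  selector:
    app: consumer
  ports:
  - name: sub-port
    port: 9043
    targetPort: 9043
  type: ClusterIP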
7.3.6. Subscribing to PTP events with the REST API v2
Deploy your cloud-event-consumer application container and subscribe it to PTP events posted by the cloud-event-proxy container in the pod managed by the PTP Operator.
Subscribe consumer applications to PTP events by sending a POST request to http://ptp-event-publisher-service-NODE_NAME.openshift-ptp.svc.cluster.local:9043/api/ocloudNotifications/v2/subscriptions and passing the appropriate subscription request payload.
9043 is the default port for the cloud-event-proxy container deployed in the PTP event producer pod. You can configure a different port for your application as required.
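For example, the following curl sketch subscribes to PTP lock-state events. Replace NODE_NAME and {node_name} with the FQDN of the node that is generating the PTP events, and adjust the EndpointUri to match your consumer service:

$ curl -X POST \
  http://ptp-event-publisher-service-NODE_NAME.openshift-ptp.svc.cluster.local:9043/api/ocloudNotifications/v2/subscriptions \
  -H "Content-Type: application/json" \
  -d '{"EndpointUri": "http://consumer-events-subscription-service.cloud-events.svc.cluster.local:9043/event", "ResourceAddress": "/cluster/node/{node_name}/sync/ptp-status/lock-state"}'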
7.3.7. Verifying that the PTP events REST API v2 consumer application is receiving events
Verify that the cloud-event-consumer container in the application pod is receiving Precision Time Protocol (PTP) events.
Prerequisites
- You have installed the OpenShift CLI (oc).
- You have logged in as a user with cluster-admin privileges.
- You have installed and configured the PTP Operator.
- You have deployed a cloud events application pod and PTP events consumer application.
Procedure
Check the logs for the deployed events consumer application. For example, run the following command:

$ oc -n cloud-events logs -f deployment/cloud-consumer-deployment

Optional. Test the REST API by using oc and port-forwarding port 9043 from the linuxptp-daemon DaemonSet. For example, run the following command:

$ oc port-forward -n openshift-ptp ds/linuxptp-daemon 9043:9043

Example output

Forwarding from 127.0.0.1:9043 -> 9043
Forwarding from [::1]:9043 -> 9043
Handling connection for 9043

Open a new shell prompt and test the REST API v2 endpoints:

$ curl -X GET http://localhost:9043/api/ocloudNotifications/v2/health

Example output

OK
7.3.8. Monitoring PTP fast event metrics
You can monitor PTP fast events metrics from cluster nodes where the linuxptp-daemon is running. You can also monitor PTP fast event metrics in the OpenShift Container Platform web console by using the preconfigured and self-updating Prometheus monitoring stack.
Prerequisites
- Install the OpenShift Container Platform CLI (oc).
- Log in as a user with cluster-admin privileges.
- Install and configure the PTP Operator on a node with PTP-capable hardware.
Procedure
Start a debug pod for the node by running the following command:

$ oc debug node/<node_name>

Check for PTP metrics exposed by the linuxptp-daemon container. For example, run the following command:

sh-4.4# curl http://localhost:9091/metrics

Optional. You can also find PTP events in the logs for the cloud-event-proxy container. For example, run the following command:

$ oc logs -f linuxptp-daemon-cvgr6 -n openshift-ptp -c cloud-event-proxy

To view the PTP event in the OpenShift Container Platform web console, copy the name of the PTP metric that you want to query, for example, openshift_ptp_offset_ns.

In the OpenShift Container Platform web console, click Observe → Metrics.

Paste the PTP metric name into the Expression field, and click Run queries.
7.3.9. PTP fast event metrics reference
The following table describes the PTP fast events metrics that are available from cluster nodes where the linuxptp-daemon service is running.
| Metric | Description | Example |
|---|---|---|
| | Returns the PTP clock class for the interface. Possible values for PTP clock class are 6 ( | |
| | Returns the current PTP clock state for the interface. Possible values for PTP clock state are | |
| | Returns the delay in nanoseconds between the primary clock sending the timing packet and the secondary clock receiving the timing packet. | |
| | Returns the current status of the highly available system clock when there are multiple time sources on different NICs. Possible values are 0 ( | |
| | Returns the frequency adjustment in nanoseconds between 2 PTP clocks. For example, between the upstream clock and the NIC, between the system clock and the NIC, or between the PTP hardware clock ( | |
| | Returns the configured PTP clock role for the interface. Possible values are 0 ( | |
| | Returns the maximum offset in nanoseconds between 2 clocks or interfaces. For example, between the upstream GNSS clock and the NIC ( | |
| | Returns the offset in nanoseconds between the DPLL clock or the GNSS clock source and the NIC hardware clock. | |
| | Returns a count of the number of times the | |
| | Returns a status code that shows whether the PTP processes are running or not. | |
| | Returns values for | |
7.3.9.1. PTP fast event metrics only when T-GM is enabled
The following table describes the PTP fast event metrics that are available only when PTP grandmaster clock (T-GM) is enabled.
| Metric | Description | Example |
|---|---|---|
| | Returns the current status of the digital phase-locked loop (DPLL) frequency for the NIC. Possible values are -1 ( | |
| | Returns the current status of the NMEA connection. NMEA is the protocol that is used for 1PPS NIC connections. Possible values are 0 ( | |
| | Returns the status of the DPLL phase for the NIC. Possible values are -1 ( | |
| | Returns the current status of the NIC 1PPS connection. You use the 1PPS connection to synchronize timing between connected NICs. Possible values are 0 ( | |
| | Returns the current status of the global navigation satellite system (GNSS) connection. GNSS provides satellite-based positioning, navigation, and timing services globally. Possible values are 0 ( | |
7.4. PTP events REST API v2 reference
Use the following REST API v2 endpoints to subscribe the cloud-event-consumer application to Precision Time Protocol (PTP) events posted at http://ptp-event-publisher-service-NODE_NAME.openshift-ptp.svc.cluster.local:9043/api/ocloudNotifications/v2 in the PTP events producer pod.
api/ocloudNotifications/v2/subscriptions
- POST: Creates a new subscription
- GET: Retrieves a list of subscriptions
- DELETE: Deletes all subscriptions

api/ocloudNotifications/v2/subscriptions/{subscription_id}
- GET: Returns details for the specified subscription ID
- DELETE: Deletes the subscription associated with the specified subscription ID

api/ocloudNotifications/v2/health
- GET: Returns the health status of the ocloudNotifications API

api/ocloudNotifications/v2/publishers
- GET: Returns a list of PTP event publishers for the cluster node

api/ocloudNotifications/v2/{resource_address}/CurrentState
- GET: Returns the current state of the event type specified by {resource_address}
7.4.1. PTP events REST API v2 endpoints
7.4.1.1. api/ocloudNotifications/v2/subscriptions
HTTP method
GET api/ocloudNotifications/v2/subscriptions
Description
Returns a list of subscriptions. If subscriptions exist, a 200 OK status code is returned along with the list of subscriptions.
Example API response
HTTP method
POST api/ocloudNotifications/v2/subscriptions
Description
Creates a new subscription for the required event by passing the appropriate payload.
You can subscribe to the following PTP events:
- sync-state events
- lock-state events
- gnss-sync-status events
- os-clock-sync-state events
- clock-class events
| Parameter | Type |
|---|---|
| subscription | data |
Example sync-state subscription payload
{
"EndpointUri": "http://consumer-events-subscription-service.cloud-events.svc.cluster.local:9043/event",
"ResourceAddress": "/cluster/node/{node_name}/sync/sync-status/sync-state"
}
Example PTP lock-state events subscription payload
{
"EndpointUri": "http://consumer-events-subscription-service.cloud-events.svc.cluster.local:9043/event",
"ResourceAddress": "/cluster/node/{node_name}/sync/ptp-status/lock-state"
}
Example PTP gnss-sync-status events subscription payload
{
"EndpointUri": "http://consumer-events-subscription-service.cloud-events.svc.cluster.local:9043/event",
"ResourceAddress": "/cluster/node/{node_name}/sync/gnss-status/gnss-sync-status"
}
Example PTP os-clock-sync-state events subscription payload
{
"EndpointUri": "http://consumer-events-subscription-service.cloud-events.svc.cluster.local:9043/event",
"ResourceAddress": "/cluster/node/{node_name}/sync/sync-status/os-clock-sync-state"
}
Example PTP clock-class events subscription payload
{
"EndpointUri": "http://consumer-events-subscription-service.cloud-events.svc.cluster.local:9043/event",
"ResourceAddress": "/cluster/node/{node_name}/sync/ptp-status/clock-class"
}
Example API response
The following subscription status codes are possible:

| Status code | Description |
|---|---|
| 201 Created | Indicates that the subscription is created |
| 400 Bad Request | Indicates that the server could not process the request because it was malformed or invalid |
| 404 Not Found | Indicates that the subscription resource is not available |
| 409 Conflict | Indicates that the subscription already exists |
HTTP method
DELETE api/ocloudNotifications/v2/subscriptions
Description
Deletes all subscriptions.
Example API response
{
"status": "deleted all subscriptions"
}
7.4.1.2. api/ocloudNotifications/v2/subscriptions/{subscription_id}
HTTP method
GET api/ocloudNotifications/v2/subscriptions/{subscription_id}
Description
Returns details for the subscription with ID subscription_id.
| Parameter | Type |
|---|---|
| subscription_id | string |
Example API response
HTTP method
DELETE api/ocloudNotifications/v2/subscriptions/{subscription_id}
Description
Deletes the subscription with ID subscription_id.
| Parameter | Type |
|---|---|
| subscription_id | string |
| HTTP response | Description |
|---|---|
| 204 No Content | Success |
7.4.1.3. api/ocloudNotifications/v2/health
HTTP method
GET api/ocloudNotifications/v2/health/
Description
Returns the health status for the ocloudNotifications REST API.
| HTTP response | Description |
|---|---|
| 200 OK | Success |
7.4.1.4. api/ocloudNotifications/v2/publishers
HTTP method
GET api/ocloudNotifications/v2/publishers
Description
Returns a list of publisher details for the cluster node. The system generates notifications when the relevant equipment state changes.
You can use equipment synchronization status subscriptions together to deliver a detailed view of the overall synchronization health of the system.
Example API response
| HTTP response | Description |
|---|---|
| 200 OK | Success |
7.4.1.5. api/ocloudNotifications/v2/{resource_address}/CurrentState
HTTP method
GET api/ocloudNotifications/v2/cluster/node/{node_name}/sync/ptp-status/lock-state/CurrentState
GET api/ocloudNotifications/v2/cluster/node/{node_name}/sync/sync-status/os-clock-sync-state/CurrentState
GET api/ocloudNotifications/v2/cluster/node/{node_name}/sync/ptp-status/clock-class/CurrentState
GET api/ocloudNotifications/v2/cluster/node/{node_name}/sync/sync-status/sync-state/CurrentState
GET api/ocloudNotifications/v2/cluster/node/{node_name}/sync/gnss-status/gnss-sync-status/CurrentState
Description
Returns the current state of the os-clock-sync-state, clock-class, lock-state, gnss-sync-status, or sync-state events for the cluster node.
- os-clock-sync-state notifications describe the host operating system clock synchronization state. Can be in LOCKED or FREERUN state.
- clock-class notifications describe the current state of the PTP clock class.
- lock-state notifications describe the current status of the PTP equipment lock state. Can be in LOCKED, HOLDOVER, or FREERUN state.
- sync-state notifications describe the current status of the least synchronized of the PTP clock lock-state and os-clock-sync-state states.
- gnss-sync-status notifications describe the GNSS clock synchronization state.
| Parameter | Type |
|---|---|
| resource_address | string |
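For example, the following curl sketch queries the lock-state current state from inside the cluster. Replace NODE_NAME and {node_name} with the FQDN of the node that is generating the PTP events:

$ curl -X GET http://ptp-event-publisher-service-NODE_NAME.openshift-ptp.svc.cluster.local:9043/api/ocloudNotifications/v2/cluster/node/{node_name}/sync/ptp-status/lock-state/CurrentState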
Example lock-state API response
Example os-clock-sync-state API response
Example clock-class API response
Example sync-state API response
Example gnss-sync-state API response
Legal Notice
Copyright © 2025 Red Hat
OpenShift documentation is licensed under the Apache License 2.0 (https://www.apache.org/licenses/LICENSE-2.0).
Modified versions must remove all Red Hat trademarks.
Portions adapted from https://github.com/kubernetes-incubator/service-catalog/ with modifications by Red Hat.
Red Hat, Red Hat Enterprise Linux, the Red Hat logo, the Shadowman logo, JBoss, OpenShift, Fedora, the Infinity logo, and RHCE are trademarks of Red Hat, Inc., registered in the United States and other countries.
Linux® is the registered trademark of Linus Torvalds in the United States and other countries.
Java® is a registered trademark of Oracle and/or its affiliates.
XFS® is a trademark of Silicon Graphics International Corp. or its subsidiaries in the United States and/or other countries.
MySQL® is a registered trademark of MySQL AB in the United States, the European Union and other countries.
Node.js® is an official trademark of Joyent. Red Hat Software Collections is not formally related to or endorsed by the official Joyent Node.js open source or commercial project.
The OpenStack® Word Mark and OpenStack logo are either registered trademarks/service marks or trademarks/service marks of the OpenStack Foundation, in the United States and other countries and are used with the OpenStack Foundation’s permission. We are not affiliated with, endorsed or sponsored by the OpenStack Foundation, or the OpenStack community.
All other trademarks are the property of their respective owners.