Advanced networking
Specialized and advanced networking topics in OpenShift Container Platform
Chapter 1. Verifying connectivity to an endpoint
The Cluster Network Operator (CNO) runs a controller, the connectivity check controller, that performs a connection health check between resources within your cluster. By reviewing the results of the health checks, you can diagnose connection problems or eliminate network connectivity as the cause of an issue that you are investigating.
1.1. Connection health checks that are performed
To verify that cluster resources are reachable, a TCP connection is made to each of the following cluster API services:
- Kubernetes API server service
- Kubernetes API server endpoints
- OpenShift API server service
- OpenShift API server endpoints
- Load balancers
To verify that services and service endpoints are reachable on every node in the cluster, a TCP connection is made to each of the following targets:
- Health check target service
- Health check target endpoints
1.2. Implementation of connection health checks
The connectivity check controller orchestrates connection verification checks in your cluster. The results for the connection tests are stored in PodNetworkConnectivityCheck objects in the openshift-network-diagnostics namespace. Connection tests are performed every minute in parallel.
			
The Cluster Network Operator (CNO) deploys several resources to the cluster to send and receive connectivity health checks:
- Health check source
- This program deploys in a single pod replica set managed by a Deployment object. The program consumes PodNetworkConnectivityCheck objects and connects to the spec.targetEndpoint specified in each object.
- Health check target
- A pod deployed as part of a daemon set on every node in the cluster. The pod listens for inbound health checks. The presence of this pod on every node allows for the testing of connectivity to each node.
You can configure the nodes on which the network connectivity source and target pods run by using a node selector. Additionally, you can specify permissible tolerations for source and target pods. The configuration is defined in the singleton cluster custom resource of the Network API in the config.openshift.io/v1 API group.
			
Pod scheduling occurs after you have updated the configuration. Therefore, you must apply node labels that you intend to use in your selectors before updating the configuration. Labels applied after updating your network connectivity check pod placement are ignored.
Refer to the default configuration in the following YAML:
Default configuration for connectivity source and target pods
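A representative sketch of this configuration follows. The node selector keys, label values, and the toleration are illustrative placeholders; the numbered comments correspond to the callouts below.

apiVersion: config.openshift.io/v1
kind: Network
metadata:
  name: cluster
spec:
  networkDiagnostics:        # 1
    mode: "All"              # 2
    sourcePlacement:         # 3
      nodeSelector:
        checkNodes: groupA
    targetPlacement:         # 4
      nodeSelector:
        checkNodes: groupB
      tolerations:
      - key: myTaint
        effect: NoSchedule
        operator: Exists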
- 1
- Specifies the network diagnostics configuration. If a value is not specified or an empty object is specified, and spec.disableNetworkDiagnostics=true is set in the network.operator.openshift.io custom resource named cluster, network diagnostics are disabled. If set, this value overrides spec.disableNetworkDiagnostics=true.
- 2
- Specifies the diagnostics mode. The value can be the empty string, All, or Disabled. The empty string is equivalent to specifying All.
- 3
- Optional: Specifies a selector for connectivity check source pods. You can use the nodeSelector and tolerations fields to further specify the source pods. These fields are optional for both source and target pods. You can omit them, use both, or use only one of them.
- 4
- Optional: Specifies a selector for connectivity check target pods. You can use the nodeSelector and tolerations fields to further specify the target pods. These fields are optional for both source and target pods. You can omit them, use both, or use only one of them.
1.3. Configuring pod connectivity check placement
As a cluster administrator, you can configure which nodes the connectivity check pods run on by modifying the network.config.openshift.io object named cluster.
			
Prerequisites
- Install the OpenShift CLI (oc).
Procedure
- Edit the connectivity check configuration by entering the following command:

  $ oc edit network.config.openshift.io cluster

- In the text editor, update the networkDiagnostics stanza to specify the node selectors that you want for the source and target pods.
- Save your changes and exit the text editor.
Verification
- Verify that the source and target pods are running on the intended nodes by entering the following command:
  $ oc get pods -n openshift-network-diagnostics -o wide
1.4. PodNetworkConnectivityCheck object fields
				The PodNetworkConnectivityCheck object fields are described in the following tables.
			
| Field | Type | Description |
|---|---|---|
| metadata.name | string | The name of the object, which identifies the source pod and the target endpoint of the connection check. |
| metadata.namespace | string | The namespace that the object is associated with. This value is always openshift-network-diagnostics. |
| spec.sourcePod | string | The name of the pod where the connection check originates. |
| spec.targetEndpoint | string | The target of the connection check, specified as a host and port. |
| spec.tlsClientCert | object | Configuration for the TLS certificate to use. |
| spec.tlsClientCert.name | string | The name of the TLS certificate used, if any. The default value is an empty string. |
| status | object | An object representing the condition of the connection test and logs of recent connection successes and failures. |
| status.conditions | array | The latest status of the connection check and any previous statuses. |
| status.failures | array | Connection test logs from unsuccessful attempts. |
| status.outages | array | Connection test logs covering the time periods of any outages. |
| status.successes | array | Connection test logs from successful attempts. |
				The following table describes the fields for objects in the status.conditions array:
			
| Field | Type | Description |
|---|---|---|
| lastTransitionTime | timestamp | The time that the condition of the connection transitioned from one status to another. |
| message | string | The details about the last transition in a human-readable format. |
| reason | string | The reason for the last transition in a machine-readable format. |
| status | string | The status of the condition. |
| type | string | The type of the condition. |
The following table describes the fields for objects in the status.outages array:

| Field | Type | Description |
|---|---|---|
| end | timestamp | The timestamp from when the connection failure is resolved. |
| endLogs | array | Connection log entries, including the log entry related to the successful end of the outage. |
| message | string | A summary of outage details in a human-readable format. |
| start | timestamp | The timestamp from when the connection failure is first detected. |
| startLogs | array | Connection log entries, including the original failure. |
1.4.1. Connection log fields
The fields for a connection log entry are described in the following table. The object is used in the following fields:
- status.failures[]
- status.successes[]
- status.outages[].startLogs[]
- status.outages[].endLogs[]
| Field | Type | Description |
|---|---|---|
| latency | string | Records the duration of the action. |
| message | string | Provides the status in a human-readable format. |
| reason | string | Provides the reason for the status in a machine-readable format. The value is one of TCPConnect, TCPConnectError, DNSResolve, or DNSError. |
| success | boolean | Indicates whether the log entry is a success or a failure. |
| time | timestamp | The start time of the connection check. |
1.5. Verifying network connectivity for an endpoint
As a cluster administrator, you can verify the connectivity of an endpoint, such as an API server, load balancer, service, or pod, and verify that network diagnostics is enabled.
Prerequisites
- Install the OpenShift CLI (oc).
- Access to the cluster as a user with the cluster-admin role.
Procedure
- Confirm that network diagnostics are enabled by entering the following command:

  $ oc get network.config.openshift.io cluster -o yaml

- List the current PodNetworkConnectivityCheck objects by entering the following command:

  $ oc get podnetworkconnectivitycheck -n openshift-network-diagnostics
- View the connection test logs: - From the output of the previous command, identify the endpoint that you want to review the connectivity logs for.
- View the object by entering the following command:

  $ oc get podnetworkconnectivitycheck <name> \
    -n openshift-network-diagnostics -o yaml

  where <name> specifies the name of the PodNetworkConnectivityCheck object.
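  The returned object resembles the following abbreviated, illustrative sketch. The names, addresses, and timestamps are placeholders, and only a subset of the status fields described in the previous section is shown.

  apiVersion: controlplane.operator.openshift.io/v1alpha1
  kind: PodNetworkConnectivityCheck
  metadata:
    name: network-check-source-to-kubernetes-apiserver-endpoint-example  # illustrative name
    namespace: openshift-network-diagnostics
  spec:
    sourcePod: network-check-source-7c88f6d9f-hmg2f   # pod where the check originates
    targetEndpoint: 10.0.0.4:6443                     # host and port that the check connects to
    tlsClientCert:
      name: ""
  status:
    conditions:
    - type: Reachable
      status: "True"
      reason: TCPConnectSuccess
      message: 'tcp connection to 10.0.0.4:6443 succeeded'
      lastTransitionTime: "2025-01-01T00:00:00Z"
    successes:
    - success: true
      reason: TCPConnect
      latency: 2.241775ms
      message: 'tcp connection to 10.0.0.4:6443 succeeded'
      time: "2025-01-01T00:01:00Z"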
 
Chapter 2. Changing the MTU for the cluster network
As a cluster administrator, you can change the maximum transmission unit (MTU) for the cluster network after cluster installation. This change is disruptive as cluster nodes must be rebooted to finalize the MTU change.
2.1. About the cluster MTU
During installation, the cluster network MTU is set automatically based on the primary network interface MTU of cluster nodes. You do not usually need to override the detected MTU.
You might want to change the MTU of the cluster network for one of the following reasons:
- The MTU detected during cluster installation is not correct for your infrastructure.
- Your cluster infrastructure now requires a different MTU, such as from the addition of nodes that need a different MTU for optimal performance.
Only the OVN-Kubernetes network plugin supports changing the MTU value.
2.1.1. Service interruption considerations
When you initiate a maximum transmission unit (MTU) change on your cluster, the following effects might impact service availability:
- At least two rolling reboots are required to complete the migration to a new MTU. During this time, some nodes are not available as they restart.
- Specific applications deployed to the cluster with shorter timeout intervals than the absolute TCP timeout interval might experience disruption during the MTU change.
2.1.2. MTU value selection
When planning your maximum transmission unit (MTU) migration, there are two related but distinct MTU values to consider.
- Hardware MTU: This MTU value is set based on the specifics of your network infrastructure.
- Cluster network MTU: This MTU value is always less than your hardware MTU to account for the cluster network overlay overhead. The specific overhead is determined by your network plugin. For OVN-Kubernetes, the overhead is 100 bytes.
					If your cluster requires different MTU values for different nodes, you must subtract the overhead value for your network plugin from the lowest MTU value that is used by any node in your cluster. For example, if some nodes in your cluster have an MTU of 9001, and some have an MTU of 1500, you must set this value to 1400.
				
						To avoid selecting an MTU value that is not acceptable by a node, verify the maximum MTU value (maxmtu) that is accepted by the network interface by using the ip -d link command.
					
2.1.3. How the migration process works
The following table summarizes the migration process by segmenting between the user-initiated steps in the process and the actions that the migration performs in response.
| User-initiated steps | OpenShift Container Platform activity | 
|---|---|
| Set the following values in the Cluster Network Operator (CNO) configuration: spec.migration.mtu.network.from, spec.migration.mtu.network.to, and spec.migration.mtu.machine.to. | Cluster Network Operator (CNO): Confirms that each field is set to a valid value. If the values provided are valid, the CNO writes out a new temporary configuration with the MTU for the cluster network set to the value of the mtu.network.to field. Machine Config Operator (MCO): Performs a rolling reboot of each node in the cluster. |
| Reconfigure the MTU of the primary network interface for the nodes on the cluster. You can use one of the following methods to accomplish this: deploying a new NetworkManager connection profile with the new MTU, changing the MTU through DHCP, or changing the MTU through PXE. | N/A |
| Set the mtu value for the network plugin in the CNO configuration and set spec.migration to null. | Machine Config Operator (MCO): Performs a rolling reboot of each node in the cluster with the new MTU configuration. |
2.2. Changing the cluster network MTU
As a cluster administrator, you can increase or decrease the maximum transmission unit (MTU) for your cluster.
You cannot roll back an MTU value for nodes during the MTU migration process, but you can roll back the value after the MTU migration process completes.
The migration is disruptive and nodes in your cluster might be temporarily unavailable as the MTU update takes effect.
The following procedures describe how to change the cluster network MTU by using machine configs, Dynamic Host Configuration Protocol (DHCP), or an ISO image. If you use either the DHCP or ISO approaches, you must refer to configuration artifacts that you kept after installing your cluster to complete the procedure.
Prerequisites
- You have installed the OpenShift CLI (oc).
- You have access to the cluster using an account with cluster-admin permissions.
- You have identified the target MTU for your cluster. The MTU for the OVN-Kubernetes network plugin must be set to 100 less than the lowest hardware MTU value in your cluster.
- If your nodes are physical machines, ensure that the cluster network and the connected network switches support jumbo frames.
- If your nodes are virtual machines (VMs), ensure that the hypervisor and the connected network switches support jumbo frames.
2.2.1. Checking the current cluster MTU value
Use the following procedure to obtain the current maximum transmission unit (MTU) for the cluster network.
Procedure
- To obtain the current MTU for the cluster network, enter the following command:

  $ oc describe network.config cluster
2.2.2. Preparing your hardware MTU configuration
Many ways exist to configure the hardware maximum transmission unit (MTU) for your cluster nodes. The following examples show only the most common methods. Verify the correctness of your infrastructure MTU. Select your preferred method for configuring your hardware MTU in the cluster nodes.
Procedure
- Prepare your configuration for the hardware MTU:
- If your hardware MTU is specified with DHCP, update your DHCP configuration, such as with the following dnsmasq configuration:

  dhcp-option-force=26,<mtu>

  where:
  - <mtu>
- Specifies the hardware MTU for the DHCP server to advertise.
 
- If your hardware MTU is specified with a kernel command line with PXE, update that configuration accordingly.
- If your hardware MTU is specified in a NetworkManager connection configuration, complete the following steps. This approach is the default for OpenShift Container Platform if you do not explicitly specify your network configuration with DHCP, a kernel command line, or some other method. Your cluster nodes must all use the same underlying network configuration for the following procedure to work unmodified.
- Find the primary network interface by entering the following command:

  $ oc debug node/<node_name> -- chroot /host nmcli -g connection.interface-name c show ovs-if-phys0

  where:
  - <node_name>
- Specifies the name of a node in your cluster.
 
- Create the following NetworkManager configuration in the <interface>-mtu.conf file:

  [connection-<interface>-mtu]
  match-device=interface-name:<interface>
  ethernet.mtu=<mtu>

  where:
  - <interface>
- Specifies the primary network interface name.
- <mtu>
- Specifies the new hardware MTU value.
 
 
 
2.2.3. Creating MachineConfig objects
					Use the following procedure to create the MachineConfig objects.
				
Procedure
- Create two MachineConfig objects, one for the control plane nodes and another for the worker nodes in your cluster:
- Create the following Butane config in the control-plane-interface.bu file. A sketch of this config is shown after this procedure.

  Note: The Butane version you specify in the config file should match the OpenShift Container Platform version and always ends in 0. For example, 4.17.0. See "Creating machine configs with Butane" for information about Butane.

- Create the following Butane config in the worker-interface.bu file. A sketch of this config is shown after this procedure.

  Note: The Butane version you specify in the config file should match the OpenShift Container Platform version and always ends in 0. For example, 4.17.0. See "Creating machine configs with Butane" for information about Butane.
 
- Create MachineConfig objects from the Butane configs by running the following command:

  $ for manifest in control-plane-interface worker-interface; do
      butane --files-dir . $manifest.bu > $manifest.yaml
    done

  Warning: Do not apply these machine configs until explicitly instructed later in this procedure. Applying these machine configs now causes a loss of stability for the cluster.
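Minimal sketches of the two Butane configs follow. They assume that the <interface>-mtu.conf file from the previous section is saved in the same directory as the .bu files; the machine config names are placeholders.

Example control-plane-interface.bu sketch:

variant: openshift
version: 4.17.0
metadata:
  name: 01-control-plane-interface        # placeholder name
  labels:
    machineconfiguration.openshift.io/role: master
storage:
  files:
    - path: /etc/NetworkManager/conf.d/99-<interface>-mtu.conf
      contents:
        local: <interface>-mtu.conf       # picked up from the --files-dir directory
      mode: 0600

Example worker-interface.bu sketch:

variant: openshift
version: 4.17.0
metadata:
  name: 01-worker-interface               # placeholder name
  labels:
    machineconfiguration.openshift.io/role: worker
storage:
  files:
    - path: /etc/NetworkManager/conf.d/99-<interface>-mtu.conf
      contents:
        local: <interface>-mtu.conf       # picked up from the --files-dir directory
      mode: 0600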
2.2.4. Beginning the MTU migration
Use the following procedure to start the MTU migration.
Procedure
- To begin the MTU migration, specify the migration configuration by entering the following command. The Machine Config Operator performs a rolling reboot of the nodes in the cluster in preparation for the MTU change.

  $ oc patch Network.operator.openshift.io cluster --type=merge --patch \
    '{"spec": { "migration": { "mtu": { "network": { "from": <overlay_from>, "to": <overlay_to> } , "machine": { "to" : <machine_to> } } } } }'

  where:
  - <overlay_from>
- Specifies the current cluster network MTU value.
- <overlay_to>
- Specifies the target MTU for the cluster network. This value is set relative to the value of <machine_to>. For OVN-Kubernetes, this value must be 100 less than the value of <machine_to>.
- <machine_to>
- Specifies the MTU for the primary network interface on the underlying host network.
  For example, to set the cluster network MTU to 9000 and the machine MTU to 9100:

  $ oc patch Network.operator.openshift.io cluster --type=merge --patch \
    '{"spec": { "migration": { "mtu": { "network": { "from": 1400, "to": 9000 } , "machine": { "to" : 9100} } } } }'

- As the Machine Config Operator updates machines in each machine config pool, the Operator reboots each node one by one. You must wait until all the nodes are updated. Check the machine config pool status by entering the following command:

  $ oc get machineconfigpools

  A successfully updated node has the following status: UPDATED=true, UPDATING=false, DEGRADED=false.

  Note: By default, the Machine Config Operator updates one machine per pool at a time, causing the total time the migration takes to increase with the size of the cluster.
2.2.5. Verifying the machine configuration
Use the following procedure to verify the machine configuration.
Procedure
- Confirm the status of the new machine configuration on the hosts:
- To list the machine configuration state and the name of the applied machine configuration, enter the following command:

  $ oc describe node | egrep "hostname|machineconfig"

  Example output:

  kubernetes.io/hostname=master-0
  machineconfiguration.openshift.io/currentConfig: rendered-master-c53e221d9d24e1c8bb6ee89dd3d8ad7b
  machineconfiguration.openshift.io/desiredConfig: rendered-master-c53e221d9d24e1c8bb6ee89dd3d8ad7b
  machineconfiguration.openshift.io/reason:
  machineconfiguration.openshift.io/state: Done

- Verify that the following statements are true:
  - The value of the machineconfiguration.openshift.io/state field is Done.
  - The value of the machineconfiguration.openshift.io/currentConfig field is equal to the value of the machineconfiguration.openshift.io/desiredConfig field.
 
- To confirm that the machine config is correct, enter the following command:

  $ oc get machineconfig <config_name> -o yaml | grep ExecStart

  where:
  - <config_name>
  - Specifies the name of the machine config from the machineconfiguration.openshift.io/currentConfig field.

  The machine config must include the following update to the systemd configuration:

  ExecStart=/usr/local/bin/mtu-migration.sh
 
2.2.6. Applying the new hardware MTU value
Use the following procedure to apply the new hardware maximum transmission unit (MTU) value.
Procedure
- Update the underlying network interface MTU value:
- If you are specifying the new MTU with a NetworkManager connection configuration, enter the following command. The Machine Config Operator automatically performs a rolling reboot of the nodes in your cluster.

  $ for manifest in control-plane-interface worker-interface; do
      oc create -f $manifest.yaml
    done
- If you are specifying the new MTU with a DHCP server option or a kernel command line and PXE, make the necessary changes for your infrastructure.
 
- As the Machine Config Operator updates machines in each machine config pool, the Operator reboots each node one by one. You must wait until all the nodes are updated. Check the machine config pool status by entering the following command:

  $ oc get machineconfigpools

  A successfully updated node has the following status: UPDATED=true, UPDATING=false, DEGRADED=false.

  Note: By default, the Machine Config Operator updates one machine per pool at a time, causing the total time the migration takes to increase with the size of the cluster.
- Confirm the status of the new machine configuration on the hosts:
- To list the machine configuration state and the name of the applied machine configuration, enter the following command:

  $ oc describe node | egrep "hostname|machineconfig"

  Example output:

  kubernetes.io/hostname=master-0
  machineconfiguration.openshift.io/currentConfig: rendered-master-c53e221d9d24e1c8bb6ee89dd3d8ad7b
  machineconfiguration.openshift.io/desiredConfig: rendered-master-c53e221d9d24e1c8bb6ee89dd3d8ad7b
  machineconfiguration.openshift.io/reason:
  machineconfiguration.openshift.io/state: Done

  Verify that the following statements are true:
  - The value of the machineconfiguration.openshift.io/state field is Done.
  - The value of the machineconfiguration.openshift.io/currentConfig field is equal to the value of the machineconfiguration.openshift.io/desiredConfig field.
 
- To confirm that the machine config is correct, enter the following command:

  $ oc get machineconfig <config_name> -o yaml | grep path:

  where:
  - <config_name>
  - Specifies the name of the machine config from the machineconfiguration.openshift.io/currentConfig field.

  If the machine config is successfully deployed, the previous output contains the /etc/NetworkManager/conf.d/99-<interface>-mtu.conf file path and the ExecStart=/usr/local/bin/mtu-migration.sh line.
 
2.2.7. Finalizing the MTU migration
Use the following procedure to finalize the MTU migration.
Procedure
- To finalize the MTU migration, enter the following command for the OVN-Kubernetes network plugin:

  $ oc patch Network.operator.openshift.io cluster --type=merge --patch \
    '{"spec": { "migration": null, "defaultNetwork":{ "ovnKubernetesConfig": { "mtu": <mtu> }}}}'

  where:
  - <mtu>
  - Specifies the new cluster network MTU that you specified with <overlay_to>.
 
- After finalizing the MTU migration, each machine config pool node is rebooted one by one. You must wait until all the nodes are updated. Check the machine config pool status by entering the following command:

  $ oc get machineconfigpools

  A successfully updated node has the following status: UPDATED=true, UPDATING=false, DEGRADED=false.
Verification
- To get the current MTU for the cluster network, enter the following command:

  $ oc describe network.config cluster

- Get the current MTU for the primary network interface of a node:
- To list the nodes in your cluster, enter the following command:

  $ oc get nodes

- To obtain the current MTU setting for the primary network interface on a node, enter the following command:

  $ oc adm node-logs <node> -u ovs-configuration | grep configure-ovs.sh | grep mtu | grep <interface> | head -1

  where:
  - <node>
- Specifies a node from the output from the previous step.
- <interface>
- Specifies the primary network interface name for the node.
  Example output:

  ens3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 8051
 
Chapter 3. Using the Stream Control Transmission Protocol (SCTP)
As a cluster administrator, you can use the Stream Control Transmission Protocol (SCTP) on a bare-metal cluster.
3.1. Support for SCTP on OpenShift Container Platform
As a cluster administrator, you can enable SCTP on the hosts in the cluster. On Red Hat Enterprise Linux CoreOS (RHCOS), the SCTP module is disabled by default.
SCTP is a reliable, message-based protocol that runs on top of an IP network.
				When enabled, you can use SCTP as a protocol with pods, services, and network policy. A Service object must be defined with the type parameter set to either the ClusterIP or NodePort value.
			
3.1.1. Example configurations using SCTP protocol
					You can configure a pod or service to use SCTP by setting the protocol parameter to the SCTP value in the pod or service object.
				
In the following example, a pod is configured to use SCTP:
In the following example, a service is configured to use SCTP:
					In the following example, a NetworkPolicy object is configured to apply to SCTP network traffic on port 80 from any pods with a specific label:
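For pods and services, the key setting is protocol: SCTP on the relevant port definition. As an illustration of the NetworkPolicy case, the following sketch (the policy name and pod label are hypothetical placeholders) allows SCTP traffic on port 80 from pods that carry a specific label:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-sctp-on-port-80      # hypothetical name
spec:
  podSelector: {}                  # applies to all pods in the namespace
  ingress:
  - from:
    - podSelector:
        matchLabels:
          role: sctp-client        # placeholder label on the allowed client pods
    ports:
    - protocol: SCTP
      port: 80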
				
3.2. Enabling Stream Control Transmission Protocol (SCTP)
As a cluster administrator, you can load and enable the blacklisted SCTP kernel module on worker nodes in your cluster.
Prerequisites
- Install the OpenShift CLI (oc).
- Access to the cluster as a user with the cluster-admin role.
Procedure
- Create a file named load-sctp-module.yaml that contains the MachineConfig YAML definition. A sketch of this file is shown after this procedure.
- To create the MachineConfig object, enter the following command:

  $ oc create -f load-sctp-module.yaml

- Optional: To watch the status of the nodes while the Machine Config Operator applies the configuration change, enter the following command. When the status of a node transitions to Ready, the configuration update is applied.

  $ oc get nodes
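A minimal sketch of the load-sctp-module.yaml file follows. It assumes worker nodes; the file entries clear the blacklist entry for the sctp module and load the module at boot.

apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  name: load-sctp-module
  labels:
    machineconfiguration.openshift.io/role: worker
spec:
  config:
    ignition:
      version: 3.2.0
    storage:
      files:
        - path: /etc/modprobe.d/sctp-blacklist.conf
          mode: 0644
          overwrite: true
          contents:
            source: data:,            # empty file overrides the blacklist entry
        - path: /etc/modules-load.d/sctp-load.conf
          mode: 0644
          overwrite: true
          contents:
            source: data:,sctp        # load the sctp module at boot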
3.3. Verifying Stream Control Transmission Protocol (SCTP) is enabled
You can verify that SCTP is working on a cluster by creating a pod with an application that listens for SCTP traffic, associating it with a service, and then connecting to the exposed service.
Prerequisites
- Access to the internet from the cluster to install the nc package.
- Install the OpenShift CLI (oc).
- Access to the cluster as a user with the cluster-admin role.
Procedure
- Create a pod that starts an SCTP listener:
- Create a file named sctp-server.yaml that defines the listener pod. A sketch of this manifest is shown after this procedure.
- Create the pod by entering the following command:

  $ oc create -f sctp-server.yaml
 
- Create a service for the SCTP listener pod.
- Create a file named sctp-service.yaml that defines the service. A sketch of this manifest is shown after this procedure.
- To create the service, enter the following command:

  $ oc create -f sctp-service.yaml
 
- Create a pod for the SCTP client.
- Create a file named sctp-client.yaml that defines the client pod. A sketch of this manifest is shown after this procedure.
- To create the Pod object, enter the following command:

  $ oc apply -f sctp-client.yaml
 
- Run an SCTP listener on the server.
- To connect to the server pod, enter the following command:

  $ oc rsh sctpserver

- To start the SCTP listener, enter the following command:

  $ nc -l 30102 --sctp
 
- Connect to the SCTP listener on the server. - Open a new terminal window or tab in your terminal program.
- Obtain the IP address of the sctpservice service. Enter the following command:

  $ oc get services sctpservice -o go-template='{{.spec.clusterIP}}{{"\n"}}'

- To connect to the client pod, enter the following command:

  $ oc rsh sctpclient

- To start the SCTP client, enter the following command. Replace <cluster_IP> with the cluster IP address of the sctpservice service.

  # nc <cluster_IP> 30102 --sctp
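Sketches of the three manifests created in this procedure follow. The container image and the package installation command are assumptions (any image that can provide the nc command works); the pod and service names and the SCTP port must match the names and port used in the later steps.

Example sctp-server.yaml sketch:

apiVersion: v1
kind: Pod
metadata:
  name: sctpserver
  labels:
    app: sctpserver
spec:
  containers:
    - name: sctpserver
      image: registry.access.redhat.com/ubi9/ubi   # assumption: any image with dnf access
      command: ["/bin/sh", "-c"]
      args: ["dnf install -y nc && sleep inf"]     # install nc, then keep the pod running
      ports:
        - containerPort: 30102
          name: sctpserver
          protocol: SCTP

Example sctp-service.yaml sketch:

apiVersion: v1
kind: Service
metadata:
  name: sctpservice
  labels:
    app: sctpserver
spec:
  selector:
    app: sctpserver
  ports:
    - name: sctpserver
      protocol: SCTP
      port: 30102
      targetPort: 30102

Example sctp-client.yaml sketch:

apiVersion: v1
kind: Pod
metadata:
  name: sctpclient
  labels:
    app: sctpclient
spec:
  containers:
    - name: sctpclient
      image: registry.access.redhat.com/ubi9/ubi   # assumption: any image with dnf access
      command: ["/bin/sh", "-c"]
      args: ["dnf install -y nc && sleep inf"]     # install nc, then keep the pod running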
 
Chapter 4. Associating secondary interfaces metrics to network attachments
Administrators can use the pod_network_name_info metric to classify and monitor secondary network interfaces. The metric does this by adding a label that identifies the interface type, typically based on the associated NetworkAttachmentDefinition resource.
		
4.1. Extending secondary network metrics for monitoring
Secondary devices, or interfaces, are used for different purposes. Metrics from secondary network interfaces need to be classified to allow for effective aggregation and monitoring.
Exposed metrics contain the interface but do not specify where the interface originates. This is workable when there are no additional interfaces. However, relying on interface names alone becomes problematic when secondary interfaces are added because it is difficult to identify their purpose and use their metrics effectively.
When adding secondary interfaces, their names depend on the order in which they are added. Secondary interfaces can belong to distinct networks that can each serve a different purpose.
				With pod_network_name_info it is possible to extend the current metrics with additional information that identifies the interface type. In this way, it is possible to aggregate the metrics and to add specific alarms to specific interface types.
			
				The network type is generated from the name of the NetworkAttachmentDefinition resource, which distinguishes different secondary network classes. For example, different interfaces belonging to different networks or using different CNIs use different network attachment definition names.
			
4.2. Network Metrics Daemon
The Network Metrics Daemon is a daemon component that collects and publishes network related metrics.
The kubelet is already publishing network related metrics you can observe. These metrics are:
- container_network_receive_bytes_total
- container_network_receive_errors_total
- container_network_receive_packets_total
- container_network_receive_packets_dropped_total
- container_network_transmit_bytes_total
- container_network_transmit_errors_total
- container_network_transmit_packets_total
- container_network_transmit_packets_dropped_total
The labels in these metrics contain, among others:
- Pod name
- Pod namespace
- Interface name (such as eth0)
These metrics work well until new interfaces are added to the pod, for example via Multus, as it is not clear what the interface names refer to.
The interface label refers to the interface name, but it is not clear what that interface is meant for. In case of many different interfaces, it would be impossible to understand what network the metrics you are monitoring refer to.
				This is addressed by introducing the new pod_network_name_info described in the following section.
			
4.3. Metrics with network name
				The Network Metrics daemonset publishes a pod_network_name_info gauge metric, with a fixed value of 0.
			
Example of pod_network_name_info
pod_network_name_info{interface="net0",namespace="namespacename",network_name="nadnamespace/firstNAD",pod="podname"} 0

The network name label is produced using the annotation added by Multus. It is the concatenation of the namespace the network attachment definition belongs to, plus the name of the network attachment definition.
				The new metric alone does not provide much value, but combined with the network related container_network_* metrics, it offers better support for monitoring secondary networks.
			
Using a PromQL query like the following, it is possible to get a new metric containing the value and the network name retrieved from the k8s.v1.cni.cncf.io/network-status annotation:
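For example, a query of roughly the following form (shown here as a sketch) joins a kubelet network counter with pod_network_name_info so that the result carries the network_name label:

(container_network_receive_bytes_total) + on(namespace,pod,interface) group_left(network_name) ( pod_network_name_info )

Because pod_network_name_info always has the value 0, the addition leaves the counter value unchanged while attaching the network_name label.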
			
Chapter 5. Using PTP hardware
5.1. About Precision Time Protocol in OpenShift cluster nodes
Precision Time Protocol (PTP) is used to synchronize clocks in a network. When used in conjunction with hardware support, PTP is capable of sub-microsecond accuracy, and is more accurate than Network Time Protocol (NTP).
					If your openshift-sdn cluster with PTP uses the User Datagram Protocol (UDP) for hardware time stamping and you migrate to the OVN-Kubernetes plugin, the hardware time stamping cannot be applied to primary interface devices, such as an Open vSwitch (OVS) bridge. As a result, UDP version 4 configurations cannot work with a br-ex interface.
				
				You can configure linuxptp services and use PTP-capable hardware in OpenShift Container Platform cluster nodes.
			
				Use the OpenShift Container Platform web console or OpenShift CLI (oc) to install PTP by deploying the PTP Operator. The PTP Operator creates and manages the linuxptp services and provides the following features:
			
- Discovery of the PTP-capable devices in the cluster.
- Management of the configuration of linuxptp services.
- Notification of PTP clock events that negatively affect the performance and reliability of your application with the PTP Operator cloud-event-proxy sidecar.
The PTP Operator works with PTP-capable devices on clusters provisioned only on bare-metal infrastructure.
5.1.1. Elements of a PTP domain
PTP is used to synchronize multiple nodes connected in a network, with clocks for each node. The clocks synchronized by PTP are organized in a leader-follower hierarchy. The hierarchy is created and updated automatically by the best master clock (BMC) algorithm, which runs on every clock. Follower clocks are synchronized to leader clocks, and follower clocks can themselves be the source for other downstream clocks.
Figure 5.1. PTP nodes in the network
The three primary types of PTP clocks are described below.
- Grandmaster clock
- The grandmaster clock provides standard time information to other clocks across the network and ensures accurate and stable synchronization. It writes time stamps and responds to time requests from other clocks. Grandmaster clocks synchronize to a Global Navigation Satellite System (GNSS) time source. The grandmaster clock is the authoritative source of time in the network and is responsible for providing time synchronization to all other devices.
- Boundary clock
- The boundary clock has ports in two or more communication paths and can be a source and a destination to other destination clocks at the same time. The boundary clock works as a destination clock upstream. The destination clock receives the timing message, adjusts for delay, and then creates a new source time signal to pass down the network. The boundary clock produces a new timing packet that is still correctly synced with the source clock and can reduce the number of connected devices reporting directly to the source clock.
- Ordinary clock
- The ordinary clock has a single port connection that can play the role of source or destination clock, depending on its position in the network. The ordinary clock can read and write timestamps.
5.1.1.1. Advantages of PTP over NTP
One of the main advantages that PTP has over NTP is the hardware support present in various network interface controllers (NIC) and network switches. The specialized hardware allows PTP to account for delays in message transfer and improves the accuracy of time synchronization. To achieve the best possible accuracy, it is recommended that all networking components between PTP clocks are PTP hardware enabled.
Hardware-based PTP provides optimal accuracy, since the NIC can timestamp the PTP packets at the exact moment they are sent and received. Compare this to software-based PTP, which requires additional processing of the PTP packets by the operating system.
							Before enabling PTP, ensure that NTP is disabled for the required nodes. You can disable the chrony time service (chronyd) using a MachineConfig custom resource. For more information, see Disabling chrony time service.
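A MachineConfig of roughly the following shape can be used for this. This is a minimal sketch with a placeholder role label; the linked documentation contains the complete, supported example.

apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  name: disable-chronyd               # placeholder name
  labels:
    machineconfiguration.openshift.io/role: worker   # placeholder role
spec:
  config:
    ignition:
      version: 3.2.0
    systemd:
      units:
        - name: chronyd.service
          enabled: false              # disable the chrony time service on matching nodes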
						
5.1.2. Overview of linuxptp and gpsd in OpenShift Container Platform nodes
					OpenShift Container Platform uses the PTP Operator with linuxptp and gpsd packages for high precision network synchronization. The linuxptp package provides tools and daemons for PTP timing in networks. Cluster hosts with Global Navigation Satellite System (GNSS) capable NICs use gpsd to interface with GNSS clock sources.
				
					The linuxptp package includes the ts2phc, pmc, ptp4l, and phc2sys programs for system clock synchronization.
				
- ts2phc
- ts2phc synchronizes the PTP hardware clock (PHC) across PTP devices with a high degree of precision. ts2phc is used in grandmaster clock configurations. It receives the precision timing signal from a high precision clock source such as Global Navigation Satellite System (GNSS). GNSS provides an accurate and reliable source of synchronized time for use in large distributed networks. GNSS clocks typically provide time information with a precision of a few nanoseconds.

  The ts2phc system daemon sends timing information from the grandmaster clock to other PTP devices in the network by reading time information from the grandmaster clock and converting it to PHC format. PHC time is used by other devices in the network to synchronize their clocks with the grandmaster clock.
- pmc
- pmc implements a PTP management client (pmc) according to IEEE standard 1588. pmc provides basic management access for the ptp4l system daemon. pmc reads from standard input and sends the output over the selected transport, printing any replies it receives.
- ptp4l
- ptp4limplements the PTP boundary clock and ordinary clock and runs as a system daemon.- ptp4ldoes the following:- Synchronizes the PHC to the source clock with hardware time stamping
- Synchronizes the system clock to the source clock with software time stamping
 
- phc2sys
- phc2sys synchronizes the system clock to the PHC on the network interface controller (NIC). The phc2sys system daemon continuously monitors the PHC for timing information. When it detects a timing error, the PHC corrects the system clock.

The gpsd package includes the ubxtool, gpspipe, and gpsd programs for GNSS clock synchronization with the host clock.
				
- ubxtool
- The ubxtool CLI allows you to communicate with a u-blox GPS system. The ubxtool CLI uses the u-blox binary protocol to communicate with the GPS.
- gpspipe
- gpspipe connects to gpsd output and pipes it to stdout.
- gpsd
- gpsd is a service daemon that monitors one or more GPS or AIS receivers connected to the host.
5.1.3. Overview of GNSS timing for PTP grandmaster clocks
OpenShift Container Platform supports receiving precision PTP timing from Global Navigation Satellite System (GNSS) sources and grandmaster clocks (T-GM) in the cluster.
OpenShift Container Platform supports PTP timing from GNSS sources with Intel E810 Westport Channel NICs only.
Figure 5.2. Overview of Synchronization with GNSS and T-GM
- Global Navigation Satellite System (GNSS)
- GNSS is a satellite-based system used to provide positioning, navigation, and timing information to receivers around the globe. In PTP, GNSS receivers are often used as a highly accurate and stable reference clock source. These receivers receive signals from multiple GNSS satellites, allowing them to calculate precise time information. The timing information obtained from GNSS is used as a reference by the PTP grandmaster clock. - By using GNSS as a reference, the grandmaster clock in the PTP network can provide highly accurate timestamps to other devices, enabling precise synchronization across the entire network. 
- Digital Phase-Locked Loop (DPLL)
- DPLL provides clock synchronization between different PTP nodes in the network. DPLL compares the phase of the local system clock signal with the phase of the incoming synchronization signal, for example, PTP messages from the PTP grandmaster clock. The DPLL continuously adjusts the local clock frequency and phase to minimize the phase difference between the local clock and the reference clock.
5.1.3.1. Handling leap second events in GNSS-synced PTP grandmaster clocks
A leap second is a one-second adjustment that is occasionally applied to Coordinated Universal Time (UTC) to keep it synchronized with International Atomic Time (TAI). UTC leap seconds are unpredictable. Internationally agreed leap seconds are listed in leap-seconds.list. This file is regularly updated by the International Earth Rotation and Reference Systems Service (IERS). An unhandled leap second can have a significant impact on far edge RAN networks. It can cause the far edge RAN application to immediately disconnect voice calls and data sessions.
5.1.4. About PTP and clock synchronization error events
Cloud native applications such as virtual RAN (vRAN) require access to notifications about hardware timing events that are critical to the functioning of the overall network. PTP clock synchronization errors can negatively affect the performance and reliability of your low-latency application, for example, a vRAN application running in a distributed unit (DU).
Loss of PTP synchronization is a critical error for a RAN network. If synchronization is lost on a node, the radio might be shut down and the network Over the Air (OTA) traffic might be shifted to another node in the wireless network. Fast event notifications mitigate against workload errors by allowing cluster nodes to communicate PTP clock sync status to the vRAN application running in the DU.
Event notifications are available to vRAN applications running on the same DU node. A publish/subscribe REST API passes events notifications to the messaging bus. Publish/subscribe messaging, or pub-sub messaging, is an asynchronous service-to-service communication architecture where any message published to a topic is immediately received by all of the subscribers to the topic.
					The PTP Operator generates fast event notifications for every PTP-capable network interface. You can access the events by using a cloud-event-proxy sidecar container over an HTTP message bus.
				
PTP fast event notifications are available for network interfaces configured to use PTP ordinary clocks, PTP grandmaster clocks, or PTP boundary clocks.
5.1.5. 2-card E810 NIC configuration reference
OpenShift Container Platform supports single and dual-NIC Intel E810 hardware for PTP timing in grandmaster clocks (T-GM) and boundary clocks (T-BC).
- Dual NIC grandmaster clock
- You can use a cluster host that has dual-NIC hardware as PTP grandmaster clock. One NIC receives timing information from the global navigation satellite system (GNSS). The second NIC receives the timing information from the first using the SMA1 Tx/Rx connections on the E810 NIC faceplate. The system clock on the cluster host is synchronized from the NIC that is connected to the GNSS satellite.

  Dual NIC grandmaster clocks are a feature of distributed RAN (D-RAN) configurations where the Remote Radio Unit (RRU) and Baseband Unit (BBU) are located at the same radio cell site. D-RAN distributes radio functions across multiple sites, with backhaul connections linking them to the core network.

  Figure 5.3. Dual NIC grandmaster clock

  Note: In a dual-NIC T-GM configuration, a single ts2phc program operates on two PTP hardware clocks (PHCs), one for each NIC.
- Dual NIC boundary clock
- For 5G telco networks that deliver mid-band spectrum coverage, each virtual distributed unit (vDU) requires connections to 6 radio units (RUs). To make these connections, each vDU host requires 2 NICs configured as boundary clocks. - Dual NIC hardware allows you to connect each NIC to the same upstream leader clock with separate - ptp4linstances for each NIC feeding the downstream clocks.
- Highly available system clock with dual-NIC boundary clocks
- You can configure Intel E810-XXVDA4 Salem channel dual-NIC hardware as dual PTP boundary clocks that provide timing for a highly available system clock. This configuration is useful when you have multiple time sources on different NICs. High availability ensures that the node does not lose timing synchronization if one of the two timing sources is lost or disconnected. - Each NIC is connected to the same upstream leader clock. Highly available boundary clocks use multiple PTP domains to synchronize with the target system clock. When a T-BC is highly available, the host system clock can maintain the correct offset even if one or more - ptp4linstances syncing the NIC PHC clock fails. If any single SFP port or cable failure occurs, the boundary clock stays in sync with the leader clock.- Boundary clock leader source selection is done using the A-BMCA algorithm. For more information, see ITU-T recommendation G.8275.1. 
5.1.6. 3-card Intel E810 PTP grandmaster clock
OpenShift Container Platform supports cluster hosts with 3 Intel E810 NICs as PTP grandmaster clocks (T-GM).
- 3-card grandmaster clock
- You can use a cluster host that has 3 NICs as PTP grandmaster clock. One NIC receives timing information from the global navigation satellite system (GNSS). The second and third NICs receive the timing information from the first by using the SMA1 Tx/Rx connections on the E810 NIC faceplate. The system clock on the cluster host is synchronized from the NIC that is connected to the GNSS satellite. - 3-card NIC grandmaster clocks can be used for distributed RAN (D-RAN) configurations where the Radio Unit (RU) is connected directly to the distributed unit (DU) without a front haul switch, for example, if the RU and DU are located in the same radio cell site. D-RAN distributes radio functions across multiple sites, with backhaul connections linking them to the core network. - Figure 5.4. 3-card Intel E810 PTP grandmaster clock Note- In a 3-card T-GM configuration, a single - ts2phcprocess reports as 3- ts2phcinstances in the system.
5.2. Configuring PTP devices
				The PTP Operator adds the NodePtpDevice.ptp.openshift.io custom resource definition (CRD) to OpenShift Container Platform.
			
				When installed, the PTP Operator searches your cluster for Precision Time Protocol (PTP) capable network devices on each node. The Operator creates and updates a NodePtpDevice custom resource (CR) object for each node that provides a compatible PTP-capable network device.
			
Network interface controller (NIC) hardware with built-in PTP capabilities sometimes requires a device-specific configuration. You can use hardware-specific NIC features for supported hardware with the PTP Operator by configuring a plugin in the PtpConfig custom resource (CR). The linuxptp-daemon service uses the named parameters in the plugin stanza to start linuxptp processes, ptp4l and phc2sys, based on the specific hardware configuration.
			
					In OpenShift Container Platform 4.17, the Intel E810 NIC is supported with a PtpConfig plugin.
				
5.2.1. Installing the PTP Operator using the CLI
As a cluster administrator, you can install the Operator by using the CLI.
Prerequisites
- A cluster installed on bare-metal hardware with nodes that have hardware that supports PTP.
- Install the OpenShift CLI (oc).
- Log in as a user with cluster-admin privileges.
Procedure
- Create a namespace for the PTP Operator.
- Save the following YAML in the ptp-namespace.yaml file. A sketch of this file is shown after this procedure.
- Create the Namespace CR:

  $ oc create -f ptp-namespace.yaml
 
- Create an Operator group for the PTP Operator.
- Save the following YAML in the ptp-operatorgroup.yaml file. A sketch of this file is shown after this procedure.
- Create the OperatorGroup CR:

  $ oc create -f ptp-operatorgroup.yaml
 
- Subscribe to the PTP Operator.
- Save the following YAML in the ptp-sub.yaml file. A sketch of this file is shown after this procedure.
- Create the Subscription CR:

  $ oc create -f ptp-sub.yaml
 
- To verify that the Operator is installed, enter the following command:

  $ oc get csv -n openshift-ptp -o custom-columns=Name:.metadata.name,Phase:.status.phase

  Example output:

  Name                   Phase
  4.17.0-202301261535    Succeeded
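Sketches of the three files referenced in this procedure follow. The resource names and the subscription channel are assumptions; the namespace must be openshift-ptp so that the remaining steps work unchanged.

Example ptp-namespace.yaml sketch:

apiVersion: v1
kind: Namespace
metadata:
  name: openshift-ptp
  labels:
    name: openshift-ptp
    openshift.io/cluster-monitoring: "true"   # enable cluster monitoring for the namespace

Example ptp-operatorgroup.yaml sketch:

apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: ptp-operators                 # placeholder name
  namespace: openshift-ptp
spec:
  targetNamespaces:
  - openshift-ptp

Example ptp-sub.yaml sketch:

apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: ptp-operator-subscription     # placeholder name
  namespace: openshift-ptp
spec:
  channel: "stable"                   # assumption: verify the channel for your cluster version
  name: ptp-operator
  source: redhat-operators
  sourceNamespace: openshift-marketplace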
5.2.2. Installing the PTP Operator by using the web console
As a cluster administrator, you can install the PTP Operator by using the web console.
You have to create the namespace and Operator group as mentioned in the previous section.
Procedure
- Install the PTP Operator using the OpenShift Container Platform web console: - In the OpenShift Container Platform web console, click Operators → OperatorHub.
- Choose PTP Operator from the list of available Operators, and then click Install.
- On the Install Operator page, under A specific namespace on the cluster select openshift-ptp. Then, click Install.
 
- Optional: Verify that the PTP Operator installed successfully: - Switch to the Operators → Installed Operators page.
- Ensure that PTP Operator is listed in the openshift-ptp project with a Status of InstallSucceeded. Note- During installation an Operator might display a Failed status. If the installation later succeeds with an InstallSucceeded message, you can ignore the Failed message. - If the Operator does not appear as installed, to troubleshoot further: - Go to the Operators → Installed Operators page and inspect the Operator Subscriptions and Install Plans tabs for any failure or errors under Status.
- Go to the Workloads → Pods page and check the logs for pods in the openshift-ptp project.
 
 
5.2.3. Discovering PTP-capable network devices in your cluster
Identify PTP-capable network devices that exist in your cluster so that you can configure them.
Prerequisites
- You installed the PTP Operator.
Procedure
- To return a complete list of PTP capable network devices in your cluster, run the following command:

  $ oc get NodePtpDevice -n openshift-ptp -o yaml
5.2.4. Configuring linuxptp services as a grandmaster clock
					You can configure the linuxptp services (ptp4l, phc2sys, ts2phc) as grandmaster clock (T-GM) by creating a PtpConfig custom resource (CR) that configures the host NIC.
				
					The ts2phc utility allows you to synchronize the system clock with the PTP grandmaster clock so that the node can stream precision clock signal to downstream PTP ordinary clocks and boundary clocks.
				
						Use the following example PtpConfig CR as the basis to configure linuxptp services as T-GM for an Intel Westport Channel E810-XXVDA4T network interface.
					
						To configure PTP fast events, set appropriate values for ptp4lOpts, ptp4lConf, and ptpClockThreshold. ptpClockThreshold is used only when events are enabled. See "Configuring the PTP fast event notifications publisher" for more information.
					
Prerequisites
- For T-GM clocks in production environments, install an Intel E810 Westport Channel NIC in the bare-metal cluster host.
- 
							Install the OpenShift CLI (oc).
- 
Log in as a user with cluster-admin privileges.
- Install the PTP Operator.
Procedure
- Create the PtpConfig CR. Depending on your requirements, use one of the following T-GM configurations for your deployment. Save the YAML in the grandmaster-clock-ptp-config.yaml file (Example 5.1, PTP grandmaster clock configuration for E810 NIC; a sketch follows this procedure). Note: For E810 Westport Channel NICs, set the value for ts2phc.nmea_serialport to /dev/gnss0.
- Create the CR by running the following command:
  $ oc create -f grandmaster-clock-ptp-config.yaml
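As a heavily abridged sketch of the shape of a single-NIC T-GM PtpConfig CR, assuming a hypothetical interface ens2f0; take the complete ptp4lConf, e810 plugin settings, and ublxCmds from the full reference configuration for your NIC:

  apiVersion: ptp.openshift.io/v1
  kind: PtpConfig
  metadata:
    name: grandmaster
    namespace: openshift-ptp
  spec:
    profile:
    - name: grandmaster
      plugins:
        e810:
          enableDefaultConfig: false
          pins:
            ens2f0:
              "U.FL2": "0 2"
              "U.FL1": "0 1"
              "SMA2": "0 2"
              "SMA1": "0 1"
          # settings and ublxCmds (GNSS receiver setup) come from the full reference
      ts2phcOpts: " "
      ts2phcConf: |
        [nmea]
        ts2phc.master 1
        [global]
        ts2phc.nmea_serialport /dev/gnss0
        [ens2f0]
        ts2phc.extts_polarity rising
      ptp4lOpts: "-2 --summary_interval -4"
      phc2sysOpts: "-r -u 0 -m -N 8 -R 16 -s ens2f0 -n 24"
      ptpSchedulingPolicy: SCHED_FIFO
      ptpSchedulingPriority: 10
      ptp4lConf: |
        [ens2f0]
        masterOnly 1
        [global]
        clock_class 6
        # remaining ptp4l options come from the full reference configuration
    recommend:
    - profile: grandmaster
      priority: 4
      match:
      - nodeLabel: "node-role.kubernetes.io/worker"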
 
Verification
- Check that the PtpConfig profile is applied to the node. Get the list of pods in the openshift-ptp namespace by running the following command:
  $ oc get pods -n openshift-ptp -o wide
  Example output
  NAME                           READY   STATUS    RESTARTS   AGE     IP             NODE
  linuxptp-daemon-74m2g          3/3     Running   3          4d15h   10.16.230.7    compute-1.example.com
  ptp-operator-5f4f48d7c-x7zkf   1/1     Running   1          4d15h   10.128.1.145   compute-1.example.com
- Check that the profile is correct. Examine the logs of the linuxptp daemon that corresponds to the node you specified in the PtpConfig profile. Run the following command:
  $ oc logs linuxptp-daemon-74m2g -n openshift-ptp -c linuxptp-daemon-container
 
5.2.4.1. Configuring linuxptp services as a grandmaster clock for dual E810 Westport Channel NICs
						You can configure the linuxptp services (ptp4l, phc2sys, ts2phc) as a grandmaster clock (T-GM) for 2 E810 NICs by creating a PtpConfig custom resource (CR) that configures the NICs.
					
						You can configure the linuxptp services as a T-GM for the following E810 NICs:
					
- Intel E810-XXVDA4T Westport Channel NIC
- Intel E810-CQDA2T Logan Beach NIC
For distributed RAN (D-RAN) use cases, you can configure PTP for 2 NICs as follows:
- NIC 1 is synced to the global navigation satellite system (GNSS) time source.
- 
NIC 2 is synced to the 1PPS timing output provided by NIC 1. This configuration is provided by the PTP hardware plugin in the PtpConfig CR.
						The 2-card PTP T-GM configuration uses one instance of ptp4l and one instance of ts2phc. The ptp4l and ts2phc programs are each configured to operate on two PTP hardware clocks (PHCs), one for each NIC. The host system clock is synchronized from the NIC that is connected to the GNSS time source.
					
							Use the following example PtpConfig CR as the basis to configure linuxptp services as T-GM for dual Intel E810 network interfaces.
						
							To configure PTP fast events, set appropriate values for ptp4lOpts, ptp4lConf, and ptpClockThreshold. ptpClockThreshold is used only when events are enabled. See "Configuring the PTP fast event notifications publisher" for more information.
						
Prerequisites
- For T-GM clocks in production environments, install two Intel E810 NICs in the bare-metal cluster host.
- 
								Install the OpenShift CLI (oc).
- 
Log in as a user with cluster-admin privileges.
- Install the PTP Operator.
Procedure
- Create the PtpConfig CR. Save the following YAML in the grandmaster-clock-ptp-config-dual-nics.yaml file (Example 5.2, PTP grandmaster clock configuration for dual E810 NICs). Note: Set the value for ts2phc.nmea_serialport to /dev/gnss0.
- Create the CR by running the following command:
  $ oc create -f grandmaster-clock-ptp-config-dual-nics.yaml
 
Verification
- Check that the PtpConfig profile is applied to the node. Get the list of pods in the openshift-ptp namespace by running the following command:
  $ oc get pods -n openshift-ptp -o wide
  Example output
  NAME                           READY   STATUS    RESTARTS   AGE     IP             NODE
  linuxptp-daemon-74m2g          3/3     Running   3          4d15h   10.16.230.7    compute-1.example.com
  ptp-operator-5f4f48d7c-x7zkf   1/1     Running   1          4d15h   10.128.1.145   compute-1.example.com
- Check that the profile is correct. Examine the logs of the linuxptp daemon that corresponds to the node you specified in the PtpConfig profile. Run the following command:
  $ oc logs linuxptp-daemon-74m2g -n openshift-ptp -c linuxptp-daemon-container
 
5.2.4.2. Configuring linuxptp services as a grandmaster clock for 3 E810 NICs
						You can configure the linuxptp services (ptp4l, phc2sys, ts2phc) as a grandmaster clock (T-GM) for 3 E810 NICs by creating a PtpConfig custom resource (CR) that configures the NICs.
					
						You can configure the linuxptp services as a T-GM with 3 NICs for the following E810 NICs:
					
- Intel E810-XXVDA4T Westport Channel NIC
- Intel E810-CQDA2T Logan Beach NIC
For distributed RAN (D-RAN) use cases, you can configure PTP for 3 NICs as follows:
- NIC 1 is synced to the global navigation satellite system (GNSS) time source.
- NICs 2 and 3 are synced to NIC 1 with 1PPS faceplate connections.
						Use the following example PtpConfig CRs as the basis to configure linuxptp services as a 3-card Intel E810 T-GM.
					
Prerequisites
- For T-GM clocks in production environments, install 3 Intel E810 NICs in the bare-metal cluster host.
- 
								Install the OpenShift CLI (oc).
- 
Log in as a user with cluster-admin privileges.
- Install the PTP Operator.
Procedure
- Create the PtpConfig CR. Save the following YAML in the three-nic-grandmaster-clock-ptp-config.yaml file (Example 5.3, PTP grandmaster clock configuration for 3 E810 NICs). Note: Set the value for ts2phc.nmea_serialport to /dev/gnss0.
- Create the CR by running the following command:
  $ oc create -f three-nic-grandmaster-clock-ptp-config.yaml
 
Verification
- Check that the PtpConfig profile is applied to the node. Get the list of pods in the openshift-ptp namespace by running the following command:
  $ oc get pods -n openshift-ptp -o wide
  Example output
  NAME                           READY   STATUS    RESTARTS   AGE     IP             NODE
  linuxptp-daemon-74m3q          3/3     Running   3          4d15h   10.16.230.7    compute-1.example.com
  ptp-operator-5f4f48d7c-x6zkn   1/1     Running   1          4d15h   10.128.1.145   compute-1.example.com
- Check that the profile is correct. Run the following command, and examine the logs of the linuxptp daemon that corresponds to the node you specified in the PtpConfig profile:
  $ oc logs linuxptp-daemon-74m3q -n openshift-ptp -c linuxptp-daemon-container
 
5.2.5. Grandmaster clock PtpConfig configuration reference
					The following reference information describes the configuration options for the PtpConfig custom resource (CR) that configures the linuxptp services (ptp4l, phc2sys, ts2phc) as a grandmaster clock.
				
| PtpConfig CR field | Description | 
|---|---|
| 
									 | 
									Specify an array of  
									The plugin mechanism allows the PTP Operator to do automated hardware configuration. For the Intel Westport Channel NIC or the Intel Logan Beach NIC, when the  | 
| 
									 | 
									Specify system configuration options for the  | 
| 
									 | 
									Specify the required configuration to start  | 
| 
									 | Specify the maximum amount of time to wait for the transmit (TX) timestamp from the sender before discarding the data. | 
| 
									 | Specify the JBOD boundary clock time delay value. This value is used to correct the time values that are passed between the network time devices. | 
| 
									 | 
									Specify system config options for the  Note 
										Ensure that the network interface listed here is configured as grandmaster and is referenced as required in the  | 
| 
									 | 
									Configure the scheduling policy for  | 
| 
									 | 
									Set an integer value from 1-65 to configure FIFO priority for  | 
| 
									 | 
									Optional. If  | 
| 
									 | 
									Sets the configuration for the  
									 
									 
 | 
| 
									 | 
									Set options for the  | 
| 
									 | 
									Specify an array of one or more  | 
| 
									 | 
									Specify the  | 
| 
									 | 
									Specify the  | 
| 
									 | 
									Specify  | 
| 
									 | 
									Set  | 
| 
									 | 
									Set  | 
5.2.5.1. Grandmaster clock class sync state reference
						The following table describes the PTP grandmaster clock (T-GM) gm.ClockClass states. Clock class states categorize T-GM clocks based on their accuracy and stability with regard to the Primary Reference Time Clock (PRTC) or other timing source.
					
Holdover specification is the amount of time a PTP clock can maintain synchronization without receiving updates from the primary time source.
| Clock class state | Description | 
|---|---|
| 
										 | 
										T-GM clock is connected to a PRTC in  | 
| 
										 | 
										T-GM clock is in  | 
| 
										 | 
										T-GM clock is in  | 
For more information, see "Phase/time traceability information", ITU-T G.8275.1/Y.1369.1 Recommendations.
5.2.5.2. Intel E810 NIC hardware configuration reference
						Use this information to understand how to use the Intel E810 hardware plugin to configure the E810 network interface as PTP grandmaster clock. Hardware pin configuration determines how the network interface interacts with other components and devices in the system. The Intel E810 NIC has four connectors for external 1PPS signals: SMA1, SMA2, U.FL1, and U.FL2.
					
| Hardware pin | Recommended setting | Description | 
|---|---|---|
| 
										 | 
										 | 
										Disables the  | 
| 
										 | 
										 | 
										Disables the  | 
| 
										 | 
										 | 
										Disables the  | 
| 
										 | 
										 | 
										Disables the  | 
						You can set the pin configuration on the Intel E810 NIC by using the spec.profile.plugins.e810.pins parameters as shown in the following example:
					
pins:
      <interface_name>:
        <connector_name>: <function> <channel_number>
Where:
						<function>: Specifies the role of the pin. The following values are associated with the pin role:
					
- 
								0: Disabled
- 
								1: Rx (Receive timestamping)
- 
								2: Tx (Transmit timestamping)
<channel_number>: A number associated with the physical connector. The following channel numbers are associated with the physical connectors:
					
- 1: SMA1 or U.FL1
- 2: SMA2 or U.FL2
Examples:
- 0 1: Disables the pin mapped to SMA1 or U.FL1.
- 1 2: Assigns the Rx function to SMA2 or U.FL2.
							SMA1 and U.FL1 connectors share channel one. SMA2 and U.FL2 connectors share channel two.
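For instance, a minimal sketch of the e810 plugin pins stanza that disables all four connectors on a hypothetical interface ens2f0 (the interface name is an assumption):

  plugins:
    e810:
      pins:
        ens2f0:
          "SMA1": "0 1"    # disable channel 1 (SMA1/U.FL1)
          "U.FL1": "0 1"
          "SMA2": "0 2"    # disable channel 2 (SMA2/U.FL2)
          "U.FL2": "0 2"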
						
						Set spec.profile.plugins.e810.ublxCmds parameters to configure the GNSS clock in the PtpConfig custom resource (CR).
					
You must configure an offset value to compensate for T-GM GPS antenna cable signal delay. To configure the optimal T-GM antenna offset value, make precise measurements of the GNSS antenna cable signal delay. Red Hat cannot assist in this measurement or provide any values for the required delay offsets.
Each of these ublxCmds stanzas corresponds to a configuration that is applied to the host NIC by using ubxtool commands. For example, see the hedged sketch after the callout below.
					
- 1
- Measured T-GM antenna delay offset in nanoseconds. To get the required delay offset value, you must measure the cable delay using external test equipment.
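A minimal sketch of one ublxCmds stanza that passes the measured antenna delay offset to the receiver; the ubxtool protocol version and the u-blox CFG-TP-ANT_CABLEDELAY configuration key shown here are assumptions to verify against your GNSS receiver documentation:

  ublxCmds:
    - args: # ubxtool -P 29.20 -z CFG-TP-ANT_CABLEDELAY,<antenna_delay_offset>
        - "-P"
        - "29.20"
        - "-z"
        - "CFG-TP-ANT_CABLEDELAY,<antenna_delay_offset>"   # measured offset in nanoseconds (callout 1)
      reportOutput: false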
						The following table describes the equivalent ubxtool commands:
					
| ubxtool command | Description | 
|---|---|
| 
										 | 
										Enables antenna voltage control, allows antenna status to be reported in the  | 
| 
										 | Enables the antenna to receive GPS signals. | 
| 
										 | Configures the antenna to receive signal from the Galileo GPS satellite. | 
| 
										 | Disables the antenna from receiving signal from the GLONASS GPS satellite. | 
| 
										 | Disables the antenna from receiving signal from the BeiDou GPS satellite. | 
| 
										 | Disables the antenna from receiving signal from the SBAS GPS satellite. | 
| 
										 | Configures the GNSS receiver survey-in process to improve its initial position estimate. This can take up to 24 hours to achieve an optimal result. | 
| 
										 | Runs a single automated scan of the hardware and reports on the NIC state and configuration settings. | 
5.2.5.3. Dual E810 NIC configuration reference
Use this information to understand how to use the Intel E810 hardware plugin to configure a pair of E810 network interfaces as PTP grandmaster clock (T-GM).
Before you configure the dual-NIC cluster host, you must connect the two NICs with an SMA1 cable by using the 1PPS faceplate connections.
When you configure a dual-NIC T-GM, you need to compensate for the 1PPS signal delay that occurs when you connect the NICs using the SMA1 connection ports. Various factors such as cable length, ambient temperature, and component and manufacturing tolerances can affect the signal delay. To compensate for the delay, you must calculate the specific value that you use to offset the signal delay.
| PtpConfig field | Description | 
|---|---|
| 
										 | Configure the E810 hardware pins using the PTP Operator E810 hardware plugin. 
 | 
| 
										 | 
										Use the  | 
| 
										 | 
										Set the value of  | 
						Each value in the spec.profile.plugins.e810.pins list follows the <function> <channel_number> format.
					
Where:
						<function>: Specifies the pin role. The following values are associated with the pin role:
					
- 
								0: Disabled
- 
								1: Receive (Rx) – for 1PPS IN
- 
								2: Transmit (Tx) – for 1PPS OUT
						<channel_number>: A number associated with the physical connector. The following channel numbers are associated with the physical connectors:
					
- 1: SMA1 or U.FL1
- 2: SMA2 or U.FL2
Examples:
- 2 1: Enables 1PPS OUT (Tx) on SMA1.
- 1 1: Enables 1PPS IN (Rx) on SMA1.
The PTP Operator passes these values to the Intel E810 hardware plugin and writes them to the sysfs pin configuration interface on each NIC.
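As an illustration, a sketch of the pins settings for the two NICs in a dual-NIC T-GM, assuming hypothetical interface names ens2f0 (NIC 1, drives 1PPS OUT) and ens7f0 (NIC 2, receives 1PPS IN); the remaining connectors are disabled:

  plugins:
    e810:
      pins:
        ens2f0:            # NIC 1: output the 1PPS signal on SMA1
          "SMA1": "2 1"    # 1PPS OUT (Tx), channel 1
          "U.FL2": "0 2"
        ens7f0:            # NIC 2: receive the 1PPS signal on SMA1
          "SMA1": "1 1"    # 1PPS IN (Rx), channel 1
          "U.FL2": "0 2"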
5.2.5.4. 3-card E810 NIC configuration reference
Use this information to understand how to configure 3 E810 NICs as PTP grandmaster clock (T-GM).
						Before you configure the 3-card cluster host, you must connect the 3 NICs by using the 1PPS faceplate connections. The primary NIC 1PPS_out outputs feed the other 2 NICs.
					
When you configure a 3-card T-GM, you need to compensate for the 1PPS signal delay that occurs when you connect the NICs by using the SMA1 connection ports. Various factors such as cable length, ambient temperature, and component and manufacturing tolerances can affect the signal delay. To compensate for the delay, you must calculate the specific value that you use to offset the signal delay.
| PtpConfig field | Description | 
|---|---|
| 
										 | Configure the E810 hardware pins with the PTP Operator E810 hardware plugin. 
 | 
| 
										 | 
										Use the  | 
| 
										 | 
										Set the value of  | 
5.2.6. Configuring dynamic leap seconds handling for PTP grandmaster clocks
					The PTP Operator container image includes the latest leap-seconds.list file that is available at the time of release. You can configure the PTP Operator to automatically update the leap second file by using Global Positioning System (GPS) announcements.
				
					Leap second information is stored in an automatically generated ConfigMap resource named leap-configmap in the openshift-ptp namespace. The PTP Operator mounts the leap-configmap resource as a volume in the linuxptp-daemon pod that is accessible by the ts2phc process.
				
					If the GPS satellite broadcasts new leap second data, the PTP Operator updates the leap-configmap resource with the new data. The ts2phc process picks up the changes automatically.
				
The following procedure is provided as reference. The 4.17 version of the PTP Operator enables automatic leap second management by default.
Prerequisites
- 
							You have installed the OpenShift CLI (oc).
- 
You have logged in as a user with cluster-admin privileges.
- You have installed the PTP Operator and configured a PTP grandmaster clock (T-GM) in the cluster.
Procedure
- Configure automatic leap second handling in the phc2sysOpts section of the PtpConfig CR. Set the following options:
  phc2sysOpts: -r -u 0 -m -N 8 -R 16 -S 2 -s ens2f0 -n 24
  Note: Previously, the T-GM required an offset adjustment in the phc2sys configuration (-O -37) to account for historical leap seconds. This is no longer needed.
- Configure the Intel E810 NIC to enable periodic reporting of NAV-TIMELS messages by the GPS receiver in the spec.profile.plugins.e810.ublxCmds section of the PtpConfig CR. For example:
  - args: # ubxtool -P 29.20 -p CFG-MSG,1,38,248
      - "-P"
      - "29.20"
      - "-p"
      - "CFG-MSG,1,38,248"
Verification
- Validate that the configured T-GM is receiving NAV-TIMELS messages from the connected GPS. Run the following command:
  $ oc -n openshift-ptp -c linuxptp-daemon-container exec -it $(oc -n openshift-ptp get pods -o name | grep daemon) -- ubxtool -t -p NAV-TIMELS -P 29.20
- Validate that the leap-configmap resource has been successfully generated by the PTP Operator and is up to date with the latest version of the leap-seconds.list file. Run the following command:
  $ oc -n openshift-ptp get configmap leap-configmap -o jsonpath='{.data.<node_name>}'
  Replace <node_name> with the name of the node.
5.2.7. Configuring linuxptp services as a boundary clock
					You can configure the linuxptp services (ptp4l, phc2sys) as boundary clock by creating a PtpConfig custom resource (CR) object.
				
						Use the following example PtpConfig CR as the basis to configure linuxptp services as the boundary clock for your particular hardware and environment. This example CR does not configure PTP fast events. To configure PTP fast events, set appropriate values for ptp4lOpts, ptp4lConf, and ptpClockThreshold. ptpClockThreshold is used only when events are enabled. See "Configuring the PTP fast event notifications publisher" for more information.
					
Prerequisites
- 
							Install the OpenShift CLI (oc).
- 
Log in as a user with cluster-admin privileges.
- Install the PTP Operator.
Procedure
- Create the following PtpConfig CR, and then save the YAML in the boundary-clock-ptp-config.yaml file (Example PTP boundary clock configuration; a sketch follows this procedure).
  Table 5.7. PTP boundary clock CR configuration options
  | CR field | Description |
  |---|---|
  | name | The name of the PtpConfig CR. |
  | profile | Specify an array of one or more profile objects. |
  | name | Specify the name of a profile object which uniquely identifies a profile object. |
  | ptp4lOpts | Specify system config options for the ptp4l service. The options should not include the network interface name -i <interface> and service config file -f /etc/ptp4l.conf because the network interface name and the service config file are automatically appended. |
  | ptp4lConf | Specify the required configuration to start ptp4l as a boundary clock. For example, ens1f0 synchronizes from a grandmaster clock and ens1f3 synchronizes connected devices. |
  | <interface_1> | The interface that receives the synchronization clock. |
  | <interface_2> | The interface that sends the synchronization clock. |
  | tx_timestamp_timeout | For Intel Columbiaville 800 Series NICs, set tx_timestamp_timeout to 50. |
  | boundary_clock_jbod | For Intel Columbiaville 800 Series NICs, ensure boundary_clock_jbod is set to 0. For Intel Fortville X710 Series NICs, ensure boundary_clock_jbod is set to 1. |
  | phc2sysOpts | Specify system config options for the phc2sys service. If this field is empty, the PTP Operator does not start the phc2sys service. |
  | ptpSchedulingPolicy | Scheduling policy for ptp4l and phc2sys processes. Default value is SCHED_OTHER. Use SCHED_FIFO on systems that support FIFO scheduling. |
  | ptpSchedulingPriority | Integer value from 1-65 used to set FIFO priority for ptp4l and phc2sys processes when ptpSchedulingPolicy is set to SCHED_FIFO. The ptpSchedulingPriority field is not used when ptpSchedulingPolicy is set to SCHED_OTHER. |
  | ptpClockThreshold | Optional. If ptpClockThreshold is not present, default values are used for the ptpClockThreshold fields. ptpClockThreshold configures how long after the PTP master clock is disconnected before PTP events are triggered. holdOverTimeout is the time value in seconds before the PTP clock event state changes to FREERUN when the PTP master clock is disconnected. The maxOffsetThreshold and minOffsetThreshold settings configure offset values in nanoseconds that compare against the values for CLOCK_REALTIME (phc2sys) or master offset (ptp4l). When the ptp4l or phc2sys offset value is outside this range, the PTP clock state is set to FREERUN. When the offset value is within this range, the PTP clock state is set to LOCKED. |
  | recommend | Specify an array of one or more recommend objects that define rules on how the profile should be applied to nodes. |
  | .recommend.profile | Specify the .recommend.profile object name defined in the profile section. |
  | .recommend.priority | Specify the priority with an integer value between 0 and 99. A larger number gets lower priority, so a priority of 99 is lower than a priority of 10. If a node can be matched with multiple profiles according to rules defined in the match field, the profile with the higher priority is applied to that node. |
  | .recommend.match | Specify .recommend.match rules with nodeLabel or nodeName values. |
  | .recommend.match.nodeLabel | Set nodeLabel with the key of the node.Labels field from the node object by using the oc get nodes --show-labels command. For example, node-role.kubernetes.io/worker. |
  | .recommend.match.nodeName | Set nodeName with the value of the node.Name field from the node object by using the oc get nodes command. For example, compute-1.example.com. |
- Create the CR by running the following command:
  $ oc create -f boundary-clock-ptp-config.yaml
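A minimal sketch of the shape of a boundary clock PtpConfig, assuming hypothetical interfaces ens1f0 (receives synchronization) and ens1f3 (sends synchronization); take the full ptp4lConf and option values from the table above and the reference configuration for your hardware:

  apiVersion: ptp.openshift.io/v1
  kind: PtpConfig
  metadata:
    name: boundary-clock
    namespace: openshift-ptp
  spec:
    profile:
    - name: boundary-clock
      ptp4lOpts: "-2"
      phc2sysOpts: "-a -r -n 24"
      ptpSchedulingPolicy: SCHED_FIFO
      ptpSchedulingPriority: 10
      ptp4lConf: |
        [ens1f0]
        masterOnly 0            # receives the synchronization clock
        [ens1f3]
        masterOnly 1            # sends the synchronization clock
        [global]
        tx_timestamp_timeout 50 # Columbiaville 800 Series setting
        boundary_clock_jbod 0   # 0 for Columbiaville 800 Series, 1 for Fortville X710
        # remaining ptp4l options come from the reference configuration
    recommend:
    - profile: boundary-clock
      priority: 4
      match:
      - nodeLabel: "node-role.kubernetes.io/worker"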
Verification
- Check that the PtpConfig profile is applied to the node. Get the list of pods in the openshift-ptp namespace by running the following command:
  $ oc get pods -n openshift-ptp -o wide
  Example output
  NAME                            READY   STATUS    RESTARTS   AGE   IP            NODE
  linuxptp-daemon-4xkbb           1/1     Running   0          43m   10.1.196.24   compute-0.example.com
  linuxptp-daemon-tdspf           1/1     Running   0          43m   10.1.196.25   compute-1.example.com
  ptp-operator-657bbb64c8-2f8sj   1/1     Running   0          43m   10.129.0.61   control-plane-1.example.com
- Check that the profile is correct. Examine the logs of the linuxptp daemon that corresponds to the node you specified in the PtpConfig profile. Run the following command:
  $ oc logs linuxptp-daemon-4xkbb -n openshift-ptp -c linuxptp-daemon-container
 
5.2.7.1. Configuring linuxptp services as boundary clocks for dual-NIC hardware
						You can configure the linuxptp services (ptp4l, phc2sys) as boundary clocks for dual-NIC hardware by creating a PtpConfig custom resource (CR) object for each NIC.
					
						Dual NIC hardware allows you to connect each NIC to the same upstream leader clock with separate ptp4l instances for each NIC feeding the downstream clocks.
					
Prerequisites
- 
								Install the OpenShift CLI (oc).
- 
Log in as a user with cluster-admin privileges.
- Install the PTP Operator.
Procedure
- Create two separate PtpConfig CRs, one for each NIC, using the reference CR in "Configuring linuxptp services as a boundary clock" as the basis for each CR. For example, create boundary-clock-ptp-config-nic1.yaml, specifying values for phc2sysOpts:
  1. Specify the required interfaces to start ptp4l as a boundary clock. For example, ens5f0 synchronizes from a grandmaster clock and ens5f1 synchronizes connected devices.
  2. Required phc2sysOpts values. -m prints messages to stdout. The linuxptp-daemon DaemonSet parses the logs and generates Prometheus metrics.
 
- Create boundary-clock-ptp-config-nic2.yaml, removing the phc2sysOpts field altogether to disable the phc2sys service for the second NIC:
  1. Specify the required interfaces to start ptp4l as a boundary clock on the second NIC.
  Note: You must completely remove the phc2sysOpts field from the second PtpConfig CR to disable the phc2sys service on the second NIC.
 
- Create the dual-NIC PtpConfig CRs by running the following commands: Create the CR that configures PTP for the first NIC:
  $ oc create -f boundary-clock-ptp-config-nic1.yaml
- Create the CR that configures PTP for the second NIC:
  $ oc create -f boundary-clock-ptp-config-nic2.yaml
 
Verification
- Check that the PTP Operator has applied the PtpConfig CRs for both NICs. Examine the logs for the linuxptp daemon corresponding to the node that has the dual-NIC hardware installed. For example, run the following command:
  $ oc logs linuxptp-daemon-cvgr6 -n openshift-ptp -c linuxptp-daemon-container
  Example output
  ptp4l[80828.335]: [ptp4l.1.config] master offset 5 s2 freq -5727 path delay 519
  ptp4l[80828.343]: [ptp4l.0.config] master offset -5 s2 freq -10607 path delay 533
  phc2sys[80828.390]: [ptp4l.0.config] CLOCK_REALTIME phc offset 1 s2 freq -87239 delay 539
5.2.7.2. Configuring linuxptp as a highly available system clock for dual-NIC Intel E810 PTP boundary clocks
						You can configure the linuxptp services ptp4l and phc2sys as a highly available (HA) system clock for dual PTP boundary clocks (T-BC).
					
The highly available system clock uses multiple time sources from dual-NIC Intel E810 Salem channel hardware configured as two boundary clocks. Two boundary clock instances participate in the HA setup, each with its own configuration profile. You connect each NIC to the same upstream leader clock with separate ptp4l instances for each NIC feeding the downstream clocks.
					
						Create two PtpConfig custom resource (CR) objects that configure the NICs as T-BC and a third PtpConfig CR that configures high availability between the two NICs.
					
You set phc2sysOpts options once in the PtpConfig CR that configures HA. Set the phc2sysOpts field to an empty string in the PtpConfig CRs that configure the two NICs. This prevents individual phc2sys processes from being set up for the two profiles.
						
						The third PtpConfig CR configures a highly available system clock service. The CR sets the ptp4lOpts field to an empty string to prevent the ptp4l process from running. The CR adds profiles for the ptp4l configurations under the spec.profile.ptpSettings.haProfiles key and passes the kernel socket path of those profiles to the phc2sys service. When a ptp4l failure occurs, the phc2sys service switches to the backup ptp4l configuration. When the primary profile becomes active again, the phc2sys service reverts to the original state.
					
							Ensure that you set spec.recommend.priority to the same value for all three PtpConfig CRs that you use to configure HA.
						
Prerequisites
- 
								Install the OpenShift CLI (oc).
- 
Log in as a user with cluster-admin privileges.
- Install the PTP Operator.
- Configure a cluster node with Intel E810 Salem channel dual-NIC.
Procedure
- Create two separate PtpConfig CRs, one for each NIC, using the CRs in "Configuring linuxptp services as boundary clocks for dual-NIC hardware" as a reference for each CR. Create the ha-ptp-config-nic1.yaml file, specifying an empty string for the phc2sysOpts field. For example:
  1. Specify the required interfaces to start ptp4l as a boundary clock. For example, ens5f0 synchronizes from a grandmaster clock and ens5f1 synchronizes connected devices.
  2. Set phc2sysOpts with an empty string. These values are populated from the spec.profile.ptpSettings.haProfiles field of the PtpConfig CR that configures high availability.
 
- Apply the PtpConfig CR for NIC 1 by running the following command:
  $ oc create -f ha-ptp-config-nic1.yaml
- Create the ha-ptp-config-nic2.yaml file, specifying an empty string for the phc2sysOpts field.
- Apply the PtpConfig CR for NIC 2 by running the following command:
  $ oc create -f ha-ptp-config-nic2.yaml
 
- Create the PtpConfig CR that configures the HA system clock. For example, create the ptp-config-for-ha.yaml file (a sketch follows this procedure). Set haProfiles to match the metadata.name fields that are set in the PtpConfig CRs that configure the two NICs, for example:
  haProfiles: ha-ptp-config-nic1,ha-ptp-config-nic2
  1. Set the ptp4lOpts field to an empty string. If it is not empty, the ptp4l process starts with a critical error.
 
Important: Do not apply the high availability PtpConfig CR before the PtpConfig CRs that configure the individual NICs. Apply the HA PtpConfig CR by running the following command:
  $ oc create -f ptp-config-for-ha.yaml
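A minimal sketch of the shape of the ptp-config-for-ha.yaml CR, assuming the two NIC profiles are named ha-ptp-config-nic1 and ha-ptp-config-nic2 as in this procedure; the phc2sysOpts and priority values shown are illustrative:

  apiVersion: ptp.openshift.io/v1
  kind: PtpConfig
  metadata:
    name: boundary-ha
    namespace: openshift-ptp
  spec:
    profile:
    - name: boundary-ha
      ptp4lOpts: ""                 # empty string: no ptp4l process for this profile
      phc2sysOpts: "-a -r -n 24"    # single phc2sys instance for the HA setup
      ptpSchedulingPolicy: SCHED_FIFO
      ptpSchedulingPriority: 10
      ptpSettings:
        haProfiles: "ha-ptp-config-nic1,ha-ptp-config-nic2"
    recommend:
    - profile: boundary-ha
      priority: 4                   # must match the priority in the two NIC PtpConfig CRs
      match:
      - nodeLabel: "node-role.kubernetes.io/worker"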
 
Verification
- Verify that the PTP Operator has applied the PtpConfig CRs correctly. Perform the following steps: Get the list of pods in the openshift-ptp namespace by running the following command:
  $ oc get pods -n openshift-ptp -o wide
  Example output
  NAME                            READY   STATUS    RESTARTS   AGE   IP            NODE
  linuxptp-daemon-4xkrb           1/1     Running   0          43m   10.1.196.24   compute-0.example.com
  ptp-operator-657bbq64c8-2f8sj   1/1     Running   0          43m   10.129.0.61   control-plane-1.example.com
  Note: There should be only one linuxptp-daemon pod.
- Check that the profile is correct by running the following command. Examine the logs of the linuxptp daemon that corresponds to the node you specified in the PtpConfig profile.
  $ oc logs linuxptp-daemon-4xkrb -n openshift-ptp -c linuxptp-daemon-container
 
5.2.8. Configuring linuxptp services as an ordinary clock
					You can configure linuxptp services (ptp4l, phc2sys) as ordinary clock by creating a PtpConfig custom resource (CR) object.
				
						Use the following example PtpConfig CR as the basis to configure linuxptp services as an ordinary clock for your particular hardware and environment. This example CR does not configure PTP fast events. To configure PTP fast events, set appropriate values for ptp4lOpts, ptp4lConf, and ptpClockThreshold. ptpClockThreshold is required only when events are enabled. See "Configuring the PTP fast event notifications publisher" for more information.
					
Prerequisites
- 
							Install the OpenShift CLI (oc).
- 
Log in as a user with cluster-admin privileges.
- Install the PTP Operator.
Procedure
- Create the following PtpConfig CR, and then save the YAML in the ordinary-clock-ptp-config.yaml file (Example PTP ordinary clock configuration; a sketch follows this procedure).
  Table 5.8. PTP ordinary clock CR configuration options
  | CR field | Description |
  |---|---|
  | name | The name of the PtpConfig CR. |
  | profile | Specify an array of one or more profile objects. Each profile must be uniquely named. |
  | interface | Specify the network interface to be used by the ptp4l service, for example ens787f1. |
  | ptp4lOpts | Specify system config options for the ptp4l service, for example -2 to select the IEEE 802.3 network transport. The options should not include the network interface name -i <interface> and service config file -f /etc/ptp4l.conf because the network interface name and the service config file are automatically appended. Append --summary_interval -4 to use PTP fast events with this interface. |
  | phc2sysOpts | Specify system config options for the phc2sys service. If this field is empty, the PTP Operator does not start the phc2sys service. For Intel Columbiaville 800 Series NICs, set phc2sysOpts options to -a -r -m -n 24 -N 8 -R 16. -m prints messages to stdout. The linuxptp-daemon DaemonSet parses the logs and generates Prometheus metrics. |
  | ptp4lConf | Specify a string that contains the configuration to replace the default /etc/ptp4l.conf file. To use the default configuration, leave the field empty. |
  | tx_timestamp_timeout | For Intel Columbiaville 800 Series NICs, set tx_timestamp_timeout to 50. |
  | boundary_clock_jbod | For Intel Columbiaville 800 Series NICs, set boundary_clock_jbod to 0. |
  | ptpSchedulingPolicy | Scheduling policy for ptp4l and phc2sys processes. Default value is SCHED_OTHER. Use SCHED_FIFO on systems that support FIFO scheduling. |
  | ptpSchedulingPriority | Integer value from 1-65 used to set FIFO priority for ptp4l and phc2sys processes when ptpSchedulingPolicy is set to SCHED_FIFO. The ptpSchedulingPriority field is not used when ptpSchedulingPolicy is set to SCHED_OTHER. |
  | ptpClockThreshold | Optional. If ptpClockThreshold is not present, default values are used for the ptpClockThreshold fields. ptpClockThreshold configures how long after the PTP master clock is disconnected before PTP events are triggered. holdOverTimeout is the time value in seconds before the PTP clock event state changes to FREERUN when the PTP master clock is disconnected. The maxOffsetThreshold and minOffsetThreshold settings configure offset values in nanoseconds that compare against the values for CLOCK_REALTIME (phc2sys) or master offset (ptp4l). When the ptp4l or phc2sys offset value is outside this range, the PTP clock state is set to FREERUN. When the offset value is within this range, the PTP clock state is set to LOCKED. |
  | recommend | Specify an array of one or more recommend objects that define rules on how the profile should be applied to nodes. |
  | .recommend.profile | Specify the .recommend.profile object name defined in the profile section. |
  | .recommend.priority | Set .recommend.priority to 0 for ordinary clock. |
  | .recommend.match | Specify .recommend.match rules with nodeLabel or nodeName values. |
  | .recommend.match.nodeLabel | Set nodeLabel with the key of the node.Labels field from the node object by using the oc get nodes --show-labels command. For example, node-role.kubernetes.io/worker. |
  | .recommend.match.nodeName | Set nodeName with the value of the node.Name field from the node object by using the oc get nodes command. For example, compute-1.example.com. |
- Create the PtpConfig CR by running the following command:
  $ oc create -f ordinary-clock-ptp-config.yaml
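A minimal sketch of the shape of an ordinary clock PtpConfig, assuming a hypothetical interface ens787f1; the option values shown are illustrative and should be adjusted to the table above and your hardware:

  apiVersion: ptp.openshift.io/v1
  kind: PtpConfig
  metadata:
    name: ordinary-clock
    namespace: openshift-ptp
  spec:
    profile:
    - name: ordinary-clock
      interface: ens787f1                # NIC used by ptp4l
      ptp4lOpts: "-2 -s --summary_interval -4"
      phc2sysOpts: "-a -r -m -n 24 -N 8 -R 16"
      ptp4lConf: ""                      # empty: use the default /etc/ptp4l.conf
      ptpSchedulingPolicy: SCHED_FIFO
      ptpSchedulingPriority: 10
    recommend:
    - profile: ordinary-clock
      priority: 0
      match:
      - nodeLabel: "node-role.kubernetes.io/worker"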
Verification
- Check that the PtpConfig profile is applied to the node. Get the list of pods in the openshift-ptp namespace by running the following command:
  $ oc get pods -n openshift-ptp -o wide
  Example output
  NAME                            READY   STATUS    RESTARTS   AGE   IP            NODE
  linuxptp-daemon-4xkbb           1/1     Running   0          43m   10.1.196.24   compute-0.example.com
  linuxptp-daemon-tdspf           1/1     Running   0          43m   10.1.196.25   compute-1.example.com
  ptp-operator-657bbb64c8-2f8sj   1/1     Running   0          43m   10.129.0.61   control-plane-1.example.com
- Check that the profile is correct. Examine the logs of the linuxptp daemon that corresponds to the node you specified in the PtpConfig profile. Run the following command:
  $ oc logs linuxptp-daemon-4xkbb -n openshift-ptp -c linuxptp-daemon-container
 
5.2.8.1. Intel Columbiaville E800 series NIC as PTP ordinary clock reference
						The following table describes the changes that you must make to the reference PTP configuration to use Intel Columbiaville E800 series NICs as ordinary clocks. Make the changes in a PtpConfig custom resource (CR) that you apply to the cluster.
					
| PTP configuration | Recommended setting | 
|---|---|
| 
										 | 
										 | 
| 
										 | 
										 | 
| 
										 | 
										 | 
							For phc2sysOpts, -m prints messages to stdout. The linuxptp-daemon DaemonSet parses the logs and generates Prometheus metrics.
						
5.2.9. Configuring FIFO priority scheduling for PTP hardware
					In telco or other deployment types that require low latency performance, PTP daemon threads run in a constrained CPU footprint alongside the rest of the infrastructure components. By default, PTP threads run with the SCHED_OTHER policy. Under high load, these threads might not get the scheduling latency they require for error-free operation.
				
					To mitigate against potential scheduling latency errors, you can configure the PTP Operator linuxptp services to allow threads to run with a SCHED_FIFO policy. If SCHED_FIFO is set for a PtpConfig CR, then ptp4l and phc2sys will run in the parent container under chrt with a priority set by the ptpSchedulingPriority field of the PtpConfig CR.
				
						Setting ptpSchedulingPolicy is optional, and is only required if you are experiencing latency errors.
					
Procedure
- Edit the PtpConfig CR profile:
  $ oc edit PtpConfig -n openshift-ptp
- Change the ptpSchedulingPolicy and ptpSchedulingPriority fields (a sketch follows this procedure).
- 
Save and exit to apply the changes to the PtpConfig CR.
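A minimal sketch of the relevant part of the profile, with illustrative values; the priority of 65 matches the chrt output shown in the verification step:

  spec:
    profile:
    - name: ordinary-clock
      # other profile fields unchanged
      ptpSchedulingPolicy: SCHED_FIFO   # enable FIFO scheduling for ptp4l and phc2sys
      ptpSchedulingPriority: 65         # integer 1-65; used only with SCHED_FIFO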
Verification
- Get the name of the linuxptp-daemon pod and corresponding node where the PtpConfig CR has been applied:
  $ oc get pods -n openshift-ptp -o wide
  Example output
  NAME                            READY   STATUS    RESTARTS   AGE     IP            NODE
  linuxptp-daemon-gmv2n           3/3     Running   0          1d17h   10.1.196.24   compute-0.example.com
  linuxptp-daemon-lgm55           3/3     Running   0          1d17h   10.1.196.25   compute-1.example.com
  ptp-operator-3r4dcvf7f4-zndk7   1/1     Running   0          1d7h    10.129.0.61   control-plane-1.example.com
- Check that the ptp4l process is running with the updated chrt FIFO priority:
  $ oc -n openshift-ptp logs linuxptp-daemon-lgm55 -c linuxptp-daemon-container | grep chrt
  Example output
  I1216 19:24:57.091872 1600715 daemon.go:285] /bin/chrt -f 65 /usr/sbin/ptp4l -f /var/run/ptp4l.0.config -2 --summary_interval -4 -m
5.2.10. Configuring log filtering for linuxptp services
					The linuxptp daemon generates logs that you can use for debugging purposes. In telco or other deployment types that feature a limited storage capacity, these logs can add to the storage demand.
				
To reduce the number of log messages, you can configure the PtpConfig custom resource (CR) to exclude log messages that report the master offset value. The master offset log message reports the difference between the current node's clock and the master clock in nanoseconds.
				
Prerequisites
- 
							Install the OpenShift CLI (oc).
- 
Log in as a user with cluster-admin privileges.
- Install the PTP Operator.
Procedure
- Edit the PtpConfig CR:
  $ oc edit PtpConfig -n openshift-ptp
- In spec.profile, add the ptpSettings.logReduce specification and set the value to true (a sketch follows this procedure). Note: For debugging purposes, you can revert this specification to false to include the master offset messages.
- 
Save and exit to apply the changes to the PtpConfig CR.
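A minimal sketch of the added specification within an existing profile; the profile name is illustrative:

  spec:
    profile:
    - name: ordinary-clock
      # other profile fields unchanged
      ptpSettings:
        logReduce: "true"   # suppress per-sample master offset log messages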
Verification
- Get the name of the linuxptp-daemon pod and corresponding node where the PtpConfig CR has been applied:
  $ oc get pods -n openshift-ptp -o wide
  Example output
  NAME                            READY   STATUS    RESTARTS   AGE     IP            NODE
  linuxptp-daemon-gmv2n           3/3     Running   0          1d17h   10.1.196.24   compute-0.example.com
  linuxptp-daemon-lgm55           3/3     Running   0          1d17h   10.1.196.25   compute-1.example.com
  ptp-operator-3r4dcvf7f4-zndk7   1/1     Running   0          1d7h    10.129.0.61   control-plane-1.example.com
- Verify that master offset messages are excluded from the logs by running the following command:
  $ oc -n openshift-ptp logs <linux_daemon_container> -c linuxptp-daemon-container | grep "master offset"
  where <linux_daemon_container> is the name of the linuxptp-daemon pod, for example linuxptp-daemon-gmv2n.
  When you configure the logReduce specification, this command does not report any instances of master offset in the logs of the linuxptp daemon.
5.2.11. Troubleshooting common PTP Operator issues
Troubleshoot common problems with the PTP Operator by performing the following steps.
Prerequisites
- 
							Install the OpenShift Container Platform CLI (oc).
- 
Log in as a user with cluster-admin privileges.
- Install the PTP Operator on a bare-metal cluster with hosts that support PTP.
Procedure
- Check that the Operator and operands are successfully deployed in the cluster for the configured nodes:
  $ oc get pods -n openshift-ptp -o wide
  Example output
  NAME                            READY   STATUS    RESTARTS   AGE     IP            NODE
  linuxptp-daemon-lmvgn           3/3     Running   0          4d17h   10.1.196.24   compute-0.example.com
  linuxptp-daemon-qhfg7           3/3     Running   0          4d17h   10.1.196.25   compute-1.example.com
  ptp-operator-6b8dcbf7f4-zndk7   1/1     Running   0          5d7h    10.129.0.61   control-plane-1.example.com
  Note: When the PTP fast event bus is enabled, the number of ready linuxptp-daemon pods is 3/3. If the PTP fast event bus is not enabled, 2/2 is displayed.
- Check that supported hardware is found in the cluster:
  $ oc -n openshift-ptp get nodeptpdevices.ptp.openshift.io
- Check the available PTP network interfaces for a node:
  $ oc -n openshift-ptp get nodeptpdevices.ptp.openshift.io <node_name> -o yaml
  where <node_name> specifies the node you want to query, for example, compute-0.example.com.
 
- Check that the PTP interface is successfully synchronized to the primary clock by accessing the linuxptp-daemon pod for the corresponding node. Get the name of the linuxptp-daemon pod and corresponding node you want to troubleshoot by running the following command:
  $ oc get pods -n openshift-ptp -o wide
  Example output
  NAME                            READY   STATUS    RESTARTS   AGE     IP            NODE
  linuxptp-daemon-lmvgn           3/3     Running   0          4d17h   10.1.196.24   compute-0.example.com
  linuxptp-daemon-qhfg7           3/3     Running   0          4d17h   10.1.196.25   compute-1.example.com
  ptp-operator-6b8dcbf7f4-zndk7   1/1     Running   0          5d7h    10.129.0.61   control-plane-1.example.com
- Remote shell into the required linuxptp-daemon container:
  $ oc rsh -n openshift-ptp -c linuxptp-daemon-container <linux_daemon_container>
  where <linux_daemon_container> is the container you want to diagnose, for example linuxptp-daemon-lmvgn.
 
- In the remote shell connection to the linuxptp-daemon container, use the PTP Management Client (pmc) tool to diagnose the network interface. Run the following pmc command to check the sync status of the PTP device, for example ptp4l:
  # pmc -u -f /var/run/ptp4l.0.config -b 0 'GET PORT_DATA_SET'
 
- For GNSS-sourced grandmaster clocks, verify that the in-tree NIC ice driver is correct by running the following command, for example:
  $ oc rsh -n openshift-ptp -c linuxptp-daemon-container linuxptp-daemon-74m2g ethtool -i ens7f0
  Example output
  driver: ice
  version: 5.14.0-356.bz2232515.el9.x86_64
  firmware-version: 4.20 0x8001778b 1.3346.0
- For GNSS-sourced grandmaster clocks, verify that the linuxptp-daemon container is receiving signal from the GNSS antenna. If the container is not receiving the GNSS signal, the /dev/gnss0 file is not populated. To verify, run the following command:
  $ oc rsh -n openshift-ptp -c linuxptp-daemon-container linuxptp-daemon-jnz6r cat /dev/gnss0
  Example output
  $GNRMC,125223.00,A,4233.24463,N,07126.64561,W,0.000,,300823,,,A,V*0A
  $GNVTG,,T,,M,0.000,N,0.000,K,A*3D
  $GNGGA,125223.00,4233.24463,N,07126.64561,W,1,12,99.99,98.6,M,-33.1,M,,*7E
  $GNGSA,A,3,25,17,19,11,12,06,05,04,09,20,,,99.99,99.99,99.99,1*37
  $GPGSV,3,1,10,04,12,039,41,05,31,222,46,06,50,064,48,09,28,064,42,1*62
5.2.12. Getting the DPLL firmware version for the CGU in an Intel 800 series NIC
You can get the digital phase-locked loop (DPLL) firmware version for the Clock Generation Unit (CGU) in an Intel 800 series NIC by opening a debug shell to the cluster node and querying the NIC hardware.
Prerequisites
- 
							You have installed the OpenShift CLI (oc).
- 
You have logged in as a user with cluster-admin privileges.
- You have installed an Intel 800 series NIC in the cluster host.
- You have installed the PTP Operator on a bare-metal cluster with hosts that support PTP.
Procedure
- Start a debug pod by running the following command:
  $ oc debug node/<node_name>
  where <node_name> is the node where you have installed the Intel 800 series NIC.
 
- Check the CGU firmware version in the NIC by using the devlink tool and the bus and device name where the NIC is installed. For example, run the following command:
  sh-4.4# devlink dev info <bus_name>/<device_name> | grep cgu
  where:
  <bus_name> is the bus where the NIC is installed, for example, pci.
  <device_name> is the NIC device name, for example, 0000:51:00.0.
  Example output
  cgu.id 36
  fw.cgu 8032.16973825.6021
  Note: The firmware version has a leading nibble and 3 octets for each part of the version number. The number 16973825 in binary is 0001 0000 0011 0000 0000 0000 0001. Use the binary value to decode the firmware version. For example:
  Table 5.10. DPLL firmware version
  | Binary part | Decimal value |
  |---|---|
  | 0001 | 1 |
  | 0000 0011 | 3 |
  | 0000 0000 | 0 |
  | 0000 0001 | 1 |
5.2.13. Collecting PTP Operator data
					You can use the oc adm must-gather command to collect information about your cluster, including features and objects associated with PTP Operator.
				
Prerequisites
- 
You have access to the cluster as a user with the cluster-admin role.
- 
							You have installed the OpenShift CLI (oc).
- You have installed the PTP Operator.
Procedure
- To collect PTP Operator data with must-gather, you must specify the PTP Operator must-gather image:
  $ oc adm must-gather --image=registry.redhat.io/openshift4/ptp-must-gather-rhel9:v4.17
5.3. Developing PTP events consumer applications with the REST API v2
When developing consumer applications that make use of Precision Time Protocol (PTP) events on a bare-metal cluster node, you deploy your consumer application in a separate application pod. The consumer application subscribes to PTP events by using the PTP events REST API v2.
The following information provides general guidance for developing consumer applications that use PTP events. A complete events consumer application example is outside the scope of this information.
5.3.1. About the PTP fast event notifications framework
Use the Precision Time Protocol (PTP) fast event REST API v2 to subscribe cluster applications to PTP events that the bare-metal cluster node generates.
The fast events notifications framework uses a REST API for communication. The PTP events REST API v1 and v2 are based on the O-RAN O-Cloud Notification API Specification for Event Consumers 4.0 that is available from O-RAN ALLIANCE Specifications.
Only the PTP events REST API v2 is O-RAN v4 compliant.
5.3.2. Retrieving PTP events with the PTP events REST API v2
					Applications subscribe to PTP events by using an O-RAN v4 compatible REST API in the producer-side cloud event proxy sidecar. The cloud-event-proxy sidecar container can access the same resources as the primary application container without using any of the resources of the primary application and with no significant latency.
				
Figure 5.5. Overview of consuming PTP fast events from the PTP event producer REST API v2
- Event is generated on the cluster host
  The linuxptp-daemon process in the PTP Operator-managed pod runs as a Kubernetes DaemonSet and manages the various linuxptp processes (ptp4l, phc2sys, and optionally for grandmaster clocks, ts2phc). The linuxptp-daemon passes the event to the UNIX domain socket.
- Event is passed to the cloud-event-proxy sidecar
  The PTP plugin reads the event from the UNIX domain socket and passes it to the cloud-event-proxy sidecar in the PTP Operator-managed pod. cloud-event-proxy delivers the event from the Kubernetes infrastructure to Cloud-Native Network Functions (CNFs) with low latency.
- Event is published
  The cloud-event-proxy sidecar in the PTP Operator-managed pod processes the event and publishes the event by using the PTP events REST API v2.
- Consumer application requests a subscription and receives the subscribed event
  The consumer application sends an API request to the producer cloud-event-proxy sidecar to create a PTP events subscription. Once subscribed, the consumer application listens to the address specified in the resource qualifier and receives and processes the PTP events.
5.3.3. Configuring the PTP fast event notifications publisher
					To start using PTP fast event notifications for a network interface in your cluster, you must enable the fast event publisher in the PTP Operator PtpOperatorConfig custom resource (CR) and configure ptpClockThreshold values in a PtpConfig CR that you create.
				
Prerequisites
- You have installed the OpenShift Container Platform CLI (oc).
- You have logged in as a user with cluster-admin privileges.
- You have installed the PTP Operator.
Procedure
- Modify the default PTP Operator config to enable PTP fast events.
  Save the following YAML in the ptp-operatorconfig.yaml file:
  Note
  In OpenShift Container Platform 4.13 or later, you do not need to set the spec.ptpEventConfig.transportHost field in the PtpOperatorConfig resource when you use HTTP transport for PTP events.
- Update the PtpOperatorConfig CR:

  $ oc apply -f ptp-operatorconfig.yaml
- Create a PtpConfig custom resource (CR) for the PTP-enabled interface, and set the required values for ptpClockThreshold and ptp4lOpts. The following YAML illustrates the required values that you must set in the PtpConfig CR:
- 1
- Append --summary_interval -4 to use PTP fast events.
- 2
- Required phc2sysOpts values. -m prints messages to stdout. The linuxptp-daemon DaemonSet parses the logs and generates Prometheus metrics.
- 3
- Specify a string that contains the configuration to replace the default /etc/ptp4l.conf file. To use the default configuration, leave the field empty.
- 4
- Optional. If the ptpClockThreshold stanza is not present, default values are used for the ptpClockThreshold fields. The stanza shows default ptpClockThreshold values. The ptpClockThreshold values configure how long after the PTP master clock is disconnected before PTP events are triggered. holdOverTimeout is the time value in seconds before the PTP clock event state changes to FREERUN when the PTP master clock is disconnected. The maxOffsetThreshold and minOffsetThreshold settings configure offset values in nanoseconds that compare against the values for CLOCK_REALTIME (phc2sys) or master offset (ptp4l). When the ptp4l or phc2sys offset value is outside this range, the PTP clock state is set to FREERUN. When the offset value is within this range, the PTP clock state is set to LOCKED.
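To make the threshold behavior concrete, the following Go sketch illustrates the comparison described in callout 4. It is not code from the Operator; the threshold values in main are hypothetical and only mirror the shape of the default stanza:

package main

import "fmt"

// clockState mirrors the PTP clock event states described above.
type clockState string

const (
	stateLocked  clockState = "LOCKED"
	stateFreerun clockState = "FREERUN"
)

// evaluateOffset is a simplified illustration of the ptpClockThreshold check:
// an offset (in nanoseconds) inside [minOffsetThreshold, maxOffsetThreshold]
// is treated as LOCKED; anything outside the range is treated as FREERUN.
func evaluateOffset(offsetNs, minOffsetThreshold, maxOffsetThreshold int64) clockState {
	if offsetNs >= minOffsetThreshold && offsetNs <= maxOffsetThreshold {
		return stateLocked
	}
	return stateFreerun
}

func main() {
	// Hypothetical thresholds of -100 ns and 100 ns.
	fmt.Println(evaluateOffset(37, -100, 100))   // LOCKED
	fmt.Println(evaluateOffset(-250, -100, 100)) // FREERUN
}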
 
5.3.4. PTP events REST API v2 consumer application reference
PTP event consumer applications require the following features:
- A web service running with a POST handler to receive the cloud native PTP events JSON payload
- A createSubscription function to subscribe to the PTP events producer
- A getCurrentState function to poll the current state of the PTP events producer
The following example Go snippets illustrate these requirements:
Example PTP events consumer server function in Go
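The documented listing is not reproduced in this extract. As a minimal sketch of the first requirement, a consumer web service only needs a POST handler that accepts the event JSON. The /event path and port 9043 below match the consumer service used in the subscription payload examples later in this section, but they are assumptions that you can change:

package main

import (
	"io"
	"log"
	"net/http"
)

func main() {
	// Handler that receives the cloud native PTP events JSON payload.
	http.HandleFunc("/event", func(w http.ResponseWriter, r *http.Request) {
		if r.Method != http.MethodPost {
			w.WriteHeader(http.StatusMethodNotAllowed)
			return
		}
		defer r.Body.Close()
		body, err := io.ReadAll(r.Body)
		if err != nil {
			w.WriteHeader(http.StatusBadRequest)
			return
		}
		// A real consumer would unmarshal the payload and act on the event here.
		log.Printf("received PTP event payload: %s", body)
		w.WriteHeader(http.StatusNoContent)
	})
	log.Fatal(http.ListenAndServe(":9043", nil))
}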
Example PTP events createSubscription function in Go
- 1
- Replace <node_name> with the FQDN of the node that is generating the PTP events. For example, compute-1.example.com.
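As an illustrative sketch of the createSubscription requirement, the function below POSTs a lock-state subscription payload to the producer subscriptions endpoint described in "Subscribing to PTP events with the REST API v2". The URL, resource address, and error handling are assumptions for the example, not the documented listing:

package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"log"
	"net/http"
)

// subscription matches the payload fields used in the REST API v2 examples.
type subscription struct {
	EndpointUri     string `json:"EndpointUri"`
	ResourceAddress string `json:"ResourceAddress"`
}

// createSubscription registers the consumer endpoint for PTP lock-state events.
func createSubscription(subscriptionsURL string) error {
	sub := subscription{
		EndpointUri:     "http://consumer-events-subscription-service.cloud-events.svc.cluster.local:9043/event",
		ResourceAddress: "/cluster/node/<node_name>/sync/ptp-status/lock-state",
	}
	payload, err := json.Marshal(sub)
	if err != nil {
		return err
	}
	resp, err := http.Post(subscriptionsURL, "application/json", bytes.NewReader(payload))
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	if resp.StatusCode >= 300 {
		return fmt.Errorf("subscription request failed: %s", resp.Status)
	}
	return nil
}

func main() {
	// Replace <node_name> with the FQDN of the node that generates the events.
	url := "http://ptp-event-publisher-service-<node_name>.openshift-ptp.svc.cluster.local:9043/api/ocloudNotifications/v2/subscriptions"
	if err := createSubscription(url); err != nil {
		log.Fatal(err)
	}
	log.Println("subscription created")
}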
Example PTP events consumer getCurrentState function in Go
- 1
- Replace <node_name> with the FQDN of the node that is generating the PTP events. For example, compute-1.example.com.
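The getCurrentState requirement is a plain GET against one of the CurrentState endpoints listed in the REST API v2 reference later in this document. The following sketch is illustrative; the lock-state resource and node name placeholder are assumptions:

package main

import (
	"fmt"
	"io"
	"log"
	"net/http"
)

// getCurrentState polls the producer REST API v2 for the current state of a
// subscribed resource, for example the PTP lock-state of a node.
func getCurrentState(resourceURL string) (string, error) {
	resp, err := http.Get(resourceURL)
	if err != nil {
		return "", err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return "", fmt.Errorf("unexpected status: %s", resp.Status)
	}
	body, err := io.ReadAll(resp.Body)
	if err != nil {
		return "", err
	}
	return string(body), nil
}

func main() {
	// Replace <node_name> with the FQDN of the node that generates the events.
	url := "http://ptp-event-publisher-service-<node_name>.openshift-ptp.svc.cluster.local:9043/api/ocloudNotifications/v2/cluster/node/<node_name>/sync/ptp-status/lock-state/CurrentState"
	state, err := getCurrentState(url)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(state)
}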
5.3.5. Reference event consumer deployment and service CRs using PTP events REST API v2
Use the following example PTP event consumer custom resources (CRs) as a reference when deploying your PTP events consumer application for use with the PTP events REST API v2.
Reference cloud event consumer namespace
Reference cloud event consumer deployment
Reference cloud event consumer service account
apiVersion: v1
kind: ServiceAccount
metadata:
  name: consumer-sa
  namespace: cloud-events

Reference cloud event consumer service
5.3.6. Subscribing to PTP events with the REST API v2
					Deploy your cloud-event-consumer application container and subscribe the cloud-event-consumer application to PTP events posted by the cloud-event-proxy container in the pod managed by the PTP Operator.
				
Subscribe consumer applications to PTP events by sending a POST request to http://ptp-event-publisher-service-NODE_NAME.openshift-ptp.svc.cluster.local:9043/api/ocloudNotifications/v2/subscriptions, passing the appropriate subscription request payload.
				
						9043 is the default port for the cloud-event-proxy container deployed in the PTP event producer pod. You can configure a different port for your application as required.
					
5.3.7. Verifying that the PTP events REST API v2 consumer application is receiving events
					Verify that the cloud-event-consumer container in the application pod is receiving Precision Time Protocol (PTP) events.
				
Prerequisites
- You have installed the OpenShift CLI (oc).
- You have logged in as a user with cluster-admin privileges.
- You have installed and configured the PTP Operator.
- You have deployed a cloud events application pod and PTP events consumer application.
Procedure
- Check the logs for the deployed events consumer application. For example, run the following command:

  $ oc -n cloud-events logs -f deployment/cloud-consumer-deployment

  Example output
- Optional. Test the REST API by using oc and port-forwarding port 9043 from the linuxptp-daemon daemon set. For example, run the following command:

  $ oc port-forward -n openshift-ptp ds/linuxptp-daemon 9043:9043

  Example output

  Forwarding from 127.0.0.1:9043 -> 9043
  Forwarding from [::1]:9043 -> 9043
  Handling connection for 9043

  Open a new shell prompt and test the REST API v2 endpoints:

  $ curl -X GET http://localhost:9043/api/ocloudNotifications/v2/health

  Example output

  OK
5.3.8. Monitoring PTP fast event metrics
					You can monitor PTP fast events metrics from cluster nodes where the linuxptp-daemon is running. You can also monitor PTP fast event metrics in the OpenShift Container Platform web console by using the preconfigured and self-updating Prometheus monitoring stack.
				
Prerequisites
- Install the OpenShift Container Platform CLI (oc).
- Log in as a user with cluster-admin privileges.
- Install and configure the PTP Operator on a node with PTP-capable hardware.
Procedure
- Start a debug pod for the node by running the following command:

  $ oc debug node/<node_name>
- Check for PTP metrics exposed by the linuxptp-daemon container. For example, run the following command:

  sh-4.4# curl http://localhost:9091/metrics

  Example output
- Optional. You can also find PTP events in the logs for the cloud-event-proxy container. For example, run the following command:

  $ oc logs -f linuxptp-daemon-cvgr6 -n openshift-ptp -c cloud-event-proxy
- 
							To view the PTP event in the OpenShift Container Platform web console, copy the name of the PTP metric you want to query, for example, openshift_ptp_offset_ns.
- In the OpenShift Container Platform web console, click Observe → Metrics.
- Paste the PTP metric name into the Expression field, and click Run queries.
5.3.9. PTP fast event metrics reference
					The following table describes the PTP fast events metrics that are available from cluster nodes where the linuxptp-daemon service is running.
				
| Metric | Description | Example | 
|---|---|---|
| 
									 | 
									Returns the PTP clock class for the interface. Possible values for PTP clock class are 6 ( | 
									 | 
| 
									 | 
									Returns the current PTP clock state for the interface. Possible values for PTP clock state are  | 
									 | 
| 
									 | Returns the delay in nanoseconds between the primary clock sending the timing packet and the secondary clock receiving the timing packet. | 
									 | 
| 
									 | 
									Returns the current status of the highly available system clock when there are multiple time sources on different NICs. Possible values are 0 ( | 
									 | 
| 
									 | 
									Returns the frequency adjustment in nanoseconds between 2 PTP clocks. For example, between the upstream clock and the NIC, between the system clock and the NIC, or between the PTP hardware clock ( | 
									 | 
| 
									 | 
									Returns the configured PTP clock role for the interface. Possible values are 0 ( | 
									 | 
| 
									 | 
									Returns the maximum offset in nanoseconds between 2 clocks or interfaces. For example, between the upstream GNSS clock and the NIC ( | 
									 | 
| 
									 | Returns the offset in nanoseconds between the DPLL clock or the GNSS clock source and the NIC hardware clock. | 
									 | 
| 
									 | 
									Returns a count of the number of times the  | 
									 | 
| 
									 | Returns a status code that shows whether the PTP processes are running or not. | 
									 | 
| 
									 | 
									Returns values for  
 | 
									 | 
5.3.9.1. PTP fast event metrics only when T-GM is enabled
The following table describes the PTP fast event metrics that are available only when PTP grandmaster clock (T-GM) is enabled.
| Metric | Description | Example | 
|---|---|---|
| 
										 | 
										Returns the current status of the digital phase-locked loop (DPLL) frequency for the NIC. Possible values are -1 ( | 
										 | 
| 
										 | 
										Returns the current status of the NMEA connection. NMEA is the protocol that is used for 1PPS NIC connections. Possible values are 0 ( | 
										 | 
| 
										 | 
										Returns the status of the DPLL phase for the NIC. Possible values are -1 ( | 
										 | 
| 
										 | 
										Returns the current status of the NIC 1PPS connection. You use the 1PPS connection to synchronize timing between connected NICs. Possible values are 0 ( | 
										 | 
| 
										 | 
										Returns the current status of the global navigation satellite system (GNSS) connection. GNSS provides satellite-based positioning, navigation, and timing services globally. Possible values are 0 ( | 
										 | 
5.4. PTP events REST API v2 reference
				Use the following REST API v2 endpoints to subscribe the cloud-event-consumer application to Precision Time Protocol (PTP) events posted at http://localhost:9043/api/ocloudNotifications/v2 in the PTP events producer pod.
			
- api/ocloudNotifications/v2/subscriptions
  - POST: Creates a new subscription
  - GET: Retrieves a list of subscriptions
  - DELETE: Deletes all subscriptions
- api/ocloudNotifications/v2/subscriptions/{subscription_id}
  - GET: Returns details for the specified subscription ID
  - DELETE: Deletes the subscription associated with the specified subscription ID
- api/ocloudNotifications/v2/health
  - GET: Returns the health status of the ocloudNotifications API
- api/ocloudNotifications/v2/publishers
  - GET: Returns a list of PTP event publishers for the cluster node
- api/ocloudnotifications/v2/{resource_address}/CurrentState
  - GET: Returns the current state of the event type specified by the {resource_address}
5.4.1. PTP events REST API v2 endpoints
5.4.1.1. api/ocloudNotifications/v2/subscriptions
HTTP method
						GET api/ocloudNotifications/v2/subscriptions
					
Description
						Returns a list of subscriptions. If subscriptions exist, a 200 OK status code is returned along with the list of subscriptions.
					
Example API response
HTTP method
						POST api/ocloudNotifications/v2/subscriptions
					
Description
Creates a new subscription for the required event by passing the appropriate payload.
You can subscribe to the following PTP events:
- sync-state events
- lock-state events
- gnss-sync-status events
- os-clock-sync-state events
- clock-class events
| Parameter | Type | 
|---|---|
| subscription | data | 
Example sync-state subscription payload
{
  "EndpointUri": "http://consumer-events-subscription-service.cloud-events.svc.cluster.local:9043/event",
  "ResourceAddress": "/cluster/node/{node_name}/sync/sync-status/sync-state"
}

Example PTP lock-state events subscription payload

{
  "EndpointUri": "http://consumer-events-subscription-service.cloud-events.svc.cluster.local:9043/event",
  "ResourceAddress": "/cluster/node/{node_name}/sync/ptp-status/lock-state"
}

Example PTP gnss-sync-status events subscription payload

{
  "EndpointUri": "http://consumer-events-subscription-service.cloud-events.svc.cluster.local:9043/event",
  "ResourceAddress": "/cluster/node/{node_name}/sync/gnss-status/gnss-sync-status"
}

Example PTP os-clock-sync-state events subscription payload

{
  "EndpointUri": "http://consumer-events-subscription-service.cloud-events.svc.cluster.local:9043/event",
  "ResourceAddress": "/cluster/node/{node_name}/sync/sync-status/os-clock-sync-state"
}

Example PTP clock-class events subscription payload

{
  "EndpointUri": "http://consumer-events-subscription-service.cloud-events.svc.cluster.local:9043/event",
  "ResourceAddress": "/cluster/node/{node_name}/sync/ptp-status/clock-class"
}

Example API response
The following subscription status events are possible:
| Status code | Description | 
|---|---|
| 
										 | Indicates that the subscription is created | 
| 
										 | Indicates that the server could not process the request because it was malformed or invalid | 
| 
										 | Indicates that the subscription resource is not available | 
| 
										 | Indicates that the subscription already exists | 
HTTP method
						DELETE api/ocloudNotifications/v2/subscriptions
					
Description
Deletes all subscriptions.
Example API response
{
  "status": "deleted all subscriptions"
}

5.4.1.2. api/ocloudNotifications/v2/subscriptions/{subscription_id}
HTTP method
						GET api/ocloudNotifications/v2/subscriptions/{subscription_id}
					
Description
						Returns details for the subscription with ID subscription_id.
					
| Parameter | Type | 
|---|---|
| 
										 | string | 
Example API response
HTTP method
						DELETE api/ocloudNotifications/v2/subscriptions/{subscription_id}
					
Description
						Deletes the subscription with ID subscription_id.
					
| Parameter | Type | 
|---|---|
| 
										 | string | 
| HTTP response | Description | 
|---|---|
| 204 No Content | Success | 
5.4.1.3. api/ocloudNotifications/v2/health
HTTP method
						GET api/ocloudNotifications/v2/health/
					
Description
						Returns the health status for the ocloudNotifications REST API.
					
| HTTP response | Description | 
|---|---|
| 200 OK | Success | 
5.4.1.4. api/ocloudNotifications/v2/publishers
HTTP method
						GET api/ocloudNotifications/v2/publishers
					
Description
Returns a list of publisher details for the cluster node. The system generates notifications when the relevant equipment state changes.
You can use equipment synchronization status subscriptions together to deliver a detailed view of the overall synchronization health of the system.
Example API response
| HTTP response | Description | 
|---|---|
| 200 OK | Success | 
5.4.1.5. api/ocloudNotifications/v2/{resource_address}/CurrentState
HTTP method
						GET api/ocloudNotifications/v2/cluster/node/{node_name}/sync/ptp-status/lock-state/CurrentState
					
						GET api/ocloudNotifications/v2/cluster/node/{node_name}/sync/sync-status/os-clock-sync-state/CurrentState
					
						GET api/ocloudNotifications/v2/cluster/node/{node_name}/sync/ptp-status/clock-class/CurrentState
					
						GET api/ocloudNotifications/v2/cluster/node/{node_name}/sync/sync-status/sync-state/CurrentState
					
GET api/ocloudNotifications/v2/cluster/node/{node_name}/sync/gnss-status/gnss-sync-status/CurrentState
					
Description
						Returns the current state of the os-clock-sync-state, clock-class, lock-state, gnss-sync-status, or sync-state events for the cluster node.
					
- os-clock-sync-state notifications describe the host operating system clock synchronization state. Can be in LOCKED or FREERUN state.
- clock-class notifications describe the current state of the PTP clock class.
- lock-state notifications describe the current status of the PTP equipment lock state. Can be in LOCKED, HOLDOVER, or FREERUN state.
- sync-state notifications describe the current status of the least synchronized of the PTP clock lock-state and os-clock-sync-state states. A sketch of this combination logic follows this list.
- gnss-sync-status notifications describe the GNSS clock synchronization state.
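The following Go helper is a hypothetical illustration, not Operator code, of how the combined sync-state relates to the two underlying states: FREERUN is treated as less synchronized than HOLDOVER, HOLDOVER as less synchronized than LOCKED, and the worse of the two values is reported:

package main

import "fmt"

// rank orders the states from most to least synchronized.
var rank = map[string]int{
	"LOCKED":   0,
	"HOLDOVER": 1,
	"FREERUN":  2,
}

// combinedSyncState returns the least synchronized (worst) of the PTP
// lock-state and the os-clock-sync-state, which is what the sync-state
// notification reports. Inputs are assumed to be valid state names.
func combinedSyncState(lockState, osClockSyncState string) string {
	if rank[lockState] >= rank[osClockSyncState] {
		return lockState
	}
	return osClockSyncState
}

func main() {
	fmt.Println(combinedSyncState("LOCKED", "FREERUN"))  // FREERUN
	fmt.Println(combinedSyncState("HOLDOVER", "LOCKED")) // HOLDOVER
}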
| Parameter | Type | 
|---|---|
| 
										 | string | 
Example lock-state API response
Example os-clock-sync-state API response
Example clock-class API response
Example sync-state API response
Example gnss-sync-status API response
5.5. Developing PTP events consumer applications with the REST API v1
When developing consumer applications that make use of Precision Time Protocol (PTP) events on a bare-metal cluster node, you deploy your consumer application in a separate application pod. The consumer application subscribes to PTP events by using the PTP events REST API v1.
The following information provides general guidance for developing consumer applications that use PTP events. A complete events consumer application example is outside the scope of this information.
The PTP events REST API v1 and the events consumer application sidecar are deprecated features. Deprecated functionality is still included in OpenShift Container Platform and continues to be supported; however, it will be removed in a future release of this product and is not recommended for new deployments.
For the most recent list of major functionality that has been deprecated or removed within OpenShift Container Platform, refer to the Deprecated and removed features section of the OpenShift Container Platform release notes.
5.5.1. About the PTP fast event notifications framework
Use the Precision Time Protocol (PTP) fast event REST API v2 to subscribe cluster applications to PTP events that the bare-metal cluster node generates.
The fast events notifications framework uses a REST API for communication. The PTP events REST API v1 and v2 are based on the O-RAN O-Cloud Notification API Specification for Event Consumers 4.0 that is available from O-RAN ALLIANCE Specifications.
Only the PTP events REST API v2 is O-RAN v4 compliant.
5.5.2. Retrieving PTP events with the PTP events REST API v1
					Applications run the cloud-event-proxy container in a sidecar pattern to subscribe to PTP events. The cloud-event-proxy sidecar container can access the same resources as the primary application container without using any of the resources of the primary application and with no significant latency.
				
Figure 5.6. Overview of PTP fast events with consumer sidecar and HTTP message transport
- Event is generated on the cluster host
  linuxptp-daemon in the PTP Operator-managed pod runs as a Kubernetes DaemonSet and manages the various linuxptp processes (ptp4l, phc2sys, and optionally for grandmaster clocks, ts2phc). The linuxptp-daemon passes the event to the UNIX domain socket.
- Event is passed to the cloud-event-proxy sidecar
  The PTP plugin reads the event from the UNIX domain socket and passes it to the cloud-event-proxy sidecar in the PTP Operator-managed pod. cloud-event-proxy delivers the event from the Kubernetes infrastructure to Cloud-Native Network Functions (CNFs) with low latency.
- Event is persisted
  The cloud-event-proxy sidecar in the PTP Operator-managed pod processes the event and publishes the cloud-native event by using a REST API.
- Message is transported
  The message transporter transports the event to the cloud-event-proxy sidecar in the application pod over HTTP.
- Event is available from the REST API
  The cloud-event-proxy sidecar in the application pod processes the event and makes it available by using the REST API.
- Consumer application requests a subscription and receives the subscribed event
  The consumer application sends an API request to the cloud-event-proxy sidecar in the application pod to create a PTP events subscription. The cloud-event-proxy sidecar creates an HTTP messaging listener protocol for the resource specified in the subscription.
					The cloud-event-proxy sidecar in the application pod receives the event from the PTP Operator-managed pod, unwraps the cloud events object to retrieve the data, and posts the event to the consumer application. The consumer application listens to the address specified in the resource qualifier and receives and processes the PTP event.
				
5.5.3. Configuring the PTP fast event notifications publisher
					To start using PTP fast event notifications for a network interface in your cluster, you must enable the fast event publisher in the PTP Operator PtpOperatorConfig custom resource (CR) and configure ptpClockThreshold values in a PtpConfig CR that you create.
				
Prerequisites
- You have installed the OpenShift Container Platform CLI (oc).
- You have logged in as a user with cluster-admin privileges.
- You have installed the PTP Operator.
Procedure
- Modify the default PTP Operator config to enable PTP fast events.
  Save the following YAML in the ptp-operatorconfig.yaml file:
- 1
- Enable PTP fast event notifications by setting enableEventPublisher to true.
  Note
  In OpenShift Container Platform 4.13 or later, you do not need to set the spec.ptpEventConfig.transportHost field in the PtpOperatorConfig resource when you use HTTP transport for PTP events.
- Update the PtpOperatorConfig CR:

  $ oc apply -f ptp-operatorconfig.yaml
- Create a PtpConfig custom resource (CR) for the PTP-enabled interface, and set the required values for ptpClockThreshold and ptp4lOpts. The following YAML illustrates the required values that you must set in the PtpConfig CR:
- 1
- Append --summary_interval -4 to use PTP fast events.
- 2
- Required phc2sysOpts values. -m prints messages to stdout. The linuxptp-daemon DaemonSet parses the logs and generates Prometheus metrics.
- 3
- Specify a string that contains the configuration to replace the default /etc/ptp4l.conf file. To use the default configuration, leave the field empty.
- 4
- Optional. If the ptpClockThreshold stanza is not present, default values are used for the ptpClockThreshold fields. The stanza shows default ptpClockThreshold values. The ptpClockThreshold values configure how long after the PTP master clock is disconnected before PTP events are triggered. holdOverTimeout is the time value in seconds before the PTP clock event state changes to FREERUN when the PTP master clock is disconnected. The maxOffsetThreshold and minOffsetThreshold settings configure offset values in nanoseconds that compare against the values for CLOCK_REALTIME (phc2sys) or master offset (ptp4l). When the ptp4l or phc2sys offset value is outside this range, the PTP clock state is set to FREERUN. When the offset value is within this range, the PTP clock state is set to LOCKED.
 
5.5.4. PTP events consumer application reference
PTP event consumer applications require the following features:
- A web service running with a POST handler to receive the cloud native PTP events JSON payload
- A createSubscription function to subscribe to the PTP events producer
- A getCurrentState function to poll the current state of the PTP events producer
The following example Go snippets illustrate these requirements:
Example PTP events consumer server function in Go
Example PTP events createSubscription function in Go
- 1
- Replace <node_name> with the FQDN of the node that is generating the PTP events. For example, compute-1.example.com.
Example PTP events consumer getCurrentState function in Go
5.5.5. Reference cloud-event-proxy deployment and service CRs
					Use the following example cloud-event-proxy deployment and subscriber service CRs as a reference when deploying your PTP events consumer application.
				
Reference cloud-event-proxy deployment with HTTP transport
Reference cloud-event-proxy subscriber service
5.5.6. Subscribing to PTP events with the REST API v1
					Deploy your cloud-event-consumer application container and cloud-event-proxy sidecar container in a separate application pod.
				
					Subscribe the cloud-event-consumer application to PTP events posted by the cloud-event-proxy container at http://localhost:8089/api/ocloudNotifications/v1/ in the application pod.
				
						9089 is the default port for the cloud-event-consumer container deployed in the application pod. You can configure a different port for your application as required.
					
5.5.7. Verifying that the PTP events REST API v1 consumer application is receiving events
					Verify that the cloud-event-proxy container in the application pod is receiving PTP events.
				
Prerequisites
- You have installed the OpenShift CLI (oc).
- You have logged in as a user with cluster-admin privileges.
- You have installed and configured the PTP Operator.
Procedure
- Get the list of active linuxptp-daemon pods. Run the following command:

  $ oc get pods -n openshift-ptp

  Example output

  NAME                    READY   STATUS    RESTARTS   AGE
  linuxptp-daemon-2t78p   3/3     Running   0          8h
  linuxptp-daemon-k8n88   3/3     Running   0          8h
- Access the metrics for the required consumer-side cloud-event-proxy container by running the following command:

  $ oc exec -it <linuxptp-daemon> -n openshift-ptp -c cloud-event-proxy -- curl 127.0.0.1:9091/metrics

  where:

  <linuxptp-daemon>
  - Specifies the pod you want to query, for example, linuxptp-daemon-2t78p.

  Example output
5.5.8. Monitoring PTP fast event metrics
					You can monitor PTP fast events metrics from cluster nodes where the linuxptp-daemon is running. You can also monitor PTP fast event metrics in the OpenShift Container Platform web console by using the preconfigured and self-updating Prometheus monitoring stack.
				
Prerequisites
- Install the OpenShift Container Platform CLI (oc).
- Log in as a user with cluster-admin privileges.
- Install and configure the PTP Operator on a node with PTP-capable hardware.
Procedure
- Start a debug pod for the node by running the following command:

  $ oc debug node/<node_name>
- Check for PTP metrics exposed by the linuxptp-daemon container. For example, run the following command:

  sh-4.4# curl http://localhost:9091/metrics

  Example output
- Optional. You can also find PTP events in the logs for the cloud-event-proxy container. For example, run the following command:

  $ oc logs -f linuxptp-daemon-cvgr6 -n openshift-ptp -c cloud-event-proxy
- 
							To view the PTP event in the OpenShift Container Platform web console, copy the name of the PTP metric you want to query, for example, openshift_ptp_offset_ns.
- In the OpenShift Container Platform web console, click Observe → Metrics.
- Paste the PTP metric name into the Expression field, and click Run queries.
5.5.9. PTP fast event metrics reference
					The following table describes the PTP fast events metrics that are available from cluster nodes where the linuxptp-daemon service is running.
				
| Metric | Description | Example | 
|---|---|---|
| 
									 | 
									Returns the PTP clock class for the interface. Possible values for PTP clock class are 6 ( | 
									 | 
| 
									 | 
									Returns the current PTP clock state for the interface. Possible values for PTP clock state are  | 
									 | 
| 
									 | Returns the delay in nanoseconds between the primary clock sending the timing packet and the secondary clock receiving the timing packet. | 
									 | 
| 
									 | 
									Returns the current status of the highly available system clock when there are multiple time sources on different NICs. Possible values are 0 ( | 
									 | 
| 
									 | 
									Returns the frequency adjustment in nanoseconds between 2 PTP clocks. For example, between the upstream clock and the NIC, between the system clock and the NIC, or between the PTP hardware clock ( | 
									 | 
| 
									 | 
									Returns the configured PTP clock role for the interface. Possible values are 0 ( | 
									 | 
| 
									 | 
									Returns the maximum offset in nanoseconds between 2 clocks or interfaces. For example, between the upstream GNSS clock and the NIC ( | 
									 | 
| 
									 | Returns the offset in nanoseconds between the DPLL clock or the GNSS clock source and the NIC hardware clock. | 
									 | 
| 
									 | 
									Returns a count of the number of times the  | 
									 | 
| 
									 | Returns a status code that shows whether the PTP processes are running or not. | 
									 | 
| 
									 | 
									Returns values for  
 | 
									 | 
5.5.9.1. PTP fast event metrics only when T-GM is enabled
The following table describes the PTP fast event metrics that are available only when PTP grandmaster clock (T-GM) is enabled.
| Metric | Description | Example | 
|---|---|---|
| 
										 | 
										Returns the current status of the digital phase-locked loop (DPLL) frequency for the NIC. Possible values are -1 ( | 
										 | 
| 
										 | 
										Returns the current status of the NMEA connection. NMEA is the protocol that is used for 1PPS NIC connections. Possible values are 0 ( | 
										 | 
| 
										 | 
										Returns the status of the DPLL phase for the NIC. Possible values are -1 ( | 
										 | 
| 
										 | 
										Returns the current status of the NIC 1PPS connection. You use the 1PPS connection to synchronize timing between connected NICs. Possible values are 0 ( | 
										 | 
| 
										 | 
										Returns the current status of the global navigation satellite system (GNSS) connection. GNSS provides satellite-based positioning, navigation, and timing services globally. Possible values are 0 ( | 
										 | 
5.6. PTP events REST API v1 reference
				Use the following Precision Time Protocol (PTP) fast event REST API v1 endpoints to subscribe the cloud-event-consumer application to PTP events posted by the cloud-event-proxy container at http://localhost:8089/api/ocloudNotifications/v1/ in the application pod.
			
The PTP events REST API v1 and the events consumer application sidecar are deprecated features. Deprecated functionality is still included in OpenShift Container Platform and continues to be supported; however, it will be removed in a future release of this product and is not recommended for new deployments.
For the most recent list of major functionality that has been deprecated or removed within OpenShift Container Platform, refer to the Deprecated and removed features section of the OpenShift Container Platform release notes.
The following API endpoints are available:
- api/ocloudNotifications/v1/subscriptions
  - POST: Creates a new subscription
  - GET: Retrieves a list of subscriptions
  - DELETE: Deletes all subscriptions
- api/ocloudNotifications/v1/subscriptions/{subscription_id}
  - GET: Returns details for the specified subscription ID
  - DELETE: Deletes the subscription associated with the specified subscription ID
- api/ocloudNotifications/v1/health
  - GET: Returns the health status of the ocloudNotifications API
- api/ocloudNotifications/v1/publishers
  - GET: Returns a list of PTP event publishers for the cluster node
- api/ocloudnotifications/v1/{resource_address}/CurrentState
  - GET: Returns the current state of one of the following event types: sync-state, os-clock-sync-state, clock-class, lock-state, or gnss-sync-status events
5.6.1. PTP events REST API v1 endpoints
5.6.1.1. api/ocloudNotifications/v1/subscriptions
HTTP method
						GET api/ocloudNotifications/v1/subscriptions
					
Description
						Returns a list of subscriptions. If subscriptions exist, a 200 OK status code is returned along with the list of subscriptions.
					
Example API response
HTTP method
						POST api/ocloudNotifications/v1/subscriptions
					
Description
						Creates a new subscription for the required event by passing the appropriate payload. If a subscription is successfully created, or if it already exists, a 201 Created status code is returned. You can subscribe to the following PTP events:
					
- lock-state events
- os-clock-sync-state events
- clock-class events
- gnss-sync-status events
- sync-state events
| Parameter | Type | 
|---|---|
| subscription | data | 
Example PTP events subscription payload
{
  "endpointUri": "http://localhost:8989/event",
  "resource": "/cluster/node/compute-1.example.com/ptp"
}

Example PTP lock-state events subscription payload

{
  "endpointUri": "http://localhost:8989/event",
  "resource": "/cluster/node/{node_name}/sync/ptp-status/lock-state"
}

Example PTP os-clock-sync-state events subscription payload

{
  "endpointUri": "http://localhost:8989/event",
  "resource": "/cluster/node/{node_name}/sync/sync-status/os-clock-sync-state"
}

Example PTP clock-class events subscription payload

{
  "endpointUri": "http://localhost:8989/event",
  "resource": "/cluster/node/{node_name}/sync/ptp-status/clock-class"
}

Example PTP gnss-sync-status events subscription payload

{
  "endpointUri": "http://localhost:8989/event",
  "resource": "/cluster/node/{node_name}/sync/gnss-status/gnss-sync-status"
}

Example sync-state subscription payload

{
  "endpointUri": "http://localhost:8989/event",
  "resource": "/cluster/node/{node_name}/sync/sync-status/sync-state"
}

HTTP method
						DELETE api/ocloudNotifications/v1/subscriptions
					
Description
Deletes all subscriptions.
Example API response
{
  "status": "deleted all subscriptions"
}

5.6.1.2. api/ocloudNotifications/v1/subscriptions/{subscription_id}
HTTP method
						GET api/ocloudNotifications/v1/subscriptions/{subscription_id}
					
Description
						Returns details for the subscription with ID subscription_id.
					
| Parameter | Type | 
|---|---|
| 
										 | string | 
Example API response
HTTP method
						DELETE api/ocloudNotifications/v1/subscriptions/{subscription_id}
					
Description
						Deletes the subscription with ID subscription_id.
					
| Parameter | Type | 
|---|---|
| 
										 | string | 
Example API response
{
  "status": "OK"
}

5.6.1.3. api/ocloudNotifications/v1/health
HTTP method
						GET api/ocloudNotifications/v1/health/
					
Description
						Returns the health status for the ocloudNotifications REST API.
					
Example API response
OK

5.6.1.4. api/ocloudNotifications/v1/publishers
							The api/ocloudNotifications/v1/publishers endpoint is only available from the cloud-event-proxy container in the PTP Operator managed pod. It is not available for consumer applications in the application pod.
						
HTTP method
						GET api/ocloudNotifications/v1/publishers
					
Description
Returns a list of publisher details for the cluster node. The system generates notifications when the relevant equipment state changes.
You can use equipment synchronization status subscriptions together to deliver a detailed view of the overall synchronization health of the system.
Example API response
5.6.1.5. api/ocloudNotifications/v1/{resource_address}/CurrentState
HTTP method
						GET api/ocloudNotifications/v1/cluster/node/{node_name}/sync/ptp-status/lock-state/CurrentState
					
						GET api/ocloudNotifications/v1/cluster/node/{node_name}/sync/sync-status/os-clock-sync-state/CurrentState
					
						GET api/ocloudNotifications/v1/cluster/node/{node_name}/sync/ptp-status/clock-class/CurrentState
					
						GET api/ocloudNotifications/v1/cluster/node/{node_name}/sync/sync-status/sync-state/CurrentState
					
GET api/ocloudNotifications/v1/cluster/node/{node_name}/sync/gnss-status/gnss-sync-status/CurrentState
					
Description
						Returns the current state of the os-clock-sync-state, clock-class, lock-state, gnss-sync-status, or sync-state events for the cluster node.
					
- os-clock-sync-state notifications describe the host operating system clock synchronization state. Can be in LOCKED or FREERUN state.
- clock-class notifications describe the current state of the PTP clock class.
- lock-state notifications describe the current status of the PTP equipment lock state. Can be in LOCKED, HOLDOVER, or FREERUN state.
- sync-state notifications describe the current status of the least synchronized of the ptp-status/lock-state and sync-status/os-clock-sync-state endpoints.
- gnss-sync-status notifications describe the GNSS clock synchronization state.
| Parameter | Type | 
|---|---|
| 
										 | string | 
Example lock-state API response
Example os-clock-sync-state API response
Example clock-class API response
Example sync-state API response
Example gnss-sync-status API response
Legal Notice
Copyright © 2025 Red Hat
OpenShift documentation is licensed under the Apache License 2.0 (https://www.apache.org/licenses/LICENSE-2.0).
Modified versions must remove all Red Hat trademarks.
Portions adapted from https://github.com/kubernetes-incubator/service-catalog/ with modifications by Red Hat.
Red Hat, Red Hat Enterprise Linux, the Red Hat logo, the Shadowman logo, JBoss, OpenShift, Fedora, the Infinity logo, and RHCE are trademarks of Red Hat, Inc., registered in the United States and other countries.
Linux® is the registered trademark of Linus Torvalds in the United States and other countries.
Java® is a registered trademark of Oracle and/or its affiliates.
XFS® is a trademark of Silicon Graphics International Corp. or its subsidiaries in the United States and/or other countries.
MySQL® is a registered trademark of MySQL AB in the United States, the European Union and other countries.
Node.js® is an official trademark of Joyent. Red Hat Software Collections is not formally related to or endorsed by the official Joyent Node.js open source or commercial project.
The OpenStack® Word Mark and OpenStack logo are either registered trademarks/service marks or trademarks/service marks of the OpenStack Foundation, in the United States and other countries and are used with the OpenStack Foundation’s permission. We are not affiliated with, endorsed or sponsored by the OpenStack Foundation, or the OpenStack community.
All other trademarks are the property of their respective owners.
 
     
     
     
     
    