Chapter 10. Advanced managed cluster configuration with PolicyGenTemplate resources
You can use PolicyGenTemplate CRs to deploy custom functionality in your managed clusters.
10.1. Deploying additional changes to clusters
If you require cluster configuration changes outside of the base GitOps Zero Touch Provisioning (ZTP) pipeline configuration, there are three options:
- Apply the additional configuration after the GitOps ZTP pipeline is complete
- When the GitOps ZTP pipeline deployment is complete, the deployed cluster is ready for application workloads. At this point, you can install additional Operators and apply configurations specific to your requirements. Ensure that additional configurations do not negatively affect the performance of the platform or allocated CPU budget.
- Add content to the GitOps ZTP library
- The base source custom resources (CRs) that you deploy with the GitOps ZTP pipeline can be augmented with custom content as required.
- Create extra manifests for the cluster installation
- Extra manifests are applied during installation and make the installation process more efficient.
Providing additional source CRs or modifying existing source CRs can significantly impact the performance or CPU profile of OpenShift Container Platform.
10.2. Using PolicyGenTemplate CRs to override source CRs content
PolicyGenTemplate custom resources (CRs) allow you to overlay additional configuration details on top of the base source CRs provided with the GitOps plugin in the ztp-site-generate container. You can think of PolicyGenTemplate CRs as a logical merge or patch to the base CR. Use PolicyGenTemplate CRs to update a single field of the base CR, or overlay the entire contents of the base CR. You can update values and insert fields that are not in the base CR.
The following example procedure describes how to update fields in the generated PerformanceProfile CR for the reference configuration based on the PolicyGenTemplate CR in the group-du-sno-ranGen.yaml file. Use the procedure as a basis for modifying other parts of the PolicyGenTemplate based on your requirements.
Prerequisites
- Create a Git repository where you manage your custom site configuration data. The repository must be accessible from the hub cluster and be defined as a source repository for Argo CD.
Procedure
- Review the baseline source CR for existing content. You can review the source CRs listed in the reference PolicyGenTemplate CRs by extracting them from the GitOps Zero Touch Provisioning (ZTP) container.
  - Create an /out folder:

    $ mkdir -p ./out

  - Extract the source CRs:

    $ podman run --log-driver=none --rm registry.redhat.io/openshift4/ztp-site-generate-rhel8:v4.15.1 extract /home/ztp --tar | tar x -C ./out

- Review the baseline PerformanceProfile CR in ./out/source-crs/PerformanceProfile.yaml.

  Note: Any fields in the source CR which contain $… are removed from the generated CR if they are not provided in the PolicyGenTemplate CR.

- Update the PolicyGenTemplate entry for PerformanceProfile in the group-du-sno-ranGen.yaml reference file. The following example PolicyGenTemplate CR stanza supplies appropriate CPU specifications, sets the hugepages configuration, and adds a new field that sets globallyDisableIrqLoadBalancing to false.
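  A sketch of such a stanza; the CPU and hugepages values shown are illustrative placeholders that you must tailor to your hardware:

  ```yaml
  - fileName: PerformanceProfile.yaml
    policyName: "config-policy"
    metadata:
      name: openshift-node-performance-profile
    spec:
      cpu:
        # Illustrative values; set isolated and reserved to match your CPU layout
        isolated: "2-19,22-39"
        reserved: "0-1,20-21"
      hugepages:
        defaultHugepagesSize: 1G
        pages:
          - size: 1G
            count: 10
      globallyDisableIrqLoadBalancing: false
  ```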
- Commit the PolicyGenTemplate change in Git, and then push to the Git repository being monitored by the GitOps ZTP Argo CD application.
The GitOps ZTP application generates an RHACM policy that contains the generated PerformanceProfile CR. The contents of that CR are derived by merging the metadata and spec contents from the PerformanceProfile entry in the PolicyGenTemplate onto the source CR. The resulting CR has the following content:
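A sketch of the merged result; the values mirror the illustrative overlay above, and the remaining fields carry over from the base source CR shipped in the ztp-site-generate container:

```yaml
apiVersion: performance.openshift.io/v2
kind: PerformanceProfile
metadata:
  name: openshift-node-performance-profile
spec:
  additionalKernelArgs:
    - idle=poll
    - rcupdate.rcu_normal_after_boot=0
  cpu:
    isolated: 2-19,22-39
    reserved: 0-1,20-21
  globallyDisableIrqLoadBalancing: false
  hugepages:
    defaultHugepagesSize: 1G
    pages:
      - count: 10
        size: 1G
  machineConfigPoolSelector:
    pools.operator.machineconfiguration.openshift.io/master: ""
  nodeSelector:
    node-role.kubernetes.io/master: ""
  numa:
    topologyPolicy: restricted
  realTimeKernel:
    enabled: true
```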
In the /source-crs folder that you extract from the ztp-site-generate container, the $ prefix is not used for template substitution, despite what the syntax might suggest. Rather, if the policyGen tool sees the $ prefix on a string and you do not specify a value for that field in the related PolicyGenTemplate CR, the field is omitted from the output CR entirely.
An exception to this is the $mcp variable in /source-crs YAML files that is substituted with the specified value for mcp from the PolicyGenTemplate CR. For example, in example/policygentemplates/group-du-standard-ranGen.yaml, the value for mcp is worker:
spec:
  bindingRules:
    group-du-standard: ""
  mcp: "worker"
The policyGen tool replaces instances of $mcp with worker in the output CRs.
10.3. Adding custom content to the GitOps ZTP pipeline
Perform the following procedure to add new content to the GitOps ZTP pipeline.
Procedure
- Create a subdirectory named source-crs in the directory that contains the kustomization.yaml file for the PolicyGenTemplate custom resource (CR).
- Add your user-provided CRs to the source-crs subdirectory, as shown in the following example:
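  A sketch of the expected layout; the file names are hypothetical, and the custom-crs and elasticsearch subdirectories are referenced in the next step:

  ```
  policygentemplates
  ├── kustomization.yaml
  ├── group-dev.yaml
  └── source-crs
      ├── custom-crs
      │   ├── apiserver-config.yaml
      │   └── disable-nic-lldp.yaml
      └── elasticsearch
          ├── ElasticsearchNS.yaml
          └── ElasticsearchOperatorGroup.yaml
  ```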
  Note: The source-crs subdirectory must be in the same directory as the kustomization.yaml file.
- Update the required PolicyGenTemplate CRs to include references to the content you added in the source-crs/custom-crs and source-crs/elasticsearch directories. For example:
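  A sketch of such a PolicyGenTemplate CR; the metadata names and policy name are hypothetical, and the fileName paths are relative to the source-crs directory:

  ```yaml
  apiVersion: ran.openshift.io/v1
  kind: PolicyGenTemplate
  metadata:
    name: "group-dev"
    namespace: "ztp-clusters"
  spec:
    bindingRules:
      dev: "true"
    mcp: "master"
    sourceFiles:
      # User-provided CRs added under source-crs
      - fileName: elasticsearch/ElasticsearchNS.yaml
        policyName: "group-dev-policy"
      - fileName: elasticsearch/ElasticsearchOperatorGroup.yaml
        policyName: "group-dev-policy"
      - fileName: custom-crs/apiserver-config.yaml
        policyName: "group-dev-policy"
      - fileName: custom-crs/disable-nic-lldp.yaml
        policyName: "group-dev-policy"
  ```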
- Commit the PolicyGenTemplate change in Git, and then push to the Git repository that is monitored by the GitOps ZTP Argo CD policies application.
- Update the ClusterGroupUpgrade CR to include the changed PolicyGenTemplate and save it as cgu-test.yaml. The following example shows a generated cgu-test.yaml file:
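  A sketch consistent with the verification output below; the managed policy name assumes the hypothetical group-dev PolicyGenTemplate above:

  ```yaml
  apiVersion: ran.openshift.io/v1alpha1
  kind: ClusterGroupUpgrade
  metadata:
    name: custom-source-cr
    namespace: ztp-clusters
  spec:
    managedPolicies:
      - group-dev-group-dev-policy
    enable: true
    clusters:
      - cluster1
    remediationStrategy:
      maxConcurrency: 2
      timeout: 240
  ```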
- Apply the updated ClusterGroupUpgrade CR by running the following command:

  $ oc apply -f cgu-test.yaml
Verification
Check that the updates have succeeded by running the following command:
$ oc get cgu -A

Example output
NAMESPACE      NAME               AGE   STATE        DETAILS
ztp-clusters   custom-source-cr   6s    InProgress   Remediating non-compliant policies
ztp-install    cluster1           19h   Completed    All clusters are compliant with all the managed policies
10.4. Configuring policy compliance evaluation timeouts for PolicyGenTemplate CRs
Use Red Hat Advanced Cluster Management (RHACM) installed on a hub cluster to monitor and report on whether your managed clusters are compliant with applied policies. RHACM uses policy templates to apply predefined policy controllers and policies. Policy controllers are Kubernetes custom resource definition (CRD) instances.
You can override the default policy evaluation intervals with PolicyGenTemplate custom resources (CRs). You configure duration settings that define how long a ConfigurationPolicy CR can be in a state of policy compliance or non-compliance before RHACM re-evaluates the applied cluster policies.
The GitOps Zero Touch Provisioning (ZTP) policy generator generates ConfigurationPolicy CR policies with pre-defined policy evaluation intervals. The default value for the noncompliant state is 10 seconds. The default value for the compliant state is 10 minutes. To disable the evaluation interval, set the value to never.
Prerequisites
- You have installed the OpenShift CLI (oc).
- You have logged in to the hub cluster as a user with cluster-admin privileges.
- You have created a Git repository where you manage your custom site configuration data.
Procedure
- To configure the evaluation interval for all policies in a PolicyGenTemplate CR, add evaluationInterval to the spec field, and then set the appropriate compliant and noncompliant values. For example:

    spec:
      evaluationInterval:
        compliant: 30m
        noncompliant: 20s

- To configure the evaluation interval for the spec.sourceFiles object in a PolicyGenTemplate CR, add evaluationInterval to the sourceFiles field, for example:
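  A sketch assuming a hypothetical SR-IOV subscription entry; any sourceFiles entry can carry its own evaluationInterval:

  ```yaml
  spec:
    sourceFiles:
      - fileName: SriovSubscription.yaml
        policyName: "sriov-sub-policy"
        evaluationInterval:
          compliant: never
          noncompliant: 10s
  ```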
- Commit the PolicyGenTemplate CR files in the Git repository and push your changes.
Verification
Check that the managed spoke cluster policies are monitored at the expected intervals.
- Log in as a user with cluster-admin privileges on the managed cluster.
- Get the pods that are running in the open-cluster-management-agent-addon namespace. Run the following command:

  $ oc get pods -n open-cluster-management-agent-addon

  Example output
  NAME                                        READY   STATUS    RESTARTS        AGE
  config-policy-controller-858b894c68-v4xdb   1/1     Running   22 (5d8h ago)   10d

- Check the applied policies are being evaluated at the expected interval in the logs for the config-policy-controller pod:

  $ oc logs -n open-cluster-management-agent-addon config-policy-controller-858b894c68-v4xdb

  Example output
  2022-05-10T15:10:25.280Z info configuration-policy-controller controllers/configurationpolicy_controller.go:166 Skipping the policy evaluation due to the policy not reaching the evaluation interval {"policy": "compute-1-config-policy-config"}
  2022-05-10T15:10:25.280Z info configuration-policy-controller controllers/configurationpolicy_controller.go:166 Skipping the policy evaluation due to the policy not reaching the evaluation interval {"policy": "compute-1-common-compute-1-catalog-policy-config"}
10.5. Signalling GitOps ZTP cluster deployment completion with validator inform policies
Create a validator inform policy that signals when the GitOps Zero Touch Provisioning (ZTP) installation and configuration of the deployed cluster is complete. This policy can be used for deployments of single-node OpenShift clusters, three-node clusters, and standard clusters.
Procedure
- Create a standalone PolicyGenTemplate custom resource (CR) that contains the source file validatorCRs/informDuValidator.yaml. You only need one standalone PolicyGenTemplate CR for each cluster type. For example, this CR applies a validator inform policy for single-node OpenShift clusters:

  Example single-node cluster validator inform policy CR (group-du-sno-validator-ranGen.yaml)
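  A sketch matching the numbered callouts below; the exact field values depend on your group PolicyGenTemplate files:

  ```yaml
  apiVersion: ran.openshift.io/v1
  kind: PolicyGenTemplate
  metadata:
    name: "group-du-sno-validator"       # 1
    namespace: "ztp-group"               # 2
  spec:
    bindingRules:
      group-du-sno: ""                   # 3
    bindingExcludedRules:
      ztp-done: ""                       # 4
    mcp: "master"                        # 5
    sourceFiles:
      - fileName: validatorCRs/informDuValidator.yaml
        remediationAction: inform        # 6
        policyName: "du-policy"          # 7
  ```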
1. The name of the PolicyGenTemplates object. This name is also used as part of the names for the placementBinding, placementRule, and policy that are created in the requested namespace.
2. This value should match the namespace used in the group PolicyGenTemplates.
3. The group-du-* label defined in bindingRules must exist in the SiteConfig files.
4. The label defined in bindingExcludedRules must be `ztp-done:`. The ztp-done label is used in coordination with the Topology Aware Lifecycle Manager.
5. mcp defines the MachineConfigPool object that is used in the source file validatorCRs/informDuValidator.yaml. It should be master for single-node and three-node cluster deployments and worker for standard cluster deployments.
6. Optional. The default value is inform.
7. This value is used as part of the name for the generated RHACM policy. The generated validator policy for the single-node example is group-du-sno-validator-du-policy.
- Commit the PolicyGenTemplate CR file in your Git repository and push the changes.
10.6. Configuring power states using PolicyGenTemplate CRs
For low latency and high-performance edge deployments, it is necessary to disable or limit C-states and P-states. With this configuration, the CPU runs at a constant frequency, which is typically the maximum turbo frequency. This ensures that the CPU always runs at its maximum speed, which results in the best latency for workloads. However, it also results in the highest power consumption, which might not be necessary for all workloads.
Workloads can be classified as critical or non-critical. Critical workloads require disabled C-state and P-state settings for high performance and low latency, while non-critical workloads use C-state and P-state settings to trade some latency and performance for power savings. You can configure the following three power states using GitOps Zero Touch Provisioning (ZTP):
- High-performance mode provides ultra low latency at the highest power consumption.
- Performance mode provides low latency at a relatively high power consumption.
- Power saving balances reduced power consumption with increased latency.
The default configuration is the low latency, performance mode.
PolicyGenTemplate custom resources (CRs) allow you to overlay additional configuration details onto the base source CRs provided with the GitOps plugin in the ztp-site-generate container.
Configure the power states by updating the workloadHints fields in the generated PerformanceProfile CR for the reference configuration, based on the PolicyGenTemplate CR in the group-du-sno-ranGen.yaml.
The following common prerequisites apply to configuring all three power states.
Prerequisites
- You have created a Git repository where you manage your custom site configuration data. The repository must be accessible from the hub cluster and be defined as a source repository for Argo CD.
- You have followed the procedure described in "Preparing the GitOps ZTP site configuration repository".
10.6.1. Configuring performance mode using PolicyGenTemplate CRs
Follow this example to set performance mode by updating the workloadHints fields in the generated PerformanceProfile CR for the reference configuration, based on the PolicyGenTemplate CR in the group-du-sno-ranGen.yaml.
Performance mode provides low latency at a relatively high power consumption.
Prerequisites
- You have configured the BIOS with performance related settings by following the guidance in "Configuring host firmware for low latency and high performance".
Procedure
- Update the PolicyGenTemplate entry for PerformanceProfile in the group-du-sno-ranGen.yaml reference file in out/argocd/example/policygentemplates as follows to set performance mode:
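  A sketch of the relevant workloadHints settings; the elided fields stand for the rest of your existing PerformanceProfile entry:

  ```yaml
  - fileName: PerformanceProfile.yaml
    policyName: "config-policy"
    metadata:
      name: openshift-node-performance-profile
    spec:
      # ... existing PerformanceProfile settings ...
      workloadHints:
        realTime: true
        highPowerConsumption: false
        perPodPowerManagement: false
  ```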
- Commit the PolicyGenTemplate change in Git, and then push to the Git repository being monitored by the GitOps ZTP Argo CD application.
10.6.2. Configuring high-performance mode using PolicyGenTemplate CRs
Follow this example to set high performance mode by updating the workloadHints fields in the generated PerformanceProfile CR for the reference configuration, based on the PolicyGenTemplate CR in the group-du-sno-ranGen.yaml.
High performance mode provides ultra low latency at the highest power consumption.
Prerequisites
- You have configured the BIOS with performance related settings by following the guidance in "Configuring host firmware for low latency and high performance".
Procedure
- Update the PolicyGenTemplate entry for PerformanceProfile in the group-du-sno-ranGen.yaml reference file in out/argocd/example/policygentemplates as follows to set high-performance mode:
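  A sketch of the relevant workloadHints settings for high-performance mode; the elided fields stand for the rest of your existing PerformanceProfile entry:

  ```yaml
  - fileName: PerformanceProfile.yaml
    policyName: "config-policy"
    metadata:
      name: openshift-node-performance-profile
    spec:
      # ... existing PerformanceProfile settings ...
      workloadHints:
        realTime: true
        highPowerConsumption: true
        perPodPowerManagement: false
  ```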
- Commit the PolicyGenTemplate change in Git, and then push to the Git repository being monitored by the GitOps ZTP Argo CD application.
10.6.3. Configuring power saving mode using PolicyGenTemplate CRs
Follow this example to set power saving mode by updating the workloadHints fields in the generated PerformanceProfile CR for the reference configuration, based on the PolicyGenTemplate CR in the group-du-sno-ranGen.yaml.
The power saving mode balances reduced power consumption with increased latency.
Prerequisites
- You enabled C-states and OS-controlled P-states in the BIOS.
Procedure
- Update the PolicyGenTemplate entry for PerformanceProfile in the group-du-sno-ranGen.yaml reference file in out/argocd/example/policygentemplates as follows to configure power saving mode. It is recommended to configure the CPU governor for power saving mode through the additional kernel arguments object:
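  A sketch of the relevant settings; the governor kernel argument corresponds to callout 1 below, and the elided fields stand for your existing entry contents:

  ```yaml
  - fileName: PerformanceProfile.yaml
    policyName: "config-policy"
    metadata:
      name: openshift-node-performance-profile
    spec:
      # ... existing PerformanceProfile settings ...
      workloadHints:
        realTime: true
        highPowerConsumption: false
        perPodPowerManagement: true
      additionalKernelArgs:
        # ... existing kernel arguments ...
        - "cpufreq.default_governor=schedutil"   # 1
  ```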
1. The schedutil governor is recommended; however, other governors that can be used include ondemand and powersave.
- Commit the PolicyGenTemplate change in Git, and then push to the Git repository being monitored by the GitOps ZTP Argo CD application.
Verification
- Select a worker node in your deployed cluster from the list of nodes identified by using the following command:

  $ oc get nodes

- Log in to the node by using the following command:

  $ oc debug node/<node-name>

  Replace <node-name> with the name of the node you want to verify the power state on.

- Set /host as the root directory within the debug shell. The debug pod mounts the host's root file system in /host within the pod. By changing the root directory to /host, you can run binaries contained in the host's executable paths as shown in the following example:

  # chroot /host

- Run the following command to verify the applied power state:

  # cat /proc/cmdline
Expected output

- For power saving mode, the output includes intel_pstate=passive.
10.6.4. Maximizing power savings
Limiting the maximum CPU frequency is recommended to achieve maximum power savings. Enabling C-states on the non-critical workload CPUs without restricting the maximum CPU frequency negates much of the power savings by boosting the frequency of the critical CPUs.
Maximize power savings by updating the sysfs plugin fields, setting an appropriate value for max_perf_pct in the TunedPerformancePatch CR for the reference configuration. This example, based on group-du-sno-ranGen.yaml, describes how to restrict the maximum CPU frequency.
Prerequisites
- You have configured power savings mode as described in "Using PolicyGenTemplate CRs to configure power savings mode".
Procedure
- Update the PolicyGenTemplate entry for TunedPerformancePatch in the group-du-sno-ranGen.yaml reference file in out/argocd/example/policygentemplates. To maximize power savings, add max_perf_pct as shown in the following example:
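  A sketch of the TunedPerformancePatch entry; <x> is a placeholder for the percentage you choose, and the elided profile data stands for your existing performance-patch contents:

  ```yaml
  - fileName: TunedPerformancePatch.yaml
    policyName: "config-policy"
    spec:
      profile:
        - name: performance-patch
          data: |
            # ... existing performance-patch profile data ...
            [sysfs]
            /sys/devices/system/cpu/intel_pstate/max_perf_pct=<x>   # 1
  ```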
1. The max_perf_pct controls the maximum frequency that the cpufreq driver is allowed to set, as a percentage of the maximum supported CPU frequency. This value applies to all CPUs. You can check the maximum supported frequency in /sys/devices/system/cpu/cpu0/cpufreq/cpuinfo_max_freq. As a starting point, you can use a percentage that caps all CPUs at the All Cores Turbo frequency. The All Cores Turbo frequency is the frequency that all cores run at when they are all fully occupied.
Note: To maximize power savings, set a lower value. Setting a lower value for max_perf_pct limits the maximum CPU frequency, thereby reducing power consumption, but also potentially impacting performance. Experiment with different values and monitor the system's performance and power consumption to find the optimal setting for your use case.

- Commit the PolicyGenTemplate change in Git, and then push to the Git repository being monitored by the GitOps ZTP Argo CD application.
10.7. Configuring LVM Storage using PolicyGenTemplate CRs
You can configure Logical Volume Manager (LVM) Storage for managed clusters that you deploy with GitOps Zero Touch Provisioning (ZTP).
You use LVM Storage to persist event subscriptions when you use PTP events or bare-metal hardware events with HTTP transport.
Use the Local Storage Operator for persistent storage that uses local volumes in distributed units.
Prerequisites
- Install the OpenShift CLI (oc).
- Log in as a user with cluster-admin privileges.
- Create a Git repository where you manage your custom site configuration data.
Procedure
- To configure LVM Storage for new managed clusters, add the following YAML to spec.sourceFiles in the common-ranGen.yaml file (a sketch follows the note below).

  Note: The Storage LVMO subscription is deprecated. In future releases of OpenShift Container Platform, the storage LVMO subscription will not be available. Instead, you must use the Storage LVMS subscription.
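  A sketch of the LVMO subscription entries; verify the source CR file names and the channel against the /out/source-crs content extracted from your ztp-site-generate container:

  ```yaml
  - fileName: StorageLVMOSubscriptionNS.yaml
    policyName: subscription-policies
  - fileName: StorageLVMOSubscriptionOperGroup.yaml
    policyName: subscription-policies
  - fileName: StorageLVMOSubscription.yaml
    spec:
      name: lvms-operator
      channel: stable-4.15
    policyName: subscription-policies
  ```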
In OpenShift Container Platform 4.15, you can use the Storage LVMS subscription instead of the LVMO subscription. The LVMS subscription does not require manual overrides in the common-ranGen.yaml file. Add the following YAML to spec.sourceFiles in the common-ranGen.yaml file to use the Storage LVMS subscription:
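  A sketch of the LVMS subscription entries, again assuming the source CR file names shipped in the ztp-site-generate container:

  ```yaml
  - fileName: StorageLVMSubscriptionNS.yaml
    policyName: subscription-policies
  - fileName: StorageLVMSubscriptionOperGroup.yaml
    policyName: subscription-policies
  - fileName: StorageLVMSubscription.yaml
    policyName: subscription-policies
  ```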
- Add the LVMCluster CR to spec.sourceFiles in your specific group or individual site configuration file. For example, in the group-du-sno-ranGen.yaml file, add the following:
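  A sketch matching callout 1 below; the volume group and thin-pool names are illustrative:

  ```yaml
  - fileName: StorageLVMCluster.yaml
    policyName: "lvms-config"
    spec:
      storage:
        deviceClasses:
          - name: vg1   # 1
            thinPoolConfig:
              name: thin-pool-1
              sizePercent: 90
              overprovisionRatio: 10
  ```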
1. This example configuration creates a volume group (vg1) with all the available devices, except the disk where OpenShift Container Platform is installed. A thin-pool logical volume is also created.
- Merge any other required changes and files with your custom site repository.
- Commit the PolicyGenTemplate changes in Git, and then push the changes to your site configuration repository to deploy LVM Storage to new sites using GitOps ZTP.
10.8. Configuring PTP events with PolicyGenTemplate CRs
You can use the GitOps ZTP pipeline to configure PTP events that use HTTP or AMQP transport.
HTTP transport is the default transport for PTP and bare-metal events. Use HTTP transport instead of AMQP for PTP and bare-metal events where possible. AMQ Interconnect is EOL from 30 June 2024. Extended life cycle support (ELS) for AMQ Interconnect ends 29 November 2029. For more information, see Red Hat AMQ Interconnect support status.
10.8.1. Configuring PTP events that use HTTP transport
You can configure PTP events that use HTTP transport on managed clusters that you deploy with the GitOps Zero Touch Provisioning (ZTP) pipeline.
Prerequisites
- You have installed the OpenShift CLI (oc).
- You have logged in as a user with cluster-admin privileges.
- You have created a Git repository where you manage your custom site configuration data.
Procedure
- Apply the following PolicyGenTemplate changes to group-du-3node-ranGen.yaml, group-du-sno-ranGen.yaml, or group-du-standard-ranGen.yaml files according to your requirements:
  - In .sourceFiles, add the PtpOperatorConfig CR file that configures the transport host:
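    A sketch of the entry; per the note that follows, transportHost can be omitted with HTTP transport in OpenShift Container Platform 4.13 or later:

    ```yaml
    - fileName: PtpOperatorConfigForEvent.yaml
      policyName: "config-policy"
      spec:
        daemonNodeSelector: {}
        ptpEventConfig:
          enableEventPublisher: true
          # transportHost is not required for HTTP transport in OCP 4.13 or later
    ```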
    Note: In OpenShift Container Platform 4.13 or later, you do not need to set the transportHost field in the PtpOperatorConfig resource when you use HTTP transport with PTP events.

  - Configure the linuxptp and phc2sys for the PTP clock type and interface. For example, add the following stanza into .sourceFiles:
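    A sketch matching the numbered callouts below; the interface name and threshold values are illustrative:

    ```yaml
    - fileName: PtpConfigSlave.yaml                    # 1
      policyName: "config-policy"
      metadata:
        name: "du-ptp-slave"
      spec:
        profile:
          - name: "slave"
            interface: "ens5f1"                        # 2
            ptp4lOpts: "-2 -s --summary_interval -4"   # 3
            phc2sysOpts: "-a -r -m -n 24 -N 8 -R 16"   # 4
        ptpClockThreshold:                             # 5
          holdOverTimeout: 30
          maxOffsetThreshold: 100
          minOffsetThreshold: -100
    ```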
1. Can be PtpConfigMaster.yaml or PtpConfigSlave.yaml depending on your requirements. For configurations based on group-du-sno-ranGen.yaml or group-du-3node-ranGen.yaml, use PtpConfigSlave.yaml.
2. Device specific interface name.
3. You must append the --summary_interval -4 value to ptp4lOpts in .spec.sourceFiles.spec.profile to enable PTP fast events.
4. Required phc2sysOpts values. -m prints messages to stdout. The linuxptp-daemon DaemonSet parses the logs and generates Prometheus metrics.
5. Optional. If the ptpClockThreshold stanza is not present, default values are used for the ptpClockThreshold fields. The stanza shows default ptpClockThreshold values. The ptpClockThreshold values configure how long after the PTP master clock is disconnected before PTP events are triggered. holdOverTimeout is the time value in seconds before the PTP clock event state changes to FREERUN when the PTP master clock is disconnected. The maxOffsetThreshold and minOffsetThreshold settings configure offset values in nanoseconds that compare against the values for CLOCK_REALTIME (phc2sys) or master offset (ptp4l). When the ptp4l or phc2sys offset value is outside this range, the PTP clock state is set to FREERUN. When the offset value is within this range, the PTP clock state is set to LOCKED.
- Merge any other required changes and files with your custom site repository.
- Push the changes to your site configuration repository to deploy PTP fast events to new sites using GitOps ZTP.
10.8.2. Configuring PTP events that use AMQP transport
You can configure PTP events that use AMQP transport on managed clusters that you deploy with the GitOps Zero Touch Provisioning (ZTP) pipeline.
HTTP transport is the default transport for PTP and bare-metal events. Use HTTP transport instead of AMQP for PTP and bare-metal events where possible. AMQ Interconnect is EOL from 30 June 2024. Extended life cycle support (ELS) for AMQ Interconnect ends 29 November 2029. For more information, see Red Hat AMQ Interconnect support status.
Prerequisites
- You have installed the OpenShift CLI (oc).
- You have logged in as a user with cluster-admin privileges.
- You have created a Git repository where you manage your custom site configuration data.
Procedure
- Add the following YAML into .spec.sourceFiles in the common-ranGen.yaml file to configure the AMQP Operator:
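  A sketch of the AMQ Interconnect Operator subscription entries, assuming the source CR file names shipped in the ztp-site-generate container:

  ```yaml
  - fileName: AmqSubscriptionNS.yaml
    policyName: "subscriptions-policy"
  - fileName: AmqSubscriptionOperGroup.yaml
    policyName: "subscriptions-policy"
  - fileName: AmqSubscription.yaml
    policyName: "subscriptions-policy"
  ```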
- Apply the following PolicyGenTemplate changes to group-du-3node-ranGen.yaml, group-du-sno-ranGen.yaml, or group-du-standard-ranGen.yaml files according to your requirements:
  - In .sourceFiles, add the PtpOperatorConfig CR file that configures the AMQ transport host to the config-policy:
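    A sketch of the entry; the transportHost value assumes an AMQ Interconnect CR whose name and namespace are both amq-router, as in the bare-metal events example later in this chapter:

    ```yaml
    - fileName: PtpOperatorConfigForEvent.yaml
      policyName: "config-policy"
      spec:
        daemonNodeSelector: {}
        ptpEventConfig:
          enableEventPublisher: true
          transportHost: "amqp://amq-router.amq-router.svc.cluster.local"
    ```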
  - Configure the linuxptp and phc2sys for the PTP clock type and interface. For example, add the following stanza into .sourceFiles:
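    The same sketch as in the HTTP transport procedure applies here; the numbered comments match the callouts below, and the interface name and threshold values are illustrative:

    ```yaml
    - fileName: PtpConfigSlave.yaml                    # 1
      policyName: "config-policy"
      metadata:
        name: "du-ptp-slave"
      spec:
        profile:
          - name: "slave"
            interface: "ens5f1"                        # 2
            ptp4lOpts: "-2 -s --summary_interval -4"   # 3
            phc2sysOpts: "-a -r -m -n 24 -N 8 -R 16"   # 4
        ptpClockThreshold:                             # 5
          holdOverTimeout: 30
          maxOffsetThreshold: 100
          minOffsetThreshold: -100
    ```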
1. Can be PtpConfigMaster.yaml or PtpConfigSlave.yaml depending on your requirements. For configurations based on group-du-sno-ranGen.yaml or group-du-3node-ranGen.yaml, use PtpConfigSlave.yaml.
2. Device specific interface name.
3. You must append the --summary_interval -4 value to ptp4lOpts in .spec.sourceFiles.spec.profile to enable PTP fast events.
4. Required phc2sysOpts values. -m prints messages to stdout. The linuxptp-daemon DaemonSet parses the logs and generates Prometheus metrics.
5. Optional. If the ptpClockThreshold stanza is not present, default values are used for the ptpClockThreshold fields. The stanza shows default ptpClockThreshold values. The ptpClockThreshold values configure how long after the PTP master clock is disconnected before PTP events are triggered. holdOverTimeout is the time value in seconds before the PTP clock event state changes to FREERUN when the PTP master clock is disconnected. The maxOffsetThreshold and minOffsetThreshold settings configure offset values in nanoseconds that compare against the values for CLOCK_REALTIME (phc2sys) or master offset (ptp4l). When the ptp4l or phc2sys offset value is outside this range, the PTP clock state is set to FREERUN. When the offset value is within this range, the PTP clock state is set to LOCKED.
- Apply the following PolicyGenTemplate changes to your specific site YAML files, for example, example-sno-site.yaml:
  - In .sourceFiles, add the Interconnect CR file that configures the AMQ router to the config-policy:

      - fileName: AmqInstance.yaml
        policyName: "config-policy"
- Merge any other required changes and files with your custom site repository.
- Push the changes to your site configuration repository to deploy PTP fast events to new sites using GitOps ZTP.
10.9. Configuring bare-metal events with PolicyGenTemplate CRs
You can use the GitOps ZTP pipeline to configure bare-metal events that use HTTP or AMQP transport.
HTTP transport is the default transport for PTP and bare-metal events. Use HTTP transport instead of AMQP for PTP and bare-metal events where possible. AMQ Interconnect is EOL from 30 June 2024. Extended life cycle support (ELS) for AMQ Interconnect ends 29 November 2029. For more information, see Red Hat AMQ Interconnect support status.
10.9.1. Configuring bare-metal events that use HTTP transport
You can configure bare-metal events that use HTTP transport on managed clusters that you deploy with the GitOps Zero Touch Provisioning (ZTP) pipeline.
Prerequisites
- You have installed the OpenShift CLI (oc).
- You have logged in as a user with cluster-admin privileges.
- You have created a Git repository where you manage your custom site configuration data.
Procedure
- Configure the Bare Metal Event Relay Operator by adding the following YAML to spec.sourceFiles in the common-ranGen.yaml file:
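  A sketch of the subscription entries, assuming the source CR file names shipped in the ztp-site-generate container:

  ```yaml
  - fileName: BareMetalEventRelaySubscriptionNS.yaml
    policyName: "subscriptions-policy"
  - fileName: BareMetalEventRelaySubscriptionOperGroup.yaml
    policyName: "subscriptions-policy"
  - fileName: BareMetalEventRelaySubscription.yaml
    policyName: "subscriptions-policy"
  ```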
- Add the HardwareEvent CR to spec.sourceFiles in your specific group configuration file, for example, in the group-du-sno-ranGen.yaml file:
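  A sketch matching callout 1 below; per the note that follows, transportHost can be omitted with HTTP transport in OpenShift Container Platform 4.13 or later:

  ```yaml
  - fileName: HardwareEvent.yaml   # 1
    policyName: "config-policy"
    spec:
      nodeSelector: {}
      logLevel: "info"
      # transportHost is not required for HTTP transport in OCP 4.13 or later
  ```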
1. Each baseboard management controller (BMC) requires a single HardwareEvent CR only.
Note: In OpenShift Container Platform 4.13 or later, you do not need to set the transportHost field in the HardwareEvent custom resource (CR) when you use HTTP transport with bare-metal events.

- Merge any other required changes and files with your custom site repository.
- Push the changes to your site configuration repository to deploy bare-metal events to new sites with GitOps ZTP.
- Create the Redfish Secret by running the following command:

  $ oc -n openshift-bare-metal-events create secret generic redfish-basic-auth \
    --from-literal=username=<bmc_username> --from-literal=password=<bmc_password> \
    --from-literal=hostaddr="<bmc_host_ip_addr>"
10.9.2. Configuring bare-metal events that use AMQP transport
You can configure bare-metal events that use AMQP transport on managed clusters that you deploy with the GitOps Zero Touch Provisioning (ZTP) pipeline.
HTTP transport is the default transport for PTP and bare-metal events. Use HTTP transport instead of AMQP for PTP and bare-metal events where possible. AMQ Interconnect is EOL from 30 June 2024. Extended life cycle support (ELS) for AMQ Interconnect ends 29 November 2029. For more information, see Red Hat AMQ Interconnect support status.
Prerequisites
- You have installed the OpenShift CLI (oc).
- You have logged in as a user with cluster-admin privileges.
- You have created a Git repository where you manage your custom site configuration data.
Procedure
- To configure the AMQ Interconnect Operator and the Bare Metal Event Relay Operator, add the following YAML to spec.sourceFiles in the common-ranGen.yaml file:
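  A sketch combining both Operator subscriptions, assuming the source CR file names shipped in the ztp-site-generate container:

  ```yaml
  # AMQ Interconnect Operator
  - fileName: AmqSubscriptionNS.yaml
    policyName: "subscriptions-policy"
  - fileName: AmqSubscriptionOperGroup.yaml
    policyName: "subscriptions-policy"
  - fileName: AmqSubscription.yaml
    policyName: "subscriptions-policy"
  # Bare Metal Event Relay Operator
  - fileName: BareMetalEventRelaySubscriptionNS.yaml
    policyName: "subscriptions-policy"
  - fileName: BareMetalEventRelaySubscriptionOperGroup.yaml
    policyName: "subscriptions-policy"
  - fileName: BareMetalEventRelaySubscription.yaml
    policyName: "subscriptions-policy"
  ```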
- Add the Interconnect CR to .spec.sourceFiles in the site configuration file, for example, the example-sno-site.yaml file:

    - fileName: AmqInstance.yaml
      policyName: "config-policy"
- Add the HardwareEvent CR to spec.sourceFiles in your specific group configuration file, for example, in the group-du-sno-ranGen.yaml file:
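  A sketch matching callout 1 below; the angle-bracket placeholders stand for your AMQ Interconnect CR name and namespace:

  ```yaml
  - fileName: HardwareEvent.yaml
    policyName: "config-policy"
    spec:
      nodeSelector: {}
      transportHost: "amqp://<amq_interconnect_name>.<amq_interconnect_namespace>.svc.cluster.local"   # 1
      logLevel: "info"
  ```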
1. The transportHost URL is composed of the existing AMQ Interconnect CR name and namespace. For example, in transportHost: "amqp://amq-router.amq-router.svc.cluster.local", the AMQ Interconnect name and namespace are both set to amq-router.
Note: Each baseboard management controller (BMC) requires a single HardwareEvent resource only.

- Commit the PolicyGenTemplate change in Git, and then push the changes to your site configuration repository to deploy bare-metal events monitoring to new sites using GitOps ZTP.
- Create the Redfish Secret by running the following command:
  $ oc -n openshift-bare-metal-events create secret generic redfish-basic-auth \
    --from-literal=username=<bmc_username> --from-literal=password=<bmc_password> \
    --from-literal=hostaddr="<bmc_host_ip_addr>"
10.10. Configuring the Image Registry Operator for local caching of images
OpenShift Container Platform manages image caching using a local registry. In edge computing use cases, clusters are often subject to bandwidth restrictions when communicating with centralized image registries, which might result in long image download times.
Long download times are unavoidable during initial deployment. Over time, there is a risk that CRI-O will erase the /var/lib/containers/storage directory in the case of an unexpected shutdown. To address long image download times, you can create a local image registry on remote managed clusters using GitOps Zero Touch Provisioning (ZTP). This is useful in Edge computing scenarios where clusters are deployed at the far edge of the network.
Before you can set up the local image registry with GitOps ZTP, you need to configure disk partitioning in the SiteConfig CR that you use to install the remote managed cluster. After installation, you configure the local image registry using a PolicyGenTemplate CR. Then, the GitOps ZTP pipeline creates Persistent Volume (PV) and Persistent Volume Claim (PVC) CRs and patches the imageregistry configuration.
The local image registry can only be used for user application images and cannot be used for the OpenShift Container Platform or Operator Lifecycle Manager operator images.
10.10.1. Configuring disk partitioning with SiteConfig
Configure disk partitioning for a managed cluster using a SiteConfig CR and GitOps Zero Touch Provisioning (ZTP). The disk partition details in the SiteConfig CR must match the underlying disk.
You must complete this procedure at installation time.
Prerequisites
- Install Butane.
Procedure
- Create the storage.bu file by using the following example YAML file:
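  A sketch reconstructed from the Ignition output shown below; the device path and partition start must match your hardware, and size_mib: 0 extends the partition to the end of the disk:

  ```yaml
  variant: fcos
  version: 1.3.0
  storage:
    disks:
      - device: /dev/disk/by-path/pci-0000:01:00.0-scsi-0:2:0:0
        wipe_table: false
        partitions:
          - label: var-lib-containers
            start_mib: 250000
            size_mib: 0
    filesystems:
      - path: /var/lib/containers
        device: /dev/disk/by-partlabel/var-lib-containers
        format: xfs
        wipe_filesystem: true
        with_mount_unit: true
        mount_options:
          - defaults
          - prjquota
  ```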
- Convert the storage.bu to an Ignition file by running the following command:

  $ butane storage.bu

  Example output
{"ignition":{"version":"3.2.0"},"storage":{"disks":[{"device":"/dev/disk/by-path/pci-0000:01:00.0-scsi-0:2:0:0","partitions":[{"label":"var-lib-containers","sizeMiB":0,"startMiB":250000}],"wipeTable":false}],"filesystems":[{"device":"/dev/disk/by-partlabel/var-lib-containers","format":"xfs","mountOptions":["defaults","prjquota"],"path":"/var/lib/containers","wipeFilesystem":true}]},"systemd":{"units":[{"contents":"# # Generated by Butane\n[Unit]\nRequires=systemd-fsck@dev-disk-by\\x2dpartlabel-var\\x2dlib\\x2dcontainers.service\nAfter=systemd-fsck@dev-disk-by\\x2dpartlabel-var\\x2dlib\\x2dcontainers.service\n\n[Mount]\nWhere=/var/lib/containers\nWhat=/dev/disk/by-partlabel/var-lib-containers\nType=xfs\nOptions=defaults,prjquota\n\n[Install]\nRequiredBy=local-fs.target","enabled":true,"name":"var-lib-containers.mount"}]}}{"ignition":{"version":"3.2.0"},"storage":{"disks":[{"device":"/dev/disk/by-path/pci-0000:01:00.0-scsi-0:2:0:0","partitions":[{"label":"var-lib-containers","sizeMiB":0,"startMiB":250000}],"wipeTable":false}],"filesystems":[{"device":"/dev/disk/by-partlabel/var-lib-containers","format":"xfs","mountOptions":["defaults","prjquota"],"path":"/var/lib/containers","wipeFilesystem":true}]},"systemd":{"units":[{"contents":"# # Generated by Butane\n[Unit]\nRequires=systemd-fsck@dev-disk-by\\x2dpartlabel-var\\x2dlib\\x2dcontainers.service\nAfter=systemd-fsck@dev-disk-by\\x2dpartlabel-var\\x2dlib\\x2dcontainers.service\n\n[Mount]\nWhere=/var/lib/containers\nWhat=/dev/disk/by-partlabel/var-lib-containers\nType=xfs\nOptions=defaults,prjquota\n\n[Install]\nRequiredBy=local-fs.target","enabled":true,"name":"var-lib-containers.mount"}]}}Copy to Clipboard Copied! Toggle word wrap Toggle overflow - Use a tool such as JSON Pretty Print to convert the output into JSON format.
Copy the output into the
.spec.clusters.nodes.ignitionConfigOverridefield in theSiteConfigCR.Example
Copy to Clipboard Copied! Toggle word wrap Toggle overflow NoteIf the
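  A sketch showing where the Ignition JSON goes; the JSON body is abbreviated here, and you paste your full butane output as the string value:

  ```yaml
  spec:
    clusters:
      - nodes:
          - ignitionConfigOverride: |
              {
                "ignition": {"version": "3.2.0"},
                "storage": {
                  "disks": ["..."],
                  "filesystems": ["..."]
                },
                "systemd": {"units": ["..."]}
              }
  ```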
  Note: If the .spec.clusters.nodes.ignitionConfigOverride field does not exist, create it.
Verification
- During or after installation, verify on the hub cluster that the BareMetalHost object shows the annotation by running the following command:

  $ oc get bmh -n my-sno-ns my-sno -ojson | jq '.metadata.annotations["bmac.agent-install.openshift.io/ignition-config-overrides"]'

  Example output
"{\"ignition\":{\"version\":\"3.2.0\"},\"storage\":{\"disks\":[{\"device\":\"/dev/disk/by-id/wwn-0x6b07b250ebb9d0002a33509f24af1f62\",\"partitions\":[{\"label\":\"var-lib-containers\",\"sizeMiB\":0,\"startMiB\":250000}],\"wipeTable\":false}],\"filesystems\":[{\"device\":\"/dev/disk/by-partlabel/var-lib-containers\",\"format\":\"xfs\",\"mountOptions\":[\"defaults\",\"prjquota\"],\"path\":\"/var/lib/containers\",\"wipeFilesystem\":true}]},\"systemd\":{\"units\":[{\"contents\":\"# Generated by Butane\\n[Unit]\\nRequires=systemd-fsck@dev-disk-by\\\\x2dpartlabel-var\\\\x2dlib\\\\x2dcontainers.service\\nAfter=systemd-fsck@dev-disk-by\\\\x2dpartlabel-var\\\\x2dlib\\\\x2dcontainers.service\\n\\n[Mount]\\nWhere=/var/lib/containers\\nWhat=/dev/disk/by-partlabel/var-lib-containers\\nType=xfs\\nOptions=defaults,prjquota\\n\\n[Install]\\nRequiredBy=local-fs.target\",\"enabled\":true,\"name\":\"var-lib-containers.mount\"}]}}""{\"ignition\":{\"version\":\"3.2.0\"},\"storage\":{\"disks\":[{\"device\":\"/dev/disk/by-id/wwn-0x6b07b250ebb9d0002a33509f24af1f62\",\"partitions\":[{\"label\":\"var-lib-containers\",\"sizeMiB\":0,\"startMiB\":250000}],\"wipeTable\":false}],\"filesystems\":[{\"device\":\"/dev/disk/by-partlabel/var-lib-containers\",\"format\":\"xfs\",\"mountOptions\":[\"defaults\",\"prjquota\"],\"path\":\"/var/lib/containers\",\"wipeFilesystem\":true}]},\"systemd\":{\"units\":[{\"contents\":\"# Generated by Butane\\n[Unit]\\nRequires=systemd-fsck@dev-disk-by\\\\x2dpartlabel-var\\\\x2dlib\\\\x2dcontainers.service\\nAfter=systemd-fsck@dev-disk-by\\\\x2dpartlabel-var\\\\x2dlib\\\\x2dcontainers.service\\n\\n[Mount]\\nWhere=/var/lib/containers\\nWhat=/dev/disk/by-partlabel/var-lib-containers\\nType=xfs\\nOptions=defaults,prjquota\\n\\n[Install]\\nRequiredBy=local-fs.target\",\"enabled\":true,\"name\":\"var-lib-containers.mount\"}]}}"Copy to Clipboard Copied! Toggle word wrap Toggle overflow After installation, check the single-node OpenShift disk status.
  - Enter into a debug session on the single-node OpenShift node by running the following command. This step instantiates a debug pod called <node_name>-debug:

    $ oc debug node/my-sno-node

  - Set /host as the root directory within the debug shell by running the following command. The debug pod mounts the host's root file system in /host within the pod. By changing the root directory to /host, you can run binaries contained in the host's executable paths:

    # chroot /host

  - List information about all available block devices by running the following command:

    # lsblk
    In the output, verify that the disk has a var-lib-containers partition.

  - Display information about the file system disk space usage by running the following command:

    # df -h

    In the output, verify that /var/lib/containers is mounted on the var-lib-containers partition.
10.10.2. Configuring the image registry using PolicyGenTemplate CRs
Use PolicyGenTemplate (PGT) CRs to apply the CRs required to configure the image registry and patch the imageregistry configuration.
Prerequisites
- You have configured a disk partition in the managed cluster.
- You have installed the OpenShift CLI (oc).
- You have logged in to the hub cluster as a user with cluster-admin privileges.
- You have created a Git repository where you manage your custom site configuration data for use with GitOps Zero Touch Provisioning (ZTP).
Procedure
- Configure the storage class, persistent volume claim, persistent volume, and image registry configuration in the appropriate PolicyGenTemplate CR. For example, to configure an individual site, add the following YAML to the file example-sno-site.yaml:
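  A sketch matching the numbered callouts below; the storage request and policy names are illustrative:

  ```yaml
  sourceFiles:
    # Storage class
    - fileName: StorageClass.yaml
      policyName: "sc-for-image-registry"
      metadata:
        name: image-registry-sc
        annotations:
          ran.openshift.io/ztp-deploy-wave: "100"   # 1
    # Persistent volume claim
    - fileName: StoragePVC.yaml
      policyName: "pvc-for-image-registry"
      metadata:
        name: image-registry-pvc
        namespace: openshift-image-registry
        annotations:
          ran.openshift.io/ztp-deploy-wave: "100"
      spec:
        accessModes:
          - ReadWriteMany
        resources:
          requests:
            storage: 100Gi
        storageClassName: image-registry-sc
        volumeMode: Filesystem
    # Persistent volume
    - fileName: ImageRegistryPV.yaml   # 2
      policyName: "pv-for-image-registry"
      metadata:
        annotations:
          ran.openshift.io/ztp-deploy-wave: "100"
    # Image registry configuration
    - fileName: ImageRegistryConfig.yaml
      policyName: "config-for-image-registry"
      metadata:
        annotations:
          ran.openshift.io/ztp-deploy-wave: "100"
      spec:
        storage:
          pvc:
            claim: "image-registry-pvc"
  ```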
1. Set the appropriate value for ztp-deploy-wave depending on whether you are configuring image registries at the site, common, or group level. ztp-deploy-wave: "100" is suitable for development or testing because it allows you to group the referenced source files together.
2. In ImageRegistryPV.yaml, ensure that the spec.local.path field is set to /var/imageregistry to match the value set for the mount_point field in the SiteConfig CR.
Important: Do not set complianceType: mustonlyhave for the - fileName: ImageRegistryConfig.yaml configuration. This can cause the registry pod deployment to fail.

- Commit the PolicyGenTemplate change in Git, and then push to the Git repository being monitored by the GitOps ZTP Argo CD application.
Verification
Use the following steps to troubleshoot errors with the local image registry on the managed clusters:
- Verify successful login to the registry while logged in to the managed cluster. Run the following commands:
  - Export the managed cluster name:

    $ cluster=<managed_cluster_name>

  - Get the managed cluster kubeconfig details:

    $ oc get secret -n $cluster $cluster-admin-password -o jsonpath='{.data.password}' | base64 -d > kubeadmin-password-$cluster

  - Download and export the cluster kubeconfig:

    $ oc get secret -n $cluster $cluster-admin-kubeconfig -o jsonpath='{.data.kubeconfig}' | base64 -d > kubeconfig-$cluster && export KUBECONFIG=./kubeconfig-$cluster

  - Verify access to the image registry from the managed cluster. See "Accessing the registry".
- Check that the Config CRD in the imageregistry.operator.openshift.io group instance is not reporting errors. Run the following command while logged in to the managed cluster:

  $ oc get image.config.openshift.io cluster -o yaml
- Check that the PersistentVolumeClaim on the managed cluster is populated with data. Run the following command while logged in to the managed cluster:

  $ oc get pv image-registry-sc
- Check that the registry* pod is running and is located under the openshift-image-registry namespace:

  $ oc get pods -n openshift-image-registry | grep registry*

  Example output
  cluster-image-registry-operator-68f5c9c589-42cfg   1/1   Running   0   8d
  image-registry-5f8987879-6nx6h                     1/1   Running   0   8d

- Check that the disk partition on the managed cluster is correct:
  - Open a debug shell to the managed cluster:

    $ oc debug node/sno-1.example.com

  - Run lsblk to check the host disk partitions:

    # lsblk

    In the output, a partition mounted at /var/imageregistry indicates that the disk is correctly partitioned.
10.11. Using hub templates in PolicyGenTemplate CRs
Topology Aware Lifecycle Manager supports partial Red Hat Advanced Cluster Management (RHACM) hub cluster template functions in configuration policies used with GitOps Zero Touch Provisioning (ZTP).
Hub-side cluster templates allow you to define configuration policies that can be dynamically customized to the target clusters. This reduces the need to create separate policies for many clusters with similar configurations but with different values.
Policy templates are restricted to the same namespace as the namespace where the policy is defined. This means that you must create the objects referenced in the hub template in the same namespace where the policy is created.
The following supported hub template functions are available for use in GitOps ZTP with TALM:
- fromConfigmap returns the value of the provided data key in the named ConfigMap resource.

  Note: There is a 1 MiB size limit for ConfigMap CRs. The effective size for ConfigMap CRs is further limited by the last-applied-configuration annotation. To avoid the last-applied-configuration limitation, add the following annotation to the template ConfigMap:

  argocd.argoproj.io/sync-options: Replace=true
- base64enc returns the base64-encoded value of the input string
- base64dec returns the decoded value of the base64-encoded input string
- indent returns the input string with added indent spaces
- autoindent returns the input string with added indent spaces based on the spacing used in the parent template
- toInt casts and returns the integer value of the input value
- toBool converts the input string into a boolean value, and returns the boolean
Various open source community functions are also available for use with GitOps ZTP.
10.11.1. Example hub templates
The following code examples are valid hub templates. Each of these templates returns values from the ConfigMap CR with the name test-config in the default namespace.
- Returns the value with the key common-key:

  {{hub fromConfigMap "default" "test-config" "common-key" hub}}

- Returns a string by using the concatenated value of the .ManagedClusterName field and the string -name:

  {{hub fromConfigMap "default" "test-config" (printf "%s-name" .ManagedClusterName) hub}}

- Casts and returns a boolean value from the concatenated value of the .ManagedClusterName field and the string -name:

  {{hub fromConfigMap "default" "test-config" (printf "%s-name" .ManagedClusterName) | toBool hub}}

- Casts and returns an integer value from the concatenated value of the .ManagedClusterName field and the string -name:

  {{hub (printf "%s-name" .ManagedClusterName) | fromConfigMap "default" "test-config" | toInt hub}}
10.11.2. Specifying group and site configuration in group PolicyGenTemplate CRs with hub templates
You can manage the configuration of fleets of clusters with ConfigMap CRs by using hub templates to populate the group and site values in the generated policies that get applied to the managed clusters. Using hub templates in site PolicyGenTemplate (PGT) CRs means that you do not need to create a PolicyGenTemplate CR for each site.
You can group the clusters in a fleet in various categories, depending on the use case, for example hardware type or region. Each cluster should have a label corresponding to the group or groups that the cluster is in. If you manage the configuration values for each group in different ConfigMap CRs, then you require only one group PolicyGenTemplate CR to apply the changes to all the clusters in the group by using hub templates.
The following example shows you how to use three ConfigMap CRs and one group PolicyGenTemplate CR to apply both site and group configuration to clusters grouped by hardware type and region.
When you use the fromConfigmap function, the printf variable is only available for the template resource data key fields. You cannot use it with name and namespace fields.
Prerequisites
- You have installed the OpenShift CLI (oc).
- You have logged in to the hub cluster as a user with cluster-admin privileges.
- You have created a Git repository where you manage your custom site configuration data. The repository must be accessible from the hub cluster and be defined as a source repository for the GitOps ZTP Argo CD application.
Procedure
- Create three ConfigMap CRs that contain the group and site configuration:
  - Create a ConfigMap CR named group-hardware-types-configmap to hold the hardware-specific configuration. For example:
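    A sketch with illustrative data values; the key names must match the lookups performed by your group PolicyGenTemplate CR:

    ```yaml
    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: group-hardware-types-configmap
      namespace: ztp-group
      annotations:
        argocd.argoproj.io/sync-options: Replace=true   # 1
    data:
      # Keys follow the pattern <hardware_type>-<item>
      hardware-type-1-cpu-isolated: "2-31,34-63"
      hardware-type-1-cpu-reserved: "0-1,32-33"
      hardware-type-1-hugepages-default: "1G"
      hardware-type-1-hugepages-size: "1G"
      hardware-type-1-hugepages-count: "32"
    ```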
1. The argocd.argoproj.io/sync-options annotation is required only if the ConfigMap is larger than 1 MiB in size.
  - Create a ConfigMap CR named group-zones-configmap to hold the regional configuration. For example:
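    A sketch with illustrative data values, keyed by the zone label applied to the clusters; the log-forwarding endpoint is hypothetical:

    ```yaml
    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: group-zones-configmap
      namespace: ztp-group
    data:
      # Keys follow the pattern <zone>-<item>
      zone-1-cluster-log-fwd-outputs: "[{\"type\":\"kafka\",\"name\":\"kafka-open\",\"url\":\"tcp://kafka-broker.example.com:9092/test\"}]"
      zone-1-cluster-log-fwd-pipelines: "[{\"inputRefs\":[\"audit\",\"infrastructure\"],\"name\":\"all-to-default\",\"outputRefs\":[\"kafka-open\"]}]"
    ```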
  - Create a ConfigMap CR named site-data-configmap to hold the site-specific configuration. For example:
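    A sketch with illustrative data values, keyed by cluster name; du-sno-1-zone-1 matches the cluster labeled in a later step:

    ```yaml
    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: site-data-configmap
      namespace: ztp-group
    data:
      # Keys follow the pattern <cluster_name>-<item>
      du-sno-1-zone-1-sriov-network-vlan-1: "140"
      du-sno-1-zone-1-sriov-network-vlan-2: "150"
    ```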
Note: Each ConfigMap CR must be in the same namespace as the policy to be generated from the group PolicyGenTemplate CR.
Commit the
ConfigMapCRs in Git, and then push to the Git repository being monitored by the Argo CD application. Apply the hardware type and region labels to the clusters. The following command applies to a single cluster named
du-sno-1-zone-1and the labels chosen are"hardware-type": "hardware-type-1"and"group-du-sno-zone": "zone-1":oc patch managedclusters.cluster.open-cluster-management.io/du-sno-1-zone-1 --type merge -p '{"metadata":{"labels":{"hardware-type": "hardware-type-1", "group-du-sno-zone": "zone-1"}}}'$ oc patch managedclusters.cluster.open-cluster-management.io/du-sno-1-zone-1 --type merge -p '{"metadata":{"labels":{"hardware-type": "hardware-type-1", "group-du-sno-zone": "zone-1"}}}'Copy to Clipboard Copied! Toggle word wrap Toggle overflow Create a group
- Create a group PolicyGenTemplate CR that uses hub templates to obtain the required data from the ConfigMap objects. This example PolicyGenTemplate CR configures logging, VLAN IDs, NICs, and Performance Profile for the clusters that match the labels listed under spec.bindingRules:
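  A reduced sketch showing the lookup pattern for two of the configured items; names are illustrative, the logging and NIC entries follow the same pattern, and the empty namespace argument to fromConfigMap reflects the restriction that the ConfigMap must be in the policy namespace:

  ```yaml
  apiVersion: ran.openshift.io/v1
  kind: PolicyGenTemplate
  metadata:
    name: group-du-sno-pgt
    namespace: ztp-group
  spec:
    bindingRules:
      # Apply to clusters carrying both group labels
      group-du-sno-zone: "zone-1"
      hardware-type: "hardware-type-1"
    mcp: "master"
    sourceFiles:
      # Site-specific VLAN ID, looked up by cluster name
      - fileName: SriovNetwork.yaml
        policyName: "group-du-sno-sriov-policy"
        metadata:
          name: sriov-nw-du-fh
        spec:
          vlan: '{{hub fromConfigMap "" "site-data-configmap" (printf "%s-sriov-network-vlan-1" .ManagedClusterName) | toInt hub}}'
      # Hardware-dependent CPU settings, looked up by the hardware-type label
      - fileName: PerformanceProfile.yaml
        policyName: "group-du-sno-pfp-policy"
        metadata:
          name: openshift-node-performance-profile
        spec:
          cpu:
            isolated: '{{hub fromConfigMap "" "group-hardware-types-configmap" (printf "%s-cpu-isolated" (index .ManagedClusterLabels "hardware-type")) hub}}'
            reserved: '{{hub fromConfigMap "" "group-hardware-types-configmap" (printf "%s-cpu-reserved" (index .ManagedClusterLabels "hardware-type")) hub}}'
  ```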
  Note: To retrieve site-specific configuration values, use the .ManagedClusterName field. This is a template context value set to the name of the target managed cluster. To retrieve group-specific configuration, use the .ManagedClusterLabels field. This is a template context value set to the value of the managed cluster's labels.
- Commit the group PolicyGenTemplate CR in Git and push to the Git repository that is monitored by the Argo CD application.

  Note: Subsequent changes to the referenced ConfigMap CR are not automatically synced to the applied policies. You need to manually sync the new ConfigMap changes to update existing PolicyGenTemplate CRs. See "Syncing new ConfigMap changes to existing PolicyGenTemplate CRs".

  You can use the same PolicyGenTemplate CR for multiple clusters. If there is a configuration change, then the only modifications you need to make are to the ConfigMap objects that hold the configuration for each cluster and the labels of the managed clusters.
10.11.3. Syncing new ConfigMap changes to existing PolicyGenTemplate CRs
Prerequisites
- You have installed the OpenShift CLI (oc).
- You have logged in to the hub cluster as a user with cluster-admin privileges.
- You have created a PolicyGenTemplate CR that pulls information from a ConfigMap CR using hub cluster templates.
Procedure
- Update the contents of your ConfigMap CR, and apply the changes in the hub cluster.
- To sync the contents of the updated ConfigMap CR to the deployed policy, do either of the following:
  - Option 1: Delete the existing policy. Argo CD uses the PolicyGenTemplate CR to immediately recreate the deleted policy. For example, run the following command:

    $ oc delete policy <policy_name> -n <policy_namespace>

  - Option 2: Apply a special annotation policy.open-cluster-management.io/trigger-update to the policy with a different value every time you update the ConfigMap. For example:

    $ oc annotate policy <policy_name> -n <policy_namespace> policy.open-cluster-management.io/trigger-update="1"

    Note: You must apply the updated policy for the changes to take effect. For more information, see Special annotation for reprocessing.
- Optional: If it exists, delete the ClusterGroupUpgrade CR that contains the policy. For example:

  $ oc delete clustergroupupgrade <cgu_name> -n <cgu_namespace>

- Create a new ClusterGroupUpgrade CR that includes the policy to apply with the updated ConfigMap changes. For example, add the following YAML to the file cgr-example.yaml:
- Apply the updated policy:

  $ oc apply -f cgr-example.yaml