Chapter 5. Compliance Operator


5.1. Compliance Operator release notes

The Compliance Operator lets OpenShift Container Platform administrators describe the required compliance state of a cluster and provides them with an overview of gaps and ways to remediate them.

These release notes track the development of the Compliance Operator in the OpenShift Container Platform.

For an overview of the Compliance Operator, see Understanding the Compliance Operator.

To access the latest release, see Updating the Compliance Operator.

5.1.1. OpenShift Compliance Operator 1.2.0

The following advisory is available for the OpenShift Compliance Operator 1.2.0:

5.1.1.1. New features and enhancements

  • The CIS OpenShift Container Platform 4 Benchmark v1.4.0 profile is now available for platform and node applications. To locate the CIS OpenShift Container Platform v4 Benchmark, go to CIS Benchmarks and click Download Latest CIS Benchmark, where you can then register to download the benchmark.

    Important

    Upgrading to Compliance Operator 1.2.0 will overwrite the CIS OpenShift Container Platform 4 Benchmark 1.1.0 profiles.

    If your OpenShift Container Platform environment contains existing cis and cis-node remediations, there might be some differences in scan results after upgrading to Compliance Operator 1.2.0.

  • Additional clarity for auditing security context constraints (SCCs) is now available for the scc-limit-container-allowed-capabilities rule.

5.1.2. OpenShift Compliance Operator 1.1.0

The following advisory is available for the OpenShift Compliance Operator 1.1.0:

5.1.2.1. New features and enhancements

  • A start and end timestamp is now available in the ComplianceScan custom resource definition (CRD) status.

5.1.2.2. Bug fixes

  • Before this update, some Compliance Operator rule instructions were not present. After this update, instructions are improved for the following rules:

    • classification_banner
    • oauth_login_template_set
    • oauth_logout_url_set
    • oauth_provider_selection_set
    • ocp_allowed_registries
    • ocp_allowed_registries_for_import

      (OCPBUGS-10473)

  • Before this update, check accuracy and rule instructions were unclear. After this update, the check accuracy and instructions are improved for the following sysctl rules:

    • kubelet-enable-protect-kernel-sysctl
    • kubelet-enable-protect-kernel-sysctl-kernel-keys-root-maxbytes
    • kubelet-enable-protect-kernel-sysctl-kernel-keys-root-maxkeys
    • kubelet-enable-protect-kernel-sysctl-kernel-panic
    • kubelet-enable-protect-kernel-sysctl-kernel-panic-on-oops
    • kubelet-enable-protect-kernel-sysctl-vm-overcommit-memory
    • kubelet-enable-protect-kernel-sysctl-vm-panic-on-oom

      (OCPBUGS-11334)

  • Before this update, the ocp4-alert-receiver-configured rule did not include instructions. With this update, the ocp4-alert-receiver-configured rule now includes improved instructions. (OCPBUGS-7307)
  • Before this update, the rhcos4-sshd-set-loglevel-info rule would fail for the rhcos4-e8 profile. With this update, the remediation for the sshd-set-loglevel-info rule was updated to apply the correct configuration changes, allowing subsequent scans to pass after the remediation is applied. (OCPBUGS-7816)
  • Before this update, a new installation of OpenShift Container Platform with the latest Compliance Operator install failed on the scheduler-no-bind-address rule. With this update, the scheduler-no-bind-address rule has been disabled on newer versions of OpenShift Container Platform since the parameter was removed. (OCPBUGS-8347)

5.1.3. OpenShift Compliance Operator 1.0.0

The following advisory is available for the OpenShift Compliance Operator 1.0.0:

5.1.3.1. New features and enhancements

5.1.3.2. Bug fixes

  • Before this update, the compliance_operator_compliance_scan_error_total metric had an ERROR label with a different value for each error message. With this update, the error message has been removed from the metric, decreasing its cardinality in line with best practices. (OCPBUGS-1803)
  • Before this update, the ocp4-api-server-audit-log-maxsize rule would result in a FAIL state. With this update, the rule no longer incorrectly reports a FAIL state. (OCPBUGS-7520)
  • Before this update, the rhcos4-enable-fips-mode rule description was misleading that FIPS could be enabled after installation. With this update, the rhcos4-enable-fips-mode rule description clarifies that FIPS must be enabled at install time. (OCPBUGS-8358)

5.1.4. OpenShift Compliance Operator 0.1.61

The following advisory is available for the OpenShift Compliance Operator 0.1.61:

5.1.4.1. New features and enhancements

  • The Compliance Operator now supports timeout configuration for Scanner Pods. The timeout is specified in the ScanSetting object. If the scan is not completed within the timeout, the scan retries until the maximum number of retries is reached. See Configuring ScanSetting timeout for more information.
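
    The following is a minimal sketch of how the timeout might be expressed in a ScanSetting object. The timeout and maxRetryOnTimeout field names and the values shown are assumptions based on this release note, so verify them with oc explain scansettings before applying them:

    apiVersion: compliance.openshift.io/v1alpha1
    kind: ScanSetting
    metadata:
      name: default
      namespace: openshift-compliance
    # Assumed field: maximum duration a scan may run before it is considered timed out.
    timeout: '30m'
    # Assumed field: number of times a timed-out scan is retried before it fails.
    maxRetryOnTimeout: 3
    roles:
    - master
    - worker
    scanTolerations:
    - operator: Exists
    schedule: '0 1 * * *'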

5.1.4.2. Bug fixes

  • Before this update, Compliance Operator remediations required variables as inputs. Remediations without variables set were applied cluster-wide and resulted in stuck nodes, even though it appeared the remediation applied correctly. With this update, the Compliance Operator validates if a variable needs to be supplied using a TailoredProfile for a remediation. (OCPBUGS-3864)
  • Before this update, the instructions for ocp4-kubelet-configure-tls-cipher-suites were incomplete, requiring users to refine the query manually. With this update, the query provided in ocp4-kubelet-configure-tls-cipher-suites returns the actual results to perform the audit steps. (OCPBUGS-3017)
  • Before this update, ScanSettingBinding objects created without a settingsRef variable did not use an appropriate default value. With this update, ScanSettingBinding objects created without a settingsRef variable use the default value. (OCPBUGS-3420)
  • Before this update, system reserved parameters were not generated in kubelet configuration files, causing the Compliance Operator to fail to unpause the machine config pool. With this update, the Compliance Operator omits system reserved parameters during machine configuration pool evaluation. (OCPBUGS-4445)
  • Before this update, ComplianceCheckResult objects did not have correct descriptions. With this update, the Compliance Operator sources the ComplianceCheckResult information from the rule description. (OCPBUGS-4615)
  • Before this update, the Compliance Operator did not check for empty kubelet configuration files when parsing machine configurations. As a result, the Compliance Operator would panic and crash. With this update, the Compliance Operator implements improved checking of the kubelet configuration data structure and only continues if it is fully rendered. (OCPBUGS-4621)
  • Before this update, the Compliance Operator generated remediations for kubelet evictions based on machine config pool name and a grace period, resulting in multiple remediations for a single eviction rule. With this update, the Compliance Operator applies all remediations for a single rule. (OCPBUGS-4338)
  • Before this update, remediations that were previously Applied might have been marked as Outdated after a rescan, even though the remediation content had not changed, because the scan comparison did not account for remediation metadata correctly. With this update, remediations retain the previously generated Applied status. (OCPBUGS-6710)
  • Before this update, a regression caused the Compliance Operator to mark a ScanSettingBinding as Failed when it used a TailoredProfile with a non-default MachineConfigPool. With this update, functionality is restored and custom ScanSettingBinding objects that use a TailoredProfile perform correctly. (OCPBUGS-6827)
  • Before this update, some kubelet configuration parameters did not have default values. With this update, the following parameters contain default values (OCPBUGS-6708):

    • ocp4-cis-kubelet-enable-streaming-connections
    • ocp4-cis-kubelet-eviction-thresholds-set-hard-imagefs-available
    • ocp4-cis-kubelet-eviction-thresholds-set-hard-imagefs-inodesfree
    • ocp4-cis-kubelet-eviction-thresholds-set-hard-memory-available
    • ocp4-cis-kubelet-eviction-thresholds-set-hard-nodefs-available
  • Before this update, the selinux_confinement_of_daemons rule failed running on the kubelet because of the permissions necessary for the kubelet to run. With this update, the selinux_confinement_of_daemons rule is disabled. (OCPBUGS-6968)

5.1.5. OpenShift Compliance Operator 0.1.59

The following advisory is available for the OpenShift Compliance Operator 0.1.59:

5.1.5.1. New features and enhancements

  • The Compliance Operator now supports Payment Card Industry Data Security Standard (PCI-DSS) ocp4-pci-dss and ocp4-pci-dss-node profiles on the ppc64le architecture.

5.1.5.2. Bug fixes

  • Previously, the Compliance Operator did not support the Payment Card Industry Data Security Standard (PCI DSS) ocp4-pci-dss and ocp4-pci-dss-node profiles on different architectures such as ppc64le. Now, the Compliance Operator supports ocp4-pci-dss and ocp4-pci-dss-node profiles on the ppc64le architecture. (OCPBUGS-3252)
  • Previously, after the recent update to version 0.1.57, the rerunner service account (SA) was no longer owned by the cluster service version (CSV), which caused the SA to be removed during the Operator upgrade. Now, the CSV owns the rerunner SA in 0.1.59, and upgrades from any previous version will not result in a missing SA. (OCPBUGS-3452)
  • In 0.1.57, the Operator started the controller metrics endpoint listening on port 8080. This resulted in TargetDown alerts because the cluster monitoring stack expects port 8383. With 0.1.59, the Operator starts the endpoint listening on port 8383 as expected. (OCPBUGS-3097)

5.1.6. OpenShift Compliance Operator 0.1.57

The following advisory is available for the OpenShift Compliance Operator 0.1.57:

5.1.6.1. New features and enhancements

5.1.6.2. Bug fixes

  • Previously, the Compliance Operator hard-coded notifications to the default openshift-compliance namespace. If the Operator was installed in a non-default namespace, the notifications would not work as expected. Now, notifications work in non-default openshift-compliance namespaces. (BZ#2060726)
  • Previously, the Compliance Operator was unable to evaluate default configurations used by kubelet objects, resulting in inaccurate results and false positives. This new feature evaluates the kubelet configuration and now reports accurately. (BZ#2075041)
  • Previously, the Compliance Operator reported the ocp4-kubelet-configure-event-creation rule in a FAIL state after applying an automatic remediation because the eventRecordQPS value was set higher than the default value. Now, the ocp4-kubelet-configure-event-creation rule remediation sets the default value, and the rule applies correctly. (BZ#2082416)
  • The ocp4-configure-network-policies rule requires manual intervention to perform effectively. New descriptive instructions and rule updates increase the applicability of the ocp4-configure-network-policies rule for clusters that use Calico CNIs. (BZ#2091794)
  • Previously, the Compliance Operator would not clean up pods used to scan infrastructure when using the debug=true option in the scan settings. This caused pods to be left on the cluster even after deleting the ScanSettingBinding. Now, pods are always deleted when a ScanSettingBinding is deleted. (BZ#2092913)
  • Previously, the Compliance Operator used an older version of the operator-sdk command that caused alerts about deprecated functionality. Now, an updated version of the operator-sdk command is included and there are no more alerts for deprecated functionality. (BZ#2098581)
  • Previously, the Compliance Operator would fail to apply remediations if it could not determine the relationship between kubelet and machine configurations. Now, the Compliance Operator has improved handling of the machine configurations and is able to determine if a kubelet configuration is a subset of a machine configuration. (BZ#2102511)
  • Previously, the rule for ocp4-cis-node-master-kubelet-enable-cert-rotation did not properly describe success criteria. As a result, the requirements for RotateKubeletClientCertificate were unclear. Now, the rule for ocp4-cis-node-master-kubelet-enable-cert-rotation reports accurately regardless of the configuration present in the kubelet configuration file. (BZ#2105153)
  • Previously, the rule for checking idle streaming timeouts did not consider default values, resulting in inaccurate rule reporting. Now, more robust checks ensure increased accuracy in results based on default configuration values. (BZ#2105878)
  • Previously, the Compliance Operator would fail to fetch API resources when parsing machine configurations without Ignition specifications, which caused the api-check-pods processes to crash loop. Now, the Compliance Operator handles Machine Config Pools that do not have Ignition specifications correctly. (BZ#2117268)
  • Previously, rules evaluating the modprobe configuration would fail even after applying remediations due to a mismatch in values for the modprobe configuration. Now, the same values are used for the modprobe configuration in checks and remediations, ensuring consistent results. (BZ#2117747)

5.1.6.3. Deprecations

  • Specifying Install into all namespaces in the cluster or setting the WATCH_NAMESPACES environment variable to "" no longer affects all namespaces. Any API resources installed in namespaces not specified at the time of Compliance Operator installation are no longer operational. API resources might require creation in the selected namespace, or in the openshift-compliance namespace by default. This change improves the Compliance Operator’s memory usage.

5.1.7. OpenShift Compliance Operator 0.1.53

The following advisory is available for the OpenShift Compliance Operator 0.1.53:

5.1.7.1. Bug fixes

  • Previously, the ocp4-kubelet-enable-streaming-connections rule contained an incorrect variable comparison, resulting in false positive scan results. Now, the Compliance Operator provides accurate scan results when setting streamingConnectionIdleTimeout. (BZ#2069891)
  • Previously, group ownership for /etc/openvswitch/conf.db was incorrect on IBM Z architectures, resulting in ocp4-cis-node-worker-file-groupowner-ovs-conf-db check failures. Now, the check is marked NOT-APPLICABLE on IBM Z architecture systems. (BZ#2072597)
  • Previously, the ocp4-cis-scc-limit-container-allowed-capabilities rule reported in a FAIL state due to incomplete data regarding the security context constraints (SCC) rules in the deployment. Now, the result is MANUAL, which is consistent with other checks that require human intervention. (BZ#2077916)
  • Previously, the following rules failed to account for additional configuration paths for API servers and TLS certificates and keys, resulting in reported failures even if the certificates and keys were set properly:

    • ocp4-cis-api-server-kubelet-client-cert
    • ocp4-cis-api-server-kubelet-client-key
    • ocp4-cis-kubelet-configure-tls-cert
    • ocp4-cis-kubelet-configure-tls-key

    Now, the rules report accurately and observe legacy file paths specified in the kubelet configuration file. (BZ#2079813)

  • Previously, the content_rule_oauth_or_oauthclient_inactivity_timeout rule did not account for a configurable timeout set by the deployment when assessing compliance for timeouts. This resulted in the rule failing even if the timeout was valid. Now, the Compliance Operator uses the var_oauth_inactivity_timeout variable to set a valid timeout length. (BZ#2081952)
  • Previously, the Compliance Operator used administrative permissions on namespaces not labeled appropriately for privileged use, resulting in warning messages regarding pod security-level violations. Now, the Compliance Operator has appropriate namespace labels and permission adjustments to access results without violating permissions. (BZ#2088202)
  • Previously, applying auto remediations for rhcos4-high-master-sysctl-kernel-yama-ptrace-scope and rhcos4-sysctl-kernel-core-pattern resulted in subsequent failures of those rules in scan results, even though they were remediated. Now, the rules report PASS accurately, even after remediations are applied. (BZ#2094382)
  • Previously, the Compliance Operator would fail in a CrashLoopBackoff state because of out-of-memory exceptions. Now, the Compliance Operator is improved to handle large machine configuration data sets in memory and function correctly. (BZ#2094854)

5.1.7.2. Known issue

  • When "debug":true is set within the ScanSettingBinding object, the pods generated by the ScanSettingBinding object are not removed when that binding is deleted. As a workaround, run the following command to delete the remaining pods:

    $ oc delete pods -l compliance.openshift.io/scan-name=ocp4-cis

    (BZ#2092913)

5.1.8. OpenShift Compliance Operator 0.1.52

The following advisory is available for the OpenShift Compliance Operator 0.1.52:

5.1.8.1. New features and enhancements

  • The FedRAMP high SCAP profile is now available for use in OpenShift Container Platform environments. For more information, see Supported compliance profiles.

5.1.8.2. Bug fixes

  • Previously, the OpenScap container would crash due to a mount permission issue in a security environment where DAC_OVERRIDE capability is dropped. Now, executable mount permissions are applied to all users. (BZ#2082151)
  • Previously, the compliance rule ocp4-configure-network-policies could be configured as MANUAL. Now, compliance rule ocp4-configure-network-policies is set to AUTOMATIC. (BZ#2072431)
  • Previously, the Cluster Autoscaler would fail to scale down because the Compliance Operator scan pods were never removed after a scan. Now, the pods are removed from each node by default unless explicitly saved for debugging purposes. (BZ#2075029)
  • Previously, applying the Compliance Operator to the KubeletConfig would result in the node going into a NotReady state due to unpausing the Machine Config Pools too early. Now, the Machine Config Pools are unpaused appropriately and the node operates correctly. (BZ#2071854)
  • Previously, the Machine Config Operator used base64 instead of url-encoded code in the latest release, causing Compliance Operator remediation to fail. Now, the Compliance Operator checks encoding to handle both base64 and url-encoded Machine Config code and the remediation applies correctly. (BZ#2082431)

5.1.8.3. Known issue

  • When "debug":true is set within the ScanSettingBinding object, the pods generated by the ScanSettingBinding object are not removed when that binding is deleted. As a workaround, run the following command to delete the remaining pods:

    $ oc delete pods -l compliance.openshift.io/scan-name=ocp4-cis

    (BZ#2092913)

5.1.9. OpenShift Compliance Operator 0.1.49

The following advisory is available for the OpenShift Compliance Operator 0.1.49:

5.1.9.1. New features and enhancements

  • The Compliance Operator is now supported on the following architectures:

    • IBM Power
    • IBM Z
    • IBM LinuxONE

5.1.9.2. Bug fixes

  • Previously, the openshift-compliance content did not include platform-specific checks for network types. As a result, OVN- and SDN-specific checks would show as failed instead of not-applicable based on the network configuration. Now, new rules contain platform checks for networking rules, resulting in a more accurate assessment of network-specific checks. (BZ#1994609)
  • Previously, the ocp4-moderate-routes-protected-by-tls rule incorrectly checked TLS settings, resulting in the rule failing the check even if the connection used a secure SSL/TLS protocol. Now, the check properly evaluates TLS settings consistent with the networking guidance and profile recommendations. (BZ#2002695)
  • Previously, ocp-cis-configure-network-policies-namespace used pagination when requesting namespaces. This caused the rule to fail because the deployments truncated lists of more than 500 namespaces. Now, the entire namespace list is requested, and the rule for checking configured network policies works for deployments with more than 500 namespaces. (BZ#2038909)
  • Previously, remediations using the sshd jinja macros were hard-coded to specific sshd configurations. As a result, the configurations were inconsistent with the content the rules were checking for and the check would fail. Now, the sshd configuration is parameterized and the rules apply successfully. (BZ#2049141)
  • Previously, the ocp4-cluster-version-operator-verify-integrity check always checked the first entry in the Cluster Version Operator (CVO) history. As a result, the check could fail in situations where subsequent versions of OpenShift Container Platform were verified. Now, the compliance check result for ocp4-cluster-version-operator-verify-integrity is able to detect verified versions and is accurate with the CVO history. (BZ#2053602)
  • Previously, the ocp4-api-server-no-adm-ctrl-plugins-disabled rule did not check for a list of empty admission controller plugins. As a result, the rule would always fail, even if all admission plugins were enabled. Now, more robust checking of the ocp4-api-server-no-adm-ctrl-plugins-disabled rule accurately passes with all admission controller plugins enabled. (BZ#2058631)
  • Previously, scans did not contain platform checks for running against Linux worker nodes. As a result, running scans against worker nodes that were not Linux-based resulted in a never-ending scan loop. Now, the scan is scheduled appropriately based on platform type and labels, and completes successfully. (BZ#2056911)

5.1.10. OpenShift Compliance Operator 0.1.48

The following advisory is available for the OpenShift Compliance Operator 0.1.48:

5.1.10.1. Bug fixes

  • Previously, some rules associated with extended Open Vulnerability and Assessment Language (OVAL) definitions had a checkType of None. This was because the Compliance Operator was not processing extended OVAL definitions when parsing rules. With this update, content from extended OVAL definitions is parsed so that these rules now have a checkType of either Node or Platform. (BZ#2040282)
  • Previously, a manually created MachineConfig object for KubeletConfig prevented a KubeletConfig object from being generated for remediation, leaving the remediation in the Pending state. With this release, a KubeletConfig object is created by the remediation, regardless if there is a manually created MachineConfig object for KubeletConfig. As a result, KubeletConfig remediations now work as expected. (BZ#2040401)

5.1.11. OpenShift Compliance Operator 0.1.47

The following advisory is available for the OpenShift Compliance Operator 0.1.47:

5.1.11.1. New features and enhancements

  • The Compliance Operator now supports the following compliance benchmarks for the Payment Card Industry Data Security Standard (PCI DSS):

    • ocp4-pci-dss
    • ocp4-pci-dss-node
  • Additional rules and remediations for FedRAMP moderate impact level are added to the ocp4-moderate, ocp4-moderate-node, and rhcos4-moderate profiles.
  • Remediations for KubeletConfig are now available in node-level profiles.

5.1.11.2. Bug fixes

  • Previously, if your cluster was running OpenShift Container Platform 4.6 or earlier, remediations for USBGuard-related rules would fail for the moderate profile. This is because the remediations created by the Compliance Operator were based on an older version of USBGuard that did not support drop-in directories. Now, invalid remediations for USBGuard-related rules are not created for clusters running OpenShift Container Platform 4.6. If your cluster is using OpenShift Container Platform 4.6, you must manually create remediations for USBGuard-related rules.

    Additionally, remediations are created only for rules that satisfy minimum version requirements. (BZ#1965511)

  • Previously, when rendering remediations, the Compliance Operator would check that the remediation was well formed by using a regular expression that was too strict. As a result, some remediations, such as those that render sshd_config, did not pass the regular expression check and were not created. The regular expression was found to be unnecessary and was removed. Remediations now render correctly. (BZ#2033009)

5.1.12. OpenShift Compliance Operator 0.1.44

The following advisory is available for the OpenShift Compliance Operator 0.1.44:

5.1.12.1. New features and enhancements

  • In this release, the strictNodeScan option is now added to the ComplianceScan, ComplianceSuite, and ScanSetting CRs. This option defaults to true, which matches the previous behavior, where an error occurred if a scan could not be scheduled on a node. Setting the option to false allows the Compliance Operator to be more permissive about scheduling scans. Environments with ephemeral nodes can set the strictNodeScan value to false, which allows a compliance scan to proceed even if some of the nodes in the cluster are not available for scheduling.
  • You can now customize the node that is used to schedule the result server workload by configuring the nodeSelector and tolerations attributes of the ScanSetting object. These attributes are used to place the ResultServer pod, the pod that is used to mount a PV storage volume and store the raw Asset Reporting Format (ARF) results. Previously, the nodeSelector and the tolerations parameters defaulted to selecting one of the control plane nodes and tolerating the node-role.kubernetes.io/master taint. This did not work in environments where control plane nodes are not permitted to mount PVs. This feature provides a way for you to select the node and tolerate a different taint in those environments.
  • The Compliance Operator can now remediate KubeletConfig objects.
  • A comment containing an error message is now added to help content developers differentiate between objects that do not exist in the cluster compared to objects that cannot be fetched.
  • Rule objects now contain two new attributes, checkType and description. These attributes allow you to determine if the rule pertains to a node check or platform check, and also allow you to review what the rule does.
  • This enhancement removes the requirement to extend an existing profile to create a tailored profile; the extends field in the TailoredProfile CRD is no longer mandatory. You can now select a list of rule objects to create a tailored profile. Note that you must select whether your profile applies to nodes or to the platform by setting the compliance.openshift.io/product-type annotation or by adding the -node suffix to the TailoredProfile CR name. A sketch of such a profile is shown after this list.
  • In this release, the Compliance Operator is now able to schedule scans on all nodes irrespective of their taints. Previously, the scan pods would tolerate only the node-role.kubernetes.io/master taint, meaning that they would run either on nodes with no taints or only on nodes with the node-role.kubernetes.io/master taint. In deployments that use custom taints for their nodes, this resulted in the scans not being scheduled on those nodes. Now, the scan pods tolerate all node taints.
  • In this release, the Compliance Operator supports the following North American Electric Reliability Corporation (NERC) security profiles:

    • ocp4-nerc-cip
    • ocp4-nerc-cip-node
    • rhcos4-nerc-cip
  • In this release, the Compliance Operator supports the NIST 800-53 Moderate-Impact Baseline for the Red Hat OpenShift - Node level, ocp4-moderate-node, security profile.
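
The following is a minimal TailoredProfile sketch that selects individual rules without extending an existing profile, as referenced in the list above. The object name, title, description, and rationale text are illustrative only; the rule names are taken from rules mentioned elsewhere in these release notes:

apiVersion: compliance.openshift.io/v1alpha1
kind: TailoredProfile
metadata:
  name: example-platform-rules
  namespace: openshift-compliance
  annotations:
    # Declares whether the tailored profile targets Platform or Node checks
    # when no existing profile is extended.
    compliance.openshift.io/product-type: Platform
spec:
  title: Example platform rule selection
  description: Selects a small set of platform rules without extending a profile.
  enableRules:
  - name: ocp4-cluster-version-operator-verify-integrity
    rationale: Verify the integrity of the cluster version history.
  - name: ocp4-api-server-no-adm-ctrl-plugins-disabled
    rationale: Ensure that admission controller plugins are not disabled.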

5.1.12.2. Templating and variable use

  • In this release, the remediation template now allows multi-value variables.
  • With this update, the Compliance Operator can change remediations based on variables that are set in the compliance profile. This is useful for remediations that include deployment-specific values such as time outs, NTP server host names, or similar. Additionally, the ComplianceCheckResult objects now use the label compliance.openshift.io/check-has-value that lists the variables a check has used.
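
For example, using standard label selectors, you can list the check results that carry this label with a command similar to the following:

$ oc get compliancecheckresults -n openshift-compliance -l 'compliance.openshift.io/check-has-value'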

5.1.12.3. Bug fixes

  • Previously, while performing a scan, an unexpected termination occurred in one of the scanner containers of the pods. In this release, the Compliance Operator uses the latest OpenSCAP version 1.3.5 to avoid a crash.
  • Previously, using autoApplyRemediations to apply remediations triggered an update of the cluster nodes. This was disruptive if some of the remediations did not include all of the required input variables. Now, if a remediation is missing one or more required input variables, it is assigned a state of NeedsReview. If one or more remediations are in a NeedsReview state, the machine config pool remains paused, and the remediations are not applied until all of the required variables are set. This helps minimize disruption to the nodes.
  • The RBAC Role and Role Binding used for Prometheus metrics are changed to ClusterRole and ClusterRoleBinding to ensure that monitoring works without customization.
  • Previously, if an error occurred while parsing a profile, rules or variables objects were removed and deleted from the profile. Now, if an error occurs during parsing, the profileparser annotates the object with a temporary annotation that prevents the object from being deleted until after parsing completes. (BZ#1988259)
  • Previously, an error occurred if titles or descriptions were missing from a tailored profile. Because the XCCDF standard requires titles and descriptions for tailored profiles, titles and descriptions are now required to be set in TailoredProfile CRs.
  • Previously, when using tailored profiles, TailoredProfile variable values were allowed to be set using only a specific selection set. This restriction is now removed, and TailoredProfile variables can be set to any value.

5.1.13. OpenShift Compliance Operator 0.1.39

The following advisory is available for the OpenShift Compliance Operator 0.1.39:

5.1.13.1. New features and enhancements

  • Previously, the Compliance Operator was unable to parse Payment Card Industry Data Security Standard (PCI DSS) references. Now, the Operator can parse compliance content that ships with PCI DSS profiles.
  • Previously, the Compliance Operator was unable to execute rules for AU-5 control in the moderate profile. Now, permission is added to the Operator so that it can read Prometheusrules.monitoring.coreos.com objects and run the rules that cover AU-5 control in the moderate profile.

5.1.14. Additional resources

5.2. Supported compliance profiles

There are several profiles available as part of the Compliance Operator (CO) installation. While you can use the following profiles to assess gaps in a cluster, usage alone does not imply or guarantee compliance with a particular profile.

Important

The Compliance Operator might report incorrect results on managed platforms, such as OpenShift Dedicated, Red Hat OpenShift Service on AWS, and Azure Red Hat OpenShift. For more information, see the Red Hat Knowledgebase Solution #6983418.

5.2.1. Compliance profiles

The Compliance Operator provides the following compliance profiles:

Table 5.1. Supported compliance profiles
  • ocp4-cis
    Profile title: CIS Red Hat OpenShift Container Platform 4 Benchmark v1.4.0
    Application: Platform
    Compliance Operator version: 1.2.0+
    Industry compliance benchmark: CIS Benchmarks™ [1]
    Supported architectures: x86_64, ppc64le, s390x

  • ocp4-cis-node
    Profile title: CIS Red Hat OpenShift Container Platform 4 Benchmark v1.4.0
    Application: Node [2]
    Compliance Operator version: 1.2.0+
    Industry compliance benchmark: CIS Benchmarks™ [1]
    Supported architectures: x86_64, ppc64le, s390x

  • ocp4-e8
    Profile title: Australian Cyber Security Centre (ACSC) Essential Eight
    Application: Platform
    Compliance Operator version: 0.1.39+
    Industry compliance benchmark: ACSC Hardening Linux Workstations and Servers
    Supported architectures: x86_64

  • ocp4-moderate
    Profile title: NIST 800-53 Moderate-Impact Baseline for Red Hat OpenShift - Platform level
    Application: Platform
    Compliance Operator version: 0.1.39+
    Industry compliance benchmark: NIST SP-800-53 Release Search
    Supported architectures: x86_64

  • rhcos4-e8
    Profile title: Australian Cyber Security Centre (ACSC) Essential Eight
    Application: Node
    Compliance Operator version: 0.1.39+
    Industry compliance benchmark: ACSC Hardening Linux Workstations and Servers
    Supported architectures: x86_64

  • rhcos4-moderate
    Profile title: NIST 800-53 Moderate-Impact Baseline for Red Hat Enterprise Linux CoreOS
    Application: Node
    Compliance Operator version: 0.1.39+
    Industry compliance benchmark: NIST SP-800-53 Release Search
    Supported architectures: x86_64

  • ocp4-moderate-node
    Profile title: NIST 800-53 Moderate-Impact Baseline for Red Hat OpenShift - Node level
    Application: Node [2]
    Compliance Operator version: 0.1.44+
    Industry compliance benchmark: NIST SP-800-53 Release Search
    Supported architectures: x86_64

  • ocp4-nerc-cip
    Profile title: North American Electric Reliability Corporation (NERC) Critical Infrastructure Protection (CIP) cybersecurity standards profile for the Red Hat OpenShift Container Platform - Platform level
    Application: Platform
    Compliance Operator version: 0.1.44+
    Industry compliance benchmark: NERC CIP Standards
    Supported architectures: x86_64

  • ocp4-nerc-cip-node
    Profile title: North American Electric Reliability Corporation (NERC) Critical Infrastructure Protection (CIP) cybersecurity standards profile for the Red Hat OpenShift Container Platform - Node level
    Application: Node [2]
    Compliance Operator version: 0.1.44+
    Industry compliance benchmark: NERC CIP Standards
    Supported architectures: x86_64

  • rhcos4-nerc-cip
    Profile title: North American Electric Reliability Corporation (NERC) Critical Infrastructure Protection (CIP) cybersecurity standards profile for Red Hat Enterprise Linux CoreOS
    Application: Node
    Compliance Operator version: 0.1.44+
    Industry compliance benchmark: NERC CIP Standards
    Supported architectures: x86_64

  • ocp4-pci-dss
    Profile title: PCI-DSS v3.2.1 Control Baseline for Red Hat OpenShift Container Platform 4
    Application: Platform
    Compliance Operator version: 0.1.47+
    Industry compliance benchmark: PCI Security Standards® Council Document Library
    Supported architectures: x86_64, ppc64le

  • ocp4-pci-dss-node
    Profile title: PCI-DSS v3.2.1 Control Baseline for Red Hat OpenShift Container Platform 4
    Application: Node [2]
    Compliance Operator version: 0.1.47+
    Industry compliance benchmark: PCI Security Standards® Council Document Library
    Supported architectures: x86_64, ppc64le

  • ocp4-high
    Profile title: NIST 800-53 High-Impact Baseline for Red Hat OpenShift - Platform level
    Application: Platform
    Compliance Operator version: 0.1.52+
    Industry compliance benchmark: NIST SP-800-53 Release Search
    Supported architectures: x86_64

  • ocp4-high-node
    Profile title: NIST 800-53 High-Impact Baseline for Red Hat OpenShift - Node level
    Application: Node [2]
    Compliance Operator version: 0.1.52+
    Industry compliance benchmark: NIST SP-800-53 Release Search
    Supported architectures: x86_64

  • rhcos4-high
    Profile title: NIST 800-53 High-Impact Baseline for Red Hat Enterprise Linux CoreOS
    Application: Node
    Compliance Operator version: 0.1.52+
    Industry compliance benchmark: NIST SP-800-53 Release Search
    Supported architectures: x86_64

  1. To locate the CIS OpenShift Container Platform v4 Benchmark, go to CIS Benchmarks and click Download Latest CIS Benchmark, where you can then register to download the benchmark.
  2. Node profiles must be used with the relevant Platform profile. For more information, see Compliance Operator profile types.

5.2.2. Additional resources

5.3. Installing the Compliance Operator

Before you can use the Compliance Operator, you must ensure it is deployed in the cluster.

Important

The Compliance Operator might report incorrect results on managed platforms, such as OpenShift Dedicated, Red Hat OpenShift Service on AWS, and Microsoft Azure Red Hat OpenShift. For more information, see the Red Hat Knowledgebase Solution #6983418.

5.3.1. Installing the Compliance Operator through the web console

Prerequisites

  • You must have admin privileges.

Procedure

  1. In the OpenShift Container Platform web console, navigate to Operators → OperatorHub.
  2. Search for the Compliance Operator, then click Install.
  3. Keep the default selection of Installation mode and namespace to ensure that the Operator will be installed to the openshift-compliance namespace.
  4. Click Install.

Verification

To confirm that the installation is successful:

  1. Navigate to the Operators → Installed Operators page.
  2. Check that the Compliance Operator is installed in the openshift-compliance namespace and its status is Succeeded.

If the Operator is not installed successfully:

  1. Navigate to the Operators → Installed Operators page and inspect the Status column for any errors or failures.
  2. Navigate to the Workloads → Pods page and check the logs in any pods in the openshift-compliance project that are reporting issues.
Important

If the restricted Security Context Constraints (SCC) have been modified to contain the system:authenticated group or to add requiredDropCapabilities, the Compliance Operator might not function properly due to permissions issues.

You can create a custom SCC for the Compliance Operator scanner pod service account. For more information, see Creating a custom SCC for the Compliance Operator.

5.3.2. Installing the Compliance Operator using the CLI

Prerequisites

  • You must have admin privileges.

Procedure

  1. Define a Namespace object:

    Example namespace-object.yaml

    apiVersion: v1
    kind: Namespace
    metadata:
      labels:
        openshift.io/cluster-monitoring: "true"
      name: openshift-compliance

  2. Create the Namespace object:

    $ oc create -f namespace-object.yaml
  3. Define an OperatorGroup object:

    Example operator-group-object.yaml

    apiVersion: operators.coreos.com/v1
    kind: OperatorGroup
    metadata:
      name: compliance-operator
      namespace: openshift-compliance
    spec:
      targetNamespaces:
      - openshift-compliance

  4. Create the OperatorGroup object:

    $ oc create -f operator-group-object.yaml
  5. Define a Subscription object:

    Example subscription-object.yaml

    apiVersion: operators.coreos.com/v1alpha1
    kind: Subscription
    metadata:
      name: compliance-operator-sub
      namespace: openshift-compliance
    spec:
      channel: "stable"
      installPlanApproval: Automatic
      name: compliance-operator
      source: redhat-operators
      sourceNamespace: openshift-marketplace

  6. Create the Subscription object:

    $ oc create -f subscription-object.yaml
Note

If you are setting the global scheduler feature and enable defaultNodeSelector, you must create the namespace manually and update the annotations of the openshift-compliance namespace, or the namespace where the Compliance Operator was installed, with openshift.io/node-selector: "". This removes the default node selector and prevents deployment failures.
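
For example, assuming the Compliance Operator is installed in the default openshift-compliance namespace, a command similar to the following sets the empty node selector annotation:

$ oc annotate namespace openshift-compliance openshift.io/node-selector="" --overwrite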

Verification

  1. Verify that the installation succeeded by inspecting the cluster service version (CSV):

    $ oc get csv -n openshift-compliance
  2. Verify that the Compliance Operator is up and running:

    $ oc get deploy -n openshift-compliance
Important

If the restricted Security Context Constraints (SCC) have been modified to contain the system:authenticated group or to add requiredDropCapabilities, the Compliance Operator might not function properly due to permissions issues.

You can create a custom SCC for the Compliance Operator scanner pod service account. For more information, see Creating a custom SCC for the Compliance Operator.

5.3.3. Additional resources

5.4. Updating the Compliance Operator

As a cluster administrator, you can update the Compliance Operator on your OpenShift Container Platform cluster.

5.4.1. Preparing for an Operator update

The subscription of an installed Operator specifies an update channel that tracks and receives updates for the Operator. You can change the update channel to start tracking and receiving updates from a newer channel.

The names of update channels in a subscription can differ between Operators, but the naming scheme typically follows a common convention within a given Operator. For example, channel names might follow a minor release update stream for the application provided by the Operator (1.2, 1.3) or a release frequency (stable, fast).

Note

You cannot change installed Operators to a channel that is older than the current channel.

Red Hat Customer Portal Labs includes an application that helps administrators prepare to update their Operators.

You can use the application to search for Operator Lifecycle Manager-based Operators and verify the available Operator version per update channel across different versions of OpenShift Container Platform. Cluster Version Operator-based Operators are not included.

5.4.2. Changing the update channel for an Operator

You can change the update channel for an Operator by using the OpenShift Container Platform web console.

Tip

If the approval strategy in the subscription is set to Automatic, the update process initiates as soon as a new Operator version is available in the selected channel. If the approval strategy is set to Manual, you must manually approve pending updates.

Prerequisites

  • An Operator previously installed using Operator Lifecycle Manager (OLM).

Procedure

  1. In the Administrator perspective of the web console, navigate to Operators → Installed Operators.
  2. Click the name of the Operator you want to change the update channel for.
  3. Click the Subscription tab.
  4. Click the name of the update channel under Channel.
  5. Click the newer update channel that you want to change to, then click Save.
  6. For subscriptions with an Automatic approval strategy, the update begins automatically. Navigate back to the Operators → Installed Operators page to monitor the progress of the update. When complete, the status changes to Succeeded and Up to date.

    For subscriptions with a Manual approval strategy, you can manually approve the update from the Subscription tab.
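
    If you prefer the CLI, the following sketch shows an equivalent channel change. It assumes the Subscription from the installation example (compliance-operator-sub in the openshift-compliance namespace) and uses stable only as an example target channel:

    $ oc patch subscription compliance-operator-sub -n openshift-compliance \
        --type merge -p '{"spec":{"channel":"stable"}}'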

5.4.3. Manually approving a pending Operator update

If an installed Operator has the approval strategy in its subscription set to Manual, when new updates are released in its current update channel, the update must be manually approved before installation can begin.

Prerequisites

  • An Operator previously installed using Operator Lifecycle Manager (OLM).

Procedure

  1. In the Administrator perspective of the OpenShift Container Platform web console, navigate to Operators → Installed Operators.
  2. Operators that have a pending update display a status with Upgrade available. Click the name of the Operator you want to update.
  3. Click the Subscription tab. Any updates requiring approval are displayed next to Upgrade status. For example, it might display 1 requires approval.
  4. Click 1 requires approval, then click Preview Install Plan.
  5. Review the resources that are listed as available for update. When satisfied, click Approve.
  6. Navigate back to the Operators → Installed Operators page to monitor the progress of the update. When complete, the status changes to Succeeded and Up to date.
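
    If you prefer the CLI, the following sketch shows an equivalent approval, assuming the Operator is installed in the openshift-compliance namespace. Find the pending install plan, then patch it as approved:

    $ oc get installplan -n openshift-compliance
    $ oc patch installplan <install_plan_name> -n openshift-compliance \
        --type merge -p '{"spec":{"approved":true}}'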

5.5. Compliance Operator scans

The ScanSetting and ScanSettingBinding APIs are recommended to run compliance scans with the Compliance Operator. For more information on these API objects, run:

$ oc explain scansettings

or

$ oc explain scansettingbindings

5.5.1. Running compliance scans

You can run a scan using the Center for Internet Security (CIS) profiles. For convenience, the Compliance Operator creates a ScanSetting object with reasonable defaults on startup. This ScanSetting object is named default.

Note

For all-in-one control plane and worker nodes, the compliance scan runs twice on the worker and control plane nodes. The compliance scan might generate inconsistent scan results. You can avoid inconsistent results by defining only a single role in the ScanSetting object.

Procedure

  1. Inspect the ScanSetting object by running:

    $ oc describe scansettings default -n openshift-compliance

    Example output

    Name:         default
    Namespace:    openshift-compliance
    Labels:       <none>
    Annotations:  <none>
    API Version:  compliance.openshift.io/v1alpha1
    Kind:         ScanSetting
    Metadata:
      Creation Timestamp:  2022-10-10T14:07:29Z
      Generation:          1
      Managed Fields:
        API Version:  compliance.openshift.io/v1alpha1
        Fields Type:  FieldsV1
        fieldsV1:
          f:rawResultStorage:
            .:
            f:nodeSelector:
              .:
              f:node-role.kubernetes.io/master:
            f:pvAccessModes:
            f:rotation:
            f:size:
            f:tolerations:
          f:roles:
          f:scanTolerations:
          f:schedule:
          f:showNotApplicable:
          f:strictNodeScan:
        Manager:         compliance-operator
        Operation:       Update
        Time:            2022-10-10T14:07:29Z
      Resource Version:  56111
      UID:               c21d1d14-3472-47d7-a450-b924287aec90
    Raw Result Storage:
      Node Selector:
        node-role.kubernetes.io/master:
      Pv Access Modes:
        ReadWriteOnce 1
      Rotation:  3 2
      Size:      1Gi 3
      Tolerations:
        Effect:              NoSchedule
        Key:                 node-role.kubernetes.io/master
        Operator:            Exists
        Effect:              NoExecute
        Key:                 node.kubernetes.io/not-ready
        Operator:            Exists
        Toleration Seconds:  300
        Effect:              NoExecute
        Key:                 node.kubernetes.io/unreachable
        Operator:            Exists
        Toleration Seconds:  300
        Effect:              NoSchedule
        Key:                 node.kubernetes.io/memory-pressure
        Operator:            Exists
    Roles:
      master 4
      worker 5
    Scan Tolerations: 6
      Operator:           Exists
    Schedule:             0 1 * * * 7
    Show Not Applicable:  false
    Strict Node Scan:     true
    Events:               <none>

    1
    The Compliance Operator creates a persistent volume (PV) that contains the results of the scans. By default, the PV will use access mode ReadWriteOnce because the Compliance Operator cannot make any assumptions about the storage classes configured on the cluster. Additionally, ReadWriteOnce access mode is available on most clusters. If you need to fetch the scan results, you can do so by using a helper pod, which also binds the volume. Volumes that use the ReadWriteOnce access mode can be mounted by only one pod at time, so it is important to remember to delete the helper pods. Otherwise, the Compliance Operator will not be able to reuse the volume for subsequent scans.
    2
    The Compliance Operator keeps results of three subsequent scans in the volume; older scans are rotated.
    3
    The Compliance Operator will allocate one GB of storage for the scan results.
    4 5
    If the scan setting uses any profiles that scan cluster nodes, scan these node roles.
    6
    The default scan setting object scans all the nodes.
    7
    The default scan setting object runs scans at 01:00 each day.

    As an alternative to the default scan setting, you can use default-auto-apply, which has the following settings:

    Name:                      default-auto-apply
    Namespace:                 openshift-compliance
    Labels:                    <none>
    Annotations:               <none>
    API Version:               compliance.openshift.io/v1alpha1
    Auto Apply Remediations:   true 1
    Auto Update Remediations:  true 2
    Kind:                      ScanSetting
    Metadata:
      Creation Timestamp:  2022-10-18T20:21:00Z
      Generation:          1
      Managed Fields:
        API Version:  compliance.openshift.io/v1alpha1
        Fields Type:  FieldsV1
        fieldsV1:
          f:autoApplyRemediations:
          f:autoUpdateRemediations:
          f:rawResultStorage:
            .:
            f:nodeSelector:
              .:
              f:node-role.kubernetes.io/master:
            f:pvAccessModes:
            f:rotation:
            f:size:
            f:tolerations:
          f:roles:
          f:scanTolerations:
          f:schedule:
          f:showNotApplicable:
          f:strictNodeScan:
        Manager:         compliance-operator
        Operation:       Update
        Time:            2022-10-18T20:21:00Z
      Resource Version:  38840
      UID:               8cb0967d-05e0-4d7a-ac1c-08a7f7e89e84
    Raw Result Storage:
      Node Selector:
        node-role.kubernetes.io/master:
      Pv Access Modes:
        ReadWriteOnce
      Rotation:  3
      Size:      1Gi
      Tolerations:
        Effect:              NoSchedule
        Key:                 node-role.kubernetes.io/master
        Operator:            Exists
        Effect:              NoExecute
        Key:                 node.kubernetes.io/not-ready
        Operator:            Exists
        Toleration Seconds:  300
        Effect:              NoExecute
        Key:                 node.kubernetes.io/unreachable
        Operator:            Exists
        Toleration Seconds:  300
        Effect:              NoSchedule
        Key:                 node.kubernetes.io/memory-pressure
        Operator:            Exists
    Roles:
      master
      worker
    Scan Tolerations:
      Operator:           Exists
    Schedule:             0 1 * * *
    Show Not Applicable:  false
    Strict Node Scan:     true
    Events:               <none>
    1 2
    Setting autoUpdateRemediations and autoApplyRemediations flags to true allows you to easily create ScanSetting objects that auto-remediate without extra steps.
  2. Create a ScanSettingBinding object that binds to the default ScanSetting object and scans the cluster using the cis and cis-node profiles. For example:

    apiVersion: compliance.openshift.io/v1alpha1
    kind: ScanSettingBinding
    metadata:
      name: cis-compliance
      namespace: openshift-compliance
    profiles:
      - name: ocp4-cis-node
        kind: Profile
        apiGroup: compliance.openshift.io/v1alpha1
      - name: ocp4-cis
        kind: Profile
        apiGroup: compliance.openshift.io/v1alpha1
    settingsRef:
      name: default
      kind: ScanSetting
      apiGroup: compliance.openshift.io/v1alpha1
  3. Create the ScanSettingBinding object by running:

    $ oc create -f <file-name>.yaml -n openshift-compliance

    At this point in the process, the ScanSettingBinding object is reconciled and, based on the Binding and the Bound settings, the Compliance Operator creates a ComplianceSuite object and the associated ComplianceScan objects.

  4. Follow the compliance scan progress by running:

    $ oc get compliancescan -w -n openshift-compliance

    The scans progress through the scanning phases and eventually reach the DONE phase when complete. In most cases, the result of the scan is NON-COMPLIANT. You can review the scan results and start applying remediations to make the cluster compliant. See Managing Compliance Operator remediation for more information.
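
    After a scan reaches the DONE phase, you can review the individual results and the proposed remediations, for example:

    $ oc get compliancecheckresults -n openshift-compliance
    $ oc get complianceremediations -n openshift-compliance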

5.5.2. Scheduling the result server pod on a worker node

The result server pod mounts the persistent volume (PV) that stores the raw Asset Reporting Format (ARF) scan results. The nodeSelector and tolerations attributes enable you to configure the location of the result server pod.

This is helpful for those environments where control plane nodes are not permitted to mount persistent volumes.

Procedure

  • Create a ScanSetting custom resource (CR) for the Compliance Operator:

    1. Define the ScanSetting CR, and save the YAML file, for example, rs-workers.yaml:

      apiVersion: compliance.openshift.io/v1alpha1
      kind: ScanSetting
      metadata:
        name: rs-on-workers
        namespace: openshift-compliance
      rawResultStorage:
        nodeSelector:
          node-role.kubernetes.io/worker: "" 1
        pvAccessModes:
        - ReadWriteOnce
        rotation: 3
        size: 1Gi
        tolerations:
        - operator: Exists 2
      roles:
      - worker
      - master
      scanTolerations:
        - operator: Exists
      schedule: 0 1 * * *
      1
      The Compliance Operator uses this node to store scan results in ARF format.
      2
      The result server pod tolerates all taints.
    2. To create the ScanSetting CR, run the following command:

      $ oc create -f rs-workers.yaml

Verification

  • To verify that the ScanSetting object is created, run the following command:

    $ oc get scansettings rs-on-workers -n openshift-compliance -o yaml

    Example output

    apiVersion: compliance.openshift.io/v1alpha1
    kind: ScanSetting
    metadata:
      creationTimestamp: "2021-11-19T19:36:36Z"
      generation: 1
      name: rs-on-workers
      namespace: openshift-compliance
      resourceVersion: "48305"
      uid: 43fdfc5f-15a7-445a-8bbc-0e4a160cd46e
    rawResultStorage:
      nodeSelector:
        node-role.kubernetes.io/worker: ""
      pvAccessModes:
      - ReadWriteOnce
      rotation: 3
      size: 1Gi
      tolerations:
      - operator: Exists
    roles:
    - worker
    - master
    scanTolerations:
    - operator: Exists
    schedule: 0 1 * * *
    strictNodeScan: true

5.5.3. ScanSetting Custom Resource

The ScanSetting Custom Resource now allows you to override the default CPU and memory limits of scanner pods through the scan limits attribute. The Compliance Operator will use defaults of 500Mi memory, 100m CPU for the scanner container, and 200Mi memory with 100m CPU for the api-resource-collector container. To set the memory limits of the Operator, modify the Subscription object if installed through OLM or the Operator deployment itself.

To increase the default CPU and memory limits of the Compliance Operator, see Increasing Compliance Operator resource limits.
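
The following is a sketch only of what such an override might look like. The scanLimits attribute name and the values shown are assumptions, so confirm the exact field with oc explain scansettings before applying it:

apiVersion: compliance.openshift.io/v1alpha1
kind: ScanSetting
metadata:
  name: default
  namespace: openshift-compliance
# Assumed attribute for overriding the scanner pod resource limits.
scanLimits:
  memory: 1024Mi
  cpu: 500m
roles:
- master
- worker
scanTolerations:
- operator: Exists
schedule: '0 1 * * *'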

Important

Increasing the memory limit for the Compliance Operator or the scanner pods is needed if the default limits are not sufficient and the Operator or scanner pods are terminated by the Out Of Memory (OOM) killer.

5.5.4. Applying resource requests and limits

When the kubelet starts a container as part of a Pod, the kubelet passes that container’s requests and limits for memory and CPU to the container runtime. In Linux, the container runtime configures the kernel cgroups that apply and enforce the limits you defined.

The CPU limit defines how much CPU time the container can use. During each scheduling interval, the Linux kernel checks to see if this limit is exceeded. If so, the kernel waits before allowing the cgroup to resume execution.

If several different containers (cgroups) want to run on a contended system, workloads with larger CPU requests are allocated more CPU time than workloads with small requests. The memory request is used during Pod scheduling. On a node that uses cgroups v2, the container runtime might use the memory request as a hint to set memory.min and memory.low values.

If a container attempts to allocate more memory than this limit, the Linux kernel out-of-memory subsystem activates and intervenes by stopping one of the processes in the container that tried to allocate memory. The memory limit for the Pod or container can also apply to pages in memory-backed volumes, such as an emptyDir.

The kubelet tracks tmpfs emptyDir volumes as container memory usage, rather than as local ephemeral storage. If a container exceeds its memory request and the node that it runs on becomes short of memory overall, the Pod’s container might be evicted.

Important

A container cannot exceed its CPU limit for extended periods. Container runtimes do not stop Pods or containers for excessive CPU usage. To determine whether a container cannot be scheduled or is being killed due to resource limits, see Troubleshooting the Compliance Operator.

5.5.5. Scheduling Pods with container resource requests

When a Pod is created, the scheduler selects a Node for the Pod to run on. Each node has a maximum capacity for each resource type in the amount of CPU and memory it can provide for Pods. The scheduler ensures that the sum of the resource requests of the scheduled containers is less than the capacity of the node for each resource type.

Even if memory or CPU resource usage on a node is very low, the scheduler might still refuse to place a Pod on the node if the capacity check fails. This protects against a resource shortage on the node.

For each container, you can specify the following resource limits and request:

spec.containers[].resources.limits.cpu
spec.containers[].resources.limits.memory
spec.containers[].resources.limits.hugepages-<size>
spec.containers[].resources.requests.cpu
spec.containers[].resources.requests.memory
spec.containers[].resources.requests.hugepages-<size>

Although you can specify requests and limits only for individual containers, it is also useful to consider the overall resource requests and limits for a pod. For a particular resource, a pod resource request or limit is the sum of the resource requests or limits of that type for each container in the pod.

Example container resource requests and limits

apiVersion: v1
kind: Pod
metadata:
  name: frontend
spec:
  containers:
  - name: app
    image: images.my-company.example/app:v4
    resources:
      requests: 1
        memory: "64Mi"
        cpu: "250m"
      limits: 2
        memory: "128Mi"
        cpu: "500m"
  - name: log-aggregator
    image: images.my-company.example/log-aggregator:v6
    resources:
      requests:
        memory: "64Mi"
        cpu: "250m"
      limits:
        memory: "128Mi"
        cpu: "500m"

1
The container is requesting 64 Mi of memory and 250 m CPU.
2
The container’s limits are 128 Mi of memory and 500 m CPU.
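
In the preceding example, each container requests 64 Mi of memory and 250 m CPU and is limited to 128 Mi and 500 m, so the Pod as a whole requests 128 Mi of memory and 500 m CPU, and its overall limits are 256 Mi of memory and 1000 m (one full) CPU.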

5.6. Understanding the Compliance Operator

The Compliance Operator lets OpenShift Container Platform administrators describe the required compliance state of a cluster and provides them with an overview of gaps and ways to remediate them. The Compliance Operator assesses compliance of both the Kubernetes API resources of OpenShift Container Platform, as well as the nodes running the cluster. The Compliance Operator uses OpenSCAP, a NIST-certified tool, to scan and enforce security policies provided by the content.

Important

The Compliance Operator is available for Red Hat Enterprise Linux CoreOS (RHCOS) deployments only.

5.6.1. Compliance Operator profiles

There are several profiles available as part of the Compliance Operator installation. You can use the oc get command to view available profiles, profile details, and specific rules.

  • View the available profiles:

    $ oc get -n openshift-compliance profiles.compliance

    Example output

    NAME                 AGE
    ocp4-cis             94m
    ocp4-cis-node        94m
    ocp4-e8              94m
    ocp4-high            94m
    ocp4-high-node       94m
    ocp4-moderate        94m
    ocp4-moderate-node   94m
    ocp4-nerc-cip        94m
    ocp4-nerc-cip-node   94m
    ocp4-pci-dss         94m
    ocp4-pci-dss-node    94m
    rhcos4-e8            94m
    rhcos4-high          94m
    rhcos4-moderate      94m
    rhcos4-nerc-cip      94m

    These profiles represent different compliance benchmarks. Each profile has the product name that it applies to added as a prefix to the profile’s name. ocp4-e8 applies the Essential 8 benchmark to the OpenShift Container Platform product, while rhcos4-e8 applies the Essential 8 benchmark to the Red Hat Enterprise Linux CoreOS (RHCOS) product.

  • Run the following command to view the details of the rhcos4-e8 profile:

    $ oc get -n openshift-compliance -oyaml profiles.compliance rhcos4-e8

    Example 5.1. Example output

    apiVersion: compliance.openshift.io/v1alpha1
    description: 'This profile contains configuration checks for Red Hat Enterprise Linux
      CoreOS that align to the Australian Cyber Security Centre (ACSC) Essential Eight.
      A copy of the Essential Eight in Linux Environments guide can be found at the ACSC
      website: https://www.cyber.gov.au/acsc/view-all-content/publications/hardening-linux-workstations-and-servers'
    id: xccdf_org.ssgproject.content_profile_e8
    kind: Profile
    metadata:
      annotations:
        compliance.openshift.io/image-digest: pb-rhcos4hrdkm
        compliance.openshift.io/product: redhat_enterprise_linux_coreos_4
        compliance.openshift.io/product-type: Node
      creationTimestamp: "2022-10-19T12:06:49Z"
      generation: 1
      labels:
        compliance.openshift.io/profile-bundle: rhcos4
      name: rhcos4-e8
      namespace: openshift-compliance
      ownerReferences:
      - apiVersion: compliance.openshift.io/v1alpha1
        blockOwnerDeletion: true
        controller: true
        kind: ProfileBundle
        name: rhcos4
        uid: 22350850-af4a-4f5c-9a42-5e7b68b82d7d
      resourceVersion: "43699"
      uid: 86353f70-28f7-40b4-bf0e-6289ec33675b
    rules:
    - rhcos4-accounts-no-uid-except-zero
    - rhcos4-audit-rules-dac-modification-chmod
    - rhcos4-audit-rules-dac-modification-chown
    - rhcos4-audit-rules-execution-chcon
    - rhcos4-audit-rules-execution-restorecon
    - rhcos4-audit-rules-execution-semanage
    - rhcos4-audit-rules-execution-setfiles
    - rhcos4-audit-rules-execution-setsebool
    - rhcos4-audit-rules-execution-seunshare
    - rhcos4-audit-rules-kernel-module-loading-delete
    - rhcos4-audit-rules-kernel-module-loading-finit
    - rhcos4-audit-rules-kernel-module-loading-init
    - rhcos4-audit-rules-login-events
    - rhcos4-audit-rules-login-events-faillock
    - rhcos4-audit-rules-login-events-lastlog
    - rhcos4-audit-rules-login-events-tallylog
    - rhcos4-audit-rules-networkconfig-modification
    - rhcos4-audit-rules-sysadmin-actions
    - rhcos4-audit-rules-time-adjtimex
    - rhcos4-audit-rules-time-clock-settime
    - rhcos4-audit-rules-time-settimeofday
    - rhcos4-audit-rules-time-stime
    - rhcos4-audit-rules-time-watch-localtime
    - rhcos4-audit-rules-usergroup-modification
    - rhcos4-auditd-data-retention-flush
    - rhcos4-auditd-freq
    - rhcos4-auditd-local-events
    - rhcos4-auditd-log-format
    - rhcos4-auditd-name-format
    - rhcos4-auditd-write-logs
    - rhcos4-configure-crypto-policy
    - rhcos4-configure-ssh-crypto-policy
    - rhcos4-no-empty-passwords
    - rhcos4-selinux-policytype
    - rhcos4-selinux-state
    - rhcos4-service-auditd-enabled
    - rhcos4-sshd-disable-empty-passwords
    - rhcos4-sshd-disable-gssapi-auth
    - rhcos4-sshd-disable-rhosts
    - rhcos4-sshd-disable-root-login
    - rhcos4-sshd-disable-user-known-hosts
    - rhcos4-sshd-do-not-permit-user-env
    - rhcos4-sshd-enable-strictmodes
    - rhcos4-sshd-print-last-log
    - rhcos4-sshd-set-loglevel-info
    - rhcos4-sysctl-kernel-dmesg-restrict
    - rhcos4-sysctl-kernel-kptr-restrict
    - rhcos4-sysctl-kernel-randomize-va-space
    - rhcos4-sysctl-kernel-unprivileged-bpf-disabled
    - rhcos4-sysctl-kernel-yama-ptrace-scope
    - rhcos4-sysctl-net-core-bpf-jit-harden
    title: Australian Cyber Security Centre (ACSC) Essential Eight
  • Run the following command to view the details of the rhcos4-audit-rules-login-events rule:

    $ oc get -n openshift-compliance -oyaml rules rhcos4-audit-rules-login-events

    Example 5.2. Example output

    apiVersion: compliance.openshift.io/v1alpha1
    checkType: Node
    description: |-
      The audit system already collects login information for all users and root. If the auditd daemon is configured to use the augenrules program to read audit rules during daemon startup (the default), add the following lines to a file with suffix .rules in the directory /etc/audit/rules.d in order to watch for attempted manual edits of files involved in storing logon events:
    
      -w /var/log/tallylog -p wa -k logins
      -w /var/run/faillock -p wa -k logins
      -w /var/log/lastlog -p wa -k logins
    
      If the auditd daemon is configured to use the auditctl utility to read audit rules during daemon startup, add the following lines to the /etc/audit/audit.rules file in order to watch for attempted manual edits of files involved in storing logon events:
    
      -w /var/log/tallylog -p wa -k logins
      -w /var/run/faillock -p wa -k logins
      -w /var/log/lastlog -p wa -k logins
    id: xccdf_org.ssgproject.content_rule_audit_rules_login_events
    kind: Rule
    metadata:
      annotations:
        compliance.openshift.io/image-digest: pb-rhcos4hrdkm
        compliance.openshift.io/rule: audit-rules-login-events
        control.compliance.openshift.io/NIST-800-53: AU-2(d);AU-12(c);AC-6(9);CM-6(a)
        control.compliance.openshift.io/PCI-DSS: Req-10.2.3
        policies.open-cluster-management.io/controls: AU-2(d),AU-12(c),AC-6(9),CM-6(a),Req-10.2.3
        policies.open-cluster-management.io/standards: NIST-800-53,PCI-DSS
      creationTimestamp: "2022-10-19T12:07:08Z"
      generation: 1
      labels:
        compliance.openshift.io/profile-bundle: rhcos4
      name: rhcos4-audit-rules-login-events
      namespace: openshift-compliance
      ownerReferences:
      - apiVersion: compliance.openshift.io/v1alpha1
        blockOwnerDeletion: true
        controller: true
        kind: ProfileBundle
        name: rhcos4
        uid: 22350850-af4a-4f5c-9a42-5e7b68b82d7d
      resourceVersion: "44819"
      uid: 75872f1f-3c93-40ca-a69d-44e5438824a4
    rationale: Manual editing of these files may indicate nefarious activity, such as
      an attacker attempting to remove evidence of an intrusion.
    severity: medium
    title: Record Attempts to Alter Logon and Logout Events
    warning: Manual editing of these files may indicate nefarious activity, such as an
      attacker attempting to remove evidence of an intrusion.

5.6.1.1. Compliance Operator profile types

There are two types of compliance profiles available: Platform and Node.

Platform
Platform scans target your OpenShift Container Platform cluster.
Node
Node scans target the nodes of the cluster.
Important

For compliance profiles that have Node and Platform applications, such as pci-dss compliance profiles, you must run both in your OpenShift Container Platform environment.
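
If you are unsure which type a given profile targets, you can inspect its product-type annotation. The following sketch uses the rhcos4-e8 profile shown earlier; the jsonpath expression simply extracts that annotation:

$ oc get -n openshift-compliance profiles.compliance rhcos4-e8 \
  -o jsonpath='{.metadata.annotations.compliance\.openshift\.io/product-type}'

Example output

Node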

5.6.2. Additional resources

5.7. Managing the Compliance Operator

This section describes the lifecycle of security content, including how to use an updated version of compliance content and how to create a custom ProfileBundle object.

5.7.1. ProfileBundle CR example

The ProfileBundle object requires two pieces of information: the URL of the container image that contains the compliance content (contentImage) and the file within that image that contains the content (contentFile). The contentFile parameter is relative to the root of the file system. You can define the built-in rhcos4 ProfileBundle object as shown in the following example:

apiVersion: compliance.openshift.io/v1alpha1
kind: ProfileBundle
metadata:
  creationTimestamp: "2022-10-19T12:06:30Z"
  finalizers:
  - profilebundle.finalizers.compliance.openshift.io
  generation: 1
  name: rhcos4
  namespace: openshift-compliance
  resourceVersion: "46741"
  uid: 22350850-af4a-4f5c-9a42-5e7b68b82d7d
spec:
  contentFile: ssg-rhcos4-ds.xml 1
  contentImage: registry.redhat.io/compliance/openshift-compliance-content-rhel8@sha256:900e... 2
status:
  conditions:
  - lastTransitionTime: "2022-10-19T12:07:51Z"
    message: Profile bundle successfully parsed
    reason: Valid
    status: "True"
    type: Ready
  dataStreamStatus: VALID
1
Location of the file containing the compliance content.
2
Content image location.
Important

The base image used for the content images must include coreutils.

5.7.2. Updating security content

Security content is included as container images that the ProfileBundle objects refer to. To accurately track updates to ProfileBundles and the custom resources parsed from the bundles, such as rules or profiles, identify the container image with the compliance content by using a digest instead of a tag:

$ oc -n openshift-compliance get profilebundles rhcos4 -oyaml

Example output

apiVersion: compliance.openshift.io/v1alpha1
kind: ProfileBundle
metadata:
  creationTimestamp: "2022-10-19T12:06:30Z"
  finalizers:
  - profilebundle.finalizers.compliance.openshift.io
  generation: 1
  name: rhcos4
  namespace: openshift-compliance
  resourceVersion: "46741"
  uid: 22350850-af4a-4f5c-9a42-5e7b68b82d7d
spec:
  contentFile: ssg-rhcos4-ds.xml
  contentImage: registry.redhat.io/compliance/openshift-compliance-content-rhel8@sha256:900e... 1
status:
  conditions:
  - lastTransitionTime: "2022-10-19T12:07:51Z"
    message: Profile bundle successfully parsed
    reason: Valid
    status: "True"
    type: Ready
  dataStreamStatus: VALID

1
Security container image.

Each ProfileBundle is backed by a deployment. When the Compliance Operator detects that the container image digest has changed, the deployment is updated to reflect the change and parse the content again. Using the digest instead of a tag ensures that you use a stable and predictable set of profiles.
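
If you only have a tag and need the corresponding digest, one possible approach (a sketch; the tag placeholder is yours to fill in) is to query the image metadata, which includes the digest, with the oc image info command:

$ oc image info registry.redhat.io/compliance/openshift-compliance-content-rhel8:<tag>

You can then reference the image in the ProfileBundle spec as <image>@<digest>, as shown in the example above.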

5.7.3. Additional resources

5.8. Tailoring the Compliance Operator

While the Compliance Operator comes with ready-to-use profiles, you might need to modify them to fit your organization’s needs and requirements. The process of modifying a profile is called tailoring.

The Compliance Operator provides the TailoredProfile object to help tailor profiles.

5.8.1. Creating a new tailored profile

You can write a tailored profile from scratch by using the TailoredProfile object. Set an appropriate title and description and leave the extends field empty. Indicate to the Compliance Operator what type of scan this custom profile will generate:

  • Node scan: Scans the Operating System.
  • Platform scan: Scans the OpenShift Container Platform configuration.

Procedure

  • Set the following annotation on the TailoredProfile object:

Example new-profile.yaml

apiVersion: compliance.openshift.io/v1alpha1
kind: TailoredProfile
metadata:
  name: new-profile
  annotations:
    compliance.openshift.io/product-type: Node 1
spec:
  extends: ocp4-cis-node 2
  description: My custom profile 3
  title: Custom profile 4
  enableRules:
    - name: ocp4-etcd-unique-ca
      rationale: We really need to enable this
  disableRules:
    - name: ocp4-file-groupowner-cni-conf
      rationale: This does not apply to the cluster

1
Set Node or Platform accordingly.
2
The extends field is optional.
3
Use the description field to describe the function of the new TailoredProfile object.
4
Give your TailoredProfile object a title with the title field.
Note

Adding the -node suffix to the name field of the TailoredProfile object is similar to adding the Node product type annotation and generates an Operating System scan.

5.8.2. Using tailored profiles to extend existing ProfileBundles

While the TailoredProfile CR enables the most common tailoring operations, the XCCDF standard allows even more flexibility in tailoring OpenSCAP profiles. In addition, if your organization has been using OpenSCAP previously, you may have an existing XCCDF tailoring file that you can reuse.

The ComplianceSuite object contains an optional TailoringConfigMap attribute that you can point to a custom tailoring file. The value of the TailoringConfigMap attribute is the name of a config map that must contain a key called tailoring.xml; the value of this key is the tailoring contents.

Procedure

  1. Browse the available rules for the Red Hat Enterprise Linux CoreOS (RHCOS) ProfileBundle:

    $ oc get rules.compliance -n openshift-compliance -l compliance.openshift.io/profile-bundle=rhcos4
  2. Browse the available variables in the same ProfileBundle:

    $ oc get variables.compliance -n openshift-compliance -l compliance.openshift.io/profile-bundle=rhcos4
  3. Create a tailored profile named nist-moderate-modified:

    1. Choose which rules you want to add to the nist-moderate-modified tailored profile. This example extends the rhcos4-moderate profile by disabling two rules and changing one value. Use the rationale value to describe why these changes were made:

      Example new-profile-node.yaml

      apiVersion: compliance.openshift.io/v1alpha1
      kind: TailoredProfile
      metadata:
        name: nist-moderate-modified
      spec:
        extends: rhcos4-moderate
        description: NIST moderate profile
        title: My modified NIST moderate profile
        disableRules:
        - name: rhcos4-file-permissions-var-log-messages
          rationale: The file contains logs of error messages in the system
        - name: rhcos4-account-disable-post-pw-expiration
          rationale: No need to check this as it comes from the IdP
        setValues:
        - name: rhcos4-var-selinux-state
          rationale: Organizational requirements
          value: permissive

      Table 5.2. Attributes for spec variables

      extends: Name of the Profile object upon which this TailoredProfile is built.

      title: Human-readable title of the TailoredProfile.

      disableRules: A list of name and rationale pairs. Each name refers to a name of a rule object that is to be disabled. The rationale value is human-readable text describing why the rule is disabled.

      manualRules: A list of name and rationale pairs. When a manual rule is added, the check result status will always be manual and remediation will not be generated. This attribute is automatic and by default has no values when set as a manual rule.

      enableRules: A list of name and rationale pairs. Each name refers to a name of a rule object that is to be enabled. The rationale value is human-readable text describing why the rule is enabled.

      description: Human-readable text describing the TailoredProfile.

      setValues: A list of name, rationale, and value groupings. Each name refers to a name of the value set. The rationale is human-readable text describing the set. The value is the actual setting.

    2. Add the tailoredProfile.spec.manualRules attribute:

      Example tailoredProfile.spec.manualRules.yaml

      apiVersion: compliance.openshift.io/v1alpha1
      kind: TailoredProfile
      metadata:
        name: ocp4-manual-scc-check
      spec:
        extends: ocp4-cis
        description: This profile extends ocp4-cis by forcing the SCC check to always return MANUAL
        title: OCP4 CIS profile with manual SCC check
        manualRules:
          - name: ocp4-scc-limit-container-allowed-capabilities
            rationale: We use third party software that installs its own SCC with extra privileges

    3. Create the TailoredProfile object:

      $ oc create -n openshift-compliance -f new-profile-node.yaml 1
      1
      The TailoredProfile object is created in the default openshift-compliance namespace.

      Example output

      tailoredprofile.compliance.openshift.io/nist-moderate-modified created

  4. Define the ScanSettingBinding object to bind the new nist-moderate-modified tailored profile to the default ScanSetting object.

    Example new-scansettingbinding.yaml

    apiVersion: compliance.openshift.io/v1alpha1
    kind: ScanSettingBinding
    metadata:
      name: nist-moderate-modified
    profiles:
      - apiGroup: compliance.openshift.io/v1alpha1
        kind: Profile
        name: ocp4-moderate
      - apiGroup: compliance.openshift.io/v1alpha1
        kind: TailoredProfile
        name: nist-moderate-modified
    settingsRef:
      apiGroup: compliance.openshift.io/v1alpha1
      kind: ScanSetting
      name: default

  5. Create the ScanSettingBinding object:

    $ oc create -n openshift-compliance -f new-scansettingbinding.yaml

    Example output

    scansettingbinding.compliance.openshift.io/nist-moderate-modified created

5.9. Retrieving Compliance Operator raw results

When proving compliance for your OpenShift Container Platform cluster, you might need to provide the scan results for auditing purposes.

5.9.1. Obtaining Compliance Operator raw results from a persistent volume

The Compliance Operator generates and stores the raw results in a persistent volume. These results are in Asset Reporting Format (ARF).

Procedure

  1. Explore the ComplianceSuite object:

    $ oc get compliancesuites nist-moderate-modified \
    -o json -n openshift-compliance | jq '.status.scanStatuses[].resultsStorage'

    Example output

    {
         "name": "ocp4-moderate",
         "namespace": "openshift-compliance"
    }
    {
         "name": "nist-moderate-modified-master",
         "namespace": "openshift-compliance"
    }
    {
         "name": "nist-moderate-modified-worker",
         "namespace": "openshift-compliance"
    }

    This shows the persistent volume claims where the raw results are accessible.

  2. Verify the raw data location by using the name and namespace of one of the results:

    $ oc get pvc -n openshift-compliance rhcos4-moderate-worker

    Example output

    NAME                 	STATUS   VOLUME                                 	CAPACITY   ACCESS MODES   STORAGECLASS   AGE
    rhcos4-moderate-worker   Bound	pvc-548f6cfe-164b-42fe-ba13-a07cfbc77f3a   1Gi    	RWO        	gp2        	92m

  3. Fetch the raw results by spawning a pod that mounts the volume and copying the results:

    $ oc create -n openshift-compliance -f pod.yaml

    Example pod.yaml

    apiVersion: "v1"
    kind: Pod
    metadata:
      name: pv-extract
    spec:
      containers:
        - name: pv-extract-pod
          image: registry.access.redhat.com/ubi8/ubi
          command: ["sleep", "3000"]
          volumeMounts:
          - mountPath: "/workers-scan-results"
            name: workers-scan-vol
      volumes:
        - name: workers-scan-vol
          persistentVolumeClaim:
            claimName: rhcos4-moderate-worker

  4. After the pod is running, download the results:

    $ oc cp pv-extract:/workers-scan-results -n openshift-compliance .
    Important

    Spawning a pod that mounts the persistent volume keeps the claim as Bound. If the volume’s storage class uses the ReadWriteOnce access mode, the volume is only mountable by one pod at a time. You must delete the pod upon completion, or it will not be possible for the Operator to schedule a pod and continue storing results in this location.

  5. After the extraction is complete, delete the pod:

    $ oc delete pod pv-extract -n openshift-compliance

5.10. Managing Compliance Operator result and remediation

Each ComplianceCheckResult represents a result of one compliance rule check. If the rule can be remediated automatically, a ComplianceRemediation object with the same name, owned by the ComplianceCheckResult, is created. Unless requested, the remediations are not applied automatically, which gives an OpenShift Container Platform administrator the opportunity to review what the remediation does and only apply a remediation once it has been verified.

5.10.1. Filters for compliance check results

By default, the ComplianceCheckResult objects are labeled with several useful labels that allow you to query the checks and decide on the next steps after the results are generated.

List checks that belong to a specific suite:

$ oc get -n openshift-compliance compliancecheckresults \
  -l compliance.openshift.io/suite=workers-compliancesuite

List checks that belong to a specific scan:

$ oc get -n openshift-compliance compliancecheckresults \
-l compliance.openshift.io/scan=workers-scan

Not all ComplianceCheckResult objects create ComplianceRemediation objects. Only ComplianceCheckResult objects that can be remediated automatically do. A ComplianceCheckResult object has a related remediation if it is labeled with the compliance.openshift.io/automated-remediation label. The name of the remediation is the same as the name of the check.
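
For example, for a failing check named <check_name> that carries the compliance.openshift.io/automated-remediation label, you can fetch the remediation that shares its name:

$ oc get -n openshift-compliance complianceremediations <check_name>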

List all failing checks that can be remediated automatically:

$ oc get -n openshift-compliance compliancecheckresults \
-l 'compliance.openshift.io/check-status=FAIL,compliance.openshift.io/automated-remediation'

List all failing checks with a severity of high:

$ oc get compliancecheckresults -n openshift-compliance \
-l 'compliance.openshift.io/check-status=FAIL,compliance.openshift.io/check-severity=high'

Example output

NAME                                                           STATUS   SEVERITY
nist-moderate-modified-master-configure-crypto-policy          FAIL     high
nist-moderate-modified-master-coreos-pti-kernel-argument       FAIL     high
nist-moderate-modified-master-disable-ctrlaltdel-burstaction   FAIL     high
nist-moderate-modified-master-disable-ctrlaltdel-reboot        FAIL     high
nist-moderate-modified-master-enable-fips-mode                 FAIL     high
nist-moderate-modified-master-no-empty-passwords               FAIL     high
nist-moderate-modified-master-selinux-state                    FAIL     high
nist-moderate-modified-worker-configure-crypto-policy          FAIL     high
nist-moderate-modified-worker-coreos-pti-kernel-argument       FAIL     high
nist-moderate-modified-worker-disable-ctrlaltdel-burstaction   FAIL     high
nist-moderate-modified-worker-disable-ctrlaltdel-reboot        FAIL     high
nist-moderate-modified-worker-enable-fips-mode                 FAIL     high
nist-moderate-modified-worker-no-empty-passwords               FAIL     high
nist-moderate-modified-worker-selinux-state                    FAIL     high
ocp4-moderate-configure-network-policies-namespaces            FAIL     high
ocp4-moderate-fips-mode-enabled-on-all-nodes                   FAIL     high

List all failing checks that must be remediated manually:

$ oc get -n openshift-compliance compliancecheckresults \
-l 'compliance.openshift.io/check-status=FAIL,!compliance.openshift.io/automated-remediation'

The manual remediation steps are typically stored in the description attribute in the ComplianceCheckResult object.

Table 5.3. ComplianceCheckResult Status

PASS: Compliance check ran to completion and passed.

FAIL: Compliance check ran to completion and failed.

INFO: Compliance check ran to completion and found something not severe enough to be considered an error.

MANUAL: Compliance check does not have a way to automatically assess the success or failure and must be checked manually.

INCONSISTENT: Compliance check reports different results from different sources, typically cluster nodes.

ERROR: Compliance check ran, but could not complete properly.

NOT-APPLICABLE: Compliance check did not run because it is not applicable or not selected.

5.10.2. Reviewing a remediation

Review both the ComplianceRemediation object and the ComplianceCheckResult object that owns the remediation. The ComplianceCheckResult object contains human-readable descriptions of what the check does and what the hardening is trying to prevent, as well as other metadata such as the severity and the associated security controls. The ComplianceRemediation object represents a way to fix the problem described in the ComplianceCheckResult. After the first scan, check for remediations with the state MissingDependencies.

Below is an example of a check and a remediation called sysctl-net-ipv4-conf-all-accept-redirects. This example is redacted to only show spec and status and omits metadata:

spec:
  apply: false
  current:
  object:
    apiVersion: machineconfiguration.openshift.io/v1
    kind: MachineConfig
    spec:
      config:
        ignition:
          version: 3.2.0
        storage:
          files:
            - path: /etc/sysctl.d/75-sysctl_net_ipv4_conf_all_accept_redirects.conf
              mode: 0644
              contents:
                source: data:,net.ipv4.conf.all.accept_redirects%3D0
  outdated: {}
status:
  applicationState: NotApplied

The remediation payload is stored in the spec.current attribute. The payload can be any Kubernetes object, but because this remediation was produced by a node scan, the remediation payload in the above example is a MachineConfig object. For Platform scans, the remediation payload is often a different kind of an object (for example, a ConfigMap or Secret object), but typically applying that remediation is up to the administrator, because otherwise the Compliance Operator would have required a very broad set of permissions to manipulate any generic Kubernetes object. An example of remediating a Platform check is provided later in the text.

To see exactly what the remediation does when applied, note that the MachineConfig object contents use the Ignition format for the configuration. See the Ignition specification for further information about the format. In our example, the spec.config.storage.files[0].path attribute specifies the file that is being created by this remediation (/etc/sysctl.d/75-sysctl_net_ipv4_conf_all_accept_redirects.conf) and the spec.config.storage.files[0].contents.source attribute specifies the contents of that file.

Note

The contents of the files are URL-encoded.

Use the following Python script to view the contents:

$ echo "net.ipv4.conf.all.accept_redirects%3D0" | python3 -c "import sys, urllib.parse; print(urllib.parse.unquote(''.join(sys.stdin.readlines())))"

Example output

net.ipv4.conf.all.accept_redirects=0

5.10.3. Applying remediation when using customized machine config pools

When you create a custom MachineConfigPool, add a label to the MachineConfigPool so that the machineConfigPoolSelector present in the KubeletConfig can match the label on the MachineConfigPool. A sketch of such a matching KubeletConfig follows the procedure below.

Important

Do not set protectKernelDefaults: false in the KubeletConfig file, because the MachineConfigPool object might fail to unpause unexpectedly after the Compliance Operator finishes applying remediation.

Procedure

  1. List the nodes.

    $ oc get nodes -n openshift-compliance

    Example output

    NAME                                       STATUS  ROLES  AGE    VERSION
    ip-10-0-128-92.us-east-2.compute.internal  Ready   master 5h21m  v1.23.3+d99c04f
    ip-10-0-158-32.us-east-2.compute.internal  Ready   worker 5h17m  v1.23.3+d99c04f
    ip-10-0-166-81.us-east-2.compute.internal  Ready   worker 5h17m  v1.23.3+d99c04f
    ip-10-0-171-170.us-east-2.compute.internal Ready   master 5h21m  v1.23.3+d99c04f
    ip-10-0-197-35.us-east-2.compute.internal  Ready   master 5h22m  v1.23.3+d99c04f

  2. Add a label to nodes.

    $ oc -n openshift-compliance \
    label node ip-10-0-166-81.us-east-2.compute.internal \
    node-role.kubernetes.io/<machine_config_pool_name>=

    Example output

    node/ip-10-0-166-81.us-east-2.compute.internal labeled

  3. Create custom MachineConfigPool CR.

    apiVersion: machineconfiguration.openshift.io/v1
    kind: MachineConfigPool
    metadata:
      name: <machine_config_pool_name>
      labels:
        pools.operator.machineconfiguration.openshift.io/<machine_config_pool_name>: '' 1
    spec:
      machineConfigSelector:
        matchExpressions:
        - {key: machineconfiguration.openshift.io/role, operator: In, values: [worker,<machine_config_pool_name>]}
      nodeSelector:
        matchLabels:
          node-role.kubernetes.io/<machine_config_pool_name>: ""
    1
    The labels field defines the label name to add for the machine config pool (MCP).
  4. Verify that the MCP was created successfully.

    $ oc get mcp -w
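
The following sketch shows how a KubeletConfig object can select the custom pool created in this procedure through its machineConfigPoolSelector; the object name and the maxPods setting are placeholders for illustration only:

apiVersion: machineconfiguration.openshift.io/v1
kind: KubeletConfig
metadata:
  name: custom-kubelet
spec:
  machineConfigPoolSelector:
    matchLabels:
      pools.operator.machineconfiguration.openshift.io/<machine_config_pool_name>: ""
  kubeletConfig:
    maxPods: 250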

5.10.4. Evaluating KubeletConfig rules against default configuration values

OpenShift Container Platform infrastructure might contain incomplete configuration files at run time, and nodes assume default configuration values for missing configuration options. Some configuration options can be passed as command line arguments. As a result, the Compliance Operator cannot verify if the configuration file on the node is complete because it might be missing options used in the rule checks.

To prevent false negative results where the default configuration value passes a check, the Compliance Operator uses the Node/Proxy API to fetch the configuration for each node in a node pool, then all configuration options that are consistent across nodes in the node pool are stored in a file that represents the configuration for all nodes within that node pool. This increases the accuracy of the scan results.

No additional configuration changes are required to use this feature with the default master and worker node pool configurations.
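
For reference, you can view the kubelet configuration that the Node/Proxy API exposes for a particular node (the node name is a placeholder) with the following command:

$ oc get --raw /api/v1/nodes/<node_name>/proxy/configz | jq '.kubeletconfig'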

5.10.5. Scanning custom node pools

The Compliance Operator does not maintain a copy of each node pool configuration. The Compliance Operator aggregates consistent configuration options for all nodes within a single node pool into one copy of the configuration file. The Compliance Operator then uses the configuration file for a particular node pool to evaluate rules against nodes within that pool.

If your cluster uses custom node pools outside the default worker and master node pools, you must supply additional variables to ensure the Compliance Operator aggregates a configuration file for that node pool.

Procedure

  1. To check the configuration against all pools in an example cluster containing master, worker, and custom example node pools, set the value of the ocp4-var-role-master and ocp4-var-role-worker variables to example in the TailoredProfile object:

    apiVersion: compliance.openshift.io/v1alpha1
    kind: TailoredProfile
    metadata:
      name: cis-example-tp
    spec:
      extends: ocp4-cis
      title: My modified NIST profile to scan example nodes
      setValues:
      - name: ocp4-var-role-master
        value: example
        rationale: test for example nodes
      - name: ocp4-var-role-worker
        value: example
        rationale: test for example nodes
      description: cis-example-scan
  2. Add the example role to the ScanSetting object that will be referenced by the ScanSettingBinding CR:

    apiVersion: compliance.openshift.io/v1alpha1
    kind: ScanSetting
    metadata:
      name: default
      namespace: openshift-compliance
    rawResultStorage:
      rotation: 3
      size: 1Gi
    roles:
    - worker
    - master
    - example
    scanTolerations:
    - effect: NoSchedule
      key: node-role.kubernetes.io/master
      operator: Exists
    schedule: '0 1 * * *'
  3. Create a scan that uses the ScanSettingBinding CR:

    apiVersion: compliance.openshift.io/v1alpha1
    kind: ScanSettingBinding
    metadata:
      name: cis
      namespace: openshift-compliance
    profiles:
    - apiGroup: compliance.openshift.io/v1alpha1
      kind: Profile
      name: ocp4-cis
    - apiGroup: compliance.openshift.io/v1alpha1
      kind: Profile
      name: ocp4-cis-node
    - apiGroup: compliance.openshift.io/v1alpha1
      kind: TailoredProfile
      name: cis-example-tp
    settingsRef:
      apiGroup: compliance.openshift.io/v1alpha1
      kind: ScanSetting
      name: default

The Compliance Operator checks the runtime KubeletConfig through the Node/Proxy API object and then uses variables such as ocp4-var-role-master and ocp4-var-role-worker to determine the nodes to perform the check against. In the ComplianceCheckResult, the KubeletConfig rules are shown as ocp4-cis-kubelet-*. The scan passes only if all selected nodes pass this check.

Verification

  • The Platform KubeletConfig rules are checked through the Node/Proxy object. You can find those rules by running the following command:

    $ oc get rules -o json | jq '.items[] | select(.checkType == "Platform") | select(.metadata.name | contains("ocp4-kubelet-")) | .metadata.name'

5.10.6. Remediating KubeletConfig sub pools

KubeletConfig remediation labels can be applied to MachineConfigPool sub-pools.

Procedure

  • Add a label to the sub-pool MachineConfigPool CR:

    $ oc label mcp <sub-pool-name> pools.operator.machineconfiguration.openshift.io/<sub-pool-name>=

5.10.7. Applying a remediation

The boolean attribute spec.apply controls whether the remediation should be applied by the Compliance Operator. You can apply the remediation by setting the attribute to true:

$ oc -n openshift-compliance \
patch complianceremediations/<scan-name>-sysctl-net-ipv4-conf-all-accept-redirects \
--patch '{"spec":{"apply":true}}' --type=merge

After the Compliance Operator processes the applied remediation, the status.applicationState attribute changes to Applied, or to Error if the remediation is incorrect. When a machine config remediation is applied, that remediation along with all other applied remediations are rendered into a MachineConfig object named 75-$scan-name-$suite-name. That MachineConfig object is subsequently rendered by the Machine Config Operator and finally applied to all the nodes in a machine config pool by an instance of the machine config daemon running on each node.

Note that when the Machine Config Operator applies a new MachineConfig object to nodes in a pool, all the nodes belonging to the pool are rebooted. This might be inconvenient when applying multiple remediations, each of which re-renders the composite 75-$scan-name-$suite-name MachineConfig object. To prevent applying the remediation immediately, you can pause the machine config pool by setting the .spec.paused attribute of a MachineConfigPool object to true.
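
For example, you can pause the worker pool while you apply several remediations, and later set paused back to false to resume:

$ oc patch mcp/worker --type merge --patch '{"spec":{"paused":true}}'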

The Compliance Operator can apply remediations automatically. Set autoApplyRemediations: true in the ScanSetting top-level object.
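
A minimal sketch of such a ScanSetting, showing the relevant top-level field alongside the usual settings (other fields follow the earlier ScanSetting examples and are omitted here for brevity):

apiVersion: compliance.openshift.io/v1alpha1
kind: ScanSetting
metadata:
  name: default-auto-apply
  namespace: openshift-compliance
autoApplyRemediations: true
roles:
- worker
- master
scanTolerations:
- operator: Exists
schedule: '0 1 * * *'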

Warning

Applying remediations automatically should only be done with careful consideration.

5.10.8. Remediating a platform check manually

Checks for Platform scans typically have to be remediated manually by the administrator for two reasons:

  • It is not always possible to automatically determine the value that must be set. One of the checks requires that a list of allowed registries is provided, but the scanner has no way of knowing which registries the organization wants to allow.
  • Different checks modify different API objects, requiring automated remediation to possess root or superuser access to modify objects in the cluster, which is not advised.

Procedure

  1. The example below uses the ocp4-ocp-allowed-registries-for-import rule, which would fail on a default OpenShift Container Platform installation. Inspect the rule by running oc get rule.compliance/ocp4-ocp-allowed-registries-for-import -oyaml. The rule limits the registries that users are allowed to import images from by setting the allowedRegistriesForImport attribute. The warning attribute of the rule also shows the API object being checked, so you can modify it to remediate the issue:

    $ oc edit image.config.openshift.io/cluster

    Example output

    apiVersion: config.openshift.io/v1
    kind: Image
    metadata:
      annotations:
        release.openshift.io/create-only: "true"
      creationTimestamp: "2020-09-10T10:12:54Z"
      generation: 2
      name: cluster
      resourceVersion: "363096"
      selfLink: /apis/config.openshift.io/v1/images/cluster
      uid: 2dcb614e-2f8a-4a23-ba9a-8e33cd0ff77e
    spec:
      allowedRegistriesForImport:
      - domainName: registry.redhat.io
    status:
      externalRegistryHostnames:
      - default-route-openshift-image-registry.apps.user-cluster-09-10-12-07.devcluster.openshift.com
      internalRegistryHostname: image-registry.openshift-image-registry.svc:5000

  2. Re-run the scan:

    $ oc -n openshift-compliance \
    annotate compliancescans/rhcos4-e8-worker compliance.openshift.io/rescan=

5.10.9. Updating remediations

When a new version of compliance content is used, it might deliver a new and different version of a remediation than the previous version. The Compliance Operator keeps the old version of the remediation applied. The OpenShift Container Platform administrator is also notified of the new version to review and apply. A ComplianceRemediation object that was applied earlier, but was then updated, changes its status to Outdated. The outdated objects are labeled so that they can be searched for easily.

The previously applied remediation contents are stored in the spec.outdated attribute of a ComplianceRemediation object and the new, updated contents are stored in the spec.current attribute. After updating the content to a newer version, the administrator needs to review the remediation. As long as the spec.outdated attribute exists, it is used to render the resulting MachineConfig object. After the spec.outdated attribute is removed, the Compliance Operator re-renders the resulting MachineConfig object, which causes the Operator to push the configuration to the nodes.

Procedure

  1. Search for any outdated remediations:

    $ oc -n openshift-compliance get complianceremediations \
    -l complianceoperator.openshift.io/outdated-remediation=

    Example output

    NAME                              STATE
    workers-scan-no-empty-passwords   Outdated

    The currently applied remediation is stored in the Outdated attribute and the new, unapplied remediation is stored in the Current attribute. If you are satisfied with the new version, remove the Outdated field. If you want to keep the updated content, remove the Current and Outdated attributes.

  2. Apply the newer version of the remediation:

    $ oc -n openshift-compliance patch complianceremediations workers-scan-no-empty-passwords \
    --type json -p '[{"op":"remove", "path":/spec/outdated}]'
  3. The remediation state will switch from Outdated to Applied:

    $ oc get -n openshift-compliance complianceremediations workers-scan-no-empty-passwords

    Example output

    NAME                              STATE
    workers-scan-no-empty-passwords   Applied

  4. The nodes will apply the newer remediation version and reboot.

5.10.10. Unapplying a remediation

It might be required to unapply a remediation that was previously applied.

Procedure

  1. Set the apply flag to false:

    $ oc -n openshift-compliance \
    patch complianceremediations/rhcos4-moderate-worker-sysctl-net-ipv4-conf-all-accept-redirects \
    --patch '{"spec":{"apply":false}}' --type=merge
  2. The remediation status will change to NotApplied and the composite MachineConfig object would be re-rendered to not include the remediation.

    Important

    All affected nodes with the remediation will be rebooted.

5.10.11. Removing a KubeletConfig remediation

KubeletConfig remediations are included in node-level profiles. In order to remove a KubeletConfig remediation, you must manually remove it from the KubeletConfig objects. This example demonstrates how to remove the compliance check for the one-rule-tp-node-master-kubelet-eviction-thresholds-set-hard-imagefs-available remediation.

Procedure

  1. Locate the scan-name and compliance check for the one-rule-tp-node-master-kubelet-eviction-thresholds-set-hard-imagefs-available remediation:

    $ oc -n openshift-compliance get remediation \
    one-rule-tp-node-master-kubelet-eviction-thresholds-set-hard-imagefs-available -o yaml

    Example output

    apiVersion: compliance.openshift.io/v1alpha1
    kind: ComplianceRemediation
    metadata:
      annotations:
        compliance.openshift.io/xccdf-value-used: var-kubelet-evictionhard-imagefs-available
      creationTimestamp: "2022-01-05T19:52:27Z"
      generation: 1
      labels:
        compliance.openshift.io/scan-name: one-rule-tp-node-master 1
        compliance.openshift.io/suite: one-rule-ssb-node
      name: one-rule-tp-node-master-kubelet-eviction-thresholds-set-hard-imagefs-available
      namespace: openshift-compliance
      ownerReferences:
      - apiVersion: compliance.openshift.io/v1alpha1
        blockOwnerDeletion: true
        controller: true
        kind: ComplianceCheckResult
        name: one-rule-tp-node-master-kubelet-eviction-thresholds-set-hard-imagefs-available
        uid: fe8e1577-9060-4c59-95b2-3e2c51709adc
      resourceVersion: "84820"
      uid: 5339d21a-24d7-40cb-84d2-7a2ebb015355
    spec:
      apply: true
      current:
        object:
          apiVersion: machineconfiguration.openshift.io/v1
          kind: KubeletConfig
          spec:
            kubeletConfig:
              evictionHard:
                imagefs.available: 10% 2
      outdated: {}
      type: Configuration
    status:
      applicationState: Applied

    1
    The scan name of the remediation.
    2
    The remediation that was added to the KubeletConfig objects.
    Note

    If the remediation invokes an evictionHard kubelet configuration, you must specify all of the evictionHard parameters: memory.available, nodefs.available, nodefs.inodesFree, imagefs.available, and imagefs.inodesFree. If you do not specify all parameters, only the specified parameters are applied and the remediation will not function properly.

  2. Remove the remediation:

    1. Set apply to false for the remediation object:

      $ oc -n openshift-compliance patch \
      complianceremediations/one-rule-tp-node-master-kubelet-eviction-thresholds-set-hard-imagefs-available \
      -p '{"spec":{"apply":false}}' --type=merge
    2. Using the scan-name, find the KubeletConfig object that the remediation was applied to:

      $ oc -n openshift-compliance get kubeletconfig \
      --selector compliance.openshift.io/scan-name=one-rule-tp-node-master

      Example output

      NAME                                 AGE
      compliance-operator-kubelet-master   2m34s

    3. Manually remove the remediation, imagefs.available: 10%, from the KubeletConfig object:

      $ oc edit -n openshift-compliance KubeletConfig compliance-operator-kubelet-master
      Important

      All affected nodes with the remediation will be rebooted.

Note

You must also exclude the rule from any scheduled scans in your tailored profiles that auto-apply remediations; otherwise, the remediation will be reapplied during the next scheduled scan.

5.10.12. Inconsistent ComplianceScan

The ScanSetting object lists the node roles that the compliance scans generated from the ScanSetting or ScanSettingBinding objects would scan. Each node role usually maps to a machine config pool.

Important

It is expected that all machines in a machine config pool are identical and all scan results from the nodes in a pool should be identical.

If some of the results are different from others, the Compliance Operator flags a ComplianceCheckResult object where some of the nodes will report as INCONSISTENT. Such ComplianceCheckResult objects are also labeled with compliance.openshift.io/inconsistent-check.
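
For example, you can list all inconsistent check results by filtering on that label:

$ oc get -n openshift-compliance compliancecheckresults \
-l compliance.openshift.io/inconsistent-check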

Because the number of machines in a pool might be quite large, the Compliance Operator attempts to find the most common state and list the nodes that differ from the common state. The most common state is stored in the compliance.openshift.io/most-common-status annotation and the annotation compliance.openshift.io/inconsistent-source contains pairs of hostname:status of check statuses that differ from the most common status. If no common state can be found, all the hostname:status pairs are listed in the compliance.openshift.io/inconsistent-source annotation.

If possible, a remediation is still created so that the cluster can converge to a compliant status. However, this might not always be possible and correcting the difference between nodes must be done manually. To get a consistent result, re-run the compliance scan by annotating it with the compliance.openshift.io/rescan= option:

$ oc -n openshift-compliance \
annotate compliancescans/rhcos4-e8-worker compliance.openshift.io/rescan=

5.10.13. Additional resources

5.11. Performing advanced Compliance Operator tasks

The Compliance Operator includes options for advanced users for the purpose of debugging or integration with existing tooling.

5.11.1. Using the ComplianceSuite and ComplianceScan objects directly

While it is recommended that users take advantage of the ScanSetting and ScanSettingBinding objects to define the suites and scans, there are valid use cases to define the ComplianceSuite objects directly:

  • Specifying only a single rule to scan. This can be useful for debugging together with the debug: true attribute, which increases the OpenSCAP scanner verbosity, because the debug mode otherwise tends to be quite verbose. Limiting the test to one rule helps to lower the amount of debug information.
  • Providing a custom nodeSelector. In order for a remediation to be applicable, the nodeSelector must match a pool.
  • Pointing the Scan to a bespoke config map with a tailoring file.
  • For testing or development when the overhead of parsing profiles from bundles is not required.

The following example shows a ComplianceSuite that scans the worker machines with only a single rule:

apiVersion: compliance.openshift.io/v1alpha1
kind: ComplianceSuite
metadata:
  name: workers-compliancesuite
spec:
  scans:
    - name: workers-scan
      profile: xccdf_org.ssgproject.content_profile_moderate
      content: ssg-rhcos4-ds.xml
      contentImage: registry.redhat.io/compliance/openshift-compliance-content-rhel8@sha256:45dc...
      debug: true
      rule: xccdf_org.ssgproject.content_rule_no_direct_root_logins
      nodeSelector:
        node-role.kubernetes.io/worker: ""

The ComplianceSuite object and the ComplianceScan objects referred to above specify several attributes in a format that OpenSCAP expects.

To find out the profile, content, or rule values, you can start by creating a similar Suite from ScanSetting and ScanSettingBinding or inspect the objects parsed from the ProfileBundle objects like rules or profiles. Those objects contain the xccdf_org identifiers you can use to refer to them from a ComplianceSuite.
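
For example, to retrieve the XCCDF identifier of the rhcos4-audit-rules-login-events rule shown earlier, you can read its id field directly:

$ oc get -n openshift-compliance rules.compliance rhcos4-audit-rules-login-events \
-o jsonpath='{.id}'

Example output

xccdf_org.ssgproject.content_rule_audit_rules_login_events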

5.11.2. Setting PriorityClass for ScanSetting scans

In large scale environments, the default PriorityClass value can be too low to guarantee that Pods execute scans on time. For clusters that must maintain compliance or guarantee automated scanning, it is recommended to set the priorityClass variable to ensure that the Compliance Operator is always given priority in resource-constrained situations.

Procedure

  • Set the PriorityClass variable:

    apiVersion: compliance.openshift.io/v1alpha1
    strictNodeScan: true
    metadata:
      name: default
      namespace: openshift-compliance
    priorityClass: compliance-high-priority 1
    kind: ScanSetting
    showNotApplicable: false
    rawResultStorage:
      nodeSelector:
        node-role.kubernetes.io/master: ''
      pvAccessModes:
        - ReadWriteOnce
      rotation: 3
      size: 1Gi
      tolerations:
        - effect: NoSchedule
          key: node-role.kubernetes.io/master
          operator: Exists
        - effect: NoExecute
          key: node.kubernetes.io/not-ready
          operator: Exists
          tolerationSeconds: 300
        - effect: NoExecute
          key: node.kubernetes.io/unreachable
          operator: Exists
          tolerationSeconds: 300
        - effect: NoSchedule
          key: node.kubernetes.io/memory-pressure
          operator: Exists
    schedule: 0 1 * * *
    roles:
      - master
      - worker
    scanTolerations:
      - operator: Exists
    1
    If the PriorityClass referenced in the ScanSetting cannot be found, the Operator will leave the PriorityClass empty, issue a warning, and continue scheduling scans without a PriorityClass.

5.11.3. Using raw tailored profiles

While the TailoredProfile CR enables the most common tailoring operations, the XCCDF standard allows even more flexibility in tailoring OpenSCAP profiles. In addition, if your organization has been using OpenSCAP previously, you may have an existing XCCDF tailoring file that you can reuse.

The ComplianceSuite object contains an optional TailoringConfigMap attribute that you can point to a custom tailoring file. The value of the TailoringConfigMap attribute is the name of a config map that must contain a key called tailoring.xml; the value of this key is the tailoring contents.

Procedure

  1. Create the ConfigMap object from a file:

    $ oc -n openshift-compliance \
    create configmap nist-moderate-modified \
    --from-file=tailoring.xml=/path/to/the/tailoringFile.xml
  2. Reference the tailoring file in a scan that belongs to a suite:

    apiVersion: compliance.openshift.io/v1alpha1
    kind: ComplianceSuite
    metadata:
      name: workers-compliancesuite
    spec:
      debug: true
      scans:
        - name: workers-scan
          profile: xccdf_org.ssgproject.content_profile_moderate
          content: ssg-rhcos4-ds.xml
          contentImage: registry.redhat.io/compliance/openshift-compliance-content-rhel8@sha256:45dc...
          debug: true
          tailoringConfigMap:
            name: nist-moderate-modified
          nodeSelector:
            node-role.kubernetes.io/worker: ""

5.11.4. Performing a rescan

Typically you will want to re-run a scan on a defined schedule, like every Monday or daily. It can also be useful to re-run a scan once after fixing a problem on a node. To perform a single scan, annotate the scan with the compliance.openshift.io/rescan= option:

$ oc -n openshift-compliance \
annotate compliancescans/rhcos4-e8-worker compliance.openshift.io/rescan=

A rescan generates four additional MachineConfig objects for the rhcos4-moderate profile:

$ oc get mc

Example output

75-worker-scan-chronyd-or-ntpd-specify-remote-server
75-worker-scan-configure-usbguard-auditbackend
75-worker-scan-service-usbguard-enabled
75-worker-scan-usbguard-allow-hid-and-hub

Important

When the scan setting default-auto-apply label is applied, remediations are applied automatically and outdated remediations automatically update. If there are remediations that were not applied due to dependencies, or remediations that had been outdated, rescanning applies the remediations and might trigger a reboot. Only remediations that use MachineConfig objects trigger reboots. If there are no updates or dependencies to be applied, no reboot occurs.

5.11.5. Setting custom storage size for results

While custom resources such as ComplianceCheckResult represent an aggregated result of one check across all scanned nodes, it can be useful to review the raw results as produced by the scanner. The raw results are produced in the ARF format and can be large (tens of megabytes per node), so it is impractical to store them in a Kubernetes resource backed by the etcd key-value store. Instead, every scan creates a persistent volume (PV) that defaults to 1GB in size. Depending on your environment, you may want to increase the PV size accordingly. This is done using the rawResultStorage.size attribute that is exposed in both the ScanSetting and ComplianceScan resources.

A related parameter is rawResultStorage.rotation, which controls how many scans are retained in the PV before the older scans are rotated. The default value is 3; setting the rotation policy to 0 disables rotation. Given the default rotation policy and an estimate of 100MB per raw ARF scan report, you can calculate the right PV size for your environment.
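
For example, assuming the default rotation of 3 and roughly 100MB per raw ARF report, a node scan that covers three nodes would retain on the order of 3 x 3 x 100MB, or roughly 900MB, so the default 1GB PV would be nearly full and a larger size would be prudent. These figures are only a sizing estimate for your own calculation.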

5.11.5.1. Using custom result storage values

Because OpenShift Container Platform can be deployed in a variety of public clouds or bare metal, the Compliance Operator cannot determine available storage configurations. By default, the Compliance Operator tries to create the PV for storing results using the default storage class of the cluster, but a custom storage class can be configured using the rawResultStorage.storageClassName attribute.

Important

If your cluster does not specify a default storage class, this attribute must be set.

Configure the ScanSetting custom resource to use a standard storage class and create persistent volumes that are 10GB in size and keep the last 10 results:

Example ScanSetting CR

apiVersion: compliance.openshift.io/v1alpha1
kind: ScanSetting
metadata:
  name: default
  namespace: openshift-compliance
rawResultStorage:
  storageClassName: standard
  rotation: 10
  size: 10Gi
roles:
- worker
- master
scanTolerations:
- effect: NoSchedule
  key: node-role.kubernetes.io/master
  operator: Exists
schedule: '0 1 * * *'

5.11.6. Applying remediations generated by suite scans

Although you can use the autoApplyRemediations boolean parameter in a ComplianceSuite object, you can alternatively annotate the object with compliance.openshift.io/apply-remediations. This allows the Operator to apply all of the created remediations.

Procedure

  • Apply the compliance.openshift.io/apply-remediations annotation by running:
$ oc -n openshift-compliance \
annotate compliancesuites/workers-compliancesuite compliance.openshift.io/apply-remediations=

5.11.7. Automatically update remediations

In some cases, a scan with newer content might mark remediations as OUTDATED. As an administrator, you can apply the compliance.openshift.io/remove-outdated annotation to apply new remediations and remove the outdated ones.

Procedure

  • Apply the compliance.openshift.io/remove-outdated annotation:
$ oc -n openshift-compliance \
annotate compliancesuites/workers-compliancesuite compliance.openshift.io/remove-outdated=

Alternatively, set the autoUpdateRemediations flag in a ScanSetting or ComplianceSuite object to update the remediations automatically.
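
As a sketch, the flag is a top-level field on the ScanSetting object, shown here on the default-auto-apply ScanSetting referenced elsewhere in this chapter (other fields omitted for brevity):

apiVersion: compliance.openshift.io/v1alpha1
kind: ScanSetting
metadata:
  name: default-auto-apply
  namespace: openshift-compliance
autoApplyRemediations: true
autoUpdateRemediations: true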

5.11.8. Creating a custom SCC for the Compliance Operator

In some environments, you must create a custom Security Context Constraints (SCC) file to ensure the correct permissions are available to the Compliance Operator api-resource-collector.

Prerequisites

  • You must have admin privileges.

Procedure

  1. Define the SCC in a YAML file named restricted-adjusted-compliance.yaml:

    SecurityContextConstraints object definition

      allowHostDirVolumePlugin: false
      allowHostIPC: false
      allowHostNetwork: false
      allowHostPID: false
      allowHostPorts: false
      allowPrivilegeEscalation: true
      allowPrivilegedContainer: false
      allowedCapabilities: null
      apiVersion: security.openshift.io/v1
      defaultAddCapabilities: null
      fsGroup:
        type: MustRunAs
      kind: SecurityContextConstraints
      metadata:
        name: restricted-adjusted-compliance
      priority: 30 1
      readOnlyRootFilesystem: false
      requiredDropCapabilities:
      - KILL
      - SETUID
      - SETGID
      - MKNOD
      runAsUser:
        type: MustRunAsRange
      seLinuxContext:
        type: MustRunAs
      supplementalGroups:
        type: RunAsAny
      users:
      - system:serviceaccount:openshift-compliance:api-resource-collector 2
      volumes:
      - configMap
      - downwardAPI
      - emptyDir
      - persistentVolumeClaim
      - projected
      - secret

    1
    The priority of this SCC must be higher than any other SCC that applies to the system:authenticated group.
    2
    The service account that is used by the Compliance Operator scanner pod.
  2. Create the SCC:

    $ oc create -n openshift-compliance  -f restricted-adjusted-compliance.yaml

    Example output

    securitycontextconstraints.security.openshift.io/restricted-adjusted-compliance created

Verification

  1. Verify the SCC was created:

    $ oc get -n openshift-compliance scc restricted-adjusted-compliance

    Example output

    NAME                             PRIV    CAPS         SELINUX     RUNASUSER        FSGROUP     SUPGROUP   PRIORITY   READONLYROOTFS   VOLUMES
    restricted-adjusted-compliance   false   <no value>   MustRunAs   MustRunAsRange   MustRunAs   RunAsAny   30         false            ["configMap","downwardAPI","emptyDir","persistentVolumeClaim","projected","secret"]

5.12. Troubleshooting the Compliance Operator

This section describes how to troubleshoot the Compliance Operator. The information can be useful either to diagnose a problem or provide information in a bug report. Some general tips:

  • The Compliance Operator emits Kubernetes events when something important happens. You can either view all events in the cluster using the command:

     $ oc get events -n openshift-compliance

    Or view events for an object like a scan using the command:

    $ oc describe -n openshift-compliance compliancescan/cis-compliance
  • The Compliance Operator consists of several controllers, approximately one per API object. It could be useful to filter only those controllers that correspond to the API object having issues. For example, if a ComplianceRemediation cannot be applied, view the messages from the remediationctrl controller. You can filter the messages from a single controller by parsing with jq, as shown in the following example for the profilebundlectrl controller:

    $ oc -n openshift-compliance logs compliance-operator-775d7bddbd-gj58f \
    | jq -c 'select(.logger == "profilebundlectrl")'
  • The timestamps are logged as seconds since UNIX epoch in UTC. To convert them to a human-readable date, use date -d @timestamp --utc, for example:

    $ date -d @1596184628.955853 --utc
  • Many custom resources, most importantly ComplianceSuite and ScanSetting, allow the debug option to be set. Enabling this option increases verbosity of the OpenSCAP scanner pods, as well as some other helper pods.
  • If a single rule is passing or failing unexpectedly, it could be helpful to run a single scan or a suite with only that rule. Find the rule ID from the corresponding ComplianceCheckResult object and use it as the rule attribute value in a Scan CR. Then, with the debug option enabled, the scanner container logs in the scanner pod show the raw OpenSCAP logs. One way to enable the debug option is shown in the example patch after this list.
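
For example, one way to enable the debug option on the default ScanSetting object is a merge patch. This is a minimal sketch that assumes the default ScanSetting object exists in the openshift-compliance namespace:

$ oc -n openshift-compliance patch scansettings/default \
--type merge -p '{"debug":true}'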

5.12.1. Anatomy of a scan

The following sections outline the components and stages of Compliance Operator scans.

5.12.1.1. Compliance sources

The compliance content is stored in Profile objects that are generated from a ProfileBundle object. The Compliance Operator creates a ProfileBundle object for the cluster and another for the cluster nodes.

$ oc get -n openshift-compliance profilebundle.compliance
$ oc get -n openshift-compliance profile.compliance

The ProfileBundle objects are processed by deployments labeled with the Bundle name. To troubleshoot an issue with the Bundle, you can find the deployment and view logs of the pods in a deployment:

$ oc logs -n openshift-compliance -lprofile-bundle=ocp4 -c profileparser
$ oc get -n openshift-compliance deployments,pods -lprofile-bundle=ocp4
$ oc logs -n openshift-compliance pods/<pod-name>
$ oc describe -n openshift-compliance pod/<pod-name> -c profileparser

5.12.1.2. The ScanSetting and ScanSettingBinding objects lifecycle and debugging

With valid compliance content sources, the high-level ScanSetting and ScanSettingBinding objects can be used to generate ComplianceSuite and ComplianceScan objects:

apiVersion: compliance.openshift.io/v1alpha1
kind: ScanSetting
metadata:
  name: my-companys-constraints
debug: true
# For each role, a separate scan will be created pointing
# to a node-role specified in roles
roles:
  - worker
---
apiVersion: compliance.openshift.io/v1alpha1
kind: ScanSettingBinding
metadata:
  name: my-companys-compliance-requirements
profiles:
  # Node checks
  - name: rhcos4-e8
    kind: Profile
    apiGroup: compliance.openshift.io/v1alpha1
  # Cluster checks
  - name: ocp4-e8
    kind: Profile
    apiGroup: compliance.openshift.io/v1alpha1
settingsRef:
  name: my-companys-constraints
  kind: ScanSetting
  apiGroup: compliance.openshift.io/v1alpha1

Both ScanSetting and ScanSettingBinding objects are handled by the same controller tagged with logger=scansettingbindingctrl. These objects have no status. Any issues are communicated in the form of events:

Events:
  Type     Reason        Age    From                    Message
  ----     ------        ----   ----                    -------
  Normal   SuiteCreated  9m52s  scansettingbindingctrl  ComplianceSuite openshift-compliance/my-companys-compliance-requirements created

Now a ComplianceSuite object is created. The flow continues to reconcile the newly created ComplianceSuite.

5.12.1.3. ComplianceSuite custom resource lifecycle and debugging

The ComplianceSuite CR is a wrapper around ComplianceScan CRs. The ComplianceSuite CR is handled by the controller tagged with logger=suitectrl. This controller handles creating scans from a suite, reconciling and aggregating individual Scan statuses into a single Suite status. If a suite is set to execute periodically, the suitectrl also handles creating a CronJob CR that re-runs the scans in the suite after the initial run is done:

$ oc get cronjobs

Example output

NAME                                           SCHEDULE    SUSPEND   ACTIVE   LAST SCHEDULE   AGE
<cron_name>                                    0 1 * * *   False     0        <none>          151m

For the most important issues, events are emitted. View them with oc describe compliancesuites/<name>, as shown in the example after this paragraph. The Suite objects also have a Status subresource that is updated when any of the Scan objects that belong to this suite update their Status subresource. After all expected scans are created, control is passed to the scan controller.
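
For example, using the suite created from the ScanSettingBinding object shown earlier in this section, you can inspect the emitted events and the Status subresource with:

$ oc -n openshift-compliance describe compliancesuites/my-companys-compliance-requirements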

5.12.1.4. ComplianceScan custom resource lifecycle and debugging

The ComplianceScan CRs are handled by the scanctrl controller. This is also where the actual scans happen and the scan results are created. Each scan goes through several phases:

5.12.1.4.1. Pending phase

The scan is validated for correctness in this phase. If some parameters, such as storage size, are invalid, the scan transitions to DONE with an ERROR result; otherwise, it proceeds to the Launching phase.

5.12.1.4.2. Launching phase

In this phase, several config maps are created that contain either the environment for the scanner pods or the script that the scanner pods evaluate. List the config maps:

$ oc -n openshift-compliance get cm \
-l compliance.openshift.io/scan-name=rhcos4-e8-worker,complianceoperator.openshift.io/scan-script=

These config maps will be used by the scanner pods. If you need to modify the scanner behavior, change the scanner debug level, or print the raw results, modifying the config maps is the way to go. Afterwards, a persistent volume claim is created for each scan to store the raw ARF results:

$ oc get pvc -n openshift-compliance -lcompliance.openshift.io/scan-name=rhcos4-e8-worker

The PVCs are mounted by a per-scan ResultServer deployment. A ResultServer is a simple HTTP server to which the individual scanner pods upload the full ARF results. Each server can run on a different node. The full ARF results might be very large, and you cannot presume that it would be possible to create a volume that could be mounted from multiple nodes at the same time. After the scan is finished, the ResultServer deployment is scaled down. The PVC with the raw results can be mounted from another custom pod, as shown in the example after this paragraph, and the results can be fetched or inspected. The traffic between the scanner pods and the ResultServer is protected by mutual TLS.
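
The following is a minimal sketch of such a custom pod. The pod name, image, and mount path are illustrative only, and the claimName assumes that the PVC listed by the previous command is named rhcos4-e8-worker:

Example pod that mounts the raw results PVC

apiVersion: v1
kind: Pod
metadata:
  name: pv-extract
  namespace: openshift-compliance
spec:
  containers:
  - name: pv-extract-pod
    image: registry.access.redhat.com/ubi9/ubi
    command: ["sleep", "3000"]
    volumeMounts:
    - mountPath: /scan-results
      name: scan-vol
  volumes:
  - name: scan-vol
    persistentVolumeClaim:
      claimName: rhcos4-e8-worker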

Finally, the scanner pods are launched in this phase; one scanner pod for a Platform scan instance and one scanner pod per matching node for a node scan instance. The per-node pods are labeled with the node name. Each pod is always labeled with the ComplianceScan name:

$ oc get pods -lcompliance.openshift.io/scan-name=rhcos4-e8-worker,workload=scanner --show-labels

Example output

NAME                                                              READY   STATUS      RESTARTS   AGE   LABELS
rhcos4-e8-worker-ip-10-0-169-90.eu-north-1.compute.internal-pod   0/2     Completed   0          39m   compliance.openshift.io/scan-name=rhcos4-e8-worker,targetNode=ip-10-0-169-90.eu-north-1.compute.internal,workload=scanner

The scan then proceeds to the Running phase.

5.12.1.4.3. Running phase

The running phase waits until the scanner pods finish. The following terms and processes are in use in the running phase:

  • init container: There is one init container called content-container. It runs the contentImage container and executes a single command that copies the contentFile to the /content directory shared with the other containers in this pod.
  • scanner: This container runs the scan. For node scans, the container mounts the node filesystem as /host and mounts the content delivered by the init container. The container also mounts the entrypoint ConfigMap created in the Launching phase and executes it. The default script in the entrypoint ConfigMap executes OpenSCAP and stores the result files in the /results directory shared between the pod’s containers. Logs from this pod can be viewed to determine what the OpenSCAP scanner checked. More verbose output can be viewed with the debug flag.
  • logcollector: The logcollector container waits until the scanner container finishes. Then, it uploads the full ARF results to the ResultServer and separately uploads the XCCDF results along with scan result and OpenSCAP result code as a ConfigMap. These result config maps are labeled with the scan name (compliance.openshift.io/scan-name=rhcos4-e8-worker):

    $ oc describe cm/rhcos4-e8-worker-ip-10-0-169-90.eu-north-1.compute.internal-pod

    Example output

          Name:         rhcos4-e8-worker-ip-10-0-169-90.eu-north-1.compute.internal-pod
          Namespace:    openshift-compliance
          Labels:       compliance.openshift.io/scan-name-scan=rhcos4-e8-worker
                        complianceoperator.openshift.io/scan-result=
          Annotations:  compliance-remediations/processed:
                        compliance.openshift.io/scan-error-msg:
                        compliance.openshift.io/scan-result: NON-COMPLIANT
                        OpenSCAP-scan-result/node: ip-10-0-169-90.eu-north-1.compute.internal
    
          Data
          ====
          exit-code:
          ----
          2
          results:
          ----
          <?xml version="1.0" encoding="UTF-8"?>
          ...

Scanner pods for Platform scans are similar, except:

  • There is one extra init container called api-resource-collector that reads the OpenSCAP content provided by the content-container init container, figures out which API resources the content needs to examine, and stores those API resources in a shared directory from which the scanner container reads them.
  • The scanner container does not need to mount the host file system.

When the scanner pods are done, the scans move on to the Aggregating phase.

5.12.1.4.4. Aggregating phase

In the aggregating phase, the scan controller spawns yet another pod called the aggregator pod. Its purpose is to take the result ConfigMap objects, read the results, and for each check result create the corresponding Kubernetes object. If the check failure can be automatically remediated, a ComplianceRemediation object is created. To provide human-readable metadata for the checks and remediations, the aggregator pod also mounts the OpenSCAP content using an init container.

When a config map is processed by an aggregator pod, it is labeled with the compliance-remediations/processed label. The results of this phase are ComplianceCheckResult objects:

$ oc get compliancecheckresults -lcompliance.openshift.io/scan-name=rhcos4-e8-worker

Example output

NAME                                                       STATUS   SEVERITY
rhcos4-e8-worker-accounts-no-uid-except-zero               PASS     high
rhcos4-e8-worker-audit-rules-dac-modification-chmod        FAIL     medium

and ComplianceRemediation objects:

$ oc get complianceremediations -lcompliance.openshift.io/scan-name=rhcos4-e8-worker

Example output

NAME                                                       STATE
rhcos4-e8-worker-audit-rules-dac-modification-chmod        NotApplied
rhcos4-e8-worker-audit-rules-dac-modification-chown        NotApplied
rhcos4-e8-worker-audit-rules-execution-chcon               NotApplied
rhcos4-e8-worker-audit-rules-execution-restorecon          NotApplied
rhcos4-e8-worker-audit-rules-execution-semanage            NotApplied
rhcos4-e8-worker-audit-rules-execution-setfiles            NotApplied

After these CRs are created, the aggregator pod exits and the scan moves on to the Done phase.

5.12.1.4.5. Done phase

In the final scan phase, the scan resources are cleaned up if needed and the ResultServer deployment is either scaled down (if the scan was one-time) or deleted if the scan is continuous; the next scan instance would then recreate the deployment again.

It is also possible to trigger a re-run of a scan in the Done phase by annotating it:

$ oc -n openshift-compliance \
annotate compliancescans/rhcos4-e8-worker compliance.openshift.io/rescan=

After the scan reaches the Done phase, nothing else happens on its own unless the remediations are set to be applied automatically with autoApplyRemediations: true. The OpenShift Container Platform administrator would now review the remediations and apply them as needed. If the remediations are set to be applied automatically, the ComplianceSuite controller takes over in the Done phase, pauses the machine config pool to which the scan maps, and applies all the remediations in one go. If a remediation is applied, the ComplianceRemediation controller takes over.

5.12.1.5. ComplianceRemediation controller lifecycle and debugging

The example scan has reported some findings. One of the remediations can be enabled by toggling its apply attribute to true:

$ oc patch complianceremediations/rhcos4-e8-worker-audit-rules-dac-modification-chmod --patch '{"spec":{"apply":true}}' --type=merge

The ComplianceRemediation controller (logger=remediationctrl) reconciles the modified object. The result of the reconciliation is change of status of the remediation object that is reconciled, but also a change of the rendered per-suite MachineConfig object that contains all the applied remediations.

The MachineConfig object always begins with 75- and is named after the scan and the suite:

$ oc get mc | grep 75-

Example output

75-rhcos4-e8-worker-my-companys-compliance-requirements                                                3.2.0             2m46s

The remediations that the machine config currently contains are listed in the machine config’s annotations:

$ oc describe mc/75-rhcos4-e8-worker-my-companys-compliance-requirements

Example output

Name:         75-rhcos4-e8-worker-my-companys-compliance-requirements
Labels:       machineconfiguration.openshift.io/role=worker
Annotations:  remediation/rhcos4-e8-worker-audit-rules-dac-modification-chmod:

The ComplianceRemediation controller’s algorithm works like this:

  • All currently applied remediations are read into an initial remediation set.
  • If the reconciled remediation is supposed to be applied, it is added to the set.
  • A MachineConfig object is rendered from the set and annotated with names of remediations in the set. If the set is empty (the last remediation was unapplied), the rendered MachineConfig object is removed.
  • If and only if the rendered machine config is different from the one already applied in the cluster, the applied MC is updated (or created, or deleted).
  • Creating or modifying a MachineConfig object triggers a reboot of nodes that match the machineconfiguration.openshift.io/role label - see the Machine Config Operator documentation for more details.

The remediation loop ends once the rendered machine config is updated, if needed, and the reconciled remediation object status is updated. In our case, applying the remediation would trigger a reboot. After the reboot, annotate the scan to re-run it:

$ oc -n openshift-compliance \
annotate compliancescans/rhcos4-e8-worker compliance.openshift.io/rescan=

The scan will run and finish. Check for the remediation to pass:

$ oc -n openshift-compliance \
get compliancecheckresults/rhcos4-e8-worker-audit-rules-dac-modification-chmod

Example output

NAME                                                  STATUS   SEVERITY
rhcos4-e8-worker-audit-rules-dac-modification-chmod   PASS     medium

5.12.1.6. Useful labels

Each pod that is spawned by the Compliance Operator is labeled specifically with the scan it belongs to and the work it does. The scan identifier is labeled with the compliance.openshift.io/scan-name label. The workload identifier is labeled with the workload label.

The Compliance Operator schedules the following workloads:

  • scanner: Performs the compliance scan.
  • resultserver: Stores the raw results for the compliance scan.
  • aggregator: Aggregates the results, detects inconsistencies and outputs result objects (checkresults and remediations).
  • suitererunner: Tags a suite to be re-run (when a schedule is set).
  • profileparser: Parses a datastream and creates the appropriate profiles, rules and variables.

When debugging and logs are required for a certain workload, run:

$ oc logs -l workload=<workload_name> -c <container_name>
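
For example, the following command is a minimal sketch that fetches the scanner container logs for the rhcos4-e8-worker scan used throughout this section:

$ oc -n openshift-compliance logs \
-l compliance.openshift.io/scan-name=rhcos4-e8-worker,workload=scanner -c scanner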

5.12.2. Increasing Compliance Operator resource limits

In some cases, the Compliance Operator might require more memory than the default limits allow. The best way to mitigate this issue is to set custom resource limits.

To increase the default memory and CPU limits of scanner pods, see the ScanSetting custom resource.

Procedure

  1. To increase the Operator’s memory limits to 500 Mi, create the following patch file named co-memlimit-patch.yaml:

    spec:
      config:
        resources:
          limits:
            memory: 500Mi
  2. Apply the patch file:

    $ oc patch sub compliance-operator -nopenshift-compliance --patch-file co-memlimit-patch.yaml --type=merge

5.12.3. Configuring Operator resource constraints

The resources field defines Resource Constraints for all the containers in the Pod created by the Operator Lifecycle Manager (OLM).

Note

Resource constraints applied in this process overwrite the existing resource constraints.

Procedure

  • Inject a request of 0.25 cpu and 64 Mi of memory, and a limit of 0.5 cpu and 128 Mi of memory in each container by editing the Subscription object:

    kind: Subscription
    metadata:
      name: custom-operator
    spec:
      package: etcd
      channel: alpha
      config:
        resources:
          requests:
            memory: "64Mi"
            cpu: "250m"
          limits:
            memory: "128Mi"
            cpu: "500m"

5.12.4. Configuring ScanSetting timeout

The ScanSetting object has a timeout option that can be specified in the ComplianceScanSetting object as a duration string, such as 1h30m. If the scan does not finish within the specified timeout, the scan is reattempted until the maxRetryOnTimeout limit is reached.

Procedure

  • To set a timeout and maxRetryOnTimeout in ScanSetting, modify an existing ScanSetting object:

    apiVersion: compliance.openshift.io/v1alpha1
    kind: ScanSetting
    metadata:
      name: default
      namespace: openshift-compliance
    rawResultStorage:
      rotation: 3
      size: 1Gi
    roles:
    - worker
    - master
    scanTolerations:
    - effect: NoSchedule
      key: node-role.kubernetes.io/master
      operator: Exists
    schedule: '0 1 * * *'
    timeout: '10m0s' 1
    maxRetryOnTimeout: 3 2
    1
    The timeout variable is defined as a duration string, such as 1h30m. The default value is 30m. To disable the timeout, set the value to 0s.
    2
    The maxRetryOnTimeout variable defines how many times a retry is attempted. The default value is 3.

5.12.5. Getting support

If you experience difficulty with a procedure described in this documentation, or with OpenShift Container Platform in general, visit the Red Hat Customer Portal. From the Customer Portal, you can:

  • Search or browse through the Red Hat Knowledgebase of articles and solutions relating to Red Hat products.
  • Submit a support case to Red Hat Support.
  • Access other product documentation.

To identify issues with your cluster, you can use Insights in OpenShift Cluster Manager. Insights provides details about issues and, if available, information on how to solve a problem.

If you have a suggestion for improving this documentation or have found an error, submit a Jira issue for the most relevant documentation component. Please provide specific details, such as the section name and OpenShift Container Platform version.

5.13. Uninstalling the Compliance Operator

You can remove the OpenShift Compliance Operator from your cluster by using the OpenShift Container Platform web console or the CLI.

5.13.1. Uninstalling the OpenShift Compliance Operator from OpenShift Container Platform using the web console

To remove the Compliance Operator, you must first delete the objects in the namespace. After the objects are removed, you can remove the Operator and its namespace by deleting the openshift-compliance project.

Prerequisites

  • Access to an OpenShift Container Platform cluster using an account with cluster-admin permissions.
  • The OpenShift Compliance Operator must be installed.

Procedure

To remove the Compliance Operator by using the OpenShift Container Platform web console:

  1. Go to the Operators → Installed Operators → Compliance Operator page.

    1. Click All instances.
    2. In All namespaces, click the Options menu and delete all ScanSettingBinding, ComplianceSuite, ComplianceScan, and ProfileBundle objects.
  2. Switch to the Administration → Operators → Installed Operators page.
  3. Click the Options menu on the Compliance Operator entry and select Uninstall Operator.
  4. Switch to the Home → Projects page.
  5. Search for 'compliance'.
  6. Click the Options menu next to the openshift-compliance project, and select Delete Project.

    1. Confirm the deletion by typing openshift-compliance in the dialog box, and click Delete.

5.13.2. Uninstalling the OpenShift Compliance Operator from OpenShift Container Platform using the CLI

To remove the Compliance Operator, you must first delete the objects in the namespace. After the objects are removed, you can remove the Operator and its namespace by deleting the openshift-compliance project.

Prerequisites

  • Access to an OpenShift Container Platform cluster using an account with cluster-admin permissions.
  • The OpenShift Compliance Operator must be installed.

Procedure

  1. Delete all objects in the namespace.

    1. Delete the ScanSettingBinding objects:

      $ oc delete ssb <ScanSettingBinding-name> -n openshift-compliance
    2. Delete the ScanSetting objects:

      $ oc delete ss <ScanSetting-name> -n openshift-compliance
    3. Delete the ComplianceSuite objects:

      $ oc delete suite <compliancesuite-name> -n openshift-compliance
    4. Delete the ComplianceScan objects:

      $ oc delete scan <compliancescan-name> -n openshift-compliance
    5. Obtain the ProfileBundle objects:

      $ oc get profilebundle.compliance -n openshift-compliance

      Example output

      NAME     CONTENTIMAGE                                                                     CONTENTFILE         STATUS
      ocp4     registry.redhat.io/compliance/openshift-compliance-content-rhel8@sha256:<hash>   ssg-ocp4-ds.xml     VALID
      rhcos4   registry.redhat.io/compliance/openshift-compliance-content-rhel8@sha256:<hash>   ssg-rhcos4-ds.xml   VALID

    6. Delete the ProfileBundle objects:

      $ oc delete profilebundle.compliance ocp4 rhcos4 -n openshift-compliance

      Example output

      profilebundle.compliance.openshift.io "ocp4" deleted
      profilebundle.compliance.openshift.io "rhcos4" deleted

  2. Delete the Subscription object:

    $ oc delete sub <Subscription-Name> -n openshift-compliance
  3. Delete the CSV object:

    $ oc delete csv <ComplianceCSV-Name> -n openshift-compliance
  4. Delete the project:

    $ oc delete project openshift-compliance

    Example output

    project.project.openshift.io "openshift-compliance" deleted

Verification

  1. Confirm the namespace is deleted:

    $ oc get project/openshift-compliance

    Example output

    Error from server (NotFound): namespaces "openshift-compliance" not found

5.14. Using the oc-compliance plugin

Although the Compliance Operator automates many of the checks and remediations for the cluster, the full process of bringing a cluster into compliance often requires administrator interaction with the Compliance Operator API and other components. The oc-compliance plugin makes the process easier.

5.14.1. Installing the oc-compliance plugin

Procedure

  1. Extract the oc-compliance image to get the oc-compliance binary:

    $ podman run --rm -v ~/.local/bin:/mnt/out:Z registry.redhat.io/compliance/oc-compliance-rhel8:stable /bin/cp /usr/bin/oc-compliance /mnt/out/

    Example output

    W0611 20:35:46.486903   11354 manifest.go:440] Chose linux/amd64 manifest from the manifest list.

    You can now run oc-compliance.

5.14.2. Fetching raw results

When a compliance scan finishes, the results of the individual checks are listed in the resulting ComplianceCheckResult custom resource (CR). However, an administrator or auditor might require the complete details of the scan. The OpenSCAP tool creates an Asset Reporting Format (ARF) formatted file with the detailed results. This ARF file is too large to store in a config map or other standard Kubernetes resource, so a persistent volume (PV) is created to contain it.

Procedure

  • Fetching the results from the PV with the Compliance Operator is a four-step process. However, with the oc-compliance plugin, you can use a single command:

    $ oc compliance fetch-raw <object-type> <object-name> -o <output-path>
  • <object-type> can be either scansettingbinding, compliancescan or compliancesuite, depending on which of these objects the scans were launched with.
  • <object-name> is the name of the binding, suite, or scan object to gather the ARF file for, and <output-path> is the local directory to place the results.

    For example:

    $ oc compliance fetch-raw scansettingbindings my-binding -o /tmp/

    Example output

    Fetching results for my-binding scans: ocp4-cis, ocp4-cis-node-worker, ocp4-cis-node-master
    Fetching raw compliance results for scan 'ocp4-cis'.......
    The raw compliance results are available in the following directory: /tmp/ocp4-cis
    Fetching raw compliance results for scan 'ocp4-cis-node-worker'...........
    The raw compliance results are available in the following directory: /tmp/ocp4-cis-node-worker
    Fetching raw compliance results for scan 'ocp4-cis-node-master'......
    The raw compliance results are available in the following directory: /tmp/ocp4-cis-node-master

View the list of files in the directory:

$ ls /tmp/ocp4-cis-node-master/

Example output

ocp4-cis-node-master-ip-10-0-128-89.ec2.internal-pod.xml.bzip2  ocp4-cis-node-master-ip-10-0-150-5.ec2.internal-pod.xml.bzip2  ocp4-cis-node-master-ip-10-0-163-32.ec2.internal-pod.xml.bzip2

Extract the results:

$ bunzip2 -c resultsdir/worker-scan/worker-scan-stage-459-tqkg7-compute-0-pod.xml.bzip2 > resultsdir/worker-scan/worker-scan-ip-10-0-170-231.us-east-2.compute.internal-pod.xml

View the results:

$ ls resultsdir/worker-scan/

Example output

worker-scan-ip-10-0-170-231.us-east-2.compute.internal-pod.xml
worker-scan-stage-459-tqkg7-compute-0-pod.xml.bzip2
worker-scan-stage-459-tqkg7-compute-1-pod.xml.bzip2

5.14.3. Re-running scans

Although it is possible to run scans as scheduled jobs, you must often re-run a scan on demand, particularly after remediations are applied or when other changes to the cluster are made.

Procedure

  • Rerunning a scan with the Compliance Operator requires use of an annotation on the scan object. However, with the oc-compliance plugin you can rerun a scan with a single command. Enter the following command to rerun the scans for the ScanSettingBinding object named my-binding:

    $ oc compliance rerun-now scansettingbindings my-binding

    Example output

    Rerunning scans from 'my-binding': ocp4-cis
    Re-running scan 'openshift-compliance/ocp4-cis'

5.14.4. Using ScanSettingBinding custom resources

When using the ScanSetting and ScanSettingBinding custom resources (CRs) that the Compliance Operator provides, it is possible to run scans for multiple profiles while using a common set of scan options, such as schedule, machine roles, tolerations, and so on. While that is easier than working with multiple ComplianceSuite or ComplianceScan objects, it can confuse new users.

The oc compliance bind subcommand helps you create a ScanSettingBinding CR.

Procedure

  1. Run:

    $ oc compliance bind [--dry-run] -N <binding name> [-S <scansetting name>] <objtype/objname> [..<objtype/objname>]
    • If you omit the -S flag, the default scan setting provided by the Compliance Operator is used.
    • The object type is the Kubernetes object type, which can be profile or tailoredprofile. More than one object can be provided.
    • The object name is the name of the Kubernetes resource, such as .metadata.name.
    • Add the --dry-run option to display the YAML file of the objects that are created.

      For example, given the following profiles and scan settings:

      $ oc get profile.compliance -n openshift-compliance

      Example output

      NAME              AGE
      ocp4-cis          9m54s
      ocp4-cis-node     9m54s
      ocp4-e8           9m54s
      ocp4-moderate     9m54s
      ocp4-ncp          9m54s
      rhcos4-e8         9m54s
      rhcos4-moderate   9m54s
      rhcos4-ncp        9m54s
      rhcos4-ospp       9m54s
      rhcos4-stig       9m54s

      $ oc get scansettings -n openshift-compliance

      Example output

      NAME                 AGE
      default              10m
      default-auto-apply   10m

  2. To apply the default settings to the ocp4-cis and ocp4-cis-node profiles, run:

    $ oc compliance bind -N my-binding profile/ocp4-cis profile/ocp4-cis-node

    Example output

    Creating ScanSettingBinding my-binding

    Once the ScanSettingBinding CR is created, scans for both of the bound profiles begin with the related settings. Overall, this is the fastest way to begin scanning with the Compliance Operator.

5.14.5. Printing controls

Compliance standards are generally organized into a hierarchy as follows:

  • A benchmark is the top-level definition of a set of controls for a particular standard. For example, FedRAMP Moderate or Center for Internet Security (CIS) v.1.6.0.
  • A control describes a family of requirements that must be met in order to be in compliance with the benchmark. For example, FedRAMP AC-01 (access control policy and procedures).
  • A rule is a single check that is specific for the system being brought into compliance, and one or more of these rules map to a control.
  • The Compliance Operator handles the grouping of rules into a profile for a single benchmark. It can be difficult to determine which controls the set of rules in a profile satisfies.

Procedure

  • The oc compliance controls subcommand provides a report of the standards and controls that a given profile satisfies:

    $ oc compliance controls profile ocp4-cis-node

    Example output

    +-----------+----------+
    | FRAMEWORK | CONTROLS |
    +-----------+----------+
    | CIS-OCP   | 1.1.1    |
    +           +----------+
    |           | 1.1.10   |
    +           +----------+
    |           | 1.1.11   |
    +           +----------+
    ...

5.14.6. Fetching compliance remediation details

The Compliance Operator provides remediation objects that are used to automate the changes required to make the cluster compliant. The fetch-fixes subcommand can help you understand exactly which configuration remediations are used. Use the fetch-fixes subcommand to extract the remediation objects from a profile, rule, or ComplianceRemediation object into a directory to inspect.

Procedure

  1. View the remediations for a profile:

    $ oc compliance fetch-fixes profile ocp4-cis -o /tmp

    Example output

    No fixes to persist for rule 'ocp4-api-server-api-priority-flowschema-catch-all' 1
    No fixes to persist for rule 'ocp4-api-server-api-priority-gate-enabled'
    No fixes to persist for rule 'ocp4-api-server-audit-log-maxbackup'
    Persisted rule fix to /tmp/ocp4-api-server-audit-log-maxsize.yaml
    No fixes to persist for rule 'ocp4-api-server-audit-log-path'
    No fixes to persist for rule 'ocp4-api-server-auth-mode-no-aa'
    No fixes to persist for rule 'ocp4-api-server-auth-mode-node'
    No fixes to persist for rule 'ocp4-api-server-auth-mode-rbac'
    No fixes to persist for rule 'ocp4-api-server-basic-auth'
    No fixes to persist for rule 'ocp4-api-server-bind-address'
    No fixes to persist for rule 'ocp4-api-server-client-ca'
    Persisted rule fix to /tmp/ocp4-api-server-encryption-provider-cipher.yaml
    Persisted rule fix to /tmp/ocp4-api-server-encryption-provider-config.yaml

    1
    The No fixes to persist warning is expected whenever there are rules in a profile that do not have a corresponding remediation, because either the rule cannot be remediated automatically or a remediation was not provided.
  2. You can view a sample of the YAML file. The head command will show you the first 10 lines:

    $ head /tmp/ocp4-api-server-audit-log-maxsize.yaml

    Example output

    apiVersion: config.openshift.io/v1
    kind: APIServer
    metadata:
      name: cluster
    spec:
      maximumFileSizeMegabytes: 100

  3. View the remediation from a ComplianceRemediation object created after a scan:

    $ oc get complianceremediations -n openshift-compliance

    Example output

    NAME                                             STATE
    ocp4-cis-api-server-encryption-provider-cipher   NotApplied
    ocp4-cis-api-server-encryption-provider-config   NotApplied

    $ oc compliance fetch-fixes complianceremediations ocp4-cis-api-server-encryption-provider-cipher -o /tmp

    Example output

    Persisted compliance remediation fix to /tmp/ocp4-cis-api-server-encryption-provider-cipher.yaml

  4. You can view a sample of the YAML file. The head command will show you the first 10 lines:

    $ head /tmp/ocp4-cis-api-server-encryption-provider-cipher.yaml

    Example output

    apiVersion: config.openshift.io/v1
    kind: APIServer
    metadata:
      name: cluster
    spec:
      encryption:
        type: aescbc

Warning

Use caution before applying remediations directly. Some remediations might not be applicable in bulk, such as the usbguard rules in the moderate profile. In these cases, allow the Compliance Operator to apply the rules because it addresses the dependencies and ensures that the cluster remains in a good state.

5.14.7. Viewing ComplianceCheckResult object details

When scans are finished running, ComplianceCheckResult objects are created for the individual scan rules. The view-result subcommand provides a human-readable output of the ComplianceCheckResult object details.

Procedure

  • Run:

    $ oc compliance view-result ocp4-cis-scheduler-no-bind-address

5.15. Understanding the Custom Resource Definitions

The Compliance Operator in the OpenShift Container Platform provides you with several Custom Resource Definitions (CRDs) to accomplish the compliance scans. To run a compliance scan, it leverages the predefined security policies, which are derived from the ComplianceAsCode community project. The Compliance Operator converts these security policies into CRDs, which you can use to run compliance scans and get remediations for the issues found.

5.15.1. CRDs workflow

The CRDs provide you with the following workflow to complete the compliance scans:

  1. Define your compliance scan requirements
  2. Configure the compliance scan settings
  3. Process compliance requirements with compliance scans settings
  4. Monitor the compliance scans
  5. Check the compliance scan results

5.15.2. Defining the compliance scan requirements

By default, the Compliance Operator CRDs include ProfileBundle and Profile objects, in which you can define and set the rules for your compliance scan requirements. You can also customize the default profiles by using a TailoredProfile object.

5.15.2.1. ProfileBundle object

When you install the Compliance Operator, it includes ready-to-run ProfileBundle objects. The Compliance Operator parses the ProfileBundle object and creates a Profile object for each profile in the bundle. It also parses Rule and Variable objects, which are used by the Profile object.

Example ProfileBundle object

apiVersion: compliance.openshift.io/v1alpha1
kind: ProfileBundle
metadata:
  name: <profile bundle name>
  namespace: openshift-compliance
status:
  dataStreamStatus: VALID 1

1
Indicates whether the Compliance Operator was able to parse the content files.
Note

When the contentFile fails, an errorMessage attribute appears, which provides details of the error that occurred.

Troubleshooting

When you roll back to a known content image from an invalid image, the ProfileBundle object stops responding and displays the PENDING state. As a workaround, you can move to a different image than the previous one, as shown in the example after this paragraph. Alternatively, you can delete and re-create the ProfileBundle object to return to the working state.
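
For example, the following merge patch is a minimal sketch of moving the ocp4 ProfileBundle object to a different content image. The <new_content_image> value is a placeholder for an image in your environment:

$ oc -n openshift-compliance patch profilebundle.compliance/ocp4 \
--type merge -p '{"spec":{"contentImage":"<new_content_image>"}}'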

5.15.2.2. Profile object

The Profile object defines the rules and variables that can be evaluated for a certain compliance standard. It contains parsed out details about an OpenSCAP profile, such as its XCCDF identifier and profile checks for a Node or Platform type. You can either directly use the Profile object or further customize it using a TailoredProfile object.

Note

You cannot create or modify the Profile object manually because it is derived from a single ProfileBundle object. Typically, a single ProfileBundle object can include several Profile objects.
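
For example, to list the Profile objects that were parsed from a given bundle, you can filter on the profile-bundle label. This sketch assumes the ocp4 bundle:

$ oc get -n openshift-compliance profile.compliance \
-l compliance.openshift.io/profile-bundle=ocp4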

Example Profile object

apiVersion: compliance.openshift.io/v1alpha1
description: <description of the profile>
id: xccdf_org.ssgproject.content_profile_moderate 1
kind: Profile
metadata:
  annotations:
    compliance.openshift.io/product: <product name>
    compliance.openshift.io/product-type: Node 2
  creationTimestamp: "YYYY-MM-DDTMM:HH:SSZ"
  generation: 1
  labels:
    compliance.openshift.io/profile-bundle: <profile bundle name>
  name: rhcos4-moderate
  namespace: openshift-compliance
  ownerReferences:
  - apiVersion: compliance.openshift.io/v1alpha1
    blockOwnerDeletion: true
    controller: true
    kind: ProfileBundle
    name: <profile bundle name>
    uid: <uid string>
  resourceVersion: "<version number>"
  selfLink: /apis/compliance.openshift.io/v1alpha1/namespaces/openshift-compliance/profiles/rhcos4-moderate
  uid: <uid string>
rules: 3
- rhcos4-account-disable-post-pw-expiration
- rhcos4-accounts-no-uid-except-zero
- rhcos4-audit-rules-dac-modification-chmod
- rhcos4-audit-rules-dac-modification-chown
title: <title of the profile>

1
Specify the XCCDF name of the profile. Use this identifier when you define a ComplianceScan object as the value of the profile attribute of the scan.
2
Specify either a Node or Platform. Node profiles scan the cluster nodes and platform profiles scan the Kubernetes platform.
3
Specify the list of rules for the profile. Each rule corresponds to a single check.

5.15.2.3. Rule object

The Rule objects, which form the profiles, are also exposed as objects. Use the Rule object to define your compliance check requirements and specify how it could be fixed.

Example Rule object

    apiVersion: compliance.openshift.io/v1alpha1
    checkType: Platform 1
    description: <description of the rule>
    id: xccdf_org.ssgproject.content_rule_configure_network_policies_namespaces 2
    instructions: <manual instructions for the scan>
    kind: Rule
    metadata:
      annotations:
        compliance.openshift.io/rule: configure-network-policies-namespaces
        control.compliance.openshift.io/CIS-OCP: 5.3.2
        control.compliance.openshift.io/NERC-CIP: CIP-003-3 R4;CIP-003-3 R4.2;CIP-003-3
          R5;CIP-003-3 R6;CIP-004-3 R2.2.4;CIP-004-3 R3;CIP-007-3 R2;CIP-007-3 R2.1;CIP-007-3
          R2.2;CIP-007-3 R2.3;CIP-007-3 R5.1;CIP-007-3 R6.1
        control.compliance.openshift.io/NIST-800-53: AC-4;AC-4(21);CA-3(5);CM-6;CM-6(1);CM-7;CM-7(1);SC-7;SC-7(3);SC-7(5);SC-7(8);SC-7(12);SC-7(13);SC-7(18)
      labels:
        compliance.openshift.io/profile-bundle: ocp4
      name: ocp4-configure-network-policies-namespaces
      namespace: openshift-compliance
    rationale: <description of why this rule is checked>
    severity: high 3
    title: <summary of the rule>

1
Specify the type of check this rule executes. Node profiles scan the cluster nodes and Platform profiles scan the Kubernetes platform. An empty value indicates there is no automated check.
2
Specify the XCCDF name of the rule, which is parsed directly from the datastream.
3
Specify the severity of the rule when it fails.
Note

The Rule object gets an appropriate label for an easy identification of the associated ProfileBundle object. The ProfileBundle also gets specified in the OwnerReferences of this object.
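
For example, to list the Rule objects that belong to a given bundle, filter on the same label. This sketch assumes the ocp4 bundle:

$ oc get -n openshift-compliance rule.compliance \
-l compliance.openshift.io/profile-bundle=ocp4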

5.15.2.4. TailoredProfile object

Use the TailoredProfile object to modify the default Profile object based on your organization requirements. You can enable or disable rules, set variable values, and provide justification for the customization. After validation, the TailoredProfile object creates a ConfigMap, which can be referenced by a ComplianceScan object.

Tip

You can use the TailoredProfile object by referencing it in a ScanSettingBinding object. For more information about ScanSettingBinding, see ScanSettingBinding object.

Example TailoredProfile object

apiVersion: compliance.openshift.io/v1alpha1
kind: TailoredProfile
metadata:
  name: rhcos4-with-usb
spec:
  extends: rhcos4-moderate 1
  title: <title of the tailored profile>
  disableRules:
    - name: <name of a rule object to be disabled>
      rationale: <description of why this rule is checked>
status:
  id: xccdf_compliance.openshift.io_profile_rhcos4-with-usb 2
  outputRef:
    name: rhcos4-with-usb-tp 3
    namespace: openshift-compliance
  state: READY 4

1
This is optional. Name of the Profile object upon which the TailoredProfile is built. If no value is set, a new profile is created from the enableRules list.
2
Specifies the XCCDF name of the tailored profile.
3
Specifies the ConfigMap name, which can be used as the value of the tailoringConfigMap.name attribute of a ComplianceScan.
4
Shows the state of the object such as READY, PENDING, and FAILURE. If the state of the object is ERROR, then the attribute status.errorMessage provides the reason for the failure.

With the TailoredProfile object, it is possible to create a new Profile object using the TailoredProfile construct. To create a new Profile, set the following configuration parameters, as shown in the example after this list:

  • an appropriate title
  • extends value must be empty
  • scan type annotation on the TailoredProfile object:

    compliance.openshift.io/product-type: Platform/Node
    Note

    If you have not set the product-type annotation, the Compliance Operator defaults to Platform scan type. Adding the -node suffix to the name of the TailoredProfile object results in node scan type.
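
The following TailoredProfile object is a minimal sketch of creating a new Profile instead of extending an existing one. The object name, enabled rule, and rationale are illustrative only:

Example TailoredProfile object that creates a new Profile

apiVersion: compliance.openshift.io/v1alpha1
kind: TailoredProfile
metadata:
  name: new-profile
  namespace: openshift-compliance
  annotations:
    compliance.openshift.io/product-type: Platform
spec:
  title: <title of the new profile>
  description: <description of the new profile>
  enableRules:
  - name: ocp4-configure-network-policies-namespaces
    rationale: <description of why this rule is enabled>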

5.15.3. Configuring the compliance scan settings

After you have defined the requirements of the compliance scan, you can configure it by specifying the type of the scan, occurrence of the scan, and location of the scan. To do so, Compliance Operator provides you with a ScanSetting object.

5.15.3.1. ScanSetting object

Use the ScanSetting object to define and reuse the operational policies to run your scans. By default, the Compliance Operator creates the following ScanSetting objects:

  • default - it runs a scan every day at 1 AM on both master and worker nodes using a 1Gi Persistent Volume (PV) and keeps the last three results. Remediation is neither applied nor updated automatically.
  • default-auto-apply - it runs a scan every day at 1 AM on both control plane and worker nodes using a 1Gi Persistent Volume (PV) and keeps the last three results. Both autoApplyRemediations and autoUpdateRemediations are set to true.

Example ScanSetting object

apiVersion: compliance.openshift.io/v1alpha1
autoApplyRemediations: true 1
autoUpdateRemediations: true 2
kind: ScanSetting
maxRetryOnTimeout: 3
metadata:
  creationTimestamp: "2022-10-18T20:21:00Z"
  generation: 1
  name: default-auto-apply
  namespace: openshift-compliance
  resourceVersion: "38840"
  uid: 8cb0967d-05e0-4d7a-ac1c-08a7f7e89e84
rawResultStorage:
  nodeSelector:
    node-role.kubernetes.io/master: ""
  pvAccessModes:
  - ReadWriteOnce
  rotation: 3 3
  size: 1Gi 4
  tolerations:
  - effect: NoSchedule
    key: node-role.kubernetes.io/master
    operator: Exists
  - effect: NoExecute
    key: node.kubernetes.io/not-ready
    operator: Exists
    tolerationSeconds: 300
  - effect: NoExecute
    key: node.kubernetes.io/unreachable
    operator: Exists
    tolerationSeconds: 300
  - effect: NoSchedule
    key: node.kubernetes.io/memory-pressure
    operator: Exists
roles: 5
- master
- worker
scanTolerations:
- operator: Exists
schedule: 0 1 * * * 6
showNotApplicable: false
strictNodeScan: true
timeout: 30m

1
Set to true to enable auto remediations. Set to false to disable auto remediations.
2
Set to true to enable auto remediations for content updates. Set to false to disable auto remediations for content updates.
3
Specify the number of stored scans in the raw result format. The default value is 3. As the older results get rotated, the administrator must store the results elsewhere before the rotation happens.
Note

To disable the rotation policy, set the value to 0.

4
Specify the storage size that should be created for the scan to store the raw results. The default value is 1Gi.
5
Specify the node-role.kubernetes.io label value to schedule the scan for Node type. This value has to match the name of a MachineConfigPool.
6
Specify how often the scan should be run in cron format.

5.15.4. Processing the compliance scan requirements with compliance scans settings

When you have defined the compliance scan requirements and configured the settings to run the scans, then the Compliance Operator processes it using the ScanSettingBinding object.

5.15.4.1. ScanSettingBinding object

Use the ScanSettingBinding object to specify your compliance requirements with reference to the Profile or TailoredProfile object. It is then linked to a ScanSetting object, which provides the operational constraints for the scan. Then the Compliance Operator generates the ComplianceSuite object based on the ScanSetting and ScanSettingBinding objects.

Example ScanSettingBinding object

apiVersion: compliance.openshift.io/v1alpha1
kind: ScanSettingBinding
metadata:
  name: <name of the scan>
profiles: 1
  # Node checks
  - name: rhcos4-with-usb
    kind: TailoredProfile
    apiGroup: compliance.openshift.io/v1alpha1
  # Cluster checks
  - name: ocp4-moderate
    kind: Profile
    apiGroup: compliance.openshift.io/v1alpha1
settingsRef: 2
  name: my-companys-constraints
  kind: ScanSetting
  apiGroup: compliance.openshift.io/v1alpha1

1
Specify the details of Profile or TailoredProfile object to scan your environment.
2
Specify the operational constraints, such as schedule and storage size.

The creation of ScanSetting and ScanSettingBinding objects results in the compliance suite. To get the list of compliance suites, run the following command:

$ oc get compliancesuites
Important

If you delete the ScanSettingBinding object, then the compliance suite is also deleted.

5.15.5. Tracking the compliance scans

After the compliance suite is created, you can monitor the status of the deployed scans using the ComplianceSuite object.

5.15.5.1. ComplianceSuite object

The ComplianceSuite object helps you keep track of the state of the scans. It contains the raw settings to create scans and the overall result.

For Node type scans, you should map the scan to the MachineConfigPool, since it contains the remediations for any issues. If you specify a label, ensure it directly applies to a pool.

Example ComplianceSuite object

apiVersion: compliance.openshift.io/v1alpha1
kind: ComplianceSuite
metadata:
  name: <name of the scan>
spec:
  autoApplyRemediations: false 1
  schedule: "0 1 * * *" 2
  scans: 3
    - name: workers-scan
      scanType: Node
      profile: xccdf_org.ssgproject.content_profile_moderate
      content: ssg-rhcos4-ds.xml
      contentImage: registry.redhat.io/compliance/openshift-compliance-content-rhel8@sha256:45dc...
      rule: "xccdf_org.ssgproject.content_rule_no_netrc_files"
      nodeSelector:
        node-role.kubernetes.io/worker: ""
status:
  Phase: DONE 4
  Result: NON-COMPLIANT 5
  scanStatuses:
  - name: workers-scan
    phase: DONE
    result: NON-COMPLIANT

1
Set to true to enable auto remediations. Set to false to disable auto remediations.
2
Specify how often the scan should be run in cron format.
3
Specify a list of scan specifications to run in the cluster.
4
Indicates the progress of the scans.
5
Indicates the overall verdict of the suite.

The suite in the background creates the ComplianceScan objects based on the scans parameter. You can programmatically fetch the ComplianceSuite events. To get the events for the suite, run the following command:

$ oc get events --field-selector involvedObject.kind=ComplianceSuite,involvedObject.name=<name of the suite>
Important

You might introduce errors when you manually define the ComplianceSuite object, because it contains the XCCDF attributes.

5.15.5.2. Advanced ComplianceScan Object

The Compliance Operator includes options for advanced users for debugging or integrating with existing tooling. It is recommended that you do not create a ComplianceScan object directly; instead, manage it by using a ComplianceSuite object.

Example Advanced ComplianceScan object

apiVersion: compliance.openshift.io/v1alpha1
kind: ComplianceScan
metadata:
  name: <name of the scan>
spec:
  scanType: Node 1
  profile: xccdf_org.ssgproject.content_profile_moderate 2
  content: ssg-ocp4-ds.xml
  contentImage: registry.redhat.io/compliance/openshift-compliance-content-rhel8@sha256:45dc... 3
  rule: "xccdf_org.ssgproject.content_rule_no_netrc_files" 4
  nodeSelector: 5
    node-role.kubernetes.io/worker: ""
status:
  phase: DONE 6
  result: NON-COMPLIANT 7

1
Specify either Node or Platform. Node profiles scan the cluster nodes and platform profiles scan the Kubernetes platform.
2
Specify the XCCDF identifier of the profile that you want to run.
3
Specify the container image that encapsulates the profile files.
4
It is optional. Specify the scan to run a single rule. This rule has to be identified with the XCCDF ID, and has to belong to the specified profile.
Note

If you skip the rule parameter, then the scan runs for all the available rules of the specified profile.

5
If you are using OpenShift Container Platform and want to generate a remediation, then the nodeSelector label has to match the MachineConfigPool label.
Note

If you do not specify the nodeSelector parameter or match the MachineConfig label, the scan still runs, but it does not create a remediation.

6
Indicates the current phase of the scan.
7
Indicates the verdict of the scan.
Important

If you delete a ComplianceSuite object, then all the associated scans get deleted.

When the scan is complete, it generates the results as Custom Resources of the ComplianceCheckResult object. However, the raw results are available in ARF format. These results are stored in a Persistent Volume (PV), which has a Persistent Volume Claim (PVC) associated with the name of the scan. You can programmatically fetch the ComplianceScan events. To get the events for the scan, run the following command:

$ oc get events --field-selector involvedObject.kind=ComplianceScan,involvedObject.name=<name of the scan>

5.15.6. Viewing the compliance results

When the compliance suite reaches the DONE phase, you can view the scan results and possible remediations.

5.15.6.1. ComplianceCheckResult object

When you run a scan with a specific profile, several rules in the profiles are verified. For each of these rules, a ComplianceCheckResult object is created, which provides the state of the cluster for a specific rule.

Example ComplianceCheckResult object

apiVersion: compliance.openshift.io/v1alpha1
kind: ComplianceCheckResult
metadata:
  labels:
    compliance.openshift.io/check-severity: medium
    compliance.openshift.io/check-status: FAIL
    compliance.openshift.io/suite: example-compliancesuite
    compliance.openshift.io/scan-name: workers-scan
  name: workers-scan-no-direct-root-logins
  namespace: openshift-compliance
  ownerReferences:
  - apiVersion: compliance.openshift.io/v1alpha1
    blockOwnerDeletion: true
    controller: true
    kind: ComplianceScan
    name: workers-scan
description: <description of scan check>
instructions: <manual instructions for the scan>
id: xccdf_org.ssgproject.content_rule_no_direct_root_logins
severity: medium 1
status: FAIL 2

1
Describes the severity of the scan check.
2
Describes the result of the check. The possible values are:
  • PASS: check was successful.
  • FAIL: check was unsuccessful.
  • INFO: check was successful and found something not severe enough to be considered an error.
  • MANUAL: check cannot automatically assess the status and manual check is required.
  • INCONSISTENT: different nodes report different results.
  • ERROR: check ran, but could not complete.
  • NOTAPPLICABLE: check did not run as it is not applicable.

To get all the check results from a suite, run the following command:

$ oc get compliancecheckresults \
-l compliance.openshift.io/suite=workers-compliancesuite

5.15.6.2. ComplianceRemediation object

A specific check can have a fix specified in the datastream. However, if a fix that can be applied to Kubernetes is available, then the Compliance Operator creates a ComplianceRemediation object.

Example ComplianceRemediation object

apiVersion: compliance.openshift.io/v1alpha1
kind: ComplianceRemediation
metadata:
  labels:
    compliance.openshift.io/suite: example-compliancesuite
    compliance.openshift.io/scan-name: workers-scan
    machineconfiguration.openshift.io/role: worker
  name: workers-scan-disable-users-coredumps
  namespace: openshift-compliance
  ownerReferences:
  - apiVersion: compliance.openshift.io/v1alpha1
    blockOwnerDeletion: true
    controller: true
    kind: ComplianceCheckResult
    name: workers-scan-disable-users-coredumps
    uid: <UID>
spec:
  apply: false 1
  object:
    current: 2
       apiVersion: machineconfiguration.openshift.io/v1
       kind: MachineConfig
       spec:
         config:
           ignition:
             version: 2.2.0
           storage:
             files:
             - contents:
                 source: data:,%2A%20%20%20%20%20hard%20%20%20core%20%20%20%200
               filesystem: root
               mode: 420
               path: /etc/security/limits.d/75-disable_users_coredumps.conf
    outdated: {} 3

1
true indicates the remediation was applied. false indicates the remediation was not applied.
2
Includes the definition of the remediation.
3
Indicates remediation that was previously parsed from an earlier version of the content. The Compliance Operator still retains the outdated objects to give the administrator a chance to review the new remediations before applying them.

To get all the remediations from a suite, run the following command:

$ oc get complianceremediations \
-l compliance.openshift.io/suite=workers-compliancesuite

To list all failing checks that can be remediated automatically, run the following command:

$ oc get compliancecheckresults \
-l 'compliance.openshift.io/check-status in (FAIL),compliance.openshift.io/automated-remediation'

To list all failing checks that can be remediated manually, run the following command:

$ oc get compliancecheckresults \
-l 'compliance.openshift.io/check-status in (FAIL),!compliance.openshift.io/automated-remediation'