Chapter 5. Compliance Operator
5.1. Compliance Operator overview
The OpenShift Container Platform Compliance Operator assists users by automating the inspection of numerous technical implementations and compares those against certain aspects of industry standards, benchmarks, and baselines; the Compliance Operator is not an auditor. In order to be compliant or certified under these various standards, you need to engage an authorized auditor such as a Qualified Security Assessor (QSA), Joint Authorization Board (JAB), or other industry recognized regulatory authority to assess your environment.
The Compliance Operator makes recommendations based on generally available information and practices regarding such standards and may assist with remediations, but actual compliance is your responsibility. You are required to work with an authorized auditor to achieve compliance with a standard. For the latest updates, see the Compliance Operator release notes.
Compliance Operator concepts
Understanding the Compliance Operator
Understanding the Custom Resource Definitions
Compliance Operator management
Installing the Compliance Operator
Updating the Compliance Operator
Managing the Compliance Operator
Uninstalling the Compliance Operator
Compliance Operator scan management
Tailoring the Compliance Operator
Retrieving Compliance Operator raw results
Managing Compliance Operator remediation
Performing advanced Compliance Operator tasks
5.2. Compliance Operator release notes
The Compliance Operator lets OpenShift Container Platform administrators describe the required compliance state of a cluster and provides them with an overview of gaps and ways to remediate them.
These release notes track the development of the Compliance Operator in the OpenShift Container Platform.
For an overview of the Compliance Operator, see Understanding the Compliance Operator.
To access the latest release, see Updating the Compliance Operator.
5.2.1. OpenShift Compliance Operator 1.6.0
The following advisory is available for the OpenShift Compliance Operator 1.6.0:
5.2.1.1. New features and enhancements
- The Compliance Operator now contains supported profiles for Payment Card Industry Data Security Standard (PCI-DSS) version 4. For more information, see Supported compliance profiles.
- The Compliance Operator now contains supported profiles for Defense Information Systems Agency Security Technical Implementation Guide (DISA STIG) V2R1. For more information, see Supported compliance profiles.
- A must-gather extension is now available for the Compliance Operator installed on x86, ppc64le, and s390x architectures. The must-gather tool provides crucial configuration details to Red Hat Customer Support and engineering. For more information, see Using the must-gather tool for the Compliance Operator.
5.2.1.2. Bug fixes
- Before this release, a misleading description in the ocp4-route-ip-whitelist rule resulted in misunderstanding, causing potential for misconfigurations. With this update, the rule is now more clearly defined. (CMP-2485)
- Previously, the reporting of all of the ComplianceCheckResults for a DONE status ComplianceScan was incomplete. With this update, an annotation has been added to report the total number of ComplianceCheckResults for a ComplianceScan with a DONE status. (CMP-2615)
- Previously, the ocp4-cis-scc-limit-container-allowed-capabilities rule description contained ambiguous guidelines, leading to confusion among users. With this update, the rule description and actionable steps are clarified. (OCPBUGS-17828)
- Before this update, sysctl configurations caused certain auto remediations for RHCOS4 rules to fail scans in affected clusters. With this update, the correct sysctl settings are applied and RHCOS4 rules for FedRAMP High profiles pass scans correctly. (OCPBUGS-19690)
- Before this update, an issue with a jq filter caused errors with the rhacs-operator-controller-manager deployment during compliance checks. With this update, the jq filter expression is updated and the rhacs-operator-controller-manager deployment is exempt from compliance checks pertaining to container resource limits, eliminating false positive results. (OCPBUGS-19690)
- Before this update, rhcos4-high and rhcos4-moderate profiles checked values of an incorrectly titled configuration file. As a result, some scan checks could fail. With this update, the rhcos4 profiles now check the correct configuration file and scans pass correctly. (OCPBUGS-31674)
- Previously, the accessTokenInactivityTimeoutSeconds variable used in the oauthclient-inactivity-timeout rule was immutable, leading to a FAIL status when performing DISA STIG scans. With this update, proper enforcement of the accessTokenInactivityTimeoutSeconds variable operates correctly and a PASS status is now possible. (OCPBUGS-32551)
- Before this update, some annotations for rules were not updated, displaying the incorrect control standards. With this update, annotations for rules are updated correctly, ensuring the correct control standards are displayed. (OCPBUGS-34982)
- Previously, when upgrading to Compliance Operator 1.5.1, an incorrectly referenced secret in a ServiceMonitor configuration caused integration issues with the Prometheus Operator. With this update, the Compliance Operator now accurately references the secret containing the token for ServiceMonitor metrics. (OCPBUGS-39417)
5.2.2. OpenShift Compliance Operator 1.5.1
The following advisory is available for the OpenShift Compliance Operator 1.5.1:
5.2.3. OpenShift Compliance Operator 1.5.0
The following advisory is available for the OpenShift Compliance Operator 1.5.0:
5.2.3.1. New features and enhancements
- With this update, the Compliance Operator provides a unique profile ID for easier programmatic use. (CMP-2450)
- With this release, the Compliance Operator is now tested and supported on the ROSA HCP environment. The Compliance Operator loads only Node profiles when running on ROSA HCP. This is because a Red Hat managed platform restricts access to the control plane, which makes Platform profiles irrelevant to the operator’s function. (CMP-2581)
5.2.3.2. Bug fixes
- CVE-2024-2961 is resolved in the Compliance Operator 1.5.0 release. (CVE-2024-2961)
- Previously, for ROSA HCP systems, profile listings were incorrect. This update allows the Compliance Operator to provide correct profile output. (OCPBUGS-34535)
- With this release, namespaces can be excluded from the ocp4-configure-network-policies-namespaces check by setting the ocp4-var-network-policies-namespaces-exempt-regex variable in the tailored profile; a brief sketch follows this list. (CMP-2543)
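The following is a minimal sketch of such a tailored profile. Only the variable name comes from the note above; the profile name, the extends value, and the regular expression are illustrative assumptions.
apiVersion: compliance.openshift.io/v1alpha1
kind: TailoredProfile
metadata:
  name: ocp4-cis-custom           # illustrative name
  namespace: openshift-compliance
spec:
  extends: ocp4-cis               # assumed base profile for this sketch
  title: CIS profile with exempted namespaces
  description: Excludes selected namespaces from the network policy check
  setValues:
  - name: ocp4-var-network-policies-namespaces-exempt-regex
    rationale: These namespaces are managed separately
    value: "openshift-.*|my-exempt-namespace"   # example regular expression only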
5.2.4. OpenShift Compliance Operator 1.4.1
The following advisory is available for the OpenShift Compliance Operator 1.4.1:
5.2.4.1. New features and enhancements
- As of this release, the Compliance Operator now provides the CIS OpenShift 1.5.0 profile rules. (CMP-2447)
- With this update, the Compliance Operator now provides OCP4 STIG ID and SRG with the profile rules. (CMP-2401)
- With this update, obsolete rules being applied to s390x have been removed. (CMP-2471)
5.2.4.2. Bug fixes
- Previously, for Red Hat Enterprise Linux CoreOS (RHCOS) systems using Red Hat Enterprise Linux (RHEL) 9, application of the ocp4-kubelet-enable-protect-kernel-sysctl-file-exist rule failed. This update replaces the rule with ocp4-kubelet-enable-protect-kernel-sysctl. Now, after auto remediation is applied, RHEL 9-based RHCOS systems show PASS upon the application of this rule. (OCPBUGS-13589)
- Previously, after applying compliance remediations using profile rhcos4-e8, the nodes were no longer accessible using SSH to the core user account. With this update, nodes remain accessible through SSH using the sshkey1 option. (OCPBUGS-18331)
- Previously, the STIG profile was missing rules from CaC that fulfill requirements on the published STIG for OpenShift Container Platform. With this update, upon remediation, the cluster satisfies STIG requirements that can be remediated using the Compliance Operator. (OCPBUGS-26193)
- Previously, creating a ScanSettingBinding object with profiles of different types for multiple products bypassed a restriction against multiple product types in a binding. With this update, the product validation now allows multiple products regardless of the profile types in the ScanSettingBinding object. (OCPBUGS-26229)
- Previously, running the rhcos4-service-debug-shell-disabled rule showed as FAIL even after auto-remediation was applied. With this update, running the rhcos4-service-debug-shell-disabled rule now shows PASS after auto-remediation is applied. (OCPBUGS-28242)
- With this update, instructions for the use of the rhcos4-banner-etc-issue rule are enhanced to provide more detail. (OCPBUGS-28797)
- Previously, the api_server_api_priority_flowschema_catch_all rule provided FAIL status on OpenShift Container Platform 4.16 clusters. With this update, the api_server_api_priority_flowschema_catch_all rule provides PASS status on OpenShift Container Platform 4.16 clusters. (OCPBUGS-28918)
- Previously, when a profile was removed from a completed scan shown in a ScanSettingBinding (SSB) object, the Compliance Operator did not remove the old scan. Afterward, when launching a new SSB using the deleted profile, the Compliance Operator failed to update the result. With this release of the Compliance Operator, the new SSB now shows the new compliance check result. (OCPBUGS-29272)
- Previously, on ppc64le architecture, the metrics service was not created. With this update, when deploying the Compliance Operator v1.4.1 on ppc64le architecture, the metrics service is now created correctly. (OCPBUGS-32797)
- Previously, on a HyperShift hosted cluster, a scan with the ocp4-pci-dss profile ran into an unrecoverable error due to a filter cannot iterate issue. With this release, the scan for the ocp4-pci-dss profile reaches done status and returns either a Compliance or Non-Compliance test result. (OCPBUGS-33067)
5.2.5. OpenShift Compliance Operator 1.4.0
The following advisory is available for the OpenShift Compliance Operator 1.4.0:
5.2.5.1. New features and enhancements
- With this update, clusters which use custom node pools outside the default worker and master node pools no longer need to supply additional variables to ensure Compliance Operator aggregates the configuration file for that node pool.
- Users can now pause scan schedules by setting the ScanSetting.suspend attribute to True. This allows users to suspend a scan schedule and reactivate it without the need to delete and re-create the ScanSettingBinding. This simplifies pausing scan schedules during maintenance periods; a brief sketch follows this list. (CMP-2123)
- Compliance Operator now supports an optional version attribute on Profile custom resources. (CMP-2125)
- Compliance Operator now supports profile names in ComplianceRules. (CMP-2126)
- Compliance Operator compatibility with the improved cronjob API is available in this release. (CMP-2310)
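The following is a minimal sketch of a ScanSetting that pauses its scan schedule; only the suspend attribute comes from the note above, while the object name, roles, and schedule are illustrative assumptions.
apiVersion: compliance.openshift.io/v1alpha1
kind: ScanSetting
metadata:
  name: periodic-setting          # illustrative name
  namespace: openshift-compliance
roles:
- worker
- master
schedule: "0 1 * * *"             # example schedule
suspend: true                     # pauses the schedule; set to false to reactivate scans
Setting suspend back to false reactivates the existing schedule without deleting the ScanSettingBinding.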
5.2.5.2. Bug fixes
- Previously, on a cluster with Windows nodes, some rules will FAIL after auto remediation is applied because the Windows nodes were not skipped by the compliance scan. With this release, Windows nodes are correctly skipped when scanning. (OCPBUGS-7355)
- With this update, rprivate default mount propagation is now handled correctly for root volume mounts of pods that rely on multipathing. (OCPBUGS-17494)
- Previously, the Compliance Operator would generate a remediation for coreos_vsyscall_kernel_argument without reconciling the rule even while applying the remediation. With release 1.4.0, the coreos_vsyscall_kernel_argument rule properly evaluates kernel arguments and generates an appropriate remediation. (OCPBUGS-8041)
- Before this update, rule rhcos4-audit-rules-login-events-faillock would fail even after auto-remediation has been applied. With this update, rhcos4-audit-rules-login-events-faillock failure locks are now applied correctly after auto-remediation. (OCPBUGS-24594)
- Previously, upgrades from Compliance Operator 1.3.1 to Compliance Operator 1.4.0 would cause OVS rules scan results to go from PASS to NOT-APPLICABLE. With this update, OVS rules scan results now show PASS. (OCPBUGS-25323)
5.2.6. OpenShift Compliance Operator 1.3.1
The following advisory is available for the OpenShift Compliance Operator 1.3.1:
This update addresses a CVE in an underlying dependency.
5.2.6.1. New features and enhancements
You can install and use the Compliance Operator in an OpenShift Container Platform cluster running in FIPS mode.
Important: To enable FIPS mode for your cluster, you must run the installation program from a Red Hat Enterprise Linux (RHEL) computer configured to operate in FIPS mode. For more information about configuring FIPS mode on RHEL, see Switching RHEL to FIPS mode.
When running Red Hat Enterprise Linux (RHEL) or Red Hat Enterprise Linux CoreOS (RHCOS) booted in FIPS mode, OpenShift Container Platform core components use the RHEL cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3 Validation on only the x86_64, ppc64le, and s390x architectures.
5.2.6.2. Known issue
- On a cluster with Windows nodes, some rules will FAIL after auto remediation is applied because the Windows nodes are not skipped by the compliance scan. This differs from the expected results because the Windows nodes must be skipped when scanning. (OCPBUGS-7355)
5.2.7. OpenShift Compliance Operator 1.3.0
The following advisory is available for the OpenShift Compliance Operator 1.3.0:
5.2.7.1. New features and enhancements
- The Defense Information Systems Agency Security Technical Implementation Guide (DISA-STIG) for OpenShift Container Platform is now available from Compliance Operator 1.3.0. See Supported compliance profiles for additional information.
- Compliance Operator 1.3.0 now supports IBM Power® and IBM Z® for NIST 800-53 Moderate-Impact Baseline for OpenShift Container Platform platform and node profiles.
5.2.8. OpenShift Compliance Operator 1.2.0
The following advisory is available for the OpenShift Compliance Operator 1.2.0:
5.2.8.1. New features and enhancements
The CIS OpenShift Container Platform 4 Benchmark v1.4.0 profile is now available for platform and node applications. To locate the CIS OpenShift Container Platform v4 Benchmark, go to CIS Benchmarks and click Download Latest CIS Benchmark, where you can then register to download the benchmark.
Important: Upgrading to Compliance Operator 1.2.0 will overwrite the CIS OpenShift Container Platform 4 Benchmark 1.1.0 profiles. If your OpenShift Container Platform environment contains existing cis and cis-node remediations, there might be some differences in scan results after upgrading to Compliance Operator 1.2.0.
- Additional clarity for auditing security context constraints (SCCs) is now available for the scc-limit-container-allowed-capabilities rule.
5.2.9. OpenShift Compliance Operator 1.1.0
The following advisory is available for the OpenShift Compliance Operator 1.1.0:
5.2.9.1. New features and enhancements
- A start and end timestamp is now available in the ComplianceScan custom resource definition (CRD) status.
- The Compliance Operator can now be deployed on hosted control planes using the OperatorHub by creating a Subscription file. For more information, see Installing the Compliance Operator on hosted control planes.
5.2.9.2. Bug fixes
Before this update, some Compliance Operator rule instructions were not present. After this update, instructions are improved for the following rules:
- classification_banner
- oauth_login_template_set
- oauth_logout_url_set
- oauth_provider_selection_set
- ocp_allowed_registries
- ocp_allowed_registries_for_import
- Before this update, check accuracy and rule instructions were unclear. After this update, the check accuracy and instructions are improved for the following sysctl rules:
  - kubelet-enable-protect-kernel-sysctl
  - kubelet-enable-protect-kernel-sysctl-kernel-keys-root-maxbytes
  - kubelet-enable-protect-kernel-sysctl-kernel-keys-root-maxkeys
  - kubelet-enable-protect-kernel-sysctl-kernel-panic
  - kubelet-enable-protect-kernel-sysctl-kernel-panic-on-oops
  - kubelet-enable-protect-kernel-sysctl-vm-overcommit-memory
  - kubelet-enable-protect-kernel-sysctl-vm-panic-on-oom
- Before this update, the ocp4-alert-receiver-configured rule did not include instructions. With this update, the ocp4-alert-receiver-configured rule now includes improved instructions. (OCPBUGS-7307)
- Before this update, the rhcos4-sshd-set-loglevel-info rule would fail for the rhcos4-e8 profile. With this update, the remediation for the sshd-set-loglevel-info rule was updated to apply the correct configuration changes, allowing subsequent scans to pass after the remediation is applied. (OCPBUGS-7816)
- Before this update, a new installation of OpenShift Container Platform with the latest Compliance Operator install failed on the scheduler-no-bind-address rule. With this update, the scheduler-no-bind-address rule has been disabled on newer versions of OpenShift Container Platform since the parameter was removed. (OCPBUGS-8347)
5.2.10. OpenShift Compliance Operator 1.0.0
The following advisory is available for the OpenShift Compliance Operator 1.0.0:
5.2.10.1. New features and enhancements
- The Compliance Operator is now stable and the release channel is upgraded to stable. Future releases will follow Semantic Versioning. To access the latest release, see Updating the Compliance Operator.
5.2.10.2. Bug fixes
- Before this update, the compliance_operator_compliance_scan_error_total metric had an ERROR label with a different value for each error message. With this update, the compliance_operator_compliance_scan_error_total metric does not increase in values. (OCPBUGS-1803)
- Before this update, the ocp4-api-server-audit-log-maxsize rule would result in a FAIL state. With this update, the error message has been removed from the metric, decreasing the cardinality of the metric in line with best practices. (OCPBUGS-7520)
- Before this update, the rhcos4-enable-fips-mode rule description was misleading that FIPS could be enabled after installation. With this update, the rhcos4-enable-fips-mode rule description clarifies that FIPS must be enabled at install time. (OCPBUGS-8358)
5.2.11. OpenShift Compliance Operator 0.1.61
The following advisory is available for the OpenShift Compliance Operator 0.1.61:
5.2.11.1. New features and enhancements
- The Compliance Operator now supports timeout configuration for Scanner Pods. The timeout is specified in the ScanSetting object. If the scan is not completed within the timeout, the scan retries until the maximum number of retries is reached. See Configuring ScanSetting timeout for more information; a brief sketch follows this list.
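A minimal sketch of the timeout configuration described above, assuming the timeout and maxRetryOnTimeout fields are set directly on the ScanSetting object; the values and object name are illustrative.
apiVersion: compliance.openshift.io/v1alpha1
kind: ScanSetting
metadata:
  name: scan-with-timeout         # illustrative name
  namespace: openshift-compliance
roles:
- worker
- master
schedule: "0 1 * * *"
timeout: 30m                      # assumed duration format; the scan is retried if it runs longer
maxRetryOnTimeout: 3              # assumed field name for the maximum number of retries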
5.2.11.2. Bug fixes
- Before this update, Compliance Operator remediations required variables as inputs. Remediations without variables set were applied cluster-wide and resulted in stuck nodes, even though it appeared the remediation applied correctly. With this update, the Compliance Operator validates if a variable needs to be supplied using a TailoredProfile for a remediation. (OCPBUGS-3864)
- Before this update, the instructions for ocp4-kubelet-configure-tls-cipher-suites were incomplete, requiring users to refine the query manually. With this update, the query provided in ocp4-kubelet-configure-tls-cipher-suites returns the actual results to perform the audit steps. (OCPBUGS-3017)
- Before this update, system reserved parameters were not generated in kubelet configuration files, causing the Compliance Operator to fail to unpause the machine config pool. With this update, the Compliance Operator omits system reserved parameters during machine configuration pool evaluation. (OCPBUGS-4445)
- Before this update, ComplianceCheckResult objects did not have correct descriptions. With this update, the Compliance Operator sources the ComplianceCheckResult information from the rule description. (OCPBUGS-4615)
- Before this update, the Compliance Operator did not check for empty kubelet configuration files when parsing machine configurations. As a result, the Compliance Operator would panic and crash. With this update, the Compliance Operator implements improved checking of the kubelet configuration data structure and only continues if it is fully rendered. (OCPBUGS-4621)
- Before this update, the Compliance Operator generated remediations for kubelet evictions based on machine config pool name and a grace period, resulting in multiple remediations for a single eviction rule. With this update, the Compliance Operator applies all remediations for a single rule. (OCPBUGS-4338)
- Before this update, a regression occurred when attempting to create a ScanSettingBinding that was using a TailoredProfile with a non-default MachineConfigPool, which marked the ScanSettingBinding as Failed. With this update, functionality is restored and custom ScanSettingBinding using a TailoredProfile performs correctly. (OCPBUGS-6827)
- Before this update, some kubelet configuration parameters did not have default values. With this update, the following parameters contain default values (OCPBUGS-6708):
  - ocp4-cis-kubelet-enable-streaming-connections
  - ocp4-cis-kubelet-eviction-thresholds-set-hard-imagefs-available
  - ocp4-cis-kubelet-eviction-thresholds-set-hard-imagefs-inodesfree
  - ocp4-cis-kubelet-eviction-thresholds-set-hard-memory-available
  - ocp4-cis-kubelet-eviction-thresholds-set-hard-nodefs-available
- Before this update, the selinux_confinement_of_daemons rule failed running on the kubelet because of the permissions necessary for the kubelet to run. With this update, the selinux_confinement_of_daemons rule is disabled. (OCPBUGS-6968)
5.2.12. OpenShift Compliance Operator 0.1.59
The following advisory is available for the OpenShift Compliance Operator 0.1.59:
5.2.12.1. New features and enhancements
- The Compliance Operator now supports Payment Card Industry Data Security Standard (PCI-DSS) ocp4-pci-dss and ocp4-pci-dss-node profiles on the ppc64le architecture.
5.2.12.2. Bug fixes
- Previously, the Compliance Operator did not support the Payment Card Industry Data Security Standard (PCI DSS) ocp4-pci-dss and ocp4-pci-dss-node profiles on different architectures such as ppc64le. Now, the Compliance Operator supports ocp4-pci-dss and ocp4-pci-dss-node profiles on the ppc64le architecture. (OCPBUGS-3252)
- Previously, after the recent update to version 0.1.57, the rerunner service account (SA) was no longer owned by the cluster service version (CSV), which caused the SA to be removed during the Operator upgrade. Now, the CSV owns the rerunner SA in 0.1.59, and upgrades from any previous version will not result in a missing SA. (OCPBUGS-3452)
5.2.13. OpenShift Compliance Operator 0.1.57
The following advisory is available for the OpenShift Compliance Operator 0.1.57:
5.2.13.1. New features and enhancements
- KubeletConfig checks changed from Node to Platform type. KubeletConfig checks the default configuration of the KubeletConfig. The configuration files are aggregated from all nodes into a single location per node pool. See Evaluating KubeletConfig rules against default configuration values.
- The ScanSetting Custom Resource now allows users to override the default CPU and memory limits of scanner pods through the scanLimits attribute. For more information, see Increasing Compliance Operator resource limits.
- A PriorityClass object can now be set through ScanSetting. This ensures the Compliance Operator is prioritized and minimizes the chance that the cluster falls out of compliance. For more information, see Setting PriorityClass for ScanSetting scans. A combined sketch of the scanLimits and priority settings follows this list.
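A combined sketch of both settings on a ScanSetting object. The scanLimits attribute name comes from the note above; the priorityClass field name, the referenced PriorityClass, and the resource values are assumptions for illustration.
apiVersion: compliance.openshift.io/v1alpha1
kind: ScanSetting
metadata:
  name: tuned-setting             # illustrative name
  namespace: openshift-compliance
roles:
- worker
- master
schedule: "0 1 * * *"
priorityClass: compliance-high-priority   # assumed field; references an existing PriorityClass object
scanLimits:
  cpu: 1000m                      # example override of the default scanner pod CPU limit
  memory: 1024Mi                  # example override of the default scanner pod memory limit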
5.2.13.2. Bug fixes
- Previously, the Compliance Operator hard-coded notifications to the default openshift-compliance namespace. If the Operator were installed in a non-default namespace, the notifications would not work as expected. Now, notifications work in non-default openshift-compliance namespaces. (BZ#2060726)
- Previously, the Compliance Operator was unable to evaluate default configurations used by kubelet objects, resulting in inaccurate results and false positives. This new feature evaluates the kubelet configuration and now reports accurately. (BZ#2075041)
- Previously, the Compliance Operator reported the ocp4-kubelet-configure-event-creation rule in a FAIL state after applying an automatic remediation because the eventRecordQPS value was set higher than the default value. Now, the ocp4-kubelet-configure-event-creation rule remediation sets the default value, and the rule applies correctly. (BZ#2082416)
- The ocp4-configure-network-policies rule requires manual intervention to perform effectively. New descriptive instructions and rule updates increase applicability of the ocp4-configure-network-policies rule for clusters using Calico CNIs. (BZ#2091794)
- Previously, the Compliance Operator would not clean up pods used to scan infrastructure when using the debug=true option in the scan settings. This caused pods to be left on the cluster even after deleting the ScanSettingBinding. Now, pods are always deleted when a ScanSettingBinding is deleted. (BZ#2092913)
- Previously, the Compliance Operator used an older version of the operator-sdk command that caused alerts about deprecated functionality. Now, an updated version of the operator-sdk command is included and there are no more alerts for deprecated functionality. (BZ#2098581)
- Previously, the Compliance Operator would fail to apply remediations if it could not determine the relationship between kubelet and machine configurations. Now, the Compliance Operator has improved handling of the machine configurations and is able to determine if a kubelet configuration is a subset of a machine configuration. (BZ#2102511)
- Previously, the rule for ocp4-cis-node-master-kubelet-enable-cert-rotation did not properly describe success criteria. As a result, the requirements for RotateKubeletClientCertificate were unclear. Now, the rule for ocp4-cis-node-master-kubelet-enable-cert-rotation reports accurately regardless of the configuration present in the kubelet configuration file. (BZ#2105153)
- Previously, the rule for checking idle streaming timeouts did not consider default values, resulting in inaccurate rule reporting. Now, more robust checks ensure increased accuracy in results based on default configuration values. (BZ#2105878)
- Previously, the Compliance Operator would fail to fetch API resources when parsing machine configurations without Ignition specifications, which caused the api-check-pods processes to crash loop. Now, the Compliance Operator handles Machine Config Pools that do not have Ignition specifications correctly. (BZ#2117268)
- Previously, rules evaluating the modprobe configuration would fail even after applying remediations due to a mismatch in values for the modprobe configuration. Now, the same values are used for the modprobe configuration in checks and remediations, ensuring consistent results. (BZ#2117747)
5.2.13.3. Deprecations
- Specifying Install into all namespaces in the cluster or setting the WATCH_NAMESPACES environment variable to "" no longer affects all namespaces. Any API resources installed in namespaces not specified at the time of Compliance Operator installation are no longer operational. API resources might require creation in the selected namespace, or the openshift-compliance namespace by default. This change improves the Compliance Operator’s memory usage.
5.2.14. OpenShift Compliance Operator 0.1.53
The following advisory is available for the OpenShift Compliance Operator 0.1.53:
5.2.14.1. Bug fixes
- Previously, the ocp4-kubelet-enable-streaming-connections rule contained an incorrect variable comparison, resulting in false positive scan results. Now, the Compliance Operator provides accurate scan results when setting streamingConnectionIdleTimeout. (BZ#2069891)
- Previously, group ownership for /etc/openvswitch/conf.db was incorrect on IBM Z® architectures, resulting in ocp4-cis-node-worker-file-groupowner-ovs-conf-db check failures. Now, the check is marked NOT-APPLICABLE on IBM Z® architecture systems. (BZ#2072597)
- Previously, the ocp4-cis-scc-limit-container-allowed-capabilities rule reported in a FAIL state due to incomplete data regarding the security context constraints (SCC) rules in the deployment. Now, the result is MANUAL, which is consistent with other checks that require human intervention. (BZ#2077916)
- Previously, the following rules failed to account for additional configuration paths for API servers and TLS certificates and keys, resulting in reported failures even if the certificates and keys were set properly:
  - ocp4-cis-api-server-kubelet-client-cert
  - ocp4-cis-api-server-kubelet-client-key
  - ocp4-cis-kubelet-configure-tls-cert
  - ocp4-cis-kubelet-configure-tls-key
  Now, the rules report accurately and observe legacy file paths specified in the kubelet configuration file. (BZ#2079813)
- Previously, the content_rule_oauth_or_oauthclient_inactivity_timeout rule did not account for a configurable timeout set by the deployment when assessing compliance for timeouts. This resulted in the rule failing even if the timeout was valid. Now, the Compliance Operator uses the var_oauth_inactivity_timeout variable to set valid timeout length. (BZ#2081952)
- Previously, the Compliance Operator used administrative permissions on namespaces not labeled appropriately for privileged use, resulting in warning messages regarding pod security-level violations. Now, the Compliance Operator has appropriate namespace labels and permission adjustments to access results without violating permissions. (BZ#2088202)
- Previously, applying auto remediations for rhcos4-high-master-sysctl-kernel-yama-ptrace-scope and rhcos4-sysctl-kernel-core-pattern resulted in subsequent failures of those rules in scan results, even though they were remediated. Now, the rules report PASS accurately, even after remediations are applied. (BZ#2094382)
- Previously, the Compliance Operator would fail in a CrashLoopBackoff state because of out-of-memory exceptions. Now, the Compliance Operator is improved to handle large machine configuration data sets in memory and function correctly. (BZ#2094854)
5.2.14.2. Known issue
When "debug":true is set within the ScanSettingBinding object, the pods generated by the ScanSettingBinding object are not removed when that binding is deleted. As a workaround, run the following command to delete the remaining pods:
$ oc delete pods -l compliance.openshift.io/scan-name=ocp4-cis
5.2.15. OpenShift Compliance Operator 0.1.52
The following advisory is available for the OpenShift Compliance Operator 0.1.52:
5.2.15.1. New features and enhancements
- The FedRAMP high SCAP profile is now available for use in OpenShift Container Platform environments. For more information, See Supported compliance profiles.
5.2.15.2. Bug fixes
- Previously, the OpenScap container would crash due to a mount permission issue in a security environment where DAC_OVERRIDE capability is dropped. Now, executable mount permissions are applied to all users. (BZ#2082151)
- Previously, the compliance rule ocp4-configure-network-policies could be configured as MANUAL. Now, the compliance rule ocp4-configure-network-policies is set to AUTOMATIC. (BZ#2072431)
- Previously, the Cluster Autoscaler would fail to scale down because the Compliance Operator scan pods were never removed after a scan. Now, the pods are removed from each node by default unless explicitly saved for debugging purposes. (BZ#2075029)
- Previously, applying the Compliance Operator to the KubeletConfig would result in the node going into a NotReady state due to unpausing the Machine Config Pools too early. Now, the Machine Config Pools are unpaused appropriately and the node operates correctly. (BZ#2071854)
- Previously, the Machine Config Operator used base64 instead of url-encoded code in the latest release, causing Compliance Operator remediation to fail. Now, the Compliance Operator checks encoding to handle both base64 and url-encoded Machine Config code and the remediation applies correctly. (BZ#2082431)
5.2.15.3. Known issue
When "debug":true is set within the ScanSettingBinding object, the pods generated by the ScanSettingBinding object are not removed when that binding is deleted. As a workaround, run the following command to delete the remaining pods:
$ oc delete pods -l compliance.openshift.io/scan-name=ocp4-cis
5.2.16. OpenShift Compliance Operator 0.1.49
The following advisory is available for the OpenShift Compliance Operator 0.1.49:
5.2.16.1. New features and enhancements
The Compliance Operator is now supported on the following architectures:
- IBM Power®
- IBM Z®
- IBM® LinuxONE
5.2.16.2. Bug fixes
- Previously, the openshift-compliance content did not include platform-specific checks for network types. As a result, OVN- and SDN-specific checks would show as failed instead of not-applicable based on the network configuration. Now, new rules contain platform checks for networking rules, resulting in a more accurate assessment of network-specific checks. (BZ#1994609)
- Previously, the ocp4-moderate-routes-protected-by-tls rule incorrectly checked TLS settings, which resulted in the rule failing the check even if the connection used a secure SSL/TLS protocol. Now, the check properly evaluates TLS settings that are consistent with the networking guidance and profile recommendations. (BZ#2002695)
- Previously, ocp-cis-configure-network-policies-namespace used pagination when requesting namespaces. This caused the rule to fail because the deployments truncated lists of more than 500 namespaces. Now, the entire namespace list is requested, and the rule for checking configured network policies works for deployments with more than 500 namespaces. (BZ#2038909)
- Previously, remediations using the sshd jinja macros were hard-coded to specific sshd configurations. As a result, the configurations were inconsistent with the content the rules were checking for and the check would fail. Now, the sshd configuration is parameterized and the rules apply successfully. (BZ#2049141)
- Previously, the ocp4-cluster-version-operator-verify-integrity check always checked the first entry in the Cluster Version Operator (CVO) history. As a result, the upgrade would fail in situations where subsequent versions of OpenShift Container Platform would be verified. Now, the compliance check result for ocp4-cluster-version-operator-verify-integrity is able to detect verified versions and is accurate with the CVO history. (BZ#2053602)
- Previously, the ocp4-api-server-no-adm-ctrl-plugins-disabled rule did not check for a list of empty admission controller plugins. As a result, the rule would always fail, even if all admission plugins were enabled. Now, more robust checking of the ocp4-api-server-no-adm-ctrl-plugins-disabled rule accurately passes with all admission controller plugins enabled. (BZ#2058631)
- Previously, scans did not contain platform checks for running against Linux worker nodes. As a result, running scans against worker nodes that were not Linux-based resulted in a never ending scan loop. Now, the scan schedules appropriately based on platform type and labels and completes successfully. (BZ#2056911)
5.2.17. OpenShift Compliance Operator 0.1.48
The following advisory is available for the OpenShift Compliance Operator 0.1.48:
5.2.17.1. Bug fixes
- Previously, some rules associated with extended Open Vulnerability and Assessment Language (OVAL) definitions had a checkType of None. This was because the Compliance Operator was not processing extended OVAL definitions when parsing rules. With this update, content from extended OVAL definitions is parsed so that these rules now have a checkType of either Node or Platform. (BZ#2040282)
- Previously, a manually created MachineConfig object for KubeletConfig prevented a KubeletConfig object from being generated for remediation, leaving the remediation in the Pending state. With this release, a KubeletConfig object is created by the remediation, regardless if there is a manually created MachineConfig object for KubeletConfig. As a result, KubeletConfig remediations now work as expected. (BZ#2040401)
5.2.18. OpenShift Compliance Operator 0.1.47
The following advisory is available for the OpenShift Compliance Operator 0.1.47:
5.2.18.1. New features and enhancements
The Compliance Operator now supports the following compliance benchmarks for the Payment Card Industry Data Security Standard (PCI DSS):
- ocp4-pci-dss
- ocp4-pci-dss-node
- Additional rules and remediations for FedRAMP moderate impact level are added to the OCP4-moderate, OCP4-moderate-node, and rhcos4-moderate profiles.
- Remediations for KubeletConfig are now available in node-level profiles.
5.2.18.2. Bug fixes
Previously, if your cluster was running OpenShift Container Platform 4.6 or earlier, remediations for USBGuard-related rules would fail for the moderate profile. This is because the remediations created by the Compliance Operator were based on an older version of USBGuard that did not support drop-in directories. Now, invalid remediations for USBGuard-related rules are not created for clusters running OpenShift Container Platform 4.6. If your cluster is using OpenShift Container Platform 4.6, you must manually create remediations for USBGuard-related rules.
Additionally, remediations are created only for rules that satisfy minimum version requirements. (BZ#1965511)
- Previously, when rendering remediations, the Compliance Operator would check that the remediation was well-formed by using a regular expression that was too strict. As a result, some remediations, such as those that render sshd_config, would not pass the regular expression check and therefore were not created. The regular expression was found to be unnecessary and removed. Remediations now render correctly. (BZ#2033009)
5.2.19. OpenShift Compliance Operator 0.1.44
The following advisory is available for the OpenShift Compliance Operator 0.1.44:
5.2.19.1. New features and enhancements
- In this release, the strictNodeScan option is now added to the ComplianceScan, ComplianceSuite, and ScanSetting CRs. This option defaults to true, which matches the previous behavior, where an error occurred if a scan was not able to be scheduled on a node. Setting the option to false allows the Compliance Operator to be more permissive about scheduling scans. Environments with ephemeral nodes can set the strictNodeScan value to false, which allows a compliance scan to proceed, even if some of the nodes in the cluster are not available for scheduling. A combined sketch of these scheduling settings appears after this list.
- You can now customize the node that is used to schedule the result server workload by configuring the nodeSelector and tolerations attributes of the ScanSetting object. These attributes are used to place the ResultServer pod, the pod that is used to mount a PV storage volume and store the raw Asset Reporting Format (ARF) results. Previously, the nodeSelector and the tolerations parameters defaulted to selecting one of the control plane nodes and tolerating the node-role.kubernetes.io/master taint. This did not work in environments where control plane nodes are not permitted to mount PVs. This feature provides a way for you to select the node and tolerate a different taint in those environments.
- The Compliance Operator can now remediate KubeletConfig objects.
- A comment containing an error message is now added to help content developers differentiate between objects that do not exist in the cluster compared to objects that cannot be fetched.
- Rule objects now contain two new attributes, checkType and description. These attributes allow you to determine if the rule pertains to a node check or platform check, and also allow you to review what the rule does.
- This enhancement removes the requirement that you have to extend an existing profile to create a tailored profile. This means the extends field in the TailoredProfile CRD is no longer mandatory. You can now select a list of rule objects to create a tailored profile. Note that you must select whether your profile applies to nodes or the platform by setting the compliance.openshift.io/product-type: annotation or by setting the -node suffix for the TailoredProfile CR.
- In this release, the Compliance Operator is now able to schedule scans on all nodes irrespective of their taints. Previously, the scan pods would only tolerate the node-role.kubernetes.io/master taint, meaning that they would either run on nodes with no taints or only on nodes with the node-role.kubernetes.io/master taint. In deployments that use custom taints for their nodes, this resulted in the scans not being scheduled on those nodes. Now, the scan pods tolerate all node taints.
- In this release, the Compliance Operator supports the following North American Electric Reliability Corporation (NERC) security profiles:
  - ocp4-nerc-cip
  - ocp4-nerc-cip-node
  - rhcos4-nerc-cip
- In this release, the Compliance Operator supports the NIST 800-53 Moderate-Impact Baseline for the Red Hat OpenShift - Node level, ocp4-moderate-node, security profile.
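The following sketch combines the scheduling-related settings from this list on a single ScanSetting object. Only strictNodeScan, nodeSelector, and tolerations are named in the notes above; their placement under rawResultStorage and the example values are assumptions for illustration.
apiVersion: compliance.openshift.io/v1alpha1
kind: ScanSetting
metadata:
  name: ephemeral-friendly        # illustrative name
  namespace: openshift-compliance
roles:
- worker
- master
schedule: "0 1 * * *"
strictNodeScan: false             # allow scans to proceed even if some nodes cannot be scheduled
rawResultStorage:
  nodeSelector:                   # place the ResultServer pod on worker nodes (example)
    node-role.kubernetes.io/worker: ""
  tolerations:                    # example toleration; match the taints used in your environment
  - operator: Exists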
5.2.19.2. Templating and variable use
- In this release, the remediation template now allows multi-value variables.
- With this update, the Compliance Operator can change remediations based on variables that are set in the compliance profile. This is useful for remediations that include deployment-specific values such as time outs, NTP server host names, or similar. Additionally, the ComplianceCheckResult objects now use the label compliance.openshift.io/check-has-value that lists the variables a check has used.
5.2.19.3. Bug fixes
- Previously, while performing a scan, an unexpected termination occurred in one of the scanner containers of the pods. In this release, the Compliance Operator uses the latest OpenSCAP version 1.3.5 to avoid a crash.
- Previously, using autoApplyRemediations to apply remediations triggered an update of the cluster nodes. This was disruptive if some of the remediations did not include all of the required input variables. Now, if a remediation is missing one or more required input variables, it is assigned a state of NeedsReview. If one or more remediations are in a NeedsReview state, the machine config pool remains paused, and the remediations are not applied until all of the required variables are set. This helps minimize disruption to the nodes.
- The RBAC Role and Role Binding used for Prometheus metrics are changed to 'ClusterRole' and 'ClusterRoleBinding' to ensure that monitoring works without customization.
- Previously, if an error occurred while parsing a profile, rules or variables objects were removed and deleted from the profile. Now, if an error occurs during parsing, the profileparser annotates the object with a temporary annotation that prevents the object from being deleted until after parsing completes. (BZ#1988259)
- Previously, an error occurred if titles or descriptions were missing from a tailored profile. Because the XCCDF standard requires titles and descriptions for tailored profiles, titles and descriptions are now required to be set in TailoredProfile CRs.
- Previously, when using tailored profiles, TailoredProfile variable values were allowed to be set using only a specific selection set. This restriction is now removed, and TailoredProfile variables can be set to any value.
5.2.20. Release Notes for Compliance Operator 0.1.39
The following advisory is available for the OpenShift Compliance Operator 0.1.39:
5.2.20.1. New features and enhancements
- Previously, the Compliance Operator was unable to parse Payment Card Industry Data Security Standard (PCI DSS) references. Now, the Operator can parse compliance content that is provided with PCI DSS profiles.
- Previously, the Compliance Operator was unable to execute rules for AU-5 control in the moderate profile. Now, permission is added to the Operator so that it can read Prometheusrules.monitoring.coreos.com objects and run the rules that cover AU-5 control in the moderate profile.
5.2.21. Additional resources
5.3. Compliance Operator support
5.3.1. Compliance Operator lifecycle
The Compliance Operator is a "Rolling Stream" Operator, meaning updates are available asynchronously from OpenShift Container Platform releases. For more information, see OpenShift Operator Life Cycles on the Red Hat Customer Portal.
5.3.2. Getting support
If you experience difficulty with a procedure described in this documentation, or with OpenShift Container Platform in general, visit the Red Hat Customer Portal.
From the Customer Portal, you can:
- Search or browse through the Red Hat Knowledgebase of articles and solutions relating to Red Hat products.
- Submit a support case to Red Hat Support.
- Access other product documentation.
To identify issues with your cluster, you can use Insights in OpenShift Cluster Manager. Insights provides details about issues and, if available, information on how to solve a problem.
If you have a suggestion for improving this documentation or have found an error, submit a Jira issue for the most relevant documentation component. Please provide specific details, such as the section name and OpenShift Container Platform version.
5.3.3. Using the must-gather tool for the Compliance Operator
Starting in Compliance Operator v1.6.0, you can collect data about the Compliance Operator resources by running the must-gather command with the Compliance Operator image.
Consider using the must-gather tool when opening support cases or filing bug reports, as it provides additional details about the Operator configuration and logs.
Procedure
Run the following command to collect data about the Compliance Operator:
$ oc adm must-gather --image=$(oc get csv compliance-operator.v1.6.0 -o=jsonpath='{.spec.relatedImages[?(@.name=="must-gather")].image}')
5.3.4. Additional resources
5.4. Compliance Operator concepts
5.4.1. Understanding the Compliance Operator
The Compliance Operator lets OpenShift Container Platform administrators describe the required compliance state of a cluster and provides them with an overview of gaps and ways to remediate them. The Compliance Operator assesses compliance of both the Kubernetes API resources of OpenShift Container Platform, as well as the nodes running the cluster. The Compliance Operator uses OpenSCAP, a NIST-certified tool, to scan and enforce security policies provided by the content.
The Compliance Operator is available for Red Hat Enterprise Linux CoreOS (RHCOS) deployments only.
5.4.1.1. Compliance Operator profiles
There are several profiles available as part of the Compliance Operator installation. You can use the oc get command to view available profiles, profile details, and specific rules.
View the available profiles:
$ oc get profile.compliance -n openshift-compliance
Example output
NAME                        AGE     VERSION
ocp4-cis                    3h49m   1.5.0
ocp4-cis-1-4                3h49m   1.4.0
ocp4-cis-1-5                3h49m   1.5.0
ocp4-cis-node               3h49m   1.5.0
ocp4-cis-node-1-4           3h49m   1.4.0
ocp4-cis-node-1-5           3h49m   1.5.0
ocp4-e8                     3h49m
ocp4-high                   3h49m   Revision 4
ocp4-high-node              3h49m   Revision 4
ocp4-high-node-rev-4        3h49m   Revision 4
ocp4-high-rev-4             3h49m   Revision 4
ocp4-moderate               3h49m   Revision 4
ocp4-moderate-node          3h49m   Revision 4
ocp4-moderate-node-rev-4    3h49m   Revision 4
ocp4-moderate-rev-4         3h49m   Revision 4
ocp4-nerc-cip               3h49m
ocp4-nerc-cip-node          3h49m
ocp4-pci-dss                3h49m   3.2.1
ocp4-pci-dss-3-2            3h49m   3.2.1
ocp4-pci-dss-4-0            3h49m   4.0.0
ocp4-pci-dss-node           3h49m   3.2.1
ocp4-pci-dss-node-3-2       3h49m   3.2.1
ocp4-pci-dss-node-4-0       3h49m   4.0.0
ocp4-stig                   3h49m   V2R1
ocp4-stig-node              3h49m   V2R1
ocp4-stig-node-v1r1         3h49m   V1R1
ocp4-stig-node-v2r1         3h49m   V2R1
ocp4-stig-v1r1              3h49m   V1R1
ocp4-stig-v2r1              3h49m   V2R1
rhcos4-e8                   3h49m
rhcos4-high                 3h49m   Revision 4
rhcos4-high-rev-4           3h49m   Revision 4
rhcos4-moderate             3h49m   Revision 4
rhcos4-moderate-rev-4       3h49m   Revision 4
rhcos4-nerc-cip             3h49m
rhcos4-stig                 3h49m   V2R1
rhcos4-stig-v1r1            3h49m   V1R1
rhcos4-stig-v2r1            3h49m   V2R1
These profiles represent different compliance benchmarks. Each profile has the product name that it applies to added as a prefix to the profile’s name. ocp4-e8 applies the Essential 8 benchmark to the OpenShift Container Platform product, while rhcos4-e8 applies the Essential 8 benchmark to the Red Hat Enterprise Linux CoreOS (RHCOS) product.
Run the following command to view the details of the rhcos4-e8 profile:
$ oc get -n openshift-compliance -oyaml profiles.compliance rhcos4-e8
Example 5.1. Example output
apiVersion: compliance.openshift.io/v1alpha1 description: 'This profile contains configuration checks for Red Hat Enterprise Linux CoreOS that align to the Australian Cyber Security Centre (ACSC) Essential Eight. A copy of the Essential Eight in Linux Environments guide can be found at the ACSC website: https://www.cyber.gov.au/acsc/view-all-content/publications/hardening-linux-workstations-and-servers' id: xccdf_org.ssgproject.content_profile_e8 kind: Profile metadata: annotations: compliance.openshift.io/image-digest: pb-rhcos4hrdkm compliance.openshift.io/product: redhat_enterprise_linux_coreos_4 compliance.openshift.io/product-type: Node creationTimestamp: "2022-10-19T12:06:49Z" generation: 1 labels: compliance.openshift.io/profile-bundle: rhcos4 name: rhcos4-e8 namespace: openshift-compliance ownerReferences: - apiVersion: compliance.openshift.io/v1alpha1 blockOwnerDeletion: true controller: true kind: ProfileBundle name: rhcos4 uid: 22350850-af4a-4f5c-9a42-5e7b68b82d7d resourceVersion: "43699" uid: 86353f70-28f7-40b4-bf0e-6289ec33675b rules: - rhcos4-accounts-no-uid-except-zero - rhcos4-audit-rules-dac-modification-chmod - rhcos4-audit-rules-dac-modification-chown - rhcos4-audit-rules-execution-chcon - rhcos4-audit-rules-execution-restorecon - rhcos4-audit-rules-execution-semanage - rhcos4-audit-rules-execution-setfiles - rhcos4-audit-rules-execution-setsebool - rhcos4-audit-rules-execution-seunshare - rhcos4-audit-rules-kernel-module-loading-delete - rhcos4-audit-rules-kernel-module-loading-finit - rhcos4-audit-rules-kernel-module-loading-init - rhcos4-audit-rules-login-events - rhcos4-audit-rules-login-events-faillock - rhcos4-audit-rules-login-events-lastlog - rhcos4-audit-rules-login-events-tallylog - rhcos4-audit-rules-networkconfig-modification - rhcos4-audit-rules-sysadmin-actions - rhcos4-audit-rules-time-adjtimex - rhcos4-audit-rules-time-clock-settime - rhcos4-audit-rules-time-settimeofday - rhcos4-audit-rules-time-stime - rhcos4-audit-rules-time-watch-localtime - rhcos4-audit-rules-usergroup-modification - rhcos4-auditd-data-retention-flush - rhcos4-auditd-freq - rhcos4-auditd-local-events - rhcos4-auditd-log-format - rhcos4-auditd-name-format - rhcos4-auditd-write-logs - rhcos4-configure-crypto-policy - rhcos4-configure-ssh-crypto-policy - rhcos4-no-empty-passwords - rhcos4-selinux-policytype - rhcos4-selinux-state - rhcos4-service-auditd-enabled - rhcos4-sshd-disable-empty-passwords - rhcos4-sshd-disable-gssapi-auth - rhcos4-sshd-disable-rhosts - rhcos4-sshd-disable-root-login - rhcos4-sshd-disable-user-known-hosts - rhcos4-sshd-do-not-permit-user-env - rhcos4-sshd-enable-strictmodes - rhcos4-sshd-print-last-log - rhcos4-sshd-set-loglevel-info - rhcos4-sysctl-kernel-dmesg-restrict - rhcos4-sysctl-kernel-kptr-restrict - rhcos4-sysctl-kernel-randomize-va-space - rhcos4-sysctl-kernel-unprivileged-bpf-disabled - rhcos4-sysctl-kernel-yama-ptrace-scope - rhcos4-sysctl-net-core-bpf-jit-harden title: Australian Cyber Security Centre (ACSC) Essential Eight
Run the following command to view the details of the rhcos4-audit-rules-login-events rule:
$ oc get -n openshift-compliance -oyaml rules rhcos4-audit-rules-login-events
Example 5.2. Example output
apiVersion: compliance.openshift.io/v1alpha1 checkType: Node description: |- The audit system already collects login information for all users and root. If the auditd daemon is configured to use the augenrules program to read audit rules during daemon startup (the default), add the following lines to a file with suffix.rules in the directory /etc/audit/rules.d in order to watch for attempted manual edits of files involved in storing logon events: -w /var/log/tallylog -p wa -k logins -w /var/run/faillock -p wa -k logins -w /var/log/lastlog -p wa -k logins If the auditd daemon is configured to use the auditctl utility to read audit rules during daemon startup, add the following lines to /etc/audit/audit.rules file in order to watch for unattempted manual edits of files involved in storing logon events: -w /var/log/tallylog -p wa -k logins -w /var/run/faillock -p wa -k logins -w /var/log/lastlog -p wa -k logins id: xccdf_org.ssgproject.content_rule_audit_rules_login_events kind: Rule metadata: annotations: compliance.openshift.io/image-digest: pb-rhcos4hrdkm compliance.openshift.io/rule: audit-rules-login-events control.compliance.openshift.io/NIST-800-53: AU-2(d);AU-12(c);AC-6(9);CM-6(a) control.compliance.openshift.io/PCI-DSS: Req-10.2.3 policies.open-cluster-management.io/controls: AU-2(d),AU-12(c),AC-6(9),CM-6(a),Req-10.2.3 policies.open-cluster-management.io/standards: NIST-800-53,PCI-DSS creationTimestamp: "2022-10-19T12:07:08Z" generation: 1 labels: compliance.openshift.io/profile-bundle: rhcos4 name: rhcos4-audit-rules-login-events namespace: openshift-compliance ownerReferences: - apiVersion: compliance.openshift.io/v1alpha1 blockOwnerDeletion: true controller: true kind: ProfileBundle name: rhcos4 uid: 22350850-af4a-4f5c-9a42-5e7b68b82d7d resourceVersion: "44819" uid: 75872f1f-3c93-40ca-a69d-44e5438824a4 rationale: Manual editing of these files may indicate nefarious activity, such as an attacker attempting to remove evidence of an intrusion. severity: medium title: Record Attempts to Alter Logon and Logout Events warning: Manual editing of these files may indicate nefarious activity, such as an attacker attempting to remove evidence of an intrusion.
5.4.1.1.1. Compliance Operator profile types
There are two types of compliance profiles available: Platform and Node.
- Platform
- Platform scans target your OpenShift Container Platform cluster.
- Node
- Node scans target the nodes of the cluster.
For compliance profiles that have Node and Platform applications, such as pci-dss compliance profiles, you must run both in your OpenShift Container Platform environment, as illustrated in the example below.
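For illustration, a single ScanSettingBinding can bind both profiles to one set of scan settings. The binding name is illustrative and the settingsRef assumes the default ScanSetting created at installation; the profile names match the pci-dss example above.
apiVersion: compliance.openshift.io/v1alpha1
kind: ScanSettingBinding
metadata:
  name: pci-dss-compliance        # illustrative name
  namespace: openshift-compliance
profiles:
- apiGroup: compliance.openshift.io/v1alpha1
  kind: Profile
  name: ocp4-pci-dss              # Platform profile
- apiGroup: compliance.openshift.io/v1alpha1
  kind: Profile
  name: ocp4-pci-dss-node         # Node profile
settingsRef:
  apiGroup: compliance.openshift.io/v1alpha1
  kind: ScanSetting
  name: default                   # assumes the default ScanSetting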
5.4.1.2. Additional resources
5.4.2. Understanding the Custom Resource Definitions
The Compliance Operator in the OpenShift Container Platform provides you with several Custom Resource Definitions (CRDs) to accomplish the compliance scans. To run a compliance scan, it leverages the predefined security policies, which are derived from the ComplianceAsCode community project. The Compliance Operator converts these security policies into CRDs, which you can use to run compliance scans and get remediations for the issues found.
5.4.2.1. CRDs workflow
The CRD provides you the following workflow to complete the compliance scans:
- Define your compliance scan requirements
- Configure the compliance scan settings
- Process compliance requirements with compliance scans settings
- Monitor the compliance scans
- Check the compliance scan results
5.4.2.2. Defining the compliance scan requirements
By default, the Compliance Operator CRDs include ProfileBundle and Profile objects, in which you can define and set the rules for your compliance scan requirements. You can also customize the default profiles by using a TailoredProfile object.
5.4.2.2.1. ProfileBundle object
When you install the Compliance Operator, it includes ready-to-run ProfileBundle objects. The Compliance Operator parses the ProfileBundle object and creates a Profile object for each profile in the bundle. It also parses Rule and Variable objects, which are used by the Profile object.
Example ProfileBundle object
apiVersion: compliance.openshift.io/v1alpha1
kind: ProfileBundle
name: <profile bundle name>
namespace: openshift-compliance
status:
dataStreamStatus: VALID 1
- 1
- Indicates whether the Compliance Operator was able to parse the content files.
When the contentFile fails, an errorMessage attribute appears, which provides details of the error that occurred.
Troubleshooting
When you roll back to a known content image from an invalid image, the ProfileBundle object stops responding and displays the PENDING state. As a workaround, you can move to a different image than the previous one. Alternatively, you can delete and re-create the ProfileBundle object to return to the working state.
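For example, you can inspect the ProfileBundle status with commands such as the following; the openshift-compliance namespace assumes the default installation, and the bundle name placeholder is illustrative:
$ oc get profilebundles.compliance -n openshift-compliance
$ oc get profilebundles.compliance <profile bundle name> -n openshift-compliance -o yaml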
5.4.2.2.2. Profile object
The Profile object defines the rules and variables that can be evaluated for a certain compliance standard. It contains parsed out details about an OpenSCAP profile, such as its XCCDF identifier and profile checks for a Node or Platform type. You can either directly use the Profile object or further customize it using a TailoredProfile object.
You cannot create or modify the Profile object manually because it is derived from a single ProfileBundle object. Typically, a single ProfileBundle object can include several Profile objects.
Example Profile
object
apiVersion: compliance.openshift.io/v1alpha1
description: <description of the profile>
id: xccdf_org.ssgproject.content_profile_moderate 1
kind: Profile
metadata:
  annotations:
    compliance.openshift.io/product: <product name>
    compliance.openshift.io/product-type: Node 2
  creationTimestamp: "YYYY-MM-DDTMM:HH:SSZ"
  generation: 1
  labels:
    compliance.openshift.io/profile-bundle: <profile bundle name>
  name: rhcos4-moderate
  namespace: openshift-compliance
  ownerReferences:
  - apiVersion: compliance.openshift.io/v1alpha1
    blockOwnerDeletion: true
    controller: true
    kind: ProfileBundle
    name: <profile bundle name>
    uid: <uid string>
  resourceVersion: "<version number>"
  selfLink: /apis/compliance.openshift.io/v1alpha1/namespaces/openshift-compliance/profiles/rhcos4-moderate
  uid: <uid string>
rules: 3
- rhcos4-account-disable-post-pw-expiration
- rhcos4-accounts-no-uid-except-zero
- rhcos4-audit-rules-dac-modification-chmod
- rhcos4-audit-rules-dac-modification-chown
title: <title of the profile>
- 1
- Specify the XCCDF name of the profile. Use this identifier when you define a ComplianceScan object as the value of the profile attribute of the scan.
- 2
- Specify either a Node or Platform. Node profiles scan the cluster nodes and platform profiles scan the Kubernetes platform.
- 3
- Specify the list of rules for the profile. Each rule corresponds to a single check.
5.4.2.2.3. Rule object
The Rule
objects that form the profiles are also exposed as objects. Use the Rule
object to define your compliance check requirements and specify how it can be fixed.
Example Rule
object
apiVersion: compliance.openshift.io/v1alpha1
checkType: Platform 1
description: <description of the rule>
id: xccdf_org.ssgproject.content_rule_configure_network_policies_namespaces 2
instructions: <manual instructions for the scan>
kind: Rule
metadata:
  annotations:
    compliance.openshift.io/rule: configure-network-policies-namespaces
    control.compliance.openshift.io/CIS-OCP: 5.3.2
    control.compliance.openshift.io/NERC-CIP: CIP-003-3 R4;CIP-003-3 R4.2;CIP-003-3 R5;CIP-003-3 R6;CIP-004-3 R2.2.4;CIP-004-3 R3;CIP-007-3 R2;CIP-007-3 R2.1;CIP-007-3 R2.2;CIP-007-3 R2.3;CIP-007-3 R5.1;CIP-007-3 R6.1
    control.compliance.openshift.io/NIST-800-53: AC-4;AC-4(21);CA-3(5);CM-6;CM-6(1);CM-7;CM-7(1);SC-7;SC-7(3);SC-7(5);SC-7(8);SC-7(12);SC-7(13);SC-7(18)
  labels:
    compliance.openshift.io/profile-bundle: ocp4
  name: ocp4-configure-network-policies-namespaces
  namespace: openshift-compliance
rationale: <description of why this rule is checked>
severity: high 3
title: <summary of the rule>
- 1
- Specify the type of check this rule executes. Node profiles scan the cluster nodes and Platform profiles scan the Kubernetes platform. An empty value indicates there is no automated check.
- 2
- Specify the XCCDF name of the rule, which is parsed directly from the datastream.
- 3
- Specify the severity of the rule when it fails.
The Rule
object gets an appropriate label for an easy identification of the associated ProfileBundle
object. The ProfileBundle
also gets specified in the OwnerReferences
of this object.
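For example, you can list the Rule objects that belong to a particular bundle by selecting on that label. This is a sketch; the bundle name is illustrative:
$ oc get rules.compliance.openshift.io -n openshift-compliance \
  -l compliance.openshift.io/profile-bundle=rhcos4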
5.4.2.2.4. TailoredProfile object
Use the TailoredProfile
object to modify the default Profile
object based on your organization requirements. You can enable or disable rules, set variable values, and provide justification for the customization. After validation, the TailoredProfile
object creates a ConfigMap
, which can be referenced by a ComplianceScan
object.
You can use the TailoredProfile
object by referencing it in a ScanSettingBinding
object. For more information about ScanSettingBinding
, see ScanSettingBinding object.
Example TailoredProfile
object
apiVersion: compliance.openshift.io/v1alpha1
kind: TailoredProfile
metadata:
  name: rhcos4-with-usb
spec:
  extends: rhcos4-moderate 1
  title: <title of the tailored profile>
  disableRules:
  - name: <name of a rule object to be disabled>
    rationale: <description of why this rule is checked>
status:
  id: xccdf_compliance.openshift.io_profile_rhcos4-with-usb 2
  outputRef:
    name: rhcos4-with-usb-tp 3
    namespace: openshift-compliance
  state: READY 4
- 1
- This is optional. Name of the Profile object upon which the TailoredProfile is built. If no value is set, a new profile is created from the enableRules list.
- 2
- Specifies the XCCDF name of the tailored profile.
- 3
- Specifies the ConfigMap name, which can be used as the value of the tailoringConfigMap.name attribute of a ComplianceScan.
- 4
- Shows the state of the object such as READY, PENDING, and FAILURE. If the state of the object is ERROR, then the attribute status.errorMessage provides the reason for the failure.
With the TailoredProfile
object, it is possible to create a new Profile
object using the TailoredProfile
construct. To create a new Profile
, set the following configuration parameters:
- an appropriate title
- extends value must be empty
- scan type annotation on the TailoredProfile object: compliance.openshift.io/product-type: Platform/Node

  Note: If you have not set the product-type annotation, the Compliance Operator defaults to Platform scan type. Adding the -node suffix to the name of the TailoredProfile object results in node scan type.
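The following sketch shows a TailoredProfile that creates a new Node profile from scratch; the object name, title, description, and the enabled rule are illustrative:
apiVersion: compliance.openshift.io/v1alpha1
kind: TailoredProfile
metadata:
  name: new-node-profile                       # illustrative name
  namespace: openshift-compliance
  annotations:
    compliance.openshift.io/product-type: Node  # Node scan type
spec:
  # extends is intentionally omitted, so the profile is built from enableRules
  title: <title of the new profile>
  description: <description of the new profile>
  enableRules:
  - name: rhcos4-audit-rules-login-events       # illustrative rule from the rhcos4 bundle
    rationale: <why this rule is enabled>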
5.4.2.3. Configuring the compliance scan settings
After you have defined the requirements of the compliance scan, you can configure it by specifying the type of the scan, the occurrence of the scan, and the location of the scan. To do so, the Compliance Operator provides you with a ScanSetting
object.
5.4.2.3.1. ScanSetting object
Use the ScanSetting
object to define and reuse the operational policies to run your scans. By default, the Compliance Operator creates the following ScanSetting
objects:
- default - it runs a scan every day at 1 AM on both master and worker nodes using a 1Gi Persistent Volume (PV) and keeps the last three results. Remediation is neither applied nor updated automatically.
-
default-auto-apply - it runs a scan every day at 1 AM on both control plane and worker nodes using a 1Gi Persistent Volume (PV) and keeps the last three results. Both
autoApplyRemediations
andautoUpdateRemediations
are set to true.
Example ScanSetting
object
apiVersion: compliance.openshift.io/v1alpha1
autoApplyRemediations: true 1
autoUpdateRemediations: true 2
kind: ScanSetting
maxRetryOnTimeout: 3
metadata:
  creationTimestamp: "2022-10-18T20:21:00Z"
  generation: 1
  name: default-auto-apply
  namespace: openshift-compliance
  resourceVersion: "38840"
  uid: 8cb0967d-05e0-4d7a-ac1c-08a7f7e89e84
rawResultStorage:
  nodeSelector:
    node-role.kubernetes.io/master: ""
  pvAccessModes:
  - ReadWriteOnce
  rotation: 3 3
  size: 1Gi 4
  tolerations:
  - effect: NoSchedule
    key: node-role.kubernetes.io/master
    operator: Exists
  - effect: NoExecute
    key: node.kubernetes.io/not-ready
    operator: Exists
    tolerationSeconds: 300
  - effect: NoExecute
    key: node.kubernetes.io/unreachable
    operator: Exists
    tolerationSeconds: 300
  - effect: NoSchedule
    key: node.kubernetes.io/memory-pressure
    operator: Exists
roles: 5
- master
- worker
scanTolerations:
- operator: Exists
schedule: 0 1 * * * 6
showNotApplicable: false
strictNodeScan: true
timeout: 30m
- 1
- Set to true to enable auto remediations. Set to false to disable auto remediations.
- 2
- Set to true to enable auto remediations for content updates. Set to false to disable auto remediations for content updates.
- 3
- Specify the number of stored scans in the raw result format. The default value is 3. As the older results get rotated, the administrator must store the results elsewhere before the rotation happens. Note: To disable the rotation policy, set the value to 0.
- 4
- Specify the storage size that should be created for the scan to store the raw results. The default value is 1Gi.
- 5
- Specify the node-role.kubernetes.io label value to schedule the scan for Node type. This value has to match the name of a MachineConfigPool.
- 6
- Specify how often the scan should be run in cron format.
5.4.2.4. Processing the compliance scan requirements with compliance scan settings
When you have defined the compliance scan requirements and configured the settings to run the scans, the Compliance Operator processes them using the ScanSettingBinding
object.
5.4.2.4.1. ScanSettingBinding object
Use the ScanSettingBinding
object to specify your compliance requirements with reference to the Profile
or TailoredProfile
object. It is then linked to a ScanSetting
object, which provides the operational constraints for the scan. Then the Compliance Operator generates the ComplianceSuite
object based on the ScanSetting
and ScanSettingBinding
objects.
Example ScanSettingBinding
object
apiVersion: compliance.openshift.io/v1alpha1
kind: ScanSettingBinding
metadata:
  name: <name of the scan>
profiles: 1
  # Node checks
  - name: rhcos4-with-usb
    kind: TailoredProfile
    apiGroup: compliance.openshift.io/v1alpha1
  # Cluster checks
  - name: ocp4-moderate
    kind: Profile
    apiGroup: compliance.openshift.io/v1alpha1
settingsRef: 2
  name: my-companys-constraints
  kind: ScanSetting
  apiGroup: compliance.openshift.io/v1alpha1
The creation of ScanSetting
and ScanSettingBinding
objects results in the compliance suite. To get the list of compliance suites, run the following command:
$ oc get compliancesuites
If you delete ScanSettingBinding
, the compliance suite is also deleted.
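For example, the following commands, with an illustrative binding name, delete a single binding and confirm that its compliance suite is gone:
$ oc -n openshift-compliance delete scansettingbinding <name of the scan>
$ oc -n openshift-compliance get compliancesuites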
5.4.2.5. Tracking the compliance scans
After the compliance suite is created, you can monitor the status of the deployed scans using the ComplianceSuite
object.
5.4.2.5.1. ComplianceSuite object
The ComplianceSuite
object helps you keep track of the state of the scans. It contains the raw settings to create scans and the overall result.
For Node
type scans, you should map the scan to the MachineConfigPool
, since it contains the remediations for any issues. If you specify a label, ensure it directly applies to a pool.
Example ComplianceSuite
object
apiVersion: compliance.openshift.io/v1alpha1
kind: ComplianceSuite
metadata:
  name: <name of the scan>
spec:
  autoApplyRemediations: false 1
  schedule: "0 1 * * *" 2
  scans: 3
  - name: workers-scan
    scanType: Node
    profile: xccdf_org.ssgproject.content_profile_moderate
    content: ssg-rhcos4-ds.xml
    contentImage: registry.redhat.io/compliance/openshift-compliance-content-rhel8@sha256:45dc...
    rule: "xccdf_org.ssgproject.content_rule_no_netrc_files"
    nodeSelector:
      node-role.kubernetes.io/worker: ""
status:
  Phase: DONE 4
  Result: NON-COMPLIANT 5
  scanStatuses:
  - name: workers-scan
    phase: DONE
    result: NON-COMPLIANT
The suite in the background creates the ComplianceScan
object based on the scans
parameter. You can programmatically fetch the ComplianceSuites
events. To get the events for the suite, run the following command:
$ oc get events --field-selector involvedObject.kind=ComplianceSuite,involvedObject.name=<name of the suite>
Manually defining the ComplianceSuite object is error-prone because it contains XCCDF attributes.
5.4.2.5.2. Advanced ComplianceScan Object
The Compliance Operator includes options for advanced users for debugging or integrating with existing tooling. It is recommended that you do not create a ComplianceScan object directly and instead manage it using a ComplianceSuite object.
Example Advanced ComplianceScan
object
apiVersion: compliance.openshift.io/v1alpha1
kind: ComplianceScan
metadata:
  name: <name of the scan>
spec:
  scanType: Node 1
  profile: xccdf_org.ssgproject.content_profile_moderate 2
  content: ssg-ocp4-ds.xml
  contentImage: registry.redhat.io/compliance/openshift-compliance-content-rhel8@sha256:45dc... 3
  rule: "xccdf_org.ssgproject.content_rule_no_netrc_files" 4
  nodeSelector: 5
    node-role.kubernetes.io/worker: ""
status:
  phase: DONE 6
  result: NON-COMPLIANT 7
- 1
- Specify either Node or Platform. Node profiles scan the cluster nodes and platform profiles scan the Kubernetes platform.
- 2
- Specify the XCCDF identifier of the profile that you want to run.
- 3
- Specify the container image that encapsulates the profile files.
- 4
- This is optional. Specify the scan to run a single rule. This rule has to be identified with the XCCDF ID, and has to belong to the specified profile. Note: If you skip the rule parameter, the scan runs for all the available rules of the specified profile.
- 5
- If you are on OpenShift Container Platform and want to generate a remediation, the nodeSelector label has to match the MachineConfigPool label. Note: If you do not specify the nodeSelector parameter or match the MachineConfig label, the scan will still run, but it will not create a remediation.
- 6
- Indicates the current phase of the scan.
- 7
- Indicates the verdict of the scan.
If you delete a ComplianceSuite
object, then all the associated scans get deleted.
When the scan is complete, it generates the result as Custom Resources of the ComplianceCheckResult
object. However, the raw results are available in ARF format. These results are stored in a Persistent Volume (PV), which has a Persistent Volume Claim (PVC) associated with the name of the scan. You can programmatically fetch the ComplianceScans
events. To get the events for the scan, run the following command:
$ oc get events --field-selector involvedObject.kind=ComplianceScan,involvedObject.name=<name of the scan>
5.4.2.6. Viewing the compliance results
When the compliance suite reaches the DONE
phase, you can view the scan results and possible remediations.
5.4.2.6.1. ComplianceCheckResult object
When you run a scan with a specific profile, several rules in the profiles are verified. For each of these rules, a ComplianceCheckResult
object is created, which provides the state of the cluster for a specific rule.
Example ComplianceCheckResult
object
apiVersion: compliance.openshift.io/v1alpha1
kind: ComplianceCheckResult
metadata:
  labels:
    compliance.openshift.io/check-severity: medium
    compliance.openshift.io/check-status: FAIL
    compliance.openshift.io/suite: example-compliancesuite
    compliance.openshift.io/scan-name: workers-scan
  name: workers-scan-no-direct-root-logins
  namespace: openshift-compliance
  ownerReferences:
  - apiVersion: compliance.openshift.io/v1alpha1
    blockOwnerDeletion: true
    controller: true
    kind: ComplianceScan
    name: workers-scan
description: <description of scan check>
instructions: <manual instructions for the scan>
id: xccdf_org.ssgproject.content_rule_no_direct_root_logins
severity: medium 1
status: FAIL 2
- 1
- Describes the severity of the scan check.
- 2
- Describes the result of the check. The possible values are:
- PASS: check was successful.
- FAIL: check was unsuccessful.
- INFO: check was successful and found something not severe enough to be considered an error.
- MANUAL: check cannot automatically assess the status and manual check is required.
- INCONSISTENT: different nodes report different results.
- ERROR: check ran successfully, but could not complete.
- NOTAPPLICABLE: check did not run as it is not applicable.
To get all the check results from a suite, run the following command:
$ oc get compliancecheckresults \
  -l compliance.openshift.io/suite=workers-compliancesuite
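The output lists one line per check. The following is an illustrative sketch of what the result names and values might look like:
NAME                                 STATUS   SEVERITY
workers-scan-no-direct-root-logins   FAIL     medium
workers-scan-no-empty-passwords      PASS     high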
5.4.2.6.2. ComplianceRemediation object
For a specific check, you can have a fix specified in the datastream. However, if a Kubernetes fix is available, then the Compliance Operator creates a ComplianceRemediation
object.
Example ComplianceRemediation
object
apiVersion: compliance.openshift.io/v1alpha1
kind: ComplianceRemediation
metadata:
  labels:
    compliance.openshift.io/suite: example-compliancesuite
    compliance.openshift.io/scan-name: workers-scan
    machineconfiguration.openshift.io/role: worker
  name: workers-scan-disable-users-coredumps
  namespace: openshift-compliance
  ownerReferences:
  - apiVersion: compliance.openshift.io/v1alpha1
    blockOwnerDeletion: true
    controller: true
    kind: ComplianceCheckResult
    name: workers-scan-disable-users-coredumps
    uid: <UID>
spec:
  apply: false 1
  object:
    current: 2
      apiVersion: machineconfiguration.openshift.io/v1
      kind: MachineConfig
      spec:
        config:
          ignition:
            version: 2.2.0
          storage:
            files:
            - contents:
                source: data:,%2A%20%20%20%20%20hard%20%20%20core%20%20%20%200
              filesystem: root
              mode: 420
              path: /etc/security/limits.d/75-disable_users_coredumps.conf
    outdated: {} 3
- 1
- true indicates the remediation was applied. false indicates the remediation was not applied.
- 2
- Includes the definition of the remediation.
- 3
- Indicates a remediation that was previously parsed from an earlier version of the content. The Compliance Operator still retains the outdated objects to give the administrator a chance to review the new remediations before applying them.
To get all the remediations from a suite, run the following command:
$ oc get complianceremediations \
  -l compliance.openshift.io/suite=workers-compliancesuite
To list all failing checks that can be remediated automatically, run the following command:
$ oc get compliancecheckresults \
  -l 'compliance.openshift.io/check-status in (FAIL),compliance.openshift.io/automated-remediation'
To list all failing checks that can be remediated manually, run the following command:
$ oc get compliancecheckresults \
  -l 'compliance.openshift.io/check-status in (FAIL),!compliance.openshift.io/automated-remediation'
5.5. Compliance Operator management
5.5.1. Installing the Compliance Operator
Before you can use the Compliance Operator, you must ensure it is deployed in the cluster.
The Compliance Operator might report incorrect results on managed platforms, such as OpenShift Dedicated, Red Hat OpenShift Service on AWS Classic, and Microsoft Azure Red Hat OpenShift. For more information, see the Knowledgebase article Compliance Operator reports incorrect results on Managed Services.
Before deploying the Compliance Operator, you are required to define persistent storage in your cluster to store the raw results output. For more information, see Persistent storage overview and Managing the default storage class.
5.5.1.1. Installing the Compliance Operator through the web console
Prerequisites
-
You must have
admin
privileges. -
You must have a
StorageClass
resource configured.
Procedure
-
In the OpenShift Container Platform web console, navigate to Operators
OperatorHub. - Search for the Compliance Operator, then click Install.
-
Keep the default selection of Installation mode and namespace to ensure that the Operator will be installed to the
openshift-compliance
namespace. - Click Install.
Verification
To confirm that the installation is successful:
-
Navigate to the Operators
Installed Operators page. -
Check that the Compliance Operator is installed in the
openshift-compliance
namespace and its status isSucceeded
.
If the Operator is not installed successfully:
-
Navigate to the Operators
Installed Operators page and inspect the Status
column for any errors or failures. -
Navigate to the Workloads
Pods page and check the logs in any pods in the openshift-compliance
project that are reporting issues.
If the restricted
Security Context Constraints (SCC) have been modified to contain the system:authenticated
group, or if requiredDropCapabilities has been added
, the Compliance Operator may not function properly due to permissions issues.
You can create a custom SCC for the Compliance Operator scanner pod service account. For more information, see Creating a custom SCC for the Compliance Operator.
5.5.1.2. Installing the Compliance Operator using the CLI
Prerequisites
-
You must have
admin
privileges. -
You must have a
StorageClass
resource configured.
Procedure
Define a
Namespace
object:Example
namespace-object.yaml
apiVersion: v1
kind: Namespace
metadata:
  labels:
    openshift.io/cluster-monitoring: "true"
    pod-security.kubernetes.io/enforce: privileged 1
  name: openshift-compliance
- 1
- In OpenShift Container Platform 4.17, the pod security label must be set to
privileged
at the namespace level.
Create the
Namespace
object:$ oc create -f namespace-object.yaml
Define an
OperatorGroup
object:Example
operator-group-object.yaml
apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: compliance-operator
  namespace: openshift-compliance
spec:
  targetNamespaces:
  - openshift-compliance
Create the
OperatorGroup
object:$ oc create -f operator-group-object.yaml
Define a
Subscription
object:Example
subscription-object.yaml
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: compliance-operator-sub
  namespace: openshift-compliance
spec:
  channel: "stable"
  installPlanApproval: Automatic
  name: compliance-operator
  source: redhat-operators
  sourceNamespace: openshift-marketplace
Create the
Subscription
object:$ oc create -f subscription-object.yaml
If you are using the global scheduler feature and have enabled defaultNodeSelector
, you must create the namespace manually and update the annotations of the openshift-compliance
namespace, or the namespace where the Compliance Operator was installed, with openshift.io/node-selector: ""
. This removes the default node selector and prevents deployment failures.
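As a sketch, you can set the annotation on an existing namespace with a command similar to the following:
$ oc annotate namespace openshift-compliance openshift.io/node-selector="" --overwrite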
Verification
Verify the installation succeeded by inspecting the CSV file:
$ oc get csv -n openshift-compliance
Verify that the Compliance Operator is up and running:
$ oc get deploy -n openshift-compliance
5.5.1.3. Installing the Compliance Operator on ROSA hosted control planes (HCP)
As of the Compliance Operator 1.5.0 release, the Operator is tested against Red Hat OpenShift Service on AWS using Hosted control planes.
Red Hat OpenShift Service on AWS Hosted control planes clusters have restricted access to the control plane, which is managed by Red Hat. By default, the Compliance Operator will schedule to nodes within the master
node pool, which is not available in Red Hat OpenShift Service on AWS Hosted control planes installations. This requires you to configure the Subscription
object in a way that allows the Operator to schedule on available node pools. This step is necessary for a successful installation on Red Hat OpenShift Service on AWS Hosted control planes clusters.
Prerequisites
-
You must have
admin
privileges. -
You must have a
StorageClass
resource configured.
Procedure
Define a
Namespace
object:Example
namespace-object.yaml
apiVersion: v1
kind: Namespace
metadata:
  labels:
    openshift.io/cluster-monitoring: "true"
    pod-security.kubernetes.io/enforce: privileged 1
  name: openshift-compliance
- 1
- In OpenShift Container Platform 4.17, the pod security label must be set to
privileged
at the namespace level.
Create the
Namespace
object by running the following command:$ oc create -f namespace-object.yaml
Define an
OperatorGroup
object:Example
operator-group-object.yaml
apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: compliance-operator
  namespace: openshift-compliance
spec:
  targetNamespaces:
  - openshift-compliance
Create the
OperatorGroup
object by running the following command:$ oc create -f operator-group-object.yaml
Define a
Subscription
object:Example
subscription-object.yaml
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: compliance-operator-sub
  namespace: openshift-compliance
spec:
  channel: "stable"
  installPlanApproval: Automatic
  name: compliance-operator
  source: redhat-operators
  sourceNamespace: openshift-marketplace
  config:
    nodeSelector:
      node-role.kubernetes.io/worker: "" 1
- 1
- Update the Operator deployment to deploy on
worker
nodes.
Create the
Subscription
object by running the following command:$ oc create -f subscription-object.yaml
Verification
Verify that the installation succeeded by running the following command to inspect the cluster service version (CSV) file:
$ oc get csv -n openshift-compliance
Verify that the Compliance Operator is up and running by using the following command:
$ oc get deploy -n openshift-compliance
If the restricted
Security Context Constraints (SCC) have been modified to contain the system:authenticated
group, or if requiredDropCapabilities has been added
, the Compliance Operator may not function properly due to permissions issues.
You can create a custom SCC for the Compliance Operator scanner pod service account. For more information, see Creating a custom SCC for the Compliance Operator.
5.5.1.4. Installing the Compliance Operator on Hypershift hosted control planes
The Compliance Operator can be installed in hosted control planes using the OperatorHub by creating a Subscription
file.
Hosted control planes is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.
Prerequisites
-
You must have
admin
privileges.
Procedure
Define a
Namespace
object similar to the following:Example
namespace-object.yaml
apiVersion: v1
kind: Namespace
metadata:
  labels:
    openshift.io/cluster-monitoring: "true"
    pod-security.kubernetes.io/enforce: privileged 1
  name: openshift-compliance
- 1
- In OpenShift Container Platform 4.17, the pod security label must be set to
privileged
at the namespace level.
Create the
Namespace
object by running the following command:$ oc create -f namespace-object.yaml
Define an
OperatorGroup
object:Example
operator-group-object.yaml
apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: compliance-operator
  namespace: openshift-compliance
spec:
  targetNamespaces:
  - openshift-compliance
Create the
OperatorGroup
object by running the following command:$ oc create -f operator-group-object.yaml
Define a
Subscription
object:Example
subscription-object.yaml
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: compliance-operator-sub
  namespace: openshift-compliance
spec:
  channel: "stable"
  installPlanApproval: Automatic
  name: compliance-operator
  source: redhat-operators
  sourceNamespace: openshift-marketplace
  config:
    nodeSelector:
      node-role.kubernetes.io/worker: ""
    env:
    - name: PLATFORM
      value: "HyperShift"
Create the
Subscription
object by running the following command:$ oc create -f subscription-object.yaml
Verification
Verify the installation succeeded by inspecting the CSV file by running the following command:
$ oc get csv -n openshift-compliance
Verify that the Compliance Operator is up and running by running the following command:
$ oc get deploy -n openshift-compliance
5.5.1.5. Additional resources
- The Compliance Operator is supported in a restricted network environment. For more information, see Using Operator Lifecycle Manager in disconnected environments.
5.5.2. Updating the Compliance Operator
As a cluster administrator, you can update the Compliance Operator on your OpenShift Container Platform cluster.
Updating your OpenShift Container Platform cluster to version 4.14 might cause the Compliance Operator to not work as expected. This is due to an ongoing known issue. For more information, see OCPBUGS-18025.
5.5.2.1. Preparing for an Operator update
The subscription of an installed Operator specifies an update channel that tracks and receives updates for the Operator. You can change the update channel to start tracking and receiving updates from a newer channel.
The names of update channels in a subscription can differ between Operators, but the naming scheme typically follows a common convention within a given Operator. For example, channel names might follow a minor release update stream for the application provided by the Operator (1.2
, 1.3
) or a release frequency (stable
, fast
).
You cannot change installed Operators to a channel that is older than the current channel.
Red Hat Customer Portal Labs include the following application that helps administrators prepare to update their Operators:
You can use the application to search for Operator Lifecycle Manager-based Operators and verify the available Operator version per update channel across different versions of OpenShift Container Platform. Cluster Version Operator-based Operators are not included.
5.5.2.2. Changing the update channel for an Operator
You can change the update channel for an Operator by using the OpenShift Container Platform web console.
If the approval strategy in the subscription is set to Automatic, the update process initiates as soon as a new Operator version is available in the selected channel. If the approval strategy is set to Manual, you must manually approve pending updates.
Prerequisites
- An Operator previously installed using Operator Lifecycle Manager (OLM).
Procedure
-
In the Administrator perspective of the web console, navigate to Operators
Installed Operators. - Click the name of the Operator you want to change the update channel for.
- Click the Subscription tab.
- Click the name of the update channel under Update channel.
- Click the newer update channel that you want to change to, then click Save.
For subscriptions with an Automatic approval strategy, the update begins automatically. Navigate back to the Operators
Installed Operators page to monitor the progress of the update. When complete, the status changes to Succeeded and Up to date. For subscriptions with a Manual approval strategy, you can manually approve the update from the Subscription tab.
5.5.2.3. Manually approving a pending Operator update
If an installed Operator has the approval strategy in its subscription set to Manual, when new updates are released in its current update channel, the update must be manually approved before installation can begin.
Prerequisites
- An Operator previously installed using Operator Lifecycle Manager (OLM).
Procedure
-
In the Administrator perspective of the OpenShift Container Platform web console, navigate to Operators
Installed Operators. - Operators that have a pending update display a status with Upgrade available. Click the name of the Operator you want to update.
- Click the Subscription tab. Any updates requiring approval are displayed next to Upgrade status. For example, it might display 1 requires approval.
- Click 1 requires approval, then click Preview Install Plan.
- Review the resources that are listed as available for update. When satisfied, click Approve.
-
Navigate back to the Operators
Installed Operators page to monitor the progress of the update. When complete, the status changes to Succeeded and Up to date.
5.5.3. Managing the Compliance Operator
This section describes the lifecycle of security content, including how to use an updated version of compliance content and how to create a custom ProfileBundle
object.
5.5.3.1. ProfileBundle CR example
The ProfileBundle
object requires two pieces of information: the URL of a container image that contains the compliance content (contentImage)
and the file that contains the compliance content (contentFile). The contentFile
parameter is relative to the root of the file system. You can define the built-in rhcos4
ProfileBundle
object as shown in the following example:
apiVersion: compliance.openshift.io/v1alpha1
kind: ProfileBundle
metadata:
  creationTimestamp: "2022-10-19T12:06:30Z"
  finalizers:
  - profilebundle.finalizers.compliance.openshift.io
  generation: 1
  name: rhcos4
  namespace: openshift-compliance
  resourceVersion: "46741"
  uid: 22350850-af4a-4f5c-9a42-5e7b68b82d7d
spec:
  contentFile: ssg-rhcos4-ds.xml 1
  contentImage: registry.redhat.io/compliance/openshift-compliance-content-rhel8@sha256:900e... 2
status:
  conditions:
  - lastTransitionTime: "2022-10-19T12:07:51Z"
    message: Profile bundle successfully parsed
    reason: Valid
    status: "True"
    type: Ready
  dataStreamStatus: VALID
5.5.3.2. Updating security content
Security content is included as container images that the ProfileBundle
objects refer to. To accurately track updates to ProfileBundles
and the custom resources parsed from the bundles such as rules or profiles, identify the container image with the compliance content using a digest instead of a tag:
$ oc -n openshift-compliance get profilebundles rhcos4 -oyaml
Example output
apiVersion: compliance.openshift.io/v1alpha1
kind: ProfileBundle
metadata:
creationTimestamp: "2022-10-19T12:06:30Z"
finalizers:
- profilebundle.finalizers.compliance.openshift.io
generation: 1
name: rhcos4
namespace: openshift-compliance
resourceVersion: "46741"
uid: 22350850-af4a-4f5c-9a42-5e7b68b82d7d
spec:
contentFile: ssg-rhcos4-ds.xml
contentImage: registry.redhat.io/compliance/openshift-compliance-content-rhel8@sha256:900e... 1
status:
conditions:
- lastTransitionTime: "2022-10-19T12:07:51Z"
message: Profile bundle successfully parsed
reason: Valid
status: "True"
type: Ready
dataStreamStatus: VALID
- 1
- Security container image.
Each ProfileBundle
is backed by a deployment. When the Compliance Operator detects that the container image digest has changed, the deployment is updated to reflect the change and parse the content again. Using the digest instead of a tag ensures that you use a stable and predictable set of profiles.
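For example, assuming you already have the new digest, you can point the built-in rhcos4 bundle at an updated content image with a patch similar to the following sketch; the digest value is a placeholder:
$ oc -n openshift-compliance patch profilebundle rhcos4 --type merge \
  -p '{"spec":{"contentImage":"registry.redhat.io/compliance/openshift-compliance-content-rhel8@sha256:<new_digest>"}}'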
5.5.3.3. Additional resources
- The Compliance Operator is supported in a restricted network environment. For more information, see Using Operator Lifecycle Manager in disconnected environments.
5.5.4. Uninstalling the Compliance Operator
You can remove the OpenShift Compliance Operator from your cluster by using the OpenShift Container Platform web console or the CLI.
5.5.4.1. Uninstalling the OpenShift Compliance Operator from OpenShift Container Platform using the web console
To remove the Compliance Operator, you must first delete the objects in the namespace. After the objects are removed, you can remove the Operator and its namespace by deleting the openshift-compliance project.
Prerequisites
-
Access to an OpenShift Container Platform cluster using an account with
cluster-admin
permissions. - The OpenShift Compliance Operator must be installed.
Procedure
To remove the Compliance Operator by using the OpenShift Container Platform web console:
Go to the Operators
Installed Operators Compliance Operator page. - Click All instances.
- In All namespaces, click the Options menu and delete all ScanSettingBinding, ComplianceSuite, ComplianceScan, and ProfileBundle objects.
-
Switch to the Administration
Operators Installed Operators page. - Click the Options menu on the Compliance Operator entry and select Uninstall Operator.
-
Switch to the Home
Projects page. - Search for 'compliance'.
Click the Options menu next to the openshift-compliance project, and select Delete Project.
-
Confirm the deletion by typing
openshift-compliance
in the dialog box, and click Delete.
5.5.4.2. Uninstalling the OpenShift Compliance Operator from OpenShift Container Platform using the CLI
To remove the Compliance Operator, you must first delete the objects in the namespace. After the objects are removed, you can remove the Operator and its namespace by deleting the openshift-compliance project.
Prerequisites
-
Access to an OpenShift Container Platform cluster using an account with
cluster-admin
permissions. - The OpenShift Compliance Operator must be installed.
Procedure
Delete all objects in the namespace.
Delete the
ScanSettingBinding
objects:$ oc delete ssb --all -n openshift-compliance
Delete the
ScanSetting
objects:$ oc delete ss --all -n openshift-compliance
Delete the
ComplianceSuite
objects:$ oc delete suite --all -n openshift-compliance
Delete the
ComplianceScan
objects:$ oc delete scan --all -n openshift-compliance
Delete the
ProfileBundle
objects:$ oc delete profilebundle.compliance --all -n openshift-compliance
Delete the Subscription object:
$ oc delete sub --all -n openshift-compliance
Delete the CSV object:
$ oc delete csv --all -n openshift-compliance
Delete the project:
$ oc delete project openshift-compliance
Example output
project.project.openshift.io "openshift-compliance" deleted
Verification
Confirm the namespace is deleted:
$ oc get project/openshift-compliance
Example output
Error from server (NotFound): namespaces "openshift-compliance" not found
5.6. Compliance Operator scan management
5.6.1. Supported compliance profiles
There are several profiles available as part of the Compliance Operator (CO) installation. While you can use the following profiles to assess gaps in a cluster, usage alone does not infer or guarantee compliance with a particular profile, and the Compliance Operator is not an auditor.
In order to be compliant or certified under these various standards, you need to engage an authorized auditor such as a Qualified Security Assessor (QSA), Joint Authorization Board (JAB), or other industry recognized regulatory authority to assess your environment. You are required to work with an authorized auditor to achieve compliance with a standard.
The Compliance Operator might report incorrect results on some managed platforms, such as OpenShift Dedicated and Azure Red Hat OpenShift. For more information, see the Red Hat Knowledgebase Solution #6983418.
5.6.1.1. Compliance profiles
The Compliance Operator provides profiles to meet industry standard benchmarks.
The following tables reflect the latest available profiles in the Compliance Operator.
5.6.1.1.1. CIS compliance profiles
Profile | Profile title | Application | Industry compliance benchmark | Supported architectures | Supported platforms |
---|---|---|---|---|---|
ocp4-cis [1] | CIS Red Hat OpenShift Container Platform Benchmark v1.5.0 | Platform | CIS Benchmarks ™ [4] | | |
ocp4-cis-1-4 [3] | CIS Red Hat OpenShift Container Platform Benchmark v1.4.0 | Platform | CIS Benchmarks ™ [4] | | |
ocp4-cis-1-5 | CIS Red Hat OpenShift Container Platform Benchmark v1.5.0 | Platform | CIS Benchmarks ™ [4] | | |
ocp4-cis-node [1] | CIS Red Hat OpenShift Container Platform Benchmark v1.5.0 | Node [2] | CIS Benchmarks ™ [4] | | Red Hat OpenShift Service on AWS with hosted control planes (ROSA HCP) |
ocp4-cis-node-1-4 [3] | CIS Red Hat OpenShift Container Platform Benchmark v1.4.0 | Node [2] | CIS Benchmarks ™ [4] | | Red Hat OpenShift Service on AWS with hosted control planes (ROSA HCP) |
ocp4-cis-node-1-5 | CIS Red Hat OpenShift Container Platform Benchmark v1.5.0 | Node [2] | CIS Benchmarks ™ [4] | | Red Hat OpenShift Service on AWS with hosted control planes (ROSA HCP) |
-
The
ocp4-cis
andocp4-cis-node
profiles maintain the most up-to-date version of the CIS benchmark as it becomes available in the Compliance Operator. If you want to adhere to a specific version, such as CIS v1.4.0, use theocp4-cis-1-4
andocp4-cis-node-1-4
profiles. - Node profiles must be used with the relevant Platform profile. For more information, see Compliance Operator profile types.
- CIS v1.4.0 is superseded by CIS v1.5.0. It is recommended to apply the latest profile to your environment.
- To locate the CIS OpenShift Container Platform v4 Benchmark, go to CIS Benchmarks and click Download Latest CIS Benchmark, where you can then register to download the benchmark.
5.6.1.1.2. Essential Eight compliance profiles
Profile | Profile title | Application | Industry compliance benchmark | Supported architectures | Supported platforms |
---|---|---|---|---|---|
ocp4-e8 | Australian Cyber Security Centre (ACSC) Essential Eight | Platform | | | |
rhcos4-e8 | Australian Cyber Security Centre (ACSC) Essential Eight | Node | | | Red Hat OpenShift Service on AWS with hosted control planes (ROSA HCP) |
5.6.1.1.3. FedRAMP High compliance profiles
Profile | Profile title | Application | Industry compliance benchmark | Supported architectures | Supported platforms |
---|---|---|---|---|---|
ocp4-high [1] | NIST 800-53 High-Impact Baseline for Red Hat OpenShift - Platform level | Platform | | | |
ocp4-high-node [1] | NIST 800-53 High-Impact Baseline for Red Hat OpenShift - Node level | Node [2] | | | Red Hat OpenShift Service on AWS with hosted control planes (ROSA HCP) |
ocp4-high-node-rev-4 | NIST 800-53 High-Impact Baseline for Red Hat OpenShift - Node level | Node [2] | | | Red Hat OpenShift Service on AWS with hosted control planes (ROSA HCP) |
ocp4-high-rev-4 | NIST 800-53 High-Impact Baseline for Red Hat OpenShift - Platform level | Platform | | | |
rhcos4-high [1] | NIST 800-53 High-Impact Baseline for Red Hat Enterprise Linux CoreOS | Node | | | Red Hat OpenShift Service on AWS with hosted control planes (ROSA HCP) |
rhcos4-high-rev-4 | NIST 800-53 High-Impact Baseline for Red Hat Enterprise Linux CoreOS | Node | | | Red Hat OpenShift Service on AWS with hosted control planes (ROSA HCP) |
-
The
ocp4-high
,ocp4-high-node
andrhcos4-high
profiles maintain the most up-to-date version of the FedRAMP High standard as it becomes available in the Compliance Operator. If you want to adhere to a specific version, such as FedRAMP high R4, use theocp4-high-rev-4
andocp4-high-node-rev-4
profiles. - Node profiles must be used with the relevant Platform profile. For more information, see Compliance Operator profile types.
5.6.1.1.4. FedRAMP Moderate compliance profiles
Profile | Profile title | Application | Industry compliance benchmark | Supported architectures | Supported platforms |
---|---|---|---|---|---|
ocp4-moderate [1] | NIST 800-53 Moderate-Impact Baseline for Red Hat OpenShift - Platform level | Platform | | | |
ocp4-moderate-node [1] | NIST 800-53 Moderate-Impact Baseline for Red Hat OpenShift - Node level | Node [2] | | | Red Hat OpenShift Service on AWS with hosted control planes (ROSA HCP) |
ocp4-moderate-node-rev-4 | NIST 800-53 Moderate-Impact Baseline for Red Hat OpenShift - Node level | Node [2] | | | Red Hat OpenShift Service on AWS with hosted control planes (ROSA HCP) |
ocp4-moderate-rev-4 | NIST 800-53 Moderate-Impact Baseline for Red Hat OpenShift - Platform level | Platform | | | |
rhcos4-moderate [1] | NIST 800-53 Moderate-Impact Baseline for Red Hat Enterprise Linux CoreOS | Node | | | Red Hat OpenShift Service on AWS with hosted control planes (ROSA HCP) |
rhcos4-moderate-rev-4 | NIST 800-53 Moderate-Impact Baseline for Red Hat Enterprise Linux CoreOS | Node | | | Red Hat OpenShift Service on AWS with hosted control planes (ROSA HCP) |
-
The
ocp4-moderate
,ocp4-moderate-node
andrhcos4-moderate
profiles maintain the most up-to-date version of the FedRAMP Moderate standard as it becomes available in the Compliance Operator. If you want to adhere to a specific version, such as FedRAMP Moderate R4, use theocp4-moderate-rev-4
andocp4-moderate-node-rev-4
profiles. - Node profiles must be used with the relevant Platform profile. For more information, see Compliance Operator profile types.
5.6.1.1.5. NERC-CIP compliance profiles
Profile | Profile title | Application | Industry compliance benchmark | Supported architectures | Supported platforms |
---|---|---|---|---|---|
ocp4-nerc-cip | North American Electric Reliability Corporation (NERC) Critical Infrastructure Protection (CIP) cybersecurity standards profile for the OpenShift Container Platform - Platform level | Platform | | | |
ocp4-nerc-cip-node | North American Electric Reliability Corporation (NERC) Critical Infrastructure Protection (CIP) cybersecurity standards profile for the OpenShift Container Platform - Node level | Node [1] | | | Red Hat OpenShift Service on AWS with hosted control planes (ROSA HCP) |
rhcos4-nerc-cip | North American Electric Reliability Corporation (NERC) Critical Infrastructure Protection (CIP) cybersecurity standards profile for Red Hat Enterprise Linux CoreOS | Node | | | Red Hat OpenShift Service on AWS with hosted control planes (ROSA HCP) |
- Node profiles must be used with the relevant Platform profile. For more information, see Compliance Operator profile types.
5.6.1.1.6. PCI-DSS compliance profiles
Profile | Profile title | Application | Industry compliance benchmark | Supported architectures | Supported platforms |
---|---|---|---|---|---|
ocp4-pci-dss [1] | PCI-DSS v4 Control Baseline for OpenShift Container Platform 4 | Platform | | | |
ocp4-pci-dss-3-2 [3] | PCI-DSS v3.2.1 Control Baseline for OpenShift Container Platform 4 | Platform | | | |
ocp4-pci-dss-4-0 | PCI-DSS v4 Control Baseline for OpenShift Container Platform 4 | Platform | | | |
ocp4-pci-dss-node [1] | PCI-DSS v4 Control Baseline for OpenShift Container Platform 4 | Node [2] | | | Red Hat OpenShift Service on AWS with hosted control planes (ROSA HCP) |
ocp4-pci-dss-node-3-2 [3] | PCI-DSS v3.2.1 Control Baseline for OpenShift Container Platform 4 | Node [2] | | | Red Hat OpenShift Service on AWS with hosted control planes (ROSA HCP) |
ocp4-pci-dss-node-4-0 | PCI-DSS v4 Control Baseline for OpenShift Container Platform 4 | Node [2] | | | Red Hat OpenShift Service on AWS with hosted control planes (ROSA HCP) |
-
The
ocp4-pci-dss
andocp4-pci-dss-node
profiles maintain the most up-to-date version of the PCI-DSS standard as it becomes available in the Compliance Operator. If you want to adhere to a specific version, such as PCI-DSS v3.2.1, use theocp4-pci-dss-3-2
andocp4-pci-dss-node-3-2
profiles. - Node profiles must be used with the relevant Platform profile. For more information, see Compliance Operator profile types.
- PCI-DSS v3.2.1 is superseded by PCI-DSS v4. It is recommended to apply the latest profile to your environment.
5.6.1.1.7. STIG compliance profiles
Profile | Profile title | Application | Industry compliance benchmark | Supported architectures | Supported platforms |
---|---|---|---|---|---|
ocp4-stig [1] | Defense Information Systems Agency Security Technical Implementation Guide (DISA STIG) for Red Hat Openshift | Platform | | | |
ocp4-stig-node [1] | Defense Information Systems Agency Security Technical Implementation Guide (DISA STIG) for Red Hat Openshift | Node [2] | | | Red Hat OpenShift Service on AWS with hosted control planes (ROSA HCP) |
ocp4-stig-node-v1r1 [3] | Defense Information Systems Agency Security Technical Implementation Guide (DISA STIG) for Red Hat Openshift V1R1 | Node [2] | | | Red Hat OpenShift Service on AWS with hosted control planes (ROSA HCP) |
ocp4-stig-node-v2r1 | Defense Information Systems Agency Security Technical Implementation Guide (DISA STIG) for Red Hat Openshift V2R1 | Node [2] | | | Red Hat OpenShift Service on AWS with hosted control planes (ROSA HCP) |
ocp4-stig-v1r1 [3] | Defense Information Systems Agency Security Technical Implementation Guide (DISA STIG) for Red Hat Openshift V1R1 | Platform | | | |
ocp4-stig-v2r1 | Defense Information Systems Agency Security Technical Implementation Guide (DISA STIG) for Red Hat Openshift V2R1 | Platform | | | |
rhcos4-stig | Defense Information Systems Agency Security Technical Implementation Guide (DISA STIG) for Red Hat Openshift | Node | | | Red Hat OpenShift Service on AWS with hosted control planes (ROSA HCP) |
rhcos4-stig-v1r1 [3] | Defense Information Systems Agency Security Technical Implementation Guide (DISA STIG) for Red Hat Openshift V1R1 | Node | DISA-STIG [3] | | Red Hat OpenShift Service on AWS with hosted control planes (ROSA HCP) |
rhcos4-stig-v2r1 | Defense Information Systems Agency Security Technical Implementation Guide (DISA STIG) for Red Hat Openshift V2R1 | Node | | | Red Hat OpenShift Service on AWS with hosted control planes (ROSA HCP) |
-
The
ocp4-stig
,ocp4-stig-node
andrhcos4-stig
profiles maintain the most up-to-date version of the DISA-STIG benchmark as it becomes available in the Compliance Operator. If you want to adhere to a specific version, such as DISA-STIG V2R1, use theocp4-stig-v2r1
andocp4-stig-node-v2r1
profiles. - Node profiles must be used with the relevant Platform profile. For more information, see Compliance Operator profile types.
- DISA-STIG V1R1 is superseded by DISA-STIG V2R1. It is recommended to apply the latest profile to your environment.
5.6.1.1.8. About extended compliance profiles
Some compliance profiles have controls that require following industry best practices, resulting in some profiles extending others. Combining the Center for Internet Security (CIS) best practices with National Institute of Standards and Technology (NIST) security frameworks establishes a path to a secure and compliant environment.
For example, the NIST High-Impact and Moderate-Impact profiles extend the CIS profile to achieve compliance. As a result, extended compliance profiles eliminate the need to run both profiles in a single cluster.
Profile | Extends |
---|---|
ocp4-pci-dss | ocp4-cis |
ocp4-pci-dss-node | ocp4-cis-node |
ocp4-high | ocp4-cis |
ocp4-high-node | ocp4-cis-node |
ocp4-moderate | ocp4-cis |
ocp4-moderate-node | ocp4-cis-node |
ocp4-nerc-cip | ocp4-moderate |
ocp4-nerc-cip-node | ocp4-moderate-node |
5.6.1.2. Additional resources
5.6.2. Compliance Operator scans
The ScanSetting
and ScanSettingBinding
APIs are recommended to run compliance scans with the Compliance Operator. For more information on these API objects, run:
$ oc explain scansettings
or
$ oc explain scansettingbindings
5.6.2.1. Running compliance scans
You can run a scan using the Center for Internet Security (CIS) profiles. For convenience, the Compliance Operator creates a ScanSetting
object with reasonable defaults on startup. This ScanSetting
object is named default
.
For all-in-one control plane and worker nodes, the compliance scan runs twice on the worker and control plane nodes. The compliance scan might generate inconsistent scan results. You can avoid inconsistent results by defining only a single role in the ScanSetting
object.
Procedure
Inspect the
ScanSetting
object by running the following command:$ oc describe scansettings default -n openshift-compliance
Example output
Name:                  default
Namespace:             openshift-compliance
Labels:                <none>
Annotations:           <none>
API Version:           compliance.openshift.io/v1alpha1
Kind:                  ScanSetting
Max Retry On Timeout:  3
Metadata:
  Creation Timestamp:  2024-07-16T14:56:42Z
  Generation:          2
  Resource Version:    91655682
  UID:                 50358cf1-57a8-4f69-ac50-5c7a5938e402
Raw Result Storage:
  Node Selector:
    node-role.kubernetes.io/master:
  Pv Access Modes:
    ReadWriteOnce 1
  Rotation:            3 2
  Size:                1Gi 3
  Storage Class Name:  standard 4
  Tolerations:
    Effect:              NoSchedule
    Key:                 node-role.kubernetes.io/master
    Operator:            Exists
    Effect:              NoExecute
    Key:                 node.kubernetes.io/not-ready
    Operator:            Exists
    Toleration Seconds:  300
    Effect:              NoExecute
    Key:                 node.kubernetes.io/unreachable
    Operator:            Exists
    Toleration Seconds:  300
    Effect:              NoSchedule
    Key:                 node.kubernetes.io/memory-pressure
    Operator:            Exists
Roles:
  master 5
  worker 6
Scan Tolerations: 7
  Operator:            Exists
Schedule:              0 1 * * * 8
Show Not Applicable:   false
Strict Node Scan:      true
Suspend:               false
Timeout:               30m
Events:                <none>
- 1
- The Compliance Operator creates a persistent volume (PV) that contains the results of the scans. By default, the PV will use access mode ReadWriteOnce because the Compliance Operator cannot make any assumptions about the storage classes configured on the cluster. Additionally, ReadWriteOnce access mode is available on most clusters. If you need to fetch the scan results, you can do so by using a helper pod, which also binds the volume. Volumes that use the ReadWriteOnce access mode can be mounted by only one pod at a time, so it is important to remember to delete the helper pods. Otherwise, the Compliance Operator will not be able to reuse the volume for subsequent scans.
- 2
- The Compliance Operator keeps results of three subsequent scans in the volume; older scans are rotated.
- 3
- The Compliance Operator will allocate one GB of storage for the scan results.
- 4
- The scansetting.rawResultStorage.storageClassName field specifies the storageClassName value to use when creating the PersistentVolumeClaim object to store the raw results. The default value is null, which will attempt to use the default storage class configured in the cluster. If there is no default class specified, then you must set a default class.
- 5 6
- If the scan setting uses any profiles that scan cluster nodes, scan these node roles.
- 7
- The default scan setting object scans all the nodes.
- 8
- The default scan setting object runs scans at 01:00 each day.
As an alternative to the default scan setting, you can use default-auto-apply, which has the following settings:
Name:                      default-auto-apply
Namespace:                 openshift-compliance
Labels:                    <none>
Annotations:               <none>
API Version:               compliance.openshift.io/v1alpha1
Auto Apply Remediations:   true 1
Auto Update Remediations:  true 2
Kind:                      ScanSetting
Metadata:
  Creation Timestamp:  2022-10-18T20:21:00Z
  Generation:          1
  Managed Fields:
    API Version:  compliance.openshift.io/v1alpha1
    Fields Type:  FieldsV1
    fieldsV1:
      f:autoApplyRemediations:
      f:autoUpdateRemediations:
      f:rawResultStorage:
        .:
        f:nodeSelector:
          .:
          f:node-role.kubernetes.io/master:
        f:pvAccessModes:
        f:rotation:
        f:size:
        f:tolerations:
      f:roles:
      f:scanTolerations:
      f:schedule:
      f:showNotApplicable:
      f:strictNodeScan:
    Manager:         compliance-operator
    Operation:       Update
    Time:            2022-10-18T20:21:00Z
  Resource Version:  38840
  UID:               8cb0967d-05e0-4d7a-ac1c-08a7f7e89e84
Raw Result Storage:
  Node Selector:
    node-role.kubernetes.io/master:
  Pv Access Modes:
    ReadWriteOnce
  Rotation:  3
  Size:      1Gi
  Tolerations:
    Effect:              NoSchedule
    Key:                 node-role.kubernetes.io/master
    Operator:            Exists
    Effect:              NoExecute
    Key:                 node.kubernetes.io/not-ready
    Operator:            Exists
    Toleration Seconds:  300
    Effect:              NoExecute
    Key:                 node.kubernetes.io/unreachable
    Operator:            Exists
    Toleration Seconds:  300
    Effect:              NoSchedule
    Key:                 node.kubernetes.io/memory-pressure
    Operator:            Exists
Roles:
  master
  worker
Scan Tolerations:
  Operator:  Exists
Schedule:              0 1 * * *
Show Not Applicable:   false
Strict Node Scan:      true
Events:                <none>
Create a
ScanSettingBinding
object that binds to the defaultScanSetting
object and scans the cluster using thecis
andcis-node
profiles. For example:apiVersion: compliance.openshift.io/v1alpha1 kind: ScanSettingBinding metadata: name: cis-compliance namespace: openshift-compliance profiles: - name: ocp4-cis-node kind: Profile apiGroup: compliance.openshift.io/v1alpha1 - name: ocp4-cis kind: Profile apiGroup: compliance.openshift.io/v1alpha1 settingsRef: name: default kind: ScanSetting apiGroup: compliance.openshift.io/v1alpha1
Create the
ScanSettingBinding
object by running:$ oc create -f <file-name>.yaml -n openshift-compliance
At this point in the process, the
ScanSettingBinding
object is reconciled and based on theBinding
and theBound
settings. The Compliance Operator creates aComplianceSuite
object and the associatedComplianceScan
objects.Follow the compliance scan progress by running:
$ oc get compliancescan -w -n openshift-compliance
The scans progress through the scanning phases and eventually reach the
DONE
phase when complete. In most cases, the result of the scan isNON-COMPLIANT
. You can review the scan results and start applying remediations to make the cluster compliant. See Managing Compliance Operator remediation for more information.
5.6.2.2. Setting custom storage size for results
While the custom resources such as ComplianceCheckResult
represent an aggregated result of one check across all scanned nodes, it can be useful to review the raw results as produced by the scanner. The raw results are produced in the ARF format and can be large (tens of megabytes per node), so it is impractical to store them in a Kubernetes resource backed by the etcd
key-value store. Instead, every scan creates a persistent volume (PV) which defaults to 1GB size. Depending on your environment, you may want to increase the PV size accordingly. This is done using the rawResultStorage.size
attribute that is exposed in both the ScanSetting
and ComplianceScan
resources.
A related parameter is rawResultStorage.rotation
which controls how many scans are retained in the PV before the older scans are rotated. The default value is 3; setting the rotation policy to 0 disables the rotation. Given the default rotation policy and an estimate of 100 MB per raw ARF scan report, you can calculate the right PV size for your environment.
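As a rough, illustrative calculation: with the default rotation of 3 and an estimated 100 MB per raw ARF report, each scan needs on the order of 300 MB of raw result storage, so the default 1Gi PV usually has headroom; raising the rotation to 10 would need roughly 1 GB per scan and would warrant a larger rawResultStorage.size value.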
5.6.2.2.1. Using custom result storage values
Because OpenShift Container Platform can be deployed in a variety of public clouds or bare metal, the Compliance Operator cannot determine available storage configurations. By default, the Compliance Operator will try to create the PV for storing results using the default storage class of the cluster, but a custom storage class can be configured using the rawResultStorage.StorageClassName
attribute.
If your cluster does not specify a default storage class, this attribute must be set.
Configure the ScanSetting
custom resource to use a standard storage class and create persistent volumes that are 10GB in size and keep the last 10 results:
Example ScanSetting
CR
apiVersion: compliance.openshift.io/v1alpha1
kind: ScanSetting
metadata:
  name: default
  namespace: openshift-compliance
rawResultStorage:
  storageClassName: standard
  rotation: 10
  size: 10Gi
roles:
- worker
- master
scanTolerations:
- effect: NoSchedule
  key: node-role.kubernetes.io/master
  operator: Exists
schedule: '0 1 * * *'
5.6.2.3. Scheduling the result server pod on a worker node
The result server pod mounts the persistent volume (PV) that stores the raw Asset Reporting Format (ARF) scan results. The nodeSelector
and tolerations
attributes enable you to configure the location of the result server pod.
This is helpful for those environments where control plane nodes are not permitted to mount persistent volumes.
Procedure
Create a
ScanSetting
custom resource (CR) for the Compliance Operator:Define the
ScanSetting
CR, and save the YAML file, for example,rs-workers.yaml
:apiVersion: compliance.openshift.io/v1alpha1 kind: ScanSetting metadata: name: rs-on-workers namespace: openshift-compliance rawResultStorage: nodeSelector: node-role.kubernetes.io/worker: "" 1 pvAccessModes: - ReadWriteOnce rotation: 3 size: 1Gi tolerations: - operator: Exists 2 roles: - worker - master scanTolerations: - operator: Exists schedule: 0 1 * * *
To create the
ScanSetting
CR, run the following command:$ oc create -f rs-workers.yaml
Verification
To verify that the
ScanSetting
object is created, run the following command:$ oc get scansettings rs-on-workers -n openshift-compliance -o yaml
Example output
apiVersion: compliance.openshift.io/v1alpha1 kind: ScanSetting metadata: creationTimestamp: "2021-11-19T19:36:36Z" generation: 1 name: rs-on-workers namespace: openshift-compliance resourceVersion: "48305" uid: 43fdfc5f-15a7-445a-8bbc-0e4a160cd46e rawResultStorage: nodeSelector: node-role.kubernetes.io/worker: "" pvAccessModes: - ReadWriteOnce rotation: 3 size: 1Gi tolerations: - operator: Exists roles: - worker - master scanTolerations: - operator: Exists schedule: 0 1 * * * strictNodeScan: true
5.6.2.4. ScanSetting
Custom Resource
The ScanSetting
Custom Resource now allows you to override the default CPU and memory limits of scanner pods through the scan limits attribute. The Compliance Operator will use defaults of 500Mi memory, 100m CPU for the scanner container, and 200Mi memory with 100m CPU for the api-resource-collector
container. To set the memory limits of the Operator, modify the Subscription
object if installed through OLM or the Operator deployment itself.
To increase the default CPU and memory limits of the Compliance Operator, see Increasing Compliance Operator resource limits.
Increasing the memory limit for the Compliance Operator or the scanner pods is needed if the default limits are not sufficient and the Operator or scanner pods are terminated by the Out Of Memory (OOM) killer.
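For the Operator itself, a minimal sketch of setting resource limits through the OLM Subscription object follows; the channel, catalog source, and limit values shown here are illustrative assumptions, not required settings:
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: compliance-operator
  namespace: openshift-compliance
spec:
  channel: stable                      # illustrative channel
  name: compliance-operator
  source: redhat-operators             # illustrative catalog source
  sourceNamespace: openshift-marketplace
  config:
    resources:                         # limits that OLM applies to the Operator deployment
      limits:
        memory: 1Gi
        cpu: 500m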
5.6.2.5. Configuring the hosted control planes management cluster
If you are hosting your own Hosted control plane or HyperShift environment and want to scan a Hosted Cluster from the management cluster, you must set the name and namespace prefix of the target Hosted Cluster. You can achieve this by creating a TailoredProfile
.
This procedure only applies to users managing their own hosted control planes environment.
Only ocp4-cis
and ocp4-pci-dss
profiles are supported in hosted control planes management clusters.
Prerequisites
- The Compliance Operator is installed in the management cluster.
Procedure
Obtain the
name
andnamespace
of the hosted cluster to be scanned by running the following command:$ oc get hostedcluster -A
Example output
NAMESPACE NAME VERSION KUBECONFIG PROGRESS AVAILABLE PROGRESSING MESSAGE local-cluster 79136a1bdb84b3c13217 4.13.5 79136a1bdb84b3c13217-admin-kubeconfig Completed True False The hosted control plane is available
In the management cluster, create a
TailoredProfile
extending the scan Profile and define the name and namespace of the Hosted Cluster to be scanned:Example
management-tailoredprofile.yaml
apiVersion: compliance.openshift.io/v1alpha1 kind: TailoredProfile metadata: name: hypershift-cisk57aw88gry namespace: openshift-compliance spec: description: This profile test required rules extends: ocp4-cis 1 title: Management namespace profile setValues: - name: ocp4-hypershift-cluster rationale: This value is used for HyperShift version detection value: 79136a1bdb84b3c13217 2 - name: ocp4-hypershift-namespace-prefix rationale: This value is used for HyperShift control plane namespace detection value: local-cluster 3
Create the
TailoredProfile
:$ oc create -n openshift-compliance -f management-tailoredprofile.yaml
5.6.2.6. Applying resource requests and limits
When the kubelet starts a container as part of a Pod, the kubelet passes that container’s requests and limits for memory and CPU to the container runtime. In Linux, the container runtime configures the kernel cgroups that apply and enforce the limits you defined.
The CPU limit defines how much CPU time the container can use. During each scheduling interval, the Linux kernel checks to see if this limit is exceeded. If so, the kernel waits before allowing the cgroup to resume execution.
If several different containers (cgroups) want to run on a contended system, workloads with larger CPU requests are allocated more CPU time than workloads with small requests. The memory request is used during Pod scheduling. On a node that uses cgroups v2, the container runtime might use the memory request as a hint to set memory.min
and memory.low
values.
If a container attempts to allocate more memory than this limit, the Linux kernel out-of-memory subsystem activates and intervenes by stopping one of the processes in the container that tried to allocate memory. The memory limit for the Pod or container can also apply to pages in memory-backed volumes, such as an emptyDir.
The kubelet tracks tmpfs
emptyDir
volumes as container memory used, rather than as local ephemeral storage. If a container exceeds its memory request and the node that it runs on becomes short of memory overall, the Pod’s container might be evicted.
A container might not be allowed to exceed its CPU limit for extended periods. Container runtimes do not stop Pods or containers for excessive CPU usage. To determine whether a container cannot be scheduled or is being killed due to resource limits, see Troubleshooting the Compliance Operator.
5.6.2.7. Scheduling Pods with container resource requests
When a Pod is created, the scheduler selects a Node for the Pod to run on. Each node has a maximum capacity for each resource type in the amount of CPU and memory it can provide for Pods. The scheduler ensures that, for each resource type, the sum of the resource requests of the scheduled containers is less than the capacity of the node.
Even if memory or CPU resource usage on a node is very low, the scheduler might still refuse to place a Pod on the node if the capacity check fails. This protects against a resource shortage on the node.
For each container, you can specify the following resource limits and requests:
spec.containers[].resources.limits.cpu spec.containers[].resources.limits.memory spec.containers[].resources.limits.hugepages-<size> spec.containers[].resources.requests.cpu spec.containers[].resources.requests.memory spec.containers[].resources.requests.hugepages-<size>
Although you can specify requests and limits only for individual containers, it is also useful to consider the overall resource requests and limits for a pod. For a particular resource, a pod resource request or limit is the sum of the resource requests or limits of that type for each container in the pod.
Example container resource requests and limits
apiVersion: v1 kind: Pod metadata: name: frontend spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: app image: images.my-company.example/app:v4 resources: requests: 1 memory: "64Mi" cpu: "250m" limits: 2 memory: "128Mi" cpu: "500m" securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] - name: log-aggregator image: images.my-company.example/log-aggregator:v6 resources: requests: memory: "64Mi" cpu: "250m" limits: memory: "128Mi" cpu: "500m" securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL]
5.6.3. Tailoring the Compliance Operator
While the Compliance Operator comes with ready-to-use profiles, they must be modified to fit the organizations’ needs and requirements. The process of modifying a profile is called tailoring.
The Compliance Operator provides the TailoredProfile
object to help tailor profiles.
5.6.3.1. Creating a new tailored profile
You can write a tailored profile from scratch by using the TailoredProfile
object. Set an appropriate title
and description
and leave the extends
field empty. Indicate to the Compliance Operator what type of scan this custom profile will generate:
- Node scan: Scans the Operating System.
- Platform scan: Scans the OpenShift Container Platform configuration.
Procedure
-
Set the following annotation on the
TailoredProfile
object:
Example new-profile.yaml
apiVersion: compliance.openshift.io/v1alpha1 kind: TailoredProfile metadata: name: new-profile annotations: compliance.openshift.io/product-type: Node 1 spec: extends: ocp4-cis-node 2 description: My custom profile 3 title: Custom profile 4 enableRules: - name: ocp4-etcd-unique-ca rationale: We really need to enable this disableRules: - name: ocp4-file-groupowner-cni-conf rationale: This does not apply to the cluster
- 1
- Set
Node
orPlatform
accordingly. - 2
- The
extends
field is optional. - 3
- Use the
description
field to describe the function of the newTailoredProfile
object. - 4
- Give your
TailoredProfile
object a title with thetitle
field.NoteAdding the
-node
suffix to thename
field of theTailoredProfile
object is similar to adding theNode
product type annotation and generates an Operating System scan.
5.6.3.2. Using tailored profiles to extend existing ProfileBundles
While the TailoredProfile
CR enables the most common tailoring operations, the XCCDF standard allows even more flexibility in tailoring OpenSCAP profiles. In addition, if your organization has been using OpenSCAP previously, you might have an existing XCCDF tailoring file that you can reuse.
The ComplianceSuite
object contains an optional TailoringConfigMap
attribute that you can point to a custom tailoring file. The value of the TailoringConfigMap
attribute is the name of a config map, which must contain a key called tailoring.xml
and the value of this key is the tailoring contents.
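As a sketch, assuming an existing tailoring file on disk, such a config map could be created as follows; the config map name and the file path are placeholders:
$ oc -n openshift-compliance create configmap <tailoring_configmap_name> \
    --from-file=tailoring.xml=/path/to/the/tailoringFile.xml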
Procedure
Browse the available rules for the Red Hat Enterprise Linux CoreOS (RHCOS)
ProfileBundle
:$ oc get rules.compliance -n openshift-compliance -l compliance.openshift.io/profile-bundle=rhcos4
Browse the available variables in the same
ProfileBundle
:$ oc get variables.compliance -n openshift-compliance -l compliance.openshift.io/profile-bundle=rhcos4
Create a tailored profile named
nist-moderate-modified
:Choose which rules you want to add to the
nist-moderate-modified
tailored profile. This example extends therhcos4-moderate
profile by disabling two rules and changing one value. Use therationale
value to describe why these changes were made:Example
new-profile-node.yaml
apiVersion: compliance.openshift.io/v1alpha1 kind: TailoredProfile metadata: name: nist-moderate-modified spec: extends: rhcos4-moderate description: NIST moderate profile title: My modified NIST moderate profile disableRules: - name: rhcos4-file-permissions-var-log-messages rationale: The file contains logs of error messages in the system - name: rhcos4-account-disable-post-pw-expiration rationale: No need to check this as it comes from the IdP setValues: - name: rhcos4-var-selinux-state rationale: Organizational requirements value: permissive
Table 5.9. Attributes for spec variables Attribute Description extends
Name of the
Profile
object upon which thisTailoredProfile
is built.title
Human-readable title of the
TailoredProfile
.disableRules
A list of name and rationale pairs. Each name refers to a name of a rule object that is to be disabled. The rationale value is human-readable text describing why the rule is disabled.
manualRules
A list of name and rationale pairs. When a manual rule is added, the check result status will always be
manual
and remediation will not be generated. This attribute is automatic and by default has no values when set as a manual rule.enableRules
A list of name and rationale pairs. Each name refers to a name of a rule object that is to be enabled. The rationale value is human-readable text describing why the rule is enabled.
description
Human-readable text describing the
TailoredProfile
.setValues
A list of name, rationale, and value groupings. Each name refers to a name of the value set. The rationale is human-readable text describing the set. The value is the actual setting.
Add the
tailoredProfile.spec.manualRules
attribute:Example
tailoredProfile.spec.manualRules.yaml
apiVersion: compliance.openshift.io/v1alpha1 kind: TailoredProfile metadata: name: ocp4-manual-scc-check spec: extends: ocp4-cis description: This profile extends ocp4-cis by forcing the SCC check to always return MANUAL title: OCP4 CIS profile with manual SCC check manualRules: - name: ocp4-scc-limit-container-allowed-capabilities rationale: We use third party software that installs its own SCC with extra privileges
Create the
TailoredProfile
object:$ oc create -n openshift-compliance -f new-profile-node.yaml 1
- 1
- The
TailoredProfile
object is created in the defaultopenshift-compliance
namespace.
Example output
tailoredprofile.compliance.openshift.io/nist-moderate-modified created
Define the
ScanSettingBinding
object to bind the newnist-moderate-modified
tailored profile to the defaultScanSetting
object.Example
new-scansettingbinding.yaml
apiVersion: compliance.openshift.io/v1alpha1 kind: ScanSettingBinding metadata: name: nist-moderate-modified profiles: - apiGroup: compliance.openshift.io/v1alpha1 kind: Profile name: ocp4-moderate - apiGroup: compliance.openshift.io/v1alpha1 kind: TailoredProfile name: nist-moderate-modified settingsRef: apiGroup: compliance.openshift.io/v1alpha1 kind: ScanSetting name: default
Create the
ScanSettingBinding
object:$ oc create -n openshift-compliance -f new-scansettingbinding.yaml
Example output
scansettingbinding.compliance.openshift.io/nist-moderate-modified created
5.6.4. Retrieving Compliance Operator raw results
When proving compliance for your OpenShift Container Platform cluster, you might need to provide the scan results for auditing purposes.
5.6.4.1. Obtaining Compliance Operator raw results from a persistent volume
Procedure
The Compliance Operator generates and stores the raw results in a persistent volume. These results are in Asset Reporting Format (ARF).
Explore the
ComplianceSuite
object:$ oc get compliancesuites nist-moderate-modified \ -o json -n openshift-compliance | jq '.status.scanStatuses[].resultsStorage'
Example output
{ "name": "ocp4-moderate", "namespace": "openshift-compliance" } { "name": "nist-moderate-modified-master", "namespace": "openshift-compliance" } { "name": "nist-moderate-modified-worker", "namespace": "openshift-compliance" }
This shows the persistent volume claims where the raw results are accessible.
Verify the raw data location by using the name and namespace of one of the results:
$ oc get pvc -n openshift-compliance rhcos4-moderate-worker
Example output
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE rhcos4-moderate-worker Bound pvc-548f6cfe-164b-42fe-ba13-a07cfbc77f3a 1Gi RWO gp2 92m
Fetch the raw results by spawning a pod that mounts the volume and copying the results:
$ oc create -n openshift-compliance -f pod.yaml
Example pod.yaml
apiVersion: "v1" kind: Pod metadata: name: pv-extract spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: pv-extract-pod image: registry.access.redhat.com/ubi9/ubi command: ["sleep", "3000"] volumeMounts: - mountPath: "/workers-scan-results" name: workers-scan-vol securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] volumes: - name: workers-scan-vol persistentVolumeClaim: claimName: rhcos4-moderate-worker
After the pod is running, download the results:
$ oc cp pv-extract:/workers-scan-results -n openshift-compliance .
ImportantSpawning a pod that mounts the persistent volume will keep the claim as
Bound
. If the volume uses the ReadWriteOnce
, the volume is only mountable by one pod at a time. You must delete the pod upon completion, or it will not be possible for the Operator to schedule a pod and continue storing results in this location.After the extraction is complete, the pod can be deleted:
$ oc delete pod pv-extract -n openshift-compliance
5.6.5. Managing Compliance Operator result and remediation
Each ComplianceCheckResult
represents a result of one compliance rule check. If the rule can be remediated automatically, a ComplianceRemediation
object with the same name, owned by the ComplianceCheckResult
is created. Unless requested, the remediations are not applied automatically, which gives an OpenShift Container Platform administrator the opportunity to review what the remediation does and only apply a remediation once it has been verified.
Full remediation for Federal Information Processing Standards (FIPS) compliance requires enabling FIPS mode for the cluster. To enable FIPS mode, you must run the installation program from a Red Hat Enterprise Linux (RHEL) computer configured to operate in FIPS mode. For more information about configuring FIPS mode on RHEL, see Installing the system in FIPS mode.
FIPS mode is supported on the following architectures:
-
x86_64
-
ppc64le
-
s390x
5.6.5.1. Filters for compliance check results
By default, the ComplianceCheckResult
objects are labeled with several useful labels that allow you to query the checks and decide on the next steps after the results are generated.
List checks that belong to a specific suite:
$ oc get -n openshift-compliance compliancecheckresults \ -l compliance.openshift.io/suite=workers-compliancesuite
List checks that belong to a specific scan:
$ oc get -n openshift-compliance compliancecheckresults \ -l compliance.openshift.io/scan=workers-scan
Not all ComplianceCheckResult
objects create ComplianceRemediation
objects. Only ComplianceCheckResult
objects that can be remediated automatically do. A ComplianceCheckResult
object has a related remediation if it is labeled with the compliance.openshift.io/automated-remediation
label. The name of the remediation is the same as the name of the check.
List all failing checks that can be remediated automatically:
$ oc get -n openshift-compliance compliancecheckresults \ -l 'compliance.openshift.io/check-status=FAIL,compliance.openshift.io/automated-remediation'
List all failing checks with high severity:
$ oc get compliancecheckresults -n openshift-compliance \ -l 'compliance.openshift.io/check-status=FAIL,compliance.openshift.io/check-severity=high'
Example output
NAME STATUS SEVERITY nist-moderate-modified-master-configure-crypto-policy FAIL high nist-moderate-modified-master-coreos-pti-kernel-argument FAIL high nist-moderate-modified-master-disable-ctrlaltdel-burstaction FAIL high nist-moderate-modified-master-disable-ctrlaltdel-reboot FAIL high nist-moderate-modified-master-enable-fips-mode FAIL high nist-moderate-modified-master-no-empty-passwords FAIL high nist-moderate-modified-master-selinux-state FAIL high nist-moderate-modified-worker-configure-crypto-policy FAIL high nist-moderate-modified-worker-coreos-pti-kernel-argument FAIL high nist-moderate-modified-worker-disable-ctrlaltdel-burstaction FAIL high nist-moderate-modified-worker-disable-ctrlaltdel-reboot FAIL high nist-moderate-modified-worker-enable-fips-mode FAIL high nist-moderate-modified-worker-no-empty-passwords FAIL high nist-moderate-modified-worker-selinux-state FAIL high ocp4-moderate-configure-network-policies-namespaces FAIL high ocp4-moderate-fips-mode-enabled-on-all-nodes FAIL high
List all failing checks that must be remediated manually:
$ oc get -n openshift-compliance compliancecheckresults \ -l 'compliance.openshift.io/check-status=FAIL,!compliance.openshift.io/automated-remediation'
The manual remediation steps are typically stored in the description
attribute in the ComplianceCheckResult
object.
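As a quick sketch, you could print that description for a single check; the check name is a placeholder:
$ oc get compliancecheckresults <check_name> -n openshift-compliance \
    -o jsonpath='{.description}'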
ComplianceCheckResult Status | Description |
---|---|
PASS | Compliance check ran to completion and passed. |
FAIL | Compliance check ran to completion and failed. |
INFO | Compliance check ran to completion and found something not severe enough to be considered an error. |
MANUAL | Compliance check does not have a way to automatically assess the success or failure and must be checked manually. |
INCONSISTENT | Compliance check reports different results from different sources, typically cluster nodes. |
ERROR | Compliance check ran, but could not complete properly. |
NOT-APPLICABLE | Compliance check did not run because it is not applicable or not selected. |
5.6.5.2. Reviewing a remediation
Review both the ComplianceRemediation
object and the ComplianceCheckResult
object that owns the remediation. The ComplianceCheckResult
object contains human-readable descriptions of what the check does and what the hardening is trying to prevent, as well as other metadata
like the severity and the associated security controls. The ComplianceRemediation
object represents a way to fix the problem described in the ComplianceCheckResult
. After the first scan, check for remediations with the state MissingDependencies
.
Below is an example of a check and a remediation called sysctl-net-ipv4-conf-all-accept-redirects
. This example is redacted to only show spec
and status
and omits metadata
:
spec: apply: false current: object: apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig spec: config: ignition: version: 3.2.0 storage: files: - path: /etc/sysctl.d/75-sysctl_net_ipv4_conf_all_accept_redirects.conf mode: 0644 contents: source: data:,net.ipv4.conf.all.accept_redirects%3D0 outdated: {} status: applicationState: NotApplied
The remediation payload is stored in the spec.current
attribute. The payload can be any Kubernetes object, but because this remediation was produced by a node scan, the remediation payload in the above example is a MachineConfig
object. For Platform scans, the remediation payload is often a different kind of an object (for example, a ConfigMap
or Secret
object), but typically applying that remediation is up to the administrator, because otherwise the Compliance Operator would have required a very broad set of permissions to manipulate any generic Kubernetes object. An example of remediating a Platform check is provided later in the text.
To see exactly what the remediation does when applied, the MachineConfig
object contents use the Ignition objects for the configuration. See the Ignition specification for further information about the format. In our example, the spec.config.storage.files[0].path
attribute specifies the file that is created by this remediation (/etc/sysctl.d/75-sysctl_net_ipv4_conf_all_accept_redirects.conf
) and the spec.config.storage.files[0].contents.source
attribute specifies the contents of that file.
The contents of the files are URL-encoded.
Use the following Python script to view the contents:
$ echo "net.ipv4.conf.all.accept_redirects%3D0" | python3 -c "import sys, urllib.parse; print(urllib.parse.unquote(''.join(sys.stdin.readlines())))"
Example output
net.ipv4.conf.all.accept_redirects=0
The Compliance Operator does not automatically resolve dependency issues that can occur between remediations. Users should perform a rescan after remediations are applied to ensure accurate results.
5.6.5.3. Applying remediation when using customized machine config pools
When you create a custom MachineConfigPool
, add a label to the MachineConfigPool
so that machineConfigPoolSelector
present in the KubeletConfig
can match the label with MachineConfigPool
.
Do not set protectKernelDefaults: false
in the KubeletConfig
file, because the MachineConfigPool
object might fail to unpause unexpectedly after the Compliance Operator finishes applying remediation.
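A minimal sketch of a KubeletConfig whose machineConfigPoolSelector matches the custom pool label follows; the pool name and the kubelet setting are placeholders:
apiVersion: machineconfiguration.openshift.io/v1
kind: KubeletConfig
metadata:
  name: custom-pool-kubelet
spec:
  machineConfigPoolSelector:
    matchLabels:
      pools.operator.machineconfiguration.openshift.io/<machine_config_pool_name>: ""  # must match the MCP label
  kubeletConfig:
    podsPerCore: 10   # placeholder kubelet setting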
Procedure
List the nodes.
$ oc get nodes -n openshift-compliance
Example output
NAME STATUS ROLES AGE VERSION ip-10-0-128-92.us-east-2.compute.internal Ready master 5h21m v1.30.3 ip-10-0-158-32.us-east-2.compute.internal Ready worker 5h17m v1.30.3 ip-10-0-166-81.us-east-2.compute.internal Ready worker 5h17m v1.30.3 ip-10-0-171-170.us-east-2.compute.internal Ready master 5h21m v1.30.3 ip-10-0-197-35.us-east-2.compute.internal Ready master 5h22m v1.30.3
Add a label to nodes.
$ oc -n openshift-compliance \ label node ip-10-0-166-81.us-east-2.compute.internal \ node-role.kubernetes.io/<machine_config_pool_name>=
Example output
node/ip-10-0-166-81.us-east-2.compute.internal labeled
Create custom
MachineConfigPool
CR.apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfigPool metadata: name: <machine_config_pool_name> labels: pools.operator.machineconfiguration.openshift.io/<machine_config_pool_name>: '' 1 spec: machineConfigSelector: matchExpressions: - {key: machineconfiguration.openshift.io/role, operator: In, values: [worker,<machine_config_pool_name>]} nodeSelector: matchLabels: node-role.kubernetes.io/<machine_config_pool_name>: ""
- 1
- The
labels
field defines the label name to add to the machine config pool (MCP).
Verify that the MCP was created successfully.
$ oc get mcp -w
5.6.5.4. Evaluating KubeletConfig rules against default configuration values
OpenShift Container Platform infrastructure might contain incomplete configuration files at run time, and nodes assume default configuration values for missing configuration options. Some configuration options can be passed as command line arguments. As a result, the Compliance Operator cannot verify if the configuration file on the node is complete because it might be missing options used in the rule checks.
To prevent false negative results where the default configuration value passes a check, the Compliance Operator uses the Node/Proxy API to fetch the configuration for each node in a node pool. All configuration options that are consistent across nodes in the node pool are then stored in a file that represents the configuration for all nodes within that node pool. This increases the accuracy of the scan results.
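As a sketch, the kubelet configuration that this API exposes for one node can be inspected directly; the node name is a placeholder:
$ oc get --raw /api/v1/nodes/<node_name>/proxy/configz | jq .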
No additional configuration changes are required to use this feature with default master
and worker
node pools configurations.
5.6.5.5. Scanning custom node pools
The Compliance Operator does not maintain a copy of each node pool configuration. The Compliance Operator aggregates consistent configuration options for all nodes within a single node pool into one copy of the configuration file. The Compliance Operator then uses the configuration file for a particular node pool to evaluate rules against nodes within that pool.
Procedure
Add the
example
role to theScanSetting
object that will be stored in theScanSettingBinding
CR:apiVersion: compliance.openshift.io/v1alpha1 kind: ScanSetting metadata: name: default namespace: openshift-compliance rawResultStorage: rotation: 3 size: 1Gi roles: - worker - master - example scanTolerations: - effect: NoSchedule key: node-role.kubernetes.io/master operator: Exists schedule: '0 1 * * *'
Create a scan that uses the
ScanSettingBinding
CR:apiVersion: compliance.openshift.io/v1alpha1 kind: ScanSettingBinding metadata: name: cis namespace: openshift-compliance profiles: - apiGroup: compliance.openshift.io/v1alpha1 kind: Profile name: ocp4-cis - apiGroup: compliance.openshift.io/v1alpha1 kind: Profile name: ocp4-cis-node settingsRef: apiGroup: compliance.openshift.io/v1alpha1 kind: ScanSetting name: default
Verification
The Platform KubeletConfig rules are checked through the
Node/Proxy
object. You can find those rules by running the following command:$ oc get rules -o json | jq '.items[] | select(.checkType == "Platform") | select(.metadata.name | contains("ocp4-kubelet-")) | .metadata.name'
5.6.5.6. Remediating KubeletConfig
sub pools
KubeletConfig
remediation labels can be applied to MachineConfigPool
sub-pools.
Procedure
Add a label to the sub-pool
MachineConfigPool
CR:$ oc label mcp <sub-pool-name> pools.operator.machineconfiguration.openshift.io/<sub-pool-name>=
5.6.5.7. Applying a remediation
The boolean attribute spec.apply
controls whether the remediation should be applied by the Compliance Operator. You can apply the remediation by setting the attribute to true
:
$ oc -n openshift-compliance \ patch complianceremediations/<scan-name>-sysctl-net-ipv4-conf-all-accept-redirects \ --patch '{"spec":{"apply":true}}' --type=merge
After the Compliance Operator processes the applied remediation, the status.ApplicationState
attribute changes to Applied, or to Error if the remediation is incorrect. When a machine config remediation is applied, that remediation along with all other applied remediations is rendered into a MachineConfig
object named 75-$scan-name-$suite-name
. That MachineConfig
object is subsequently rendered by the Machine Config Operator and finally applied to all the nodes in a machine config pool by an instance of the machine config daemon running on each node.
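As a quick check, you could read that state back after patching the remediation; the remediation name is a placeholder:
$ oc -n openshift-compliance get complianceremediations \
    <scan_name>-sysctl-net-ipv4-conf-all-accept-redirects \
    -o jsonpath='{.status.applicationState}'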
Note that when the Machine Config Operator applies a new MachineConfig
object to nodes in a pool, all the nodes belonging to the pool are rebooted. This might be inconvenient when applying multiple remediations, each of which re-renders the composite 75-$scan-name-$suite-name
MachineConfig
object. To prevent applying the remediation immediately, you can pause the machine config pool by setting the .spec.paused
attribute of a MachineConfigPool
object to true
.
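A minimal sketch of pausing and later resuming the worker pool while you apply several remediations; the pool name is an example:
$ oc patch mcp/worker --type merge -p '{"spec":{"paused":true}}'
$ oc patch mcp/worker --type merge -p '{"spec":{"paused":false}}'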
The Compliance Operator can apply remediations automatically. Set autoApplyRemediations: true
in the ScanSetting
top-level object.
Applying remediations automatically should only be done with careful consideration.
The Compliance Operator does not automatically resolve dependency issues that can occur between remediations. Users should perform a rescan after remediations are applied to ensure accurate results.
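If you do choose automatic application, a minimal sketch is to enable the flags on the default ScanSetting object; this mirrors the default-auto-apply settings shown earlier:
$ oc -n openshift-compliance patch scansettings default --type merge \
    -p '{"autoApplyRemediations":true,"autoUpdateRemediations":true}'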
5.6.5.8. Remediating a platform check manually
Checks for Platform scans typically have to be remediated manually by the administrator for two reasons:
- It is not always possible to automatically determine the value that must be set. One of the checks requires that a list of allowed registries is provided, but the scanner has no way of knowing which registries the organization wants to allow.
-
Different checks modify different API objects, requiring automated remediation to possess
root
or superuser access to modify objects in the cluster, which is not advised.
Procedure
The example below uses the
ocp4-ocp-allowed-registries-for-import
rule, which would fail on a default OpenShift Container Platform installation. Inspect the rule by running oc get rule.compliance/ocp4-ocp-allowed-registries-for-import -oyaml. The rule limits the registries that users are allowed to import images from by setting the allowedRegistriesForImport attribute. The warning attribute of the rule also shows which API object is checked, so you can modify that object to remediate the issue:$ oc edit image.config.openshift.io/cluster
Example output
apiVersion: config.openshift.io/v1 kind: Image metadata: annotations: release.openshift.io/create-only: "true" creationTimestamp: "2020-09-10T10:12:54Z" generation: 2 name: cluster resourceVersion: "363096" selfLink: /apis/config.openshift.io/v1/images/cluster uid: 2dcb614e-2f8a-4a23-ba9a-8e33cd0ff77e spec: allowedRegistriesForImport: - domainName: registry.redhat.io status: externalRegistryHostnames: - default-route-openshift-image-registry.apps.user-cluster-09-10-12-07.devcluster.openshift.com internalRegistryHostname: image-registry.openshift-image-registry.svc:5000
Re-run the scan:
$ oc -n openshift-compliance \ annotate compliancescans/rhcos4-e8-worker compliance.openshift.io/rescan=
5.6.5.9. Updating remediations
When a new version of compliance content is used, it might deliver a newer, different version of a remediation than the previous version. The Compliance Operator keeps the old version of the remediation applied. The OpenShift Container Platform administrator is also notified of the new version to review and apply. A ComplianceRemediation object that was applied earlier but then updated changes its status to Outdated. The outdated objects are labeled so that they can be searched for easily.
The previously applied remediation contents would then be stored in the spec.outdated
attribute of a ComplianceRemediation
object and the new updated contents would be stored in the spec.current
attribute. After updating the content to a newer version, the administrator then needs to review the remediation. As long as the spec.outdated
attribute exists, it would be used to render the resulting MachineConfig
object. After the spec.outdated
attribute is removed, the Compliance Operator re-renders the resulting MachineConfig
object, which causes the Operator to push the configuration to the nodes.
Procedure
Search for any outdated remediations:
$ oc -n openshift-compliance get complianceremediations \ -l complianceoperator.openshift.io/outdated-remediation=
Example output
NAME STATE workers-scan-no-empty-passwords Outdated
The currently applied remediation is stored in the
Outdated
attribute and the new, unapplied remediation is stored in theCurrent
attribute. If you are satisfied with the new version, remove theOutdated
field. If you want to keep the updated content, remove theCurrent
andOutdated
attributes.Apply the newer version of the remediation:
$ oc -n openshift-compliance patch complianceremediations workers-scan-no-empty-passwords \ --type json -p '[{"op":"remove", "path":/spec/outdated}]'
The remediation state will switch from
Outdated
toApplied
:$ oc get -n openshift-compliance complianceremediations workers-scan-no-empty-passwords
Example output
NAME STATE workers-scan-no-empty-passwords Applied
- The nodes will apply the newer remediation version and reboot.
The Compliance Operator does not automatically resolve dependency issues that can occur between remediations. Users should perform a rescan after remediations are applied to ensure accurate results.
5.6.5.10. Unapplying a remediation
It might be required to unapply a remediation that was previously applied.
Procedure
Set the
apply
flag tofalse
:$ oc -n openshift-compliance \ patch complianceremediations/rhcos4-moderate-worker-sysctl-net-ipv4-conf-all-accept-redirects \ --patch '{"spec":{"apply":false}}' --type=merge
The remediation status will change to
NotApplied
and the compositeMachineConfig
object would be re-rendered to not include the remediation.ImportantAll affected nodes with the remediation will be rebooted.
The Compliance Operator does not automatically resolve dependency issues that can occur between remediations. Users should perform a rescan after remediations are applied to ensure accurate results.
5.6.5.11. Removing a KubeletConfig remediation
KubeletConfig
remediations are included in node-level profiles. In order to remove a KubeletConfig remediation, you must manually remove it from the KubeletConfig
objects. This example demonstrates how to remove the compliance check for the one-rule-tp-node-master-kubelet-eviction-thresholds-set-hard-imagefs-available
remediation.
Procedure
Locate the
scan-name
and compliance check for theone-rule-tp-node-master-kubelet-eviction-thresholds-set-hard-imagefs-available
remediation:$ oc -n openshift-compliance get remediation \ one-rule-tp-node-master-kubelet-eviction-thresholds-set-hard-imagefs-available -o yaml
Example output
apiVersion: compliance.openshift.io/v1alpha1 kind: ComplianceRemediation metadata: annotations: compliance.openshift.io/xccdf-value-used: var-kubelet-evictionhard-imagefs-available creationTimestamp: "2022-01-05T19:52:27Z" generation: 1 labels: compliance.openshift.io/scan-name: one-rule-tp-node-master 1 compliance.openshift.io/suite: one-rule-ssb-node name: one-rule-tp-node-master-kubelet-eviction-thresholds-set-hard-imagefs-available namespace: openshift-compliance ownerReferences: - apiVersion: compliance.openshift.io/v1alpha1 blockOwnerDeletion: true controller: true kind: ComplianceCheckResult name: one-rule-tp-node-master-kubelet-eviction-thresholds-set-hard-imagefs-available uid: fe8e1577-9060-4c59-95b2-3e2c51709adc resourceVersion: "84820" uid: 5339d21a-24d7-40cb-84d2-7a2ebb015355 spec: apply: true current: object: apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig spec: kubeletConfig: evictionHard: imagefs.available: 10% 2 outdated: {} type: Configuration status: applicationState: Applied
NoteIf the remediation invokes an
evictionHard
kubelet configuration, you must specify all of theevictionHard
parameters:memory.available
,nodefs.available
,nodefs.inodesFree
,imagefs.available
, andimagefs.inodesFree
. If you do not specify all parameters, only the specified parameters are applied and the remediation will not function properly.Remove the remediation:
Set
apply
to false for the remediation object:$ oc -n openshift-compliance patch \ complianceremediations/one-rule-tp-node-master-kubelet-eviction-thresholds-set-hard-imagefs-available \ -p '{"spec":{"apply":false}}' --type=merge
Using the
scan-name
, find theKubeletConfig
object that the remediation was applied to:$ oc -n openshift-compliance get kubeletconfig \ --selector compliance.openshift.io/scan-name=one-rule-tp-node-master
Example output
NAME AGE compliance-operator-kubelet-master 2m34s
Manually remove the remediation,
imagefs.available: 10%
, from theKubeletConfig
object:$ oc edit -n openshift-compliance KubeletConfig compliance-operator-kubelet-master
ImportantAll affected nodes with the remediation will be rebooted.
You must also exclude the rule from any scheduled scans in your tailored profiles that auto-apply the remediation; otherwise, the remediation will be re-applied during the next scheduled scan.
5.6.5.12. Inconsistent ComplianceScan
The ScanSetting
object lists the node roles that the compliance scans generated from the ScanSetting
or ScanSettingBinding
objects would scan. Each node role usually maps to a machine config pool.
It is expected that all machines in a machine config pool are identical and all scan results from the nodes in a pool should be identical.
If some of the results are different from others, the Compliance Operator flags a ComplianceCheckResult
object where some of the nodes will report as INCONSISTENT
. All ComplianceCheckResult
objects are also labeled with compliance.openshift.io/inconsistent-check
.
Because the number of machines in a pool might be quite large, the Compliance Operator attempts to find the most common state and list the nodes that differ from the common state. The most common state is stored in the compliance.openshift.io/most-common-status
annotation and the annotation compliance.openshift.io/inconsistent-source
contains pairs of hostname:status
of check statuses that differ from the most common status. If no common state can be found, all the hostname:status
pairs are listed in the compliance.openshift.io/inconsistent-source annotation
.
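As a sketch, the affected results can be listed by using the label described above:
$ oc get compliancecheckresults -n openshift-compliance \
    -l compliance.openshift.io/inconsistent-check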
If possible, a remediation is still created so that the cluster can converge to a compliant status. However, this might not always be possible and correcting the difference between nodes must be done manually. To get a consistent result, re-run the compliance scan by annotating it with the compliance.openshift.io/rescan=
option:
$ oc -n openshift-compliance \ annotate compliancescans/rhcos4-e8-worker compliance.openshift.io/rescan=
5.6.5.13. Additional resources
5.6.6. Performing advanced Compliance Operator tasks
The Compliance Operator includes options for advanced users for the purpose of debugging or integration with existing tooling.
5.6.6.1. Using the ComplianceSuite and ComplianceScan objects directly
While it is recommended that users take advantage of the ScanSetting
and ScanSettingBinding
objects to define the suites and scans, there are valid use cases to define the ComplianceSuite
objects directly:
-
Specifying only a single rule to scan. This can be useful for debugging together with the
debug: true
attribute which increases the OpenSCAP scanner verbosity, as the debug mode tends to get quite verbose otherwise. Limiting the test to one rule helps to lower the amount of debug information. - Providing a custom nodeSelector. In order for a remediation to be applicable, the nodeSelector must match a pool.
- Pointing the Scan to a bespoke config map with a tailoring file.
- For testing or development when the overhead of parsing profiles from bundles is not required.
The following example shows a ComplianceSuite
that scans the worker machines with only a single rule:
apiVersion: compliance.openshift.io/v1alpha1 kind: ComplianceSuite metadata: name: workers-compliancesuite spec: scans: - name: workers-scan profile: xccdf_org.ssgproject.content_profile_moderate content: ssg-rhcos4-ds.xml contentImage: registry.redhat.io/compliance/openshift-compliance-content-rhel8@sha256:45dc... debug: true rule: xccdf_org.ssgproject.content_rule_no_direct_root_logins nodeSelector: node-role.kubernetes.io/worker: ""
The ComplianceSuite
object and the ComplianceScan
objects referred to above specify several attributes in a format that OpenSCAP expects.
To find out the profile, content, or rule values, you can start by creating a similar Suite from ScanSetting
and ScanSettingBinding
or inspect the objects parsed from the ProfileBundle
objects like rules or profiles. Those objects contain the xccdf_org
identifiers you can use to refer to them from a ComplianceSuite
.
5.6.6.2. Setting PriorityClass
for ScanSetting
scans
In large scale environments, the default PriorityClass
object can be too low to guarantee Pods execute scans on time. For clusters that must maintain compliance or guarantee automated scanning, it is recommended to set the PriorityClass
variable to ensure the Compliance Operator is always given priority in resource constrained situations.
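The referenced PriorityClass must exist in the cluster. A minimal sketch of such an object, using the name from the example below, might look like the following; the priority value is illustrative:
apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  name: compliance-high-priority
value: 1000000                     # illustrative priority; higher values take precedence
globalDefault: false
description: Priority class for Compliance Operator scan pods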
Procedure
Set the
PriorityClass
variable:apiVersion: compliance.openshift.io/v1alpha1 strictNodeScan: true metadata: name: default namespace: openshift-compliance priorityClass: compliance-high-priority 1 kind: ScanSetting showNotApplicable: false rawResultStorage: nodeSelector: node-role.kubernetes.io/master: '' pvAccessModes: - ReadWriteOnce rotation: 3 size: 1Gi tolerations: - effect: NoSchedule key: node-role.kubernetes.io/master operator: Exists - effect: NoExecute key: node.kubernetes.io/not-ready operator: Exists tolerationSeconds: 300 - effect: NoExecute key: node.kubernetes.io/unreachable operator: Exists tolerationSeconds: 300 - effect: NoSchedule key: node.kubernetes.io/memory-pressure operator: Exists schedule: 0 1 * * * roles: - master - worker scanTolerations: - operator: Exists
- 1
- If the
PriorityClass
referenced in theScanSetting
cannot be found, the Operator will leave thePriorityClass
empty, issue a warning, and continue scheduling scans without aPriorityClass
.
5.6.6.3. Using raw tailored profiles
While the TailoredProfile
CR enables the most common tailoring operations, the XCCDF standard allows even more flexibility in tailoring OpenSCAP profiles. In addition, if your organization has been using OpenSCAP previously, you might have an existing XCCDF tailoring file that you can reuse.
The ComplianceSuite
object contains an optional TailoringConfigMap
attribute that you can point to a custom tailoring file. The value of the TailoringConfigMap
attribute is the name of a config map, which must contain a key called tailoring.xml
and the value of this key is the tailoring contents.
Procedure
Create the
ConfigMap
object from a file:$ oc -n openshift-compliance \ create configmap nist-moderate-modified \ --from-file=tailoring.xml=/path/to/the/tailoringFile.xml
Reference the tailoring file in a scan that belongs to a suite:
apiVersion: compliance.openshift.io/v1alpha1 kind: ComplianceSuite metadata: name: workers-compliancesuite spec: debug: true scans: - name: workers-scan profile: xccdf_org.ssgproject.content_profile_moderate content: ssg-rhcos4-ds.xml contentImage: registry.redhat.io/compliance/openshift-compliance-content-rhel8@sha256:45dc... debug: true tailoringConfigMap: name: nist-moderate-modified nodeSelector: node-role.kubernetes.io/worker: ""
5.6.6.4. Performing a rescan
Typically you will want to re-run a scan on a defined schedule, like every Monday or daily. It can also be useful to re-run a scan once after fixing a problem on a node. To perform a single scan, annotate the scan with the compliance.openshift.io/rescan=
option:
$ oc -n openshift-compliance \ annotate compliancescans/rhcos4-e8-worker compliance.openshift.io/rescan=
A rescan generates four additional MachineConfig (mc) objects for the rhcos-moderate profile:
$ oc get mc
Example output
75-worker-scan-chronyd-or-ntpd-specify-remote-server 75-worker-scan-configure-usbguard-auditbackend 75-worker-scan-service-usbguard-enabled 75-worker-scan-usbguard-allow-hid-and-hub
When the scan setting default-auto-apply
label is applied, remediations are applied automatically and outdated remediations automatically update. If there are remediations that were not applied due to dependencies, or remediations that had been outdated, rescanning applies the remediations and might trigger a reboot. Only remediations that use MachineConfig
objects trigger reboots. If there are no updates or dependencies to be applied, no reboot occurs.
5.6.6.5. Setting custom storage size for results
While the custom resources such as ComplianceCheckResult
represent an aggregated result of one check across all scanned nodes, it can be useful to review the raw results as produced by the scanner. The raw results are produced in the ARF format and can be large (tens of megabytes per node), so it is impractical to store them in a Kubernetes resource backed by the etcd
key-value store. Instead, every scan creates a persistent volume (PV) which defaults to 1GB size. Depending on your environment, you may want to increase the PV size accordingly. This is done using the rawResultStorage.size
attribute that is exposed in both the ScanSetting
and ComplianceScan
resources.
A related parameter is rawResultStorage.rotation
which controls how many scans are retained in the PV before the older scans are rotated. The default value is 3; setting the rotation policy to 0 disables rotation. Given the default rotation policy and an estimate of 100MB per raw ARF scan report, you can calculate the right PV size for your environment.
5.6.6.5.1. Using custom result storage values
Because OpenShift Container Platform can be deployed in a variety of public clouds or on bare metal, the Compliance Operator cannot determine available storage configurations. By default, the Compliance Operator will try to create the PV for storing results using the default storage class of the cluster, but a custom storage class can be configured using the rawResultStorage.storageClassName
attribute.
If your cluster does not specify a default storage class, this attribute must be set.
Configure the ScanSetting
custom resource to use a standard storage class and create persistent volumes that are 10GB in size and keep the last 10 results:
Example ScanSetting
CR
apiVersion: compliance.openshift.io/v1alpha1 kind: ScanSetting metadata: name: default namespace: openshift-compliance rawResultStorage: storageClassName: standard rotation: 10 size: 10Gi roles: - worker - master scanTolerations: - effect: NoSchedule key: node-role.kubernetes.io/master operator: Exists schedule: '0 1 * * *'
5.6.6.6. Applying remediations generated by suite scans
Although you can use the autoApplyRemediations
boolean parameter in a ComplianceSuite
object, you can alternatively annotate the object with compliance.openshift.io/apply-remediations
. This allows the Operator to apply all of the created remediations.
Procedure
-
Apply the
compliance.openshift.io/apply-remediations
annotation by running:
$ oc -n openshift-compliance \ annotate compliancesuites/workers-compliancesuite compliance.openshift.io/apply-remediations=
5.6.6.7. Automatically update remediations
In some cases, a scan with newer content might mark remediations as OUTDATED
. As an administrator, you can apply the compliance.openshift.io/remove-outdated
annotation to apply new remediations and remove the outdated ones.
Procedure
-
Apply the
compliance.openshift.io/remove-outdated
annotation:
$ oc -n openshift-compliance \ annotate compliancesuites/workers-compliancesuite compliance.openshift.io/remove-outdated=
Alternatively, set the autoUpdateRemediations
flag in a ScanSetting
or ComplianceSuite
object to update the remediations automatically.
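A minimal sketch of enabling the flag on an existing suite, assuming the flag sits under spec of the ComplianceSuite as other suite settings do; the suite name matches the earlier example:
$ oc -n openshift-compliance patch compliancesuites/workers-compliancesuite \
    --type merge -p '{"spec":{"autoUpdateRemediations":true}}'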
5.6.6.8. Creating a custom SCC for the Compliance Operator
In some environments, you must create a custom Security Context Constraints (SCC) file to ensure the correct permissions are available to the Compliance Operator api-resource-collector
.
Prerequisites
-
You must have
admin
privileges.
Procedure
Define the SCC in a YAML file named
restricted-adjusted-compliance.yaml
:SecurityContextConstraints
object definitionallowHostDirVolumePlugin: false allowHostIPC: false allowHostNetwork: false allowHostPID: false allowHostPorts: false allowPrivilegeEscalation: true allowPrivilegedContainer: false allowedCapabilities: null apiVersion: security.openshift.io/v1 defaultAddCapabilities: null fsGroup: type: MustRunAs kind: SecurityContextConstraints metadata: name: restricted-adjusted-compliance priority: 30 1 readOnlyRootFilesystem: false requiredDropCapabilities: - KILL - SETUID - SETGID - MKNOD runAsUser: type: MustRunAsRange seLinuxContext: type: MustRunAs supplementalGroups: type: RunAsAny users: - system:serviceaccount:openshift-compliance:api-resource-collector 2 volumes: - configMap - downwardAPI - emptyDir - persistentVolumeClaim - projected - secret
Create the SCC:
$ oc create -n openshift-compliance -f restricted-adjusted-compliance.yaml
Example output
securitycontextconstraints.security.openshift.io/restricted-adjusted-compliance created
Verification
Verify the SCC was created:
$ oc get -n openshift-compliance scc restricted-adjusted-compliance
Example output
NAME PRIV CAPS SELINUX RUNASUSER FSGROUP SUPGROUP PRIORITY READONLYROOTFS VOLUMES restricted-adjusted-compliance false <no value> MustRunAs MustRunAsRange MustRunAs RunAsAny 30 false ["configMap","downwardAPI","emptyDir","persistentVolumeClaim","projected","secret"]
5.6.6.9. Additional resources
5.6.7. Troubleshooting Compliance Operator scans
This section describes how to troubleshoot the Compliance Operator. The information can be useful either to diagnose a problem or provide information in a bug report. Some general tips:
The Compliance Operator emits Kubernetes events when something important happens. You can either view all events in the cluster using the command:
$ oc get events -n openshift-compliance
Or view events for an object like a scan using the command:
$ oc describe -n openshift-compliance compliancescan/cis-compliance
The Compliance Operator consists of several controllers, approximately one per API object. It could be useful to filter only those controllers that correspond to the API object having issues. If a
ComplianceRemediation
cannot be applied, view the messages from theremediationctrl
controller. You can filter the messages from a single controller by parsing withjq
:$ oc -n openshift-compliance logs compliance-operator-775d7bddbd-gj58f \ | jq -c 'select(.logger == "profilebundlectrl")'
The timestamps are logged as seconds since UNIX epoch in UTC. To convert them to a human-readable date, use
date -d @timestamp --utc
, for example:$ date -d @1596184628.955853 --utc
-
Many custom resources, most importantly
ComplianceSuite
andScanSetting
, allow thedebug
option to be set. Enabling this option increases verbosity of the OpenSCAP scanner pods, as well as some other helper pods. -
If a single rule is passing or failing unexpectedly, it could be helpful to run a single scan or a suite with only that rule to find the rule ID from the corresponding
ComplianceCheckResult
object and use it as therule
attribute value in aScan
CR. Then, together with thedebug
option enabled, thescanner
container logs in the scanner pod would show the raw OpenSCAP logs.
5.6.7.1. Anatomy of a scan
The following sections outline the components and stages of Compliance Operator scans.
5.6.7.1.1. Compliance sources
The compliance content is stored in Profile
objects that are generated from a ProfileBundle
object. The Compliance Operator creates a ProfileBundle
object for the cluster and another for the cluster nodes.
$ oc get -n openshift-compliance profilebundle.compliance
$ oc get -n openshift-compliance profile.compliance
The ProfileBundle
objects are processed by deployments labeled with the Bundle
name. To troubleshoot an issue with the Bundle
, you can find the deployment and view logs of the pods in a deployment:
$ oc logs -n openshift-compliance -lprofile-bundle=ocp4 -c profileparser
$ oc get -n openshift-compliance deployments,pods -lprofile-bundle=ocp4
$ oc logs -n openshift-compliance pods/<pod-name>
$ oc describe -n openshift-compliance pod/<pod-name> -c profileparser
5.6.7.1.2. The ScanSetting and ScanSettingBinding objects lifecycle and debugging
With valid compliance content sources, the high-level ScanSetting
and ScanSettingBinding
objects can be used to generate ComplianceSuite
and ComplianceScan
objects:
apiVersion: compliance.openshift.io/v1alpha1 kind: ScanSetting metadata: name: my-companys-constraints debug: true # For each role, a separate scan will be created pointing # to a node-role specified in roles roles: - worker --- apiVersion: compliance.openshift.io/v1alpha1 kind: ScanSettingBinding metadata: name: my-companys-compliance-requirements profiles: # Node checks - name: rhcos4-e8 kind: Profile apiGroup: compliance.openshift.io/v1alpha1 # Cluster checks - name: ocp4-e8 kind: Profile apiGroup: compliance.openshift.io/v1alpha1 settingsRef: name: my-companys-constraints kind: ScanSetting apiGroup: compliance.openshift.io/v1alpha1
Both ScanSetting
and ScanSettingBinding
objects are handled by the same controller tagged with logger=scansettingbindingctrl
. These objects have no status. Any issues are communicated in the form of events:
Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal SuiteCreated 9m52s scansettingbindingctrl ComplianceSuite openshift-compliance/my-companys-compliance-requirements created
Now a ComplianceSuite
object is created. The flow continues to reconcile the newly created ComplianceSuite
.
5.6.7.1.3. ComplianceSuite custom resource lifecycle and debugging
The ComplianceSuite
CR is a wrapper around ComplianceScan
CRs. The ComplianceSuite
CR is handled by the controller tagged with logger=suitectrl
. This controller handles creating scans from a suite, reconciling and aggregating individual Scan statuses into a single Suite status. If a suite is set to execute periodically, the suitectrl
also handles creating a CronJob
CR that re-runs the scans in the suite after the initial run is done:
$ oc get cronjobs
Example output
NAME SCHEDULE SUSPEND ACTIVE LAST SCHEDULE AGE <cron_name> 0 1 * * * False 0 <none> 151m
For the most important issues, events are emitted. View them with oc describe compliancesuites/<name>
. The Suite
objects also have a Status
subresource that is updated when any of Scan
objects that belong to this suite update their Status
subresource. After all expected scans are created, control is passed to the scan controller.
5.6.7.1.4. ComplianceScan custom resource lifecycle and debugging
The ComplianceScan
CRs are handled by the scanctrl
controller. This is also where the actual scans happen and the scan results are created. Each scan goes through several phases:
5.6.7.1.4.1. Pending phase
The scan is validated for correctness in this phase. If some parameters like storage size are invalid, the scan transitions to DONE with the ERROR result; otherwise, it proceeds to the Launching phase.
5.6.7.1.4.2. Launching phase
In this phase, several config maps are created that contain either the environment for the scanner pods or the script that the scanner pods will evaluate. List the config maps:
$ oc -n openshift-compliance get cm \ -l compliance.openshift.io/scan-name=rhcos4-e8-worker,complianceoperator.openshift.io/scan-script=
These config maps will be used by the scanner pods. If you ever need to modify the scanner behavior, change the scanner debug level, or print the raw results, modifying the config maps is the way to go. Afterwards, a persistent volume claim is created per scan to store the raw ARF results:
$ oc get pvc -n openshift-compliance -lcompliance.openshift.io/scan-name=rhcos4-e8-worker
The PVCs are mounted by a per-scan ResultServer
deployment. A ResultServer
is a simple HTTP server to which the individual scanner pods upload the full ARF results. Each server can run on a different node. The full ARF results might be very large and you cannot presume that it would be possible to create a volume that could be mounted from multiple nodes at the same time. After the scan is finished, the ResultServer
deployment is scaled down. The PVC with the raw results can be mounted from another custom pod and the results can be fetched or inspected. The traffic between the scanner pods and the ResultServer
is protected by mutual TLS protocols.
Finally, the scanner pods are launched in this phase; one scanner pod for a Platform
scan instance and one scanner pod per matching node for a node
scan instance. The per-node pods are labeled with the node name. Each pod is always labeled with the ComplianceScan
name:
$ oc get pods -lcompliance.openshift.io/scan-name=rhcos4-e8-worker,workload=scanner --show-labels
Example output
NAME                                                               READY   STATUS      RESTARTS   AGE   LABELS
rhcos4-e8-worker-ip-10-0-169-90.eu-north-1.compute.internal-pod   0/2     Completed   0          39m   compliance.openshift.io/scan-name=rhcos4-e8-worker,targetNode=ip-10-0-169-90.eu-north-1.compute.internal,workload=scanner
The scan then proceeds to the Running phase.
5.6.7.1.4.3. Running phase
The running phase waits until the scanner pods finish. The following terms and processes are in use in the running phase:
- init container: There is one init container called content-container. It runs the contentImage container and executes a single command that copies the contentFile to the /content directory shared with the other containers in this pod.
- scanner: This container runs the scan. For node scans, the container mounts the node filesystem as /host and mounts the content delivered by the init container. The container also mounts the entrypoint ConfigMap created in the Launching phase and executes it. The default script in the entrypoint ConfigMap executes OpenSCAP and stores the result files in the /results directory shared between the pod's containers. Logs from this pod can be viewed to determine what the OpenSCAP scanner checked; see the log command after this list. More verbose output can be viewed with the debug flag.
- logcollector: The logcollector container waits until the scanner container finishes. Then, it uploads the full ARF results to the ResultServer and separately uploads the XCCDF results, along with the scan result and OpenSCAP result code, as a ConfigMap. These result config maps are labeled with the scan name (compliance.openshift.io/scan-name=rhcos4-e8-worker):
$ oc describe cm/rhcos4-e8-worker-ip-10-0-169-90.eu-north-1.compute.internal-pod
Example output
Name:         rhcos4-e8-worker-ip-10-0-169-90.eu-north-1.compute.internal-pod
Namespace:    openshift-compliance
Labels:       compliance.openshift.io/scan-name-scan=rhcos4-e8-worker
              complianceoperator.openshift.io/scan-result=
Annotations:  compliance-remediations/processed:
              compliance.openshift.io/scan-error-msg:
              compliance.openshift.io/scan-result: NON-COMPLIANT
              OpenSCAP-scan-result/node: ip-10-0-169-90.eu-north-1.compute.internal

Data
====
exit-code:
----
2
results:
----
<?xml version="1.0" encoding="UTF-8"?>
...
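As referenced in the scanner item above, you can see what the OpenSCAP scanner checked by viewing the scanner container logs. A minimal sketch, assuming the container is named scanner and using a placeholder pod name:
$ oc -n openshift-compliance logs <scanner_pod_name> -c scanner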
Scanner pods for Platform
scans are similar, except:
- There is one extra init container called api-resource-collector that reads the OpenSCAP content provided by the content-container init container, figures out which API resources the content needs to examine, and stores those API resources in a shared directory where the scanner container reads them from.
- The scanner container does not need to mount the host file system.
When the scanner pods are done, the scans move on to the Aggregating phase.
5.6.7.1.4.4. Aggregating phase
In the aggregating phase, the scan controller spawns yet another pod called the aggregator pod. Its purpose is to take the result ConfigMap
objects, read the results, and create the corresponding Kubernetes object for each check result. If the check failure can be automatically remediated, a ComplianceRemediation
object is created. To provide human-readable metadata for the checks and remediations, the aggregator pod also mounts the OpenSCAP content using an init container.
When a config map is processed by an aggregator pod, it is labeled with the compliance-remediations/processed
label. The results of this phase are ComplianceCheckResult
objects:
$ oc get compliancecheckresults -lcompliance.openshift.io/scan-name=rhcos4-e8-worker
Example output
NAME                                                  STATUS   SEVERITY
rhcos4-e8-worker-accounts-no-uid-except-zero          PASS     high
rhcos4-e8-worker-audit-rules-dac-modification-chmod   FAIL     medium
and ComplianceRemediation
objects:
$ oc get complianceremediations -lcompliance.openshift.io/scan-name=rhcos4-e8-worker
Example output
NAME                                                  STATE
rhcos4-e8-worker-audit-rules-dac-modification-chmod   NotApplied
rhcos4-e8-worker-audit-rules-dac-modification-chown   NotApplied
rhcos4-e8-worker-audit-rules-execution-chcon          NotApplied
rhcos4-e8-worker-audit-rules-execution-restorecon     NotApplied
rhcos4-e8-worker-audit-rules-execution-semanage       NotApplied
rhcos4-e8-worker-audit-rules-execution-setfiles       NotApplied
After these CRs are created, the aggregator pod exits and the scan moves on to the Done phase.
5.6.7.1.4.5. Done phase
In the final scan phase, the scan resources are cleaned up if needed and the ResultServer
deployment is either scaled down (if the scan was one-time) or deleted if the scan is continuous; the next scan instance would then recreate the deployment.
It is also possible to trigger a re-run of a scan in the Done phase by annotating it:
$ oc -n openshift-compliance \
    annotate compliancescans/rhcos4-e8-worker compliance.openshift.io/rescan=
After the scan reaches the Done phase, nothing else happens on its own unless the remediations are set to be applied automatically with autoApplyRemediations: true
. The OpenShift Container Platform administrator would now review the remediations and apply them as needed. If the remediations are set to be applied automatically, the ComplianceSuite
controller takes over in the Done phase, pauses the machine config pool to which the scan maps, and applies all the remediations in one go. If a remediation is applied, the ComplianceRemediation
controller takes over.
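Whether remediations are applied automatically is typically controlled on the ScanSetting object. For example, the default-auto-apply ScanSetting object that ships with the Compliance Operator (listed later in this chapter) is intended for this automatic mode; as a sketch, you can inspect its settings, including autoApplyRemediations, with:
$ oc -n openshift-compliance get scansettings default-auto-apply -o yaml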
5.6.7.1.5. ComplianceRemediation controller lifecycle and debugging
The example scan has reported some findings. One of the remediations can be enabled by toggling its apply
attribute to true
:
$ oc patch complianceremediations/rhcos4-e8-worker-audit-rules-dac-modification-chmod --patch '{"spec":{"apply":true}}' --type=merge
The ComplianceRemediation
controller (logger=remediationctrl
) reconciles the modified object. The result of the reconciliation is a change of status of the reconciled remediation object, and also a change of the rendered per-suite MachineConfig
object that contains all the applied remediations.
The MachineConfig
object always begins with 75-
and is named after the scan and the suite:
$ oc get mc | grep 75-
Example output
75-rhcos4-e8-worker-my-companys-compliance-requirements 3.2.0 2m46s
The remediations the mc
currently consists of are listed in the machine config’s annotations:
$ oc describe mc/75-rhcos4-e8-worker-my-companys-compliance-requirements
Example output
Name:         75-rhcos4-e8-worker-my-companys-compliance-requirements
Labels:       machineconfiguration.openshift.io/role=worker
Annotations:  remediation/rhcos4-e8-worker-audit-rules-dac-modification-chmod:
The ComplianceRemediation
controller’s algorithm works like this:
- All currently applied remediations are read into an initial remediation set.
- If the reconciled remediation is supposed to be applied, it is added to the set.
- A MachineConfig object is rendered from the set and annotated with the names of the remediations in the set. If the set is empty (the last remediation was unapplied), the rendered MachineConfig object is removed.
- If and only if the rendered machine config is different from the one already applied in the cluster, the applied MC is updated (or created, or deleted).
- Creating or modifying a MachineConfig object triggers a reboot of nodes that match the machineconfiguration.openshift.io/role label. See the Machine Config Operator documentation for more details.
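A previously applied remediation can also be unapplied by setting its apply attribute back to false, using the same patch pattern shown earlier:
$ oc -n openshift-compliance patch complianceremediations/rhcos4-e8-worker-audit-rules-dac-modification-chmod --patch '{"spec":{"apply":false}}' --type=merge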
The remediation loop ends once the rendered machine config is updated, if needed, and the reconciled remediation object status is updated. In our case, applying the remediation would trigger a reboot. After the reboot, annotate the scan to re-run it:
$ oc -n openshift-compliance \
    annotate compliancescans/rhcos4-e8-worker compliance.openshift.io/rescan=
The scan will run and finish. Check for the remediation to pass:
$ oc -n openshift-compliance \
    get compliancecheckresults/rhcos4-e8-worker-audit-rules-dac-modification-chmod
Example output
NAME                                                  STATUS   SEVERITY
rhcos4-e8-worker-audit-rules-dac-modification-chmod   PASS     medium
5.6.7.1.6. Useful labels
Each pod that is spawned by the Compliance Operator is labeled specifically with the scan it belongs to and the work it does. The scan identifier is labeled with the compliance.openshift.io/scan-name
label. The workload identifier is labeled with the workload
label.
The Compliance Operator schedules the following workloads:
- scanner: Performs the compliance scan.
- resultserver: Stores the raw results for the compliance scan.
- aggregator: Aggregates the results, detects inconsistencies and outputs result objects (checkresults and remediations).
- suitererunner: Tags a suite to be re-run (when a schedule is set).
- profileparser: Parses a datastream and creates the appropriate profiles, rules and variables.
When you need to debug or view logs for a certain workload, run:
$ oc logs -l workload=<workload_name> -c <container_name>
5.6.7.2. Increasing Compliance Operator resource limits
In some cases, the Compliance Operator might require more memory than the default limits allow. The best way to mitigate this issue is to set custom resource limits.
To increase the default memory and CPU limits of scanner pods, see `ScanSetting` Custom resource.
Procedure
To increase the Operator’s memory limits to 500 Mi, create the following patch file named
co-memlimit-patch.yaml
:
spec:
  config:
    resources:
      limits:
        memory: 500Mi
Apply the patch file:
$ oc patch sub compliance-operator -n openshift-compliance --patch-file co-memlimit-patch.yaml --type=merge
5.6.7.3. Configuring Operator resource constraints
The resources
field defines resource constraints for all the containers in the Pod created by the Operator Lifecycle Manager (OLM).
Resource constraints applied in this process overwrite the existing resource constraints.
Procedure
Inject a request of 0.25 cpu and 64 Mi of memory, and a limit of 0.5 cpu and 128 Mi of memory in each container by editing the
Subscription
object:
kind: Subscription
metadata:
  name: compliance-operator
  namespace: openshift-compliance
spec:
  package: package-name
  channel: stable
  config:
    resources:
      requests:
        memory: "64Mi"
        cpu: "250m"
      limits:
        memory: "128Mi"
        cpu: "500m"
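One way to make this change is to edit the Subscription object in place; the object name and namespace below match the example above:
$ oc -n openshift-compliance edit subscription compliance-operator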
5.6.7.4. Configuring ScanSetting resources
When using the Compliance Operator in a cluster that contains more than 500 MachineConfigs, the ocp4-pci-dss-api-checks-pod
pod may pause in the init
phase when performing a Platform
scan.
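As a quick, informal way to check how many MachineConfig objects exist in the cluster, you can count them (this check is illustrative and not part of the documented procedure):
$ oc get machineconfigs --no-headers | wc -l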
Resource constraints applied in this process overwrite the existing resource constraints.
Procedure
Confirm the
ocp4-pci-dss-api-checks-pod
pod is stuck in theInit:OOMKilled
status:$ oc get pod ocp4-pci-dss-api-checks-pod -w
Example output
NAME                          READY   STATUS           RESTARTS        AGE
ocp4-pci-dss-api-checks-pod   0/2     Init:1/2         8 (5m56s ago)   25m
ocp4-pci-dss-api-checks-pod   0/2     Init:OOMKilled   8 (6m19s ago)   26m
Edit the
scanLimits
attribute in theScanSetting
CR to increase the available memory for theocp4-pci-dss-api-checks-pod
pod:
timeout: 30m
strictNodeScan: true
metadata:
  name: default
  namespace: openshift-compliance
kind: ScanSetting
showNotApplicable: false
rawResultStorage:
  nodeSelector:
    node-role.kubernetes.io/master: ''
  pvAccessModes:
    - ReadWriteOnce
  rotation: 3
  size: 1Gi
  tolerations:
    - effect: NoSchedule
      key: node-role.kubernetes.io/master
      operator: Exists
    - effect: NoExecute
      key: node.kubernetes.io/not-ready
      operator: Exists
      tolerationSeconds: 300
    - effect: NoExecute
      key: node.kubernetes.io/unreachable
      operator: Exists
      tolerationSeconds: 300
    - effect: NoSchedule
      key: node.kubernetes.io/memory-pressure
      operator: Exists
schedule: 0 1 * * *
roles:
  - master
  - worker
apiVersion: compliance.openshift.io/v1alpha1
maxRetryOnTimeout: 3
scanTolerations:
  - operator: Exists
scanLimits:
  memory: 1024Mi 1
1 - The default setting is 500Mi.
Apply the
ScanSetting
CR to your cluster:$ oc apply -f scansetting.yaml
5.6.7.5. Configuring ScanSetting timeout
The ScanSetting object has a timeout option that can be specified as a duration string, such as 1h30m. If the scan does not finish within the specified timeout, the scan is reattempted until the maxRetryOnTimeout
limit is reached.
Procedure
To set a
timeout
andmaxRetryOnTimeout
in ScanSetting, modify an existingScanSetting
object:
apiVersion: compliance.openshift.io/v1alpha1
kind: ScanSetting
metadata:
  name: default
  namespace: openshift-compliance
rawResultStorage:
  rotation: 3
  size: 1Gi
roles:
  - worker
  - master
scanTolerations:
  - effect: NoSchedule
    key: node-role.kubernetes.io/master
    operator: Exists
schedule: '0 1 * * *'
timeout: '10m0s'
maxRetryOnTimeout: 3
5.6.7.6. Getting support
If you experience difficulty with a procedure described in this documentation, or with OpenShift Container Platform in general, visit the Red Hat Customer Portal.
From the Customer Portal, you can:
- Search or browse through the Red Hat Knowledgebase of articles and solutions relating to Red Hat products.
- Submit a support case to Red Hat Support.
- Access other product documentation.
To identify issues with your cluster, you can use Insights in OpenShift Cluster Manager. Insights provides details about issues and, if available, information on how to solve a problem.
If you have a suggestion for improving this documentation or have found an error, submit a Jira issue for the most relevant documentation component. Please provide specific details, such as the section name and OpenShift Container Platform version.
5.6.8. Using the oc-compliance plugin
Although the Compliance Operator automates many of the checks and remediations for the cluster, the full process of bringing a cluster into compliance often requires administrator interaction with the Compliance Operator API and other components. The oc-compliance
plugin makes the process easier.
5.6.8.1. Installing the oc-compliance plugin
Procedure
Extract the
oc-compliance
image to get theoc-compliance
binary:$ podman run --rm -v ~/.local/bin:/mnt/out:Z registry.redhat.io/compliance/oc-compliance-rhel8:stable /bin/cp /usr/bin/oc-compliance /mnt/out/
Example output
W0611 20:35:46.486903 11354 manifest.go:440] Chose linux/amd64 manifest from the manifest list.
You can now run
oc-compliance
.
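As a quick check that the plugin is discoverable (assuming the output directory, ~/.local/bin in this example, is on your PATH), list the available subcommands:
$ oc compliance --help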
5.6.8.2. Fetching raw results
When a compliance scan finishes, the results of the individual checks are listed in the resulting ComplianceCheckResult
custom resource (CR). However, an administrator or auditor might require the complete details of the scan. The OpenSCAP tool creates an Asset Reporting Format (ARF) formatted file with the detailed results. This ARF file is too large to store in a config map or other standard Kubernetes resource, so a persistent volume (PV) is created to contain it.
Procedure
Fetching the results from the PV with the Compliance Operator is a four-step process. However, with the
oc-compliance
plugin, you can use a single command:$ oc compliance fetch-raw <object-type> <object-name> -o <output-path>
-
<object-type>
can be eitherscansettingbinding
,compliancescan
orcompliancesuite
, depending on which of these objects the scans were launched with. <object-name>
is the name of the binding, suite, or scan object to gather the ARF file for, and<output-path>
is the local directory to place the results.For example:
$ oc compliance fetch-raw scansettingbindings my-binding -o /tmp/
Example output
Fetching results for my-binding scans: ocp4-cis, ocp4-cis-node-worker, ocp4-cis-node-master
Fetching raw compliance results for scan 'ocp4-cis'.......
The raw compliance results are available in the following directory: /tmp/ocp4-cis
Fetching raw compliance results for scan 'ocp4-cis-node-worker'...........
The raw compliance results are available in the following directory: /tmp/ocp4-cis-node-worker
Fetching raw compliance results for scan 'ocp4-cis-node-master'......
The raw compliance results are available in the following directory: /tmp/ocp4-cis-node-master
View the list of files in the directory:
$ ls /tmp/ocp4-cis-node-master/
Example output
ocp4-cis-node-master-ip-10-0-128-89.ec2.internal-pod.xml.bzip2 ocp4-cis-node-master-ip-10-0-150-5.ec2.internal-pod.xml.bzip2 ocp4-cis-node-master-ip-10-0-163-32.ec2.internal-pod.xml.bzip2
Extract one of the results:
$ bunzip2 -c /tmp/ocp4-cis-node-master/ocp4-cis-node-master-ip-10-0-128-89.ec2.internal-pod.xml.bzip2 > /tmp/ocp4-cis-node-master/ocp4-cis-node-master-ip-10-0-128-89.ec2.internal-pod.xml
View the results:
$ ls /tmp/ocp4-cis-node-master/
Example output
ocp4-cis-node-master-ip-10-0-128-89.ec2.internal-pod.xml ocp4-cis-node-master-ip-10-0-128-89.ec2.internal-pod.xml.bzip2 ocp4-cis-node-master-ip-10-0-150-5.ec2.internal-pod.xml.bzip2 ocp4-cis-node-master-ip-10-0-163-32.ec2.internal-pod.xml.bzip2
5.6.8.3. Re-running scans
Although it is possible to run scans as scheduled jobs, you often need to re-run a scan on demand, particularly after remediations are applied or when other changes are made to the cluster.
Procedure
Rerunning a scan with the Compliance Operator requires use of an annotation on the scan object. However, with the
oc-compliance
plugin you can rerun a scan with a single command. Enter the following command to rerun the scans for theScanSettingBinding
object namedmy-binding
:$ oc compliance rerun-now scansettingbindings my-binding
Example output
Rerunning scans from 'my-binding': ocp4-cis Re-running scan 'openshift-compliance/ocp4-cis'
5.6.8.4. Using ScanSettingBinding custom resources
When using the ScanSetting
and ScanSettingBinding
custom resources (CRs) that the Compliance Operator provides, it is possible to run scans for multiple profiles while using a common set of scan options, such as schedule
, machine roles
, tolerations
, and so on. While that is easier than working with multiple ComplianceSuite
or ComplianceScan
objects, it can confuse new users.
The oc compliance bind
subcommand helps you create a ScanSettingBinding
CR.
Procedure
Run:
$ oc compliance bind [--dry-run] -N <binding name> [-S <scansetting name>] <objtype/objname> [..<objtype/objname>]
-
If you omit the
-S
flag, thedefault
scan setting provided by the Compliance Operator is used. -
The object type is the Kubernetes object type, which can be
profile
ortailoredprofile
. More than one object can be provided. -
The object name is the name of the Kubernetes resource, such as
.metadata.name
. Add the
--dry-run
option to display the YAML file of the objects that are created.For example, given the following profiles and scan settings:
$ oc get profile.compliance -n openshift-compliance
Example output
NAME                       AGE     VERSION
ocp4-cis                   3h49m   1.5.0
ocp4-cis-1-4               3h49m   1.4.0
ocp4-cis-1-5               3h49m   1.5.0
ocp4-cis-node              3h49m   1.5.0
ocp4-cis-node-1-4          3h49m   1.4.0
ocp4-cis-node-1-5          3h49m   1.5.0
ocp4-e8                    3h49m
ocp4-high                  3h49m   Revision 4
ocp4-high-node             3h49m   Revision 4
ocp4-high-node-rev-4       3h49m   Revision 4
ocp4-high-rev-4            3h49m   Revision 4
ocp4-moderate              3h49m   Revision 4
ocp4-moderate-node         3h49m   Revision 4
ocp4-moderate-node-rev-4   3h49m   Revision 4
ocp4-moderate-rev-4        3h49m   Revision 4
ocp4-nerc-cip              3h49m
ocp4-nerc-cip-node         3h49m
ocp4-pci-dss               3h49m   3.2.1
ocp4-pci-dss-3-2           3h49m   3.2.1
ocp4-pci-dss-4-0           3h49m   4.0.0
ocp4-pci-dss-node          3h49m   3.2.1
ocp4-pci-dss-node-3-2      3h49m   3.2.1
ocp4-pci-dss-node-4-0      3h49m   4.0.0
ocp4-stig                  3h49m   V2R1
ocp4-stig-node             3h49m   V2R1
ocp4-stig-node-v1r1        3h49m   V1R1
ocp4-stig-node-v2r1        3h49m   V2R1
ocp4-stig-v1r1             3h49m   V1R1
ocp4-stig-v2r1             3h49m   V2R1
rhcos4-e8                  3h49m
rhcos4-high                3h49m   Revision 4
rhcos4-high-rev-4          3h49m   Revision 4
rhcos4-moderate            3h49m   Revision 4
rhcos4-moderate-rev-4      3h49m   Revision 4
rhcos4-nerc-cip            3h49m
rhcos4-stig                3h49m   V2R1
rhcos4-stig-v1r1           3h49m   V1R1
rhcos4-stig-v2r1           3h49m   V2R1
$ oc get scansettings -n openshift-compliance
Example output
NAME                 AGE
default              10m
default-auto-apply   10m
To apply the
default
settings to theocp4-cis
andocp4-cis-node
profiles, run:$ oc compliance bind -N my-binding profile/ocp4-cis profile/ocp4-cis-node
Example output
Creating ScanSettingBinding my-binding
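To display the YAML of the objects without creating them, you can add the --dry-run option described above, for example:
$ oc compliance bind --dry-run -N my-binding profile/ocp4-cis profile/ocp4-cis-node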
After the
ScanSettingBinding
CR is created, scans begin for both bound profiles with the related settings. Overall, this is the fastest way to begin scanning with the Compliance Operator.
5.6.8.5. Printing controls
Compliance standards are generally organized into a hierarchy as follows:
- A benchmark is the top-level definition of a set of controls for a particular standard. For example, FedRAMP Moderate or Center for Internet Security (CIS) v.1.6.0.
- A control describes a family of requirements that must be met in order to be in compliance with the benchmark. For example, FedRAMP AC-01 (access control policy and procedures).
- A rule is a single check that is specific for the system being brought into compliance, and one or more of these rules map to a control.
- The Compliance Operator handles the grouping of rules into a profile for a single benchmark. It can be difficult to determine which controls the set of rules in a profile satisfies.
Procedure
The
oc compliance
controls
subcommand provides a report of the standards and controls that a given profile satisfies:$ oc compliance controls profile ocp4-cis-node
Example output
+-----------+----------+
| FRAMEWORK | CONTROLS |
+-----------+----------+
| CIS-OCP   | 1.1.1    |
+           +----------+
|           | 1.1.10   |
+           +----------+
|           | 1.1.11   |
+           +----------+
...
5.6.8.6. Fetching compliance remediation details
The Compliance Operator provides remediation objects that are used to automate the changes required to make the cluster compliant. The fetch-fixes
subcommand can help you understand exactly which configuration remediations are used. Use the fetch-fixes
subcommand to extract the remediation objects from a profile, rule, or ComplianceRemediation
object into a directory to inspect.
Procedure
View the remediations for a profile:
$ oc compliance fetch-fixes profile ocp4-cis -o /tmp
Example output
No fixes to persist for rule 'ocp4-api-server-api-priority-flowschema-catch-all' 1
No fixes to persist for rule 'ocp4-api-server-api-priority-gate-enabled'
No fixes to persist for rule 'ocp4-api-server-audit-log-maxbackup'
Persisted rule fix to /tmp/ocp4-api-server-audit-log-maxsize.yaml
No fixes to persist for rule 'ocp4-api-server-audit-log-path'
No fixes to persist for rule 'ocp4-api-server-auth-mode-no-aa'
No fixes to persist for rule 'ocp4-api-server-auth-mode-node'
No fixes to persist for rule 'ocp4-api-server-auth-mode-rbac'
No fixes to persist for rule 'ocp4-api-server-basic-auth'
No fixes to persist for rule 'ocp4-api-server-bind-address'
No fixes to persist for rule 'ocp4-api-server-client-ca'
Persisted rule fix to /tmp/ocp4-api-server-encryption-provider-cipher.yaml
Persisted rule fix to /tmp/ocp4-api-server-encryption-provider-config.yaml
1 - The No fixes to persist warning is expected whenever there are rules in a profile that do not have a corresponding remediation, because either the rule cannot be remediated automatically or a remediation was not provided.
You can view a sample of the YAML file. The
head
command will show you the first 10 lines:$ head /tmp/ocp4-api-server-audit-log-maxsize.yaml
Example output
apiVersion: config.openshift.io/v1
kind: APIServer
metadata:
  name: cluster
spec:
  maximumFileSizeMegabytes: 100
View the remediation from a
ComplianceRemediation
object created after a scan:$ oc get complianceremediations -n openshift-compliance
Example output
NAME                                             STATE
ocp4-cis-api-server-encryption-provider-cipher   NotApplied
ocp4-cis-api-server-encryption-provider-config   NotApplied
$ oc compliance fetch-fixes complianceremediations ocp4-cis-api-server-encryption-provider-cipher -o /tmp
Example output
Persisted compliance remediation fix to /tmp/ocp4-cis-api-server-encryption-provider-cipher.yaml
You can view a sample of the YAML file. The
head
command will show you the first 10 lines:$ head /tmp/ocp4-cis-api-server-encryption-provider-cipher.yaml
Example output
apiVersion: config.openshift.io/v1
kind: APIServer
metadata:
  name: cluster
spec:
  encryption:
    type: aescbc
Use caution before applying remediations directly. Some remediations might not be applicable in bulk, such as the usbguard rules in the moderate profile. In these cases, allow the Compliance Operator to apply the rules because it addresses the dependencies and ensures that the cluster remains in a good state.
5.6.8.7. Viewing ComplianceCheckResult object details
When scans are finished running, ComplianceCheckResult
objects are created for the individual scan rules. The view-result
subcommand provides a human-readable output of the ComplianceCheckResult
object details.
Procedure
Run:
$ oc compliance view-result ocp4-cis-scheduler-no-bind-address