Chapter 2. Policy deployment
The Kubernetes API is used to define and interact with policies. You are responsible for ensuring that the environment meets internal enterprise security standards for software engineering, secure engineering, resiliency, and regulatory compliance for workloads hosted on Kubernetes clusters.
2.1. Deployment options
The CertificatePolicy, ConfigurationPolicy, and OperatorPolicy policies and Gatekeeper constraints can be deployed to your managed clusters in two ways: by using the hub cluster policy framework, or by deploying policies with external tools. View the following descriptions of each option:
- Hub cluster policy framework: The first approach is to leverage the policy framework on the hub cluster. This involves defining the policies and Gatekeeper constraints within a Policy object on the hub cluster and leveraging a Placement and PlacementBinding to select which clusters the policies get deployed to. The policy framework handles the delivery to the managed cluster and the status aggregation back to the hub cluster. The Policy objects are represented in the Policies table in the Red Hat Advanced Cluster Management for Kubernetes console.
- Deploying policies with external tools: Alternatively, you can use an external tool, such as Red Hat OpenShift GitOps, to deliver the policies and Gatekeeper constraints directly to the managed cluster. Your policies are shown in the Discovered policies table in the Red Hat Advanced Cluster Management console. This option can be helpful if a configuration management strategy is already in place, but you want to supplement that strategy with Red Hat Advanced Cluster Management policies. See the sketch after this list for an example.
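For example, the external tool approach might use an Argo CD Application that syncs a Git directory of policies directly to a managed cluster. The following is a minimal sketch, not a definitive configuration: the Application name, repository URL, path, and destination cluster name are placeholder assumptions, and the open-cluster-management-policies destination namespace follows the requirement described in the Policy deployment with external tools section:

apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: managed-cluster-policies   # hypothetical name
  namespace: openshift-gitops
spec:
  project: default
  source:
    repoURL: https://example.com/your-org/policy-repo.git  # placeholder repository
    targetRevision: main
    path: policies/                # placeholder path that contains policy manifests
  destination:
    name: managed-cluster-1        # an Argo CD cluster registered for the managed cluster
    namespace: open-cluster-management-policies
  syncPolicy:
    automated:
      prune: true
      selfHeal: true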
2.1.1. Policy deployment comparison table
See the following comparison table to learn which option supports specific features and to decide which deployment strategy is more suitable for your use case:
Feature | Policy framework | Deployed with external tools |
---|---|---|
Hub cluster templating | Yes | No |
Managed cluster templating | Yes | Yes |
Red Hat Advanced Cluster Management console support | Yes | Yes, you can view policies that are deployed by external tools from the Discovered policies table. Policies that are deployed with external tools are not displayed from the Governance Overview dashboard. You must enable Search on your managed cluster. |
Policy grouping | Yes, you can have a combination of statuses and deployments through PolicySet resources. | You cannot use policy grouping directly on your policies when deployed from external tools, but Argo CD applications can provide similar grouping. |
Compliance history | You can view the last 10 events per cluster per policy stored on the hub cluster. | No, but you can scrape the compliance history from the controller logs on each managed cluster. |
Policy dependencies | Yes | No. Alternatively, you can use the Argo CD sync waves feature to define dependencies. |
Policy Generator | Yes | No |
OpenShift GitOps health checks (you must complete extra configuration for Argo CD versions earlier than 2.13) | Yes | Yes |
Policy compliance history API (Technology Preview) | Yes | No |
OpenShift GitOps applying native Kubernetes manifests and Red Hat Advanced Cluster Management policy on the managed cluster | No, you must deploy a policy on your Red Hat Advanced Cluster Management hub cluster. | Yes |
Policy compliance metric on the hub cluster for alerts | Yes | No |
Running Ansible jobs on policy noncompliance | Yes, use the PolicyAutomation resource. | No |
2.2. Additional resources
2.3. Hub cluster policy framework
To create and manage policies, gain visibility, and remediate configurations to meet standards, use the Red Hat Advanced Cluster Management for Kubernetes governance security policy framework. Red Hat Advanced Cluster Management for Kubernetes governance provides an extensible policy framework for enterprises to introduce their own security policies.
You can create a Policy resource in any namespace on the hub cluster except the managed cluster namespaces. If you create a policy in a managed cluster namespace, it is deleted by Red Hat Advanced Cluster Management. Each Red Hat Advanced Cluster Management policy can be organized into one or more policy template definitions. For more details about the policy elements, view the Policy YAML table section on this page.
2.3.1. Requirements
Each policy requires a Placement resource that defines the clusters that the policy document is applied to, and a PlacementBinding resource that binds the Red Hat Advanced Cluster Management for Kubernetes policy. Policy resources are applied to clusters based on an associated Placement definition, where you can view a list of managed clusters that match certain criteria. A sample Placement resource that matches all clusters that have the environment=dev label resembles the following YAML:

apiVersion: cluster.open-cluster-management.io/v1beta1
kind: Placement
metadata:
  name: placement-policy-role
spec:
  predicates:
  - requiredClusterSelector:
      labelSelector:
        matchExpressions:
        - {key: environment, operator: In, values: ["dev"]}
To learn more about placement and the supported criteria, see Placement overview in the Cluster lifecycle documentation.
In addition to the Placement resource, you must create a PlacementBinding to bind your Placement resource to a policy. A sample PlacementBinding resource that binds the previous Placement to a policy named policy-role resembles the following YAML:

apiVersion: policy.open-cluster-management.io/v1
kind: PlacementBinding
metadata:
  name: binding-policy-role
placementRef:
  name: placement-policy-role 1
  kind: Placement
  apiGroup: cluster.open-cluster-management.io
subjects: 2
- name: policy-role
  kind: Policy
  apiGroup: policy.open-cluster-management.io
1. If you use the previous sample, make sure you update the name of the Placement resource in the placementRef section to match your placement name.
2. You must update the name of the policy in the subjects section to match your policy name.

Use the oc apply -f resource.yaml -n namespace command to apply both the Placement and the PlacementBinding resources. Make sure the policy, placement, and placement binding are all created in the same namespace.
- To use a Placement resource, you must bind a ManagedClusterSet resource to the namespace of the Placement resource with a ManagedClusterSetBinding resource. Refer to Creating a ManagedClusterSetBinding resource for additional details.
- When you create a Placement resource for your policy from the console, the status of the placement toleration is automatically added to the Placement resource. See Adding a toleration to a placement for more details.
Best practice: Use the command line interface (CLI) to make updates to the policies when you use the Placement resource.
2.3.2. Hub cluster policy components
Learn more details about the policy components in the following sections:
2.3.2.1. Policy YAML structure
When you create a policy, you must include required parameter fields and values. Depending on your policy controller, you might need to include other optional fields and values. View the following YAML structure for policies:
apiVersion: policy.open-cluster-management.io/v1
kind: Policy
metadata:
  name:
  annotations:
    policy.open-cluster-management.io/standards:
    policy.open-cluster-management.io/categories:
    policy.open-cluster-management.io/controls:
    policy.open-cluster-management.io/description:
spec:
  disabled:
  remediationAction:
  dependencies:
  - apiVersion: policy.open-cluster-management.io/v1
    compliance:
    kind: Policy
    name:
    namespace:
  policy-templates:
  - objectDefinition:
      apiVersion:
      kind:
      metadata:
        name:
      spec:
---
apiVersion: policy.open-cluster-management.io/v1
kind: PlacementBinding
metadata:
  name:
bindingOverrides:
  remediationAction:
subFilter:
placementRef:
  name:
  kind: Placement
  apiGroup: cluster.open-cluster-management.io
subjects:
- name:
  kind:
  apiGroup:
---
apiVersion: cluster.open-cluster-management.io/v1beta1
kind: Placement
metadata:
  name:
spec:
2.3.2.2. Policy YAML table
View the following table for policy parameter descriptions:
Field | Optional or required | Description |
---|---|---|
apiVersion | Required | Set the value to policy.open-cluster-management.io/v1. |
kind | Required | Set the value to Policy. |
metadata.name | Required | The name for identifying the policy resource. |
metadata.annotations | Optional | Used to specify a set of security details that describes the set of standards the policy is trying to validate. All annotations documented here are represented as a string that contains a comma-separated list. Note: You can view policy violations based on the standards and categories that you define for your policy on the Policies page, from the console. |
bindingOverrides.remediationAction | Optional | When this parameter is set to enforce, it provides a way for you to override the remediation action of the related PlacementBinding policies. |
subFilter | Optional | Set this parameter to restricted to select a subset of the bound policies. |
policy.open-cluster-management.io/standards | Optional | The name or names of security standards the policy is related to. For example, National Institute of Standards and Technology (NIST) and Payment Card Industry (PCI). |
policy.open-cluster-management.io/categories | Optional | A security control category represents specific requirements for one or more standards. For example, a System and Information Integrity category might indicate that your policy contains a data transfer protocol to protect personal information, as required by the HIPAA and PCI standards. |
policy.open-cluster-management.io/controls | Optional | The name of the security control that is being checked. For example, Access Control or System and Information Integrity. |
spec.disabled | Required | Set the value to true or false. The disabled parameter provides the ability to enable and disable your policies. |
spec.remediationAction | Optional | Specifies the remediation of your policy. The parameter values are inform and enforce. |
spec.copyPolicyMetadata | Optional | Specifies whether the labels and annotations of a policy should be copied when replicating the policy to a managed cluster. If you set it to false, the labels and annotations of your policy are not copied to the replicated policy. The default value is true. |
spec.dependencies | Optional | Used to create a list of dependency objects detailed with extra considerations for compliance. |
spec.policy-templates | Required | Used to create one or more policies to apply to a managed cluster. |
spec.policy-templates[].extraDependencies | Optional | For policy templates, this is used to create a list of dependency objects detailed with extra considerations for compliance. |
spec.policy-templates[].ignorePending | Optional | Used to mark a policy template as compliant until the dependency criteria is verified. Important: Some policy kinds might not support the enforce feature. |
2.3.2.3. Policy sample file
View the following YAML file, which is a configuration policy for roles:

apiVersion: policy.open-cluster-management.io/v1
kind: Policy
metadata:
  name: policy-role
  annotations:
    policy.open-cluster-management.io/standards: NIST SP 800-53
    policy.open-cluster-management.io/categories: AC Access Control
    policy.open-cluster-management.io/controls: AC-3 Access Enforcement
    policy.open-cluster-management.io/description:
spec:
  remediationAction: inform
  disabled: false
  policy-templates:
  - objectDefinition:
      apiVersion: policy.open-cluster-management.io/v1
      kind: ConfigurationPolicy
      metadata:
        name: policy-role-example
      spec:
        remediationAction: inform # the policy-template spec.remediationAction is overridden by the preceding parameter value for spec.remediationAction
        severity: high
        namespaceSelector:
          include: ["default"]
        object-templates:
        - complianceType: mustonlyhave # role definition should exact match
          objectDefinition:
            apiVersion: rbac.authorization.k8s.io/v1
            kind: Role
            metadata:
              name: sample-role
            rules:
            - apiGroups: ["extensions", "apps"]
              resources: ["deployments"]
              verbs: ["get", "list", "watch", "delete", "patch"]
---
apiVersion: policy.open-cluster-management.io/v1
kind: PlacementBinding
metadata:
  name: binding-policy-role
placementRef:
  name: placement-policy-role
  kind: Placement
  apiGroup: cluster.open-cluster-management.io
subjects:
- name: policy-role
  kind: Policy
  apiGroup: policy.open-cluster-management.io
---
apiVersion: cluster.open-cluster-management.io/v1beta1
kind: Placement
metadata:
  name: placement-policy-role
spec:
  predicates:
  - requiredClusterSelector:
      labelSelector:
        matchExpressions:
        - {key: environment, operator: In, values: ["dev"]}
2.3.3. Additional resources
Continue reading the related topics of the Red Hat Advanced Cluster Management governance framework:
2.3.4. Policy dependencies
Dependencies can be used to activate a policy only when other policies on your cluster are in a certain state. When the dependency criteria are not met, the policy is labeled as Pending and resources are not created on your managed cluster. More details about the criteria status are available in the policy status.
You can use policy dependencies to control the ordering of how objects are applied. For example, if you have a policy for an operator and another policy for a resource that the operator manages, you can set a dependency on the second policy so that it does not attempt to create the resource until the operator is installed. This can help with the performance on the managed cluster.
Required access: Policy administrator
View the following policy dependency example, where the ScanSettingBinding is only created if the upstream-compliance-operator policy is already compliant on the managed cluster:

apiVersion: policy.open-cluster-management.io/v1
kind: Policy
metadata:
  annotations:
    policy.open-cluster-management.io/categories: CM Configuration Management
    policy.open-cluster-management.io/controls: CM-2 Baseline Configuration
    policy.open-cluster-management.io/standards: NIST SP 800-53
    policy.open-cluster-management.io/description:
  name: moderate-compliance-scan
  namespace: default
spec:
  dependencies: 1
  - apiVersion: policy.open-cluster-management.io/v1
    compliance: Compliant
    kind: Policy
    name: upstream-compliance-operator
    namespace: default
  disabled: false
  policy-templates:
  - extraDependencies: 2
    - apiVersion: policy.open-cluster-management.io/v1
      kind: ConfigurationPolicy
      name: scan-setting-prerequisite
      compliance: Compliant
    ignorePending: false 3
    objectDefinition:
      apiVersion: policy.open-cluster-management.io/v1
      kind: ConfigurationPolicy
      metadata:
        name: moderate-compliance-scan
      spec:
        object-templates:
        - complianceType: musthave
          objectDefinition:
            apiVersion: compliance.openshift.io/v1alpha1
            kind: ScanSettingBinding
            metadata:
              name: moderate
              namespace: openshift-compliance
            profiles:
            - apiGroup: compliance.openshift.io/v1alpha1
              kind: Profile
              name: ocp4-moderate
            - apiGroup: compliance.openshift.io/v1alpha1
              kind: Profile
              name: ocp4-moderate-node
            settingsRef:
              apiGroup: compliance.openshift.io/v1alpha1
              kind: ScanSetting
              name: default
        remediationAction: enforce
        severity: low
1. The dependencies field is set on a Policy object, and the requirements apply to all policy templates in the policy.
2. The extraDependencies field can be set on an individual policy template. For example, the parameter can be set for a configuration policy, and it defines criteria that must be satisfied in addition to any dependencies set in the policy.
3. The ignorePending field can be set on each individual policy template, and it configures whether the Pending status on that template is considered Compliant or NonCompliant when the overall policy compliance is calculated. By default, this is set to false, and a Pending template causes the policy to be NonCompliant. When you set this to true, the policy can still be Compliant when this template is Pending, which is useful when that is the expected status of the template.
Note: You cannot use a dependency to apply a policy on one cluster based on the status of a policy in another cluster.
2.3.5. Configuring policy compliance history API (Technology Preview) (Deprecated)
The policy compliance history API is an optional technical preview feature if you want long-term storage of Red Hat Advanced Cluster Management for Kubernetes policy compliance events in a queryable format. You can use the API to get additional details such as the spec field to audit and troubleshoot your policy, and get compliance events when a policy is disabled or removed from a cluster.
The policy compliance history API can also generate a comma-separated values (CSV) spreadsheet of policy compliance events to help you with auditing and troubleshooting.
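For example, assuming the reports endpoint follows the same route and authentication pattern as the compliance events API that is tested later in this section, a CSV download might resemble the following sketch. The /api/v1/reports/compliance-events path is an assumption; verify the endpoint that is available in your release:

# Download the compliance events report as CSV (endpoint path is an assumption)
curl -H "Authorization: Bearer $(oc whoami --show-token)" \
  "https://$(oc -n open-cluster-management get route governance-history-api -o jsonpath='{.spec.host}')/api/v1/reports/compliance-events" \
  -o compliance-events.csv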
2.3.5.1. Prerequisites
The policy compliance history API requires a PostgreSQL server on version 13 or newer.

- Some Red Hat supported options include using the registry.redhat.io/rhel9/postgresql-15 container image, the registry.redhat.io/rhel8/postgresql-13 container image, the postgresql-server RPM, or the postgresql/server module. Review the applicable official Red Hat documentation on setup and configuration for the path you choose. The policy compliance history API is compatible with any standard PostgreSQL server and is not limited to the official Red Hat supported offerings.
- This PostgreSQL server must be reachable from the Red Hat Advanced Cluster Management hub cluster. If the PostgreSQL server is running external to the hub cluster, ensure that the routing and firewall configuration allows the hub cluster to connect to port 5432 of the PostgreSQL server. This port might be a different value if it is overridden in the PostgreSQL configuration.
2.3.5.2. Enabling the compliance history API
Configure your managed clusters to record policy compliance events to the API. You can enable this on all clusters or a subset of clusters. Complete the following steps:
Configure the PostgreSQL server as a cluster administrator. If you deployed PostgreSQL on your Red Hat Advanced Cluster Management hub cluster, temporarily port-forward the PostgreSQL port to use the psql command. Run the following command:

oc -n <PostgreSQL namespace> port-forward <PostgreSQL pod name> 5432:5432

In a different terminal, connect to the PostgreSQL server locally similar to the following command:

psql 'postgres://postgres:@127.0.0.1:5432/postgres'

Create a user and database for your Red Hat Advanced Cluster Management hub cluster with the following SQL statements:

CREATE USER "rhacm-policy-compliance-history" WITH PASSWORD '<replace with password>';
CREATE DATABASE "rhacm-policy-compliance-history" WITH OWNER="rhacm-policy-compliance-history";

Create the governance-policy-database Secret resource to use this database for the policy compliance history API. Run the following command:

oc -n open-cluster-management create secret generic governance-policy-database \ 1
  --from-literal="user=rhacm-policy-compliance-history" \
  --from-literal="password=rhacm-policy-compliance-history" \
  --from-literal="host=<replace with host name of the Postgres server>" \ 2
  --from-literal="dbname=ocm-compliance-history" \
  --from-literal="sslmode=verify-full" \
  --from-file="ca=<replace>" 3
1. Add the namespace where Red Hat Advanced Cluster Management is installed. By default, Red Hat Advanced Cluster Management is installed in the open-cluster-management namespace.
2. Add the host name of the PostgreSQL server. If you deployed the PostgreSQL server on the Red Hat Advanced Cluster Management hub cluster and it is not exposed outside of the cluster, you can use the Service object for the host value. The format is <service name>.<namespace>.svc. Note that this approach depends on the network policies of the Red Hat Advanced Cluster Management hub cluster.
3. In the ca data field, you must specify the certificate file of the Certificate Authority that signed the TLS certificate of the PostgreSQL server. If you do not provide this value, you must change the sslmode value accordingly, though it is not recommended since it reduces the security of the database connection.
Add the cluster.open-cluster-management.io/backup label to back up the Secret resource for a Red Hat Advanced Cluster Management hub cluster restore operation. Run the following command:

oc -n open-cluster-management label secret governance-policy-database cluster.open-cluster-management.io/backup=""
For more customization of the PostgreSQL connection, use the connectionURL data field directly and provide a value in the format of a PostgreSQL connection URI. Special characters in the password must be URL encoded. One option is to use Python to generate the URL encoded format of the password. For example, if the password is $uper<Secr&t%>, run the following Python command to get the output %24uper%3CSecr%26t%25%3E:

python -c 'import urllib.parse; import sys; print(urllib.parse.quote(sys.argv[1]))' '$uper<Secr&t%>'
Run the following command to test the policy compliance history API after you create the governance-policy-database Secret. An OpenShift Route object is automatically created in the same namespace. If routes on the Red Hat Advanced Cluster Management hub cluster do not utilize a trusted certificate, you can choose to provide the -k flag in the curl command to skip TLS verification, though this is not recommended:

curl -H "Authorization: Bearer $(oc whoami --show-token)" \
  "https://$(oc -n open-cluster-management get route governance-history-api -o jsonpath='{.spec.host}')/api/v1/compliance-events"
If successful, the curl command returns a value similar to the following message:
{"data":[],"metadata":{"page":1,"pages":0,"per_page":20,"total":0}}
If it is not successful, the curl command might return one of the following two messages:
{"message":"The database is unavailable"}
{"message":"Internal Error"}
If you receive either message, view the Kubernetes events in the open-cluster-management namespace with the following command:

oc -n open-cluster-management get events --field-selector reason=OCMComplianceEventsDBError
If you receive instructions from the event to view the governance-policy-propagator logs, run the following command:

oc -n open-cluster-management logs -l name=governance-policy-propagator -f
You might receive an error message that indicates the user, password, or database is incorrectly specified. See the following message example:
2024-03-05T12:17:14.500-0500 info compliance-events-api complianceeventsapi/complianceeventsapi_controller.go:261 The database connection failed: pq: password authentication failed for user "rhacm-policy-compliance-history"
Update the governance-policy-database Secret resource with the correct PostgreSQL connection settings by running the following command:

oc -n open-cluster-management edit secret governance-policy-database
2.3.5.3. Setting the compliance history API URL
Set the policy compliance history API URL to enable the feature on managed clusters. Complete the following steps:
Retrieve the external URL of the policy compliance history API with the following command:
echo "https://$(oc -n open-cluster-management get route governance-history-api -o=jsonpath='{.spec.host}')"
The output might resemble the following information, with the domain name of your Red Hat Advanced Cluster Management hub cluster:
https://governance-history-api-open-cluster-management.apps.openshift.redhat.com
Create an AddOnDeploymentConfig object similar to the following example:

apiVersion: addon.open-cluster-management.io/v1alpha1
kind: AddOnDeploymentConfig
metadata:
  name: governance-policy-framework
  namespace: open-cluster-management
spec:
  customizedVariables:
  - name: complianceHistoryAPIURL
    value: <replace with URL from previous command>

- Replace the value parameter value with your compliance history external URL.
2.3.5.3.1. Enabling on all managed clusters
Enable the compliance history API on all managed clusters to record compliance events from your managed clusters. Complete the following steps:
Configure the governance-policy-framework ClusterManagementAddOn object to use the AddOnDeploymentConfig object by running the following command:

oc edit ClusterManagementAddOn governance-policy-framework
Add or update the spec.supportedConfigs array. Your resource might have the following configuration:

- group: addon.open-cluster-management.io
  resource: addondeploymentconfigs
  defaultConfig:
    name: governance-policy-framework
    namespace: open-cluster-management
2.3.5.3.2. Enabling compliance history on a single managed cluster
Enable the compliance history API on a single managed cluster to record compliance events from the managed cluster. Complete the following steps:
Configure the governance-policy-framework ManagedClusterAddOn resource in the managed cluster namespace by running the following command from your Red Hat Advanced Cluster Management hub cluster:

oc -n <manage-cluster-namespace> edit ManagedClusterAddOn governance-policy-framework

- Replace the <manage-cluster-namespace> placeholder with the managed cluster name you intend to enable.
Add or update the spec.configs array to have an entry similar to the following example:

- group: addon.open-cluster-management.io
  resource: addondeploymentconfigs
  name: governance-policy-framework
  namespace: open-cluster-management
To verify the configuration, confirm that the deployment on your managed cluster uses the --compliance-api-url container argument. Run the following command:

oc -n open-cluster-management-agent-addon get deployment governance-policy-framework -o jsonpath='{.spec.template.spec.containers[1].args}'
The output might resemble the following:
["--enable-lease=true","--hub-cluster-configfile=/var/run/klusterlet/kubeconfig","--leader-elect=false","--log-encoder=console","--log-level=0","--v=-1","--evaluation-concurrency=2","--client-max-qps=30","--client-burst=45","--disable-spec-sync=true","--cluster-namespace=local-cluster","--compliance-api-url=https://governance-history-api-open-cluster-management.apps.openshift.redhat.com"]
Any new policy compliance events are recorded in the policy compliance history API.
If policy compliance events are not being recorded for a specific managed cluster, view the governance-policy-framework logs on the affected managed cluster:

oc -n open-cluster-management-agent-addon logs deployment/governance-policy-framework -f
Log messages similar to the following message are displayed. If the message value is empty, the policy compliance history API URL is incorrect or there is a network communication issue:

2024-03-05T19:28:38.063Z info policy-status-sync statussync/policy_status_sync.go:750 Failed to record the compliance event with the compliance API. Will requeue. {"statusCode": 503, "message": ""}
- If the policy compliance history API URL is incorrect, edit the URL on the hub cluster with the following command:
oc -n open-cluster-management edit AddOnDeploymentConfig governance-policy-framework
Note: If you experience a network communication issue, you must diagnose the problem based on your network infrastructure.
2.3.5.4. Additional resource
2.3.6. Integrating Policy Generator
By integrating the Policy Generator, you can use it to automatically build Red Hat Advanced Cluster Management for Kubernetes policies. To integrate the Policy Generator, see Policy Generator.
For an example of what you can do with the Policy Generator, see Generating a policy that installs the Compliance Operator.
2.3.7. Policy Generator
The Policy Generator is a part of the Red Hat Advanced Cluster Management for Kubernetes application lifecycle subscription Red Hat OpenShift GitOps workflow that generates Red Hat Advanced Cluster Management policies using Kustomize. The Policy Generator builds Red Hat Advanced Cluster Management policies from Kubernetes manifest YAML files, which are provided through a PolicyGenerator manifest YAML file that is used to configure it. The Policy Generator is implemented as a Kustomize generator plug-in. For more information on Kustomize, read the Kustomize documentation.
View the following sections for more information about the Policy Generator:
2.3.7.1. Policy Generator capabilities
The integration of the Policy Generator with the Red Hat Advanced Cluster Management application lifecycle subscription OpenShift GitOps workflow simplifies the distribution of Kubernetes resource objects to managed OpenShift Container Platform clusters, and Kubernetes clusters through Red Hat Advanced Cluster Management policies.
Use the Policy Generator to complete the following actions:
- Convert any Kubernetes manifest files to Red Hat Advanced Cluster Management configuration policies, including manifests that are created from a Kustomize directory.
- Patch the input Kubernetes manifests before they are inserted into a generated Red Hat Advanced Cluster Management policy.
- Generate additional configuration policies so you can report on Gatekeeper policy violations through Red Hat Advanced Cluster Management for Kubernetes.
- Generate policy sets on the hub cluster.
2.3.7.2. Policy Generator configuration structure
The Policy Generator is a Kustomize generator plug-in that is configured with a manifest of the PolicyGenerator kind and policy.open-cluster-management.io/v1 API version. Continue reading to learn about the configuration structure:

To use the plug-in, add a generators section in a kustomization.yaml file. View the following example:

generators:
- policy-generator-config.yaml 1
1. The policy-generator-config.yaml file that is referenced in the previous example is a YAML file with the instructions of the policies to generate.
A simple PolicyGenerator configuration file might resemble the following example, with policyDefaults and policies:

apiVersion: policy.open-cluster-management.io/v1
kind: PolicyGenerator
metadata:
  name: config-data-policies
policyDefaults:
  namespace: policies
  policySets: []
policies:
- name: config-data
  manifests:
  - path: configmap.yaml 1
1. The configmap.yaml file represents a Kubernetes manifest YAML file to be included in the policy. Alternatively, you can set the path to a Kustomize directory, or a directory with multiple Kubernetes manifest YAML files. View the following ConfigMap example:

apiVersion: v1
kind: ConfigMap
metadata:
  name: my-config
  namespace: default
data:
  key1: value1
  key2: value2
You can use the object-templates-raw manifest to automatically generate a ConfigurationPolicy resource with the content you add. For example, you can create a manifest file with the following syntax:

object-templates-raw: |
  {{- range (lookup "v1" "ConfigMap" "my-namespace" "").items }}
  - complianceType: musthave
    objectDefinition:
      kind: ConfigMap
      apiVersion: v1
      metadata:
        name: {{ .metadata.name }}
        namespace: {{ .metadata.namespace }}
        labels:
          i-am-from: template
  {{- end }}
After you create a manifest file, you can create a PolicyGenerator configuration file. See the following YAML example, and for path, enter the path to your manifest.yaml file:

apiVersion: policy.open-cluster-management.io/v1
kind: PolicyGenerator
metadata:
  name: config-data-policies
policyDefaults:
  namespace: policies
  policySets: []
policies:
- name: config-data
  manifests:
  - path: manifest.yaml
The generated Policy resource, along with the generated Placement resource and PlacementBinding resource, might resemble the following examples. Note: Specifications for resources are described in the Policy Generator configuration reference table.

apiVersion: cluster.open-cluster-management.io/v1beta1
kind: Placement
metadata:
  name: placement-config-data
  namespace: policies
spec:
  predicates:
  - requiredClusterSelector:
      labelSelector:
        matchExpressions: []
---
apiVersion: policy.open-cluster-management.io/v1
kind: PlacementBinding
metadata:
  name: binding-config-data
  namespace: policies
placementRef:
  apiGroup: cluster.open-cluster-management.io
  kind: Placement
  name: placement-config-data
subjects:
- apiGroup: policy.open-cluster-management.io
  kind: Policy
  name: config-data
---
apiVersion: policy.open-cluster-management.io/v1
kind: Policy
metadata:
  annotations:
    policy.open-cluster-management.io/categories: CM Configuration Management
    policy.open-cluster-management.io/controls: CM-2 Baseline Configuration
    policy.open-cluster-management.io/standards: NIST SP 800-53
    policy.open-cluster-management.io/description:
  name: config-data
  namespace: policies
spec:
  disabled: false
  policy-templates:
  - objectDefinition:
      apiVersion: policy.open-cluster-management.io/v1
      kind: ConfigurationPolicy
      metadata:
        name: config-data
      spec:
        object-templates:
        - complianceType: musthave
          objectDefinition:
            apiVersion: v1
            data:
              key1: value1
              key2: value2
            kind: ConfigMap
            metadata:
              name: my-config
              namespace: default
        remediationAction: inform
        severity: low
2.3.7.3. Policy Generator configuration reference table
All the fields in the policyDefaults section, except for namespace, can be overridden for each policy, and all the fields in the policySetDefaults section can be overridden for each policy set.
Field | Optional or required | Description |
---|---|---|
apiVersion | Required | Set the value to policy.open-cluster-management.io/v1. |
kind | Required | Set the value to PolicyGenerator. |
metadata.name | Required | The name for identifying the policy resource. |
placementBindingDefaults.name | Optional | If multiple policies use the same placement, this name is used to generate a unique name for the resulting PlacementBinding, binding the placement with the array of policies. |
policyDefaults | Required | Any default value listed here is overridden by an entry in the policies array except for namespace. |
policyDefaults.namespace | Required | The namespace of all the policies. |
policyDefaults.complianceType | Optional | Determines the policy controller behavior when comparing the manifest to objects on the cluster. The values that you can use are musthave, mustonlyhave, or mustnothave. The default value is musthave. |
policyDefaults.metadataComplianceType | Optional | Overrides complianceType when comparing the manifest metadata section to objects on the cluster. |
policyDefaults.categories | Optional | Array of categories to be used in the policy.open-cluster-management.io/categories annotation. |
policyDefaults.controls | Optional | Array of controls to be used in the policy.open-cluster-management.io/controls annotation. |
policyDefaults.standards | Optional | An array of standards to be used in the policy.open-cluster-management.io/standards annotation. |
policyDefaults.policyAnnotations | Optional | Annotations that the policy includes in the metadata.annotations field. It is applied for all policies unless specified in the policy. |
policyDefaults.configurationPolicyAnnotations | Optional | Key-value pairs of annotations to set on generated configuration policies. For example, you can disable policy templates by defining the following parameter: policy.open-cluster-management.io/disable-templates: "true". |
policyDefaults.copyPolicyMetadata | Optional | Copies the labels and annotations for all policies and adds them to a replica policy. Set to true by default. |
policyDefaults.customMessage | Optional | Configures the compliance messages emitted by the configuration policy to use one of the specified Go templates based on the current compliance. |
policyDefaults.severity | Optional | The severity of the policy violation. The default value is low. |
policyDefaults.disabled | Optional | Whether the policy is disabled, meaning it is not propagated and reports no status as a result. The default value is false to enable propagation. |
policyDefaults.remediationAction | Optional | The remediation mechanism of your policy. The parameter values are enforce and inform. The default value is inform. |
policyDefaults.namespaceSelector | Required for namespaced objects that do not have a namespace specified | Determines namespaces in the managed cluster that the object is applied to. The include and exclude parameters accept file path expressions to include and exclude namespaces by name. The matchExpressions and matchLabels parameters support additional namespace selection criteria. |
policyDefaults.evaluationInterval | Optional | Specifies the frequency for a policy to be evaluated when it is in a particular compliance state. Use the compliant and noncompliant parameters. When managed clusters have low resources, the evaluation interval can be set to long polling intervals to reduce CPU and memory usage on the Kubernetes API and policy controller. These are in the format of durations. For example, "1h25m3s" represents 1 hour, 25 minutes, and 3 seconds. |
policyDefaults.evaluationInterval.compliant | Optional | Specifies the evaluation frequency for a compliant policy. |
policyDefaults.evaluationInterval.noncompliant | Optional | Specifies the evaluation frequency for a non-compliant policy. |
policyDefaults.pruneObjectBehavior | Optional | Determines whether objects created or monitored by the policy should be deleted when the policy is deleted. Pruning only takes place if the remediation action of the policy has been set to enforce. |
policyDefaults.recreateOption | Optional | Describes whether to delete and recreate an object when an update is required. The values that you can use are None, IfRequired, and Always. |
policyDefaults.recordDiff | Optional | Specifies if and where to log the difference between the object on the cluster and the objectDefinition in the policy. |
policyDefaults.dependencies | Optional | A list of objects that must be in specific compliance states before this policy is applied. |
policyDefaults.dependencies[].name | Required | The name of the object being depended on. |
policyDefaults.dependencies[].namespace | Optional | The namespace of the object being depended on. The default is the namespace of policies set for the Policy Generator. |
policyDefaults.dependencies[].compliance | Optional | The compliance state the object needs to be in. The default value is Compliant. |
policyDefaults.dependencies[].kind | Optional | The kind of the object. By default, the kind is set to Policy. |
policyDefaults.dependencies[].apiVersion | Optional | The API version of the object. The default value is policy.open-cluster-management.io/v1. |
policyDefaults.description | Optional | The description of the policy you want to create. |
policyDefaults.extraDependencies | Optional | A list of objects that must be in specific compliance states before this policy is applied. The dependencies that you define are added to each policy template (for example, ConfigurationPolicy) separately from the policy dependencies. |
policyDefaults.extraDependencies[].name | Required | The name of the object being depended on. |
policyDefaults.extraDependencies[].namespace | Optional | The namespace of the object being depended on. By default, the value is set to the namespace of policies set for the Policy Generator. |
policyDefaults.extraDependencies[].compliance | Optional | The compliance state the object needs to be in. The default value is Compliant. |
policyDefaults.extraDependencies[].kind | Optional | The kind of the object. The default value is ConfigurationPolicy. |
policyDefaults.extraDependencies[].apiVersion | Optional | The API version of the object. The default value is policy.open-cluster-management.io/v1. |
policyDefaults.ignorePending | Optional | Bypass compliance status checks when the Policy Generator is waiting for its dependencies to reach their desired states. The default value is false. |
policyDefaults.orderPolicies | Optional | Automatically generate dependencies on the policies so they are applied in the order that you defined in the policies list. |
policyDefaults.orderManifests | Optional | Automatically generate extraDependencies on the policy templates so they are applied in the order that you defined in the manifests list for the policy. |
policyDefaults.consolidateManifests | Optional | This determines if a single configuration policy is generated for all the manifests being wrapped in the policy. If set to false, a configuration policy per manifest is generated. The default value is true. |
policyDefaults.informKyvernoPolicies | Optional | When the policy references a Kyverno policy manifest, this determines if an additional configuration policy is generated to receive policy violations in Red Hat Advanced Cluster Management, when the Kyverno policy is violated. The default value is true. |
policyDefaults.policyLabels | Optional | Labels that the policy includes in its metadata.labels field. It is applied for all policies unless specified in the policy. |
policyDefaults.policySets | Optional | Array of policy sets that the policy joins. Policy set details can be defined in the policySets section. |
policyDefaults.generatePolicyPlacement | Optional | Generate placement manifests for policies. Set to true by default. |
policyDefaults.generatePlacementWhenInSet | Optional | When a policy is part of a policy set, by default, the generator does not generate the placement for this policy since a placement is generated for the policy set. Set generatePlacementWhenInSet to true to deploy the policy with both the policy placement and the policy set placement. The default value is false. |
policyDefaults.placement | Optional | The placement configuration for the policies. This defaults to a placement configuration that matches all clusters. |
policyDefaults.placement.name | Optional | Specifying a name to consolidate placements that contain the same cluster label selectors. |
policyDefaults.placement.labelSelector | Optional | Specify a placement by defining a cluster label selector using either matchExpressions, matchLabels, or both. |
policyDefaults.placement.placementName | Optional | Define this parameter to use a placement that already exists on the cluster. A Placement is not created, but a PlacementBinding binds the policy to this Placement. |
policyDefaults.placement.placementPath | Optional | To reuse an existing placement, specify the path relative to the location of the kustomization.yaml file. |
policySetDefaults | Optional | Default values for policy sets. Any default value listed for this parameter is overridden by an entry in the policySets array. |
policySetDefaults.placement | Optional | The placement configuration for the policies. This defaults to a placement configuration that matches all clusters. See policyDefaults.placement for more details. |
policySetDefaults.generatePolicySetPlacement | Optional | Generate placement manifests for policy sets. Set to true by default. |
policies | Required | The list of policies to create along with overrides to either the default values, or the values that are set in policyDefaults. |
policies[].description | Optional | The description of the policy you want to create. |
policies[].name | Required | The name of the policy to create. |
policies[].manifests | Required | The list of Kubernetes object manifests to include in the policy, along with overrides to either the default values, the values set in this policies item, or the values set in policyDefaults. |
policies[].manifests[].path | Required | Path to a single file, a flat directory of files, or a Kustomize directory relative to the kustomization.yaml file. |
policies[].manifests[].patches | Optional | A list of Kustomize patches to apply to the manifest at the path. If there are multiple manifests, the patch requires the apiVersion, kind, metadata.name, and metadata.namespace values to be specified so the generator can identify the manifest to patch. |
policies[].policyLabels | Optional | Labels that the policy includes in its metadata.labels field. |
policySets | Optional | The list of policy sets to create, along with overrides to either the default values or the values that are set in policySetDefaults. |
policySets[].name | Required | The name of the policy set to create. |
policySets[].description | Optional | The description of the policy set to create. |
policySets[].policies | Optional | The list of policies to be included in the policy set. |
2.3.7.4. Additional resources
- Read Generating a policy to install GitOps Operator.
- Refer to Policy set controller for more details.
- Read Applying Kustomize for more information.
- Read the Governance documentation for more topics.
- See an example of a kustomization.yaml file.
file. - Refer to the Kubernetes labels and selectors documentation.
- Refer to Gatekeeper for more details.
- Refer to the Kustomize documentation.
2.3.8. Generating a policy that installs the Compliance Operator
Generate a policy that installs the Compliance Operator onto your clusters. For an operator that uses the namespaced installation mode, such as the Compliance Operator, an OperatorGroup manifest is also required.
Complete the following steps:
Create a YAML file with a Namespace, a Subscription, and an OperatorGroup manifest called compliance-operator.yaml. The following example installs these manifests in the openshift-compliance namespace:

apiVersion: v1
kind: Namespace
metadata:
  name: openshift-compliance
---
apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: compliance-operator
  namespace: openshift-compliance
spec:
  targetNamespaces:
  - openshift-compliance
---
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: compliance-operator
  namespace: openshift-compliance
spec:
  channel: release-0.1
  name: compliance-operator
  source: redhat-operators
  sourceNamespace: openshift-marketplace
Create a PolicyGenerator configuration file. View the following PolicyGenerator policy example that installs the Compliance Operator on all OpenShift Container Platform managed clusters:

apiVersion: policy.open-cluster-management.io/v1
kind: PolicyGenerator
metadata:
  name: install-compliance-operator
policyDefaults:
  namespace: policies
  placement:
    labelSelector:
      matchExpressions:
      - key: vendor
        operator: In
        values:
        - "OpenShift"
policies:
- name: install-compliance-operator
  manifests:
  - path: compliance-operator.yaml
Add the policy generator to your kustomization.yaml file. The generators section might resemble the following configuration:

generators:
- policy-generator-config.yaml
As a result, the generated policy resembles the following file:
apiVersion: cluster.open-cluster-management.io/v1beta1
kind: Placement
metadata:
  name: placement-install-compliance-operator
  namespace: policies
spec:
  predicates:
  - requiredClusterSelector:
      labelSelector:
        matchExpressions:
        - key: vendor
          operator: In
          values:
          - OpenShift
---
apiVersion: policy.open-cluster-management.io/v1
kind: PlacementBinding
metadata:
  name: binding-install-compliance-operator
  namespace: policies
placementRef:
  apiGroup: cluster.open-cluster-management.io
  kind: Placement
  name: placement-install-compliance-operator
subjects:
- apiGroup: policy.open-cluster-management.io
  kind: Policy
  name: install-compliance-operator
---
apiVersion: policy.open-cluster-management.io/v1
kind: Policy
metadata:
  annotations:
    policy.open-cluster-management.io/categories: CM Configuration Management
    policy.open-cluster-management.io/controls: CM-2 Baseline Configuration
    policy.open-cluster-management.io/standards: NIST SP 800-53
    policy.open-cluster-management.io/description:
  name: install-compliance-operator
  namespace: policies
spec:
  disabled: false
  policy-templates:
  - objectDefinition:
      apiVersion: policy.open-cluster-management.io/v1
      kind: ConfigurationPolicy
      metadata:
        name: install-compliance-operator
      spec:
        object-templates:
        - complianceType: musthave
          objectDefinition:
            apiVersion: v1
            kind: Namespace
            metadata:
              name: openshift-compliance
        - complianceType: musthave
          objectDefinition:
            apiVersion: operators.coreos.com/v1alpha1
            kind: Subscription
            metadata:
              name: compliance-operator
              namespace: openshift-compliance
            spec:
              channel: release-0.1
              name: compliance-operator
              source: redhat-operators
              sourceNamespace: openshift-marketplace
        - complianceType: musthave
          objectDefinition:
            apiVersion: operators.coreos.com/v1
            kind: OperatorGroup
            metadata:
              name: compliance-operator
              namespace: openshift-compliance
            spec:
              targetNamespaces:
              - compliance-operator
        remediationAction: enforce
        severity: low
2.3.9. Governance policy framework architecture
Enhance the security for your cluster with the Red Hat Advanced Cluster Management for Kubernetes governance lifecycle. The product governance lifecycle is based on using supported policies, processes, and procedures to manage security and compliance from a central interface page. View the following diagram of the governance architecture:
View the following component descriptions for the governance architecture diagram:
Governance policy framework: The previous image represents the framework that runs as the governance-policy-framework pod on managed clusters and contains the following controllers:

- Spec sync controller: Synchronizes the replicated policy in the managed cluster namespace on the hub cluster to the managed cluster namespace on the managed cluster.
- Status sync controller: Records compliance events from policy controllers in the replicated policies on the hub and managed cluster. The status only contains updates that are relevant to the current policy and does not consider past statuses if the policy is deleted and recreated.
- Template sync controller: Creates, updates, and deletes objects in the managed cluster namespace on the managed cluster based on the definitions from the replicated policy spec.policy-templates entries.
entries. - Gatekeeper sync controller: Records Gatekeeper constraint audit results as compliance events in corresponding Red Hat Advanced Cluster Management policies.
- Policy propagator controller: Runs on the Red Hat Advanced Cluster Management hub cluster and generates the replicated policies in the managed cluster namespaces on the hub based on the placements bound to the root policy. It also aggregates compliance status from replicated policies to the root policy status and initiates automations based on policy automations bound to the root policy.
- Governance policy add-on controller: Runs on the Red Hat Advanced Cluster Management hub cluster and manages the installation of policy controllers on managed clusters.
2.3.9.1. Governance architecture components
The governance architecture also includes the following components:
Governance dashboard: Provides a summary of your cloud governance and risk details, which include policy and cluster violations. Refer to the Manage Governance dashboard section to learn about the structure of an Red Hat Advanced Cluster Management for Kubernetes policy framework, and how to use the Red Hat Advanced Cluster Management for Kubernetes Governance dashboard.
Notes:
- When a policy is propagated to a managed cluster, it is first replicated to the cluster namespace on the hub cluster, and is named and labeled using namespaceName.policyName. When you create a policy, make sure that the length of the namespaceName.policyName does not exceed 63 characters due to the Kubernetes length limit for label values.
- When you search for a policy in the hub cluster, you might also receive the name of the replicated policy in the managed cluster namespace. For example, if you search for policy-dhaz-cert in the default namespace, the following policy name from the hub cluster might also appear in the managed cluster namespace: default.policy-dhaz-cert.
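As a quick check before you create a policy, you can compute the replicated name length from your shell. This is a minimal sketch; replace the namespace and policy name with your own values:

# Counts the characters in <namespace>.<policy name>; the result must not exceed 63
echo -n "default.policy-dhaz-cert" | wc -c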
- Policy-based governance framework: Supports policy creation and deployment to various managed clusters based on attributes associated with clusters, such as a geographical region. There are examples of the predefined policies and instructions on deploying policies to your cluster in the open source community. Additionally, when policies are violated, automations can be configured to run and take any action that the user chooses.
- Open source community: Supports community contributions with a foundation of the Red Hat Advanced Cluster Management policy framework. Policy controllers and third-party policies are also a part of the open-cluster-management/policy-collection repository. You can contribute and deploy policies using GitOps.
2.3.9.2. Additional resources
2.3.10. Governance dashboard
Manage your security policies and policy violations by using the Governance dashboard to create, view, and edit your resources. You can create YAML files for your policies from the command line and console. Continue reading for details about the Governance dashboard from the console.
2.3.10.1. Governance page
The following tabs are displayed on the Governance page: Overview, Policy sets, and Policies. Read the following descriptions to know which information is displayed:
Overview
The following summary cards are displayed from the Overview tab: Policy set violations, Policy violations, Clusters, Categories, Controls, and Standards.
Policy sets
Create and manage hub cluster policy sets.
Policies
- Create and manage security policies. The table of policies lists the following details of a policy: Name, Namespace, and Cluster violations.
- You can edit, enable or disable, set remediation to inform or enforce, or remove a policy by selecting the Actions icon. You can view the categories and standards of a specific policy by selecting the drop-down arrow to expand the row.
- Reorder your table columns in the Manage column dialog box. Select the Manage column icon to display the dialog box. To reorder your columns, select the Reorder icon and move the column name. For columns that you want to appear in the table, click the checkbox for specific column names and select the Save button.
Complete bulk actions by selecting multiple policies and clicking the Actions button. You can also customize your policy table by clicking the Filter button.
When you select a policy in the table list, the following tabs of information are displayed from the console:
- Details: Select the Details tab to view policy details and placement details. In the Placement table, the Compliance column provides links to view the compliance of the clusters that are displayed.
- Results: Select the Results tab to view a table list of all clusters that are associated to the policy.
- From the Message column, click the View details link to view the template details, template YAML, and related resources. You can also view related resources. Click the View history link to view the violation message and a time of the last report.
2.3.10.2. Governance automation configuration
If there is a configured automation for a specific policy, you can select the automation to view more details. View the following descriptions of the schedule frequency options for your automation:
- Manual run: Manually set this automation to run once. After the automation runs, it is set to disabled. Note: You can only select Manual run mode when the schedule frequency is disabled.
- Run once mode: When a policy is violated, the automation runs one time. After the automation runs, it is set to disabled. After the automation is set to disabled, you must continue to run the automation manually. When you run once mode, the extra variable of target_clusters is automatically supplied with the list of clusters that violated the policy. The Ansible Automation Platform job template must have PROMPT ON LAUNCH enabled for the EXTRA VARIABLES section (also known as extra_vars).
- Run everyEvent mode: When a policy is violated, the automation runs every time for each unique policy violation per managed cluster. Use the DelayAfterRunSeconds parameter to set the minimum number of seconds before an automation can be restarted on the same cluster. If the policy is violated multiple times during the delay period and remains in the violated state, the automation runs one time after the delay period. The default is 0 seconds and is only applicable for the everyEvent mode. When you run everyEvent mode, the extra variable of target_clusters and the Ansible Automation Platform job template are the same as once mode.
- Disable automation: When the scheduled automation is set to disabled, the automation does not run until the setting is updated.
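These console options map to the mode and delayAfterRunSeconds fields of the PolicyAutomation resource that is shown later in this chapter. The following fragment is a minimal sketch of the everyEvent settings described previously; the job template name, secret name, and policy reference are placeholders:

apiVersion: policy.open-cluster-management.io/v1beta1
kind: PolicyAutomation
metadata:
  name: policy-role-automation       # placeholder name
spec:
  automationDef:
    name: Policy Compliance Template # placeholder job template name
    secret: ansible-tower            # placeholder credential secret
    type: AnsibleJob
  mode: everyEvent
  delayAfterRunSeconds: 600          # wait at least 10 minutes before rerunning on the same cluster
  policyRef: policy-role             # placeholder policy name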
The following variables are automatically provided in the extra_vars of the Ansible Automation Platform job:

- policy_name: The name of the non-compliant root policy that initiates the Ansible Automation Platform job on the hub cluster.
- policy_namespace: The namespace of the root policy.
- hub_cluster: The name of the hub cluster, which is determined by the value in the clusters DNS object.
- policy_sets: This parameter contains all associated policy set names of the root policy. If the policy is not within a policy set, the policy_set parameter is empty.
- policy_violations: This parameter contains a list of non-compliant cluster names, and the value is the policy status field for each non-compliant cluster.
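A job template playbook can reference these variables directly. The following playbook is a minimal sketch that only prints the supplied values; the playbook and task names are illustrative:

- name: Report RHACM policy violation details   # illustrative playbook
  hosts: localhost
  gather_facts: false
  tasks:
    - name: Print the non-compliant policy and clusters
      ansible.builtin.debug:
        msg: >-
          Policy {{ policy_namespace }}/{{ policy_name }} on hub {{ hub_cluster }}
          has violations: {{ policy_violations | default({}) }}
          (policy sets: {{ policy_sets | default([]) }})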
2.3.10.3. Additional resources
Review the following topics to learn more about creating and updating your security policies:
2.3.11. Creating configuration policies
You can create a YAML file for your configuration policy from the command line interface (CLI) or from the console. As you create a configuration policy from the console, a YAML file is displayed in the YAML editor.
2.3.11.1. Prerequisites
- Required access: Administrator or cluster administrator
- If you have an existing Kubernetes manifest, consider using the Policy Generator to automatically include the manifests in a policy. See the Policy Generator documentation.
2.3.11.2. Creating a configuration policy from the CLI
Complete the following steps to create a configuration policy from the CLI:
Create a YAML file for your configuration policy. Run the following command:
oc create -f configpolicy-1.yaml
Your configuration policy might resemble the following policy:
apiVersion: policy.open-cluster-management.io/v1
kind: Policy
metadata:
  name: policy-1
  namespace: my-policies
policy-templates:
- apiVersion: policy.open-cluster-management.io/v1
  kind: ConfigurationPolicy
  metadata:
    name: mustonlyhave-configuration
  spec:
    namespaceSelector:
      include: ["default"]
      exclude: ["kube-system"]
    remediationAction: inform
    disabled: false
    complianceType: mustonlyhave
    object-templates:
Apply the policy by running the following command:
oc apply -f <policy-file-name> --namespace=<namespace>
Verify and list the policies by running the following command:
oc get policies.policy.open-cluster-management.io --namespace=<namespace>
Your configuration policy is created.
2.3.11.2.1. Viewing your configuration policy from the CLI
Complete the following steps to view your configuration policy from the CLI:
View details for a specific configuration policy by running the following command:
oc get policies.policy.open-cluster-management.io <policy-name> -n <namespace> -o yaml
View a description of your configuration policy by running the following command:
oc describe policies.policy.open-cluster-management.io <name> -n <namespace>
2.3.11.2.2. Viewing your configuration policy from the console
View any configuration policy and its status from the console.
After you log in to your cluster from the console, select Governance to view a table list of your policies. Note: You can filter the table list of your policies by selecting the All policies tab or Cluster violations tab.
Select one of your policies to view more details. The Details, Clusters, and Templates tabs are displayed.
2.3.11.3. Disabling configuration policies
To disable your configuration policy, select Disable policy from the Actions menu of the policy. The policy is disabled, but not deleted.
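The console action sets the spec.disabled field on the Policy resource, so one way to disable a policy from the CLI is a patch like the following sketch. Replace the policy name and namespace with your own values:

# Disable the policy without deleting it by setting spec.disabled to true
oc patch policies.policy.open-cluster-management.io <policy-name> -n <namespace> \
  --type=merge -p '{"spec":{"disabled":true}}'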
2.3.11.4. Deleting a configuration policy
Delete a configuration policy from the CLI or the console. Complete the following steps:
To delete the policy from your target cluster or clusters, run the following command:
oc delete policies.policy.open-cluster-management.io <policy-name> -n <namespace>
Verify that your policy is removed by running the following command:
oc get policies.policy.open-cluster-management.io <policy-name> -n <namespace>
- To delete a configuration policy from the console, click the Actions icon for the policy you want to delete in the policy violation table and then click Delete.
Your policy is deleted.
2.3.11.5. Additional resources
- See configuration policy samples that are supported by Red Hat Advanced Cluster Management from the CM-Configuration-Management folder.
- Alternatively, you can refer to the Table of sample configuration policies to view other configuration policies that are monitored by the controller.
2.3.12. Configuring Ansible Automation Platform for governance
Red Hat Advanced Cluster Management for Kubernetes governance can be integrated with Red Hat Ansible Automation Platform to create policy violation automations. You can configure the automation from the Red Hat Advanced Cluster Management console.
2.3.12.1. Prerequisites
- A supported OpenShift Container Platform version
- You must have Ansible Automation Platform version 3.7.3 or a later version installed. It is best practice to install the latest supported version of Ansible Automation Platform. See Red Hat Ansible Automation Platform documentation for more details.
Install the Ansible Automation Platform Resource Operator from the Operator Lifecycle Manager. In the Update Channel section, select stable-2.x-cluster-scoped. Select the All namespaces on the cluster (default) installation mode.

Note: Ensure that the Ansible Automation Platform job template is idempotent when you run it. If you do not have the Ansible Automation Platform Resource Operator, you can find it on the Red Hat OpenShift Container Platform OperatorHub page.
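If you prefer to install the operator declaratively instead of from the console, a Subscription sketch might resemble the following example. The ansible-automation-platform-operator package name is an assumption; confirm the package name from the OperatorHub catalog before applying:

apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: ansible-automation-platform-operator   # assumed package name
  namespace: openshift-operators               # matches the "All namespaces" install mode
spec:
  channel: stable-2.x-cluster-scoped
  name: ansible-automation-platform-operator   # assumed package name
  source: redhat-operators
  sourceNamespace: openshift-marketplace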
For more information about installing and configuring Red Hat Ansible Automation Platform, see Setting up Ansible tasks.
2.3.12.2. Creating a policy violation automation from the console
After you log in to your Red Hat Advanced Cluster Management hub cluster, select Governance from the navigation menu, and then click on the Policies tab to view the policy tables.
Configure an automation for a specific policy by clicking Configure in the Automation column. You can create automation when the policy automation panel appears. From the Ansible credential section, click the drop-down menu to select an Ansible credential. If you need to add a credential, see Managing credentials overview.
Note: This credential is copied to the same namespace as the policy. The credential is used by the AnsibleJob resource that is created to initiate the automation. Changes to the Ansible credential in the Credentials section of the console are automatically updated.
After a credential is selected, click the Ansible job drop-down list to select a job template. In the Extra variables section, add the parameter values from the extra_vars section of the PolicyAutomation. Select the frequency of the automation. You can select Run once mode, Run everyEvent mode, or Disable automation.
Save your policy violation automation by selecting Submit. When you select the View Job link from the Ansible job details side panel, the link directs you to the job template on the Search page. After you successfully create the automation, it is displayed in the Automation column.
Note: When you delete a policy that has an associated policy automation, the policy automation is automatically deleted as part of clean up.
Your policy violation automation is created from the console.
2.3.12.3. Creating a policy violation automation from the CLI
Complete the following steps to configure a policy violation automation from the CLI:
- From your terminal, log in to your Red Hat Advanced Cluster Management hub cluster using the oc login command.
- Find or create a policy that you want to add an automation to. Note the policy name and namespace.

Create a PolicyAutomation resource using the following sample as a guide:

apiVersion: policy.open-cluster-management.io/v1beta1
kind: PolicyAutomation
metadata:
  name: policyname-policy-automation
spec:
  automationDef:
    extra_vars:
      your_var: your_value
    name: Policy Compliance Template
    secret: ansible-tower
    type: AnsibleJob
  mode: disabled
  policyRef: policyname
- The Automation template name in the previous sample is Policy Compliance Template. Change that value to match your job template name.
- In the extra_vars section, add any parameters you need to pass to the Automation template.
- Set the mode to either once, everyEvent, or disabled.
- Set the policyRef to the name of your policy.
- Create a secret in the same namespace as this PolicyAutomation resource that contains the Ansible Automation Platform credential. In the previous example, the secret name is ansible-tower. Use the sample from application lifecycle to see how to create the secret.
- Create the PolicyAutomation resource.

Notes:

An immediate run of the policy automation can be initiated by adding the following annotation to the PolicyAutomation resource:

metadata:
  annotations:
    policy.open-cluster-management.io/rerun: "true"
- When the policy is in once mode, the automation runs when the policy is non-compliant. The extra_vars variable named target_clusters is added, and the value is an array of each managed cluster name where the policy is non-compliant.
- When the policy is in everyEvent mode and the elapsed time since the last run exceeds the DelayAfterRunSeconds value, the automation runs for every policy violation while the policy is non-compliant.
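As an alternative to editing the YAML, the rerun annotation can be applied with a single command. This is a sketch; replace the namespace and resource name with your own values:

oc -n <namespace> annotate policyautomation <policyautomation-name> \
  policy.open-cluster-management.io/rerun=true --overwrite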
2.4. Policy deployment with external tools
To deploy CertificatePolicy, ConfigurationPolicy, and OperatorPolicy resources, and Gatekeeper constraints, directly to your managed cluster, you can use an external tool such as Red Hat OpenShift GitOps.
2.4.1. Deployment workflow
Your CertificatePolicy, ConfigurationPolicy, and OperatorPolicy policies must be in the open-cluster-management-policies namespace or the managed cluster namespace. Policies in other namespaces are ignored and do not receive a status. If you are using a Red Hat OpenShift GitOps version with an Argo CD version earlier than 2.13, you must configure the Red Hat Advanced Cluster Management for Kubernetes policy health checks.

The OpenShift GitOps service account must have permission to manage Red Hat Advanced Cluster Management for Kubernetes policies. Deploy policies with OpenShift GitOps and view them from the Discovered policies table of the Governance dashboard. The policies are grouped by the policy name and kind fields.
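One way to grant that permission is to bind a cluster role to the Argo CD application controller. The following sketch assumes the default openshift-gitops instance and its openshift-gitops-argocd-application-controller service account; the role name is illustrative, and you should adjust the subjects to match your GitOps deployment:

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: gitops-policy-admin   # illustrative name
rules:
- apiGroups:
  - policy.open-cluster-management.io
  resources:
  - certificatepolicies
  - configurationpolicies
  - operatorpolicies
  verbs: ["create", "get", "list", "watch", "update", "patch", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: gitops-policy-admin   # illustrative name
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: gitops-policy-admin
subjects:
- kind: ServiceAccount
  name: openshift-gitops-argocd-application-controller  # default OpenShift GitOps instance; adjust if different
  namespace: openshift-gitops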