
Chapter 2. Policy deployment


The Kubernetes API is used to define and interact with policies. You are responsible for ensuring that the environment meets internal enterprise security standards for software engineering, secure engineering, resiliency, security, and regulatory compliance for workloads hosted on Kubernetes clusters.

2.1. Deployment options

The CertificatePolicy, ConfigurationPolicy, and OperatorPolicy policies, as well as Gatekeeper constraints, can be deployed to your managed clusters in two ways: by using the hub cluster policy framework, or by deploying policies with external tools. View the following descriptions of each option:

Hub cluster policy framework
The first approach is to leverage the policy framework on the hub cluster. This involves defining the policies and Gatekeeper constraints within a Policy object on the hub cluster and leveraging a Placement and PlacementBinding to select which clusters the policies get deployed to. The policy framework handles the delivery to the managed cluster and the status aggregation back to the hub cluster. The Policy objects are represented in the Policies table in the Red Hat Advanced Cluster Management for Kubernetes console.
Deploying policies with external tools
Alternatively, you can use an external tool, such as Red Hat OpenShift GitOps, to deliver the policies and Gatekeeper constraints directly to the managed cluster. Your policies are shown in the Discovered policies table in the Red Hat Advanced Cluster Management console and can be a helpful option if a configuration management strategy is already in place, but there is a desire to supplement the strategy with Red Hat Advanced Cluster Management policies.
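
For example, a minimal sketch of an Argo CD Application resource that delivers policy manifests from a Git repository directly to the cluster that the GitOps instance manages might resemble the following YAML. The repository URL, path, and resource names are illustrative assumptions:

apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: discovered-policies
  namespace: openshift-gitops
spec:
  project: default
  source:
    repoURL: https://github.com/example/policy-repo.git # illustrative repository that contains policy manifests
    targetRevision: main
    path: policies
  destination:
    server: https://kubernetes.default.svc # the cluster where this GitOps instance runs
  syncPolicy:
    automated: {}

Policies that are applied this way appear in the Discovered policies table.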

2.1.1. Policy deployment comparison table

See the following comparison table to learn which option supports specific features, and to decide which deployment strategy is more suitable for your use case:

Feature | Policy framework | Deployed with external tools
--- | --- | ---
Hub cluster templating | Yes | No
Managed cluster templating | Yes | Yes
Red Hat Advanced Cluster Management console support | Yes | Yes, you can view policies that are deployed by external tools in the Discovered policies table. Policies that are deployed with external tools are not displayed in the Governance Overview dashboard. You must enable Search on your managed cluster.
Policy grouping | Yes, you can have a combination of statuses and deployments through Policy and PolicySet resources. | You cannot use policy grouping directly on your policies when they are deployed from external tools, but Argo CD Application objects for each grouping give a high-level status.
Compliance history | You can view the last 10 events per cluster per policy stored on the hub cluster. | No, but you can scrape the compliance history from the controller logs on each managed cluster.
Policy dependencies | Yes | No. Alternatively, you can use the Argo CD sync waves feature to define dependencies.
Policy Generator | Yes | No
OpenShift GitOps health checks | Yes; you must complete extra configuration for Argo CD versions earlier than 2.13. | Yes
Policy compliance history API (Technology Preview) | Yes | No
OpenShift GitOps applying native Kubernetes manifests and Red Hat Advanced Cluster Management policy on the managed cluster | No, you must deploy a policy on your Red Hat Advanced Cluster Management hub cluster. | Yes
Policy compliance metric on the hub cluster for alerts | Yes | No
Running Ansible jobs on policy noncompliance | Yes, use the PolicyAutomation resource. | No

2.2. Additional resources

2.3. Hub cluster policy framework

To create and manage policies, gain visibility, and remediate configurations to meet standards, use the Red Hat Advanced Cluster Management for Kubernetes governance security policy framework. Red Hat Advanced Cluster Management governance provides an extensible policy framework for enterprises to introduce their own security policies.

You can create a Policy resource in any namespace on the hub cluster except the managed cluster namespaces. If you create a policy in a managed cluster namespace, it is deleted by Red Hat Advanced Cluster Management. Each Red Hat Advanced Cluster Management policy can be organized into one or more policy template definitions. For more details about the policy elements, view the Policy YAML table section on this page.

2.3.1. Requirements

  • Each policy requires a Placement resource that defines the clusters that the policy document is applied to, and a PlacementBinding resource that binds the placement to the Red Hat Advanced Cluster Management for Kubernetes policy.

    Policy resources are applied to clusters based on an associated Placement definition, where you can view a list of managed clusters that match certain criteria. A sample Placement resource that matches all clusters that have the environment=dev label resembles the following YAML:

    apiVersion: cluster.open-cluster-management.io/v1beta1
    kind: Placement
    metadata:
      name: placement-policy-role
    spec:
      predicates:
      - requiredClusterSelector:
          labelSelector:
            matchExpressions:
            - {key: environment, operator: In, values: ["dev"]}

    To learn more about placement and the supported criteria, see Placement overview in the Cluster lifecycle documentation.

  • In addition to the Placement resource, you must create a PlacementBinding resource to bind your Placement resource to a policy. A sample PlacementBinding resource that binds the placement-policy-role Placement to the policy-role policy resembles the following YAML:

    apiVersion: policy.open-cluster-management.io/v1
    kind: PlacementBinding
    metadata:
      name: binding-policy-role
    placementRef:
      name: placement-policy-role 1
      kind: Placement
      apiGroup: cluster.open-cluster-management.io
    subjects: 2
    - name: policy-role
      kind: Policy
      apiGroup: policy.open-cluster-management.io
    1
    If you use the previous sample, make sure you update the name of the Placement resource in the placementRef section to match your placement name.
    2
    You must update the name of the policy in the subjects section to match your policy name. Use the oc apply -f resource.yaml -n namespace command to apply both the Placement and the PlacementBinding resources. Make sure the policy, placement, and placement binding are all created in the same namespace.
  • To use a Placement resource, you must bind a ManagedClusterSet resource to the namespace of the Placement resource with a ManagedClusterSetBinding resource. Refer to Creating a ManagedClusterSetBinding resource for additional details, and see the sample binding after this list.
  • When you create a Placement resource for your policy from the console, the status of the placement toleration is automatically added to the Placement resource. See Adding a toleration to a placement for more details.
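
A minimal ManagedClusterSetBinding sketch, assuming the default global cluster set and a namespace named policies that contains your Placement resources, might resemble the following YAML. The metadata.name value must match the spec.clusterSet value:

apiVersion: cluster.open-cluster-management.io/v1beta2
kind: ManagedClusterSetBinding
metadata:
  name: global
  namespace: policies
spec:
  clusterSet: global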

Best practice: Use the command line interface (CLI) to make updates to the policies when you use the Placement resource.

2.3.2. Hub cluster policy components

Learn more details about the policy components in the following sections:

2.3.2.1. Policy YAML structure

When you create a policy, you must include required parameter fields and values. Depending on your policy controller, you might need to include other optional fields and values. View the following YAML structure for policies:

apiVersion: policy.open-cluster-management.io/v1
kind: Policy
metadata:
  name:
  annotations:
    policy.open-cluster-management.io/standards:
    policy.open-cluster-management.io/categories:
    policy.open-cluster-management.io/controls:
    policy.open-cluster-management.io/description:
spec:
  disabled:
  remediationAction:
  dependencies:
  - apiVersion: policy.open-cluster-management.io/v1
    compliance:
    kind: Policy
    name:
    namespace:
  policy-templates:
    - objectDefinition:
        apiVersion:
        kind:
        metadata:
          name:
        spec:
---
apiVersion: policy.open-cluster-management.io/v1
kind: PlacementBinding
metadata:
bindingOverrides:
  remediationAction:
subFilter:
  name:
placementRef:
  name:
  kind: Placement
  apiGroup: cluster.open-cluster-management.io
subjects:
- name:
  kind:
  apiGroup:
---
apiVersion: cluster.open-cluster-management.io/v1beta1
kind: Placement
metadata:
  name:
spec:

2.3.2.2. Policy YAML table

View the following table for policy parameter descriptions:

Table 2.1. Parameter table
Field | Optional or required | Description

apiVersion

Required

Set the value to policy.open-cluster-management.io/v1.

kind

Required

Set the value to Policy to indicate the type of policy.

metadata.name

Required

The name for identifying the policy resource.

metadata.annotations

Optional

Used to specify a set of security details that describes the set of standards the policy is trying to validate. All annotations documented here are represented as a string that contains a comma-separated list.

Note: You can view policy violations based on the standards and categories that you define for your policy on the Policies page, from the console.

bindingOverrides.remediationAction

Optional

When this parameter is set to enforce, it provides a way for you to override the remediation action of the related PlacementBinding resources for configuration policies. The default value is null.

subFilter

Optional

Set this parameter to restricted to select a subset of bound policies. The default value is null.

annotations.policy.open-cluster-management.io/standards

Optional

The name or names of security standards the policy is related to. For example, National Institute of Standards and Technology (NIST) and Payment Card Industry (PCI).

annotations.policy.open-cluster-management.io/categories

Optional

A security control category represents specific requirements for one or more standards. For example, a System and Information Integrity category might indicate that your policy contains a data transfer protocol to protect personal information, as required by the HIPAA and PCI standards.

annotations.policy.open-cluster-management.io/controls

Optional

The name of the security control that is being checked. For example, Access Control or System and Information Integrity.

spec.disabled

Required

Set the value to true or false. The disabled parameter provides the ability to enable and disable your policies.

spec.remediationAction

Optional

Specifies the remediation action of your policy. The parameter values are enforce and inform. If specified, the spec.remediationAction value that is defined overrides any remediationAction parameter defined in the child policies in the policy-templates section. For example, if the spec.remediationAction value is set to enforce, then the remediationAction in the policy-templates section is set to enforce during runtime.

spec.copyPolicyMetadata

Optional

Specifies whether the labels and annotations of a policy are copied when the policy is replicated to a managed cluster. If you set this to true, all of the labels and annotations of the policy are copied to the replicated policy. If you set this to false, only the policy framework-specific labels and annotations are copied to the replicated policy.

spec.dependencies

Optional

Used to create a list of dependency objects detailed with extra considerations for compliance.

spec.policy-templates

Required

Used to create one or more policies to apply to a managed cluster.

spec.policy-templates.extraDependencies

Optional

For policy templates, this is used to create a list of dependency objects detailed with extra considerations for compliance.

spec.policy-templates.ignorePending

Optional

Used to mark a policy template as compliant until the dependency criteria are verified.

Important: Some policy kinds might not support the enforce feature.

2.3.2.3. Policy sample file

View the following YAML file, which is a configuration policy for roles:

apiVersion: policy.open-cluster-management.io/v1
kind: Policy
metadata:
  name: policy-role
  annotations:
    policy.open-cluster-management.io/standards: NIST SP 800-53
    policy.open-cluster-management.io/categories: AC Access Control
    policy.open-cluster-management.io/controls: AC-3 Access Enforcement
    policy.open-cluster-management.io/description:
spec:
  remediationAction: inform
  disabled: false
  policy-templates:
    - objectDefinition:
        apiVersion: policy.open-cluster-management.io/v1
        kind: ConfigurationPolicy
        metadata:
          name: policy-role-example
        spec:
          remediationAction: inform # the policy-template spec.remediationAction is overridden by the preceding parameter value for spec.remediationAction.
          severity: high
          namespaceSelector:
            include: ["default"]
          object-templates:
            - complianceType: mustonlyhave # the role definition must be an exact match
              objectDefinition:
                apiVersion: rbac.authorization.k8s.io/v1
                kind: Role
                metadata:
                  name: sample-role
                rules:
                  - apiGroups: ["extensions", "apps"]
                    resources: ["deployments"]
                    verbs: ["get", "list", "watch", "delete","patch"]
---
apiVersion: policy.open-cluster-management.io/v1
kind: PlacementBinding
metadata:
  name: binding-policy-role
placementRef:
  name: placement-policy-role
  kind: Placement
  apiGroup: cluster.open-cluster-management.io
subjects:
- name: policy-role
  kind: Policy
  apiGroup: policy.open-cluster-management.io
---
apiVersion: cluster.open-cluster-management.io/v1beta1
kind: Placement
metadata:
  name: placement-policy-role
spec:
  predicates:
  - requiredClusterSelector:
      labelSelector:
        matchExpressions:
        - {key: environment, operator: In, values: ["dev"]}

2.3.3. Additional resources

Continue reading the related topics of the Red Hat Advanced Cluster Management governance framework:

2.3.4. Policy dependencies

Dependencies can be used to activate a policy only when other policies on your cluster are in a certain state. When the dependency criteria are not met, the policy is labeled as Pending and resources are not created on your managed cluster. You can find more details about the criteria status in the policy status.

You can use policy dependencies to control the ordering of how objects are applied. For example, if you have a policy for an operator and another policy for a resource that the operator manages, you can set a dependency on the second policy so that it does not attempt to create the resource until the operator is installed. This can help with the performance on the managed cluster.

Required access: Policy administrator

View the following policy dependency example, where the ScanSettingBinding is only created if the upstream-compliance-operator policy is already compliant on the managed cluster:

apiVersion: policy.open-cluster-management.io/v1
kind: Policy
metadata:
  annotations:
    policy.open-cluster-management.io/categories: CM Configuration Management
    policy.open-cluster-management.io/controls: CM-2 Baseline Configuration
    policy.open-cluster-management.io/standards: NIST SP 800-53
    policy.open-cluster-management.io/description:
  name: moderate-compliance-scan
  namespace: default
spec:
  dependencies: 1
  - apiVersion: policy.open-cluster-management.io/v1
    compliance: Compliant
    kind: Policy
    name: upstream-compliance-operator
    namespace: default
  disabled: false
  policy-templates:
  - extraDependencies: 2
    - apiVersion: policy.open-cluster-management.io/v1
      kind: ConfigurationPolicy
      name: scan-setting-prerequisite
      compliance: Compliant
    ignorePending: false 3
    objectDefinition:
      apiVersion: policy.open-cluster-management.io/v1
      kind: ConfigurationPolicy
      metadata:
        name: moderate-compliance-scan
      spec:
        object-templates:
        - complianceType: musthave
          objectDefinition:
            apiVersion: compliance.openshift.io/v1alpha1
            kind: ScanSettingBinding
            metadata:
              name: moderate
              namespace: openshift-compliance
            profiles:
            - apiGroup: compliance.openshift.io/v1alpha1
              kind: Profile
              name: ocp4-moderate
            - apiGroup: compliance.openshift.io/v1alpha1
              kind: Profile
              name: ocp4-moderate-node
            settingsRef:
              apiGroup: compliance.openshift.io/v1alpha1
              kind: ScanSetting
              name: default
        remediationAction: enforce
        severity: low
1
The dependencies field is set on a Policy object, and the requirements apply to all policy templates in the policy.
2
The extraDependencies field can be set on an individual policy template. For example, the parameter can be set for a configuration policy, and it defines criteria that must be satisfied in addition to any dependencies set in the policy.
3
The ignorePending field can be set on each individual policy template, and configures whether the Pending status on that template is considered as Compliant or NonCompliant when the overall policy compliance is calculated. By default, this is set to false, and a Pending template causes the policy to be NonCompliant. When you set this to true, the policy can still be Compliant when this template is Pending, which is useful when that is the expected status of the template.

Note: You cannot use a dependency to apply a policy on one cluster based on the status of a policy in another cluster.

2.3.5. Configuring policy compliance history API (Technology Preview) (Deprecated)

The policy compliance history API is an optional technical preview feature if you want long-term storage of Red Hat Advanced Cluster Management for Kubernetes policy compliance events in a queryable format. You can use the API to get additional details such as the spec field to audit and troubleshoot your policy, and get compliance events when a policy is disabled or removed from a cluster.

The policy compliance history API can also generate a comma-separated values (CSV) spreadsheet of policy compliance events to help you with auditing and troubleshooting.

2.3.5.1. Prerequisites

  • The policy compliance history API requires a PostgreSQL server on version 13 or newer.

    Some Red Hat supported options include using the registry.redhat.io/rhel9/postgresql-15 container image, the registry.redhat.io/rhel8/postgresql-13 container image, the postgresql-server RPM, or the postgresql/server module. Review the applicable official Red Hat documentation on setup and configuration for the path you choose. The policy compliance history API is compatible with any standard PostgreSQL server and is not limited to the official Red Hat supported offerings. If you want to deploy PostgreSQL on the hub cluster yourself, see the sketch after this list.

  • This PostgreSQL server must be reachable from the Red Hat Advanced Cluster Management hub cluster. If the PostgreSQL server is running externally of the hub cluster, ensure the routing and firewall configuration allows the hub cluster to connect to port 5432 of the PostgreSQL server. This port might be a different value if it is overridden in the PostgreSQL configuration.
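
If you choose to deploy PostgreSQL on the Red Hat Advanced Cluster Management hub cluster yourself, a minimal sketch that uses one of the supported container images might resemble the following commands. The project name is an illustrative assumption, and you must replace the administrator password placeholder:

oc new-project postgres
oc new-app --name postgresql registry.redhat.io/rhel9/postgresql-15 \
    -e POSTGRESQL_ADMIN_PASSWORD='<replace with admin password>'

With this setup, connect as the postgres user with the administrator password in the configuration steps that follow.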

2.3.5.2. Enabling the compliance history API

Configure your managed clusters to record policy compliance events to the API. You can enable this on all clusters or a subset of clusters. Complete the following steps:

  1. Configure the PostgreSQL server as a cluster administrator. If you deployed PostgreSQL on your Red Hat Advanced Cluster Management hub cluster, temporarily port-forward the PostgreSQL port to use the psql command. Run the following command:

    oc -n <PostgreSQL namespace> port-forward <PostgreSQL pod name> 5432:5432
  2. In a different terminal, connect to the PostgreSQL server locally similar to the following command:

    psql 'postgres://postgres:@127.0.0.1:5432/postgres'
  3. Create a user and database for your Red Hat Advanced Cluster Management hub cluster with the following SQL statements:

    CREATE USER "rhacm-policy-compliance-history" WITH PASSWORD '<replace with password>';
    CREATE DATABASE "rhacm-policy-compliance-history" WITH OWNER="rhacm-policy-compliance-history";
  4. Create the governance-policy-database Secret resource to use this database for the policy compliance history API. Run the following command:

    oc -n open-cluster-management create secret generic governance-policy-database \ 1
        --from-literal="user=rhacm-policy-compliance-history" \
        --from-literal="password=<replace with password>" \
        --from-literal="host=<replace with host name of the PostgreSQL server>" \ 2
        --from-literal="dbname=rhacm-policy-compliance-history" \
        --from-literal="sslmode=verify-full" \
        --from-file="ca=<replace>" 3
    1
    Add the namespace where Red Hat Advanced Cluster Management is installed. By default, Red Hat Advanced Cluster Management is installed in the open-cluster-management namespace.
    2
    Add the host name of the PostgreSQL server. If you deployed the PostgreSQL server on the Red Hat Advanced Cluster Management hub cluster and it is not exposed outside of the cluster, you can use the Service object for the host value. The format is <service name>.<namespace>.svc. Note that this approach depends on the network policies of the Red Hat Advanced Cluster Management hub cluster.
    3
    In the ca data field, you must specify the Certificate Authority certificate that signed the TLS certificate of the PostgreSQL server. If you do not provide this value, you must change the sslmode value accordingly, though this is not recommended because it reduces the security of the database connection.
  5. Add the cluster.open-cluster-management.io/backup label to backup the Secret resource for a Red Hat Advanced Cluster Management hub cluster restore operation. Run the following command:

    oc -n open-cluster-management label secret governance-policy-database cluster.open-cluster-management.io/backup=""
  6. For more customization of the PostgreSQL connection, use the connectionURL data field directly and provide a value in the format of a PostgreSQL connection URI. Special characters in the password must be URL encoded. One option is to use Python to generate the URL encoded format of the password. For example, if the password is $uper<Secr&t%>, run the following Python command to get the output %24uper%3CSecr%26t%25%3E:

    python -c 'import urllib.parse; import sys; print(urllib.parse.quote(sys.argv[1]))' '$uper<Secr&t%>'
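
    A sketch of creating the governance-policy-database Secret with the connectionURL data field instead of the individual data fields might resemble the following command, where the user, URL-encoded password, host, and database name are illustrative values. If the Secret already exists, update it instead:

    oc -n open-cluster-management create secret generic governance-policy-database \
        --from-literal="connectionURL=postgres://rhacm-policy-compliance-history:%24uper%3CSecr%26t%25%3E@postgres.example.com:5432/rhacm-policy-compliance-history?sslmode=verify-full"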
  7. Run the command to test the policy compliance history API after you create the governance-policy-database Secret. An OpenShift Route object is automatically created in the same namespace. If routes on the Red Hat Advanced Cluster Management hub cluster do not utilize a trusted certificate, you can choose to provide the -k flag in the curl command to skip TLS verification, though this is not recommended:

    curl -H "Authorization: Bearer $(oc whoami --show-token)" \
        "https://$(oc -n open-cluster-management get route governance-history-api -o jsonpath='{.spec.host}')/api/v1/compliance-events"
    • If successful, the curl command returns a value similar to the following message:

      {"data":[],"metadata":{"page":1,"pages":0,"per_page":20,"total":0}}
    • If it is not successful, the curl command might return either of the following two messages:

      {"message":"The database is unavailable"}
      {"message":"Internal Error"}
    • If you receive either message, view the Kubernetes events in the open-cluster-management namespace with the following command:

      oc -n open-cluster-management get events --field-selector reason=OCMComplianceEventsDBError
      1. If you receive instructions from the event to view the governance-policy-propagator logs, run the following command:

        oc -n open-cluster-management logs -l name=governance-policy-propagator -f

      You might receive an error message that indicates the user, password, or database is incorrectly specified. See the following message example:

    2024-03-05T12:17:14.500-0500	info	compliance-events-api	complianceeventsapi/complianceeventsapi_controller.go:261	The database connection failed: pq: password authentication failed for user "rhacm-policy-compliance-history"
  8. Update the governance-policy-database Secret resource with the correct PostgreSQL connection settings with the following command:

    oc -n open-cluster-management edit secret governance-policy-database

2.3.5.3. Setting the compliance history API URL

Set the policy compliance history API URL to enable the feature on managed clusters. Complete the following steps:

  1. Retrieve the external URL of the policy compliance history API with the following command:

    echo "https://$(oc -n open-cluster-management get route governance-history-api -o=jsonpath='{.spec.host}')"

    The output might resemble the following information, with the domain name of your Red Hat Advanced Cluster Management hub cluster:

    https://governance-history-api-open-cluster-management.apps.openshift.redhat.com
  2. Create an AddOnDeploymentConfig object similar to the following example:

    apiVersion: addon.open-cluster-management.io/v1alpha1
    kind: AddOnDeploymentConfig
    metadata:
      name: governance-policy-framework
      namespace: open-cluster-management
    spec:
      customizedVariables:
        - name: complianceHistoryAPIURL
          value: <replace with URL from previous command>
    • Replace the value parameter with the compliance history API external URL.
2.3.5.3.1. Enabling on all managed clusters

Enable the compliance history API on all managed clusters to record compliance events from your managed clusters. Complete the following steps:

  1. Configure the governance-policy-framework ClusterManagementAddOn object to use the AddOnDeploymentConfig with the following command:

    oc edit ClusterManagementAddOn governance-policy-framework
  2. Add or update the spec.supportedConfigs array. Your resource might have the following configuration:

      - group: addon.open-cluster-management.io
        resource: addondeploymentconfigs
        defaultConfig:
          name: governance-policy-framework
          namespace: open-cluster-management
2.3.5.3.2. Enabling compliance history on a single managed cluster

Enable the compliance history API on a single managed cluster to record compliance events from the managed cluster. Complete the following steps:

  1. Configure the governance-policy-framework ManagedClusterAddOn resource in the managed cluster namespace. Run the following command from your Red Hat Advanced Cluster Management hub cluster:

    oc -n <managed-cluster-namespace> edit ManagedClusterAddOn governance-policy-framework
    • Replace the <managed-cluster-namespace> placeholder with the name of the managed cluster that you intend to enable.
  2. Add or update the spec.configs array to have an entry similar to the following example:

    - group: addon.open-cluster-management.io
      resource: addondeploymentconfigs
      name: governance-policy-framework
      namespace: open-cluster-management
  3. To verify the configuration, confirm that the deployment on your managed cluster is using the --compliance-api-url container argument. Run the following command:

    oc -n open-cluster-management-agent-addon get deployment governance-policy-framework -o jsonpath='{.spec.template.spec.containers[1].args}'

    The output might resemble the following:

    ["--enable-lease=true","--hub-cluster-configfile=/var/run/klusterlet/kubeconfig","--leader-elect=false","--log-encoder=console","--log-level=0","--v=-1","--evaluation-concurrency=2","--client-max-qps=30","--client-burst=45","--disable-spec-sync=true","--cluster-namespace=local-cluster","--compliance-api-url=https://governance-history-api-open-cluster-management.apps.openshift.redhat.com"]

    Any new policy compliance events are recorded in the policy compliance history API.

    1. If policy compliance events are not being recorded for a specific managed cluster, view the governance-policy-framework logs on the affected managed cluster:

      oc -n open-cluster-management-agent-addon logs deployment/governance-policy-framework -f
    2. Log messages similar to the following message are displayed. If the message value is empty, the policy compliance history API URL is incorrect or there is a network communication issue:

      2024-03-05T19:28:38.063Z        info    policy-status-sync      statussync/policy_status_sync.go:750    Failed to record the compliance event with the compliance API. Will requeue.       {"statusCode": 503, "message": ""}
    3. If the policy compliance history API URL is incorrect, edit the URL on the hub cluster with the following command:
    oc -n open-cluster-management edit AddOnDeploymentConfig governance-policy-framework

    Note: If you experience a network communication issue, you must diagnose the problem based on your network infrastructure.

2.3.5.4. Additional resource

2.3.6. Integrating Policy Generator

By integrating the Policy Generator, you can use it to automatically build Red Hat Advanced Cluster Management for Kubernetes policies. To integrate the Policy Generator, see Policy Generator.

For an example of what you can do with the Policy Generator, see Generating a policy that installs the Compliance Operator.

2.3.7. Policy Generator

The Policy Generator is a part of the Red Hat Advanced Cluster Management for Kubernetes application lifecycle subscription and Red Hat OpenShift GitOps workflows, and it generates Red Hat Advanced Cluster Management policies by using Kustomize. The Policy Generator builds Red Hat Advanced Cluster Management policies from Kubernetes manifest YAML files, which are provided through a PolicyGenerator manifest YAML file that is used to configure it. The Policy Generator is implemented as a Kustomize generator plug-in. For more information on Kustomize, read the Kustomize documentation.

View the following sections for more information about the Policy Generator:

2.3.7.1. Policy Generator capabilities

The integration of the Policy Generator with the Red Hat Advanced Cluster Management application lifecycle subscription OpenShift GitOps workflow simplifies the distribution of Kubernetes resource objects to managed OpenShift Container Platform clusters, and Kubernetes clusters through Red Hat Advanced Cluster Management policies.

Use the Policy Generator to complete the following actions:

  • Convert any Kubernetes manifest files to Red Hat Advanced Cluster Management configuration policies, including manifests that are created from a Kustomize directory.
  • Patch the input Kubernetes manifests before they are inserted into a generated Red Hat Advanced Cluster Management policy.
  • Generate additional configuration policies so you can report on Gatekeeper policy violations through Red Hat Advanced Cluster Management for Kubernetes.
  • Generate policy sets on the hub cluster.

2.3.7.2. Policy Generator configuration structure

The Policy Generator is a Kustomize generator plug-in that is configured with a manifest of the PolicyGenerator kind and policy.open-cluster-management.io/v1 API version. Continue reading to learn about the configuration structure:

  • To use the plug-in, add a generators section in a kustomization.yaml file. View the following example:

    generators:
      - policy-generator-config.yaml 1
    1
    The policy-generator-config.yaml file that is referenced in the previous example is a YAML file with the instructions of the policies to generate.
  • A simple PolicyGenerator configuration file might resemble the following example, with policyDefaults and policies:

    apiVersion: policy.open-cluster-management.io/v1
    kind: PolicyGenerator
    metadata:
      name: config-data-policies
    policyDefaults:
      namespace: policies
      policySets: []
    policies:
      - name: config-data
        manifests:
          - path: configmap.yaml 1
    1
    The configmap.yaml file represents a Kubernetes manifest YAML file to be included in the policy. Alternatively, you can set the path to a Kustomize directory, or a directory with multiple Kubernetes manifest YAML files. View the following ConfigMap example:
    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: my-config
      namespace: default
    data:
      key1: value1
      key2: value2
  • You can use the object-templates-raw manifest to automatically generate a ConfigurationPolicy resource with the content you add. For example, you can create a manifest file with the following syntax:

    object-templates-raw: |
      {{- range (lookup "v1" "ConfigMap" "my-namespace" "").items }}
      - complianceType: musthave
        objectDefinition:
          kind: ConfigMap
          apiVersion: v1
          metadata:
            name: {{ .metadata.name }}
            namespace: {{ .metadata.namespace }}
            labels:
              i-am-from: template
      {{- end }}
  • After you create a manifest file, you can create a PolicyGenerator configuration file. See the following YAML example, where the path field is set to the path of your manifest.yaml file:

    apiVersion: policy.open-cluster-management.io/v1
    kind: PolicyGenerator
    metadata:
      name: config-data-policies
    policyDefaults:
      namespace: policies
      policySets: []
    policies:
      - name: config-data
        manifests:
          - path: manifest.yaml
  • The generated Policy resource, along with the generated Placement resource, and PlacementBinding resource, might resemble the following examples. Note: Specifications for resources are described in the Policy Generator configuration reference table.

    apiVersion: cluster.open-cluster-management.io/v1beta1
    kind: Placement
    metadata:
      name: placement-config-data
      namespace: policies
    spec:
      predicates:
      - requiredClusterSelector:
          labelSelector:
            matchExpressions: []
    ---
    apiVersion: policy.open-cluster-management.io/v1
    kind: PlacementBinding
    metadata:
      name: binding-config-data
      namespace: policies
    placementRef:
      apiGroup: cluster.open-cluster-management.io
      kind: Placement
      name: placement-config-data
    subjects:
    - apiGroup: policy.open-cluster-management.io
      kind: Policy
      name: config-data
    ---
    apiVersion: policy.open-cluster-management.io/v1
    kind: Policy
    metadata:
      annotations:
        policy.open-cluster-management.io/categories: CM Configuration Management
        policy.open-cluster-management.io/controls: CM-2 Baseline Configuration
        policy.open-cluster-management.io/standards: NIST SP 800-53
        policy.open-cluster-management.io/description:
      name: config-data
      namespace: policies
    spec:
      disabled: false
      policy-templates:
      - objectDefinition:
          apiVersion: policy.open-cluster-management.io/v1
          kind: ConfigurationPolicy
          metadata:
            name: config-data
          spec:
            object-templates:
            - complianceType: musthave
              objectDefinition:
                apiVersion: v1
                data:
                  key1: value1
                  key2: value2
                kind: ConfigMap
                metadata:
                  name: my-config
                  namespace: default
            remediationAction: inform
            severity: low
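
After your kustomization.yaml file references the PolicyGenerator configuration file, you can produce output like the previous example by running Kustomize with generator plug-ins enabled. A typical invocation, assuming the Policy Generator plug-in binary is installed in your Kustomize plug-in directory, resembles the following command:

kustomize build --enable-alpha-plugins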

2.3.7.3. Policy Generator configuration reference table

All the fields in the policyDefaults section, except for namespace, can be overridden for each policy, and all the fields in the policySetDefaults section can be overridden for each policy set.

Table 2.2. Parameter table
Field | Optional or required | Description

apiVersion

Required

Set the value to policy.open-cluster-management.io/v1.

kind

Required

Set the value to PolicyGenerator to indicate the type of policy.

metadata.name

Required

The name for identifying the policy resource.

placementBindingDefaults.name

Optional

If multiple policies use the same placement, this name is used to generate a unique name for the resulting PlacementBinding, binding the placement with the array of policies that reference it.

policyDefaults

Required

Any default value listed here is overridden by an entry in the policies array except for namespace.

policyDefaults.namespace

Required

The namespace of all the policies.

policyDefaults.complianceType

Optional

Determines the policy controller behavior when comparing the manifest to objects on the cluster. The values that you can use are musthave, mustonlyhave, or mustnothave. The default value is musthave.

policyDefaults.metadataComplianceType

Optional

Overrides complianceType when comparing the manifest metadata section to objects on the cluster. The values that you can use are musthave and mustonlyhave. The default value is empty ({}) to avoid overriding the complianceType for metadata.

policyDefaults.categories

Optional

Array of categories to be used in the policy.open-cluster-management.io/categories annotation. The default value is CM Configuration Management.

policyDefaults.controls

Optional

Array of controls to be used in the policy.open-cluster-management.io/controls annotation. The default value is CM-2 Baseline Configuration.

policyDefaults.standards

Optional

An array of standards to be used in the policy.open-cluster-management.io/standards annotation. The default value is NIST SP 800-53.

policyDefaults.policyAnnotations

Optional

Annotations that the policy includes in the metadata.annotations section. It is applied for all policies unless specified in the policy. The default value is empty ({}).

policyDefaults.configurationPolicyAnnotations

Optional

Key-value pairs of annotations to set on generated configuration policies. For example, you can disable policy templates by defining the following parameter: {"policy.open-cluster-management.io/disable-templates": "true"}. The default value is empty ({}).

policyDefaults.copyPolicyMetadata

Optional

Copies the labels and annotations for all policies and adds them to a replica policy. Set to true by default. If set to false, only the policy framework specific policy labels and annotations are copied to the replicated policy.

policyDefaults.customMessage

Optional

Configures the compliance messages emitted by the configuration policy to use one of the specified Go templates based on the current compliance.

policyDefaults.severity

Optional

The severity of the policy violation. The default value is low.

policyDefaults.disabled

Optional

Whether the policy is disabled, meaning it is not propagated and generates no status as a result. The default value is false to enable the policy.

policyDefaults.remediationAction

Optional

The remediation mechanism of your policy. The parameter values are enforce and inform. The default value is inform.

policyDefaults.namespaceSelector

Required for namespaced objects that do not have a namespace specified

Determines namespaces in the managed cluster that the object is applied to. The include and exclude parameters accept file path expressions to include and exclude namespaces by name. The matchExpressions and matchLabels parameters specify namespaces to include by label. Read the Kubernetes labels and selectors documentation. The resulting list is compiled by using the intersection of results from all parameters.

policyDefaults.evaluationInterval

Optional

Specifies the frequency for a policy to be evaluated when it is in a particular compliance state. Use the parameters compliant and noncompliant. The default value for the compliant and noncompliant parameters is watch to leverage Kubernetes API watches instead of polling the Kubernetes API server.

When managed clusters have low resources, the evaluation interval can be set to long polling intervals to reduce CPU and memory usage on the Kubernetes API and policy controller. These are in the format of durations. For example, 1h25m3s represents 1 hour, 25 minutes, and 3 seconds. These can also be set to never to avoid evaluating the policy after it is in a particular compliance state.

policyDefaults.evaluationInterval.compliant

Optional

Specifies the evaluation frequency for a compliant policy. If you want to enable the previous polling behavior, set this parameter to 10s.

policyDefaults.evaluationInterval.noncompliant

Optional

Specifies the evaluation frequency for a non-compliant policy. If you want to enable the previous polling behavior, set this parameter to 10s.

policyDefaults.pruneObjectBehavior

Optional

Determines whether objects created or monitored by the policy should be deleted when the policy is deleted. Pruning only takes place if the remediation action of the policy has been set to enforce. Example values are DeleteIfCreated, DeleteAll, or None. The default value is None.

policyDefaults.recreateOption

Optional

Describes whether to delete and recreate an object when an update is required. The IfRequired value recreates the object when you update an immutable field. On clusters without dry-run update support, the IfRequired value has no effect. The Always value always recreates the object if a mismatch is detected.

When the remediationAction parameter is set to inform, the recreateOption value has no effect. The default value is None.

policyDefaults.recordDiff

Optional

Specifies if and where to log the difference between the object on the cluster and the objectDefinition in the policy. Set to Log to log the difference in the controller logs or None to not log the difference. By default, this parameter is empty to not log the difference.

policyDefaults.dependencies

Optional

A list of objects that must be in specific compliance states before this policy is applied. Cannot be specified when policyDefaults.orderPolicies is set to true.

policyDefaults.dependencies[].name

Required

The name of the object being depended on.

policyDefaults.dependencies[].namespace

Optional

The namespace of the object being depended on. The default is the namespace of policies set for the Policy Generator.

policyDefaults.dependencies[].compliance

Optional

The compliance state the object needs to be in. The default value is Compliant.

policyDefaults.dependencies[].kind

Optional

The kind of the object. By default, the kind is set to Policy, but it can also be other kinds that have a compliance state, such as ConfigurationPolicy.

policyDefaults.dependencies[].apiVersion

Optional

The API version of the object. The default value is policy.open-cluster-management.io/v1.

policyDefaults.description

Optional

The description of the policy you want to create.

policyDefaults.extraDependencies

Optional

A list of objects that must be in specific compliance states before this policy is applied. The dependencies that you define are added to each policy template (for example, ConfigurationPolicy) separately from the dependencies list. Cannot be specified when policyDefaults.orderManifests is set to true.

policyDefaults.extraDependencies[].name

Required

The name of the object being depended on.

policyDefaults.extraDependencies[].namespace

Optional

The namespace of the object being depended on. By default, the value is set to the namespace of policies set for the Policy Generator.

policyDefaults.extraDependencies[].compliance

Optional

The compliance state the object needs to be in. The default value is Compliant.

policyDefaults.extraDependencies[].kind

Optional

The kind of the object. The default value is Policy, but it can also be other kinds that have a compliance state, such as ConfigurationPolicy.

policyDefaults.extraDependencies[].apiVersion

Optional

The API version of the object. The default value is policy.open-cluster-management.io/v1.

policyDefaults.ignorePending

Optional

Bypass compliance status checks when the Policy Generator is waiting for its dependencies to reach their desired states. The default value is false.

policyDefaults.orderPolicies

Optional

Automatically generate dependencies on the policies so they are applied in the order you defined in the policies list. By default, the value is set to false. Cannot be specified at the same time as policyDefaults.dependencies.

policyDefaults.orderManifests

Optional

Automatically generate extraDependencies on policy templates so that they are applied in the order you defined in the manifests list for that policy. Cannot be specified when policyDefaults.consolidateManifests is set to true. Cannot be specified at the same time as policyDefaults.extraDependencies.

policyDefaults.consolidateManifests

Optional

This determines if a single configuration policy is generated for all the manifests being wrapped in the policy. If set to false, a configuration policy per manifest is generated. The default value is true.

policyDefaults.informGatekeeperPolicies (Deprecated)

Optional

Set informGatekeeperPolicies to false to use Gatekeeper manifests directly without defining them in a configuration policy. When the policy references a violated Gatekeeper policy manifest, an additional configuration policy is generated in order to receive policy violations in Red Hat Advanced Cluster Management. The default value is true.

policyDefaults.informKyvernoPolicies

Optional

When the policy references a Kyverno policy manifest, this determines if an additional configuration policy is generated to receive policy violations in Red Hat Advanced Cluster Management, when the Kyverno policy is violated. The default value is true.

policyDefaults.policyLabels

Optional

Labels that the policy includes in its metadata.labels section. The policyLabels parameter is applied for all policies unless specified in the policy.

policyDefaults.gatekeeperEnforcementAction

Optional

Overrides the spec.enforcementAction field of a Gatekeeper constraint. This only applies to Gatekeeper constraints and is ignored by other manifests. If not set, the spec.enforcementAction field does not change.

policyDefaults.policySets

Optional

Array of policy sets that the policy joins. Policy set details can be defined in the policySets section. When a policy is part of a policy set, a placement binding is not generated for the policy since one is generated for the set. Set policies[].generatePlacementWhenInSet or policyDefaults.generatePlacementWhenInSet to override policyDefaults.policySets.

policyDefaults.generatePolicyPlacement

Optional

Generate placement manifests for policies. Set to true by default. When set to false, the placement manifest generation is skipped, even if a placement is specified.

policyDefaults.generatePlacementWhenInSet

Optional

When a policy is part of a policy set, by default, the generator does not generate the placement for this policy since a placement is generated for the policy set. Set generatePlacementWhenInSet to true to deploy the policy with both policy placement and policy set placement. The default value is false.

policyDefaults.placement

Optional

The placement configuration for the policies. This defaults to a placement configuration that matches all clusters.

policyDefaults.placement.name

Optional

Specify a name to consolidate placements that contain the same cluster label selectors.

policyDefaults.placement.labelSelector

Optional

Specify a placement by defining a cluster label selector using either key:value, or providing a matchExpressions, matchLabels, or both, with appropriate values. See placementPath to specify an existing file.

policyDefaults.placement.placementName

Optional

Define this parameter to use a placement that already exists on the cluster. A Placement is not created, but a PlacementBinding binds the policy to this Placement.

policyDefaults.placement.placementPath

Optional

To reuse an existing placement, specify the path relative to the location of the kustomization.yaml file. If provided, this placement is used by all policies by default. See labelSelector to generate a new Placement.

policyDefaults.placement.clusterSelector (Deprecated)

Optional

PlacementRule is deprecated. Use labelSelector instead to generate a placement. Specify a placement rule by defining a cluster selector using either key:value or by providing matchExpressions, matchLabels, or both, with appropriate values. See placementRulePath to specify an existing file.

policyDefaults.placement.placementRuleName (Deprecated)

Optional

PlacementRule is deprecated. Alternatively, use placementName to specify a placement. To use an existing placement rule on the cluster, specify the name for this parameter. A PlacementRule is not created, but a PlacementBinding binds the policy to the existing PlacementRule.

policyDefaults.placement.placementRulePath (Deprecated)

Optional

PlacementRule is deprecated. Alternatively, use placementPath to specify a placement. To reuse an existing placement rule, specify the path relative to the location of the kustomization.yaml file. If provided, this placement rule is used by all policies by default. See clusterSelector to generate a new PlacementRule.

policySetDefaults

Optional

Default values for policy sets. Any default value listed for this parameter is overridden by an entry in the policySets array.

policySetDefaults.placement

Optional

The placement configuration for the policy sets. This defaults to a placement configuration that matches all clusters. See policyDefaults.placement for a description of this field.

policySetDefaults.generatePolicySetPlacement

Optional

Generate placement manifests for policy sets. Set to true by default. When set to false, the placement manifest generation is skipped, even if a placement is specified.

policies

Required

The list of policies to create along with overrides to either the default values, or the values that are set in policyDefaults. See policyDefaults for additional fields and descriptions.

policies.description

Optional

The description of the policy you want to create.

policies[].name

Required

The name of the policy to create.

policies[].manifests

Required

The list of Kubernetes object manifests to include in the policy, along with overrides to either the default values, the values set in this policies item, or the values set in policyDefaults. See policyDefaults for additional fields and descriptions. When consolidateManifests is set to true, only complianceType, metadataComplianceType, and recordDiff can be overridden at the policies[].manifests level.

policies[].manifests[].path

Required

Path to a single file, a flat directory of files, or a Kustomize directory relative to the kustomization.yaml file. If the directory is a Kustomize directory, the generator runs Kustomize against the directory before generating the policies. If there is a requirement to process Helm charts for the Kustomize directory, set POLICY_GEN_ENABLE_HELM to true in the environment where the policy generator is running to enable Helm for the Policy Generator.

The following manifests are supported:

  • Non-root policy type manifests, such as CertificatePolicy, ConfigurationPolicy, and OperatorPolicy, that have a Policy suffix. The previous manifests are not modified except for patches and are directly added as a policy-templates entry of the Policy resource. Gatekeeper constraints are also directly included if informGatekeeperPolicies is set to false.
  • Manifests containing only an object-templates-raw key. The corresponding value is used directly in a generated ConfigurationPolicy resource without modification, which is added as a policy-templates entry of the Policy entry.
  • For everything else, ConfigurationPolicy objects are generated to wrap the previously mentioned manifests. The resulting ConfigurationPolicy manifest is added as a policy-templates entry of the Policy resource.

policies[].manifests[].name

Optional

Serves as the ConfigurationPolicy resource name when consolidateManifests is set to false. If multiple manifests are present in the path, an index number is appended. If multiple manifests are present and their names are provided, with consolidateManifests set to true, the name of the first manifest is used for all manifest paths.

policies[].manifests[].patches

Optional

A list of Kustomize patches to apply to the manifest at the path. If there are multiple manifests, the patch requires the apiVersion, kind, metadata.name, and metadata.namespace (if applicable) fields to be set so Kustomize can identify the manifest that the patch applies to. If there is a single manifest, the metadata.name and metadata.namespace fields can be patched.

policies.policyLabels

Optional

Labels that the policy includes in its metadata.labels section. The policyLabels parameter is applied for all policies unless specified in the policy.

policySets

Optional

The list of policy sets to create, along with overrides to either the default values or the values that are set in policySetDefaults. To include a policy in a policy set, use policyDefaults.policySets, policies[].policySets, or policySets.policies. See policySetDefaults for additional fields and descriptions.

policySets[].name

Required

The name of the policy set to create.

policySets[].description

Optional

The description of the policy set to create.

policySets[].policies

Optional

The list of policies to be included in the policy set. If policyDefaults.policySets or policies[].policySets is also specified, the lists are merged.
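
To illustrate how several of the preceding fields fit together, the following sketch combines a placement, evaluation intervals, and a namespace selector in one PolicyGenerator configuration. The names, label values, and manifest path are illustrative assumptions:

apiVersion: policy.open-cluster-management.io/v1
kind: PolicyGenerator
metadata:
  name: defaults-example
policyDefaults:
  namespace: policies
  severity: medium
  remediationAction: enforce
  placement:
    labelSelector:
      matchExpressions:
        - {key: environment, operator: In, values: ["dev"]}
  evaluationInterval:
    compliant: 2h
    noncompliant: 45s
  namespaceSelector:
    include: ["app-*"]
policies:
  - name: app-config
    manifests:
      - path: configmap.yaml # illustrative manifest path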

2.3.7.4. Additional resources

2.3.8. Generating a policy that installs the Compliance Operator

Generate a policy that installs the Compliance Operator onto your clusters. For an operator that uses the namespaced installation mode, such as the Compliance Operator, an OperatorGroup manifest is also required.

Complete the following steps:

  1. Create a YAML file with a Namespace, a Subscription, and an OperatorGroup manifest called compliance-operator.yaml. The following example installs these manifests in the openshift-compliance namespace:

    apiVersion: v1
    kind: Namespace
    metadata:
      name: openshift-compliance
    ---
    apiVersion: operators.coreos.com/v1
    kind: OperatorGroup
    metadata:
      name: compliance-operator
      namespace: openshift-compliance
    spec:
      targetNamespaces:
        - openshift-compliance
    ---
    apiVersion: operators.coreos.com/v1alpha1
    kind: Subscription
    metadata:
      name: compliance-operator
      namespace: openshift-compliance
    spec:
      channel: release-0.1
      name: compliance-operator
      source: redhat-operators
      sourceNamespace: openshift-marketplace
  2. Create a PolicyGenerator configuration file. View the following PolicyGenerator policy example that installs the Compliance Operator on all OpenShift Container Platform managed clusters:

    apiVersion: policy.open-cluster-management.io/v1
    kind: PolicyGenerator
    metadata:
      name: install-compliance-operator
    policyDefaults:
      namespace: policies
      placement:
        labelSelector:
          matchExpressions:
            - key: vendor
              operator: In
              values:
                - "OpenShift"
    policies:
      - name: install-compliance-operator
        manifests:
          - path: compliance-operator.yaml
  3. Add the policy generator to your kustomization.yaml file. The generators section might resemble the following configuration:

    generators:
      - policy-generator-config.yaml

    As a result, the generated policy resembles the following file:

    apiVersion: cluster.open-cluster-management.io/v1beta1
    kind: Placement
    metadata:
      name: placement-install-compliance-operator
      namespace: policies
    spec:
      predicates:
      - requiredClusterSelector:
          labelSelector:
            matchExpressions:
            - key: vendor
              operator: In
              values:
              - OpenShift
    ---
    apiVersion: policy.open-cluster-management.io/v1
    kind: PlacementBinding
    metadata:
      name: binding-install-compliance-operator
      namespace: policies
    placementRef:
      apiGroup: cluster.open-cluster-management.io
      kind: Placement
      name: placement-install-compliance-operator
    subjects:
      - apiGroup: policy.open-cluster-management.io
        kind: Policy
        name: install-compliance-operator
    ---
    apiVersion: policy.open-cluster-management.io/v1
    kind: Policy
    metadata:
      annotations:
        policy.open-cluster-management.io/categories: CM Configuration Management
        policy.open-cluster-management.io/controls: CM-2 Baseline Configuration
        policy.open-cluster-management.io/standards: NIST SP 800-53
        policy.open-cluster-management.io/description:
      name: install-compliance-operator
      namespace: policies
    spec:
      disabled: false
      policy-templates:
        - objectDefinition:
            apiVersion: policy.open-cluster-management.io/v1
            kind: ConfigurationPolicy
            metadata:
              name: install-compliance-operator
            spec:
              object-templates:
                - complianceType: musthave
                  objectDefinition:
                    apiVersion: v1
                    kind: Namespace
                    metadata:
                      name: openshift-compliance
                - complianceType: musthave
                  objectDefinition:
                    apiVersion: operators.coreos.com/v1alpha1
                    kind: Subscription
                    metadata:
                      name: compliance-operator
                      namespace: openshift-compliance
                    spec:
                      channel: release-0.1
                      name: compliance-operator
                      source: redhat-operators
                      sourceNamespace: openshift-marketplace
                - complianceType: musthave
                  objectDefinition:
                    apiVersion: operators.coreos.com/v1
                    kind: OperatorGroup
                    metadata:
                      name: compliance-operator
                      namespace: openshift-compliance
                    spec:
                      targetNamespaces:
                        - openshift-compliance
              remediationAction: enforce
              severity: low

Your policy that installs the Compliance Operator is generated.

2.3.9. Governance policy framework architecture

Enhance the security for your cluster with the Red Hat Advanced Cluster Management for Kubernetes governance lifecycle. The product governance lifecycle is based on using supported policies, processes, and procedures to manage security and compliance from a central interface page. View the following diagram of the governance architecture:

Governance architecture diagram

View the following component descriptions for the governance architecture diagram:

  • Governance policy framework: The previous image represents the framework that runs as the governance-policy-framework pod on managed clusters and contains the following controllers:

    • Spec sync controller: Synchronizes the replicated policy in the managed cluster namespace on the hub cluster to the managed cluster namespace on the managed cluster.
    • Status sync controller: Records compliance events from policy controllers in the replicated policies on the hub and managed cluster. The status only contains updates that are relevant to the current policy and does not consider past statuses if the policy is deleted and recreated.
    • Template sync controller: Creates, updates, and deletes objects in the managed cluster namespace on the managed cluster based on the definitions from the replicated policy spec.policy-templates entries.
    • Gatekeeper sync controller: Records Gatekeeper constraint audit results as compliance events in corresponding Red Hat Advanced Cluster Management policies.
  • Policy propagator controller: Runs on the Red Hat Advanced Cluster Management hub cluster and generates the replicated policies in the managed cluster namespaces on the hub based on the placements bound to the root policy. It also aggregates compliance status from replicated policies to the root policy status and initiates automations based on policy automations bound to the root policy.
  • Governance policy add-on controller: Runs on the Red Hat Advanced Cluster Management hub cluster and manages the installation of policy controllers on managed clusters.

2.3.9.1. Governance architecture components

The governance architecture also includes the following components:

  • Governance dashboard: Provides a summary of your cloud governance and risk details, which include policy and cluster violations. Refer to the Manage Governance dashboard section to learn about the structure of a Red Hat Advanced Cluster Management for Kubernetes policy framework, and how to use the Red Hat Advanced Cluster Management for Kubernetes Governance dashboard.

    Notes:

    • When a policy is propagated to a managed cluster, it is first replicated to the cluster namespace on the hub cluster, and is named and labeled using namespaceName.policyName. When you create a policy, make sure that the length of the namespaceName.policyName does not exceed 63 characters due to the Kubernetes length limit for label values.
    • When you search for a policy in the hub cluster, you might also receive the name of the replicated policy in the managed cluster namespace. For example, if you search for policy-dhaz-cert in the default namespace, the following policy name from the hub cluster might also appear in the managed cluster namespace: default.policy-dhaz-cert.
  • Policy-based governance framework: Supports policy creation and deployment to various managed clusters based on attributes associated with clusters, such as a geographical region. There are examples of the predefined policies and instructions on deploying policies to your cluster in the open source community. Additionally, when policies are violated, automations can be configured to run and take any action that the user chooses.
  • Open source community: Supports community contributions with a foundation of the Red Hat Advanced Cluster Management policy framework. Policy controllers and third-party policies are also a part of the open-cluster-management/policy-collection repository. You can contribute and deploy policies using GitOps.

2.3.9.2. Additional resources

2.3.10. Governance dashboard

Manage your security policies and policy violations by using the Governance dashboard to create, view, and edit your resources. You can create YAML files for your policies from the command line and console. Continue reading for details about the Governance dashboard from the console.

2.3.10.1. Governance page

The following tabs are displayed on the Governance page: Overview, Policy sets, and Policies. Read the following descriptions to know which information is displayed:

  • Overview

    The following summary cards are displayed from the Overview tab: Policy set violations, Policy violations, Clusters, Categories, Controls, and Standards.

  • Policy sets

    Create and manage hub cluster policy sets.

  • Policies

    • Create and manage security policies. The table of policies lists the following details of a policy: Name, Namespace, and Cluster violations.
    • You can edit, enable or disable, set remediation to inform or enforce, or remove a policy by selecting the Actions icon. You can view the categories and standards of a specific policy by selecting the drop-down arrow to expand the row.
    • Reorder your table columns in the Manage column dialog box. Select the Manage column icon to display the dialog box. To reorder your columns, select the Reorder icon and move the column name. Select the checkboxes of the columns that you want to appear in the table, and click the Save button.
    • Complete bulk actions by selecting multiple policies and clicking the Actions button. You can also customize your policy table by clicking the Filter button.

      When you select a policy in the table list, the following tabs of information are displayed from the console:

      • Details: Select the Details tab to view policy details and placement details. In the Placement table, the Compliance column provides links to view the compliance of the clusters that are displayed.
      • Results: Select the Results tab to view a table list of all clusters that are associated to the policy.
      • From the Message column, click the View details link to view the template details, template YAML, and related resources. Click the View history link to view the violation message and the time of the last report.

2.3.10.2. Governance automation configuration

If there is a configured automation for a specific policy, you can select the automation to view more details. View the following descriptions of the schedule frequency options for your automation:

  • Manual run: Manually set this automation to run once. After the automation runs, it is set to disabled. Note: You can only select Manual run mode when the schedule frequency is disabled.
  • Run once mode: When a policy is violated, the automation runs one time. After the automation runs, it is set to disabled, and you must run the automation manually from then on. In once mode, the target_clusters extra variable is automatically supplied with the list of clusters that violated the policy. The Ansible Automation Platform Job template must have PROMPT ON LAUNCH enabled for the EXTRA VARIABLES section (also known as extra_vars).
  • Run everyEvent mode: When a policy is violated, the automation runs one time for each unique policy violation per managed cluster. Use the DelayAfterRunSeconds parameter to set the minimum number of seconds before an automation can be restarted on the same cluster. If the policy is violated multiple times during the delay period and remains in the violated state, the automation runs one time after the delay period. The default is 0 seconds and is only applicable for the everyEvent mode. In everyEvent mode, the target_clusters extra variable and the Ansible Automation Platform Job template requirements are the same as in once mode.
  • Disable automation: When the scheduled automation is set to disabled, the automation does not run until the setting is updated.
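
For reference, a PolicyAutomation resource that uses everyEvent mode with a delay might resemble the following sketch. The template name, secret, and policy reference are placeholders, and the delayAfterRunSeconds field name is an assumption based on the DelayAfterRunSeconds parameter described previously:

    apiVersion: policy.open-cluster-management.io/v1beta1
    kind: PolicyAutomation
    metadata:
      name: policyname-policy-automation
    spec:
      automationDef:
        name: Policy Compliance Template
        secret: ansible-tower
        type: AnsibleJob
      mode: everyEvent
      delayAfterRunSeconds: 600 # assumed field name; minimum seconds between runs on the same cluster
      policyRef: policyname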

The following variables are automatically provided in the extra_vars of the Ansible Automation Platform Job:

  • policy_name: The name of the non-compliant root policy that initiates the Ansible Automation Platform job on the hub cluster.
  • policy_namespace: The namespace of the root policy.
  • hub_cluster: The name of the hub cluster, which is determined by the value in the cluster's DNS object.
  • policy_sets: This parameter contains all associated policy set names of the root policy. If the policy is not within a policy set, the policy_sets parameter is empty.
  • policy_violations: This parameter contains a list of non-compliant cluster names, and the value is the policy status field for each non-compliant cluster.
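
As an illustration only, the extra_vars content that a job template receives might resemble the following sketch; all names, values, and the status excerpt are hypothetical:

    extra_vars:
      policy_name: install-compliance-operator
      policy_namespace: policies
      hub_cluster: hub-cluster.example.com
      policy_sets:
        - cluster-security
      policy_violations:
        managed-cluster-1:
          compliant: NonCompliant # hypothetical status excerpt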

2.3.10.3. Additional resources

Review the following topics to learn more about creating and updating your security policies:

2.3.11. Creating configuration policies

You can create a YAML file for your configuration policy from the command line interface (CLI) or from the console. As you create a configuration policy from the console, a YAML file is displayed in the YAML editor.

2.3.11.1. Prerequisites

  • Required access: Administrator or cluster administrator
  • If you have an existing Kubernetes manifest, consider using the Policy Generator to automatically include the manifests in a policy. See the Policy Generator documentation.

2.3.11.2. Creating a configuration policy from the CLI

Complete the following steps to create a configuration policy from the CLI:

  1. Create a YAML file for your configuration policy. Run the following command:

    oc create -f configpolicy-1.yaml

    Your configuration policy might resemble the following policy:

    apiVersion: policy.open-cluster-management.io/v1
    kind: Policy
    metadata:
      name: policy-1
      namespace: my-policies
    spec:
      disabled: false
      policy-templates:
        - objectDefinition:
            apiVersion: policy.open-cluster-management.io/v1
            kind: ConfigurationPolicy
            metadata:
              name: mustonlyhave-configuration
            spec:
              remediationAction: inform
              namespaceSelector:
                include: ["default"]
                exclude: ["kube-system"]
              object-templates:
                - complianceType: mustonlyhave
                  objectDefinition: # Add the object definition to evaluate here
  2. Apply the policy by running the following command:

    oc apply -f <policy-file-name>  --namespace=<namespace>
  3. Verify and list the policies by running the following command:

    oc get policies.policy.open-cluster-management.io --namespace=<namespace>

Your configuration policy is created.

2.3.11.2.1. Viewing your configuration policy from the CLI

Complete the following steps to view your configuration policy from the CLI:

  1. View details for a specific configuration policy by running the following command:

    oc get policies.policy.open-cluster-management.io <policy-name> -n <namespace> -o yaml
  2. View a description of your configuration policy by running the following command:

    oc describe policies.policy.open-cluster-management.io <name> -n <namespace>

2.3.11.2.2. Viewing your configuration policy from the console

View any configuration policy and its status from the console.

After you log in to your cluster from the console, select Governance to view a table list of your policies. Note: You can filter the table list of your policies by selecting the All policies tab or Cluster violations tab.

Select one of your policies to view more details. The Details, Clusters, and Templates tabs are displayed.

2.3.11.3. Disabling configuration policies

To disable your configuration policy, select Disable policy from the Actions menu of the policy. The policy is disabled, but not deleted.
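
The console action sets the disabled field in the policy spec. If you prefer the CLI, a patch such as the following sketch has the same effect; the policy name and namespace are placeholders:

    oc patch policies.policy.open-cluster-management.io <policy-name> -n <namespace> --type=merge -p '{"spec":{"disabled":true}}'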

2.3.11.4. Deleting a configuration policy

Delete a configuration policy from the CLI or the console. Complete the following steps:

  1. To delete the policy from your target cluster or clusters, run the following command:

    oc delete policies.policy.open-cluster-management.io <policy-name> -n <namespace>
  2. Verify that your policy is removed by running the following command:

    oc get policies.policy.open-cluster-management.io <policy-name> -n <namespace>
  3. To delete a configuration policy from the console, click the Actions icon for the policy you want to delete in the policy violation table and then click Delete.

Your policy is deleted.

2.3.11.5. Additional resources

2.3.12. Configuring Ansible Automation Platform for governance

Red Hat Advanced Cluster Management for Kubernetes governance can be integrated with Red Hat Ansible Automation Platform to create policy violation automations. You can configure the automation from the Red Hat Advanced Cluster Management console.

2.3.12.1. Prerequisites

  • A supported OpenShift Container Platform version
  • You must have Ansible Automation Platform version 3.7.3 or a later version installed. It is best practice to install the latest supported version of Ansible Automation Platform. See Red Hat Ansible Automation Platform documentation for more details.
  • Install the Ansible Automation Platform Resource Operator from the Operator Lifecycle Manager. In the Update Channel section, select stable-2.x-cluster-scoped. Select the All namespaces on the cluster (default) installation mode.

    Note: Ensure that the Ansible Automation Platform job template is idempotent when you run it. If you do not have Ansible Automation Platform Resource Operator, you can find it from the Red Hat OpenShift Container Platform OperatorHub page.

For more information about installing and configuring Red Hat Ansible Automation Platform, see Setting up Ansible tasks.

2.3.12.2. Creating a policy violation automation from the console

After you log in to your Red Hat Advanced Cluster Management hub cluster, select Governance from the navigation menu, and then click the Policies tab to view the policy tables.

Configure an automation for a specific policy by clicking Configure in the Automation column. When the policy automation panel appears, you can create the automation. From the Ansible credential section, click the drop-down menu to select an Ansible credential. If you need to add a credential, see Managing credentials overview.

Note: This credential is copied to the same namespace as the policy. The credential is used by the AnsibleJob resource that is created to initiate the automation. Changes that you make to the Ansible credential in the Credentials section of the console are automatically updated in the copy.

After a credential is selected, click the Ansible job drop-down list to select a job template. In the Extra variables section, add the parameter values from the extra_vars section of the PolicyAutomation. Select the frequency of the automation. You can select Run once mode, Run everyEvent mode, or Disable automation.

Save your policy violation automation by selecting Submit. When you select the View Job link from the Ansible job details side panel, the link directs you to the job template on the Search page. After you successfully create the automation, it is displayed in the Automation column.

Note: When you delete a policy that has an associated policy automation, the policy automation is automatically deleted as part of clean up.

Your policy violation automation is created from the console.

2.3.12.3. Creating a policy violation automation from the CLI

Complete the following steps to configure a policy violation automation from the CLI:

  1. From your terminal, log in to your Red Hat Advanced Cluster Management hub cluster using the oc login command.
  2. Find or create a policy that you want to add an automation to. Note the policy name and namespace.
  3. Create a PolicyAutomation resource using the following sample as a guide:

    apiVersion: policy.open-cluster-management.io/v1beta1
    kind: PolicyAutomation
    metadata:
      name: policyname-policy-automation
    spec:
      automationDef:
        extra_vars:
          your_var: your_value
        name: Policy Compliance Template
        secret: ansible-tower
        type: AnsibleJob
      mode: disabled
      policyRef: policyname
  4. The Automation template name in the previous sample is Policy Compliance Template. Change that value to match your job template name.
  5. In the extra_vars section, add any parameters you need to pass to the Automation template.
  6. Set the mode to once, everyEvent, or disabled.
  7. Set the policyRef to the name of your policy.
  8. Create a secret in the same namespace as this PolicyAutomation resource that contains the Ansible Automation Platform credential. In the previous example, the secret name is ansible-tower. Use the sample from application lifecycle to see how to create the secret, or see the sketch after these steps.
  9. Create the PolicyAutomation resource.

    Notes:

    • An immediate run of the policy automation can be initiated by adding the following annotation to the PolicyAutomation resource:

      metadata:
        annotations:
          policy.open-cluster-management.io/rerun: "true"
    • When the policy is in once mode, the automation runs when the policy is non-compliant. An extra_vars variable named target_clusters is added, and its value is an array of each managed cluster name where the policy is non-compliant.
    • When the policy is in everyEvent mode, the automation runs for each policy violation, provided that the time since the automation last ran on that cluster exceeds the DelayAfterRunSeconds value.
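
The credential secret that is referenced in step 8 might resemble the following sketch, assuming the host and token key format that the Ansible Automation Platform Resource Operator expects; all values are placeholders:

    apiVersion: v1
    kind: Secret
    metadata:
      name: ansible-tower
      namespace: policies
    type: Opaque
    stringData:
      host: https://ansible-host.example.com # placeholder Ansible Automation Platform URL
      token: <ansible-automation-platform-api-token>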

2.4. Policy deployment with external tools

To deploy CertificatePolicy, ConfigurationPolicy, OperatorPolicy resources, and Gatekeeper constraints directly to your managed cluster, you can use an external tool such as Red Hat OpenShift GitOps.

2.4.1. Deployment workflow

Your CertificatePolicy, ConfigurationPolicy, and OperatorPolicy policies must be in the open-cluster-management-policies namespace or the managed cluster namespace. Policies in other namespaces are ignored and do not receive a status. If you are using a Red Hat OpenShift GitOps version with an Argo CD version earlier than 2.13, you must configure the Red Hat Advanced Cluster Management for Kubernetes policy health checks.

The OpenShift GitOps service account must have permission to manage Red Hat Advanced Cluster Management for Kubernetes policies. Deploy policies with OpenShift GitOps and view the policies from the Discovered policies table of the Governance dashboard. The policies are grouped by the policy name and kind fields.
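
For example, a ConfigurationPolicy that you might sync directly to a managed cluster with OpenShift GitOps could resemble the following sketch; the policy name and the checked object are hypothetical:

    apiVersion: policy.open-cluster-management.io/v1
    kind: ConfigurationPolicy
    metadata:
      name: example-configuration
      namespace: open-cluster-management-policies
    spec:
      remediationAction: inform
      severity: low
      object-templates:
        - complianceType: musthave
          objectDefinition:
            apiVersion: v1
            kind: Namespace
            metadata:
              name: example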

2.4.2. Additional resources
