Chapter 3. Manage security policies
Use the Governance dashboard to create, view, and manage your security policies and policy violations. You can create YAML files for your policies from the CLI and console.
3.1. Governance page
The following tabs are displayed on the Governance page:
Overview
View the following summary cards from the Overview tab: Policy set violations, Policy violations, Clusters, Categories, Controls, and Standards.
Policy sets
Create and manage hub cluster policy sets.
Policies
Create and manage security policies. The table of policies lists the following details of a policy: Name, Namespace, Status, Remediation, Policy set, Cluster violations, Source, Automation and Created.
You can edit, enable or disable, set remediation to inform or enforce, or remove a policy by selecting the Actions icon. You can view the categories and standards of a specific policy by selecting the drop-down arrow to expand the row.
Complete bulk actions by selecting multiple policies and clicking the Actions button. You can also customize your policy table by clicking the Filter button.
When you select a policy in the table list, the following tabs of information are displayed from the console:
- Details: Select the Details tab to view policy details and placement details. In the Placement table, the Compliance column provides links to view the compliance of the clusters that are displayed.
- Results: Select the Results tab to view a table list of all clusters that are associated with the policy.
From the Message column, click the View details link to view the template details, template YAML, and related resources. Click the View history link to view the violation message and the time of the last report.
3.2. Governance automation configuration
If there is a configured automation for a specific policy, you can select the automation to view more details. View the following descriptions of the schedule frequency options for your automation:
- Manual run: Manually set this automation to run once. After the automation runs, it is set to disabled. Note: You can only select Manual run mode when the schedule frequency is disabled.
- Run once mode: When a policy is violated, the automation runs one time. After the automation runs, it is set to disabled. After the automation is set to disabled, you must continue to run the automation manually. When you run once mode, the extra variable of target_clusters is automatically supplied with the list of clusters that violated the policy. The Ansible Automation Platform job template must have PROMPT ON LAUNCH enabled for the EXTRA VARIABLES section (also known as extra_vars).
- Run everyEvent mode: When a policy is violated, the automation runs every time for each unique policy violation per managed cluster. Use the DelayAfterRunSeconds parameter to set the minimum number of seconds before an automation can be restarted on the same cluster. If the policy is violated multiple times during the delay period and remains in the violated state, the automation runs one time after the delay period. The default is 0 seconds and is only applicable for the everyEvent mode. When you run everyEvent mode, the extra variable of target_clusters and the Ansible Automation Platform job template are the same as once mode.
- Disable automation: When the scheduled automation is set to disabled, the automation does not run until the setting is updated.
Review the following topics to learn more about creating and updating your security policies:
Refer to Governance for more topics.
3.3. Configuring Ansible Tower for governance
Red Hat Advanced Cluster Management for Kubernetes governance can be integrated with Ansible Tower automation to create policy violation automations. You can configure the automation from the Red Hat Advanced Cluster Management console.
3.3.1. Prerequisites
- Red Hat OpenShift Container Platform 4.5 or later
- You must have Ansible Tower version 3.7.3 or a later version installed. It is best practice to install the latest supported version of Ansible Tower. See Red Hat Ansible Tower documentation for more details.
- Install the Ansible Automation Platform Resource Operator onto your hub cluster to connect Ansible jobs to the governance framework. For best results when using the AnsibleJob to launch Ansible Tower jobs, the Ansible Tower job template should be idempotent when it is run. If you do not have the Ansible Automation Platform Resource Operator, you can find it on the Red Hat OpenShift Container Platform OperatorHub page.
For more information about installing and configuring Ansible Tower automation, see Setting up Ansible tasks.
3.3.2. Creating a policy violation automation from the console
After you log into your Red Hat Advanced Cluster Management hub cluster, select Governance from the navigation menu, and then click on the Policies tab to view the policy tables.
Configure an automation for a specific policy by clicking Configure in the Automation column. When the policy automation panel appears, you can create the automation. From the Ansible credential section, click the drop-down menu to select an Ansible credential. If you need to add a credential, see Managing credentials overview.
Note: This credential is copied to the same namespace as the policy. The credential is used by the AnsibleJob resource that is created to initiate the automation. Changes to the Ansible credential in the Credentials section of the console are automatically applied to the copied credential.
After a credential is selected, click the Ansible job drop-down list to select a job template. In the Extra variables section, add the parameter values from the extra_vars section of the PolicyAutomation. Select the frequency of the automation. You can select Run once mode, Run everyEvent mode, or Disable automation.
Save your policy violation automation by selecting Submit. When you select the View Job link from the Ansible job details side panel, the link directs you to the job template on the Search page. After you successfully create the automation, it is displayed in the Automation column.
Note: When you delete a policy that has an associated policy automation, the policy automation is automatically deleted as part of clean up.
Your policy violation automation is created from the console.
3.3.3. Creating a policy violation automation from the CLI
Complete the following steps to configure a policy violation automation from the CLI:
- From your terminal, log in to your Red Hat Advanced Cluster Management hub cluster using the oc login command.
- Find or create a policy that you want to add an automation to. Note the policy name and namespace.
- Create a PolicyAutomation resource. You can use the sample after this procedure as a guide.
- The Ansible job template name in the sample is Policy Compliance Template. Change that value to match your job template name.
- In the extra_vars section, add any parameters you need to pass to the Ansible job template.
- Set the mode to either once, everyEvent, or disabled.
- Set the policyRef to the name of your policy.
- Create a secret in the same namespace as this PolicyAutomation resource that contains the Ansible Tower credential. In the sample, the secret name is ansible-tower. Use the sample from application lifecycle to see how to create the secret.
- Create the PolicyAutomation resource.

Notes:

- An immediate run of the policy automation can be initiated by adding the following annotation to the PolicyAutomation resource:

  metadata:
    annotations:
      policy.open-cluster-management.io/rerun: "true"

- When the policy is in once mode, the automation runs when the policy is non-compliant. The extra_vars variable named target_clusters is added, and the value is an array of each managed cluster name where the policy is non-compliant.
- When the policy is in everyEvent mode and the DelayAfterRunSeconds period has elapsed while the policy remains non-compliant, the automation runs for every policy violation.
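The following PolicyAutomation resource is a minimal sketch to use as a guide. The policy name sample-policy, its namespace, and the extra_vars entry are placeholders; the job template name and secret name match the values described in the previous steps:

apiVersion: policy.open-cluster-management.io/v1beta1
kind: PolicyAutomation
metadata:
  name: sample-policy-automation
  namespace: sample-policy-namespace
spec:
  policyRef: sample-policy
  mode: disabled
  automationDef:
    type: AnsibleJob
    # Name of the Ansible job template in Ansible Tower
    name: Policy Compliance Template
    # Secret in this namespace that contains the Ansible Tower credential
    secret: ansible-tower
    extra_vars:
      sample_var: sample_value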
3.4. Deploy policies using GitOps
You can deploy a set of policies across a fleet of managed clusters with the governance framework. You can contribute to, and use, the policies in the open source policy-collection repository. For more information, see Contributing a custom policy. Policies in the stable and community folders of the open source repository are organized according to NIST Special Publication 800-53.
Continue reading to learn best practices to use GitOps to automate and track policy updates and creation through a Git repository.
Prerequisite: Before you begin, be sure to fork the policy-collection repository.
3.4.1. Customizing your local repository
Customize your local repository by consolidating the stable and community policies into a single folder. Remove the policies you do not want to use. Complete the following steps to customize your local repository:
Create a new directory in the repository to hold the policies that you want to deploy. Be sure that you are in your local policy-collection repository on your default branch for GitOps. Run the following command:

mkdir my-policies

Copy all of the stable and community policies into your my-policies directory. Start with the community policies first, in case the stable folder contains duplicates of what is available in the community. Run the following commands:

cp -R community/* my-policies/

cp -R stable/* my-policies/

Now that you have all of the policies in a single parent directory structure, you can edit the policies in your fork.
Tips:
- It is best practice to remove the policies you are not planning to use.
Learn about policies and the definition of the policies from the following list:
- Purpose: Understand what the policy does.
- Remediation Action: Does the policy only inform you of compliance, or enforce the policy and make changes? See the spec.remediationAction parameter. If changes are enforced, make sure you understand the functional expectation. Remember to check which policies support enforcement. For more information, view the Validate section. Note: The spec.remediationAction set for the policy overrides any remediation action that is set in the individual spec.policy-templates.
- Placement: What clusters is the policy deployed to? By default, most policies target the clusters with the environment: dev label. Some policies might target OpenShift Container Platform clusters or another label. You can update or add labels to include other clusters. When there is no specific value, the policy is applied to all of your clusters. You can also create multiple copies of a policy and customize each one if you want to use a policy that is configured one way for one set of clusters and configured another way for another set of clusters.
3.4.2. Committing to your local repository
After you are satisfied with the changes you have made to your directory, commit and push your changes to Git so that they can be accessed by your cluster.
Note: This example is used to show the basics of how to use policies with GitOps, so you might have a different workflow to get changes to your branch.
Complete the following steps:
From your terminal, run git status to view your recent changes in the directory that you previously created. Add your new directory to the list of changes to be committed with the following command:

git add my-policies/

Commit the changes and customize your message. Run the following command:

git commit -m "Policies to deploy to the hub cluster"

Push the changes to the branch of your forked repository that is used for GitOps. Run the following command:

git push origin <your_default_branch>
Your changes are committed.
3.4.3. Deploying policies to your cluster
After you push your changes, you can deploy the policies to your Red Hat Advanced Cluster Management for Kubernetes installation. After deployment, your hub cluster is connected to your Git repository. Any further changes to your chosen branch of the Git repository are reflected in your cluster.
Note: By default, policies deployed with GitOps use the merge reconcile option. If you want to use the replace reconcile option instead, add the apps.open-cluster-management.io/reconcile-option: replace annotation to the Subscription resource. See Application Lifecycle for more details.
The deploy.sh script creates Channel and Subscription resources in your hub cluster. The channel connects to the Git repository, and the subscription specifies the data to bring to the cluster through the channel. As a result, all policies defined in the specified subdirectory are created on your hub. After the policies are created by the subscription, Red Hat Advanced Cluster Management analyzes the policies and creates additional policy resources in the namespace associated with each managed cluster that the policy is applied to, based on the defined placement rule.
The policy is then copied to the managed cluster from its respective managed cluster namespace on the hub cluster. As a result, the policies in your Git repository are pushed to all managed clusters that have labels that match the clusterSelector that are defined in the placement rule of your policy.
Complete the following steps:
From the policy-collection folder, run the following command to change the directory:

cd deploy

Make sure that your command line interface (CLI) is configured to create resources on the correct cluster with the following command:

oc cluster-info

The output of the command displays the API server details for the cluster where Red Hat Advanced Cluster Management is installed. If the correct URL is not displayed, configure your CLI to point to the correct cluster. See Using the OpenShift CLI for more information.

Create a namespace where your policies are created to control access and to organize the policies. Run the following command:

oc create namespace policy-namespace

Run the following command to deploy the policies to your cluster:

./deploy.sh -u https://github.com/<your-repository>/policy-collection -p my-policies -n policy-namespace

Replace your-repository with your Git user name or repository name.

Note: For reference, the full list of arguments for the deploy.sh script uses the following syntax:

./deploy.sh [-u <url>] [-b <branch>] [-p <path/to/dir>] [-n <namespace>] [-a|--name <resource-name>]

View the following explanations for each argument:
- URL: The URL to the repository that you forked from the main policy-collection repository. The default URL is https://github.com/stolostron/policy-collection.git.
- Branch: Branch of the Git repository to point to. The default branch is main.
- Subdirectory Path: The subdirectory path you created to contain the policies you want to use. In the previous sample, we used the my-policies subdirectory, but you can also specify which folder you want to start with. For example, you can use my-policies/AC-Access-Control. The default folder is stable.
- Namespace: The namespace where the resources and policies are created on the hub cluster. These instructions use the policy-namespace namespace. The default namespace is policies.
- Name Prefix: Prefix for the Channel and Subscription resources. The default is demo-stable-policies.
After you run the deploy.sh script, any user with access to the repository can commit changes to the branch, which pushes changes to existing policies on your clusters.
Note: To deploy policies with subscriptions, complete the following steps:
- Bind the open-cluster-management:subscription-admin ClusterRole to the user creating the subscription.
- If you are using an allow list in the subscription, include the API entries for the policy and placement resources that you deploy. A sketch follows this list.
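The following sketch shows what the allow list entries might resemble, assuming that the subscription deploys only Policy, PlacementBinding, and PlacementRule resources; adjust the API groups and kinds to match the resources in your repository:

spec:
  allow:
  - apiVersion: policy.open-cluster-management.io/v1
    kinds:
    - Policy
    - PlacementBinding
  - apiVersion: apps.open-cluster-management.io/v1
    kinds:
    - PlacementRule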
3.4.4. Verifying GitOps policy deployments from the console
Verify that your changes were applied to your policies from the console. You can also make more changes to your policy from the console; however, the changes are reverted when the Subscription is reconciled with the Git repository. Complete the following steps:
- Log in to your Red Hat Advanced Cluster Management cluster.
- From the navigation menu, select Governance.
- Locate the policies that you deployed in the table. Policies that are deployed using GitOps have a Git label in the Source column. Click the label to view the details for the Git repository.
3.4.4.1. Verifying GitOps policy deployments from the CLI
Complete the following steps:
Check for the following policy details:
- Why is a specific policy compliant or non-compliant on the clusters that it was distributed to?
- Are the policies applied to the correct clusters?
- If this policy is not distributed to any clusters, why?
Identify the GitOps deployed policies that you created or modified. The GitOps deployed policies can be identified by the annotation that is applied automatically. Annotations for the GitOps deployed policies resemble the following paths:
apps.open-cluster-management.io/hosting-deployable: policies/deploy-stable-policies-Policy-policy-role9
apps.open-cluster-management.io/hosting-subscription: policies/demo-policies
apps.open-cluster-management.io/sync-source: subgbk8s-policies/demo-policies

GitOps annotations are valuable to see which subscription created the policy. You can also add your own labels to your policies so that you can write runtime queries that select policies based on labels.
For example, you can add a label to a policy with the following command:
oc label policies.policy.open-cluster-management.io <policy-name> -n <policy-namespace> <key>=<value>
Then, you can query policies that have labels with the following command:
oc get policies.policy.open-cluster-management.io -n <policy-namespace> -l <key>=<value>
Your policies are deployed using GitOps.
3.5. Support for templates in configuration policies
Configuration policies support the inclusion of Golang text templates in the object definitions. These templates are resolved at runtime either on the hub cluster or the target managed cluster using configurations related to that cluster. This gives you the ability to define configuration policies with dynamic content, and inform or enforce Kubernetes resources that are customized to the target cluster.
3.5.1. Prerequisite
- The template syntax must conform to the Golang template language specification, and the resource definition generated from the resolved template must be valid YAML. See the Golang documentation about Package templates for more information. Any errors in template validation are recognized as policy violations. When you use a custom template function, the values are replaced at runtime.
3.5.2. Template functions
Template functions, such as resource-specific and generic lookup template functions, are available for referencing Kubernetes resources on the hub cluster (using the {{hub … hub}} delimiters) or the managed cluster (using the {{ … }} delimiters). See Support for hub cluster templates in configuration policies for more details. The resource-specific functions are used for convenience and make the content of the resources more accessible. If you use the generic function, lookup, which is more advanced, it is best to be familiar with the YAML structure of the resource that is being looked up. In addition to these functions, utility functions like base64encode, base64decode, indent, autoindent, toInt, toBool, and more are also available.
To conform templates with YAML syntax, templates must be set in the policy resource as strings using quotes or a block character (| or >). This causes the resolved template value to also be a string. To override this, consider using toInt or toBool as the final function in the template to initiate further processing that forces the value to be interpreted as an integer or boolean respectively.
Continue reading to view descriptions and examples for some of the custom template functions that are supported:
3.5.2.1. fromSecret function
The fromSecret function returns the value of the given data key in the secret. View the following syntax for the function:
func fromSecret (ns string, secretName string, datakey string) (dataValue string, err error)
When you use this function, enter the namespace, name, and data key of a Kubernetes Secret resource. You must use the same namespace that is used for the policy when using the function in a hub cluster template. See Support for hub cluster templates in configuration policies for more details.
Note: When you use this function with hub cluster templates, the output is automatically encrypted using the protect function.
You receive a policy violation if the Kubernetes Secret resource does not exist on the target cluster. If the data key does not exist on the target cluster, the value becomes an empty string. View the following configuration policy that enforces a Secret resource on the target cluster. The value for the PASSWORD data key is a template that references the secret on the target cluster:
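A minimal sketch of such a policy follows. The secret name localsecret and the enforced secret name demosecret are placeholder values:

apiVersion: policy.open-cluster-management.io/v1
kind: ConfigurationPolicy
metadata:
  name: demo-fromsecret
  namespace: test
spec:
  remediationAction: enforce
  severity: low
  object-templates:
  - complianceType: musthave
    objectDefinition:
      apiVersion: v1
      kind: Secret
      metadata:
        name: demosecret
        namespace: test
      type: Opaque
      data:
        # Copies the PASSWORD key from the localsecret secret on the target cluster.
        # fromSecret returns the base64-encoded value, so no further encoding is needed.
        PASSWORD: '{{ fromSecret "default" "localsecret" "PASSWORD" }}'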
3.5.2.2. fromConfigmap function
The fromConfigmap function returns the value of the given data key in the ConfigMap. View the following syntax for the function:
func fromConfigMap (ns string, configmapName string, datakey string) (dataValue string, err Error)
When you use this function, enter the namespace, name, and data key of a Kubernetes ConfigMap resource. You must use the same namespace that is used for the policy when using the function in a hub cluster template. See Support for hub cluster templates in configuration policies for more details. You receive a policy violation if the Kubernetes ConfigMap resource does not exist on the target cluster. If the data key does not exist on the target cluster, the value becomes an empty string. View the following configuration policy that enforces a Kubernetes resource on the target managed cluster. The value for the log-file data key is a template that retrieves the value of log-file from the logs-config ConfigMap in the default namespace, and log-level is set to the value of the log-level data key.
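A minimal sketch of such a policy follows. The ConfigMap name demo-app-config is a placeholder; the logs-config ConfigMap and its data keys match the description in the previous paragraph:

apiVersion: policy.open-cluster-management.io/v1
kind: ConfigurationPolicy
metadata:
  name: demo-fromcm
  namespace: test
spec:
  remediationAction: enforce
  severity: low
  object-templates:
  - complianceType: musthave
    objectDefinition:
      apiVersion: v1
      kind: ConfigMap
      metadata:
        name: demo-app-config
        namespace: test
      data:
        # Both values are read from the logs-config ConfigMap in the default namespace.
        log-file: '{{ fromConfigMap "default" "logs-config" "log-file" }}'
        log-level: '{{ fromConfigMap "default" "logs-config" "log-level" }}'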
3.5.2.3. fromClusterClaim function
The fromClusterClaim function returns the value of the Spec.Value in the ClusterClaim resource. View the following syntax for the function:
func fromClusterClaim (clusterclaimName string) (value map[string]interface{}, err Error)
When you use this function, enter the name of a Kubernetes ClusterClaim resource. You receive a policy violation if the ClusterClaim resource does not exist. View the following example of the configuration policy that enforces a Kubernetes resource on the target managed cluster. The value for the platform data key is a template that retrieves the value of the platform.open-cluster-management.io cluster claim. Similarly, it retrieves values for product and version from the ClusterClaim:
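A minimal sketch of such a policy follows. The ConfigMap name is a placeholder, and the product and version claim names are assumptions; substitute the ClusterClaim names that exist on your managed clusters:

apiVersion: policy.open-cluster-management.io/v1
kind: ConfigurationPolicy
metadata:
  name: demo-clusterclaims
  namespace: default
spec:
  remediationAction: enforce
  severity: low
  object-templates:
  - complianceType: musthave
    objectDefinition:
      apiVersion: v1
      kind: ConfigMap
      metadata:
        name: sample-app-config
        namespace: default
      data:
        # Each value is read from a ClusterClaim on the managed cluster.
        platform: '{{ fromClusterClaim "platform.open-cluster-management.io" }}'
        product: '{{ fromClusterClaim "product.open-cluster-management.io" }}'
        version: '{{ fromClusterClaim "version.openshift.io" }}'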
3.5.2.4. lookup function
The lookup function returns the Kubernetes resource as a JSON compatible map. If the requested resource does not exist, an empty map is returned. If the resource does not exist and the value is provided to another template function, you might get the following error: invalid value; expected string.
Note: Use the default template function, so the correct type is provided to later template functions. See the Open source community functions section.
View the following syntax for the function:
func lookup (apiversion string, kind string, namespace string, name string) (value string, err Error)
When you use this function, enter the API version, kind, namespace, and name of the Kubernetes resource. You must use the same namespace that is used for the policy within a hub cluster template. See Support for hub cluster templates in configuration policies for more details. View the following example of the configuration policy that enforces a Kubernetes resource on the target managed cluster. The value for the metrics-url data key is a template that retrieves the v1/Service Kubernetes resource named metrics from the default namespace, and is set to the value of Spec.ClusterIP in the queried resource:
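A minimal sketch of such a policy follows. The ConfigMap name and the port are placeholders; the lookup arguments match the metrics Service described in the previous paragraph:

apiVersion: policy.open-cluster-management.io/v1
kind: ConfigurationPolicy
metadata:
  name: demo-lookup
  namespace: test
spec:
  remediationAction: enforce
  severity: low
  object-templates:
  - complianceType: musthave
    objectDefinition:
      apiVersion: v1
      kind: ConfigMap
      metadata:
        name: demo-app-config
        namespace: test
      data:
        # Looks up the metrics Service in the default namespace and reads its cluster IP.
        metrics-url: |
          http://{{ (lookup "v1" "Service" "default" "metrics").spec.clusterIP }}:8080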
3.5.2.5. base64enc function
The base64enc function returns a base64 encoded value of the input data string. View the following syntax for the function:
func base64enc (data string) (enc-data string)
When you use this function, enter a string value. View the following example of the configuration policy that uses the base64enc function:
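A minimal sketch follows, assuming a hypothetical ConfigMap named app-config with a user-name key; the plain-text value is encoded before it is stored in the Secret:

object-templates:
- complianceType: musthave
  objectDefinition:
    apiVersion: v1
    kind: Secret
    metadata:
      name: demo-secret
      namespace: test
    type: Opaque
    data:
      # base64enc encodes the plain-text ConfigMap value so that it is valid Secret data.
      USER_NAME: '{{ fromConfigMap "default" "app-config" "user-name" | base64enc }}'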
3.5.2.6. base64dec function
The base64dec function returns a base64 decoded value of the input enc-data string. View the following syntax for the function:
func base64dec (enc-data string) (data string)
When you use this function, enter a string value. View the following example of the configuration policy that uses the base64dec function:
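A minimal sketch follows, assuming a hypothetical Secret named app-secret with an appname key; the stored value is decoded before it is written to the ConfigMap:

object-templates:
- complianceType: musthave
  objectDefinition:
    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: demo-configmap
      namespace: test
    data:
      # base64dec decodes the Secret value that lookup returns in its encoded form.
      app-name: '{{ (lookup "v1" "Secret" "default" "app-secret").data.appname | base64dec }}'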
3.5.2.7. indent function
The indent function returns the padded data string. View the following syntax for the function:
func indent (spaces int, data string) (padded-data string)
When you use this function, enter a data string with the specific number of spaces. View the following example of the configuration policy that uses the indent function:
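A minimal sketch follows, assuming a hypothetical Secret named demo-cert-tls that holds a ca.pem key. The number of spaces that you pass to indent depends on how deeply the multi-line value is nested in your object definition:

object-templates:
- complianceType: musthave
  objectDefinition:
    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: demo-configmap
      namespace: test
    data:
      # indent 4 pads each line of the decoded certificate so that the multi-line
      # value stays aligned inside the block scalar after the template is resolved.
      ca-bundle: |
        {{ fromSecret "default" "demo-cert-tls" "ca.pem" | base64dec | indent 4 }}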
3.5.2.8. autoindent function
The autoindent function acts like the indent function, but automatically determines the number of leading spaces based on the number of spaces before the template. View the following example of the configuration policy that uses the autoindent function:
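A minimal sketch follows, using the same hypothetical demo-cert-tls Secret; autoindent removes the need to count the spaces yourself:

object-templates:
- complianceType: musthave
  objectDefinition:
    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: demo-configmap
      namespace: test
    data:
      # autoindent derives the padding from the number of spaces before the template expression.
      ca-bundle: |
        {{ fromSecret "default" "demo-cert-tls" "ca.pem" | base64dec | autoindent }}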
3.5.2.9. toInt function
The toInt function casts and returns the integer value of the input value. Also, when this is the last function in the template, there is further processing of the source content. This is to ensure that the value is interpreted as an integer by the YAML. View the following syntax for the function:
func toInt (input interface{}) (output int)
When you use this function, enter the data that needs to be cast as an integer. View the following example of the configuration policy that uses the toInt function:
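A minimal sketch follows, assuming a hypothetical ConfigMap named service-config that stores the port number as a string; toInt is the last function so that the port field receives an integer:

object-templates:
- complianceType: musthave
  objectDefinition:
    apiVersion: v1
    kind: Service
    metadata:
      name: demo-service
      namespace: test
    spec:
      selector:
        app: demo
      ports:
      - name: http
        protocol: TCP
        # toInt forces the resolved value to be interpreted as an integer.
        port: '{{ fromConfigMap "default" "service-config" "port" | toInt }}'
        targetPort: 8080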
3.5.2.10. toBool function
The toBool function converts the input string into a boolean, and returns the boolean. Also, when this is the last function in the template, there is further processing of the source content. This is to ensure that the value is interpreted as a boolean by the YAML. View the following syntax for the function:
func toBool (input string) (output bool)
When you use this function, enter the string data that needs to be converted to a boolean. View the following example of the configuration policy that uses the toBool function:
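A minimal sketch follows, assuming a hypothetical ConfigMap named app-config with a lock-settings key that contains "true" or "false"; toBool is the last function so that the immutable field receives a boolean:

object-templates:
- complianceType: musthave
  objectDefinition:
    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: demo-app-config
      namespace: test
    # toBool forces the resolved value to be interpreted as a boolean,
    # which the immutable field requires.
    immutable: '{{ fromConfigMap "default" "app-config" "lock-settings" | toBool }}'
    data:
      app: demo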
3.5.2.11. protect function
The protect function enables you to encrypt a string in a hub cluster policy template. It is automatically decrypted on the managed cluster when the policy is evaluated. View the following example of the configuration policy that uses the protect function:
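A minimal sketch follows, assuming a hypothetical ConfigMap named hub-app-config in the policy namespace on the hub cluster; the hub cluster lookup value is encrypted by protect before the policy is replicated:

object-templates:
- complianceType: musthave
  objectDefinition:
    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: demo-app-config
      namespace: test
    data:
      # The namespace passed to lookup must be the namespace of the policy on the hub cluster.
      # protect encrypts the resolved value in the replicated policy; it is decrypted on the
      # managed cluster when the policy is evaluated.
      app-token: '{{hub (lookup "v1" "ConfigMap" "policy-namespace" "hub-app-config").data.token | protect hub}}'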
In the previous YAML example, there is an existing hub cluster policy template that is defined to use the lookup function. On the replicated policy in the managed cluster namespace, the value might resemble the following syntax: $ocm_encrypted:okrrBqt72oI+3WT/0vxeI3vGa+wpLD7Z0ZxFMLvL204=
The encryption algorithm is AES-CBC with 256-bit keys. Each encryption key is unique per managed cluster and is automatically rotated every 30 days.
This ensures that your decrypted value is never stored in the policy on the managed cluster.
To force an immediate rotation, delete the policy.open-cluster-management.io/last-rotated annotation on the policy-encryption-key Secret in the managed cluster namespace on the hub cluster. Policies are then reprocessed to use the new encryption key.
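For example, a command that resembles the following removes the annotation, assuming a managed cluster namespace named cluster1 (the trailing hyphen is the standard oc syntax for removing an annotation):

oc -n cluster1 annotate secret policy-encryption-key policy.open-cluster-management.io/last-rotated-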
3.5.2.12. toLiteral function
The toLiteral function removes any quotation marks around the template string after it is processed. You can use this function to convert a JSON string from a ConfigMap field to a JSON value in the manifest. Run the following function to remove quotation marks from the key parameter value:
key: '{{ "[\"10.10.10.10\", \"1.1.1.1\"]" | toLiteral }}'
After using the toLiteral function, the following update is displayed:
key: ["10.10.10.10", "1.1.1.1"]
3.5.2.13. Open source community functions
Additionally, Red Hat Advanced Cluster Management supports the following template functions that are included from the sprig open source project:
- cat
- contains
- default
- empty
- fromJson
- hasPrefix
- hasSuffix
- join
- list
- lower
- mustFromJson
- quote
- replace
- semver
- semverCompare
- split
- splitn
- ternary
- trim
- until
- untilStep
- upper
See the Sprig Function Documentation for more details.
3.5.3. Support for hub cluster templates in configuration policies
In addition to managed cluster templates that are dynamically customized to the target cluster, Red Hat Advanced Cluster Management also supports hub cluster templates to define configuration policies using values from the hub cluster. This combination reduces the need to create separate policies for each target cluster or hardcode configuration values in the policy definitions.
Hub cluster templates are based on Golang text template specifications, and the {{hub … hub}} delimiter indicates a hub cluster template in a configuration policy.
For security, both resource-specific and the generic lookup functions in hub cluster templates are restricted to the namespace of the policy on the hub cluster. View the Comparison of hub and managed cluster templates for additional details.
Important: If you use hub cluster templates to propagate secrets or other sensitive data, the sensitive data exists in the managed cluster namespace on the hub cluster and on the managed clusters where that policy is distributed. The template content is expanded in the policy, and policies are not encrypted by the OpenShift Container Platform ETCD encryption support. To address this, use fromSecret, which automatically encrypts the values from the Secret, or protect to encrypt other values.
3.5.3.1. Template processing
A configuration policy definition can contain both hub cluster and managed cluster templates. Hub cluster templates are processed first on the hub cluster, then the policy definition with resolved hub cluster templates is propagated to the target clusters. On the managed cluster, the ConfigurationPolicyController processes any managed cluster templates in the policy definition and then enforces or verifies the fully resolved object definition.
3.5.3.2. Special annotation for reprocessing
Policies are processed on the hub cluster only upon creation or after an update. Therefore, hub cluster templates are only resolved to the data in the referenced resources upon policy creation or update. Any changes to the referenced resources are not automatically synced to the policies.
A special annotation, policy.open-cluster-management.io/trigger-update, can be used to indicate changes to the data referenced by the templates. Any change to the special annotation value initiates template processing, and the latest contents of the referenced resource are read and updated in the policy definition that is propagated for processing on managed clusters. A typical way to use this annotation is to increment the value by one each time.
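For example, updating the annotation on the root policy might resemble the following, where you increment the value each time the referenced data changes:

metadata:
  annotations:
    # Changing this value triggers the hub cluster templates to be reprocessed.
    policy.open-cluster-management.io/trigger-update: "2"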
3.5.3.3. Bypass template processing
You might create a policy that contains a template that is not intended to be processed by Red Hat Advanced Cluster Management. By default, Red Hat Advanced Cluster Management processes all templates.
To bypass template processing for your hub cluster, you must change {{ template content }} to {{ `{{ template content }}` }}.
Alternatively, you can add the following annotation in the ConfigurationPolicy section of your Policy: policy.open-cluster-management.io/disable-templates: "true". When this annotation is included, the previous workaround is not necessary. Template processing is bypassed for the ConfigurationPolicy.
See the following table for a comparison of hub cluster and managed cluster templates:
3.5.3.4. Comparison of hub cluster and managed cluster templates
| Templates | Hub cluster | Managed cluster |
|---|---|---|
| Syntax | Golang text template specification | Golang text template specification |
| Delimiter | {{hub … hub}} | {{ … }} |
| Context | A | No context variables |
| Access control | You can only reference namespaced Kubernetes objects that are in the same namespace as the Policy resource on the hub cluster. | You can reference any resource on the cluster. |
| Functions | A set of template functions that support dynamic access to Kubernetes resources and string manipulation. See Template functions for more information. See the Access control row for lookup restrictions. | A set of template functions that support dynamic access to Kubernetes resources and string manipulation. See Template functions for more information. |
| Function output storage | The output of template functions is stored in the replicated policy resources in each applicable managed cluster namespace on the hub cluster. | The output of template functions is not stored in policy related resource objects. |
| Processing | Processing occurs at runtime on the hub cluster during propagation of replicated policies to clusters. Policies and the hub cluster templates within the policies are processed on the hub cluster only when templates are created or updated. | Processing occurs in the ConfigurationPolicyController on the managed cluster. |
| Processing errors | Errors from the hub cluster templates are displayed as violations on the managed clusters the policy applies to. | Errors from the managed cluster templates are displayed as violations on the specific target cluster where the violation occurred. |
3.6. Governance metric
The policy framework exposes metrics that show policy distribution and compliance. Use the policy_governance_info metric on the hub cluster to view trends and analyze any policy failures.
3.6.1. Metric overview
See the following topics for an overview of metrics.
3.6.1.1. Metric: policy_governance_info
The policy_governance_info metric is collected by OpenShift Container Platform monitoring, and some aggregate data is collected by Red Hat Advanced Cluster Management observability, if it is enabled.
Note: If observability is enabled, you can enter a query for the metric from the Grafana Explore page.
When you create a policy, you are creating a root policy. The framework watches for root policies as well as PlacementRules and PlacementBindings, to determine where to create propagated policies in order to distribute the policy to managed clusters. For both root and propagated policies, a metric of 0 is recorded if the policy is compliant, and 1 if it is non-compliant.
The policy_governance_info metric uses the following labels:
- type: The label values are root or propagated.
- policy: The name of the associated root policy.
- policy_namespace: The namespace on the hub cluster where the root policy was defined.
- cluster_namespace: The namespace for the cluster where the policy is distributed.
These labels and values enable queries that can show us many things happening in the cluster that might be difficult to track.
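For example, a query that resembles the following counts the non-compliant propagated policies for each managed cluster namespace; the label names are the ones in the previous list:

count by (cluster_namespace) (policy_governance_info{type="propagated"} == 1)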
Note: If the metrics are not needed, and there are any concerns about performance or security, this feature can be disabled. Set the DISABLE_REPORT_METRICS environment variable to true in the propagator deployment. You can also add the policy_governance_info metric to the observability allowlist as a custom metric. See Adding custom metrics for more details.
3.6.1.2. Metric: config_policies_evaluation_duration_seconds
The config_policies_evaluation_duration_seconds histogram tracks the number of seconds it takes to process all configuration policies that are ready to be evaluated on the cluster. Use the following metrics to query the histogram:
- config_policies_evaluation_duration_seconds_bucket: The buckets are cumulative and represent seconds with the following possible entries: 1, 3, 9, 10.5, 15, 30, 60, 90, 120, 180, 300, 450, 600, and greater.
- config_policies_evaluation_duration_seconds_count: The count of all events.
- config_policies_evaluation_duration_seconds_sum: The sum of all values.
Use config_policies_evaluation_duration_seconds to determine if the ConfigurationPolicy evaluationInterval setting needs to be changed for resource-intensive policies that do not need frequent evaluation. You can also increase the concurrency at the cost of higher resource utilization on the Kubernetes API server. See Configure the configuration policy controller for more details.
To receive information about the time used to evaluate configuration policies, perform a Prometheus query that resembles the following expression:
rate(config_policies_evaluation_duration_seconds_sum[10m]) / rate(config_policies_evaluation_duration_seconds_count[10m])
The config-policy-controller pod running on managed clusters in the open-cluster-management-agent-addon namespace calculates the metric. The config-policy-controller does not send the metric to Observability by default.
3.7. Managing security policies
Create a security policy to report and validate your cluster compliance based on your specified security standards, categories, and controls.
View the following sections:
3.7.1. Creating a security policy
You can create a security policy from the command line interface (CLI) or from the console.
Required access: Cluster administrator
Important: You must define a placement rule and placement binding to apply your policy to a specific cluster. Enter a valid value for the Cluster selector field to define a PlacementRule and PlacementBinding. See Resources that support set-based requirements in the Kubernetes documentation for a valid expression. View the definitions of the objects that are required for your Red Hat Advanced Cluster Management policy:
- PlacementRule: Defines a cluster selector where the policy must be deployed.
- PlacementBinding: Binds the placement to a placement rule.
View more descriptions of the policy YAML files in the Policy overview.
3.7.1.1. Creating a security policy from the command line interface
Complete the following steps to create a policy from the command line interface (CLI):
Create a policy by running the following command:

kubectl create -f policy.yaml -n <policy-namespace>

Define the template that the policy uses. Edit your .yaml file by adding a policy-templates field to define a template.

Define a PlacementRule. Be sure to change the PlacementRule to specify the clusters where the policies need to be applied by adjusting the clusterSelector. View the Placement rule samples overview.

Define a PlacementBinding to bind your policy to your PlacementRule.

Your policy, PlacementRule, and PlacementBinding might resemble the sample after this procedure.
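The following sample is a minimal sketch of the three objects. The policy name policy-pod-example, the nginx pod, and the environment: dev cluster selector are placeholder values; replace them with your own content:

apiVersion: policy.open-cluster-management.io/v1
kind: Policy
metadata:
  name: policy-pod-example
  namespace: policy-namespace
  annotations:
    policy.open-cluster-management.io/standards: NIST SP 800-53
    policy.open-cluster-management.io/categories: CM Configuration Management
    policy.open-cluster-management.io/controls: CM-2 Baseline Configuration
spec:
  remediationAction: inform
  disabled: false
  policy-templates:
  - objectDefinition:
      apiVersion: policy.open-cluster-management.io/v1
      kind: ConfigurationPolicy
      metadata:
        name: policy-pod-example
      spec:
        remediationAction: inform
        severity: low
        object-templates:
        - complianceType: musthave
          objectDefinition:
            apiVersion: v1
            kind: Pod
            metadata:
              name: sample-nginx-pod
              namespace: default
            spec:
              containers:
              - name: nginx
                image: nginx:1.18.0
---
apiVersion: apps.open-cluster-management.io/v1
kind: PlacementRule
metadata:
  name: placement-policy-pod-example
  namespace: policy-namespace
spec:
  clusterSelector:
    matchExpressions:
    - key: environment
      operator: In
      values:
      - dev
---
apiVersion: policy.open-cluster-management.io/v1
kind: PlacementBinding
metadata:
  name: binding-policy-pod-example
  namespace: policy-namespace
placementRef:
  name: placement-policy-pod-example
  apiGroup: apps.open-cluster-management.io
  kind: PlacementRule
subjects:
- name: policy-pod-example
  apiGroup: policy.open-cluster-management.io
  kind: Policy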
3.7.1.1.1. Viewing your security policy from the CLI
Complete the following steps to view your security policy from the CLI:
View details for a specific security policy by running the following command:
kubectl get policies.policy.open-cluster-management.io <policy-name> -n <policy-namespace> -o yaml
View a description of your security policy by running the following command:
kubectl describe policies.policy.open-cluster-management.io <policy-name> -n <policy-namespace>
3.7.1.2. Creating a cluster security policy from the console
After you log into your Red Hat Advanced Cluster Management, navigate to the Governance page and click Create policy.
As you create your new policy from the console, a YAML file is also created in the YAML editor. To view the YAML editor, select the toggle at the beginning of the Create policy form to enable it.
Complete the Create policy form, then select the Submit button.
Your YAML file might resemble the policy sample in Creating a security policy from the command line interface.
Click Create Policy. A security policy is created from the console.
3.7.1.2.1. Viewing your security policy from the console
View any security policy and its status from the console. Navigate to the Governance page to view a table list of your policies. Note: You can filter the table list of your policies by selecting the Policies tab or Cluster violations tab.
Select one of your policies to view more details. The Details, Clusters, and Templates tabs are displayed. When the cluster or policy status cannot be determined, the following message is displayed: No status.
3.7.1.3. Creating policy sets from the CLI
By default, the policy set is created with no policies or placements. You must create a placement for the policy set and have at least one policy that exists on your cluster. When you create a policy set, you can add numerous policies. Run the following command to create a policy set from the CLI:
kubectl apply -f <policyset-filename>
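A minimal sketch of a policy set file follows. The names and the referenced policy are placeholders, and the PlacementBinding shows one way to give the policy set a placement by reusing an existing PlacementRule:

apiVersion: policy.open-cluster-management.io/v1beta1
kind: PolicySet
metadata:
  name: demo-policyset
  namespace: policy-namespace
spec:
  description: Example policy set
  policies:
  - policy-pod-example
---
apiVersion: policy.open-cluster-management.io/v1
kind: PlacementBinding
metadata:
  name: binding-demo-policyset
  namespace: policy-namespace
placementRef:
  name: placement-policy-pod-example
  apiGroup: apps.open-cluster-management.io
  kind: PlacementRule
subjects:
- name: demo-policyset
  apiGroup: policy.open-cluster-management.io
  kind: PolicySet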
3.7.1.4. Creating policy sets from the console
From the navigation menu, select Governance. Then select the Policy sets tab. Select the Create policy set button and complete the form. After you add the details for your policy set, select the Submit button.
View the stable PolicySets, which require the Policy Generator for deployment, in PolicySets-- Stable.
3.7.2. Updating security policies
Learn to update security policies by viewing the following section.
3.7.2.1. Adding a policy to a policy set from the CLI
Run the following command to edit your policy set: kubectl edit policysets your-policyset-name
Add the policy name to the list in the policies section of the policy set. Apply your added policy in the placement section of your policy set with the following command: kubectl apply -f your-added-policy.yaml. A PlacementBinding and PlacementRule are created. Note: If you delete the placement binding, the policy is still placed by the policy set.
3.7.2.2. Adding a policy to a policy set from the console
Add a policy to the policy set by selecting the Policy sets tab. Select the Actions icon and select Edit. The Edit policy set form appears.
Navigate to the Policies section of the form to select a policy to add to the policy set.
3.7.2.3. Disabling security policies
Your policy is enabled by default. Disable your policy from the console.
After you log into your Red Hat Advanced Cluster Management for Kubernetes console, navigate to the Governance page to view a table list of your policies.
Select the Actions icon > Disable policy. The Disable Policy dialog box appears.
Click Disable policy. Your policy is disabled.
3.7.3. Deleting a security policy
Delete a security policy from the CLI or the console.
Delete a security policy from the CLI:
Delete a security policy by running the following command:
kubectl delete policies.policy.open-cluster-management.io <policy-name> -n <policy-namespace>
After your policy is deleted, it is removed from your target cluster or clusters. Verify that your policy is removed by running the following command:
kubectl get policies.policy.open-cluster-management.io <policy-name> -n <policy-namespace>
Delete a security policy from the console:
From the navigation menu, click Governance to view a table list of your policies. Click the Actions icon for the policy you want to delete in the policy violation table.
Click Remove. From the Remove policy dialog box, click Remove policy.
3.7.3.1. Deleting policy sets from the console
From the Policy sets tab, select the Actions icon for the policy set. When you click Delete, the Permanently delete PolicySet? dialog box appears.
Click the Delete button.
To manage other policies, see Managing security policies for more information. Refer to Governance for more topics about policies.
3.7.4. Cleaning up resources that are created by policies
Use the pruneObjectBehavior parameter in a configuration policy to clean up resources that are created by the policy. When pruneObjectBehavior is set, the related objects are only cleaned up after the configuration policy (or parent policy) associated with them is deleted. View the following descriptions of the values that can be used for the parameter:
- DeleteIfCreated: Cleans up any resources created by the policy.
- DeleteAll: Cleans up all resources managed by the policy.
- None: This is the default value and maintains the same behavior from previous releases, where no related resources are deleted.
You can set the value directly in the YAML as you create a policy from the CLI. From the console, you can select the value in the Prune Object Behavior section of the Policy templates step.
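A minimal sketch of the parameter in a ConfigurationPolicy follows; the ConfigMap is a placeholder resource that the policy creates and, with this setting, also removes when the policy is deleted:

apiVersion: policy.open-cluster-management.io/v1
kind: ConfigurationPolicy
metadata:
  name: demo-prune
spec:
  remediationAction: enforce
  severity: low
  # Objects that this policy created are deleted when the policy is deleted.
  pruneObjectBehavior: DeleteIfCreated
  object-templates:
  - complianceType: musthave
    objectDefinition:
      apiVersion: v1
      kind: ConfigMap
      metadata:
        name: demo-pruned-configmap
        namespace: default
      data:
        owner: policy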
Note: If a policy that installs an operator has the pruneObjectBehavior parameter defined, additional clean up is needed to complete the operator uninstall. Additional clean up might include deleting the operator ClusterServiceVersion object.
3.8. Managing configuration policies
Learn to create, apply, view, and update your configuration policies.
Required access: Administrator or cluster administrator
3.8.1. Creating a configuration policy
You can create a YAML file for your configuration policy from the command line interface (CLI) or from the console.
If you have an existing Kubernetes manifest, consider using the policy generator to automatically include the manifests in a policy. See the Policy generator documentation. View the following sections to create a configuration policy:
3.8.1.1. Creating a configuration policy from the CLI
Complete the following steps to create a configuration policy from the CLI:
Create a YAML file for your configuration policy. Run the following command:

kubectl create -f configpolicy-1.yaml

Your configuration policy might resemble the sample after this procedure.

Apply the policy by running the following command:

kubectl apply -f <policy-file-name> --namespace=<namespace>

Verify and list the policies by running the following command:

kubectl get policies.policy.open-cluster-management.io --namespace=<namespace>
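The following sample is a minimal sketch of a policy that wraps a configuration policy; the policy name and the prod namespace that it checks for are placeholders:

apiVersion: policy.open-cluster-management.io/v1
kind: Policy
metadata:
  name: policy-namespace-example
  namespace: default
spec:
  remediationAction: inform
  disabled: false
  policy-templates:
  - objectDefinition:
      apiVersion: policy.open-cluster-management.io/v1
      kind: ConfigurationPolicy
      metadata:
        name: policy-namespace-example
      spec:
        remediationAction: inform
        severity: low
        object-templates:
        - complianceType: musthave
          objectDefinition:
            apiVersion: v1
            kind: Namespace
            metadata:
              name: prod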
Your configuration policy is created.
3.8.1.2. Viewing your configuration policy from the CLI
Complete the following steps to view your configuration policy from the CLI:
View details for a specific configuration policy by running the following command:
kubectl get policies.policy.open-cluster-management.io <policy-name> -n <namespace> -o yaml
View a description of your configuration policy by running the following command:
kubectl describe policies.policy.open-cluster-management.io <name> -n <namespace>
3.8.1.3. Creating a configuration policy from the console
As you create a configuration policy from the console, a YAML file is also created in the YAML editor.
Log in to your cluster from the console, and select Governance from the navigation menu.
Click Create policy. Specify the policy you want to create by selecting one of the configuration policies for the specification parameter.
Continue with configuration policy creation by completing the policy form. Enter or select the appropriate values for the following fields:
- Name
- Specifications
- Cluster selector
- Remediation action
- Standards
- Categories
- Controls
Click Create. Your configuration policy is created.
3.8.1.4. Viewing your configuration policy from the console
View any configuration policy and its status from the console.
After you log into your cluster from the console, select Governance to view a table list of your policies. Note: You can filter the table list of your policies by selecting the All policies tab or Cluster violations tab.
Select one of your policies to view more details. The Details, Clusters, and Templates tabs are displayed.
3.8.2. Updating configuration policies
Learn to update configuration policies by viewing the following section.
3.8.2.1. Disabling configuration policies
Disable your configuration policy. Similar to the instructions mentioned earlier, log in and navigate to the Governance page.
Select the Actions icon for a configuration policy from the table list, then click Disable. The Disable Policy dialog box appears.
Click Disable policy.
Your configuration policy is disabled.
3.8.3. Deleting a configuration policy
Delete a configuration policy from the CLI or the console.
Delete a configuration policy from the CLI:
Delete a configuration policy by running the following command:
kubectl delete policies.policy.open-cluster-management.io <policy-name> -n <namespace>
After your policy is deleted, it is removed from your target cluster or clusters.
- Verify that your policy is removed by running the following command:
kubectl get policies.policy.open-cluster-management.io <policy-name> -n <namespace>
Delete a configuration policy from the console:
From the navigation menu, click Governance to view a table list of your policies.
Click the Actions icon for the policy you want to delete in the policy violation table. Then click Remove. From the Remove policy dialog box, click Remove policy.
Your policy is deleted.
See configuration policy samples that are supported by Red Hat Advanced Cluster Management from the CM-Configuration-Management folder.
Alternatively, you can refer to the Table of sample configuration policies to view other configuration policies that are monitored by the controller. For details to manage other policies, refer to Managing security policies.
3.9. Managing gatekeeper operator policies
Use the gatekeeper operator policy to install the gatekeeper operator and gatekeeper on a managed cluster. Learn to create, view, and update your gatekeeper operator policies in the following sections.
Required access: Cluster administrator
3.9.1. Installing gatekeeper using a gatekeeper operator policy (Deprecated)
Use the governance framework to install the gatekeeper operator. Gatekeeper operator is available in the OpenShift Container Platform catalog. See Adding Operators to a cluster in the OpenShift Container Platform documentation for more information.
Use the configuration policy controller to install the gatekeeper operator policy. During the install, the operator group and subscription pull the gatekeeper operator to install it in your managed cluster. Then, the gatekeeper operator creates a gatekeeper CR to configure gatekeeper. View the Gatekeeper operator CR sample.
Gatekeeper operator policy is monitored by the Red Hat Advanced Cluster Management configuration policy controller, where enforce remediation action is supported. Gatekeeper operator policies are created automatically by the controller when set to enforce.
3.9.2. Creating a gatekeeper policy from the console
Install the gatekeeper policy by creating a gatekeeper policy from the console. Alternatively, you can view a sample YAML to deploy policy-gatekeeper-operator.yaml.
After you log into your cluster, navigate to the Governance page.
Select Create policy. As you complete the form, select Gatekeeper Operator from the Specifications field. The parameter values for your policy are automatically populated and the policy is set to inform by default. Set your remediation action to enforce to install gatekeeper.
Note: Default values are generated by the operator. See Gatekeeper Helm Chart for an explanation of the optional parameters that can be used for the gatekeeper operator policy.
3.9.2.1. Gatekeeper operator CR
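The following is an illustrative sketch of a gatekeeper operator CR, not a copy of the generated sample. The spec fields and values are assumptions based on common gatekeeper operator defaults; rely on the values that the operator generates on your cluster.
apiVersion: operator.gatekeeper.sh/v1alpha1
kind: Gatekeeper
metadata:
  name: gatekeeper
spec:
  audit:
    replicas: 1                  # assumed default
    logLevel: INFO               # assumed default
  validatingWebhook: Enabled
  webhook:
    replicas: 2                  # assumed default
    emitAdmissionEvents: Enabled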
3.9.3. Upgrading gatekeeper and the gatekeeper operator
You can upgrade the versions for gatekeeper and the gatekeeper operator. When you install the gatekeeper operator with the gatekeeper operator policy, notice the value for installPlanApproval. The operator upgrades automatically when installPlanApproval is set to Automatic.
You must approve the upgrade of the gatekeeper operator manually, for each cluster, when installPlanApproval is set to Manual.
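As a point of reference, the setting corresponds to the installPlanApproval field in the Subscription object of the operator policy, similar to the following fragment (the placement of the field is assumed):
spec:
  installPlanApproval: Automatic   # change to Manual to require approval of each upgrade on every cluster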
3.9.4. Updating gatekeeper operator policy
Learn to update the gatekeeper operator policy by viewing the following section.
3.9.4.1. Viewing gatekeeper operator policy from the console
View your gatekeeper operator policy and its status from the console.
After you log into your cluster from the console, click Governance to view a table list of your policies. Note: You can filter the table list of your policies by selecting the Policies tab or Cluster violations tab.
Select the policy-gatekeeper-operator policy to view more details. View the policy violations by selecting the Clusters tab.
3.9.4.2. Disabling gatekeeper operator policy
Disable your gatekeeper operator policy.
After you log into your Red Hat Advanced Cluster Management for Kubernetes console, navigate to the Governance page to view a table list of your policies.
Select the Actions icon for the policy-gatekeeper-operator policy, then click Disable. The Disable Policy dialog box appears.
Click Disable policy. Your policy-gatekeeper-operator policy is disabled.
3.9.5. Deleting gatekeeper operator policy
Delete the gatekeeper operator policy from the CLI or the console.
Delete gatekeeper operator policy from the CLI:
Delete gatekeeper operator policy by running the following command:
kubectl delete policies.policy.open-cluster-management.io <policy-gatekeeper-operator-name> -n <namespace>
After your policy is deleted, it is removed from your target cluster or clusters.
Verify that your policy is removed by running the following command:
kubectl get policies.policy.open-cluster-management.io <policy-gatekeeper-operator-name> -n <namespace>
Delete gatekeeper operator policy from the console:
Navigate to the Governance page to view a table list of your policies.
Similar to the previous console instructions, click the Actions icon for the policy-gatekeeper-operator policy, then click Remove to delete the policy. From the Remove policy dialog box, click Remove policy.
Your gatekeeper operator policy is deleted.
3.9.6. Uninstalling gatekeeper policy, gatekeeper, and gatekeeper operator policy
Complete the following steps to uninstall gatekeeper policy, gatekeeper, and gatekeeper operator policy:
- Remove the gatekeeper Constraint and ConstraintTemplate that is applied on your managed cluster:
  - Edit your gatekeeper operator policy. Locate the ConfigurationPolicy template that you used to create the gatekeeper Constraint and ConstraintTemplate.
  - Change the value for complianceType of the ConfigurationPolicy template to mustnothave (see the sketch after these steps).
  - Save and apply the policy.
- Remove the gatekeeper instance from your managed cluster:
  - Edit your gatekeeper operator policy. Locate the ConfigurationPolicy template that you used to create the Gatekeeper custom resource (CR).
  - Change the value for complianceType of the ConfigurationPolicy template to mustnothave.
- Remove the gatekeeper operator that is on your managed cluster:
  - Edit your gatekeeper operator policy. Locate the ConfigurationPolicy template that you used to create the Subscription CR.
  - Change the value for complianceType of the ConfigurationPolicy template to mustnothave.
Gatekeeper policy, gatekeeper, and gatekeeper operator policy are uninstalled.
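The following is a minimal sketch of what an edited ConfigurationPolicy template looks like after you change the complianceType value to mustnothave. The Gatekeeper CR is shown as one example object; the same edit applies to the templates for the Constraint, ConstraintTemplate, and Subscription objects, and the template name is illustrative.
apiVersion: policy.open-cluster-management.io/v1
kind: ConfigurationPolicy
metadata:
  name: gatekeeper-remove-cr         # illustrative name
spec:
  remediationAction: enforce
  severity: high
  object-templates:
    - complianceType: mustnothave    # changed from musthave so that the object is removed
      objectDefinition:
        apiVersion: operator.gatekeeper.sh/v1alpha1
        kind: Gatekeeper
        metadata:
          name: gatekeeper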
See Integrating gatekeeper constraints and constraint templates for details about gatekeeper. For a list of topics to integrate third-party policies with the product, see Integrate third-party policy controllers.
3.10. Managing operator policies in disconnected environments
You might need to deploy Red Hat Advanced Cluster Management for Kubernetes policies on Red Hat OpenShift Container Platform clusters that are not connected to the internet (disconnected). If the policies that you deploy are used to install an Operator Lifecycle Manager operator, you must follow the procedure for Mirroring an Operator catalog.
Complete the following steps to validate access to the operator images:
See Verify required packages are available to validate that the packages you require to use with policies are available. You must validate availability for each image registry used by any managed cluster that the following policies are deployed to:
- container-security-operator
- gatekeeper-operator-product
- compliance-operator
See Configure image content source policies to validate that the sources are available. The image content source policies must exist on each of the disconnected managed clusters and can be deployed using a policy to simplify the process. See the following table of image source locations:
Governance policy type | Image source location
Container security     | registry.redhat.io/quay
Compliance             | registry.redhat.io/compliance
Gatekeeper             | registry.redhat.io/rhacm2
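For reference, an image content source policy generally has the following shape. This minimal sketch assumes a hypothetical mirror registry at mirror.registry.example.com:5000; substitute your own mirror host. The object can also be wrapped in a ConfigurationPolicy so that the governance framework distributes it to the disconnected managed clusters.
apiVersion: operator.openshift.io/v1alpha1
kind: ImageContentSourcePolicy
metadata:
  name: governance-policy-mirrors    # illustrative name
spec:
  repositoryDigestMirrors:
    - source: registry.redhat.io/quay
      mirrors:
        - mirror.registry.example.com:5000/quay          # hypothetical mirror location
    - source: registry.redhat.io/compliance
      mirrors:
        - mirror.registry.example.com:5000/compliance    # hypothetical mirror location
    - source: registry.redhat.io/rhacm2
      mirrors:
        - mirror.registry.example.com:5000/rhacm2        # hypothetical mirror location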
3.11. Secure the hub cluster
Secure your Red Hat Advanced Cluster Management for Kubernetes installation by hardening the hub cluster security. Complete the following steps:
- Secure Red Hat OpenShift Container Platform. For more information, see OpenShift Container Platform security and compliance.
- Set up role-based access control (RBAC). For more information, see Role-based access control.
- Customize certificates. For more information, see Certificates.
- Define your cluster credentials. For more information, see Managing credentials overview.
- Review the policies that are available to help you harden your cluster security. For more information, see Supported policies.
3.12. Integrity shield protection (Technology Preview)
Integrity shield is a tool that helps with integrity control by enforcing signature verification for any requests to create or update resources. Integrity shield supports Open Policy Agent (OPA) and Gatekeeper, verifies whether requests have a signature, and blocks any unauthorized requests according to the defined constraint.
See the following integrity shield capabilities:
- Support the deployment of authorized Kubernetes manifests only.
- Support zero-drift in resource configuration unless the resource is added to the allowlist.
- Perform all integrity verification on the cluster, such as enforcing the admission controller.
- Monitor resources continuously to report if unauthorized Kubernetes resources are deployed on the cluster.
- X509, GPG, and Sigstore signing are supported to sign Kubernetes manifest YAML files. Kubernetes integrity shield supports Sigstore signing by using the k8s-manifest-sigstore.
3.12.1. Integrity shield architecture
Integrity shield consists of two main components: API and Observer. Integrity shield operator supports the installation and management of the integrity shield components on your cluster. View the following description of the components:
- Integrity shield API receives a Kubernetes resource from the OPA or gatekeeper, validates the resource that is included in the admission request, and sends the verification result to the OPA or gatekeeper. The integrity shield API uses the verify-resource feature of k8s-manifest-sigstore internally to verify the Kubernetes manifest YAML file. Integrity shield API validates resources according to ManifestingIntegrityConstraint, which is a custom resource based on the constraint framework of OPA or gatekeeper.
- Integrity shield Observer continuously verifies Kubernetes resources on clusters according to ManifestingIntegrityConstraint resources and exports the results to resources called ManifestIntegrityState. Integrity shield Observer also uses k8s-manifest-sigstore to verify signatures.
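Because these constraints follow the OPA or gatekeeper constraint framework, a constraint resource generally looks similar to the following sketch. The kind matches the name used in this section, and the match and parameters fields come from the generic constraint framework; the exact schema is defined by the installed ConstraintTemplate, so treat the details as assumptions and check the integrity shield documentation.
apiVersion: constraints.gatekeeper.sh/v1beta1
kind: ManifestingIntegrityConstraint
metadata:
  name: deployment-constraint          # illustrative name
spec:
  match:
    kinds:
      - apiGroups: ["apps"]
        kinds: ["Deployment"]
  parameters: {}                       # available parameters are defined by the installed ConstraintTemplate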
3.12.2. Supported versions
The following product versions support integrity shield protection:
See Enable integrity shield protection (Technology Preview) for more details.
3.12.3. Enable integrity shield protection (Technology Preview)
Enable integrity shield protection in a Red Hat Advanced Cluster Management for Kubernetes cluster to protect the integrity of Kubernetes resources.
3.12.3.1. Prerequisites
The following prerequisites are required to enable integrity shield protection on a Red Hat Advanced Cluster Management managed cluster:
- Install a Red Hat Advanced Cluster Management hub cluster that has one or more managed clusters, along with cluster administrator access to the cluster to use the oc or kubectl commands.
- Install integrity shield. Before you install the integrity shield, you must install an Open Policy Agent or gatekeeper on your cluster. Complete the following steps to install the integrity shield operator:
Install the integrity shield operator in a namespace for integrity shield by running the following command:
kubectl create -f https://raw.githubusercontent.com/open-cluster-management/integrity-shield/master/integrity-shield-operator/deploy/integrity-shield-operator-latest.yaml
Install the integrity shield custom resource with the following command:
kubectl create -f https://raw.githubusercontent.com/open-cluster-management/integrity-shield/master/integrity-shield-operator/config/samples/apis_v1_integrityshield.yaml -n integrity-shield-operator-system
Integrity shield requires a pair of keys for signing and verifying signatures of resources that need to be protected in a cluster. Set up signing and verification key pair:
Generate a new GPG key with the following command:
gpg --full-generate-key
Export your new GPG public key to a file with the following command:
gpg --export signer@enterprise.com > /tmp/pubring.gpg
- Install yq to run the script for signing a Red Hat Advanced Cluster Management policy.
- Enabling integrity shield protection and signing a Red Hat Advanced Cluster Management policy include retrieving and committing sources from the integrity-shield repository. You must install Git.
3.12.3.2. Enabling integrity shield protection
Enable the integrity shield on your Red Hat Advanced Cluster Management managed cluster by completing the following steps:
Create a namespace on your hub cluster for the integrity shield. Run the following command:
oc create ns your-integrity-shield-ns
Deploy a verification key to a Red Hat Advanced Cluster Management managed cluster. As a reminder, you must create signing and verification keys. Run the acm-verification-key-setup.sh script on your hub cluster to set up a verification key. Run the following command:
curl -s https://raw.githubusercontent.com/stolostron/integrity-shield/master/scripts/ACM/acm-verification-key-setup.sh | bash -s \
  --namespace integrity-shield-operator-system \
  --secret keyring-secret \
  --path /tmp/pubring.gpg \
  --label environment=dev | oc apply -f -
To remove the verification key, run the following command:
curl -s https://raw.githubusercontent.com/stolostron/integrity-shield/master/scripts/ACM/acm-verification-key-setup.sh | bash -s - \
  --namespace integrity-shield-operator-system \
  --secret keyring-secret \
  --path /tmp/pubring.gpg \
  --label environment=dev | oc delete -f -
Create a Red Hat Advanced Cluster Management policy named policy-integrity-shield on your hub cluster:
- Retrieve the policy-integrity-shield policy from the policy-collection repository. Be sure to fork the repository.
- Configure the namespace to deploy the integrity shield on a Red Hat Advanced Cluster Management managed cluster by updating the remediationAction parameter value from inform to enforce.
- Configure an email for the signer and verification key by updating the signerConfig section.
- Configure the PlacementRule, which determines the Red Hat Advanced Cluster Management managed clusters that integrity shield is deployed to. See the sketch after this procedure for the general shape of these fields.
Sign policy-integrity-shield.yaml by running the following command:
curl -s https://raw.githubusercontent.com/stolostron/integrity-shield/master/scripts/gpg-annotation-sign.sh | bash -s \
  signer@enterprise.com \
  policy-integrity-shield.yaml
Note: You must create a new signature whenever you change the policy and apply it to other clusters. Otherwise, the change is blocked and not applied.
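The preceding steps update a few fields inside policy-integrity-shield.yaml. The following is a simplified, hedged sketch of where two of those fields sit: the remediationAction value on the policy and the PlacementRule that selects the target clusters. The actual file in the policy-collection repository contains the full templates, including the signerConfig section, and is the authoritative reference.
apiVersion: policy.open-cluster-management.io/v1
kind: Policy
metadata:
  name: policy-integrity-shield
spec:
  remediationAction: enforce      # updated from inform
  disabled: false
  policy-templates:
    # The ConfigurationPolicy templates from the policy-collection file go here;
    # the signerConfig section inside them is where the signer email is configured.
    - objectDefinition: {}
---
apiVersion: apps.open-cluster-management.io/v1
kind: PlacementRule
metadata:
  name: placement-policy-integrity-shield
spec:
  clusterSelector:
    matchExpressions:
      - key: environment          # example label; matches --label environment=dev used earlier
        operator: In
        values:
          - dev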
See policy-integrity-shield policy for an example.