Chapter 1. Governance
Enterprises must meet internal standards for software engineering, secure engineering, resiliency, security, and regulatory compliance for workloads hosted on private, multicloud, and hybrid clouds. Red Hat Advanced Cluster Management for Kubernetes governance provides an extensible policy framework for enterprises to introduce their own security policies.
Continue reading the related topics of the Red Hat Advanced Cluster Management governance framework:
1.1. Policy controllers
Policy controllers monitor and report whether your cluster is compliant with a policy. Use the supported policy templates in the Red Hat Advanced Cluster Management for Kubernetes policy framework to apply policies that these controllers manage. The policy controllers manage Kubernetes custom resource definition instances.
Policy controllers check for policy violations, and can make the cluster status compliant if the controller supports the enforcement feature. View the following topics to learn more about the following Red Hat Advanced Cluster Management for Kubernetes policy controllers:
Important: Only configuration policy controller policies support the enforce feature. You must manually remediate policies where the policy controller does not support the enforce feature.
1.1.1. Kubernetes configuration policy controller
Use the configuration policy controller to configure any Kubernetes resource and apply security policies across your clusters. The configuration policy controller communicates with the local Kubernetes API server to get the list of configurations that are in your cluster.
During installation, the configuration policy controller is created on the managed cluster. The configuration policy is provided in the policy-templates field of the policy on the hub cluster, and is propagated to the selected managed clusters by the governance framework.
When the remediationAction for the configuration policy controller is set to InformOnly, the parent policy does not enforce the configuration policy, even if the remediationAction in the parent policy is set to enforce.
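For reference, a policy with an InformOnly configuration policy might resemble the following sketch. The names and namespace here are illustrative, not taken from this documentation:

```yaml
apiVersion: policy.open-cluster-management.io/v1
kind: Policy
metadata:
  name: policy-informonly-example   # hypothetical name
  namespace: policies               # hypothetical namespace
spec:
  remediationAction: enforce        # set at the parent policy level
  disabled: false
  policy-templates:
    - objectDefinition:
        apiVersion: policy.open-cluster-management.io/v1
        kind: ConfigurationPolicy
        metadata:
          name: config-informonly-example
        spec:
          remediationAction: InformOnly   # overrides the parent; violations are only reported
          severity: low
          object-templates:
            - complianceType: musthave
              objectDefinition:
                apiVersion: v1
                kind: Namespace
                metadata:
                  name: example-namespace
```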
If you have existing Kubernetes manifests that you want to put in a policy, the Policy Generator is a useful tool to accomplish this.
1.1.1.1. Configuration policy YAML structure
You can find the description of a field on your managed cluster by running the oc explain --api-version=policy.open-cluster-management.io/v1 ConfigurationPolicy.<field-path> command. Replace <field-path> with the path to the field that you need.
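For example, to view the description of the remediationAction field, you might run a command similar to the following. The field path is one possible choice, not the only one:

```
oc explain --api-version=policy.open-cluster-management.io/v1 ConfigurationPolicy.spec.remediationAction
```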
apiVersion: policy.open-cluster-management.io/v1
kind: ConfigurationPolicy
metadata:
  name: policy-config
spec:
  namespaceSelector:
    include: ["default"]
    exclude: []
    matchExpressions: []
    matchLabels: {}
  remediationAction: inform 1
  customMessage:
    compliant: {}
    noncompliant: {}
  severity: low
  evaluationInterval:
    compliant: ""
    noncompliant: ""
  object-templates-raw: ""
  object-templates: 2
    - complianceType: musthave
      metadataComplianceType:
      recordDiff: ""
      recreateOption: ""
      objectSelector:
        matchLabels: {}
        matchExpressions: []
      objectDefinition:
        apiVersion: v1
        kind: Pod
        metadata:
          name: pod
        spec:
          containers:
            - image: pod-image
              name: pod-name
              ports:
                - containerPort: 80
    - complianceType: mustonlyhave
      objectDefinition:
        apiVersion: v1
        kind: ConfigMap
        metadata:
          name: myconfig
          namespace: default
        data:
          testData: hello
1. When the remediationAction for the configuration policy is set to enforce, the controller applies the specified configuration to the target managed cluster. However, configuration policies that specify an object without a name can only be set to inform, unless the objectSelector is also configured.
2. A Kubernetes object is defined in the object-templates array in the configuration policy, where the fields of the configuration policy controller are compared with objects on the managed cluster. You can also use templated values within configuration policies. For more advanced use cases, specify a string in object-templates-raw to create the object-templates that you want. For more information, see Template processing.
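To illustrate object-templates-raw, the following sketch generates one object template for each ConfigMap in a namespace and ensures each one carries a label. The namespace and label names are hypothetical:

```yaml
object-templates-raw: |
  {{- range (lookup "v1" "ConfigMap" "app-config" "").items }}
  - complianceType: musthave
    objectDefinition:
      apiVersion: v1
      kind: ConfigMap
      metadata:
        name: {{ .metadata.name }}
        namespace: app-config          # hypothetical namespace
        labels:
          managed-by-policy: "true"    # hypothetical label
  {{- end }}
```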
1.1.1.2. Configuration policy YAML table
Field | Optional or required | Description |
---|---|---|
apiVersion | Required | Set the value to policy.open-cluster-management.io/v1. |
kind | Required | Set the value to ConfigurationPolicy. |
metadata.name | Required | The name of the policy. |
spec.namespaceSelector | Required for namespaced objects that do not have a namespace specified | Determines namespaces in the managed cluster that the object is applied to. The include and exclude parameters accept file path expressions to include and exclude namespaces by name. The matchExpressions and matchLabels parameters specify namespaces to include by label. |
spec.remediationAction | Required | Specifies the action to take when the policy is non-compliant. Use the following parameter values: inform, informOnly, or enforce. |
spec.customMessage | Optional | Configure the compliance message sent by the configuration policy based on the current compliance. Each message configuration is a string that can contain Go templates. The compliant and noncompliant fields set the message for each compliance state. |
spec.customMessage.compliant | Optional | Configure custom messages for configuration policies that are compliant. Go templates and UTF-8 encoded characters, including emoji and foreign characters, are supported values. |
spec.customMessage.noncompliant | Optional | Configure custom messages for configuration policies that are non-compliant. Go templates and UTF-8 encoded characters, including emoji and foreign characters, are supported values. |
spec.severity | Required | Specifies the severity when the policy is non-compliant. Use the following parameter values: low, medium, high, or critical. |
spec.evaluationInterval | Optional | Specifies the frequency for a policy to be evaluated when it is in a particular compliance state. Use the compliant and noncompliant parameters. When managed clusters have low resources, the evaluation interval can be set to long polling intervals to reduce CPU and memory usage on the Kubernetes API and policy controller. These are in the format of durations. For example, "1h25m3s" represents 1 hour, 25 minutes, and 3 seconds. |
spec.evaluationInterval.compliant | Optional | Specifies the evaluation frequency for a compliant policy. To enable the previous polling behavior, set this parameter to 10s. |
spec.evaluationInterval.noncompliant | Optional | Specifies the evaluation frequency for a non-compliant policy. To enable the previous polling behavior, set this parameter to 10s. |
spec.object-templates | Optional | The array of Kubernetes objects (either fully defined or containing a subset of fields) for the controller to compare with objects on the managed cluster. Note: While spec.object-templates and spec.object-templates-raw are listed as optional, exactly one of the two parameter fields must be set. |
spec.object-templates-raw | Optional | Used to set object templates with a raw YAML string. Specify conditions for the object templates, where advanced functions like if-else statements are supported. For example: {{- if eq .metadata.name "policy-grc-your-meta-data-name" }} replicas: 2 {{- else }} replicas: 1 {{- end }} Note: While spec.object-templates and spec.object-templates-raw are listed as optional, exactly one of the two parameter fields must be set. |
spec.object-templates[].complianceType | Required | Use this parameter to define the desired state of the Kubernetes object on your managed clusters. Use one of the following verbs as the parameter value: musthave, mustonlyhave, or mustnothave. |
spec.object-templates[].metadataComplianceType | Optional | Overrides complianceType when comparing the labels and annotations of an object. |
spec.object-templates[].recordDiff | Optional | Use this parameter to specify if and where to display the difference between the object on the cluster and the objectDefinition in the policy. By default, this parameter is set to None. |
spec.object-templates[].recreateOption | Optional | Describes when to delete and recreate an object when an update is required. When you set the value to IfRequired, the policy recreates the object when updating an immutable field. With the Always value, the policy recreates the object on any update. The None value does not delete and recreate the object. |
spec.object-templates[].objectSelector | Optional | Specifies a label selector for the objects that the objectDefinition applies to when no object name is specified. |
spec.object-templates[].objectDefinition | Required | A Kubernetes object (either fully defined or containing a subset of fields) for the controller to compare with objects on the managed cluster. |
spec.pruneObjectBehavior | Optional | Determines whether to clean up resources related to the policy when the policy is removed from a managed cluster. |
1.1.1.3. Additional resources
See the following topics for more information:
- See Creating configuration policies.
- See the Hub cluster policy framework for more details on the hub cluster policy.
- See the policy samples that use NIST Special Publication 800-53 (Rev. 4), and are supported by Red Hat Advanced Cluster Management from the CM-Configuration-Management folder.
- For information about dry-run support, see the Kubernetes documentation, Dry-run.
- To learn how policies are applied on your hub cluster, see Supported policies for more details.
- Refer to Policy controllers for more details about controllers.
- To customize your policy controller configuration, see Policy controller advanced configuration.
- Learn about cleaning up resources and other topics in the Cleaning up resources that are created by policies documentation.
- Refer to Policy Generator.
- To learn how to create and customize policies, see the Governance dashboard documentation.
- See Template processing.
1.1.2. Certificate policy controller
You can use the certificate policy controller to detect certificates that are close to expiring, that have time durations (hours) that are too long, or that contain DNS names that fail to match specified patterns. You can add the certificate policy to the policy-templates field of the policy on the hub cluster, which propagates to the selected managed clusters by using the governance framework. See the Hub cluster policy framework documentation for more details on the hub cluster policy.
Configure and customize the certificate policy controller by updating the following parameters in your controller policy:
- minimumDuration
- minimumCADuration
- maximumDuration
- maximumCADuration
- allowedSANPattern
- disallowedSANPattern
Your policy might become non-compliant due to either of the following scenarios:
- When a certificate expires in less than the minimum duration of time or exceeds the maximum time.
- When DNS names fail to match the specified pattern.
The certificate policy controller is created on your managed cluster. The controller communicates with the local Kubernetes API server to get the list of secrets that contain certificates and determine all non-compliant certificates.
The certificate policy controller does not support the enforce feature.
Note: The certificate policy controller automatically looks for a certificate in a secret in only the tls.crt key. If a secret is stored under a different key, add a label named certificate_key_name with a value set to the key to let the certificate policy controller know to look in a different key. For example, if a secret contains a certificate stored in the key named sensor-cert.pem, add the following label to the secret: certificate_key_name: sensor-cert.pem.
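As a sketch, a secret that stores its certificate under the sensor-cert.pem key might be labeled as follows so that the controller evaluates it. The secret name is illustrative:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: sensor-secret                       # hypothetical secret name
  namespace: default
  labels:
    certificate_key_name: sensor-cert.pem   # tells the controller which key holds the certificate
type: Opaque
data:
  sensor-cert.pem: <base64-encoded-certificate>
```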
1.1.2.1. Certificate policy controller YAML structure
View the following example of a certificate policy and review the element in the YAML table:
apiVersion: policy.open-cluster-management.io/v1
kind: CertificatePolicy
metadata:
  name: certificate-policy-example
spec:
  namespaceSelector:
    include: ["default"]
    exclude: []
    matchExpressions: []
    matchLabels: {}
  labelSelector:
    myLabelKey: myLabelValue
  remediationAction:
  severity:
  minimumDuration:
  minimumCADuration:
  maximumDuration:
  maximumCADuration:
  allowedSANPattern:
  disallowedSANPattern:
1.1.2.1.1. Certificate policy controller YAML table
Field | Optional or required | Description |
---|---|---|
apiVersion | Required | Set the value to policy.open-cluster-management.io/v1. |
kind | Required | Set the value to CertificatePolicy. |
metadata.name | Required | The name to identify the policy. |
metadata.labels | Optional | In a certificate policy, the category=system-and-information-integrity label categorizes the policy and facilitates querying the certificate policies. |
spec.namespaceSelector | Required | Determines namespaces in the managed cluster where secrets are monitored. The include and exclude parameters accept file path expressions to include and exclude namespaces by name. The matchExpressions and matchLabels parameters specify namespaces to include by label. Note: If the namespaceSelector for the certificate policy controller does not match any namespace, the policy is considered compliant. |
spec.labelSelector | Optional | Specifies identifying attributes of objects. See the Kubernetes labels and selectors documentation. |
spec.remediationAction | Required | Specifies the remediation of your policy. Set the parameter value to inform. |
spec.severity | Optional | Informs the user of the severity when the policy is non-compliant. Use the following parameter values: low, medium, high, or critical. |
spec.minimumDuration | Required | When a value is not specified, the default value is 100h. This parameter specifies the smallest duration (in hours) before a certificate is considered non-compliant. The parameter uses the time duration format from Golang. See Golang Parse Duration for more information. |
spec.minimumCADuration | Optional | Set a value to identify signing certificates that might expire soon with a different value from other certificates. If the parameter value is not specified, the CA certificate expiration is the value used for the minimumDuration parameter. The parameter uses the time duration format from Golang. See Golang Parse Duration for more information. |
spec.maximumDuration | Optional | Set a value to identify certificates that have been created with a duration that exceeds your desired limit. The parameter uses the time duration format from Golang. See Golang Parse Duration for more information. |
spec.maximumCADuration | Optional | Set a value to identify signing certificates that have been created with a duration that exceeds your defined limit. The parameter uses the time duration format from Golang. See Golang Parse Duration for more information. |
spec.allowedSANPattern | Optional | A regular expression that must match every SAN entry that you have defined in your certificates. This parameter checks DNS names against patterns. See the Golang Regular Expression syntax for more information. |
spec.disallowedSANPattern | Optional | A regular expression that must not match any SAN entries you have defined in your certificates. This parameter checks DNS names against patterns. Note: To detect wild-card certificates, use the following SAN pattern: "[\\*]" See the Golang Regular Expression syntax for more information. |
1.1.2.2. Certificate policy sample
When your certificate policy controller is created on your hub cluster, a replicated policy is created on your managed cluster. See policy-certificate.yaml to view the certificate policy sample.
1.1.2.3. Additional resources
- To learn how to manage a certificate policy, see Managing security policies for more details.
- Refer to Policy controllers introduction for more topics.
1.1.3. Policy set controller
The policy set controller aggregates the policy status scoped to policies that are defined in the same namespace. Create a policy set (PolicySet) to group policies that are in the same namespace. All policies in the PolicySet are placed together in a selected cluster by creating a PlacementBinding to bind the PolicySet and Placement. The policy set is deployed to the hub cluster.
Additionally, when a policy is a part of multiple policy sets, existing and new Placement resources remain in the policy. When a user removes a policy from the policy set, the policy is not applied to the cluster that is selected in the policy set, but the placements remain. The policy set controller only checks for violations in clusters that include the policy set placement.
Notes:
- The Red Hat Advanced Cluster Management sample policy set uses cluster placement. If you use cluster placement, bind the namespace containing the policy to the managed cluster set. See Deploying policies to your cluster for more details on using cluster placement.
- In order to use a Placement resource, a ManagedClusterSet resource must be bound to the namespace of the Placement resource with a ManagedClusterSetBinding resource. Refer to Creating a ManagedClusterSetBinding resource for additional details.
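As a sketch, a ManagedClusterSetBinding that binds a cluster set to the namespace of the Placement resource might resemble the following. The names are illustrative:

```yaml
apiVersion: cluster.open-cluster-management.io/v1beta2
kind: ManagedClusterSetBinding
metadata:
  name: my-clusterset              # must match the name of the ManagedClusterSet
  namespace: policy-namespace      # the namespace of the Placement resource
spec:
  clusterSet: my-clusterset
```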
Learn more details about the policy set structure in the following sections:
1.1.3.1. Policy set YAML structure
Your policy set might resemble the following YAML file:
apiVersion: policy.open-cluster-management.io/v1beta1
kind: PolicySet
metadata:
  name: demo-policyset
spec:
  policies:
    - policy-demo
---
apiVersion: policy.open-cluster-management.io/v1
kind: PlacementBinding
metadata:
  name: demo-policyset-pb
placementRef:
  apiGroup: cluster.open-cluster-management.io
  kind: Placement
  name: demo-policyset-pr
subjects:
  - apiGroup: policy.open-cluster-management.io
    kind: PolicySet
    name: demo-policyset
---
apiVersion: cluster.open-cluster-management.io/v1beta1
kind: Placement
metadata:
  name: demo-policyset-pr
spec:
  predicates:
    - requiredClusterSelector:
        labelSelector:
          matchExpressions:
            - key: name
              operator: In
              values:
                - local-cluster
  tolerations:
    - key: cluster.open-cluster-management.io/unavailable
      operator: Exists
    - key: cluster.open-cluster-management.io/unreachable
      operator: Exists
1.1.3.2. Policy set table
View the following parameter table for descriptions:
Field | Optional or required | Description |
---|---|---|
apiVersion | Required | Set the value to policy.open-cluster-management.io/v1beta1. |
kind | Required | Set the value to PolicySet. |
metadata.name | Required | The name for identifying the policy resource. |
spec | Required | Add configuration details for your policy. |
spec.policies | Optional | The list of policies that you want to group together in the policy set. |
1.1.3.3. Policy set sample
apiVersion: policy.open-cluster-management.io/v1beta1
kind: PolicySet
metadata:
  name: pci
  namespace: default
spec:
  description: Policies for PCI compliance
  policies:
    - policy-pod
    - policy-namespace
status:
  compliant: NonCompliant
  placement:
    - placementBinding: binding1
      placement: placement1
      policySet: policyset-ps
1.1.3.4. Additional resources
- See Red Hat OpenShift Platform Plus policy set.
- See the Creating policy sets section in the Managing security policies topic.
- Also view the stable PolicySets, which require the Policy Generator for deployment, in PolicySets -- Stable.
1.1.4. Operator policy controller
The operator policy controller allows you to monitor and install Operator Lifecycle Manager operators across your clusters. Use the operator policy controller to monitor the health of various pieces of the operator and to specify how you want to automatically handle updates to the operator.
You can also distribute an operator policy to managed clusters by using the governance framework and adding the policy to the policy-templates field of a policy on the hub cluster.
You can also use template values within the operatorGroup and subscription fields of an operator policy. For more information, see Template processing.
1.1.4.1. Prerequisites
- Operator Lifecycle Manager must be available on your managed cluster. This is enabled by default on Red Hat OpenShift Container Platform.
- Required access: Cluster administrator
1.1.4.2. Operator policy YAML table
Field | Optional or required | Description |
---|---|---|
apiVersion | Required | Set the value to policy.open-cluster-management.io/v1beta1. |
kind | Required | Set the value to OperatorPolicy. |
metadata.name | Required | The name for identifying the policy resource. |
spec.remediationAction | Required | If the remediationAction is set to enforce, the controller creates resources on the managed cluster to install and approve the operator. If the remediationAction is set to inform, the controller only reports the status of the operator. |
spec.operatorGroup | Optional | By default, if the operatorGroup field is not specified, the controller generates an AllNamespaces mode OperatorGroup in the same namespace as the subscription, if one does not already exist. |
spec.complianceType | Required | Specifies the desired state of the operator on the cluster. If set to musthave, the policy is compliant when the operator is installed. If set to mustnothave, the policy is compliant when the operator is not installed. |
spec.removalBehavior | Optional | Determines which resource types need to be kept or removed when you enforce a mustnothave policy. |
spec.subscription | Required | Define the configurations to create an operator subscription. Add information in the following fields to create an operator subscription. Default options are selected for a few items if there is no entry: channel, name, namespace, source, sourceNamespace. |
spec.complianceConfig | Optional | Use this parameter to define the compliance behavior for specific scenarios that are associated with operators. You can set each of the following options individually, where the supported values are Compliant and NonCompliant: catalogSourceUnhealthy, deploymentsUnavailable, upgradesAvailable, deprecationsPresent. |
spec.upgradeApproval | Required | If the remediationAction is set to enforce and this parameter is set to Automatic, the controller approves operator upgrades. If this parameter is set to None, the controller does not approve operator upgrades. |
spec.versions | Optional | Declare which versions of the operator are compliant. An empty list matches any version of the operator. |
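For reference, an operator policy that installs an operator and automatically approves its upgrades might resemble the following sketch. The operator, channel, and namespace names are illustrative:

```yaml
apiVersion: policy.open-cluster-management.io/v1beta1
kind: OperatorPolicy
metadata:
  name: install-example-operator   # hypothetical name
spec:
  remediationAction: enforce
  severity: medium
  complianceType: musthave
  upgradeApproval: Automatic       # approve Operator Lifecycle Manager install plans for upgrades
  subscription:
    channel: stable                # illustrative channel
    name: example-operator         # illustrative operator package name
    namespace: example-operator-ns
    source: redhat-operators
    sourceNamespace: openshift-marketplace
```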
1.1.4.3. Additional resources
- See Template processing.
- See Installing an operator by using the OperatorPolicy resource for more details.
- See Managing operator policies in disconnected environments.
- See the Subscription topic in the OpenShift Container Platform documentation.
- See Operator Lifecycle Manager (OLM) for more details.
- See the Adding Operators to a cluster documentation for general information on OLM.
1.2. Template processing
Configuration policies and operator policies support the inclusion of Golang text templates. These templates are resolved at runtime either on the hub cluster or the target managed cluster using configurations related to that cluster. This gives you the ability to define policies with dynamic content, and inform or enforce Kubernetes resources that are customized to the target cluster.
A policy definition can contain both hub cluster and managed cluster templates. Hub cluster templates are processed first on the hub cluster, then the policy definition with resolved hub cluster templates is propagated to the target clusters. A controller on the managed cluster processes any managed cluster templates in the policy definition and then enforces or verifies the fully resolved object definition.
The template must conform to the Golang template language specification, and the resource definition generated from the resolved template must be valid YAML. See the Golang documentation about Package templates for more information. Any errors in template validation are recognized as policy violations. When you use a custom template function, the values are replaced at runtime.
Important:

- If you use hub cluster templates to propagate secrets or other sensitive data, the sensitive data exists in the managed cluster namespace on the hub cluster and on the managed clusters where that policy is distributed. The template content is expanded in the policy, and policies are not encrypted by the OpenShift Container Platform ETCD encryption support. To address this, use fromSecret or copySecretData, which automatically encrypt the values from the secret, or protect to encrypt other values.
- When you add multiline string values, such as certificates, always add | toRawJson | toLiteral syntax at the end of the template pipeline to handle line breaks. For example, if you are copying a certificate from a Secret resource and including it in a ConfigMap resource, your template pipeline might be similar to the following syntax:

  ca.crt: '{{ fromSecret "openshift-config" "ca-config-map-secret" "ca.crt" | base64dec | toRawJson | toLiteral }}'

  The toRawJson template function converts the input value to a JSON string with new lines escaped so that they do not affect the YAML structure. The toLiteral template function removes the outer single quotes from the output. For example, when templates are processed for the key: '{{ 'hello\nworld' | toRawJson }}' template pipeline, the output is key: '"hello\nworld"'. The output of the key: '{{ 'hello\nworld' | toRawJson | toLiteral }}' template pipeline is key: "hello\nworld".
See the following table for a comparison of hub cluster and managed cluster templates:
1.2.1. Comparison of hub cluster and managed cluster templates
Templates | Hub cluster | Managed cluster |
---|---|---|
Syntax | Golang text template specification | Golang text template specification |
Delimiter | {{hub … hub}} | {{ … }} |
Context | | |
Context variables | For hub cluster templates, the .ManagedClusterName, .ManagedClusterLabels, and .PolicyMetadata context variables are available. | For managed cluster templates, the .ObjectName and .ObjectNamespace context variables are available when the objectSelector is set. |
Access control | By default, you can only reference namespaced Kubernetes resources that are in the same namespace as the policy. Alternatively, you can specify a service account in the policy to use for template resolution. Note: The service account must have list and watch permissions on the resources that the templates reference. | You can reference any resource on the cluster. |
Functions | A set of template functions that support dynamic access to Kubernetes resources and string manipulation. See Template functions for more information. See the Access control row for lookup restrictions. The equivalent call might use the following syntax: | A set of template functions support dynamic access to Kubernetes resources and string manipulation. See Template functions for more information. |
Function output storage | The output of template functions is stored in the replicated policy in the cluster namespace on the hub cluster. | The output of template functions is not stored in policy related resource objects. |
Processing | Processing occurs at runtime on the hub cluster during propagation of replicated policies to clusters. Policies and the hub cluster templates within the policies are processed on the hub cluster only when templates are created or updated. | Processing occurs on the managed cluster. Configuration policies are processed periodically, which automatically updates the resolved object definition with data in the referenced resources. Operator policies automatically update whenever a referenced resource changes. |
Processing errors | Errors from the hub cluster templates are displayed as violations on the managed clusters the policy applies to. | Errors from the managed cluster templates are displayed as violations on the specific target cluster where the violation occurred. |
Continue reading the following topics:
1.2.2. Template functions
Reference Kubernetes resources such as resource-specific and generic template functions on your hub cluster by using the {{hub … hub}}
delimiters, or on your managed cluster by using the {{ … }}
delimiters. You can use resource-specific functions for convenience and to make the content of your resources more accessible.
1.2.2.1. Template function descriptions
If you use the generic function, lookup, which is more advanced, familiarize yourself with the YAML structure of the resource that is being looked up. In addition to these functions, utility functions such as base64enc, base64dec, indent, autoindent, toInt, toBool, and more are available.
To conform templates with YAML syntax, you must define templates in the policy resource as strings using quotes or a block character (| or >). This causes the resolved template value to also be a string. To override this, use toInt or toBool as the final function in the template to initiate further processing that forces the value to be interpreted as an integer or boolean respectively.
Continue reading to view the descriptions and examples for some of the custom template functions that are supported:
1.2.2.1.1. fromSecret
The fromSecret function returns the value of the given data key in the secret. View the following syntax for the function:
func fromSecret (ns string, secretName string, datakey string) (dataValue string, err error)
When you use this function, enter the namespace, name, and data key of a Kubernetes Secret resource. You must use the same namespace that is used for the policy when using the function in a hub cluster template. See Template processing for more details.
View the following configuration policy that enforces a Secret resource on the target cluster:
apiVersion: policy.open-cluster-management.io/v1
kind: ConfigurationPolicy
metadata:
  name: demo-fromsecret
spec:
  object-templates:
    - objectDefinition:
        apiVersion: v1
        kind: Secret 1
        metadata:
          name: demosecret
          namespace: test
        type: Opaque
        data: 2
          USER_NAME: YWRtaW4=
          PASSWORD: '{{ fromSecret "default" "localsecret" "PASSWORD" }}' 3
1. You receive a policy violation if the Kubernetes Secret resource does not exist on the target cluster. If the data key does not exist on the target cluster, the value becomes an empty string.
2. When you use the fromSecret function with hub cluster templates, the output is automatically encrypted using the protect function to protect the value in flight to the managed cluster.
3. The value for the PASSWORD data key is a template that references the secret on the target cluster.
Important: When you add multiline string values, such as certificates, always add | toRawJson | toLiteral syntax at the end of the template pipeline to handle line breaks. For example, if you are copying a certificate from a Secret resource and including it in a ConfigMap resource, your template pipeline might be similar to the following syntax:

ca.crt: '{{ fromSecret "openshift-config" "ca-config-map-secret" "ca.crt" | base64dec | toRawJson | toLiteral }}'

- The toRawJson template function converts the input value to a JSON string with new lines escaped so that they do not affect the YAML structure.
- The toLiteral template function removes the outer single quotes from the output. For example, when templates are processed for the key: '{{ 'hello\nworld' | toRawJson }}' template pipeline, the output is key: '"hello\nworld"'. The output of the key: '{{ 'hello\nworld' | toRawJson | toLiteral }}' template pipeline is key: "hello\nworld".
1.2.2.1.2. fromConfigMap
The fromConfigMap function returns the value of the given data key in the config map. When you use this function, enter the namespace, name, and data key of a Kubernetes ConfigMap resource. You must use the same namespace that is used for the policy when using the function in a hub cluster template. See Template processing for more details.
View the following syntax for the function:
func fromConfigMap (ns string, configmapName string, datakey string) (dataValue string, err Error)
View the following configuration policy that enforces a Kubernetes resource on the target managed cluster:
apiVersion: policy.open-cluster-management.io/v1
kind: ConfigurationPolicy
metadata:
  name: demo-fromcm-lookup
spec:
  object-templates:
    - objectDefinition:
        ...
        data: 1
          app-name: sampleApp
          app-description: "this is a sample app"
          log-file: '{{ fromConfigMap "default" "logs-config" "log-file" }}' 2
          log-level: '{{ fromConfigMap "default" "logs-config" "log-level" }}' 3
1. If the data key does not exist on the target cluster, the value becomes an empty string.
2. The log-file template value retrieves the value of the log-file data key from the logs-config ConfigMap in the default namespace.
3. The log-level template value retrieves the value of the log-level data key from the logs-config ConfigMap in the default namespace.
1.2.2.1.3. fromClusterClaim
The fromClusterClaim function returns the value of Spec.Value in the ClusterClaim resource. View the following syntax for the function:
func fromClusterClaim (clusterclaimName string) (dataValue string, err Error)
View the following example of the configuration policy that enforces a Kubernetes resource on the target managed cluster:
apiVersion: policy.open-cluster-management.io/v1
kind: ConfigurationPolicy
metadata:
  name: demo-clusterclaims 1
spec:
  object-templates:
    - objectDefinition:
        ...
        data: 2
          platform: '{{ fromClusterClaim "platform.open-cluster-management.io" }}' 3
          product: '{{ fromClusterClaim "product.open-cluster-management.io" }}'
          version: '{{ fromClusterClaim "version.openshift.io" }}'
1. When you use this function, enter the name of a Kubernetes ClusterClaim resource. You receive a policy violation if the ClusterClaim resource does not exist.
2. Configuration values can be set as key-value properties.
3. The value for the platform data key is a template that retrieves the value of the platform.open-cluster-management.io cluster claim. Similarly, it retrieves the values for product and version from the ClusterClaim resource.
1.2.2.1.4. lookup
The lookup function returns the Kubernetes resource as a JSON compatible map. When you use this function, enter the API version, kind, namespace, name, and optional label selectors of the Kubernetes resource. You must use the same namespace that is used for the policy within the hub cluster template. See Template processing for more details.
If the requested resource does not exist, an empty map is returned. If the resource does not exist and the value is provided to another template function, you might get the following error: invalid value; expected string.
Note: Use the default template function, so the correct type is provided to later template functions. See the Sprig open source section.
View the following syntax for the function:
func lookup (apiversion string, kind string, namespace string, name string, labelselector ...string) (value string, err Error)
For label selector examples, see the reference to the Kubernetes labels and selectors documentation, in the Additional resources section. View the following example of the configuration policy that enforces a Kubernetes resource on the target managed cluster:
apiVersion: policy.open-cluster-management.io/v1
kind: ConfigurationPolicy
metadata:
  name: demo-lookup
spec:
  object-templates:
    - objectDefinition:
        ...
        data: 1
          app-name: sampleApp
          app-description: "this is a sample app"
          metrics-url: >- 2
            http://{{ (lookup "v1" "Service" "default" "metrics").spec.clusterIP }}:8080
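To illustrate the note about the default template function, the following sketch supplies a fallback value when the looked-up Service does not exist, so that later template processing still receives a string. The fallback address is illustrative:

```yaml
metrics-url: >-
  http://{{ (lookup "v1" "Service" "default" "metrics").spec.clusterIP | default "127.0.0.1" }}:8080
```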
1.2.2.1.5. base64enc
The base64enc function returns a base64 encoded value of the input data string. When you use this function, enter a string value. View the following syntax for the function:
func base64enc (data string) (enc-data string)
View the following example of the configuration policy that uses the base64enc function:
apiVersion: policy.open-cluster-management.io/v1
kind: ConfigurationPolicy
metadata:
  name: demo-fromsecret
spec:
  object-templates:
    - objectDefinition:
        ...
        data:
          USER_NAME: >-
            {{ fromConfigMap "default" "myconfigmap" "admin-user" | base64enc }}
1.2.2.1.6. base64dec
The base64dec function returns a base64 decoded value of the input enc-data string. When you use this function, enter a string value. View the following syntax for the function:
func base64dec (enc-data string) (data string)
View the following example of the configuration policy that uses the base64dec function:
apiVersion: policy.open-cluster-management.io/v1
kind: ConfigurationPolicy
metadata:
  name: demo-fromsecret
spec:
  object-templates:
    - objectDefinition:
        ...
        data:
          app-name: >-
            {{ (lookup "v1" "Secret" "testns" "mytestsecret").data.appname | base64dec }}
1.2.2.1.7. indent
The indent
function returns the padded data string
. When you use this function, enter a data string with the specific number of spaces. View the following syntax for the function:
func indent (spaces int, data string) (padded-data string)
View the following example of the configuration policy that uses the indent function:
apiVersion: policy.open-cluster-management.io/v1
kind: ConfigurationPolicy
spec:
  object-templates:
    - objectDefinition:
        ...
        data:
          Ca-cert: >-
            {{ ( index ( lookup "v1" "Secret" "default" "mycert-tls" ).data "ca.pem" ) | base64dec | indent 4 }}
1.2.2.1.8. autoindent
The autoindent function acts like the indent function, but automatically determines the number of leading spaces based on the number of spaces before the template.
View the following example of the configuration policy that uses the autoindent function:
apiVersion: policy.open-cluster-management.io/v1
kind: ConfigurationPolicy
metadata:
  name: demo-fromsecret
spec:
  object-templates:
    - objectDefinition:
        ...
        data:
          Ca-cert: >-
            {{ ( index ( lookup "v1" "Secret" "default" "mycert-tls" ).data "ca.pem" ) | base64dec | autoindent }}
1.2.2.1.9. toInt
The toInt function casts and returns the integer value of the input value. When this is the last function in the template, further processing of the source content occurs to ensure that the value is interpreted as an integer by the YAML parser. When you use this function, enter the data that needs to be cast as an integer. View the following syntax for the function:
func toInt (input interface{}) (output int)
View the following example of the configuration policy that uses the toInt function:
apiVersion: policy.open-cluster-management.io/v1
kind: ConfigurationPolicy
metadata:
  name: demo-template-function
spec:
  object-templates:
    - objectDefinition:
        ...
        spec:
          vlanid: >-
            {{ (fromConfigMap "site-config" "site1" "vlan") | toInt }}
1.2.2.1.10. toBool
The toBool function converts the input string into a boolean, and returns the boolean. When this is the last function in the template, further processing of the source content occurs to ensure that the value is interpreted as a boolean by the YAML parser. When you use this function, enter the string data that needs to be converted to a boolean. View the following syntax for the function:
func toBool (input string) (output bool)
View the following example of the configuration policy that uses the toBool function:
apiVersion: policy.open-cluster-management.io/v1
kind: ConfigurationPolicy
metadata:
  name: demo-template-function
spec:
  object-templates:
    - objectDefinition:
        ...
        spec:
          enabled: >-
            {{ (fromConfigMap "site-config" "site1" "enabled") | toBool }}
1.2.2.1.11. protect
The protect function enables you to encrypt a string in a hub cluster policy template. It is automatically decrypted on the managed cluster when the policy is evaluated. View the following example of the configuration policy that uses the protect function:
apiVersion: policy.open-cluster-management.io/v1
kind: ConfigurationPolicy
metadata:
  name: demo-template-function
spec:
  object-templates:
    - objectDefinition:
        ...
        spec:
          enabled: >-
            {{hub (lookup "v1" "Secret" "default" "my-hub-secret").data.message | protect hub}}
In the previous YAML example, an existing hub cluster policy template is defined to use the lookup function. On the replicated policy in the managed cluster namespace, the value might resemble the following syntax: $ocm_encrypted:okrrBqt72oI+3WT/0vxeI3vGa+wpLD7Z0ZxFMLvL204=
The encryption algorithm is AES-CBC with 256-bit keys. Each encryption key is unique per managed cluster and is automatically rotated every 30 days. This ensures that your decrypted value is never stored in the policy on the managed cluster.
To force an immediate rotation, delete the policy.open-cluster-management.io/last-rotated annotation on the policy-encryption-key secret in the managed cluster namespace on the hub cluster. Policies are then reprocessed to use the new encryption key.
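As a minimal sketch of forcing a rotation, assuming your managed cluster namespace is named <cluster name>, you might remove the annotation with oc (a trailing hyphen after an annotation key deletes that annotation):

```shell
# Deleting the last-rotated annotation forces an immediate encryption key rotation
oc -n <cluster name> annotate secret policy-encryption-key \
  policy.open-cluster-management.io/last-rotated-
```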
1.2.2.1.12. toLiteral
The toLiteral function removes any quotation marks around the template string after it is processed. You can use this function to convert a JSON string from a config map field to a JSON value in the manifest. Run the following function to remove quotation marks from the key parameter value:
key: '{{ "[\"10.10.10.10\", \"1.1.1.1\"]" | toLiteral }}'
After using the toLiteral function, the following update is displayed:
key: ["10.10.10.10", "1.1.1.1"]
1.2.2.1.13. copySecretData
The copySecretData function copies all of the data contents of the specified secret. View the following sample of the function:
...
objectDefinition:
apiVersion: v1
kind: Secret
metadata:
name: my-secret-copy
data: '{{ copySecretData "default" "my-secret" }}'
1.2.2.1.14. copyConfigMapData
The copyConfigMapData function copies all of the data content of the specified config map. View the following sample of the function:
...
objectDefinition:
  apiVersion: v1
  kind: ConfigMap
  metadata:
    name: my-secret-copy
  data: '{{ copyConfigMapData "default" "my-configmap" }}'
1.2.2.1.15. getNodesWithExactRoles
The getNodesWithExactRoles function returns a list of nodes with only the roles that you specify, and ignores nodes that have any additional roles except the node-role.kubernetes.io/worker role. View the following sample function where you are selecting "infra" nodes and ignoring the storage nodes:
...
objectDefinition:
  apiVersion: v1
  kind: ConfigMap
  metadata:
    name: my-configmap
  data:
    infraNode: |
      {{- range $i,$nd := (getNodesWithExactRoles "infra").items }}
      node{{ $i }}: {{ $nd.metadata.name }}
      {{- end }}
    replicas: {{ len ((getNodesWithExactRoles "infra").items) | toInt }}
1.2.2.1.16. hasNodesWithExactRoles
The hasNodesWithExactRoles function returns the true value if the cluster contains nodes with only the roles that you specify, and ignores nodes that have any additional roles except the node-role.kubernetes.io/worker role. View the following sample of the function:
...
objectDefinition:
  apiVersion: v1
  kind: ConfigMap
  metadata:
    name: my-configmap
  data:
    key: '{{ hasNodesWithExactRoles "infra" }}'
1.2.2.1.17. skipObject
The skipObject function is only available in managed cluster templates in a ConfigurationPolicy resource. Calling {{ skipObject }} at any time inside a Go template signals to skip that particular object for the policy. When you use skipObject with the objectSelector, you can further filter objects selected by name.
View the following example that selects objects that have the label foo: bar, but skips objects that have a name with the suffix -prod:
...
objectSelector:
  matchExpressions:
    - key: foo
      operator: In
      values:
        - bar
objectDefinition:
  apiVersion: v1
  kind: ConfigMap
  data:
    key: '{{ if (hasSuffix "-prod" .ObjectName) }}{{ skipObject }}{{ end }}{{ hasNodesWithExactRoles "infra" }}'
1.2.2.1.18. Sprig open source
Red Hat Advanced Cluster Management supports template functions from the following libraries of the sprig open source project:
- Cryptographic and security
- Date
- Default
- Dictionaries and dict
- Integer math
- Integer slice
- Lists
- String functions
- Version comparison
1.2.2.2. Additional resources
- See Template processing for more details.
- See Advanced template processing in policies for use-cases.
- See Policy CLI for tools to resolve Go templates locally.
- For label selector examples, see the Kubernetes labels and selectors documentation.
- Refer to the Golang documentation - Package templates.
- See the Sprig Function Documentation for more details.
1.2.3. Using advanced template processing in policies
Use both managed cluster and hub cluster templates to reduce the need to create separate policies for each target cluster or hardcode configuration values in the policy definitions. For security, both resource-specific and the generic lookup functions in hub cluster templates are restricted to the namespace of the policy on the hub cluster.
Important: If you use hub cluster templates to propagate secrets or other sensitive data, the sensitive data is exposed in the managed cluster namespace on the hub cluster and on the managed clusters where that policy is distributed. The template content is expanded in the policy, and policies are not encrypted by the OpenShift Container Platform ETCD encryption support. To address this, use the fromSecret or copySecretData functions, which automatically encrypt the values from the secret, or the protect function to encrypt other values.
Continue reading for advanced template use-cases:
1.2.3.1. Special annotation for reprocessing
Hub cluster templates are resolved to the data in the referenced resources during policy creation, or when the referenced resources are updated.
If you need to manually initiate an update, use the special annotation, policy.open-cluster-management.io/trigger-update, to indicate changes for the data referenced by the templates. Any change to the special annotation value automatically initiates template processing. Additionally, the latest contents of the referenced resource are read and updated in the policy definition that is propagated for processing on managed clusters. A way to use this annotation is to increment the value by one each time.
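As a sketch of how the annotation might look, assuming a hypothetical policy named demo-policy, each manual update changes the annotation value:

```yaml
apiVersion: policy.open-cluster-management.io/v1
kind: Policy
metadata:
  name: demo-policy  # hypothetical policy name
  annotations:
    # Changing this value, for example from "1" to "2", triggers template reprocessing
    policy.open-cluster-management.io/trigger-update: "2"
```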
1.2.3.2. Bypassing template processing
By default, Red Hat Advanced Cluster Management processes all Go templates. You might create a policy that contains a policy in the policy-templates field that is not intended to be processed by Red Hat Advanced Cluster Management. To bypass template processing for a particular template, change {{ template content }} to {{ `{{ template content }}` }}. Then, the template returns a raw string to be processed by a subsequent templating engine.
If you want to completely bypass template resolution for a configuration policy, add the policy.open-cluster-management.io/disable-templates: "true" annotation to the relevant ConfigurationPolicy that is contained in your Policy resource.
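As a minimal sketch, assuming a hypothetical ConfigurationPolicy name, the annotation might be set as follows:

```yaml
apiVersion: policy.open-cluster-management.io/v1
kind: ConfigurationPolicy
metadata:
  name: demo-no-templates  # hypothetical name
  annotations:
    # Template resolution is skipped entirely for this ConfigurationPolicy
    policy.open-cluster-management.io/disable-templates: "true"
```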
1.2.3.3. Configuration policy objectSelector
To iterate over many objects that are filtered by a label with ConfigurationPolicy, use the objectSelector and specify the kind in the objectDefinition. The objectSelector is similar to the namespaceSelector, except that it does not have name filtering capabilities. To filter by name, use the skipObject Go template function. If you reference {{ skipObject }} at any time, it signals to skip that particular object from those that are selected by the objectSelector for the policy.
See the following YAML sample that selects objects that have the label foo: bar, but skips objects that have a name with the suffix -prod:
...
objectSelector:
  matchExpressions:
    - key: foo
      operator: In
      values:
        - bar
objectDefinition:
  apiVersion: v1
  kind: ConfigMap
  data:
    key: '{{ if (hasSuffix "-prod" .ObjectName) }}{{ skipObject }}{{ end }}{{ hasNodesWithExactRoles "infra" }}'
1.2.3.4. Processing raw templates in configuration policies
The object-templates-raw parameter is an optional parameter that supports advanced templating use-cases, such as the if conditional function and the range loop function. The object-templates-raw parameter accepts a string containing Go templates, and when you use these templates, they must result in an object-templates array.
For example, see the following YAML sample that adds the species-category: mammal label to any ConfigMap in the default namespace that has a name key equal to Sea Otter:
object-templates-raw: |
  {{- range (lookup "v1" "ConfigMap" "default" "").items }}
  {{- if eq .data.name "Sea Otter" }}
  - complianceType: musthave
    objectDefinition:
      kind: ConfigMap
      apiVersion: v1
      metadata:
        name: {{ .metadata.name }}
        namespace: {{ .metadata.namespace }}
        labels:
          species-category: mammal
  {{- end }}
  {{- end }}
Note: While spec.object-templates and spec.object-templates-raw are optional, exactly one of the two parameter fields must be set.
View the following policy example that uses advanced templates to create and configure infrastructure MachineSet objects for your managed clusters:
apiVersion: policy.open-cluster-management.io/v1
kind: ConfigurationPolicy
metadata:
  name: create-infra-machineset
spec:
  remediationAction: enforce
  severity: low
  object-templates-raw: |
    {{- /* Specify the parameters needed to create the MachineSet */ -}}
    {{- $machineset_role := "infra" }}
    {{- $region := "ap-southeast-1" }}
    {{- $zones := list "ap-southeast-1a" "ap-southeast-1b" "ap-southeast-1c" }}
    {{- $infrastructure_id := (lookup "config.openshift.io/v1" "Infrastructure" "" "cluster").status.infrastructureName }}
    {{- $worker_ms := (index (lookup "machine.openshift.io/v1beta1" "MachineSet" "openshift-machine-api" "").items 0) }}
    {{- /* Generate the MachineSet for each zone as specified */ -}}
    {{- range $zone := $zones }}
    - complianceType: musthave
      objectDefinition:
        apiVersion: machine.openshift.io/v1beta1
        kind: MachineSet
        metadata:
          labels:
            machine.openshift.io/cluster-api-cluster: {{ $infrastructure_id }}
          name: {{ $infrastructure_id }}-{{ $machineset_role }}-{{ $zone }}
          namespace: openshift-machine-api
        spec:
          replicas: 1
          selector:
            matchLabels:
              machine.openshift.io/cluster-api-cluster: {{ $infrastructure_id }}
              machine.openshift.io/cluster-api-machineset: {{ $infrastructure_id }}-{{ $machineset_role }}-{{ $zone }}
          template:
            metadata:
              labels:
                machine.openshift.io/cluster-api-cluster: {{ $infrastructure_id }}
                machine.openshift.io/cluster-api-machine-role: {{ $machineset_role }}
                machine.openshift.io/cluster-api-machine-type: {{ $machineset_role }}
                machine.openshift.io/cluster-api-machineset: {{ $infrastructure_id }}-{{ $machineset_role }}-{{ $zone }}
            spec:
              metadata:
                labels:
                  node-role.kubernetes.io/{{ $machineset_role }}: ""
              taints:
                - key: node-role.kubernetes.io/{{ $machineset_role }}
                  effect: NoSchedule
              providerSpec:
                value:
                  ami:
                    id: {{ $worker_ms.spec.template.spec.providerSpec.value.ami.id }}
                  apiVersion: awsproviderconfig.openshift.io/v1beta1
                  blockDevices:
                    - ebs:
                        encrypted: true
                        iops: 2000
                        kmsKey:
                          arn: ''
                        volumeSize: 500
                        volumeType: io1
                  credentialsSecret:
                    name: aws-cloud-credentials
                  deviceIndex: 0
                  instanceType: {{ $worker_ms.spec.template.spec.providerSpec.value.instanceType }}
                  iamInstanceProfile:
                    id: {{ $infrastructure_id }}-worker-profile
                  kind: AWSMachineProviderConfig
                  placement:
                    availabilityZone: {{ $zone }}
                    region: {{ $region }}
                  securityGroups:
                    - filters:
                        - name: tag:Name
                          values:
                            - {{ $infrastructure_id }}-worker-sg
                  subnet:
                    filters:
                      - name: tag:Name
                        values:
                          - {{ $infrastructure_id }}-private-{{ $zone }}
                  tags:
                    - name: kubernetes.io/cluster/{{ $infrastructure_id }}
                      value: owned
                  userDataSecret:
                    name: worker-user-data
    {{- end }}
1.2.3.5. Resolving hub templates after defining configuration policies
By default, if you use hub templates in the ConfigurationPolicy and OperatorPolicy resources, those resources must correspond to Policy resources that are defined on the hub cluster.
You can also define configuration policies directly on the managed clusters, for example, by using OpenShift GitOps. To resolve hub templates on the managed cluster in that situation, you can enable the governance-standalone-hub-templating add-on.
To enable the governance-standalone-hub-templating add-on, complete the following steps:
- On your hub cluster, go to the managed cluster namespace.
- Create a ManagedClusterAddOn resource with the governance-standalone-hub-templating name by using the following YAML sample:

  apiVersion: addon.open-cluster-management.io/v1alpha1
  kind: ManagedClusterAddOn
  metadata:
    name: governance-standalone-hub-templating
    namespace: <cluster name>
    labels:
      cluster.open-cluster-management.io/backup: ''
  spec:
    installNamespace: open-cluster-management-agent-addon
By default, the agent on the managed cluster only has access to the ManagedCluster resources on the hub cluster. You can use the .ManagedClusterLabels template variable in the hub cluster templates inside the ConfigurationPolicies that are deployed directly to the managed cluster.
If you want the hub template to access other resources, such as through the lookup or fromConfigMap function calls, you must add those specific permissions to the add-on group. You can add these permissions through resources such as Roles, ClusterRoles, RoleBindings, and ClusterRoleBindings.
The name of the add-on group depends on the name of your managed cluster, but it has the following standard form: system:open-cluster-management:cluster:<cluster name>:addon:governance-standalone-hub-templating.
To allow access to ConfigMaps in your managed cluster namespace on your hub cluster, complete the following steps:
- Add the Role resource by running the following command:

  oc create role -n <cluster name> cm-reader --verb=get,list,watch --resource=configmaps
- Add the RoleBinding by running the following command:

  oc create rolebinding -n <cluster name> cm-reader-binding --role=cm-reader --group=system:open-cluster-management:cluster:<cluster name>:addon:governance-standalone-hub-templating
- To ensure these resources on the hub cluster get backed up and restored, add the following label to each resource that you create: cluster.open-cluster-management.io/backup.
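For example, assuming the Role and RoleBinding names from the previous steps, the label might be added with oc:

```shell
# Label the Role and RoleBinding so the backup operator includes them
oc -n <cluster name> label role cm-reader cluster.open-cluster-management.io/backup=""
oc -n <cluster name> label rolebinding cm-reader-binding cluster.open-cluster-management.io/backup=""
```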
After you add these resources, the add-on can resolve the hub templates, and the state of the policy is saved in a secret on the managed cluster. This secret prevents interruptions if the hub cluster becomes temporarily unavailable to the managed cluster.
1.2.3.6. Additional resources
- See Resources that are backed up for more details on backing up and restoring resources.
- See Template functions for more details.
- Return to Template processing.
- See Policy CLI for tools to resolve Go templates locally.
- See Kubernetes configuration policy controller for more details.
- Also refer to Backing up etcd data.