Release notes
Abstract
Learn about new features, access the Red Hat Advanced Cluster Management Support Matrix, and see errata updates. Find known issues and limitations, deprecations and removals, and information for GDPR and FIPS readiness.
Chapter 1. Release notes for Red Hat Advanced Cluster Management
Learn about Red Hat Advanced Cluster Management 2.14 new features and enhancements, support, deprecations, removals, and fixed issues for errata releases.
Important: Cluster lifecycle components and features are within the multicluster engine operator, which is a software operator that enhances cluster fleet management. Release notes for multicluster engine operator-specific features are found at Release notes for Cluster lifecycle with multicluster engine operator.
Access the Red Hat Advanced Cluster Management Support Matrix to learn about hub cluster and managed cluster requirements and support for each component. For lifecycle information, see Red Hat OpenShift Container Platform Life Cycle policy.
- New features and enhancements for Red Hat Advanced Cluster Management
- Fixed issues for Red Hat Advanced Cluster Management
- Known issues for Red Hat Advanced Cluster Management
- Deprecations and removals for Red Hat Advanced Cluster Management
- Red Hat Advanced Cluster Management for Kubernetes considerations for GDPR readiness
- FIPS readiness
- Observability support
Important: OpenShift Container Platform release notes are not documented in this product documentation. For your OpenShift Container Platform cluster, see OpenShift Container Platform release notes.
Deprecated: Red Hat Advanced Cluster Management 2.9 and earlier versions are no longer supported. The documentation might remain available, but without any errata releases for fixed issues or other updates.
Best practice: Upgrade to the most recent version.
- The documentation references the earliest supported Red Hat OpenShift Container Platform versions, unless the component in the documentation is created and tested with only a specific version of OpenShift Container Platform.
- For full support information, see the Red Hat Advanced Cluster Management Support Matrix and the Lifecycle and update policies for Red Hat Advanced Cluster Management for Kubernetes.
- If you experience issues with one of the currently supported releases, or the product documentation, go to Red Hat Support where you can troubleshoot, view Knowledgebase articles, connect with the Support Team, or open a case. You must log in with your credentials.
- You can also learn more about the Customer Portal documentation at Red Hat Customer Portal FAQ.
1.1. New features and enhancements for Red Hat Advanced Cluster Management
Red Hat Advanced Cluster Management for Kubernetes 2.14 provides visibility of your entire Kubernetes domain with built-in governance, cluster lifecycle management, and application lifecycle management with GitOps, along with observability.
Red Hat Advanced Cluster Management version 2.14 is released with the multicluster engine operator version 2.9 for Cluster lifecycle management. To learn about cluster management for this release, see the Release notes for Cluster lifecycle with multicluster engine operator.
Access the Red Hat Advanced Cluster Management Support Matrix to learn about hub cluster and managed cluster requirements and support for each component. For lifecycle information, see Red Hat OpenShift Container Platform Life Cycle policy.
Important:
- Submariner version 0.21.0 is not released at the same time as Red Hat Advanced Cluster Management 2.14. If you use Submariner with Red Hat Advanced Cluster Management, do not update to Red Hat Advanced Cluster Management version 2.14 until the latest Submariner version is released and announced.
- Red Hat Advanced Cluster Management supports all providers that are certified through the Cloud Native Computing Foundation (CNCF) Kubernetes Conformance Program. Choose a vendor that is recognized by CNCF for your hybrid cloud multicluster management. See the following information about using CNCF providers:
- Learn how CNCF providers are certified at Certified Kubernetes Conformance.
- For Red Hat support information about CNCF third-party providers, see Red Hat support with third party components, or Contact Red Hat support.
- If you bring your own CNCF conformance certified cluster, you need to change the OpenShift Container Platform CLI oc command to the Kubernetes CLI command, kubectl.
1.1.1. General announcements for this release
Red Hat Advanced Cluster Management is now available on the AWS Marketplace with either on-demand or annual pricing and a simplified billing option for running Red Hat Advanced Cluster Management on Red Hat OpenShift Service on AWS clusters. Red Hat Advanced Cluster Management on the AWS Marketplace offers billing consistency, flexible resource consumption, and cost efficiency.
Purchase from the AWS Marketplace, then you can install MultiClusterHub resources and manage your clusters. To view the overall consumption, go to console.redhat.com. Click Subscriptions Usage > OpenShift. Select Variant: Red Hat Advanced Cluster Management from the drop-down menu.
1.1.2. New features and enhancements for each component
Learn specific details about new features for components within Red Hat Advanced Cluster Management:
Some features and components are identified and released as Technology Preview.
1.1.3. Installation
Learn about Red Hat Advanced Cluster Management installation features and enhancements. multicluster engine operator installation release notes are in the Cluster lifecycle with multicluster engine operator documentation that is linked earlier in this topic.
- You can now change the name of your local cluster, which is your managed hub cluster. By default, the hub cluster manages itself as the local-cluster unless you change the settings. When the disableHubSelfManagement field is set to true, which disables the local cluster feature, you can change the name. Use 34 or fewer characters for the <your-local-cluster-name> value. The local-cluster resource and the namespaces reflect the change.
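The following is a minimal sketch of the related MultiClusterHub settings. The disableHubSelfManagement field is the setting described above, while the localClusterName field name is an assumption for illustration only and should be verified against the MultiClusterHub advanced configuration documentation:
apiVersion: operator.open-cluster-management.io/v1
kind: MultiClusterHub
metadata:
  name: multiclusterhub
  namespace: open-cluster-management
spec:
  # Disables the local cluster feature so that the name can be changed
  disableHubSelfManagement: true
  # Assumed field for the custom name; use 34 or fewer characters
  localClusterName: <your-local-cluster-name>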
Learn about Red Hat Advanced Cluster Management advanced configuration options at MultiClusterHub advanced configuration in the documentation.
1.1.4. Console
Learn about what is new in the Red Hat Advanced Cluster Management integrated console, which includes Search capability.
- You can filter discovered clusters by cluster type and infrastructure provider in the console. To learn more, see View discovered clusters.
- You can now view related resources for your policies across different clusters within a single table in the Discovered policies section of the console. This table works for all the policy types except for the Gatekeeper operator resources. See Policy deployment for more details.
See Web console for more information about the Console.
1.1.5. Clusters
For new features that are related to multicluster engine operator, see New features and enhancements for Cluster lifecycle with multicluster engine operator in the Cluster section of the documentation.
For cluster management with Red Hat Advanced Cluster Management, see the following new features and enhancements:
- Technology Preview: You can now reinstall clusters by reusing the same ClusterInstance custom resource from the SiteConfig Operator. For more information, see Cluster reinstallation with the SiteConfig operator (Technology Preview).
- Technology Preview: You can grant more specific permissions with fine-grained role-based access for your virtual machines. As a cluster administrator, you can now manage and control permissions that are based on the namespace of a managed cluster, as well as the entire cluster. You can also grant permissions for individual virtual machines. See Implementing fine-grained role-based access control with the console (Technology Preview) and Implementing fine-grained role-based access control in the terminal (Technology Preview) for information.
- You can now configure resource requests and limits on all add-ons. To learn more, see Configuring klusterlet add-ons.
View other cluster management tasks and support information at Cluster lifecycle with multicluster engine operator overview.
1.1.6. multicluster global hub
Learn what is new for multicluster global hub this release.
- You can now change the built-in amq-strimzi catalog source name and namespace in your disconnected environment. For more information, see Adding the registry and catalog to your disconnected cluster.
- You can install multicluster global hub in an existing Red Hat Advanced Cluster Management hub cluster and enable the multicluster global hub agent in Red Hat Advanced Cluster Management. Kafka stores your data in the multicluster global hub database so you can view the cluster and policy information from the Grafana dashboards. For more information, see Installing multicluster global hub on an existing Red Hat Advanced Cluster Management hub cluster.
- Technology Preview: With the Managed Cluster Migration feature, you can migrate managed clusters from one Red Hat Advanced Cluster Management hub cluster to another. For more information, see Migrating managed clusters (Technology Preview)
- For other multicluster global hub topics, see multicluster global hub.
1.1.7. Applications
Read about new features for application management.
- You can now implement the ApplicationSet progressive rollout strategy to your application in your cluster fleet for both the push and pull model, giving you a Red Hat OpenShift GitOps-based process that makes changes safely across your entire cluster fleet. For more information, see Implementing progressive rollout strategy by using ApplicationSet resource (Technology Preview).
- When you create a ClusterPermission resource, you can still use an existing ClusterRole instead of using one of the newly created roles. For more information, see Creating a cluster permission using an existing role.
- You can create a ClusterPermission resource so that you can simultaneously work on different subjects. For more information, see Creating a cluster permission resource to reference subjects.
For other Application and GitOps topics, see Managing applications.
1.1.8. Observability
- Technology Preview: Use right-sizing guides to optimize workloads on a cluster and namespace level. To learn more, see Optimizing workloads by using right-sizing guides (Technology Preview).
- You can disable user workload alert forwarding to Alertmanager for your managed clusters by setting the mco-disable-uwl-alerting annotation to true. See Disabling user workload alert forwarding for managed clusters for more details.
See Observability service to learn more.
1.1.9. Governance
- You can now use the .Object Go template context variable in the ConfigurationPolicy resource to reference values from the target object on the cluster along with the .ObjectName and .ObjectNamespace context variables. For more information, see the Comparison of hub cluster and managed cluster templates.
- You can convert YAML strings by implementing the fromYaml and toYaml Go template functions in the ConfigurationPolicy resource. You can also access more Sprig functions in the list of available functions. For more information, see fromYaml.
- With the dryrun subcommand, you can run the policytools command line to evaluate a ConfigurationPolicy locally by accessing the static resource files. For more information, see Policy command line interface.
- To give different operator versions in one entry of the versions list within an OperatorPolicy resource, separate the versions with a comma. For more information, see Operator policy YAML table.
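The following is a minimal sketch of an OperatorPolicy with one versions entry that lists multiple allowed versions separated by a comma. The policy name, subscription details, and version values are illustrative assumptions:
apiVersion: policy.open-cluster-management.io/v1beta1
kind: OperatorPolicy
metadata:
  name: install-example-operator
spec:
  complianceType: musthave
  remediationAction: enforce
  subscription:
    name: example-operator
    namespace: example-operator-ns
  versions:
    # One entry can contain several allowed versions, separated with a comma
    - "example-operator.v1.2.3, example-operator.v1.2.4"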
See Governance to learn more about the dashboard and the policy framework.
1.1.10. Business continuity
Learn about new features for Back up and restore and VolSync components.
- You can now use the namespaceMapping property to restore resources in a different namespace than the namespace of the initial resource. See Defining the acm-dr-virt-restore-config ConfigMap for more information.
- You can set an annotation on the OADP operator to use the startingCSV property. For more information, see Installing a startingCSV on the OADP operator.
To learn about VolSync, which enables asynchronous replication of persistent volumes within a cluster, see VolSync persistent volume replication service.
To learn about Backup and restore, see Backup and restore.
1.1.11. Networking
- Technology Preview: You can now use dual-stack clusters with OVN-Kubernetes Container Network Interface on Submariner without Globalnet and with the Libreswan cable driver. See Submariner multicluster networking and service discovery.
See Networking.
1.1.12. Learn more about this release
See more information about the release, as well as support information for the product.
- See more release notes, such as Known Issues and Limitations, in Release notes for Red Hat Advanced Cluster Management.
- Get an overview of Red Hat Advanced Cluster Management from the Welcome to Red Hat Advanced Cluster Management for Kubernetes topic.
- See the Multicluster architecture topic to learn more about major components of the product.
- See support information and more in the Red Hat Advanced Cluster Management Troubleshooting.
- Access the open source Open Cluster Management community for interaction, growth, and contributions from the open community. To get involved, see open-cluster-management.io.
1.2. Fixed issues for Red Hat Advanced Cluster Management
By default, fixed issues for errata releases are automatically applied when released. The details are published here when the release is available. If no release notes are listed, the product does not have an errata release at this time.
Important: For reference, Jira links and Jira numbers might be added to the content and used internally. Links that require access might not be available for the user. Learn about the types of errata releases from Red Hat.
See Upgrading by using the operator for more information about upgrades.
Important: Cluster lifecycle components and features are within the multicluster engine operator, which is a software operator that enhances cluster fleet management. Release notes for multicluster engine operator-specific features are found at Release notes for Cluster lifecycle with multicluster engine operator.
1.3. Known issues for Red Hat Advanced Cluster Management
Review the known issues for Red Hat Advanced Cluster Management. The following list contains known issues for this release, or known issues that continued from the previous release.
Important: Cluster lifecycle components and features are within the multicluster engine operator, which is a software operator that enhances cluster fleet management. Release notes for multicluster engine operator-specific features are found at Release notes for Cluster lifecycle with multicluster engine operator.
Important: OpenShift Container Platform release notes are not documented in this product documentation. For your OpenShift Container Platform cluster, see OpenShift Container Platform release notes.
For more about deprecations and removals, see Deprecations and removals for Red Hat Advanced Cluster Management.
1.3.1. Installation known issues
Review the known issues for installing and upgrading. The following list contains known issues for this release, or known issues that continued from the previous release.
For your Red Hat OpenShift Container Platform cluster, see OpenShift Container Platform known issues.
For more about deprecations and removals, see Deprecations and removals for Red Hat Advanced Cluster Management.
1.3.1.1. Uninstalling and reinstalling earlier versions with an upgrade can fail
Uninstalling Red Hat Advanced Cluster Management from OpenShift Container Platform can cause issues if you later want to install earlier versions and then upgrade. For instance, when you uninstall Red Hat Advanced Cluster Management, then install an earlier version of Red Hat Advanced Cluster Management and upgrade that version, the upgrade might fail. The upgrade fails if the custom resources were not removed.
Follow the Cleaning up artifacts before reinstalling procedure to prevent this problem.
1.3.1.2. Cannot remove data image resources after a cluster reinstallation with Image-Based Break/Fix
If you change the spec.nodes.<node-id>.bmcAddress field while reinstalling a cluster with Image-Based Break/Fix, the SiteConfig operator is unable to contact the original machine and cannot delete the data image resources from the original cluster. To work around this issue, remove the finalizer from the data image resources before reinstalling a cluster with Image-Based Break/Fix.
To remove the finalizer from the data image resources, run the following command:
oc patch dataimages.metal3.io -n target-0 target-0-0 --patch '{"metadata":{"finalizers":[]}}' --type merge
See the following example output:
dataimage.metal3.io/target-0-0 patched
1.3.2. Business continuity known issues
Review the known issues for business continuity. The following list contains known issues for this release, or known issues that continued from the previous release.
For your Red Hat OpenShift Container Platform cluster, see OpenShift Container Platform known issues.
For more about deprecations and removals, see Deprecations and removals for Red Hat Advanced Cluster Management.
1.3.2.1. Backup and restore known issues
Backup and restore known issues and limitations are listed here, along with workarounds if they are available.
1.3.2.1.1. Bare metal hub resource no longer backed up by the managed clusters backup
If the resources for the bare metal cluster are backed up and restored to a secondary hub cluster by using the Red Hat Advanced Cluster Management back up and restore feature, the managed cluster reinstalls on the nodes, which destroys the existing managed cluster.
Note: This only affects bare metal clusters that were deployed by using zero touch provisioning, meaning that they have BareMetalHost resources that manage powering on and off bare metal nodes and attaching virtual media for booting. If a BareMetalHost resource was not used in the deployment of the managed cluster, there is no negative impact.
To work around this issue, the BareMetalHost resources on the primary hub cluster are no longer backed up with the managed cluster backup.
If you have a different use case and want the managed BareMetalHost resources on the primary hub cluster to be backed up, add the following backup label to the BareMetalHost resources on the primary hub cluster: cluster.open-cluster-management.io/backup.
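The following is a hedged example of adding the backup label with the oc CLI; the BareMetalHost name, namespace, and empty label value are illustrative assumptions:
oc label baremetalhost <bmh-name> -n <bmh-namespace> cluster.open-cluster-management.io/backup=""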
To learn more about using this backup label to back up generic resources, see Resources that are backed up.
1.3.2.1.2. Velero restore limitations
A new hub cluster can have a different configuration than the active hub cluster if the new hub cluster, where the data is restored, has user-created resources. For example, this can include an existing policy that was created on the new hub cluster before the backup data is restored on the new hub cluster.
Velero skips existing resources if they are not part of the restored backup, so the policy on the new hub cluster remains unchanged, resulting in a different configuration between the new hub cluster and active hub cluster.
To address this limitation, the cluster backup and restore operator runs a post restore operation to clean up the resources created by the user or a different restore operation when a restore.cluster.open-cluster-management.io resource is created.
For more information, see the Cleaning the hub cluster after restore topic.
1.3.2.1.3. Passive configurations do not display managed clusters
Managed clusters are only displayed when the activation data is restored on the passive hub cluster.
1.3.2.1.4. Managed cluster resource not restored
When you restore the settings for the local-cluster managed cluster resource and overwrite the local-cluster data on a new hub cluster, the settings are misconfigured. Content from the previous hub cluster local-cluster is not backed up because the resource contains local-cluster specific information, such as the cluster URL details.
You must manually apply any configuration changes that are related to the local-cluster resource on the restored cluster. See Prepare the new hub cluster in the Installing the backup and restore operator topic.
1.3.2.1.5. Restored Hive managed clusters might not be able to connect with the new hub cluster
When you restore the backup of the changed or rotated certificate authority (CA) for the Hive managed cluster on a new hub cluster, the managed cluster fails to connect to the new hub cluster. The connection fails because the admin kubeconfig secret for this managed cluster, available with the backup, is no longer valid.
You must manually update the restored admin kubeconfig secret of the managed cluster on the new hub cluster.
1.3.2.1.6. Imported managed clusters show a Pending Import status
Managed clusters that are manually imported on the primary hub cluster show a Pending Import status when the activation data is restored on the passive hub cluster. For more information, see Connecting clusters by using a Managed Service Account.
1.3.2.1.7. The appliedmanifestwork is not removed from managed clusters after restoring the hub cluster
When the hub cluster data is restored on the new hub cluster, the appliedmanifestwork is not removed from managed clusters that have a placement rule for an application subscription that is not a fixed cluster set.
See the following example of a placement rule for an application subscription that is not a fixed cluster set:
spec:
  clusterReplicas: 1
  clusterSelector:
    matchLabels:
      environment: dev
As a result, the application is orphaned when the managed cluster is detached from the restored hub cluster.
To avoid the issue, specify a fixed cluster set in the placement rule. See the following example:
spec:
  clusterSelector:
    matchLabels:
      environment: dev
You can also delete the remaining appliedmanifestwork manually by running the following command:
oc delete appliedmanifestwork <the-left-appliedmanifestwork-name>
1.3.2.1.8. The managed-serviceaccount add-on status shows Unknown
The managed cluster appliedmanifestwork addon-managed-serviceaccount-deploy is removed from the imported managed cluster if you are using the Managed Service Account without enabling it on the multicluster engine for Kubernetes operator resource of the new hub cluster.
The managed cluster is still imported to the new hub cluster, but the managed-serviceaccount add-on status shows Unknown.
You can recover the managed-serviceaccount add-on after enabling the Managed Service Account in the multicluster engine operator resource. See Enabling automatic import to learn how to enable the Managed Service Account.
1.3.2.2. VolSync known issues
1.3.2.2.1. Restoring the connection of a managed cluster with custom CA certificates to its restored hub cluster might fail
After you restore the backup of a hub cluster that manages a cluster with custom CA certificates, the connection between the managed cluster and the hub cluster might fail. This is because the CA certificate was not backed up on the restored hub cluster. To restore the connection, copy the custom CA certificate information that is in the namespace of your managed cluster to the <managed_cluster>-admin-kubeconfig secret on the restored hub cluster.
Note: If you copy this CA certificate to the hub cluster before creating the backup copy, the backup copy includes the secret information. When you use the backup copy to restore in the future, the connection between the hub cluster and managed cluster automatically completes.
1.3.3. Console known issues
Review the known issues for the console. The following list contains known issues for this release, or known issues that continued from the previous release.
For your Red Hat OpenShift Container Platform cluster, see OpenShift Container Platform known issues.
For more about deprecations and removals, see Deprecations and removals for Red Hat Advanced Cluster Management.
1.3.3.1. klusterlet-addon-search pod fails
The klusterlet-addon-search pod fails because the memory limit is reached. You must update the memory request and limit by customizing the klusterlet-addon-search deployment on your managed cluster. Edit the ManagedClusterAddOn custom resource named search-collector on your hub cluster. Add the following annotations to the search-collector resource and update the memory: addon.open-cluster-management.io/search_memory_request=512Mi and addon.open-cluster-management.io/search_memory_limit=1024Mi.
For example, if you have a managed cluster named foobar, run the following command to change the memory request to 512Mi and the memory limit to 1024Mi:
oc annotate managedclusteraddon search-collector -n foobar \
  addon.open-cluster-management.io/search_memory_request=512Mi \
  addon.open-cluster-management.io/search_memory_limit=1024Mi
1.3.3.1.1. Search does not display node information from the managed cluster
Search maps RBAC for resources in the hub cluster. Depending on user RBAC settings, users might not see node data from the managed cluster. Results from search might be different from what is displayed on the Nodes page for a cluster.
1.3.3.2. Cannot upgrade OpenShift Dedicated in console
From the console you can request an upgrade for OpenShift Dedicated clusters, but the upgrade fails with the Cannot upgrade non openshift cluster error message. Currently there is no workaround.
1.3.3.3. Search PostgreSQL pod is in CrashLoopBackoff state
The search-postgres pod is in CrashLoopBackoff state. If Red Hat Advanced Cluster Management is deployed in a cluster with nodes that have the hugepages parameter enabled and the search-postgres pod gets scheduled on these nodes, then the pod does not start.
Complete the following steps to increase the memory of the search-postgres pod:
1. Pause the search-operator pod with the following command:
oc annotate search search-v2-operator search-pause=true
2. Update the search-postgres deployment with a limit for the hugepages parameter. Run the following command to set the hugepages parameter to 512Mi:
oc patch deployment search-postgres --type json -p '[{"op": "add", "path": "/spec/template/spec/containers/0/resources/limits/hugepages-2Mi", "value":"512Mi"}]'
3. Before you verify the memory usage for the pod, make sure your search-postgres pod is in the Running state. Run the following command:
oc get pod <your-postgres-pod-name> -o jsonpath="Status: {.status.phase}"
4. Run the following command to verify the memory usage of the search-postgres pod:
oc get pod <your-postgres-pod-name> -o jsonpath='{.spec.containers[0].resources.limits.hugepages-2Mi}'
The following value appears: 512Mi.
1.3.3.4. Cannot edit namespace bindings for cluster set
When you edit namespace bindings for a cluster set with the admin role or bind role, you might encounter an error that resembles the following message:
ResourceError: managedclustersetbindings.cluster.open-cluster-management.io "<cluster-set>" is forbidden: User "<user>" cannot create/delete resource "managedclustersetbindings" in API group "cluster.open-cluster-management.io" in the namespace "<namespace>".
To resolve the issue, make sure you also have permission to create or delete a ManagedClusterSetBinding resource in the namespace you want to bind. The role bindings only allow you to bind the cluster set to the namespace.
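The following is a minimal sketch, using standard Kubernetes RBAC, of a Role that grants the create and delete permissions named in the error message; the Role name, namespace placeholder, and verb list are illustrative assumptions:
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: managedclustersetbinding-editor
  namespace: <namespace>
rules:
  - apiGroups: ["cluster.open-cluster-management.io"]
    resources: ["managedclustersetbindings"]
    verbs: ["create", "delete", "get", "list"]
Bind the Role to your user with a RoleBinding in the same namespace.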
1.3.3.5. Horizontal scrolling does not work after provisioning hosted control plane cluster
After provisioning a hosted control plane cluster, you might not be able to scroll horizontally in the cluster overview of the Red Hat Advanced Cluster Management console if the ClusterVersionUpgradeable parameter is too long. You cannot view the hidden data as a result.
To work around the issue, zoom out by using your browser zoom controls, increase your Red Hat Advanced Cluster Management console window size, or copy and paste the text to a different location.
1.3.3.6. EditApplicationSet expand feature repeats
When you add multiple label expressions or attempt to enter your cluster selector for your ApplicationSet, you might receive the message "Expand to enter expression" repeatedly. You can enter your cluster selection despite this issue.
1.3.3.7. Unable to log out from Red Hat Advanced Cluster Management
When you use an external identity provider to log in to Red Hat Advanced Cluster Management, you might not be able to log out of Red Hat Advanced Cluster Management. This occurs when you use Red Hat Advanced Cluster Management, installed with IBM Cloud and Keycloak as the identity providers.
You must log out of the external identity provider before you attempt to log out of Red Hat Advanced Cluster Management.
1.3.3.8. Issues with entering the cluster-ID in the OpenShift Cloud Manager console
If you did not access the cluster-ID in the OpenShift Cloud Manager console, you can still get a description of your Red Hat OpenShift Service on AWS cluster-ID from the terminal. You need the Red Hat OpenShift Service on AWS command line interface. See the Getting started with the Red Hat OpenShift Service on AWS CLI documentation.
To get the cluster-ID, run the following command from the Red Hat OpenShift Service on AWS command line interface:
rosa describe cluster --cluster=<cluster-name> | grep -o '^ID:.*'
1.3.4. Cluster management known issues and limitations
Review the known issues for Cluster management with Red Hat Advanced Cluster Management. The following list contains known issues for this release, or known issues that continued from the previous release.
For Cluster management with the stand-alone multicluster engine operator known issues and limitations, see Cluster lifecycle known issues and limitations in the multicluster engine operator documentation.
1.3.4.1. Hub cluster communication limitations
The following limitations occur if the hub cluster is not able to reach or communicate with the managed cluster:
- You cannot create a new managed cluster by using the console. You are still able to import a managed cluster manually by using the command line interface or by using the Run import commands manually option in the console.
- If you deploy an Application or ApplicationSet by using the console, or if you import a managed cluster into ArgoCD, the hub cluster ArgoCD controller calls the managed cluster API server. You can use AppSub or the ArgoCD pull model to work around the issue.
1.3.4.2. The local-cluster might not be automatically recreated
If the local-cluster is deleted while disableHubSelfManagement is set to false, the local-cluster is recreated by the MultiClusterHub operator. After you detach a local-cluster, the local-cluster might not be automatically recreated.
- To resolve this issue, modify a resource that is watched by the MultiClusterHub operator. See the following example:
  oc delete deployment multiclusterhub-repo -n <namespace>
- To properly detach the local-cluster, set the disableHubSelfManagement to true in the MultiClusterHub resource.
1.3.4.3. Local-cluster status offline after reimporting with a different name
When you accidentally try to reimport the cluster named local-cluster as a cluster with a different name, the status for local-cluster and for the reimported cluster display offline.
To recover from this case, complete the following steps:
1. Run the following command on the hub cluster to edit the setting for self-management of the hub cluster temporarily:
oc edit mch -n open-cluster-management multiclusterhub
2. Add the setting spec.disableSelfManagement=true.
3. Run the following command on the hub cluster to delete and redeploy the local-cluster:
oc delete managedcluster local-cluster
4. Enter the following command to remove the local-cluster management setting:
oc edit mch -n open-cluster-management multiclusterhub
5. Remove spec.disableSelfManagement=true that you previously added.
1.3.4.4. Hub cluster and managed clusters clock not synced
Hub cluster and managed cluster time might become out-of-sync, displaying unknown in the console and eventually available within a few minutes. Ensure that the OpenShift Container Platform hub cluster time is configured correctly. See Customizing nodes.
1.3.5. Application known issues and limitations
Review the known issues for application management. The following list contains known issues for this release, or known issues that continued from the previous release.
For your Red Hat OpenShift Container Platform cluster, see OpenShift Container Platform known issues.
For more about deprecations and removals, see Deprecations and removals for Red Hat Advanced Cluster Management.
See the following known issues for the Application lifecycle component.
1.3.5.1. Subscription application displays incorrect warning message
When you deploy the Subscription application (Deprecated), it displays a warning message for the Subscription node in the Application Topology page. If you check the Subscription node details, it incorrectly shows that the local-cluster is offline.
Check the real status of the local-cluster by clicking Infrastructure > Clusters from the console navigation.
1.3.5.3. Application topology displays invalid expression
When you use the Exist or DoesNotExist operators in the Placement resource, the application topology node details display the expressions as #invalidExpr. This display is wrong, and the expression is still valid and works in the Placement resource. To work around this issue, edit the expression inside the Placement resource YAML.
1.3.5.4. Editing subscription applications with PlacementRule does not display the subscription YAML in editor
After you create a subscription application that references a PlacementRule resource, the subscription YAML does not display in the YAML editor in the console. Use your terminal to edit your subscription YAML file.
1.3.5.5. Helm Chart with secret dependencies cannot be deployed by the Red Hat Advanced Cluster Management subscription
Using Helm Chart, you can define privacy data in a Kubernetes secret and refer to this secret within the value.yaml file of the Helm Chart.
The username and password are given by the referred Kubernetes secret resource dbsecret. For example, see the following sample value.yaml file:
credentials:
  secretName: dbsecret
  usernameSecretKey: username
  passwordSecretKey: password
The Helm Chart with secret dependencies is only supported in the Helm binary CLI. It is not supported in the operator SDK Helm library. The Red Hat Advanced Cluster Management subscription controller applies the operator SDK Helm library to install and upgrade the Helm Chart. Therefore, the Red Hat Advanced Cluster Management subscription cannot deploy the Helm Chart with secret dependencies.
1.3.5.6. Topology does not correctly display for Argo CD pull model ApplicationSet application
When you use the Argo CD pull model to deploy ApplicationSet applications and the application resource names are customized, the resource names might appear different for each cluster. When this happens, the topology does not display your application correctly.
1.3.5.7. Local cluster is excluded as a managed cluster for pull model
The hub cluster application set deploys to target managed clusters, but the local cluster, which is a managed hub cluster, is excluded as a target managed cluster.
As a result, if the Argo CD application is propagated to the local cluster by the Argo CD pull model, the local cluster Argo CD application is not cleaned up, even though the local cluster is removed from the placement decision of the Argo CD ApplicationSet resource.
To work around the issue and clean up the local cluster Argo CD application, remove the skip-reconcile annotation from the local cluster Argo CD application. See the following annotation:
annotations:
  argocd.argoproj.io/skip-reconcile: "true"
Additionally, if you manually refresh the pull model Argo CD application in the Applications section of the Argo CD console, the refresh is not processed and the REFRESH button in the Argo CD console is disabled.
To work around the issue, remove the refresh annotation from the Argo CD application. See the following annotation:
annotations:
  argocd.argoproj.io/refresh: normal
1.3.5.8. Argo CD controller and the propagation controller might reconcile simultaneously
Both the Argo CD controller and the propagation controller might reconcile on the same application resource and cause duplicate instances of application deployment on the managed clusters, but from the different deployment models.
For deploying applications by using the pull model, the Argo CD controllers ignore these application resources when the Argo CD argocd.argoproj.io/skip-reconcile annotation is added to the template section of the ApplicationSet.
The argocd.argoproj.io/skip-reconcile annotation is only available in the GitOps operator version 1.9.0, or later. To prevent conflicts, wait until the hub cluster and all the managed clusters are upgraded to GitOps operator version 1.9.0 before implementing the pull model.
1.3.5.9. Resource fails to deploy
All the resources listed in the MulticlusterApplicationSetReport are actually deployed on the managed clusters. If a resource fails to deploy, the resource is not included in the resource list, but the cause is listed in the error message.
1.3.5.10. Resource allocation might take several minutes
For large environments with over 1000 managed clusters and Argo CD application sets that are deployed to hundreds of managed clusters, Argo CD application creation on the hub cluster might take several minutes. You can set the requeueAfterSeconds to zero in the clusterDecisionResource generator of the application set, as shown in the following example:
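The following is a minimal excerpt showing only the generator section of an ApplicationSet with requeueAfterSeconds set to zero; the ApplicationSet name, namespace, and placement label value are illustrative assumptions, and the configMapRef name follows the common Red Hat Advanced Cluster Management GitOps integration setup:
apiVersion: argoproj.io/v1alpha1
kind: ApplicationSet
metadata:
  name: example-appset
  namespace: openshift-gitops
spec:
  generators:
    - clusterDecisionResource:
        configMapRef: acm-placement
        labelSelector:
          matchLabels:
            cluster.open-cluster-management.io/placement: example-placement
        # requeueAfterSeconds set to zero, as described above
        requeueAfterSeconds: 0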
1.3.5.11. Application ObjectBucket channel type cannot use allow and deny lists
You cannot specify allow and deny lists with the ObjectBucket channel type in the subscription-admin role. In other channel types, the allow and deny lists in the subscription indicate which Kubernetes resources can be deployed, and which Kubernetes resources should not be deployed.
1.3.5.12. Changes to the multicluster_operators_subscription image do not take effect automatically
The application-manager add-on that is running on the managed clusters is now handled by the subscription operator, when it was previously handled by the klusterlet operator. The subscription operator is not managed by the multicluster-hub, so changes to the multicluster_operators_subscription image in the multicluster-hub image manifest ConfigMap do not take effect automatically.
If the image that is used by the subscription operator is overridden by changing the multicluster_operators_subscription image in the multicluster-hub image manifest ConfigMap, the application-manager add-on on the managed clusters does not use the new image until the subscription operator pod is restarted. You need to restart the pod.
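The following is a hedged example of restarting the subscription operator pod on the hub cluster; the multicluster-operators-subscription deployment name and the open-cluster-management namespace are assumptions and might differ in your environment:
oc rollout restart deployment multicluster-operators-subscription -n open-cluster-management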
1.3.5.13. Policy resource not deployed unless by subscription administrator
The policy.open-cluster-management.io/v1 resources are no longer deployed by an application subscription by default for Red Hat Advanced Cluster Management version 2.4.
A subscription administrator needs to deploy the application subscription to change this default behavior.
See Creating an allow and deny list as subscription administrator for information. policy.open-cluster-management.io/v1 resources that were deployed by existing application subscriptions in previous Red Hat Advanced Cluster Management versions remain, but are no longer reconciled with the source repository unless the application subscriptions are deployed by a subscription administrator.
1.3.5.14. Application Ansible hook stand-alone mode
Ansible hook stand-alone mode is not supported. To deploy an Ansible hook on the hub cluster with a subscription, you might set spec.placement.local: true in the subscription YAML. However, this configuration might never create the Ansible instance, because spec.placement.local: true runs the subscription in standalone mode. You need to create the subscription in hub mode.
1. Create a placement rule that deploys to local-cluster, where local-cluster: "true" refers to your hub cluster. See the first sample after this procedure.
2. Reference that placement rule in your subscription. See the second sample after this procedure.
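The following are minimal sketches of the two resources, assuming the apps.open-cluster-management.io PlacementRule and Subscription APIs; the resource names, namespace, and channel reference are illustrative assumptions:
apiVersion: apps.open-cluster-management.io/v1
kind: PlacementRule
metadata:
  name: hub-placement
  namespace: my-app-ns
spec:
  clusterSelector:
    matchLabels:
      local-cluster: "true"
---
apiVersion: apps.open-cluster-management.io/v1
kind: Subscription
metadata:
  name: my-app-subscription
  namespace: my-app-ns
spec:
  channel: my-channel-ns/my-channel
  placement:
    placementRef:
      name: hub-placement
      kind: PlacementRule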
After applying both, you should see the Ansible instance created in your hub cluster.
1.3.5.15. Application not deployed after an updated placement rule
If applications are not deploying after an update to a placement rule, verify that the application-manager pod is running. The application-manager is the subscription container that needs to run on managed clusters.
You can run oc get pods -n open-cluster-management-agent-addon | grep application-manager to verify.
You can also search for kind:pod cluster:yourcluster in the console and see if the application-manager is running.
If you cannot verify, attempt to import the cluster again and verify again.
1.3.5.16. Subscription operator does not create an SCC
Learn about Red Hat OpenShift Container Platform SCC at Managing security context constraints, which is an additional configuration required on the managed cluster.
Different deployments have different security contexts and different service accounts. The subscription operator cannot create an SCC CR automatically. Administrators control permissions for pods. A Security Context Constraints (SCC) CR is required to enable appropriate permissions for the relative service accounts to create pods in the non-default namespace. To manually create an SCC CR in your namespace, complete the following steps:
1. Find the service account that is defined in the deployments. For example, see the following nginx deployments:
nginx-ingress-52edb
nginx-ingress-52edb-backend
2. Create an SCC CR in your namespace to assign the required permissions to the service account or accounts. See the example after this procedure, where kind: SecurityContextConstraints is added.
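The following is a minimal sketch of an SCC CR for those service accounts, assuming the standard security.openshift.io/v1 API; the SCC name, namespace placeholder, and strategy values are illustrative assumptions and must be adapted to the permissions that your pods actually need:
apiVersion: security.openshift.io/v1
kind: SecurityContextConstraints
metadata:
  name: ingress-nginx-scc
allowHostDirVolumePlugin: false
allowHostIPC: false
allowHostNetwork: false
allowHostPID: false
allowHostPorts: false
allowPrivilegedContainer: false
readOnlyRootFilesystem: false
runAsUser:
  type: RunAsAny
seLinuxContext:
  type: RunAsAny
fsGroup:
  type: RunAsAny
supplementalGroups:
  type: RunAsAny
volumes:
  - '*'
users:
  - system:serviceaccount:<namespace>:nginx-ingress-52edb
  - system:serviceaccount:<namespace>:nginx-ingress-52edb-backend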
1.3.5.17. Application channels require unique namespaces
Creating more than one channel in the same namespace can cause errors with the hub cluster.
For instance, namespace charts-v1 is used by the installer as a Helm type channel, so do not create any additional channels in charts-v1. Ensure that you create your channel in a unique namespace. All channels need an individual namespace, except GitHub channels, which can share a namespace with another GitHub channel.
1.3.5.18. Ansible Automation Platform job fail
Ansible jobs fail to run when you select an incompatible option. Ansible Automation Platform only works when the -cluster-scoped channel options are chosen. This affects all components that need to perform Ansible jobs.
1.3.5.19. Ansible Automation Platform operator access Ansible Automation Platform outside of a proxy
The Red Hat Ansible Automation Platform operator cannot access Ansible Automation Platform outside of a proxy-enabled OpenShift Container Platform cluster. To resolve, you can install the Ansible Automation Platform within the proxy. See install steps that are provided by Ansible Automation Platform.
1.3.5.20. Application name requirements
An application name cannot exceed 37 characters. The application deployment displays the following error if the name exceeds this length:
status:
  phase: PropagationFailed
  reason: 'Deployable.apps.open-cluster-management.io "_long_lengthy_name_" is invalid: metadata.labels: Invalid value: "_long_lengthy_name_": must be no more than 63 characters/n'
1.3.5.21. Application console table limitations
See the following limitations to various Application tables in the console:
- From the Applications table on the Overview page and the Subscriptions table on the Advanced configuration page, the Clusters column displays a count of clusters where application resources are deployed. Since applications are defined by resources on the local cluster, the local cluster is included in the search results, whether actual application resources are deployed on the local cluster or not.
- From the Advanced configuration table for Subscriptions, the Applications column displays the total number of applications that use that subscription, but if the subscription deploys child applications, those are included in the search result, as well.
- From the Advanced configuration table for Channels, the Subscriptions column displays the total number of subscriptions on the local cluster that use that channel, but this does not include subscriptions that are deployed by other subscriptions, which are included in the search result.
1.3.5.22. No Application console topology filtering
The console and Topology for applications changed for 2.14. There is no filtering capability from the console Topology page.
1.3.5.23. Allow and deny list does not work in Object storage applications
The allow and deny list feature does not work in Object storage application subscriptions.
1.3.5.24. ClusterPermission resource fails when creating many role bindings
If you create a ClusterPermission resource that contains many role bindings within the same namespace and do not set the optional name field, it fails. This error occurs because the same name is used for the role bindings in the namespace.
To work around this issue, set the optional name field for each role binding with a unique name.
1.3.6. Observability known issues
Review the known issues for observability. The following list contains known issues for this release, or known issues that continued from the previous release.
For your Red Hat OpenShift Container Platform cluster, see OpenShift Container Platform known issues.
For more about deprecations and removals, see Deprecations and removals for Red Hat Advanced Cluster Management.
1.3.6.1. Retention change causes data loss
The default retention for all resolution levels, such as retentionResolutionRaw, retentionResolution5m, or retentionResolution1h, is 365 days (365d). This 365d default retention means that the default retention for a 1 hour resolution has decreased from indefinite, 0d, to 365d. This retention change might cause you to lose data. If you did not set an explicit value for the resolution retention in your MultiClusterObservability spec.advanced.retentionConfig parameter, you might lose data.
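The following is a minimal sketch of setting explicit retention values in the MultiClusterObservability resource, using the field names mentioned above; the v1beta2 API version, resource name, and specific retention values are assumptions to verify against your environment:
apiVersion: observability.open-cluster-management.io/v1beta2
kind: MultiClusterObservability
metadata:
  name: observability
spec:
  advanced:
    retentionConfig:
      retentionResolutionRaw: 365d
      retentionResolution5m: 365d
      retentionResolution1h: 365d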
For more information, see Adding advanced configuration for retention.
1.3.6.2. Observatorium API gateway pods in a restored hub cluster might have stale tenant data
The Observatorium API gateway pods in a restored hub cluster might contain stale tenant data after a backup and restore procedure because of a Kubernetes limitation. See Mounted ConfigMaps are updated automatically for more about the limitation.
As a result, the Observatorium API and Thanos gateway rejects metrics from collectors, and the Red Hat Advanced Cluster Management Grafana dashboards do not display data.
See the following errors from the Observatorium API gateway pod logs:
level=error name=observatorium caller=logchannel.go:129 msg="failed to forward metrics" returncode="500 Internal Server Error" response="no matching hashring to handle tenant\n"
Thanos receive pods log the following errors:
caller=handler.go:551 level=error component=receive component=receive-handler tenant=xxxx err="no matching hashring to handle tenant" msg="internal server error"
See the following procedure to resolve this issue:
1. Scale down the observability-observatorium-api deployment instances from N to 0.
2. Scale up the observability-observatorium-api deployment instances from 0 to N.
Note: N = 2 by default, but might be greater than 2 in some custom configuration environments.
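The following is a hedged example of the scale commands; the open-cluster-management-observability namespace is an assumption based on the default observability installation and might differ in your environment:
oc scale deployment observability-observatorium-api -n open-cluster-management-observability --replicas=0
oc scale deployment observability-observatorium-api -n open-cluster-management-observability --replicas=<N>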
This restarts all Observatorium API gateway pods with the correct tenant information, and the data from collectors starts displaying in Grafana within 5-10 minutes.
1.3.6.3. Permission to add PrometheusRules and ServiceMonitors in openshift-monitoring namespace denied
You must use a label in your defined Red Hat Advanced Cluster Management hub cluster namespace. The openshift.io/cluster-monitoring: "true" label causes the Cluster Monitoring Operator to scrape the namespace for metrics.
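The following is a hedged example of adding the label with the oc CLI; the open-cluster-management namespace is an assumption for the default hub cluster installation namespace:
oc label namespace open-cluster-management openshift.io/cluster-monitoring="true"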
When Red Hat Advanced Cluster Management is deployed or an installation is upgraded, the Red Hat Advanced Cluster Management Observability ServiceMonitors and PrometheusRule resources are no longer present in the openshift-monitoring namespace.
1.3.6.4. Lack of support for proxy settings
The Prometheus AdditionalAlertManagerConfig resource of the observability add-on does not support proxy settings. You must disable the observability alert forwarding feature.
Complete the following steps to disable alert forwarding:
1. Go to the MultiClusterObservability resource.
2. Update the mco-disabling-alerting parameter value to true.
The HTTPS proxy with a self-signed CA certificate is not supported.
1.3.6.5. Duplicate local-clusters on Service-level Overview dashboard
When various hub clusters deploy Red Hat Advanced Cluster Management observability using the same S3 storage, duplicate local-clusters can be detected and displayed within the Kubernetes/Service-Level Overview/API Server dashboard. The duplicate clusters affect the results within the following panels: Top Clusters, Number of clusters that has exceeded the SLO, and Number of clusters that are meeting the SLO. The local-clusters are unique clusters associated with the shared S3 storage. To prevent multiple local-clusters from displaying within the dashboard, it is recommended for each unique hub cluster to deploy observability with an S3 bucket specifically for the hub cluster.
1.3.6.6. Observability endpoint operator fails to pull image
The observability endpoint operator fails if you create a pull-secret to deploy the MultiClusterObservability custom resource (CR) and there is no pull-secret in the open-cluster-management-observability namespace. When you import a new cluster, or import a Hive cluster that is created with Red Hat Advanced Cluster Management, you need to manually create a pull-image secret on the managed cluster.
For more information, see Enabling observability.
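For illustration, a pull secret is generally created with a command similar to the following sketch; the secret name multiclusterhub-operator-pull-secret, the placeholder path, and the target namespace are assumptions to verify against Enabling observability:
oc create secret generic multiclusterhub-operator-pull-secret \
  --from-file=.dockerconfigjson=<path-to-pull-secret> \
  --type=kubernetes.io/dockerconfigjson \
  -n <observability-namespace>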
1.3.6.7. There is no data from ROKS clusters
Red Hat Advanced Cluster Management observability does not display data from a ROKS cluster on some panels within built-in dashboards. This is because ROKS does not expose any API server metrics from the servers that it manages. The following Grafana dashboards contain panels that do not support ROKS clusters: Kubernetes/API server, Kubernetes/Compute Resources/Workload, and Kubernetes/Compute Resources/Namespace(Workload).
1.3.6.8. There is no etcd data from ROKS clusters
For ROKS clusters, Red Hat Advanced Cluster Management observability does not display data in the etcd panel of the dashboard.
1.3.6.9. Metrics are unavailable in the Grafana console
Annotation query failed in the Grafana console:
When you search for a specific annotation in the Grafana console, you might receive the following error message due to an expired token:
"Annotation Query Failed"
Refresh your browser and verify you are logged into your hub cluster.
Error in rbac-query-proxy pod:
Due to unauthorized access to the managedcluster resource, you might receive the following error when you query a cluster or project:
no project or cluster found
Check the role permissions and update appropriately. See Role-based access control for more information.
1.3.6.10. Prometheus data loss on managed clusters
By default, Prometheus on OpenShift uses ephemeral storage. Prometheus loses all metrics data whenever it is restarted.
When observability is enabled or disabled on OpenShift Container Platform managed clusters that are managed by Red Hat Advanced Cluster Management, the observability endpoint operator updates the cluster-monitoring-config ConfigMap by adding additional alertmanager configuration that restarts the local Prometheus automatically.
1.3.6.11. Error ingesting out-of-order samples
Observability receive pods report the following error message:
Error on ingesting out-of-order samples
The error message means that the time series data that a managed cluster sends during a metrics collection interval is older than the time series data that it sent in the previous collection interval. When this problem happens, the Thanos receivers discard the data, which might create a gap in the data shown in Grafana dashboards. If the error occurs frequently, it is recommended to increase the metrics collection interval to a higher value. For example, you can increase the interval to 60 seconds.
The problem is only noticed when the metrics collection interval is set to a lower value, such as 30 seconds. Note that this problem does not occur when the metrics collection interval is set to the default value of 300 seconds.
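A minimal sketch of raising the interval, assuming that the collection interval is exposed through the observabilityAddonSpec section of the MultiClusterObservability resource:
spec:
  observabilityAddonSpec:
    interval: 60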
1.3.6.12. Grafana deployment fails after upgrade
If you have a grafana-dev instance that was deployed in a version earlier than 2.6, and you upgrade the environment to 2.6, the grafana-dev instance does not work. You must delete the existing grafana-dev instance by running the following command:
./setup-grafana-dev.sh --clean
Recreate the instance with the following command:
./setup-grafana-dev.sh --deploy
1.3.6.13. Enabling disableHubSelfManagement causes empty list in Grafana dashboard
The Grafana dashboard shows an empty label list if the disableHubSelfManagement parameter is set to true in the multiclusterengine custom resource. You must set the parameter to false or remove the parameter to see the label list. See disableHubSelfManagement for more details.
1.3.6.13.1. Endpoint URL cannot have fully qualified domain names (FQDN)
When you use the FQDN or protocol for the endpoint parameter, your observability pods are not enabled. The following error message is displayed:
Endpoint url cannot have fully qualified paths
Enter the URL without the protocol. Your endpoint value must resemble the following URL for your secrets:
endpoint: example.com:443
1.3.6.13.2. Grafana downsampled data mismatch
When you attempt to query historical data and there is a discrepancy between the calculated step value and downsampled data, the result is empty. For example, if the calculated step value is 5m and the downsampled data is in a one-hour interval, data does not appear in Grafana.
This discrepancy occurs because a URL query parameter must be passed through the Thanos Query front-end data source. Afterwards, the URL query can perform additional queries for other downsampling levels when data is missing.
You must manually update the Thanos Query front-end data source configuration. Complete the following steps:
- Go to the Query front-end data source.
- To update your query parameters, click the Misc section.
- From the Custom query parameters field, select max_source_resolution=auto.
- To verify that the data is displayed, refresh your Grafana page.
Your query data appears in the Grafana dashboard.
1.3.6.14. Metrics collector does not detect proxy configuration
A proxy configuration in a managed cluster that you configure by using the addonDeploymentConfig is not detected by the metrics collector. As a workaround, you can enable the proxy by removing the managed cluster ManifestWork. Removing the ManifestWork forces the changes in the addonDeploymentConfig to be applied.
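As an illustrative sketch only, you might list the ManifestWork resources in the managed cluster namespace on the hub cluster and then delete the observability-related entry; the exact ManifestWork name varies by environment and is shown here as a placeholder:
oc get manifestwork -n <managed-cluster-namespace>
oc delete manifestwork <observability-manifestwork-name> -n <managed-cluster-namespace>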
1.3.6.15. Limitations when using custom managed cluster Observatorium API or Alertmanager URLs
Custom Observatorium API and Alertmanager URLs only support intermediate components with TLS passthrough. If both custom URLs are pointing to the same intermediate component, you must use separate sub-domains because OpenShift Container Platform routers do not support two separate route objects with the same host.
1.3.7. Governance known issues
Review the known issues for Governance. The following list contains known issues for this release, or known issues that continued from the previous release.
For your Red Hat OpenShift Container Platform cluster, see OpenShift Container Platform known issues.
For more about deprecations and removals, see Deprecations and removals for Red Hat Advanced Cluster Management.
1.3.7.1. The ConfigurationPolicy incorrectly processes objectSelector and namespaceSelector results
When you use both the objectSelector and the namespaceSelector fields in a ConfigurationPolicy resource, the objects that the objectSelector returns are applied to all the namespaces that the namespaceSelector returns. The ConfigurationPolicy incorrectly processes the results. To work around this issue, use the object-templates-raw field to iterate over the objects, as shown in the sketch after this paragraph.
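The following is a minimal, hypothetical sketch of that workaround; the env=production label selector and the ConfigMap contents are illustrative assumptions, and it also assumes that your version supports the optional label-selector argument of the lookup template function:
object-templates-raw: |
  {{- /* Iterate over the namespaces that match the assumed env=production label selector */ -}}
  {{- range (lookup "v1" "Namespace" "" "" "env=production").items }}
  - complianceType: musthave
    objectDefinition:
      apiVersion: v1
      kind: ConfigMap
      metadata:
        name: example-config
        namespace: {{ .metadata.name }}
      data:
        example: value
  {{- end }}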
1.3.7.2. Configuration policy listed compliant when namespace is stuck in Terminating state
When you have a configuration policy that is configured with mustnothave for the complianceType parameter and enforce for the remediationAction parameter, the policy is listed as compliant when a deletion request is made to the Kubernetes API. Therefore, the Kubernetes object can be stuck in a Terminating state while the policy is listed as compliant.
1.3.7.3. Operators deployed with policies do not support ARM
While installation into an ARM environment is supported, operators that are deployed with policies might not support ARM environments. The following policies that install operators do not support ARM environments:
- Red Hat Advanced Cluster Management policy for the Quay Container Security Operator
- Red Hat Advanced Cluster Management policy for the Compliance Operator
1.3.7.4. ConfigurationPolicy custom resource definition is stuck in terminating
When you remove the config-policy-controller add-on from a managed cluster by disabling the policy controller in the KlusterletAddonConfig or by detaching the cluster, the ConfigurationPolicy custom resource definition might get stuck in a terminating state. If the ConfigurationPolicy custom resource definition is stuck in a terminating state, new policies might not be added to the cluster if the add-on is reinstalled later. You can also receive the following error:
template-error; Failed to create policy template: create not allowed while custom resource definition is terminating
Use the following command to check if the custom resource definition is stuck:
oc get crd configurationpolicies.policy.open-cluster-management.io -o=jsonpath='{.metadata.deletionTimestamp}'
If a deletion timestamp is on the resource, the custom resource definition is stuck. To resolve the issue, remove all finalizers from configuration policies that remain on the cluster. Use the following command on the managed cluster and replace <cluster-namespace> with the managed cluster namespace:
oc get configurationpolicy -n <cluster-namespace> -o name | xargs oc patch -n <cluster-namespace> --type=merge -p '{"metadata":{"finalizers": []}}'
The configuration policy resources are automatically removed from the cluster and the custom resource definition exits its terminating state. If the add-on has already been reinstalled, the custom resource definition is recreated automatically without a deletion timestamp.
1.3.7.5. Policy status shows repeated updates when enforced
If a policy is set to remediationAction: enforce and is repeatedly updated, the Red Hat Advanced Cluster Management console shows repeated violations with successful updates. Repeated updates produce multiple policy events, which can cause the governance-policy-framework-addon pod to run out of memory and crash. See the following two possible causes and solutions for the error:
- Another controller or process is also updating the object with different values. To resolve the issue, disable the policy and compare the differences between the objectDefinition in the policy and the object on the managed cluster. If the values are different, another controller or process might be updating them. Check the metadata of the object to help identify why the values are different.
- The objectDefinition in the ConfigurationPolicy does not match because of Kubernetes processing the object when the policy is applied. To resolve the issue, disable the policy and compare the differences between the objectDefinition in the policy and the object on the managed cluster. If the keys are different or missing, Kubernetes might have processed the keys before applying them to the object, such as removing keys containing default or empty values.
1.3.7.6. Duplicate policy template names create inconsistent results
When you create a policy with identical policy template names, you receive inconsistent results that are not detected, and you might not know the cause. For example, defining a policy with multiple configuration policies named create-pod causes inconsistent results. Best practice: Avoid using duplicate names for policy templates.
1.3.7.7. Kyverno policies no longer report a status for the latest version
Kyverno policies generated by the Policy Generator report the following message in your Red Hat Advanced Cluster Management cluster:
violation - couldn't find mapping resource with kind ClusterPolicyReport, please check if you have CRD deployed;
violation - couldn't find mapping resource with kind PolicyReport, please check if you have CRD deployed
The cause is that the PolicyReport API version is incorrect in the generator and does not match what Kyverno has deployed.
1.3.8. Known issues for networking
Review the known issues for Submariner. The following list contains known issues for this release, or known issues that continued from the previous release.
For your Red Hat OpenShift Container Platform cluster, see OpenShift Container Platform known issues.
For more about deprecations and removals, see Deprecations and removals for Red Hat Advanced Cluster Management.
1.3.8.1. Submariner known issues
See the following known issues and limitations that might occur while using networking features.
1.3.8.1.1. Source IP not retained for applications on OpenShift Container Platform with OVN-Kubernetes
If you are using OpenShift Container Platform versions from 4.18 up to, but not including, 4.19.5 with OVN-Kubernetes for Submariner, the source IP is not retained when packets reach the destination pod. As a result, applications that rely on the source IP, such as NetworkPolicy, might not work correctly.
1.3.8.1.2. Without ClusterManagementAddon submariner add-on fails
For versions 2.8 and earlier, when you install Red Hat Advanced Cluster Management, you also deploy the submariner-addon component with the Operator Lifecycle Manager. If you did not create a MultiClusterHub custom resource, the submariner-addon pod sends an error and prevents the operator from installing.
The following notification occurs because the ClusterManagementAddon custom resource definition is missing:
graceful termination failed, controllers failed with error: the server could not find the requested resource (post clustermanagementaddons.addon.open-cluster-management.io)
The ClusterManagementAddon resource is created by the cluster-manager deployment; however, this deployment becomes available only when the MultiClusterEngine components are installed on the cluster.
If a MultiClusterEngine resource is not already available on the cluster when the MultiClusterHub custom resource is created, the MultiClusterHub operator deploys the MultiClusterEngine instance and the operator that is required, which resolves the previous error.
1.3.8.1.3. Submariner add-on resources not cleaned up properly when managed clusters are imported
If the submariner-addon component is set to false within the MultiClusterHub (MCH) operator, the submariner-addon finalizers are not cleaned up properly for the managed cluster resources. Because the finalizers are not cleaned up properly, the submariner-addon component cannot be disabled within the hub cluster.
1.3.8.1.4. Submariner install plan limitation
The Submariner install plan does not follow the overall install plan settings. Therefore, the operator management screen cannot control the Submariner install plan. By default, Submariner install plans are applied automatically, and the Submariner addon is always updated to the latest available version corresponding to the installed Red Hat Advanced Cluster Management version. To change this behavior, you must use a customized Submariner subscription.
1.3.8.1.5. Limited headless services support
Service discovery is not supported for headless services without selectors when using Globalnet.
1.3.8.1.6. Deployments that use VXLAN when NAT is enabled are not supported
Only non-NAT deployments support Submariner deployments with the VXLAN cable driver.
1.3.8.1.7. Self-signed certificates might prevent connection to broker
Self-signed certificates on the broker might prevent joined clusters from connecting to the broker. The connection fails with certificate validation errors. You can disable broker certificate validation by setting InsecureBrokerConnection to true in the relevant SubmarinerConfig object. See the following example:
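A minimal sketch follows; the submarineraddon.open-cluster-management.io/v1alpha1 API group, the resource name, and the camelCase spec field insecureBrokerConnection are assumptions to verify against your environment:
apiVersion: submarineraddon.open-cluster-management.io/v1alpha1
kind: SubmarinerConfig
metadata:
  name: submariner
  namespace: <managed-cluster-namespace>
spec:
  insecureBrokerConnection: true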
1.3.8.1.8. Submariner only supports OpenShift SDN or OVN Kubernetes
Submariner only supports Red Hat OpenShift Container Platform clusters that use the OpenShift SDN or the OVN-Kubernetes Container Network Interface (CNI) network provider.
1.3.8.1.9. Command limitation on Microsoft Azure clusters
The subctl diagnose firewall inter-cluster command does not work on Microsoft Azure clusters.
1.3.8.1.10. Automatic upgrade not working with custom CatalogSource or Subscription
Submariner is automatically upgraded when Red Hat Advanced Cluster Management for Kubernetes is upgraded. The automatic upgrade might fail if you are using a custom CatalogSource or Subscription.
To make sure automatic upgrades work when installing Submariner on managed clusters, you must set the spec.subscriptionConfig.channel field to stable-0.15 in the SubmarinerConfig custom resource for each managed cluster.
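For illustration, a SubmarinerConfig that pins the channel might resemble the following sketch; the API group and resource name are assumptions, while the spec.subscriptionConfig.channel field is taken from the guidance in this section:
apiVersion: submarineraddon.open-cluster-management.io/v1alpha1
kind: SubmarinerConfig
metadata:
  name: submariner
  namespace: <managed-cluster-namespace>
spec:
  subscriptionConfig:
    channel: stable-0.15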
1.3.8.1.11. Submariner conflicts with IPsec-enabled OVN-Kubernetes deployments
IPsec tunnels that are created by IPsec-enabled OVN-Kubernetes deployments might conflict with IPsec tunnels that are created by Submariner. Do not use OVN-Kubernetes in IPsec mode with Submariner.
1.3.8.1.12. Uninstall Submariner before removing ManagedCluster from a ManagedClusterSet
If you remove a cluster from a ClusterSet, or move a cluster to a different ClusterSet, the Submariner installation is no longer valid.
You must uninstall Submariner before moving or removing a ManagedCluster from a ManagedClusterSet. If you do not uninstall Submariner, you can no longer uninstall or reinstall Submariner, and Submariner stops working on your ManagedCluster.
1.3.9. Multicluster global hub Operator known issues
Review the known issues for the multicluster global hub Operator. The following list contains known issues for this release, or known issues that continued from the previous release. For your OpenShift Container Platform cluster, see OpenShift Container Platform known issues.
1.3.9.1. Detached hosted managed hub cluster recreates the addon
When a hosted managed hub cluster gets detached from the multicluster global hub cluster, the detach logic deletes the open-cluster-management-agent-addon namespace and all the addons within it. The local-cluster add-on manager finds the deleted addons and recreates them.
There is currently no workaround for this issue.
1.3.9.2. The detached managed hub cluster deletes and recreates the namespace and resources
If you import a managed hub cluster in the hosted mode and detach this managed hub cluster, then it deletes and recreates the open-cluster-management-agent-addon namespace. The detached managed hub cluster also deletes and recreates all the related addon resources within this namespace.
There is currently no workaround for this issue.
1.3.9.3. Kafka operator keeps restarting
In the Federal Information Processing Standard (FIPS) environment, the Kafka operator keeps restarting because of the out-of-memory (OOM) state. To fix this issue, set the resource limit to at least 512M. For detailed steps on how to set this limit, see the AMQ Streams documentation.
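One possible place to raise the limit, shown only as a hedged sketch and assuming that the Kafka (AMQ Streams) operator is installed through an OLM Subscription in the multicluster-global-hub namespace, is the config section of that Subscription; confirm the supported procedure in the AMQ Streams documentation before applying it:
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: amq-streams                  # assumed Subscription name
  namespace: multicluster-global-hub
spec:
  channel: stable                    # assumed channel
  name: amq-streams
  source: redhat-operators
  sourceNamespace: openshift-marketplace
  config:
    resources:
      requests:
        memory: 512Mi
      limits:
        memory: 512Mi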
1.3.9.4. Backup and restore known issues
If your original multicluster global hub cluster crashes, the multicluster global hub loses its generated events and cron jobs. Even if you restore the new multicluster global hub cluster, the events and cron jobs are not restored. To work around this issue, you can manually run the cron job. See Running the summarization process manually.
1.3.9.5. Managed cluster displays but is not counted
A managed cluster that is not created successfully, meaning clusterclaim id.k8s.io does not exist in the managed cluster, is not counted in the policy compliance dashboards, but shows in the policy console.
1.3.9.6. Incomplete data display in Grafana dashboards
The Kafka message size limit might cause the Grafana dashboards to not display complete data. The default limit is 1 megabyte (MB). Verify whether the total size of your policy or managed cluster data exceeds 1 MB. If it does, increase the Kafka message size limit to meet your needs.
Modify the Kafka configuration by completing the following steps:
To go to the Kafka container resource, run the following command:
oc edit kafka kafka -n multicluster-global-hub
Add the following YAML to set the Kafka message size to 10 MB:
spec:
  kafka:
    config:
      message.max.bytes: 10485760
      replica.fetch.max.bytes: 10485760
Restart the multicluster global hub agent pod in each managed hub cluster. By default, the agent_installed_namespace is the multicluster-global-hub-agent. Run the following command:
oc delete pod multicluster-global-hub-agent-xxx -n agent_installed_namespace
1.3.9.7. The standard group filter cannot pass to the new page
In the Global Hub Policy Group Compliancy Overview hub dashboards, you can check one data point by clicking View Offending Policies for standard group, but after you click this link to go to the offending page, the standard group filter is not passed to the new page.
This is also an issue for the Cluster Group Compliancy Overview.
1.4. Deprecations and removals for Red Hat Advanced Cluster Management
Learn when parts of the product are deprecated or removed from Red Hat Advanced Cluster Management for Kubernetes. Consider the alternative actions in the Recommended action and details, which display in the tables for the current release and for two prior releases.
Deprecated: Red Hat Advanced Cluster Management 2.9 and earlier versions are no longer supported. The documentation might remain available, but without any errata releases for fixed issues or other updates.
Best practice: Upgrade to the most recent version.
Important: Cluster lifecycle components and features are within the multicluster engine operator, which is a software operator that enhances cluster fleet management. Release notes for multicluster engine operator-specific features are found in Release notes for Cluster lifecycle with multicluster engine operator.
1.4.1. Product deprecations and removals
A deprecated component, feature, or service is supported, but no longer recommended for use and might become obsolete in future releases. Consider the alternative actions in the Recommended action and details that are provided in the following table:
Product or category | Affected item | Version | Recommended action | More details and links |
---|---|---|---|---|
Documentation for APIs | The Red Hat Advanced Cluster Management API documentation | Red Hat Advanced Cluster Management 2.13 | View current and supported APIs from the console or the terminal instead of the documentation. | None |
Application management | Subscription | Red Hat Advanced Cluster Management 2.13 | Use Red Hat Advanced Cluster Management with OpenShift GitOps instead. | The deprecation is extended for five releases before removal. See GitOps overview for updated function. |
Applications and Governance | | 2.8 | Use | While |
multicluster global hub | PostgreSQL manual upgrade process | Red Hat Advanced Cluster Management 2.14 | None | Automatic database upgrades are supported for multicluster global hub version 1.5 and later. |
A removed item is typically a function that was deprecated in previous releases and is no longer available in the product. You must use alternatives for the removed function. Consider the alternative actions in the Recommended action and details that are provided in the following table:
Product or category | Affected item | Version | Recommended action | More details and links |
---|---|---|---|---|
Red Hat Advanced Cluster Management for Kubernetes Search service | Enabling virtual machine actions (Technology Preview) | 2.14 | Use fine-grained role-based access control instead. | |
Red Hat Advanced Cluster Management for Kubernetes console | Importing OpenShift Container Platform 3.11 clusters from the console | 2.14 | Upgrade to a supported version of OpenShift Container Platform. See the OpenShift Container Platform documentation. | The support for the deprecated custom resource definition |
Original Overview page | Red Hat Advanced Cluster Management for Kubernetes search console | 2.13 | Enable the Fleet view switch to view the new default Overview page. | The previous layout of the Red Hat Advanced Cluster Management Overview page is redesigned. |
Installer | | 2.9 | None | See Advanced Configuration for configuring install. If you upgrade your Red Hat Advanced Cluster Management for Kubernetes version and originally had a |
Governance | Policy compliance history API | 2.13 | Use the existing policy metrics to see the compliance status changes. You can also view the | For more information, see Policy controller advanced configuration. |
1.5. Red Hat Advanced Cluster Management platform considerations for GDPR readiness
1.5.1. Notice
This document is intended to help you in your preparations for General Data Protection Regulation (GDPR) readiness. It provides information about features of the Red Hat Advanced Cluster Management for Kubernetes platform that you can configure, and aspects of the product’s use, that you should consider to help your organization with GDPR readiness.
This information is not an exhaustive list, due to the many ways that clients can choose and configure features, and the large variety of ways that the product can be used in itself and with third-party clusters and systems.
Clients are responsible for ensuring their own compliance with various laws and regulations, including the European Union General Data Protection Regulation.
Clients are solely responsible for obtaining advice of competent legal counsel as to the identification and interpretation of any relevant laws and regulations that may affect the clients' business and any actions the clients may need to take to comply with such laws and regulations.
The products, services, and other capabilities described herein are not suitable for all client situations and may have restricted availability. Red Hat does not provide legal, accounting, or auditing advice or represent or warrant that its services or products will ensure that clients are in compliance with any law or regulation.
1.5.2. Table of Contents
1.5.3. GDPR
General Data Protection Regulation (GDPR) has been adopted by the European Union (EU) and applies from May 25, 2018.
1.5.3.1. Why is GDPR important?
GDPR establishes a stronger data protection regulatory framework for processing personal data of individuals. GDPR brings:
- New and enhanced rights for individuals
- Widened definition of personal data
- New obligations for processors
- Potential for significant financial penalties for non-compliance
- Compulsory data breach notification
1.5.3.2. Read more about GDPR
1.5.4. Product Configuration for GDPR
The following sections describe aspects of data management within the Red Hat Advanced Cluster Management for Kubernetes platform and provide information on capabilities to help clients with GDPR requirements.
1.5.5. Data Life Cycle
Red Hat Advanced Cluster Management for Kubernetes is an application platform for developing and managing on-premises, containerized applications. It is an integrated environment for managing containers that includes the container orchestrator Kubernetes, cluster lifecycle, application lifecycle, and security frameworks (governance, risk, and compliance).
As such, the Red Hat Advanced Cluster Management for Kubernetes platform deals primarily with technical data that is related to the configuration and management of the platform, some of which might be subject to GDPR. The Red Hat Advanced Cluster Management for Kubernetes platform also deals with information about users who manage the platform. This data will be described throughout this document for the awareness of clients responsible for meeting GDPR requirements.
This data is persisted on the platform on local or remote file systems as configuration files or in databases. Applications that are developed to run on the Red Hat Advanced Cluster Management for Kubernetes platform might deal with other forms of personal data subject to GDPR. The mechanisms that are used to protect and manage platform data are also available to applications that run on the platform. Additional mechanisms might be required to manage and protect personal data that is collected by applications run on the Red Hat Advanced Cluster Management for Kubernetes platform.
To best understand the Red Hat Advanced Cluster Management for Kubernetes platform and its data flows, you must understand how Kubernetes, Docker, and the Operator work. These open source components are fundamental to the Red Hat Advanced Cluster Management for Kubernetes platform. You use Kubernetes deployments to place instances of applications, which are built into Operators that reference Docker images. The Operator contains the details about your application, and the Docker images contain all the software packages that your applications need to run.
1.5.5.1. What types of data flow through Red Hat Advanced Cluster Management for Kubernetes platform
As a platform, Red Hat Advanced Cluster Management for Kubernetes deals with several categories of technical data that could be considered as personal data, such as an administrator user ID and password, service user IDs and passwords, IP addresses, and Kubernetes node names. The Red Hat Advanced Cluster Management for Kubernetes platform also deals with information about users who manage the platform. Applications that run on the platform might introduce other categories of personal data unknown to the platform.
Information on how this technical data is collected/created, stored, accessed, secured, logged, and deleted is described in later sections of this document.
1.5.5.2. Personal data used for online contact
Customers can submit online comments, feedback, and requests for information in a variety of ways, primarily:
- The public Slack community if there is a Slack channel
- The public comments or tickets on the product documentation
- The public conversations in a technical community
Typically, only the client name and email address are used, to enable personal replies for the subject of the contact, and the use of personal data conforms to the Red Hat Online Privacy Statement.
1.5.6. Data Collection
The Red Hat Advanced Cluster Management for Kubernetes platform does not collect sensitive personal data. It does create and manage technical data, such as an administrator user ID and password, service user IDs and passwords, IP addresses, and Kubernetes node names, which might be considered personal data. The Red Hat Advanced Cluster Management for Kubernetes platform also deals with information about users who manage the platform. All such information is only accessible by the system administrator through a management console with role-based access control or by the system administrator through login to a Red Hat Advanced Cluster Management for Kubernetes platform node.
Applications that run on the Red Hat Advanced Cluster Management for Kubernetes platform might collect personal data.
When you assess the use of the Red Hat Advanced Cluster Management for Kubernetes platform running containerized applications and your need to meet the requirements of GDPR, you must consider the types of personal data that are collected by the application and aspects of how that data is managed, such as:
- How is the data protected as it flows to and from the application? Is the data encrypted in transit?
- How is the data stored by the application? Is the data encrypted at rest?
- How are credentials that are used to access the application collected and stored?
- How are credentials that are used by the application to access data sources collected and stored?
- How is data collected by the application removed as needed?
This is not a definitive list of the types of data that are collected by the Red Hat Advanced Cluster Management for Kubernetes platform. It is provided as an example for consideration. If you have any questions about the types of data, contact Red Hat.
1.5.7. Data storage
The Red Hat Advanced Cluster Management for Kubernetes platform persists technical data that is related to configuration and management of the platform in stateful stores on local or remote file systems as configuration files or in databases. Consideration must be given to securing all data at rest. The Red Hat Advanced Cluster Management for Kubernetes platform supports encryption of data at rest in stateful stores that use dm-crypt.
The following items highlight the areas where data is stored, which you might want to consider for GDPR.
- Platform Configuration Data: The Red Hat Advanced Cluster Management for Kubernetes platform configuration can be customized by updating a configuration YAML file with properties for general settings, Kubernetes, logs, network, Docker, and other settings. This data is used as input to the Red Hat Advanced Cluster Management for Kubernetes platform installer for deploying one or more nodes. The properties also include an administrator user ID and password that are used for bootstrap.
- Kubernetes Configuration Data: Kubernetes cluster state data is stored in a distributed key-value store, etcd.
- Service authentication data, including user IDs and passwords: Credentials that are used by Red Hat Advanced Cluster Management for Kubernetes platform components for inter-component access are defined as Kubernetes Secrets. All Kubernetes resource definitions are persisted in the etcd key-value data store. Initial credentials values are defined in the platform configuration data as Kubernetes Secret configuration YAML files. For more information, see Secrets in the Kubernetes documentation.
1.5.8. Data access
Red Hat Advanced Cluster Management for Kubernetes platform data can be accessed through the following defined set of product interfaces.
- Web user interface (the console)
- Kubernetes kubectl CLI
- Red Hat Advanced Cluster Management for Kubernetes CLI
- oc CLI
These interfaces are designed to allow you to make administrative changes to your Red Hat Advanced Cluster Management for Kubernetes cluster. Administration access to Red Hat Advanced Cluster Management for Kubernetes can be secured and involves three logical, ordered stages when a request is made: authentication, role-mapping, and authorization.
1.5.8.1. Authentication
The Red Hat Advanced Cluster Management for Kubernetes platform authentication manager accepts user credentials from the console and forwards the credentials to the backend OIDC provider, which validates the user credentials against the enterprise directory. The OIDC provider then returns an authentication cookie (auth-cookie) with the content of a JSON Web Token (JWT) to the authentication manager. The JWT token persists information such as the user ID and email address, in addition to group membership at the time of the authentication request. This authentication cookie is then sent back to the console. The cookie is refreshed during the session. It is valid for 12 hours after you sign out of the console or close your web browser.
For all subsequent authentication requests made from the console, the front-end NGINX server decodes the available authentication cookie in the request and validates the request by calling the authentication manager.
The Red Hat Advanced Cluster Management for Kubernetes platform CLI requires the user to provide credentials to log in.
The kubectl and oc CLIs also require credentials to access the cluster. These credentials can be obtained from the management console and expire after 12 hours. Access through service accounts is supported.
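For example, after you copy a login token from the management console, CLI access generally looks like the following command, shown with placeholder values:
oc login --token=<token> --server=<api-server-url>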
1.5.8.2. Role Mapping
Red Hat Advanced Cluster Management for Kubernetes platform supports role-based access control (RBAC). In the role mapping stage, the user name that is provided in the authentication stage is mapped to a user or group role. The roles are used when authorizing which administrative activities can be carried out by the authenticated user.
1.5.8.3. Authorization
Red Hat Advanced Cluster Management for Kubernetes platform roles control access to cluster configuration actions, to catalog and Helm resources, and to Kubernetes resources. Several IAM (Identity and Access Management) roles are provided, including Cluster Administrator, Administrator, Operator, Editor, Viewer. A role is assigned to users or user groups when you add them to a team. Team access to resources can be controlled by namespace.
1.5.8.4. Pod Security
Pod security policies are used to set up cluster-level control over what a pod can do or what it can access.
1.5.9. Data Processing
Users of Red Hat Advanced Cluster Management for Kubernetes can control the way that technical data that is related to configuration and management is processed and secured through system configuration.
Role-based access control (RBAC) controls what data and functions can be accessed by users.
Data-in-transit is protected by using TLS. HTTPS (TLS underlying) is used for secure data transfer between user client and back end services. Users can specify the root certificate to use during installation.
Data-at-rest protection is supported by using dm-crypt to encrypt data.
These same platform mechanisms that are used to manage and secure Red Hat Advanced Cluster Management for Kubernetes platform technical data can be used to manage and secure personal data for user-developed or user-provided applications. Clients can develop their own capabilities to implement further controls.
1.5.10. Data Deletion
Red Hat Advanced Cluster Management for Kubernetes platform provides commands, application programming interfaces (APIs), and user interface actions to delete data that is created or collected by the product. These functions enable users to delete technical data, such as service user IDs and passwords, IP addresses, Kubernetes node names, or any other platform configuration data, as well as information about users who manage the platform.
Areas of Red Hat Advanced Cluster Management for Kubernetes platform to consider for support of data deletion:
- All technical data that is related to platform configuration can be deleted through the management console or the Kubernetes kubectl API.
Areas of Red Hat Advanced Cluster Management for Kubernetes platform to consider for support of account data deletion:
- All technical data that is related to platform configuration can be deleted through the Red Hat Advanced Cluster Management for Kubernetes or the Kubernetes kubectl API.
The function to remove user ID and password data that is managed through an enterprise LDAP directory is provided by the LDAP product that is used with the Red Hat Advanced Cluster Management for Kubernetes platform.
1.5.11. Capability for Restricting Use of Personal Data
Using the facilities summarized in this document, Red Hat Advanced Cluster Management for Kubernetes platform enables an end user to restrict usage of any technical data within the platform that is considered personal data.
Under GDPR, users have rights to access, modify, and restrict processing. Refer to other sections of this document to control the following:
Right to access
- Red Hat Advanced Cluster Management for Kubernetes platform administrators can use Red Hat Advanced Cluster Management for Kubernetes platform features to provide individuals access to their data.
- Red Hat Advanced Cluster Management for Kubernetes platform administrators can use Red Hat Advanced Cluster Management for Kubernetes platform features to provide individuals information about what data Red Hat Advanced Cluster Management for Kubernetes platform holds about the individual.
Right to modify
- Red Hat Advanced Cluster Management for Kubernetes platform administrators can use Red Hat Advanced Cluster Management for Kubernetes platform features to allow an individual to modify or correct their data.
- Red Hat Advanced Cluster Management for Kubernetes platform administrators can use Red Hat Advanced Cluster Management for Kubernetes platform features to correct an individual’s data for them.
Right to restrict processing
- Red Hat Advanced Cluster Management for Kubernetes platform administrators can use Red Hat Advanced Cluster Management for Kubernetes platform features to stop processing an individual’s data.
1.5.12. Appendix
As a platform, Red Hat Advanced Cluster Management for Kubernetes deals with several categories of technical data that could be considered as personal data, such as an administrator user ID and password, service user IDs and passwords, IP addresses, and Kubernetes node names. Red Hat Advanced Cluster Management for Kubernetes platform also deals with information about users who manage the platform. Applications that run on the platform might introduce other categories of personal data that are unknown to the platform.
This appendix includes details on data that is logged by the platform services.
1.6. FIPS readiness
Red Hat Advanced Cluster Management for Kubernetes is designed for Federal Information Processing Standards (FIPS). When running on Red Hat OpenShift Container Platform in FIPS mode, OpenShift Container Platform uses the Red Hat Enterprise Linux cryptographic libraries submitted to NIST for FIPS Validation on only the architectures that are supported by OpenShift Container Platform.
For more information about the NIST validation program, see Cryptographic Module Validation Program.
For the latest NIST status for the individual versions of the RHEL cryptographic libraries submitted for validation, see Compliance Activities and Government Standards.
If you plan to manage clusters with FIPS enabled, you must install Red Hat Advanced Cluster Management on an OpenShift Container Platform cluster that is configured to operate in FIPS mode. The hub cluster must be in FIPS mode because cryptography that is created on the hub cluster is used on managed clusters.
To enable FIPS mode on your managed clusters, set fips: true when you provision your OpenShift Container Platform managed cluster. You cannot enable FIPS after you provision your cluster.
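For example, when you create the cluster through installer-provisioned infrastructure, FIPS mode is requested in the install-config.yaml file; the following excerpt is a minimal sketch with placeholder values:
apiVersion: v1
baseDomain: example.com
metadata:
  name: fips-managed-cluster
fips: true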
For more information, see Do you need extra security for your cluster? in the OpenShift Container Platform documentation.
1.6.1. FIPS readiness limitations
Read the following limitations with Red Hat Advanced Cluster Management components and features for FIPS readiness.
- Persistent Volume Claim (PVC) and S3 storage that is used by the search and observability components must be encrypted when you configure the provided storage. Red Hat Advanced Cluster Management does not provide storage encryption; see the OpenShift Container Platform documentation, Configuring persistent storage.
When you provision managed clusters using the Red Hat Advanced Cluster Management console, select the following checkbox in the Cluster details section of the managed cluster creation to enable the FIPS standards:
FIPS with information text: Use the Federal Information Processing Standards (FIPS) modules provided with Red Hat Enterprise Linux CoreOS instead of the default Kubernetes cryptography suite file before you deploy the new managed cluster.
- The Red Hat Edge Manager (Technology Preview) component that is integrated with Red Hat Advanced Cluster Management is not developed for FIPS readiness.
1.7. Observability support
- Red Hat Advanced Cluster Management is tested with and fully supported by Red Hat OpenShift Data Foundation, formerly Red Hat OpenShift Container Storage.
- Red Hat Advanced Cluster Management supports the function of the multicluster observability operator on user-provided third-party object storage that is S3 API compatible. The observability service uses Thanos supported, stable object stores.
- Red Hat Advanced Cluster Management support includes reasonable efforts to identify root causes. If you open a support ticket and the root cause is the S3-compatible object storage that you provided, then you must open an issue by using the customer support channels.