Chapter 1. multicluster global hub
The multicluster global hub is a set of components that enable you to import one or more hub clusters and manage them from a single hub cluster.
After importing the hub clusters as managed hub clusters, you can use multicluster global hub to complete the following tasks across all of the managed hub clusters:
- Report the policy compliance status and trend
- Inventory all managed hubs and managed clusters on the overview page
- Detect and alert in cases of irregular policy behavior
The multicluster global hub is useful when a single hub cluster cannot manage the large number of clusters in a high-scale environment. When this happens, you divide the clusters into smaller groups of clusters and configure a hub cluster for each group.
It can be inconvenient to view data across multiple hub clusters for the managed clusters that each hub cluster manages. The multicluster global hub provides an easier way to view information from multiple hubs by designating multiple hub clusters as managed hub clusters. The multicluster global hub cluster manages the other hub clusters and gathers summarized information from the managed hub clusters.
Enable the Observability service on your multicluster global hub to view the health and utilization of your managed hub clusters in Grafana dashboards. View metrics from your multicluster global hub and hub cluster. For more information about Observability, see Observability service documentation.
To learn about how you can use the multicluster global hub, see the following sections:
- multicluster global hub architecture
- multicluster global hub requirements
- Installing Multicluster Global Hub in a connected environment
- Installing Multicluster Global Hub in a disconnected environment
- Installing multicluster global hub on an existing Red Hat Advanced Cluster Management hub cluster
- Integrating existing components
- Importing a managed hub cluster in the default mode
- Importing a managed hub cluster in the hosted mode
- Accessing the Grafana data
- Grafana alerts (Technology Preview)
- Configuring the cron jobs
- Running the summarization process manually
- Backup for multicluster global hub (Technology Preview)
- multicluster global hub search (Technology Preview)
- Migrating managed clusters
- Recovering data from the upgraded built-in PostgreSQL (Deprecated)
1.1. multicluster global hub architecture
The multicluster global hub consists of the following components that are used to access and manage your hub clusters:
- A server component called the global hub cluster where the management tools and the console run
- A client component that is installed on Red Hat Advanced Cluster Management, named the managed hub, which can be managed by the global hub cluster. The managed hub also manages other clusters. You do not have to use a dedicated cluster for your multicluster global hub cluster.
Learn more about the architecture in the following sections, which describe the high-level multicluster terms and components:
1.1.1. The multicluster global hub operator
The multicluster global hub operator contains the components of multicluster global hub. The operator deploys all of the required components for global multicluster management. The components include multicluster-global-hub-manager, multicluster-global-hub-grafana, and provided versions of Kafka and PostgreSQL on the multicluster global hub cluster, and multicluster-global-hub-agent on the managed hub clusters.
1.1.2. The multicluster global hub manager
The multicluster global hub manager persists data from the Kafka transport into the PostgreSQL database. The manager also posts data to the Kafka transport so that it can be synchronized with the data on the managed hub clusters.
1.1.3. The multicluster global hub agent
The multicluster global hub agent runs on the managed hub clusters. It synchronizes the data between the multicluster global hub cluster and the managed hub clusters. For example, the agent synchronizes the information of the managed clusters from the managed hub clusters to the multicluster global hub cluster and synchronizes the policy or application from the multicluster global hub cluster to the managed hub clusters.
1.1.4. The multicluster global hub visualizations
Grafana runs on the multicluster global hub cluster as the main service for multicluster global hub visualizations. The PostgreSQL data collected by the Global Hub Manager is its default DataSource. Because the service is exposed through the route called multicluster-global-hub-grafana, you can access the multicluster global hub Grafana dashboards from the console.
1.2. multicluster global hub requirements
Learn about the multicluster global hub requirements and review the components and environments that are supported by multicluster global hub.
Required access: Cluster administrator
OpenShift Container Platform Dedicated environment required access: You must have cluster-admin permissions. By default, the dedicated-admin role does not have the required permissions to create namespaces in the OpenShift Container Platform Dedicated environment.
See the following sections:
1.2.1. Networking requirements
See the following networking requirements:
- The managed hub cluster is also a multicluster global hub managed cluster in Red Hat Advanced Cluster Management. You must configure the network component in Red Hat Advanced Cluster Management. For Red Hat Advanced Cluster Management networking details, see Networking.
To review the multicluster global hub network information, see the following table:
| Direction | Protocol | Connection | Port that is specified | Source address | Destination address |
| --- | --- | --- | --- | --- | --- |
| Inbound from the browser | HTTPS | Access the Grafana dashboard | 443 | Browser | Grafana route IP address |
| Outbound to Kafka cluster | HTTPS | The multicluster global hub manager needs to receive data from the Kafka cluster | 443 | multicluster-global-hub-manager-xxx pod | Kafka route host |
| Outbound to PostgreSQL database | HTTPS | The multicluster global hub needs to contribute data to the PostgreSQL database | 443 | multicluster-global-hub-manager-xxx pod | PostgreSQL database IP address |
To review the managed hub network information, see the following table:
| Direction | Protocol | Connection | Port that is specified | Source address | Destination address |
| --- | --- | --- | --- | --- | --- |
| Outbound to Kafka cluster | HTTPS | The multicluster global hub agent needs to sync cluster and policy information to the Kafka cluster | 443 | multicluster-global-hub-agent pod | Kafka route host |
- For sizing guidelines, see Sizing your Red Hat Advanced Cluster Management cluster.
1.2.2. Supported components
See the following supported components:
- The multicluster global hub and OpenShift Container Platform console share an integrated console, so they support the same browser. For information about supported browsers and versions, see Accessing the web console in the Red Hat OpenShift Container Platform documentation
See the following table of supported platforms for the multicluster global hub cluster:
| Platform | Supported for global hub cluster | Supported for managed hub cluster |
| --- | --- | --- |
| Red Hat Advanced Cluster Management 2.15, and later 2.15.x releases | Yes | Yes |
| Red Hat Advanced Cluster Management 2.14, and later 2.14.x releases | Yes | Yes |
| Red Hat Advanced Cluster Management 2.13, and later 2.13.x releases | Yes | Yes |
| Red Hat Advanced Cluster Management on Arm | Yes | Yes |
| Red Hat Advanced Cluster Management on IBM Z | Yes | Yes |
| Red Hat Advanced Cluster Management on IBM Power Systems | Yes | Yes |
multicluster global hub supports middleware, such as Kafka, PostgreSQL, and Grafana. See the following table for a list of supported middleware and built-in versions:
| Middleware | Built-in version |
| --- | --- |
| amq-streams | 3.0.x |
| Kafka | 4.0.0 |
| PostgreSQL | 16 |
| Grafana | 11.1.0 |
1.2.3. Additional resources
1.3. Installing multicluster global hub in a connected environment
The multicluster global hub is installed through Operator Lifecycle Manager, which manages the installation, upgrade, and removal of the components that comprise the operator.
Required access: Cluster administrator
1.3.1. Prerequisites
- For the OpenShift Container Platform Dedicated environment, you must have `cluster-admin` permissions to access the environment. By default, the `dedicated-admin` role does not have the required permissions to create namespaces in the OpenShift Container Platform Dedicated environment.
- You must install and configure Red Hat Advanced Cluster Management for Kubernetes. For more details, see Installing and upgrading.
- You must configure the Red Hat Advanced Cluster Management network. The managed hub cluster is also a managed cluster of multicluster global hub in Red Hat Advanced Cluster Management. For more details, see Hub cluster network configuration.
1.3.1.1. Installing multicluster global hub by using the console
To install the multicluster global hub operator in a connected environment by using the OpenShift Container Platform console, complete the following steps:
- Log in to the OpenShift Container Platform console as a user with the `cluster-admin` role.
- From the navigation menu, select Operators > the OperatorHub icon.
- Locate and select the Multicluster global hub operator.
- Click Install to start the installation.
- After the installation completes, check the status on the Installed Operators page.
- Click Multicluster global hub operator to go to the Operator page.
- Click the Multicluster global hub tab to see the `Multicluster Global Hub` instance.
- Click Create Multicluster Global Hub to create the `Multicluster Global Hub` instance.
- Enter the required information and click Create to create the `Multicluster Global Hub` instance.
1.3.2. Additional resources
- For more information about mirroring an Operator catalog, see Mirroring an Operator catalog.
- For more information about accessing images from private registries, see Accessing images for Operators from private registries.
- For more information about adding a catalog source, see Adding a catalog source to a cluster.
- For more information about installing Red Hat Advanced Cluster Management in a disconnected environment, see Installing in disconnected network environments.
- For more information about mirroring images, see Mirroring images for a disconnected installation.
- For more information about the Operator SDK integration with OLM, see Operator SDK Integration with Operator Lifecycle Manager.
1.4. Installing multicluster global hub in a disconnected environment
If your cluster is in a restricted network, you can deploy the multicluster global hub operator in the disconnected environment.
Required access: Cluster administrator
1.4.1. Prerequisites
You must meet the following requirements before you install multicluster global hub in a disconnected environment:
- An image registry and a bastion host must have access to both the internet and to your mirror registry.
- Install the Operator Lifecycle Manager on your cluster. See Operator Lifecycle Manager (OLM).
- Install Red Hat Advanced Cluster Management for Kubernetes.
- Install the following command line interfaces:
  - The OpenShift Container Platform command line. See Getting started with the OpenShift Container Platform CLI.
  - The `opm` command line. See Installing the opm CLI.
  - The `oc-mirror` plugin. See Mirroring images for a disconnected installation by using the oc-mirror plugin v2.
1.4.2. Configuring a mirror registry
Installing multicluster global hub in a disconnected environment involves the use of a local mirror image registry. At this point, it is assumed that you have set up a mirror registry during the OpenShift Container Platform cluster installation.
Complete the following procedures to provision the mirror registry for multicluster global hub:
1.4.2.1. Creating operator packages in mirror catalog with oc-mirror plug-in
Red Hat provides the multicluster global hub and AMQ Streams operators in the Red Hat operators catalog, which are delivered by the registry.redhat.io/redhat/redhat-operator-index index image. When you prepare your mirror of this catalog index image, you can choose to either mirror the entire catalog as provided by Red Hat, or you can mirror a subset that contains only the operator packages that you intend to use.
If you are creating a full mirror catalog, no special considerations are needed because all of the packages required to install multicluster global hub and AMQ Streams are included. However, if you are creating a partial or filtered mirrored catalog, for which you identify particular packages to be included, you must include the multicluster-global-hub-operator-rh and amq-streams package names in your list.
Complete the following steps to create a local mirror registry of the multicluster-global-hub-operator-rh and amq-streams packages:
- Create an `ImageSetConfiguration` YAML file to configure and add the operator image. Your YAML file might resemble the following content, with the current version replacing `4.x`:

```yaml
kind: ImageSetConfiguration
apiVersion: mirror.openshift.io/v1alpha2
storageConfig:
  registry:
    imageURL: myregistry.example.com:5000/mirror/oc-mirror-metadata
mirror:
  platform:
    channels:
    - name: stable-4.x
      type: ocp
  operators:
  - catalog: registry.redhat.io/redhat/redhat-operator-index:v4.x
    packages:
    - name: multicluster-global-hub-operator-rh
    - name: amq-streams
  additionalImages: []
  helm: {}
```

- Mirror the image set directly to the target mirror registry by using the following command:

```
oc mirror --config=./imageset-config.yaml docker://myregistry.example.com:5000
```

- Mirror the image set in a fully disconnected environment. For more details, see Mirroring images for a disconnected installation.
1.4.2.2. Adding the registry and catalog to your disconnected cluster
To make your mirror registry and catalog available on your disconnected cluster, complete the following steps:
- Disable the default catalog sources of Operator Hub. Run the following command to update the `OperatorHub` resource:

```
oc patch OperatorHub cluster --type json \
  -p '[{"op": "add", "path": "/spec/disableAllDefaultSources", "value": true}]'
```

- Mirror the Operator catalog by completing the procedure, Mirroring the Operator catalog.
- Add the `CatalogSource` resource for your mirrored catalog into the `openshift-marketplace` namespace. Your `CatalogSource` YAML file might be similar to the following example, with `4.x` set as the supported version:

```yaml
apiVersion: operators.coreos.com/v1alpha1
kind: CatalogSource
metadata:
  name: my-mirror-catalog-source
  namespace: openshift-marketplace
spec:
  image: myregistry.example.com:5000/mirror/my-operator-index:v4.x
  sourceType: grpc
  secrets:
  - <global-hub-secret>
```

  Note: Take note of the value of the `metadata.name` field.

- Save the updated file.
- Query the available `PackageManifest` resource to verify that the required packages are available from your disconnected cluster. Run the following command:

```
oc -n openshift-marketplace get packagemanifests
```

- Verify that the displayed list includes entries showing the `multicluster-global-hub-operator-rh` and `amq-streams` packages, and that the catalog source for your mirror catalog supplies these packages.
- To change the built-in `amq-streams` catalog source, create a multicluster global hub annotation. Apply the following YAML:

```yaml
apiVersion: operator.open-cluster-management.io/v1alpha4
kind: MulticlusterGlobalHub
metadata:
  annotations:
    global-hub.open-cluster-management.io/strimzi-catalog-source-name: redhat-operators
    global-hub.open-cluster-management.io/strimzi-catalog-source-namespace: openshift-marketplace
    global-hub.open-cluster-management.io/strimzi-subscription-package-name: amq-streams
    global-hub.open-cluster-management.io/strimzi-subscription-channel: amq-streams-3.0.x
  name: multiclusterglobalhub
spec: ...
```
1.4.3. Configuring the image registry
To have your cluster obtain container images for the multicluster global hub operator from your local mirror registry, rather than from the internet-hosted registries, you must configure an ImageContentSourcePolicy resource on your disconnected cluster to redirect image references to your mirror registry. The ImageContentSourcePolicy resource only supports image mirroring by image digest.
If you mirrored your catalog using the oc adm catalog mirror command, the needed image content source policy configuration is in the imageContentSourcePolicy.yaml file inside the manifests- directory that is created by that command.
If you used the oc-mirror plug-in to mirror your catalog instead, the imageContentSourcePolicy.yaml file is within the oc-mirror-workspace/results- directory created by the oc-mirror plug-in.
In either case, you can apply the policies to your disconnected cluster by using an oc apply or oc replace command, such as oc replace -f ./<path>/imageContentSourcePolicy.yaml.
The required image content source policy statements can vary based on how you created your mirror registry, but are similar to this example:
```yaml
apiVersion: operator.openshift.io/v1alpha1
kind: ImageContentSourcePolicy
metadata:
  labels:
    operators.openshift.org/catalog: "true"
  name: global-hub-operator-icsp
spec:
  repositoryDigestMirrors:
  - mirrors:
    - myregistry.example.com:5000/multicluster-globalhub
    source: registry.redhat.io/multicluster-globalhub
  - mirrors:
    - myregistry.example.com:5000/openshift4
    source: registry.redhat.io/openshift4
  - mirrors:
    - myregistry.example.com:5000/redhat
    source: registry.redhat.io/redhat
  - mirrors:
    - myregistry.example.com:5000/rhel9
    source: registry.redhat.io/rhel9
  - mirrors:
    - myregistry.example.com:5000/amq-streams
    source: registry.redhat.io/amq-streams
```
You can configure different image registries for different managed hubs with the ManagedClusterImageRegistry. See Importing a cluster that has a ManagedClusterImageRegistry to use the ManagedClusterImageRegistry API to replace the agent image.
By completing the previous step, a label and an annotation are added to the selected ManagedCluster, which means that the agent image in the cluster is replaced with the mirror image.
- Label: `multicluster-global-hub.io/image-registry=<namespace.managedclusterimageregistry-name>`
- Annotation: `multicluster-global-hub.io/image-registries: <image-registry-info>`
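The label and annotation are applied to the ManagedCluster resource itself. The following is a sketch only; the cluster name hub1 is illustrative, and the placeholder values keep the formats from the list, because the exact values depend on your ManagedClusterImageRegistry configuration:

```yaml
# Sketch of a ManagedCluster after the image registry label and annotation
# are added; hub1 is an illustrative cluster name, and the angle-bracket
# placeholders follow the formats shown in the preceding list.
apiVersion: cluster.open-cluster-management.io/v1
kind: ManagedCluster
metadata:
  name: hub1
  labels:
    multicluster-global-hub.io/image-registry: <namespace.managedclusterimageregistry-name>
  annotations:
    multicluster-global-hub.io/image-registries: <image-registry-info>
spec:
  hubAcceptsClient: true
```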
1.4.3.1. Configuring the image pull secret
If the Operator or Operand images that are referenced by a subscribed Operator require access to a private registry, you can either provide access to all namespaces in the cluster, or to individual target tenant namespaces.
1.4.3.1.1. Configuring the multicluster global hub image pull secret in an OpenShift Container Platform cluster
You can configure the image pull secret in an existing OpenShift Container Platform cluster.
Note: Applying the image pull secret on a pre-existing cluster causes a rolling restart of all of the nodes.
Complete the following steps to configure the pull secret:
- Export the user name from the pull secret:

```
export USER=<the-registry-user>
```

- Export the password from the pull secret:

```
export PASSWORD=<the-registry-password>
```

- Copy the pull secret:

```
oc get secret/pull-secret -n openshift-config --template='{{index .data ".dockerconfigjson" | base64decode}}' > pull_secret.yaml
```

- Log in using the pull secret:

```
oc registry login --registry=${REGISTRY} --auth-basic="$USER:$PASSWORD" --to=pull_secret.yaml
```

- Specify the multicluster global hub image pull secret:

```
oc set data secret/pull-secret -n openshift-config --from-file=.dockerconfigjson=pull_secret.yaml
```

- Remove the old pull secret file:

```
rm pull_secret.yaml
```
1.4.3.1.2. Configuring the multicluster global hub image pull secret for an individual namespace
You can configure the image pull secret to an individual namespace by completing the following steps:
- Create the secret in the tenant namespace by running the following command:

```
oc create secret generic <secret_name> -n <tenant_namespace> \
  --from-file=.dockerconfigjson=<path/to/registry/credentials> \
  --type=kubernetes.io/dockerconfigjson
```

- Link the secret to the service account for your operator or operand:

```
oc secrets link <operator_sa> -n <tenant_namespace> <secret_name> --for=pull
```
1.4.3.2. Installing the Global Hub Operator
You can install and subscribe an Operator from the Red Hat OpenShift Software Catalog. See Adding Operators to a cluster for the procedure. After adding the Operator, you can check the status of the multicluster global hub Operator by running the following command:
```
oc get pods -n multicluster-global-hub
NAME                                                READY   STATUS    RESTARTS   AGE
multicluster-global-hub-operator-687584cb7c-fnftj   1/1     Running   0          2m12s
```
1.4.4. Additional resources
- For more information about creating a mirror registry, see Create a mirror registry.
- For more information about mirroring images, see Mirroring in disconnected environments.
- For more information about mirroring an Operator catalog, see Mirroring an Operator catalog.
1.5. Installing multicluster global hub on an existing Red Hat Advanced Cluster Management hub cluster
You can install multicluster global hub in an existing Red Hat Advanced Cluster Management hub cluster and enable the multicluster global hub agent in Red Hat Advanced Cluster Management. Learn about the following capabilities:
- When you install multicluster global hub in a Red Hat Advanced Cluster Management hub cluster, Kafka receives the information from the multicluster global hub operator and stores it in the multicluster global hub database, giving you access to the clusters, policies, and events.
- Kafka stores the multicluster global hub operator information, so as a Kafka consumer, you can access other components. For example, you can integrate with Red Hat Advanced Cluster Security and Red Hat Event-Driven Ansible.
- Your data is stored in the multicluster global hub database so you can view the cluster and policy information in the Grafana dashboards.
To install multicluster global hub on an existing Red Hat Advanced Cluster Management hub cluster, you must set the installAgentOnLocal field to true by applying the following YAML sample:
```yaml
apiVersion: operator.open-cluster-management.io/v1alpha4
kind: MulticlusterGlobalHub
metadata:
  name: multiclusterglobalhub
  namespace: multicluster-global-hub
spec:
  enableMetrics: true
  installAgentOnLocal: true
  dataLayer:
    kafka:
      topics:
        specTopic: gh-spec
        statusTopic: gh-status
      consumerGroupPrefix: org1_
    postgres:
      retention: 18m
  availabilityConfig: High
```

- `installAgentOnLocal`: Deploys the multicluster global hub agent on the multicluster global hub cluster. If you set the value to `true`, the multicluster global hub operator installs the agent on the multicluster global hub cluster.
- `specTopic`: Distributes workloads from multicluster global hub to managed hub clusters. The default value is `gh-spec`.
- `statusTopic`: Reports events and status updates to the multicluster global hub manager. When a topic ends with an asterisk, the topic is intended for the individual managed hub clusters. The default value for all managed hub clusters is `gh-status`.
- `consumerGroupPrefix`: Specifies the prefix for Kafka consumer groups. The final consumer group identification for multicluster global hub is `<prefix> + "global_hub"`, and for managed hub clusters it is `<prefix> + <managed-hub-name>`. In the final group identification, all hyphens are changed to underscores. If you do not specify a value, the default consumer group is the name of the hub without any prefix.
- `availabilityConfig`: Specifies the replication of deployments to improve their availability. The possible values are `Basic` and `High`. For the `High` value, the `availabilityConfig` creates one replica for the multicluster global hub manager and Grafana. It creates two replicas for the Kafka broker, which is a separate component in multicluster global hub. The default value is `High`.
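The consumer group naming rule for managed hub clusters can be sketched in shell. The prefix and hub name below are illustrative values, not defaults:

```shell
# Derive the final Kafka consumer group ID for a managed hub, following the
# rule described above: the configured prefix is prepended to the hub name,
# and all hyphens are changed to underscores.
prefix="org1_"
hub_name="managed-hub-1"
group_id="$(printf '%s%s' "$prefix" "$hub_name" | tr '-' '_')"
echo "$group_id"   # org1_managed_hub_1
```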
After you create the MultiClusterGlobalHub custom resource by completing the previous procedure, the multicluster global hub agent is automatically installed on the existing Red Hat Advanced Cluster Management hub cluster.
Note: If you enabled the MultiClusterObservability custom resource in the Red Hat Advanced Cluster Management after you installed multicluster global hub, then any new update for the MultiClusterObservability custom resource does not take effect.
1.6. Integrating existing components
The multicluster global hub requires middleware components, Kafka and PostgreSQL, along with Grafana as the Observability platform to provide the policy compliance view. The multicluster global hub provides versions of Kafka, PostgreSQL, and Grafana. You can also integrate your own existing Kafka, PostgreSQL, and Grafana.
1.6.1. Integrating an existing version of Kafka
If you have your own instance of Kafka, you can use it as the transport for multicluster global hub. Complete the following steps to integrate an instance of Kafka:
- If you do not have a persistent volume for your Kafka instance, you need to create one.
- Create a secret named `multicluster-global-hub-transport` in the `multicluster-global-hub` namespace. Extract the information in the following required fields:
  - `bootstrap.servers`: Specifies the Kafka bootstrap servers.
  - `ca.crt`: Required if you use the `KafkaUser` custom resource to configure authentication credentials. See the User authentication topic in the STRIMZI documentation for the required steps to extract the `ca.crt` certificate from the secret.
  - `client.crt`: Required. See the User authentication topic in the STRIMZI documentation for the steps to extract the `user.crt` certificate from the secret.
  - `client.key`: Required. See the User authentication topic in the STRIMZI documentation for the steps to extract the `user.key` from the secret.
- Create the secret by running the following command, replacing the values with your extracted values where necessary:

```
oc create secret generic multicluster-global-hub-transport -n multicluster-global-hub \
  --from-literal=bootstrap_server=<kafka-bootstrap-server-address> \
  --from-file=ca.crt=<CA-cert-for-kafka-server> \
  --from-file=client.crt=<Client-cert-for-kafka-server> \
  --from-file=client.key=<Client-key-for-kafka-server>
```

- If automatic topic creation is configured on your Kafka instance, skip this step. If it is not configured, create the `spec` and `status` topics manually.
- Ensure that the global hub user that accesses Kafka has the permission to read data from the topics and write data to the topics.
1.6.2. Integrating an existing version of PostgreSQL
If you have your own PostgreSQL relational database, you can use it as the storage for multicluster global hub.
The minimum required storage size is 20GB. This amount can store 3 managed hubs with 250 managed clusters and 50 policies per managed hub for 18 months. You need to create a secret named multicluster-global-hub-storage in the multicluster-global-hub namespace. The secret must contain the following fields:
- `database_uri`: Used to create the database and insert data. Your value must resemble the following format: `postgres://<user>:<password>@<host>:<port>/<database>?sslmode=<mode>`.
- `database_uri_with_readonlyuser`: Optional. Used to query data by the instance of Grafana that multicluster global hub uses. Your value must resemble the following format: `postgres://<user>:<password>@<host>:<port>/<database>?sslmode=<mode>`.
- `ca.crt`: Optional, depending on the `sslmode` value.

Create the secret by running the following command:

```
oc create secret generic multicluster-global-hub-storage -n multicluster-global-hub \
  --from-literal=database_uri=<postgresql-uri> \
  --from-literal=database_uri_with_readonlyuser=<postgresql-uri-with-readonlyuser> \
  --from-file=ca.crt=<CA-for-postgres-server>
```
The host must be accessible from the multicluster global hub cluster. If your PostgreSQL database is in a Kubernetes cluster, you can consider using the service type with nodePort or LoadBalancer to expose the database. For more information, see Accessing the provisioned postgres database for troubleshooting.
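As a sketch of exposing an in-cluster PostgreSQL database, assuming your PostgreSQL pods run in a database namespace and carry the label app: postgres (both names are illustrative), a Service similar to the following could expose port 5432:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: postgres-external    # illustrative name
  namespace: database        # illustrative namespace
spec:
  type: NodePort             # or LoadBalancer
  selector:
    app: postgres            # must match your PostgreSQL pod labels
  ports:
  - port: 5432
    targetPort: 5432
```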
1.6.3. Integrating an existing version of Grafana
Using an existing Grafana instance might work with multicluster global hub if you are relying on your own Grafana to get metrics from multiple sources, such as Prometheus, from different clusters and if you aggregate the metrics yourself. To get multicluster global hub data into your own Grafana, you need to configure the data source and import the dashboards.
- Collect the PostgreSQL connection information from the multicluster global hub Grafana `datasource` secret by running the following command:

```
oc get secret multicluster-global-hub-grafana-datasources -n multicluster-global-hub -ojsonpath='{.data.datasources\.yaml}' | base64 -d
```

  The output resembles the following example:

```yaml
apiVersion: 1
datasources:
- access: proxy
  isDefault: true
  name: Global-Hub-DataSource
  type: postgres
  url: postgres-primary.multicluster-global-hub.svc:5432
  database: hoh
  user: guest
  jsonData:
    sslmode: verify-ca
    tlsAuth: true
    tlsAuthWithCACert: true
    tlsConfigurationMethod: file-content
    tlsSkipVerify: true
    queryTimeout: 300s
    timeInterval: 30s
  secureJsonData:
    password: xxxxx
    tlsCACert: xxxxx
```

- Configure the `datasource` in your own Grafana instance by adding a source, such as PostgreSQL, and complete the required fields with the information you previously extracted. See the following required fields:
  - Name
  - Host
  - Database
  - User
  - Password
  - TLS/SSL Mode
  - TLS/SSL Method
  - CA Cert
- If your Grafana is not in the multicluster global hub cluster, you need to expose PostgreSQL by using the `LoadBalancer` service type so that PostgreSQL can be accessed from outside the cluster. You can add the following value into the `PostgresCluster` operand:

```yaml
service:
  type: LoadBalancer
```

  After you add that content, you can get the `EXTERNAL-IP` from the `postgres-ha` service. See the following example:

```
oc get svc postgres-ha -n multicluster-global-hub
NAME          TYPE           CLUSTER-IP      EXTERNAL-IP                        PORT(S)          AGE
postgres-ha   LoadBalancer   172.30.227.58   xxxx.us-east-1.elb.amazonaws.com   5432:31442/TCP   128m
```

  After running that command, you can use `xxxx.us-east-1.elb.amazonaws.com:5432` as the PostgreSQL connection host.

- Import the existing dashboards:
  - Follow the steps in Export and import dashboards in the official Grafana documentation to export the dashboard from the existing Grafana instance.
  - Follow the steps in Export and import dashboards in the official Grafana documentation to import a dashboard into the multicluster global hub Grafana instance.
1.6.4. Additional resources
See User authentication in the STRIMZI documentation for more information about how to extract the ca.crt certificate from the secret.
See User authentication in the STRIMZI documentation for the steps to extract the user.crt certificate from the secret.
1.7. Importing a managed hub cluster in the default mode
Import an existing hub cluster as a managed hub cluster to help you control your different environments when you develop and deploy your applications.
1.7.1. Prerequisites
- Disable the cluster self-management setting in the existing hub cluster by setting disableHubSelfManagement to true in the multiclusterhub custom resource. This setting disables the automatic import of the hub cluster as a managed cluster.
- Import the managed hub cluster by completing the steps in Cluster import introduction.
1.7.2. Importing a managed hub cluster
To import an existing hub cluster as a managed hub cluster, complete the following steps:
- Set the label global-hub.open-cluster-management.io/deploy-mode=default on the managedcluster resource when you import the managed hub cluster.
- Check the multicluster global hub agent status to ensure that the agent is running in the managed hub cluster by running the following command:

  oc get managedclusteraddon multicluster-global-hub-controller -n <managed_hub_cluster_name>

Note: If you upgrade multicluster global hub, you must manually add the global-hub.open-cluster-management.io/deploy-mode=default label to all of the managed hub clusters.
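As a minimal sketch, the label can be applied with the standard `oc label` command. The cluster name hub1 is a hypothetical example; the command is printed rather than executed here because it requires a live cluster:

```shell
# Hypothetical managed hub cluster name; substitute your own.
CLUSTER=hub1
# Print the command that applies the default-mode label to the managedcluster.
echo "oc label managedcluster ${CLUSTER} global-hub.open-cluster-management.io/deploy-mode=default"
```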
1.8. Importing a managed hub cluster in the hosted mode
To enable local-cluster on your managed hub cluster, you must import it in the hosted mode.
1.8.1. Prerequisites
- Enable the local-cluster in the multicluster global hub cluster.
- Install the latest Red Hat Advanced Cluster Management version.
- Make sure the kubeconfig file is always usable. If it expires, regenerate the auto-import-secret in the managed hub cluster namespace by running the following command:

  oc create secret generic auto-import-secret --from-file=kubeconfig=./managedClusterKubeconfig -n <managed_hub_namespace>
1.8.2. Importing a managed hub cluster in the hosted mode
When you import an existing managed hub cluster in the hosted mode, you only get support for imported managed hub clusters that use a kubeconfig file. Red Hat Advanced Cluster Management for Kubernetes uses this kubeconfig file to generate the auto-import-secret which connects to your managed hub cluster. In the hosted mode, multicluster global hub does not support backup and restore.
Import your managed hub cluster by setting the label global-hub.open-cluster-management.io/deploy-mode=hosted on the managedcluster resource.
With this label, multicluster global hub does the following actions:
- Imports the new managed hub cluster in the hosted mode.
- Installs the multicluster global hub agent in the new managed hub cluster that uses the hosted mode.
- Disables the following add-ons in the new managed hub cluster namespaces: applicationManager, certPolicyController, and policyController.
- Changes the following managed hub cluster add-ons that are related to the new namespace: work-manager, cluster-proxy, and managed-serviceaccount.
- Changes the namespaces of these add-ons to open-cluster-management-global-hub-agent-addon.
1.9. Accessing the Grafana data
The Grafana data is exposed through the route. Run the following command to display the login URL:
oc get route multicluster-global-hub-grafana -n <the-namespace-of-multicluster-global-hub-instance>
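As a sketch, you can also build the login URL directly from the route host. The host value below is a hypothetical example; on a live cluster you would fetch it with `oc get route multicluster-global-hub-grafana -n multicluster-global-hub -o jsonpath='{.spec.host}'`:

```shell
# Hypothetical route host; replace with the host returned by your cluster.
HOST=multicluster-global-hub-grafana.apps.example.com
echo "https://${HOST}"
```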
The authentication method of this URL is the same as authenticating to the Red Hat OpenShift Container Platform console.
To learn more about what you can view with the Grafana dashboards, see the following sections:
1.9.1. Viewing policy status with Grafana dashboards
After accessing the global hub Grafana data, you can monitor the policies that are configured across your managed hub cluster environments.
From the multicluster global hub dashboard, you can identify the compliance status of the policies of the system over a selected time range. The policy compliance status is updated daily, so the dashboard does not display the status of the current day until the following day.
To navigate the multicluster global hub dashboards, you can observe and filter the policy data by grouping them by policy or by cluster.
If you prefer to examine the policy data by using the policy grouping, start from the dashboard called Global Hub - Policy Group Compliancy Overview.
This dashboard allows you to filter the policy data based on standard, category, and control. After selecting a specific point in time on the graph, you are directed to the Global Hub - Offending Policies dashboard. The Global Hub - Offending Policies dashboard lists the non-compliant or unknown policies at that time. After selecting a target policy, you can view related events and see what has changed by accessing the Global Hub - What’s Changed / Policies dashboard.
Similarly, if you want to examine the policy data by cluster grouping, begin by using the Global Hub - Cluster Group Compliancy Overview dashboard. The navigation flow is identical to the policy grouping flow, but you select filters that are related to the cluster, such as managed cluster labels and values. Instead of viewing policy events for all clusters, after reaching the Global Hub - What’s Changed / Clusters dashboard, you can view policy events related to an individual cluster.
1.9.2. Viewing Strimzi information with Grafana dashboards
To understand the health and performance of your Kafka deployment and Postgres database, collect their metrics. When you check the metrics, you can identify issues before they become critical and make informed decisions about resource allocation and capacity planning. If you do not collect and check the metrics, you might have limited visibility into the behavior of your Kafka deployment, making troubleshooting more difficult.
You can check the dashboards and their metrics in the multicluster global hub Grafana. In the Strimzi folder, you can view the following dashboards:
- multicluster global hub - Strimzi Operator
- multicluster global hub - Strimzi Kafka
- multicluster global hub - Strimzi Kraft
1.9.3. Viewing Postgres information with Grafana dashboards (Technology Preview)
To understand the health and performance of your Postgres config, collect and check the Postgres metrics in the Grafana dashboard. In the Postgres folder, you can view the following dashboard:
- multicluster global hub - PostgreSQL Database
1.9.4. Viewing cluster information with Grafana dashboards
To filter your managed clusters and to view their details and events, use the Grafana dashboards. With the Grafana dashboard, you can filter your managed clusters by hub, label, and name. Additionally, you can view the distribution of your managed clusters by hub, status, cloud, and version.
In the Cluster folder, you can view the following dashboard:
- multicluster global hub - Cluster Overview
1.10. Grafana alerts (Technology Preview)
You can configure three Grafana alerts, which are stored in the multicluster-global-hub-default-alerting config map. These alerts notify you of suspicious policy changes, suspicious cluster compliance status changes, and failed cron jobs.
See the following descriptions of the alerts:
Suspicious policy change: This alert rule watches for suspicious policy changes. If the following events occur more than five times in one hour, it creates notifications:
- A policy was enabled or disabled.
- A policy was updated.
Suspicious cluster compliance status change: This alert rule watches the cluster compliance status and policy events for a cluster. There are two rules in this alert:
- Cluster compliance status changes frequently: If a cluster compliance status changes from compliance to non-compliance more than three times in one hour, it creates notifications.
- Too many policy events in a cluster: For a policy in a cluster, if there are more than 20 events in five minutes, it creates notifications. If this alert is always firing, the data in the event.local_policies table increases too fast.
Cron Job failed: This alert watches the cron jobs that are described in Configuring the cron jobs for failed events. There are two rules in this alert:
- Local compliance job failed: If this alert rule creates notifications, it means the local compliance status synchronization job failed. It might cause the data in the history.local_compliance table to be lost. Run the job manually, if necessary.
- Data retention job failed: If this alert rule starts creating notifications, it means the data retention job failed. You can run it manually.
1.10.1. Deleting a default Grafana alert rule
If the default Grafana alert rules do not provide useful information, you can delete the Grafana alert rule by including a deleteRules section in the multicluster-global-hub-custom-alerting config map. See Customize Grafana alerting resources for more information about the multicluster-global-hub-custom-alerting config map.
To delete all of the default alerts, the deleteRules configuration section should resemble the following example:
deleteRules:
- orgId: 1
uid: globalhub_suspicious_policy_change
- orgId: 1
uid: globalhub_cluster_compliance_status_change_frequently
- orgId: 1
uid: globalhub_high_number_of_policy_events
- orgId: 1
uid: globalhub_data_retention_job
- orgId: 1
uid: globalhub_local_compliance_job
1.10.2. Customizing Grafana alerts
The multicluster global hub supports creating custom Grafana alerts. Complete the following steps to customize your Grafana alerts:
1.10.2.1. Customizing your grafana.ini file
To customize your grafana.ini file, create a secret named multicluster-global-hub-custom-grafana-config in the namespace where you installed your multicluster global hub operator. The secret data key is grafana.ini, as seen in the following example. Replace the required information with your own credentials:
apiVersion: v1
kind: Secret
metadata:
name: multicluster-global-hub-custom-grafana-config
namespace: multicluster-global-hub
type: Opaque
stringData:
grafana.ini: |
[smtp]
enabled = true
host = smtp.google.com:465
user = <example@google.com>
password = <password>
;cert_file =
;key_file =
skip_verify = true
from_address = <example@google.com>
from_name = Grafana
;ehlo_identity = dashboard.example.com
Note: The ehlo_identity setting specifies the EHLO identity in the SMTP dialog, which defaults to instance_name.
Note: You cannot configure the section that already contains the multicluster-global-hub-default-grafana-config secret.
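Instead of writing the Secret YAML by hand, you can create the secret from a local grafana.ini file with the standard `oc create secret` command. This is a sketch; the `./grafana.ini` path is a hypothetical local file, and the command is printed rather than executed because it requires a live cluster:

```shell
# Namespace where the multicluster global hub operator is installed.
NAMESPACE=multicluster-global-hub
# Print the command that creates the custom Grafana config secret from a file.
echo "oc create secret generic multicluster-global-hub-custom-grafana-config --from-file=grafana.ini=./grafana.ini -n ${NAMESPACE}"
```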
1.10.2.2. Customizing Grafana alerting resources
The multicluster global hub supports customizing the alerting resources, which is explained in Create and manage alerting resources using file provisioning in the Grafana documentation.
To customize the alerting resources, create a config map named multicluster-global-hub-custom-alerting in the multicluster-global-hub namespace.
The config map data key is alerting.yaml, as in the following example:
apiVersion: v1
data:
alerting.yaml: |
contactPoints:
- orgId: 1
name: globalhub_policy
receivers:
- uid: globalhub_policy_alert_email
type: email
settings:
addresses: <example@redhat.com>
singleEmail: false
- uid: globalhub_policy_alert_slack
type: slack
settings:
url: <Slack-webhook-URL>
title: |
{{ template "globalhub.policy.title" . }}
text: |
{{ template "globalhub.policy.message" . }}
policies:
- orgId: 1
receiver: globalhub_policy
group_by: ['grafana_folder', 'alertname']
matchers:
- grafana_folder = Policy
repeat_interval: 1d
deleteRules:
- orgId: 1
uid: [Alert Rule Uid]
muteTimes:
- orgId: 1
name: mti_1
time_intervals:
- times:
- start_time: '06:00'
end_time: '23:59'
location: 'UTC'
weekdays: ['monday:wednesday', 'saturday', 'sunday']
months: ['1:3', 'may:august', 'december']
years: ['2020:2022', '2030']
days_of_month: ['1:5', '-3:-1']
kind: ConfigMap
metadata:
name: multicluster-global-hub-custom-alerting
namespace: multicluster-global-hub
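Similarly, you can create the config map from a local alerting.yaml file instead of applying the full YAML. This is a sketch; the `./alerting.yaml` path is a hypothetical local file, and the command is printed rather than executed because it requires a live cluster:

```shell
# Namespace where multicluster global hub is installed.
NAMESPACE=multicluster-global-hub
# Print the command that creates the custom alerting config map from a file.
echo "oc create configmap multicluster-global-hub-custom-alerting --from-file=alerting.yaml=./alerting.yaml -n ${NAMESPACE}"
```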
1.11. Configuring the cron jobs
You can configure the cron job settings of the multicluster global hub.
After you install the multicluster global hub operand, the multicluster global hub manager runs a job scheduler that schedules the following cron jobs:
- Local compliance status sync job: This cron job runs at midnight every day, based on the policy status and events collected by the manager on the previous day. Run this job to summarize the compliance status and the change frequency of the policy on the cluster, and to store the results in the history.local_compliance table as the data source of the Grafana dashboards.
- Data retention job: Some data tables in multicluster global hub continue to grow over time, which can cause problems when the tables get too large. The following two actions help to minimize the issues that result from tables that are too large:

  - Delete older data that is no longer needed.
  - Enable partitioning on the large tables to make queries and deletions run faster.

  For event tables, such as event.local_policies and history.local_compliance, which increase in size daily, range partitioning divides the large tables into smaller partitions. This process also creates the partition tables for the next month each time it runs. For the policy and cluster tables, such as local_spec.policies and status.managed_clusters, the deleted_at columns are indexed to improve performance when you delete records.

  You can change the duration of time that the data is retained by changing the retention setting on the multicluster global hub operand. The recommended minimum value is 1 month, and the default value is 18 months. The run interval of this job should be less than one month.
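As an illustration of the retention window, the following sketch computes the cutoff month for the default 18-month retention. GNU date is assumed, and the reference date is fixed for the example (a live job would use the current date); data in monthly partitions older than the printed month would be eligible for deletion:

```shell
# Default retention in months; configurable through the operand's retention setting.
RETENTION_MONTHS=18
# Fixed reference date for illustration; a live job would use today's date.
REFERENCE_DATE=2024-07-01
# Print the cutoff month (GNU date syntax).
date -d "${REFERENCE_DATE} ${RETENTION_MONTHS} months ago" +%Y-%m
```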
The listed cron jobs run every time the multicluster global hub manager starts. The local compliance status sync job is run once a day and can be run multiple times within the day without changing the result.
The data retention job is run once a week and also can be run many times per month without a change in the results.
The status of these jobs is saved in the multicluster_global_hub_jobs_status metric, which you can view from the console of the Red Hat OpenShift Container Platform cluster. A value of 0 indicates that the job ran successfully, while a value of 1 indicates failure.
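The metric interpretation can be sketched as a simple check. The sample value below is hypothetical; on a cluster you would read the metric value from the console metrics query:

```shell
# Hypothetical sample of the multicluster_global_hub_jobs_status metric value.
JOB_STATUS=0
# 0 means the job ran successfully; 1 means it failed.
if [ "$JOB_STATUS" -eq 0 ]; then
  echo "job succeeded"
else
  echo "job failed"
fi
```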
1.12. Running the summarization process manually
You can manually run the summarization process to restore the initial compliance state of the day when the job is not triggered or fails to run. To manually run the summarization process, complete the following steps:
- Use the earlier day’s compliance history as the initial state for the recovery day’s history.
- Connect to the database. You can use clients such as pgAdmin or TablePlus to connect to the multicluster global hub database, or you can directly connect to the database on the cluster by running the following command:

  oc exec -it multicluster-global-hub-postgresql-0 -n multicluster-global-hub -- psql -d hoh

- Decide the date when you want the summarization process to run, for example: 2023-07-06. Find the local compliance job failure information from the dashboard metrics or the history.local_compliance_job_log table. In this example, the failure date is 2023-07-06, so you know that you need to manually run the summary process for 2023-07-06.
- Recover the initial compliance of 2023-07-06 by running the following SQL:

  -- call the function to generate the initial data of '2023-07-06' by inheriting '2023-07-05'
  CALL history.generate_local_compliance('2023-07-06');
1.13. Backup for multicluster global hub (Technology Preview)
Use multicluster global hub with Red Hat Advanced Cluster Management backup and restore features for recovery solutions and to access basic resources. To learn more about how these features help you, see backup and restore.
multicluster global hub also supports backing up the PostgreSQL PVC by using acm-hub-pvc-backup. To ensure that your multicluster global hub can support the PostgreSQL PVC backup, you must have the current versions of VolSync and Red Hat Advanced Cluster Management. For detailed steps on backing up your data, see acm-hub-pvc-backup.
1.13.1. Restoring multicluster global hub backup and restore
If you need to restore your multicluster global hub cluster, prepare the new hub cluster, and then install the multicluster global hub operator. Do not create the multicluster global hub custom resource (CR) because the CR is automatically restored.
1.14. multicluster global hub search (Technology Preview)
Global search expands the search capabilities when you use multicluster global hub to manage your environment.
1.14.1. Prerequisites
You need to enable multicluster global hub.
1.14.2. Enabling global search
To enable global search, add the global-search-preview=true annotation to the search-v2-operator resource by running the following command:
oc annotate search search-v2-operator -n open-cluster-management global-search-preview=true
The search operator is updated with the following status condition:
status:
conditions:
- lastTransitionTime: '2024-05-31T19:49:37Z'
message: None
reason: None
status: 'True'
type: GlobalSearchReady
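The readiness check can be sketched as a small script. The status value below is taken from the sample condition above; on a live cluster you could fetch it with a JSONPath query such as `oc get search search-v2-operator -n open-cluster-management -o jsonpath='{.status.conditions[?(@.type=="GlobalSearchReady")].status}'`:

```shell
# Value standing in for the GlobalSearchReady condition status read from the cluster.
STATUS="True"
# Confirm that global search is enabled and ready.
if [ "$STATUS" = "True" ]; then
  echo "global search is ready"
fi
```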
1.14.3. Additional resources
1.15. Migrating managed clusters
Technology Preview: You can access the Managed Cluster Migration feature in multicluster global hub 1.5. You can migrate managed clusters from one Red Hat Advanced Cluster Management hub cluster to another and across versions, for example, from Red Hat Advanced Cluster Management 2.13 to Red Hat Advanced Cluster Management 2.14.
By using multicluster global hub to migrate your managed cluster, you have a unified process that helps you perform the following actions:
- Reorganize workloads among Red Hat Advanced Cluster Management hub clusters
- Move clusters and their resources together
- Automate cluster registration and cleanup
- Track individual steps with detailed status updates
To fully migrate your managed clusters, complete the following sections:
Prerequisites
To migrate your managed clusters, you need the following components:
- The multicluster global hub operator to organize the migration workflow.
- The source Red Hat Advanced Cluster Management hub cluster, which manages the clusters and the associated resources.
- The target Red Hat Advanced Cluster Management hub cluster, which receives the migrated clusters and the associated resources.
1.15.1. Managed cluster migration process
multicluster global hub implements an event-driven architecture that helps you manage large fleets of clusters. multicluster global hub connects itself to your managed hub clusters. With this event-based design, multicluster global hub can communicate, organize, synchronize, and transfer resources and cluster states among all of your hub clusters.
The managed cluster migration process requires coordination between a source and a target Red Hat Advanced Cluster Management hub cluster. The multicluster-global-hub-agent performs the migration tasks on the source and target hub clusters. The multicluster-global-hub-manager controls the flow of the migration between the source and target hub clusters and manages the ManagedClusterMigration resources.
During the managed cluster migration, the source and target hub clusters go through different phases. These phases and their conditions help track the status changes of the hub clusters throughout the migration process. See the following table for an outline of each phase and its condition:
| Phase | Condition |
|---|---|
| Pending | Performs one migration at a time. Other migrations remain Pending. |
| Validating | Verifies that the clusters and hub clusters are valid. |
| Initializing | Prepares the source and target hub clusters for the migration. |
| Deploying | Migrates selected clusters and their resources. |
| Registering | Restarts the registration of the cluster to the target hub cluster. |
| Cleaning | Cleans the resources from both the source and target hub clusters. |
| Completed | Confirms the migrations are successfully completed. |
| Failed | Confirms that the migration has failed and includes this failure message in the migration status. |
See the following table for the supported versions and the corresponding multicluster global hub, source hub, and target hub versions:
| multicluster global hub version for migration | source hub version | target hub version |
|---|---|---|
| multicluster global hub 1.5 | Red Hat Advanced Cluster Management 2.14 | Red Hat Advanced Cluster Management 2.14 |
| multicluster global hub 1.5 | Red Hat Advanced Cluster Management 2.13 | Red Hat Advanced Cluster Management 2.14 |
1.15.2. Preparing the migration environment
To prepare your multicluster global hub environment for migration, you can create a brownfield environment by deploying the multicluster global hub control plane directly into the source hub cluster. Then, you can import the target hub cluster into the multicluster global hub environment in the hosted mode.
Complete the following steps:
- Install the multicluster global hub operator in the source Red Hat Advanced Cluster Management hub cluster.
- Enable the local-cluster in the multicluster global hub operator that is running in the source hub cluster.
- Apply the following YAML to create the multicluster global hub operand and enable the multicluster global hub agent to run locally:

  apiVersion: operator.open-cluster-management.io/v1alpha4
  kind: MulticlusterGlobalHub
  metadata:
    name: multiclusterglobalhub
    namespace: multicluster-global-hub
  spec:
    availabilityConfig: High
    installAgentOnLocal: true

- Import the target hub cluster into multicluster global hub in the hosted mode by adding the following label to the managed hub cluster:

  global-hub.open-cluster-management.io/deploy-mode=hosted
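As a sketch, the hosted-mode label can be applied with the standard `oc label` command. The cluster name hub2 is a hypothetical example; the command is printed rather than executed here because it requires a live cluster:

```shell
# Hypothetical target hub cluster name; substitute your own.
CLUSTER=hub2
# Print the command that applies the hosted-mode label to the managedcluster.
echo "oc label managedcluster ${CLUSTER} global-hub.open-cluster-management.io/deploy-mode=hosted"
```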
1.15.3. Migrating managed clusters
After you configure your multicluster global hub migration environment, migrate your managed clusters. Complete the following steps to migrate the cluster1 sample and the associated resources from the hub1 hub cluster to the hub2 hub cluster:
You can create a ManagedClusterMigration resource in the multicluster global hub migration environment by directly selecting your managed clusters by name or by using a Placement resource.
To migrate your specific managed clusters by name, complete the following steps:
- Go to your multicluster global hub namespace.
Apply the following YAML file:
apiVersion: global-hub.open-cluster-management.io/v1alpha1
kind: ManagedClusterMigration
metadata:
  name: migration-sample
spec:
  from: local-cluster
  includedManagedClusters:
    - cluster1
    - cluster2
  to: hub2
To migrate your managed clusters with the Placement resource, complete the following steps:
- Go to your multicluster global hub namespace.
- Select clusters based on labels, cluster properties, or other criteria defined in your Placement resource.
- Apply the following YAML file:
apiVersion: global-hub.open-cluster-management.io/v1alpha1
kind: ManagedClusterMigration
metadata:
  name: migration-placement-sample
spec:
  from: local-cluster
  includedManagedClustersPlacementRef: production-clusters
  to: hub2

1. The source hub cluster is the local-cluster that multicluster global hub installed in the hub1 hub cluster.
2. The Placement resource named production-clusters defines which clusters get migrated. You must define the Placement resource in the multicluster global hub agent namespace of the source hub cluster. For example, the namespace can be multicluster-global-hub or multicluster-global-hub-agent.
3. The target hub cluster is hub2.
- Specify either includedManagedClusters or includedManagedClustersPlacementRef to select the clusters to include in your migration. Include only one of these fields because they cannot exist together.
1.15.4. Tracking the migration status
During the migration process, you can track the statuses of your managed clusters by using the multicluster global hub ConfigMap or the migration custom resource.
multicluster global hub automatically creates a ConfigMap that gives you a detailed cluster list, telling you whether each cluster migration fails or succeeds. The migration custom resource gives you the status of the entire migration process.
Track the migration status of your managed clusters by completing the following steps:
- View the migration status from the ConfigMap by running the following command:

  kubectl get configmap <migration-name> -n <global-hub-namespace> -o yaml

  Ensure that the ConfigMap that you see resembles the following sample:

  apiVersion: v1
  kind: ConfigMap
  metadata:
    name: migration-sample
    namespace: multicluster-global-hub
  data:
    success: '["cluster1","cluster3"]'
    failure: '["cluster2"]'

- To view the tracking status from the migration custom resource, verify that its migration status is True, and that it resembles the following sample:
status:
conditions:
- type: ResourceValidated
status: "True"
message: Migration resources have been validated
- type: ResourceInitialized
status: "True"
message: All source and target hubs have been initialized
- type: ResourceDeployed
status: "True"
message: Resources have been successfully deployed to the target hub cluster
- type: ClusterRegistered
status: "True"
message: All migrated clusters have been successfully registered
- type: ResourceCleaned
status: "True"
message: Resources have been successfully cleaned up from the hub clusters
phase: Completed
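The success and failure lists in the migration ConfigMap can be summarized with standard shell tools. This is a sketch; the JSON strings below match the sample ConfigMap data, and on a live cluster you could read them with, for example, `kubectl get configmap migration-sample -n multicluster-global-hub -o jsonpath='{.data.success}'`:

```shell
# Values standing in for the data fields of the migration ConfigMap.
SUCCESS='["cluster1","cluster3"]'
FAILURE='["cluster2"]'

# Strip the JSON punctuation to get plain, space-separated cluster names.
echo "migrated: $(echo "$SUCCESS" | tr -d '[]"' | tr ',' ' ')"
echo "failed: $(echo "$FAILURE" | tr -d '[]"')"
```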
1.15.5. Additional resources
To learn more about importing hub clusters, see the following resources:
1.16. Recovering data from the upgraded built-in PostgreSQL (Deprecated)
Starting with multicluster global hub version 1.4.0, the built-in PostgreSQL database is upgraded to version 16. This upgrade replaces the earlier multicluster-global-hub-postgres instance with the new multicluster-global-hub-postgresql instance.
By default, this upgrade automatically re-syncs real-time data, such as policies and clusters, to the new PostgreSQL instance. Historical data, such as the event and history tables, is not automatically recovered.
If you want to keep historical data, complete the following sections:
1.16.1. Restoring historical data
If the multicluster global hub upgrade to version 1.4.0 removed your historical data, recover your data by completing the following steps:
- Clone the Red Hat Advanced Cluster Management for Kubernetes Git repository to access the shell scripts that you need to recover your data by running the following command:

  git clone -b release-2.13 https://github.com/stolostron/multicluster-global-hub.git

- Restore your history tables by completing the following steps:

  - If the namespace is the default multicluster-global-hub, run the following shell script:

    ./doc/upgrade/restore_history_tables.sh

  - If the namespace is not multicluster-global-hub, pass the namespace where you installed multicluster global hub as an argument when you run the shell script:

    ./doc/upgrade/restore_history_tables.sh <multicluster-global-hub-namespace>
- Restore your event tables by completing the following steps:

  - If the namespace is the default multicluster-global-hub, run the following shell script:

    ./doc/upgrade/restore_event_tables.sh

  - If the namespace is not multicluster-global-hub, pass the namespace where you installed multicluster global hub as an argument when you run the shell script:

    ./doc/upgrade/restore_event_tables.sh <multicluster-global-hub-namespace>
1.16.2. Deleting legacy built-in Postgres data
After multicluster global hub upgrades to version 1.4.0, it switches to the new built-in Postgres instance. The global hub operator does not automatically delete resources associated with the legacy Postgres instance. To delete the legacy Postgres resources, complete the following steps:
- If the namespace is the default multicluster-global-hub, run the following shell script:

  ./doc/upgrade/cleanup_legacy_postgres.sh

- If the namespace is not multicluster-global-hub, pass the namespace where you installed multicluster global hub as an argument when you run the shell script:

  ./doc/upgrade/cleanup_legacy_postgres.sh <multicluster-global-hub-namespace>