Chapter 2. SiteConfig
The SiteConfig operator offers a template-driven cluster provisioning solution through the unified ClusterInstance API, which is derived from the SiteConfig API of the SiteConfig generator kustomize plugin.
To learn more about how to use the SiteConfig operator, see the sections that follow in this chapter. For advanced topics, see SiteConfig advanced topics.
2.1. About the SiteConfig operator
The SiteConfig operator offers a template-driven cluster provisioning solution, which allows you to provision clusters with various installation methods.
The SiteConfig operator introduces the unified ClusterInstance API, which comes from the SiteConfig API of the SiteConfig generator kustomize plugin.
The ClusterInstance API decouples parameters that define a cluster from the manner in which the cluster is deployed.
This separation removes certain limitations of the SiteConfig kustomize plugin in the current GitOps Zero Touch Provisioning (ZTP) flow, such as its coupling to agent-based cluster installations and the scalability constraints that are related to Argo CD.
Using the unified ClusterInstance API, the SiteConfig operator offers the following improvements:
- Isolation: Separates the cluster definition from the installation method. The ClusterInstance custom resource captures the cluster definition, while installation templates capture the cluster architecture and installation methods.
- Unification: The SiteConfig operator unifies both Git and non-Git workflows. You can apply the ClusterInstance custom resource directly on the hub cluster, or synchronize resources through a GitOps solution, such as Argo CD.
- Consistency: Maintains a consistent API across installation methods, whether you are using the Assisted Installer, the Image Based Install Operator, or any other custom template-based approach.
- Scalability: Achieves greater scalability for each cluster than the SiteConfig kustomize plugin.
- Flexibility: Provides you with more power to deploy and install clusters by using custom templates.
- Troubleshooting: Offers insightful information regarding cluster deployment status and rendered manifests, significantly enhancing the troubleshooting experience.
For more information about the Image Based Install Operator, see Image Based Install Operator.
For more information about the Assisted Installer, see Installing an on-premise cluster using the Assisted Installer.
2.1.1. The SiteConfig operator flow
The SiteConfig operator dynamically generates installation manifests based on user-defined templates that are instantiated from the data in the ClusterInstance custom resource.
You can source the ClusterInstance custom resource from your Git repository through Argo CD, or you can create it directly on the hub cluster, either manually or through external tools and workflows.
The following is a high-level overview of the process:
- You create one or more sets of installation templates on the hub cluster.
- You create a ClusterInstance custom resource that references those installation templates and supporting manifests, as shown in the abridged sketch after this list.
- After the resources are created, the SiteConfig operator reconciles the ClusterInstance custom resource by populating the templated fields that are referenced in the custom resource.
- The SiteConfig operator validates and renders the installation manifests, then performs a dry run.
- If the dry run is successful, the manifests are created, then the underlying operators consume and process the manifests.
- The installation begins.
- The SiteConfig operator continuously monitors for changes in the associated ClusterDeployment resource and updates the status field of the ClusterInstance custom resource accordingly.
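The following is a minimal, abridged ClusterInstance sketch that illustrates this flow. The template names match the default template ConfigMaps that are described later in this chapter, the rhacm namespace is assumed to be the namespace where the operator is installed, and all other values are placeholders. A complete example appears in the installation section.

apiVersion: siteconfig.open-cluster-management.io/v1alpha1
kind: ClusterInstance
metadata:
  name: example-cluster
  namespace: example-cluster
spec:
  clusterName: example-cluster   # cluster definition data
  templateRefs:                  # cluster-level installation templates
    - name: ai-cluster-templates-v1
      namespace: rhacm
  nodes:
    - hostName: node1.example.com
      templateRefs:              # node-level installation templates
        - name: ai-node-templates-v1
          namespace: rhacm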
2.2. Installation templates overview
Installation templates are data-driven templates that are used to generate the set of installation artifacts. These templates follow the Golang text/template format, and are instantiated by using data from the ClusterInstance custom resource. This enables dynamic creation of installation manifests for each target cluster that has similar configurations, but with different values.
You can also create multiple sets of templates for different installation methods or cluster topologies. The SiteConfig operator supports the following types of installation templates:
- Cluster-level: Templates that must reference only cluster-specific fields.
- Node-level: Templates that can reference both cluster-specific and node-specific fields.
For more information about installation templates, see the following documentation:
2.2.1. Template functions
You can customize the templated fields. The SiteConfig operator supports all functions from the Sprig template library.
Additionally, the ClusterInstance API provides the following function that you can use while creating your custom manifests:
- toYaml: The toYaml function encodes an item into a YAML string. If the item cannot be converted to YAML, the function returns an empty string.
See the following example that applies the toYaml function to the ClusterInstance .Spec.Proxy field:
{{ if .Spec.Proxy }}
proxy:
{{ .Spec.Proxy | toYaml | indent 4 }}
{{ end }}
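For illustration only, assume the proxy field of the ClusterInstance contains a hypothetical httpProxy and noProxy value. The snippet then renders output similar to the following, with the encoded YAML shifted by four spaces so that it nests under the proxy key:

proxy:
    httpProxy: http://proxy.example.com:3128
    noProxy: localhost,127.0.0.1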
2.2.2. Default set of templates
The SiteConfig operator provides the following default, validated, and immutable set of templates in the same namespace in which the operator is installed:
| Installation method | Template type | File name | Template content |
|---|---|---|---|
| Assisted Installer | Cluster-level templates | ai-cluster-templates-v1 | |
| | Node-level templates | ai-node-templates-v1 | |
| Image-based Install Operator | Cluster-level templates | ibi-cluster-templates-v1 | |
| | Node-level templates | ibi-node-templates-v1 | |
For more information about the ClusterInstance API, see ClusterInstance API.
2.2.3. Special template variables
The SiteConfig operator provides a set of special template variables that you can use in your templates. See the following list:
- CurrentNode: The SiteConfig operator explicitly controls the iteration of the node objects and exposes this variable to access all the content for the current node being handled in templating.
- InstallConfigOverrides: Contains the merged networkType, cpuPartitioningMode, and installConfigOverrides content.
- ControlPlaneAgents: Consists of the number of control plane agents and is automatically derived from the ClusterInstance node objects.
- WorkerAgents: Consists of the number of worker agents and is automatically derived from the ClusterInstance node objects.
Capitalize the field name in the text template to create a custom templated field.
For example, the ClusterInstance spec field is referenced with the .Spec prefix. However, you must reference special variable fields with the .SpecialVars prefix.
Important: Instead of using the .Spec.Nodes prefix for the spec.nodes field, you must reference it with the .SpecialVars.CurrentNode special template variable.
For example, if you want to specify the name and namespace for your current node by using the CurrentNode special template variable, use the field names in the following form:
name: "{{ .SpecialVars.CurrentNode.HostName }}"
namespace: "{{ .Spec.ClusterName }}"
2.2.4. Customization of the manifests order
You can control the order in which manifests are created, updated, and deleted by using the siteconfig.open-cluster-management.io/sync-wave annotation. The annotation takes an integer as a value, and that integer represents a wave.
You can add one or several manifests to a single wave. If you do not specify a value, the annotation takes the default value of 0.
The SiteConfig operator reconciles the manifests in ascending order when creating or updating resources and it deletes resources in descending order.
In the following example, if the SiteConfig operator creates or updates the manifests, the AgentClusterInstall and ClusterDeployment custom resources are reconciled in the first wave, while KlusterletAddonConfig and ManagedCluster custom resources are reconciled in the third wave.
apiVersion: v1
data:
  AgentClusterInstall: |-
    ...
    siteconfig.open-cluster-management.io/sync-wave: "1"
    ...
  ClusterDeployment: |-
    ...
    siteconfig.open-cluster-management.io/sync-wave: "1"
    ...
  InfraEnv: |-
    ...
    siteconfig.open-cluster-management.io/sync-wave: "2"
    ...
  KlusterletAddonConfig: |-
    ...
    siteconfig.open-cluster-management.io/sync-wave: "3"
    ...
  ManagedCluster: |-
    ...
    siteconfig.open-cluster-management.io/sync-wave: "3"
    ...
kind: ConfigMap
metadata:
  name: assisted-installer-templates
  namespace: example-namespace
If the SiteConfig operator deletes the resources, KlusterletAddonConfig and ManagedCluster custom resources are the first to be deleted, while the AgentClusterInstall and ClusterDeployment custom resources are the last.
2.2.5. Configuration of additional annotations and labels
You can add annotations and labels to both cluster-level and node-level installation manifests by using the extraAnnotations and extraLabels fields in the ClusterInstance API. The SiteConfig operator applies your additional annotations and labels to the manifests that you specify in the ClusterInstance resource.
When creating your additional annotations and labels, you must specify a manifest type to allow the SiteConfig operator to apply them to all the matching manifests. However, the annotations and labels are arbitrary and you can set any key and value pairs that are meaningful to your applications.
Note: The additional annotations and labels are only applied to the resources that were rendered through the referenced templates.
View the following example application of extraAnnotations and extraLabels:
apiVersion: siteconfig.open-cluster-management.io/v1alpha1
kind: ClusterInstance
metadata:
  name: "example-sno"
  namespace: "example-sno"
spec:
  [...]
  clusterName: "example-sno"
  extraAnnotations:
    ClusterDeployment:
      myClusterAnnotation: success
  extraLabels:
    ManagedCluster:
      common: "true"
      group-du: ""
  nodes:
    - hostName: "example-sno.example.redhat.com"
      role: "master"
      extraAnnotations:
        BareMetalHost:
          myNodeAnnotation: success
      extraLabels:
        BareMetalHost:
          "testExtraLabel": "success"
- The cluster-level extraAnnotations and extraLabels fields support annotations and labels that the SiteConfig operator applies to the ManagedCluster and ClusterDeployment manifests.
- The node-level extraAnnotations and extraLabels fields support annotations and labels that the SiteConfig operator applies to the BareMetalHost manifest.
- The extraAnnotations entry in this BareMetalHost example is myNodeAnnotation.
- The extraLabels entry in this BareMetalHost example is testExtraLabel.

You can verify that your additional labels are applied by running the following command:

oc get managedclusters example-sno -ojsonpath='{.metadata.labels}' | jq

View the following example of applied labels:

{
  "common": "true",
  "group-du": "",
  ...
}

You can verify that your additional annotations are applied by running the following command:

oc get bmh example-sno.example.redhat.com -n example-sno -ojsonpath='{.metadata.annotations}' | jq

View the following example of applied annotations:

{
  "myNodeAnnotation": "success",
  ...
}
2.2.6. Permissible changes after provisioning
You cannot make changes to your cluster configuration while provisioning is in progress. However, after a cluster is provisioned, you can modify the following fields:
- spec.extraAnnotations
- spec.extraLabels
- spec.suppressedManifests
- spec.pruneManifests
- spec.clusterImageSetNameRef
- spec.nodes.<node-id>.extraAnnotations
- spec.nodes.<node-id>.extraLabels
- spec.nodes.<node-id>.suppressedManifests
- spec.nodes.<node-id>.pruneManifests
Note: <node-id> represents the updated NodeSpec object.
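For example, the following abridged ClusterInstance snippet adds a cluster-level label after provisioning. The environment label and its value are hypothetical:

spec:
  [...]
  extraLabels:
    ManagedCluster:
      common: "true"
      group-du: ""
      environment: "production"   # hypothetical label added after provisioning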
2.3. Enabling the SiteConfig operator
Enable the SiteConfig operator to use the default installation templates and install single-node OpenShift clusters at scale.
Required access: Cluster administrator
2.3.1. Prerequisites
- You need a Red Hat Advanced Cluster Management hub cluster.
2.3.2. Enabling the SiteConfig operator from the MultiClusterHub resource
Patch the MultiClusterHub resource, then verify that the SiteConfig operator is enabled. Complete the following procedure:
Set an environment variable that matches the namespace of the MultiClusterHub operator by running the following command:

export MCH_NAMESPACE=<namespace>

Set the enabled field to true in the siteconfig entry of spec.overrides.components in the MultiClusterHub resource by running the following command:

oc patch multiclusterhubs.operator.open-cluster-management.io multiclusterhub -n ${MCH_NAMESPACE} --type json --patch '[{"op": "add", "path":"/spec/overrides/components/-", "value": {"name":"siteconfig","enabled": true}}]'

Verify that the SiteConfig operator is enabled by running the following command on the hub cluster:

oc -n ${MCH_NAMESPACE} get po | grep siteconfig

See the following example output:

siteconfig-controller-manager-6fdd86cc64-sdg87   2/2     Running   0          43s

Optional: Verify that you have the default installation templates by running the following command on the hub cluster:

oc -n ${MCH_NAMESPACE} get cm

See the following list of templates in the example output:

NAME                       DATA   AGE
ai-cluster-templates-v1    5      97s
ai-node-templates-v1       2      97s
...
ibi-cluster-templates-v1   3      97s
ibi-node-templates-v1      3      97s
...
2.4. Installing single-node OpenShift clusters with the SiteConfig operator
Install your clusters with the SiteConfig operator by using the default installation templates. Use the installation templates for the Image-Based Install Operator to complete the procedure.
Required access: Cluster administrator
2.4.1. Prerequisites
- If you are using GitOps ZTP, configure your GitOps ZTP environment. To configure your environment, see Preparing the hub cluster for GitOps ZTP.
- You have the default installation templates. To get familiar with the default templates, see Default set of templates.
- Install and configure the underlying operator of your choice:
  - To learn about and install the Image Based Install Operator for single-node OpenShift, see Image Based Install Operator.
  - To install the Assisted Installer, see Installing an on-premise cluster with the Assisted Installer.
Complete the following steps to install a cluster with the SiteConfig operator:
2.4.2. Creating the target namespace
You need a target namespace when you create the pull secret, the BMC secret, extra manifest ConfigMap objects, and the ClusterInstance custom resource.
Complete the following steps to create the target namespace:
Create a YAML file for the target namespace. See the following example file that is named clusterinstance-namespace.yaml:

apiVersion: v1
kind: Namespace
metadata:
  name: example-sno

Apply your file to create the resource. Run the following command on the hub cluster:
oc apply -f clusterinstance-namespace.yaml
2.4.3. Creating the pull secret
You need a pull secret to enable your clusters to pull images from container registries. Complete the following steps to create a pull secret:
Create a YAML file to pull images. See the following example of a file that is named pull-secret.yaml:

apiVersion: v1
kind: Secret
metadata:
  name: pull-secret
  namespace: example-sno
data:
  .dockerconfigjson: <encoded_docker_configuration>
type: kubernetes.io/dockerconfigjson

Apply the file to create the resource. Run the following command on the hub cluster:
oc apply -f pull-secret.yaml
2.4.4. Creating the BMC secret
You need a secret to connect to your baseboard management controller (BMC). Complete the following steps to create a secret:
Create a YAML file for the BMC secret. See the following sample file that is named example-bmc-secret.yaml:

apiVersion: v1
data:
  password: <password>
  username: <username>
kind: Secret
metadata:
  name: example-bmh-secret
  namespace: "example-sno" 1
type: Opaque

- 1: Ensure that the namespace value matches the target namespace.
Apply the file to create the resource. Run the following command on the hub cluster:
oc apply -f example-bmc-secret.yaml
2.4.5. Optional: Creating the extra manifests
You can create extra manifests that you need to reference in the ClusterInstance custom resource. Complete the following steps to create an extra manifest:
Create a YAML file for an extra manifest ConfigMap object, for example named enable-crun.yaml:

apiVersion: v1
kind: ConfigMap
metadata:
  name: enable-crun
  namespace: example-sno 1
data:
  enable-crun-master.yaml: |
    apiVersion: machineconfiguration.openshift.io/v1
    kind: ContainerRuntimeConfig
    metadata:
      name: enable-crun-master
    spec:
      machineConfigPoolSelector:
        matchLabels:
          pools.operator.machineconfiguration.openshift.io/master: ""
      containerRuntimeConfig:
        defaultRuntime: crun
  enable-crun-worker.yaml: |
    apiVersion: machineconfiguration.openshift.io/v1
    kind: ContainerRuntimeConfig
    metadata:
      name: enable-crun-worker
    spec:
      machineConfigPoolSelector:
        matchLabels:
          pools.operator.machineconfiguration.openshift.io/worker: ""
      containerRuntimeConfig:
        defaultRuntime: crun

- 1: Ensure that the namespace value matches the target namespace.
Create the resource by running the following command on the hub cluster:
oc apply -f enable-crun.yaml
2.4.6. Rendering the installation manifests
Reference the templates and supporting manifests in the ClusterInstance custom resource. Complete the following steps to render the installation manifests by using the default cluster and node templates:
In the example-sno namespace, create the ClusterInstance custom resource that is named clusterinstance-ibi.yaml in the following example:

apiVersion: siteconfig.open-cluster-management.io/v1alpha1
kind: ClusterInstance
metadata:
  name: "example-clusterinstance"
  namespace: "example-sno" 1
spec:
  #clusterType: "SNO" 2
  holdInstallation: false
  extraManifestsRefs: 3
    - name: extra-machine-configs
    - name: enable-crun
  pullSecretRef:
    name: "pull-secret" 4
  [...]
  clusterName: "example-sno" 5
  [...]
  clusterImageSetNameRef: "img4.17-x86-64"
  [...]
  templateRefs: 6
    - name: ibi-cluster-templates-v1
      namespace: rhacm
  [...]
  nodes:
    [...]
    bmcCredentialsName: 7
      name: "example-bmh-secret"
    [...]
    templateRefs: 8
      - name: ibi-node-templates-v1
        namespace: rhacm
    [...]

- 1: Ensure that the namespace in the ClusterInstance custom resource matches the target namespace that you defined.
- 2: Optional: If you want to scale out or scale in your single-node OpenShift clusters, you must set the spec.clusterType field to "SNO".
- 3: Reference the name of one or more extra manifests ConfigMap objects.
- 4: Reference the name of your pull secret.
- 5: Ensure that the value of the clusterName field in the ClusterInstance custom resource matches the value of the namespace field.
- 6: Reference the name of the cluster-level templates in the spec.templateRefs field. If you are using a default installation template, the namespace must match the namespace where the operator is installed.
- 7: Reference the name of the BMC secret.
- 8: Reference the name of the node-level templates in the spec.nodes.templateRefs field. If you are using a default installation template, the namespace must match the namespace where the operator is installed.
Apply the file and create the resource by running the following command:

oc apply -f clusterinstance-ibi.yaml

After you create the custom resource, the SiteConfig operator starts reconciling the ClusterInstance custom resource, then validates and renders the installation manifests.

The SiteConfig operator continues to monitor for changes in the ClusterDeployment custom resources to update the cluster installation progress of the corresponding ClusterInstance custom resource.

Monitor the process by running the following command:

oc get clusterinstance <cluster_name> -n <target_namespace> -o yaml

See the following example output from the status.conditions section for successful manifest generation:

message: Applied site config manifests
reason: Completed
status: "True"
type: RenderedTemplatesApplied

Check the manifests that the SiteConfig operator rendered by running the following command:
oc get clusterinstance <cluster_name> -n <target_namespace> -o jsonpath='{.status.manifestsRendered}'
For more information about status conditions, see ClusterInstance API.
2.5. Deprovisioning single-node OpenShift clusters with the SiteConfig operator
Deprovision your clusters with the SiteConfig operator to delete all resources and access that is associated with the cluster.
Required access: Cluster administrator
2.5.1. Prerequisites
- You have deployed your clusters with the SiteConfig operator by using the default installation templates.
2.5.2. Deprovisioning single-node OpenShift clusters
Complete the following steps to delete your clusters:
Delete the ClusterInstance custom resource by running the following command:

oc delete clusterinstance <cluster_name> -n <target_namespace>

Verify that the deletion was successful by running the following command:
oc get clusterinstance <cluster_name> -n <target_namespace>
See the following example output where the (NotFound) error indicates that your cluster is deprovisioned.
Error from server (NotFound): clusterinstances.siteconfig.open-cluster-management.io "<cluster_name>" not found
2.6. Image Based Install Operator
Install the Image Based Install Operator so that you can complete and manage image-based cluster installations by using the same APIs as existing installation methods.
For more information about the Image Based Install Operator, and to learn how to enable it, see Image-based installations for single-node OpenShift.
2.7. SiteConfig advanced topics
The SiteConfig operator provides additional functionality, such as creating custom templates or scaling worker nodes, that expands the standard operations that apply to most use cases. See the following documentation for advanced topics of the SiteConfig operator:
2.7.1. Creating custom templates with the SiteConfig operator
Create user-defined templates that are not provided in the default set of templates.
Required access: Cluster administrator
Complete the following steps to create a custom template:
Create a YAML file named my-custom-secret.yaml that contains the cluster-level template in a ConfigMap object:

apiVersion: v1
kind: ConfigMap
metadata:
  name: my-custom-secret
  namespace: rhacm
data:
  MySecret: |-
    apiVersion: v1
    kind: Secret
    metadata:
      name: "{{ .Spec.ClusterName }}-my-custom-secret-key"
      namespace: "clusters"
      annotations:
        siteconfig.open-cluster-management.io/sync-wave: "1" 1
    type: Opaque
    data:
      key: <key>

- 1: The siteconfig.open-cluster-management.io/sync-wave annotation controls the order in which manifests are created, updated, or deleted.
Apply the custom template on the hub cluster by running the following command:

oc apply -f my-custom-secret.yaml

Reference your template in the ClusterInstance custom resource named clusterinstance-my-custom-secret.yaml:

spec:
  ...
  templateRefs:
    - name: ai-cluster-templates-v1.yaml
      namespace: rhacm
    - name: my-custom-secret.yaml
      namespace: rhacm
  ...

Apply the ClusterInstance custom resource by running the following command:

oc apply -f clusterinstance-my-custom-secret.yaml
2.7.2. Scaling in a single-node OpenShift cluster with the SiteConfig operator
Scale in your managed cluster that was installed by the SiteConfig operator. You can scale in your cluster by removing a worker node.
Required access: Cluster administrator
2.7.2.1. Prerequisites
- If you are using GitOps ZTP, you have configured your GitOps ZTP environment. To configure your environment, see Preparing the hub cluster for GitOps ZTP.
- You have the default templates. To get familiar with the default templates, see Default set of templates.
- You have installed your cluster with the SiteConfig operator. To install a cluster with the SiteConfig operator, see Installing single-node OpenShift clusters with the SiteConfig operator.
- You have set the spec.clusterType field to "SNO".
2.7.2.2. Adding an annotation to your worker node
Add an annotation to the worker node that you want to remove.
Complete the following steps to annotate the worker node of the managed cluster:
Add an annotation in the extraAnnotations field of the worker node entry in the ClusterInstance custom resource that is used to provision your cluster:

spec:
  ...
  nodes:
    - hostName: "worker-node2.example.com"
      role: "worker"
      ironicInspect: ""
      extraAnnotations:
        BareMetalHost:
          bmac.agent-install.openshift.io/remove-agent-and-node-on-delete: "true"
  ...

Apply the changes. See the following options:

- If you are using Red Hat Advanced Cluster Management without Red Hat OpenShift GitOps, run the following command on the hub cluster:

  oc apply -f <clusterinstance>.yaml

- If you are using GitOps ZTP, push to your Git repository and wait for Argo CD to synchronize the changes.
Verify that the annotation is applied to the BareMetalHost worker resource by running the following command on the hub cluster:

oc get bmh -n <clusterinstance_namespace> worker-node2.example.com -ojsonpath='{.metadata.annotations}' | jq

See the following example output for successful application of the annotation:

{
  "baremetalhost.metal3.io/detached": "assisted-service-controller",
  "bmac.agent-install.openshift.io/hostname": "worker-node2.example.com",
  "bmac.agent-install.openshift.io/remove-agent-and-node-on-delete": "true",
  "bmac.agent-install.openshift.io/role": "master",
  "inspect.metal3.io": "disabled",
  "siteconfig.open-cluster-management.io/sync-wave": "1"
}
2.7.2.3. Deleting the BareMetalHost resource of the worker node
Delete the BareMetalHost resource of the worker node that you want to remove.
Complete the following steps to remove a worker node from the managed cluster:
Update the node object that you want to delete in your existing ClusterInstance custom resource with the following configuration:

...
spec:
  ...
  nodes:
    - hostName: "worker-node2.example.com"
      ...
      pruneManifests:
        - apiVersion: metal3.io/v1alpha1
          kind: BareMetalHost
        - apiVersion: agent-install.openshift.io/v1beta1
          kind: InfraEnv
        - apiVersion: agent-install.openshift.io/v1beta1
          kind: NMStateConfig
  ...

Apply the changes. See the following options:

- If you are using Red Hat Advanced Cluster Management without Red Hat OpenShift GitOps, run the following command on the hub cluster:

  oc apply -f <clusterinstance>.yaml

- If you are using GitOps ZTP, push to your Git repository and wait for Argo CD to synchronize the changes.
Verify that the BareMetalHost resources are removed by running the following command on the hub cluster:

oc get bmh -n <clusterinstance_namespace> --watch --kubeconfig <hub_cluster_kubeconfig_filename>

See the following example output:

NAME                       STATE                        CONSUMER   ONLINE   ERROR   AGE
master-node1.example.com   provisioned                             true             81m
worker-node2.example.com   deprovisioning                          true             44m
worker-node2.example.com   powering off before delete              true             20h
worker-node2.example.com   deleting                                true             50m

Verify that the Agent resources are removed by running the following command on the hub cluster:

oc get agents -n <clusterinstance_namespace> --kubeconfig <hub_cluster_kubeconfig_filename>

See the following example output:

NAME                       CLUSTER                  APPROVED   ROLE     STAGE
master-node1.example.com   <managed_cluster_name>   true       master   Done
master-node2.example.com   <managed_cluster_name>   true       master   Done
master-node3.example.com   <managed_cluster_name>   true       master   Done
worker-node1.example.com   <managed_cluster_name>   true       worker   Done

Verify that the Node resources are removed by running the following command on the managed cluster:

oc get nodes --kubeconfig <managed_cluster_kubeconfig_filename>

See the following example output:

NAME                       STATUS                        ROLES                  AGE   VERSION
worker-node2.example.com   NotReady,SchedulingDisabled   worker                 19h   v1.30.5
worker-node1.example.com   Ready                         worker                 19h   v1.30.5
master-node1.example.com   Ready                         control-plane,master   19h   v1.30.5
master-node2.example.com   Ready                         control-plane,master   19h   v1.30.5
master-node3.example.com   Ready                         control-plane,master   19h   v1.30.5
After the BareMetalHost object of the worker node is successfully deleted, remove the associated worker node definition from the spec.nodes section in the ClusterInstance resource.
2.7.3. Scaling out a single-node OpenShift cluster with the SiteConfig operator
Scale out your managed cluster that was installed by the SiteConfig operator. You can scale out your cluster by adding a worker node.
Required access: Cluster administrator
2.7.3.1. Prerequisites
- If you are using GitOps ZTP, you have configured your GitOps ZTP environment. To configure your environment, see Preparing the hub cluster for GitOps ZTP.
- You have the default installation templates. To get familiar with the default templates, see Default set of templates.
- You have installed your cluster with the SiteConfig operator. To install a cluster with the SiteConfig operator, see Installing single-node OpenShift clusters with the SiteConfig operator.
- You have set the spec.clusterType field to "SNO".
2.7.3.2. Adding a worker node
Add a worker node by updating your ClusterInstance custom resource that is used to provision your cluster.
Complete the following steps to add a worker node to the managed cluster:
Define a new node object in the existing ClusterInstance custom resource:

spec:
  ...
  nodes:
    - hostName: "<host_name>"
      role: "worker"
      templateRefs:
        - name: ai-node-templates-v1
          namespace: rhacm
      bmcAddress: "<bmc_address>"
      bmcCredentialsName:
        name: "<bmc_credentials_name>"
      bootMACAddress: "<boot_mac_address>"
  ...

Apply the changes. See the following options:

- If you are using Red Hat Advanced Cluster Management without Red Hat OpenShift GitOps, run the following command on the hub cluster:

  oc apply -f <clusterinstance>.yaml

- If you are using GitOps ZTP, push to your Git repository and wait for Argo CD to synchronize the changes.
Verify that a new BareMetalHost resource is added by running the following command on the hub cluster:

oc get bmh -n <clusterinstance_namespace> --watch --kubeconfig <hub_cluster_kubeconfig_filename>

See the following example output:

NAME                       STATE          CONSUMER   ONLINE   ERROR   AGE
master-node1.example.com   provisioned               true             81m
worker-node2.example.com   provisioning              true             44m

Verify that a new Agent resource is added by running the following command on the hub cluster:

oc get agents -n <clusterinstance_namespace> --kubeconfig <hub_cluster_kubeconfig_filename>

See the following example output:

NAME                       CLUSTER                  APPROVED   ROLE     STAGE
master-node1.example.com   <managed_cluster_name>   true       master   Done
master-node2.example.com   <managed_cluster_name>   true       master   Done
master-node3.example.com   <managed_cluster_name>   true       master   Done
worker-node1.example.com   <managed_cluster_name>   false      worker
worker-node2.example.com   <managed_cluster_name>   true       worker   Starting installation
worker-node2.example.com   <managed_cluster_name>   true       worker   Installing
worker-node2.example.com   <managed_cluster_name>   true       worker   Writing image to disk
worker-node2.example.com   <managed_cluster_name>   true       worker   Waiting for control plane
worker-node2.example.com   <managed_cluster_name>   true       worker   Rebooting
worker-node2.example.com   <managed_cluster_name>   true       worker   Joined
worker-node2.example.com   <managed_cluster_name>   true       worker   Done

Verify that a new Node resource is added by running the following command on the managed cluster:

oc get nodes --kubeconfig <managed_cluster_kubeconfig_filename>

See the following example output:

NAME                       STATUS   ROLES                  AGE   VERSION
worker-node2.example.com   Ready    worker                 1h    v1.30.5
worker-node1.example.com   Ready    worker                 19h   v1.30.5
master-node1.example.com   Ready    control-plane,master   19h   v1.30.5
master-node2.example.com   Ready    control-plane,master   19h   v1.30.5
master-node3.example.com   Ready    control-plane,master   19h   v1.30.5
2.7.4. Mirroring images for disconnected environments
You can deploy a cluster with the SiteConfig operator by using the Image Based Install Operator as your underlying operator. If you deploy your clusters with the Image Based Install Operator in a disconnected environment, you must supply your mirror images as extra manifests in the ClusterInstance custom resource.
Required access: Cluster administrator
Complete the following steps to mirror images for disconnected environments:
Create a YAML file named idms-configmap.yaml for your ImageDigestMirrorSet object that contains your mirror registry locations:

kind: ConfigMap
apiVersion: v1
metadata:
  name: "idms-configmap"
  namespace: "example-sno"
data:
  99-example-idms.yaml: |
    apiVersion: config.openshift.io/v1
    kind: ImageDigestMirrorSet
    metadata:
      name: example-idms
    spec:
      imageDigestMirrors:
        - mirrors:
            - mirror.registry.example.com/image-repo/image
          source: registry.example.com/image-repo/image
Important: Define the ConfigMap resource that contains the extra manifest in the same namespace as the ClusterInstance resource.
Create the resource by running the following command on the hub cluster:

oc apply -f idms-configmap.yaml

Reference your ImageDigestMirrorSet object in the ClusterInstance custom resource:

apiVersion: siteconfig.open-cluster-management.io/v1alpha1
kind: ClusterInstance
metadata:
  name: "example-sno"
  namespace: "example-sno"
spec:
  ...
  extraManifestsRefs:
    - name: idms-configmap
  ...