Chapter 2. SiteConfig
The SiteConfig operator offers a template-driven cluster provisioning solution with a unified ClusterInstance API, which comes from the SiteConfig API of the SiteConfig generator kustomize plugin.
To learn more about how to use SiteConfig operator, see the following documentation:
For advanced topics, see the following documentation:
2.1. About the SiteConfig operator
The SiteConfig operator offers a template-driven cluster provisioning solution, which allows you to provision clusters with various installation methods.
The SiteConfig operator introduces the unified ClusterInstance API, which comes from the SiteConfig API of the SiteConfig generator kustomize plugin.
The ClusterInstance API decouples the parameters that define a cluster from the manner in which the cluster is deployed.
This separation removes certain limitations that are presented by the SiteConfig kustomize plugin in the current GitOps Zero Touch Provisioning (ZTP) flow, such as agent cluster installations and scalability constraints that are related to Argo CD.
Using the unified ClusterInstance API, the SiteConfig operator offers the following improvements:
- Isolation
- Separates the cluster definition from the installation method. The ClusterInstance custom resource captures the cluster definition, while installation templates capture the cluster architecture and installation methods.
- Unification
- The SiteConfig operator unifies both Git and non-Git workflows. You can apply the ClusterInstance custom resource directly on the hub cluster, or synchronize resources through a GitOps solution, such as Argo CD.
- Consistency
- Maintains a consistent API across installation methods, whether you are using the Assisted Installer, the Image Based Install Operator, or any other custom template-based approach.
- Scalability
- Achieves greater scalability for each cluster than the SiteConfig kustomize plugin.
- Flexibility
- Provides you with more power to deploy and install clusters by using custom templates.
- Troubleshooting
- Offers insightful information regarding cluster deployment status and rendered manifests, significantly enhancing the troubleshooting experience.
For more information about the Image Based Install Operator, see Image Based Install Operator.
For more information about the Assisted Installer, see Installing an on-premise cluster using the Assisted Installer.
2.1.1. The SiteConfig operator flow
The SiteConfig operator dynamically generates installation manifests based on user-defined templates that are instantiated from the data in the ClusterInstance custom resource.
You can source the ClusterInstance custom resource from your Git repository through Argo CD, or you can create it directly on the hub cluster manually or through external tools and workflows.
The following is a high-level overview of the process:
- You create one or more sets of installation templates on the hub cluster.
- You create a ClusterInstance custom resource that references those installation templates and supporting manifests.
- After the resources are created, the SiteConfig operator reconciles the ClusterInstance custom resource by populating the templated fields that are referenced in the custom resource.
- The SiteConfig operator validates and renders the installation manifests, then the Operator performs a dry run.
- If the dry run is successful, the manifests are created, then the underlying Operators consume and process the manifests.
- The installation begins.
- The SiteConfig operator continuously monitors for changes in the associated ClusterDeployment resource and updates the ClusterInstance custom resource's status field accordingly.
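The following condensed sketch shows how these pieces fit together. The template names match the default Assisted Installer template ConfigMap objects described later in this chapter; the other values are placeholders for illustration, and most required fields are omitted. A complete example is shown in Installing single-node OpenShift clusters with the SiteConfig operator.
apiVersion: siteconfig.open-cluster-management.io/v1alpha1
kind: ClusterInstance
metadata:
  name: "example-cluster"
  namespace: "example-cluster"
spec:
  clusterName: "example-cluster"
  pullSecretRef:
    name: "pull-secret"
  templateRefs:                     # cluster-level installation templates
    - name: ai-cluster-templates-v1
      namespace: rhacm
  nodes:
    - hostName: "node1.example.com"
      role: "master"
      templateRefs:                 # node-level installation templates
        - name: ai-node-templates-v1
          namespace: rhacm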
2.2. Installation templates overview
Installation templates are data-driven templates that are used to generate the set of installation artifacts. These templates follow the Golang text/template format, and are instantiated by using data from the ClusterInstance custom resource. This enables dynamic creation of installation manifests for each target cluster that has similar configurations, but with different values.
You can also create multiple sets based on the different installation methods or cluster topologies. The SiteConfig operator supports the following types of installation templates:
- Cluster-level
- Templates that must reference only cluster-specific fields.
- Node-level
- Templates that can reference both cluster-specific and node-specific fields.
For more information about installation templates, see the following documentation:
2.2.1. Template functions
You can customize the templated fields. The SiteConfig operator supports all sprig library functions.
Additionally, the ClusterInstance API provides the following function that you can use while creating your custom manifests:
toYaml
- The toYaml function encodes an item into a YAML string. If the item cannot be converted to YAML, the function returns an empty string.
See the following example of the .toYaml specification in the ClusterInstance.Spec.Proxy field:
{{ if .Spec.Proxy }}
  proxy:
{{ .Spec.Proxy | toYaml | indent 4 }}
{{ end }}
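For illustration, assuming that the ClusterInstance proxy configuration sets only the httpProxy and noProxy fields, the rendered output of this snippet might look similar to the following. The proxy values are placeholders:
  proxy:
    httpProxy: http://proxy.example.com:3128
    noProxy: 127.0.0.1,localhost,.cluster.local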
2.2.2. Default set of templates
The SiteConfig operator provides the following default, validated, and immutable set of templates in the same namespace in which the operator is installed:
Installation method | Template type | ConfigMap name
---|---|---
Assisted Installer | Cluster-level templates | ai-cluster-templates-v1
Assisted Installer | Node-level templates | ai-node-templates-v1
Image Based Install Operator | Cluster-level templates | ibi-cluster-templates-v1
Image Based Install Operator | Node-level templates | ibi-node-templates-v1
For more information about the ClusterInstance API, see ClusterInstance API.
2.2.3. Special template variables
The SiteConfig operator provides a set of special template variables that you can use in your templates. See the following list:
CurrentNode
- The SiteConfig operator explicitly controls the iteration of the node objects and exposes this variable to access all the content for the current node being handled in templating.
InstallConfigOverrides
- Contains the merged networkType, cpuPartitioningMode, and installConfigOverrides content.
ControlPlaneAgents
- Consists of the number of control plane agents and it is automatically derived from the ClusterInstance node objects.
WorkerAgents
- Consists of the number of worker agents and it is automatically derived from the ClusterInstance node objects.
Capitalize the field name in the text template to create a custom templated field.
For example, the ClusterInstance spec field is referenced with the .Spec prefix. However, you must reference special variable fields with the .SpecialVars prefix.
Important: Instead of using the .Spec.Nodes prefix for the spec.nodes field, you must reference it with the .SpecialVars.CurrentNode special template variable.
For example, if you want to specify the name and namespace for your current node by using the CurrentNode special template variable, use the field names in the following form:
name: "{{ .SpecialVars.CurrentNode.HostName }}"
namespace: "{{ .Spec.ClusterName }}"
2.2.4. Customization of the manifests order
You can control the order in which manifests are created, updated, and deleted by using the siteconfig.open-cluster-management.io/sync-wave annotation. The annotation takes an integer as a value, and that integer constitutes a wave.
You can add one or several manifests to a single wave. If you do not specify a value, the annotation takes the default value of 0.
The SiteConfig operator reconciles the manifests in ascending order when creating or updating resources and it deletes resources in descending order.
In the following example, if the SiteConfig operator creates or updates the manifests, the AgentClusterInstall and ClusterDeployment custom resources are reconciled in the first wave, while the KlusterletAddonConfig and ManagedCluster custom resources are reconciled in the third wave.
apiVersion: v1
kind: ConfigMap
metadata:
  name: assisted-installer-templates
  namespace: example-namespace
data:
  AgentClusterInstall: |-
    ...
    siteconfig.open-cluster-management.io/sync-wave: "1"
    ...
  ClusterDeployment: |-
    ...
    siteconfig.open-cluster-management.io/sync-wave: "1"
    ...
  InfraEnv: |-
    ...
    siteconfig.open-cluster-management.io/sync-wave: "2"
    ...
  KlusterletAddonConfig: |-
    ...
    siteconfig.open-cluster-management.io/sync-wave: "3"
    ...
  ManagedCluster: |-
    ...
    siteconfig.open-cluster-management.io/sync-wave: "3"
    ...
If the SiteConfig operator deletes the resources, the KlusterletAddonConfig and ManagedCluster custom resources are the first to be deleted, while the AgentClusterInstall and ClusterDeployment custom resources are the last.
2.2.5. Configuration of additional annotations and labels
You can configure additional annotations and labels for both cluster-level and node-level installation manifests by using the extraAnnotations and extraLabels fields in the ClusterInstance API. The SiteConfig operator applies your additional annotations and labels to the manifests that you specify in the ClusterInstance resource.
When creating your additional annotations and labels, you must specify a manifest type to allow the SiteConfig operator to apply them to all the matching manifests. However, the annotations and labels are arbitrary and you can set any key and value pairs that are meaningful to your applications.
Note: The additional annotations and labels are only applied to the resources that were rendered through the referenced templates.
View the following example application of extraAnnotations and extraLabels:
Example application of extraAnnotations and extraLabels
apiVersion: siteconfig.open-cluster-management.io/v1alpha1
kind: ClusterInstance
metadata:
  name: "example-sno"
  namespace: "example-sno"
spec:
  [...]
  clusterName: "example-sno"
  extraAnnotations: 1
    ClusterDeployment:
      myClusterAnnotation: success
  extraLabels: 2
    ManagedCluster:
      common: "true"
      group-du: ""
  nodes:
    - hostName: "example-sno.example.redhat.com"
      role: "master"
      extraAnnotations: 3
        BareMetalHost:
          myNodeAnnotation: success
      extraLabels: 4
        BareMetalHost:
          "testExtraLabel": "success"
- 1 2
- This field supports cluster-level annotations and labels that the SiteConfig operator applies to the ManagedCluster and ClusterDeployment manifests.
- 3 4
- This field supports node-level annotations and labels that the SiteConfig operator applies to the BareMetalHost manifest.
You can verify that your additional labels are applied by running the following command:
oc get managedclusters example-sno -ojsonpath='{.metadata.labels}' | jq
View the following example of applied labels:
Example applied labels
{ "common": "true", "group-du": "", ... }
You can verify that your additional annotations are applied by running the following command:
oc get bmh example-sno.example.redhat.com -n example-sno -ojsonpath='{.metadata.annotations}' | jq
View the following example of applied annotations:
Example applied annotation
{ "myNodeAnnotation": "success", ... }
2.3. Enabling the SiteConfig operator
Enable the SiteConfig operator to use the default installation templates and install single-node OpenShift clusters at scale.
Required access: Cluster administrator
2.3.1. Prerequisites
- You need a Red Hat Advanced Cluster Management hub cluster.
2.3.2. Enabling the SiteConfig operator from the MultiClusterHub resource
Patch the MultiClusterHub resource, then verify that the SiteConfig operator is enabled. Complete the following procedure:
Set an environment variable that matches the namespace of the MultiClusterHub operator by running the following command:
export MCH_NAMESPACE=<namespace>
Set the enabled field to true in the siteconfig entry of spec.overrides.components in the MultiClusterHub resource by running the following command:
oc patch multiclusterhubs.operator.open-cluster-management.io multiclusterhub -n ${MCH_NAMESPACE} --type json --patch '[{"op": "add", "path":"/spec/overrides/components/-", "value": {"name":"siteconfig","enabled": true}}]'
Verify that the SiteConfig operator is enabled by running the following command on the hub cluster:
oc -n ${MCH_NAMESPACE} get po | grep siteconfig
See the following example output:
siteconfig-controller-manager-6fdd86cc64-sdg87 2/2 Running 0 43s
Optional: Verify that you have the default installation templates by running the following command on the hub cluster:
oc -n ${MCH_NAMESPACE} get cm
See the following list of templates in the output example:
NAME                       DATA   AGE
ai-cluster-templates-v1    5      97s
ai-node-templates-v1       2      97s
...
ibi-cluster-templates-v1   3      97s
ibi-node-templates-v1      3      97s
...
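If you want to review the content of one of the default template sets, you can inspect the corresponding ConfigMap object. For example, the following command displays the cluster-level templates for the Assisted Installer, assuming the ConfigMap names from the previous output:
oc -n ${MCH_NAMESPACE} get cm ai-cluster-templates-v1 -o yaml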
2.4. Image Based Install Operator
Install the Image Based Install Operator so that you can complete and manage image-based cluster installations by using the same APIs as existing installation methods.
For more information about the Image Based Install Operator, and to learn how to enable it, see Image-based installations for single-node OpenShift.
2.5. Installing single-node OpenShift clusters with the SiteConfig operator
Install your clusters with the SiteConfig operator by using the default installation templates. Use the installation templates for the Image-Based Install Operator to complete the procedure.
Required access: Cluster administrator
2.5.1. Prerequisites
- If you are using GitOps ZTP, configure your GitOps ZTP environment. To configure your environment, see Preparing the hub cluster for GitOps ZTP.
- You have the default installation templates. To get familiar with the default templates, see Default set of templates.
- You have installed and configured the underlying operator of your choice:
  - To learn about and install the Image Based Install Operator for single-node OpenShift, see Image Based Install Operator.
  - To install the Assisted Installer, see Installing an on-premise cluster with the Assisted Installer.
Complete the following steps to install a cluster with the SiteConfig operator:
2.5.2. Creating the target namespace
You need a target namespace when you create the pull secret, the BMC secret, extra manifest ConfigMap objects, and the ClusterInstance custom resource.
Complete the following steps to create the target namespace:
Create a YAML file for the target namespace. See the following example file that is named clusterinstance-namespace.yaml:
apiVersion: v1
kind: Namespace
metadata:
  name: example-sno
Apply your file to create the resource. Run the following command on the hub cluster:
oc apply -f clusterinstance-namespace.yaml
2.5.3. Creating the pull secret
You need a pull secret to enable your clusters to pull images from container registries. Complete the following steps to create a pull secret:
Create a YAML file to pull images. See the following example of a file that is named pull-secret.yaml:
apiVersion: v1
kind: Secret
metadata:
  name: pull-secret
  namespace: example-sno 1
data:
  .dockerconfigjson: <encoded_docker_configuration> 2
type: kubernetes.io/dockerconfigjson
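The <encoded_docker_configuration> placeholder expects the base64-encoded contents of your registry pull secret. For example, assuming that your pull secret is saved in a local file named pull-secret.json, you can generate the value by running the following command:
base64 -w 0 pull-secret.json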
Apply the file to create the resource. Run the following command on the hub cluster:
oc apply -f pull-secret.yaml
2.5.4. Creating the BMC secret
You need a secret to connect to your baseboard management controller (BMC). Complete the following steps to create a secret:
Create a YAML file for the BMC secret. See the following sample file that is named example-bmc-secret.yaml:
apiVersion: v1
kind: Secret
metadata:
  name: example-bmh-secret
  namespace: "example-sno" 1
data:
  password: <password>
  username: <username>
type: Opaque
- 1
- Ensure that the namespace value matches the target namespace.
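Because the username and password values are specified under the data field, they must be base64-encoded. For example, you can encode a value by running the following command:
echo -n "<username>" | base64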
Apply the file to create the resource. Run the following command on the hub cluster:
oc apply -f example-bmc-secret.yaml
2.5.5. Optional: Creating the extra manifests
You can create extra manifests that you need to reference in the ClusterInstance custom resource. Complete the following steps to create an extra manifest:
Create a YAML file for an extra manifest ConfigMap object, for example named enable-crun.yaml:
apiVersion: v1
kind: ConfigMap
metadata:
  name: enable-crun
  namespace: example-sno 1
data:
  enable-crun-master.yaml: |
    apiVersion: machineconfiguration.openshift.io/v1
    kind: ContainerRuntimeConfig
    metadata:
      name: enable-crun-master
    spec:
      machineConfigPoolSelector:
        matchLabels:
          pools.operator.machineconfiguration.openshift.io/master: ""
      containerRuntimeConfig:
        defaultRuntime: crun
  enable-crun-worker.yaml: |
    apiVersion: machineconfiguration.openshift.io/v1
    kind: ContainerRuntimeConfig
    metadata:
      name: enable-crun-worker
    spec:
      machineConfigPoolSelector:
        matchLabels:
          pools.operator.machineconfiguration.openshift.io/worker: ""
      containerRuntimeConfig:
        defaultRuntime: crun
- 1
- Ensure that the namespace value matches the target namespace.
Create the resource by running the following command on the hub cluster:
oc apply -f enable-crun.yaml
2.5.6. Rendering the installation manifests
Reference the templates and supporting manifests in the ClusterInstance custom resource. Complete the following steps to render the installation manifests by using the default cluster and node templates:
In the example-sno namespace, create a ClusterInstance custom resource in a file that is named clusterinstance-ibi.yaml, as in the following example:
apiVersion: siteconfig.open-cluster-management.io/v1alpha1
kind: ClusterInstance
metadata:
  name: "example-clusterinstance"
  namespace: "example-sno" 1
spec:
  holdInstallation: false
  extraManifestsRefs: 2
    - name: extra-machine-configs
    - name: enable-crun
  pullSecretRef:
    name: "pull-secret" 3
  [...]
  clusterName: "example-sno" 4
  [...]
  clusterImageSetNameRef: "img4.17-x86-64"
  [...]
  templateRefs: 5
    - name: ibi-cluster-templates-v1
      namespace: rhacm
  [...]
  nodes:
    [...]
      bmcCredentialsName: 6
        name: "example-bmh-secret"
      [...]
      templateRefs: 7
        - name: ibi-node-templates-v1
          namespace: rhacm
  [...]
- 1
- Ensure that the namespace in the ClusterInstance custom resource matches the target namespace that you defined.
- 2
- Reference the name of one or more extra manifests ConfigMap objects.
- 3
- Reference the name of your pull secret.
- 4
- Ensure that the value of the clusterName field in the ClusterInstance custom resource matches the value of the namespace field.
- 5
- Reference the name of the cluster-level templates in the spec.templateRefs field. If you are using a default installation template, the namespace must match the namespace where the Operator is installed.
- 6
- Reference the name of the BMC secret.
- 7
- Reference the name of the node-level templates in the spec.nodes.templateRefs field. If you are using a default installation template, the namespace must match the namespace where the Operator is installed.
Apply the file and create the resource by running the following command:
oc apply -f clusterinstance-ibi.yaml
After you create the custom resource, the SiteConfig operator starts reconciling the ClusterInstance custom resource, then validates and renders the installation manifests.
The SiteConfig operator continues to monitor for changes in the ClusterDeployment custom resources to update the cluster installation progress of the corresponding ClusterInstance custom resource.
Monitor the process by running the following command:
oc get clusterinstance <cluster_name> -n <target_namespace> -o yaml
See the following example output from the status.conditions section for successful manifest generation:
message: Applied site config manifests
reason: Completed
status: "True"
type: RenderedTemplatesApplied
Check the manifests that the SiteConfig operator rendered by running the following command:
oc get clusterinstance <cluster_name> -n <target_namespace> -o jsonpath='{.status.manifestsRendered}'
For more information about status conditions, see ClusterInstance API.
2.6. Deprovisioning single-node OpenShift clusters with the SiteConfig operator
Deprovision your clusters with the SiteConfig operator to delete all resources and accesses associated with that cluster.
Required access: Cluster administrator
2.6.1. Prerequisites
- Deploy your clusters with the SiteConfig operator by using the default installation templates.
2.6.2. Deprovisioning single-node OpenShift clusters
Complete the following steps to delete your clusters:
Delete the ClusterInstance custom resource by running the following command:
oc delete clusterinstance <cluster_name> -n <target_namespace>
Verify that the deletion was successful by running the following command:
oc get clusterinstance <cluster_name> -n <target_namespace>
See the following example output, where the (NotFound) error indicates that your cluster is deprovisioned:
Error from server (NotFound): clusterinstances.siteconfig.open-cluster-management.io "<cluster_name>" not found
2.7. SiteConfig advanced topics
The SiteConfig operator provides additional functionalities, such as creating custom templates or scaling worker nodes, expanding the standard operations that apply to most use cases. See the following documentation for advanced topics of the SiteConfig operator:
2.7.1. Creating custom templates with the SiteConfig operator
Create user-defined templates that are not provided in the default set of templates.
Required access: Cluster administrator
Complete the following steps to create a custom template:
Create a YAML file named my-custom-secret.yaml that contains the cluster-level template in a ConfigMap object:
apiVersion: v1
kind: ConfigMap
metadata:
  name: my-custom-secret
  namespace: rhacm
data:
  MySecret: |-
    apiVersion: v1
    kind: Secret
    metadata:
      name: "{{ .Spec.ClusterName }}-my-custom-secret-key"
      namespace: "clusters"
      annotations:
        siteconfig.open-cluster-management.io/sync-wave: "1" 1
    type: Opaque
    data:
      key: <key>
- 1
- The siteconfig.open-cluster-management.io/sync-wave annotation controls the order in which manifests are created, updated, or deleted.
Apply the custom template on the hub cluster by running the following command:
oc apply -f my-custom-secret.yaml
Reference your template in the ClusterInstance custom resource, for example in a file named clusterinstance-my-custom-secret.yaml:
spec:
  ...
  templateRefs:
    - name: ai-cluster-templates-v1
      namespace: rhacm
    - name: my-custom-secret
      namespace: rhacm
  ...
Apply the ClusterInstance custom resource by running the following command:
oc apply -f clusterinstance-my-custom-secret.yaml
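To confirm that your custom template rendered successfully, you can check for the Secret that the template defines. For example, assuming a cluster named example-sno and the clusters namespace used in the preceding template, run the following command on the hub cluster:
oc get secret example-sno-my-custom-secret-key -n clusters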
2.7.2. Scaling in a single-node OpenShift cluster with the SiteConfig operator
Scale in your managed cluster that was installed by the SiteConfig operator. You can scale in your cluster by removing a worker node.
Required access: Cluster administrator
2.7.2.1. Prerequisites
- If you are using GitOps ZTP, you have configured your GitOps ZTP environment. To configure your environment, see Preparing the hub cluster for GitOps ZTP.
- You have the default templates. To get familiar with the default templates, see Default set of templates.
- You have installed your cluster with the SiteConfig operator. To install a cluster with the SiteConfig operator, see Installing single-node OpenShift clusters with the SiteConfig operator.
2.7.2.2. Adding an annotation to your worker node
Add an annotation to the worker node that you want to remove.
Complete the following steps to annotate the worker node in the managed cluster:
Add an annotation in the extraAnnotations field of the worker node entry in the ClusterInstance custom resource that is used to provision your cluster:
spec:
  ...
  nodes:
    - hostName: "worker-node2.example.com"
      role: "worker"
      ironicInspect: ""
      extraAnnotations:
        BareMetalHost:
          bmac.agent-install.openshift.io/remove-agent-and-node-on-delete: "true"
  ...
Apply the changes. See the following options:
- If you are using Red Hat Advanced Cluster Management without Red Hat OpenShift GitOps, run the following command on the hub cluster:
oc apply -f <clusterinstance>.yaml
- If you are using GitOps ZTP, push to your Git repository and wait for Argo CD to synchronize the changes.
Verify that the annotation is applied to the BareMetalHost worker resource by running the following command on the hub cluster:
oc get bmh -n <clusterinstance_namespace> worker-node2.example.com -ojsonpath='{.metadata.annotations}' | jq
See the following example output for successful application of the annotation:
{ "baremetalhost.metal3.io/detached": "assisted-service-controller", "bmac.agent-install.openshift.io/hostname": "worker-node2.example.com", "bmac.agent-install.openshift.io/remove-agent-and-node-on-delete": "true" "bmac.agent-install.openshift.io/role": "master", "inspect.metal3.io": "disabled", "siteconfig.open-cluster-management.io/sync-wave": "1", }
2.7.2.3. Deleting the BareMetalHost resource of the worker node
Delete the BareMetalHost resource of the worker node that you want to remove.
Complete the following steps to remove a worker node from the managed cluster:
Update the node object that you want to delete in your existing ClusterInstance custom resource with the following configuration:
...
spec:
  ...
  nodes:
    - hostName: "worker-node2.example.com"
      ...
      pruneManifests:
        - apiVersion: metal3.io/v1alpha1
          kind: BareMetalHost
  ...
Apply the changes. See the following options:
- If you are using Red Hat Advanced Cluster Management without Red Hat OpenShift GitOps, run the following command on the hub cluster:
oc apply -f <clusterinstance>.yaml
- If you are using GitOps ZTP, push to your Git repository and wait for Argo CD to synchronize the changes.
Verify that the BareMetalHost resources are removed by running the following command on the hub cluster:
oc get bmh -n <clusterinstance_namespace> --watch --kubeconfig <hub_cluster_kubeconfig_filename>
See the following example output:
NAME                       STATE                        CONSUMER   ONLINE   ERROR   AGE
master-node1.example.com   provisioned                             true             81m
worker-node2.example.com   deprovisioning                          true             44m
worker-node2.example.com   powering off before delete              true             20h
worker-node2.example.com   deleting                                true             50m
Verify that the Agent resources are removed by running the following command on the hub cluster:
oc get agents -n <clusterinstance_namespace> --kubeconfig <hub_cluster_kubeconfig_filename>
See the following example output:
NAME                       CLUSTER                  APPROVED   ROLE     STAGE
master-node1.example.com   <managed_cluster_name>   true       master   Done
master-node2.example.com   <managed_cluster_name>   true       master   Done
master-node3.example.com   <managed_cluster_name>   true       master   Done
worker-node1.example.com   <managed_cluster_name>   true       worker   Done
Verify that the Node resources are removed by running the following command on the managed cluster:
oc get nodes --kubeconfig <managed_cluster_kubeconfig_filename>
See the following example output:
NAME                       STATUS                        ROLES                  AGE   VERSION
worker-node2.example.com   NotReady,SchedulingDisabled   worker                 19h   v1.30.5
worker-node1.example.com   Ready                         worker                 19h   v1.30.5
master-node1.example.com   Ready                         control-plane,master   19h   v1.30.5
master-node2.example.com   Ready                         control-plane,master   19h   v1.30.5
master-node3.example.com   Ready                         control-plane,master   19h   v1.30.5
2.7.3. Scaling out a single-node OpenShift cluster with the SiteConfig operator
Scale out your managed cluster that was installed by the SiteConfig operator. You can scale out your cluster by adding a worker node.
Required access: Cluster administrator
2.7.3.1. Prerequisites
- If you are using GitOps ZTP, you have configured your GitOps ZTP environment. To configure your environment, see Preparing the hub cluster for GitOps ZTP.
- You have the default installation templates. To get familiar with the default templates, see Default set of templates.
- You have installed your cluster with the SiteConfig operator. To install a cluster with the SiteConfig operator, see Installing single-node OpenShift clusters with the SiteConfig operator.
2.7.3.2. Adding a worker node
Add a worker node by updating your ClusterInstance custom resource that is used to provision your cluster.
Complete the following steps to add a worker node to the managed cluster:
Define a new node object in the existing ClusterInstance custom resource:
spec:
  ...
  nodes:
    - hostName: "<host_name>"
      role: "worker"
      templateRefs:
        - name: ai-node-templates-v1
          namespace: rhacm
      bmcAddress: "<bmc_address>"
      bmcCredentialsName:
        name: "<bmc_credentials_name>"
      bootMACAddress: "<boot_mac_address>"
  ...
Apply the changes. See the following options:
- If you are using Red Hat Advanced Cluster Management without Red Hat OpenShift GitOps, run the following command on the hub cluster:
oc apply -f <clusterinstance>.yaml
- If you are using GitOps ZTP, push to your Git repository and wait for Argo CD to synchronize the changes.
Verify that a new BareMetalHost resource is added by running the following command on the hub cluster:
oc get bmh -n <clusterinstance_namespace> --watch --kubeconfig <hub_cluster_kubeconfig_filename>
See the following example output:
NAME                       STATE          CONSUMER   ONLINE   ERROR   AGE
master-node1.example.com   provisioned               true             81m
worker-node2.example.com   provisioning              true             44m
Verify that a new Agent resource is added by running the following command on the hub cluster:
oc get agents -n <clusterinstance_namespace> --kubeconfig <hub_cluster_kubeconfig_filename>
See the following example output:
NAME                       CLUSTER                  APPROVED   ROLE     STAGE
master-node1.example.com   <managed_cluster_name>   true       master   Done
master-node2.example.com   <managed_cluster_name>   true       master   Done
master-node3.example.com   <managed_cluster_name>   true       master   Done
worker-node1.example.com   <managed_cluster_name>   false      worker
worker-node2.example.com   <managed_cluster_name>   true       worker   Starting installation
worker-node2.example.com   <managed_cluster_name>   true       worker   Installing
worker-node2.example.com   <managed_cluster_name>   true       worker   Writing image to disk
worker-node2.example.com   <managed_cluster_name>   true       worker   Waiting for control plane
worker-node2.example.com   <managed_cluster_name>   true       worker   Rebooting
worker-node2.example.com   <managed_cluster_name>   true       worker   Joined
worker-node2.example.com   <managed_cluster_name>   true       worker   Done
Verify that a new Node resource is added by running the following command on the managed cluster:
oc get nodes --kubeconfig <managed_cluster_kubeconfig_filename>
See the following example output:
NAME                       STATUS   ROLES                  AGE   VERSION
worker-node2.example.com   Ready    worker                 1h    v1.30.5
worker-node1.example.com   Ready    worker                 19h   v1.30.5
master-node1.example.com   Ready    control-plane,master   19h   v1.30.5
master-node2.example.com   Ready    control-plane,master   19h   v1.30.5
master-node3.example.com   Ready    control-plane,master   19h   v1.30.5
2.7.4. Mirroring images for disconnected environments
You can deploy a cluster with the SiteConfig operator by using the Image Based Install Operator as your underlying operator. If you deploy your clusters with the Image Based Install Operator in a disconnected environment, you must supply your mirror images as extra manifests in the ClusterInstance custom resource.
Required access: Cluster administrator
Complete the following steps to mirror images for disconnected environments:
Create a YAML file named idms-configmap.yaml for your ImageDigestMirrorSet object that contains your mirror registry locations:
kind: ConfigMap
apiVersion: v1
metadata:
  name: "idms-configmap"
  namespace: "example-sno"
data:
  99-example-idms.yaml: |
    apiVersion: config.openshift.io/v1
    kind: ImageDigestMirrorSet
    metadata:
      name: example-idms
    spec:
      imageDigestMirrors:
        - mirrors:
            - mirror.registry.example.com/image-repo/image
          source: registry.example.com/image-repo/image
Important: Define the ConfigMap resource that contains the extra manifest in the same namespace as the ClusterInstance resource.
Create the resource by running the following command on the hub cluster:
oc apply -f idms-configmap.yaml
Reference your ImageDigestMirrorSet object in the ClusterInstance custom resource:
apiVersion: siteconfig.open-cluster-management.io/v1alpha1
kind: ClusterInstance
metadata:
  name: "example-sno"
  namespace: "example-sno"
spec:
  ...
  extraManifestsRefs:
    - name: idms-configmap
  ...
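After the cluster installation completes, you can optionally confirm that the mirror configuration was applied on the deployed cluster. The following command assumes the example-idms name used in the previous example and a kubeconfig file for the managed cluster:
oc get imagedigestmirrorset example-idms -o yaml --kubeconfig <managed_cluster_kubeconfig_filename>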