6.4. Moving resources to infrastructure machine sets


Some of the infrastructure resources are deployed in your cluster by default. You can move them to the infrastructure machine sets that you created.

6.4.1. Moving the router

You can deploy the router pod to a different machine set. By default, the pod is deployed to a worker node.

Prerequisites

  • Configure additional machine sets in your OpenShift Container Platform cluster.
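
    Optionally, confirm that nodes with the infra role exist before you begin. This check is a minimal sketch; it assumes that your additional machine sets apply the node-role.kubernetes.io/infra label that is used throughout this procedure:

    $ oc get nodes -l node-role.kubernetes.io/infra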

Procedure

  1. View the IngressController custom resource for the router Operator:

    $ oc get ingresscontroller default -n openshift-ingress-operator -o yaml

    The command output resembles the following text:

    apiVersion: operator.openshift.io/v1
    kind: IngressController
    metadata:
      creationTimestamp: 2019-04-18T12:35:39Z
      finalizers:
      - ingresscontroller.operator.openshift.io/finalizer-ingresscontroller
      generation: 1
      name: default
      namespace: openshift-ingress-operator
      resourceVersion: "11341"
      selfLink: /apis/operator.openshift.io/v1/namespaces/openshift-ingress-operator/ingresscontrollers/default
      uid: 79509e05-61d6-11e9-bc55-02ce4781844a
    spec: {}
    status:
      availableReplicas: 2
      conditions:
      - lastTransitionTime: 2019-04-18T12:36:15Z
        status: "True"
        type: Available
      domain: apps.<cluster>.example.com
      endpointPublishingStrategy:
        type: LoadBalancerService
      selector: ingresscontroller.operator.openshift.io/deployment-ingresscontroller=default
  2. Edit the ingresscontroller resource and change the nodeSelector to use the infra label:

    $ oc edit ingresscontroller default -n openshift-ingress-operator

    Add the nodeSelector stanza that references the infra label to the spec section, as shown (an equivalent patch command is provided after this procedure):

      spec:
        nodePlacement:
          nodeSelector:
            matchLabels:
              node-role.kubernetes.io/infra: ""
  3. Confirm that the router pod is running on the infra node.

    1. View the list of router pods and note the node name of the running pod:

      $ oc get pod -n openshift-ingress -o wide

      Example output

      NAME                              READY     STATUS        RESTARTS   AGE       IP           NODE                           NOMINATED NODE   READINESS GATES
      router-default-86798b4b5d-bdlvd   1/1      Running       0          28s       10.130.2.4   ip-10-0-217-226.ec2.internal   <none>           <none>
      router-default-955d875f4-255g8    0/1      Terminating   0          19h       10.129.2.4   ip-10-0-148-172.ec2.internal   <none>           <none>

      In this example, the running pod is on the ip-10-0-217-226.ec2.internal node.

    2. View the node status of the running pod:

      $ oc get node <node_name> 1

      1 Specify the <node_name> that you obtained from the pod list.

      Example output

      NAME                          STATUS  ROLES         AGE   VERSION
      ip-10-0-217-226.ec2.internal  Ready   infra,worker  17h   v1.18.3

      Because the role list includes infra, the pod is running on the correct node.
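
As an alternative to editing the IngressController resource interactively in step 2, you can set the same node placement with a single patch command. The following command is a minimal sketch based on the spec shown above; it assumes the default IngressController:

    $ oc patch ingresscontroller/default -n openshift-ingress-operator --type=merge -p '{"spec":{"nodePlacement":{"nodeSelector":{"matchLabels":{"node-role.kubernetes.io/infra":""}}}}}'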

6.4.2. Moving the default registry

You can configure the registry Operator to deploy its pods to different nodes.

Prerequisites

  • Configure additional machine sets in your OpenShift Container Platform cluster.

Procedure

  1. View the config/instance object:

    $ oc get configs.imageregistry.operator.openshift.io/cluster -o yaml

    Example output

    apiVersion: imageregistry.operator.openshift.io/v1
    kind: Config
    metadata:
      creationTimestamp: 2019-02-05T13:52:05Z
      finalizers:
      - imageregistry.operator.openshift.io/finalizer
      generation: 1
      name: cluster
      resourceVersion: "56174"
      selfLink: /apis/imageregistry.operator.openshift.io/v1/configs/cluster
      uid: 36fd3724-294d-11e9-a524-12ffeee2931b
    spec:
      httpSecret: d9a012ccd117b1e6616ceccb2c3bb66a5fed1b5e481623
      logging: 2
      managementState: Managed
      proxy: {}
      replicas: 1
      requests:
        read: {}
        write: {}
      storage:
        s3:
          bucket: image-registry-us-east-1-c92e88cad85b48ec8b312344dff03c82-392c
          region: us-east-1
    status:
    ...

  2. Edit the config/instance object:

    $ oc edit configs.imageregistry.operator.openshift.io/cluster
  3. Add the following lines of text to the spec section of the object (an equivalent patch command is provided after this procedure):

      nodeSelector:
        node-role.kubernetes.io/infra: ""
  4. Verify that the registry pod has been moved to the infrastructure node.

    1. Run the following command to identify the node where the registry pod is located:

      $ oc get pods -o wide -n openshift-image-registry
    2. Confirm the node has the label you specified:

      $ oc describe node <node_name>

      Review the command output and confirm that node-role.kubernetes.io/infra is in the LABELS list.
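
As an alternative to editing the object interactively in steps 2 and 3, you can add the same node selector with a single patch command. The following command is a minimal sketch of the spec change described above:

    $ oc patch configs.imageregistry.operator.openshift.io/cluster --type=merge -p '{"spec":{"nodeSelector":{"node-role.kubernetes.io/infra":""}}}'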

6.4.3. Moving the monitoring solution

By default, the Prometheus Cluster Monitoring stack, which contains Prometheus, Grafana, and AlertManager, is deployed to provide cluster monitoring. It is managed by the Cluster Monitoring Operator. To move its components to different machines, you create and apply a custom config map.

Procedure

  1. Save the following ConfigMap definition as the cluster-monitoring-configmap.yaml file:

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: cluster-monitoring-config
      namespace: openshift-monitoring
    data:
      config.yaml: |+
        alertmanagerMain:
          nodeSelector:
            node-role.kubernetes.io/infra: ""
        prometheusK8s:
          nodeSelector:
            node-role.kubernetes.io/infra: ""
        prometheusOperator:
          nodeSelector:
            node-role.kubernetes.io/infra: ""
        grafana:
          nodeSelector:
            node-role.kubernetes.io/infra: ""
        k8sPrometheusAdapter:
          nodeSelector:
            node-role.kubernetes.io/infra: ""
        kubeStateMetrics:
          nodeSelector:
            node-role.kubernetes.io/infra: ""
        telemeterClient:
          nodeSelector:
            node-role.kubernetes.io/infra: ""
        openshiftStateMetrics:
          nodeSelector:
            node-role.kubernetes.io/infra: ""
        thanosQuerier:
          nodeSelector:
            node-role.kubernetes.io/infra: ""

    Applying this config map forces the components of the monitoring stack to redeploy to infrastructure nodes.

  2. Apply the new config map:

    $ oc create -f cluster-monitoring-configmap.yaml
  3. Watch the monitoring pods move to the new machines:

    $ watch 'oc get pod -n openshift-monitoring -o wide'
  4. If a component has not moved to the infra node, delete the pod that runs this component:

    $ oc delete pod -n openshift-monitoring <pod>

    The component from the deleted pod is re-created on the infra node.
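
To check at a glance which node each monitoring pod is scheduled on, you can also print only the pod and node names. The following command is a minimal sketch that uses standard oc output options:

    $ oc get pods -n openshift-monitoring -o custom-columns=NAME:.metadata.name,NODE:.spec.nodeName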


6.4.4. Moving the cluster logging resources

You can configure the Cluster Logging Operator to deploy the pods for any or all of the Cluster Logging components, Elasticsearch, Kibana, and Curator, to different nodes. You cannot move the Cluster Logging Operator pod from its installed location.

For example, you can move the Elasticsearch pods to a separate node because of their high CPU, memory, and disk requirements.

Prerequisites

  • Cluster logging and Elasticsearch must be installed. These features are not installed by default.

Procedure

  1. Edit the ClusterLogging custom resource (CR) in the openshift-logging project:

    $ oc edit ClusterLogging instance
    apiVersion: logging.openshift.io/v1
    kind: ClusterLogging
    
    ...
    
    spec:
      collection:
        logs:
          fluentd:
            resources: null
          type: fluentd
      curation:
        curator:
          nodeSelector: 1
            node-role.kubernetes.io/infra: ''
          resources: null
          schedule: 30 3 * * *
        type: curator
      logStore:
        elasticsearch:
          nodeCount: 3
          nodeSelector: 2
            node-role.kubernetes.io/infra: ''
          redundancyPolicy: SingleRedundancy
          resources:
            limits:
              cpu: 500m
              memory: 16Gi
            requests:
              cpu: 500m
              memory: 16Gi
          storage: {}
        type: elasticsearch
      managementState: Managed
      visualization:
        kibana:
          nodeSelector: 3
            node-role.kubernetes.io/infra: ''
          proxy:
            resources: null
          replicas: 1
          resources: null
        type: kibana
    
    ...
1 2 3 Add a nodeSelector parameter with the appropriate value to the component you want to move. You can use a nodeSelector in the format shown or use <key>: <value> pairs, based on the value specified for the node.
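
For example, if you label the target nodes with a custom label instead of relying on the infra role, the node selector can reference that label directly. The following sketch is illustrative only; node-type: elasticsearch is a hypothetical label that you apply to the nodes yourself:

    logStore:
      elasticsearch:
        nodeCount: 3
        nodeSelector:
          node-type: elasticsearch  # hypothetical custom label applied to the target nodes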

Verification

To verify that a component has moved, you can use the oc get pod -o wide command.

For example:

  • You want to move the Kibana pod from the ip-10-0-147-79.us-east-2.compute.internal node:

    $ oc get pod kibana-5b8bdf44f9-ccpq9 -o wide

    Example output

    NAME                      READY   STATUS    RESTARTS   AGE   IP            NODE                                        NOMINATED NODE   READINESS GATES
    kibana-5b8bdf44f9-ccpq9   2/2     Running   0          27s   10.129.2.18   ip-10-0-147-79.us-east-2.compute.internal   <none>           <none>

  • You want to move the Kibana pod to the ip-10-0-139-48.us-east-2.compute.internal node, a dedicated infrastructure node:

    $ oc get nodes

    Example output

    NAME                                         STATUS   ROLES          AGE   VERSION
    ip-10-0-133-216.us-east-2.compute.internal   Ready    master         60m   v1.18.3
    ip-10-0-139-146.us-east-2.compute.internal   Ready    master         60m   v1.18.3
    ip-10-0-139-192.us-east-2.compute.internal   Ready    worker         51m   v1.18.3
    ip-10-0-139-241.us-east-2.compute.internal   Ready    worker         51m   v1.18.3
    ip-10-0-147-79.us-east-2.compute.internal    Ready    worker         51m   v1.18.3
    ip-10-0-152-241.us-east-2.compute.internal   Ready    master         60m   v1.18.3
    ip-10-0-139-48.us-east-2.compute.internal    Ready    infra          51m   v1.18.3

    Note that the node has a node-role.kubernetes.io/infra: '' label:

    $ oc get node ip-10-0-139-48.us-east-2.compute.internal -o yaml

    Example output

    kind: Node
    apiVersion: v1
    metadata:
      name: ip-10-0-139-48.us-east-2.compute.internal
      selfLink: /api/v1/nodes/ip-10-0-139-48.us-east-2.compute.internal
      uid: 62038aa9-661f-41d7-ba93-b5f1b6ef8751
      resourceVersion: '39083'
      creationTimestamp: '2020-04-13T19:07:55Z'
      labels:
        node-role.kubernetes.io/infra: ''
    ...

  • To move the Kibana pod, edit the ClusterLogging CR to add a node selector:

    apiVersion: logging.openshift.io/v1
    kind: ClusterLogging
    
    ...
    
    spec:
    
    ...
    
      visualization:
        kibana:
          nodeSelector: 1
            node-role.kubernetes.io/infra: ''
          proxy:
            resources: null
          replicas: 1
          resources: null
        type: kibana

    1 Add a node selector to match the label in the node specification.
  • After you save the CR, the current Kibana pod is terminated and a new pod is deployed:

    $ oc get pods

    Example output

    NAME                                            READY   STATUS        RESTARTS   AGE
    cluster-logging-operator-84d98649c4-zb9g7       1/1     Running       0          29m
    elasticsearch-cdm-hwv01pf7-1-56588f554f-kpmlg   2/2     Running       0          28m
    elasticsearch-cdm-hwv01pf7-2-84c877d75d-75wqj   2/2     Running       0          28m
    elasticsearch-cdm-hwv01pf7-3-f5d95b87b-4nx78    2/2     Running       0          28m
    fluentd-42dzz                                   1/1     Running       0          28m
    fluentd-d74rq                                   1/1     Running       0          28m
    fluentd-m5vr9                                   1/1     Running       0          28m
    fluentd-nkxl7                                   1/1     Running       0          28m
    fluentd-pdvqb                                   1/1     Running       0          28m
    fluentd-tflh6                                   1/1     Running       0          28m
    kibana-5b8bdf44f9-ccpq9                         2/2     Terminating   0          4m11s
    kibana-7d85dcffc8-bfpfp                         2/2     Running       0          33s

  • The new pod is on the ip-10-0-139-48.us-east-2.compute.internal node:

    $ oc get pod kibana-7d85dcffc8-bfpfp -o wide

    Example output

    NAME                      READY   STATUS        RESTARTS   AGE   IP            NODE                                        NOMINATED NODE   READINESS GATES
    kibana-7d85dcffc8-bfpfp   2/2     Running       0          43s   10.131.0.22   ip-10-0-139-48.us-east-2.compute.internal   <none>           <none>

  • After a few moments, the original Kibana pod is removed:

    $ oc get pods

    Example output

    NAME                                            READY   STATUS    RESTARTS   AGE
    cluster-logging-operator-84d98649c4-zb9g7       1/1     Running   0          30m
    elasticsearch-cdm-hwv01pf7-1-56588f554f-kpmlg   2/2     Running   0          29m
    elasticsearch-cdm-hwv01pf7-2-84c877d75d-75wqj   2/2     Running   0          29m
    elasticsearch-cdm-hwv01pf7-3-f5d95b87b-4nx78    2/2     Running   0          29m
    fluentd-42dzz                                   1/1     Running   0          29m
    fluentd-d74rq                                   1/1     Running   0          29m
    fluentd-m5vr9                                   1/1     Running   0          29m
    fluentd-nkxl7                                   1/1     Running   0          29m
    fluentd-pdvqb                                   1/1     Running   0          29m
    fluentd-tflh6                                   1/1     Running   0          29m
    kibana-7d85dcffc8-bfpfp                         2/2     Running   0          62s
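
  • If you prefer a more compact check, you can print only the node name for the new pod. The following command is a minimal sketch; the pod name comes from the example output above, and the openshift-logging project is assumed:

    $ oc get pod kibana-7d85dcffc8-bfpfp -n openshift-logging -o jsonpath='{.spec.nodeName}'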
