
Chapter 4. Replacing a master host


You can replace a failed master host.

First, remove the failed master host from your cluster, and then add a replacement master host. If the failed master host ran etcd, scale up etcd by adding etcd to the new master host.

Important

You must complete all sections of this topic.

4.1. Deprecating a master host

Master hosts run important services, such as the OpenShift Container Platform API and controllers services. In order to deprecate a master host, these services must be stopped.

The OpenShift Container Platform API service is an active/active service, so stopping the service does not affect the environment as long as the requests are sent to a separate master server. However, the OpenShift Container Platform controllers service is an active/passive service, where the services use etcd to decide the active master.
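
Before stopping services, you can check which master currently holds the controllers lease. A minimal sketch, assuming the default ConfigMap-based leader election used by the controllers service (the object name openshift-master-controllers and the control-plane.alpha.kubernetes.io/leader annotation are assumptions to verify in your cluster):

    $ oc get configmap openshift-master-controllers -n kube-system -o yaml

The leader annotation in the output records the identity of the master that currently holds the controllers lease.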

Deprecating a master host in a multi-master architecture includes removing the master from the load balancer pool to avoid new connections attempting to use that master. This process depends heavily on the load balancer used. The following steps show the details of removing the master from haproxy. If OpenShift Container Platform is running on a cloud provider or using an F5 appliance, see the product-specific documentation to remove the master from rotation.

Procedure

  1. Remove the master host entry from the backend section in the /etc/haproxy/haproxy.cfg configuration file. For example, if deprecating a master named master-0.example.com using haproxy, ensure that its host name is removed from the following:

    backend mgmt8443
        balance source
        mode tcp
        # MASTERS 8443
        server master-1.example.com 192.168.55.12:8443 check
        server master-2.example.com 192.168.55.13:8443 check
  2. Restart the haproxy service:

    $ sudo systemctl restart haproxy
  3. Once the master is removed from the load balancer, disable the API and controller services by moving their definition files out of the static pod directory, /etc/origin/node/pods:

    # mkdir -p /etc/origin/node/pods/disabled
    # mv /etc/origin/node/pods/apiserver.yaml /etc/origin/node/pods/disabled/
    # mv /etc/origin/node/pods/controller.yaml /etc/origin/node/pods/disabled/
  4. Because the master host is a schedulable OpenShift Container Platform node, follow the steps in the Deprecating a node host section (a typical drain sequence is sketched after this procedure).
  5. Remove the master host from the [masters] and [nodes] groups in the /etc/ansible/hosts Ansible inventory file to avoid issues if running any Ansible tasks using that inventory file.

    Warning

    Deprecating the first master host listed in the Ansible inventory file requires extra precautions.

    The /etc/origin/master/ca.serial.txt file is generated on only the first master listed in the Ansible host inventory. If you deprecate the first master host, copy the /etc/origin/master/ca.serial.txt file to the remaining master hosts before you begin.

    Important

    In OpenShift Container Platform 3.11 clusters running multiple masters, one of the master nodes includes additional CA certificates in /etc/origin/master, /etc/etcd/ca, and /etc/etcd/generated_certs. These are required for application node and etcd node scale-up operations and must be restored on another master node if the CA host master is being deprecated.

  6. The kubernetes service includes the master host IPs as endpoints. To verify that the master has been properly deprecated, review the kubernetes service output and verify that the deprecated master has been removed from the endpoints:

    $ oc describe svc kubernetes -n default
    Name:			kubernetes
    Namespace:		default
    Labels:			component=apiserver
    			provider=kubernetes
    Annotations:		<none>
    Selector:		<none>
    Type:			ClusterIP
    IP:			10.111.0.1
    Port:			https	443/TCP
    Endpoints:		192.168.55.12:8443,192.168.55.13:8443
    Port:			dns	53/UDP
    Endpoints:		192.168.55.12:8053,192.168.55.13:8053
    Port:			dns-tcp	53/TCP
    Endpoints:		192.168.55.12:8053,192.168.55.13:8053
    Session Affinity:	ClientIP
    Events:			<none>

    After the master has been successfully deprecated, the host where the master was previously running can be safely deleted.
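
For reference, a minimal sketch of the node-deprecation commands referred to in step 4, assuming the deprecated master is master-0.example.com; the exact flags depend on your workloads, so follow the Deprecating a node host section for the authoritative steps:

    $ oc adm manage-node master-0.example.com --schedulable=false
    $ oc adm drain master-0.example.com --ignore-daemonsets --delete-local-data
    $ oc delete node master-0.example.com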

4.2. Adding hosts

You can add new hosts to your cluster by running the scaleup.yml playbook. This playbook queries the master, generates and distributes new certificates for the new hosts, and then runs the configuration playbooks on only the new hosts. Before running the scaleup.yml playbook, complete all prerequisite host preparation steps.

Important

The scaleup.yml playbook configures only the new host. It does not update NO_PROXY in master services, and it does not restart master services.
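
For example, if your cluster runs behind a proxy, add the new host names to your NO_PROXY configuration yourself and then restart the master services on each existing master. A minimal sketch of the restart, using the static pod restart helpers that also appear later in this chapter:

    # master-restart api
    # master-restart controllers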

You must have an existing inventory file, for example /etc/ansible/hosts, that is representative of your current cluster configuration in order to run the scaleup.yml playbook. If you previously used the atomic-openshift-installer command to run your installation, you can check ~/.config/openshift/hosts for the last inventory file that the installer generated and use that file as your inventory file. You can modify this file as required. You must then specify the file location with -i when you run the ansible-playbook.

Important

See the cluster maximums section for the recommended maximum number of nodes.

Procedure

  1. Ensure you have the latest playbooks by updating the openshift-ansible package:

    # yum update openshift-ansible
  2. Edit your /etc/ansible/hosts file and add new_<host_type> to the [OSEv3:children] section. For example, to add a new node host, add new_nodes:

    [OSEv3:children]
    masters
    nodes
    new_nodes

    To add new master hosts, add new_masters.

  3. Create a [new_<host_type>] section to specify host information for the new hosts. Format this section like an existing section, as shown in the following example of adding a new node:

    [nodes]
    master[1:3].example.com
    node1.example.com openshift_node_group_name='node-config-compute'
    node2.example.com openshift_node_group_name='node-config-compute'
    infra-node1.example.com openshift_node_group_name='node-config-infra'
    infra-node2.example.com openshift_node_group_name='node-config-infra'
    
    [new_nodes]
    node3.example.com openshift_node_group_name='node-config-infra'

    See Configuring Host Variables for more options.

    When adding new masters, add hosts to both the [new_masters] section and the [new_nodes] section to ensure that the new master host is part of the OpenShift SDN:

    [masters]
    master[1:2].example.com
    
    [new_masters]
    master3.example.com
    
    [nodes]
    master[1:2].example.com
    node1.example.com openshift_node_group_name='node-config-compute'
    node2.example.com openshift_node_group_name='node-config-compute'
    infra-node1.example.com openshift_node_group_name='node-config-infra'
    infra-node2.example.com openshift_node_group_name='node-config-infra'
    
    [new_nodes]
    master3.example.com
    Important

    If you label a master host with the node-role.kubernetes.io/infra=true label and have no other dedicated infrastructure nodes, you must also explicitly mark the host as schedulable by adding openshift_schedulable=true to the entry. Otherwise, the registry and router pods cannot be placed anywhere.

  4. Change to the playbook directory and run the openshift_node_group.yml playbook. If your inventory file is located somewhere other than the default of /etc/ansible/hosts, specify the location with the -i option:

    $ cd /usr/share/ansible/openshift-ansible
    $ ansible-playbook [-i /path/to/file] \
      playbooks/openshift-master/openshift_node_group.yml

    This creates the ConfigMaps for the new node groups and, ultimately, the node configuration file on the host.

    Note

    Running the openshift_node_group.yml playbook only updates new nodes. It cannot be run to update existing nodes in a cluster.

  5. Run the scaleup.yml playbook. If your inventory file is located somewhere other than the default of /etc/ansible/hosts, specify the location with the -i option.

    • For additional nodes:

      $ ansible-playbook [-i /path/to/file] \
          playbooks/openshift-node/scaleup.yml
    • For additional masters:

      $ ansible-playbook [-i /path/to/file] \
          playbooks/openshift-master/scaleup.yml
  6. If you deployed the EFK stack in your cluster, set the node label logging-infra-fluentd=true:

    # oc label node/new-node.example.com logging-infra-fluentd=true
  7. After the playbook runs, verify the installation (a quick verification sketch follows this procedure).
  8. Move any hosts that you defined in the [new_<host_type>] section to their appropriate section. By moving these hosts, subsequent playbook runs that use this inventory file treat the nodes correctly. You can keep the empty [new_<host_type>] section. For example, when adding new nodes:

    [nodes]
    master[1:3].example.com
    node1.example.com openshift_node_group_name='node-config-compute'
    node2.example.com openshift_node_group_name='node-config-compute'
    node3.example.com openshift_node_group_name='node-config-compute'
    infra-node1.example.com openshift_node_group_name='node-config-infra'
    infra-node2.example.com openshift_node_group_name='node-config-infra'
    
    [new_nodes]
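
For example, a quick, illustrative way to verify the installation mentioned in step 7 is to list the nodes and confirm that the new host, such as node3.example.com in the example above, reports a Ready status:

    $ oc get nodes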

4.3. Scaling etcd

You can scale the etcd cluster vertically by adding more resources to the etcd hosts or horizontally by adding more etcd hosts.

Note

Due to the voting system etcd uses, the cluster must always contain an odd number of members.

An odd number of etcd hosts provides the best fault tolerance: adding a member to make the count even raises the quorum requirement without increasing the number of failures the cluster can tolerate. For example, with a cluster of three members, quorum is two, which leaves a failure tolerance of one; the cluster continues to operate as long as at least two of the three members are healthy.
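
Quorum for an n-member cluster is floor(n/2) + 1. As an illustration only, a short shell loop shows that failure tolerance grows with each step to the next odd size while quorum always remains a majority:

    $ for n in 1 3 5; do q=$(( n / 2 + 1 )); echo "members=$n quorum=$q tolerance=$(( n - q ))"; done
    members=1 quorum=1 tolerance=0
    members=3 quorum=2 tolerance=1
    members=5 quorum=3 tolerance=2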

A production cluster of three etcd hosts is recommended.

The new host requires a dedicated host with a fresh installation of Red Hat Enterprise Linux 7. For maximum performance, place the etcd storage on an SSD and on a dedicated disk mounted at /var/lib/etcd.
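
A minimal sketch of mounting a dedicated disk at /var/lib/etcd, assuming a hypothetical, already formatted device named /dev/sdb; substitute the device and filesystem used in your environment:

    # mkdir -p /var/lib/etcd
    # echo '/dev/sdb  /var/lib/etcd  xfs  defaults  0 0' >> /etc/fstab
    # mount /var/lib/etcd
    # df -h /var/lib/etcd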

Prerequisites

  1. Before you add a new etcd host, back up both the etcd configuration and data to prevent data loss (a backup sketch follows this list).
  2. Check the current etcd cluster status to avoid adding new hosts to an unhealthy cluster. Run this command:

    # ETCDCTL_API=3 etcdctl --cert="/etc/etcd/peer.crt" \
              --key=/etc/etcd/peer.key \
              --cacert="/etc/etcd/ca.crt" \
              --endpoints="https://*master-0.example.com*:2379,\
                https://*master-1.example.com*:2379,\
                https://*master-2.example.com*:2379" \
                endpoint health
    https://master-0.example.com:2379 is healthy: successfully committed proposal: took = 5.011358ms
    https://master-1.example.com:2379 is healthy: successfully committed proposal: took = 1.305173ms
    https://master-2.example.com:2379 is healthy: successfully committed proposal: took = 1.388772ms
  3. Before running the scaleup playbook, ensure the new host is registered to the proper Red Hat software channels:

    # subscription-manager register \
        --username=*<username>* --password=*<password>*
    # subscription-manager attach --pool=*<poolid>*
    # subscription-manager repos --disable="*"
    # subscription-manager repos \
        --enable=rhel-7-server-rpms \
        --enable=rhel-7-server-extras-rpms

    etcd is hosted in the rhel-7-server-extras-rpms software channel.

  4. Make sure all unused etcd members are removed from the etcd cluster. This must be completed before running the scaleup playbook.

    1. List the etcd members:

      # ETCDCTL_API=3 etcdctl --cert="/etc/etcd/peer.crt" --key="/etc/etcd/peer.key" \
        --cacert="/etc/etcd/ca.crt" --endpoints=ETCD_LISTEN_CLIENT_URLS member list -w table

      Copy the unused etcd member ID, if applicable.

    2. Remove the unused member by specifying its ID in the following command:

      # ETCDCTL_API=3 etcdctl --cert="/etc/etcd/peer.crt" --key="/etc/etcd/peer.key" \
        --cacert="/etc/etcd/ca.crt" --endpoints=ETCD_LISTEN_CLIENT_URLS member remove UNUSED_ETCD_MEMBER_ID
  5. Upgrade etcd and iptables on the current etcd nodes:

    # yum update etcd iptables-services
  6. Back up the /etc/etcd configuration for the etcd hosts.
  7. If the new etcd members will also be OpenShift Container Platform nodes, add the desired number of hosts to the cluster.
  8. The rest of this procedure assumes you added one host, but if you add multiple hosts, perform all steps on each host.
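
A minimal backup sketch for step 1, run on one of the existing etcd hosts; the /backup target directory is an example, and the snapshot command assumes the etcd v3 API:

    # mkdir -p /backup/etcd-$(date +%Y%m%d)
    # cp -a /etc/etcd /backup/etcd-$(date +%Y%m%d)/config
    # ETCDCTL_API=3 etcdctl --cert=/etc/etcd/peer.crt \
          --key=/etc/etcd/peer.key \
          --cacert=/etc/etcd/ca.crt \
          --endpoints=https://master-0.example.com:2379 \
          snapshot save /backup/etcd-$(date +%Y%m%d)/snapshot.db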

4.3.1. Adding a new etcd host using Ansible

Procedure
  1. In the Ansible inventory file, create a new group named [new_etcd] and add the new host. Then, add the new_etcd group as a child of the [OSEv3] group:

    [OSEv3:children]
    masters
    nodes
    etcd
    new_etcd 1
    
    ... [OUTPUT ABBREVIATED] ...
    
    [etcd]
    master-0.example.com
    master-1.example.com
    master-2.example.com
    
    [new_etcd] 2
    etcd0.example.com 3
    1 2 3
    Add these lines.
    Note

    When replacing an old etcd host, replace its entry in the inventory file with the new etcd host entry. While replacing the old etcd host, you must create a copy of the /etc/etcd/ca/ directory. Alternatively, you can redeploy the etcd CA and certificates before scaling up the etcd hosts.

  2. From the host that installed OpenShift Container Platform and hosts the Ansible inventory file, change to the playbook directory and run the etcd scaleup playbook:

    $ cd /usr/share/ansible/openshift-ansible
    $ ansible-playbook playbooks/openshift-etcd/scaleup.yml
  3. After the playbook runs, modify the inventory file to reflect the current status by moving the new etcd host from the [new_etcd] group to the [etcd] group:

    [OSEv3:children]
    masters
    nodes
    etcd
    new_etcd
    
    ... [OUTPUT ABBREVIATED] ...
    
    [etcd]
    master-0.example.com
    master-1.example.com
    master-2.example.com
    etcd0.example.com
  4. If you use Flannel, modify the flanneld service configuration on every OpenShift Container Platform host, located at /etc/sysconfig/flanneld, to include the new etcd host:

    FLANNEL_ETCD_ENDPOINTS=https://master-0.example.com:2379,https://master-1.example.com:2379,https://master-2.example.com:2379,https://etcd0.example.com:2379
  5. Restart the flanneld service:

    # systemctl restart flanneld.service
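
After the playbook runs and flanneld is restarted where applicable, you can optionally confirm that the new member joined the cluster by reusing the member list command from the prerequisites:

    # ETCDCTL_API=3 etcdctl --cert=/etc/etcd/peer.crt \
          --key=/etc/etcd/peer.key \
          --cacert=/etc/etcd/ca.crt \
          --endpoints=https://master-0.example.com:2379 \
          member list -w table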

4.3.2. Manually adding a new etcd host

If you do not run etcd as static pods on master nodes, you might need to add another etcd host.

Procedure
Modify the current etcd cluster

To create the etcd certificates, run the openssl command, replacing the values with those from your environment.

  1. Create some environment variables:

    export NEW_ETCD_HOSTNAME="*etcd0.example.com*"
    export NEW_ETCD_IP="192.168.55.21"
    
    export CN=$NEW_ETCD_HOSTNAME
    export SAN="IP:${NEW_ETCD_IP}, DNS:${NEW_ETCD_HOSTNAME}"
    export PREFIX="/etc/etcd/generated_certs/etcd-$CN/"
    export OPENSSLCFG="/etc/etcd/ca/openssl.cnf"
    Note

    The custom openssl extensions used as etcd_v3_ca_* include the $SAN environment variable as subjectAltName. See /etc/etcd/ca/openssl.cnf for more information.

  2. Create the directory to store the configuration and certificates:

    # mkdir -p ${PREFIX}
  3. Create the server certificate request and sign it (server.csr and server.crt):

    # openssl req -new -config ${OPENSSLCFG} \
        -keyout ${PREFIX}server.key  \
        -out ${PREFIX}server.csr \
        -reqexts etcd_v3_req -batch -nodes \
        -subj /CN=$CN
    
    # openssl ca -name etcd_ca -config ${OPENSSLCFG} \
        -out ${PREFIX}server.crt \
        -in ${PREFIX}server.csr \
        -extensions etcd_v3_ca_server -batch
  4. Create the peer certificate request and sign it (peer.csr and peer.crt):

    # openssl req -new -config ${OPENSSLCFG} \
        -keyout ${PREFIX}peer.key \
        -out ${PREFIX}peer.csr \
        -reqexts etcd_v3_req -batch -nodes \
        -subj /CN=$CN
    
    # openssl ca -name etcd_ca -config ${OPENSSLCFG} \
      -out ${PREFIX}peer.crt \
      -in ${PREFIX}peer.csr \
      -extensions etcd_v3_ca_peer -batch
  5. Copy the current etcd configuration and ca.crt files from the current node as examples to modify later:

    # cp /etc/etcd/etcd.conf ${PREFIX}
    # cp /etc/etcd/ca.crt ${PREFIX}
  6. While still on the surviving etcd host, add the new host to the cluster. To add additional etcd members to the cluster, you must first adjust the default localhost peer in the peerURLs value for the first member:

    1. Get the member ID for the first member using the member list command:

      # etcdctl --cert-file=/etc/etcd/peer.crt \
          --key-file=/etc/etcd/peer.key \
          --ca-file=/etc/etcd/ca.crt \
          --peers="https://172.18.1.18:2379,https://172.18.9.202:2379,https://172.18.0.75:2379" \ 1
          member list
      1
      Ensure that you specify the URLs of only active etcd members in the --peers parameter value.
    2. Obtain the IP address where etcd listens for cluster peers:

      $ ss -l4n | grep 2380
    3. Update the value of peerURLs using the etcdctl member update command by passing the member ID and IP address obtained from the previous steps:

      # etcdctl --cert-file=/etc/etcd/peer.crt \
          --key-file=/etc/etcd/peer.key \
          --ca-file=/etc/etcd/ca.crt \
          --peers="https://172.18.1.18:2379,https://172.18.9.202:2379,https://172.18.0.75:2379" \
          member update 511b7fb6cc0001 https://172.18.1.18:2380
    4. Re-run the member list command and ensure the peer URLs no longer include localhost.
  7. Add the new host to the etcd cluster. Note that the new host is not yet configured, so its status remains unstarted until you configure the new host.

    Warning

    You must add each member and bring it online one at a time. When you add each additional member to the cluster, you must adjust the peerURLs list for the current peers. The peerURLs list grows by one for each member added. The etcdctl member add command outputs the values that you must set in the etcd.conf file as you add each member, as described in the following instructions.

    # etcdctl -C https://${CURRENT_ETCD_HOST}:2379 \
      --ca-file=/etc/etcd/ca.crt     \
      --cert-file=/etc/etcd/peer.crt     \
      --key-file=/etc/etcd/peer.key member add ${NEW_ETCD_HOSTNAME} https://${NEW_ETCD_IP}:2380 1
    
    Added member named 10.3.9.222 with ID 4e1db163a21d7651 to cluster
    
    ETCD_NAME="<NEW_ETCD_HOSTNAME>"
    ETCD_INITIAL_CLUSTER="<NEW_ETCD_HOSTNAME>=https://<NEW_HOST_IP>:2380,<CLUSTERMEMBER1_NAME>=https://<CLUSTERMEMBER1_IP>:2380,<CLUSTERMEMBER2_NAME>=https://<CLUSTERMEMBER2_IP>:2380,<CLUSTERMEMBER3_NAME>=https://<CLUSTERMEMBER3_IP>:2380"
    ETCD_INITIAL_CLUSTER_STATE="existing"
    1
    In this line, 10.3.9.222 is a label for the etcd member. You can specify the host name, IP address, or a simple name.
  8. Update the sample ${PREFIX}/etcd.conf file (an abridged example of the resulting file appears after this sub-procedure).

    1. Replace the following values with the values generated in the previous step:

      • ETCD_NAME
      • ETCD_INITIAL_CLUSTER
      • ETCD_INITIAL_CLUSTER_STATE
    2. Modify the following variables with the new host IP from the output of the previous step. You can use ${NEW_ETCD_IP} as the value.

      ETCD_LISTEN_PEER_URLS
      ETCD_LISTEN_CLIENT_URLS
      ETCD_INITIAL_ADVERTISE_PEER_URLS
      ETCD_ADVERTISE_CLIENT_URLS
    3. If you previously used the member system as an etcd node, you must overwrite the current values in the /etc/etcd/etcd.conf file.
    4. Check the file for syntax errors or missing IP addresses; otherwise, the etcd service might fail to start:

      # vi ${PREFIX}/etcd.conf
  9. On the node that hosts the installation files, update the [etcd] hosts group in the /etc/ansible/hosts inventory file. Remove the old etcd hosts and add the new ones.
  10. Create a tgz file that contains the certificates, the sample configuration file, and the CA, and copy it to the new host:

    # tar -czvf /etc/etcd/generated_certs/${CN}.tgz -C ${PREFIX} .
    # scp /etc/etcd/generated_certs/${CN}.tgz ${CN}:/tmp/
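
For reference, an abridged, hypothetical example of how the lines updated in step 8 might look in the sample etcd.conf, using the new host from this procedure; the existing member names and IPs are placeholders to replace with the values printed by the etcdctl member add command, and all other settings keep the values copied from the existing host:

    ETCD_NAME="etcd0.example.com"
    ETCD_LISTEN_PEER_URLS="https://192.168.55.21:2380"
    ETCD_LISTEN_CLIENT_URLS="https://192.168.55.21:2379"
    ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.55.21:2380"
    ETCD_ADVERTISE_CLIENT_URLS="https://192.168.55.21:2379"
    ETCD_INITIAL_CLUSTER="etcd0.example.com=https://192.168.55.21:2380,<CLUSTERMEMBER1_NAME>=https://<CLUSTERMEMBER1_IP>:2380,<CLUSTERMEMBER2_NAME>=https://<CLUSTERMEMBER2_IP>:2380,<CLUSTERMEMBER3_NAME>=https://<CLUSTERMEMBER3_IP>:2380"
    ETCD_INITIAL_CLUSTER_STATE="existing"
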
Modify the new etcd host
  1. Install iptables-services to provide iptables utilities to open the required ports for etcd:

    # yum install -y iptables-services
  2. Create the OS_FIREWALL_ALLOW firewall rules to allow etcd to communicate:

    • Port 2379/tcp for clients
    • Port 2380/tcp for peer communication

      # systemctl enable iptables.service --now
      # iptables -N OS_FIREWALL_ALLOW
      # iptables -t filter -I INPUT -j OS_FIREWALL_ALLOW
      # iptables -A OS_FIREWALL_ALLOW -p tcp -m state --state NEW -m tcp --dport 2379 -j ACCEPT
      # iptables -A OS_FIREWALL_ALLOW -p tcp -m state --state NEW -m tcp --dport 2380 -j ACCEPT
      # iptables-save | tee /etc/sysconfig/iptables
      Note

      In this example, a new chain OS_FIREWALL_ALLOW is created, which is the standard naming convention that the OpenShift Container Platform installer uses for firewall rules.

      Warning

      If the cluster is hosted in an IaaS environment, modify the security groups for the instance to allow incoming traffic to those ports as well.

  3. Install etcd:

    # yum install -y etcd

    Ensure that version etcd-2.3.7-4.el7.x86_64 or greater is installed.

  4. Ensure the etcd service is not running by removing the etcd pod definition:

    # mkdir -p /etc/origin/node/pods-stopped
    # mv /etc/origin/node/pods/* /etc/origin/node/pods-stopped/
  5. Remove any etcd configuration and data:

    # rm -Rf /etc/etcd/*
    # rm -Rf /var/lib/etcd/*
  6. Extract the certificates and configuration files:

    # tar xzvf /tmp/etcd0.example.com.tgz -C /etc/etcd/
  7. Start etcd on the new host:

    # systemctl enable etcd --now
  8. Verify that the host is part of the cluster and the current cluster health:

    • If you use the v2 etcd API, run the following command:

      # etcdctl --cert-file=/etc/etcd/peer.crt \
                --key-file=/etc/etcd/peer.key \
                --ca-file=/etc/etcd/ca.crt \
                --peers="https://*master-0.example.com*:2379,\
                https://*master-1.example.com*:2379,\
                https://*master-2.example.com*:2379,\
                https://*etcd0.example.com*:2379"\
                cluster-health
      member 5ee217d19001 is healthy: got healthy result from https://192.168.55.12:2379
      member 2a529ba1840722c0 is healthy: got healthy result from https://192.168.55.8:2379
      member 8b8904727bf526a5 is healthy: got healthy result from https://192.168.55.21:2379
      member ed4f0efd277d7599 is healthy: got healthy result from https://192.168.55.13:2379
      cluster is healthy
    • If you use the v3 etcd API, run the following command:

      # ETCDCTL_API=3 etcdctl --cert="/etc/etcd/peer.crt" \
                --key=/etc/etcd/peer.key \
                --cacert="/etc/etcd/ca.crt" \
                --endpoints="https://*master-0.example.com*:2379,\
                  https://*master-1.example.com*:2379,\
                  https://*master-2.example.com*:2379,\
                  https://*etcd0.example.com*:2379"\
                  endpoint health
      https://master-0.example.com:2379 is healthy: successfully committed proposal: took = 5.011358ms
      https://master-1.example.com:2379 is healthy: successfully committed proposal: took = 1.305173ms
      https://master-2.example.com:2379 is healthy: successfully committed proposal: took = 1.388772ms
      https://etcd0.example.com:2379 is healthy: successfully committed proposal: took = 1.498829ms
Modify each OpenShift Container Platform master
  1. Modify the master configuration in the etcdClientInfo section of the /etc/origin/master/master-config.yaml file on every master. Add the new etcd host to the list of etcd servers that OpenShift Container Platform uses to store data, and remove any failed etcd hosts:

    etcdClientInfo:
      ca: master.etcd-ca.crt
      certFile: master.etcd-client.crt
      keyFile: master.etcd-client.key
      urls:
        - https://master-0.example.com:2379
        - https://master-1.example.com:2379
        - https://master-2.example.com:2379
        - https://etcd0.example.com:2379
  2. Restart the master API and controllers services:

    • On every master:

      # master-restart api
      # master-restart controllers
      Warning

      The number of etcd nodes must be odd, so you must add at least two hosts.

  3. If you use Flannel, modify the flanneld service configuration located at /etc/sysconfig/flanneld on every OpenShift Container Platform host to include the new etcd host:

    FLANNEL_ETCD_ENDPOINTS=https://master-0.example.com:2379,https://master-1.example.com:2379,https://master-2.example.com:2379,https://etcd0.example.com:2379
  4. Restart the flanneld service:

    # systemctl restart flanneld.service
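
Optionally, confirm from any master that the control plane is healthy after the restarts. For example, check that the API responds; the componentstatuses output typically also lists the health of each etcd server configured above, although its exact contents depend on your release:

    # oc get nodes
    # oc get componentstatuses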