
7.2. Guest Clusters


This refers to Red Hat Enterprise Linux Cluster/HA running inside virtualized guests on a variety of virtualization platforms. In this use case, Red Hat Enterprise Linux Clustering/HA is primarily used to make the applications running inside the guests highly available. This use case is similar to how Red Hat Enterprise Linux Clustering/HA has always been used on traditional bare-metal hosts; the difference is that Clustering runs inside guests instead.
The following is a list of virtualization platforms and the level of support currently available for running guest clusters using Red Hat Enterprise Linux Cluster/HA. In the list below, Red Hat Enterprise Linux 6 guests encompass both the High Availability Add-On (core clustering) and the Resilient Storage Add-On (GFS2, clvmd, and cmirror).
  • Red Hat Enterprise Linux 5.3+ Xen hosts fully support running guest clusters where the guest operating systems are also Red Hat Enterprise Linux 5.3 or above:
    • Xen guest clusters can use either fence_xvm or fence_scsi for guest fencing.
    • Use of fence_xvm/fence_xvmd requires a host cluster to be running in order to support fence_xvmd, and fence_xvm must be used as the guest fencing agent on all clustered guests.
    • Shared storage can be provided by either iSCSI or Xen shared block devices, backed either by host block storage or by file-backed storage (raw images).
  • Red Hat Enterprise Linux 5.5+ KVM hosts do not support running guest clusters.
  • Red Hat Enterprise Linux 6.1+ KVM hosts support running guest clusters where the guest operating systems are either Red Hat Enterprise Linux 6.1+ or Red Hat Enterprise Linux 5.6+. Red Hat Enterprise Linux 4 guests are not supported.
    • Mixing bare metal cluster nodes with cluster nodes that are virtualized is permitted.
    • Red Hat Enterprise Linux 5.6+ guest clusters can use either fence_xvm or fence_scsi for guest fencing.
    • Red Hat Enterprise Linux 6.1+ guest clusters can use either fence_xvm (in the fence-virt package) or fence_scsi for guest fencing.
    • Red Hat Enterprise Linux 6.1+ KVM hosts must use fence_virtd if the guest cluster is using fence_virt or fence_xvm as the fence agent. If the guest cluster is using fence_scsi, then fence_virtd on the hosts is not required.
    • fence_virtd can operate in three modes (a minimal host-side configuration sketch follows this list):
      • Standalone mode, where the host-to-guest mapping is hard-coded and live migration of guests is not allowed
      • Using the OpenAIS Checkpoint service to track live migrations of clustered guests. This requires a host cluster to be running.
      • Using the Qpid Management Framework (QMF) provided by the libvirt-qpid package. This utilizes QMF to track guest migrations without requiring a full host cluster to be present.
    • Shared storage can be provided by either iSCSI or KVM shared block devices, backed either by host block storage or by file-backed storage (raw images).
  • Red Hat Enterprise Virtualization Management (RHEV-M) versions 2.2+ and 3.0 currently support Red Hat Enterprise Linux 5.6+ and Red Hat Enterprise Linux 6.1+ clustered guests.
    • Guest clusters must be homogeneous (either all Red Hat Enterprise Linux 5.6+ guests or all Red Hat Enterprise Linux 6.1+ guests).
    • Mixing bare metal cluster nodes with cluster nodes that are virtualized is permitted.
    • Fencing is provided by fence_scsi in RHEV-M 2.2+ and by both fence_scsi and fence_rhevm in RHEV-M 3.0. Fencing is supported using fence_scsi as described below:
      • Use of fence_scsi with iSCSI storage is limited to iSCSI servers that support SCSI 3 Persistent Reservations with the PREEMPT AND ABORT command. Not all iSCSI servers support this functionality. Check with your storage vendor to ensure that your server is compliant with SCSI 3 Persistent Reservation support. Note that the iSCSI server shipped with Red Hat Enterprise Linux does not presently support SCSI 3 Persistent Reservations, so it is not suitable for use with fence_scsi.
  • VMware vSphere 4.1, VMware vCenter 4.1, and VMware ESX and ESXi 4.1 support running guest clusters where the guest operating systems are Red Hat Enterprise Linux 5.7+ or Red Hat Enterprise Linux 6.2+. Version 5.0 of VMware vSphere, vCenter, ESX, and ESXi is also supported; however, due to an incomplete WSDL schema provided in the initial release of VMware vSphere 5.0, the fence_vmware_soap utility does not work on the default install. Refer to the Red Hat Knowledgebase at https://access.redhat.com/knowledge/ for updated procedures to fix this issue.
    • Guest clusters must be homogeneous (either all Red Hat Enterprise Linux 5.7+ guests or all Red Hat Enterprise Linux 6.1+ guests).
    • Mixing bare metal cluster nodes with cluster nodes that are virtualized is permitted.
    • The fence_vmware_soap agent requires the third-party VMware Perl APIs. This software package must be downloaded from VMware's web site and installed onto the Red Hat Enterprise Linux clustered guests.
    • Alternatively, fence_scsi can be used to provide fencing as described below.
    • Shared storage can be provided by either iSCSI or VMware raw shared block devices.
    • Use of VMware ESX guest clusters is supported using either fence_vmware_soap or fence_scsi (a fence_vmware_soap verification example follows this list).
  • Use of Hyper-V guest clusters is unsupported at this time.
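The following is a minimal sketch, assuming a Red Hat Enterprise Linux 6.1+ KVM host running fence_virtd in standalone mode (libvirt backend) and a guest cluster fenced with fence_xvm. The bridge name, key file path, and guest name are illustrative assumptions, not fixed values; fence_virtd -c can be run on the host to generate this configuration interactively, and the fence_virt.conf(5) man page documents the available options.

    # /etc/fence_virt.conf on the KVM host (illustrative values)
    fence_virtd {
            listener = "multicast";    # guests reach the host over multicast
            backend = "libvirt";       # standalone mode; no host cluster required
    }

    listeners {
            multicast {
                    key_file = "/etc/cluster/fence_xvm.key";  # shared key, copied to every clustered guest
                    interface = "br0";                        # bridge reachable by the guests (assumption)
            }
    }

    backends {
            libvirt {
                    uri = "qemu:///system";
            }
    }

    <!-- excerpt from /etc/cluster/cluster.conf inside the guests -->
    <clusternode name="guest1" nodeid="1">
            <fence>
                    <method name="1">
                            <device name="xvmfence" domain="guest1"/>  <!-- libvirt domain name of this guest -->
                    </method>
            </fence>
    </clusternode>
    <fencedevices>
            <fencedevice agent="fence_xvm" name="xvmfence"/>
    </fencedevices>

Once fence_virtd is running on the host and the key file is in place on the guests, running fence_xvm -o list from a guest should return the domains known to fence_virtd, which is a quick way to confirm connectivity before relying on fencing.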
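In VMware environments, fence_vmware_soap can be exercised from a clustered guest before it is added to cluster.conf. The commands below are a sketch only; the vCenter address, credentials, and virtual machine name are placeholder assumptions and must match your environment.

    # List the virtual machines known to vCenter (placeholder address and credentials)
    fence_vmware_soap -a vcenter.example.com -l fenceuser -p fencepass -z -o list

    # Check the power status of the virtual machine backing one cluster node
    fence_vmware_soap -a vcenter.example.com -l fenceuser -p fencepass -z -o status -n rhel6-guest1

If both commands succeed, the same parameters can be carried over into the fencedevice definition in the guests' cluster.conf.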

7.2.1. Using fence_scsi and iSCSI Shared Storage

  • In all of the above virtualization environments, fence_scsi and iSCSI storage can be used in place of native shared storage and the native fence devices (a configuration sketch follows this list).
  • fence_scsi can be used to provide I/O fencing for shared storage provided over iSCSI if the iSCSI target properly supports SCSI 3 persistent reservations and the PREEMPT AND ABORT command. Check with your storage vendor to determine if your iSCSI solution supports the above functionality.
  • The iSCSI server software shipped with Red Hat Enterprise Linux does not support SCSI 3 persistent reservations; therefore, it cannot be used with fence_scsi. It is suitable for use as a shared storage solution in conjunction with other fence devices like fence_vmware or fence_rhevm, however.
  • If using fence_scsi on all guests, a host cluster is not required (in the Red Hat Enterprise Linux 5 Xen/KVM and Red Hat Enterprise Linux 6 KVM host use cases).
  • If fence_scsi is used as the fence agent, all shared storage must be over iSCSI. Mixing of iSCSI and native shared storage is not permitted.
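As a concrete illustration of the points above, the sketch below shows one way a guest cluster node might be configured for fence_scsi with unfencing in /etc/cluster/cluster.conf, and how sg_persist (from the sg3_utils package) can be used to check whether an iSCSI LUN reports SCSI-3 persistent reservation capability. The device path and node name are assumptions for the example; the fence_scsi(8) man page documents the full set of options.

    # Verify that the iSCSI LUN advertises SCSI-3 persistent reservation support
    # (/dev/sdb is an assumed device path for the iSCSI LUN)
    sg_persist --in --report-capabilities /dev/sdb

    <!-- excerpt from /etc/cluster/cluster.conf: fence_scsi with unfencing -->
    <clusternode name="guest1" nodeid="1">
            <fence>
                    <method name="1">
                            <device name="scsifence"/>
                    </method>
            </fence>
            <unfence>
                    <device name="scsifence" action="on"/>  <!-- re-registers this node's key at startup -->
            </unfence>
    </clusternode>
    <fencedevices>
            <fencedevice agent="fence_scsi" name="scsifence"/>
    </fencedevices>

The unfence section is needed so that a node whose registration key has been removed by fencing re-registers with the shared device when it restarts and rejoins the cluster.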

7.2.2. General Recommendations

  • As stated above, it is recommended to upgrade both the hosts and guests to the latest Red Hat Enterprise Linux packages before using virtualization capabilities, as there have been many enhancements and bug fixes.
  • Mixing virtualization platforms (hypervisors) underneath guest clusters is not supported. All underlying hosts must use the same virtualization technology.
  • It is not supported to run all guests in a guest cluster on a single physical host, as this provides no high availability in the event of a single host failure. This configuration can, however, be used for prototype or development purposes.
  • Best practices include the following:
    • It is not necessary to have a single host per guest, but this configuration does provide the highest level of availability since a host failure only affects a single node in the cluster. If you have a 2-to-1 mapping (two guests in a single cluster per physical host), a single host failure results in two guest failures. Therefore, it is advisable to get as close to a 1-to-1 mapping as possible.
    • Mixing multiple independent guest clusters on the same set of physical hosts is not supported at this time when using the fence_xvm/fence_xvmd or fence_virt/fence_virtd fence agents.
    • Mixing multiple independent guest clusters on the same set of physical hosts will work if using fence_scsi + iSCSI storage or if using fence_vmware + VMware (ESX/ESXi and vCenter).
    • Running non-clustered guests on the same set of physical hosts as a guest cluster is supported, but since hosts will physically fence each other if a host cluster is configured, these other guests will also be terminated during a host fencing operation.
    • Host hardware should be provisioned such that memory or virtual CPU overcommit is avoided. Overcommitting memory or virtual CPU will result in performance degradation. If the performance degradation becomes critical the cluster heartbeat could be affected, which may result in cluster failure.