3.2. Red Hat Gluster Storage Console


Issues related to Red Hat Gluster Storage Console

  • BZ# 1246047
    If a logical network is attached to an interface that uses the DHCP boot protocol and the DHCP server responds slowly, the IP address is not assigned to the interface when the network configuration is saved.
    Workaround: Click Refresh Capabilities on the Hosts tab to refresh the network details; the IP address is then correctly assigned to the interface.
  • BZ#1164662
    The Trends tab in the Red Hat Gluster Storage Console appears empty after the oVirt engine restarts. This is because the Red Hat Gluster Storage Console UI plug-in fails to load the first time the oVirt engine is restarted.
    Workaround: Refresh (F5) the browser page to load the Trends tab.
  • BZ#1167305
    The Trends tab on the Red Hat Gluster Storage Console displays the brick utilization graphs but not the thin-pool utilization graphs. Currently, the UI plug-in has no mechanism to detect whether a volume is provisioned using the thin provisioning feature.
  • BZ#1167572
    On editing the cluster version in the Edit Cluster dialog box on the Red Hat Gluster Storage Console, the compatibility version field is loaded with the highest available compatibility version by default, instead of the current version of the cluster.
    Workaround: Select the correct version of the cluster in the Edit Cluster dialog box before clicking on the OK button.
  • BZ# 1054366
    In Internet Explorer 10, while creating a new cluster with Compatibility version 3.3, the Host drop-down list does not open correctly. Also, if there is only one item, the drop-down list gets hidden when the user clicks on it.
  • BZ# 1053395
    In Internet Explorer, while performing a task, an error message Unable to evaluate payload is displayed.
  • BZ# 1056372
    When no migration is in progress, an incorrect error message is displayed for the stop migrate operation.
  • BZ# 1048426
    When there are many entries in the Rebalance Status and Remove Brick Status windows, the column names scroll up along with the entries when the window is scrolled.
    Workaround: Scroll back to the top of the Rebalance Status or Remove Brick Status window to view the column names.
  • BZ# 1053112
    When large files are being migrated, the stop migrate task does not stop the migration immediately but only after the migration completes.
  • BZ# 1040310
    If the Rebalance Status dialog box is open in the Red Hat Gluster Storage Console while Rebalance is stopped from the Command Line Interface, the status is updated to Stopped. However, if the Rebalance Status dialog box is not open, the task status is displayed as Unknown, because the status update relies on the gluster Command Line Interface.
  • BZ# 838329
    When an incorrect create request is sent through the REST API, the error message that is displayed contains the internal package structure.
  • BZ# 1049863
    When Rebalance is running on multiple volumes, viewing the advanced details of a brick fails and the error message could not fetch brick details, please try again later is displayed in the Brick Advanced Details dialog box.
  • BZ# 1024184
    If there is an error while adding bricks, all the "." characters of the FQDN or IP address in the error message are replaced with "_" characters.
  • BZ# 975399
    When the Gluster daemon service is restarted, the host status does not change from Non-Operational to UP immediately in the Red Hat Gluster Storage Console. The auto-recovery operation that detects changes in Non-Operational hosts runs at a 5-minute interval.
  • BZ# 971676
    While enabling or disabling Gluster hooks, the error message that is displayed when not all the servers are in the UP state is incorrect.
  • BZ# 1057122
    While configuring the Red Hat Gluster Storage Console to use a remote database server, the input provided for the Database host name validation parameter is treated as No, regardless of whether yes or no is entered.
  • When a remove-brick operation fails on a volume, the Red Hat Gluster Storage node does not allow any other operation on that volume.
    Workaround: Perform commit or stop on the failed remove-brick task before starting another task on the volume.
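    A minimal sketch of the corresponding gluster CLI commands, assuming illustrative volume and brick names:
      # abandon the failed remove-brick task
      gluster volume remove-brick myvol server1:/rhgs/brick1 stop
      # or, if the data migration already completed, commit the removal instead
      gluster volume remove-brick myvol server1:/rhgs/brick1 commit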
  • BZ# 1060991
    In the Red Hat Gluster Storage Console, the Technology Preview warning is not displayed for the stop remove-brick operation.
  • BZ# 1057450
    Brick operations such as adding or removing a brick from the Red Hat Gluster Storage Console fail when Red Hat Gluster Storage nodes in the cluster have multiple FQDNs (Fully Qualified Domain Names).
    Workaround: A host with multiple interfaces should map to the same FQDN for both the Red Hat Gluster Storage Console and the gluster peer probe.
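    For example, each node can be mapped to a single canonical FQDN that is used both for the gluster peer probe and when adding the host in the Console (names and addresses below are illustrative):
      # /etc/hosts on every node in the cluster
      192.168.10.11  rhgs-node1.example.com
      192.168.10.12  rhgs-node2.example.com
      # probe the peer with the same FQDN that is later used in the Console
      gluster peer probe rhgs-node2.example.com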
  • BZ# 1038663
    The framework restricts delete actions for collections from being displayed in the RSDL output.
  • BZ# 1061677
    When the Red Hat Gluster Storage Console detects a remove-brick operation that was started from the Gluster Command Line Interface, the engine does not acquire a lock on the volume, and a Rebalance task is allowed.
    Workaround: Perform commit or stop on the remove-brick operation before starting Rebalance.
  • BZ# 1046055
    While creating a volume, if bricks are added in the root partition, the error message that is displayed does not mention that the Allow bricks in root partition and re-use the bricks by clearing xattrs option needs to be selected to add bricks in the root partition.
    Workaround: Select the Allow bricks in root partition and re-use the bricks by clearing xattrs option to add bricks in the root partition.
  • BZ# 1066130
    Starting Rebalance simultaneously on volumes that span the same set of hosts fails, because the gluster daemon lock is acquired on the participating hosts.
    Workaround: Start Rebalance on the other volume again after the process has started on the first volume.
  • The Trends tab on the Red Hat Gluster Storage Console does not display all the network interfaces available on a host. This limitation exists because the Red Hat Gluster Storage Console UI plug-in does not have this information.
    Workaround: The graphs associated with the hosts are available in the Nagios UI on the Red Hat Gluster Storage Console. You can view the graphs by clicking the Nagios home link.
  • BZ# 1224724
    The Volume tab loads before the dashboard plug-in is loaded. When the dashboard is set as the default tab, the volume sub-tab remains on top of the dashboard tab.
    Workaround: Switch to a different tab and the sub-tab is removed.
  • BZ# 1225826
    In Firefox-38.0-4.el6_6, the check boxes and labels in the Add Brick and Remove Brick dialog boxes are misaligned.
  • BZ# 1228179
    The gluster volume set help-xml command does not list the config.transport option, so the option does not appear in the UI.
    Workaround: Type the option name instead of selecting it from the drop-down list, and enter the desired value in the value field.
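    The equivalent gluster CLI command is sketched below with an illustrative volume name; config.transport typically accepts tcp, rdma, or tcp,rdma:
      gluster volume set myvol config.transport tcp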
  • BZ# 1231723
    Storage devices with disk labels appear as locked on the Storage Devices sub-tab. When a user deletes a brick by removing the LV, VG, PV, and partition, the storage device appears with a lock symbol and the user is unable to create a new brick from it.
    Workaround: Using the CLI, manually create a partition. Clicking Sync on the Storage Devices sub-tab under the host then shows the created partition in the UI. The partition appears as a free device that can be used to create a brick through the Red Hat Gluster Storage Console GUI.
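    A minimal sketch of creating the partition from the CLI, assuming a hypothetical device /dev/sdb (this rewrites the partition table on the device):
      parted --script /dev/sdb mklabel gpt
      parted --script /dev/sdb mkpart primary xfs 1MiB 100%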
  • BZ# 1231725
    The Red Hat Gluster Storage Console cannot detect bricks that are created manually using the CLI and mounted at a location other than /rhgs. Users must manually type the brick directory in the Add Bricks dialog box.
    Workaround: Mount bricks under the /rhgs directory so that they are detected automatically by the Red Hat Gluster Storage Console.
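    A minimal sketch of mounting a brick under /rhgs, assuming hypothetical VG/LV names:
      mkdir -p /rhgs/brick1
      mount /dev/rhgs_vg/rhgs_lv /rhgs/brick1
      # make the mount persistent across reboots
      echo '/dev/rhgs_vg/rhgs_lv /rhgs/brick1 xfs defaults 0 0' >> /etc/fstab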
  • BZ# 1232275
    Blivet provides only partial device details on any major disk failure. The Storage Devices tab does not show some storage devices if the partition table is corrupted.
    Workaround: Clean the corrupted partition table using the dd command. All storage devices are then synced to the UI.
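    A minimal sketch of wiping the partition table with dd, assuming a hypothetical device /dev/sdb; this destroys the partition table, and a GPT disk also keeps a backup header at the end of the disk:
      dd if=/dev/zero of=/dev/sdb bs=512 count=1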
  • BZ# 1233592
    The Force Remove check box appears in the Remove Geo-Replication window even though it is unnecessary. Using the force option is equivalent to not using it, because the Gluster CLI does not provide a force option for removing a geo-replication session.
  • BZ# 1232575
    When performing a search on a specific cluster, the volumes of all clusters that have a name beginning with the selected cluster name are returned.
  • BZ# 1234445
    The task ID corresponding to the previously performed retain/stop remove-brick operation is preserved by the engine. When a user queries the remove-brick status, the engine passes the bricks of both the previous remove-brick operation and the current bricks to the status command, and the UI returns the error Could not fetch remove brick status of volume.
    In Gluster, once a remove-brick operation has been stopped, its status can no longer be obtained.
  • BZ# 1235559
    The same audit log message is used in two cases:
    1. When the current_scheduler value is set to oVirt in Gluster from the CLI.
    2. When the current_scheduler value is set to oVirt in Gluster by the engine.
    The first message should be corrected to mention that the flag is set successfully to oVirt in the CLI.
  • BZ# 1236410
    While syncing snapshots created from the CLI, the engine populates the creation time that is returned from the Gluster CLI. When you create a snapshot from the UI, the engine's current time is recorded as the creation time in the engine database. This leads to a mismatch between the creation times of snapshots created from the engine and from the CLI.
  • BZ# 1238244
    Upgrade is supported from Red Hat Gluster Storage 3.0 to 3.1, but you cannot upgrade from Red Hat Gluster Storage 2.1 to 3.1.
    Workaround: Reinstall Red Hat Gluster Storage 3.1 on existing deployments of 2.1 and import the existing clusters. Refer to the Red Hat Gluster Storage Console Installation Guide for further information.
  • BZ# 1238332
    When the Console does not know that glusterd is not running on the host, removing a brick results in an undetermined state (question mark). When glusterd is started again, the brick remains in the undetermined state. The volume command shows the status as not started, but the remove-brick status command returns null in the status field.
    Workaround: Stop or commit the remove-brick operation from the CLI.
  • BZ# 1238540
    When you create volume snapshots, time zone and time stamp details are appended to the snapshot name. The engine passes only the prefix for the snapshot name. If the master and slave clusters of a geo-replication session are in different time zones (or sometimes even in the same time zone), the snapshot names of the master and slave differ. This causes restoring a snapshot of the master volume to fail, because the snapshot name on the slave volume does not match.
    Workaround: Pause the geo-replication session, identify the respective snapshots for the master and slave volumes, and restore them separately from the gluster CLI.
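    A rough outline of the CLI steps, with illustrative volume, host, and snapshot names:
      # pause the geo-replication session
      gluster volume geo-replication mastervol slavehost::slavevol pause
      # a snapshot can only be restored while the volume is stopped
      gluster volume stop mastervol
      gluster snapshot restore mastervol_snap1
      gluster volume start mastervol
      # restore the corresponding slave snapshot the same way, then resume the session
      gluster volume geo-replication mastervol slavehost::slavevol resume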
  • BZ# 1240627
    There is a time-out for a VDSM call from the oVirt engine. Removing 256 snapshots from a volume causes the engine to time out during the call. The UI shows a network error because the command timed out; however, the snapshots are deleted successfully.
    Workaround: Delete the snapshots in smaller chunks using the Delete option, which supports the deletion of multiple snapshots at once.
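    If the UI continues to time out, the snapshots can also be listed and deleted in smaller batches from the gluster CLI (volume and snapshot names are illustrative):
      gluster snapshot list myvol
      gluster snapshot delete mysnap_1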
  • BZ# 1242128
    Deleting a gluster volume does not remove the /etc/fstab entries for the bricks. A Red Hat Enterprise Linux 7 system may fail to boot if the mount fails for any entry in the /etc/fstab file. If the LVs corresponding to the bricks are deleted but the respective entries in /etc/fstab are not, the system may not boot.
    Workaround:
    1. Ensure that the /etc/fstab entries are removed when the logical volumes are deleted from the system.
    2. If the system fails to boot, start it in emergency mode, enter the root password, remount / as read-write, remove the stale entries from /etc/fstab, save the file, and then reboot.
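    A minimal sketch of the recovery steps from the emergency shell:
      # remount the root file system read-write
      mount -o remount,rw /
      # remove or comment out the stale brick entries, then reboot
      vi /etc/fstab
      reboot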
  • BZ# 1242442
    Restoring a volume to a snapshot changes the volume to use the snapshot bricks mounted at /var/run/gluster/snaps/. However, it does not remove the /etc/fstab entries for the original bricks. This can cause a Red Hat Enterprise Linux 7 system to fail to boot.
    Workaround:
    1. Ensure that the /etc/fstab entries are removed when the logical volumes are deleted from the system.
    2. If the system fails to boot, start it in emergency mode, enter the root password, remount / as read-write, remove the stale entries from /etc/fstab, save the file, and then reboot.
  • BZ# 1243443
    Gluster hook conflicts cannot be resolved when all three conflict types are present: Content + Status + Missing.
    Workaround: Resolve the Content + Missing hook conflict before resolving the Status conflict.
  • BZ# 1243537
    Labels do not show enough information for the graphs shown on the Trends tab. When you select a host in the system tree and switch to the Trends tab, you see two graphs for the mount point '/': one graph for the total space used and another for the inodes used on the disk.
    Workaround:
    1. The graph with the y-axis legend %(Total: ** GiB/TiB) is the graph for total space used.
    2. The graph with the y-axis legend %(Total: number) is the graph for inode usage.
  • BZ# 1244507
    If the meta volume is not already mounted, snapshot schedule creation fails, because the meta volume must be mounted so that CLI-based scheduling can be disabled.
    Workaround: If the meta volume is available, mount it from the CLI, and then create the snapshot schedule in the UI.
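    A minimal sketch of mounting the meta volume from the CLI, assuming the default shared storage volume name and mount point:
      mount -t glusterfs localhost:/gluster_shared_storage /var/run/gluster/shared_storage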
  • BZ# 1246038
    Selection of the Gluster network role is not persistent when changing multiple fields. If you attach this logical network to an interface, it is ignored when you add bricks.
    Workaround: Reconfigure the role for the logical network.
