Chapter 5. Configuring the Object Storage service (swift)

The Red Hat OpenStack Platform (RHOSP) Object Storage service (swift) stores its objects, or data, in containers. Containers are similar to directories in a file system although they cannot be nested. Containers provide an easy way for users to store any kind of unstructured data. For example, objects can include photos, text files, or images. Stored objects are not compressed.

5.1. Object Storage rings

The Object Storage service (swift) uses a data structure called the ring to distribute partition space across the cluster. This partition space is core to the data durability engine in the Object Storage service. With rings, the Object Storage service can quickly and easily synchronize each partition across the cluster.

Rings contain information about Object Storage partitions and how partitions are distributed among the different nodes and disks. When any Object Storage component interacts with data, a quick lookup is performed locally in the ring to determine the possible partitions for each object.

The Object Storage service has three rings to store the following types of data:

  • Account information
  • Containers, to facilitate organizing objects under an account
  • Object replicas
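
To inspect how a deployed ring is laid out, you can run the swift-ring-builder command with no subcommand against a builder file; it prints the partition power, replica count, and device assignments. The following invocation is illustrative only: it assumes that the python3-swift package is installed on the node and uses the ring file location that is referenced later in this chapter.

$ sudo swift-ring-builder \
  /var/lib/config-data/puppet-generated/swift_ringbuilder/etc/swift/object.builder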

5.1.1. Checking cluster health

The Object Storage service (swift) runs many processes in the background to ensure long-term data availability, durability, and persistence. For example:

  • Auditors constantly re-read database and object files and compare them by using checksums to make sure there is no silent bit-rot. Any database or object file that no longer matches its checksum is quarantined and becomes unreadable on that node. The replicators then copy one of the other replicas to make the local copy available again.
  • Objects and files can disappear when you replace disks or nodes or when objects are quarantined. When this happens, replicators copy missing objects or database files to one of the other nodes.

The Object Storage service includes a tool called swift-recon that collects data from all nodes and checks the overall cluster health.

Procedure

  1. Log in to one of the Controller nodes.
  2. Run the following command:

    [tripleo-admin@overcloud-controller-2 ~]$ sudo podman exec -it -u swift swift_object_server /usr/bin/swift-recon -arqlT --md5
    
    ======================================================================
    --> Starting reconnaissance on 3 hosts (object)
    ======================================================================
    [2018-12-14 14:55:47] Checking async pendings
    [async_pending] - No hosts returned valid data.
    ======================================================================
    [2018-12-14 14:55:47] Checking on replication
    [replication_failure] low: 0, high: 0, avg: 0.0, total: 0, Failed: 0.0%, no_result: 0, reported: 3
    [replication_success] low: 0, high: 0, avg: 0.0, total: 0, Failed: 0.0%, no_result: 0, reported: 3
    [replication_time] low: 0, high: 0, avg: 0.0, total: 0, Failed: 0.0%, no_result: 0, reported: 3
    [replication_attempted] low: 0, high: 0, avg: 0.0, total: 0, Failed: 0.0%, no_result: 0, reported: 3
    Oldest completion was 2018-12-14 14:55:39 (7 seconds ago) by 198.51.100.186:6000.
    Most recent completion was 2018-12-14 14:55:42 (4 seconds ago) by 198.51.100.174:6000.
    ======================================================================
    [2018-12-14 14:55:47] Checking load averages
    [5m_load_avg] low: 1, high: 2, avg: 2.1, total: 6, Failed: 0.0%, no_result: 0, reported: 3
    [15m_load_avg] low: 2, high: 2, avg: 2.6, total: 7, Failed: 0.0%, no_result: 0, reported: 3
    [1m_load_avg] low: 0, high: 0, avg: 0.8, total: 2, Failed: 0.0%, no_result: 0, reported: 3
    ======================================================================
    [2018-12-14 14:55:47] Checking ring md5sums
    3/3 hosts matched, 0 error[s] while checking hosts.
    ======================================================================
    [2018-12-14 14:55:47] Checking swift.conf md5sum
    3/3 hosts matched, 0 error[s] while checking hosts.
    ======================================================================
    [2018-12-14 14:55:47] Checking quarantine
    [quarantined_objects] low: 0, high: 0, avg: 0.0, total: 0, Failed: 0.0%, no_result: 0, reported: 3
    [quarantined_accounts] low: 0, high: 0, avg: 0.0, total: 0, Failed: 0.0%, no_result: 0, reported: 3
    [quarantined_containers] low: 0, high: 0, avg: 0.0, total: 0, Failed: 0.0%, no_result: 0, reported: 3
    ======================================================================
    [2018-12-14 14:55:47] Checking time-sync
    3/3 hosts matched, 0 error[s] while checking hosts.
    ======================================================================
  3. Optional: Use the --all option to return additional output.

    This command queries all servers on the ring for the following data:

    • Async pendings: If the cluster load is too high and processes cannot update database files fast enough, some updates occur asynchronously. These numbers decrease over time.
    • Replication metrics: Review the replication timestamps; full replication passes happen frequently with few errors. An old entry, for example, an entry with a timestamp from six months ago, indicates that replication on the node has not completed in the last six months.
    • Ring md5sums: This ensures that all ring files are consistent on all nodes.
    • swift.conf md5sums: This ensures that all configuration files are consistent on all nodes.
    • Quarantined files: There must be no, or very few, quarantined files for all nodes.
    • Time-sync: All nodes must be synchronized.
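
    For example, to run the full set of checks from a Controller node, you can add the --all option to the command from step 2:

    $ sudo podman exec -it -u swift swift_object_server /usr/bin/swift-recon --all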

5.1.2. Increasing ring partition power

The ring power determines the partition to which a resource, such as an account, container, or object, is mapped. The partition is included in the path under which the resource is stored in a back-end file system. Therefore, changing the partition power requires relocating resources to new paths in the back-end file systems.

In a heavily populated cluster, a relocation process is time consuming. To avoid downtime, relocate resources while the cluster is still operating. You must do this without temporarily losing access to data or compromising the performance of processes, such as replication and auditing. For assistance with increasing ring partition power, contact Red Hat support.

5.1.3. Partition power recommendation for the Object Storage service

When using separate Red Hat OpenStack Platform (RHOSP) Object Storage service (swift) nodes, use a higher partition power value.

The Object Storage service distributes data across disks and nodes using modified hash rings. There are three rings by default: one for accounts, one for containers, and one for objects. Each ring uses a fixed parameter called partition power. This parameter sets the maximum number of partitions that can be created.

The partition power parameter is important and can only be changed for new containers and their objects. As such, it is important to set this value before initial deployment.

The default partition power value is 10 for environments that RHOSP director deploys. This is a reasonable value for smaller deployments, especially if you only plan to use disks on the Controller nodes for the Object Storage service.

The following table helps you to select an appropriate partition power if you use three replicas:

Table 5.1. Appropriate partition power values per number of available disks

Partition Power    Maximum number of disks
10                 ~ 35
11                 ~ 75
12                 ~ 150
13                 ~ 250
14                 ~ 500

Important

Setting an excessively high partition power value (for example, 14 for only 40 disks) negatively impacts replication times.
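
As a rough illustration, each ring creates 2^(partition power) partitions, and each partition is stored once per replica. A common guideline, which approximately reproduces the table above, is to keep on the order of 100 partition replicas per disk; the table remains the authoritative recommendation. The following shell sketch applies that guideline for three replicas:

replicas=3
for power in 10 11 12 13 14; do
  partitions=$((2 ** power))
  # Rough upper bound on disks, assuming ~100 partition replicas per disk.
  echo "partition power ${power}: ${partitions} partitions, ~$((partitions * replicas / 100)) disks"
done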

To set the partition power, use the following resource:

parameter_defaults:
  SwiftPartPower: 11
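
For example, if you save this snippet in an environment file, include that file when you deploy the overcloud. The file name in this example is illustrative:

$ openstack overcloud deploy --templates \
  -e /home/stack/templates/swift-partition-power.yaml \
  [... other environment files ...]
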
Tip

You can also configure an additional object server ring for new containers. This is useful if you want to add more disks to an Object Storage service deployment that initially uses a low partition power.

5.1.4. Custom rings

As technology advances and demands for storage capacity increase, creating custom rings is a way to update existing Object Storage clusters.

When you add new nodes to a cluster, their characteristics might differ from those of the original nodes. Without custom adjustments, the larger capacity of the new nodes may be underused. Or, if weights change in the rings, data dispersion can become uneven, which reduces safety.

Automation might not keep pace with future technology trends. For example, some older Object Storage clusters in use today originated before SSDs were available.

The ring builder helps manage Object Storage as clusters grow and technologies evolve. For assistance with creating custom rings, contact Red Hat support.

5.2. Customizing the Object Storage service

Depending on the requirements of your Red Hat OpenStack Platform (RHOSP) environment, you might want to customize some of the default settings of the Object Storage service (swift) to optimize your deployment performance.

5.2.1. Configuring fast-post

By default, the Object Storage service (swift) copies an object whole whenever any part of its metadata changes. You can prevent this by using the fast-post feature. The fast-post feature saves time when you change the content types of multiple large objects.

To enable the fast-post feature, disable the object_post_as_copy option on the Object Storage proxy service.

Procedure

  1. Edit swift_params.yaml:

    $ cat > swift_params.yaml << EOF
    parameter_defaults:
      ExtraConfig:
        swift::proxy::copy::object_post_as_copy: False
    EOF
  2. Include the parameter file when you deploy or update the overcloud:

    $ openstack overcloud deploy [... previous args ...] \
      -e swift_params.yaml

5.2.2. Enabling at-rest encryption

By default, objects uploaded to the Object Storage service (swift) are unencrypted. Because of this, it is possible to access objects directly from the file system. This can present a security risk if disks are not properly erased before they are discarded. You can enable encryption of at-rest Object Storage objects. For more information, see Encrypting Object Storage (swift) at-rest objects in Managing secrets with the Key Manager service.

5.2.3. Deploying a standalone Object Storage service cluster

You can use the composable role concept to deploy a standalone Object Storage service (swift) cluster with the bare minimum of additional services, for example, OpenStack Identity service (keystone) or HAProxy.

Procedure

  1. Copy the roles_data.yaml from /usr/share/openstack-tripleo-heat-templates.
  2. Edit the new file.
  3. Remove unneeded controller roles, for example Aodh*, Ceilometer*, Ceph*, Cinder*, Glance*, Heat*, Ironic*, Manila*, Nova*, Octavia*, Swift*.
  4. Locate the ObjectStorage role within roles_data.yaml.
  5. Copy this role to a new role within that same file and name it ObjectProxy.
  6. Replace SwiftStorage with SwiftProxy in this role.

    The example roles_data.yaml file below shows sample roles.

    - name: Controller
      description: |
        Controller role that has all the controller services loaded and handles
        Database, Messaging and Network functions.
      CountDefault: 1
      tags:
        - primary
        - controller
      networks:
        - External
        - InternalApi
        - Storage
        - StorageMgmt
        - Tenant
      HostnameFormatDefault: '%stackname%-controller-%index%'
      ServicesDefault:
        - OS::TripleO::Services::AuditD
        - OS::TripleO::Services::CACerts
        - OS::TripleO::Services::CertmongerUser
        - OS::TripleO::Services::Clustercheck
        - OS::TripleO::Services::Docker
        - OS::TripleO::Services::Ec2Api
        - OS::TripleO::Services::Etcd
        - OS::TripleO::Services::HAproxy
        - OS::TripleO::Services::Keepalived
        - OS::TripleO::Services::Kernel
        - OS::TripleO::Services::Keystone
        - OS::TripleO::Services::Memcached
        - OS::TripleO::Services::MySQL
        - OS::TripleO::Services::MySQLClient
        - OS::TripleO::Services::Ntp
        - OS::TripleO::Services::Pacemaker
        - OS::TripleO::Services::RabbitMQ
        - OS::TripleO::Services::Securetty
        - OS::TripleO::Services::Snmp
        - OS::TripleO::Services::Sshd
        - OS::TripleO::Services::Timezone
        - OS::TripleO::Services::TripleoFirewall
        - OS::TripleO::Services::TripleoPackages
        - OS::TripleO::Services::Vpp

    - name: ObjectStorage
      CountDefault: 1
      description: |
        Swift Object Storage node role
      networks:
        - InternalApi
        - Storage
        - StorageMgmt
      disable_upgrade_deployment: True
      ServicesDefault:
        - OS::TripleO::Services::AuditD
        - OS::TripleO::Services::CACerts
        - OS::TripleO::Services::CertmongerUser
        - OS::TripleO::Services::Collectd
        - OS::TripleO::Services::Docker
        - OS::TripleO::Services::FluentdClient
        - OS::TripleO::Services::Kernel
        - OS::TripleO::Services::MySQLClient
        - OS::TripleO::Services::Ntp
        - OS::TripleO::Services::Securetty
        - OS::TripleO::Services::SensuClient
        - OS::TripleO::Services::Snmp
        - OS::TripleO::Services::Sshd
        - OS::TripleO::Services::SwiftRingBuilder
        - OS::TripleO::Services::SwiftStorage
        - OS::TripleO::Services::Timezone
        - OS::TripleO::Services::TripleoFirewall
        - OS::TripleO::Services::TripleoPackages

    - name: ObjectProxy
      CountDefault: 1
      description: |
        Swift Object proxy node role
      networks:
        - InternalApi
        - Storage
        - StorageMgmt
      disable_upgrade_deployment: True
      ServicesDefault:
        - OS::TripleO::Services::AuditD
        - OS::TripleO::Services::CACerts
        - OS::TripleO::Services::CertmongerUser
        - OS::TripleO::Services::Collectd
        - OS::TripleO::Services::Docker
        - OS::TripleO::Services::FluentdClient
        - OS::TripleO::Services::Kernel
        - OS::TripleO::Services::MySQLClient
        - OS::TripleO::Services::Ntp
        - OS::TripleO::Services::Securetty
        - OS::TripleO::Services::SensuClient
        - OS::TripleO::Services::Snmp
        - OS::TripleO::Services::Sshd
        - OS::TripleO::Services::SwiftRingBuilder
        - OS::TripleO::Services::SwiftProxy
        - OS::TripleO::Services::Timezone
        - OS::TripleO::Services::TripleoFirewall
        - OS::TripleO::Services::TripleoPackages
  7. Deploy the overcloud with your regular openstack deploy command, including the new roles.

    $ openstack overcloud deploy --templates -r roles_data.yaml -e [...]

5.2.4. Using external SAN disks

By default, when Red Hat OpenStack Platform (RHOSP) director deploys the Object Storage service (swift), Object Storage is configured and optimized to use independent local disks. This configuration ensures that the workload is distributed across all disks, which helps minimize performance impact during node failure or other system issues.

In performance-impacting events, an environment that uses a single SAN can experience decreased performance across all LUNs. The Object Storage service cannot mitigate performance issues in an environment that uses SAN disks. Therefore, use additional local disks for Object Storage to meet performance and disk space requirements. For more information, see Disk recommendation for the Object Storage service.

Using an external SAN for Object Storage requires evaluation on a per-case basis. For more information, contact Red Hat Support.

Important

If you choose to use an external SAN for Object Storage, be aware of the following conditions:

  • Red Hat does not provide support for issues related to performance that result from using an external SAN for Object Storage.
  • Red Hat does not provide support for issues that arise outside of the core Object Storage service offering. For support with high availability and performance, contact your storage vendor.
  • Red Hat does not test SAN solutions with the Object Storage service. For more information about compatibility, guidance, and support for third-party products, contact your storage vendor.
  • Red Hat recommends that you evaluate and test performance demands with your deployment. To confirm that your SAN deployment is tested, supported, and meets your performance requirements, contact your storage vendor.

Procedure

  • This template is an example of how to use two devices (/dev/mapper/vdb and /dev/mapper/vdc) for Object Storage:

    parameter_defaults:
      SwiftMountCheck: true
      SwiftUseLocalDir: false
      SwiftRawDisks: {"vdb": {"base_dir":"/dev/mapper/"}, "vdc": {"base_dir":"/dev/mapper/"}}

5.2.5. Disk recommendation for the Object Storage service

Use one or more separate, local disks for the Red Hat OpenStack Platform (RHOSP) Object Storage service (swift).

By default, RHOSP director uses the directory /srv/node/d1 on the system disk for the Object Storage service. On the Controller, this disk is also used by other services, and the disk can become a performance bottleneck.

The following example is an excerpt from a RHOSP Orchestration service (heat) custom environment file. On each Controller node, the Object Storage service uses two separate disks. The entirety of both disks contains an XFS file system:

parameter_defaults:
  SwiftRawDisks: {"sdb": {}, "sdc": {}}

SwiftRawDisks defines each storage disk on the node. This example defines both sdb and sdc disks on each Controller node.

Important

When configuring multiple disks, ensure that the Bare Metal service (ironic) uses the intended root disk.
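
One way to ensure this is to set a root device hint on each node before you provision it. The following command is a general sketch; the node name and the serial number are placeholders for values from your environment:

$ openstack baremetal node set <node> \
  --property root_device='{"serial": "<root_disk_serial>"}'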

5.3. Adding or removing Object Storage nodes

To add new Object Storage (swift) nodes to your cluster, you must increase the node count, update the rings, and synchronize the changes. You can increase the node count by adding nodes to the overcloud or by scaling up bare-metal nodes.

To remove Object Storage nodes from your cluster, you can perform a simple removal or an incremental removal, depending on the quantities of data in the cluster.

5.3.1. Adding nodes to the overcloud

You can add more nodes to your overcloud.

Note

A fresh installation of Red Hat OpenStack Platform (RHOSP) does not include certain updates, such as security errata and bug fixes. As a result, if you are scaling up a connected environment that uses the Red Hat Customer Portal or Red Hat Satellite Server, RPM updates are not applied to new nodes. To apply the latest updates to the new nodes, update the overcloud after you scale up.

Procedure

  1. Create a new JSON file called newnodes.json that contains details of the new node that you want to register:

    {
      "nodes":[
        {
            "mac":[
                "dd:dd:dd:dd:dd:dd"
            ],
            "cpu":"4",
            "memory":"6144",
            "disk":"40",
            "arch":"x86_64",
            "pm_type":"ipmi",
            "pm_user":"admin",
            "pm_password":"p@55w0rd!",
            "pm_addr":"192.02.24.207"
        },
        {
            "mac":[
                "ee:ee:ee:ee:ee:ee"
            ],
            "cpu":"4",
            "memory":"6144",
            "disk":"40",
            "arch":"x86_64",
            "pm_type":"ipmi",
            "pm_user":"admin",
            "pm_password":"p@55w0rd!",
            "pm_addr":"192.02.24.208"
        }
      ]
    }
  2. Log in to the undercloud host as the stack user.
  3. Source the stackrc undercloud credentials file:

    $ source ~/stackrc
  4. Register the new nodes:

    $ openstack overcloud node import newnodes.json
  5. Launch the introspection process for each new node:

    $ openstack overcloud node introspect \
     --provide <node_1> [<node_2>] [<node_n>]
    • Use the --provide option to reset all the specified nodes to an available state after introspection.
    • Replace <node_1>, <node_2>, and all nodes up to <node_n> with the UUID of each node that you want to introspect.
  6. Configure the image properties for each new node:

    $ openstack overcloud node configure <node>

5.3.2. Scaling up bare-metal nodes

To increase the count of bare-metal nodes in an existing overcloud, increment the node count in the overcloud-baremetal-deploy.yaml file and redeploy the overcloud.

Procedure

  1. Log in to the undercloud host as the stack user.
  2. Source the stackrc undercloud credentials file:

    $ source ~/stackrc
  3. Open the overcloud-baremetal-deploy.yaml node definition file that you use to provision your bare-metal nodes.
  4. Increment the count parameter for the roles that you want to scale up. For example, the following configuration increases the Object Storage node count to 4:

    - name: Controller
      count: 3
    - name: Compute
      count: 10
    - name: ObjectStorage
      count: 4
  5. Optional: Configure predictive node placement for the new nodes. For example, use the following configuration to provision a new Object Storage node on node03:

    - name: ObjectStorage
      count: 4
      instances:
      - hostname: overcloud-objectstorage-0
        name: node00
      - hostname: overcloud-objectstorage-1
        name: node01
      - hostname: overcloud-objectstorage-2
        name: node02
      - hostname: overcloud-objectstorage-3
        name: node03
  6. Optional: Define any other attributes that you want to assign to your new nodes. For more information about the properties you can use to configure node attributes in your node definition file, see Bare-metal node provisioning attributes.
  7. If you use the Object Storage service (swift) and the whole disk overcloud image, overcloud-hardened-uefi-full, configure the size of the /srv partition based on the size of your disk and your storage requirements for /var and /srv. For more information, see Configuring whole disk partitions for the Object Storage service.
  8. Provision the overcloud nodes:

    $ openstack overcloud node provision \
      --stack <stack> \
      --network-config \
      --output <deployment_file> \
      /home/stack/templates/overcloud-baremetal-deploy.yaml
    • Replace <stack> with the name of the stack for which the bare-metal nodes are provisioned. If not specified, the default is overcloud.
    • Include the --network-config argument to provide the network definitions to the cli-overcloud-node-network-config.yaml Ansible playbook.
    • Replace <deployment_file> with the name of the heat environment file to generate for inclusion in the deployment command, for example /home/stack/templates/overcloud-baremetal-deployed.yaml.

      Note

      If you upgraded from Red Hat OpenStack Platform 16.2 to 17.1, you must include the YAML file that you created or updated during the upgrade process in the openstack overcloud node provision command. For example, use the /home/stack/tripleo-[stack]-baremetal-deploy.yaml file instead of the /home/stack/templates/overcloud-baremetal-deployed.yaml file. For more information, see Performing the overcloud adoption and preparation in Framework for upgrades (16.2 to 17.1).

  9. Monitor the provisioning progress in a separate terminal. When provisioning is successful, the node state changes from available to active:

    $ watch openstack baremetal node list
  10. Add the generated overcloud-baremetal-deployed.yaml file to the stack with your other environment files and deploy the overcloud:

    $ openstack overcloud deploy --templates \
      -e [your environment files] \
      -e /home/stack/templates/overcloud-baremetal-deployed.yaml \
      --deployed-server \
      --disable-validations \
      ...

5.3.3. Defining dedicated Object Storage nodes

Dedicate additional nodes to the Red Hat OpenStack Platform (RHOSP) Object Storage service to improve performance.

If you are dedicating additional nodes to the Object Storage service, edit the custom roles_data.yaml file to remove the Object Storage service entry from the Controller node. Specifically, remove the following line from the ServicesDefault list of the Controller role:

    - OS::TripleO::Services::SwiftStorage
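
After you edit the custom roles file, include it when you deploy the overcloud, as in the standalone cluster example earlier in this chapter. The file path in this example is illustrative:

$ openstack overcloud deploy --templates \
  -r /home/stack/templates/roles_data.yaml \
  -e [...]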

5.3.4. Updating and rebalancing the Object Storage rings

The Object Storage service (swift) requires the same ring files on all Controller and Object Storage nodes. If a Controller node or Object Storage node is replaced, added, or removed, you must sync the ring files after an overcloud update to ensure that the cluster continues to function properly.

Procedure

  1. Log in to the undercloud as the stack user and create a temporary directory:

    $ mkdir temp && cd temp/
  2. Download the overcloud ring files from one of the previously existing nodes (Controller 0 in this example) to the new directory:

    $ ssh tripleo-admin@overcloud-controller-0.ctlplane 'sudo tar -czvf - \
      /var/lib/config-data/puppet-generated/swift_ringbuilder/etc/swift/{*.builder,*.ring.gz,backups/*.builder}' \
      > swift-rings.tar.gz
  3. Extract the rings and change into the ring subdirectory:

    $ tar xzvf swift-rings.tar.gz && cd \
    var/lib/config-data/puppet-generated/swift_ringbuilder/etc/swift/
  4. Collect the values for the following variables according to your device details:

    • <device_name>:

      $ openstack baremetal introspection data save \
        <node_name> | jq ".inventory.disks"
    • <node_ip>:

      $ metalsmith <node_name> show
    • <port>: The default port is 600x. If you altered the default, use the applicable port.
    • <builder_file>: The builder file name from step 3.
    • <weight> and <zone> variables are user-defined.
  5. Use swift-ring-builder to add and update the new node to the existing rings. Replace the variables according to the device details. A worked example with sample values follows this procedure.

    Note

    You must install the python3-swift RPM to use the swift-ring-builder command.

    $ swift-ring-builder etc/swift/<builder_file> \
      add <zone>-<node_ip>:<port>/<device_name> <weight>
  6. Rebalance the ring to ensure that the new device is used:

    $ swift-ring-builder etc/swift/<builder_file> rebalance
  7. Upload the modified ring files to the Controller nodes and ensure that these ring files are used. Use a script, similar to the following example, to distribute ring files:

    #!/bin/sh
    set -xe
    
    ALL="tripleo-admin@overcloud-controller-0.ctlplane \
    tripleo-admin@overcloud-controller-1.ctlplane \
    tripleo-admin@overcloud-controller-2.ctlplane"
    • Upload the rings to all nodes and restart Object Storage services:

      for DST in ${ALL}; do
        cat swift-rings.tar.gz | ssh "${DST}" 'sudo tar -C / -xvzf -'
        ssh "${DST}" 'sudo podman restart swift_copy_rings'
        ssh "${DST}" 'sudo systemctl restart tripleo_swift*'
      done
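
For example, with hypothetical values of zone 1, node IP 192.168.24.35, the default object port 6000, the object.builder file, and a weight of 100, the add and rebalance commands in steps 5 and 6 might look like the following:

$ swift-ring-builder etc/swift/object.builder add z1-192.168.24.35:6000/sdb 100
$ swift-ring-builder etc/swift/object.builder rebalance

Repeat the add and rebalance steps for the account and container builder files, adjusting the port for each ring type, so that all three rings include the new node.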

5.3.5. Syncing node changes and migrating data

You must deliver the changed ring files to the Object Storage (swift) containers after you copy new ring files to their correct folders.

Important
  • Do not migrate all of the data at the same time. Migrate the data in 10% increments. For example, configure the weight of the source device to equal 90.0 and the target device to equal 10.0. Then configure the weight of the source device to equal 80.0 and the target device to equal 20.0. Continue to incrementally migrate the data until you complete the process. If you move all of the data at the same time during the migration, the old data is on the source device, but the ring points to the new target device for all replicas. Until the replicators have moved all of the data to the target device, the data is inaccessible.
  • During migration, the Object Storage rings reassign the location of data, and then the replicator moves the data to the new location. As cluster activity increases, the process of replication slows down due to increased load. The larger the cluster, the longer a replication pass takes to complete. This is the expected behavior, but it can result in 404 errors in the log files if a client accesses the data that is currently being relocated. When a proxy attempts to retrieve data from a new location, but the data is not yet in the new location, swift-proxy reports a 404 error in the log files.

    When the migration is gradual, the proxy accesses replicas that are not being moved and no error occurs. When the proxy attempts to retrieve the data from an alternative replica, 404 errors in log files are resolved. To confirm that the replication process is progressing, refer to the replication logs. The Object Storage service (swift) issues replication logs every five minutes.

Procedure

  1. Use a script, similar to the following example, to distribute ring files from a previously existing Controller node to all Controller nodes and restart the Object Storage service containers on those nodes:

    #!/bin/sh
    set -xe
    
    SRC="tripleo-admin@overcloud-controller-0.ctlplane"
    ALL="tripleo-admin@overcloud-controller-0.ctlplane \
    tripleo-admin@overcloud-controller-1.ctlplane \
    tripleo-admin@overcloud-controller-2.ctlplane"
    • Fetch the current set of ring files:

      ssh "${SRC}" 'sudo tar -czvf - \
        /var/lib/config-data/puppet-generated/swift_ringbuilder/etc/swift \
        /{*.builder,*.ring.gz,backups/*.builder}' > swift-rings.tar.gz
    • Upload the rings to all nodes and restart Object Storage services:

      for DST in ${ALL}; do
        cat swift-rings.tar.gz | ssh "${DST}" 'sudo tar -C / -xvzf -'
        ssh "${DST}" 'sudo podman restart swift_copy_rings'
        ssh "${DST}" 'sudo systemctl restart tripleo_swift*'
      done
  2. To confirm that the data is being moved to the new disk, run the following command on the new storage node:

    $ sudo grep -i replication /var/log/container/swift/swift.log

5.3.6. Removing Object Storage nodes

There are two methods to remove an Object Storage (swift) node:

  • Simple removal: This method removes the node in one action and is appropriate for an efficiently-powered cluster with smaller quantities of data.
  • Incremental removal: Alter the rings to decrease the weight of the disks on the node that you want to remove. This method is appropriate if you want to minimize impact on storage network usage or if your cluster contains larger quantities of data.

For both methods, you follow the Scaling down bare-metal nodes procedure. However, for incremental removal, you must first alter the storage rings to decrease the weight of the disks on the node that you want to remove.

Prerequisites

For information about replacing an Object Storage node, see the prerequisites at the beginning of the Scaling down bare-metal nodes procedure.
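
To decrease the weight of a disk on the node that you want to remove, you can use the set_weight command of swift-ring-builder and then rebalance, in the same way as in Updating and rebalancing the Object Storage rings. The following sketch uses illustrative values for the builder file, zone, IP address, port, and device; lower the weight in increments, and redistribute the rings and wait for replication to complete after each change:

$ swift-ring-builder etc/swift/object.builder set_weight z1-192.168.24.35:6000/sdb 90
$ swift-ring-builder etc/swift/object.builder rebalance

Repeat with progressively lower weights until the weight reaches 0 and the node no longer holds data.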

5.3.7. Scaling down bare-metal nodes

To scale down the number of bare-metal nodes in your overcloud, tag the nodes that you want to delete from the stack in the node definition file, redeploy the overcloud, and then delete the bare-metal node from the overcloud.

Prerequisites

  • A successful undercloud installation. For more information, see Installing director on the undercloud.
  • A successful overcloud deployment. For more information, see Configuring a basic overcloud with pre-provisioned nodes.
  • If you are replacing an Object Storage node, replicate data from the node you are removing to the new replacement node. Wait for a replication pass to finish on the new node. Check the replication pass progress in the /var/log/swift/swift.log file. When the pass finishes, the Object Storage service (swift) adds entries to the log similar to the following example:

    Mar 29 08:49:05 localhost object-server: Object replication complete.
    Mar 29 08:49:11 localhost container-server: Replication run OVER
    Mar 29 08:49:13 localhost account-server: Replication run OVER

Procedure

  1. Log in to the undercloud host as the stack user.
  2. Source the stackrc undercloud credentials file:

    $ source ~/stackrc
  3. Decrement the count parameter in the overcloud-baremetal-deploy.yaml file, for the roles that you want to scale down.
  4. Define the hostname and name of each node that you want to remove from the stack, if they are not already defined in the instances attribute for the role.
  5. Add the attribute provisioned: false to the node that you want to remove. For example, to remove the node overcloud-objectstorage-1 from the stack, include the following snippet in your overcloud-baremetal-deploy.yaml file:

    - name: ObjectStorage
      count: 3
      instances:
      - hostname: overcloud-objectstorage-0
        name: node00
      - hostname: overcloud-objectstorage-1
        name: node01
        # Removed from cluster due to disk failure
        provisioned: false
      - hostname: overcloud-objectstorage-2
        name: node02
      - hostname: overcloud-objectstorage-3
        name: node03

    After you redeploy the overcloud, the nodes that you define with the provisioned: false attribute are no longer present in the stack. However, these nodes are still running in a provisioned state.

    Note

    To remove a node from the stack temporarily, deploy the overcloud with the attribute provisioned: false and then redeploy the overcloud with the attribute provisioned: true to return the node to the stack.

  6. Delete the node from the overcloud:

    $ openstack overcloud node delete \
      --stack <stack> \
      --baremetal-deployment \
       /home/stack/templates/overcloud-baremetal-deploy.yaml
    • Replace <stack> with the name of the stack for which the bare-metal nodes are provisioned. If not specified, the default is overcloud.

      Note

      Do not include the nodes that you want to remove from the stack as command arguments in the openstack overcloud node delete command.

  7. Delete the ironic node:

    $ openstack baremetal node delete <IRONIC_NODE_UUID>

    Replace <IRONIC_NODE_UUID> with the UUID of the node.

  8. Provision the overcloud nodes to generate an updated heat environment file for inclusion in the deployment command:

    $ openstack overcloud node provision \
      --stack <stack> \
      --output <deployment_file> \
      /home/stack/templates/overcloud-baremetal-deploy.yaml
    • Replace <deployment_file> with the name of the heat environment file to generate for inclusion in the deployment command, for example /home/stack/templates/overcloud-baremetal-deployed.yaml.
  9. Add the overcloud-baremetal-deployed.yaml file generated by the provisioning command to the stack with your other environment files, and deploy the overcloud:

    $ openstack overcloud deploy \
      ...
      -e /usr/share/openstack-tripleo-heat-templates/environments \
      -e /home/stack/templates/overcloud-baremetal-deployed.yaml \
      --deployed-server \
      --disable-validations \
      ...

5.4. Container management in the Object Storage service

To help with organization in the Object Storage service (swift), you can use pseudo folders. These folders are logical devices that can contain objects and be nested. For example, you might create an Images folder in which to store pictures and a Media folder in which to store videos.

You can create one or more containers in each project, and one or more objects or pseudo folders in each container.
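
You can also manage containers and objects from the command line with the swift client, which this chapter already uses for storage policies. The following commands are a general illustration; the container, folder, and file names are placeholders:

$ swift post media
$ swift upload media Images/myImage.jpg
$ swift list media

The swift post command creates the media container if it does not already exist, swift upload stores the local file Images/myImage.jpg as an object whose Images/ prefix acts as a pseudo folder, and swift list shows the objects in the container.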

5.4.1. Creating private and public containers

Use the dashboard to create a container in the Object Storage service (swift).

Procedure

  1. In the dashboard, select Project > Object Store > Containers.
  2. Click Create Container.
  3. Specify the Container Name, and select one of the following in the Container Access field.

    Type       Description

    Private    Limits access to a user in the current project.

    Public     Permits API access to anyone with the public URL. However, in the dashboard, project users cannot see public containers and data from other projects.

  4. Click Create Container.
  5. Optional: New containers use the default storage policy. If you have multiple storage policies defined, for example, a default policy and another policy that enables erasure coding, you can configure a container to use a non-default storage policy:

    $ swift post -H "X-Storage-Policy:<policy>" <container_name>
    • Replace <policy> with the name or alias of the policy that you want the container to use.
    • Replace <container_name> with the name of the container.

5.4.2. Creating pseudo folders for containers

Use the dashboard to create a pseudo folder for a container in the Object Storage service (swift).

Procedure

  1. In the dashboard, select Project > Object Store > Containers.
  2. Click the name of the container to which you want to add the pseudo folder.
  3. Click Create Pseudo-folder.
  4. Specify the name in the Pseudo-folder Name field, and click Create.

5.4.3. Deleting containers from the Object Storage service

Use the dashboard to delete a container from the Object Storage service (swift).

Procedure

  1. In the dashboard, select Project > Object Store > Containers.
  2. Browse for the container in the Containers section, and ensure that all objects are deleted. For more information, see Deleting objects from the Object Storage service.
  3. Select Delete Container in the container arrow menu.
  4. Click Delete Container to confirm the container removal.

5.4.4. Uploading objects to containers

If you do not upload an actual file to the Object Storage service (swift), the object is still created as a placeholder that you can use later to upload the file.

Procedure

  1. In the dashboard, select Project > Object Store > Containers.
  2. Click the name of the container in which you want to place the uploaded object. If a pseudo folder already exists in the container, you can click its name.
  3. Browse for your file, and click Upload Object.
  4. Specify a name in the Object Name field:

    • You can specify pseudo folders in the name by using a / character, for example, Images/myImage.jpg. If the specified folder does not already exist, it is created when the object is uploaded.
    • If the name is not unique within the location, that is, an object with that name already exists, the new upload overwrites the contents of the existing object.
  5. Click Upload Object.

5.4.5. Copying objects between containers

Use the dashboard to copy an object in the Object Storage service (swift).

Procedure

  1. In the dashboard, select Project > Object Store > Containers.
  2. Click the name of the object’s container or folder (to display the object).
  3. Click Upload Object.
  4. Browse for the file to be copied, and select Copy in its arrow menu.
  5. Specify the following:

    Field                      Description

    Destination container      Target container for the new object.

    Path                       Pseudo-folder in the destination container; if the folder does not already exist, it is created.

    Destination object name    New object’s name. If you use a name that is not unique to the location (that is, the object already exists), it overwrites the object’s previous contents.
  6. Click Copy Object.

5.4.6. Deleting objects from the Object Storage service

Use the dashboard to delete an object from the Object Storage service (swift).

Procedure

  1. In the dashboard, select Project > Object Store > Containers.
  2. Browse for the object, and select Delete Object in its arrow menu.
  3. Click Delete Object to confirm the object is removed.