Chapter 5. Configuring the Object Storage service (swift)
The Red Hat OpenStack Platform (RHOSP) Object Storage service (swift) stores its objects, or data, in containers. Containers are similar to directories in a file system although they cannot be nested. Containers provide an easy way for users to store any kind of unstructured data. For example, objects can include photos, text files, or images. Stored objects are not compressed.
5.1. Object Storage rings
The Object Storage service (swift) uses a data structure called the ring to distribute partition space across the cluster. This partition space is core to the data durability engine in the Object Storage service. With rings, the Object Storage service can quickly and easily synchronize each partition across the cluster.
Rings contain information about Object Storage partitions and how partitions are distributed among the different nodes and disks. When any Object Storage component interacts with data, a quick lookup is performed locally in the ring to determine the possible partitions for each object.
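The lookup is a cheap hash computation rather than a database query. The following Python sketch illustrates the idea; it is a simplified illustration, not the service's actual implementation (real swift also mixes cluster-wide hash prefix and suffix values into the path before hashing):

```python
import hashlib
import struct

def object_partition(account: str, container: str, obj: str, part_power: int) -> int:
    """Map an object path to a partition by hashing the path and keeping
    only the top `part_power` bits of the first 4 bytes of the MD5 digest."""
    path = f"/{account}/{container}/{obj}".encode("utf-8")
    digest = hashlib.md5(path).digest()
    # Unpack the first 4 bytes as a big-endian unsigned int, then shift so
    # that only 2**part_power distinct partition values remain.
    return struct.unpack_from(">I", digest)[0] >> (32 - part_power)

part = object_partition("AUTH_test", "photos", "cat.jpg", 10)
assert 0 <= part < 2 ** 10
```

Because only the top `part_power` bits of the hash are kept, every component that holds the same ring file maps a given object path to the same partition without any coordination.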
The Object Storage service has three rings to store the following types of data:
- Account information
- Containers, to facilitate organizing objects under an account
- Object replicas
5.1.1. Checking cluster health
The Object Storage service (swift) runs many processes in the background to ensure long-term data availability, durability, and persistence. For example:
- Auditors constantly re-read database and object files and compare them by using checksums to make sure there is no silent bit-rot. Any database or object file that no longer matches its checksum is quarantined and becomes unreadable on that node. The replicators then copy one of the other replicas to make the local copy available again.
- Objects and files can disappear when you replace disks or nodes or when objects are quarantined. When this happens, replicators copy missing objects or database files to one of the other nodes.
The Object Storage service includes a tool called swift-recon that collects data from all nodes and checks the overall cluster health.
Procedure
- Log in to one of the Controller nodes.
- Run the following command:

```
[tripleo-admin@overcloud-controller-2 ~]$ sudo podman exec -it -u swift swift_object_server /usr/bin/swift-recon -arqlT --md5
======================================================================
--> Starting reconnaissance on 3 hosts (object)
======================================================================
[2018-12-14 14:55:47] Checking async pendings
[async_pending] - No hosts returned valid data.
======================================================================
[2018-12-14 14:55:47] Checking on replication
[replication_failure] low: 0, high: 0, avg: 0.0, total: 0, Failed: 0.0%, no_result: 0, reported: 3
[replication_success] low: 0, high: 0, avg: 0.0, total: 0, Failed: 0.0%, no_result: 0, reported: 3
[replication_time] low: 0, high: 0, avg: 0.0, total: 0, Failed: 0.0%, no_result: 0, reported: 3
[replication_attempted] low: 0, high: 0, avg: 0.0, total: 0, Failed: 0.0%, no_result: 0, reported: 3
Oldest completion was 2018-12-14 14:55:39 (7 seconds ago) by 198.51.100.186:6000.
Most recent completion was 2018-12-14 14:55:42 (4 seconds ago) by 198.51.100.174:6000.
======================================================================
[2018-12-14 14:55:47] Checking load averages
[5m_load_avg] low: 1, high: 2, avg: 2.1, total: 6, Failed: 0.0%, no_result: 0, reported: 3
[15m_load_avg] low: 2, high: 2, avg: 2.6, total: 7, Failed: 0.0%, no_result: 0, reported: 3
[1m_load_avg] low: 0, high: 0, avg: 0.8, total: 2, Failed: 0.0%, no_result: 0, reported: 3
======================================================================
[2018-12-14 14:55:47] Checking ring md5sums
3/3 hosts matched, 0 error[s] while checking hosts.
======================================================================
[2018-12-14 14:55:47] Checking swift.conf md5sum
3/3 hosts matched, 0 error[s] while checking hosts.
======================================================================
[2018-12-14 14:55:47] Checking quarantine
[quarantined_objects] low: 0, high: 0, avg: 0.0, total: 0, Failed: 0.0%, no_result: 0, reported: 3
[quarantined_accounts] low: 0, high: 0, avg: 0.0, total: 0, Failed: 0.0%, no_result: 0, reported: 3
[quarantined_containers] low: 0, high: 0, avg: 0.0, total: 0, Failed: 0.0%, no_result: 0, reported: 3
======================================================================
[2018-12-14 14:55:47] Checking time-sync
3/3 hosts matched, 0 error[s] while checking hosts.
======================================================================
```

- Optional: Use the `--all` option to return additional output.

This command queries all servers on the ring for the following data:
- Async pendings: If the cluster load is too high and processes cannot update database files fast enough, some updates occur asynchronously. These numbers decrease over time.
- Replication metrics: Review the replication timestamps; full replication passes happen frequently with few errors. An old entry, for example, an entry with a timestamp from six months ago, indicates that replication on the node has not completed in the last six months.
- Ring md5sums: This ensures that all ring files are consistent on all nodes.
- `swift.conf` md5sums: This ensures that all configuration files are consistent on all nodes.
- Quarantined files: There must be no, or very few, quarantined files on all nodes.
- Time-sync: All nodes must be synchronized.
5.1.2. Increasing ring partition power
The ring power determines the partition to which a resource, such as an account, container, or object, is mapped. The partition is included in the path under which the resource is stored in a back-end file system. Therefore, changing the partition power requires relocating resources to new paths in the back-end file systems.
In a heavily populated cluster, a relocation process is time consuming. To avoid downtime, relocate resources while the cluster is still operating. You must do this without temporarily losing access to data or compromising the performance of processes, such as replication and auditing. For assistance with increasing ring partition power, contact Red Hat support.
5.1.3. Partition power recommendation for the Object Storage service
When using separate Red Hat OpenStack Platform (RHOSP) Object Storage service (swift) nodes, use a higher partition power value.
The Object Storage service distributes data across disks and nodes using modified hash rings. There are three rings by default: one for accounts, one for containers, and one for objects. Each ring uses a fixed parameter called partition power. This parameter sets the maximum number of partitions that can be created.
The partition power parameter is important and can only be changed for new containers and their objects. As such, it is important to set this value before initial deployment.
The default partition power value is 10 for environments that RHOSP director deploys. This is a reasonable value for smaller deployments, especially if you only plan to use disks on the Controller nodes for the Object Storage service.
The following table helps you to select an appropriate partition power if you use three replicas:
| Partition power | Maximum number of disks |
| --- | --- |
| 10 | ~ 35 |
| 11 | ~ 75 |
| 12 | ~ 150 |
| 13 | ~ 250 |
| 14 | ~ 500 |
Setting an excessively high partition power value (for example, 14 for only 40 disks) negatively impacts replication times.
To set the partition power, define the `SwiftPartPower` parameter in an environment file:

```yaml
parameter_defaults:
  SwiftPartPower: 11
```
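The disk maximums in the table follow from the partition count: a ring with partition power P holds 2^P partitions, and each partition is stored once per replica. The following sketch shows the arithmetic; the idea that each disk should keep on the order of 100 partition-replicas is an assumption used for illustration here, not an official RHOSP formula:

```python
def partitions_per_disk(part_power: int, replicas: int, disks: int) -> float:
    """Average number of partition-replicas stored on each disk."""
    return (2 ** part_power) * replicas / disks

# Partition power 10 with 3 replicas yields 2**10 * 3 = 3072 partition-replicas.
# Spread over the table's 35-disk maximum, each disk averages about 88 of them;
# adding many more disks leaves too few partitions per disk for even placement.
print(round(partitions_per_disk(10, 3, 35)))   # ~88
print(round(partitions_per_disk(12, 3, 150)))  # ~82
```

This is also why an oversized partition power hurts: with power 14 and only 40 disks, each disk holds thousands of partition directories, and every replication pass must scan all of them.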
You can also configure an additional object server ring for new containers. This is useful if you want to add more disks to an Object Storage service deployment that initially uses a low partition power.
5.1.4. Custom rings
As technology advances and demands for storage capacity increase, creating custom rings is a way to update existing Object Storage clusters.
When you add new nodes to a cluster, their characteristics might differ from those of the original nodes. Without custom adjustments, the larger capacity of the new nodes may be underused. Or, if weights change in the rings, data dispersion can become uneven, which reduces safety.
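The ring builder places partitions on devices roughly in proportion to their weights, so stale or mismatched weights translate directly into uneven data placement. A minimal sketch of that proportionality, for illustration only (this is not the ring-builder algorithm itself):

```python
def expected_shares(weights: dict[str, float]) -> dict[str, float]:
    """Fraction of the ring's partitions each device receives,
    proportional to its weight relative to the total weight."""
    total = sum(weights.values())
    return {dev: w / total for dev, w in weights.items()}

# A new 4 TB disk given the same weight as an old 1 TB disk receives the
# same share of data, so three quarters of its capacity sits idle:
print(expected_shares({"old_1tb": 100.0, "new_4tb": 100.0}))
# Weighting by capacity restores proportional placement:
print(expected_shares({"old_1tb": 100.0, "new_4tb": 400.0}))
```

The device names and weight values are hypothetical; in a real cluster the weights are the ones you set with `swift-ring-builder`.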
Automation might not keep pace with future technology trends. For example, some older Object Storage clusters in use today originated before SSDs were available.
The ring builder helps manage Object Storage as clusters grow and technologies evolve. For assistance with creating custom rings, contact Red Hat support.
5.2. Customizing the Object Storage service
Depending on the requirements of your Red Hat OpenStack Platform (RHOSP) environment, you might want to customize some of the default settings of the Object Storage service (swift) to optimize your deployment performance.
5.2.1. Configuring fast-post
By default, the Object Storage service (swift) copies an object whole whenever any part of its metadata changes. You can prevent this by using the fast-post feature. The fast-post feature saves time when you change the content types of multiple large objects.
To enable the fast-post feature, disable the object_post_as_copy option on the Object Storage proxy service.
Procedure
- Edit `swift_params.yaml`:

```
$ cat > swift_params.yaml << EOF
parameter_defaults:
    ExtraConfig:
      swift::proxy::copy::object_post_as_copy: False
EOF
```

- Include the parameter file when you deploy or update the overcloud:

```
$ openstack overcloud deploy [... previous args ...] -e swift_params.yaml
```
5.2.2. Enabling at-rest encryption
By default, objects uploaded to the Object Storage service (swift) are unencrypted. Because of this, it is possible to access objects directly from the file system. This can present a security risk if disks are not properly erased before they are discarded. You can enable at-rest encryption of Object Storage objects. For more information, see Encrypting Object Storage (swift) at-rest objects in Managing secrets with the Key Manager service.
5.2.3. Deploying a standalone Object Storage service cluster
You can use the composable role concept to deploy a standalone Object Storage service (swift) cluster with the bare minimum of additional services, for example, OpenStack Identity service (keystone) or HAProxy.
Procedure
- Copy `roles_data.yaml` from `/usr/share/openstack-tripleo-heat-templates`.
- Edit the new file.
- Remove unneeded controller roles, for example Aodh*, Ceilometer*, Ceph*, Cinder*, Glance*, Heat*, Ironic*, Manila*, Nova*, Octavia*, Swift*.
- Locate the ObjectStorage role within `roles_data.yaml`.
- Copy this role to a new role within that same file and name it `ObjectProxy`.
- Replace `SwiftStorage` with `SwiftProxy` in this role.

  The example `roles_data.yaml` file below shows sample roles:

```yaml
- name: Controller
  description: |
    Controller role that has all the controller services loaded and handles
    Database, Messaging and Network functions.
  CountDefault: 1
  tags:
    - primary
    - controller
  networks:
    - External
    - InternalApi
    - Storage
    - StorageMgmt
    - Tenant
  HostnameFormatDefault: '%stackname%-controller-%index%'
  ServicesDefault:
    - OS::TripleO::Services::AuditD
    - OS::TripleO::Services::CACerts
    - OS::TripleO::Services::CertmongerUser
    - OS::TripleO::Services::Clustercheck
    - OS::TripleO::Services::Docker
    - OS::TripleO::Services::Ec2Api
    - OS::TripleO::Services::Etcd
    - OS::TripleO::Services::HAproxy
    - OS::TripleO::Services::Keepalived
    - OS::TripleO::Services::Kernel
    - OS::TripleO::Services::Keystone
    - OS::TripleO::Services::Memcached
    - OS::TripleO::Services::MySQL
    - OS::TripleO::Services::MySQLClient
    - OS::TripleO::Services::Ntp
    - OS::TripleO::Services::Pacemaker
    - OS::TripleO::Services::RabbitMQ
    - OS::TripleO::Services::Securetty
    - OS::TripleO::Services::Snmp
    - OS::TripleO::Services::Sshd
    - OS::TripleO::Services::Timezone
    - OS::TripleO::Services::TripleoFirewall
    - OS::TripleO::Services::TripleoPackages
    - OS::TripleO::Services::Vpp
- name: ObjectStorage
  CountDefault: 1
  description: |
    Swift Object Storage node role
  networks:
    - InternalApi
    - Storage
    - StorageMgmt
  disable_upgrade_deployment: True
  ServicesDefault:
    - OS::TripleO::Services::AuditD
    - OS::TripleO::Services::CACerts
    - OS::TripleO::Services::CertmongerUser
    - OS::TripleO::Services::Collectd
    - OS::TripleO::Services::Docker
    - OS::TripleO::Services::FluentdClient
    - OS::TripleO::Services::Kernel
    - OS::TripleO::Services::MySQLClient
    - OS::TripleO::Services::Ntp
    - OS::TripleO::Services::Securetty
    - OS::TripleO::Services::SensuClient
    - OS::TripleO::Services::Snmp
    - OS::TripleO::Services::Sshd
    - OS::TripleO::Services::SwiftRingBuilder
    - OS::TripleO::Services::SwiftStorage
    - OS::TripleO::Services::Timezone
    - OS::TripleO::Services::TripleoFirewall
    - OS::TripleO::Services::TripleoPackages
- name: ObjectProxy
  CountDefault: 1
  description: |
    Swift Object proxy node role
  networks:
    - InternalApi
    - Storage
    - StorageMgmt
  disable_upgrade_deployment: True
  ServicesDefault:
    - OS::TripleO::Services::AuditD
    - OS::TripleO::Services::CACerts
    - OS::TripleO::Services::CertmongerUser
    - OS::TripleO::Services::Collectd
    - OS::TripleO::Services::Docker
    - OS::TripleO::Services::FluentdClient
    - OS::TripleO::Services::Kernel
    - OS::TripleO::Services::MySQLClient
    - OS::TripleO::Services::Ntp
    - OS::TripleO::Services::Securetty
    - OS::TripleO::Services::SensuClient
    - OS::TripleO::Services::Snmp
    - OS::TripleO::Services::Sshd
    - OS::TripleO::Services::SwiftRingBuilder
    - OS::TripleO::Services::SwiftProxy
    - OS::TripleO::Services::Timezone
    - OS::TripleO::Services::TripleoFirewall
    - OS::TripleO::Services::TripleoPackages
```

- Deploy the overcloud with your regular `openstack overcloud deploy` command, including the new roles:

```
$ openstack overcloud deploy --templates -r roles_data.yaml -e [...]
```
5.2.4. Disk recommendation for the Object Storage service
Use one or more separate, local disks for the Red Hat OpenStack Platform (RHOSP) Object Storage service (swift).
By default, RHOSP director uses the directory /srv/node/d1 on the system disk for the Object Storage service. On the Controller, this disk is also used by other services, and the disk can become a performance bottleneck.
The following example is an excerpt from a RHOSP Orchestration service (heat) custom environment file. On each Controller node, the Object Storage service uses two separate disks. The entirety of both disks contains an XFS file system:
```yaml
parameter_defaults:
  SwiftRawDisks: {"sdb": {}, "sdc": {}}
```
SwiftRawDisks defines each storage disk on the node. This example defines both sdb and sdc disks on each Controller node.
In RHEL 9, the sdx name might differ between overcloud nodes even if your hardware configuration is the same. For information about defining disks to use with the SwiftRawDisks parameter if you are using RHEL 9, see the Red Hat Knowledge Base article Defining disks to be used with OpenStack Swift in Red Hat OpenStack Platform 17.
When configuring multiple disks, ensure that the Bare Metal service (ironic) uses the intended root disk.
5.2.5. Using external SAN disks
By default, the Object Storage service (swift) is configured and optimized to use independent local disks. This configuration ensures that the workload is distributed across all disks, which helps minimize performance impact during node failure or other system issues.
In performance-impacting events, an environment that uses a single SAN can experience decreased performance across all LUNs. The Object Storage service cannot mitigate performance issues in an environment that uses SAN disks. Therefore, use additional local disks for Object Storage to meet performance and disk space requirements.
Using an external SAN for Object Storage requires evaluation on a per-case basis. For more information, contact Red Hat Support.
If you choose to use an external SAN for Object Storage, evaluate and test performance demands with your deployment. To confirm that your SAN deployment is tested, supported, and meets your performance requirements, contact your storage vendor.
Red Hat does not provide support for the following issues:
- Issues related to performance that result from using an external SAN for Object Storage.
- Issues that arise outside of the core Object Storage service offering. For support with high availability and performance, contact your storage vendor.
Procedure
The following example environment file uses two devices (`/dev/mapper/vdb` and `/dev/mapper/vdc`) for Object Storage:

```yaml
parameter_defaults:
  SwiftMountCheck: true
  SwiftUseLocalDir: false
  SwiftRawDisks: {"vdb": {"base_dir": "/dev/mapper/"}, "vdc": {"base_dir": "/dev/mapper/"}}
```
5.3. Adding or removing Object Storage nodes
To add new Object Storage (swift) nodes to your cluster, you must increase the node count, update the rings, and synchronize the changes. You can increase the node count by adding nodes to the overcloud or by scaling up bare-metal nodes.
To remove Object Storage nodes from your cluster, you can perform a simple removal or an incremental removal, depending on the quantities of data in the cluster.
5.3.1. Adding nodes to the overcloud
You can add more nodes to your overcloud.
A fresh installation of Red Hat OpenStack Platform (RHOSP) does not include certain updates, such as security errata and bug fixes. As a result, if you are scaling up a connected environment that uses the Red Hat Customer Portal or Red Hat Satellite Server, RPM updates are not applied to new nodes. To apply the latest updates to the overcloud nodes, you must do one of the following:
- Complete an overcloud update of the nodes after the scale-out operation.
- Use the `virt-customize` tool to apply the package updates to the base overcloud image before the scale-out operation. For more information, see the Red Hat Knowledgebase solution Modifying the Red Hat Linux OpenStack Platform Overcloud Image with virt-customize.
Procedure
- Create a new JSON file called `newnodes.json` that contains details of the new nodes that you want to register:

```json
{
  "nodes": [
    {
      "mac": ["dd:dd:dd:dd:dd:dd"],
      "cpu": "4",
      "memory": "6144",
      "disk": "40",
      "arch": "x86_64",
      "pm_type": "ipmi",
      "pm_user": "admin",
      "pm_password": "p@55w0rd!",
      "pm_addr": "192.02.24.207"
    },
    {
      "mac": ["ee:ee:ee:ee:ee:ee"],
      "cpu": "4",
      "memory": "6144",
      "disk": "40",
      "arch": "x86_64",
      "pm_type": "ipmi",
      "pm_user": "admin",
      "pm_password": "p@55w0rd!",
      "pm_addr": "192.02.24.208"
    }
  ]
}
```

- Log in to the undercloud host as the `stack` user.
- Source the `stackrc` undercloud credentials file:

```
$ source ~/stackrc
```

- Register the new nodes:

```
$ openstack overcloud node import newnodes.json
```

- Launch the introspection process for each new node:

```
$ openstack overcloud node introspect --provide <node_1> [<node_2>] [<node_n>]
```

  - Use the `--provide` option to reset all the specified nodes to an `available` state after introspection.
  - Replace `<node_1>`, `<node_2>`, and all nodes up to `<node_n>` with the UUID of each node that you want to introspect.

- Configure the image properties for each new node:

```
$ openstack overcloud node configure <node>
```
5.3.2. Scaling up bare-metal nodes
To increase the count of bare-metal nodes in an existing overcloud, increment the node count in the overcloud-baremetal-deploy.yaml file and redeploy the overcloud.
Prerequisites
- The new bare-metal nodes are registered, introspected, and available for provisioning and deployment. For more information, see Registering nodes for the overcloud and Creating an inventory of the bare-metal node hardware.
Procedure
- Log in to the undercloud host as the `stack` user.
- Source the `stackrc` undercloud credentials file:

```
$ source ~/stackrc
```

- Open the `overcloud-baremetal-deploy.yaml` node definition file that you use to provision your bare-metal nodes.
- Increment the `count` parameter for the roles that you want to scale up. For example, the following configuration increases the Object Storage node count to 4:

```yaml
- name: Controller
  count: 3
- name: Compute
  count: 10
- name: ObjectStorage
  count: 4
```

- Optional: Configure predictive node placement for the new nodes. For example, use the following configuration to provision a new Object Storage node on node03:

```yaml
- name: ObjectStorage
  count: 4
  instances:
  - hostname: overcloud-objectstorage-0
    name: node00
  - hostname: overcloud-objectstorage-1
    name: node01
  - hostname: overcloud-objectstorage-2
    name: node02
  - hostname: overcloud-objectstorage-3
    name: node03
```

- Optional: Define any other attributes that you want to assign to your new nodes. For more information about the properties you can use to configure node attributes in your node definition file, see Bare-metal node provisioning attributes.
- If you use the Object Storage service (swift) and the whole disk overcloud image, `overcloud-hardened-uefi-full`, configure the size of the `/srv` partition based on the size of your disk and your storage requirements for `/var` and `/srv`. For more information, see Configuring whole disk partitions for the Object Storage service.
- Provision the overcloud nodes:

```
$ openstack overcloud node provision \
  --stack <stack> \
  --network-config \
  --output <deployment_file> \
  /home/stack/templates/overcloud-baremetal-deploy.yaml
```

  - Replace `<stack>` with the name of the stack for which the bare-metal nodes are provisioned. If not specified, the default is `overcloud`.
  - Include the `--network-config` argument to provide the network definitions to the `cli-overcloud-node-network-config.yaml` Ansible playbook.
  - Replace `<deployment_file>` with the name of the heat environment file to generate for inclusion in the deployment command, for example `/home/stack/templates/overcloud-baremetal-deployed.yaml`.

    Note: If you upgraded from Red Hat OpenStack Platform 16.2 to 17.1, you must include the YAML file that you created or updated during the upgrade process in the `openstack overcloud node provision` command. For example, use the `/home/stack/tripleo-[stack]-baremetal-deployment.yaml` file instead of the `/home/stack/templates/overcloud-baremetal-deployed.yaml` file. For more information, see Performing the overcloud adoption and preparation in Framework for upgrades (16.2 to 17.1).

- Monitor the provisioning progress in a separate terminal. When provisioning is successful, the node state changes from `available` to `active`:

```
$ watch openstack baremetal node list
```

- Add the generated `overcloud-baremetal-deployed.yaml` file to the stack with your other environment files and deploy the overcloud:

```
$ openstack overcloud deploy --templates \
  -e [your environment files] \
  -e /home/stack/templates/overcloud-baremetal-deployed.yaml \
  --disable-validations \
  ...
```
5.3.3. Defining dedicated Object Storage nodes
Dedicate additional nodes to the Red Hat OpenStack Platform (RHOSP) Object Storage service to improve performance.
If you are dedicating additional nodes to the Object Storage service, edit the custom roles_data.yaml file to remove the Object Storage service entry from the Controller node. Specifically, remove the following line from the ServicesDefault list of the Controller role:
- OS::TripleO::Services::SwiftStorage
5.3.4. Updating and rebalancing the Object Storage rings
The Object Storage service (swift) requires the same ring files on all Controller and Object Storage nodes. If a Controller node or Object Storage node is replaced, added, or removed, you must sync the ring files after an overcloud update to ensure proper functionality.
Procedure
- Log in to the undercloud as the `stack` user and create a temporary directory:

```
$ mkdir temp && cd temp/
```

- Download the overcloud ring files from one of the previously existing nodes (Controller 0 in this example) to the new directory:

```
$ ssh tripleo-admin@overcloud-controller-0.ctlplane 'sudo tar -czvf - /var/lib/config-data/puppet-generated/swift_ringbuilder/etc/swift/{*.builder,*.ring.gz,backups/*.builder}' > swift-rings.tar.gz
```

- Extract the rings and change into the ring subdirectory:

```
$ tar xzvf swift-rings.tar.gz && cd var/lib/config-data/puppet-generated/swift_ringbuilder/etc/swift/
```

- Collect the values for the following variables according to your device details:
  - `<device_name>`:

```
$ openstack baremetal introspection data save <node_name> | jq ".inventory.disks"
```

  - `<node_ip>`:

```
$ metalsmith <node_name> show
```

  - `<port>`: The default port is `600x`. If you altered the default, use the applicable port.
  - `<builder_file>`: The builder file name from step 3.
  - `<weight>` and `<zone>`: These variables are user-defined.

- Use `swift-ring-builder` to add the new node to the existing rings. Replace the variables according to the device details.

  Note: You must install the `python3-swift` RPM to use the `swift-ring-builder` command.

```
$ swift-ring-builder etc/swift/<builder_file> add <zone>-<node_ip>:<port>/<device_name> <weight>
```

- Rebalance the ring to ensure that the new device is used:

```
$ swift-ring-builder etc/swift/<builder_file> rebalance
```

- Upload the modified ring files to the Controller nodes and ensure that these ring files are used. Use a script, similar to the following example, to distribute ring files:

```
#!/bin/sh
set -xe
ALL="tripleo-admin@overcloud-controller-0.ctlplane \
tripleo-admin@overcloud-controller-1.ctlplane \
tripleo-admin@overcloud-controller-2.ctlplane"
```

  Upload the rings to all nodes and restart Object Storage services:

```
for DST in ${ALL}; do
  cat swift-rings.tar.gz | ssh "${DST}" 'sudo tar -C / -xvzf -'
  ssh "${DST}" 'sudo podman restart swift_copy_rings'
  ssh "${DST}" 'sudo systemctl restart tripleo_swift*'
done
```
5.3.5. Syncing node changes and migrating data
After you copy the new ring files to their correct folders, you must deliver the changed ring files to the Object Storage (swift) containers.
Important
- Do not migrate all of the data at the same time. Move only one replica at a time by rebalancing only once between each successful replication run. If you move all of the data at the same time, the old data is on the source device but the ring points to the new target device for all replicas. Until the replicators have moved all of the data to the target device, the data is inaccessible.
- To limit load and network traffic during the replication process, you can also migrate incrementally. For example, configure the weight of the source device to equal 90.0 and the target device to equal 10.0. Then configure the weight of the source device to equal 80.0 and the target device to equal 20.0. Continue to incrementally migrate the data until you complete the process.
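The incremental approach above amounts to a schedule of weight pairs, where after each step you rebalance the ring and wait for a replication pass to complete before taking the next step. The following sketch generates such a schedule; the starting weight of 100.0 and the 10-point step size are illustrative values, not required settings:

```python
def weight_schedule(total: float = 100.0, step: float = 10.0):
    """Yield (source_weight, target_weight) pairs that shift data from the
    source device to the target device in increments of `step` until the
    source device is fully drained."""
    source = total
    while source > 0:
        source -= step
        yield (source, total - source)

steps = list(weight_schedule())
assert steps[0] == (90.0, 10.0)   # first rebalance: 90/10 split
assert steps[1] == (80.0, 20.0)   # second rebalance: 80/20 split
assert steps[-1] == (0.0, 100.0)  # source drained, migration complete
```

In practice you would apply each pair with `swift-ring-builder ... set_weight`, rebalance, and verify a full replication pass in the logs before moving to the next pair.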
During migration, the Object Storage rings reassign the location of data, and then the replicator moves the data to the new location. As cluster activity increases, the process of replication slows down due to increased load. The larger the cluster, the longer a replication pass takes to complete. This is the expected behavior, but it can result in 404 errors in the log files if a client accesses the data that is currently being relocated. When a proxy attempts to retrieve data from a new location, but the data is not yet in the new location, `swift-proxy` reports a 404 error in the log files.

When the migration is gradual, the proxy accesses replicas that are not being moved and no error occurs. When the proxy retrieves the data from an alternative replica, the 404 errors in the log files are resolved. To confirm that the replication process is progressing, refer to the replication logs. The Object Storage service (swift) issues replication logs every five minutes.
Procedure
- Use a script, similar to the following example, to distribute ring files from a previously existing Controller node to all Controller nodes and restart the Object Storage service containers on those nodes:

```
#!/bin/sh
set -xe
SRC="tripleo-admin@overcloud-controller-0.ctlplane"
ALL="tripleo-admin@overcloud-controller-0.ctlplane \
tripleo-admin@overcloud-controller-1.ctlplane \
tripleo-admin@overcloud-controller-2.ctlplane"
```

  Fetch the current set of ring files:

```
ssh "${SRC}" 'sudo tar -czvf - /var/lib/config-data/puppet-generated/swift_ringbuilder/etc/swift/{*.builder,*.ring.gz,backups/*.builder}' > swift-rings.tar.gz
```

  Upload the rings to all nodes and restart Object Storage services:

```
for DST in ${ALL}; do
  cat swift-rings.tar.gz | ssh "${DST}" 'sudo tar -C / -xvzf -'
  ssh "${DST}" 'sudo podman restart swift_copy_rings'
  ssh "${DST}" 'sudo systemctl restart tripleo_swift*'
done
```
To confirm that the data is being moved to the new disk, run the following command on the new storage node:

```
$ sudo grep -i replication /var/log/container/swift/swift.log
```
5.3.6. Removing Object Storage nodes
There are two methods to remove an Object Storage (swift) node:
- Simple removal: This method removes the node in one action and is appropriate for an efficiently-powered cluster with smaller quantities of data.
- Incremental removal: Alter the rings to decrease the weight of the disks on the node that you want to remove. This method is appropriate if you want to minimize impact on storage network usage or if your cluster contains larger quantities of data.
For both methods, you follow the Scaling down bare-metal nodes procedure. However, for incremental removal, complete these prerequisites to alter the storage rings to decrease the weight of the disks on the node that you want to remove:
Prerequisites
- Object Storage rings are updated and rebalanced. For more information, see Updating and rebalancing the Object Storage rings.
- Changes in the Object Storage rings are synchronized. For more information, see Syncing node changes and migrating data.
For information about replacing an Object Storage node, see the prerequisites at the beginning of the Scaling down bare-metal nodes procedure.
5.3.7. Scaling down bare-metal nodes
To scale down the number of bare-metal nodes in your overcloud, tag the nodes that you want to delete from the stack in the node definition file, redeploy the overcloud, and then delete the bare-metal node from the overcloud.
Prerequisites
- A successful undercloud installation. For more information, see Installing director on the undercloud.
- A successful overcloud deployment. For more information, see Configuring a basic overcloud with pre-provisioned nodes.
- If you are replacing an Object Storage node, replicate data from the node you are removing to the new replacement node. Wait for a replication pass to finish on the new node. Check the replication pass progress in the `/var/log/swift/swift.log` file. When the pass finishes, the Object Storage service (swift) adds entries to the log similar to the following example:

```
Mar 29 08:49:05 localhost object-server: Object replication complete.
Mar 29 08:49:11 localhost container-server: Replication run OVER
Mar 29 08:49:13 localhost account-server: Replication run OVER
```
Procedure
-
Log in to the undercloud host as the
stackuser. Source the
stackrcundercloud credentials file:$ source ~/stackrc-
Decrement the
countparameter in theovercloud-baremetal-deploy.yamlfile, for the roles that you want to scale down. -
- Define the hostname and name of each node that you want to remove from the stack, if they are not already defined in the instances attribute for the role.
- Add the attribute provisioned: false to the node that you want to remove. For example, to remove the node overcloud-objectstorage-1 from the stack, include the following snippet in your overcloud-baremetal-deploy.yaml file:

```
- name: ObjectStorage
  count: 3
  instances:
  - hostname: overcloud-objectstorage-0
    name: node00
  - hostname: overcloud-objectstorage-1
    name: node01
    # Removed from cluster due to disk failure
    provisioned: false
  - hostname: overcloud-objectstorage-2
    name: node02
  - hostname: overcloud-objectstorage-3
    name: node03
```

After you redeploy the overcloud, the nodes that you define with the provisioned: false attribute are no longer present in the stack. However, these nodes are still running in a provisioned state.

Note: If you want to remove a node from the stack temporarily, you can deploy the overcloud with the attribute provisioned: false and then redeploy the overcloud with the attribute provisioned: true to return the node to the stack.

- Delete the node from the overcloud:
```
$ openstack overcloud node delete \
  --stack <stack> \
  --baremetal-deployment \
  /home/stack/templates/overcloud-baremetal-deploy.yaml
```

Replace <stack> with the name of the stack for which the bare-metal nodes are provisioned. If not specified, the default is overcloud.

Note: Do not include the nodes that you want to remove from the stack as command arguments in the openstack overcloud node delete command.
- Delete the ironic node:

```
$ openstack baremetal node delete <ironic_node_uuid>
```

Replace <ironic_node_uuid> with the UUID of the node.

- Delete the network agents for the node that you deleted:
```
(overcloud)$ for AGENT in $(openstack network agent list \
  --host <ironic_node_uuid> -c ID -f value) ; \
  do openstack network agent delete $AGENT ; done
```

- Provision the overcloud nodes to generate an updated heat environment file for inclusion in the deployment command:

```
$ openstack overcloud node provision \
  --stack <stack> \
  --output <deployment_file> \
  /home/stack/templates/overcloud-baremetal-deploy.yaml
```

Replace <deployment_file> with the name of the heat environment file to generate for inclusion in the deployment command, for example /home/stack/templates/overcloud-baremetal-deployed.yaml.
- Add the overcloud-baremetal-deployed.yaml file generated by the provisioning command to the stack with your other environment files, and deploy the overcloud:

```
$ openstack overcloud deploy \
  ...
  -e /usr/share/openstack-tripleo-heat-templates/environments \
  -e /home/stack/templates/overcloud-baremetal-deployed.yaml \
  --disable-validations \
  ...
```
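Before you run the provision and deploy commands, it can help to sanity-check that the count value for the role matches the number of instances that are not marked provisioned: false. The following is a rough sketch using grep and awk against a sample role definition; in practice you would point it at /home/stack/templates/overcloud-baremetal-deploy.yaml, and a file with multiple roles needs a more careful parser:

```shell
# Write a sample single-role definition like the ObjectStorage example above.
FILE=$(mktemp)
cat > "$FILE" <<'EOF'
- name: ObjectStorage
  count: 3
  instances:
  - hostname: overcloud-objectstorage-0
    name: node00
  - hostname: overcloud-objectstorage-1
    name: node01
    provisioned: false
  - hostname: overcloud-objectstorage-2
    name: node02
  - hostname: overcloud-objectstorage-3
    name: node03
EOF

# Nodes that remain in the stack = all instances minus those marked removed.
TOTAL=$(grep -c 'hostname:' "$FILE")
REMOVED=$(grep -c 'provisioned: false' "$FILE")
COUNT=$(awk '/count:/ {print $2}' "$FILE")
ACTIVE=$((TOTAL - REMOVED))
echo "count=$COUNT active=$ACTIVE"
```

If count and the number of active instances disagree, fix the file before redeploying.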
5.4. Container management in the Object Storage service
To help with organization in the Object Storage service (swift), you can use pseudo folders. These folders are logical devices that can contain objects and be nested. For example, you might create an Images folder in which to store pictures and a Media folder in which to store videos.
You can create one or more containers in each project, and one or more objects or pseudo folders in each container.
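Under the hood, pseudo folders are not separate storage entities: they are simply prefixes in object names, and Swift's listing API groups them through a delimiter query parameter. The following sketch simulates that grouping locally with plain Python; no Swift connection or client library is assumed:

```python
# Simulate how Swift groups object names into pseudo folders when a
# container listing uses delimiter=/ . Object names are plain strings;
# a "folder" is any prefix that ends at the delimiter.
def list_top_level(object_names, delimiter="/"):
    entries = set()
    for name in object_names:
        head, sep, _ = name.partition(delimiter)
        # Names containing the delimiter appear as pseudo folders.
        entries.add(head + sep if sep else name)
    return sorted(entries)

objects = [
    "Images/myImage.jpg",
    "Images/logo.png",
    "Media/intro.mp4",
    "README.txt",
]
print(list_top_level(objects))  # ['Images/', 'Media/', 'README.txt']
```

Because folders are only prefixes, "creating" a pseudo folder in the dashboard costs nothing, and deleting the last object under a prefix makes the folder disappear.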
5.4.1. Creating private and public containers
Use the dashboard to create a container in the Object Storage service (swift).
Procedure
- In the dashboard, select Project > Object Store > Containers.
- Click Create Container.
- Specify the Container Name, and select one of the following options in the Container Access field:
  - Private: Limits access to a user in the current project.
  - Public: Permits API access to anyone with the public URL. However, in the dashboard, project users cannot see public containers and data from other projects.
- Click Create Container.
Optional: New containers use the default storage policy. If you have multiple storage policies defined, for example, a default policy and another policy that enables erasure coding, you can configure a container to use a non-default storage policy:
```
$ swift post -H "X-Storage-Policy:<policy>" <container_name>
```

- Replace <policy> with the name or alias of the policy that you want the container to use.
- Replace <container_name> with the name of the container.
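The practical difference between such policies is storage overhead. A 3-replica policy stores every object three times, while an erasure-coding policy splits an object into data fragments plus parity fragments. The following illustration assumes a hypothetical 4+2 erasure-coding scheme (four data, two parity fragments); your policy's parameters may differ:

```python
# Compare raw disk usage for one object under a 3-replica policy and a
# hypothetical 4+2 erasure-coding policy (4 data + 2 parity fragments).
def replica_overhead(num_replicas):
    # Each replica is a full copy of the object.
    return float(num_replicas)

def ec_overhead(num_data, num_parity):
    # Each fragment is 1/num_data of the object, and
    # num_data + num_parity fragments are stored in total.
    return (num_data + num_parity) / num_data

object_gb = 10
print(f"3-replica: {object_gb * replica_overhead(3):.1f} GB on disk")
print(f"EC 4+2:    {object_gb * ec_overhead(4, 2):.1f} GB on disk")
```

Erasure coding halves the disk cost in this example, at the price of more CPU work on reads and writes, which is why it is typically reserved for larger, colder objects.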
5.4.2. Creating pseudo folders for containers
Use the dashboard to create a pseudo folder for a container in the Object Storage service (swift).
Procedure
- In the dashboard, select Project > Object Store > Containers.
- Click the name of the container to which you want to add the pseudo folder.
- Click Create Pseudo-folder.
- Specify the name in the Pseudo-folder Name field, and click Create.
5.4.3. Deleting containers from the Object Storage service
Use the dashboard to delete a container from the Object Storage service (swift).
Procedure
- In the dashboard, select Project > Object Store > Containers.
- Browse for the container in the Containers section, and ensure that all objects are deleted. For more information, see Deleting objects from the Object Storage service.
- Select Delete Container in the container arrow menu.
- Click Delete Container to confirm the container removal.
5.4.4. Uploading objects to containers
If you do not upload an actual file to the Object Storage service (swift), the object is still created as a placeholder that you can use later to upload the file.
Procedure
- In the dashboard, select Project > Object Store > Containers.
- Click the name of the container in which you want to place the uploaded object. If a pseudo folder already exists in the container, you can click its name.
- Browse for your file, and click Upload Object.
Specify a name in the Object Name field:
- You can specify pseudo folders in the name by using a / character, for example, Images/myImage.jpg. If the specified folder does not already exist, it is created when the object is uploaded.
- If an object with the specified name already exists in that location, the upload overwrites the contents of the existing object.
- Click Upload Object.
5.4.5. Copying objects between containers
Use the dashboard to copy an object in the Object Storage service (swift).
Procedure
- In the dashboard, select Project > Object Store > Containers.
- Click the name of the container or folder that contains the object to display the object.
- Click Upload Object.
- Browse for the file to be copied, and select Copy in its arrow menu.
Specify the following:
- Destination container: The target container for the new object.
- Path: The pseudo-folder in the destination container; if the folder does not already exist, it is created.
- Destination object name: The name of the new object. If an object with that name already exists in that location, the copy overwrites its previous contents.
- Click Copy Object.
5.4.6. Deleting objects from the Object Storage service
Use the dashboard to delete an object from the Object Storage service (swift).
Procedure
- In the dashboard, select Project > Object Store > Containers.
- Browse for the object, and select Delete Object in its arrow menu.
- Click Delete Object to confirm the object is removed.