4.4. Configure the Object Storage Service
4.4.1. Create the Object Storage Service Identity Records
Create and configure Identity service records required by the Object Storage service. These entries provide authentication for the Object Storage service, and guide other OpenStack services attempting to locate and access the functionality provided by the Object Storage service.
This procedure assumes that you have already created an administrative user account and a services tenant.
Perform this procedure on the Identity service server, or on any machine onto which you have copied the keystonerc_admin file and on which the keystone command-line utility is installed.
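A keystonerc_admin file generated during installation typically resembles the following; the values shown are placeholders rather than the ones from your deployment, and the PS1 line is what adds (keystone_admin) to the shell prompt shown in later examples:
export OS_USERNAME=admin
export OS_TENANT_NAME=admin
export OS_PASSWORD=ADMIN_PASSWORD
export OS_AUTH_URL=http://IDENTITY_IP:5000/v2.0/
export PS1='[\u@\h \W(keystone_admin)]\$ '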
Procedure 4.3. Creating Identity Records for the Object Storage Service
- Set up the shell to access keystone as the administrative user:
# source ~/keystonerc_admin
- Create the swift user:
[(keystone_admin)]# openstack user create --password PASSWORD swift
+----------+----------------------------------+
| Field    | Value                            |
+----------+----------------------------------+
| email    | None                             |
| enabled  | True                             |
| id       | 00916f794cec438ea7f14ee0769e6964 |
| name     | swift                            |
| username | swift                            |
+----------+----------------------------------+
Replace PASSWORD with a secure password that will be used by the Object Storage service when authenticating with the Identity service.
- Link the swift user and the admin role together within the context of the services tenant:
[(keystone_admin)]# openstack role add --project services --user swift admin
- Create the swift Object Storage service entry:
[(keystone_admin)]# openstack service create --name swift \
    --description "Swift Storage Service" \
    object-store
- Create the swift endpoint entry:
[(keystone_admin)]# openstack endpoint create \
    --publicurl 'http://IP:8080/v1/AUTH_%(tenant_id)s' \
    --adminurl 'http://IP:8080/v1' \
    --internalurl 'http://IP:8080/v1/AUTH_%(tenant_id)s' \
    --region RegionOne \
    swift
Replace IP with the IP address or fully qualified domain name of the server hosting the Object Storage proxy service.
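As an optional check, you can confirm that the service and endpoint records now exist; the exact output columns vary with the client version:
[(keystone_admin)]# openstack service list
[(keystone_admin)]# openstack endpoint list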
4.4.2. Configure the Object Storage Service Storage Nodes
The Object Storage service stores objects on the filesystem, usually on a number of connected physical storage devices. All of the devices that will be used for object storage must be formatted ext4 or XFS, and mounted under the /srv/node/ directory. All of the services that will run on a given node must be enabled, and their ports opened.
Although you can run the proxy service alongside the other services, the proxy service is not covered in this procedure.
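The following sketch shows one way a single device might be prepared before starting the procedure. The device name /dev/vdb, the DEVICE_UUID placeholder, and the mount point /srv/node/vdb are illustrative assumptions only:
# mkfs.xfs /dev/vdb
# mkdir -p /srv/node/vdb
# blkid /dev/vdb
/dev/vdb: UUID="DEVICE_UUID" TYPE="xfs"
# echo "UUID=DEVICE_UUID /srv/node/vdb xfs defaults 0 0" >> /etc/fstab
# mount /srv/node/vdb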
Procedure 4.4. Configuring the Object Storage Service Storage Nodes
- Format your devices using the ext4 or XFS filesystem. Ensure that xattrs are enabled.
- Add your devices to the /etc/fstab file to ensure that they are mounted under /srv/node/ at boot time. Use the blkid command to find your device's unique ID, and mount the device using its unique ID.
Note
If using ext4, ensure that extended attributes are enabled by mounting the filesystem with the user_xattr option. (In XFS, extended attributes are enabled by default.)
- Configure the firewall to open the TCP ports used by each service running on each node. By default, the account service uses port 6202, the container service uses port 6201, and the object service uses port 6200.
- Open the /etc/sysconfig/iptables file in a text editor.
- Add an INPUT rule allowing TCP traffic on the ports used by the account, container, and object services. The new rule must appear before any reject-with icmp-host-prohibited rule:
-A INPUT -p tcp -m multiport --dports 6200,6201,6202,873 -j ACCEPT
- Save the changes to the /etc/sysconfig/iptables file.
- Restart the iptables service for the firewall changes to take effect:
# systemctl restart iptables.service
- Change the owner of the contents of /srv/node/ to swift:swift:
# chown -R swift:swift /srv/node/
- Set the SELinux context correctly for all directories under /srv/node/:
# restorecon -R /srv
- Add a hash prefix to the /etc/swift/swift.conf file:
# openstack-config --set /etc/swift/swift.conf swift-hash swift_hash_path_prefix \
  $(openssl rand -hex 10)
- Add a hash suffix to the /etc/swift/swift.conf file:
# openstack-config --set /etc/swift/swift.conf swift-hash swift_hash_path_suffix \
  $(openssl rand -hex 10)
- Set the IP address that the storage services will listen on. Run the following commands for every service on every node in your Object Storage cluster:
# openstack-config --set /etc/swift/object-server.conf \
  DEFAULT bind_ip NODE_IP_ADDRESS
# openstack-config --set /etc/swift/account-server.conf \
  DEFAULT bind_ip NODE_IP_ADDRESS
# openstack-config --set /etc/swift/container-server.conf \
  DEFAULT bind_ip NODE_IP_ADDRESS
Replace NODE_IP_ADDRESS with the IP address of the node you are configuring.
- Copy /etc/swift/swift.conf from the node you are currently configuring to all of your Object Storage service nodes.
Important
The /etc/swift/swift.conf file must be identical on all of your Object Storage service nodes.
- Start the services that will run on the node:
# systemctl start openstack-swift-account.service
# systemctl start openstack-swift-container.service
# systemctl start openstack-swift-object.service
- Configure the services to start at boot time:
# systemctl enable openstack-swift-account.service
# systemctl enable openstack-swift-container.service
# systemctl enable openstack-swift-object.service
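As an optional check, you can confirm that each storage service is listening on its configured address and ports; the ss invocation below is just one way to do this:
# ss -tlnp | grep -E ':(6200|6201|6202)'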
4.4.3. Configure the Object Storage Service Proxy Service
The Object Storage proxy service determines the node to which GET and PUT requests are directed.
Although you can run the account, container, and object services alongside the proxy service, only the proxy service is covered in the following procedure.
Note
Because the SSL capability built into the Object Storage service is intended primarily for testing, it is not recommended for use in production. In a production cluster, Red Hat recommends that you use a load balancer to terminate SSL connections.
Procedure 4.5. Configuring the Object Storage Service Proxy Service
- Update the configuration file for the proxy server with the correct authentication details for the appropriate service user:
# openstack-config --set /etc/swift/proxy-server.conf \
  filter:authtoken auth_host IP
# openstack-config --set /etc/swift/proxy-server.conf \
  filter:authtoken admin_tenant_name services
# openstack-config --set /etc/swift/proxy-server.conf \
  filter:authtoken admin_user swift
# openstack-config --set /etc/swift/proxy-server.conf \
  filter:authtoken admin_password PASSWORD
Replace the following values:
- Replace IP with the IP address or host name of the Identity server.
- Replace services with the name of the tenant that was created for the Object Storage service (previous examples set this to services).
- Replace swift with the name of the service user that was created for the Object Storage service (previous examples set this to swift).
- Replace PASSWORD with the password associated with the service user.
- Start the memcached and openstack-swift-proxy services:
# systemctl start memcached.service
# systemctl start openstack-swift-proxy.service
- Configure the memcached and openstack-swift-proxy services to start at boot time:
# systemctl enable memcached.service
# systemctl enable openstack-swift-proxy.service
- Allow incoming connections to the server hosting the Object Storage proxy service. Open the /etc/sysconfig/iptables file in a text editor, and add an INPUT rule allowing TCP traffic on port 8080. The new rule must appear before any INPUT rules that REJECT traffic:
-A INPUT -p tcp -m multiport --dports 8080 -j ACCEPT
Important
This rule allows communication from all remote hosts to the system hosting the Swift proxy on port 8080. For information regarding the creation of more restrictive firewall rules, see the Red Hat Enterprise Linux Security Guide.
- Restart the iptables service to ensure that the change takes effect:
# systemctl restart iptables.service
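If the healthcheck middleware is enabled in your proxy pipeline (it is in many default proxy-server.conf files, but verify yours), a quick way to confirm that the proxy is answering is shown below; replace PROXY_IP with the IP address of the server hosting the proxy service:
# curl http://PROXY_IP:8080/healthcheck
OK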
4.4.4. Object Storage Service Rings
Rings determine where data is stored in a cluster of storage nodes. Ring files are generated using the swift-ring-builder tool. Three ring files are required, one each for the object, container, and account services.
Each storage device in a cluster is divided into partitions, with a recommended minimum of 100 partitions per device. Each partition is physically a directory on disk. A configurable number of bits from the MD5 hash of the filesystem path to the partition directory, known as the partition power, is used as a partition index for the device. The partition count of a cluster that has 1000 devices, where each device has 100 partitions on it, is 100,000.
The partition count is used to calculate the partition power, where 2 to the partition power is the partition count. If the partition power is a fraction, it is rounded up. If the partition count is 100,000, the partition power is 17 (16.610 rounded up). This can be expressed mathematically as: 2^partition power = partition count.
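As an illustration of this calculation, the partition power for the 100,000-partition example above can be computed on the command line; the python invocation shown here is simply one convenient way to do the arithmetic:
# python -c "import math; print(int(math.ceil(math.log(100000, 2))))"
17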
4.4.5. Build Object Storage Service Ring Files
Three ring files need to be created: one to track the objects stored by the Object Storage Service, one to track the containers in which objects are placed, and one to track which accounts can access which containers. The ring files are used to deduce where a particular piece of data is stored.
Ring files are generated using four possible parameters: partition power, replica count, zone, and the amount of time that must pass between partition reassignments.
Ring File Parameter | Description
---|---
part_power | 2^partition power = partition count. The partition power is rounded up after calculation.
replica_count | The number of times that your data will be replicated in the cluster.
min_part_hours | Minimum number of hours before a partition can be moved. This parameter increases availability of data by not moving more than one copy of a given data item within that min_part_hours amount of time.
zone | Used when adding devices to rings (optional). Zones are a flexible abstraction, where each zone should be separated as much as possible from the other zones in your deployment. You can use a zone to represent sites, cabinets, nodes, or even devices.
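For example, with the partition power of 17 calculated in the previous section, an assumed replica count of 3, and an assumed minimum of one hour between partition moves, the create commands in the procedure below would read:
# swift-ring-builder /etc/swift/object.builder create 17 3 1
# swift-ring-builder /etc/swift/container.builder create 17 3 1
# swift-ring-builder /etc/swift/account.builder create 17 3 1
The values 3 and 1 are illustrative only; choose a replica count and minimum part hours appropriate for your cluster.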
Procedure 4.6. Building Object Storage Service Ring Files
- Build one ring for each service. Provide a builder file, a partition power, a replica count, and the minimum hours between partition reassignment:
# swift-ring-builder /etc/swift/object.builder create part_power replica_count min_part_hours
# swift-ring-builder /etc/swift/container.builder create part_power replica_count min_part_hours
# swift-ring-builder /etc/swift/account.builder create part_power replica_count min_part_hours
- When the rings are created, add devices to the account ring:
# swift-ring-builder /etc/swift/account.builder add zX-SERVICE_IP:6202/dev_mountpt part_count
Replace the following values:
- Replace X with the corresponding integer of a specified zone (for example, z1 would correspond to Zone One).
- Replace SERVICE_IP with the IP on which the account, container, and object services should listen. This IP should match the bind_ip value set during the configuration of the Object Storage service storage nodes.
- Replace dev_mountpt with the /srv/node subdirectory under which your device is mounted.
- Replace part_count with the partition count you used to calculate your partition power.
Note
Repeat this step for each device (on each node in the cluster) you want added to the ring.
- Add each device to both the container and object rings:
# swift-ring-builder /etc/swift/container.builder add zX-SERVICE_IP:6201/dev_mountpt part_count
# swift-ring-builder /etc/swift/object.builder add zX-SERVICE_IP:6200/dev_mountpt part_count
Replace the variables with the same ones used in the previous step.
Note
Repeat these commands for each device (on each node in the cluster) you want added to the ring.
- Distribute the partitions across the devices in the ring:
# swift-ring-builder /etc/swift/account.builder rebalance
# swift-ring-builder /etc/swift/container.builder rebalance
# swift-ring-builder /etc/swift/object.builder rebalance
- Check to see that you now have three ring files in the /etc/swift directory:
# ls /etc/swift/*gz
The files should be listed as follows:
/etc/swift/account.ring.gz  /etc/swift/container.ring.gz  /etc/swift/object.ring.gz
- Restart the openstack-swift-proxy service:
# systemctl restart openstack-swift-proxy.service
- Ensure that all files in the /etc/swift/ directory, including those that you have just created, are owned by the root user and the swift group:
Important
All mount points must be owned by root; all roots of mounted file systems must be owned by swift. Before running the following command, ensure that all devices are already mounted and owned by root.
# chown -R root:swift /etc/swift
- Copy each ring builder file to each node in the cluster, storing them under /etc/swift/ (one way to do this is shown in the example below).
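Assuming SSH access between nodes, scp is one way to distribute the files; OTHER_NODE is a placeholder for the host name or IP address of each additional Object Storage node, and the command copies both the builder files and the compressed ring files:
# scp /etc/swift/*.builder /etc/swift/*.ring.gz OTHER_NODE:/etc/swift/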