Chapter 1. Deploying Red Hat Ceph Storage in Containers
This chapter describes how to use the Ansible application with the ceph-ansible
playbook to deploy Red Hat Ceph Storage 3 in containers.
- To install a Red Hat Ceph Storage cluster, see Section 1.2, “Installing a Red Hat Ceph Storage Cluster in Containers”.
- To install the Ceph Object Gateway, see Section 1.4, “Installing the Ceph Object Gateway in a Container”.
- To install Metadata Servers, see Section 1.5, “Installing Metadata Servers”.
- To learn about the Ansible --limit option, see Section 1.8, “Understanding the limit option”.
1.1. Prerequisites
- Obtain a valid customer subscription.
- Prepare the cluster nodes. On each node, perform the tasks described in the following subsections.
1.1.1. Registering Red Hat Ceph Storage Nodes to the CDN and Attaching Subscriptions
Register each Red Hat Ceph Storage (RHCS) node to the Content Delivery Network (CDN) and attach the appropriate subscription so that the node has access to software repositories. Each RHCS node must be able to access the full Red Hat Enterprise Linux 7 base content and the extras repository content.
Prerequisites
- A valid Red Hat subscription
- RHCS nodes must be able to connect to the Internet.
For RHCS nodes that cannot access the internet during installation, you must first follow these steps on a system with internet access:
Start a local Docker registry:
# docker run -d -p 5000:5000 --restart=always --name registry registry:2
Pull the Red Hat Ceph Storage 3.x image from the Red Hat Customer Portal:
# docker pull registry.access.redhat.com/rhceph/rhceph-3-rhel7
Tag the image:
# docker tag registry.access.redhat.com/rhceph/rhceph-3-rhel7 <local-host-fqdn>:5000/cephimageinlocalreg
Replace <local-host-fqdn> with your local host FQDN.
Push the image to the local Docker registry you started:
# docker push <local-host-fqdn>:5000/cephimageinlocalreg
Replace <local-host-fqdn> with your local host FQDN.
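Optionally, you can confirm that the image was pushed by querying the registry catalog endpoint. This quick check is not part of the original procedure and assumes the registry is reachable on port 5000:
# curl http://<local-host-fqdn>:5000/v2/_catalog
The returned JSON should list cephimageinlocalreg among the repositories.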
Procedure
Perform the following steps on all nodes in the storage cluster as the root user.
Register the node. When prompted, enter your Red Hat Customer Portal credentials:
# subscription-manager register
Pull the latest subscription data from the CDN:
# subscription-manager refresh
List all available subscriptions for Red Hat Ceph Storage:
# subscription-manager list --available --all --matches="*Ceph*"
Identify the appropriate subscription and retrieve its Pool ID.
Attach the subscription:
# subscription-manager attach --pool=$POOL_ID
Replace $POOL_ID with the Pool ID identified in the previous step.
Disable the default software repositories. Then, enable the Red Hat Enterprise Linux 7 Server, Red Hat Enterprise Linux 7 Server Extras, and RHCS repositories:
# subscription-manager repos --disable=*
# subscription-manager repos --enable=rhel-7-server-rpms
# subscription-manager repos --enable=rhel-7-server-extras-rpms
# subscription-manager repos --enable=rhel-7-server-rhceph-3-mon-els-rpms
# subscription-manager repos --enable=rhel-7-server-rhceph-3-osd-els-rpms
# subscription-manager repos --enable=rhel-7-server-rhceph-3-tools-els-rpms
Update the system to receive the latest packages:
# yum update
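Optionally, you can confirm that only the intended repositories are enabled on the node; this check is not part of the original procedure:
# subscription-manager repos --list-enabled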
Additional Resources
- See the Registering a System and Managing Subscriptions chapter in the System Administrator’s Guide for Red Hat Enterprise Linux 7.
1.1.2. Creating an Ansible user with sudo access
Ansible must be able to log into all the Red Hat Ceph Storage (RHCS) nodes as a user that has root
privileges to install software and create configuration files without prompting for a password. You must create an Ansible user with password-less root
access on all nodes in the storage cluster when deploying and configuring a Red Hat Ceph Storage cluster with Ansible.
Prerequisite
- Having root or sudo access to all nodes in the storage cluster.
Procedure
Log in to a Ceph node as the root user:
ssh root@$HOST_NAME
Replace $HOST_NAME with the host name of the Ceph node.
Example
# ssh root@mon01
Enter the root password when prompted.
Create a new Ansible user:
adduser $USER_NAME
Replace $USER_NAME with the new user name for the Ansible user.
Example
# adduser admin
Important
Do not use ceph as the user name. The ceph user name is reserved for the Ceph daemons. A uniform user name across the cluster can improve ease of use, but avoid using obvious user names, because intruders typically use them for brute-force attacks.
Set a new password for this user:
# passwd $USER_NAME
Replace $USER_NAME with the new user name for the Ansible user.
Example
# passwd admin
Enter the new password twice when prompted.
Configure sudo access for the newly created user:
cat << EOF >/etc/sudoers.d/$USER_NAME
$USER_NAME ALL = (root) NOPASSWD:ALL
EOF
Replace $USER_NAME with the new user name for the Ansible user.
Example
# cat << EOF >/etc/sudoers.d/admin
admin ALL = (root) NOPASSWD:ALL
EOF
Assign the correct file permissions to the new file:
chmod 0440 /etc/sudoers.d/$USER_NAME
Replace $USER_NAME with the new user name for the Ansible user.
Example
# chmod 0440 /etc/sudoers.d/admin
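Optionally, you can confirm that the rule is in effect by listing the sudo privileges for the new user; this check is not part of the original procedure. For the example admin user:
# sudo -l -U admin
The output should include a (root) NOPASSWD: ALL entry.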
Additional Resources
- The Adding a New User section in the System Administrator’s Guide for Red Hat Enterprise Linux 7.
1.1.3. Enabling Password-less SSH for Ansible
Generate an SSH key pair on the Ansible administration node and distribute the public key to each node in the storage cluster so that Ansible can access the nodes without being prompted for a password.
Prerequisites
- The Ansible user created in Section 1.1.2, “Creating an Ansible user with sudo access”, exists on all nodes in the storage cluster.
Procedure
Do the following steps from the Ansible administration node, and as the Ansible user.
Generate the SSH key pair, accept the default file name and leave the passphrase empty:
[user@admin ~]$ ssh-keygen
Copy the public key to all nodes in the storage cluster:
ssh-copy-id $USER_NAME@$HOST_NAME
Replace $USER_NAME with the new user name for the Ansible user, and $HOST_NAME with the host name of the Ceph node.
Example
[user@admin ~]$ ssh-copy-id admin@ceph-mon01
Create and edit the ~/.ssh/config file.
Important
By creating and editing the ~/.ssh/config file you do not have to specify the -u $USER_NAME option each time you execute the ansible-playbook command.
Create the SSH config file:
[user@admin ~]$ touch ~/.ssh/config
Open the config file for editing. Set the Hostname and User options for each node in the storage cluster:
Host node1
   Hostname $HOST_NAME
   User $USER_NAME
Host node2
   Hostname $HOST_NAME
   User $USER_NAME
...
Replace $HOST_NAME with the host name of the Ceph node, and $USER_NAME with the new user name for the Ansible user.
Example
Host node1
   Hostname monitor
   User admin
Host node2
   Hostname osd
   User admin
Host node3
   Hostname gateway
   User admin
Set the correct file permissions for the ~/.ssh/config file:
[admin@admin ~]$ chmod 600 ~/.ssh/config
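As a quick sanity check, not part of the original procedure, you can confirm that both the password-less SSH login and the sudo rule from the previous section work, using one of the Host aliases from the example config file (-t forces a terminal so sudo behaves as it would interactively):
[user@admin ~]$ ssh -t node1 sudo whoami
root
If this prints root without prompting for a password, Ansible will be able to manage the node.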
Additional Resources
- The ssh_config(5) manual page
manual page - The OpenSSH chapter in the System Administrator’s Guide for Red Hat Enterprise Linux 7
1.1.4. Configuring a firewall for Red Hat Ceph Storage
Red Hat Ceph Storage (RHCS) uses the firewalld
service.
The Monitor daemons use port 6789
for communication within the Ceph storage cluster.
On each Ceph OSD node, the OSD daemons use several ports in the range 6800-7300:
- One for communicating with clients and monitors over the public network
- One for sending data to other OSDs over a cluster network, if available; otherwise, over the public network
- One for exchanging heartbeat packets over a cluster network, if available; otherwise, over the public network
The Ceph Manager (ceph-mgr) daemons use ports in the range 6800-7300. Consider colocating the ceph-mgr daemons with the Ceph Monitors on the same nodes.
The Ceph Metadata Server nodes (ceph-mds) use ports in the range 6800-7300.
The Ceph Object Gateway nodes are configured by Ansible to use port 8080
by default. However, you can change the default port, for example to port 80
.
To use the SSL/TLS service, open port 443.
Prerequisite
- Network hardware is connected.
Procedure
Run the following commands as the root
user.
On all RHCS nodes, start the firewalld service. Enable it to run on boot, and ensure that it is running:
# systemctl enable firewalld
# systemctl start firewalld
# systemctl status firewalld
On all Monitor nodes, open port 6789 on the public network:
[root@monitor ~]# firewall-cmd --zone=public --add-port=6789/tcp
[root@monitor ~]# firewall-cmd --zone=public --add-port=6789/tcp --permanent
To limit access based on the source address:
firewall-cmd --zone=public --add-rich-rule="rule family="ipv4" \
source address="IP_address/netmask_prefix" port protocol="tcp" \
port="6789" accept"

firewall-cmd --zone=public --add-rich-rule="rule family="ipv4" \
source address="IP_address/netmask_prefix" port protocol="tcp" \
port="6789" accept" --permanent
Replace IP_address with the network address of the Monitor node, and netmask_prefix with the netmask in CIDR notation.
Example
[root@monitor ~]# firewall-cmd --zone=public --add-rich-rule="rule family="ipv4" \
source address="192.168.0.11/24" port protocol="tcp" \
port="6789" accept"

[root@monitor ~]# firewall-cmd --zone=public --add-rich-rule="rule family="ipv4" \
source address="192.168.0.11/24" port protocol="tcp" \
port="6789" accept" --permanent
On all OSD nodes, open ports 6800-7300 on the public network:
[root@osd ~]# firewall-cmd --zone=public --add-port=6800-7300/tcp
[root@osd ~]# firewall-cmd --zone=public --add-port=6800-7300/tcp --permanent
If you have a separate cluster network, repeat the commands with the appropriate zone.
On all Ceph Manager (ceph-mgr) nodes (usually the same nodes as the Monitor nodes), open ports 6800-7300 on the public network:
[root@monitor ~]# firewall-cmd --zone=public --add-port=6800-7300/tcp
[root@monitor ~]# firewall-cmd --zone=public --add-port=6800-7300/tcp --permanent
If you have a separate cluster network, repeat the commands with the appropriate zone.
On all Ceph Metadata Server (ceph-mds) nodes, open port 6800 on the public network:
[root@monitor ~]# firewall-cmd --zone=public --add-port=6800/tcp
[root@monitor ~]# firewall-cmd --zone=public --add-port=6800/tcp --permanent
If you have a separate cluster network, repeat the commands with the appropriate zone.
On all Ceph Object Gateway nodes, open the relevant port or ports on the public network.
To open the default Ansible-configured port of 8080:
[root@gateway ~]# firewall-cmd --zone=public --add-port=8080/tcp
[root@gateway ~]# firewall-cmd --zone=public --add-port=8080/tcp --permanent
To limit access based on the source address:
firewall-cmd --zone=public --add-rich-rule="rule family="ipv4" \
source address="IP_address/netmask_prefix" port protocol="tcp" \
port="8080" accept"

firewall-cmd --zone=public --add-rich-rule="rule family="ipv4" \
source address="IP_address/netmask_prefix" port protocol="tcp" \
port="8080" accept" --permanent
Replace IP_address with the network address of the object gateway node, and netmask_prefix with the netmask in CIDR notation.
Example
[root@gateway ~]# firewall-cmd --zone=public --add-rich-rule="rule family="ipv4" \
source address="192.168.0.31/24" port protocol="tcp" \
port="8080" accept"

[root@gateway ~]# firewall-cmd --zone=public --add-rich-rule="rule family="ipv4" \
source address="192.168.0.31/24" port protocol="tcp" \
port="8080" accept" --permanent
Optional. If you installed the Ceph Object Gateway using Ansible and changed the default port that Ansible configures the Ceph Object Gateway to use from 8080, for example to port 80, open that port:
[root@gateway ~]# firewall-cmd --zone=public --add-port=80/tcp
[root@gateway ~]# firewall-cmd --zone=public --add-port=80/tcp --permanent
To limit access based on the source address, run the following commands:
firewall-cmd --zone=public --add-rich-rule="rule family="ipv4" \
source address="IP_address/netmask_prefix" port protocol="tcp" \
port="80" accept"

firewall-cmd --zone=public --add-rich-rule="rule family="ipv4" \
source address="IP_address/netmask_prefix" port protocol="tcp" \
port="80" accept" --permanent
Replace IP_address with the network address of the object gateway node, and netmask_prefix with the netmask in CIDR notation.
Example
[root@gateway ~]# firewall-cmd --zone=public --add-rich-rule="rule family="ipv4" \
source address="192.168.0.31/24" port protocol="tcp" \
port="80" accept"

[root@gateway ~]# firewall-cmd --zone=public --add-rich-rule="rule family="ipv4" \
source address="192.168.0.31/24" port protocol="tcp" \
port="80" accept" --permanent
Optional. To use SSL/TLS, open port 443:
[root@gateway ~]# firewall-cmd --zone=public --add-port=443/tcp
[root@gateway ~]# firewall-cmd --zone=public --add-port=443/tcp --permanent
To limit access based on the source address, run the following commands:
firewall-cmd --zone=public --add-rich-rule="rule family="ipv4" \
source address="IP_address/netmask_prefix" port protocol="tcp" \
port="443" accept"

firewall-cmd --zone=public --add-rich-rule="rule family="ipv4" \
source address="IP_address/netmask_prefix" port protocol="tcp" \
port="443" accept" --permanent
Replace IP_address with the network address of the object gateway node, and netmask_prefix with the netmask in CIDR notation.
Example
[root@gateway ~]# firewall-cmd --zone=public --add-rich-rule="rule family="ipv4" \
source address="192.168.0.31/24" port protocol="tcp" \
port="443" accept"

[root@gateway ~]# firewall-cmd --zone=public --add-rich-rule="rule family="ipv4" \
source address="192.168.0.31/24" port protocol="tcp" \
port="443" accept" --permanent
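After opening the required ports on a node, you can verify both the runtime and the permanent configuration; this check is not part of the original procedure. Note that rich rules are listed by --list-rich-rules rather than --list-ports:
# firewall-cmd --zone=public --list-ports
# firewall-cmd --zone=public --list-ports --permanent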
Additional Resources
- For more information about the public and cluster networks, see Verifying the Network Configuration for Red Hat Ceph Storage.
- For additional details on firewalld, see the Using Firewalls chapter in the Security Guide for Red Hat Enterprise Linux 7.
1.1.5. Using an HTTP Proxy
If the Ceph nodes are behind an HTTP/HTTPS proxy, then docker must be configured to access the images in the registry. Use the following procedure to configure docker access through an HTTP/HTTPS proxy.
Prerequisites
- A running HTTP/HTTPS proxy
Procedure
As root, create a systemd directory for the docker service:
# mkdir /etc/systemd/system/docker.service.d/
As root, create the HTTP/HTTPS configuration file.
For HTTP, create the /etc/systemd/system/docker.service.d/http-proxy.conf file and add the following lines to the file:
[Service]
Environment="HTTP_PROXY=http://proxy.example.com:80/"
For HTTPS, create the /etc/systemd/system/docker.service.d/https-proxy.conf file and add the following lines to the file:
[Service]
Environment="HTTPS_PROXY=https://proxy.example.com:443/"
As root, copy the HTTP/HTTPS configuration file to all Ceph nodes in the storage cluster before running the ceph-ansible playbook.
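The docker daemon only picks up these drop-in files when its systemd configuration is reloaded. Reloading systemd and restarting docker is not shown in the procedure above, but a typical way to apply the change on each node is:
# systemctl daemon-reload
# systemctl restart docker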
1.2. Installing a Red Hat Ceph Storage Cluster in Containers
Use the Ansible application with the ceph-ansible
playbook to install Red Hat Ceph Storage 3 in containers.
A Ceph cluster used in production usually consists of ten or more nodes. To deploy Red Hat Ceph Storage as a container image, Red Hat recommends using a Ceph cluster that consists of at least three OSD and three Monitor nodes.
Ceph can run with one monitor; however, to ensure high availability in a production cluster, Red Hat will only support deployments with at least three monitor nodes.
Prerequisites
Using the root user account on the Ansible administration node, enable the Red Hat Ceph Storage 3 Tools repository and Ansible repository:
[root@admin ~]# subscription-manager repos --enable=rhel-7-server-rhceph-3-tools-els-rpms --enable=rhel-7-server-ansible-2.6-rpms
Install the ceph-ansible package:
[root@admin ~]# yum install ceph-ansible
Procedure
Run the following commands from the Ansible administration node unless instructed otherwise.
As the Ansible user, create the ceph-ansible-keys directory where Ansible stores temporary values generated by the ceph-ansible playbook:
[user@admin ~]$ mkdir ~/ceph-ansible-keys
As root, create a symbolic link to the /usr/share/ceph-ansible/group_vars directory in the /etc/ansible/ directory:
[root@admin ~]# ln -s /usr/share/ceph-ansible/group_vars /etc/ansible/group_vars
Navigate to the /usr/share/ceph-ansible/ directory:
[root@admin ~]$ cd /usr/share/ceph-ansible
Create new copies of the yml.sample files:
[root@admin ceph-ansible]# cp group_vars/all.yml.sample group_vars/all.yml
[root@admin ceph-ansible]# cp group_vars/osds.yml.sample group_vars/osds.yml
[root@admin ceph-ansible]# cp site-docker.yml.sample site-docker.yml
Edit the copied files.
Edit the group_vars/all.yml file. See the table below for the most common required and optional parameters to uncomment. Note that the table does not include all parameters.
Important
Do not set the cluster: ceph parameter to any value other than ceph, because using custom cluster names is not supported.
Table 1.1. General Ansible Settings
- monitor_interface: The interface that the Monitor nodes listen to. One of monitor_interface, monitor_address, or monitor_address_block is required.
- monitor_address: The address that the Monitor nodes listen to. One of monitor_interface, monitor_address, or monitor_address_block is required.
- monitor_address_block: The subnet of the Ceph public network. One of monitor_interface, monitor_address, or monitor_address_block is required. Use when the IP addresses of the nodes are unknown, but the subnet is known.
- ip_version: Set to ipv6. Required if using IPv6 addressing.
- journal_size: The required size of the journal in MB. Optional.
- public_network: The IP address and netmask of the Ceph public network. Required. See the Verifying the Network Configuration for Red Hat Ceph Storage section in the Installation Guide for Red Hat Enterprise Linux.
- cluster_network: The IP address and netmask of the Ceph cluster network. Optional.
- ceph_docker_image: rhceph/rhceph-3-rhel7, or cephimageinlocalreg if using a local Docker registry. Required.
- containerized_deployment: true. Required.
- ceph_docker_registry: registry.access.redhat.com, or <local-host-fqdn> if using a local Docker registry. Required.
An example of the all.yml file can look like this:
monitor_interface: eth0
journal_size: 5120
public_network: 192.168.0.0/24
ceph_docker_image: rhceph/rhceph-3-rhel7
containerized_deployment: true
ceph_docker_registry: registry.access.redhat.com
For additional details, see the all.yml file.
Edit the group_vars/osds.yml file. See the table below for the most common required and optional parameters to uncomment. Note that the table does not include all parameters.
Important
Use a different physical device to install an OSD than the device where the operating system is installed. Sharing the same device between the operating system and OSDs causes performance issues.
Table 1.2. OSD Ansible Settings
- osd_scenario: Required. Set to collocated to use the same device for write-ahead logging and key/value data (BlueStore) or journal (FileStore) and OSD data; non-collocated to use a dedicated device, such as SSD or NVMe media, to store the write-ahead log and key/value data (BlueStore) or journal data (FileStore); or lvm to use the Logical Volume Manager to store OSD data. When using osd_scenario: non-collocated, ceph-ansible expects the number of entries in devices and dedicated_devices to match. For example, if you specify 10 disks in devices, you must specify 10 entries in dedicated_devices.
- osd_auto_discovery: true to automatically discover OSDs. Required if using osd_scenario: collocated. Cannot be used when the devices setting is used.
- devices: List of devices where ceph data is stored. Required to specify the list of devices. Cannot be used when the osd_auto_discovery setting is used. When using lvm as the osd_scenario and setting the devices option, ceph-volume lvm batch mode creates the optimized OSD configuration.
- dedicated_devices: List of dedicated devices for non-collocated OSDs where the ceph journal is stored. Required if osd_scenario: non-collocated. Should be nonpartitioned devices.
- dmcrypt: true to encrypt OSDs. Optional. Defaults to false.
- lvm_volumes: A list of FileStore or BlueStore dictionaries. Required if using osd_scenario: lvm and storage devices are not defined using devices. Each dictionary must contain data, journal, and data_vg keys. Any logical volume or volume group must be specified by name, not by full path. The data and journal keys can be a logical volume (LV) or partition, but do not use one journal for multiple data LVs. The data_vg key must be the volume group containing the data LV. Optionally, the journal_vg key can be used to specify the volume group containing the journal LV, if applicable. See the examples below for various supported configurations.
- osds_per_device: The number of OSDs to create per device. Optional. Defaults to 1.
- osd_objectstore: The Ceph object store type for the OSDs. Optional. Defaults to bluestore. The other option is filestore. Required for upgrades.
The following are examples of the osds.yml file when using the three OSD scenarios: collocated, non-collocated, and lvm. The default OSD object store format is BlueStore, if not specified.
Collocated
osd_objectstore: filestore
osd_scenario: collocated
devices:
  - /dev/sda
  - /dev/sdb
Non-collocated - BlueStore
osd_objectstore: bluestore
osd_scenario: non-collocated
devices:
  - /dev/sda
  - /dev/sdb
  - /dev/sdc
  - /dev/sdd
dedicated_devices:
  - /dev/nvme0n1
  - /dev/nvme0n1
  - /dev/nvme1n1
  - /dev/nvme1n1
This non-collocated example will create four BlueStore OSDs, one per device. In this example, the traditional hard drives (sda, sdb, sdc, sdd) are used for object data, and the solid state drives (SSDs) (/dev/nvme0n1, /dev/nvme1n1) are used for the BlueStore databases and write-ahead logs. This configuration pairs the /dev/sda and /dev/sdb devices with the /dev/nvme0n1 device, and pairs the /dev/sdc and /dev/sdd devices with the /dev/nvme1n1 device.
Non-collocated - FileStore
osd_objectstore: filestore
osd_scenario: non-collocated
devices:
  - /dev/sda
  - /dev/sdb
  - /dev/sdc
  - /dev/sdd
dedicated_devices:
  - /dev/nvme0n1
  - /dev/nvme0n1
  - /dev/nvme1n1
  - /dev/nvme1n1
LVM simple
osd_objectstore: bluestore
osd_scenario: lvm
devices:
  - /dev/sda
  - /dev/sdb
or
osd_objectstore: bluestore
osd_scenario: lvm
devices:
  - /dev/sda
  - /dev/sdb
  - /dev/nvme0n1
With these simple configurations, ceph-ansible uses batch mode (ceph-volume lvm batch) to create the OSDs.
In the first scenario, if the devices are traditional hard drives or SSDs, then one OSD per device is created.
In the second scenario, when there is a mix of traditional hard drives and SSDs, the data is placed on the traditional hard drives (sda, sdb) and the BlueStore database (block.db) is created as large as possible on the SSD (nvme0n1).
LVM advanced
osd_objectstore: filestore
osd_scenario: lvm
lvm_volumes:
  - data: data-lv1
    data_vg: vg1
    journal: journal-lv1
    journal_vg: vg2
  - data: data-lv2
    journal: /dev/sda
    data_vg: vg1
or
osd_objectstore: bluestore
osd_scenario: lvm
lvm_volumes:
  - data: data-lv1
    data_vg: data-vg1
    db: db-lv1
    db_vg: db-vg1
    wal: wal-lv1
    wal_vg: wal-vg1
  - data: data-lv2
    data_vg: data-vg2
    db: db-lv2
    db_vg: db-vg2
    wal: wal-lv2
    wal_vg: wal-vg2
With these advanced scenario examples, the volume groups and logical volumes must be created beforehand. They will not be created by ceph-ansible.
Note
If using all NVMe SSDs, set the osd_scenario: lvm and osds_per_device: 4 options. For more information, see the Configuring OSD Ansible settings for all NVMe storage section in the Red Hat Ceph Storage Container Guide.
For additional details, see the comments in the osds.yml file.
Edit the Ansible inventory file located by default at /etc/ansible/hosts. Remember to comment out example hosts.
Add the Monitor nodes under the [mons] section:
[mons]
<monitor-host-name>
<monitor-host-name>
<monitor-host-name>
Add OSD nodes under the [osds] section. If the nodes have sequential naming, consider using a range:
[osds]
<osd-host-name[1:10]>
Note
For OSDs in a new installation, the default object store format is BlueStore.
Alternatively, you can colocate Monitors with the OSD daemons on one node by adding the same node under the [mons] and [osds] sections. See Chapter 2, Colocation of Containerized Ceph Daemons for details.
Optionally, for all deployments, bare-metal or in containers, you can create a custom CRUSH hierarchy using ansible-playbook:
Set up your Ansible inventory file. Specify where you want the OSD hosts to be in the CRUSH map's hierarchy by using the osd_crush_location parameter. You must specify at least two CRUSH bucket types to specify the location of the OSD, and one bucket type must be host. By default, these include root, datacenter, room, row, pod, pdu, rack, chassis, and host.
Syntax
[osds]
CEPH_OSD_NAME osd_crush_location="{ 'root': 'ROOT_BUCKET', 'rack': 'RACK_BUCKET', 'pod': 'POD_BUCKET', 'host': 'CEPH_HOST_NAME' }"
Example
[osds]
ceph-osd-01 osd_crush_location="{ 'root': 'default', 'rack': 'rack1', 'pod': 'monpod', 'host': 'ceph-osd-01' }"
Set the crush_rule_config and create_crush_tree parameters to True, and create at least one CRUSH rule if you do not want to use the default CRUSH rules. For example, if you are using HDD devices, edit the parameters as follows:
crush_rule_config: True
crush_rule_hdd:
  name: replicated_hdd_rule
  root: root-hdd
  type: host
  class: hdd
  default: True
crush_rules:
  - "{{ crush_rule_hdd }}"
create_crush_tree: True
If you are using SSD devices, then edit the parameters as follows:
crush_rule_config: True
crush_rule_ssd:
  name: replicated_ssd_rule
  root: root-ssd
  type: host
  class: ssd
  default: True
crush_rules:
  - "{{ crush_rule_ssd }}"
create_crush_tree: True
Note
The default CRUSH rules fail if both ssd and hdd OSDs are not deployed, because the default rules now include the class parameter, which must be defined.
Note
Add the custom CRUSH hierarchy to the OSD files in the host_vars directory, as described in a step below, to make this configuration work.
Create pools with the created crush_rules in the group_vars/clients.yml file.
Example
copy_admin_key: True
user_config: True
pool1:
  name: "pool1"
  pg_num: 128
  pgp_num: 128
  rule_name: "HDD"
  type: "replicated"
  device_class: "hdd"
pools:
  - "{{ pool1 }}"
View the tree.
[root@mon ~]# ceph osd tree
Validate the pools.
# for i in $(rados lspools); do echo "pool: $i"; ceph osd pool get $i crush_rule; done
pool: pool1
crush_rule: HDD
For all deployments, bare-metal or in containers, open the Ansible inventory file for editing, by default the /etc/ansible/hosts file. Comment out the example hosts.
Add the Ceph Manager (ceph-mgr) nodes under the [mgrs] section. Colocate the Ceph Manager daemon with the Monitor nodes.
[mgrs]
<monitor-host-name>
<monitor-host-name>
<monitor-host-name>
As the Ansible user, ensure that Ansible can reach the Ceph hosts:
[user@admin ~]$ ansible all -m ping
As root, create the /var/log/ansible/ directory and assign the appropriate permissions for the ansible user:
[root@admin ~]# mkdir /var/log/ansible
[root@admin ~]# chown ansible:ansible /var/log/ansible
[root@admin ~]# chmod 755 /var/log/ansible
Edit the /usr/share/ceph-ansible/ansible.cfg file, updating the log_path value as follows:
log_path = /var/log/ansible/ansible.log
As the Ansible user, change to the /usr/share/ceph-ansible/ directory:
[user@admin ~]$ cd /usr/share/ceph-ansible/
Run the ceph-ansible playbook:
[user@admin ceph-ansible]$ ansible-playbook site-docker.yml
Note
If you deploy Red Hat Ceph Storage to Red Hat Enterprise Linux Atomic Host hosts, use the --skip-tags=with_pkg option:
[user@admin ceph-ansible]$ ansible-playbook site-docker.yml --skip-tags=with_pkg
Note
To increase the deployment speed, use the --forks option to ansible-playbook. By default, ceph-ansible sets forks to 20. With this setting, up to twenty nodes will be installed at the same time. To install up to thirty nodes at a time, run ansible-playbook --forks 30 PLAYBOOK FILE. The resources on the admin node must be monitored to ensure they are not overused. If they are, lower the number passed to --forks.
Using the root account on a Monitor node, verify the status of the Ceph cluster:
docker exec ceph-<mon|mgr>-<id> ceph health
Replace <id> with the host name of the Monitor node.
For example:
[root@monitor ~]# docker exec ceph-mon-mon0 ceph health
HEALTH_OK
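For a more detailed view of the cluster, including the Monitor quorum, OSD counts, and placement group states, the same container can run the status command; the container name follows the ceph-mon-<id> pattern shown above:
[root@monitor ~]# docker exec ceph-mon-mon0 ceph -s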
1.3. Configuring OSD Ansible settings for all NVMe storage
To optimize performance when using only non-volatile memory express (NVMe) devices for storage, configure four OSDs on each NVMe device. Normally only one OSD is configured per device, which will underutilize the throughput of an NVMe device.
If you mix SSDs and HDDs, then SSDs will be used for either journals or block.db, not OSDs.
In testing, configuring four OSDs on each NVMe device was found to provide optimal performance. Setting osds_per_device: 4 is recommended, but it is not required. Other values may provide better performance in your environment.
Prerequisites
- Satisfying all software and hardware requirements for a Ceph cluster.
Procedure
Set osd_scenario: lvm and osds_per_device: 4 in group_vars/osds.yml:
osd_scenario: lvm
osds_per_device: 4
List the NVMe devices under devices:
devices:
  - /dev/nvme0n1
  - /dev/nvme1n1
  - /dev/nvme2n1
  - /dev/nvme3n1
The settings in group_vars/osds.yml will look similar to this example:
osd_scenario: lvm
osds_per_device: 4
devices:
  - /dev/nvme0n1
  - /dev/nvme1n1
  - /dev/nvme2n1
  - /dev/nvme3n1
You must use devices with this configuration, not lvm_volumes. This is because lvm_volumes is generally used with pre-created logical volumes, and osds_per_device implies automatic logical volume creation by Ceph.
1.4. Installing the Ceph Object Gateway in a Container
Use the Ansible application with the ceph-ansible
playbook to install the Ceph Object Gateway in a container.
Prerequisites
- A working Red Hat Ceph Storage cluster.
Procedure
Run the following commands from the Ansible administration node unless specified otherwise.
As the root user, navigate to the /usr/share/ceph-ansible/ directory:
[root@admin ~]# cd /usr/share/ceph-ansible/
Uncomment the radosgw_interface parameter in the group_vars/all.yml file:
radosgw_interface: interface
Replace interface with the interface that the Ceph Object Gateway nodes listen to.
Optional. Change the default variables.
Create a new copy of the rgws.yml.sample file located in the group_vars directory:
[root@admin ceph-ansible]# cp group_vars/rgws.yml.sample group_vars/rgws.yml
Edit the group_vars/rgws.yml file. For additional details, see the rgws.yml file.
Add the host name of the Ceph Object Gateway node to the [rgws] section of the Ansible inventory file, located by default at /etc/ansible/hosts:
[rgws]
gateway01
Alternatively, you can colocate the Ceph Object Gateway with the OSD daemon on one node by adding the same node under the [osds] and [rgws] sections. See Colocation of containerized Ceph daemons for details.
As the Ansible user, run the ceph-ansible playbook:
[user@admin ceph-ansible]$ ansible-playbook site-docker.yml --limit rgws
Note
If you deploy Red Hat Ceph Storage to Red Hat Enterprise Linux Atomic Host hosts, use the --skip-tags=with_pkg option:
[user@admin ceph-ansible]$ ansible-playbook site-docker.yml --skip-tags=with_pkg
Verify that the Ceph Object Gateway node was deployed successfully.
Connect to a Monitor node as the root user:
ssh hostname
Replace hostname with the host name of the Monitor node, for example:
[user@admin ~]$ ssh root@monitor
Verify that the Ceph Object Gateway pools were created properly:
[root@monitor ~]# docker exec ceph-mon-mon1 rados lspools
rbd
cephfs_data
cephfs_metadata
.rgw.root
default.rgw.control
default.rgw.data.root
default.rgw.gc
default.rgw.log
default.rgw.users.uid
From any client on the same network as the Ceph cluster, for example the Monitor node, use the curl command to send an HTTP request on port 8080 using the IP address of the Ceph Object Gateway host:
curl http://IP-address:8080
Replace IP-address with the IP address of the Ceph Object Gateway node. To determine the IP address of the Ceph Object Gateway host, use the ifconfig or ip commands:
[root@client ~]# curl http://192.168.122.199:8080
<?xml version="1.0" encoding="UTF-8"?><ListAllMyBucketsResult xmlns="http://s3.amazonaws.com/doc/2006-03-01/"><Owner><ID>anonymous</ID><DisplayName></DisplayName></Owner><Buckets></Buckets></ListAllMyBucketsResult>
List buckets:
[root@monitor ~]# docker exec ceph-mon-mon1 radosgw-admin bucket list
Additional Resources
- The Red Hat Ceph Storage 3 Ceph Object Gateway Guide for Red Hat Enterprise Linux
- Understanding the limit option
1.5. Installing Metadata Servers
Use the Ansible automation application to install a Ceph Metadata Server (MDS). Metadata Server daemons are necessary for deploying a Ceph File System.
Prerequisites
- A working Red Hat Ceph Storage cluster.
Procedure
Perform the following steps on the Ansible administration node.
Add a new section [mdss] to the /etc/ansible/hosts file:
[mdss]
hostname
hostname
hostname
Replace hostname with the host names of the nodes where you want to install the Ceph Metadata Servers.
Alternatively, you can colocate the Metadata Server with the OSD daemon on one node by adding the same node under the [osds] and [mdss] sections. See Colocation of containerized Ceph daemons for details.
Navigate to the /usr/share/ceph-ansible directory:
[root@admin ~]# cd /usr/share/ceph-ansible
Optional. Change the default variables.
Create a copy of the group_vars/mdss.yml.sample file named mdss.yml:
[root@admin ceph-ansible]# cp group_vars/mdss.yml.sample group_vars/mdss.yml
Optionally, edit parameters in mdss.yml. See mdss.yml for details.
As the Ansible user, run the Ansible playbook:
[user@admin ceph-ansible]$ ansible-playbook site-docker.yml --limit mdss
- After installing Metadata Servers, configure them. For details, see the Configuring Metadata Server Daemons chapter in the Ceph File System Guide for Red Hat Ceph Storage 3.
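To confirm that the Metadata Server daemons registered with the cluster, you can query the MDS state from a Monitor node. This check is not part of the procedure above and assumes the Monitor container naming shown earlier in this chapter; the new daemons appear as standby until a Ceph File System is created:
[root@monitor ~]# docker exec ceph-mon-<monitor-host-name> ceph mds stat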
Additional Resources
- The Ceph File System Guide for Red Hat Ceph Storage 3
- Understanding the limit option
1.6. Installing the NFS-Ganesha Gateway
The Ceph NFS Ganesha Gateway is an NFS interface built on top of the Ceph Object Gateway. It provides applications with a POSIX filesystem interface to the Ceph Object Gateway, so that files within existing filesystems can be migrated to Ceph Object Storage.
Prerequisites
- A running Ceph storage cluster, preferably in the active + clean state.
- At least one node running a Ceph Object Gateway.
- Perform the Before You Start procedure.
Procedure
Perform the following tasks on the Ansible administration node.
Create the nfss.yml file from the sample file:
[root@ansible ~]# cd /usr/share/ceph-ansible/group_vars
[root@ansible ~]# cp nfss.yml.sample nfss.yml
Add gateway hosts to the /etc/ansible/hosts file under an [nfss] group to identify their group membership to Ansible. If the hosts have sequential naming, use a range. For example:
[nfss]
<nfs_host_name_1>
<nfs_host_name_2>
<nfs_host_name[3..10]>
Navigate to the /usr/share/ceph-ansible directory:
[root@ansible ~]# cd /usr/share/ceph-ansible
To copy the administrator key to the Ceph Object Gateway node, uncomment the copy_admin_key setting in the /usr/share/ceph-ansible/group_vars/nfss.yml file:
copy_admin_key: true
Configure the FSAL (File System Abstraction Layer) sections of the /usr/share/ceph-ansible/group_vars/nfss.yml file. Provide an ID, S3 user ID, S3 access key, and secret. For NFSv4, it should look something like this:
###################
# FSAL RGW Config #
###################
#ceph_nfs_rgw_export_id: <replace-w-numeric-export-id>
#ceph_nfs_rgw_pseudo_path: "/"
#ceph_nfs_rgw_protocols: "3,4"
#ceph_nfs_rgw_access_type: "RW"
#ceph_nfs_rgw_user: "cephnfs"
# Note: keys are optional and can be generated, but not on containerized deployments,
# where they must be configured.
#ceph_nfs_rgw_access_key: "<replace-w-access-key>"
#ceph_nfs_rgw_secret_key: "<replace-w-secret-key>"
Warning
Access and secret keys are optional, and can be generated.
Run the Ansible playbook:
[user@admin ceph-ansible]$ ansible-playbook site-docker.yml --limit nfss
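A quick way to confirm the deployment, not covered in this procedure, is to check that the NFS Ganesha container is running on each gateway node; with ceph-ansible container deployments the container name typically contains nfs:
[root@nfs-gateway ~]# docker ps --filter name=nfs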
1.7. Installing the Ceph iSCSI gateway in a container
The Ansible deployment application installs the required daemons and tools to configure a Ceph iSCSI gateway in a container.
Prerequisites
- A working Red Hat Ceph Storage cluster.
Procedure
As the root user, open and edit the /etc/ansible/hosts file. Add a node name entry in the iSCSI gateway group:
Example
[iscsigws]
ceph-igw-1
ceph-igw-2
Navigate to the /usr/share/ceph-ansible directory:
[root@admin ~]# cd /usr/share/ceph-ansible/
Create a copy of the iscsigws.yml.sample file and name it iscsigws.yml:
[root@admin ceph-ansible]# cp group_vars/iscsigws.yml.sample group_vars/iscsigws.yml
Important
The new file name (iscsigws.yml) and the new section heading ([iscsigws]) are only applicable to Red Hat Ceph Storage 3.1 or higher. Upgrading from previous versions of Red Hat Ceph Storage to 3.1 will still use the old file name (iscsi-gws.yml) and the old section heading ([iscsi-gws]).
Important
Currently, Red Hat does not support the following options to be installed using ceph-ansible for container-based deployments:
- gateway_iqn
- rbd_devices
- client_connections
See the Configuring the Ceph iSCSI gateway in a container section for instructions on configuring these options manually.
Open the iscsigws.yml file for editing.
Configure the gateway_ip_list option by adding the iSCSI gateway IP addresses, using IPv4 or IPv6 addresses:
Example
gateway_ip_list: 192.168.1.1,192.168.1.2
Important
You cannot use a mix of IPv4 and IPv6 addresses.
Optionally, uncomment the trusted_ip_list option and add the IPv4 or IPv6 addresses accordingly, if you want to use SSL. You will need root access to the iSCSI gateway containers to configure SSL. To configure SSL, do the following steps:
If needed, install the openssl package within all the iSCSI gateway containers.
On the primary iSCSI gateway container, create a directory to hold the SSL keys:
# mkdir ~/ssl-keys
# cd ~/ssl-keys
On the primary iSCSI gateway container, create the certificate and key files:
# openssl req -newkey rsa:2048 -nodes -keyout iscsi-gateway.key -x509 -days 365 -out iscsi-gateway.crt
Note
You will be prompted to enter the environmental information.
On the primary iSCSI gateway container, create a PEM file:
# cat iscsi-gateway.crt iscsi-gateway.key > iscsi-gateway.pem
On the primary iSCSI gateway container, create a public key:
# openssl x509 -inform pem -in iscsi-gateway.pem -pubkey -noout > iscsi-gateway-pub.key
From the primary iSCSI gateway container, copy the iscsi-gateway.crt, iscsi-gateway.pem, iscsi-gateway-pub.key, and iscsi-gateway.key files to the /etc/ceph/ directory on the other iSCSI gateway containers, as shown in the example copy command below.
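The guide does not prescribe a copy mechanism. One possible approach, assuming the gateway containers bind-mount /etc/ceph/ from their host and that root SSH access between the gateway nodes is available (both are assumptions, not requirements stated here), is to copy the files over SSH from the primary gateway node:
# scp iscsi-gateway.crt iscsi-gateway.pem iscsi-gateway-pub.key iscsi-gateway.key root@<other-gateway-node>:/etc/ceph/
Repeat the copy for each remaining iSCSI gateway node.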
Optionally, review and uncomment any of the following iSCSI target API service options accordingly:
#api_user: admin
#api_password: admin
#api_port: 5000
#api_secure: false
#loop_delay: 1
#trusted_ip_list: 192.168.122.1,192.168.122.2
Optionally, review and uncomment any of the following resource options, updating them according to the workload needs:
# TCMU_RUNNER resource limitation
#ceph_tcmu_runner_docker_memory_limit: 1g
#ceph_tcmu_runner_docker_cpu_limit: 1
# RBD_TARGET_GW resource limitation
#ceph_rbd_target_gw_docker_memory_limit: 1g
#ceph_rbd_target_gw_docker_cpu_limit: 1
# RBD_TARGET_API resource limitation
#ceph_rbd_target_api_docker_memory_limit: 1g
#ceph_rbd_target_api_docker_cpu_limit: 1
As the Ansible user, run the Ansible playbook:
[user@admin ceph-ansible]$ ansible-playbook site-docker.yml --limit iscsigws
For Red Hat Enterprise Linux Atomic, add the --skip-tags=with_pkg option:
[user@admin ceph-ansible]$ ansible-playbook site-docker.yml --limit iscsigws --skip-tags=with_pkg
Once the Ansible playbook has finished, open TCP port 3260 and the api_port specified in the iscsigws.yml file on each node listed in the trusted_ip_list option.
Note
If the api_port option is not specified, the default port is 5000.
Additional Resources
- For more information on installing Red Hat Ceph Storage in a container, see the Installing a Red Hat Ceph Storage cluster in containers section.
- For more information on Ceph’s iSCSI gateway options, see Table 8.1 in the Red Hat Ceph Storage Block Device Guide.
- For more information on the iSCSI target API options, see Table 8.2 in the Red Hat Ceph Storage Block Device Guide.
- For an example of the iscsigws.yml file, see Appendix A of the Red Hat Ceph Storage Block Device Guide.
1.7.1. Configuring the Ceph iSCSI gateway in a container
The Ceph iSCSI gateway configuration is done with the gwcli
command-line utility for creating and managing iSCSI targets, Logical Unit Numbers (LUNs) and Access Control Lists (ACLs).
Prerequisites
- A working Red Hat Ceph Storage cluster.
- Installation of the iSCSI gateway software.
Procedure
As the root user, start the iSCSI gateway command-line interface:
# docker exec -it rbd-target-api gwcli
Create the iSCSI gateways using either IPv4 or IPv6 addresses:
Syntax
>/iscsi-target create iqn.2003-01.com.redhat.iscsi-gw:$TARGET_NAME
> goto gateways
> create $ISCSI_GW_NAME $ISCSI_GW_IP_ADDR
> create $ISCSI_GW_NAME $ISCSI_GW_IP_ADDR
Example
>/iscsi-target create iqn.2003-01.com.redhat.iscsi-gw:ceph-igw
> goto gateways
> create ceph-gw-1 10.172.19.21
> create ceph-gw-2 10.172.19.22
Important
You cannot use a mix of IPv4 and IPv6 addresses.
Add a RADOS Block Device (RBD):
Syntax
> cd /disks
>/disks/ create $POOL_NAME image=$IMAGE_NAME size=$IMAGE_SIZE[m|g|t] max_data_area_mb=$BUFFER_SIZE
Example
> cd /disks
>/disks/ create rbd image=disk_1 size=50g max_data_area_mb=32
Important
There cannot be any periods (.) in the pool name or in the image name.
Warning
Do NOT adjust the max_data_area_mb option, unless Red Hat Support has instructed you to do so.
The max_data_area_mb option controls the amount of memory in megabytes that each image can use to pass SCSI command data between the iSCSI target and the Ceph cluster. If this value is too small, it can result in excessive queue full retries, which will affect performance. If the value is too large, it can result in one disk using too much of the system's memory, which can cause allocation failures for other subsystems. The default value is 8.
This value can be changed using the reconfigure command. The image must not be in use by an iSCSI initiator for this command to take effect.
Syntax
>/disks/ reconfigure max_data_area_mb $NEW_BUFFER_SIZE
Example
>/disks/ reconfigure max_data_area_mb 64
Create a client:
Syntax
> goto hosts
> create iqn.1994-05.com.redhat:$CLIENT_NAME
> auth chap=$USER_NAME/$PASSWORD
Example
> goto hosts
> create iqn.1994-05.com.redhat:rh7-client
> auth chap=iscsiuser1/temp12345678
Important
Disabling CHAP is only supported on Red Hat Ceph Storage 3.1 or higher. Red Hat does not support mixing clients, some with CHAP enabled and some with CHAP disabled. All clients must have either CHAP enabled or CHAP disabled. The default behavior is to authenticate an initiator only by its initiator name.
If initiators are failing to log into the target, then the CHAP authentication might be misconfigured for some initiators.
Example
o- hosts ................................ [Hosts: 2: Auth: MISCONFIG]
Run the following command at the hosts level to reset all the CHAP authentication:
/> goto hosts
/iscsi-target...csi-igw/hosts> auth nochap
ok
ok
/iscsi-target...csi-igw/hosts> ls
o- hosts ................................ [Hosts: 2: Auth: None]
  o- iqn.2005-03.com.ceph:esx ........... [Auth: None, Disks: 4(310G)]
  o- iqn.1994-05.com.redhat:rh7-client .. [Auth: None, Disks: 0(0.00Y)]
Add disks to a client:
Syntax
>/iscsi-target..eph-igw/hosts> cd iqn.1994-05.com.redhat:$CLIENT_NAME
> disk add $POOL_NAME.$IMAGE_NAME
Example
>/iscsi-target..eph-igw/hosts> cd iqn.1994-05.com.redhat:rh7-client
> disk add rbd.disk_1
Run the following command to verify the iSCSI gateway configuration:
> ls
Optionally, to confirm that the API is using SSL correctly, look in the /var/log/rbd-target-api.log file for https, for example:
Aug 01 17:27:42 test-node.example.com python[1879]: * Running on https://0.0.0.0:5000/
- The next step is to configure an iSCSI initiator.
Additional Resources
- For more information on installing Red Hat Ceph Storage in a container, see the Installing a Red Hat Ceph Storage cluster in containers section.
- For more information on installing the iSCSI gateway software in a container, see the Installing the Ceph iSCSI gateway in a container section.
- For more information on connecting an iSCSI initiator, see the Configuring the iSCSI Initiator section in the Red Hat Ceph Storage Block Device Guide.
1.7.2. Removing the Ceph iSCSI gateway in a container
The Ceph iSCSI gateway configuration can be removed using Ansible.
Prerequisites
- A working Red Hat Ceph Storage cluster.
- Installation of the iSCSI gateway software.
- Exported RBD images.
- Root-level access to the Red Hat Ceph Storage cluster.
- Root-level access to the iSCSI initiators.
- Access to the Ansible administration node.
Procedure
Disconnect all iSCSI initiators before purging the iSCSI gateway configuration. Follow the steps below for the appropriate operating system:
Red Hat Enterprise Linux initiators:
Run the following command as the root user:
Syntax
iscsiadm -m node -T TARGET_NAME --logout
Replace TARGET_NAME with the configured iSCSI target name.
Example
# iscsiadm -m node -T iqn.2003-01.com.redhat.iscsi-gw:ceph-igw --logout
Logging out of session [sid: 1, target: iqn.2003-01.com.redhat.iscsi-gw:iscsi-igw, portal: 10.172.19.21,3260]
Logging out of session [sid: 2, target: iqn.2003-01.com.redhat.iscsi-gw:iscsi-igw, portal: 10.172.19.22,3260]
Logout of [sid: 1, target: iqn.2003-01.com.redhat.iscsi-gw:iscsi-igw, portal: 10.172.19.21,3260] successful.
Logout of [sid: 2, target: iqn.2003-01.com.redhat.iscsi-gw:iscsi-igw, portal: 10.172.19.22,3260] successful.
Windows initiators:
See the Microsoft documentation for more details.
VMware ESXi initiators:
See the VMware documentation for more details.
As the root user, run the iSCSI gateway command-line utility:
# gwcli
Remove the hosts:
Syntax
/> cd /iscsi-target/iqn.2003-01.com.redhat.iscsi-gw:TARGET_NAME/hosts
/> /iscsi-target...TARGET_NAME/hosts> delete CLIENT_NAME
Replace TARGET_NAME with the configured iSCSI target name, and replace CLIENT_NAME with the iSCSI initiator name.
Example
/> cd /iscsi-target/iqn.2003-01.com.redhat.iscsi-gw:ceph-igw/hosts
/> /iscsi-target...eph-igw/hosts> delete iqn.1994-05.com.redhat:rh7-client
Remove the disks:
Syntax
/> cd /disks/
/disks> delete POOL_NAME.IMAGE_NAME
Replace POOL_NAME with the name of the pool, and replace the IMAGE_NAME with the name of the image.
Example
/> cd /disks/
/disks> delete rbd.disk_1
Remove the iSCSI target and gateway configuration:
/> cd /iscsi-target/
/iscsi-target> clearconfig confirm=true
On a Ceph Monitor or Client node, as the root user, remove the iSCSI gateway configuration object (gateway.conf):
[root@mon ~]# rados rm -p pool gateway.conf
Optionally, if the exported Ceph RADOS Block Device (RBD) is no longer needed, then remove the RBD image. Run the following command on a Ceph Monitor or Client node, as the root user:
Syntax
rbd rm IMAGE_NAME
Example
[root@mon ~]# rbd rm rbd01
Additional Resources
- For more information on installing Red Hat Ceph Storage in a container, see the Installing a Red Hat Ceph Storage cluster in containers section.
- For more information on installing the iSCSI gateway software in a container, see the Installing the Ceph iSCSI gateway in a container section.
1.7.3. Optimizing the performance of the iSCSI Target
There are many settings that control how the iSCSI Target transfers data over the network. These settings can be used to optimize the performance of the iSCSI gateway.
Only change these settings if instructed to by Red Hat Support or as specified in this document.
The gwcli reconfigure subcommand
The gwcli reconfigure
subcommand controls the settings that are used to optimize the performance of the iSCSI gateway.
Settings that affect the performance of the iSCSI target
- max_data_area_mb
- cmdsn_depth
- immediate_data
- initial_r2t
- max_outstanding_r2t
- first_burst_length
- max_burst_length
- max_recv_data_segment_length
- max_xmit_data_segment_length
Additional Resources
- Information about max_data_area_mb, including an example showing how to adjust it using gwcli reconfigure, is in the Configuring the iSCSI Target using the Command Line Interface section of the Block Device Guide, and in the Configuring the Ceph iSCSI gateway in a container section of this Container Guide.
1.8. Understanding the limit option
This section contains information about the Ansible --limit
option.
Ansible supports the --limit
option that enables you to use the site
and site-docker
Ansible playbooks for a particular section of the inventory file.
$ ansible-playbook site.yml|site-docker.yml --limit osds|rgws|clients|mdss|nfss|iscsigws
For example, to redeploy only OSDs on containers, run the following command as the Ansible user:
$ ansible-playbook /usr/share/ceph-ansible/site-docker.yml --limit osds
1.9. Additional Resources
- The Getting Started with Containers guide for Red Hat Enterprise Linux Atomic Host