Chapter 3. Virtual Machine Instances
OpenStack Compute is the central component that provides virtual machines on demand. Compute interacts with the Identity service for authentication, Image service for images (used to launch instances), and the dashboard service for the user and administrative interface.
RHEL OpenStack Platform allows you to easily manage virtual machine instances in the cloud. The Compute service creates, schedules, and manages instances, and exposes this functionality to other OpenStack components. This chapter discusses these procedures along with procedures to add components like key pairs, security groups, host aggregates and flavors. The term instance is used by OpenStack to mean a virtual machine instance.
3.1. Manage Instances
Before you can create an instance, you need to ensure certain other OpenStack components (for example, a network, key pair and an image or a volume as the boot source) are available for the instance.
This section discusses the procedures to add these components, create and manage an instance. Managing an instance refers to updating, and logging in to an instance, viewing how the instances are being used, resizing or deleting them.
3.1.1. Add Components
Use the following sections to create a network and a key pair, and to upload an image or volume source. These components are used in the creation of an instance and are not available by default. You will also need to create a new security group to allow users SSH access.
- In the dashboard, select Project.
- Select Network > Networks, and ensure there is a private network to which you can attach the new instance (to create a network, see Add a Network section in the Networking Guide available at Red Hat Enterprise Linux OpenStack Platform).
- Select Compute > Access & Security > Key Pairs, and ensure there is a key pair (to create a key pair, see Section 3.2.1.1, “Create a Key Pair”).
Ensure that you have either an image or a volume that can be used as a boot source:
- To view boot-source images, select the Images tab (to create an image, see Section 1.2.1, “Create an Image”).
- To view boot-source volumes, select the Volumes tab (to create a volume, see Section 4.1.1, “Create a Volume”).
- Select Compute > Access & Security > Security Groups, and ensure you have created a security group rule (to create a security group, see Project Security Management in the Users and Identity Management Guide available at Red Hat Enterprise Linux OpenStack Platform).
3.1.2. Create an Instance
- In the dashboard, select Project > Compute > Instances.
- Click Launch Instance.
- Fill out the instance fields (those marked with '*' are required), and click Launch when finished.
Tab | Field | Notes |
---|---|---|
Project and User | Project | Select the project from the dropdown list. |
Project and User | User | Select the user from the dropdown list. |
Details | Availability Zone | Zones are logical groupings of cloud resources in which your instance can be placed. If you are unsure, use the default zone (for more information, see Section 3.4, “Manage Host Aggregates”). |
Details | Instance Name | A name to identify your instance. |
Details | Flavor | The flavor determines what resources the instance is given (for example, memory). For default flavor allocations and information on creating new flavors, see Section 3.3, “Manage Flavors”. |
Details | Instance Count | The number of instances to create with these parameters. "1" is preselected. |
Details | Instance Boot Source | Depending on the item selected, new fields are displayed that allow you to select the source. |
Access and Security | Key Pair | The specified key pair is injected into the instance and is used to remotely access the instance using SSH (if neither direct login information nor a static key pair is provided). Usually one key pair per project is created. |
Access and Security | Security Groups | Security groups contain firewall rules which filter the type and direction of the instance’s network traffic (for more information on configuring groups, see Project Security Management in the Users and Identity Management Guide available at Red Hat Enterprise Linux OpenStack Platform). |
Networking | Selected Networks | You must select at least one network. Instances are typically assigned to a private network, and then later given a floating IP address to enable external access. |
Post-Creation | Customization Script Source | You can provide either a set of commands or a script file, which will run after the instance is booted (for example, to set the instance hostname or a user password). If 'Direct Input' is selected, write your commands in the Script Data field; otherwise, specify your script file. Note: Any script that starts with '#cloud-config' is interpreted as using the cloud-config syntax (for information on the syntax, see http://cloudinit.readthedocs.org/en/latest/topics/examples.html). |
Advanced Options | Disk Partition | By default, the instance is built as a single partition and dynamically resized as needed. However, you can choose to manually configure the partitions yourself. |
Advanced Options | Configuration Drive | If selected, OpenStack writes metadata to a read-only configuration drive that is attached to the instance when it boots (instead of to Compute’s metadata service). After the instance has booted, you can mount this drive to view its contents (enables you to provide files to the instance). |
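If you prefer the command line, the same launch can be sketched with the nova client. The flavor, image, key-pair, security-group, network, and instance names below are placeholders rather than values from this guide:
$ neutron net-list
$ nova boot --flavor m1.small --image myImage --key-name my-keypair --security-groups default --nic net-id=NETWORK_ID myInstance
$ nova show myInstance
Here NETWORK_ID is the ID of the private network reported by neutron net-list, and nova show reports the build status of the new instance.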
3.1.4. Resize an Instance
To resize an instance (memory or CPU count), you must select a new flavor for the instance that has the right capacity. If you are increasing the size, remember to first ensure that the host has enough space.
Ensure communication between hosts by setting up each host with SSH key authentication so that Compute can use SSH to move disks to other hosts (for example, compute nodes can share the same SSH key).
For more information about setting up SSH key authentication, see Configure SSH Tunneling Between Nodes in the Migrating Instances Guide available at Red Hat Enterprise Linux OpenStack Platform.
Enable resizing on the original host by setting the following parameter in the /etc/nova/nova.conf file:
[DEFAULT]
allow_resize_to_same_host = True
- In the dashboard, select Project > Compute > Instances.
- Click the instance’s Actions arrow, and select Resize Instance.
- Select a new flavor in the New Flavor field.
If you want to manually partition the instance when it launches (this results in a faster build time):
- Select Advanced Options.
- In the Disk Partition field, select Manual.
- Click Resize.
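A command-line sketch of the same operation, where the instance and flavor names are placeholders:
$ nova resize myInstance m1.large
$ nova resize-confirm myInstance
If the resized instance does not work as expected, nova resize-revert myInstance returns it to the original flavor.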
3.1.5. Connect to an Instance
This section discusses the different methods you can use to access an instance console using the dashboard or the command-line interface. You can also directly connect to an instance’s serial port allowing you to debug even if the network connection fails.
3.1.5.1. Access an Instance Console using the Dashboard
The console provides a way to directly access your instance from within the dashboard.
- In the dashboard, select Compute > Instances.
- Click the instance’s More button and select Console.
- Log in using the image’s user name and password; for example, a CirrOS image uses the user name cirros and the password cubswin:).
3.1.5.2. Directly Connect to a VNC Console
You can directly access an instance’s VNC console using a URL returned by the nova get-vnc-console command.
- Browser
To obtain a browser URL, use:
$ nova get-vnc-console INSTANCE_ID novnc
- Java Client
To obtain a Java-client URL, use:
$ nova get-vnc-console INSTANCE_ID xvpvnc
nova-xvpvncviewer provides a simple example of a Java client. To download the client, use:
# git clone https://github.com/cloudbuilders/nova-xvpvncviewer
# cd nova-xvpvncviewer/viewer
# make
Run the viewer with the instance’s Java-client URL:
# java -jar VncViewer.jar _URL_
This tool is provided only for customer convenience, and is not officially supported by Red Hat.
3.1.5.3. Directly Connect to a Serial Console
You can directly access an instance’s serial port using a websocket client. Serial connections are typically used as a debugging tool (for example, instances can be accessed even if the network configuration fails). To obtain a serial URL for a running instance, use:
$ nova get-serial-console INSTANCE_ID
novaconsole provides a simple example of a websocket client. To download the client, use:
# git clone https://github.com/larsks/novaconsole/
# cd novaconsole
Run the client with the instance’s serial URL:
# python console-client-poll.py
This tool is provided only for customer convenience, and is not officially supported by Red Hat.
However, depending on your installation, the administrator may need to first set up the nova-serialproxy service. The proxy service is a websocket proxy that allows connections to OpenStack Compute serial ports.
3.1.5.3.1. Install and Configure nova-serialproxy
Install the nova-serialproxy service:
# yum install openstack-nova-serialproxy
Update the serial_console section in /etc/nova/nova.conf:
- Enable the nova-serialproxy service:
$ openstack-config --set /etc/nova/nova.conf serial_console enabled true
- Specify the string used to generate the URLs provided by the nova get-serial-console command:
$ openstack-config --set /etc/nova/nova.conf serial_console base_url ws://PUBLIC_IP:6083/
Where PUBLIC_IP is the public IP address of the host running the nova-serialproxy service.
- Specify the IP address on which the instance serial console should listen (string):
$ openstack-config --set /etc/nova/nova.conf serial_console listen 0.0.0.0
- Specify the address to which proxy clients should connect (string):
$ openstack-config --set /etc/nova/nova.conf serial_console proxyclient_address HOST_IP
Where HOST_IP is the IP address of your Compute host. For example, an enabled nova-serialproxy configuration looks as follows:
[serial_console]
enabled=true
base_url=ws://192.0.2.0:6083/
listen=0.0.0.0
proxyclient_address=192.0.2.3
Restart Compute services:
# openstack-service restart nova
Start the nova-serialproxy service:
# systemctl enable openstack-nova-serialproxy
# systemctl start openstack-nova-serialproxy
- Restart any running instances, to ensure that they are now listening on the right sockets.
Open the firewall for serial-console port connections. Serial ports are set using the [serial_console] port_range option in /etc/nova/nova.conf; by default, the range is 10000:20000. Update iptables with:
# iptables -I INPUT 1 -p tcp --dport 10000:20000 -j ACCEPT
3.1.6. View Instance Usage
The following usage statistics are available:
Per Project
To view instance usage per project, select Project > Compute > Overview. A usage summary is immediately displayed for all project instances.
You can also view statistics for a specific period of time by specifying the date range and clicking Submit.
Per Hypervisor
If logged in as an administrator, you can also view information for all projects. Click Admin > System and select one of the tabs. For example, the Resource Usage tab offers a way to view reports for a distinct time period. You might also click Hypervisors to view your current vCPU, memory, or disk statistics.
Note: The vCPU Usage value (x of y) reflects the total number of vCPUs of all virtual machines (x) and the total number of hypervisor cores (y).
3.1.7. Delete an Instance
- In the dashboard, select Project > Compute > Instances, and select your instance.
- Click Terminate Instance.
Deleting an instance does not delete its attached volumes; you must do this separately (see Section 4.1.4, “Delete a Volume”).
3.1.8. Manage Multiple Instances at Once
If you need to start multiple instances at the same time (for example, those that were down for compute or controller maintenance) you can do so easily at Project > Compute > Instances:
- Click the check boxes in the first column for the instances that you want to start. If you want to select all of the instances, click the check box in the first row in the table.
- Click More Actions above the table and select Start Instances.
Similarly, you can shut off or soft reboot multiple instances by selecting the respective actions.
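From the command line, the same bulk operations can be approximated with a simple shell loop; the instance names below are placeholders:
$ for INSTANCE in myInstance1 myInstance2 myInstance3; do nova start $INSTANCE; done
$ for INSTANCE in myInstance1 myInstance2 myInstance3; do nova stop $INSTANCE; done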
3.2. Manage Instance Security
You can manage access to an instance by assigning it the correct security group (set of firewall rules) and key pair (enables SSH user access). Further, you can assign a floating IP address to an instance to enable external network access. The sections below outline how to create and manage key pairs, security groups, and floating IP addresses, and how to log in to an instance using SSH. There is also a procedure for injecting an admin password into an instance.
For information on managing security groups, see Project Security Management in the Users and Identity Management Guide available at Red Hat Enterprise Linux OpenStack Platform.
3.2.1. Manage Key Pairs
Key pairs provide SSH access to the instances. Each time a key pair is generated, its certificate is downloaded to the local machine and can be distributed to users. Typically, one key pair is created for each project (and used for multiple instances).
You can also import an existing key pair into OpenStack.
3.2.1.1. Create a Key Pair
- In the dashboard, select Project > Compute > Access & Security.
- On the Key Pairs tab, click Create Key Pair.
- Specify a name in the Key Pair Name field, and click Create Key Pair.
When the key pair is created, a key pair file is automatically downloaded through the browser. Save this file for later connections from external machines. For command-line SSH connections, you can load this file into SSH by executing:
# ssh-add ~/.ssh/os-key.pem
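A key pair can also be generated from the command line; the key name and file path are placeholders:
$ nova keypair-add os-key > ~/.ssh/os-key.pem
$ chmod 600 ~/.ssh/os-key.pem
The private key is written to standard output, so redirect it to a file and restrict its permissions before using it with SSH.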
3.2.1.2. Import a Key Pair
- In the dashboard, select Project > Compute > Access & Security.
- On the Key Pairs tab, click Import Key Pair.
- Specify a name in the Key Pair Name field, and copy and paste the contents of your public key into the Public Key field.
- Click Import Key Pair.
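An existing public key can likewise be imported from the command line; the key name and path are placeholders:
$ nova keypair-add --pub-key ~/.ssh/id_rsa.pub my-imported-key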
3.2.1.3. Delete a Key Pair
- In the dashboard, select Project > Compute > Access & Security.
- On the Key Pairs tab, click the key’s Delete Key Pair button.
3.2.2. Create a Security Group
Security groups are sets of IP filter rules that can be assigned to project instances, and which define networking access to the instance. Security groups are project-specific; project members can edit the default rules for their security group and add new rule sets.
- In the dashboard, select the Project tab, and click Compute > Access & Security.
- On the Security Groups tab, click + Create Security Group.
- Provide a name and description for the group, and click Create Security Group.
For more information on managing project security, see Project Security Management in the Users and Identity Management Guide available at Red Hat Enterprise Linux OpenStack Platform.
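As a command-line sketch, a group permitting SSH and ICMP could be created as follows; the group name and description are placeholders:
$ nova secgroup-create mySecGroup "Allow SSH and ICMP"
$ nova secgroup-add-rule mySecGroup tcp 22 22 0.0.0.0/0
$ nova secgroup-add-rule mySecGroup icmp -1 -1 0.0.0.0/0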
3.2.3. Create, Assign, and Release Floating IP Addresses
By default, an instance is given an internal IP address when it is first created. However, you can enable access through the public network by creating and assigning a floating IP address (external address). You can change an instance’s associated IP address regardless of the instance’s state.
Projects have a limited number of floating IP addresses that can be used (by default, the limit is 50), so you should release these addresses for reuse when they are no longer needed. Floating IP addresses can only be allocated from an existing floating IP pool; see Create Floating IP Pools in the Networking Guide available at Red Hat Enterprise Linux OpenStack Platform.
3.2.3.1. Allocate a Floating IP to the Project
- In the dashboard, select Project > Compute > Access & Security.
- On the Floating IPs tab, click Allocate IP to Project.
- Select a network from which to allocate the IP address in the Pool field.
- Click Allocate IP.
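The equivalent allocation from the command line can be sketched as follows; the pool name is a placeholder:
$ nova floating-ip-pool-list
$ nova floating-ip-create public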
3.2.3.2. Assign a Floating IP
- In the dashboard, select Project > Compute > Access & Security.
- On the Floating IPs tab, click the address' Associate button.
- Select the address to be assigned in the IP address field.
Note: If no addresses are available, you can click the + button to create a new address.
- Select the instance to be associated in the Port to be Associated field. An instance can only be associated with one floating IP address.
- Click Associate.
3.2.3.3. Release a Floating IP
- In the dashboard, select Project > Compute > Access & Security.
- On the Floating IPs tab, click the address' menu arrow (next to the Associate/Disassociate button).
- Select Release Floating IP.
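The assign and release steps also have command-line equivalents; the instance name and address below are placeholders:
$ nova floating-ip-associate myInstance 192.0.2.100
$ nova floating-ip-disassociate myInstance 192.0.2.100
$ nova floating-ip-delete 192.0.2.100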
3.2.4. Log in to an Instance
Prerequisites:
- Ensure that the instance’s security group has an SSH rule (see Project Security Management in the Users and Identity Management Guide available at Red Hat Enterprise Linux OpenStack Platform).
- Ensure the instance has a floating IP address (external address) assigned to it (see Create, Assign, and Release Floating IP Addresses).
- Obtain the instance’s key-pair certificate. The certificate is downloaded when the key pair is created; if you did not create the key pair yourself, ask your administrator (see Section 3.2.1, “Manage Key Pairs”).
To load the key-pair file into SSH (so that you can then use ssh without specifying the file):
Change the permissions of the generated key-pair certificate.
$ chmod 600 os-key.pem
Check whether ssh-agent is already running:
# ps -ef | grep ssh-agent
If not already running, start it up with:
# eval `ssh-agent`
On your local machine, load the key-pair certificate into SSH. For example:
$ ssh-add ~/.ssh/os-key.pem
- You can now SSH into the instance with the user name supplied by the image.
The following example command shows how to SSH into the Red Hat Enterprise Linux guest image with the user cloud-user:
$ ssh cloud-user@192.0.2.24
You can also use the certificate directly. For example:
$ ssh -i /myDir/os-key.pem cloud-user@192.0.2.24
3.2.5. Inject an admin Password Into an Instance
You can inject an admin (root) password into an instance using the following procedure.
- In the /etc/openstack-dashboard/local_settings file, set the can_set_password parameter to True:
can_set_password: True
- In the /etc/nova/nova.conf file, set the inject_password parameter to true:
inject_password=true
- Restart the Compute service:
# service nova-compute restart
When you use the nova boot command to launch a new instance, the output of the command displays an adminPass parameter. You can use this password to log in to the instance as the root user.
The Compute service overwrites the password value in the /etc/shadow file for the root user. This procedure can also be used to activate the root account for KVM guest images. For more information on how to use KVM guest images, see Section 1.2.1.1, “Use a KVM Guest Image With RHEL OpenStack Platform”.
You can also set a custom password from the dashboard. To enable this, run the following command after you have set the can_set_password parameter to True:
# systemctl restart httpd.service
The newly added admin password fields can be used when you launch or rebuild an instance.
3.3. Manage Flavors
Each created instance is given a flavor (resource template), which determines the instance’s size and capacity. Flavors can also specify secondary ephemeral storage, swap disk, metadata to restrict usage, or special project access (none of the default flavors have these additional attributes defined).
Name | vCPUs | RAM | Root Disk Size |
---|---|---|---|
m1.tiny | 1 | 512 MB | 1 GB |
m1.small | 1 | 2048 MB | 20 GB |
m1.medium | 2 | 4096 MB | 40 GB |
m1.large | 4 | 8192 MB | 80 GB |
m1.xlarge | 8 | 16384 MB | 160 GB |
The majority of end users will be able to use the default flavors. However, you can create and manage specialized flavors. For example, you can:
- Change default memory and capacity to suit the underlying hardware needs.
- Add metadata to force a specific I/O rate for the instance or to match a host aggregate.
Behavior set using image properties overrides behavior set using flavors (for more information, see Section 1.2, “Manage Images”).
3.3.1. Update Configuration Permissions
By default, only administrators can create flavors or view the complete flavor list (select Admin > System > Flavors). To allow all users to configure flavors, specify the following in the /etc/nova/policy.json file (nova-api server):
"compute_extension:flavormanage": "",
3.3.2. Create a Flavor
- As an admin user in the dashboard, select Admin > System > Flavors.
- Click Create Flavor, and specify the following fields:
Tab | Field | Description |
---|---|---|
Flavor Information | Name | Unique name. |
Flavor Information | ID | Unique ID. The default value, auto, generates a UUID4 value, but you can also manually specify an integer or UUID4 value. |
Flavor Information | VCPUs | Number of virtual CPUs. |
Flavor Information | RAM (MB) | Memory (in megabytes). |
Flavor Information | Root Disk (GB) | Ephemeral root disk size (in gigabytes); to use the native image size, specify 0. This disk is not used if Instance Boot Source=Boot from Volume. |
Flavor Information | Ephemeral Disk (GB) | Secondary ephemeral disk size (in gigabytes) available to an instance. This disk is destroyed when the instance is deleted. The default value is 0, which means that no ephemeral disk is created. |
Flavor Information | Swap Disk (MB) | Swap disk size (in megabytes). |
Flavor Access | Selected Projects | Projects which can use the flavor. If no projects are selected, all projects have access (Public=Yes). |
- Click Create Flavor.
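A flavor can also be created with the nova client; the name and sizes below are placeholders:
$ nova flavor-create m2.custom auto 4096 40 2
$ nova flavor-list
The positional arguments are the flavor name, ID (auto generates a UUID), RAM in MB, root disk in GB, and number of vCPUs.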
3.3.3. Update General Attributes
- As an admin user in the dashboard, select Admin > System > Flavors.
- Click the flavor’s Edit Flavor button.
- Update the values, and click Save.
3.3.4. Update Flavor Metadata
In addition to editing general attributes, you can add metadata to a flavor (extra_specs), which can help fine-tune instance usage. For example, you might want to set the maximum-allowed bandwidth or disk writes.
- Pre-defined keys determine hardware support or quotas. Pre-defined keys are limited by the hypervisor you are using (for libvirt, see Table 3.2, “Libvirt Metadata”).
- Both pre-defined and user-defined keys can determine instance scheduling. For example, you might specify SpecialComp=True; any instance with this flavor can then only run in a host aggregate with the same key-value combination in its metadata (see Section 3.4, “Manage Host Aggregates”).
3.3.4.1. View Metadata
- As an admin user in the dashboard, select Admin > System > Flavors.
- Click the flavor’s Metadata link (Yes or No). All current values are listed on the right-hand side under Existing Metadata.
3.3.4.2. Add Metadata
You specify a flavor’s metadata using a key/value pair.
- As an admin user in the dashboard, select Admin > System > Flavors.
- Click the flavor’s Metadata link (Yes or No). All current values are listed on the right-hand side under Existing Metadata.
- Under Available Metadata, click on the Other field, and specify the key you want to add (see Table 3.2, “Libvirt Metadata”).
- Click the + button; you can now view the new key under Existing Metadata.
- Fill in the key’s value in its right-hand field.
- When finished with adding key-value pairs, click Save.
Key | Description |
---|---|
hw:action | Action that configures support limits per instance. Valid actions are:
Example: |
hw:NUMA_def | Definition of NUMA topology for the instance. For flavors whose RAM and vCPU allocations are larger than the size of NUMA nodes in the compute hosts, defining NUMA topology enables hosts to better utilize NUMA and improve performance of the guest OS. NUMA definitions defined through the flavor override image definitions. Valid definitions are:
Note If the values of numa_cpu or numa_mem.N specify more than that available, an exception is raised. Example when the instance has 8 vCPUs and 4GB RAM:
The scheduler looks for a host with 2 NUMA nodes with the ability to run 6 CPUs + 3 GB of RAM on one node, and 2 CPUS + 1 GB of RAM on another node. If a host has a single NUMA node with capability to run 8 CPUs and 4 GB of RAM, it will not be considered a valid match. The same logic is applied in the scheduler regardless of the numa_mempolicy setting. |
hw:watchdog_action | An instance watchdog device can be used to trigger an action if the instance somehow fails (or hangs). Valid actions are:
Example: |
hw_rng:action | A random-number generator device can be added to an instance using its image properties (see hw_rng_model in the "Command-Line Interface Reference" in RHEL OpenStack Platform documentation). If the device has been added, valid actions are:
Example: |
hw_video:ram_max_mb | Maximum permitted RAM to be allowed for video devices (in MB).
Example: |
quota:option | Enforcing limit for the instance. Valid options are:
Example: |
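Flavor metadata can also be managed from the command line with nova flavor-key; the flavor name is a placeholder and hw:watchdog_action is one of the keys described above:
$ nova flavor-key m2.custom set hw:watchdog_action=reset
$ nova flavor-key m2.custom unset hw:watchdog_action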
3.4. Manage Host Aggregates
A single Compute deployment can be partitioned into logical groups for performance or administrative purposes. OpenStack uses the following terms:
Host aggregates - A host aggregate creates logical units in an OpenStack deployment by grouping together hosts. Aggregates are assigned Compute hosts and associated metadata; a host can be in more than one host aggregate. Only administrators can see or create host aggregates.
An aggregate’s metadata is commonly used to provide information for use with the Compute scheduler (for example, limiting specific flavors or images to a subset of hosts). Metadata specified in a host aggregate will limit the use of that host to any instance that has the same metadata specified in its flavor.
Administrators can use host aggregates to handle load balancing, enforce physical isolation (or redundancy), group servers with common attributes, or separate out classes of hardware. When you create an aggregate, a zone name must be specified, and it is this name which is presented to the end user.
Availability zones - An availability zone is the end-user view of a host aggregate. An end user cannot view which hosts make up the zone, nor see the zone’s metadata; the user can only see the zone’s name.
End users can be directed to use specific zones which have been configured with certain capabilities or within certain areas.
3.4.1. Enable Host Aggregate Scheduling
By default, host-aggregate metadata is not used to filter instance usage; you must update the Compute scheduler’s configuration to enable metadata usage:
- Edit the /etc/nova/nova.conf file (you must have either root or nova user permissions). Ensure that the scheduler_default_filters parameter contains:
- AggregateInstanceExtraSpecsFilter for host aggregate metadata. For example:
scheduler_default_filters=AggregateInstanceExtraSpecsFilter,RetryFilter,RamFilter,ComputeFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter,CoreFilter
- AvailabilityZoneFilter for availability zone host specification when launching an instance. For example:
scheduler_default_filters=AvailabilityZoneFilter,RetryFilter,RamFilter,ComputeFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter,CoreFilter
- Save the configuration file.
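If you prefer to edit the file non-interactively, the same change can be sketched with openstack-config; the filter list below is only an example:
# openstack-config --set /etc/nova/nova.conf DEFAULT scheduler_default_filters AggregateInstanceExtraSpecsFilter,AvailabilityZoneFilter,RetryFilter,RamFilter,ComputeFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter,CoreFilter
# openstack-service restart nova
Restarting the Compute services ensures that the scheduler picks up the new filter list.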
3.4.2. View Availability Zones or Host Aggregates
As an admin user in the dashboard, select Admin > System > Host Aggregates. All currently defined aggregates are listed in the Host Aggregates section; all zones are in the Availability Zones section.
3.4.3. Add a Host Aggregate
- As an admin user in the dashboard, select Admin > System > Host Aggregates. All currently defined aggregates are listed in the Host Aggregates section.
- Click Create Host Aggregate.
- Add a name for the aggregate in the Name field, and a name by which the end user should see it in the Availability Zone field.
- Click Manage Hosts within Aggregate.
- Select a host for use by clicking its + icon.
- Click Create Host Aggregate.
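The same aggregate can be sketched with the nova client; the aggregate, zone, and host names are placeholders:
$ nova aggregate-create myAggregate myZone
$ nova aggregate-add-host myAggregate compute1.localdomain
$ nova aggregate-list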
3.4.4. Update a Host Aggregate
- As an admin user in the dashboard, select Admin > System > Host Aggregates. All currently defined aggregates are listed in the Host Aggregates section.
To update the aggregate’s Name or Availability Zone:
- Click the aggregate’s Edit Host Aggregate button.
- Update the Name or Availability Zone field, and click Save.
To update the aggregate’s Assigned hosts:
- Click the aggregate’s arrow icon under Actions.
- Click Manage Hosts.
- Change a host’s assignment by clicking its + or - icon.
- When finished, click Save.
To update the aggregate’s Metadata:
- Click the aggregate’s arrow icon under Actions.
- Click the Update Metadata button. All current values are listed on the right-hand side under Existing Metadata.
- Under Available Metadata, click on the Other field, and specify the key you want to add. Use predefined keys (see Table 3.3, “Host Aggregate Metadata”) or add your own (which will only be valid if exactly the same key is set in an instance’s flavor).
- Click the + button; you can now view the new key under Existing Metadata.
Note: Remove a key by clicking its - icon.
- Click Save.
Table 3.3. Host Aggregate Metadata
Key | Description |
---|---|
cpu_allocation_ratio | Sets the allocation ratio of virtual CPU to physical CPU. Depends on the AggregateCoreFilter filter being set for the Compute scheduler. |
disk_allocation_ratio | Sets the allocation ratio of virtual disk to physical disk. Depends on the AggregateDiskFilter filter being set for the Compute scheduler. |
filter_tenant_id | If specified, the aggregate only hosts this tenant (project). Depends on the AggregateMultiTenancyIsolation filter being set for the Compute scheduler. |
ram_allocation_ratio | Sets the allocation ratio of virtual RAM to physical RAM. Depends on the AggregateRamFilter filter being set for the Compute scheduler. |
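From the command line, aggregate metadata can be set with nova aggregate-set-metadata; the aggregate name and ratio below are placeholders:
$ nova aggregate-set-metadata myAggregate cpu_allocation_ratio=8.0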
3.4.5. Delete a Host Aggregate
- As an admin user in the dashboard, select Admin > System > Host Aggregates. All currently defined aggregates are listed in the Host Aggregates section.
Remove all assigned hosts from the aggregate:
- Click the aggregate’s arrow icon under Actions.
- Click Manage Hosts.
- Remove all hosts by clicking their - icon.
- When finished, click Save.
- Click the aggregate’s arrow icon under Actions.
- Click Delete Host Aggregate in this and the next dialog screen.
3.5. Schedule Hosts and Cells
The Compute scheduling service determines on which cell or host (or host aggregate) an instance will be placed. As an administrator, you can influence where the scheduler will place an instance. For example, you might want to limit scheduling to hosts in a certain group or with the right RAM.
You can configure the following components:
- Filters - Determine the initial set of hosts on which an instance might be placed (see Section 3.5.1, “Configure Scheduling Filters”).
- Weights - When filtering is complete, the resulting set of hosts is prioritized using the weighting system. The host with the highest weight has the highest priority (see Section 3.5.2, “Configure Scheduling Weights”).
- Scheduler service - There are a number of configuration options in the /etc/nova/nova.conf file (on the scheduler host) that determine how the scheduler executes its tasks and handles weights and filters. There is both a host and a cell scheduler. For a list of these options, refer to the "Configuration Reference" (RHEL OpenStack Platform Documentation).
In the following diagram, both hosts 1 and 3 are eligible after filtering. Host 1 has the highest weight and therefore has the highest priority for scheduling.
3.5.1. Configure Scheduling Filters
You define which filters you would like the scheduler to use in the scheduler_default_filters option (/etc/nova/nova.conf file; you must have either root or nova user permissions). Filters can be added or removed.
By default, the following filters are configured to run in the scheduler:
scheduler_default_filters=RetryFilter,AvailabilityZoneFilter,RamFilter,ComputeFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter,ServerGroupAntiAffinityFilter,ServerGroupAffinityFilter
Some filters use information in parameters passed to the instance in:
- The nova boot command (see the "Command-Line Interface Reference" in RHEL OpenStack Platform Documentation).
- The instance’s flavor (see Section 3.3.4, “Update Flavor Metadata”).
- The instance’s image (see Appendix A, Image Configuration Parameters).
All available filters are listed in the following table.
Filter | Description |
---|---|
AggregateCoreFilter | Uses the host-aggregate metadata key cpu_allocation_ratio to filter out hosts exceeding the over-commit ratio (virtual CPU to physical CPU allocation ratio); only valid if a host aggregate is specified for the instance. |
If this ratio is not set, the filter uses the cpu_allocation_ratio value in the /etc/nova/nova.conf file. The default value is | |
AggregateDiskFilter | Uses the host-aggregate metadata key disk_allocation_ratio to filter out hosts exceeding the over-commit ratio (virtual disk to physical disk allocation ratio); only valid if a host aggregate is specified for the instance. |
If this ratio is not set, the filter uses the disk_allocation_ratio value in the /etc/nova/nova.conf file. The default value is | |
AggregateImagePropertiesIsolation | Only passes hosts in host aggregates whose metadata matches the instance’s image metadata; only valid if a host aggregate is specified for the instance. For more information, see Section 1.2.1, “Create an Image”. |
AggregateInstanceExtraSpecsFilter | Metadata in the host aggregate must match the host’s flavor metadata. For more information, see Section 3.3.4, “Update Flavor Metadata”. |
AggregateMultiTenancyIsolation | A host with the specified filter_tenant_id can only contain instances from that tenant (project). Note The tenant can still place instances on other hosts. |
AggregateRamFilter | Uses the host-aggregate metadata key ram_allocation_ratio to filter out hosts exceeding the over commit ratio (virtual RAM to physical RAM allocation ratio); only valid if a host aggregate is specified for the instance. |
If this ratio is not set, the filter uses the ram_allocation_ratio value in the /etc/nova/nova.conf file. The default value is | |
AllHostsFilter | Passes all available hosts (however, does not disable other filters). |
AvailabilityZoneFilter | Filters using the instance’s specified availability zone. |
ComputeCapabilitiesFilter |
Ensures Compute metadata is read correctly. Anything before the |
ComputeFilter | Passes only hosts that are operational and enabled. |
CoreFilter |
Uses the cpu_allocation_ratio in the |
DifferentHostFilter |
Enables an instance to build on a host that is different from one or more specified hosts. Specify |
DiskFilter |
Uses disk_allocation_ratio in the |
ImagePropertiesFilter | Only passes hosts that match the instance’s image properties. For more information, see Section 1.2.1, “Create an Image”. |
IsolatedHostsFilter |
Passes only isolated hosts running isolated images that are specified in the |
JsonFilter | Recognises and uses an instance’s custom JSON filters:
|
The filter is specified as a query hint in the
| |
MetricFilter | Filters out hosts with unavailable metrics. |
NUMATopologyFilter | Filters out hosts based on its NUMA topology; if the instance has no topology defined, any host can be used. The filter tries to match the exact NUMA topology of the instance to those of the host (it does not attempt to pack the instance onto the host). The filter also looks at the standard over-subscription limits for each NUMA node, and provides limits to the compute host accordingly. |
RamFilter |
Uses ram_allocation_ratio in the |
RetryFilter |
Filters out hosts that have failed a scheduling attempt; valid if scheduler_max_attempts is greater than zero (by default, |
SameHostFilter |
Passes one or more specified hosts; specify hosts for the instance using the |
ServerGroupAffinityFilter | Only passes hosts for a specific server group:
|
ServerGroupAntiAffinityFilter | Only passes hosts in a server group that do not already host an instance:
|
SimpleCIDRAffinityFilter |
Only passes hosts on the specified IP subnet range specified by the instance’s cidr and
|
3.5.2. Configure Scheduling Weights
Both cells and hosts can be weighted for scheduling; the host or cell with the largest weight (after filtering) is selected. All weighers are given a multiplier that is applied after normalising the node’s weight. A node’s weight is calculated as:
w1_multiplier * norm(w1) + w2_multiplier * norm(w2) + ...
You can configure weight options in the scheduler host’s /etc/nova/nova.conf file (you must have either root or nova user permissions).
3.5.2.1. Configure Weight Options for Hosts
You can define the host weighers you would like the scheduler to use in the [DEFAULT] scheduler_weight_classes option. Valid weighers are:
-
nova.scheduler.weights.ram
- Weighs the host’s available RAM. -
nova.scheduler.weights.metrics
- Weighs the host’s metrics. -
nova.scheduler.weights.all_weighers
- Uses all host weighers (default).
Weigher | Option | Description |
---|---|---|
All | [DEFAULT] scheduler_host_subset_size |
Defines the subset size from which a host is selected (integer); must be at least |
metrics | [metrics] required |
Specifies how to handle metrics in [metrics]
|
metrics |
[metrics] |
Used as the weight if any metric in [metrics] |
metrics |
[metrics] |
Mulitplier used for weighing metrics. By default, |
metrics |
[metrics] |
Specifies metrics and the ratio with which they are weighed; use a comma-separated list of
Example: |
ram |
[DEFAULT] |
Multiplier for RAM (floating point). By default, |
3.5.2.2. Configure Weight Options for Cells
You define which cell weighers you would like the scheduler to use in the [cells] scheduler_weight_classes option (/etc/nova/nova.conf file; you must have either root or nova user permissions).
The use of cells is available in this release as a Technology Preview, and therefore is not fully supported by Red Hat. It should only be used for testing, and should not be deployed in a production environment. For more information about Technology Preview features, see Scope of Coverage Details.
Valid weighers are:
-
nova.cells.weights.all_weighers
- Uses all cell weighers(default). -
nova.cells.weights.mute_child
- Weighs whether a child cell has not sent capacity or capability updates for some time. -
nova.cells.weights.ram_by_instance_type
- Weighs the cell’s available RAM. nova.cells.weights.weight_offset
- Evaluates a cell’s weight offset.NoteA cell’s weight offset is specified using
--woffset `in the `nova-manage cell create
command.
Weighers | Option | Description |
---|---|---|
|
[cells] |
Multiplier for hosts which have been silent for some time (negative floating point). By default, this value is |
|
[cells] |
Weight value given to silent hosts (positive floating point). By default, this value is |
|
[cells] |
Multiplier for weighing RAM (floating point). By default, this value is |
|
[cells] |
Multiplier for weighing cells (floating point). Enables the instance to specify a preferred cell (floating point) by setting its weight offset to |
3.6. Evacuate Instances
If you want to move an instance from a dead or shut-down compute node to a new host server in the same environment (for example, because the server needs to be swapped out), you can evacuate it using nova evacuate.
- An evacuation is only useful if the instance disks are on shared storage or if the instance disks are Block Storage volumes. Otherwise, the disks will not be accessible from the new compute node.
- An instance can only be evacuated from a server if the server is shut down; if the server is not shut down, the evacuate command will fail.
If you have a functioning compute node, and you want to:
- Make a static copy (not running) of an instance for backup purposes or to copy the instance to a different environment: make a snapshot using nova image-create (see Migrate a Static Instance).
- Move an instance in a static state (not running) to a host in the same environment (shared storage not needed): migrate it using nova migrate (see Migrate a Static Instance).
- Move an instance in a live state (running) to a host in the same environment: migrate it using nova live-migration (see Migrate a Live (running) Instance).
3.6.1. Evacuate One Instance
Evacuate an instance using:
# nova evacuate [--password pass] [--on-shared-storage] instance_name [target_host]
Where:
- --password pass - Admin password to set for the evacuated instance (cannot be used if --on-shared-storage is specified). If a password is not specified, a random password is generated and output when evacuation is complete.
- --on-shared-storage - Indicates that all instance files are on shared storage.
- instance_name - Name of the instance to be evacuated.
- target_host - Host to which the instance is evacuated; if you do not specify the host, the Compute scheduler selects one for you. You can find possible hosts using:
# nova host-list | grep compute
For example:
# nova evacuate myDemoInstance Compute2_OnEL7.myDomain
3.6.2. Evacuate All Instances
Evacuate all instances on a specified host using:
# nova host-evacuate [--target target_host] [--on-shared-storage] source_host
Where:
- --target - Host to which the instances are evacuated; if you do not specify the host, the Compute scheduler selects one for you. You can find possible hosts using:
# nova host-list | grep compute
- --on-shared-storage - Indicates that all instance files are on shared storage.
- source_host - Name of the host to be evacuated.
For example:
# nova host-evacuate --target Compute2_OnEL7.localdomain myDemoHost.localdomain
3.7. Manage Instance Snapshots
An instance snapshot allows you to create a new image from an instance. This is very convenient for upgrading base images or for taking a published image and customizing it for local use.
The difference between an image that you upload directly to the Image Service and an image that you create by snapshot is that an image created by snapshot has additional properties in the Image Service database. These properties are found in the image_properties table and include the following parameters:
Name | Value |
---|---|
image_type | snapshot |
instance_uuid | <uuid of instance that was snapshotted> |
base_image_ref | <uuid of original image of instance that was snapshotted> |
image_location | snapshot |
Snapshots allow you to create new instances based on that snapshot, and potentially restore an instance to that state. Moreover, this can be performed while the instance is running.
By default, a snapshot is accessible to the users and projects that were selected while launching an instance that the snapshot is based on.
3.7.1. Create an Instance Snapshot
- In the dashboard, select Project > Compute > Instances.
- Select the instance from which you want to create a snapshot.
- In the Actions column, click Create Snapshot.
In the Create Snapshot dialog, enter a name for the snapshot and click Create Snapshot.
The Images category now shows the instance snapshot.
To launch an instance from a snapshot, select the snapshot and click Launch.
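The same snapshot can be taken from the command line; the instance and snapshot names are placeholders:
$ nova image-create --poll myInstance myInstanceSnapshot
$ nova image-list
The --poll option reports progress and returns when the snapshot upload has finished.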
3.7.2. Manage a Snapshot
- In the dashboard, select Project > Images.
- All snapshots you created appear under the Project option.
For every snapshot you create, you can perform the following functions, using the dropdown list:
- Use the Create Volume option to create a volume, entering the values for volume name, description, image source, volume type, size, and availability zone. For more information, see Create a Volume.
- Use the Edit Image option to update the snapshot image by updating the values for name, description, Kernel ID, Ramdisk ID, Architecture, Format, Minimum Disk (GB), Minimum RAM (MB), public or private. For more information, see Update an Image.
- Use the Delete Image option to delete the snapshot.
3.7.3. Rebuild an Instance to a State in a Snapshot
In the event that you delete an instance on which a snapshot is based, the snapshot still stores the instance ID. You can check this information using the nova image-list command and use the snapshot to restore the instance.
- In the dashboard, select Project > Compute > Images.
- Select the snapshot from which you want to restore the instance.
- In the Actions column, click Launch Instance.
- In the Launch Instance dialog, enter a name and the other details for the instance and click Launch.
For more information on launching an instance, see Create an Instance.
3.7.4. Consistent Snapshots
Previously, file systems had to be quiesced manually (fsfreeze) before taking a snapshot of active instances for consistent backups.
With the RHEL OpenStack Platform 7 release, Compute’s libvirt driver now automatically requests the QEMU Guest Agent to freeze the file systems (and applications if fsfreeze-hook is installed) during an image snapshot. Support for quiescing file systems enables scheduled, automatic snapshots at the block device level.
This feature is only valid if the QEMU Guest Agent is installed (qemu-ga) and the image metadata enables the agent (hw_qemu_guest_agent=yes).
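For example, the agent property can be enabled on an existing image with the glance client; the image name is a placeholder:
$ glance image-update --property hw_qemu_guest_agent=yes myGuestImage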
Snapshots should not be considered a substitute for an actual system backup.