Chapter 4. Virtual machine instances
OpenStack Compute (nova) is the central component that provides virtual machines on demand. Compute interacts with the Identity service (keystone) for authentication, the Image service (glance) for the images used to launch instances, and the Dashboard service (horizon) for the user and administrative interface.
With Red Hat OpenStack Platform (RHOSP) you can easily manage virtual machine instances in the cloud. The Compute service creates, schedules, and manages instances, and exposes this functionality to other OpenStack components. This chapter discusses these procedures along with procedures to add components like key pairs, security groups, host aggregates and flavors. The term instance in OpenStack means a virtual machine instance.
4.1. Managing instances
Before you can create an instance, you need to ensure certain other OpenStack components (for example, a network, key pair and an image or a volume as the boot source) are available for the instance.
This section discusses the procedures to add these components, and to create and manage an instance. Managing an instance includes updating it, logging in to it, viewing how instances are being used, and resizing or deleting them.
4.1.1. Adding components
Use the following sections to create a network and a key pair, and to upload an image or volume source. Use these components when you create an instance, if they are not available by default. You must also create a new security group rule to allow users SSH access (a CLI sketch for this rule follows the list below).
- In the dashboard, select Project.
- Select Network > Networks, and ensure there is a private network to which you can attach the new instance (to create a network, see Creating a Network section in the Networking Guide).
- Select Compute > Access & Security > Key Pairs, and ensure there is a key pair (to create a key pair, see Section 4.2.1.1, “Creating a key pair”).
Ensure that you have either an image or a volume that you can use as a boot source:
- To view boot-source images, select the Images tab (to create an image, see Section 1.2.1, “Creating an image”).
- To view boot-source volumes, select the Volumes tab (to create a volume, see Create a Volume in the Storage Guide).
- Select Compute > Access & Security > Security Groups, and ensure you have created a security group rule (to create a security group, see Project Security Management in the Users and Identity Management Guide).
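If you prefer the command line, the following is a minimal sketch of creating the SSH rule; the security group name default is an assumed example, substitute your own group:

$ openstack security group rule create --protocol tcp --dst-port 22 --remote-ip 0.0.0.0/0 default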
4.1.2. Launching an instance
Launch one or more instances from the dashboard.
Instances are launched by default using the Launch Instance form. However, you can also enable a Launch Instance wizard that simplifies the steps required. For more information, see Appendix B, Enabling the launch instance wizard.
- In the dashboard, select Project > Compute > Instances.
- Click Launch Instance.
- Complete the fields (* indicates a required field), and click Launch.
One or more instances are created and launched based on the options provided.
It is not possible to launch an instance with a Block Storage (cinder) volume if the flavor's root disk size is larger than the available disk on the Compute node. Use one of the following workarounds to allow an instance to be launched with a Block Storage volume:
- Use a flavor with the root disk and ephemeral disk set to 0 (a sketch follows this list).
- Remove DiskFilter from the NovaSchedulerDefaultFilters configuration.
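As a sketch of the first workaround, you might create a flavor with no local disks from the command line; the flavor name and sizes are example values:

$ openstack flavor create --vcpus 2 --ram 4096 --disk 0 --ephemeral 0 m1.volume-boot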
4.1.2.1. Launching instance options
The following table outlines the options available when you use the Launch Instance form to launch a new instance. The same options are also available in the Launch instance wizard.
Tab | Field | Notes |
---|---|---|
Project and User | Project | Select the project from the list. |
User | Select the user from the list. | |
Details | Availability Zone | Zones are logical groupings of cloud resources in which you can place your instance. If you are unsure, use the default zone (for more information, see Section 4.4, “Managing host aggregates”). |
Instance Name | A name to identify your instance. | |
Flavor | The flavor determines what resources to give the instance, for example, memory. For default flavor allocations and information about creating new flavors, see Section 4.3, “Managing flavors”. | |
Instance Count | The number of instances to create with these parameters. 1 is preselected. | |
Instance Boot Source | Depending on the item selected, new fields are displayed so that you can select the source, for example, an image, an instance snapshot, a volume, or a volume snapshot. | |
Access and Security | Key Pair | The specified key pair is injected into the instance and is used to remotely access the instance using SSH (if no direct login information or static key pair is provided). Usually one key pair per project is created. |
Security Groups | Security groups contain firewall rules which filter the type and direction of the instance network traffic. For more information about configuring groups, see Project Security Management in the Users and Identity Management Guide). | |
Networking | Selected Networks | You must select at least one network. Instances are typically assigned to a private network, and then later given a floating IP address to enable external access. |
Post-Creation | Customization Script Source | You can provide either a set of commands or a script file, which runs after the instance is booted (for example, to set the instance host name or a user password). If Direct Input is selected, write your commands in the Script Data field; otherwise, specify your script file. Note: The script type is determined by its first line; for example, a script that starts with #! runs as a shell script, and data that starts with #cloud-config is parsed as cloud-config YAML (see Section 4.8, “Creating a customized instance”). |
Advanced Options | Disk Partition | By default, the instance is built as a single partition and dynamically resized as needed. However, you can choose to manually configure the partitions yourself. |
Configuration Drive | If selected, OpenStack writes metadata to a read-only configuration drive that is attached to the instance when it boots (instead of to Compute’s metadata service). After the instance has booted, you can mount this drive to view its contents and provide files to the instance. |
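The same launch can be performed from the command line. The following is a minimal sketch; the image, flavor, network, key pair, and instance names are example values:

$ openstack server create \
    --image rhel8 \
    --flavor m1.small \
    --nic net-id=private-network \
    --key-name my-keypair \
    --security-group default \
    my-instance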
4.1.3. Updating an instance
You can update an instance by selecting Project > Compute > Instances, and selecting an action for that instance in the Actions column. Use actions to manipulate the instance in a number of ways:
Action | Description |
---|---|
Create Snapshot | Snapshots preserve the disk state of a running instance. You can create a snapshot to migrate the instance, as well as to preserve backup copies. |
Associate/Disassociate Floating IP | You must associate an instance with a floating IP (external) address before it can communicate with external networks, or be reached by external users. Because there are a limited number of external addresses in your external subnets, it is recommended that you disassociate any unused addresses. |
Edit Instance | Update the instance’s name and associated security groups. |
Edit Security Groups | Add and remove security groups to or from this instance using the list of available security groups (for more information on configuring groups, see Project Security Management in the Users and Identity Management Guide). |
Console | View the instance console in the browser for easy access to the instance. |
View Log | View the most recent section of the instance console log. When opened, you can view the full log by clicking View Full Log. |
Pause/Resume Instance | Immediately pause the instance (you are not asked for confirmation); the state of the instance is stored in memory (RAM). |
Suspend/Resume Instance | Immediately suspend the instance (you are not asked for confirmation); like hibernation, the state of the instance is kept on disk. |
Resize Instance | Display the Resize Instance window (see Section 4.1.4, “Resizing an instance”). |
Soft Reboot | Gracefully stop and restart the instance. A soft reboot attempts to gracefully shut down all processes before restarting the instance. |
Hard Reboot | Stop and restart the instance. A hard reboot effectively power cycles the instance. |
Shut Off Instance | Gracefully stop the instance. |
Rebuild Instance | Use new image and disk-partition options to rebuild the image (shut down, re-image, and reboot the instance). If you encounter operating system issues, this option is easier to try than terminating the instance and starting from the beginning. |
Terminate Instance | Permanently destroy the instance (you are asked for confirmation). |
To create and allocate an external IP address, see Section 4.2.3, “Creating, assigning, and releasing floating IP addresses”.
4.1.4. Resizing an instance
To resize an instance (memory or CPU count), you must select a new flavor for the instance that has the right capacity. If you are increasing the size, remember to first ensure that the host has enough space.
- Ensure communication between hosts by setting up each host with SSH key authentication so that Compute can use SSH to move disks to other hosts. For example, Compute nodes can share the same SSH key.
- Enable resizing on the original host by setting the allow_resize_to_same_host parameter to True for the Controller role.

  Note: The allow_resize_to_same_host parameter does not resize the instance on the same host. Even if the parameter is True on all Compute nodes, the scheduler does not force the instance to resize on the same host. This is the expected behavior.
- In the dashboard, select Project > Compute > Instances.
- Click the instance’s Actions arrow, and select Resize Instance.
- Select a new flavor in the New Flavor field.
If you want to manually partition the instance when it launches (results in a faster build time):
- Select Advanced Options.
- In the Disk Partition field, select Manual.
- Click Resize.
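The resize can also be performed from the command line. A sketch with example names; the confirmation step and its exact syntax depend on your client version:

$ openstack server resize --flavor m1.large my-instance
$ openstack server resize confirm my-instance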
4.1.5. Connecting to an instance
You can access an instance console by using the dashboard or the command-line interface. You can also directly connect to the serial port of an instance so that you can debug even if the network connection fails.
4.1.5.1. Accessing an instance console by using the dashboard
You can connect to the instance console from the dashboard.
Procedure
- In the dashboard, select Compute > Instances.
- Click the instance’s More button and select Console.
- Log in using the image’s user name and password (for example, a CirrOS image uses cirros/cubswin:)).
4.1.5.2. Accessing an instance console by using the CLI
You can connect directly to the VNC console for an instance by entering the VNC console URL in a browser.
Procedure
To display the VNC console URL for an instance, enter the following command:

$ openstack console url show <vm_name>
+-------+------------------------------------------------------+
| Field | Value                                                |
+-------+------------------------------------------------------+
| type  | novnc                                                |
| url   | http://172.25.250.50:6080/vnc_auto.html?token=       |
|       | 962dfd71-f047-43d3-89a5-13cb88261eb9                 |
+-------+------------------------------------------------------+
- To connect directly to the VNC console, enter the displayed URL in a browser.
4.1.6. Viewing instance usage
The following usage statistics are available:
Per Project
To view instance usage per project, select Project > Compute > Overview. A usage summary is immediately displayed for all project instances.
You can also view statistics for a specific period of time by specifying the date range and clicking Submit.
Per Hypervisor
If logged in as an administrator, you can also view information for all projects. Click Admin > System and select one of the tabs. For example, the Resource Usage tab offers a way to view reports for a distinct time period. You might also click Hypervisors to view your current vCPU, memory, or disk statistics.
Note: The vCPU Usage value (x of y) reflects the total number of vCPUs of all virtual machines (x) and the total number of hypervisor cores (y).
4.1.7. Deleting an instance
- In the dashboard, select Project > Compute > Instances, and select your instance.
- Click Terminate Instance.
Deleting an instance does not delete its attached volumes; you must do this separately (see Delete a Volume in the Storage Guide).
4.1.8. Managing multiple instances simultaneously
If you need to start multiple instances at the same time (for example, those that were down for compute or controller maintenance) you can do so easily at Project > Compute > Instances:
- Click the check boxes in the first column for the instances that you want to start. If you want to select all of the instances, click the check box in the first row in the table.
- Click More Actions above the table and select Start Instances.
Similarly, you can shut off or soft reboot multiple instances by selecting the respective actions.
4.2. Managing instance security
You can manage access to an instance by assigning it the correct security group (set of firewall rules) and key pair (enables SSH user access). Further, you can assign a floating IP address to an instance to enable external network access. The sections below outline how to create and manage key pairs, security groups, and floating IP addresses, and how to log in to an instance using SSH. There is also a procedure for injecting an admin password into an instance.
For information on managing security groups, see Project Security Management in the Users and Identity Management Guide.
4.2.1. Managing key pairs
Key pairs provide SSH access to the instances. Each time a key pair is generated, its certificate is downloaded to the local machine and can be distributed to users. Typically, one key pair is created for each project (and used for multiple instances).
You can also import an existing key pair into OpenStack.
4.2.1.1. Creating a key pair
- In the dashboard, select Project > Compute > Access & Security.
- On the Key Pairs tab, click Create Key Pair.
- Specify a name in the Key Pair Name field, and click Create Key Pair.
When the key pair is created, a key pair file is automatically downloaded through the browser. Save this file for later connections from external machines. For command-line SSH connections, you can load this file into SSH by executing:
# ssh-add ~/.ssh/os-key.pem
4.2.1.2. Importing a key pair
- In the dashboard, select Project > Compute > Access & Security.
- On the Key Pairs tab, click Import Key Pair.
- Specify a name in the Key Pair Name field, and copy and paste the contents of your public key into the Public Key field.
- Click Import Key Pair.
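A sketch of the CLI equivalent, assuming an existing public key at ~/.ssh/id_rsa.pub and an example key pair name:

$ openstack keypair create --public-key ~/.ssh/id_rsa.pub my-key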
4.2.1.3. Deleting a key pair
- In the dashboard, select Project > Compute > Access & Security.
- On the Key Pairs tab, click the key’s Delete Key Pair button.
4.2.2. Creating a security group
Security groups are sets of IP filter rules that can be assigned to project instances, and which define networking access to the instance. Security groups are project-specific; project members can edit the default rules for their security group and add new rule sets.
- In the dashboard, select the Project tab, and click Compute > Access & Security.
- On the Security Groups tab, click + Create Security Group.
- Provide a name and description for the group, and click Create Security Group.
For more information on managing project security, see Project Security Management in the Users and Identity Management Guide.
4.2.3. Creating, assigning, and releasing floating IP addresses
By default, an instance is given an internal IP address when it is first created. However, you can enable access through the public network by creating and assigning a floating IP address (external address). You can change an instance’s associated IP address regardless of the instance’s state.
Projects have a limited range of floating IP addresses that can be used (by default, the limit is 50), so you should release these addresses for reuse when they are no longer needed. Floating IP addresses can only be allocated from an existing floating IP pool; see Creating Floating IP Pools in the Networking Guide.
4.2.3.1. Allocating a floating IP to the project
- In the dashboard, select Project > Compute > Access & Security.
- On the Floating IPs tab, click Allocate IP to Project.
- Select a network from which to allocate the IP address in the Pool field.
- Click Allocate IP.
4.2.3.2. Assigning a floating IP
- In the dashboard, select Project > Compute > Access & Security.
- On the Floating IPs tab, click the address' Associate button.
- Select the address to be assigned in the IP address field.

  Note: If no addresses are available, you can click the + button to create a new address.
- Select the instance to be associated in the Port to be Associated field. An instance can only be associated with one floating IP address.
- Click Associate.
4.2.3.3. Releasing a floating IP
- In the dashboard, select Project > Compute > Access & Security.
- On the Floating IPs tab, click the address' menu arrow (next to the Associate/Disassociate button).
- Select Release Floating IP.
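The full floating IP lifecycle is also available from the command line. A sketch with example values; public is an assumed external network name:

$ openstack floating ip create public
$ openstack server add floating ip my-instance 192.0.2.50
$ openstack server remove floating ip my-instance 192.0.2.50
$ openstack floating ip delete 192.0.2.50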
4.2.4. Logging in to an instance
Prerequisites:
- Ensure that the instance’s security group has an SSH rule (see Project Security Management in the Users and Identity Management Guide).
- Ensure the instance has a floating IP address (external address) assigned to it (see Section 4.2.3, “Creating, assigning, and releasing floating IP addresses”).
- Obtain the instance’s key-pair certificate. The certificate is downloaded when the key pair is created; if you did not create the key pair yourself, ask your administrator (see Section 4.2.1, “Managing key pairs”).
To load the key-pair file into SSH so that you can then use ssh without naming the key file:
- Change the permissions of the generated key-pair certificate:

  $ chmod 600 os-key.pem
- Check whether ssh-agent is already running:

  # ps -ef | grep ssh-agent
- If it is not already running, start it up with:

  # eval `ssh-agent`
- On your local machine, load the key-pair certificate into SSH. For example:

  $ ssh-add ~/.ssh/os-key.pem
- You can now SSH into the instance with the user supplied by the image. The following example command shows how to SSH into the Red Hat Enterprise Linux guest image with the user cloud-user:
$ ssh cloud-user@192.0.2.24
You can also use the certificate directly. For example:
$ ssh -i /myDir/os-key.pem cloud-user@192.0.2.24
4.2.5. Injecting an admin password into an instance
You can inject an admin (root) password into an instance using the following procedure.

- In the /etc/openstack-dashboard/local_settings file, set the can_set_password parameter to True:

  can_set_password: True
- Set the inject_password parameter to true in your Compute environment file:

  inject_password=true
- Restart the Compute service:

  # service nova-compute restart
When you use the nova boot command to launch a new instance, the output of the command displays an adminPass parameter. You can use this password to log in to the instance as the root user.
The Compute service overwrites the password value in the /etc/shadow file for the root user. This procedure can also be used to activate the root account for the KVM guest images. For more information on how to use KVM guest images, see Section 1.2.1.1, “Using a KVM guest image with Red Hat OpenStack Platform”.
You can also set a custom password from the dashboard. To enable this, run the following command after you have set the can_set_password parameter to true:

# systemctl restart httpd.service

The newly added admin password fields appear in the dashboard, and can be used when you launch or rebuild an instance.
4.3. Managing flavors
Each created instance is given a flavor (resource template), which determines the instance’s size and capacity. Flavors can also specify secondary ephemeral storage, swap disk, metadata to restrict usage, or special project access (none of the default flavors have these additional attributes defined).
Name | vCPUs | RAM | Root Disk Size |
---|---|---|---|
m1.tiny | 1 | 512 MB | 1 GB |
m1.small | 1 | 2048 MB | 20 GB |
m1.medium | 2 | 4096 MB | 40 GB |
m1.large | 4 | 8192 MB | 80 GB |
m1.xlarge | 8 | 16384 MB | 160 GB |
The majority of end users will be able to use the default flavors. However, you can create and manage specialized flavors. For example, you can:
- Change default memory and capacity to suit the underlying hardware needs.
- Add metadata to force a specific I/O rate for the instance or to match a host aggregate.
Behavior set using image properties overrides behavior set using flavors (for more information, see Section 1.2, “Managing images”).
4.3.1. Updating configuration permissions
By default, only administrators can create flavors or view the complete flavor list (select Admin > System > Flavors). To allow all users to configure flavors, specify the following in the /etc/nova/policy.json file (nova-api server):
"compute_extension:flavormanage": "",
4.3.2. Creating a flavor
- As an admin user in the dashboard, select Admin > System > Flavors.
- Click Create Flavor, and specify the following fields:

Table 4.4. Flavor Options

Tab | Field | Description |
---|---|---|
Flavor Information | Name | Unique name. |
| ID | Unique ID. The default value, auto, generates a UUID4 value, but you can also manually specify an integer or UUID4 value. |
| VCPUs | Number of virtual CPUs. |
| RAM (MB) | Memory (in megabytes). |
| Root Disk (GB) | Root disk size (in gigabytes); to use the native image size, specify 0. This disk is ephemeral and is not used if Instance Boot Source=Boot from Volume. |
| Ephemeral Disk (GB) | Secondary ephemeral disk size (in gigabytes) available to an instance. This disk is destroyed when an instance is deleted. The default value is 0, which implies that no ephemeral disk is created. |
| Swap Disk (MB) | Swap disk size (in megabytes). |
Flavor Access | Selected Projects | Projects which can use the flavor. If no projects are selected, all projects have access (Public=Yes). |

- Click Create Flavor.
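A comparable flavor can also be created from the command line. A sketch with example values:

$ openstack flavor create --vcpus 2 --ram 4096 --disk 40 --ephemeral 0 --swap 0 m2.custom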
4.3.3. Updating general attributes
- As an admin user in the dashboard, select Admin > System > Flavors.
- Click the flavor’s Edit Flavor button.
- Update the values, and click Save.
4.3.4. Updating flavor metadata
In addition to editing general attributes, you can add metadata to a flavor (extra_specs), which can help fine-tune instance usage. For example, you might want to set the maximum-allowed bandwidth or disk writes.
- Pre-defined keys determine hardware support or quotas. Pre-defined keys are limited by the hypervisor you are using (for libvirt, see Table 4.5, “Libvirt Metadata”).
- Both pre-defined and user-defined keys can determine instance scheduling. For example, you might specify SpecialComp=True; any instance with this flavor can then only run in a host aggregate with the same key-value combination in its metadata (see Section 4.4, “Managing host aggregates”).
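From the command line, flavor metadata is set as properties. A sketch, using a watchdog key from Table 4.5 and an example flavor name:

$ openstack flavor set --property hw:watchdog_action=poweroff m1.small
$ openstack flavor show m1.small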
4.3.4.1. Viewing metadata
- As an admin user in the dashboard, select Admin > System > Flavors.
- Click the flavor’s Metadata link (Yes or No). All current values are listed on the right-hand side under Existing Metadata.
4.3.4.2. Adding metadata
You specify a flavor’s metadata using a key/value pair.
- As an admin user in the dashboard, select Admin > System > Flavors.
- Click the flavor’s Metadata link (Yes or No). All current values are listed on the right-hand side under Existing Metadata.
- Under Available Metadata, click on the Other field, and specify the key you want to add (see Table 4.5, “Libvirt Metadata”).
- Click the + button; you can now view the new key under Existing Metadata.
- Fill in the key’s value in its right-hand field.
- When finished with adding key-value pairs, click Save.
Key | Description |
---|---|
hw:<action> | Action that configures support limits per instance. Valid actions include: cpu_max_sockets, cpu_max_cores, cpu_max_threads, cpu_sockets, cpu_cores, cpu_threads, and serial_port_count.
Example: hw:cpu_max_sockets=2 |
hw:numa_<definition> | Definition of NUMA topology for the instance. For flavors whose RAM and vCPU allocations are larger than the size of NUMA nodes in the compute hosts, defining NUMA topology enables hosts to better utilize NUMA and improve performance of the guest OS. NUMA definitions defined through the flavor override image definitions. Valid definitions are:

- numa_nodes - Number of NUMA nodes to expose to the instance.
- numa_cpus.N - Comma-separated list of vCPUs mapped to NUMA node N.
- numa_mem.N - Amount of RAM, in MB, mapped to NUMA node N.

Note
If the values of numa_cpus.N or numa_mem.N exceed the vCPUs or RAM available to the instance, an error is raised.

Example when the instance has 8 vCPUs and 4GB RAM:

- hw:numa_nodes=2
- hw:numa_cpus.0=0,1,2,3,4,5
- hw:numa_cpus.1=6,7
- hw:numa_mem.0=3072
- hw:numa_mem.1=1024

The scheduler looks for a host with 2 NUMA nodes with the ability to run 6 CPUs + 3072 MB, or 3 GB, of RAM on one node, and 2 CPUS + 1024 MB, or 1 GB, of RAM on another node. If a host has a single NUMA node with capability to run 8 CPUs and 4 GB of RAM, it will not be considered a valid match. |
hw:watchdog_action | An instance watchdog device can be used to trigger an action if the instance somehow fails (or hangs). Valid actions are: disabled, reset, poweroff, pause, and none.
Example: hw:watchdog_action=poweroff |
hw:pci_numa_affinity_policy | You can use this parameter to specify the NUMA affinity policy for PCI passthrough devices and SR-IOV interfaces. Set to one of the following valid values: required, preferred, or legacy (the default).
Example: hw:pci_numa_affinity_policy=required |
hw_rng:<action> |
A random-number generator device can be added to an instance using its image properties (see hw_rng_model in Appendix A, Image configuration parameters).

If the device has been added, valid actions are: allowed (True or False), rate_bytes (maximum bytes the instance can read per period), and rate_period (duration of the read period, in milliseconds).

Example: hw_rng:allowed=True |
hw_video:ram_max_mb | Maximum permitted RAM to be allowed for video devices (in MB).
Example: hw_video:ram_max_mb=64 |
quota:<option> | Enforcing limit for the instance. Valid options include: cpu_period, cpu_quota, cpu_shares, disk_read_bytes_sec, disk_read_iops_sec, disk_write_bytes_sec, disk_write_iops_sec, vif_inbound_average, vif_inbound_burst, vif_inbound_peak, vif_outbound_average, vif_outbound_burst, and vif_outbound_peak.

Example: quota:vif_inbound_average=10240

In addition, the VMware driver supports the following quota options, which control upper and lower limits for CPUs, RAM, disks, and networks, as well as shares, which can be used to control relative allocation of available resources among tenants (for example, quota:cpu_limit and quota:cpu_reservation). |
4.4. Managing host aggregates
A single Compute deployment can be partitioned into logical groups for performance or administrative purposes. OpenStack uses the following terms:
Host aggregates - A host aggregate creates logical units in an OpenStack deployment by grouping together hosts. Aggregates are assigned Compute hosts and associated metadata; a host can be in more than one host aggregate. Only administrators can see or create host aggregates.
An aggregate’s metadata is commonly used to provide information for use with the Compute scheduler (for example, limiting specific flavors or images to a subset of hosts). Metadata specified in a host aggregate will limit the use of that host to any instance that has the same metadata specified in its flavor.
Administrators can use host aggregates to handle load balancing, enforce physical isolation (or redundancy), group servers with common attributes, or separate out classes of hardware. When you create an aggregate, a zone name must be specified, and it is this name which is presented to the end user.
Availability zones - An availability zone is the end-user view of a host aggregate. An end user cannot view which hosts make up the zone, nor see the zone’s metadata; the user can only see the zone’s name.
End users can be directed to use specific zones which have been configured with certain capabilities or within certain areas.
4.4.1. Enabling host aggregate scheduling
By default, host-aggregate metadata is not used to filter instance usage. You must update the Compute scheduler’s configuration to enable metadata usage:
- Open your Compute environment file.
- Add the following values to the NovaSchedulerDefaultFilters parameter, if they are not already present (a minimal environment-file sketch follows this procedure):

  - AggregateInstanceExtraSpecsFilter for host aggregate metadata.

    Note: Scoped specifications must be used for setting flavor extra_specs when specifying both AggregateInstanceExtraSpecsFilter and ComputeCapabilitiesFilter filters as values of the same NovaSchedulerDefaultFilters parameter, otherwise the ComputeCapabilitiesFilter will fail to select a suitable host. See Table 4.7, “Scheduling Filters” for further details.
  - AvailabilityZoneFilter for availability zone host specification when launching an instance.
- Save the configuration file.
- Deploy the overcloud.
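A minimal sketch of the corresponding environment-file entry; the exact filter list must match the needs of your deployment:

parameter_defaults:
  NovaSchedulerDefaultFilters:
    - RetryFilter
    - AvailabilityZoneFilter
    - ComputeFilter
    - ComputeCapabilitiesFilter
    - ImagePropertiesFilter
    - AggregateInstanceExtraSpecsFilter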
4.4.2. Viewing availability zones or host aggregates
As an admin user in the dashboard, select Admin > System > Host Aggregates. All currently defined aggregates are listed in the Host Aggregates section; all zones are in the Availability Zones section.
4.4.3. Adding a host aggregate
- As an admin user in the dashboard, select Admin > System > Host Aggregates. All currently defined aggregates are listed in the Host Aggregates section.
- Click Create Host Aggregate.
- Add a name for the aggregate in the Name field, and a name by which the end user should see it in the Availability Zone field.
- Click Manage Hosts within Aggregate.
- Select a host for use by clicking its + icon.
- Click Create Host Aggregate.
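A sketch of the CLI equivalent; the aggregate, zone, and host names are example values:

$ openstack aggregate create --zone my-zone my-aggregate
$ openstack aggregate add host my-aggregate compute-0.localdomain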
4.4.4. Updating a host aggregate
- As an admin user in the dashboard, select Admin > System > Host Aggregates. All currently defined aggregates are listed in the Host Aggregates section.
To update the aggregate's Name or Availability zone:
- Click the aggregate’s Edit Host Aggregate button.
- Update the Name or Availability Zone field, and click Save.
To update the aggregate's assigned hosts:
- Click the aggregate’s arrow icon under Actions.
- Click Manage Hosts.
- Change a host’s assignment by clicking its + or - icon.
- When finished, click Save.
To update the aggregate's metadata:
- Click the aggregate’s arrow icon under Actions.
- Click the Update Metadata button. All current values are listed on the right-hand side under Existing Metadata.
- Under Available Metadata, click on the Other field, and specify the key you want to add. Use predefined keys (see Table 4.6, “Host Aggregate Metadata”) or add your own (which will only be valid if exactly the same key is set in an instance’s flavor).
- Click the + button; you can now view the new key under Existing Metadata.

  Note: Remove a key by clicking its - icon.
- Click Save.
Table 4.6. Host Aggregate Metadata

Key | Description |
---|---|
filter_tenant_id | If specified, the aggregate only hosts this tenant (project). Depends on the AggregateMultiTenancyIsolation filter being set for the Compute scheduler. |
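A sketch of setting this key from the command line; the project ID is a placeholder:

$ openstack aggregate set --property filter_tenant_id=<project_id> my-aggregate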
4.4.5. Deleting a host aggregate
- As an admin user in the dashboard, select Admin > System > Host Aggregates. All currently defined aggregates are listed in the Host Aggregates section.
- Remove all assigned hosts from the aggregate:
  - Click the aggregate’s arrow icon under Actions.
  - Click Manage Hosts.
  - Remove all hosts by clicking their - icon.
  - When finished, click Save.
- Click the aggregate’s arrow icon under Actions.
- Click Delete Host Aggregate in this and the next dialog screen.
4.5. Scheduling hosts
The Compute scheduling service determines the host, or host aggregate, on which to place an instance. As an administrator, you can influence where the scheduler places an instance. For example, you might want to limit scheduling to hosts in a certain group or with the right RAM.
You can configure the following components:
- Filters - Determine the initial set of hosts on which an instance might be placed (see Section 4.5.1, “Configuring scheduling filters”).
- Weights - When filtering is complete, the resulting set of hosts are prioritized using the weighting system. The highest weight has the highest priority (see Section 4.5.2, “Configuring scheduling weights”).
-
Scheduler service - There are a number of configuration options in the
/var/lib/config-data/puppet-generated/<nova_container>/etc/nova/nova.conf
file (on the scheduler host), which determine how the scheduler executes its tasks, and handles weights and filters.
For example, if both host 1 and host 3 are eligible after filtering, and host 1 has the highest weight, host 1 has the highest priority for scheduling.
4.5.1. Configuring scheduling filters
You define the filters you want the scheduler to use by adding or removing filters from the NovaSchedulerDefaultFilters parameter in your Compute environment file.
The default configuration runs the following filters in the scheduler:
- RetryFilter
- AvailabilityZoneFilter
- ComputeFilter
- ComputeCapabilitiesFilter
- ImagePropertiesFilter
- ServerGroupAntiAffinityFilter
- ServerGroupAffinityFilter
Some filters use information in parameters passed to the instance in:

- The nova boot command.
- The instance’s flavor (see Section 4.3.4, “Updating flavor metadata”).
- The instance’s image (see Appendix A, Image configuration parameters).
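For example, scheduler hints are passed on the command line when you launch an instance. A sketch using the different_host hint; the UUID is a placeholder:

$ openstack server create --image rhel8 --flavor m1.small \
    --hint different_host=<server_uuid> \
    my-instance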
The following table lists all the available filters.
Filter | Description |
---|---|
AggregateImagePropertiesIsolation | Only passes hosts in host aggregates whose metadata matches the instance’s image metadata; only valid if a host aggregate is specified for the instance. For more information, see Section 1.2.1, “Creating an image”. |
AggregateInstanceExtraSpecsFilter | Metadata in the host aggregate must match the host’s flavor metadata. For more information, see Section 4.3.4, “Updating flavor metadata”. Use scoped specifications when this filter and ComputeCapabilitiesFilter are specified in the same NovaSchedulerDefaultFilters parameter (see Section 4.4.1, “Enabling host aggregate scheduling”). |
AggregateMultiTenancyIsolation | A host with the specified filter_tenant_id metadata can only contain instances from that tenant (project). Note: The tenant can still place instances on other hosts. |
AllHostsFilter | Passes all available hosts (however, does not disable other filters). |
AvailabilityZoneFilter | Filters using the instance’s specified availability zone. |
ComputeCapabilitiesFilter | Ensures Compute metadata is read correctly. Anything before the colon (:) in a scoped key is read as a namespace; for example, quota:cpu_period uses quota as the namespace and cpu_period as the key. |
ComputeFilter | Passes only hosts that are operational and enabled. |
DifferentHostFilter | Enables an instance to build on a host that is different from one or more specified hosts. Specify these hosts by using the different_host scheduler hint when you launch the instance. |
ImagePropertiesFilter | Only passes hosts that match the instance’s image properties. For more information, see Section 1.2.1, “Creating an image”. |
IsolatedHostsFilter | Passes only isolated hosts running isolated images that are specified using the isolated_hosts and isolated_images (comma-separated lists) options in the Compute configuration. |
JsonFilter | Recognises and uses an instance’s custom JSON filters:

- Valid operators are: =, <, >, in, <=, >=, not, or, and

The filter is specified as a query hint in the nova boot command. For example:

--hint query='[">=","$free_ram_mb",1024]' |
MetricsFilter |
Use this filter to limit scheduling to Compute nodes that report the metrics configured by using the DEFAULT/compute_monitors option. To use this filter, add the following configuration to your Compute environment file:

parameter_defaults:
  ComputeExtraConfig:
    nova::config::nova_config:
      DEFAULT/compute_monitors:
        value: 'cpu.virt_driver'

By default, the Compute scheduling service updates the metrics every 60 seconds. To ensure the metrics are up-to-date, you can increase the frequency at which the metrics data is refreshed using the DEFAULT/update_resources_interval option:

parameter_defaults:
  ComputeExtraConfig:
    nova::config::nova_config:
      DEFAULT/update_resources_interval:
        value: '2' |
NUMATopologyFilter | Filters out hosts based on their NUMA topology. If the instance has no topology defined, any host can be used. The filter tries to match the exact NUMA topology of the instance to those of the host (it does not attempt to pack the instance onto the host). The filter also looks at the standard over-subscription limits for each NUMA node, and provides limits to the compute host accordingly. |
RetryFilter | Filters out hosts that have failed a scheduling attempt; valid if scheduler_max_attempts is greater than 1. |
SameHostFilter | Passes one or more specified hosts; specify hosts for the instance using the same_host scheduler hint when you launch the instance. |
ServerGroupAffinityFilter | Only passes hosts for a specific server group: give the server group the affinity policy when you create it, and pass the group to the instance with the group scheduler hint when you launch it. |
ServerGroupAntiAffinityFilter | Only passes hosts in a server group that do not already host an instance: give the server group the anti-affinity policy when you create it, and pass the group to the instance with the group scheduler hint when you launch it. |
SimpleCIDRAffinityFilter | Only passes hosts on the IP subnet range specified by the instance’s cidr and build_near_host_ip scheduler hints. |
4.5.2. Configuring scheduling weights
Hosts can be weighted for scheduling; the host with the largest weight (after filtering) is selected. All weighers are given a multiplier that is applied after normalising the node’s weight. A node’s weight is calculated as:
w1_multiplier * norm(w1) + w2_multiplier * norm(w2) + ...
You can configure weight options in the Compute node’s configuration file.
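As a sketch, a single weigher multiplier set in nova.conf; the value 2.0 is an example, and on RHOSP deployments these options are normally set through environment files rather than edited in place:

[filter_scheduler]
ram_weight_multiplier = 2.0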
Configuration option | Description |
---|---|
filter_scheduler/weight_classes | Use this parameter to configure which of the weighers described in this table to use for calculating the weight of each host.
Type: String |
filter_scheduler/ram_weight_multiplier | Use this parameter to specify the multiplier to use to weigh hosts based on the available RAM. Set to a positive value to prefer hosts with more available RAM, which spreads instances across many hosts. Set to a negative value to prefer hosts with less available RAM, which fills up (stacks) hosts as much as possible before scheduling to a less-used host. The absolute value, whether positive or negative, controls how strong the RAM weigher is relative to other weighers.
By default, the scheduler spreads instances across all hosts evenly (1.0). Type: Floating point |
filter_scheduler/disk_weight_multiplier | Use this parameter to specify the multiplier to use to weigh hosts based on the available disk space. Set to a positive value to prefer hosts with more available disk space, which spreads instances across many hosts. Set to a negative value to prefer hosts with less available disk space, which fills up (stacks) hosts as much as possible before scheduling to a less-used host. The absolute value, whether positive or negative, controls how strong the disk weigher is relative to other weighers.
By default, the scheduler spreads instances across all hosts evenly (1.0). Type: Floating point |
filter_scheduler/cpu_weight_multiplier | Use this parameter to specify the multiplier to use to weigh hosts based on the available vCPUs. Set to a positive value to prefer hosts with more available vCPUs, which spreads instances across many hosts. Set to a negative value to prefer hosts with less available vCPUs, which fills up (stacks) hosts as much as possible before scheduling to a less-used host. The absolute value, whether positive or negative, controls how strong the vCPU weigher is relative to other weighers.
By default, the scheduler spreads instances across all hosts evenly (1.0). Type: Floating point |
filter_scheduler/io_ops_weight_multiplier | Use this parameter to specify the multiplier to use to weigh hosts based on the host workload. Set to a negative value to prefer hosts with lighter workloads, which distributes the workload across more hosts. Set to a positive value to prefer hosts with heavier workloads, which schedules instances onto hosts that are already busy. The absolute value, whether positive or negative, controls how strong the I/O operations weigher is relative to other weighers.
By default, the scheduler distributes the workload across more hosts (-1.0). Type: Floating point |
filter_scheduler/build_failure_weight_multiplier | Use this parameter to specify the multiplier to use to weigh hosts based on recent build failures. Set to a positive value to increase the significance of build failures recently reported by the host. Hosts with recent build failures are then less likely to be chosen.
Set to 0.0 to disable weighing hosts by the number of recent failures. Default: 1000000.0 Type: Floating point |
filter_scheduler/cross_cell_move_weight_multiplier | Use this parameter to specify the multiplier to use to weigh hosts during a cross-cell move. This option determines how much weight is placed on a host which is within the same source cell when moving an instance. By default, the scheduler prefers hosts within the same source cell when migrating an instance. Set to a positive value to prefer hosts within the same cell the instance is currently running. Set to a negative value to prefer hosts located in a different cell from that where the instance is currently running. Default: 1000000.0 Type: Floating point |
filter_scheduler/pci_weight_multiplier | Use this parameter to specify the multiplier to use to weigh hosts based on the number of PCI devices on the host and the number of PCI devices requested by an instance. If an instance requests PCI devices, then the more PCI devices a Compute node has the higher the weight allocated to the Compute node. For example, if there are three hosts available, one with a single PCI device, one with multiple PCI devices and one without any PCI devices, then the Compute scheduler prioritizes these hosts based on the demands of the instance. The first host should be preferred if the instance requests one PCI device, the second host if the instance requires multiple PCI devices and the third host if the instance does not request a PCI device. Configure this option to prevent non-PCI instances from occupying resources on hosts with PCI devices. Default: 1.0 Type: Positive floating point |
filter_scheduler/host_subset_size | Use this parameter to specify the size of the subset of filtered hosts from which to select the host. Must be set to at least 1. A value of 1 selects the first host returned by the weighing functions. Any value less than 1 is ignored and 1 is used instead. Set to a value greater than 1 to prevent multiple scheduler processes handling similar requests selecting the same host, creating a potential race condition. By selecting a host randomly from the N hosts that best fit the request, the chance of a conflict is reduced. However, the higher you set this value, the less optimal the chosen host may be for a given request. Default: 1 Type: Integer |
filter_scheduler/soft_affinity_weight_multiplier | Use this parameter to specify the multiplier to use to weigh hosts for group soft-affinity. Default: 1.0 Type: Positive floating point |
filter_scheduler/soft_anti_affinity_weight_multiplier | Use this parameter to specify the multiplier to use to weigh hosts for group soft-anti-affinity. Default: 1.0 Type: Positive floating point |
metrics/weight_multiplier | Use this parameter to specify the multiplier to use for weighting metrics. By default, weight_multiplier=1.0 and the weigher spreads instances across possible hosts. Set to a number greater than 1.0 to increase the effect of the metric on the overall weight. Set to a number between 0.0 and 1.0 to reduce the effect of the metric on the overall weight. Set to 0.0 to ignore the metric value and return the value of the ‘weight_of_unavailable’ option. Set to a negative number to prioritize the host with lower metrics, and stack instances in hosts. Default: 1.0 Type: Floating point |
metrics/weight_setting | Use this parameter to specify the metrics to use for weighting, and the ratio to use to calculate the weight of each metric. Valid metric names include: cpu.frequency, cpu.user.time, cpu.kernel.time, cpu.idle.time, cpu.iowait.time, cpu.user.percent, cpu.kernel.percent, cpu.idle.percent, cpu.iowait.percent, and cpu.percent.
Example: weight_setting=cpu.user.time=1.0
Type: Comma-separated list of metric=ratio pairs |
metrics/required | Use this parameter to specify how to handle configured weight_setting metrics that are unavailable:

- True - The metrics are required. If a metric is unavailable, an exception is raised. To avoid the exception, use the MetricsFilter filter in the NovaSchedulerDefaultFilters parameter.
- False - An unavailable metric is treated as a negative factor in the weighing process; the returned value is set by the weight_of_unavailable option.

Type: Boolean |
metrics/weight_of_unavailable | Use this parameter to specify the weight to use if any metric configured in weight_setting is unavailable. Default: -10000.0 Type: Floating point |
4.5.3. Reserving NUMA nodes with PCI devices
Compute uses the filter scheduler to prioritize hosts with PCI devices for instances requesting PCI. The hosts are weighted using the PCIWeigher option, based on the number of PCI devices available on the host and the number of PCI devices requested by an instance. If an instance requests PCI devices, then the hosts with more PCI devices are allocated a higher weight than the others. If an instance is not requesting PCI devices, then prioritization does not take place.
This feature is especially useful in the following cases:
- As an operator, if you want to reserve nodes with PCI devices (typically expensive and with limited resources) for guest instances that request them.
- As a user launching instances, you want to ensure that PCI devices are available when required.
For this weigher to be considered, you must add one of the following values to the NovaSchedulerDefaultFilters parameter in your Compute environment file: PciPassthroughFilter or NUMATopologyFilter.
The pci_weight_multiplier configuration option must be a positive value.
4.6. Managing instance snapshots
You can use an instance snapshot to create a new image from an instance. This is very convenient for upgrading base images or for taking a published image and customizing it for local use.
The difference between an image that you upload directly to the Image service and an image that you create by snapshot is that an image created by snapshot has additional properties in the Image service database. These properties are in the image_properties table and include the following parameters:
Name | Value |
---|---|
image_type | snapshot |
instance_uuid | <uuid_of_instance_that_was_snapshotted> |
base_image_ref | <uuid_of_original_image_of_instance_that_was_snapshotted> |
image_location | snapshot |
Use snapshots to create new instances based on that snapshot, and potentially restore an instance to that state. You can perform this action while the instance is running.
By default, a snapshot is accessible to the users and projects that were selected while launching an instance that the snapshot is based on.
4.6.1. Creating an instance snapshot
If you intend to use an instance snapshot as a template to create new instances, you must ensure that the disk state is consistent. Before you create a snapshot, set the snapshot image metadata property os_require_quiesce=yes:
$ openstack image set --property os_require_quiesce=yes <image_id>
For this to work, the guest must have the qemu-guest-agent package installed, and the image must be created with the metadata property parameter hw_qemu_guest_agent=yes set:

$ openstack image create \
    --disk-format raw \
    --container-format bare \
    --file <file_name> \
    --public \
    --property hw_qemu_guest_agent=yes \
    --progress \
    --name <name>
If you unconditionally enable the hw_qemu_guest_agent=yes parameter, then you are adding another device to the guest. This consumes a PCI slot, and limits the number of other devices you can allocate to the guest. It also causes Windows guests to display a warning message about an unknown hardware device.

For these reasons, setting the hw_qemu_guest_agent=yes parameter is optional, and you must use the parameter only for images that require the QEMU guest agent.
- In the dashboard, select Project > Compute > Instances.
- Select the instance from which you want to create a snapshot.
- In the Actions column, click Create Snapshot.
- In the Create Snapshot dialog, enter a name for the snapshot and click Create Snapshot.
The Images category now shows the instance snapshot.
To launch an instance from a snapshot, select the snapshot and click Launch.
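The snapshot can also be created from the command line. A sketch with example names:

$ openstack server image create --name my-instance-snap my-instance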
4.6.2. Managing a snapshot
- In the dashboard, select Project > Images.
- All snapshots that you created appear under the Project option.
For every snapshot you create, you can perform the following functions, using the dropdown list:
- Use the Create Volume option to create a volume, entering the values for volume name, description, image source, volume type, size, and availability zone. For more information, see Create a Volume in the Storage Guide.
- Use the Edit Image option to update the snapshot image by updating the values for name, description, Kernel ID, Ramdisk ID, Architecture, Format, Minimum Disk (GB), Minimum RAM (MB), public or private. For more information, see Section 1.2.3, “Updating an image”.
- Use the Delete Image option to delete the snapshot.
4.6.3. Rebuilding an instance to a state in a snapshot
In the event that you delete an instance on which a snapshot is based, the snapshot still stores the instance ID. You can check this information by using the nova image-list command, and use the snapshot to restore the instance.
- In the dashboard, select Project > Compute > Images.
- Select the snapshot from which you want to restore the instance.
- In the Actions column, click Launch Instance.
- In the Launch Instance dialog, enter a name and the other details for the instance and click Launch.
For more information on launching an instance, see Section 4.1.2, “Launching an instance”.
4.6.4. Consistent snapshots
Previously, file systems had to be quiesced manually (fsfreeze) before taking a snapshot of active instances for consistent backups.
The Compute libvirt driver automatically requests the QEMU Guest Agent to freeze the file systems (and applications if fsfreeze-hook is installed) during an image snapshot. Support for quiescing file systems enables scheduled, automatic snapshots at the block device level.
This feature is valid only if the QEMU Guest Agent is installed (qemu-ga) and the image metadata enables the agent (hw_qemu_guest_agent=yes).
Do not use snapshots as a substitute for system backups.
4.7. Using rescue mode for instances
Compute has a method to reboot a virtual machine in rescue mode. Rescue mode provides a mechanism for access when the virtual machine image renders the instance inaccessible. A rescue virtual machine allows a user to fix their virtual machine by accessing the instance with a new root password. This feature is useful if the file system of an instance is corrupted. By default, rescue mode starts an instance from the initial image attaching the current boot disk as a secondary one.
4.7.1. Preparing an image for a rescue mode instance
Because both the boot disk and the disk for rescue mode have the same UUID, the virtual machine can sometimes be booted from the boot disk instead of the disk for rescue mode.
To avoid this issue, create a new image to use as a rescue image, based on the procedure in Section 1.2.1, “Creating an image”. The rescue image is stored in glance and configured in nova.conf as a default, or you can select it when you perform the rescue.
4.7.1.1. Rescuing an image that uses the ext4 file system
When the base image uses the ext4 file system, you can create a rescue image from it by using the following procedure:

- Change the UUID to a random value by using the tune2fs command:

  # tune2fs -U random /dev/<device_node>

  Replace <device_node> with the root device node, for example, sda or vda.
- Verify the details of the file system, including the new UUID:

  # tune2fs -l /dev/<device_node>
- Update the /etc/fstab file to use the new UUID. You might need to repeat this for any additional partitions that you have that are mounted in the fstab by UUID.
- Update the /boot/grub2/grub.cfg file and update the UUID parameter with the new UUID of the root disk.
- Shut down and use this image as your rescue image. This causes the rescue image to have a new random UUID that does not conflict with the instance that you are rescuing.
You cannot change the UUID of the root device of a running virtual machine that uses the XFS file system. If the virtual machine boots from the boot disk instead of the rescue disk, reboot the virtual machine until it is launched from the disk for rescue mode.
4.7.2. Adding the rescue image to the OpenStack Image service
When you have completed modifying the UUID of your image, use the following commands to add the generated rescue image to the OpenStack Image service:
- Add the rescue image to the Image service:

  # openstack image create --name <image_name> --disk-format qcow2 \
    --container-format bare --public --file <image_path>

  Replace <image_name> with the name of the image and <image_path> with the location of the image.
- Use the image list command to obtain the <image_id> required to launch an instance in rescue mode:

  # openstack image list
You can also upload an image by using the OpenStack Dashboard, see Section 1.2.2, “Uploading an image”.
4.7.3. Launching an instance in rescue mode
- Because you need to rescue an instance with a specific image, rather than the default one, use the --image parameter:

  # openstack server rescue --image <image> <instance>

  - Replace <image> with the name or ID of the image that you want to use.
  - Replace <instance> with the name or ID of the instance that you want to rescue.

  Note: For more information on rescuing an instance, see https://access.redhat.com/documentation/en-us/red_hat_openstack_platform/16.1/html/instances_and_images_guide/assembly-managing-an-instance_instances#rescuing-an-instance_instances

  By default, the instance has 60 seconds to shut down. You can override the timeout value on a per-image basis by using the image metadata setting os_shutdown_timeout to specify the time that different types of operating systems require to shut down cleanly.
- Reboot the virtual machine.
- Confirm that the status of the virtual machine is RESCUE on the controller node by using the nova list command or the dashboard.
- Log in to the new virtual machine dashboard by using the password for rescue mode.
You can now make the necessary changes to your instance to fix any issues.
4.7.4. Unrescuing an instance
You can unrescue the fixed instance to restart it from the boot disk.

Execute the following command on the controller node:

# nova unrescue <virtual_machine_id>

Replace <virtual_machine_id> with the ID of the virtual machine that you want to unrescue.

The status of your instance returns to ACTIVE after the unrescue operation has completed successfully.
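The unified client provides an equivalent command; a sketch:

$ openstack server unrescue <instance>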
4.8. Creating a customized instance
Cloud users can specify additional data to use when they launch an instance, such as a shell script that the instance runs on boot. The cloud user can use the following methods to pass data to instances:
- User data: Use to include instructions in the instance launch command for cloud-init to execute.
- Instance metadata: A list of key-value pairs that you can specify when you create or update an instance.
You can access the additional data passed to the instance by using a config drive or the metadata service.
- Config drive: You can attach a config drive to an instance when it boots. The config drive is presented to the instance as a read-only drive. The instance can mount this drive and read files from it. You can use the config drive as a source for cloud-init information. Config drives are useful when combined with cloud-init for server bootstrapping, and when you want to pass large files to your instances. For example, you can configure cloud-init to automatically mount the config drive and run the setup scripts during the initial instance boot. Config drives are created with the volume label of config-2, and attached to the instance when it boots. The contents of any additional files passed to the config drive are added to the user_data file in the openstack/{version}/ directory of the config drive. cloud-init retrieves the user data from this file.
- Metadata service: Uses a REST API to retrieve data specific to an instance. Instances access this service at 169.254.169.254 or at fe80::a9fe:a9fe.
cloud-init can use both a config drive and the metadata service to consume the additional data for customizing an instance. The cloud-init package supports several data input formats. Shell scripts and the cloud-config format are the most common input formats:

- Shell scripts: The data declaration begins with #! or Content-Type: text/x-shellscript. Shell scripts are invoked last in the boot process.
- cloud-config format: The data declaration begins with #cloud-config or Content-Type: text/cloud-config. cloud-config files must be valid YAML to be parsed and executed by cloud-init.
cloud-init has a maximum user data size of 16384 bytes for data passed to an instance. You cannot change the size limit, therefore use gzip compression when you need to exceed the size limit.
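As a minimal sketch of the cloud-config format, the following user data installs and starts a web server; the package name assumes a RHEL-based guest:

#cloud-config
packages:
  - httpd
runcmd:
  - systemctl enable --now httpd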
4.8.1. Customizing an instance by using user data
You can use user data to include instructions in the instance launch command. cloud-init
executes these commands to customize the instance as the last step in the boot process.
Procedure
- Create a file with instructions for cloud-init. For example, create a bash script that installs and enables a web server on the instance:

  $ vim /home/scripts/install_httpd

  #!/bin/bash
  yum -y install httpd python-psycopg2
  systemctl enable httpd --now
- Launch an instance with the --user-data option to pass the bash script:

  $ openstack server create \
      --image rhel8 \
      --flavor default \
      --nic net-id=web-server-network \
      --security-group default \
      --key-name web-server-keypair \
      --user-data /home/scripts/install_httpd \
      --wait web-server-instance
- When the instance state is active, attach a floating IP address:

  $ openstack floating ip create web-server-network
  $ openstack server add floating ip web-server-instance 172.25.250.123
- Log in to the instance with SSH:

  $ ssh -i ~/.ssh/web-server-keypair cloud-user@172.25.250.123
- Check that the customization was successfully performed. For example, to check that the web server has been installed and enabled, enter the following command:

  $ curl http://localhost | grep Test
  <title>Test Page for the Apache HTTP Server on Red Hat Enterprise Linux</title>
  <h1>Red Hat Enterprise Linux <strong>Test Page</strong></h1>
- Review the /var/log/cloud-init.log file for relevant messages, such as whether or not cloud-init executed:

  $ sudo less /var/log/cloud-init.log
  ...output omitted...
  ...util.py[DEBUG]: Cloud-init v. 0.7.9 finished at Sat, 23 Jun 2018 02:26:02 +0000. Datasource DataSourceOpenStack [net,ver=2]. Up 21.25 seconds
4.8.2. Customizing an instance by using metadata
You can use instance metadata to specify the properties of an instance in the instance launch command.
Procedure
- Launch an instance with the --property <key=value> option. For example, to mark the instance as a webserver, set the following property:

  $ openstack server create \
      --image rhel8 \
      --flavor default \
      --property role=webservers \
      --wait web-server-instance
- Optional: Add an additional property to the instance after it is created, for example:

  $ openstack server set \
      --property region=emea \
      web-server-instance
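The instance can read back its metadata through the metadata service. A sketch, run from inside the instance:

$ curl http://169.254.169.254/openstack/latest/meta_data.json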
4.8.3. Customizing an instance by using a config drive
You can create a config drive for an instance that is attached during the instance boot process. You can pass content to the config drive that the config drive makes available to the instance.
Procedure
- Enable the config drive, and specify a file that contains content that you want to make available in the config drive. For example, the following command creates a new instance named config-drive-instance and attaches a config drive that contains the contents of the file my-user-data.txt:

  (overcloud)$ openstack server create --flavor m1.tiny \
    --config-drive true \
    --user-data ./my-user-data.txt \
    --image cirros config-drive-instance
  This command creates the config drive with the volume label of config-2, which is attached to the instance when it boots, and adds the contents of my-user-data.txt to the user_data file in the openstack/{version}/ directory of the config drive.
- Log in to the instance.
- Mount the config drive:
  - If the instance OS uses udev:

    # mkdir -p /mnt/config
    # mount /dev/disk/by-label/config-2 /mnt/config
  - If the instance OS does not use udev, you need to first identify the block device that corresponds to the config drive:

    # blkid -t LABEL="config-2" -odevice
    /dev/vdb
    # mkdir -p /mnt/config
    # mount /dev/vdb /mnt/config
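Once the config drive is mounted, the user data passed at launch can be read from it; latest is a convenience alias for the newest version directory:

# cat /mnt/config/openstack/latest/user_data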