Chapter 9. nova

The following chapter contains information about the configuration options in the nova service.

9.1. nova.conf

This section contains options for the /etc/nova/nova.conf file.

9.1.1. DEFAULT

The following table outlines the options available under the [DEFAULT] group in the /etc/nova/nova.conf file.

Configuration option = Default value / Type / Description

allow_resize_to_same_host = False

boolean value

Allow destination machine to match source for resize. Useful when testing in single-host environments. By default it is not allowed to resize to the same host. Setting this option to true will add the same host to the destination options. Also set this to true if you use the ServerGroupAffinityFilter and need to resize.
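
For example, a single-host test environment that needs resize to work could set the following in /etc/nova/nova.conf (an illustrative sketch, not a recommended production setting):

    [DEFAULT]
    allow_resize_to_same_host = True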

allow_same_net_traffic = True

boolean value

Determine whether to allow network traffic from the same network.

When set to true, hosts on the same subnet are not filtered and are allowed to pass all types of traffic between them. On a flat network, this allows unfiltered communication between all instances from all projects. With VLAN networking, this allows access between instances within the same project.

This option only applies when using the nova-network service. When using other networking services, such as Neutron, security groups or other approaches should be used.

Possible values:

  • True: Network traffic should be allowed to pass between all instances on the same network, regardless of their tenant and security policies
  • False: Network traffic should not be allowed to pass between instances unless it is unblocked in a security group

Related options:

  • use_neutron: This must be set to False to enable nova-network networking
  • firewall_driver: This must be set to nova.virt.libvirt.firewall.IptablesFirewallDriver to ensure the libvirt firewall driver is enabled.

Deprecated since: 16.0.0

Reason: nova-network is deprecated, as are any related configuration options.

auto_assign_floating_ip = False

boolean value

Automatically assign a floating IP to a VM.

When set to True, a floating IP is automatically allocated and associated with the VM upon creation.

Related options:

  • use_neutron: this option only works with nova-network.

Deprecated since: 15.0.0

Reason: nova-network is deprecated, as are any related configuration options.

backdoor_port = None

string value

Enable eventlet backdoor. Acceptable values are 0, <port>, and <start>:<end>, where 0 results in listening on a random tcp port number; <port> results in listening on the specified port number (and not enabling backdoor if that port is in use); and <start>:<end> results in listening on the smallest unused port number within the specified range of port numbers. The chosen port is displayed in the service’s log file.

backdoor_socket = None

string value

Enable eventlet backdoor, using the provided path as a unix socket that can receive connections. This option is mutually exclusive with backdoor_port in that only one should be provided. If both are provided then the existence of this option overrides the usage of that option. Inside the path {pid} will be replaced with the PID of the current process.

bandwidth_poll_interval = 600

integer value

Interval to poll network bandwidth usage info.

Not supported on all hypervisors. If a hypervisor doesn’t support bandwidth usage, it will not get the info in the usage events.

Possible values:

  • 0: Will run at the default periodic interval.
  • Any value < 0: Disables the option.
  • Any positive integer in seconds.

bindir = /usr/local/bin

string value

The directory where the Nova binaries are installed.

This option is only relevant if the networking capabilities from Nova are used (see services below). Nova’s networking capabilities are targeted to be fully replaced by Neutron in the future. It is very unlikely that you need to change this option from its default value.

Possible values:

  • The full path to a directory.

block_device_allocate_retries = 60

integer value

The number of times to check for a volume to be "available" before attaching it during server create.

When creating a server with block device mappings where source_type is one of blank, image or snapshot and the destination_type is volume, the nova-compute service will create a volume and then attach it to the server. Before the volume can be attached, it must be in status "available". This option controls how many times to check for the created volume to be "available" before it is attached.

If the operation times out, the volume will be deleted if the block device mapping delete_on_termination value is True.

It is recommended to configure the image cache in the block storage service to speed up this operation. See https://docs.openstack.org/cinder/latest/admin/blockstorage-image-volume-cache.html for details.

Possible values:

  • 60 (default)
  • If value is 0, then one attempt is made.
  • For any value > 0, total attempts are (value + 1)

Related options:

  • block_device_allocate_retries_interval - controls the interval between checks

block_device_allocate_retries_interval = 3

integer value

Interval (in seconds) between block device allocation retries on failures.

This option allows the user to specify the time interval between consecutive retries. The block_device_allocate_retries option specifies the maximum number of retries.

Possible values:

  • 0: Disables the option.
  • Any positive integer in seconds enables the option.

Related options:

  • block_device_allocate_retries - controls the number of retries
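
As a rough illustration, the defaults shown below give a created volume roughly three minutes to become "available" (up to 61 checks, 3 seconds apart) before the attach fails:

    [DEFAULT]
    # up to (60 + 1) status checks, 3 seconds apart
    block_device_allocate_retries = 60
    block_device_allocate_retries_interval = 3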

cert = self.pem

string value

Path to SSL certificate file.

Related options:

  • key
  • ssl_only
  • [console] ssl_ciphers
  • [console] ssl_minimum_version

cnt_vpn_clients = 0

integer value

This option represents the number of IP addresses to reserve at the top of the address range for VPN clients. It is ignored unless the network_manager configuration option is set to the default of nova.network.manager.VlanManager.

Possible values:

  • Any integer, 0 or greater.

Related options:

  • use_neutron
  • network_manager

Deprecated since: 15.0.0

Reason: nova-network is deprecated, as are any related configuration options.

compute_driver = None

string value

Defines which driver to use for controlling virtualization.

Possible values:

  • libvirt.LibvirtDriver
  • xenapi.XenAPIDriver
  • fake.FakeDriver
  • ironic.IronicDriver
  • vmwareapi.VMwareVCDriver
  • hyperv.HyperVDriver
  • powervm.PowerVMDriver
  • zvm.ZVMDriver

compute_monitors = []

list value

A comma-separated list of monitors that can be used for getting compute metrics. You can use the alias/name from the setuptools entry points for nova.compute.monitors.* namespaces. If no namespace is supplied, the "cpu." namespace is assumed for backwards-compatibility.

Note

Only one monitor per namespace (for example: cpu) can be loaded at a time.

Possible values:

  • An empty list will disable the feature (default).
  • An example value that would enable the CPU bandwidth monitor that uses the virt driver variant:

    compute_monitors = cpu.virt_driver

config_drive_format = iso9660

string value

Config drive format.

Config drive format that will contain metadata attached to the instance when it boots.

Related options:

  • This option is meaningful when one of the following alternatives occurs:

    1. the force_config_drive option is set to true
    2. the REST API call to create the instance contains an enable flag for the config drive option
    3. the image used to create the instance requires a config drive; this is defined by the img_config_drive property for that image.
  • A compute node running Hyper-V hypervisor can be configured to attach config drive as a CD drive. To attach the config drive as a CD drive, set the [hyperv] config_drive_cdrom option to true.

Deprecated since: 19.0.0

Reason: This option was originally added as a workaround for a bug in libvirt (#1246201) that was resolved in libvirt v1.2.17. As a result, this option is no longer necessary or useful.

conn_pool_min_size = 2

integer value

The minimum pool size for the connection expiration policy.

conn_pool_ttl = 1200

integer value

The time-to-live, in seconds, of idle connections in the pool.

console_host = <based on operating system>

string value

Console proxy host to be used to connect to instances on this host. It is the publicly visible name for the console host.

Possible values:

  • Current hostname (default) or any string representing hostname.

control_exchange = openstack

string value

The default exchange under which topics are scoped. May be overridden by an exchange name specified in the transport_url option.

cpu_allocation_ratio = None

floating point value

Virtual CPU to physical CPU allocation ratio.

This option is used to influence the hosts selected by the Placement API by configuring the allocation ratio for VCPU inventory. In addition, the AggregateCoreFilter (deprecated) will fall back to this configuration value if no per-aggregate setting is found.

Note

This option does not affect PCPU inventory, which cannot be overcommitted.

Note

If this option is set to something other than None or 0.0, the allocation ratio will be overwritten by the value of this option; otherwise, the allocation ratio will not change. Once set to a non-default value, it is not possible to "unset" the config to get back to the default behavior. If you want to reset back to the initial value, explicitly specify it to the value of initial_cpu_allocation_ratio.

Possible values:

  • Any valid positive integer or float value

Related options:

  • initial_cpu_allocation_ratio
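
For example, to overcommit virtual CPUs 4:1 on hosts that use this file (an illustrative value; note the caveat above about not being able to simply unset it later):

    [DEFAULT]
    cpu_allocation_ratio = 4.0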

create_unique_mac_address_attempts = 5

integer value

This option determines how many times nova-network will attempt to create a unique MAC address before giving up and raising a VirtualInterfaceMacAddressException error.

Possible values:

  • Any positive integer. The default is 5.

Related options:

  • use_neutron

Deprecated since: 15.0.0

Reason: nova-network is deprecated, as are any related configuration options.

daemon = False

boolean value

Run as a background process.

debug = False

boolean value

If set to true, the logging level will be set to DEBUG instead of the default INFO level.

default_access_ip_network_name = None

string value

Name of the network to be used to set access IPs for instances. If there are multiple IPs to choose from, an arbitrary one will be chosen.

Possible values:

  • None (default)
  • Any string representing network name.

default_availability_zone = nova

string value

Default availability zone for compute services.

This option determines the default availability zone for nova-compute services, which will be used if the service(s) do not belong to aggregates with availability zone metadata.

Possible values:

  • Any string representing an existing availability zone name.

default_ephemeral_format = None

string value

The default format that an ephemeral volume will be formatted with on creation.

Possible values:

  • ext2
  • ext3
  • ext4
  • xfs
  • ntfs (only for Windows guests)

default_floating_pool = nova

string value

Default pool for floating IPs.

This option specifies the default floating IP pool for allocating floating IPs.

While allocating a floating IP, users can optionally pass in the name of the pool they want to allocate from; otherwise it will be pulled from the default pool.

If this option is not set, then nova is used as default floating pool.

Possible values:

  • Any string representing a floating IP pool name

Deprecated since: 16.0.0

Reason: This option was used for two purposes: to set the floating IP pool name for nova-network and to do the same for neutron. nova-network is deprecated, as are any related configuration options. Users of neutron, meanwhile, should use the default_floating_pool option in the [neutron] group.

default_log_levels = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO']

list value

List of package logging levels in logger=LEVEL pairs. This option is ignored if log_config_append is set.

default_schedule_zone = None

string value

Default availability zone for instances.

This option determines the default availability zone for instances, which will be used when a user does not specify one when creating an instance. The instance(s) will be bound to this availability zone for their lifetime.

Possible values:

  • Any string representing an existing availability zone name.
  • None, which means that the instance can move from one availability zone to another during its lifetime if it is moved from one compute node to another.
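
A minimal sketch that pins new instances to a hypothetical zone az1 whenever the user omits an availability zone:

    [DEFAULT]
    default_schedule_zone = az1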

defer_iptables_apply = False

boolean value

Defer application of IPTables rules until after init phase.

When a compute service is restarted each instance running on the host has its iptables rules built and applied sequentially during the host init stage. The impact of this, especially on a host running many instances, can be observed as a period where some instances are not accessible as the existing iptables rules have been torn down and not yet re-applied.

This workaround defers the application of the iptables rules until all instances on the host have been initialized, at which point the rules for all instances are applied at once, preventing a blackout period.

Deprecated since: 19.0.0

Reason: nova-network is deprecated, as are any related configuration options.

dhcp_lease_time = 86400

integer value

The lifetime of a DHCP lease, in seconds. The default is 86400 (one day).

Possible values:

  • Any positive integer value.

Deprecated since: 16.0.0

Reason: nova-network is deprecated, as are any related configuration options.

dhcpbridge = $bindir/nova-dhcpbridge

string value

The location of the binary nova-dhcpbridge. By default it is the binary named nova-dhcpbridge that is installed with all the other nova binaries.

Possible values:

  • Any string representing the full path to the binary for dhcpbridge

Deprecated since: 16.0.0

Reason: nova-network is deprecated, as are any related configuration options.

dhcpbridge_flagfile = ['/etc/nova/nova-dhcpbridge.conf']

multi valued

This option is a list of full paths to one or more configuration files for dhcpbridge. In most cases the default path of /etc/nova/nova-dhcpbridge.conf should be sufficient, but if you have special needs for configuring dhcpbridge, you can change or add to this list.

Possible values:

  • A list of strings, where each string is the full path to a dhcpbridge configuration file.

Deprecated since: 16.0.0

Reason: nova-network is deprecated, as are any related configuration options.

disk_allocation_ratio = None

floating point value

Virtual disk to physical disk allocation ratio.

This option is used to influence the hosts selected by the Placement API by configuring the allocation ratio for DISK_GB inventory. In addition, the AggregateDiskFilter (deprecated) will fall back to this configuration value if no per-aggregate setting is found.

When configured, a ratio greater than 1.0 will result in over-subscription of the available physical disk, which can be useful for more efficiently packing instances created with images that do not use the entire virtual disk, such as sparse or compressed images. It can be set to a value between 0.0 and 1.0 in order to preserve a percentage of the disk for uses other than instances.

Note

If the value is set to >1, we recommend keeping track of the free disk space, as a value approaching 0 may result in the incorrect functioning of instances that are using it.

Note

If this option is set to something other than None or 0.0, the allocation ratio will be overwritten by the value of this option; otherwise, the allocation ratio will not change. Once set to a non-default value, it is not possible to "unset" the config to get back to the default behavior. If you want to reset back to the initial value, explicitly specify it to the value of initial_disk_allocation_ratio.

Possible values:

  • Any valid positive integer or float value

Related options:

  • initial_disk_allocation_ratio

dmz_cidr = []

list value

This option is a list of zero or more IP address ranges in your network’s DMZ that should be accepted.

Possible values:

  • A list of strings, each of which should be a valid CIDR.

Deprecated since: 16.0.0

Reason: nova-network is deprecated, as are any related configuration options.

dns_server = []

multi valued

Despite the singular form of the name of this option, it is actually a list of zero or more server addresses that dnsmasq will use for DNS nameservers. If this is not empty, dnsmasq will not read /etc/resolv.conf, but will only use the servers specified in this option. If the option use_network_dns_servers is True, the dns1 and dns2 servers from the network will be appended to this list, and will be used as DNS servers, too.

Possible values:

  • A list of strings, where each string is either an IP address or a FQDN.

Related options:

  • use_network_dns_servers

Deprecated since: 16.0.0

Reason: nova-network is deprecated, as are any related configuration options.

dns_update_periodic_interval = -1

integer value

This option determines the time, in seconds, to wait between refreshing DNS entries for the network.

Possible values:

  • A positive integer
  • -1 to disable updates

Related options:

  • use_neutron

Deprecated since: 15.0.0

Reason: nova-network is deprecated, as are any related configuration options.

dnsmasq_config_file =

string value

The path to the custom dnsmasq configuration file, if any.

Possible values:

  • The full path to the configuration file, or an empty string if there is no custom dnsmasq configuration file.

Deprecated since: 16.0.0

Reason: nova-network is deprecated, as are any related configuration options.

ebtables_exec_attempts = 3

integer value

This option determines the number of times to retry ebtables commands before giving up. The minimum number of retries is 1.

Possible values:

  • Any positive integer

Related options:

  • ebtables_retry_interval

Deprecated since: 16.0.0

Reason: nova-network is deprecated, as are any related configuration options.

ebtables_retry_interval = 1.0

floating point value

This option determines the time, in seconds, that the system will sleep in between ebtables retries. Note that each successive retry waits a multiple of this value, so for example, if this is set to the default of 1.0 seconds, and ebtables_exec_attempts is 4, after the first failure, the system will sleep for 1 * 1.0 seconds, after the second failure it will sleep 2 * 1.0 seconds, and after the third failure it will sleep 3 * 1.0 seconds.

Possible values:

  • Any non-negative float or integer. Setting this to zero will result in no waiting between attempts.

Related options:

  • ebtables_exec_attempts

Deprecated since: 16.0.0

Reason: nova-network is deprecated, as are any related configuration options.

enable_network_quota = False

boolean value

This option is used to enable or disable quota checking for tenant networks.

Related options:

  • quota_networks

Deprecated since: 14.0.0

Reason: CRUD operations on tenant networks are only available when using nova-network and nova-network is itself deprecated.

enable_new_services = True

boolean value

Enable new nova-compute services on this host automatically.

When a new nova-compute service starts up, it gets registered in the database as an enabled service. Sometimes it can be useful to register new compute services in a disabled state and then enable them at a later point in time. This option only sets this behavior for nova-compute services; it does not auto-disable other services like nova-conductor, nova-scheduler, or nova-osapi_compute.

Possible values:

  • True: Each new compute service is enabled as soon as it registers itself.
  • False: Compute services must be enabled via an os-services REST API call or with the CLI command nova service-enable <hostname> <binary>; otherwise they are not ready to use.

enabled_apis = ['osapi_compute', 'metadata']

list value

List of APIs to be enabled by default.

enabled_ssl_apis = []

list value

List of APIs with enabled SSL.

Nova provides SSL support for the API servers. The enabled_ssl_apis option allows configuring the SSL support.
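
A minimal sketch that serves the compute API over SSL; the certificate and key paths are illustrative assumptions, not defaults:

    [DEFAULT]
    enabled_ssl_apis = osapi_compute
    cert = /etc/nova/ssl/nova-api.pem
    key = /etc/nova/ssl/nova-api.key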

executor_thread_pool_size = 64

integer value

Size of executor thread pool when executor is threading or eventlet.

fake_network = False

boolean value

This option is used mainly in testing to avoid calls to the underlying network utilities.

Deprecated since: 16.0.0

Reason: nova-network is deprecated, as are any related configuration options.

fatal_deprecations = False

boolean value

Enables or disables fatal status of deprecations.

firewall_driver = nova.virt.firewall.NoopFirewallDriver

string value

Firewall driver to use with nova-network service.

This option only applies when using the nova-network service. When using other networking services, such as Neutron, this should be set to nova.virt.firewall.NoopFirewallDriver.

Possible values:

  • nova.virt.firewall.IptablesFirewallDriver
  • nova.virt.firewall.NoopFirewallDriver
  • nova.virt.libvirt.firewall.IptablesFirewallDriver
  • […]

Related options:

  • use_neutron: This must be set to False to enable nova-network networking

Deprecated since: 16.0.0

Reason: nova-network is deprecated, as are any related configuration options.

fixed_ip_disassociate_timeout = 600

integer value

This is the number of seconds to wait before disassociating a deallocated fixed IP address. This is only used with the nova-network service, and has no effect when using neutron for networking.

Possible values:

  • Any integer, zero or greater.

Related options:

  • use_neutron

Deprecated since: 15.0.0

Reason: nova-network is deprecated, as are any related configuration options.

fixed_range_v6 = fd00::/48

string value

This option determines the fixed IPv6 address block when creating a network.

Please note that this option is only used when using nova-network instead of Neutron in your deployment.

Possible values:

  • Any valid IPv6 CIDR

Related options:

  • use_neutron

Deprecated since: 15.0.0

Reason: nova-network is deprecated, as are any related configuration options.

flat_injected = False

boolean value

This option determines whether the network setup information is injected into the VM before it is booted. While it was originally designed to be used only by nova-network, it is also used by the vmware and xenapi virt drivers to control whether network information is injected into a VM. The libvirt virt driver also uses it, when config drive is used to configure the network, to control whether network information is injected into a VM.

flat_interface = None

string value

This option is the name of the virtual interface of the VM on which the bridge will be built. While it was originally designed to be used only by nova-network, it is also used by libvirt for the bridge interface name.

Possible values:

  • Any valid virtual interface name, such as eth0

Deprecated since: 15.0.0

Reason: nova-network is deprecated, as are any related configuration options.

flat_network_bridge = None

string value

This option determines the bridge used for simple network interfaces when no bridge is specified in the VM creation request.

Please note that this option is only used when using nova-network instead of Neutron in your deployment.

Possible values:

  • Any string representing a valid network bridge, such as br100

Related options:

  • use_neutron

Deprecated since: 15.0.0

Reason: nova-network is deprecated, as are any related configuration options.

flat_network_dns = 8.8.4.4

string value

This is the address of the DNS server for a simple network. If this option is not specified, the default of 8.8.4.4 is used.

Please note that this option is only used when using nova-network instead of Neutron in your deployment.

Possible values:

  • Any valid IP address.

Related options:

  • use_neutron

Deprecated since: 15.0.0

Reason: nova-network is deprecated, as are any related configuration options.

floating_ip_dns_manager = nova.network.noop_dns_driver.NoopDNSDriver

string value

Full class name for the DNS Manager for floating IPs.

This option specifies the class of the driver that provides functionality to manage DNS entries associated with floating IPs.

When a user adds a DNS entry for a specified domain to a floating IP, nova will add a DNS entry using the specified floating DNS driver. When a floating IP is deallocated, its DNS entry will automatically be deleted.

Possible values:

  • Full Python path to the class to be used

Related options:

  • use_neutron: this option only works with nova-network.

Deprecated since: 15.0.0

Reason: nova-network is deprecated, as are any related configuration options.

force_config_drive = False

boolean value

Force injection to take place on a config drive

When this option is set to true, config drive functionality will be force-enabled by default; otherwise users can still enable config drives via the REST API or image metadata properties. Already-launched instances are not affected by this option.

Possible values:

  • True: Force the use of a config drive regardless of the user’s input in the REST API call.
  • False: Do not force the use of a config drive. Config drives can still be enabled via the REST API or image metadata properties.

Related options:

  • Use the mkisofs_cmd flag to set the path where you install the genisoimage program. If genisoimage is in the same path as the nova-compute service, you do not need to set this flag.
  • To use a config drive with Hyper-V, you must set the mkisofs_cmd value to the full path to an mkisofs.exe installation. Additionally, you must set the qemu_img_cmd value in the hyperv configuration section to the full path to a qemu-img command installation.
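
For instance, a deployment that always wants instances to receive a config drive, built with genisoimage from the system path, might set (illustrative sketch):

    [DEFAULT]
    force_config_drive = True
    # genisoimage is the default; shown only for illustration
    mkisofs_cmd = genisoimage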

force_dhcp_release = True

boolean value

When this option is True, a call is made to release the DHCP for the instance when that instance is terminated.

Related options:

  • use_neutron

Deprecated since: 15.0.0

Reason: nova-network is deprecated, as are any related configuration options.

force_raw_images = True

boolean value

Force conversion of backing images to raw format.

Possible values:

  • True: Backing image files will be converted to raw image format
  • False: Backing image files will not be converted

Related options:

  • compute_driver: Only the libvirt driver uses this option.
  • [libvirt]/images_type: If images_type is rbd, setting this option to False is not allowed. See the bug https://bugs.launchpad.net/nova/+bug/1816686 for more details.

force_snat_range = []

multi valued

This is a list of zero or more IP ranges that traffic from the routing_source_ip will be SNATted to. If the list is empty, then no SNAT rules are created.

Possible values:

  • A list of strings, each of which should be a valid CIDR.

Related options:

  • routing_source_ip

Deprecated since: 16.0.0

Reason: nova-network is deprecated, as are any related configuration options.

forward_bridge_interface = ['all']

multi valued

One or more interfaces that bridges can forward traffic to. If any of the items in this list is the special keyword all, then all traffic will be forwarded.

Possible values:

  • A list of zero or more interface names, or the word all.

Deprecated since: 16.0.0

Reason: nova-network is deprecated, as are any related configuration options.

gateway = None

string value

This is the default IPv4 gateway. It is used only in the testing suite.

Please note that this option is only used when using nova-network instead of Neutron in your deployment.

Possible values:

  • Any valid IP address.

Related options:

  • use_neutron
  • gateway_v6

Deprecated since: 15.0.0

Reason: nova-network is deprecated, as are any related configuration options.

gateway_v6 = None

string value

This is the default IPv6 gateway. It is used only in the testing suite.

Please note that this option is only used when using nova-network instead of Neutron in your deployment.

Possible values:

  • Any valid IP address.

Related options:

  • use_neutron
  • gateway

Deprecated since: 15.0.0

Reason: nova-network is deprecated, as are any related configuration options.

graceful_shutdown_timeout = 60

integer value

Specify a timeout after which a gracefully shut down server will exit. A value of zero means endless wait.

heal_instance_info_cache_interval = 60

integer value

Interval between instance network information cache updates.

Number of seconds after which each compute node runs the task of querying Neutron for all of its instances’ networking information, then updates the Nova db with that information. Nova will never update its cache if this option is set to 0. If the cache is not updated, the metadata service and nova-api endpoints will proxy incorrect network data about the instance. So, it is not recommended to set this option to 0.

Possible values:

  • Any positive integer in seconds.
  • Any value <= 0 will disable the sync. This is not recommended.

host = <based on operating system>

string value

Hostname, FQDN or IP address of this host.

Used as:

  • the oslo.messaging queue name for nova-compute worker
  • the binding_host sent to neutron. This means that if you use a neutron agent, it should have the same value for host.
  • cinder host attachment information

Must be valid within AMQP key.

Possible values:

  • String with hostname, FQDN or IP address. Default is hostname of this host.

image_cache_manager_interval = 2400

integer value

Number of seconds to wait between runs of the image cache manager.

Possible values:

  • 0: run at the default rate.
  • -1: disable.
  • Any other value.

image_cache_subdirectory_name = _base

string value

Location of cached images.

This is NOT the full path - just a folder name relative to $instances_path. For per-compute-host cached images, set to _base_$my_ip.

initial_cpu_allocation_ratio = 16.0

floating point value

Initial virtual CPU to physical CPU allocation ratio.

This is only used when initially creating the compute_nodes table record for a given nova-compute service.

See https://docs.openstack.org/nova/latest/admin/configuration/schedulers.html for more details and usage scenarios.

Related options:

  • cpu_allocation_ratio

initial_disk_allocation_ratio = 1.0

floating point value

Initial virtual disk to physical disk allocation ratio.

This is only used when initially creating the compute_nodes table record for a given nova-compute service.

See https://docs.openstack.org/nova/latest/admin/configuration/schedulers.html for more details and usage scenarios.

Related options:

  • disk_allocation_ratio

initial_ram_allocation_ratio = 1.5

floating point value

Initial virtual RAM to physical RAM allocation ratio.

This is only used when initially creating the compute_nodes table record for a given nova-compute service.

See https://docs.openstack.org/nova/latest/admin/configuration/schedulers.html for more details and usage scenarios.

Related options:

  • ram_allocation_ratio
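
The three initial_* ratios only seed the compute_nodes record the first time a nova-compute service starts; they are shown here at their documented defaults purely for illustration:

    [DEFAULT]
    initial_cpu_allocation_ratio = 16.0
    initial_ram_allocation_ratio = 1.5
    initial_disk_allocation_ratio = 1.0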

injected_network_template = $pybasedir/nova/virt/interfaces.template

string value

Path to /etc/network/interfaces template.

The path to a template file for the /etc/network/interfaces-style file, which will be populated by nova and subsequently used by cloudinit. This provides a method to configure network connectivity in environments without a DHCP server.

The template will be rendered using Jinja2 template engine, and receive a top-level key called interfaces. This key will contain a list of dictionaries, one for each interface.

Refer to the cloudinit documentation for more information:

https://cloudinit.readthedocs.io/en/latest/topics/datasources.html

Possible values:

  • A path to a Jinja2-formatted template for a Debian /etc/network/interfaces file. This applies even if using a non-Debian-derived guest.

Related options:

  • flat_injected: This must be set to True to ensure nova embeds network configuration information in the metadata provided through the config drive.

instance_build_timeout = 0

integer value

Maximum time in seconds that an instance can take to build.

If this timer expires, instance status will be changed to ERROR. Enabling this option will make sure an instance will not be stuck in BUILD state for a longer period.

Possible values:

  • 0: Disables the option (default)
  • Any positive integer in seconds: Enables the option.

instance_delete_interval = 300

integer value

Interval for retrying failed instance file deletes.

This option depends on maximum_instance_delete_attempts. This option specifies how often to retry deletes whereas maximum_instance_delete_attempts specifies the maximum number of retry attempts that can be made.

Possible values:

  • 0: Will run at the default periodic interval.
  • Any value < 0: Disables the option.
  • Any positive integer in seconds.

Related options:

  • maximum_instance_delete_attempts from instance_cleaning_opts group.

instance_dns_domain =

string value

If specified, Nova checks if the availability_zone of every instance matches what the database says the availability_zone should be for the specified dns_domain.

Related options:

  • use_neutron: this option only works with nova-network.

Deprecated since: 15.0.0

Reason: nova-network is deprecated, as are any related configuration options.

instance_dns_manager = nova.network.noop_dns_driver.NoopDNSDriver

string value

Full class name for the DNS Manager for instance IPs.

This option specifies the class of the driver that provides functionality to manage DNS entries for instances.

On instance creation, nova will add DNS entries for the instance name and id, using the specified instance DNS driver and domain. On instance deletion, nova will remove the DNS entries.

Possible values:

  • Full Python path to the class to be used

Related options:

  • use_neutron: this option only works with nova-network.

Deprecated since: 15.0.0

Reason: nova-network is deprecated, as are any related configuration options.

instance_format = [instance: %(uuid)s]

string value

The format for an instance that is passed with the log message.

instance_name_template = instance-%08x

string value

Template string to be used to generate instance names.

This template controls the creation of the database name of an instance. This is not the display name you enter when creating an instance (via Horizon or CLI). For a new deployment it is advisable to change the default value (which uses the database autoincrement) to another value which makes use of the attributes of an instance, like instance-%(uuid)s. If you already have instances in your deployment when you change this, your deployment will break.

Possible values:

  • A string which either uses the instance database ID (like the default)
  • A string with a list of named database columns, for example %(id)d or %(uuid)s or %(hostname)s.
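
For example, a new deployment with no existing instances could switch the template to the instance UUID instead of the database autoincrement:

    [DEFAULT]
    instance_name_template = instance-%(uuid)s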

instance_usage_audit = False

boolean value

This option enables periodic compute.instance.exists notifications. Each compute node must be configured to generate system usage data. These notifications are consumed by OpenStack Telemetry service.

instance_usage_audit_period = month

string value

Time period to generate instance usages for. It is possible to define an optional offset for a given period by appending the @ character followed by a number defining the offset.

Possible values:

  • period, example: hour, day, month or year
  • period with offset, example: month@15 will result in monthly audits starting on the 15th day of the month.
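
An illustrative sketch enabling monthly audits that start on the 15th day of each month:

    [DEFAULT]
    instance_usage_audit = True
    instance_usage_audit_period = month@15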

instance_uuid_format = [instance: %(uuid)s]

string value

The format for an instance UUID that is passed with the log message.

instances_path = $state_path/instances

string value

Specifies where instances are stored on the hypervisor’s disk. It can point to locally attached storage or a directory on NFS.

Possible values:

  • $state_path/instances where state_path is a config option that specifies the top-level directory for maintaining nova’s state (default), or any string representing a directory path.

Related options:

  • [workarounds]/ensure_libvirt_rbd_instance_dir_cleanup

internal_service_availability_zone = internal

string value

Availability zone for internal services.

This option determines the availability zone for the various internal nova services, such as nova-scheduler, nova-conductor, etc.

Possible values:

  • Any string representing an existing availability zone name.

iptables_bottom_regex =

string value

This expression, if defined, will select any matching iptables rules and place them at the bottom when applying metadata changes to the rules.

Possible values:

  • Any string representing a valid regular expression, or an empty string

Related options:

  • iptables_top_regex

Deprecated since: 16.0.0

Reason: nova-network is deprecated, as are any related configuration options.

iptables_drop_action = DROP

string value

By default, packets that do not pass the firewall are DROPped. In many cases, though, an operator may find it more useful to change this from DROP to REJECT, so that the user issuing those packets may have a better idea as to what’s going on, or LOGDROP in order to record the blocked traffic before DROPping.

Possible values:

  • A string representing an iptables chain. The default is DROP.

Deprecated since: 16.0.0

Reason: nova-network is deprecated, as are any related configuration options.

iptables_top_regex =

string value

This expression, if defined, will select any matching iptables rules and place them at the top when applying metadata changes to the rules.

Possible values:

  • Any string representing a valid regular expression, or an empty string

Related options:

  • iptables_bottom_regex

Deprecated since: 16.0.0

Reason: nova-network is deprecated, as are any related configuration options.

ipv6_backend = rfc2462

string value

Abstracts out IPv6 address generation to pluggable backends.

nova-network can be put into dual-stack mode, so that it uses both IPv4 and IPv6 addresses. In dual-stack mode, by default, instances acquire IPv6 global unicast addresses with the help of stateless address auto-configuration mechanism.

Related options:

  • use_neutron: this option only works with nova-network.
  • use_ipv6: this option only works if ipv6 is enabled for nova-network.

Deprecated since: 16.0.0

Reason: nova-network is deprecated, as are any related configuration options.

key = None

string value

SSL key file (if separate from cert).

Related options:

  • cert

l3_lib = nova.network.l3.LinuxNetL3

string value

This option allows you to specify the L3 management library to be used.

Possible values:

  • Any dot-separated string that represents the import path to an L3 networking library.

Related options:

  • use_neutron

Deprecated since: 15.0.0

Reason: nova-network is deprecated, as are any related configuration options.

ldap_dns_base_dn = ou=hosts,dc=example,dc=org

string value

Base distinguished name for the LDAP search query

This option helps to decide where to look up the host in LDAP.

Deprecated since: 16.0.0

Reason: nova-network is deprecated, as are any related configuration options.

ldap_dns_password = password

string value

Bind user’s password for the LDAP server.

Deprecated since: 16.0.0

Reason: nova-network is deprecated, as are any related configuration options.

ldap_dns_servers = ['dns.example.org']

multi valued

DNS Servers for LDAP DNS driver

Possible values:

  • A valid URL representing a DNS server

Deprecated since: 16.0.0

Reason: nova-network is deprecated, as are any related configuration options.

ldap_dns_soa_expiry = 86400

integer value

Expiry interval (in seconds) for LDAP DNS driver Start of Authority

The time interval that a secondary/slave DNS server holds the information before it is no longer considered authoritative.

Deprecated since: 16.0.0

Reason: nova-network is deprecated, as are any related configuration options.

ldap_dns_soa_hostmaster = hostmaster@example.org

string value

Hostmaster for LDAP DNS driver Start of Authority

Possible values:

  • Any valid string representing LDAP DNS hostmaster.

Deprecated since: 16.0.0

Reason: nova-network is deprecated, as are any related configuration options.

ldap_dns_soa_minimum = 7200

integer value

Minimum interval (in seconds) for LDAP DNS driver Start of Authority

This is the minimum time-to-live that applies to all resource records in the zone file. This value tells other servers how long they should keep the data in cache.

Deprecated since: 16.0.0

Reason: nova-network is deprecated, as are any related configuration options.

ldap_dns_soa_refresh = 1800

integer value

Refresh interval (in seconds) for LDAP DNS driver Start of Authority

The time interval that a secondary/slave DNS server waits before requesting the primary DNS server’s current SOA record. If the records are different, the secondary DNS server will request a zone transfer from the primary.

Note

Lower values would cause more traffic.

Deprecated since: 16.0.0

Reason: nova-network is deprecated, as are any related configuration options.

ldap_dns_soa_retry = 3600

integer value

Retry interval (in seconds) for LDAP DNS driver Start of Authority

The time interval that a secondary/slave DNS server should wait if an attempt to transfer the zone failed during the previous refresh interval.

Deprecated since: 16.0.0

Reason: nova-network is deprecated, as are any related configuration options.

ldap_dns_url = ldap://ldap.example.com:389

uri value

URL for LDAP server which will store DNS entries

Possible values:

  • A valid LDAP URL representing the server

Deprecated since: 16.0.0

Reason: nova-network is deprecated, as are any related configuration options.

ldap_dns_user = uid=admin,ou=people,dc=example,dc=org

string value

Bind user for the LDAP server.

Deprecated since: 16.0.0

Reason: nova-network is deprecated, as are any related configuration options.

linuxnet_interface_driver = nova.network.linux_net.LinuxBridgeInterfaceDriver

string value

This is the class used as the ethernet device driver for linuxnet bridge operations. The default value should be all you need for most cases, but if you wish to use a customized class, set this option to the full dot-separated import path for that class.

Possible values:

  • Any string representing a dot-separated class path that Nova can import.

Deprecated since: 16.0.0

Reason: nova-network is deprecated, as are any related configuration options.

linuxnet_ovs_integration_bridge = br-int

string value

The name of the Open vSwitch bridge that is used with linuxnet when connecting with Open vSwitch.

Possible values:

  • Any string representing a valid bridge name.

Deprecated since: 16.0.0

Reason: nova-network is deprecated, as are any related configuration options.

live_migration_retry_count = 30

integer value

Maximum number of 1-second retries in live migration. It specifies the number of retries for iptables when it complains. This happens when a user continuously sends live-migration requests to the same host, leading to concurrent requests to iptables.

Possible values:

  • Any positive integer representing retry count.

log-config-append = None

string value

The name of a logging configuration file. This file is appended to any existing logging configuration files. For details about logging configuration files, see the Python logging module documentation. Note that when logging configuration files are used then all logging configuration is set in the configuration file and other logging configuration options are ignored (for example, log-date-format).

log-date-format = %Y-%m-%d %H:%M:%S

string value

Defines the format string for %%(asctime)s in log records. Default: %(default)s. This option is ignored if log_config_append is set.

log-dir = None

string value

(Optional) The base directory used for relative log_file paths. This option is ignored if log_config_append is set.

log-file = None

string value

(Optional) Name of log file to send logging output to. If no default is set, logging will go to stderr as defined by use_stderr. This option is ignored if log_config_append is set.

log_options = True

boolean value

Enables or disables logging values of all registered options when starting a service (at DEBUG level).

log_rotate_interval = 1

integer value

The amount of time before the log files are rotated. This option is ignored unless log_rotation_type is set to "interval".

log_rotate_interval_type = days

string value

Rotation interval type. The time of the last file change (or the time when the service was started) is used when scheduling the next rotation.

log_rotation_type = none

string value

Log rotation type.
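
As a hedged example, the following enables interval-based rotation, rotating the log once per day (the interval values shown happen to match the defaults above):

    [DEFAULT]
    log_rotation_type = interval
    log_rotate_interval_type = days
    log_rotate_interval = 1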

logging_context_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(request_id)s %(user_identity)s] %(instance)s%(message)s

string value

Format string to use for log messages with context. Used by oslo_log.formatters.ContextFormatter

logging_debug_format_suffix = %(funcName)s %(pathname)s:%(lineno)d

string value

Additional data to append to log message when logging level for the message is DEBUG. Used by oslo_log.formatters.ContextFormatter

logging_default_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s

string value

Format string to use for log messages when context is undefined. Used by oslo_log.formatters.ContextFormatter

logging_exception_prefix = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s

string value

Prefix each line of exception output with this format. Used by oslo_log.formatters.ContextFormatter

logging_user_identity_format = %(user)s %(tenant)s %(domain)s %(user_domain)s %(project_domain)s

string value

Defines the format string for %(user_identity)s that is used in logging_context_format_string. Used by oslo_log.formatters.ContextFormatter

long_rpc_timeout = 1800

integer value

This option allows setting an alternate timeout value for RPC calls that have the potential to take a long time. If set, RPC calls to other services will use this value for the timeout (in seconds) instead of the global rpc_response_timeout value.

Operations with RPC calls that utilize this value:

  • live migration
  • scheduling
  • enabling/disabling a compute service
  • volume attach

Related options:

  • rpc_response_timeout
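
For example, keeping the global RPC timeout short while giving long-running operations such as live migration more headroom (values illustrative):

    [DEFAULT]
    rpc_response_timeout = 60
    long_rpc_timeout = 1800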

max_concurrent_builds = 10

integer value

Limits the maximum number of instance builds to run concurrently by nova-compute. The compute service can attempt to build an infinite number of instances if asked to do so. This limit is enforced to avoid building an unlimited number of instances concurrently on a compute node. This value can be set per compute node.

Possible Values:

  • 0 : treated as unlimited.
  • Any positive integer representing maximum concurrent builds.

max_concurrent_live_migrations = 1

integer value

Maximum number of live migrations to run concurrently. This limit is enforced to avoid outbound live migrations overwhelming the host/network and causing failures. It is not recommended that you change this unless you are very sure that doing so is safe and stable in your environment.

Possible values:

  • 0 : treated as unlimited.
  • Any positive integer representing maximum number of live migrations to run concurrently.
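
A sketch of a conservatively throttled compute node, using the documented defaults for illustration:

    [DEFAULT]
    max_concurrent_builds = 10
    max_concurrent_live_migrations = 1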

max_local_block_devices = 3

integer value

Maximum number of devices that will result in a local image being created on the hypervisor node.

A negative number means unlimited. Setting max_local_block_devices to 0 means that any request that attempts to create a local disk will fail. This option is meant to limit the number of local disks (that is, the root local disk that results from imageRef being used when creating a server, and any other ephemeral and swap disks). 0 does not mean that images will be automatically converted to volumes and instances booted from volumes - it just means that all requests that attempt to create a local disk will fail.

Possible values:

  • 0: Creating a local disk is not allowed.
  • Negative number: Allows an unlimited number of local disks.
  • Positive number: Allows only that many local disks.

max_logfile_count = 30

integer value

Maximum number of rotated log files.

max_logfile_size_mb = 200

integer value

Log file maximum size in MB. This option is ignored if "log_rotation_type" is not set to "size".

maximum_instance_delete_attempts = 5

integer value

The number of times to attempt to reap an instance’s files.

This option specifies the maximum number of retry attempts that can be made.

Possible values:

  • Any positive integer defines how many attempts are made.

Related options:

  • [DEFAULT] instance_delete_interval can be used to disable this option.

metadata_host = $my_ip

string value

This option determines the IP address for the network metadata API server.

This is really the client side of the metadata host equation, which allows nova-network to find the metadata server when doing default multi-host networking.

Possible values:

  • Any valid IP address. The default is the address of the Nova API server.

Related options:

  • metadata_port

Deprecated since: 16.0.0

Reason: nova-network is deprecated, as are any related configuration options.

metadata_listen = 0.0.0.0

string value

IP address on which the metadata API will listen.

The metadata API service listens on this IP address for incoming requests.

metadata_listen_port = 8775

port value

Port on which the metadata API will listen.

The metadata API service listens on this port number for incoming requests.

metadata_port = 8775

port value

This option determines the port used for the metadata API server.

Related options:

  • metadata_host

Deprecated since: 16.0.0

Reason: nova-network is deprecated, as are any related configuration options.

metadata_workers = <based on operating system>

integer value

Number of workers for metadata service. If not specified, the number of available CPUs will be used.

The metadata service can be configured to run as multi-process (workers). This overcomes the problem of reduction in throughput when API request concurrency increases. The metadata service will run in the specified number of processes.

Possible Values:

  • Any positive integer
  • None (default value)
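
An illustrative metadata API configuration; the worker count of 4 is an assumption for a small host, not a default:

    [DEFAULT]
    metadata_listen = 0.0.0.0
    metadata_listen_port = 8775
    metadata_workers = 4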

migrate_max_retries = -1

integer value

Number of times to retry live-migration before failing.

Possible values:

  • If == -1, try until out of hosts (default)
  • If == 0, only try once, no retries
  • Integer greater than 0

mkisofs_cmd = genisoimage

string value

Name or path of the tool used for ISO image creation.

Use the mkisofs_cmd flag to set the path where you install the genisoimage program. If genisoimage is on the system path, you do not need to change the default value.

To use a config drive with Hyper-V, you must set the mkisofs_cmd value to the full path to an mkisofs.exe installation. Additionally, you must set the qemu_img_cmd value in the hyperv configuration section to the full path to a qemu-img command installation.

Possible values:

  • Name of the ISO image creator program, in case it is in the same directory as the nova-compute service
  • Path to ISO image creator program

Related options:

  • This option is meaningful when config drives are enabled.
  • To use config drive with Hyper-V, you must set the qemu_img_cmd value in the hyperv configuration section to the full path to a qemu-img command installation.

multi_host = False

boolean value

Default value for multi_host in networks.

nova-network service can operate in a multi-host or single-host mode. In multi-host mode each compute node runs a copy of nova-network and the instances on that compute node use the compute node as a gateway to the Internet. In single-host mode, by contrast, a central server runs the nova-network service, and all compute nodes forward traffic from the instances to the cloud controller, which then forwards traffic to the Internet.

If this option is set to true, some RPC network calls will be sent directly to the host.

Note that this option is only used when using nova-network instead of Neutron in your deployment.

Related options:

  • use_neutron

Deprecated since: 15.0.0

Reason: nova-network is deprecated, as are any related configuration options.

my_block_storage_ip = $my_ip

string value

The IP address which is used to connect to the block storage network.

Possible values:

  • String with valid IP address. Default is IP address of this host.

Related options:

  • my_ip - if my_block_storage_ip is not set, then my_ip value is used.

my_ip = <based on operating system>

string value

The IP address which the host is using to connect to the management network.

Possible values:

  • String with valid IP address. Default is IPv4 address of this host.

Related options:

  • metadata_host
  • my_block_storage_ip
  • routing_source_ip
  • vpn_ip
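
An illustrative sketch of a host with separate management and storage networks, using documentation (RFC 5737) addresses:

    [DEFAULT]
    my_ip = 192.0.2.10
    # falls back to $my_ip when unset
    my_block_storage_ip = 198.51.100.10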

network_allocate_retries = 0

integer value

Number of times to retry network allocation. It is required to attempt network allocation retries if the virtual interface plug fails.

Possible values:

  • Any positive integer representing retry count.

network_driver = nova.network.linux_net

string value

Driver to use for network creation.

Network driver initializes (creates bridges and so on) only when the first VM lands on a host node. All network managers configure the network using network drivers. The driver is not tied to any particular network manager.

The default Linux driver implements vlans, bridges, and iptables rules using linux utilities.

Note that this option is only used when using nova-network instead of Neutron in your deployment.

Related options:

  • use_neutron

Deprecated since: 15.0.0

Reason: nova-network is deprecated, as are any related configuration options.

network_manager = nova.network.manager.VlanManager

string value

Full class name for the Manager for network.

Deprecated since: 18.0.0

Reason: nova-network is deprecated, as are any related configuration options.

network_size = 256

integer value

This option determines the number of addresses in each private subnet.

Please note that this option is only used when using nova-network instead of Neutron in your deployment.

Possible values:

  • Any positive integer that is less than or equal to the available network size. Note that if you are creating multiple networks, they must all fit in the available IP address space. The default is 256.

Related options:

  • use_neutron
  • num_networks

Deprecated since: 15.0.0

Reason: nova-network is deprecated, as are any related configuration options.

networks_path = $state_path/networks

string value

The location where the network configuration files will be kept. The default is the networks directory off of the location where nova’s Python module is installed.

Possible values:

  • A string containing the full path to the desired configuration directory

Deprecated since: 16.0.0

Reason: nova-network is deprecated, as are any related configuration options.

non_inheritable_image_properties = ['cache_in_nova', 'bittorrent', 'img_signature_hash_method', 'img_signature', 'img_signature_key_type', 'img_signature_certificate_uuid']

list value

Image properties that should not be inherited from the instance when taking a snapshot.

This option gives an opportunity to select which image-properties should not be inherited by newly created snapshots.

Possible values:

  • A comma-separated list whose item is an image property. Usually only the image properties that are only needed by base images can be included here, since the snapshots that are created from the base images don’t need them.
  • Default list: cache_in_nova, bittorrent, img_signature_hash_method, img_signature, img_signature_key_type, img_signature_certificate_uuid

num_networks = 1

integer value

This option represents the number of networks to create if not explicitly specified when the network is created. The only time this is used is if a CIDR is specified, but an explicit network_size is not. In that case, the subnets are created by dividing the IP address space of the CIDR by num_networks. The resulting subnet sizes cannot be larger than the configuration option network_size; in that event, they are reduced to network_size, and a warning is logged.

Please note that this option is only used when using nova-network instead of Neutron in your deployment.

Possible values:

  • Any positive integer is technically valid, although there are practical limits based upon available IP address space and virtual interfaces.

Related options:

  • use_neutron
  • network_size

Deprecated since: 15.0.0

Reason: nova-network is deprecated, as are any related configuration options.

osapi_compute_listen = 0.0.0.0

string value

IP address on which the OpenStack API will listen.

The OpenStack API service listens on this IP address for incoming requests.

osapi_compute_listen_port = 8774

port value

Port on which the OpenStack API will listen.

The OpenStack API service listens on this port number for incoming requests.

osapi_compute_unique_server_name_scope =

string value

Sets the scope of the check for unique instance names.

The default doesn’t check for unique names. If a scope for the name check is set, a launch of a new instance or an update of an existing instance with a duplicate name will result in an 'InstanceExists' error. The uniqueness is case-insensitive. Setting this option can increase the usability for end users as they don’t have to distinguish among instances with the same name by their IDs.
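
As an illustration, enforcing name uniqueness within each project might look like the following; upstream nova also accepts global for deployment-wide uniqueness, with the empty default disabling the check (verify the accepted values against your release):

    [DEFAULT]
    osapi_compute_unique_server_name_scope = project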

osapi_compute_workers = None

integer value

Number of workers for OpenStack API service. The default will be the number of CPUs available.

OpenStack API services can be configured to run as multi-process (workers). This overcomes the problem of reduction in throughput when API request concurrency increases. OpenStack API service will run in the specified number of processes.

Possible Values:

  • Any positive integer
  • None (default value)

ovs_vsctl_timeout = 120

integer value

This option represents the period of time, in seconds, that the ovs_vsctl calls will wait for a response from the database before timing out. A setting of 0 means that the utility should wait forever for a response.

Possible values:

  • Any positive integer if a limited timeout is desired, or zero if the calls should wait forever for a response.

Deprecated since: 16.0.0

Reason: nova-network is deprecated, as are any related configuration options.

password_length = 12

integer value

Length of generated instance admin passwords.

periodic_enable = True

boolean value

Enable periodic tasks.

If set to true, this option allows services to periodically run tasks on the manager.

If you are running multiple schedulers or conductors, you may want to run periodic tasks on only one host; in that case, disable this option for all hosts but one.

periodic_fuzzy_delay = 60

integer value

Number of seconds to randomly delay when starting the periodic task scheduler to reduce stampeding.

When compute workers are restarted in unison across a cluster, they all end up running the periodic tasks at the same time causing problems for the external services. To mitigate this behavior, periodic_fuzzy_delay option allows you to introduce a random initial delay when starting the periodic task scheduler.

Possible Values:

  • Any positive integer (in seconds)
  • 0 : disable the random delay

pointer_model = usbtablet

string value

Generic property to specify the pointer type.

Input devices allow interaction with a graphical framebuffer, for example to provide a graphic tablet for absolute cursor movement.

If set, the hw_pointer_model image property takes precedence over this configuration option.

Related options:

  • usbtablet must be configured with VNC enabled, or with SPICE enabled and the SPICE agent disabled. When used with libvirt, the instance mode should be configured as HVM.

preallocate_images = none

string value

The image preallocation mode to use.

Image preallocation allows storage for instance images to be allocated up front when the instance is initially provisioned. This ensures immediate feedback is given if enough space isn’t available. In addition, it should significantly improve performance on writes to new blocks and may even improve I/O performance to prewritten blocks due to reduced fragmentation.
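
As a sketch, and assuming your release accepts the upstream values none and space, up-front allocation would be enabled with:

    [DEFAULT]
    preallocate_images = space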

public_interface = eth0

string value

This is the name of the network interface for public IP addresses. The default is eth0.

Possible values:

  • Any string representing a network interface name

Deprecated since: 16.0.0

Reason: nova-network is deprecated, as are any related configuration options.

publish_errors = False

boolean value

Enables or disables publication of error events.

pybasedir = /usr/lib/python3.6/site-packages

string value

The directory where the Nova python modules are installed.

This directory is used to store template files for networking and remote console access. It is also the default path for other config options which need to persist Nova internal data. It is very unlikely that you need to change this option from its default value.

Possible values:

  • The full path to a directory.

Related options:

  • state_path

quota_networks = 3

integer value

This option controls the number of private networks that can be created per project (or per tenant).

Related options:

  • enable_network_quota

Deprecated since: 14.0.0

Reason: CRUD operations on tenant networks are only available when using nova-network and nova-network is itself deprecated.

ram_allocation_ratio = None

floating point value

Virtual RAM to physical RAM allocation ratio.

This option is used to influence the hosts selected by the Placement API by configuring the allocation ratio for MEMORY_MB inventory. In addition, the AggregateRamFilter (deprecated) will fall back to this configuration value if no per-aggregate setting is found.

Note: If this option is set to something other than None or 0.0, the allocation ratio will be overwritten by the value of this option; otherwise, the allocation ratio will not change. Once set to a non-default value, it is not possible to "unset" the config to get back to the default behavior. If you want to reset back to the initial value, explicitly specify the value of initial_ram_allocation_ratio.

Possible values:

  • Any valid positive integer or float value

Related options:

  • initial_ram_allocation_ratio
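
A minimal illustration of overcommitting RAM 1.5:1 (the ratio is arbitrary, not a sizing recommendation):

    [DEFAULT]
    ram_allocation_ratio = 1.5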

rate_limit_burst = 0

integer value

Maximum number of logged messages per rate_limit_interval.

rate_limit_except_level = CRITICAL

string value

Log level name used by rate limiting: CRITICAL, ERROR, INFO, WARNING, DEBUG or empty string. Logs with level greater or equal to rate_limit_except_level are not filtered. An empty string means that all levels are filtered.

rate_limit_interval = 0

integer value

Interval, number of seconds, of log rate limiting.

reboot_timeout = 0

integer value

Time interval after which an instance is hard rebooted automatically.

When doing a soft reboot, it is possible that a guest kernel is completely hung in a way that causes the soft reboot task to not ever finish. Setting this option to a time period in seconds will automatically hard reboot an instance if it has been stuck in a rebooting state longer than N seconds.

Possible values:

  • 0: Disables the option (default).
  • Any positive integer in seconds: Enables the option.

reclaim_instance_interval = 0

integer value

Interval for reclaiming deleted instances.

A value greater than 0 will enable SOFT_DELETE of instances. This option decides whether the server to be deleted will be put into the SOFT_DELETED state. If this value is greater than 0, the deleted server will not be deleted immediately; instead, it will be put into a queue until it is too old (deleted time greater than the value of reclaim_instance_interval). The server can be recovered from the delete queue by using the restore action. If the deleted server remains longer than the value of reclaim_instance_interval, it will be deleted by a periodic task in the compute service automatically.

Note that this option is read from both the API and compute nodes, and must be set globally otherwise servers could be put into a soft deleted state in the API and never actually reclaimed (deleted) on the compute node.

Possible values:

  • Any positive integer (in seconds) greater than 0 will enable this option.
  • Any value <= 0 will disable the option.
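
For example, to keep deleted servers restorable for one hour before the periodic task reclaims them, a value such as the following would be set on both the API and compute nodes (the one-hour figure is illustrative):

    [DEFAULT]
    reclaim_instance_interval = 3600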

record = None

string value

Filename that will be used for storing websocket frames received and sent by a proxy service (like VNC, spice, serial) running on this host. If this is not set, no recording will be done.

remove_unused_base_images = True

boolean value

Should unused base images be removed?

remove_unused_original_minimum_age_seconds = 86400

integer value

Unused unresized base images younger than this will not be removed.

report_interval = 10

integer value

Number of seconds indicating how frequently the state of services on a given hypervisor is reported. Nova needs to know this to determine the overall health of the deployment.

Related Options:

  • service_down_time: report_interval should be less than service_down_time. If service_down_time is less than report_interval, services will routinely be considered down, because they report in too rarely.
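
As a sketch of a consistent pairing, an operator raising report_interval should raise service_down_time with it so that the constraint above still holds (the figures are illustrative):

    [DEFAULT]
    report_interval = 20
    service_down_time = 120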

rescue_timeout = 0

integer value

Interval to wait before un-rescuing an instance stuck in RESCUE.

Possible values:

  • 0: Disables the option (default)
  • Any positive integer in seconds: Enables the option.

reserved_host_cpus = 0

integer value

Number of host CPUs to reserve for host processes.

The host resources usage is reported back to the scheduler continuously from nova-compute running on the compute node. This value is used to determine the reserved value reported to placement.

This option cannot be set if the [compute] cpu_shared_set or [compute] cpu_dedicated_set config options have been defined. When these options are defined, any host CPUs not included in these values are considered reserved for the host.

Possible values:

  • Any positive integer representing number of physical CPUs to reserve for the host.

Related options:

  • [compute] cpu_shared_set
  • [compute] cpu_dedicated_set
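
For example, on a host where neither [compute] cpu_shared_set nor [compute] cpu_dedicated_set is defined, two CPUs could be held back for host processes (the figure is illustrative):

    [DEFAULT]
    reserved_host_cpus = 2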

reserved_host_disk_mb = 0

integer value

Amount of disk resources in MB to make them always available to host. The disk usage gets reported back to the scheduler from nova-compute running on the compute nodes. To prevent the disk resources from being considered as available, this option can be used to reserve disk space for that host.

Possible values:

  • Any positive integer representing amount of disk in MB to reserve for the host.

reserved_host_memory_mb = 512

integer value

Amount of memory in MB to reserve for the host so that it is always available to host processes. The host resources usage is reported back to the scheduler continuously from nova-compute running on the compute node. To prevent the host memory from being considered as available, this option is used to reserve memory for the host.

Possible values:

  • Any positive integer representing amount of memory in MB to reserve for the host.

reserved_huge_pages = None

dict value

Number of huge/large memory pages to reserve per NUMA host cell.

Possible values:

  • A list of valid key=value pairs which reflect the NUMA node ID, page size (default unit is KiB), and number of pages to be reserved. For example:

    reserved_huge_pages = node:0,size:2048,count:64
    reserved_huge_pages = node:1,size:1GB,count:1

    In this example we are reserving 64 pages of 2MiB on NUMA node 0 and 1 page of 1GiB on NUMA node 1.

resize_confirm_window = 0

integer value

Automatically confirm resizes after N seconds.

Resize functionality will save the existing server before resizing. After the resize completes, the user is asked to confirm the resize. The user has the opportunity to either confirm or revert all changes. Confirming the resize removes the original server and changes the server status from resized to active. Setting this option to a time period (in seconds) will automatically confirm the resize if the server has been in the resized state longer than that time.

Possible values:

  • 0: Disables the option (default)
  • Any positive integer in seconds: Enables the option.

resize_fs_using_block_device = False

boolean value

Enable resizing of filesystems via a block device.

If enabled, attempt to resize the filesystem by accessing the image over a block device. This is done by the host and may not be necessary if the image contains a recent version of cloud-init. Possible mechanisms require the nbd driver (for qcow and raw), or loop (for raw).

resume_guests_state_on_host_boot = False

boolean value

This option specifies whether to start guests that were running before the host rebooted. It ensures that all of the instances on a Nova compute node resume their state each time the compute node boots or restarts.

rootwrap_config = /etc/nova/rootwrap.conf

string value

Path to the rootwrap configuration file.

Goal of the root wrapper is to allow a service-specific unprivileged user to run a number of actions as the root user in the safest manner possible. The configuration file used here must match the one defined in the sudoers entry.

routing_source_ip = $my_ip

string value

The public IP address of the network host.

This is used when creating an SNAT rule.

Possible values:

  • Any valid IP address

Related options:

  • force_snat_range

Deprecated since: 16.0.0

Reason: nova-network is deprecated, as are any related configuration options.

rpc_conn_pool_size = 30

integer value

Size of RPC connection pool.

rpc_response_timeout = 60

integer value

Seconds to wait for a response from a call.

run_external_periodic_tasks = True

boolean value

Some periodic tasks can be run in a separate process. Should we run them here?

running_deleted_instance_action = reap

string value

The compute service periodically checks for instances that have been deleted in the database but remain running on the compute node. The above option enables action to be taken when such instances are identified.

Related options:

  • running_deleted_instance_poll_interval
  • running_deleted_instance_timeout
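
As an illustration, a cautious deployment might only log such instances rather than reap them; upstream also accepts noop and shutdown in addition to log and reap (confirm against your release):

    [DEFAULT]
    running_deleted_instance_action = log
    running_deleted_instance_poll_interval = 1800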

running_deleted_instance_poll_interval = 1800

integer value

Time interval in seconds to wait between runs of the cleanup action. If set to 0, the above check will be disabled. If "running_deleted_instance_action" is set to "log" or "reap", a value greater than 0 must be set.

Possible values:

  • Any positive integer in seconds enables the option.
  • 0: Disables the option.
  • 1800: Default value.

Related options:

  • running_deleted_instance_action

running_deleted_instance_timeout = 0

integer value

Time interval in seconds to wait for the instances that have been marked as deleted in database to be eligible for cleanup.

Possible values:

  • Any positive integer in seconds (default is 0).

Related options:

  • "running_deleted_instance_action"

scheduler_instance_sync_interval = 120

integer value

Interval between sending the scheduler a list of current instance UUIDs to verify that its view of instances is in sync with nova.

If the CONF option scheduler_tracks_instance_changes is False, the sync calls will not be made. So, changing this option will have no effect.

If the out of sync situations are not very common, this interval can be increased to lower the number of RPC messages being sent. Likewise, if sync issues turn out to be a problem, the interval can be lowered to check more frequently.

Possible values:

  • 0: Will run at the default periodic interval.
  • Any value < 0: Disables the option.
  • Any positive integer in seconds.

Related options:

  • This option has no impact if scheduler_tracks_instance_changes is set to False.

send_arp_for_ha = False

boolean value

When True, when a device starts up, and upon binding floating IP addresses, arp messages will be sent to ensure that the arp caches on the compute hosts are up-to-date.

Related options:

  • send_arp_for_ha_count

Deprecated since: 16.0.0

Reason: nova-network is deprecated, as are any related configuration options.

send_arp_for_ha_count = 3

integer value

When arp messages are configured to be sent, they will be sent with the count set to the value of this option. Of course, if this is set to zero, no arp messages will be sent.

Possible values:

  • Any integer greater than or equal to 0

Related options:

  • send_arp_for_ha

Deprecated since: 16.0.0

Reason: nova-network is deprecated, as are any related configuration options.

service_down_time = 60

integer value

Maximum time in seconds since last check-in for up service

Each compute node periodically updates its database status based on the specified report interval. If a compute node has not updated its status for more than service_down_time, the compute node is considered down.

Related Options:

  • report_interval (service_down_time should not be less than report_interval)
  • scheduler.periodic_task_interval

servicegroup_driver = db

string value

This option specifies the driver to be used for the servicegroup service.

ServiceGroup API in nova enables checking status of a compute node. When a compute worker running the nova-compute daemon starts, it calls the join API to join the compute group. Services like nova scheduler can query the ServiceGroup API to check if a node is alive. Internally, the ServiceGroup client driver automatically updates the compute worker status. There are multiple backend implementations for this service: Database ServiceGroup driver and Memcache ServiceGroup driver.

Related Options:

  • service_down_time (maximum time since last check-in for up service)

share_dhcp_address = False

boolean value

THIS VALUE SHOULD BE SET WHEN CREATING THE NETWORK.

If True in multi_host mode, all compute hosts share the same dhcp address. The same IP address used for DHCP will be added on each nova-network node which is only visible to the VMs on the same host.

The use of this configuration has been deprecated and may be removed in any release after Mitaka. It is recommended that instead of relying on this option, an explicit value should be passed to create_networks() as a keyword argument with the name share_address.

Deprecated since: 2014.2

Reason: None

shelved_offload_time = 0

integer value

Time before a shelved instance is eligible for removal from a host.

By default this option is set to 0 and the shelved instance is removed from the hypervisor immediately after the shelve operation. Otherwise, the instance is kept for shelved_offload_time (in seconds), so that the unshelve action is faster during that period; the periodic task then removes the instance from the hypervisor once shelved_offload_time has passed.

Possible values:

  • 0: Instance will be immediately offloaded after being shelved.
  • Any value < 0: An instance will never offload.
  • Any positive integer in seconds: The instance will exist for the specified number of seconds before being offloaded.

shelved_poll_interval = 3600

integer value

Interval for polling shelved instances to offload.

The periodic task runs every shelved_poll_interval seconds and checks whether there are any shelved instances. If it finds a shelved instance, it offloads it based on the shelved_offload_time config value. See the shelved_offload_time config option description for details.

Possible values:

  • Any value <= 0: Disables the option.
  • Any positive integer in seconds.

Related options:

  • shelved_offload_time
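
A sketch that keeps shelved instances on the hypervisor for 30 minutes before offloading, with the default hourly poll (both figures are illustrative):

    [DEFAULT]
    shelved_offload_time = 1800
    shelved_poll_interval = 3600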

shutdown_timeout = 60

integer value

Total time to wait in seconds for an instance to perform a clean shutdown.

It determines the overall period (in seconds) a VM is allowed to perform a clean shutdown. When performing stop, rescue, shelve, and rebuild operations, configuring this option gives the VM a chance to perform a controlled shutdown before the instance is powered off. The default timeout is 60 seconds. A value of 0 (zero) means the guest will be powered off immediately with no opportunity for guest OS clean-up.

The timeout value can be overridden on a per image basis by means of os_shutdown_timeout that is an image metadata setting allowing different types of operating systems to specify how much time they need to shut down cleanly.

Possible values:

  • A positive integer or 0 (default value is 60).
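
For example, a host-wide default can be combined with a longer per-image value for operating systems that shut down slowly; the image property is set outside nova.conf (the image name is hypothetical):

    [DEFAULT]
    shutdown_timeout = 60

    # per image, e.g. with the OpenStack client:
    # openstack image set --property os_shutdown_timeout=180 my-slow-os-image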

source_is_ipv6 = False

boolean value

Set to True if source host is addressed with IPv6.

ssl_only = False

boolean value

Disallow non-encrypted connections.

Related options:

  • cert
  • key

state_path = $pybasedir

string value

The top-level directory for maintaining Nova’s state.

This directory is used to store Nova’s internal state. It is used by a variety of other config options which derive from this. In some scenarios (for example migrations) it makes sense to use a storage location which is shared between multiple compute hosts (for example via NFS). Unless the option instances_path gets overwritten, this directory can grow very large.

Possible values:

  • The full path to a directory. Defaults to value provided in pybasedir.

sync_power_state_interval = 600

integer value

Interval to sync power states between the database and the hypervisor.

The interval at which Nova compares the actual virtual machine power state with the power state that Nova has in its database. If a user powers down their VM, Nova updates the API to report that the VM has been powered down. Should something turn on the VM unexpectedly, Nova will turn the VM back off to keep the system in the expected state.

Possible values:

  • 0: Will run at the default periodic interval.
  • Any value < 0: Disables the option.
  • Any positive integer in seconds.

Related options:

  • If handle_virt_lifecycle_events in the workarounds group is false and this option is negative, then instances that get out of sync between the hypervisor and the Nova database will have to be synchronized manually.

sync_power_state_pool_size = 1000

integer value

Number of greenthreads available for use to sync power states.

This option can be used to reduce the number of concurrent requests made to the hypervisor or system with real instance power states for performance reasons, for example, with Ironic.

Possible values:

  • Any positive integer representing greenthreads count.

syslog-log-facility = LOG_USER

string value

Syslog facility to receive log lines. This option is ignored if log_config_append is set.

teardown_unused_network_gateway = False

boolean value

Determines whether unused gateway devices, both VLAN and bridge, are deleted if the network is in nova-network VLAN mode and is multi-hosted.

Related options:

  • use_neutron
  • vpn_ip
  • fake_network

Deprecated since: 15.0.0

Reason: nova-network is deprecated, as are any related configuration options.

tempdir = None

string value

Explicitly specify the temporary working directory.

timeout_nbd = 10

integer value

Amount of time, in seconds, to wait for NBD device start up.

transport_url = rabbit://

string value

The network address and optional user credentials for connecting to the messaging backend, in URL format. The expected format is:

driver://[user:pass@]host:port[,[userN:passN@]hostN:portN]/virtual_host?query

Example: rabbit://rabbitmq:password@127.0.0.1:5672//

For full details on the fields in the URL see the documentation of oslo_messaging.TransportURL at https://docs.openstack.org/oslo.messaging/latest/reference/transport.html
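
Following that format, a two-node RabbitMQ cluster with illustrative credentials and hostnames would be expressed as:

    [DEFAULT]
    transport_url = rabbit://nova:secret@rabbit1:5672,nova:secret@rabbit2:5672/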

update_dns_entries = False

boolean value

When this option is True, whenever a DNS entry must be updated, a fanout cast message is sent to all network hosts to update their DNS entries in multi-host mode.

Related options:

  • use_neutron

Deprecated since: 15.0.0

Reason: nova-network is deprecated, as are any related configuration options.

update_resources_interval = 0

integer value

Interval for updating compute resources.

This option specifies how often the update_available_resources periodic task should run. A number less than 0 means to disable the task completely. Leaving this at the default of 0 will cause this to run at the default periodic interval. Setting it to any positive value will cause it to run at approximately that number of seconds.

Possible values:

  • 0: Will run at the default periodic interval.
  • Any value < 0: Disables the option.
  • Any positive integer in seconds.

use-journal = False

boolean value

Enable journald for logging. If running in a systemd environment, you may wish to enable journal support. Doing so will use the journal native protocol, which includes structured metadata in addition to log messages. This option is ignored if log_config_append is set.

use-json = False

boolean value

Use JSON formatting for logging. This option is ignored if log_config_append is set.

use-syslog = False

boolean value

Use syslog for logging. Existing syslog format is DEPRECATED and will be changed later to honor RFC5424. This option is ignored if log_config_append is set.

use_cow_images = True

boolean value

Enable use of copy-on-write (cow) images.

QEMU/KVM allows the use of qcow2 as backing files. If this option is disabled, backing files will not be used.

use_eventlog = False

boolean value

Log output to Windows Event Log.

use_ipv6 = False

boolean value

Assign IPv6 and IPv4 addresses when creating instances.

Related options:

  • use_neutron: this only works with nova-network.

Deprecated since: 16.0.0

Reason: nova-network is deprecated, as are any related configuration options.

use_network_dns_servers = False

boolean value

When this option is set to True, the dns1 and dns2 servers for the network specified by the user on boot will be used for DNS, as well as any specified in the dns_server option.

Related options:

  • dns_server

Deprecated since: 16.0.0

Reason: nova-network is deprecated, as are any related configuration options.

use_neutron = True

boolean value

Enable neutron as the backend for networking.

Determine whether to use Neutron or Nova Network as the back end. Set to true to use neutron.

Deprecated since: 15.0.0

Reason: nova-network is deprecated, as are any related configuration options.

use_rootwrap_daemon = False

boolean value

Start and use a daemon that can run the commands that need to be run with root privileges. This option is usually enabled on nodes that run nova compute processes.

use_single_default_gateway = False

boolean value

When set to True, only the first nic of a VM will get its default gateway from the DHCP server.

Deprecated since: 16.0.0

Reason: nova-network is deprecated, as are any related configuration options.

use_stderr = False

boolean value

Log output to standard error. This option is ignored if log_config_append is set.

vcpu_pin_set = None

string value

Mask of host CPUs that can be used for VCPU resources.

The behavior of this option depends on the definition of the [compute] cpu_dedicated_set option and affects the behavior of the [compute] cpu_shared_set option.

  • If [compute] cpu_dedicated_set is defined, defining this option will result in an error.
  • If [compute] cpu_dedicated_set is not defined, this option will be used to determine inventory for VCPU resources and to limit the host CPUs that both pinned and unpinned instances can be scheduled to, overriding the [compute] cpu_shared_set option.

Possible values:

  • A comma-separated list of physical CPU numbers that virtual CPUs can be allocated from. Each element should be either a single CPU number, a range of CPU numbers, or a caret followed by a CPU number to be excluded from a previous range. For example:

    vcpu_pin_set = "4-12,^8,15"

Related options:

  • [compute] cpu_dedicated_set
  • [compute] cpu_shared_set

Deprecated since: 20.0.0

Reason: This option has been superseded by the ``[compute] cpu_dedicated_set`` and ``[compute] cpu_shared_set`` options, which allow things like the co-existence of pinned and unpinned instances on the same host (for the libvirt driver).
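
As a sketch of that migration, a legacy mask can be split between pinned and unpinned workloads using the replacement options (the CPU numbers are illustrative):

    [DEFAULT]
    # legacy, deprecated:
    # vcpu_pin_set = "4-12,^8,15"

    [compute]
    # replacement split of the same mask:
    cpu_dedicated_set = "4-7"
    cpu_shared_set = "9-12,15"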

vif_plugging_is_fatal = True

boolean value

Determine if instance should boot or fail on VIF plugging timeout.

Nova sends a port update to Neutron after an instance has been scheduled, providing Neutron with the necessary information to finish setup of the port. Once completed, Neutron notifies Nova that it has finished setting up the port, at which point Nova resumes the boot of the instance since network connectivity is now supposed to be present. A timeout will occur if the reply is not received after a given interval.

This option determines what Nova does when the VIF plugging timeout event happens. When enabled, the instance will error out. When disabled, the instance will continue to boot on the assumption that the port is ready.

Possible values:

  • True: Instances should fail after VIF plugging timeout
  • False: Instances should continue booting after VIF plugging timeout

vif_plugging_timeout = 300

integer value

Timeout for Neutron VIF plugging event message arrival.

Number of seconds to wait for Neutron vif plugging events to arrive before continuing or failing (see vif_plugging_is_fatal).

If you are hitting timeout failures at scale, consider running rootwrap in "daemon mode" in the neutron agent via the [agent]/root_helper_daemon neutron configuration option.

Related options:

  • vif_plugging_is_fatal - If vif_plugging_timeout is set to zero and vif_plugging_is_fatal is False, events should not be expected to arrive at all.
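
For example, a deployment that wants instances to fail fast when the Neutron event never arrives might state the defaults explicitly (the values shown are the documented defaults):

    [DEFAULT]
    vif_plugging_is_fatal = True
    vif_plugging_timeout = 300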

virt_mkfs = []

multi valued

Name of the mkfs commands for ephemeral device.

The format is <os_type>=<mkfs command>
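
Following the <os_type>=<mkfs command> format, an illustrative pair of entries might look like this; the %(fs_label)s and %(target)s substitutions mirror examples seen in upstream nova, so verify them against your release:

    [DEFAULT]
    virt_mkfs = linux=mkfs.ext4 -L %(fs_label)s -F %(target)s
    virt_mkfs = windows=mkfs.ntfs --force --fast --label %(fs_label)s %(target)s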

vlan_interface = None

string value

This option is the name of the virtual interface of the VM on which the VLAN bridge will be built. While it was originally designed to be used only by nova-network, it is also used by libvirt and xenapi for the bridge interface name.

Please note that this setting will be ignored in nova-network if the configuration option for network_manager is not set to the default of nova.network.manager.VlanManager.

Possible values:

  • Any valid virtual interface name, such as eth0

Deprecated since: 15.0.0

Reason: nova-network is deprecated, as are any related configuration options. While this option has an effect when using neutron, it incorrectly overrides the value provided by neutron and should therefore not be used.

vlan_start = 100

integer value

This is the VLAN number used for private networks. Note that when creating the networks, if the specified number has already been assigned, nova-network will increment this number until it finds an available VLAN.

Please note that this option is only used when using nova-network instead of Neutron in your deployment. It also will be ignored if the configuration option for network_manager is not set to the default of nova.network.manager.VlanManager.

Possible values:

  • Any integer between 1 and 4094. Values outside of that range will raise a ValueError exception.

Related options:

  • network_manager
  • use_neutron

Deprecated since: 15.0.0

Reason: nova-network is deprecated, as are any related configuration options.

volume_usage_poll_interval = 0

integer value

Interval for gathering volume usages.

This option updates the volume usage cache for every volume_usage_poll_interval number of seconds.

Possible values:

  • Any positive integer (in seconds) greater than 0 will enable this option.
  • Any value <= 0 will disable the option.

vpn_ip = $my_ip

string value

This option is no longer used since the /os-cloudpipe API was removed in the 16.0.0 Pike release. This is the public IP address for the cloudpipe VPN servers. It defaults to the IP address of the host.

Please note that this option is only used when using nova-network instead of Neutron in your deployment. It also will be ignored if the configuration option for network_manager is not set to the default of nova.network.manager.VlanManager.

Possible values:

  • Any valid IP address. The default is $my_ip, the IP address of this host.

Related options:

  • network_manager
  • use_neutron
  • vpn_start

Deprecated since: 15.0.0

Reason: nova-network is deprecated, as are any related configuration options.

vpn_start = 1000

port value

This is the port number to use as the first VPN port for private networks.

Please note that this option is only used when using nova-network instead of Neutron in your deployment. It also will be ignored if the configuration option for network_manager is not set to the default of nova.network.manager.VlanManager, or if you specify a value for the vpn_start parameter when creating a network.

Possible values:

  • Any integer representing a valid port number. The default is 1000.

Related options:

  • use_neutron
  • vpn_ip
  • network_manager

Deprecated since: 15.0.0

Reason: nova-network is deprecated, as are any related configuration options.

watch-log-file = False

boolean value

Uses a logging handler designed to watch the file system. When the log file is moved or removed, this handler will open a new log file with the specified path instantaneously. It makes sense only if the log_file option is specified and the Linux platform is used. This option is ignored if log_config_append is set.

web = /usr/share/spice-html5

string value

Path to directory with content which will be served by a web server.

9.1.2. api

The following table outlines the options available under the [api] group in the /etc/nova/nova.conf file.

Table 9.1. api
Configuration option = Default valueTypeDescription

auth_strategy = keystone

string value

Determine the strategy to use for authentication.

compute_link_prefix = None

string value

This string is prepended to the normal URL that is returned in links to the OpenStack Compute API. If it is empty (the default), the URLs are returned unchanged.

Possible values:

  • Any string, including an empty string (the default).

config_drive_skip_versions = 1.0 2007-01-19 2007-03-01 2007-08-29 2007-10-10 2007-12-15 2008-02-01 2008-09-01

string value

When gathering the existing metadata for a config drive, the EC2-style metadata is returned for all versions that don’t appear in this option. As of the Liberty release, the available versions are:

  • 1.0
  • 2007-01-19
  • 2007-03-01
  • 2007-08-29
  • 2007-10-10
  • 2007-12-15
  • 2008-02-01
  • 2008-09-01
  • 2009-04-04

The option is in the format of a single string, with each version separated by a space.

Possible values:

  • Any string that represents zero or more versions, separated by spaces.

dhcp_domain = novalocal

string value

Domain name used to configure FQDN for instances.

This option has two purposes:

  1. For neutron and nova-network users, it is used to configure a fully-qualified domain name for instance hostnames. If unset, only the hostname without a domain will be configured.
  2. (Deprecated) For nova-network users, this option configures the DNS domains used for the DHCP server. Refer to the --domain option of the dnsmasq utility for more information. Like nova-network itself, this purpose is deprecated.

Possible values:

  • Any string that is a valid domain name.

Related options:

  • use_neutron

enable_instance_password = True

boolean value

Enables returning of the instance password by the relevant server API calls such as create, rebuild, evacuate, or rescue. If the hypervisor does not support password injection, the password returned will not be correct; in that case, set this option to False.

glance_link_prefix = None

string value

This string is prepended to the normal URL that is returned in links to Glance resources. If it is empty (the default), the URLs are returned unchanged.

Possible values:

  • Any string, including an empty string (the default).

instance_list_cells_batch_fixed_size = 100

integer value

This controls the batch size of instances requested from each cell database if instance_list_cells_batch_strategy is set to fixed. This integral value will define the limit issued to each cell every time a batch of instances is requested, regardless of the number of cells in the system or any other factors. Per the general logic called out in the documentation for instance_list_cells_batch_strategy, the minimum value for this is 100 records per batch.

Related options:

  • instance_list_cells_batch_strategy
  • max_limit

instance_list_cells_batch_strategy = distributed

string value

This controls the method by which the API queries cell databases in smaller batches during large instance list operations. If batching is performed, a large instance list operation will request some fraction of the overall API limit from each cell database initially, and will re-request that same batch size as records are consumed (returned) from each cell as necessary. Larger batches mean less chattiness between the API and the database, but potentially more wasted effort processing the results from the database which will not be returned to the user. Any strategy will yield a batch size of at least 100 records, to avoid a user causing many tiny database queries in their request.

Related options:

  • instance_list_cells_batch_fixed_size
  • max_limit
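
A sketch of pinning the batching behavior explicitly, with fixed as the alternative to the default distributed strategy and the documented minimum batch size:

    [api]
    instance_list_cells_batch_strategy = fixed
    instance_list_cells_batch_fixed_size = 100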

instance_list_per_project_cells = False

boolean value

When enabled, this will cause the API to only query cell databases in which the tenant has mapped instances. This requires an additional (fast) query in the API database before each list, but also (potentially) limits the number of cell databases that must be queried to provide the result. If you have a small number of cells, or tenants are likely to have instances in all cells, then this should be False. If you have many cells, especially if you confine tenants to a small subset of those cells, this should be True.

list_records_by_skipping_down_cells = True

boolean value

When set to False, this will cause the API to return a 500 error if there is an infrastructure failure, such as non-responsive cells. If you want the API to skip the down cells and return the results from the up cells, set this option to True.

Note that from API microversion 2.69 there could be transient conditions in the deployment where certain records are not available and the results could be partial for certain requests containing those records. In those cases this option will be ignored. See "Handling Down Cells" section of the Compute API guide (https://docs.openstack.org/api-guide/compute/down_cells.html) for more information.

local_metadata_per_cell = False

boolean value

Indicates that the nova-metadata API service has been deployed per-cell, so that we can have better performance and data isolation in a multi-cell deployment. Users should consider the use of this configuration depending on how neutron is setup. If you have networks that span cells, you might need to run nova-metadata API service globally. If your networks are segmented along cell boundaries, then you can run nova-metadata API service per cell. When running nova-metadata API service per cell, you should also configure each Neutron metadata-agent to point to the corresponding nova-metadata API service.

max_limit = 1000

integer value

As a query can potentially return many thousands of items, you can limit the maximum number of items in a single response by setting this option.

metadata_cache_expiration = 15

integer value

This option is the time (in seconds) to cache metadata. When set to 0, metadata caching is disabled entirely; this is generally not recommended for performance reasons. Increasing this setting should improve response times of the metadata API when under heavy load. Higher values may increase memory usage, and result in longer times for host metadata changes to take effect.

neutron_default_tenant_id = default

string value

Tenant ID (also referred to in some places as the project ID) to use when getting the default network from the Neutron API.

Related options:

  • use_neutron_default_nets

use_forwarded_for = False

boolean value

When True, the X-Forwarded-For header is treated as the canonical remote address. When False (the default), the remote address is used.

You should only enable this if you have an HTML sanitizing proxy.

use_neutron_default_nets = False

boolean value

When True, the TenantNetworkController will query the Neutron API to get the default networks to use.

Related options:

  • neutron_default_tenant_id

vendordata_dynamic_connect_timeout = 5

integer value

Maximum wait time for an external REST service to connect.

Possible values:

  • Any integer with a value greater than three (the TCP packet retransmission timeout). Note that instance start may be blocked during this wait time, so this value should be kept small.

Related options:

  • vendordata_providers
  • vendordata_dynamic_targets
  • vendordata_dynamic_ssl_certfile
  • vendordata_dynamic_read_timeout
  • vendordata_dynamic_failure_fatal

vendordata_dynamic_failure_fatal = False

boolean value

Should failures to fetch dynamic vendordata be fatal to instance boot?

Related options:

  • vendordata_providers
  • vendordata_dynamic_targets
  • vendordata_dynamic_ssl_certfile
  • vendordata_dynamic_connect_timeout
  • vendordata_dynamic_read_timeout

vendordata_dynamic_read_timeout = 5

integer value

Maximum wait time for an external REST service to return data once connected.

Possible values:

  • Any integer. Note that instance start is blocked during this wait time, so this value should be kept small.

Related options:

  • vendordata_providers
  • vendordata_dynamic_targets
  • vendordata_dynamic_ssl_certfile
  • vendordata_dynamic_connect_timeout
  • vendordata_dynamic_failure_fatal

vendordata_dynamic_ssl_certfile =

string value

Path to an optional certificate file or CA bundle to verify dynamic vendordata REST services' SSL certificates against.

Possible values:

  • An empty string, or a path to a valid certificate file

Related options:

  • vendordata_providers
  • vendordata_dynamic_targets
  • vendordata_dynamic_connect_timeout
  • vendordata_dynamic_read_timeout
  • vendordata_dynamic_failure_fatal

vendordata_dynamic_targets = []

list value

A list of targets for the dynamic vendordata provider. These targets are of the form <name>@<url>.

The dynamic vendordata provider collects metadata by contacting external REST services and querying them for information about the instance. This behaviour is documented in the vendordata.rst file in the nova developer reference.
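
Using the <name>@<url> form described above, a single hypothetical target served from a local endpoint would be configured as (the DynamicJSON provider must also be enabled; see vendordata_providers below):

    [api]
    vendordata_dynamic_targets = billing@http://127.0.0.1:9999
    vendordata_providers = StaticJSON,DynamicJSON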

vendordata_jsonfile_path = None

string value

Cloud providers may store custom data in a vendor data file that will then be available to the instances via the metadata service and to the rendering of config-drive. The default class for this, JsonFileVendorData, loads this information from a JSON file, whose path is configured by this option. If there is no path set by this option, the class returns an empty dictionary.

Note that when using this to provide static vendor data to a configuration drive, the nova-compute service must be configured with this option and the file must be accessible from the nova-compute host.

Possible values:

  • Any string representing the path to the data file, or an empty string (default).

vendordata_providers = ['StaticJSON']

list value

A list of vendordata providers.

Vendordata providers are how deployers can provide metadata, via configdrive and the metadata service, that is specific to their deployment.

For more information on the requirements for implementing a vendordata dynamic endpoint, please see the vendordata.rst file in the nova developer reference.

Related options:

  • vendordata_dynamic_targets
  • vendordata_dynamic_ssl_certfile
  • vendordata_dynamic_connect_timeout
  • vendordata_dynamic_read_timeout
  • vendordata_dynamic_failure_fatal

9.1.3. api_database

The following table outlines the options available under the [api_database] group in the /etc/nova/nova.conf file.

Table 9.2. api_database
Configuration option = Default valueTypeDescription

connection = None

string value

The SQLAlchemy connection string to use to connect to the database. Do not set this for the nova-compute service.

connection_debug = 0

integer value

Verbosity of SQL debugging information: 0=None, 100=Everything.

connection_parameters =

string value

Optional URL parameters to append onto the connection URL at connect time; specify as param1=value1&param2=value2&...

connection_recycle_time = 3600

integer value

Connections which have been present in the connection pool longer than this number of seconds will be replaced with a new one the next time they are checked out from the pool.

connection_trace = False

boolean value

Add Python stack traces to SQL as comment strings.

max_overflow = None

integer value

If set, use this value for max_overflow with SQLAlchemy.

max_pool_size = None

integer value

Maximum number of SQL connections to keep open in a pool. Setting a value of 0 indicates no limit.

max_retries = 10

integer value

Maximum number of database connection retries during startup. Set to -1 to specify an infinite retry count.

mysql_sql_mode = TRADITIONAL

string value

The SQL mode to be used for MySQL sessions. This option, including the default, overrides any server-set SQL mode. To use whatever SQL mode is set by the server configuration, set this to no value. Example: mysql_sql_mode=

pool_timeout = None

integer value

If set, use this value for pool_timeout with SQLAlchemy.

retry_interval = 10

integer value

Interval between retries of opening a SQL connection.

slave_connection = None

string value

The SQLAlchemy connection string to use to connect to the slave database.

sqlite_synchronous = True

boolean value

If True, SQLite uses synchronous mode.

9.1.4. barbican

The following table outlines the options available under the [barbican] group in the /etc/nova/nova.conf file.

Table 9.3. barbican
Configuration option = Default valueTypeDescription

auth_endpoint = http://localhost/identity/v3

string value

Use this endpoint to connect to Keystone

barbican_api_version = None

string value

Version of the Barbican API, for example: "v1"

barbican_endpoint = None

string value

Use this endpoint to connect to Barbican, for example: "http://localhost:9311/"

barbican_endpoint_type = public

string value

Specifies the type of endpoint. Allowed values are: public, private, and admin

number_of_retries = 60

integer value

Number of times to retry poll for key creation completion

retry_delay = 1

integer value

Number of seconds to wait before retrying poll for key creation completion

verify_ssl = True

boolean value

Specifies whether TLS (https) requests verify the server's certificate. If False, the server's certificate will not be validated; if True, the verify_ssl_path option can additionally be set.

verify_ssl_path = None

string value

A path to a bundle or CA certs to check against, or None for requests to attempt to locate and use certificates, when verify_ssl is True. If verify_ssl is False, this is ignored.

9.1.5. cache

The following table outlines the options available under the [cache] group in the /etc/nova/nova.conf file.

Table 9.4. cache
Configuration option = Default valueTypeDescription

backend = dogpile.cache.null

string value

Cache backend module. For eventlet-based environments or environments with hundreds of threaded servers, Memcache with pooling (oslo_cache.memcache_pool) is recommended. For environments with fewer than 100 threaded servers, Memcached (dogpile.cache.memcached) or Redis (dogpile.cache.redis) is recommended. Test environments with a single instance of the server can use the dogpile.cache.memory backend.
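
As an illustration of the pooled-memcache recommendation above, a minimal [cache] section might look like the following (the server address is hypothetical):

    [cache]
    enabled = True
    backend = oslo_cache.memcache_pool
    memcache_servers = 192.0.2.10:11211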

backend_argument = []

multi valued

Arguments supplied to the backend module. Specify this option once per argument to be passed to the dogpile.cache backend. Example format: "<argname>:<value>".

config_prefix = cache.oslo

string value

Prefix for building the configuration dictionary for the cache region. This should not need to be changed unless there is another dogpile.cache region with the same configuration name.

debug_cache_backend = False

boolean value

Extra debugging from the cache backend (cache keys, get/set/delete/etc calls). This is only really useful if you need to see the specific cache-backend get/set/delete calls with the keys/values. Typically this should be left set to false.

enabled = False

boolean value

Global toggle for caching.

expiration_time = 600

integer value

Default TTL, in seconds, for any cached item in the dogpile.cache region. This applies to any cached method that doesn’t have an explicit cache expiration time defined for it.

memcache_dead_retry = 300

integer value

Number of seconds memcached server is considered dead before it is tried again. (dogpile.cache.memcache and oslo_cache.memcache_pool backends only).

memcache_pool_connection_get_timeout = 10

integer value

Number of seconds that an operation will wait to get a memcache client connection.

memcache_pool_maxsize = 10

integer value

Max total number of open connections to every memcached server. (oslo_cache.memcache_pool backend only).

memcache_pool_unused_timeout = 60

integer value

Number of seconds a connection to memcached is held unused in the pool before it is closed. (oslo_cache.memcache_pool backend only).

memcache_servers = ['localhost:11211']

list value

Memcache servers in the format of "host:port". (dogpile.cache.memcache and oslo_cache.memcache_pool backends only).

memcache_socket_timeout = 1.0

floating point value

Timeout in seconds for every call to a server. (dogpile.cache.memcache and oslo_cache.memcache_pool backends only).

proxies = []

list value

Proxy classes to import that will affect the way the dogpile.cache backend functions. See the dogpile.cache documentation on changing-backend-behavior.

tls_allowed_ciphers = None

string value

Set the available ciphers for sockets created with the TLS context. It should be a string in the OpenSSL cipher list format. If not specified, all OpenSSL enabled ciphers will be available.

tls_cafile = None

string value

Path to a file of concatenated CA certificates in PEM format necessary to establish the caching servers' authenticity. If tls_enabled is False, this option is ignored.

tls_certfile = None

string value

Path to a single file in PEM format containing the client’s certificate as well as any number of CA certificates needed to establish the certificate’s authenticity. This file is only required when client side authentication is necessary. If tls_enabled is False, this option is ignored.

tls_enabled = False

boolean value

Global toggle for TLS usage when communicating with the caching servers.

tls_keyfile = None

string value

Path to a single file containing the client's private key. Otherwise, the private key will be taken from the file specified in tls_certfile. If tls_enabled is False, this option is ignored.

9.1.6. cinder

The following table outlines the options available under the [cinder] group in the /etc/nova/nova.conf file.

Table 9.5. cinder
Configuration option = Default valueTypeDescription

auth-url = None

string value

Authentication URL

auth_section = None

string value

Config Section from which to load plugin specific options

auth_type = None

string value

Authentication type to load

cafile = None

string value

PEM encoded Certificate Authority to use when verifying HTTPs connections.

catalog_info = volumev3::publicURL

string value

Info to match when looking for cinder in the service catalog.

The <service_name> is optional and omitted by default since it should not be necessary in most deployments.

Possible values:

  • Format is separated values of the form: <service_type>:<service_name>:<endpoint_type>

Note: Nova does not support the Cinder v2 API since the Nova 17.0.0 Queens release.

Related options:

  • endpoint_template - Setting this option will override catalog_info
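
For example, matching the documented <service_type>:<service_name>:<endpoint_type> format while also pinning a region (both values are illustrative):

    [cinder]
    catalog_info = volumev3:cinderv3:publicURL
    os_region_name = RegionOne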

certfile = None

string value

PEM encoded client certificate cert file

collect-timing = False

boolean value

Collect per-API call timing information.

cross_az_attach = True

boolean value

Allow attach between instance and volume in different availability zones.

If False, volumes attached to an instance must be in the same availability zone in Cinder as the instance availability zone in Nova. This also means care should be taken when booting an instance from a volume where source is not "volume" because Nova will attempt to create a volume using the same availability zone as what is assigned to the instance. If that AZ is not in Cinder (or allow_availability_zone_fallback=False in cinder.conf), the volume create request will fail and the instance will fail the build request. By default there is no availability zone restriction on volume attach.

default-domain-id = None

string value

Optional domain ID to use with v3 and v2 parameters. It will be used for both the user and project domain in v3 and ignored in v2 authentication.

default-domain-name = None

string value

Optional domain name to use with v3 API and v2 parameters. It will be used for both the user and project domain in v3 and ignored in v2 authentication.

domain-id = None

string value

Domain ID to scope to

domain-name = None

string value

Domain name to scope to

endpoint_template = None

string value

If this option is set, it will override the service catalog lookup with this template for the cinder endpoint.

Possible values:

Note: Nova does not support the Cinder v2 API since the Nova 17.0.0 Queens release.

Related options:

  • catalog_info - If endpoint_template is not set, catalog_info will be used.

http_retries = 3

integer value

Number of times cinderclient should retry on any failed http call. 0 means the connection is attempted only once. Setting it to any positive integer means that on failure the connection is retried that many times; for example, setting it to 3 means the total number of connection attempts will be 4.

Possible values:

  • Any integer value. 0 means connection is attempted only once

insecure = False

boolean value

Verify HTTPS connections.

keyfile = None

string value

PEM encoded client certificate key file

os_region_name = None

string value

Region name of this node. This is used when picking the URL in the service catalog.

Possible values:

  • Any string representing region name

password = None

string value

User’s password

project-domain-id = None

string value

Domain ID containing project

project-domain-name = None

string value

Domain name containing project

project-id = None

string value

Project ID to scope to

project-name = None

string value

Project name to scope to

split-loggers = False

boolean value

Log requests to multiple loggers.

system-scope = None

string value

Scope for system operations

tenant-id = None

string value

Tenant ID

tenant-name = None

string value

Tenant Name

timeout = None

integer value

Timeout value for http requests

trust-id = None

string value

Trust ID

user-domain-id = None

string value

User’s domain id

user-domain-name = None

string value

User’s domain name

user-id = None

string value

User ID

username = None

string value

Username

9.1.7. compute

The following table outlines the options available under the [compute] group in the /etc/nova/nova.conf file.

Table 9.6. compute
Configuration option = Default valueTypeDescription

consecutive_build_service_disable_threshold = 10

integer value

Enables reporting of build failures to the scheduler.

Any nonzero value will enable sending build failure statistics to the scheduler for use by the BuildFailureWeigher.

Possible values:

  • Any positive integer enables reporting build failures.
  • Zero to disable reporting build failures.

Related options:

  • [filter_scheduler]/build_failure_weight_multiplier

cpu_dedicated_set = None

string value

Mask of host CPUs that can be used for PCPU resources.

The behavior of this option affects the behavior of the deprecated vcpu_pin_set option.

  • If this option is defined, defining vcpu_pin_set will result in an error.
  • If this option is not defined, vcpu_pin_set will be used to determine inventory for VCPU resources and to limit the host CPUs that both pinned and unpinned instances can be scheduled to.

This behavior will be simplified in a future release when vcpu_pin_set is removed.

Possible values:

  • A comma-separated list of physical CPU numbers that instance VCPUs can be allocated from. Each element should be either a single CPU number, a range of CPU numbers, or a caret followed by a CPU number to be excluded from a previous range. For example:

    cpu_dedicated_set = "4-12,^8,15"

Related options:

  • [compute] cpu_shared_set: This is the counterpart option for defining where VCPU resources should be allocated from.
  • vcpu_pin_set: A legacy option that this option partially replaces.

cpu_shared_set = None

string value

Mask of host CPUs that can be used for VCPU resources and offloaded emulator threads.

The behavior of this option depends on the definition of the deprecated vcpu_pin_set option.

  • If vcpu_pin_set is not defined, [compute] cpu_shared_set will be used to provide VCPU inventory and to determine the host CPUs that unpinned instances can be scheduled to. It will also be used to determine the host CPUs that instance emulator threads should be offloaded to for instances configured with the share emulator thread policy (hw:emulator_threads_policy=share).
  • If vcpu_pin_set is defined, [compute] cpu_shared_set will only be used to determine the host CPUs that instance emulator threads should be offloaded to for instances configured with the share emulator thread policy (hw:emulator_threads_policy=share). vcpu_pin_set will be used to provide VCPU inventory and to determine the host CPUs that both pinned and unpinned instances can be scheduled to.

This behavior will be simplified in a future release when vcpu_pin_set is removed.

Possible values:

  • A comma-separated list of physical CPU numbers that instance VCPUs can be allocated from. Each element should be either a single CPU number, a range of CPU numbers, or a caret followed by a CPU number to be excluded from a previous range. For example:

    cpu_shared_set = "4-12,^8,15"

Related options:

  • [compute] cpu_dedicated_set: This is the counterpart option for defining where PCPU resources should be allocated from.
  • vcpu_pin_set: A legacy option whose definition may change the behavior of this option.
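
As an illustration only, a 16-core host could be partitioned between shared and dedicated workloads as follows; the core numbers are assumptions, not recommendations:

[compute]
# Cores for unpinned guests and offloaded emulator threads.
cpu_shared_set = 0-3
# Cores reserved for pinned (PCPU) guests, excluding core 8.
cpu_dedicated_set = 4-15,^8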

image_type_exclude_list = []

list value

A list of image formats that should not be advertised as supported by this compute node.

In some situations, it may be desirable to have a compute node refuse to support an expensive or complex image format. This factors into the decisions made by the scheduler about which compute node to select when booted with a given image.

Possible values:

  • Any glance image disk_format name (e.g. raw, qcow2)

Related options:

  • [scheduler]query_placement_for_image_type_support - enables filtering computes based on supported image types, which is required to be enabled for this to take effect.
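
For example, a sketch that stops this compute node from advertising qcow2 support (the excluded format is illustrative), together with the scheduler option it depends on:

[compute]
image_type_exclude_list = qcow2

[scheduler]
# Required for the exclusion list to affect scheduling decisions.
query_placement_for_image_type_support = True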

live_migration_wait_for_vif_plug = True

boolean value

Determine if the source compute host should wait for a network-vif-plugged event from the (neutron) networking service before starting the actual transfer of the guest to the destination compute host.

Note that this option is read on the destination host of a live migration. If you set this option the same on all of your compute hosts, which you should do if you use the same networking backend universally, you do not have to worry about this.

Before starting the transfer of the guest, some setup occurs on the destination compute host, including plugging virtual interfaces. Depending on the networking backend on the destination host, a network-vif-plugged event may be triggered and then received on the source compute host and the source compute can wait for that event to ensure networking is set up on the destination host before starting the guest transfer in the hypervisor.

Note

The compute service cannot reliably determine which types of virtual interfaces (port.binding:vif_type) will send network-vif-plugged events without an accompanying port binding:host_id change. Open vSwitch and linuxbridge should be OK, but OpenDaylight is at least one known backend that will not currently work in this case; see bug https://launchpad.net/bugs/1755890 for more details.

Possible values:

  • True: wait for network-vif-plugged events before starting guest transfer
  • False: do not wait for network-vif-plugged events before starting guest transfer (this is the legacy behavior)

Related options:

  • [DEFAULT]/vif_plugging_is_fatal: if live_migration_wait_for_vif_plug is True and vif_plugging_timeout is greater than 0, and a timeout is reached, the live migration process will fail with an error but the guest transfer will not have started to the destination host
  • [DEFAULT]/vif_plugging_timeout: if live_migration_wait_for_vif_plug is True, this controls the amount of time to wait before timing out and either failing if vif_plugging_is_fatal is True, or simply continuing with the live migration
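
A minimal sketch combining this option with the related [DEFAULT] options; the timeout value is illustrative:

[DEFAULT]
# Fail the live migration if the network-vif-plugged event never arrives.
vif_plugging_is_fatal = True
# Wait up to 300 seconds for the event (illustrative value).
vif_plugging_timeout = 300

[compute]
live_migration_wait_for_vif_plug = True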

max_concurrent_disk_ops = 0

integer value

Number of concurrent disk-IO-intensive operations (glance image downloads, image format conversions, etc.) that we will do in parallel. If this is set too high then response time suffers. The default value of 0 means no limit.

max_disk_devices_to_attach = -1

integer value

Maximum number of disk devices allowed to attach to a single server. Note that the number of disks supported by a server depends on the bus used. For example, the ide disk bus is limited to 4 attached devices. The configured maximum is enforced during server create, rebuild, evacuate, unshelve, live migrate, and attach volume.

Usually, the disk bus is determined automatically from the device type or disk device, and the virtualization type. However, the disk bus can also be specified via a block device mapping or an image property. See the disk_bus field in the block device mapping documentation for more information about specifying disk bus in a block device mapping, and see https://docs.openstack.org/glance/latest/admin/useful-image-properties.html for more information about the hw_disk_bus image property.

Operators changing [compute]/max_disk_devices_to_attach on a compute service that is hosting servers should be aware that it could cause rebuilds to fail if the maximum is decreased below the number of devices already attached to servers. For example, if server A has 26 devices attached and an operator changes [compute]/max_disk_devices_to_attach to 20, a request to rebuild server A will fail and go into ERROR state because 26 devices are already attached and exceed the new configured maximum of 20.

Operators setting [compute]/max_disk_devices_to_attach should also be aware that during a cold migration, the configured maximum is only enforced in-place and the destination is not checked before the move. This means if an operator has set a maximum of 26 on compute host A and a maximum of 20 on compute host B, a cold migration of a server with 26 attached devices from compute host A to compute host B will succeed. Then, once the server is on compute host B, a subsequent request to rebuild the server will fail and go into ERROR state because 26 devices are already attached and exceed the configured maximum of 20 on compute host B.

The configured maximum is not enforced on shelved offloaded servers, as they have no compute host.

Warning

If this option is set to 0, the nova-compute service will fail to start, as 0 disk devices is an invalid configuration that would prevent instances from being able to boot.

Possible values:

  • -1 means unlimited
  • Any integer >= 1 represents the maximum allowed. A value of 0 will cause the nova-compute service to fail to start, as 0 disk devices is an invalid configuration that would prevent instances from being able to boot.
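
For example, an operator capping attachments at 20 devices per server (an illustrative limit) would set:

[compute]
# -1 (the default) means unlimited; 0 is invalid and prevents startup.
max_disk_devices_to_attach = 20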

resource_provider_association_refresh = 300

integer value

Interval for updating nova-compute-side cache of the compute node resource provider’s inventories, aggregates, and traits.

This option specifies the number of seconds between attempts to update a provider’s inventories, aggregates and traits in the local cache of the compute node.

A value of zero disables cache refresh completely.

The cache can be cleared manually at any time by sending SIGHUP to the compute process, causing it to be repopulated the next time the data is accessed.

Possible values:

  • Any positive integer in seconds, or zero to disable refresh.

shutdown_retry_interval = 10

integer value

Time to wait in seconds before resending an ACPI shutdown signal to instances.

The overall time to wait is set by shutdown_timeout.

Possible values:

  • Any integer greater than 0 in seconds

Related options:

  • shutdown_timeout
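
A sketch of the two options together, assuming the default [DEFAULT] shutdown_timeout of 60 seconds:

[DEFAULT]
# Overall window to wait for a clean shutdown.
shutdown_timeout = 60

[compute]
# Resend the ACPI shutdown signal every 10 seconds within that window.
shutdown_retry_interval = 10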

9.1.8. conductor

The following table outlines the options available under the [conductor] group in the /etc/nova/nova.conf file.

Table 9.7. conductor
Configuration option = Default valueTypeDescription

workers = None

integer value

Number of workers for OpenStack Conductor service. The default will be the number of CPUs available.

9.1.9. console

The following table outlines the options available under the [console] group in the /etc/nova/nova.conf file.

Table 9.8. console
Configuration option = Default valueTypeDescription

allowed_origins = []

list value

Adds a list of allowed origins to the console websocket proxy, to allow connections from other origin hostnames. The websocket proxy matches the host header with the origin header to prevent cross-site requests. This list specifies the values, other than the host, that are allowed in the origin header.

Possible values:

  • A list where each element is an allowed origin hostname, or an empty list
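
For example, to accept console connections whose origin header names a proxy other than the host (the hostnames are hypothetical):

[console]
allowed_origins = console1.example.com,console2.example.com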

ssl_ciphers = None

string value

OpenSSL cipher preference string that specifies what ciphers to allow for TLS connections from clients. For example:

ssl_ciphers = "kEECDH+aECDSA+AES:kEECDH+AES+aRSA:kEDH+aRSA+AES"

See the man page for the OpenSSL ciphers command for details of the cipher preference string format and allowed values:

https://www.openssl.org/docs/man1.1.0/man1/ciphers.html

Related options:

  • [DEFAULT] cert
  • [DEFAULT] key

ssl_minimum_version = default

string value

Minimum allowed SSL/TLS protocol version.

Related options:

  • [DEFAULT] cert
  • [DEFAULT] key

9.1.10. consoleauth

The following table outlines the options available under the [consoleauth] group in the /etc/nova/nova.conf file.

Table 9.9. consoleauth
Configuration option = Default valueTypeDescription

token_ttl = 600

integer value

The lifetime of a console auth token (in seconds).

A console auth token is used in authorizing console access for a user. Once the token’s time to live has elapsed, the token is considered expired. Expired tokens are then deleted.

9.1.11. cors

The following table outlines the options available under the [cors] group in the /etc/nova/nova.conf file.

Table 9.10. cors
Configuration option = Default valueTypeDescription

allow_credentials = True

boolean value

Indicate that the actual request can include user credentials

allow_headers = ['X-Auth-Token', 'X-Openstack-Request-Id', 'X-Identity-Status', 'X-Roles', 'X-Service-Catalog', 'X-User-Id', 'X-Tenant-Id']

list value

Indicate which header field names may be used during the actual request.

allow_methods = ['GET', 'PUT', 'POST', 'DELETE', 'PATCH']

list value

Indicate which methods can be used during the actual request.

allowed_origin = None

list value

Indicate whether this resource may be shared with the domain received in the requests "origin" header. Format: "<protocol>://<host>[:<port>]", no trailing slash. Example: https://horizon.example.com

expose_headers = ['X-Auth-Token', 'X-Openstack-Request-Id', 'X-Subject-Token', 'X-Service-Token']

list value

Indicate which headers are safe to expose to the API. Defaults to HTTP Simple Headers.

max_age = 3600

integer value

Maximum cache age of CORS preflight requests.
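
A minimal [cors] sketch reusing the example origin from above:

[cors]
allowed_origin = https://horizon.example.com
allow_credentials = True
max_age = 3600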

9.1.12. database

The following table outlines the options available under the [database] group in the /etc/nova/nova.conf file.

Table 9.11. database
Configuration option = Default valueTypeDescription

backend = sqlalchemy

string value

The back end to use for the database.

connection = None

string value

The SQLAlchemy connection string to use to connect to the database.
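
For example, a typical MySQL connection string; the user, password, and hostname are placeholders:

[database]
connection = mysql+pymysql://nova:NOVA_DBPASS@controller/nova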

connection_debug = 0

integer value

Verbosity of SQL debugging information: 0=None, 100=Everything.

`connection_parameters = `

string value

Optional URL parameters to append onto the connection URL at connect time; specify as param1=value1&param2=value2&…​

connection_recycle_time = 3600

integer value

Connections which have been present in the connection pool longer than this number of seconds will be replaced with a new one the next time they are checked out from the pool.

connection_trace = False

boolean value

Add Python stack traces to SQL as comment strings.

db_inc_retry_interval = True

boolean value

If True, increases the interval between retries of a database operation up to db_max_retry_interval.

db_max_retries = 20

integer value

Maximum retries in case of connection error or deadlock error before error is raised. Set to -1 to specify an infinite retry count.

db_max_retry_interval = 10

integer value

If db_inc_retry_interval is set, the maximum seconds between retries of a database operation.

db_retry_interval = 1

integer value

Seconds between retries of a database transaction.

max_overflow = 50

integer value

If set, use this value for max_overflow with SQLAlchemy.

max_pool_size = 5

integer value

Maximum number of SQL connections to keep open in a pool. Setting a value of 0 indicates no limit.

max_retries = 10

integer value

Maximum number of database connection retries during startup. Set to -1 to specify an infinite retry count.

mysql_enable_ndb = False

boolean value

If True, transparently enables support for handling MySQL Cluster (NDB).

mysql_sql_mode = TRADITIONAL

string value

The SQL mode to be used for MySQL sessions. This option, including the default, overrides any server-set SQL mode. To use whatever SQL mode is set by the server configuration, set this to no value. Example: mysql_sql_mode=

pool_timeout = None

integer value

If set, use this value for pool_timeout with SQLAlchemy.

retry_interval = 10

integer value

Interval between retries of opening a SQL connection.

slave_connection = None

string value

The SQLAlchemy connection string to use to connect to the slave database.

sqlite_synchronous = True

boolean value

If True, SQLite uses synchronous mode.

use_db_reconnect = False

boolean value

Enable the experimental use of database reconnect on connection lost.

use_tpool = False

boolean value

Enable the experimental use of thread pooling for all DB API calls

9.1.13. devices

The following table outlines the options available under the [devices] group in the /etc/nova/nova.conf file.

Table 9.12. devices
Configuration option = Default valueTypeDescription

enabled_vgpu_types = []

list value

The vGPU types enabled in the compute node.

Some pGPUs (e.g. NVIDIA GRID K1) support different vGPU types. Users can use this option to specify a list of enabled vGPU types that may be assigned to a guest instance. Note that Nova only supports a single type in the Queens release: if more than one vGPU type is specified (as a comma-separated list), only the first one will be used. For example:

[devices]
enabled_vgpu_types = GRID K100,Intel GVT-g,MxGPU.2,nvidia-11

9.1.14. ephemeral_storage_encryption

The following table outlines the options available under the [ephemeral_storage_encryption] group in the /etc/nova/nova.conf file.

Table 9.13. ephemeral_storage_encryption
Configuration option = Default valueTypeDescription

cipher = aes-xts-plain64

string value

Cipher-mode string to be used.

The cipher and mode to be used to encrypt ephemeral storage. The set of cipher-mode combinations available depends on kernel support. According to the dm-crypt documentation, the cipher is expected to be in the format: "<cipher>-<chainmode>-<ivmode>".

Possible values:

  • Any crypto option listed in /proc/crypto.

enabled = False

boolean value

Enables/disables LVM ephemeral storage encryption.

key_size = 512

integer value

Encryption key length in bits.

The bit length of the encryption key to be used to encrypt ephemeral storage. In XTS mode only half of the bits are used for encryption key.
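
A sketch enabling LVM ephemeral storage encryption with the defaults shown in this table:

[ephemeral_storage_encryption]
enabled = True
cipher = aes-xts-plain64
# In XTS mode only half of these bits are used for the encryption key.
key_size = 512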

9.1.15. filter_scheduler

The following table outlines the options available under the [filter_scheduler] group in the /etc/nova/nova.conf file.

Table 9.14. filter_scheduler
Configuration option = Default valueTypeDescription

aggregate_image_properties_isolation_namespace = None

string value

Image property namespace for use in the host aggregate.

Images and hosts can be configured so that certain images can only be scheduled to hosts in a particular aggregate. This is done with metadata values set on the host aggregate that are identified by beginning with the value of this option. If the host is part of an aggregate with such a metadata key, the image in the request spec must have the value of that metadata in its properties in order for the scheduler to consider the host as acceptable.

This option is only used by the FilterScheduler and its subclasses; if you use a different scheduler, this option has no effect. Also note that this setting only affects scheduling if the aggregate_image_properties_isolation filter is enabled.

Possible values:

  • A string, where the string corresponds to an image property namespace

Related options:

  • aggregate_image_properties_isolation_separator

aggregate_image_properties_isolation_separator = .

string value

Separator character(s) for image property namespace and name.

When using the aggregate_image_properties_isolation filter, the relevant metadata keys are prefixed with the namespace defined in the aggregate_image_properties_isolation_namespace configuration option plus a separator. This option defines the separator to be used.

This option is only used by the FilterScheduler and its subclasses; if you use a different scheduler, this option has no effect. Also note that this setting only affects scheduling if the aggregate_image_properties_isolation filter is enabled.

Possible values:

  • A string, where the string corresponds to an image property namespace separator character

Related options:

  • aggregate_image_properties_isolation_namespace
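
For example, with the hypothetical namespace os_isolation and the default separator, the filter would only consider aggregate metadata keys such as os_isolation.distro:

[filter_scheduler]
aggregate_image_properties_isolation_namespace = os_isolation
aggregate_image_properties_isolation_separator = .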

available_filters = ['nova.scheduler.filters.all_filters']

multi valued

Filters that the scheduler can use.

An unordered list of the filter classes the nova scheduler may apply. Only the filters specified in the enabled_filters option will be used, but any filter appearing in that option must also be included in this list.

By default, this is set to all filters that are included with nova.

This option is only used by the FilterScheduler and its subclasses; if you use a different scheduler, this option has no effect.

Possible values:

  • A list of zero or more strings, where each string corresponds to the name of a filter that may be used for selecting a host

Related options:

  • enabled_filters

build_failure_weight_multiplier = 1000000.0

floating point value

Multiplier used for weighing hosts that have had recent build failures.

This option determines how much weight is placed on a compute node with recent build failures. Build failures may indicate a failing, misconfigured, or otherwise ailing compute node, and avoiding it during scheduling may be beneficial. The weight is inversely proportional to the number of recent build failures the compute node has experienced. This value should be set to some high value to offset weight given by other enabled weighers due to available resources. To disable weighing compute hosts by the number of recent failures, set this to zero.

This option is only used by the FilterScheduler and its subclasses; if you use a different scheduler, this option has no effect.

Possible values:

  • An integer or float value, where the value corresponds to the multiplier ratio for this weigher.

Related options:

  • [compute]/consecutive_build_service_disable_threshold - Must be nonzero for a compute to report data considered by this weigher.

cpu_weight_multiplier = 1.0

floating point value

CPU weight multiplier ratio.

Multiplier used for weighting free vCPUs. Negative numbers indicate stacking rather than spreading.

This option is only used by the FilterScheduler and its subclasses; if you use a different scheduler, this option has no effect. Also note that this setting only affects scheduling if the cpu weigher is enabled.

Possible values:

  • An integer or float value, where the value corresponds to the multiplier ratio for this weigher.

Related options:

  • filter_scheduler.weight_classes: This weigher must be added to list of enabled weight classes if the weight_classes setting is set to a non-default value.

disk_weight_multiplier = 1.0

floating point value

Disk weight multiplier ratio.

Multiplier used for weighing free disk space. Negative numbers mean to stack vs spread.

This option is only used by the FilterScheduler and its subclasses; if you use a different scheduler, this option has no effect. Also note that this setting only affects scheduling if the disk weigher is enabled.

Possible values:

  • An integer or float value, where the value corresponds to the multiplier ratio for this weigher.

enabled_filters = ['AvailabilityZoneFilter', 'ComputeFilter', 'ComputeCapabilitiesFilter', 'ImagePropertiesFilter', 'ServerGroupAntiAffinityFilter', 'ServerGroupAffinityFilter']

list value

Filters that the scheduler will use.

An ordered list of filter class names that will be used for filtering hosts. These filters will be applied in the order they are listed so place your most restrictive filters first to make the filtering process more efficient.

This option is only used by the FilterScheduler and its subclasses; if you use a different scheduler, this option has no effect.

Possible values:

  • A list of zero or more strings, where each string corresponds to the name of a filter to be used for selecting a host

Related options:

  • All of the filters in this option must be present in the available_filters option, or a SchedulerHostFilterNotFound exception will be raised.
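
For example, a sketch extending the default filter set with NumInstancesFilter; all enabled filters must also be loadable via available_filters:

[filter_scheduler]
available_filters = nova.scheduler.filters.all_filters
enabled_filters = AvailabilityZoneFilter,ComputeFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter,ServerGroupAntiAffinityFilter,ServerGroupAffinityFilter,NumInstancesFilter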

host_subset_size = 1

integer value

Size of subset of best hosts selected by scheduler.

New instances will be scheduled on a host chosen randomly from a subset of the N best hosts, where N is the value set by this option.

Setting this to a value greater than 1 will reduce the chance that multiple scheduler processes handling similar requests will select the same host, creating a potential race condition. By selecting a host randomly from the N hosts that best fit the request, the chance of a conflict is reduced. However, the higher you set this value, the less optimal the chosen host may be for a given request.

This option is only used by the FilterScheduler and its subclasses; if you use a different scheduler, this option has no effect.

Possible values:

  • An integer, where the integer corresponds to the size of a host subset. Any integer is valid, although any value less than 1 will be treated as 1

image_properties_default_architecture = None

string value

The default architecture to be used when using the image properties filter.

When using the ImagePropertiesFilter, it is possible that you want to define a default architecture to make the user experience easier and avoid having something like x86_64 images landing on aarch64 compute nodes because the user did not specify the hw_architecture property in Glance.

Possible values:

  • CPU Architectures such as x86_64, aarch64, s390x.

io_ops_weight_multiplier = -1.0

floating point value

IO operations weight multiplier ratio.

This option determines how hosts with differing workloads are weighed. Negative values, such as the default, will result in the scheduler preferring hosts with lighter workloads whereas positive values will prefer hosts with heavier workloads. Another way to look at it is that positive values for this option will tend to schedule instances onto hosts that are already busy, while negative values will tend to distribute the workload across more hosts. The absolute value, whether positive or negative, controls how strong the io_ops weigher is relative to other weighers.

This option is only used by the FilterScheduler and its subclasses; if you use a different scheduler, this option has no effect. Also note that this setting only affects scheduling if the io_ops weigher is enabled.

Possible values:

  • An integer or float value, where the value corresponds to the multiplier ratio for this weigher.

isolated_hosts = []

list value

List of hosts that can only run certain images.

If there is a need to restrict some images to only run on certain designated hosts, list those host names here.

This option is only used by the FilterScheduler and its subclasses; if you use a different scheduler, this option has no effect. Also note that this setting only affects scheduling if the IsolatedHostsFilter filter is enabled.

Possible values:

  • A list of strings, where each string corresponds to the name of a host

Related options:

  • scheduler/isolated_images
  • scheduler/restrict_isolated_hosts_to_isolated_images

isolated_images = []

list value

List of UUIDs for images that can only be run on certain hosts.

If there is a need to restrict some images to only run on certain designated hosts, list those image UUIDs here.

This option is only used by the FilterScheduler and its subclasses; if you use a different scheduler, this option has no effect. Also note that this setting only affects scheduling if the IsolatedHostsFilter filter is enabled.

Possible values:

  • A list of UUID strings, where each string corresponds to the UUID of an image

Related options:

  • scheduler/isolated_hosts
  • scheduler/restrict_isolated_hosts_to_isolated_images

max_instances_per_host = 50

integer value

Maximum number of instances that can exist on a host.

If you need to limit the number of instances on any given host, set this option to the maximum number of instances you want to allow. The NumInstancesFilter and AggregateNumInstancesFilter will reject any host that has at least as many instances as this option’s value.

This option is only used by the FilterScheduler and its subclasses; if you use a different scheduler, this option has no effect. Also note that this setting only affects scheduling if the NumInstancesFilter or AggregateNumInstancesFilter filter is enabled.

Possible values:

  • An integer, where the integer corresponds to the max instances that can be scheduled on a host.

max_io_ops_per_host = 8

integer value

The number of instances that can be actively performing IO on a host.

Instances performing IO includes those in the following states: build, resize, snapshot, migrate, rescue, unshelve.

This option is only used by the FilterScheduler and its subclasses; if you use a different scheduler, this option has no effect. Also note that this setting only affects scheduling if the io_ops_filter filter is enabled.

Possible values:

  • An integer, where the integer corresponds to the max number of instances that can be actively performing IO on any given host.

pci_weight_multiplier = 1.0

floating point value

PCI device affinity weight multiplier.

The PCI device affinity weighter computes a weighting based on the number of PCI devices on the host and the number of PCI devices requested by the instance. The NUMATopologyFilter filter must be enabled for this to have any significance. For more information, refer to the filter documentation:

https://docs.openstack.org/nova/latest/user/filter-scheduler.html

Possible values:

  • A positive integer or float value, where the value corresponds to the multiplier ratio for this weigher.

ram_weight_multiplier = 1.0

floating point value

RAM weight multiplier ratio.

This option determines how hosts with more or less available RAM are weighed. A positive value will result in the scheduler preferring hosts with more available RAM, and a negative number will result in the scheduler preferring hosts with less available RAM. Another way to look at it is that positive values for this option will tend to spread instances across many hosts, while negative values will tend to fill up (stack) hosts as much as possible before scheduling to a less-used host. The absolute value, whether positive or negative, controls how strong the RAM weigher is relative to other weighers.

This option is only used by the FilterScheduler and its subclasses; if you use a different scheduler, this option has no effect. Also note that this setting only affects scheduling if the ram weigher is enabled.

Possible values:

  • An integer or float value, where the value corresponds to the multiplier ratio for this weigher.

restrict_isolated_hosts_to_isolated_images = True

boolean value

Prevent non-isolated images from being built on isolated hosts.

This option is only used by the FilterScheduler and its subclasses; if you use a different scheduler, this option has no effect. Also note that this setting only affects scheduling if the IsolatedHostsFilter filter is enabled. Even then, this option doesn’t affect the behavior of requests for isolated images, which will always be restricted to isolated hosts.

Related options:

  • scheduler/isolated_images
  • scheduler/isolated_hosts
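
A sketch tying the three isolation options together; the host name and image UUID are hypothetical:

[filter_scheduler]
isolated_hosts = host1.example.com
isolated_images = 3c59b8f1-5b94-4f28-9d5e-0f8f2f1d6a11
restrict_isolated_hosts_to_isolated_images = True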

shuffle_best_same_weighed_hosts = False

boolean value

Enable spreading the instances between hosts with the same best weight.

Enabling it is beneficial when host_subset_size is 1 (the default) but there is a large number of hosts with the same maximal weight. This scenario is common in Ironic deployments, where many baremetal nodes with identical weights are typically returned to the scheduler. In such cases, enabling this option will reduce contention and the chance of rescheduling events. At the same time, it will make instance packing (even in the unweighed case) less dense.

soft_affinity_weight_multiplier = 1.0

floating point value

Multiplier used for weighing hosts for group soft-affinity.

Possible values:

  • A non-negative integer or float value, where the value corresponds to weight multiplier for hosts with group soft affinity.

soft_anti_affinity_weight_multiplier = 1.0

floating point value

Multiplier used for weighing hosts for group soft-anti-affinity.

Possible values:

  • A non-negative integer or float value, where the value corresponds to weight multiplier for hosts with group soft anti-affinity.

track_instance_changes = True

boolean value

Enable querying of individual hosts for instance information.

The scheduler may need information about the instances on a host in order to evaluate its filters and weighers. The most common need for this information is for the (anti-)affinity filters, which need to choose a host based on the instances already running on a host.

If the configured filters and weighers do not need this information, disabling this option will improve performance. It may also be disabled when the tracking overhead proves too heavy, although this will cause classes requiring host usage data to query the database on each request instead.

This option is only used by the FilterScheduler and its subclasses; if you use a different scheduler, this option has no effect.

Note

In a multi-cell (v2) setup where the cell MQ is separated from the top-level, computes cannot directly communicate with the scheduler. Thus, this option cannot be enabled in that scenario. See also the [workarounds]/disable_group_policy_check_upcall option.

weight_classes = ['nova.scheduler.weights.all_weighers']

list value

Weighers that the scheduler will use.

Only hosts which pass the filters are weighed. The weight for any host starts at 0, and the weighers order these hosts by adding to or subtracting from the weight assigned by the previous weigher. Weights may become negative. An instance will be scheduled to one of the N most-weighted hosts, where N is scheduler_host_subset_size.

By default, this is set to all weighers that are included with Nova.

This option is only used by the FilterScheduler and its subclasses; if you use a different scheduler, this option has no effect.

Possible values:

  • A list of zero or more strings, where each string corresponds to the name of a weigher that will be used for selecting a host

9.1.16. glance

The following table outlines the options available under the [glance] group in the /etc/nova/nova.conf file.

Table 9.15. glance
Configuration option = Default valueTypeDescription

allowed_direct_url_schemes = []

list value

List of url schemes that can be directly accessed.

This option specifies a list of URL schemes that can be downloaded directly via the direct_url. The direct_url can be fetched from image metadata and used by nova to retrieve the image more efficiently; for instance, nova-compute can copy the image directly when it has access to the same file system as glance.

Possible values:

  • [file], Empty list (default)

Deprecated since: 17.0.0

Reason: This was originally added for the nova.image.download.file FileTransfer extension which was removed in the 16.0.0 Pike release. The nova.image.download.modules extension point is not maintained and there is no indication of its use in production clouds.

api_servers = None

list value

List of glance api servers endpoints available to nova.

https is used for ssl-based glance api servers.

Note

The preferred mechanism for endpoint discovery is via keystoneauth1 loading options. Only use api_servers if you need multiple endpoints and are unable to use a load balancer for some reason.

Possible values:

  • A list of any fully qualified url of the form "scheme://hostname:port[/path]" (i.e. "http://10.0.1.0:9292" or "https://my.glance.server/image").
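
For example, reusing the endpoint forms given above:

[glance]
api_servers = http://10.0.1.0:9292,https://my.glance.server/image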

cafile = None

string value

PEM encoded Certificate Authority to use when verifying HTTPs connections.

certfile = None

string value

PEM encoded client certificate cert file

collect-timing = False

boolean value

Collect per-API call timing information.

connect-retries = None

integer value

The maximum number of retries that should be attempted for connection errors.

connect-retry-delay = None

floating point value

Delay (in seconds) between two retries for connection errors. If not set, exponential retry starting with 0.5 seconds up to a maximum of 60 seconds is used.

debug = False

boolean value

Enable or disable debug logging with glanceclient.

default_trusted_certificate_ids = []

list value

List of certificate IDs for certificates that should be trusted.

May be used as a default list of trusted certificate IDs for certificate validation. The value of this option will be ignored if the user provides a list of trusted certificate IDs with an instance API request. The value of this option will be persisted with the instance data if signature verification and certificate validation are enabled and if the user did not provide an alternative list. If left empty when certificate validation is enabled, the user must provide a list of trusted certificate IDs, otherwise certificate validation will fail.

Related options:

  • The value of this option may be used if both verify_glance_signatures and enable_certificate_validation are enabled.

enable_certificate_validation = False

boolean value

Enable certificate validation for image signature verification.

During image signature verification nova will first verify the validity of the image’s signing certificate using the set of trusted certificates associated with the instance. If certificate validation fails, signature verification will not be performed and the instance will be placed into an error state. This provides end users with stronger assurances that the image data is unmodified and trustworthy. If left disabled, image signature verification can still occur but the end user will not have any assurance that the signing certificate used to generate the image signature is still trustworthy.

Related options:

  • This option only takes effect if verify_glance_signatures is enabled.
  • The value of default_trusted_certificate_ids may be used when this option is enabled.

Deprecated since: 16.0.0

Reason: This option is intended to ease the transition for deployments leveraging image signature verification. The intended state long-term is for signature verification and certificate validation to always happen together.

enable_rbd_download = False

boolean value

Enable download of Glance images directly via RBD.

Allow compute hosts to quickly download and cache images locally, directly from Ceph, rather than relying on slow downloads from the Glance API. This can reduce the download time for images in the tens to hundreds of GBs from tens of minutes to tens of seconds, but it requires a Ceph-based deployment and access from the compute nodes to Ceph.

Related options:

  • [glance] rbd_user
  • [glance] rbd_connect_timeout
  • [glance] rbd_pool
  • [glance] rbd_ceph_conf
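
A sketch of direct RBD downloads, assuming a conventional Ceph-for-Glance deployment; the client name and pool are assumptions, not fixed values:

[glance]
enable_rbd_download = True
# Assumed RADOS client name and image pool for this deployment.
rbd_user = glance
rbd_pool = images
rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_connect_timeout = 5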

endpoint-override = None

string value

Always use this endpoint URL for requests for this client. NOTE: The unversioned endpoint should be specified here; to request a particular API version, use the version, min-version, and/or max-version options.

insecure = False

boolean value

Verify HTTPS connections.

keyfile = None

string value

PEM encoded client certificate key file

num_retries = 3

integer value

Enable glance operation retries.

Specifies the number of retries when uploading / downloading an image to / from glance. 0 means no retries.

`rbd_ceph_conf = `

string value

Path to the ceph configuration file to use.

Related options:

  • This option is only used if [glance] enable_rbd_download is set to True.

rbd_connect_timeout = 5

integer value

The RADOS client timeout in seconds when initially connecting to the cluster.

Related options:

  • This option is only used if [glance] enable_rbd_download is set to True.

`rbd_pool = `

string value

The RADOS pool in which the Glance images are stored as rbd volumes.

Related options:

  • This option is only used if [glance] enable_rbd_download is set to True.

`rbd_user = `

string value

The RADOS client name for accessing Glance images stored as rbd volumes.

Related options:

  • This option is only used if [glance] enable_rbd_download is set to True.

region-name = None

string value

The default region_name for endpoint URL discovery.

service-name = None

string value

The default service_name for endpoint URL discovery.

service-type = image

string value

The default service_type for endpoint URL discovery.

split-loggers = False

boolean value

Log requests to multiple loggers.

status-code-retries = None

integer value

The maximum number of retries that should be attempted for retriable HTTP status codes.

status-code-retry-delay = None

floating point value

Delay (in seconds) between two retries for retriable status codes. If not set, exponential retry starting with 0.5 seconds up to a maximum of 60 seconds is used.

timeout = None

integer value

Timeout value for http requests

valid-interfaces = ['internal', 'public']

list value

List of interfaces, in order of preference, for endpoint URL.

verify_glance_signatures = False

boolean value

Enable image signature verification.

nova uses the image signature metadata from glance and verifies the signature of a signed image while downloading that image. If the image signature cannot be verified or if the image signature metadata is either incomplete or unavailable, then nova will not boot the image and instead will place the instance into an error state. This provides end users with stronger assurances of the integrity of the image data they are using to create servers.

Related options:

  • The options in the key_manager group, as the key_manager is used for the signature validation.
  • Both enable_certificate_validation and default_trusted_certificate_ids below depend on this option being enabled.

9.1.17. guestfs

The following table outlines the options available under the [guestfs] group in the /etc/nova/nova.conf file.

Table 9.16. guestfs
Configuration option = Default valueTypeDescription

debug = False

boolean value

Enables/disables guestfs logging.

This configures guestfs to emit debug messages and push them to the OpenStack logging system. When set to True, it traces libguestfs API calls and enables verbose debug messages. To use this feature, the libguestfs package must be installed.

Related options:

Since libguestfs accesses and modifies VMs managed by libvirt, the following options should be set to grant access to those VMs:

  • libvirt.inject_key
  • libvirt.inject_partition
  • libvirt.inject_password

9.1.18. healthcheck

The following table outlines the options available under the [healthcheck] group in the /etc/nova/nova.conf file.

Table 9.17. healthcheck
Configuration option = Default valueTypeDescription

backends = []

list value

Additional backends that can perform health checks and report that information back as part of a request.

detailed = False

boolean value

Show more detailed information as part of the response. Security note: Enabling this option may expose sensitive details about the service being monitored. Be sure to verify that it will not violate your security policies.

disable_by_file_path = None

string value

Check the presence of a file to determine if an application is running on a port. Used by DisableByFileHealthcheck plugin.

disable_by_file_paths = []

list value

Check the presence of a file based on a port to determine if an application is running on a port. Expects a "port:path" list of strings. Used by DisableByFilesPortsHealthcheck plugin.

path = /healthcheck

string value

The path to respond to healthcheck requests on.

9.1.19. hyperv

The following table outlines the options available under the [hyperv] group in the /etc/nova/nova.conf file.

Table 9.18. hyperv
Configuration option = Default valueTypeDescription

config_drive_cdrom = False

boolean value

Mount config drive as a CD drive.

OpenStack can be configured to write instance metadata to a config drive, which is then attached to the instance before it boots. The config drive can be attached as a disk drive (default) or as a CD drive.

Related options:

  • This option is meaningful with the force_config_drive option set to True or when the REST API call to create an instance includes the --config-drive=True flag.
  • The config_drive_format option must be set to iso9660 in order to use a CD drive as the config drive image.
  • To use a config drive with Hyper-V, you must set the mkisofs_cmd value to the full path to an mkisofs.exe installation. Additionally, you must set the qemu_img_cmd value to the full path to a qemu-img command installation.
  • You can configure the Compute service to always create a configuration drive by setting the force_config_drive option to True.
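
A sketch of a CD-ROM config drive on Hyper-V; the tool paths are hypothetical and must point to real installations:

[DEFAULT]
force_config_drive = True
config_drive_format = iso9660
# Hypothetical path; point this at a real mkisofs.exe.
mkisofs_cmd = C:\Tools\mkisofs.exe

[hyperv]
config_drive_cdrom = True
# Hypothetical path; point this at a real qemu-img.exe.
qemu_img_cmd = C:\Tools\qemu-img.exe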

config_drive_inject_password = False

boolean value

Inject password to config drive.

When enabled, the admin password will be available from the config drive image.

Related options:

  • This option is meaningful when used with other options that enable config drive usage with Hyper-V, such as force_config_drive.

dynamic_memory_ratio = 1.0

floating point value

Dynamic memory ratio

Enables dynamic memory allocation (ballooning) when set to a value greater than 1. The value expresses the ratio between the total RAM assigned to an instance and its startup RAM amount. For example, a ratio of 2.0 for an instance with 1024MB of RAM implies 512MB of RAM allocated at startup.

Possible values:

  • 1.0: Disables dynamic memory allocation (Default).
  • Float values greater than 1.0: Enables dynamic memory allocation. The RAM allocated at startup is the instance’s total RAM divided by this value.

enable_instance_metrics_collection = False

boolean value

Enable instance metrics collection

Enables metrics collections for an instance by using Hyper-V’s metric APIs. Collected data can be retrieved by other apps and services, e.g.: Ceilometer.

enable_remotefx = False

boolean value

Enable RemoteFX feature

This requires at least one DirectX 11 capable graphics adapter for Windows / Hyper-V Server 2012 R2 or newer, and the RDS-Virtualization feature has to be enabled.

Instances with RemoteFX can be requested with the following flavor extra specs:

  • os:resolution. Guest VM screen resolution size. Acceptable values: 1024x768, 1280x1024, 1600x1200, 1920x1200, 2560x1600, 3840x2160. 3840x2160 is only available on Windows / Hyper-V Server 2016.
  • os:monitors. Guest VM number of monitors. Acceptable values: [1, 4] for Windows / Hyper-V Server 2012 R2; [1, 8] for Windows / Hyper-V Server 2016.
  • os:vram. Guest VM VRAM amount. Only available on Windows / Hyper-V Server 2016. Acceptable values: 64, 128, 256, 512, 1024.

`instances_path_share = `

string value

Instances path share

The name of a Windows share mapped to the "instances_path" dir and used by the resize feature to copy files to the target host. If left blank, an administrative share (hidden network share) will be used, looking for the same "instances_path" used locally.

Possible values:

  • "": An administrative share will be used (Default).
  • Name of a Windows share.

Related options:

  • "instances_path": The directory which will be used if this option here is left blank.

iscsi_initiator_list = []

list value

List of iSCSI initiators that will be used for establishing iSCSI sessions.

If none are specified, the Microsoft iSCSI initiator service will choose the initiator.

limit_cpu_features = False

boolean value

Limit CPU features

This flag is needed to support live migration to hosts with different CPU features and checked during instance creation in order to limit the CPU features used by the instance.

mounted_disk_query_retry_count = 10

integer value

Mounted disk query retry count

The number of times to retry checking for a mounted disk. The query runs until the device can be found or the retry count is reached.

Possible values:

  • Positive integer values. Values greater than 1 are recommended (Default: 10).

Related options:

  • Time interval between disk mount retries is declared with "mounted_disk_query_retry_interval" option.

mounted_disk_query_retry_interval = 5

integer value

Mounted disk query retry interval

Interval between checks for a mounted disk, in seconds.

Possible values:

  • Time in seconds (Default: 5).

Related options:

  • This option is meaningful when the mounted_disk_query_retry_count is greater than 1.
  • The retry loop runs with mounted_disk_query_retry_count and mounted_disk_query_retry_interval configuration options.

power_state_check_timeframe = 60

integer value

Power state check timeframe

The timeframe to be checked for instance power state changes. This option is used to fetch the state of the instance from Hyper-V through the WMI interface, within the specified timeframe.

Possible values:

  • Timeframe in seconds (Default: 60).

power_state_event_polling_interval = 2

integer value

Power state event polling interval

Instance power state change event polling frequency. Sets the listener interval for power state events to the given value. This option enhances the internal lifecycle notifications of instances that reboot themselves. It is unlikely that an operator has to change this value.

Possible values:

  • Time in seconds (Default: 2).

qemu_img_cmd = qemu-img.exe

string value

qemu-img command

qemu-img is required for some of the image related operations, like converting between different image types. You can get it from here: (http://qemu.weilnetz.de/) or you can install the Cloudbase OpenStack Hyper-V Compute Driver (https://cloudbase.it/openstack-hyperv-driver/), which automatically sets the proper path for this config option. You can either give the full path of qemu-img.exe or set its path in the PATH environment variable and leave this option at the default value.

Possible values:

  • Name of the qemu-img executable, in case it is in the same directory as the nova-compute service or its path is in the PATH environment variable (Default).
  • Path of qemu-img command (DRIVELETTER:\PATH\TO\QEMU-IMG\COMMAND).

Related options:

  • If the config_drive_cdrom option is False, qemu-img will be used to convert the ISO to a VHD, otherwise the config drive will remain an ISO. To use config drive with Hyper-V, you must set the mkisofs_cmd value to the full path to an mkisofs.exe installation.

use_multipath_io = False

boolean value

Use multipath connections when attaching iSCSI or FC disks.

This requires the Multipath IO Windows feature to be enabled. MPIO must be configured to claim such devices.

volume_attach_retry_count = 10

integer value

Volume attach retry count

The number of times to retry attaching a volume. Volume attachment is retried until success or the given retry count is reached.

Possible values:

  • Positive integer values (Default: 10).

Related options:

  • Time interval between attachment attempts is declared with volume_attach_retry_interval option.

volume_attach_retry_interval = 5

integer value

Volume attach retry interval

Interval between volume attachment attempts, in seconds.

Possible values:

  • Time in seconds (Default: 5).

Related options:

  • This option is meaningful when volume_attach_retry_count is greater than 1.
  • The retry loop runs with volume_attach_retry_count and volume_attach_retry_interval configuration options.

vswitch_name = None

string value

External virtual switch name

The Hyper-V Virtual Switch is a software-based layer-2 Ethernet network switch that is available with the installation of the Hyper-V server role. The switch includes programmatically managed and extensible capabilities to connect virtual machines to both virtual networks and the physical network. In addition, Hyper-V Virtual Switch provides policy enforcement for security, isolation, and service levels. The vSwitch represented by this config option must be an external one (not internal or private).

Possible values:

  • If not provided, the first of a list of available vswitches is used. This list is queried using WQL.
  • Virtual switch name.

wait_soft_reboot_seconds = 60

integer value

Wait soft reboot seconds

Number of seconds to wait for an instance to shut down after a soft reboot request is made. We fall back to hard reboot if the instance does not shut down within this window.

Possible values:

  • Time in seconds (Default: 60).

9.1.20. ironic

The following table outlines the options available under the [ironic] group in the /etc/nova/nova.conf file.

Table 9.19. ironic
Configuration option = Default valueTypeDescription

api_max_retries = 60

integer value

The number of times to retry when a request conflicts. If set to 0, only try once, no retries.

Related options:

  • api_retry_interval

api_retry_interval = 2

integer value

The number of seconds to wait before retrying the request.

Related options:

  • api_max_retries

auth-url = None

string value

Authentication URL

auth_section = None

string value

Config Section from which to load plugin specific options

auth_type = None

string value

Authentication type to load

cafile = None

string value

PEM encoded Certificate Authority to use when verifying HTTPs connections.

certfile = None

string value

PEM encoded client certificate cert file

collect-timing = False

boolean value

Collect per-API call timing information.

connect-retries = None

integer value

The maximum number of retries that should be attempted for connection errors.

connect-retry-delay = None

floating point value

Delay (in seconds) between two retries for connection errors. If not set, exponential retry starting with 0.5 seconds up to a maximum of 60 seconds is used.

domain-id = None

string value

Domain ID to scope to

domain-name = None

string value

Domain name to scope to

endpoint-override = None

string value

Always use this endpoint URL for requests for this client. NOTE: The unversioned endpoint should be specified here; to request a particular API version, use the version, min-version, and/or max-version options.

insecure = False

boolean value

Verify HTTPS connections.

keyfile = None

string value

PEM encoded client certificate key file

partition_key = None

string value

Case-insensitive key to limit the set of nodes that may be managed by this service to the set of nodes in Ironic which have a matching conductor_group property. If unset, all available nodes will be eligible to be managed by this service. Note that setting this to the empty string ("") will match the default conductor group, and is different than leaving the option unset.

password = None

string value

User’s password

peer_list = []

list value

List of hostnames for all nova-compute services (including this host) with this partition_key config value. Nodes matching the partition_key value will be distributed between all services specified here. If partition_key is unset, this option is ignored.
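
For example, to split the Ironic nodes in the hypothetical conductor group rack1 between two compute services:

[ironic]
partition_key = rack1
peer_list = compute1.example.com,compute2.example.com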

project-domain-id = None

string value

Domain ID containing project

project-domain-name = None

string value

Domain name containing project

project-id = None

string value

Project ID to scope to

project-name = None

string value

Project name to scope to

region-name = None

string value

The default region_name for endpoint URL discovery.

serial_console_state_timeout = 10

integer value

Timeout (seconds) to wait for node serial console state changed. Set to 0 to disable timeout.

service-name = None

string value

The default service_name for endpoint URL discovery.

service-type = baremetal

string value

The default service_type for endpoint URL discovery.

split-loggers = False

boolean value

Log requests to multiple loggers.

status-code-retries = None

integer value

The maximum number of retries that should be attempted for retriable HTTP status codes.

status-code-retry-delay = None

floating point value

Delay (in seconds) between two retries for retriable status codes. If not set, exponential retry starting with 0.5 seconds up to a maximum of 60 seconds is used.

system-scope = None

string value

Scope for system operations

timeout = None

integer value

Timeout value for http requests

trust-id = None

string value

Trust ID

user-domain-id = None

string value

User’s domain id

user-domain-name = None

string value

User’s domain name

user-id = None

string value

User ID

username = None

string value

Username

valid-interfaces = ['internal', 'public']

list value

List of interfaces, in order of preference, for endpoint URL.

9.1.21. key_manager

The following table outlines the options available under the [key_manager] group in the /etc/nova/nova.conf file.

Table 9.20. key_manager
Configuration option = Default valueTypeDescription

auth_type = None

string value

The type of authentication credential to create. Possible values are token, password, keystone_token, and keystone_password. Required if no context is passed to the credential factory.

auth_url = None

string value

Use this endpoint to connect to Keystone.

backend = barbican

string value

Specify the key manager implementation. Options are "barbican" and "vault". Default is "barbican". Will support the values earlier set using [key_manager]/api_class for some time.

domain_id = None

string value

Domain ID for domain scoping. Optional for keystone_token and keystone_password auth_type.

domain_name = None

string value

Domain name for domain scoping. Optional for keystone_token and keystone_password auth_type.

fixed_key = None

string value

Fixed key returned by key manager, specified in hex.

Possible values:

  • Empty string or a key in hex value

password = None

string value

Password for authentication. Required for password and keystone_password auth_type.

project_domain_id = None

string value

Project’s domain ID for project. Optional for keystone_token and keystone_password auth_type.

project_domain_name = None

string value

Project’s domain name for project. Optional for keystone_token and keystone_password auth_type.

project_id = None

string value

Project ID for project scoping. Optional for keystone_token and keystone_password auth_type.

project_name = None

string value

Project name for project scoping. Optional for keystone_token and keystone_password auth_type.

reauthenticate = True

boolean value

Allow fetching a new token if the current one is going to expire. Optional for keystone_token and keystone_password auth_type.

token = None

string value

Token for authentication. Required for token and keystone_token auth_type if no context is passed to the credential factory.

trust_id = None

string value

Trust ID for trust scoping. Optional for keystone_token and keystone_password auth_type.

user_domain_id = None

string value

User’s domain ID for authentication. Optional for keystone_token and keystone_password auth_type.

user_domain_name = None

string value

User’s domain name for authentication. Optional for keystone_token and keystone_password auth_type.

user_id = None

string value

User ID for authentication. Optional for keystone_token and keystone_password auth_type.

username = None

string value

Username for authentication. Required for password auth_type. Optional for the keystone_password auth_type.

9.1.22. keystone

The following table outlines the options available under the [keystone] group in the /etc/nova/nova.conf file.

Table 9.21. keystone
Configuration option = Default valueTypeDescription

cafile = None

string value

PEM encoded Certificate Authority to use when verifying HTTPS connections.

certfile = None

string value

PEM encoded client certificate cert file

collect-timing = False

boolean value

Collect per-API call timing information.

connect-retries = None

integer value

The maximum number of retries that should be attempted for connection errors.

connect-retry-delay = None

floating point value

Delay (in seconds) between two retries for connection errors. If not set, exponential retry starting with 0.5 seconds up to a maximum of 60 seconds is used.

endpoint-override = None

string value

Always use this endpoint URL for requests for this client. NOTE: The unversioned endpoint should be specified here; to request a particular API version, use the version, min-version, and/or max-version options.

insecure = False

boolean value

If True, skip verification of HTTPS connections (the server certificate will not be validated).

keyfile = None

string value

PEM encoded client certificate key file

region-name = None

string value

The default region_name for endpoint URL discovery.

service-name = None

string value

The default service_name for endpoint URL discovery.

service-type = identity

string value

The default service_type for endpoint URL discovery.

split-loggers = False

boolean value

Log requests to multiple loggers.

status-code-retries = None

integer value

The maximum number of retries that should be attempted for retriable HTTP status codes.

status-code-retry-delay = None

floating point value

Delay (in seconds) between two retries for retriable status codes. If not set, exponential retry starting with 0.5 seconds up to a maximum of 60 seconds is used.

timeout = None

integer value

Timeout value for HTTP requests

valid-interfaces = ['internal', 'public']

list value

List of interfaces, in order of preference, for endpoint URL.

9.1.23. keystone_authtoken

The following table outlines the options available under the [keystone_authtoken] group in the /etc/nova/nova.conf file.

Table 9.22. keystone_authtoken
Configuration option = Default valueTypeDescription

auth_section = None

string value

Config Section from which to load plugin specific options

auth_type = None

string value

Authentication type to load

auth_uri = None

string value

Complete "public" Identity API endpoint. This endpoint should not be an "admin" endpoint, as it should be accessible by all end users. Unauthenticated clients are redirected to this endpoint to authenticate. Although this endpoint should ideally be unversioned, client support in the wild varies. If you’re using a versioned v2 endpoint here, then this should not be the same endpoint the service user utilizes for validating tokens, because normal end users may not be able to reach that endpoint. This option is deprecated in favor of www_authenticate_uri and will be removed in the S release. Deprecated since: Queens

*Reason:*The auth_uri option is deprecated in favor of www_authenticate_uri and will be removed in the S release.

auth_version = None

string value

API version of the Identity API endpoint.

cache = None

string value

Request environment key where the Swift cache object is stored. When auth_token middleware is deployed with a Swift cache, use this option to have the middleware share a caching backend with swift. Otherwise, use the memcached_servers option instead.

cafile = None

string value

A PEM encoded Certificate Authority to use when verifying HTTPS connections. Defaults to system CAs.

certfile = None

string value

Required if identity server requires client certificate

delay_auth_decision = False

boolean value

Do not handle authorization requests within the middleware, but delegate the authorization decision to downstream WSGI components.

enforce_token_bind = permissive

string value

Used to control the use and type of token binding. Can be set to: "disabled", to not check token binding; "permissive" (the default), to validate binding information if the bind type is of a form known to the server and ignore it otherwise; "strict", like "permissive" but rejecting the token if the bind type is unknown; "required", to require some form of token binding; or the name of a binding method that must be present in tokens.

http_connect_timeout = None

integer value

Request timeout value for communicating with Identity API server.

http_request_max_retries = 3

integer value

The number of times to retry when communicating with the Identity API server.

include_service_catalog = True

boolean value

(Optional) Indicate whether to set the X-Service-Catalog header. If False, middleware will not ask for service catalog on token validation and will not set the X-Service-Catalog header.

insecure = False

boolean value

If True, skip verification of HTTPS connections (the server certificate will not be validated).

interface = admin

string value

Interface to use for the Identity API endpoint. Valid values are "public", "internal" or "admin" (default).

keyfile = None

string value

Required if identity server requires client certificate

memcache_pool_conn_get_timeout = 10

integer value

(Optional) Number of seconds that an operation will wait to get a memcached client connection from the pool.

memcache_pool_dead_retry = 300

integer value

(Optional) Number of seconds memcached server is considered dead before it is tried again.

memcache_pool_maxsize = 10

integer value

(Optional) Maximum total number of open connections to every memcached server.

memcache_pool_socket_timeout = 3

integer value

(Optional) Socket timeout in seconds for communicating with a memcached server.

memcache_pool_unused_timeout = 60

integer value

(Optional) Number of seconds a connection to memcached is held unused in the pool before it is closed.

memcache_secret_key = None

string value

(Optional, mandatory if memcache_security_strategy is defined) This string is used for key derivation.

memcache_security_strategy = None

string value

(Optional) If defined, indicate whether token data should be authenticated or authenticated and encrypted. If MAC, token data is authenticated (with HMAC) in the cache. If ENCRYPT, token data is encrypted and authenticated in the cache. If the value is not one of these options or empty, auth_token will raise an exception on initialization.

memcache_use_advanced_pool = False

boolean value

(Optional) Use the advanced (eventlet safe) memcached client pool. The advanced pool will only work under Python 2.x.

memcached_servers = None

list value

Optionally specify a list of memcached server(s) to use for caching. If left undefined, tokens will instead be cached in-process.

region_name = None

string value

The region in which the identity server can be found.

service_token_roles = ['service']

list value

A choice of roles that must be present in a service token. Service tokens are allowed to request that an expired token can be used, so this check should tightly control that only actual services send this token. Roles here are applied as an ANY check, so a token needs to carry only one of the roles in this list. For backwards compatibility reasons this currently only affects the allow_expired check.

service_token_roles_required = False

boolean value

For backwards compatibility reasons, valid service tokens that fail the service_token_roles check are still accepted as valid. Setting this to True will become the default in a future release and should be enabled if possible.

service_type = None

string value

The name or type of the service as it appears in the service catalog. This is used to validate tokens that have restricted access rules.

token_cache_time = 300

integer value

In order to prevent excessive effort spent validating tokens, the middleware caches previously-seen tokens for a configurable duration (in seconds). Set to -1 to disable caching completely.

www_authenticate_uri = None

string value

Complete "public" Identity API endpoint. This endpoint should not be an "admin" endpoint, as it should be accessible by all end users. Unauthenticated clients are redirected to this endpoint to authenticate. Although this endpoint should ideally be unversioned, client support in the wild varies. If you’re using a versioned v2 endpoint here, then this should not be the same endpoint the service user utilizes for validating tokens, because normal end users may not be able to reach that endpoint.

9.1.24. libvirt

The following table outlines the options available under the [libvirt] group in the /etc/nova/nova.conf file.

Table 9.23. libvirt
Configuration option = Default valueTypeDescription

`connection_uri = `

string value

Overrides the default libvirt URI of the chosen virtualization type.

If set, Nova will use this URI to connect to libvirt.

Possible values:

  • A URI such as qemu:///system or xen+ssh://oirase/. This is only necessary if the URI differs from the commonly known URIs for the chosen virtualization type.

Related options:

  • virt_type: Influences what is used as default value here.

cpu_mode = None

string value

Used to set the CPU mode an instance should have.

If virt_type is kvm or qemu, it will default to host-model, otherwise it will default to none.

Related options:

  • cpu_models: This should be set ONLY when cpu_mode is set to custom. Otherwise, it would result in an error and the instance launch will fail.

cpu_model_extra_flags = []

list value

Enable or disable guest CPU flags.

To explicitly enable or disable CPU flags, use the +flag or -flag notation — the + sign will enable the CPU flag for the guest, while a - sign will disable it. If neither + nor - is specified, the flag will be enabled, which is the default behaviour. For example, if you specify the following (assuming the said CPU model and features are supported by the host hardware and software):

[libvirt]
cpu_mode = custom
cpu_models = Cascadelake-Server
cpu_model_extra_flags = -hle, -rtm, +ssbd, mtrr

Nova will disable the hle and rtm flags for the guest; and it will enable ssbd and mtrr (because they were specified with neither a + nor a - prefix).

The CPU flags are case-insensitive. In the following example, the pdpe1gb flag will be disabled for the guest; vmx and pcid flags will be enabled:

[libvirt]
cpu_mode = custom
cpu_models = Haswell-noTSX-IBRS
cpu_model_extra_flags = -PDPE1GB, +VMX, pcid

Specifying extra CPU flags is valid in combination with all three possible values of the cpu_mode config attribute: custom (this also requires an explicit CPU model to be specified via the cpu_models config attribute), host-model, or host-passthrough.

There can be scenarios where you may need to configure extra CPU flags even for host-passthrough CPU mode, because sometimes QEMU may disable certain CPU features. An example of this is Intel’s "invtsc" (Invariable Time Stamp Counter) CPU flag — if you need to expose this flag to a Nova instance, you need to explicitly enable it.

The possible values for cpu_model_extra_flags depend on the CPU model in use. Refer to /usr/share/libvirt/cpu_map/*.xml for possible CPU feature flags for a given CPU model.

A special note on a particular CPU flag: pcid (an Intel processor feature that alleviates guest performance degradation as a result of applying the Meltdown CVE fixes). When configuring this flag with the custom CPU mode, not all CPU models (as defined by QEMU and libvirt) need it:

  • The only virtual CPU models that include the pcid capability are Intel "Haswell", "Broadwell", and "Skylake" variants.
  • The libvirt / QEMU CPU models "Nehalem", "Westmere", "SandyBridge", and "IvyBridge" will not expose the pcid capability by default, even if the host CPUs by the same name include it. I.e. PCID needs to be explicitly specified when using the said virtual CPU models.

The libvirt driver’s default CPU mode, host-model, will do the right thing with respect to handling PCID CPU flag for the guest — assuming you are running updated processor microcode, host and guest kernel, libvirt, and QEMU. The other mode, host-passthrough, checks if PCID is available in the hardware, and if so directly passes it through to the Nova guests. Thus, in context of PCID, with either of these CPU modes (host-model or host-passthrough), there is no need to use the cpu_model_extra_flags.

Related options:

  • cpu_mode
  • cpu_models

cpu_models = []

list value

An ordered list of CPU models the host supports.

It is expected that the list is ordered so that the more common and less advanced CPU models are listed earlier. Here is an example: SandyBridge,IvyBridge,Haswell,Broadwell; each later CPU model has a richer feature set than the previous one.

Possible values:

  • The named CPU models can be found via virsh cpu-models ARCH, where ARCH is your host architecture.

Related options:

  • cpu_mode: This should be set to custom ONLY when you want to configure (via cpu_models) a specific named CPU model. Otherwise, it would result in an error and the instance launch will fail.
  • virt_type: Only the virtualization types kvm and qemu use this.

    Note: Be careful to only specify models which can be fully supported in hardware.

disk_cachemodes = []

list value

Specific cache modes to use for different disk types.

For example: file=directsync,block=none,network=writeback

For local or direct-attached storage, it is recommended that you use writethrough (default) mode, as it ensures data integrity and has acceptable I/O performance for applications running in the guest, especially for read operations. However, caching mode none is recommended for remote NFS storage, because direct I/O operations (O_DIRECT) perform better than synchronous I/O operations (with O_SYNC). Caching mode none effectively turns all guest I/O operations into direct I/O operations on the host, which is the NFS client in this environment.

Possible cache modes:

  • default: "It Depends" — For Nova-managed disks, none, if the host file system is capable of Linux’s O_DIRECT semantics; otherwise writeback. For volume drivers, the default is driver-dependent: none for everything except for SMBFS and Virtuzzo (which use writeback).
  • none: With caching mode set to none, the host page cache is disabled, but the disk write cache is enabled for the guest. In this mode, the write performance in the guest is optimal because write operations bypass the host page cache and go directly to the disk write cache. If the disk write cache is battery-backed, or if the applications or storage stack in the guest transfer data properly (either through fsync operations or file system barriers), then data integrity can be ensured. However, because the host page cache is disabled, the read performance in the guest would not be as good as in the modes where the host page cache is enabled, such as writethrough mode. Shareable disk devices, like for a multi-attachable block storage volume, will have their cache mode set to none regardless of configuration.
  • writethrough: With caching set to writethrough mode, the host page cache is enabled, but the disk write cache is disabled for the guest. Consequently, this caching mode ensures data integrity even if the applications and storage stack in the guest do not transfer data to permanent storage properly (either through fsync operations or file system barriers). Because the host page cache is enabled in this mode, the read performance for applications running in the guest is generally better. However, the write performance might be reduced because the disk write cache is disabled.
  • writeback: With caching set to writeback mode, both the host page cache and the disk write cache are enabled for the guest. Because of this, the I/O performance for applications running in the guest is good, but the data is not protected in a power failure. As a result, this caching mode is recommended only for temporary data where potential data loss is not a concern. NOTE: Certain backend disk mechanisms may provide safe writeback cache semantics. Specifically those that bypass the host page cache, such as QEMU’s integrated RBD driver. Ceph documentation recommends setting this to writeback for maximum performance while maintaining data safety.
  • directsync: Like "writethrough", but it bypasses the host page cache.
  • unsafe: Caching mode of unsafe ignores cache transfer operations completely. As its name implies, this caching mode should be used only for temporary data where data loss is not a concern. This mode can be useful for speeding up guest installations, but you should switch to another caching mode in production environments.
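
As a sketch of how these cache modes are applied per disk type (following the Ceph guidance in the writeback note above), an RBD-backed host might set the following; the assumption that images_type = rbd is in use is illustrative:

[libvirt]
images_type = rbd
# writeback is considered safe here only because QEMU's integrated RBD driver bypasses the host page cache.
disk_cachemodes = network=writeback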

disk_prefix = None

string value

Override the default disk prefix for the devices attached to an instance.

If set, this is used to identify a free disk device name for a bus.

Possible values:

  • Any prefix which will result in a valid disk device name like sda or hda for example. This is only necessary if the device names differ from the commonly known device name prefixes for a virtualization type such as: sd, xvd, uvd, vd.

Related options:

  • virt_type: Influences which device type is used, which determines the default disk prefix.

enabled_perf_events = []

list value

This allows you to specify a list of events to monitor the low-level performance of guests, and to collect related statistics via the libvirt driver, which in turn uses the Linux kernel’s perf infrastructure. With this config attribute set, Nova will generate libvirt guest XML to monitor the specified events. For more information, refer to the "Performance monitoring events" section here: https://libvirt.org/formatdomain.html#elementsPerf. And here: https://libvirt.org/html/libvirt-libvirt-domain.html — look for VIR_PERF_PARAM_*

For example, to monitor the count of CPU cycles (total/elapsed) and the count of cache misses, enable them as follows:

[libvirt]
enabled_perf_events = cpu_clock, cache_misses

Possible values: A string list. The list of supported events can be found here: https://libvirt.org/formatdomain.html#elementsPerf.

Note that support for Intel CMT events (cmt, mbmbt, mbml) is deprecated, and will be removed in the "Stein" release. That’s because the upstream Linux kernel (from 4.14 onwards) has deleted support for Intel CMT, because it is broken by design.

file_backed_memory = 0

integer value

Available capacity in MiB for file-backed memory.

Set to 0 to disable file-backed memory.

When enabled, instances will create memory files in the directory specified in /etc/libvirt/qemu.conf's memory_backing_dir option. The default location is /var/lib/libvirt/qemu/ram.

When enabled, the value defined for this option is reported as the node memory capacity. Compute node system memory will be used as a cache for file-backed memory, via the kernel’s pagecache mechanism.

Note: This feature is not compatible with hugepages.

Note: This feature is not compatible with memory overcommit.

Related options:

  • virt_type must be set to kvm or qemu.
  • ram_allocation_ratio must be set to 1.0.
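
For example, to advertise 1 TiB of file-backed memory on a KVM host (the capacity is an arbitrary illustration; ram_allocation_ratio is assumed to be configured in [DEFAULT] as in a standard deployment):

[DEFAULT]
ram_allocation_ratio = 1.0

[libvirt]
virt_type = kvm
# 1048576 MiB = 1 TiB of file-backed memory capacity (illustrative value).
file_backed_memory = 1048576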

gid_maps = []

list value

List of GID targets and ranges. Syntax is guest-gid:host-gid:count. Maximum of 5 allowed.

hw_disk_discard = None

string value

Discard option for nova managed disks.

Requires:

  • Libvirt >= 1.0.6
  • Qemu >= 1.5 (raw format)
  • Qemu >= 1.6 (qcow2 format)
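
For example, to pass discard (TRIM/UNMAP) requests from guests through to the backing storage, assuming your libvirt and QEMU versions meet the requirements above:

[libvirt]
# unmap enables discard; the other supported value is ignore.
hw_disk_discard = unmap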

hw_machine_type = None

list value

For qemu or KVM guests, set this option to specify a default machine type per host architecture. You can find a list of supported machine types in your environment by checking the output of the virsh capabilities command. The format of the value for this config option is host-arch=machine-type. For example: x86_64=machinetype1,armv7l=machinetype2.
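
For instance, to default x86_64 guests to the q35 machine type (q35 is shown for illustration; the machine type must appear in your host’s virsh capabilities output):

[libvirt]
hw_machine_type = x86_64=q35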

`images_rbd_ceph_conf = `

string value

Path to the ceph configuration file to use

images_rbd_pool = rbd

string value

The RADOS pool in which rbd volumes are stored

images_type = default

string value

VM Images format.

If default is specified, then use_cow_images flag is used instead of this one.

Related options:

  • compute.use_cow_images
  • images_volume_group
  • [workarounds]/ensure_libvirt_rbd_instance_dir_cleanup
  • compute.force_raw_images

images_volume_group = None

string value

LVM Volume Group that is used for VM images, when you specify images_type=lvm

Related options:

  • images_type
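
A minimal LVM-backed sketch might look as follows; nova_vg is a hypothetical volume group that must already exist on the compute node:

[libvirt]
images_type = lvm
# Hypothetical pre-created volume group name.
images_volume_group = nova_vg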

inject_key = False

boolean value

Allow the injection of an SSH key at boot time.

There is no agent needed within the image to do this. If libguestfs is available on the host, it will be used. Otherwise nbd is used. The file system of the image will be mounted and the SSH key, which is provided in the REST API call, will be injected as the SSH key for the root user and appended to that user’s authorized_keys. The SELinux context will be set if necessary. Be aware that the injection is not possible when the instance gets launched from a volume.

This config option will enable directly modifying the instance disk and does not affect what cloud-init may do using data from config_drive option or the metadata service.

Linux distribution guest only.

Related options:

  • inject_partition: That option decides how the file system is discovered and used for injection. It can also disable injection entirely.

inject_partition = -2

integer value

Determines how the file system is chosen for injecting data into it.

libguestfs will be used as the first solution to inject data. If it is not available on the host, the image will be locally mounted on the host as a fallback solution. If libguestfs cannot determine the root partition (because there is more than one, or none at all) or cannot mount the file system, the result is an error and the instance won’t boot.

Possible values:

  • -2 ⇒ disable the injection of data.
  • -1 ⇒ find the root partition with the file system to mount with libguestfs
  • 0 ⇒ The image is not partitioned
  • >0 ⇒ The number of the partition to use for the injection

Linux distribution guest only.

Related options:

  • inject_key: Injection of an SSH key only works if inject_partition is set to a value greater than or equal to -1.
  • inject_password: Injection of an admin password only works if inject_partition is set to a value greater than or equal to -1.
  • guestfs: You can enable the debug log level of libguestfs with this config option. More verbose output will help in debugging issues.
  • virt_type: If you use lxc as virt_type, the image is treated as a single-partition image.

inject_password = False

boolean value

Allow the injection of an admin password for the instance, only during the create and rebuild process.

There is no agent needed within the image to do this. If libguestfs is available on the host, it will be used. Otherwise nbd is used. The file system of the image will be mounted and the admin password, which is provided in the REST API call, will be injected as the password for the root user. If no root user is available, the instance won’t be launched and an error is thrown. Be aware that the injection is not possible when the instance gets launched from a volume.

Linux distribution guest only.

Possible values:

  • True: Allows the injection.
  • False: Disallows the injection. Any admin password provided via the REST API will be silently ignored.

Related options:

  • inject_partition: That option decides how the file system is discovered and used for injection. It can also disable injection entirely.

iscsi_iface = None

string value

The iSCSI transport iface to use to connect to target in case offload support is desired.

Default format is of the form <transport_name>.<hwaddress> where <transport_name> is one of (be2iscsi, bnx2i, cxgb3i, cxgb4i, qla4xxx, ocs) and <hwaddress> is the MAC address of the interface and can be generated via the iscsiadm -m iface command. Do not confuse the iscsi_iface parameter to be provided here with the actual transport name.

iser_use_multipath = False

boolean value

Use multipath connection of the iSER volume.

iSER volumes can be connected as multipath devices. This will provide high availability and fault tolerance.

live_migration_bandwidth = 0

integer value

Maximum bandwidth (in MiB/s) to be used during migration.

If set to 0, the hypervisor will choose a suitable default. Some hypervisors do not support this feature and will return an error if bandwidth is not 0. Please refer to the libvirt documentation for further details.

live_migration_completion_timeout = 800

integer value

Time to wait, in seconds, for migration to successfully complete transferring data before aborting the operation.

The value is per GiB of guest RAM + disk to be transferred, with a lower bound of 2 GiB. It should usually be larger than downtime delay * downtime steps. Set to 0 to disable timeouts.

Related options:

  • live_migration_downtime
  • live_migration_downtime_steps
  • live_migration_downtime_delay

live_migration_downtime = 500

integer value

Maximum permitted downtime, in milliseconds, for live migration switchover.

Will be rounded up to a minimum of 100ms. You can increase this value if you want to allow live-migrations to complete faster, or avoid live-migration timeout errors by allowing the guest to be paused for longer during the live-migration switch over.

Related options:

  • live_migration_completion_timeout

live_migration_downtime_delay = 75

integer value

Time to wait, in seconds, between each step increase of the migration downtime.

The minimum delay is 3 seconds. The value is per GiB of guest RAM + disk to be transferred, with a lower bound of 2 GiB per device.

live_migration_downtime_steps = 10

integer value

Number of incremental steps to reach max downtime value.

Will be rounded up to a minimum of 3 steps.

live_migration_inbound_addr = None

host address value

Target used for live migration traffic.

If this option is set to None, the hostname of the migration target compute node will be used.

This option is useful in environments where the live-migration traffic can impact the network plane significantly. A separate network for live-migration traffic can then use this config option and avoids the impact on the management network.

Related options:

  • live_migration_tunnelled: The live_migration_inbound_addr value is ignored if tunneling is enabled.

live_migration_permit_auto_converge = False

boolean value

This option allows nova to start live migration with auto converge on.

Auto converge throttles down the CPU if progress of the on-going live migration is slow. Auto converge will only be used if this flag is set to True and post copy is not permitted, or post copy is unavailable due to the version of libvirt and QEMU in use.

Related options:

  • live_migration_permit_post_copy

live_migration_permit_post_copy = False

boolean value

This option allows nova to switch an on-going live migration to post-copy mode, i.e., switch the active VM to the one on the destination node before the migration is complete, therefore ensuring an upper bound on the memory that needs to be transferred. Post-copy requires libvirt>=1.3.3 and QEMU>=2.5.0.

When permitted, post-copy mode will be automatically activated if we reach the timeout defined by live_migration_completion_timeout and live_migration_timeout_action is set to force_complete. Note that if you disable the timeout (live_migration_completion_timeout = 0) or choose the abort action, there will be no automatic switch to post-copy.

The live-migration force complete API also uses post-copy when permitted. If post-copy mode is not available, force complete falls back to pausing the VM to ensure the live-migration operation will complete.

When using post-copy mode, if the source and destination hosts lose network connectivity, the VM being live-migrated will need to be rebooted. For more details, please see the Administration guide.

Related options:

  • live_migration_permit_auto_converge
  • live_migration_timeout_action

live_migration_scheme = None

string value

URI scheme used for live migration.

Override the default libvirt live migration scheme (which is dependent on virt_type). If this option is set to None, nova will automatically choose a sensible default based on the hypervisor. It is not recommended that you change this unless you are very sure that the hypervisor supports a particular scheme.

Related options:

  • virt_type: This option is meaningful only when virt_type is set to kvm or qemu.
  • live_migration_uri: If live_migration_uri value is not None, the scheme used for live migration is taken from live_migration_uri instead.

live_migration_timeout_action = abort

string value

This option will be used to determine what action will be taken against a VM after live_migration_completion_timeout expires. By default, the live migrate operation will be aborted after completion timeout. If it is set to force_complete, the compute service will either pause the VM or trigger post-copy depending on if post copy is enabled and available (live_migration_permit_post_copy is set to True).

Related options:

  • live_migration_completion_timeout
  • live_migration_permit_post_copy
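
Putting the timeout options together, the following sketch switches a slow migration to post-copy (when available) rather than aborting it; the timeout value shown is simply the documented default, repeated for clarity:

[libvirt]
live_migration_permit_post_copy = True
# Per-GiB completion timeout in seconds; 800 is the documented default.
live_migration_completion_timeout = 800
live_migration_timeout_action = force_complete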

live_migration_tunnelled = False

boolean value

Enable tunnelled migration.

This option enables the tunnelled migration feature, where migration data is transported over the libvirtd connection. If enabled, we use the VIR_MIGRATE_TUNNELLED migration flag, avoiding the need to configure the network to allow direct hypervisor to hypervisor communication. If False, use the native transport. If not set, Nova will choose a sensible default based on, for example, the availability of native encryption support in the hypervisor. Enabling this option has a significant negative impact on performance.

Note that this option is NOT compatible with use of block migration.

Related options:

  • live_migration_inbound_addr: The live_migration_inbound_addr value is ignored if tunneling is enabled.

live_migration_uri = None

string value

Live migration target URI to use.

Override the default libvirt live migration target URI (which is dependent on virt_type). Any included "%s" is replaced with the migration target hostname.

If this option is set to None (which is the default), Nova will automatically generate the live_migration_uri value based only on the following 4 supported virt_type values:

  • kvm: qemu+tcp://%s/system
  • qemu: qemu+tcp://%s/system
  • xen: xenmigr://%s/system
  • parallels: parallels+tcp://%s/system

Related options:

  • live_migration_inbound_addr: If live_migration_inbound_addr value is not None and live_migration_tunnelled is False, the ip/hostname address of target compute node is used instead of live_migration_uri as the uri for live migration.
  • live_migration_scheme: If live_migration_uri is not set, the scheme used for live migration is taken from live_migration_scheme instead.

Deprecated since: 15.0.0

Reason: live_migration_uri is deprecated for removal in favor of two other options that allow changing the live migration scheme and target URI: live_migration_scheme and live_migration_inbound_addr respectively.

live_migration_with_native_tls = False

boolean value

Use QEMU-native TLS encryption when live migrating.

This option will allow both migration stream (guest RAM plus device state) and disk stream to be transported over native TLS, i.e. TLS support built into QEMU.

Prerequisite: TLS environment is configured correctly on all relevant Compute nodes. This means, Certificate Authority (CA), server, client certificates, their corresponding keys, and their file permissions are in place, and are validated.

Notes:

  • To have encryption for migration stream and disk stream (also called: "block migration"), live_migration_with_native_tls is the preferred config attribute instead of live_migration_tunnelled.
  • The live_migration_tunnelled will be deprecated in the long-term for two main reasons: (a) it incurs a huge performance penalty; and (b) it is not compatible with block migration. Therefore, if your compute nodes have at least libvirt 4.4.0 and QEMU 2.11.0, it is strongly recommended to use live_migration_with_native_tls.
  • The live_migration_tunnelled and live_migration_with_native_tls should not be used at the same time.
  • Unlike live_migration_tunnelled, the live_migration_with_native_tls is compatible with block migration. That is, with this option, NBD stream, over which disks are migrated to a target host, will be encrypted.

Related options:

  • live_migration_tunnelled: This transports migration stream (but not disk stream) over libvirtd.

max_queues = None

integer value

The maximum number of virtio queue pairs that can be enabled when creating a multiqueue guest. The number of virtio queues allocated will be the lesser of the CPUs requested by the guest and the max value defined. By default, this value is set to None, meaning the legacy limits based on the reported kernel major version will be used.

mem_stats_period_seconds = 10

integer value

The period, in seconds, for collecting memory usage statistics. A zero or negative value disables memory usage statistics.

nfs_mount_options = None

string value

Mount options passed to the NFS client. See the nfs(5) man page for details.

Mount options control the way the filesystem is mounted and how the NFS client behaves when accessing files on this mount point.

Possible values:

  • Any string representing mount options separated by commas.
  • Example string: vers=3,lookupcache=pos

nfs_mount_point_base = $state_path/mnt

string value

Directory where the NFS volume is mounted on the compute node. The default is the mnt directory under nova’s state path.

NFS provides shared storage for the OpenStack Block Storage service.

Possible values:

  • A string representing absolute path of mount point.

num_aoe_discover_tries = 3

integer value

Number of times to rediscover AoE target to find volume.

Nova provides support for block storage attaching to hosts via AOE (ATA over Ethernet). This option allows the user to specify the maximum number of retry attempts that can be made to discover the AoE device.

num_iser_scan_tries = 5

integer value

Number of times to scan iSER target to find volume.

iSER is a server network protocol that extends iSCSI protocol to use Remote Direct Memory Access (RDMA). This option allows the user to specify the maximum number of scan attempts that can be made to find iSER volume.

num_memory_encrypted_guests = None

integer value

Maximum number of guests with encrypted memory which can run concurrently on this compute host.

For now this is only relevant for AMD machines which support SEV (Secure Encrypted Virtualization). Such machines have a limited number of slots in their memory controller for storing encryption keys. Each running guest with encrypted memory will consume one of these slots.

The option may be reused for other equivalent technologies in the future. If the machine does not support memory encryption, the option will be ignored and inventory will be set to 0.

If the machine does support memory encryption, for now a value of None means an effectively unlimited inventory, i.e. no limit will be imposed by Nova on the number of SEV guests which can be launched, even though the underlying hardware will enforce its own limit. However it is expected that in the future, auto-detection of the inventory from the hardware will become possible, at which point None will cause auto-detection to automatically impose the correct limit.

Note: It is recommended to read the deployment documentation’s section on this option (num_memory_encrypted_guests) before deciding whether to configure this setting or leave it at the default.

Related options:

  • libvirt.virt_type must be set to kvm.
  • It’s recommended to consider including x86_64=q35 in libvirt.hw_machine_type; see the documentation on deploying SEV-capable infrastructure for more on this.

num_nvme_discover_tries = 5

integer value

Number of times to rediscover NVMe target to find volume

Nova provides support for block storage attaching to hosts via NVMe (Non-Volatile Memory Express). This option allows the user to specify the maximum number of retry attempts that can be made to discover the NVMe device.

num_pcie_ports = 0

integer value

The number of PCIe ports an instance will get.

Libvirt allows a custom number of PCIe ports (pcie-root-port controllers) a target instance will get. Some will be used by default; the rest will be available for hotplug use.

By default we have just 1-2 free ports which limits hotplug.

More info: https://github.com/qemu/qemu/blob/master/docs/pcie.txt

Due to QEMU limitations, the maximum value for aarch64/virt is 28.

The default value of 0 leaves the calculation of the number of ports to libvirt.

num_volume_scan_tries = 5

integer value

Number of times to scan given storage protocol to find volume.

pmem_namespaces = []

list value

Configure persistent memory (pmem) namespaces.

These namespaces must already have been created on the host. This config option uses the following format:

"$LABEL:$NSNAME[|$NSNAME][,$LABEL:$NSNAME[|$NSNAME]]"

  • $NSNAME is the name of the pmem namespace.
  • $LABEL represents one resource class; it is used to generate the resource class name as CUSTOM_PMEM_NAMESPACE_$LABEL.

For example:

[libvirt]
pmem_namespaces = 128G:ns0|ns1|ns2|ns3,262144MB:ns4|ns5,MEDIUM:ns6|ns7

quobyte_client_cfg = None

string value

Path to a Quobyte Client configuration file.

quobyte_mount_point_base = $state_path/mnt

string value

Directory where the Quobyte volume is mounted on the compute node.

Nova supports the Quobyte volume driver, which enables storing Block Storage service volumes on a Quobyte storage back end. This option specifies the path of the directory where the Quobyte volume is mounted.

Possible values:

  • A string representing absolute path of mount point.

rbd_connect_timeout = 5

integer value

The RADOS client timeout in seconds when initially connecting to the cluster.

rbd_secret_uuid = None

string value

The libvirt UUID of the secret for the rbd_user volumes.

rbd_user = None

string value

The RADOS client name for accessing rbd (RADOS Block Devices) volumes.

Libvirt will refer to this user when connecting and authenticating with the Ceph RBD server.
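
Combining this with the images_rbd_* options above, a Ceph-backed ephemeral storage sketch might look like the following; the pool name, client name, and secret UUID are placeholders that must match your Ceph cluster and the libvirt secret you created:

[libvirt]
images_type = rbd
images_rbd_pool = vms
images_rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_user = nova
# Placeholder UUID of the pre-defined libvirt secret holding the Ceph key.
rbd_secret_uuid = 11111111-2222-3333-4444-555555555555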

realtime_scheduler_priority = 1

integer value

In a realtime host context, vCPUs for the guest will run at this scheduling priority. The valid priority range depends on the host kernel (usually 1-99).

remote_filesystem_transport = ssh

string value

libvirt’s transport method for remote file operations.

Because libvirt cannot use RPC to copy files over the network to/from other compute nodes, another method must be used for:

  • creating directory on remote host
  • creating file on remote host
  • removing file from remote host
  • copying file to remote host

remove_unused_resized_minimum_age_seconds = 3600

integer value

Unused resized base images younger than this will not be removed

rescue_image_id = None

string value

The ID of the image to boot from to rescue data from a corrupted instance.

If the rescue REST API operation doesn’t provide an ID of an image to use, the image which is referenced by this ID is used. If this option is not set, the image from the instance is used.

Possible values:

  • An ID of an image or nothing. If it points to an Amazon Machine Image (AMI), consider setting the config options rescue_kernel_id and rescue_ramdisk_id too. If nothing is set, the image of the instance is used.

Related options:

  • rescue_kernel_id: If the chosen rescue image allows the separate definition of its kernel disk, the value of this option is used, if specified. This is the case when Amazon's AMI/AKI/ARI image format is used for the rescue image.
  • rescue_ramdisk_id: If the chosen rescue image allows the separate definition of its RAM disk, the value of this option is used, if specified. This is the case when Amazon's AMI/AKI/ARI image format is used for the rescue image.

rescue_kernel_id = None

string value

The ID of the kernel (AKI) image to use with the rescue image.

If the chosen rescue image allows the separate definition of its kernel disk, the value of this option is used, if specified. This is the case when Amazon's AMI/AKI/ARI image format is used for the rescue image.

Possible values:

  • An ID of a kernel image or nothing. If nothing is specified, the kernel disk from the instance is used if it was launched with one.

Related options:

  • rescue_image_id: If that option points to an image in Amazon's AMI/AKI/ARI image format, it’s useful to use rescue_kernel_id too.

rescue_ramdisk_id = None

string value

The ID of the RAM disk (ARI) image to use with the rescue image.

If the chosen rescue image allows the separate definition of its RAM disk, the value of this option is used, if specified. This is the case when Amazon's AMI/AKI/ARI image format is used for the rescue image.

Possible values:

  • An ID of a RAM disk image or nothing. If nothing is specified, the RAM disk from the instance is used if it was launched with one.

Related options:

  • rescue_image_id: If that option points to an image in Amazon's AMI/AKI/ARI image format, it’s useful to use rescue_ramdisk_id too.

rng_dev_path = /dev/urandom

string value

The path to an RNG (Random Number Generator) device that will be used as the source of entropy on the host. Since libvirt 1.3.4, any path (that returns random numbers when read) is accepted. The recommended source of entropy is /dev/urandom — it is non-blocking, therefore relatively fast; and avoids the limitations of /dev/random, which is a legacy interface. For more details (and comparison between different RNG sources), refer to the "Usage" section in the Linux kernel API documentation for [u]random: http://man7.org/linux/man-pages/man4/urandom.4.html and http://man7.org/linux/man-pages/man7/random.7.html.

rx_queue_size = None

integer value

Configure virtio rx queue size.

This option is only usable for virtio-net devices with the vhost and vhost-user backends. Available only with QEMU/KVM. Requires libvirt v2.3 and QEMU v2.7.

`smbfs_mount_options = `

string value

Mount options passed to the SMBFS client.

Provide SMBFS options as a single string containing all parameters. See mount.cifs man page for details. Note that the libvirt-qemu uid and gid must be specified.

smbfs_mount_point_base = $state_path/mnt

string value

Directory where the SMBFS shares are mounted on the compute node.

snapshot_compression = False

boolean value

Enable snapshot compression for qcow2 images.

Note: you can set snapshot_image_format to qcow2 to force all snapshots to be in qcow2 format, independently from their original image type.

Related options:

  • snapshot_image_format

snapshot_image_format = None

string value

Determine the snapshot image format when sending to the image service.

If set, this decides what format is used when sending the snapshot to the image service. If not set, defaults to same type as source image.

snapshots_directory = $instances_path/snapshots

string value

Location where the libvirt driver will store snapshots before uploading them to the image service

sparse_logical_volumes = False

boolean value

Create sparse logical volumes (with virtualsize) if this flag is set to True.

Deprecated since: 18.0.0

Reason: Sparse logical volumes is a feature that is not tested hence not supported. LVM logical volumes are preallocated by default. If you want thin provisioning, use Cinder thin-provisioned volumes.

sysinfo_serial = unique

string value

The data source used to populate the host "serial" UUID exposed to the guest in the virtual BIOS. All choices except unique will change the serial when migrating the instance to another host. Changing the choice of this option will also affect existing instances on this host once they are stopped and started again. It is recommended to use the default choice (unique) since that will not change when an instance is migrated. However, if you have a need for per-host serials in addition to per-instance serial numbers, then consider restricting flavors via host aggregates.

tx_queue_size = None

integer value

Configure virtio tx queue size.

This option is only usable for virtio-net devices with the vhost-user backend. Available only with QEMU/KVM. Requires libvirt v3.7 and QEMU v2.10.
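
For example, to raise both queue sizes for vhost-user interfaces (512 is an illustrative value; queue sizes are powers of two):

[libvirt]
rx_queue_size = 512
tx_queue_size = 512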

uid_maps = []

list value

List of UID targets and ranges. Syntax is guest-uid:host-uid:count. Maximum of 5 allowed.
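
For example, assuming an lxc-based deployment where user namespace mapping applies, the following hypothetical mapping maps guest root (UID/GID 0) onto unprivileged host IDs starting at 1000:

[libvirt]
# guest-id:host-id:count; hypothetical values for illustration.
uid_maps = 0:1000:10
gid_maps = 0:1000:10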

use_usb_tablet = True

boolean value

Enable a mouse cursor within graphical VNC or SPICE sessions.

This will only be taken into account if the VM is fully virtualized and VNC and/or SPICE is enabled. If the node doesn’t support a graphical framebuffer, then it is valid to set this to False.

Related options:

  • [vnc]enabled: If VNC is enabled, use_usb_tablet will have an effect.
  • [spice]enabled + [spice].agent_enabled: If SPICE is enabled and the spice agent is disabled, the config value of use_usb_tablet will have an effect.

Deprecated since: 14.0.0

Reason: This option is being replaced by the pointer_model option.

use_virtio_for_bridges = True

boolean value

Use virtio for bridge interfaces with KVM/QEMU

virt_type = kvm

string value

Describes the virtualization type (or so called domain type) libvirt should use.

The choice of this type must match the underlying virtualization strategy you have chosen for this host.

Related options:

  • connection_uri: depends on this
  • disk_prefix: depends on this
  • cpu_mode: depends on this
  • cpu_models: depends on this

volume_clear = zero

string value

Method used to wipe ephemeral disks when they are deleted. Only takes effect if LVM is set as backing storage.

Related options:

  • images_type - must be set to lvm
  • volume_clear_size

volume_clear_size = 0

integer value

Size of area in MiB, counting from the beginning of the allocated volume, that will be cleared using method set in volume_clear option.

Possible values:

  • 0 - clear whole volume
  • >0 - clear specified amount of MiB

Related options:

  • images_type - must be set to lvm
  • volume_clear - must be set and the value must be different than none for this option to have any impact
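
For example, to zero only the first GiB of each deleted LVM-backed disk instead of the whole volume:

[libvirt]
images_type = lvm
volume_clear = zero
# 1024 MiB = 1 GiB cleared from the start of the volume.
volume_clear_size = 1024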

volume_use_multipath = False

boolean value

Use multipath connection of the iSCSI or FC volume

Volumes can be connected in libvirt as multipath devices. This will provide high availability and fault tolerance.

vzstorage_cache_path = None

string value

Path to the SSD cache file.

You can attach an SSD drive to a client and configure the drive to store a local cache of frequently accessed data. By having a local cache on a client’s SSD drive, you can increase the overall cluster performance by a factor of 10 or more. WARNING! Many SSD models are not server grade and may lose an arbitrary set of data changes on power loss. Such SSDs should not be used in Vstorage and are dangerous, as they may lead to data corruption and inconsistencies. Please consult the manual for SSD models that are known to be safe, or verify using the vstorage-hwflush-check(1) utility.

This option defines the path which should include "%(cluster_name)s" template to separate caches from multiple shares.

Related options:

  • vzstorage_mount_opts may include more detailed cache options.

vzstorage_log_path = /var/log/vstorage/%(cluster_name)s/nova.log.gz

string value

Path to vzstorage client log.

This option defines the log of cluster operations, it should include "%(cluster_name)s" template to separate logs from multiple shares.

Related options:

  • vzstorage_mount_opts may include more detailed logging options.

vzstorage_mount_group = qemu

string value

Mount owner group name.

This option defines the owner group of Vzstorage cluster mountpoint.

Related options:

  • vzstorage_mount_* group of parameters

vzstorage_mount_opts = []

list value

Extra mount options for pstorage-mount

For a full description of these options, see https://static.openvz.org/vz-man/man1/pstorage-mount.1.gz.html. The format is a Python string representation of an argument list, for example: "[-v, -R, 500]". It should not include -c, -l, -C, -u, -g and -m, as those have explicit vzstorage_* options.

Related options:

  • All other vzstorage_* options

vzstorage_mount_perms = 0770

string value

Mount access mode.

This option defines the access bits of the Vzstorage cluster mountpoint, in a format similar to that of the chmod(1) utility, for example 0770. It consists of one to four digits ranging from 0 to 7, with missing leading digits assumed to be 0’s.

Related options:

  • vzstorage_mount_* group of parameters

vzstorage_mount_point_base = $state_path/mnt

string value

Directory where the Virtuozzo Storage clusters are mounted on the compute node.

This option defines non-standard mountpoint for Vzstorage cluster.

Related options:

  • vzstorage_mount_* group of parameters

vzstorage_mount_user = stack

string value

Mount owner user name.

This option defines the owner user of Vzstorage cluster mountpoint.

Related options:

  • vzstorage_mount_* group of parameters

wait_soft_reboot_seconds = 120

integer value

Number of seconds to wait for the instance to shut down after a soft reboot request is made. We fall back to hard reboot if the instance does not shut down within this window.

xen_hvmloader_path = /usr/lib/xen/boot/hvmloader

string value

Location where the Xen hvmloader is kept

9.1.25. metrics

The following table outlines the options available under the [metrics] group in the /etc/nova/nova.conf file.

Table 9.24. metrics
Configuration option = Default valueTypeDescription

required = True

boolean value

This setting determines how any unavailable metrics are treated. If this option is set to True, any hosts for which a metric is unavailable will raise an exception, so it is recommended to also use the MetricFilter to filter out those hosts before weighing.

This option is only used by the FilterScheduler and its subclasses; if you use a different scheduler, this option has no effect.

Possible values:

  • True or False, where False means that a host with an unavailable metric has its weight set to the value of weight_of_unavailable.

Related options:

  • weight_of_unavailable

weight_multiplier = 1.0

floating point value

When using metrics to weight the suitability of a host, you can use this option to change how the calculated weight influences the weight assigned to a host as follows:

  • >1.0: increases the effect of the metric on overall weight
  • 1.0: no change to the calculated weight
  • >0.0,<1.0: reduces the effect of the metric on overall weight
  • 0.0: the metric value is ignored, and the value of the weight_of_unavailable option is returned instead
  • >-1.0,<0.0: the effect is reduced and reversed
  • -1.0: the effect is reversed
  • <-1.0: the effect is increased proportionally and reversed

This option is only used by the FilterScheduler and its subclasses; if you use a different scheduler, this option has no effect.

Possible values:

  • An integer or float value, where the value corresponds to the multiplier ratio for this weigher.

Related options:

  • weight_of_unavailable

weight_of_unavailable = -10000.0

floating point value

When any of the following conditions are met, this value will be used in place of any actual metric value:

  • One of the metrics named in weight_setting is not available for a host, and the value of required is False
  • The ratio specified for a metric in weight_setting is 0
  • The weight_multiplier option is set to 0

This option is only used by the FilterScheduler and its subclasses; if you use a different scheduler, this option has no effect.

Possible values:

  • An integer or float value, where the value corresponds to the multiplier ratio for this weigher.

Related options:

  • weight_setting
  • required
  • weight_multiplier

weight_setting = []

list value

This setting specifies the metrics to be weighed and the relative ratios for each metric. This should be a single string value, consisting of a series of one or more name=ratio pairs, separated by commas, where name is the name of the metric to be weighed, and ratio is the relative weight for that metric.

Note that if the ratio is set to 0, the metric value is ignored, and instead the weight will be set to the value of the weight_of_unavailable option.

As an example, let’s consider the case where this option is set to:

`name1=1.0, name2=-1.3`

The final weight will be:

`(name1.value * 1.0) + (name2.value * -1.3)`

This option is only used by the FilterScheduler and its subclasses; if you use a different scheduler, this option has no effect.

Possible values:

  • A list of zero or more key/value pairs separated by commas, where the key is a string representing the name of a metric and the value is a numeric weight for that metric. If any value is set to 0, the value is ignored and the weight will be set to the value of the weight_of_unavailable option.

Related options:

  • weight_of_unavailable
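
As an illustrative sketch, a [metrics] configuration that prefers hosts with low CPU usage while tolerating missing metrics might look like the following; the metric name cpu.percent is an assumption that depends on the compute monitors enabled in your deployment:

[metrics]
required = False
weight_of_unavailable = -10000.0
# Negative ratio: hosts with higher CPU usage weigh lower (metric name is an assumption).
weight_setting = cpu.percent=-1.0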

9.1.26. mks

The following table outlines the options available under the [mks] group in the /etc/nova/nova.conf file.

Table 9.25. mks
Configuration option = Default valueTypeDescription

enabled = False

boolean value

Enables graphical console access for virtual machines.

mksproxy_base_url = http://127.0.0.1:6090/

uri value

Location of MKS web console proxy

The URL in the response points to a WebMKS proxy which starts proxying between the client and the corresponding vCenter server where the instance runs. In order to use web-based console access, the WebMKS proxy must be installed and configured.

Possible values:

  • Must be a valid URL of the form http://host:port/ or https://host:port/

9.1.27. neutron

The following table outlines the options available under the [neutron] group in the /etc/nova/nova.conf file.

Table 9.26. neutron
Configuration option = Default valueTypeDescription

auth-url = None

string value

Authentication URL

auth_section = None

string value

Config Section from which to load plugin specific options

auth_type = None

string value

Authentication type to load

cafile = None

string value

PEM encoded Certificate Authority to use when verifying HTTPS connections.

certfile = None

string value

PEM encoded client certificate cert file

collect-timing = False

boolean value

Collect per-API call timing information.

connect-retries = None

integer value

The maximum number of retries that should be attempted for connection errors.

connect-retry-delay = None

floating point value

Delay (in seconds) between two retries for connection errors. If not set, exponential retry starting with 0.5 seconds up to a maximum of 60 seconds is used.

default-domain-id = None

string value

Optional domain ID to use with v3 and v2 parameters. It will be used for both the user and project domain in v3 and ignored in v2 authentication.

default-domain-name = None

string value

Optional domain name to use with v3 API and v2 parameters. It will be used for both the user and project domain in v3 and ignored in v2 authentication.

default_floating_pool = nova

string value

Default name for the floating IP pool.

Specifies the name of the floating IP pool used for allocating floating IPs. This option is only used if Neutron does not specify the floating IP pool name in port binding responses.

domain-id = None

string value

Domain ID to scope to

domain-name = None

string value

Domain name to scope to

endpoint-override = None

string value

Always use this endpoint URL for requests for this client. NOTE: The unversioned endpoint should be specified here; to request a particular API version, use the version, min-version, and/or max-version options.

extension_sync_interval = 600

integer value

Integer value representing the number of seconds to wait before querying Neutron for extensions. After this number of seconds, the next time Nova needs to create a resource in Neutron, it will requery Neutron for the extensions that it has loaded. Setting the value to 0 refreshes the extensions with no wait.

http_retries = 3

integer value

Number of times neutronclient should retry on any failed http call.

0 means the connection is attempted only once. Setting it to any positive integer means that on failure the connection is retried that many times, e.g. setting it to 3 means the total number of connection attempts will be 4.

Possible values:

  • Any integer value. 0 means connection is attempted only once

insecure = False

boolean value

If True, skip verification of HTTPS connections (the server certificate will not be validated).

keyfile = None

string value

PEM encoded client certificate key file

`metadata_proxy_shared_secret = `

string value

This option holds the shared secret string used to validate metadata requests proxied by Neutron. To be used, the X-Metadata-Provider-Signature header must be supplied in the request.

Related options:

  • service_metadata_proxy

ovs_bridge = br-int

string value

Default name for the Open vSwitch integration bridge.

Specifies the name of an integration bridge interface used by Open vSwitch. This option is only used if Neutron does not specify the OVS bridge name in port binding responses.

password = None

string value

User’s password

physnets = []

list value

List of physnets present on this host.

For each physnet listed, an additional section, [neutron_physnet_$PHYSNET], will be added to the configuration file. Each section must be configured with a single configuration option, numa_nodes, which should be a list of node IDs for all NUMA nodes this physnet is associated with. For example:

[neutron]
physnets = foo, bar
[neutron_physnet_foo]
numa_nodes = 0
[neutron_physnet_bar]
numa_nodes = 0,1

Any physnet that is not listed using this option will be treated as having no particular NUMA node affinity.

Tunnelled networks (VXLAN, GRE, and so on) cannot be accounted for in this way and are instead configured using the [neutron_tunnel] group. For example:

[neutron_tunnel]
numa_nodes = 1

Related options:

  • [neutron_tunnel] numa_nodes can be used to configure NUMA affinity for all tunneled networks
  • [neutron_physnet_$PHYSNET] numa_nodes must be configured for each value of $PHYSNET specified by this option

project-domain-id = None

string value

Domain ID containing project

project-domain-name = None

string value

Domain name containing project

project-id = None

string value

Project ID to scope to

project-name = None

string value

Project name to scope to

region-name = None

string value

The default region_name for endpoint URL discovery.

service-name = None

string value

The default service_name for endpoint URL discovery.

service-type = network

string value

The default service_type for endpoint URL discovery.

service_metadata_proxy = False

boolean value

When set to True, this option indicates that Neutron will be used to proxy metadata requests and resolve instance ids. Otherwise, the instance ID must be passed to the metadata request in the X-Instance-ID header.

Related options:

  • metadata_proxy_shared_secret
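
A minimal sketch combining the two related options; the secret is a hypothetical placeholder and must match the value configured on the Neutron side:

    [neutron]
    service_metadata_proxy = True
    metadata_proxy_shared_secret = METADATA_SECRET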

split-loggers = False

boolean value

Log requests to multiple loggers.

status-code-retries = None

integer value

The maximum number of retries that should be attempted for retriable HTTP status codes.

status-code-retry-delay = None

floating point value

Delay (in seconds) between two retries for retriable status codes. If not set, exponential retry starting with 0.5 seconds up to a maximum of 60 seconds is used.

system-scope = None

string value

Scope for system operations

tenant-id = None

string value

Tenant ID

tenant-name = None

string value

Tenant Name

timeout = None

integer value

Timeout value for http requests

trust-id = None

string value

Trust ID

user-domain-id = None

string value

User’s domain id

user-domain-name = None

string value

User’s domain name

user-id = None

string value

User ID

username = None

string value

Username

valid-interfaces = ['internal', 'public']

list value

List of interfaces, in order of preference, for endpoint URL.
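
For illustration, a typical [neutron] authentication block might look as follows. The Keystone URL, password, and region are hypothetical placeholders; hyphenated option names in the table above are written with underscores in the configuration file:

    [neutron]
    auth_type = password
    auth_url = https://keystone.example.com:5000/v3
    username = neutron
    password = NEUTRON_PASS
    project_name = service
    user_domain_name = Default
    project_domain_name = Default
    region_name = RegionOne
    valid_interfaces = internal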

9.1.28. notifications

The following table outlines the options available under the [notifications] group in the /etc/nova/nova.conf file.

Table 9.27. notifications
Configuration option = Default valueTypeDescription

bdms_in_notifications = False

boolean value

If enabled, include block device information in the versioned notification payload. Sending block device information is disabled by default as providing that information can incur some overhead on the system since the information may need to be loaded from the database.

default_level = INFO

string value

Default notification level for outgoing notifications.

notification_format = unversioned

string value

Specifies which notification format shall be emitted by nova.

The versioned notification interface is in feature parity with the legacy interface, and the versioned interface is actively developed, so new consumers should use the versioned interface.

However, the legacy interface is heavily used by ceilometer and other mature OpenStack components so it remains the default.

Note that notifications can be completely disabled by setting driver=noop in the [oslo_messaging_notifications] group.

The list of versioned notifications is visible in https://docs.openstack.org/nova/latest/reference/notifications.html

notify_on_state_change = None

string value

If set, send compute.instance.update notifications on instance state changes.

Please refer to https://docs.openstack.org/nova/latest/reference/notifications.html for additional information on notifications.
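
For example, to emit update notifications on both VM state and task state transitions, a minimal sketch (vm_state and vm_and_task_state are the accepted values upstream; leaving the option unset disables these notifications):

    [notifications]
    notification_format = versioned
    notify_on_state_change = vm_and_task_state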

versioned_notifications_topics = ['versioned_notifications']

list value

Specifies the topics for the versioned notifications issued by nova.

The default value is fine for most deployments and rarely needs to be changed. However, if you have a third-party service that consumes versioned notifications, it might be worth getting a topic for that service. Nova will send a message containing a versioned notification payload to each topic queue in this list.

The list of versioned notifications is visible in https://docs.openstack.org/nova/latest/reference/notifications.html

9.1.29. osapi_v21

The following table outlines the options available under the [osapi_v21] group in the /etc/nova/nova.conf file.

Table 9.28. osapi_v21
Configuration option = Default valueTypeDescription

project_id_regex = None

string value

This option is a string representing a regular expression (regex) that matches the project_id as contained in URLs. If not set, it will match normal UUIDs created by keystone.

Possible values:

  • A string representing any legal regular expression

Deprecated since: 13.0.0

Reason: Recent versions of nova constrain project IDs to hexadecimal characters and dashes. If your installation uses IDs outside of this range, you should use this option to provide your own regex, giving you time to migrate offending projects to valid IDs before the next release.

9.1.30. oslo_concurrency

The following table outlines the options available under the [oslo_concurrency] group in the /etc/nova/nova.conf file.

Table 9.29. oslo_concurrency
Configuration option = Default valueTypeDescription

disable_process_locking = False

boolean value

Enables or disables inter-process locks.

lock_path = None

string value

Directory to use for lock files. For security, the specified directory should only be writable by the user running the processes that need locking. Defaults to environment variable OSLO_LOCK_PATH. If external locks are used, a lock path must be set.
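
A minimal sketch; /var/lib/nova/tmp is an assumed example directory that must be writable only by the user running the nova services:

    [oslo_concurrency]
    lock_path = /var/lib/nova/tmp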

9.1.31. oslo_messaging_amqp

The following table outlines the options available under the [oslo_messaging_amqp] group in the /etc/nova/nova.conf file.

Table 9.30. oslo_messaging_amqp
Configuration option = Default valueTypeDescription

addressing_mode = dynamic

string value

Indicates the addressing mode used by the driver. Permitted values:

  • legacy - use legacy non-routable addressing
  • routable - use routable addresses
  • dynamic - use legacy addresses if the message bus does not support routing, otherwise use routable addressing

anycast_address = anycast

string value

Appended to the address prefix when sending to a group of consumers. Used by the message bus to identify messages that should be delivered in a round-robin fashion across consumers.

broadcast_prefix = broadcast

string value

address prefix used when broadcasting to all servers

connection_retry_backoff = 2

integer value

Increase the connection_retry_interval by this many seconds after each unsuccessful failover attempt.

connection_retry_interval = 1

integer value

Seconds to pause before attempting to re-connect.

connection_retry_interval_max = 30

integer value

Maximum limit for connection_retry_interval + connection_retry_backoff

container_name = None

string value

Name for the AMQP container. Must be globally unique. Defaults to a generated UUID.

default_notification_exchange = None

string value

Exchange name used in notification addresses.

Exchange name resolution precedence: Target.exchange if set, else default_notification_exchange if set, else control_exchange if set, else notify.

default_notify_timeout = 30

integer value

The deadline for a sent notification message delivery. Only used when caller does not provide a timeout expiry.

default_reply_retry = 0

integer value

The maximum number of attempts to re-send a reply message which failed due to a recoverable error.

default_reply_timeout = 30

integer value

The deadline for an rpc reply message delivery.

default_rpc_exchange = None

string value

Exchange name used in RPC addresses.

Exchange name resolution precedence: Target.exchange if set, else default_rpc_exchange if set, else control_exchange if set, else rpc.

default_send_timeout = 30

integer value

The deadline for an rpc cast or call message delivery. Only used when caller does not provide a timeout expiry.

default_sender_link_timeout = 600

integer value

The duration to schedule a purge of idle sender links. Detach link after expiry.

group_request_prefix = unicast

string value

address prefix when sending to any server in group

idle_timeout = 0

integer value

Timeout for inactive connections (in seconds)

link_retry_delay = 10

integer value

Time to pause between re-connecting an AMQP 1.0 link that failed due to a recoverable error.

multicast_address = multicast

string value

Appended to the address prefix when sending a fanout message. Used by the message bus to identify fanout messages.

notify_address_prefix = openstack.org/om/notify

string value

Address prefix for all generated Notification addresses

notify_server_credit = 100

integer value

Window size for incoming Notification messages

pre_settled = ['rpc-cast', 'rpc-reply']

multi valued

Send messages of this type pre-settled. Pre-settled messages will not receive acknowledgement from the peer. Note well: pre-settled messages may be silently discarded if the delivery fails. Permitted values:

  • rpc-call - send RPC calls pre-settled
  • rpc-reply - send RPC replies pre-settled
  • rpc-cast - send RPC casts pre-settled
  • notify - send notifications pre-settled

pseudo_vhost = True

boolean value

Enable virtual host support for those message buses that do not natively support virtual hosting (such as qpidd). When set to true the virtual host name will be added to all message bus addresses, effectively creating a private subnet per virtual host. Set to False if the message bus supports virtual hosting using the hostname field in the AMQP 1.0 Open performative as the name of the virtual host.

reply_link_credit = 200

integer value

Window size for incoming RPC Reply messages.

rpc_address_prefix = openstack.org/om/rpc

string value

Address prefix for all generated RPC addresses

rpc_server_credit = 100

integer value

Window size for incoming RPC Request messages

`sasl_config_dir = `

string value

Path to directory that contains the SASL configuration

`sasl_config_name = `

string value

Name of configuration file (without .conf suffix)

`sasl_default_realm = `

string value

SASL realm to use if no realm present in username

`sasl_mechanisms = `

string value

Space separated list of acceptable SASL mechanisms

server_request_prefix = exclusive

string value

address prefix used when sending to a specific server

ssl = False

boolean value

Attempt to connect via SSL. If no other ssl-related parameters are given, it will use the system’s CA-bundle to verify the server’s certificate.

`ssl_ca_file = `

string value

CA certificate PEM file used to verify the server’s certificate

`ssl_cert_file = `

string value

Self-identifying certificate PEM file for client authentication

`ssl_key_file = `

string value

Private key PEM file used to sign ssl_cert_file certificate (optional)

ssl_key_password = None

string value

Password for decrypting ssl_key_file (if encrypted)

ssl_verify_vhost = False

boolean value

By default SSL checks that the name in the server’s certificate matches the hostname in the transport_url. In some configurations it may be preferable to use the virtual hostname instead, for example if the server uses the Server Name Indication TLS extension (rfc6066) to provide a certificate per virtual host. Set ssl_verify_vhost to True if the server’s SSL certificate uses the virtual host name instead of the DNS name.

trace = False

boolean value

Debug: dump AMQP frames to stdout

unicast_address = unicast

string value

Appended to the address prefix when sending to a particular RPC/Notification server. Used by the message bus to identify messages sent to a single destination.

9.1.32. oslo_messaging_kafka

The following table outlines the options available under the [oslo_messaging_kafka] group in the /etc/nova/nova.conf file.

Table 9.31. oslo_messaging_kafka
Configuration option = Default valueTypeDescription

compression_codec = none

string value

The compression codec for all data generated by the producer. If not set, compression will not be used. Note that the allowed values of this option depend on the Kafka version.

conn_pool_min_size = 2

integer value

The pool size limit for the connection expiration policy.

conn_pool_ttl = 1200

integer value

The time-to-live in seconds of idle connections in the pool.

consumer_group = oslo_messaging_consumer

string value

Group id for Kafka consumer. Consumers in one group will coordinate message consumption

enable_auto_commit = False

boolean value

Enable asynchronous consumer commits

kafka_consumer_timeout = 1.0

floating point value

Default timeout in seconds for Kafka consumers.

kafka_max_fetch_bytes = 1048576

integer value

Max fetch bytes of Kafka consumer

max_poll_records = 500

integer value

The maximum number of records returned in a poll call

pool_size = 10

integer value

Pool Size for Kafka Consumers

producer_batch_size = 16384

integer value

Size of batch for the producer async send

producer_batch_timeout = 0.0

floating point value

Upper bound on the delay for KafkaProducer batching in seconds

sasl_mechanism = PLAIN

string value

Mechanism when security protocol is SASL

security_protocol = PLAINTEXT

string value

Protocol used to communicate with brokers

`ssl_cafile = `

string value

CA certificate PEM file used to verify the server certificate

9.1.33. oslo_messaging_notifications

The following table outlines the options available under the [oslo_messaging_notifications] group in the /etc/nova/nova.conf file.

Table 9.32. oslo_messaging_notifications
Configuration option = Default valueTypeDescription

driver = []

multi valued

The driver(s) to handle sending notifications. Possible values are messaging, messagingv2, routing, log, test, noop.

retry = -1

integer value

The maximum number of attempts to re-send a notification message which failed to be delivered due to a recoverable error. 0 - No retry, -1 - indefinite

topics = ['notifications']

list value

AMQP topic used for OpenStack notifications.

transport_url = None

string value

A URL representing the messaging driver to use for notifications. If not set, we fall back to the same configuration used for RPC.
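
For illustration, a sketch that sends notifications through the messaging driver on the default topic; both values come from the table above:

    [oslo_messaging_notifications]
    driver = messagingv2
    topics = notifications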

9.1.34. oslo_messaging_rabbit

The following table outlines the options available under the [oslo_messaging_rabbit] group in the /etc/nova/nova.conf file.

Table 9.33. oslo_messaging_rabbit
Configuration option = Default valueTypeDescription

amqp_auto_delete = False

boolean value

Auto-delete queues in AMQP.

amqp_durable_queues = False

boolean value

Use durable queues in AMQP.

direct_mandatory_flag = True

boolean value

(DEPRECATED) Enable/disable the RabbitMQ mandatory flag for direct send. Direct send is used for replies, so the MessageUndeliverable exception is raised if the client queue does not exist. The exception is then used to retry within a timeout window, giving the sender a chance to recover. This flag is deprecated and it will no longer be possible to deactivate this functionality.

enable_cancel_on_failover = False

boolean value

Enable the x-cancel-on-ha-failover flag so that the RabbitMQ server will cancel and notify consumers when a queue is down.

heartbeat_in_pthread = False

boolean value

EXPERIMENTAL: Run the health check heartbeat thread through a native Python thread. By default, if this option is not provided, the health check heartbeat will inherit the execution model from the parent process. For example, if the parent process has monkey patched the stdlib by using eventlet/greenlet, then the heartbeat will be run through a green thread.

heartbeat_rate = 2

integer value

How many times during the heartbeat_timeout_threshold to check the heartbeat.

heartbeat_timeout_threshold = 60

integer value

Number of seconds after which the Rabbit broker is considered down if heartbeat’s keep-alive fails (0 disables heartbeat).

kombu_compression = None

string value

EXPERIMENTAL: Possible values are: gzip, bz2. If not set, compression will not be used. This option may not be available in future versions.

kombu_failover_strategy = round-robin

string value

Determines how the next RabbitMQ node is chosen in case the one we are currently connected to becomes unavailable. Takes effect only if more than one RabbitMQ node is provided in config.

kombu_missing_consumer_retry_timeout = 60

integer value

How long to wait for a missing client before abandoning sending it its replies. This value should not be longer than rpc_response_timeout.

kombu_reconnect_delay = 1.0

floating point value

How long to wait before reconnecting in response to an AMQP consumer cancel notification.

rabbit_ha_queues = False

boolean value

Try to use HA queues in RabbitMQ (x-ha-policy: all). If you change this option, you must wipe the RabbitMQ database. In RabbitMQ 3.0, queue mirroring is no longer controlled by the x-ha-policy argument when declaring a queue. If you just want to make sure that all queues (except those with auto-generated names) are mirrored across all nodes, run: rabbitmqctl set_policy HA '^(?!amq\.).*' '{"ha-mode": "all"}'

rabbit_interval_max = 30

integer value

Maximum interval of RabbitMQ connection retries. Default is 30 seconds.

rabbit_login_method = AMQPLAIN

string value

The RabbitMQ login method.

rabbit_qos_prefetch_count = 0

integer value

Specifies the number of messages to prefetch. Setting to zero allows unlimited messages.

rabbit_retry_backoff = 2

integer value

How long to backoff for between retries when connecting to RabbitMQ.

rabbit_retry_interval = 1

integer value

How frequently to retry connecting with RabbitMQ.

rabbit_transient_queues_ttl = 1800

integer value

Positive integer representing duration in seconds for queue TTL (x-expires). Queues which are unused for the duration of the TTL are automatically deleted. The parameter affects only reply and fanout queues.

ssl = False

boolean value

Connect over SSL.

`ssl_ca_file = `

string value

SSL certification authority file (valid only if SSL enabled).

`ssl_cert_file = `

string value

SSL cert file (valid only if SSL enabled).

`ssl_key_file = `

string value

SSL key file (valid only if SSL enabled).

`ssl_version = `

string value

SSL version to use (valid only if SSL enabled). Valid values are TLSv1 and SSLv23. SSLv2, SSLv3, TLSv1_1, and TLSv1_2 may be available on some distributions.
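
A minimal sketch enabling SSL toward RabbitMQ; the certificate and key paths are hypothetical placeholders:

    [oslo_messaging_rabbit]
    ssl = True
    ssl_ca_file = /etc/pki/tls/certs/rabbit-ca.pem
    ssl_cert_file = /etc/pki/tls/certs/nova-client.pem
    ssl_key_file = /etc/pki/tls/private/nova-client.key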

9.1.35. oslo_middleware

The following table outlines the options available under the [oslo_middleware] group in the /etc/nova/nova.conf file.

Table 9.34. oslo_middleware
Configuration option = Default valueTypeDescription

enable_proxy_headers_parsing = False

boolean value

Whether the application is behind a proxy or not. This determines if the middleware should parse the headers or not.

max_request_body_size = 114688

integer value

The maximum body size for each request, in bytes.

secure_proxy_ssl_header = X-Forwarded-Proto

string value

The HTTP header that will be used to determine what the original request protocol scheme was, even if it was hidden by an SSL termination proxy.

9.1.36. oslo_policy

The following table outlines the options available under the [oslo_policy] group in the /etc/nova/nova.conf file.

Table 9.35. oslo_policy
Configuration option = Default valueTypeDescription

enforce_scope = False

boolean value

This option controls whether or not to enforce scope when evaluating policies. If True, the scope of the token used in the request is compared to the scope_types of the policy being enforced. If the scopes do not match, an InvalidScope exception will be raised. If False, a message will be logged informing operators that policies are being invoked with mismatching scope.

policy_default_rule = default

string value

Default rule. Enforced when a requested rule is not found.

policy_dirs = ['policy.d']

multi valued

Directories where policy configuration files are stored. They can be relative to any directory in the search path defined by the config_dir option, or absolute paths. The file defined by policy_file must exist for these directories to be searched. Missing or empty directories are ignored.

policy_file = policy.json

string value

The relative or absolute path of a file that maps roles to permissions for a given service. Relative paths must be specified in relation to the configuration file setting this option.

remote_content_type = application/x-www-form-urlencoded

string value

Content Type to send and receive data for REST based policy check

remote_ssl_ca_crt_file = None

string value

Absolute path to ca cert file for REST based policy check

remote_ssl_client_crt_file = None

string value

Absolute path to client cert for REST based policy check

remote_ssl_client_key_file = None

string value

Absolute path to client key file for REST based policy check

remote_ssl_verify_server_crt = False

boolean value

Server identity verification for REST based policy check
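
For illustration, a sketch pointing nova at a custom policy file; the file name is a hypothetical placeholder resolved relative to this configuration file:

    [oslo_policy]
    policy_file = custom-policy.json
    policy_dirs = policy.d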

9.1.37. pci

The following table outlines the options available under the [pci] group in the /etc/nova/nova.conf file.

Table 9.36. pci
Configuration option = Default valueTypeDescription

alias = []

multi valued

An alias for a PCI passthrough device requirement.

This allows users to specify the alias in the extra specs for a flavor, without needing to repeat all the PCI property requirements.

This should be configured for the nova-api service and, assuming you wish to use move operations, for each nova-compute service.

Possible Values:

  • A dictionary of JSON values which describes the aliases. For example:

    alias = {
      "name": "QuickAssist",
      "product_id": "0443",
      "vendor_id": "8086",
      "device_type": "type-PCI",
      "numa_policy": "required"
    }
    This defines an alias for the Intel QuickAssist card (multi valued). Valid
    key values are:
    `name`
      Name of the PCI alias.
    `product_id`
      Product ID of the device in hexadecimal.
    `vendor_id`
      Vendor ID of the device in hexadecimal.
    `device_type`
      Type of PCI device. Valid values are: `type-PCI`, `type-PF` and
      `type-VF`. Note that `"device_type": "type-PF"` **must** be specified
      if you wish to passthrough a device that supports SR-IOV in its entirety.
    `numa_policy`
      Required NUMA affinity of device. Valid values are: `legacy`,
      `preferred` and `required`.
  • Supports multiple aliases by repeating the option (not by specifying a list value):

    alias = { "name": "QuickAssist-1", "product_id": "0443", "vendor_id": "8086", "device_type": "type-PCI", "numa_policy": "required" }
    alias = { "name": "QuickAssist-2", "product_id": "0444", "vendor_id": "8086", "device_type": "type-PCI", "numa_policy": "required" }

passthrough_whitelist = []

multi valued

White list of PCI devices available to VMs.

Possible values:

  • A JSON dictionary which describes a whitelisted PCI device. It should take the following format:

    ["vendor_id": "<id>",] ["product_id": "<id>",] ["address": "[[[[<domain>]:]<bus>]:][<slot>][.[<function>]]" | "devname": "<name>",] {"<tag>": "<tag_value>",}

    Where `[` indicates zero or one occurrences, `{` indicates zero or
    multiple occurrences, and `|` indicates mutually exclusive options. Note that
    any missing fields are automatically wildcarded.
    Valid key values are:
    `vendor_id`
      Vendor ID of the device in hexadecimal.
    `product_id`
      Product ID of the device in hexadecimal.
    `address`
      PCI address of the device. Both traditional glob style and regular
      expression syntax are supported.
    `devname`
      Device name of the device (for example, an interface name). Not all PCI devices
      have a name.
    `<tag>`
      Additional `<tag>` and `<tag_value>` used for matching PCI devices.
      Supported `<tag>` values are :
    • physical_network
    • trusted
    Valid examples are:
    passthrough_whitelist = {"devname":"eth0", "physical_network":"physnet"}
    passthrough_whitelist = {"address":"*:0a:00.*"}
    passthrough_whitelist = {"address":"*:0a:00.*", "physical_network":"physnet1"}
    passthrough_whitelist = {"vendor_id":"1137", "product_id":"0071"}
    passthrough_whitelist = {"vendor_id":"1137", "product_id":"0071", "address": "0000:0a:00.1", "physical_network":"physnet1"}
    passthrough_whitelist = {"address":{"domain": ".*", "bus": "02", "slot": "01", "function": "[2-7]"}, "physical_network":"physnet1"}
    passthrough_whitelist = {"address":{"domain": ".*", "bus": "02", "slot": "0[1-2]", "function": ".*"}, "physical_network":"physnet1"}
    passthrough_whitelist = {"devname": "eth0", "physical_network":"physnet1", "trusted": "true"}
    The following are invalid, as they specify mutually exclusive options:
    passthrough_whitelist = {"devname":"eth0", "physical_network":"physnet", "address":"*:0a:00.*"}
  • A JSON list of JSON dictionaries corresponding to the above format. For example:

    passthrough_whitelist = [{"product_id":"0001", "vendor_id":"8086"}, {"product_id":"0002", "vendor_id":"8086"}]

9.1.38. placement

The following table outlines the options available under the [placement] group in the /etc/nova/nova.conf file.

Table 9.37. placement
Configuration option = Default valueTypeDescription

auth-url = None

string value

Authentication URL

auth_section = None

string value

Config Section from which to load plugin specific options

auth_type = None

string value

Authentication type to load

cafile = None

string value

PEM encoded Certificate Authority to use when verifying HTTPS connections.

certfile = None

string value

PEM encoded client certificate cert file

collect-timing = False

boolean value

Collect per-API call timing information.

connect-retries = None

integer value

The maximum number of retries that should be attempted for connection errors.

connect-retry-delay = None

floating point value

Delay (in seconds) between two retries for connection errors. If not set, exponential retry starting with 0.5 seconds up to a maximum of 60 seconds is used.

default-domain-id = None

string value

Optional domain ID to use with v3 and v2 parameters. It will be used for both the user and project domain in v3 and ignored in v2 authentication.

default-domain-name = None

string value

Optional domain name to use with v3 API and v2 parameters. It will be used for both the user and project domain in v3 and ignored in v2 authentication.

domain-id = None

string value

Domain ID to scope to

domain-name = None

string value

Domain name to scope to

endpoint-override = None

string value

Always use this endpoint URL for requests for this client. NOTE: The unversioned endpoint should be specified here; to request a particular API version, use the version, min-version, and/or max-version options.

insecure = False

boolean value

Disable verification of server certificates for HTTPS connections. When set to True, the server certificate is not verified; the default of False verifies it.

keyfile = None

string value

PEM encoded client certificate key file

password = None

string value

User’s password

project-domain-id = None

string value

Domain ID containing project

project-domain-name = None

string value

Domain name containing project

project-id = None

string value

Project ID to scope to

project-name = None

string value

Project name to scope to

region-name = None

string value

The default region_name for endpoint URL discovery.

service-name = None

string value

The default service_name for endpoint URL discovery.

service-type = placement

string value

The default service_type for endpoint URL discovery.

split-loggers = False

boolean value

Log requests to multiple loggers.

status-code-retries = None

integer value

The maximum number of retries that should be attempted for retriable HTTP status codes.

status-code-retry-delay = None

floating point value

Delay (in seconds) between two retries for retriable status codes. If not set, exponential retry starting with 0.5 seconds up to a maximum of 60 seconds is used.

system-scope = None

string value

Scope for system operations

tenant-id = None

string value

Tenant ID

tenant-name = None

string value

Tenant Name

timeout = None

integer value

Timeout value for http requests

trust-id = None

string value

Trust ID

user-domain-id = None

string value

User’s domain id

user-domain-name = None

string value

User’s domain name

user-id = None

string value

User ID

username = None

string value

Username

valid-interfaces = ['internal', 'public']

list value

List of interfaces, in order of preference, for endpoint URL.
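
For illustration, a typical [placement] authentication block might look as follows; the Keystone URL and password are hypothetical placeholders:

    [placement]
    auth_type = password
    auth_url = https://keystone.example.com:5000/v3
    username = placement
    password = PLACEMENT_PASS
    project_name = service
    user_domain_name = Default
    project_domain_name = Default
    region_name = RegionOne
    valid_interfaces = internal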

9.1.39. powervm

The following table outlines the options available under the [powervm] group in the /etc/nova/nova.conf file.

Table 9.38. powervm
Configuration option = Default valueTypeDescription

disk_driver = localdisk

string value

The disk driver to use for PowerVM disks. PowerVM provides support for localdisk and PowerVM Shared Storage Pool disk drivers.

Related options:

  • volume_group_name - required when using localdisk

proc_units_factor = 0.1

floating point value

Factor used to calculate the amount of physical processor compute power given to each vCPU. For example, a value of 1.0 means a whole physical processor, whereas 0.05 means 1/20th of a physical processor.

`volume_group_name = `

string value

Volume Group to use for block device operations. If disk_driver is localdisk, then this attribute must be specified. It is strongly recommended NOT to use rootvg since that is used by the management partition and filling it will cause failures.
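
A minimal sketch of a localdisk configuration; the volume group name is a hypothetical placeholder (avoid rootvg, as noted above):

    [powervm]
    disk_driver = localdisk
    volume_group_name = nova_vg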

9.1.40. privsep

The following table outlines the options available under the [privsep] group in the /etc/nova/nova.conf file.

Table 9.39. privsep
Configuration option = Default valueTypeDescription

capabilities = []

list value

List of Linux capabilities retained by the privsep daemon.

group = None

string value

Group that the privsep daemon should run as.

helper_command = None

string value

Command to invoke to start the privsep daemon if not using the "fork" method. If not specified, a default is generated using "sudo privsep-helper" and arguments designed to recreate the current configuration. This command must accept suitable --privsep_context and --privsep_sock_path arguments.

thread_pool_size = <based on operating system>

integer value

The number of threads available for privsep to concurrently run processes. Defaults to the number of CPU cores in the system.

user = None

string value

User that the privsep daemon should run as.

9.1.41. profiler

The following table outlines the options available under the [profiler] group in the /etc/nova/nova.conf file.

Table 9.40. profiler
Configuration option = Default valueTypeDescription

connection_string = messaging://

string value

Connection string for a notifier backend.

Default value is messaging:// which sets the notifier to oslo_messaging.

Examples of possible values:

  • messaging:// - use oslo_messaging driver for sending spans.
  • redis://127.0.0.1:6379 - use redis driver for sending spans.
  • mongodb://127.0.0.1:27017 - use mongodb driver for sending spans.
  • elasticsearch://127.0.0.1:9200 - use elasticsearch driver for sending spans.
  • jaeger://127.0.0.1:6831 - use jaeger tracing as driver for sending spans.

enabled = False

boolean value

Enable the profiling for all services on this node.

Default value is False (fully disable the profiling feature).

Possible values:

  • True: Enables the feature
  • False: Disables the feature. Profiling cannot be started via this project's operations. If profiling is triggered by another project, this project's part of the trace will be empty.

es_doc_type = notification

string value

Document type for notification indexing in elasticsearch.

es_scroll_size = 10000

integer value

Elasticsearch splits large requests in batches. This parameter defines maximum size of each batch (for example: es_scroll_size=10000).

es_scroll_time = 2m

string value

This parameter is a time value parameter (for example: es_scroll_time=2m), indicating for how long the nodes that participate in the search will maintain relevant resources in order to continue and support it.

filter_error_trace = False

boolean value

Enable filter traces that contain error/exception to a separated place.

Default value is set to False.

Possible values:

  • True: Enable filter traces that contain error/exception.
  • False: Disable the filter.

hmac_keys = SECRET_KEY

string value

Secret key(s) to use for encrypting context data for performance profiling.

This string value should have the following format: <key1>[,<key2>,...<keyn>], where each key is some random string. A user who triggers the profiling via the REST API has to set one of these keys in the headers of the REST API call to include profiling results of this node for this particular project.

Both "enabled" flag and "hmac_keys" config options should be set to enable profiling. Also, to generate correct profiling information across all services at least one key needs to be consistent between OpenStack projects. This ensures it can be used from client side to generate the trace, containing information from all possible resources.

sentinel_service_name = mymaster

string value

Redis sentinel uses a service name to identify a master redis service. This parameter defines the name (for example: sentinel_service_name=mymaster).

socket_timeout = 0.1

floating point value

Redis sentinel provides a timeout option on the connections. This parameter defines that timeout (for example: socket_timeout=0.1).

trace_sqlalchemy = False

boolean value

Enable SQL requests profiling in services.

Default value is False (SQL requests won’t be traced).

Possible values:

  • True: Enables SQL requests profiling. Each SQL query will be part of the trace and can then be analyzed for how much time was spent on it.
  • False: Disables SQL requests profiling. The spent time is only shown on a higher level of operations. Single SQL queries cannot be analyzed this way.

9.1.42. quota

The following table outlines the options available under the [quota] group in the /etc/nova/nova.conf file.

Table 9.41. quota
Configuration option = Default valueTypeDescription

cores = 20

integer value

The number of instance cores or vCPUs allowed per project.

Possible values:

  • A positive integer or 0.
  • -1 to disable the quota.

count_usage_from_placement = False

boolean value

Enable the counting of quota usage from the placement service.

Starting in Train, it is possible to count quota usage for cores and ram from the placement service and instances from the API database instead of counting from cell databases.

This works well if there is only one Nova deployment running per placement deployment. However, if an operator is running more than one Nova deployment sharing a placement deployment, they should not set this option to True because currently the placement service has no way to partition resource providers per Nova deployment. When this option is left as the default or set to False, Nova will use the legacy counting method to count quota usage for instances, cores, and ram from its cell databases.

Note that quota usage behavior related to resizes will be affected if this option is set to True. Placement resource allocations are claimed on the destination while holding allocations on the source during a resize, until the resize is confirmed or reverted. During this time, when the server is in VERIFY_RESIZE state, quota usage will reflect resource consumption on both the source and the destination. This can be beneficial as it reserves space for a revert of a downsize, but it also means quota usage will be inflated until a resize is confirmed or reverted.

Behavior will also be different for unscheduled servers in ERROR state. A server in ERROR state that has never been scheduled to a compute host will not have placement allocations, so it will not consume quota usage for cores and ram.

Behavior will be different for servers in SHELVED_OFFLOADED state. A server in SHELVED_OFFLOADED state will not have placement allocations, so it will not consume quota usage for cores and ram. Note that because of this, it will be possible for a request to unshelve a server to be rejected if the user does not have enough quota available to support the cores and ram needed by the server to be unshelved.

The populate_queued_for_delete and populate_user_id online data migrations must be completed before usage can be counted from placement. Until the data migration is complete, the system will fall back to legacy quota usage counting from cell databases depending on the result of an EXISTS database query during each quota check, if this configuration option is set to True. Operators who want to avoid the performance hit from the EXISTS queries should wait to set this configuration option to True until after they have completed their online data migrations via nova-manage db online_data_migrations.

driver = nova.quota.DbQuotaDriver

string value

Provides abstraction for quota checks. Users can configure a specific driver to use for quota checks.

fixed_ips = -1

integer value

The number of fixed IPs allowed per project.

Unlike floating IPs, fixed IPs are allocated dynamically by the network component when instances boot up. This quota value should be at least the number of instances allowed.

Possible values:

  • A positive integer or 0.
  • -1 to disable the quota.

Deprecated since: 15.0.0

Reason: nova-network is deprecated, as are any related configuration options.

floating_ips = 10

integer value

The number of floating IPs allowed per project.

Floating IPs are not allocated to instances by default. Users need to select them from the pool configured by the OpenStack administrator to attach to their instances.

Possible values:

  • A positive integer or 0.
  • -1 to disable the quota.

Deprecated since: 15.0.0

Reason: nova-network is deprecated, as are any related configuration options.

injected_file_content_bytes = 10240

integer value

The number of bytes allowed per injected file.

Possible values:

  • A positive integer or 0.
  • -1 to disable the quota.

injected_file_path_length = 255

integer value

The maximum allowed injected file path length.

Possible values:

  • A positive integer or 0.
  • -1 to disable the quota.

injected_files = 5

integer value

The number of injected files allowed.

File injection allows users to customize the personality of an instance by injecting data into it upon boot. Only text file injection is permitted: binary or ZIP files are not accepted. During file injection, any existing files that match specified files are renamed to include a .bak extension appended with a timestamp.

Possible values:

  • A positive integer or 0.
  • -1 to disable the quota.

instances = 10

integer value

The number of instances allowed per project.

Possible values:

  • A positive integer or 0.
  • -1 to disable the quota.

key_pairs = 100

integer value

The maximum number of key pairs allowed per user.

Users can create at least one key pair for each project and use the key pair for multiple instances that belong to that project.

Possible values:

  • A positive integer or 0.
  • -1 to disable the quota.

metadata_items = 128

integer value

The number of metadata items allowed per instance.

Users can associate metadata with an instance during instance creation. This metadata takes the form of key-value pairs.

Possible values:

  • A positive integer or 0.
  • -1 to disable the quota.

ram = 51200

integer value

The number of megabytes of instance RAM allowed per project.

Possible values:

  • A positive integer or 0.
  • -1 to disable the quota.

recheck_quota = True

boolean value

Recheck quota after resource creation to prevent allowing quota to be exceeded.

This defaults to True (recheck quota after resource creation) but can be set to False to avoid additional load if allowing quota to be exceeded because of racing requests is considered acceptable. For example, when set to False, if a user makes highly parallel REST API requests to create servers, it will be possible for them to create more servers than their allowed quota during the race. If their quota is 10 servers, they might be able to create 50 during the burst. After the burst, they will not be able to create any more servers but they will be able to keep their 50 servers until they delete them.

The initial quota check is done before resources are created, so if multiple parallel requests arrive at the same time, all could pass the quota check and create resources, potentially exceeding quota. When recheck_quota is True, quota will be checked a second time after resources have been created and if the resource is over quota, it will be deleted and OverQuota will be raised, usually resulting in a 403 response to the REST API user. This makes it impossible for a user to exceed their quota with the caveat that it will, however, be possible for a REST API user to be rejected with a 403 response in the event of a collision close to reaching their quota limit, even if the user has enough quota available when they made the request.

security_group_rules = 20

integer value

The number of security rules per security group.

The associated rules in each security group control the traffic to instances in the group.

Possible values:

  • A positive integer or 0.
  • -1 to disable the quota.

Deprecated since: 15.0.0

Reason: nova-network is deprecated, as are any related configuration options.

security_groups = 10

integer value

The number of security groups per project.

Possible values:

  • A positive integer or 0.
  • -1 to disable the quota.

Deprecated since: 15.0.0

Reason: nova-network is deprecated, as are any related configuration options.

server_group_members = 10

integer value

The maximum number of servers per server group.

Possible values:

  • A positive integer or 0.
  • -1 to disable the quota.

server_groups = 10

integer value

The maximum number of server groups per project.

Server groups are used to control the affinity and anti-affinity scheduling policy for a group of servers or instances. Reducing the quota will not affect any existing group, but new servers will not be allowed into groups that have become over quota.

Possible values:

  • A positive integer or 0.
  • -1 to disable the quota.
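
For illustration, a sketch that doubles several of the default per-project quotas; the numbers are arbitrary example values:

    [quota]
    instances = 20
    cores = 40
    ram = 102400
    server_groups = 20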

9.1.43. rdp

The following table outlines the options available under the [rdp] group in the /etc/nova/nova.conf file.

Table 9.42. rdp
Configuration option = Default valueTypeDescription

enabled = False

boolean value

Enable Remote Desktop Protocol (RDP) related features.

Hyper-V, unlike the majority of the hypervisors employed on Nova compute nodes, uses RDP instead of VNC and SPICE as a desktop sharing protocol to provide instance console access. This option enables RDP for graphical console access for virtual machines created by Hyper-V.

Note: RDP should only be enabled on compute nodes that support the Hyper-V virtualization platform.

Related options:

  • compute_driver: Must be hyperv.

html5_proxy_base_url = http://127.0.0.1:6083/

uri value

The URL an end user would use to connect to the RDP HTML5 console proxy. The console proxy service is called with this token-embedded URL and establishes the connection to the proper instance.

An RDP HTML5 console proxy service will need to be configured to listen on the address configured here. Typically the console proxy service would be run on a controller node. The localhost address used as default would only work in a single-node environment, for example devstack.

An RDP HTML5 proxy allows a user to access via the web the text or graphical console of any Windows server or workstation using RDP. RDP HTML5 console proxy services include FreeRDP, wsgate. See https://github.com/FreeRDP/FreeRDP-WebConnect

Possible values:

  • <scheme>://<ip-address>:<port-number>/

    The scheme must be identical to the scheme configured for the RDP HTML5
    console proxy service. It is `http` or `https`.
    The IP address must be identical to the address on which the RDP HTML5
    console proxy service is listening.
    The port must be identical to the port on which the RDP HTML5 console proxy
    service is listening.

Related options:

  • rdp.enabled: Must be set to True for html5_proxy_base_url to be effective.
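
A minimal sketch for a Hyper-V compute node; the proxy host is a hypothetical placeholder:

    [rdp]
    enabled = True
    html5_proxy_base_url = https://rdp-proxy.example.com:6083/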

9.1.44. remote_debug

The following table outlines the options available under the [remote_debug] group in the /etc/nova/nova.conf file.

Table 9.43. remote_debug
Configuration option = Default valueTypeDescription

host = None

host address value

Debug host (IP or name) to connect to. This command line parameter is used when you want to connect to a nova service via a debugger running on a different host.

Note that using the remote debug option changes how Nova uses the eventlet library to support async IO. This could result in failures that do not occur under normal operation. Use at your own risk.

Possible Values:

  • IP address of a remote host as a command line parameter to a nova service. For Example:

    /usr/local/bin/nova-compute --config-file /etc/nova/nova.conf
    --remote_debug-host <IP address where the debugger is running>

port = None

port value

Debug port to connect to. This command line parameter allows you to specify the port you want to use to connect to a nova service via a debugger running on a different host.

Note that using the remote debug option changes how Nova uses the eventlet library to support async IO. This could result in failures that do not occur under normal operation. Use at your own risk.

Possible Values:

  • Port number you want to use as a command line parameter to a nova service. For Example:

    /usr/local/bin/nova-compute --config-file /etc/nova/nova.conf
    --remote_debug-host <IP address where the debugger is running>
    --remote_debug-port <port the debugger is listening on>

9.1.45. scheduler

The following table outlines the options available under the [scheduler] group in the /etc/nova/nova.conf file.

Table 9.44. scheduler
Configuration option = Default valueTypeDescription

discover_hosts_in_cells_interval = -1

integer value

Periodic task interval.

This value controls how often (in seconds) the scheduler should attempt to discover new hosts that have been added to cells. If negative (the default), no automatic discovery will occur.

Deployments where compute nodes come and go frequently may want this enabled, while others may prefer to manually discover hosts when one is added, to avoid the overhead of constant checking. If enabled, each run will select any unmapped hosts out of each cell database.

driver = filter_scheduler

string value

The class of the driver used by the scheduler. This should be chosen from one of the entrypoints under the namespace nova.scheduler.driver of file setup.cfg. If nothing is specified in this option, the filter_scheduler is used.

Other options are:

  • fake_scheduler which is used for testing.

Possible values:

  • Any of the drivers included in Nova:
  • filter_scheduler
  • fake_scheduler
  • You may also set this to the entry point name of a custom scheduler driver, but you will be responsible for creating and maintaining it in your setup.cfg file.

Related options:

  • workers

enable_isolated_aggregate_filtering = False

boolean value

This setting allows the scheduler to restrict hosts in aggregates based on matching required traits in the aggregate metadata and the instance flavor/image. If an aggregate is configured with a property with key trait:$TRAIT_NAME and value required, the instance flavor extra_specs and/or image metadata must also contain trait:$TRAIT_NAME=required to be eligible to be scheduled to hosts in that aggregate. More technical details at https://docs.openstack.org/nova/latest/reference/isolate-aggregates.html

limit_tenants_to_placement_aggregate = False

boolean value

This setting causes the scheduler to look up a host aggregate with the metadata key of filter_tenant_id set to the project of an incoming request, and request results from placement be limited to that aggregate. Multiple tenants may be added to a single aggregate by appending a serial number to the key, such as filter_tenant_id:123.

The matching aggregate UUID must be mirrored in placement for proper operation. If no host aggregate with the tenant id is found, or that aggregate does not match one in placement, the result will be the same as not finding any suitable hosts for the request.

See also the placement_aggregate_required_for_tenants option.

max_attempts = 3

integer value

This is the maximum number of attempts that will be made for a given instance build/move operation. It limits the number of alternate hosts returned by the scheduler. When that list of hosts is exhausted, a MaxRetriesExceeded exception is raised and the instance is set to an error state.

Possible values:

  • A positive integer, where the integer corresponds to the max number of attempts that can be made when building or moving an instance.

max_placement_results = 1000

integer value

This setting determines the maximum limit on results received from the placement service during a scheduling operation. It effectively limits the number of hosts that may be considered for scheduling requests that match a large number of candidates.

A value of 1 (the minimum) will effectively defer scheduling to the placement service strictly on "will it fit" grounds. A higher value will put an upper cap on the number of results the scheduler will consider during the filtering and weighing process. Large deployments may need to set this lower than the total number of hosts available to limit memory consumption, network traffic, etc. of the scheduler.

This option is only used by the FilterScheduler; if you use a different scheduler, this option has no effect.

periodic_task_interval = 60

integer value

Periodic task interval.

This value controls how often (in seconds) to run periodic tasks in the scheduler. The specific tasks that are run for each period are determined by the particular scheduler being used. Currently there are no in-tree scheduler drivers that use this option.

If this is larger than the nova-service service_down_time setting, the ComputeFilter (if enabled) may think the compute service is down. As each scheduler can work a little differently than the others, be sure to test this with your selected scheduler.

Possible values:

  • An integer, where the integer corresponds to periodic task interval in seconds. 0 uses the default interval (60 seconds). A negative value disables periodic tasks.

Related options:

  • nova-service service_down_time

placement_aggregate_required_for_tenants = False

boolean value

This setting, when limit_tenants_to_placement_aggregate=True, will control whether or not a tenant with no aggregate affinity will be allowed to schedule to any available node. If aggregates are used to limit some tenants but not all, then this should be False. If all tenants should be confined via aggregate, then this should be True to prevent them from receiving unrestricted scheduling to any available node.

See also the limit_tenants_to_placement_aggregate option.

query_placement_for_availability_zone = False

boolean value

This setting causes the scheduler to look up a host aggregate with the metadata key of availability_zone set to the value provided by an incoming request, and request results from placement be limited to that aggregate.

The matching aggregate UUID must be mirrored in placement for proper operation. If no host aggregate with the availability_zone key is found, or that aggregate does not match one in placement, the result will be the same as not finding any suitable hosts.

Note that if you enable this flag, you can disable the (less efficient) AvailabilityZoneFilter in the scheduler.

query_placement_for_image_type_support = False

boolean value

This setting causes the scheduler to ask placement only for compute hosts that support the disk_format of the image used in the request.

workers = None

integer value

Number of workers for the nova-scheduler service. The default will be the number of CPUs available if using the "filter_scheduler" scheduler driver, otherwise the default will be 1.
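
For illustration, a sketch that enables periodic host discovery and confines all tenants to placement aggregates, combining options described above; the interval is an arbitrary example value:

    [scheduler]
    discover_hosts_in_cells_interval = 300
    limit_tenants_to_placement_aggregate = True
    placement_aggregate_required_for_tenants = True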

9.1.46. serial_console

The following table outlines the options available under the [serial_console] group in the /etc/nova/nova.conf file.

Table 9.45. serial_console
Configuration option = Default valueTypeDescription

base_url = ws://127.0.0.1:6083/

uri value

The URL an end user would use to connect to the nova-serialproxy service.

The nova-serialproxy service is called with this token enriched URL and establishes the connection to the proper instance.

Related options:

  • The IP address must be identical to the address to which the nova-serialproxy service is listening (see option serialproxy_host in this section).
  • The port must be the same as in the option serialproxy_port of this section.
  • If you choose to use a secured websocket connection, then start this option with wss:// instead of the unsecured ws://. The options cert and key in the [DEFAULT] section have to be set for that.

enabled = False

boolean value

Enable the serial console feature.

In order to use this feature, the service nova-serialproxy needs to run. This service is typically executed on the controller node.

port_range = 10000:20000

string value

A range of TCP ports a guest can use for its backend.

Each instance that gets created will use one port out of this range. If the range is not big enough to provide another port for a new instance, that instance will not get launched.

Possible values:

  • Each string which passes the regex ^\d+:\d+$, for example 10000:20000. Be sure that the first port number is lower than the second port number and that both are in the range from 0 to 65535.

proxyclient_address = 127.0.0.1

string value

The IP address to which proxy clients (like nova-serialproxy) should connect to get the serial console of an instance.

This is typically the IP address of the host of a nova-compute service.

serialproxy_host = 0.0.0.0

string value

The IP address which is used by the nova-serialproxy service to listen for incoming requests.

The nova-serialproxy service listens on this IP address for incoming connection requests to instances which expose serial console.

Related options:

  • Ensure that this is the same IP address which is defined in the option base_url of this section or use 0.0.0.0 to listen on all addresses.

serialproxy_port = 6083

port value

The port number which is used by the nova-serialproxy service to listen for incoming requests.

The nova-serialproxy service listens on this port number for incoming connection requests to instances which expose serial console.

Related options:

  • Ensure that this is the same port number which is defined in the option base_url of this section.
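
For illustration, a sketch for a deployment where the proxy runs on a controller node; the controller hostname and the compute host IP are hypothetical placeholders. The serialproxy options belong on the host running nova-serialproxy, while proxyclient_address is set on each compute node:

    [serial_console]
    enabled = True
    base_url = ws://controller.example.com:6083/
    serialproxy_host = 0.0.0.0
    serialproxy_port = 6083
    proxyclient_address = 192.0.2.10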

9.1.47. service_user

The following table outlines the options available under the [service_user] group in the /etc/nova/nova.conf file.

Table 9.46. service_user
Configuration option = Default valueTypeDescription

auth-url = None

string value

Authentication URL

auth_section = None

string value

Config Section from which to load plugin specific options

auth_type = None

string value

Authentication type to load

cafile = None

string value

PEM encoded Certificate Authority to use when verifying HTTPS connections.

certfile = None

string value

PEM encoded client certificate cert file

collect-timing = False

boolean value

Collect per-API call timing information.

default-domain-id = None

string value

Optional domain ID to use with v3 and v2 parameters. It will be used for both the user and project domain in v3 and ignored in v2 authentication.

default-domain-name = None

string value

Optional domain name to use with v3 API and v2 parameters. It will be used for both the user and project domain in v3 and ignored in v2 authentication.

domain-id = None

string value

Domain ID to scope to

domain-name = None

string value

Domain name to scope to

insecure = False

boolean value

Disable verification of HTTPS connections. When set to true, the server certificate is not verified; when false (the default), HTTPS connections are verified.

keyfile = None

string value

PEM encoded client certificate key file

password = None

string value

User’s password

project-domain-id = None

string value

Domain ID containing project

project-domain-name = None

string value

Domain name containing project

project-id = None

string value

Project ID to scope to

project-name = None

string value

Project name to scope to

send_service_user_token = False

boolean value

When True, if sending a user token to a REST API, also send a service token.

Nova often reuses the user token provided to the nova-api to talk to other REST APIs, such as Cinder, Glance and Neutron. It is possible that while the user token was valid when the request was made to Nova, the token may expire before it reaches the other service. To avoid any failures, and to make it clear it is Nova calling the service on the user’s behalf, we include a service token along with the user token. Should the user’s token have expired, a valid service token ensures the REST API request will still be accepted by the keystone middleware.
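
A minimal sketch of a [service_user] section enabling service tokens is shown below; the Keystone URL and credentials are illustrative and must match the service user created in your deployment:

    [service_user]
    send_service_user_token = true
    auth_type = password
    auth_url = https://keystone.example.com/v3
    username = nova
    password = secret
    user_domain_name = Default
    project_name = service
    project_domain_name = Default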

split-loggers = False

boolean value

Log requests to multiple loggers.

system-scope = None

string value

Scope for system operations

tenant-id = None

string value

Tenant ID

tenant-name = None

string value

Tenant Name

timeout = None

integer value

Timeout value for http requests

trust-id = None

string value

Trust ID

user-domain-id = None

string value

User’s domain id

user-domain-name = None

string value

User’s domain name

user-id = None

string value

User ID

username = None

string value

Username

9.1.48. spice

The following table outlines the options available under the [spice] group in the /etc/nova/nova.conf file.

Table 9.47. spice
Configuration option = Default valueTypeDescription

agent_enabled = True

boolean value

Enable the SPICE guest agent support on the instances.

The SPICE agent works with the SPICE protocol to offer a better guest console experience. However, the SPICE console can still be used without the agent. With the SPICE agent installed, the following features are enabled:

  • Copy and paste of text and images between the guest and the client machine
  • Automatic adjustment of resolution when the client screen changes, e.g. if you make the SPICE console full screen, the guest resolution adjusts to match it rather than letterboxing.
  • Better mouse integration: the mouse can be captured and released without needing to click inside the console or press keys to release it. The performance of mouse movement is also improved.

enabled = False

boolean value

Enable SPICE related features.

Related options:

  • VNC must be explicitly disabled to get access to the SPICE console. Set the enabled option to False in the [vnc] section to disable the VNC console.
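
For example, to switch the console from VNC to SPICE, a deployment could disable VNC and enable SPICE as sketched below; the proxy URL is illustrative:

    [vnc]
    enabled = false

    [spice]
    enabled = true
    agent_enabled = true
    html5proxy_base_url = http://controller.example.com:6082/spice_auto.html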

html5proxy_base_url = http://127.0.0.1:6082/spice_auto.html

uri value

Location of the SPICE HTML5 console proxy.

End users use this URL to connect to the nova-spicehtml5proxy service, which forwards the requests to the console of an instance.

In order to use the SPICE console, the nova-spicehtml5proxy service should be running. This service is typically launched on the controller node.

Possible values:

  • Must be a valid URL of the form http://host:port/spice_auto.html, where host is the node running nova-spicehtml5proxy and the port is typically 6082. Consider not using the default value, as it is not well defined for any real deployment.

Related options:

  • This option depends on html5proxy_host and html5proxy_port options. The access URL returned by the compute node must have the host and port where the nova-spicehtml5proxy service is listening.

html5proxy_host = 0.0.0.0

host address value

IP address or a hostname on which the nova-spicehtml5proxy service listens for incoming requests.

Related options:

  • This option depends on the html5proxy_base_url option. The nova-spicehtml5proxy service must be listening on a host that is accessible from the HTML5 client.

html5proxy_port = 6082

port value

Port on which the nova-spicehtml5proxy service listens for incoming requests.

Related options:

  • This option depends on the html5proxy_base_url option. The nova-spicehtml5proxy service must be listening on a port that is accessible from the HTML5 client.

keymap = None

string value

A keyboard layout which is supported by the underlying hypervisor on this node.

Possible values:

  • This is usually an IETF language tag (for example en-us). If you use QEMU as hypervisor, you should find the list of supported keyboard layouts at /usr/share/qemu/keymaps.

Deprecated since: 18.0.0

Reason: Configuring this option forces QEMU to do keymap conversions. These conversions are lossy and can result in significant issues for users of non en-US keyboards. Refer to bug #1682020 for more information.

server_listen = 127.0.0.1

string value

The address where the SPICE server running on the instances should listen.

Typically, the nova-spicehtml5proxy proxy client runs on the controller node and connects over the private network to this address on the compute node(s).

Possible values:

  • IP address to listen on.

server_proxyclient_address = 127.0.0.1

string value

The address used by nova-spicehtml5proxy client to connect to instance console.

Typically, the nova-spicehtml5proxy proxy client runs on the controller node and connects over the private network to this address on the compute node(s).

Possible values:

  • Any valid IP address on the compute node.

Related options:

  • This option depends on the server_listen option. The proxy client must be able to access the address specified in server_listen using the value of this option.

9.1.49. upgrade_levels

The following table outlines the options available under the [upgrade_levels] group in the /etc/nova/nova.conf file.

Table 9.48. upgrade_levels
Configuration option = Default valueTypeDescription

baseapi = None

string value

Base API RPC API version cap.

Possible values:

  • By default send the latest version the client knows about
  • A string representing a version number in the format N.N; for example, possible values might be 1.12 or 2.0.
  • An OpenStack release name, in lower case, such as mitaka or liberty.

cert = None

string value

Cert RPC API version cap.

Possible values:

  • By default send the latest version the client knows about
  • A string representing a version number in the format N.N; for example, possible values might be 1.12 or 2.0.
  • An OpenStack release name, in lower case, such as mitaka or liberty.

Deprecated since: 18.0.0

Reason: The nova-cert service was removed in 16.0.0 (Pike) so this option is no longer used.

compute = None

string value

Compute RPC API version cap.

By default, we always send messages using the most recent version the client knows about.

Where you have old and new compute services running, you should set this to the lowest deployed version. This is to guarantee that all services never send messages that one of the compute nodes can’t understand. Note that we only support upgrading from release N to release N+1.

Set this option to "auto" if you want to let the compute RPC module automatically determine what version to use based on the service versions in the deployment.

Possible values:

  • By default send the latest version the client knows about
  • auto: Automatically determines what version to use based on the service versions in the deployment.
  • A string representing a version number in the format N.N; for example, possible values might be 1.12 or 2.0.
  • An OpenStack release name, in lower case, such as mitaka or liberty.
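
For example, during a rolling upgrade with old and new compute services, the cap can be negotiated automatically or pinned explicitly; a sketch:

    [upgrade_levels]
    # Let nova pick the version based on deployed service versions
    compute = auto

    # Alternatively, pin to an explicit version number or a lower-case
    # release name, for example:
    # compute = 5.0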

conductor = None

string value

Conductor RPC API version cap.

Possible values:

  • By default send the latest version the client knows about
  • A string representing a version number in the format N.N; for example, possible values might be 1.12 or 2.0.
  • An OpenStack release name, in lower case, such as mitaka or liberty.

console = None

string value

Console RPC API version cap.

Possible values:

  • By default send the latest version the client knows about
  • A string representing a version number in the format N.N; for example, possible values might be 1.12 or 2.0.
  • An OpenStack release name, in lower case, such as mitaka or liberty.

network = None

string value

Network RPC API version cap.

Possible values:

  • By default send the latest version the client knows about
  • A string representing a version number in the format N.N; for example, possible values might be 1.12 or 2.0.
  • An OpenStack release name, in lower case, such as mitaka or liberty.

Deprecated since: 18.0.0

Reason: The nova-network service was deprecated in 14.0.0 (Newton) and will be removed in an upcoming release.

scheduler = None

string value

Scheduler RPC API version cap.

Possible values:

  • By default send the latest version the client knows about
  • A string representing a version number in the format N.N; for example, possible values might be 1.12 or 2.0.
  • An OpenStack release name, in lower case, such as mitaka or liberty.

9.1.50. vault

The following table outlines the options available under the [vault] group in the /etc/nova/nova.conf file.

Table 9.49. vault
Configuration option = Default valueTypeDescription

approle_role_id = None

string value

AppRole role_id for authentication with vault

approle_secret_id = None

string value

AppRole secret_id for authentication with vault

kv_mountpoint = secret

string value

Mountpoint of KV store in Vault to use, for example: secret

root_token_id = None

string value

root token for vault

ssl_ca_crt_file = None

string value

Absolute path to ca cert file

use_ssl = False

boolean value

SSL Enabled/Disabled

vault_url = http://127.0.0.1:8200

string value

Use this endpoint to connect to Vault, for example: "http://127.0.0.1:8200"
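
A sketch of a [vault] section using AppRole authentication over TLS is shown below; the URL, paths, and IDs are placeholders:

    [vault]
    vault_url = https://vault.example.com:8200
    use_ssl = true
    ssl_ca_crt_file = /etc/pki/tls/certs/vault-ca.pem
    kv_mountpoint = secret
    approle_role_id = 11111111-2222-3333-4444-555555555555
    approle_secret_id = 66666666-7777-8888-9999-000000000000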

9.1.51. vendordata_dynamic_auth

The following table outlines the options available under the [vendordata_dynamic_auth] group in the /etc/nova/nova.conf file.

Table 9.50. vendordata_dynamic_auth
Configuration option = Default valueTypeDescription

auth-url = None

string value

Authentication URL

auth_section = None

string value

Config Section from which to load plugin specific options

auth_type = None

string value

Authentication type to load

cafile = None

string value

PEM encoded Certificate Authority to use when verifying HTTPs connections.

certfile = None

string value

PEM encoded client certificate cert file

collect-timing = False

boolean value

Collect per-API call timing information.

default-domain-id = None

string value

Optional domain ID to use with v3 and v2 parameters. It will be used for both the user and project domain in v3 and ignored in v2 authentication.

default-domain-name = None

string value

Optional domain name to use with v3 API and v2 parameters. It will be used for both the user and project domain in v3 and ignored in v2 authentication.

domain-id = None

string value

Domain ID to scope to

domain-name = None

string value

Domain name to scope to

insecure = False

boolean value

Disable verification of HTTPS connections. When set to true, the server certificate is not verified; when false (the default), HTTPS connections are verified.

keyfile = None

string value

PEM encoded client certificate key file

password = None

string value

User’s password

project-domain-id = None

string value

Domain ID containing project

project-domain-name = None

string value

Domain name containing project

project-id = None

string value

Project ID to scope to

project-name = None

string value

Project name to scope to

split-loggers = False

boolean value

Log requests to multiple loggers.

system-scope = None

string value

Scope for system operations

tenant-id = None

string value

Tenant ID

tenant-name = None

string value

Tenant Name

timeout = None

integer value

Timeout value for http requests

trust-id = None

string value

Trust ID

user-domain-id = None

string value

User’s domain id

user-domain-name = None

string value

User’s domain name

user-id = None

string value

User ID

username = None

string value

Username

9.1.52. vmware

The following table outlines the options available under the [vmware] group in the /etc/nova/nova.conf file.

Table 9.51. vmware
Configuration option = Default valueTypeDescription

api_retry_count = 10

integer value

Number of times the VMware vCenter server API is retried on connection failures, e.g. a socket error.

ca_file = None

string value

Specifies the CA bundle file to be used in verifying the vCenter server certificate.

cache_prefix = None

string value

This option adds a prefix to the folder where cached images are stored

This is not the full path, just a folder prefix. It should only be used when a datastore cache is shared between compute nodes.

Note: This should only be used when the compute nodes are running on the same host or have a shared file system.

Possible values:

  • Any string representing the cache prefix to the folder

cluster_name = None

string value

Name of a VMware Cluster ComputeResource.

connection_pool_size = 10

integer value

This option sets the http connection pool size

The connection pool size is the maximum number of connections from nova to vSphere. It should only be increased if there are warnings indicating that the connection pool is full, otherwise, the default should suffice.

console_delay_seconds = None

integer value

Set this value if affected by an increased network latency causing repeated characters when typing in a remote console.

datastore_regex = None

string value

Regular expression pattern to match the name of datastore.

The datastore_regex setting specifies the datastores to use with Compute. For example, datastore_regex="nas.*" selects all the data stores that have a name starting with "nas".

Note

If no regex is given, the datastore with the most free space is used.

Possible values:

  • A regular expression matching the names of the datastores to be used

host_ip = None

host address value

Hostname or IP address for connection to VMware vCenter host.

host_password = None

string value

Password for connection to VMware vCenter host.

host_port = 443

port value

Port for connection to VMware vCenter host.

host_username = None

string value

Username for connection to VMware vCenter host.

insecure = False

boolean value

If true, the vCenter server certificate is not verified. If false, then the default CA truststore is used for verification.

Related options:

  • ca_file: this option (insecure) is ignored if ca_file is set.
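
Putting the connection options together, a minimal [vmware] section might look as follows; all values are illustrative and must match your vCenter environment:

    [vmware]
    host_ip = vcenter.example.com
    host_port = 443
    host_username = administrator@vsphere.local
    host_password = secret
    # Verify the vCenter certificate against this CA bundle
    ca_file = /etc/pki/tls/certs/vcenter-ca.pem
    cluster_name = nova-cluster
    # Only use datastores whose names start with "nas"
    datastore_regex = nas.*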

integration_bridge = None

string value

This option should be configured only when using the NSX-MH Neutron plugin. This is the name of the integration bridge on the ESXi server or host. This should not be set for any other Neutron plugin. Hence the default value is not set.

Possible values:

  • Any valid string representing the name of the integration bridge

maximum_objects = 100

integer value

This option specifies the limit on the maximum number of objects to return in a single result.

A positive value will cause the operation to suspend the retrieval when the count of objects reaches the specified limit. The server may still limit the count to something less than the configured value. Any remaining objects may be retrieved with additional requests.

pbm_default_policy = None

string value

This option specifies the default policy to be used.

If pbm_enabled is set and there is no defined storage policy for the specific request, then this policy will be used.

Possible values:

  • Any valid storage policy such as VSAN default storage policy

Related options:

  • pbm_enabled

pbm_enabled = False

boolean value

This option enables or disables storage policy based placement of instances.

Related options:

  • pbm_default_policy

pbm_wsdl_location = None

string value

This option specifies the PBM service WSDL file location URL.

Setting this will disable storage policy based placement of instances.

serial_log_dir = /opt/vmware/vspc

string value

Specifies the directory where the Virtual Serial Port Concentrator is storing console log files. It should match the serial_log_dir config value of VSPC.

serial_port_proxy_uri = None

uri value

Identifies a proxy service that provides network access to the serial_port_service_uri.

Possible values:

  • Any valid URI (The scheme is telnet or telnets.)

Related options:

  • serial_port_service_uri: this option is ignored if serial_port_service_uri is not specified.

serial_port_service_uri = None

string value

Identifies the remote system where the serial port traffic will be sent.

This option adds a virtual serial port which sends console output to a configurable service URI. At the service URI address there will be a virtual serial port concentrator that collects console logs. If this is not set, no serial ports will be added to the created VMs.

Possible values:

  • Any valid URI

task_poll_interval = 0.5

floating point value

Time interval in seconds to poll remote tasks invoked on VMware VC server.

use_linked_clone = True

boolean value

This option enables/disables the use of linked clone.

The ESX hypervisor requires a copy of the VMDK file in order to boot up a virtual machine. The compute driver must download the VMDK via HTTP from the OpenStack Image service to a datastore that is visible to the hypervisor and cache it. Subsequent virtual machines that need the VMDK use the cached version and don’t have to copy the file again from the OpenStack Image service.

If set to false, even with a cached VMDK, there is still a copy operation from the cache location to the hypervisor file directory in the shared datastore. If set to true, the above copy operation is avoided, as a linked clone of the virtual machine is created that shares virtual disks with its parent VM.

vlan_interface = vmnic0

string value

This option specifies the physical ethernet adapter name for VLAN networking.

Set the vlan_interface configuration option to match the ESX host interface that handles VLAN-tagged VM traffic.

Possible values:

  • Any valid string representing VLAN interface name

vnc_keymap = en-us

string value

Keymap for VNC.

The keyboard mapping (keymap) determines which keyboard layout a VNC session should use by default.

Possible values:

  • A keyboard layout which is supported by the underlying hypervisor on this node. This is usually an IETF language tag (for example en-us).

vnc_port = 5900

port value

This option specifies VNC starting port.

Every VM created on an ESX host can enable a VNC client for remote connections. This option sets the default starting port for those VNC connections.

Possible values:

  • Any valid port number in the range 5900 to (5900 + vnc_port_total)

Related options: The options below should be set to enable the VNC client.

  • vnc.enabled = True
  • vnc_port_total

vnc_port_total = 10000

integer value

Total number of VNC ports.

9.1.53. vnc

The following table outlines the options available under the [vnc] group in the /etc/nova/nova.conf file.

Table 9.52. vnc
Configuration option = Default valueTypeDescription

auth_schemes = ['none']

list value

The authentication schemes to use with the compute node.

Control what RFB authentication schemes are permitted for connections between the proxy and the compute host. If multiple schemes are enabled, the first matching scheme will be used, thus the strongest schemes should be listed first.

Related options:

  • [vnc]vencrypt_client_key, [vnc]vencrypt_client_cert: must also be set

enabled = True

boolean value

Enable VNC related features.

Guests will get created with graphical devices to support this. Clients (for example Horizon) can then establish a VNC connection to the guest.

keymap = None

string value

Keymap for VNC.

The keyboard mapping (keymap) determines which keyboard layout a VNC session should use by default.

Possible values:

  • A keyboard layout which is supported by the underlying hypervisor on this node. This is usually an IETF language tag (for example en-us). If you use QEMU as hypervisor, you should find the list of supported keyboard layouts at /usr/share/qemu/keymaps.

Deprecated since: 18.0.0

Reason: Configuring this option forces QEMU to do keymap conversions. These conversions are lossy and can result in significant issues for users of non en-US keyboards. You should instead use a VNC client that supports Extended Key Event messages, such as noVNC 1.0.0. Refer to bug #1682020 for more information.

novncproxy_base_url = http://127.0.0.1:6080/vnc_auto.html

uri value

Public address of noVNC VNC console proxy.

The VNC proxy is an OpenStack component that enables compute service users to access their instances through VNC clients. noVNC provides VNC support through a websocket-based client.

This option sets the public base URL to which client systems will connect. noVNC clients can use this address to connect to the noVNC instance and, by extension, the VNC sessions.

If using noVNC >= 1.0.0, you should use vnc_lite.html instead of vnc_auto.html.

Related options:

  • novncproxy_host
  • novncproxy_port
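
A typical noVNC proxy setup is sketched below; the base URL host is a placeholder, and vnc_lite.html assumes noVNC >= 1.0.0 as noted above:

    [vnc]
    enabled = true
    # Public URL handed out to clients such as Horizon
    novncproxy_base_url = http://controller.example.com:6080/vnc_lite.html
    # Address and port the proxy binds to
    novncproxy_host = 0.0.0.0
    novncproxy_port = 6080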

novncproxy_host = 0.0.0.0

string value

IP address that the noVNC console proxy should bind to.

The VNC proxy is an OpenStack component that enables compute service users to access their instances through VNC clients. noVNC provides VNC support through a websocket-based client.

This option sets the private address to which the noVNC console proxy service should bind.

Related options:

  • novncproxy_port
  • novncproxy_base_url

novncproxy_port = 6080

port value

Port that the noVNC console proxy should bind to.

The VNC proxy is an OpenStack component that enables compute service users to access their instances through VNC clients. noVNC provides VNC support through a websocket-based client.

This option sets the private port to which the noVNC console proxy service should bind.

Related options:

  • novncproxy_host
  • novncproxy_base_url

server_listen = 127.0.0.1

host address value

The IP address or hostname on which an instance should listen for incoming VNC connection requests on this node.

server_proxyclient_address = 127.0.0.1

host address value

Private, internal IP address or hostname of VNC console proxy.

The VNC proxy is an OpenStack component that enables compute service users to access their instances through VNC clients.

This option sets the private address to which proxy clients, such as nova-novncproxy, should connect.

vencrypt_ca_certs = None

string value

The path to the CA certificate PEM file

The fully qualified path to a PEM file containing one or more x509 certificates for the certificate authorities used by the compute node VNC server.

Related options:

  • vnc.auth_schemes: must include vencrypt

vencrypt_client_cert = None

string value

The path to the client certificate PEM file (for x509)

The fully qualified path to a PEM file containing the x509 certificate which the VNC proxy server presents to the compute node during VNC authentication.

Related options:

  • vnc.auth_schemes: must include vencrypt
  • vnc.vencrypt_client_key: must also be set

vencrypt_client_key = None

string value

The path to the client key PEM file (for x509)

The fully qualified path to a PEM file containing the private key which the VNC proxy server presents to the compute node during VNC authentication.

Related options:

  • vnc.auth_schemes: must include vencrypt
  • vnc.vencrypt_client_cert: must also be set
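
To require TLS between the proxy and the compute node VNC servers, the three vencrypt options are set together with the auth scheme, as in this sketch; the paths are illustrative:

    [vnc]
    # Strongest scheme first; fall back to none for unencrypted computes
    auth_schemes = vencrypt,none
    vencrypt_ca_certs = /etc/pki/nova-novncproxy/ca-cert.pem
    vencrypt_client_cert = /etc/pki/nova-novncproxy/client-cert.pem
    vencrypt_client_key = /etc/pki/nova-novncproxy/client-key.pem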

xvpvncproxy_base_url = http://127.0.0.1:6081/console

uri value

Public URL address of XVP VNC console proxy.

The VNC proxy is an OpenStack component that enables compute service users to access their instances through VNC clients. Xen provides the Xenserver VNC Proxy, or XVP, as an alternative to the websocket-based noVNC proxy used by Libvirt. In contrast to noVNC, XVP clients are Java-based.

This option sets the public base URL to which client systems will connect. XVP clients can use this address to connect to the XVP instance and, by extension, the VNC sessions.

Related options:

  • xvpvncproxy_host
  • xvpvncproxy_port

Deprecated since: 19.0.0

Reason: The nova-xvpvncproxy service is deprecated and will be removed in an upcoming release.

xvpvncproxy_host = 0.0.0.0

host address value

IP address or hostname that the XVP VNC console proxy should bind to.

The VNC proxy is an OpenStack component that enables compute service users to access their instances through VNC clients. Xen provides the Xenserver VNC Proxy, or XVP, as an alternative to the websocket-based noVNC proxy used by Libvirt. In contrast to noVNC, XVP clients are Java-based.

This option sets the private address to which the XVP VNC console proxy service should bind.

Related options:

  • xvpvncproxy_port
  • xvpvncproxy_base_url

Deprecated since: 19.0.0

Reason: The nova-xvpvncproxy service is deprecated and will be removed in an upcoming release.

xvpvncproxy_port = 6081

port value

Port that the XVP VNC console proxy should bind to.

The VNC proxy is an OpenStack component that enables compute service users to access their instances through VNC clients. Xen provides the Xenserver VNC Proxy, or XVP, as an alternative to the websocket-based noVNC proxy used by Libvirt. In contrast to noVNC, XVP clients are Java-based.

This option sets the private port to which the XVP VNC console proxy service should bind.

Related options:

  • xvpvncproxy_host
  • xvpvncproxy_base_url

Deprecated since: 19.0.0

Reason: The nova-xvpvncproxy service is deprecated and will be removed in an upcoming release.

9.1.54. workarounds

The following table outlines the options available under the [workarounds] group in the /etc/nova/nova.conf file.

Table 9.53. workarounds
Configuration option = Default valueTypeDescription

disable_fallback_pcpu_query = False

boolean value

Disable fallback request for VCPU allocations when using pinned instances.

Starting in Train, compute nodes using the libvirt virt driver can report PCPU inventory and will use this for pinned instances. The scheduler will automatically translate requests using the legacy CPU pinning-related flavor extra specs, hw:cpu_policy and hw:cpu_thread_policy, their image metadata property equivalents, and the emulator threads pinning flavor extra spec, hw:emulator_threads_policy, to new placement requests. However, compute nodes require additional configuration in order to report PCPU inventory and this configuration may not be present immediately after an upgrade. To ensure pinned instances can be created without this additional configuration, the scheduler will make a second request to placement for old-style VCPU-based allocations and fallback to these allocation candidates if necessary. This has a slight performance impact and is not necessary on new or upgraded deployments where the new configuration has been set on all hosts. By setting this option, the second lookup is disabled and the scheduler will only request PCPU-based allocations.

Deprecated since: 20.0.0

Reason: None

disable_group_policy_check_upcall = False

boolean value

Disable the server group policy check upcall in compute.

In order to detect races with server group affinity policy, the compute service attempts to validate that the policy was not violated by the scheduler. It does this by making an upcall to the API database to list the instances in the server group of the instance that it is booting, which violates our api/cell isolation goals. Eventually this will be solved by proper affinity guarantees in the scheduler and placement service, but until then, this late check is needed to ensure proper affinity policy.

Operators that desire api/cell isolation over this check should enable this flag, which will avoid making that upcall from compute.

Related options:

  • [filter_scheduler]/track_instance_changes also relies on upcalls from the compute service to the scheduler service.

disable_libvirt_livesnapshot = False

boolean value

Disable live snapshots when using the libvirt driver.

Live snapshots allow the snapshot of the disk to happen without an interruption to the guest, using coordination with a guest agent to quiesce the filesystem.

When using libvirt 1.2.2, live snapshots fail intermittently under load (likely related to concurrent libvirt/qemu operations). This config option provides a mechanism to disable live snapshots, in favor of cold snapshots, while this is resolved. Cold snapshots cause an instance outage while the guest is going through the snapshotting process.

For more information, refer to the bug report:

https://bugs.launchpad.net/nova/+bug/1334398

Possible values:

  • True: Live snapshots are disabled when using libvirt
  • False: Live snapshots are always used when snapshotting (as long as there is a new enough libvirt and the backend storage supports it)

Deprecated since: 19.0.0

Reason: This option was added to work around issues with libvirt 1.2.2. We no longer support this version of libvirt, which means this workaround is no longer necessary. It will be removed in a future release.

disable_native_luksv1 = False

boolean value

When attaching encrypted LUKSv1 Cinder volumes to instances, the libvirt driver configures the encrypted disks to be natively decrypted by QEMU.

A performance issue has been discovered in the libgcrypt library used by QEMU that severely limits I/O performance in this scenario.

For more information please refer to the following bug report:

RFE: hardware accelerated AES-XTS mode https://bugzilla.redhat.com/show_bug.cgi?id=1762765

Enabling this workaround option will cause Nova to use the legacy dm-crypt based os-brick encryptor to decrypt the LUKSv1 volume.

Note that enabling this option while using volumes that do not provide a host block device such as Ceph will result in a failure to boot from or attach the volume to an instance. See the [workarounds]/rbd_block_device option for a way to avoid this for RBD.

Related options:

  • compute_driver (libvirt)
  • rbd_block_device (workarounds)

disable_rootwrap = False

boolean value

Use sudo instead of rootwrap.

Allow fallback to sudo for performance reasons.

For more information, refer to the bug report:

https://bugs.launchpad.net/nova/+bug/1415106

Possible values:

  • True: Use sudo instead of rootwrap
  • False: Use rootwrap as usual

Interdependencies to other options:

  • Any options that affect rootwrap will be ignored.

enable_numa_live_migration = False

boolean value

Enable live migration of instances with NUMA topologies.

Live migration of instances with NUMA topologies when using the libvirt driver is only supported in deployments that have been fully upgraded to Train. In previous versions, or in mixed Stein/Train deployments with a rolling upgrade in progress, live migration of instances with NUMA topologies is disabled by default when using the libvirt driver. This includes live migration of instances with CPU pinning or hugepages. CPU pinning and huge page information for such instances is not currently re-calculated, as noted in bug #1289064. This means that if instances were already present on the destination host, the migrated instance could be placed on the same dedicated cores as these instances or use hugepages allocated for another instance. Alternatively, if the host platforms were not homogeneous, the instance could be assigned to non-existent cores or be inadvertently split across host NUMA nodes.

Despite these known issues, there may be cases where live migration is necessary. By enabling this option, operators that are aware of the issues and are willing to manually work around them can enable live migration support for these instances.

Deprecated since: 20.0.0

Reason: This option was added to mitigate known issues when live migrating instances with a NUMA topology with the libvirt driver. Those issues are resolved in Train. Clouds using the libvirt driver and fully upgraded to Train support NUMA-aware live migration. This option will be removed in a future release.

ensure_libvirt_rbd_instance_dir_cleanup = False

boolean value

Ensure the instance directory is removed during clean up when using rbd.

When enabled this workaround will ensure that the instance directory is always removed during cleanup on hosts using [libvirt]/images_type=rbd. This avoids the following bugs with evacuation and revert resize clean up that lead to the instance directory remaining on the host:

https://bugs.launchpad.net/nova/+bug/1414895

https://bugs.launchpad.net/nova/+bug/1761062

Both of these bugs can then result in DestinationDiskExists errors being raised if the instances ever attempt to return to the host.

Warning: Operators will need to ensure that the instance directory itself, specified by [DEFAULT]/instances_path, is not shared between computes before enabling this workaround; otherwise the console.log, kernels, ramdisks, and any additional files being used by the running instance will be lost.

Related options:

  • compute_driver (libvirt)
  • [libvirt]/images_type (rbd)
  • instances_path

handle_virt_lifecycle_events = True

boolean value

Enable handling of events emitted from compute drivers.

Many compute drivers emit lifecycle events, which are events that occur when, for example, an instance is starting or stopping. If the instance is going through task state changes due to an API operation, like resize, the events are ignored.

This is an advanced feature which allows the hypervisor to signal to the compute service that an unexpected state change has occurred in an instance and that the instance can be shut down automatically. Unfortunately, this can race in some conditions, for example during reboot operations or when the compute service or the host is rebooted (planned or due to an outage). If such races are common, then it is advisable to disable this feature.

Care should be taken when this feature is disabled and sync_power_state_interval is set to a negative value. In this case, any instances that get out of sync between the hypervisor and the Nova database will have to be synchronized manually.

For more information, refer to the bug report: https://bugs.launchpad.net/bugs/1444630

Interdependencies to other options:

  • If sync_power_state_interval is negative and this feature is disabled, then instances that get out of sync between the hypervisor and the Nova database will have to be synchronized manually.

never_download_image_if_on_rbd = False

boolean value

When booting from an image on a ceph-backed compute node, if the image does not already reside on the ceph cluster (as would be the case if glance is also using the same cluster), nova will download the image from glance and upload it to ceph itself. If using multiple ceph clusters, this may cause nova to unintentionally duplicate the image in a non-COW-able way in the local ceph deployment, wasting space.

For more information, refer to the bug report:

https://bugs.launchpad.net/nova/+bug/1858877

Enabling this option will cause nova to refuse to boot an instance if it would require downloading the image from glance and uploading it to ceph itself.

Related options:

  • compute_driver (libvirt)
  • [libvirt]/images_type (rbd)

rbd_volume_local_attach = False

boolean value

Attach RBD Cinder volumes to the compute as host block devices.

When enabled this option instructs os-brick to connect RBD volumes locally on the compute host as block devices instead of natively through QEMU.

This workaround does not currently support extending attached volumes.

This can be used with the disable_native_luksv1 workaround configuration option to avoid the recently discovered performance issues found within the libgcrypt library.

This workaround is temporary and will be removed during the W release once all impacted distributions have been able to update their versions of the libgcrypt library.

Related options:

  • compute_driver (libvirt)
  • disable_native_luksv1 (workarounds)
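
For example, a deployment hitting the libgcrypt performance issue with RBD-backed LUKSv1 volumes could enable both workarounds together, as sketched below:

    [workarounds]
    # Decrypt LUKSv1 volumes with dm-crypt instead of natively in QEMU
    disable_native_luksv1 = true
    # Attach RBD volumes as host block devices so dm-crypt can be used
    rbd_volume_local_attach = true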

reserve_disk_resource_for_image_cache = False

boolean value

If it is set to True then the libvirt driver will reserve DISK_GB resources for the images stored in the image cache. If [DEFAULT]/instances_path is on a different disk partition than the image cache directory, the driver will not reserve resources for the cache.

Such disk reservation is done by a periodic task in the resource tracker that runs every update_resources_interval seconds, so the reservation is not updated immediately when an image is cached.

Related options:

  • [DEFAULT]/instances_path
  • image_cache_subdirectory_name
  • update_resources_interval

9.1.55. wsgi

The following table outlines the options available under the [wsgi] group in the /etc/nova/nova.conf file.

Table 9.54. wsgi
Configuration option = Default valueTypeDescription

api_paste_config = api-paste.ini

string value

This option represents a file name for the paste.deploy config for nova-api.

Possible values:

  • A string representing file name for the paste.deploy config.

client_socket_timeout = 900

integer value

This option specifies the timeout for socket operations on client connections. If an incoming connection is idle for this number of seconds, it will be closed. This is the timeout on individual reads/writes on the socket connection. To wait forever, set to 0.

default_pool_size = 1000

integer value

This option specifies the size of the pool of greenthreads used by wsgi. It is possible to limit the number of concurrent connections using this option.

keep_alive = True

boolean value

This option allows using the same TCP connection to send and receive multiple HTTP requests/responses, as opposed to opening a new one for every single request/response pair. HTTP keep-alive indicates HTTP connection reuse.

Possible values:

  • True: reuse the HTTP connection.
  • False: close the client socket connection explicitly.

Related options:

  • tcp_keepidle

max_header_line = 16384

integer value

This option specifies the maximum line size of message headers to be accepted. max_header_line may need to be increased when using large tokens (typically those generated by the Keystone v3 API with big service catalogs).

Since TCP is a stream-based protocol, in order to reuse a connection, HTTP has to have a way to indicate the end of the previous response and the beginning of the next. Hence, in the keep-alive case, all messages must have a self-defined message length.

secure_proxy_ssl_header = None

string value

This option specifies the HTTP header used to determine the protocol scheme for the original request, even if it was removed by a SSL terminating proxy.

Possible values:

  • None (default) - the request scheme is not influenced by any HTTP headers
  • A valid HTTP header, such as HTTP_X_FORWARDED_PROTO

Warning

Do not set this unless you know what you are doing.

Make sure ALL of the following are true before setting this (assuming the values from the example above):

  • Your API is behind a proxy.
  • Your proxy strips the X-Forwarded-Proto header from all incoming requests. In other words, if end users include that header in their requests, the proxy will discard it.
  • Your proxy sets the X-Forwarded-Proto header and sends it to API, but only for requests that originally come in via HTTPS.

If any of those are not true, you should keep this setting set to None.
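
If all of the above conditions do hold, a sketch of such a setup is:

    [wsgi]
    secure_proxy_ssl_header = HTTP_X_FORWARDED_PROTO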

ssl_ca_file = None

string value

This option allows setting path to the CA certificate file that should be used to verify connecting clients.

Possible values:

  • String representing path to the CA certificate file.

Related options:

  • enabled_ssl_apis

ssl_cert_file = None

string value

This option allows setting path to the SSL certificate of API server.

Possible values:

  • String representing path to the SSL certificate.

Related options:

  • enabled_ssl_apis

ssl_key_file = None

string value

This option specifies the path to the file where SSL private key of API server is stored when SSL is in effect.

Possible values:

  • String representing path to the SSL private key.

Related options:

  • enabled_ssl_apis
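
The three SSL file options work together with enabled_ssl_apis in the [DEFAULT] section; a sketch with illustrative paths:

    [DEFAULT]
    enabled_ssl_apis = osapi_compute

    [wsgi]
    ssl_cert_file = /etc/pki/tls/certs/nova-api.crt
    ssl_key_file = /etc/pki/tls/private/nova-api.key
    # Optional: verify connecting clients against this CA
    ssl_ca_file = /etc/pki/tls/certs/clients-ca.pem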

tcp_keepidle = 600

integer value

This option sets the value of TCP_KEEPIDLE in seconds for each server socket. It specifies the duration of time to keep connection active. TCP generates a KEEPALIVE transmission for an application that requests to keep connection active. Not supported on OS X.

Related options:

  • keep_alive

wsgi_log_format = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f

string value

It represents a python format string that is used as the template to generate log lines. The following values can be formatted into it: client_ip, date_time, request_line, status_code, body_length, wall_seconds.

This option is used for building custom request loglines when running nova-api under eventlet. If used under uwsgi or apache, this option has no effect.

Possible values:

  • %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f (default)
  • Any formatted string formed by specific values.

Deprecated since: 16.0.0

Reason: This option only works when running nova-api under eventlet, and encodes very eventlet specific pieces of information. Starting in Pike the preferred model for running nova-api is under uwsgi or apache mod_wsgi.

9.1.56. xenserver

The following table outlines the options available under the [xenserver] group in the /etc/nova/nova.conf file.

Table 9.55. xenserver
Configuration option = Default valueTypeDescription

agent_path = usr/sbin/xe-update-networking

string value

Path to locate guest agent on the server.

Specifies the path in which the XenAPI guest agent should be located. If the agent is present, network configuration is not injected into the image.

Related options:

For this option to have an effect:

  • flat_injected should be set to True
  • compute_driver should be set to xenapi.XenAPIDriver

agent_resetnetwork_timeout = 60

integer value

Number of seconds to wait for agent’s reply to resetnetwork request.

This indicates the amount of time xapi agent plugin waits for the agent to respond to the resetnetwork request specifically. The generic timeout for agent communication agent_timeout is ignored in this case.

agent_timeout = 30

integer value

Number of seconds to wait for agent’s reply to a request.

Nova configures/performs certain administrative actions on a server with the help of an agent that's installed on the server. The communication between Nova and the agent is achieved via sharing messages, called records, over xenstore, a shared storage across all the domains on a XenServer host. Operations performed by the agent on behalf of Nova are: version, key_init, password, resetnetwork, inject_file, and agentupdate.

To perform one of the above operations, the xapi agent plugin writes the command and its associated parameters to a certain location known to the domain and awaits response. On being notified of the message, the agent performs appropriate actions on the server and writes the result back to xenstore. This result is then read by the xapi agent plugin to determine the success/failure of the operation.

This config option determines how long the xapi agent plugin shall wait to read the response off of xenstore for a given request/command. If the agent on the instance fails to write the result in this time period, the operation is considered to have timed out.

Related options:

  • agent_version_timeout
  • agent_resetnetwork_timeout

agent_version_timeout = 300

integer value

Number of seconds to wait for agent’t reply to version request.

This indicates the amount of time xapi agent plugin waits for the agent to respond to the version request specifically. The generic timeout for agent communication agent_timeout is ignored in this case.

During the build process the version request is used to determine if the agent is available/operational to perform other requests such as resetnetwork, password, key_init and inject_file. If the version call fails, the other configuration is skipped. So, this configuration option can also be interpreted as time in which agent is expected to be fully operational.

block_device_creation_timeout = 10

integer value

Time in seconds to wait for a block device to be created.

cache_images = all

string value

Cache glance images locally.

The value must be one of all (cache all images), some (cache only images that have the image property cache_in_nova=True), or none (turn off caching entirely). Configuring a value other than these will default to all.

Note: There is nothing that deletes these images.

check_host = True

boolean value

Ensure compute service is running on host XenAPI connects to. This option must be set to false if the independent_compute option is set to true.

Possible values:

  • Setting this option to true will make sure that compute service is running on the same host that is specified by connection_url.
  • Setting this option to false skips the check.

Related options:

  • independent_compute

connection_concurrent = 5

integer value

Maximum number of concurrent XenAPI connections.

In nova, multiple XenAPI requests can happen at a time. Configuring this option will parallelize access to the XenAPI session, which allows you to make concurrent XenAPI connections.

connection_password = None

string value

Password for connection to XenServer/Xen Cloud Platform

connection_url = None

string value

URL for connection to XenServer/Xen Cloud Platform. A special value of unix://local can be used to connect to the local unix socket.

Possible values:

  • Any string that represents a URL. The connection_url is generally the management network IP address of the XenServer.
  • This option must be set if you chose the XenServer driver.

connection_username = root

string value

Username for connection to XenServer/Xen Cloud Platform
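
A minimal connection setup for the XenServer driver is sketched below; the management address and credentials are placeholders:

    [DEFAULT]
    compute_driver = xenapi.XenAPIDriver

    [xenserver]
    connection_url = http://xenserver-mgmt.example.com
    connection_username = root
    connection_password = secret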

console_public_hostname = <based on operating system>

string value

Publicly visible name for this console host.

Possible values:

  • Current hostname (default) or any string representing hostname.

default_os_type = linux

string value

Default OS type used when uploading an image to glance

disable_agent = False

boolean value

Disables the use of XenAPI agent.

This configuration option suggests whether the use of agent should be enabled or not regardless of what image properties are present. Image properties have an effect only when this is set to True. Read description of config option use_agent_default for more information.

Related options:

  • use_agent_default

image_compression_level = None

integer value

Compression level for images.

Setting this option configures the gzip compression level. It sets the GZIP environment variable before spawning tar -cz to force the compression level. The default is None, which means the GZIP environment variable is not set and the default level (usually -6) is used.

Possible values:

  • Range is 1-9, e.g., 9 for gzip -9, 9 being most compressed but most CPU intensive on dom0.
  • Any values out of this range will default to None.

image_handler = direct_vhd

string value

The plugin used to handle image uploads and downloads.

Provide a short name representing an image driver required to handle the image between compute host and glance.

image_upload_handler = ''

string value

Dom0 plugin driver used to handle image uploads.

Provide a string value representing a plugin driver required to handle the image uploading to GlanceStore.

Images, and snapshots from XenServer need to be uploaded to the data store for use. image_upload_handler takes in a value for the Dom0 plugin driver. This driver is then called to upload images to the GlanceStore.

Deprecated since: 18.0.0

Reason: Instead of setting the class path here, we will use short names to represent image handlers. The download and upload handlers must also be matching. So another new option "image_handler" will be used to set the short name for a specific image handler for both image download and upload.

independent_compute = False

boolean value

Used to prevent attempts to attach VBDs locally, so Nova can be run in a VM on a different host.

Related options:

  • CONF.flat_injected (Must be False)
  • CONF.xenserver.check_host (Must be False)
  • CONF.default_ephemeral_format (Must be unset or ext3)
  • Joining host aggregates (will error if attempted)
  • Swap disks for Windows VMs (will error if attempted)
  • Nova-based auto_configure_disk (will error if attempted)

introduce_vdi_retry_wait = 20

integer value

Number of seconds to wait for SR to settle if the VDI does not exist when first introduced.

Some SRs, particularly iSCSI connections, are slow to see the VDIs right after they are introduced. Setting this option to a time interval makes the SR wait for that period before raising a "VDI not found" exception.

ipxe_boot_menu_url = None

string value

URL to the iPXE boot menu.

An iPXE ISO is a specially crafted ISO which supports iPXE booting. This feature gives a means to roll your own image.

By default this option is not set. Enable this option to boot an iPXE ISO.

Related Options:

  • ipxe_network_name
  • ipxe_mkisofs_cmd

ipxe_mkisofs_cmd = mkisofs

string value

Name and optionally path of the tool used for ISO image creation.

An iPXE ISO is a specially crafted ISO which supports iPXE booting. This feature gives a means to roll your own image.

Note: By default, mkisofs is not present in the Dom0, so the package must either be manually added to Dom0 or the mkisofs binary included in the image itself.

Related Options:

  • ipxe_network_name
  • ipxe_boot_menu_url

ipxe_network_name = None

string value

Name of network to use for booting iPXE ISOs.

An iPXE ISO is a specially crafted ISO which supports iPXE booting. This feature gives a means to roll your own image.

By default this option is not set. Enable this option to boot an iPXE ISO.

Related Options:

  • ipxe_boot_menu_url
  • ipxe_mkisofs_cmd

login_timeout = 10

integer value

Timeout in seconds for XenAPI login.

max_kernel_ramdisk_size = 16777216

integer value

Maximum size in bytes of kernel or ramdisk images.

Specifying the maximum size of kernel or ramdisk images avoids copying large files to dom0 and filling up /boot/guest.

num_vbd_unplug_retries = 10

integer value

Maximum number of retries to unplug a VBD. If set to 0, the unplug is attempted once, with no retries.

ovs_integration_bridge = None

string value

The name of the integration Bridge that is used with xenapi when connecting with Open vSwitch.

Note: The value of this config option is dependent on the environment, therefore this configuration value must be set accordingly if you are using XenAPI.

Possible values:

  • Any string that represents a bridge name.

running_timeout = 60

integer value

Wait time for instances to go to running state.

Provide an integer value representing time in seconds to set the wait time for an instance to go to running state.

When a request to create an instance is received by nova-api and communicated to nova-compute, the creation of the instance occurs through interaction with Xen via XenAPI in the compute node. Once the node on which the instance(s) are to be launched is decided by nova-scheduler and the launch is triggered, a certain amount of wait time is involved until the instance(s) become available and running. This wait time is defined by running_timeout. If the instances do not go to the running state within this specified wait time, the launch expires and the instance(s) are set to the error state.

sparse_copy = True

boolean value

Whether to use sparse_copy for copying data on a resize down. (False will use standard dd). This speeds up resizes down considerably since large runs of zeros won’t have to be rsynced.

sr_base_path = /var/run/sr-mount

string value

Base path to the storage repository on the XenServer host.

sr_matching_filter = default-sr:true

string value

Filter for finding the SR to be used to install guest instances on.

Possible values:

  • To use the Local Storage in default XenServer/XCP installations set this flag to other-config:i18n-key=local-storage.
  • To select an SR with a different matching criteria, you could set it to other-config:my_favorite_sr=true.
  • To fall back on the Default SR, as displayed by XenCenter, set this flag to: default-sr:true.

target_host = None

host address value

The iSCSI Target Host.

This option represents the hostname or IP address of the iSCSI target. If the target host is not present in the connection information from the volume provider, the value from this option is taken.

Possible values:

  • Any string that represents hostname/ip of Target.

target_port = 3260

port value

The iSCSI Target Port.

This option represents the port of the iSCSI target. If the target port is not present in the connection information from the volume provider, the value from this option is taken.

use_agent_default = False

boolean value

Whether or not to use the agent by default when its usage is enabled but not indicated by the image.

The use of XenAPI agent can be disabled altogether using the configuration option disable_agent. However, if it is not disabled, the use of an agent can still be controlled by the image in use through one of its properties, xenapi_use_agent. If this property is either not present or specified incorrectly on the image, the use of agent is determined by this configuration option.

Note that if this configuration is set to True when the agent is not present, the boot times will increase significantly.

Related options:

  • disable_agent

use_join_force = True

boolean value

When adding new host to a pool, this will append a --force flag to the command, forcing hosts to join a pool, even if they have different CPUs.

Since XenServer version 5.6 it has been possible to create a pool of hosts that have different CPU capabilities. To accommodate CPU differences, XenServer limited the features it uses to determine CPU compatibility to only those exposed by the CPU, and added support for CPU masking. Despite this effort to level out differences between CPUs, it is still possible that adding a new host will fail, hence the option to force the join was introduced.

vhd_coalesce_max_attempts = 20

integer value

Max number of times to poll for VHD to coalesce.

This option determines the maximum number of attempts that can be made for coalescing the VHD before giving up.

Related options:

  • vhd_coalesce_poll_interval

vhd_coalesce_poll_interval = 5.0

floating point value

The interval used for polling of coalescing vhds.

This is the interval after which the task of coalesce VHD is performed, until it reaches the max attempts that is set by vhd_coalesce_max_attempts.

Related options:

  • vhd_coalesce_max_attempts

9.1.57. xvp

The following table outlines the options available under the [xvp] group in the /etc/nova/nova.conf file.

Table 9.56. xvp
Configuration option = Default valueTypeDescription

console_xvp_conf = /etc/xvp.conf

string value

Generated XVP conf file

console_xvp_conf_template = $pybasedir/nova/console/xvp.conf.template

string value

XVP conf template

console_xvp_log = /var/log/xvp.log

string value

XVP log file

console_xvp_multiplex_port = 5900

port value

Port for XVP to multiplex VNC connections on

console_xvp_pid = /var/run/xvp.pid

string value

XVP master process pid file

9.1.58. zvm

The following table outlines the options available under the [zvm] group in the /etc/nova/nova.conf file.

Table 9.57. zvm
Configuration option = Default valueTypeDescription

ca_file = None

string value

CA certificate file to be verified in httpd server with TLS enabled

A string, it must be a path to a CA bundle to use.

cloud_connector_url = None

uri value

URL to be used to communicate with z/VM Cloud Connector.

image_tmp_path = $state_path/images

string value

The path at which images will be stored (snapshot, deploy, etc).

Images used for deploy and images captured via snapshot need to be stored on the local disk of the compute host. This configuration identifies the directory location.

Possible values: A file system path on the host running the compute service.

reachable_timeout = 300

integer value

Timeout (seconds) to wait for an instance to start.

The z/VM driver relies on communication between the instance and the cloud connector. After an instance is created, it must have enough time to wait for all the network info to be written into the user directory. The driver keeps rechecking the network status of the instance until the timeout is reached. If setting up the network fails, the driver notifies the user that starting the instance failed and puts the instance in the ERROR state. The underlying z/VM guest is then deleted.

Possible values: Any positive integer. At least 300 seconds (5 minutes) is recommended, but this will vary depending on instance and system load. A value of 0 is used for debugging; in this case the underlying z/VM guest will not be deleted when the instance is marked in the ERROR state.
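
A minimal [zvm] section is sketched below; the cloud connector URL and file paths are placeholders:

    [zvm]
    cloud_connector_url = https://zvm-cc.example.com:8080
    ca_file = /etc/pki/tls/certs/zvm-ca.pem
    image_tmp_path = /var/lib/nova/images
    reachable_timeout = 300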
