Chapter 9. nova
This chapter describes the configuration options available for the nova
service.
9.1. nova.conf
This section contains options for the /etc/nova/nova.conf
file.
9.1.1. DEFAULT
The following table outlines the options available under the [DEFAULT]
group in the /etc/nova/nova.conf
file.
Configuration option = Default value | Type | Description |
---|---|---|
| boolean value | Allow destination machine to match source for resize. Useful when testing in single-host environments. By default it is not allowed to resize to the same host. Setting this option to true will add the same host to the destination options. Also set to true if you allow the ServerGroupAffinityFilter and need to resize. |
| integer value | Timeout for Accelerator Request (ARQ) bind event message arrival. Number of seconds to wait for ARQ bind resolution event to arrive. The event indicates that every ARQ for an instance has either bound successfully or failed to bind. If it does not arrive, instance bringup is aborted with an exception. |
| string value | Enable eventlet backdoor. Acceptable values are 0, <port>, and <start>:<end>, where 0 results in listening on a random tcp port number; <port> results in listening on the specified port number (and not enabling backdoor if that port is in use); and <start>:<end> results in listening on the smallest unused port number within the specified range of port numbers. The chosen port is displayed in the service’s log file. |
| string value | Enable eventlet backdoor, using the provided path as a unix socket that can receive connections. This option is mutually exclusive with backdoor_port in that only one should be provided. If both are provided then the existence of this option overrides the usage of that option. Inside the path {pid} will be replaced with the PID of the current process. |
| integer value | The number of times to check for a volume to be "available" before attaching it during server create.
When creating a server with block device mappings where the source is an image, snapshot, or blank volume and the destination is a volume, the nova-compute service creates the volume and then waits for it to become "available" before attaching it to the server.
If the operation times out, the volume will be deleted if the block device mapping is marked for deletion on instance termination. It is recommended to configure the image cache in the block storage service to speed up this operation. See https://docs.openstack.org/cinder/latest/admin/blockstorage-image-volume-cache.html for details. Possible values:
Related options:
|
| integer value | Interval (in seconds) between block device allocation retries on failures.
This option allows the user to specify the time interval between consecutive retries. Possible values:
Related options:
|
| string value | Path to SSL certificate file. Related options:
|
| string value | Defines which driver to use for controlling virtualization. Possible values:
|
| list value | A comma-separated list of monitors that can be used for getting compute metrics. You can use the alias/name from the setuptools entry points for nova.compute.monitors.* namespaces. If no namespace is supplied, the "cpu." namespace is assumed for backwards-compatibility. Note Only one monitor per namespace (For example: cpu) can be loaded at a time. Possible values:
|
| string value | Config drive format. Config drive format that will contain metadata attached to the instance when it boots. Related options:
Deprecated since: 19.0.0 Reason: This option was originally added as a workaround for a bug in libvirt (#1246201) that was resolved in libvirt v1.2.17. As a result, this option is no longer necessary or useful. |
| integer value | The pool size limit for connections expiration policy |
| integer value | The time-to-live in sec of idle connections in the pool |
| string value | Console proxy host to be used to connect to instances on this host. It is the publicly visible name for the console host. Possible values:
|
| string value | The default exchange under which topics are scoped. May be overridden by an exchange name specified in the transport_url option. |
| floating point value | Virtual CPU to physical CPU allocation ratio.
This option is used to influence the hosts selected by the Placement API by configuring the allocation ratio for VCPU inventory.
Possible values:
Related options:
|
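For example, assuming the standard nova option names for these ratios (cpu_allocation_ratio, ram_allocation_ratio, disk_allocation_ratio), an over-commit policy might look like the following sketch; the values are examples only:

```ini
[DEFAULT]
# Allow 4 virtual CPUs per physical CPU (example value)
cpu_allocation_ratio = 4.0
# Do not over-commit memory (example value)
ram_allocation_ratio = 1.0
# Slight disk over-commit for sparse or compressed images (example value)
disk_allocation_ratio = 1.2
```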
| boolean value | Run as a background process. |
| boolean value | If set to true, the logging level will be set to DEBUG instead of the default INFO level. |
| string value | Name of the network to be used to set access IPs for instances. If there are multiple IPs to choose from, an arbitrary one will be chosen. Possible values:
|
| string value | Default availability zone for compute services. This option determines the default availability zone for nova-compute services, which will be used if the service(s) do not belong to aggregates with availability zone metadata. Possible values:
|
| string value | The default format an ephemeral_volume will be formatted with on creation. Possible values:
|
| list value | List of package logging levels in logger=LEVEL pairs. This option is ignored if log_config_append is set. |
| string value | Default availability zone for instances. This option determines the default availability zone for instances, which will be used when a user does not specify one when creating an instance. The instance(s) will be bound to this availability zone for their lifetime. Possible values:
Related options:
|
| floating point value | Virtual disk to physical disk allocation ratio.
This option is used to influence the hosts selected by the Placement API by configuring the allocation ratio for DISK_GB inventory. When configured, a ratio greater than 1.0 will result in over-subscription of the available physical disk, which can be useful for more efficiently packing instances created with images that do not use the entire virtual disk, such as sparse or compressed images. It can be set to a value between 0.0 and 1.0 in order to preserve a percentage of the disk for uses other than instances.
Possible values:
Related options:
|
| boolean value | Enable new nova-compute services on this host automatically. When a new nova-compute service starts up, it gets registered in the database as an enabled service. Sometimes it can be useful to register new compute services in disabled state and then enable them at a later point in time. This option only sets this behavior for nova-compute services; it does not auto-disable other services like nova-conductor, nova-scheduler, or nova-osapi_compute. Possible values:
|
| list value | List of APIs to be enabled by default. |
| list value | List of APIs with enabled SSL. Nova provides SSL support for the API servers. The enabled_ssl_apis option allows configuring the SSL support. |
| integer value | Size of executor thread pool when executor is threading or eventlet. |
| boolean value | Enables or disables fatal status of deprecations. |
| boolean value | This option determines whether the network setup information is injected into the VM before it is booted. While it was originally designed to be used only by nova-network, it is also used by the vmware virt driver to control whether network information is injected into a VM. The libvirt virt driver also uses it when we use config_drive to configure network to control whether network information is injected into a VM. |
| boolean value | Force injection to take place on a config drive. When this option is set to true, config drive functionality will be forced enabled by default; otherwise users can still enable config drives via the REST API or image metadata properties. Launched instances are not affected by this option. Possible values:
Related options:
|
| boolean value | Force conversion of backing images to raw format. Possible values:
Related options:
|
| integer value | Specify a timeout after which a gracefully shutdown server will exit. Zero value means endless wait. |
| integer value | Interval between instance network information cache updates. Number of seconds after which each compute node runs the task of querying Neutron for all of its instances' networking information, then updates the Nova db with that information. Nova will never update its cache if this option is set to 0. If we don’t update the cache, the metadata service and nova-api endpoints will be proxying incorrect network data about the instance. So, it is not recommended to set this option to 0. Possible values:
|
| string value | Hostname, FQDN or IP address of this host. Used as:
Must be valid within AMQP key. Possible values:
|
| floating point value | Initial virtual CPU to physical CPU allocation ratio.
This is only used when initially creating the compute node record for a given nova-compute service. See https://docs.openstack.org/nova/latest/admin/configuration/schedulers.html for more details and usage scenarios. Related options:
|
| floating point value | Initial virtual disk to physical disk allocation ratio.
This is only used when initially creating the compute node record for a given nova-compute service. See https://docs.openstack.org/nova/latest/admin/configuration/schedulers.html for more details and usage scenarios. Related options:
|
| floating point value | Initial virtual RAM to physical RAM allocation ratio.
This is only used when initially creating the compute node record for a given nova-compute service. See https://docs.openstack.org/nova/latest/admin/configuration/schedulers.html for more details and usage scenarios. Related options:
|
| string value | Path to /etc/network/interfaces template. The path to a template file for the /etc/network/interfaces-style file, which will be populated by nova and subsequently used by cloudinit. This provides a method to configure network connectivity in environments without a DHCP server.
The template will be rendered using the Jinja2 template engine, and will receive a top-level key called interfaces. Refer to the cloud-init documentation for more information: https://cloudinit.readthedocs.io/en/latest/topics/datasources.html Possible values:
Related options:
|
| integer value | Maximum time in seconds that an instance can take to build. If this timer expires, instance status will be changed to ERROR. Enabling this option will make sure an instance will not be stuck in BUILD state for a longer period. Possible values:
|
| integer value | Interval for retrying failed instance file deletes. This option depends on maximum_instance_delete_attempts. This option specifies how often to retry deletes whereas maximum_instance_delete_attempts specifies the maximum number of retry attempts that can be made. Possible values:
Related options:
|
`instance_format = [instance: %(uuid)s] ` | string value | The format for an instance that is passed with the log message. |
| string value | Template string to be used to generate instance names.
This template controls the creation of the database name of an instance. This is not the display name you enter when creating an instance (via Horizon or CLI). For a new deployment it is advisable to change the default value (which uses the database autoincrement) to another value which makes use of the attributes of an instance, like Possible values:
|
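For example, assuming the standard instance_name_template option name (whose default, instance-%08x, is based on the database auto-increment ID), a UUID-based template might look like this sketch:

```ini
[DEFAULT]
# Default: database auto-increment based names such as instance-0000002a
#instance_name_template = instance-%08x
# Example alternative using the instance UUID instead
instance_name_template = instance-%(uuid)s
```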
| boolean value | This option enables periodic compute.instance.exists notifications. Each compute node must be configured to generate system usage data. These notifications are consumed by OpenStack Telemetry service. |
| string value | Time period to generate instance usages for. It is possible to define optional offset to given period by appending @ character followed by a number defining offset. Possible values:
|
`instance_uuid_format = [instance: %(uuid)s] ` | string value | The format for an instance UUID that is passed with the log message. |
| string value | Specifies where instances are stored on the hypervisor’s disk. It can point to locally attached storage or a directory on NFS. Possible values:
Related options:
|
| string value | Availability zone for internal services. This option determines the availability zone for the various internal nova services, such as nova-scheduler, nova-conductor, etc. Possible values:
|
| string value | SSL key file (if separate from cert). Related options:
|
| integer value | Maximum number of 1 second retries in live_migration. It specifies the number of retries against iptables when it complains. This happens when a user continuously sends live-migration requests to the same host, leading to concurrent requests to iptables. Possible values:
|
| string value | The name of a logging configuration file. This file is appended to any existing logging configuration files. For details about logging configuration files, see the Python logging module documentation. Note that when logging configuration files are used then all logging configuration is set in the configuration file and other logging configuration options are ignored (for example, log-date-format). |
| string value | Defines the format string for %%(asctime)s in log records. Default: %(default)s . This option is ignored if log_config_append is set. |
| string value | (Optional) The base directory used for relative log_file paths. This option is ignored if log_config_append is set. |
| string value | (Optional) Name of log file to send logging output to. If no default is set, logging will go to stderr as defined by use_stderr. This option is ignored if log_config_append is set. |
| boolean value | Enables or disables logging values of all registered options when starting a service (at DEBUG level). |
| integer value | The amount of time before the log files are rotated. This option is ignored unless log_rotation_type is set to "interval". |
| string value | Rotation interval type. The time of the last file change (or the time when the service was started) is used when scheduling the next rotation. |
| string value | Log rotation type. |
| string value | Format string to use for log messages with context. Used by oslo_log.formatters.ContextFormatter |
| string value | Additional data to append to log message when logging level for the message is DEBUG. Used by oslo_log.formatters.ContextFormatter |
| string value | Format string to use for log messages when context is undefined. Used by oslo_log.formatters.ContextFormatter |
| string value | Prefix each line of exception output with this format. Used by oslo_log.formatters.ContextFormatter |
| string value | Defines the format string for %(user_identity)s that is used in logging_context_format_string. Used by oslo_log.formatters.ContextFormatter |
| integer value | This option allows setting an alternate timeout value for RPC calls that have the potential to take a long time. If set, RPC calls to other services will use this value for the timeout (in seconds) instead of the global rpc_response_timeout value. Operations with RPC calls that utilize this value:
Related options:
|
| integer value | Limits the maximum number of instance builds to run concurrently by nova-compute. Compute service can attempt to build an infinite number of instances, if asked to do so. This limit is enforced to avoid building an unlimited number of instances concurrently on a compute node. This value can be set per compute node. Possible Values:
|
| integer value | Maximum number of live migrations to run concurrently. This limit is enforced to avoid outbound live migrations overwhelming the host/network and causing failures. It is not recommended that you change this unless you are very sure that doing so is safe and stable in your environment. Possible values:
|
| integer value | Maximum number of instance snapshot operations to run concurrently. This limit is enforced to prevent snapshots overwhelming the host/network/storage and causing failure. This value can be set per compute node. Possible Values:
|
| integer value | Maximum number of devices that will result in a local image being created on the hypervisor node.
A negative number means unlimited. Setting this option to 0 means that any request that attempts to create a local disk will fail. Possible values:
|
| integer value | Maximum number of rotated log files. |
| integer value | Log file maximum size in MB. This option is ignored if "log_rotation_type" is not set to "size". |
| integer value | The number of times to attempt to reap an instance’s files. This option specifies the maximum number of retry attempts that can be made. Possible values:
Related options:
|
| string value | IP address on which the metadata API will listen. The metadata API service listens on this IP address for incoming requests. |
| port value | Port on which the metadata API will listen. The metadata API service listens on this port number for incoming requests. |
| integer value | Number of workers for metadata service. If not specified the number of available CPUs will be used. The metadata service can be configured to run as multi-process (workers). This overcomes the problem of reduction in throughput when API request concurrency increases. The metadata service will run in the specified number of processes. Possible Values:
|
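As a sketch of how the metadata API options described above fit together (assuming the standard option names metadata_listen, metadata_listen_port, and metadata_workers):

```ini
[DEFAULT]
# Bind the metadata API to a dedicated management address (example address)
metadata_listen = 192.0.2.10
metadata_listen_port = 8775
# Run a fixed number of worker processes instead of one per available CPU
metadata_workers = 4
```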
| integer value | Number of times to retry live-migration before failing. Possible values:
|
| string value | Name or path of the tool used for ISO image creation.
Use the
To use a config drive with Hyper-V, you must set the Possible values:
Related options:
|
| string value | The IP address which is used to connect to the block storage network. Possible values:
Related options:
|
| string value | The IP address which the host is using to connect to the management network. Possible values:
Related options:
|
| integer value | Number of times to retry network allocation. It is required to attempt network allocation retries if the virtual interface plug fails. Possible values:
|
| list value | Image properties that should not be inherited from the instance when taking a snapshot. This option gives an opportunity to select which image-properties should not be inherited by newly created snapshots.
Possible values:
|
| string value | IP address on which the OpenStack API will listen. The OpenStack API service listens on this IP address for incoming requests. |
| port value | Port on which the OpenStack API will listen. The OpenStack API service listens on this port number for incoming requests. |
`osapi_compute_unique_server_name_scope = ` | string value | Sets the scope of the check for unique instance names. The default doesn’t check for unique names. If a scope for the name check is set, a launch of a new instance or an update of an existing instance with a duplicate name will result in an 'InstanceExists' error. The uniqueness is case-insensitive. Setting this option can increase the usability for end users as they don’t have to distinguish among instances with the same name by their IDs. |
| integer value | Number of workers for OpenStack API service. The default will be the number of CPUs available. OpenStack API services can be configured to run as multi-process (workers). This overcomes the problem of reduction in throughput when API request concurrency increases. OpenStack API service will run in the specified number of processes. Possible Values:
|
| integer value | Length of generated instance admin passwords. |
| boolean value | Enable periodic tasks. If set to true, this option allows services to periodically run tasks on the manager. In case of running multiple schedulers or conductors you may want to run periodic tasks on only one host - in this case disable this option for all hosts but one. |
| integer value | Number of seconds to randomly delay when starting the periodic task scheduler to reduce stampeding. When compute workers are restarted in unison across a cluster, they all end up running the periodic tasks at the same time causing problems for the external services. To mitigate this behavior, periodic_fuzzy_delay option allows you to introduce a random initial delay when starting the periodic task scheduler. Possible Values:
|
| string value | Generic property to specify the pointer type. Input devices allow interaction with a graphical framebuffer. For example to provide a graphic tablet for absolute cursor movement.
If set, either the Related options:
|
| string value | The image preallocation mode to use. Image preallocation allows storage for instance images to be allocated up front when the instance is initially provisioned. This ensures immediate feedback is given if enough space isn’t available. In addition, it should significantly improve performance on writes to new blocks and may even improve I/O performance to prewritten blocks due to reduced fragmentation. |
| boolean value | Enables or disables publication of error events. |
| string value | The directory where the Nova python modules are installed. This directory is used to store template files for networking and remote console access. It is also the default path for other config options which need to persist Nova internal data. It is very unlikely that you need to change this option from its default value. Possible values:
Related options:
|
| floating point value | Virtual RAM to physical RAM allocation ratio.
This option is used to influence the hosts selected by the Placement API by configuring the allocation ratio for MEMORY_MB inventory.
Possible values:
Related options:
|
| integer value | Maximum number of logged messages per rate_limit_interval. |
| string value | Log level name used by rate limiting: CRITICAL, ERROR, INFO, WARNING, DEBUG or empty string. Logs with level greater or equal to rate_limit_except_level are not filtered. An empty string means that all levels are filtered. |
| integer value | Interval, number of seconds, of log rate limiting. |
| integer value | Time interval after which an instance is hard rebooted automatically. When doing a soft reboot, it is possible that a guest kernel is completely hung in a way that causes the soft reboot task to not ever finish. Setting this option to a time period in seconds will automatically hard reboot an instance if it has been stuck in a rebooting state longer than N seconds. Possible values:
|
| integer value | Interval for reclaiming deleted instances. A value greater than 0 will enable SOFT_DELETE of instances. This option decides whether the server to be deleted will be put into the SOFT_DELETED state. If this value is greater than 0, the deleted server will not be deleted immediately, instead it will be put into a queue until it’s too old (deleted time greater than the value of reclaim_instance_interval). The server can be recovered from the delete queue by using the restore action. If the deleted server remains longer than the value of reclaim_instance_interval, it will be deleted by a periodic task in the compute service automatically. Note that this option is read from both the API and compute nodes, and must be set globally otherwise servers could be put into a soft deleted state in the API and never actually reclaimed (deleted) on the compute node.
Possible values:
Related options:
|
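For example, assuming the standard reclaim_instance_interval option name, soft delete could be enabled for one hour as in the following sketch; as noted above, the value should be set on both API and compute nodes:

```ini
[DEFAULT]
# Keep deleted servers in SOFT_DELETED state for 3600 seconds before reclaiming them
reclaim_instance_interval = 3600
```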
| string value | Filename that will be used for storing websocket frames received and sent by a proxy service (like VNC, spice, serial) running on this host. If this is not set, no recording will be done. |
| integer value | Number of seconds indicating how frequently the state of services on a given hypervisor is reported. Nova needs to know this to determine the overall health of the deployment. Related Options:
|
| integer value | Interval to wait before un-rescuing an instance stuck in RESCUE. Possible values:
|
| integer value | Number of host CPUs to reserve for host processes.
The host resources usage is reported back to the scheduler continuously from nova-compute running on the compute node. This value is used to determine the
This option cannot be set if the Possible values:
Related options:
|
| integer value | Amount of disk resources in MB to always reserve for the host. The disk usage gets reported back to the scheduler from nova-compute running on the compute nodes. To prevent the disk resources from being considered as available, this option can be used to reserve disk space for that host. Possible values:
|
| integer value | Amount of memory in MB to reserve for the host so that it is always available to host processes. The host resources usage is reported back to the scheduler continuously from nova-compute running on the compute node. To prevent the host memory from being considered as available, this option is used to reserve memory for the host. Possible values:
|
| dict value | Number of huge/large memory pages to reserve per NUMA host cell. Possible values:
|
| integer value | Automatically confirm resizes after N seconds. Resize functionality will save the existing server before resizing. After the resize completes, user is requested to confirm the resize. The user has the opportunity to either confirm or revert all changes. Confirm resize removes the original server and changes server status from resized to active. Setting this option to a time period (in seconds) will automatically confirm the resize if the server is in resized state longer than that time. Possible values:
|
| boolean value | Enable resizing of filesystems via a block device. If enabled, attempt to resize the filesystem by accessing the image over a block device. This is done by the host and may not be necessary if the image contains a recent version of cloud-init. Possible mechanisms require the nbd driver (for qcow and raw), or loop (for raw). |
| boolean value | This option specifies whether to start guests that were running before the host rebooted. It ensures that all of the instances on a Nova compute node resume their state each time the compute node boots or restarts. |
| string value | Path to the rootwrap configuration file. Goal of the root wrapper is to allow a service-specific unprivileged user to run a number of actions as the root user in the safest manner possible. The configuration file used here must match the one defined in the sudoers entry. |
| integer value | Size of RPC connection pool. |
| boolean value | Add an endpoint to answer to ping calls. Endpoint is named oslo_rpc_server_ping |
| integer value | Seconds to wait for a response from a call. |
| boolean value | Some periodic tasks can be run in a separate process. Should we run them here? |
| string value | The compute service periodically checks for instances that have been deleted in the database but remain running on the compute node. The above option enables action to be taken when such instances are identified. Related options:
|
| integer value | Time interval in seconds to wait between runs for the clean up action. If set to 0, the above check will be disabled. If "running_deleted_instance_action" is set to "log" or "reap", a value greater than 0 must be set. Possible values:
Related options:
|
| integer value | Time interval in seconds to wait for the instances that have been marked as deleted in database to be eligible for cleanup. Possible values:
Related options:
|
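A hedged example combining the three options described above, assuming the standard names running_deleted_instance_action, running_deleted_instance_poll_interval, and running_deleted_instance_timeout:

```ini
[DEFAULT]
# Reap instances that are deleted in the database but still running on the node
running_deleted_instance_action = reap
# Run the cleanup check every 30 minutes (example)
running_deleted_instance_poll_interval = 1800
# Only act on instances marked deleted more than 10 minutes ago (example)
running_deleted_instance_timeout = 600
```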
| integer value | Interval between sending the scheduler a list of current instance UUIDs to verify that its view of instances is in sync with nova. If the CONF option scheduler_tracks_instance_changes is False, the sync calls will not be made. So, changing this option will have no effect. If the out of sync situations are not very common, this interval can be increased to lower the number of RPC messages being sent. Likewise, if sync issues turn out to be a problem, the interval can be lowered to check more frequently. Possible values:
Related options:
|
| integer value | Maximum time in seconds since last check-in for up service. Each compute node periodically updates its database status based on the specified report interval. If the compute node hasn’t updated the status for more than service_down_time, then the compute node is considered down. Related Options:
|
| string value | This option specifies the driver to be used for the servicegroup service. ServiceGroup API in nova enables checking status of a compute node. When a compute worker running the nova-compute daemon starts, it calls the join API to join the compute group. Services like nova scheduler can query the ServiceGroup API to check if a node is alive. Internally, the ServiceGroup client driver automatically updates the compute worker status. There are multiple backend implementations for this service: Database ServiceGroup driver and Memcache ServiceGroup driver. Related Options:
|
| integer value | Time before a shelved instance is eligible for removal from a host. By default this option is set to 0 and the shelved instance will be removed from the hypervisor immediately after the shelve operation. Otherwise, the instance will be kept for the value of shelved_offload_time (in seconds), so that during that time period the unshelve action will be faster; the periodic task will then remove the instance from the hypervisor after shelved_offload_time passes. Possible values:
|
| integer value | Interval for polling shelved instances to offload. The periodic task runs for every shelved_poll_interval number of seconds and checks if there are any shelved instances. If it finds a shelved instance, based on the shelved_offload_time config value it offloads the shelved instances. Check shelved_offload_time config option description for details. Possible values:
Related options:
|
| integer value | Total time to wait in seconds for an instance to perform a clean shutdown. It determines the overall period (in seconds) a VM is allowed to perform a clean shutdown. While performing stop, rescue, shelve, and rebuild operations, configuring this option gives the VM a chance to perform a controlled shutdown before the instance is powered off. The default timeout is 60 seconds. A value of 0 (zero) means the guest will be powered off immediately with no opportunity for guest OS clean-up. The timeout value can be overridden on a per image basis by means of os_shutdown_timeout, an image metadata setting allowing different types of operating systems to specify how much time they need to shut down cleanly. Possible values:
|
| boolean value | Set to True if source host is addressed with IPv6. |
| boolean value | Disallow non-encrypted connections. Related options:
|
| string value | The top-level directory for maintaining Nova’s state.
This directory is used to store Nova’s internal state. It is used by a variety of other config options which derive from this. In some scenarios (for example migrations) it makes sense to use a storage location which is shared between multiple compute hosts (for example via NFS). Unless the option Possible values:
|
| integer value | Interval to sync power states between the database and the hypervisor. The interval that Nova checks the actual virtual machine power state and the power state that Nova has in its database. If a user powers down their VM, Nova updates the API to report the VM has been powered down. Should something turn on the VM unexpectedly, Nova will turn the VM back off to keep the system in the expected state. Possible values:
Related options:
|
| integer value | Number of greenthreads available for use to sync power states. This option can be used to reduce the number of concurrent requests made to the hypervisor or system with real instance power states for performance reasons, for example, with Ironic. Possible values:
|
| string value | Syslog facility to receive log lines. This option is ignored if log_config_append is set. |
| string value | Explicitly specify the temporary working directory. |
| integer value | Amount of time, in seconds, to wait for NBD device start up. |
| string value | The network address and optional user credentials for connecting to the messaging backend, in URL format. The expected format is: driver://[user:pass@]host:port[,[userN:passN@]hostN:portN]/virtual_host?query Example: rabbit://rabbitmq:password@127.0.0.1:5672// For full details on the fields in the URL see the documentation of oslo_messaging.TransportURL at https://docs.openstack.org/oslo.messaging/latest/reference/transport.html |
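For example, a clustered RabbitMQ deployment might use a transport URL such as the following sketch (hostnames, credentials, and virtual host are placeholders):

```ini
[DEFAULT]
transport_url = rabbit://nova:SECRET@rabbit1.example.com:5672,nova:SECRET@rabbit2.example.com:5672/nova_vhost
```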
| integer value | Interval for updating compute resources. This option specifies how often the update_available_resource periodic task should run. A number less than 0 means to disable the task completely. Leaving this at the default of 0 will cause this to run at the default periodic interval. Setting it to any positive value will cause it to run at approximately that number of seconds. Possible values:
|
| boolean value | Enable journald for logging. If running in a systemd environment you may wish to enable journal support. Doing so will use the journal native protocol which includes structured metadata in addition to log messages. This option is ignored if log_config_append is set. |
| boolean value | Use JSON formatting for logging. This option is ignored if log_config_append is set. |
| boolean value | Use syslog for logging. Existing syslog format is DEPRECATED and will be changed later to honor RFC5424. This option is ignored if log_config_append is set. |
| boolean value | Enable use of copy-on-write (cow) images. QEMU/KVM allow the use of qcow2 as backing files. By disabling this, backing files will not be used. |
| boolean value | Log output to Windows Event Log. |
| boolean value | Start and use a daemon that can run the commands that need to be run with root privileges. This option is usually enabled on nodes that run nova compute processes. |
| boolean value | Log output to standard error. This option is ignored if log_config_append is set. |
| string value |
Mask of host CPUs that can be used for
The behavior of this option depends on the definition of the
Possible values:
Related options:
Deprecated since: 20.0.0 Reason: This option has been superseded by the ``[compute] cpu_dedicated_set`` and ``[compute] cpu_shared_set`` options, which allow things like the co-existence of pinned and unpinned instances on the same host (for the libvirt driver). |
| boolean value | Determine if instance should boot or fail on VIF plugging timeout. Nova sends a port update to Neutron after an instance has been scheduled, providing Neutron with the necessary information to finish setup of the port. Once completed, Neutron notifies Nova that it has finished setting up the port, at which point Nova resumes the boot of the instance since network connectivity is now supposed to be present. A timeout will occur if the reply is not received after a given interval. This option determines what Nova does when the VIF plugging timeout event happens. When enabled, the instance will error out. When disabled, the instance will continue to boot on the assumption that the port is ready. Possible values:
|
| integer value | Timeout for Neutron VIF plugging event message arrival. Number of seconds to wait for Neutron vif plugging events to arrive before continuing or failing (see vif_plugging_is_fatal).
If you are hitting timeout failures at scale, consider running rootwrap in "daemon mode" in the neutron agent via the Related options:
|
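A minimal sketch of the two VIF plugging options working together, assuming the standard names vif_plugging_is_fatal and vif_plugging_timeout:

```ini
[DEFAULT]
# Abort the boot if Neutron does not report the port as ready...
vif_plugging_is_fatal = true
# ...within 300 seconds (example value)
vif_plugging_timeout = 300
```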
| multi valued | Name of the mkfs commands for ephemeral device. The format is <os_type>=<mkfs command> |
| integer value | Interval for gathering volume usages. This option updates the volume usage cache for every volume_usage_poll_interval number of seconds. Possible values:
|
| boolean value | Uses logging handler designed to watch file system. When log file is moved or removed this handler will open a new log file with specified path instantaneously. It makes sense only if log_file option is specified and Linux platform is used. This option is ignored if log_config_append is set. |
| string value | Path to directory with content which will be served by a web server. |
9.1.2. api
The following table outlines the options available under the [api]
group in the /etc/nova/nova.conf
file.
Configuration option = Default value | Type | Description |
---|---|---|
| string value | Determine the strategy to use for authentication. Deprecated since: 21.0.0 Reason: The only non-default choice, ``noauth2``, is for internal development and testing purposes only and should not be used in deployments. This option and its middleware, NoAuthMiddleware[V2_18], will be removed in a future release. |
| string value | This string is prepended to the normal URL that is returned in links to the OpenStack Compute API. If it is empty (the default), the URLs are returned unchanged. Possible values:
|
| string value | When gathering the existing metadata for a config drive, the EC2-style metadata is returned for all versions that don’t appear in this option. As of the Liberty release, the available versions are:
The option is in the format of a single string, with each version separated by a space. Possible values:
|
| string value | Domain name used to configure FQDN for instances. Configure a fully-qualified domain name for instance hostnames. If unset, only the hostname without a domain will be configured. Possible values:
|
| boolean value | Enables returning of the instance password by the relevant server API calls such as create, rebuild, evacuate, or rescue. If the hypervisor does not support password injection, then the password returned will not be correct, so if your hypervisor does not support password injection, set this to False. |
| string value | This string is prepended to the normal URL that is returned in links to Glance resources. If it is empty (the default), the URLs are returned unchanged. Possible values:
|
| integer value |
This controls the batch size of instances requested from each cell database if Related options:
|
| string value | This controls the method by which the API queries cell databases in smaller batches during large instance list operations. If batching is performed, a large instance list operation will request some fraction of the overall API limit from each cell database initially, and will re-request that same batch size as records are consumed (returned) from each cell as necessary. Larger batches mean less chattiness between the API and the database, but potentially more wasted effort processing the results from the database which will not be returned to the user. Any strategy will yield a batch size of at least 100 records, to avoid a user causing many tiny database queries in their request. Related options:
|
| boolean value | When enabled, this will cause the API to only query cell databases in which the tenant has mapped instances. This requires an additional (fast) query in the API database before each list, but also (potentially) limits the number of cell databases that must be queried to provide the result. If you have a small number of cells, or tenants are likely to have instances in all cells, then this should be False. If you have many cells, especially if you confine tenants to a small subset of those cells, this should be True. |
| boolean value | When set to False, this will cause the API to return a 500 error if there is an infrastructure failure like non-responsive cells. If you want the API to skip the down cells and return the results from the up cells set this option to True. Note that from API microversion 2.69 there could be transient conditions in the deployment where certain records are not available and the results could be partial for certain requests containing those records. In those cases this option will be ignored. See "Handling Down Cells" section of the Compute API guide (https://docs.openstack.org/api-guide/compute/down_cells.html) for more information. |
| boolean value | Indicates that the nova-metadata API service has been deployed per-cell, so that we can have better performance and data isolation in a multi-cell deployment. Users should consider the use of this configuration depending on how Neutron is set up. If you have networks that span cells, you might need to run nova-metadata API service globally. If your networks are segmented along cell boundaries, then you can run nova-metadata API service per cell. When running nova-metadata API service per cell, you should also configure each Neutron metadata-agent to point to the corresponding nova-metadata API service. |
| integer value | As a query can potentially return many thousands of items, you can limit the maximum number of items in a single response by setting this option. |
| integer value | This option is the time (in seconds) to cache metadata. When set to 0, metadata caching is disabled entirely; this is generally not recommended for performance reasons. Increasing this setting should improve response times of the metadata API when under heavy load. Higher values may increase memory usage, and result in longer times for host metadata changes to take effect. |
| string value | Tenant ID (also referred to in some places as the project ID) used to get the default network from the Neutron API. Related options:
|
| boolean value | When True, the X-Forwarded-For header is treated as the canonical remote address. When False (the default), the remote_address header is used. You should only enable this if you have an HTML sanitizing proxy. |
| boolean value | When True, the TenantNetworkController will query the Neutron API to get the default networks to use. Related options:
|
| integer value | Maximum wait time for an external REST service to connect. Possible values:
Related options:
|
| boolean value | Should failures to fetch dynamic vendordata be fatal to instance boot? Related options:
|
| integer value | Maximum wait time for an external REST service to return data once connected. Possible values:
Related options:
|
`vendordata_dynamic_ssl_certfile = ` | string value | Path to an optional certificate file or CA bundle to verify dynamic vendordata REST services ssl certificates against. Possible values:
Related options:
|
| list value |
A list of targets for the dynamic vendordata provider. These targets are of the form <name>@<url>. The dynamic vendordata provider collects metadata by contacting external REST services and querying them for information about the instance. This behaviour is documented in the vendordata.rst file in the nova developer reference. |
| string value | Cloud providers may store custom data in vendor data file that will then be available to the instances via the metadata service, and to the rendering of config-drive. The default class for this, JsonFileVendorData, loads this information from a JSON file, whose path is configured by this option. If there is no path set by this option, the class returns an empty dictionary. Note that when using this to provide static vendor data to a configuration drive, the nova-compute service must be configured with this option and the file must be accessible from the nova-compute host. Possible values:
|
| list value | A list of vendordata providers. vendordata providers are how deployers can provide metadata via configdrive and metadata that is specific to their deployment. For more information on the requirements for implementing a vendordata dynamic endpoint, please see the vendordata.rst file in the nova developer reference. Related options:
|
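A hedged sketch of a vendordata configuration, assuming the standard [api] option names vendordata_providers, vendordata_jsonfile_path, and vendordata_dynamic_targets; the file path and REST endpoint are hypothetical:

```ini
[api]
# Enable both the static (JSON file) and dynamic (REST) vendordata providers
vendordata_providers = StaticJSON,DynamicJSON
# Static vendor data made available via the metadata service and config drive
vendordata_jsonfile_path = /etc/nova/vendor_data.json
# Dynamic target in <name>@<url> form (hypothetical endpoint)
vendordata_dynamic_targets = example@http://vendordata.example.com/
```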
9.1.3. api_database
The following table outlines the options available under the [api_database]
group in the /etc/nova/nova.conf
file.
Configuration option = Default value | Type | Description |
---|---|---|
| string value |
The SQLAlchemy connection string to use to connect to the database. Do not set this for the nova-compute service. |
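As a sketch only (credentials and hostnames are placeholders, and the option names connection, slave_connection, and max_pool_size are the standard oslo.db keys), an [api_database] section might look like this:

```ini
[api_database]
# Never set this on nova-compute nodes
connection = mysql+pymysql://nova_api:SECRET@db.example.com/nova_api
# Optional read-only replica for read-heavy operations (example)
slave_connection = mysql+pymysql://nova_api:SECRET@db-replica.example.com/nova_api
max_pool_size = 10
```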
| integer value | Verbosity of SQL debugging information: 0=None, 100=Everything. |
`connection_parameters = ` | string value | Optional URL parameters to append onto the connection URL at connect time; specify as param1=value1&param2=value2&… |
| integer value | Connections which have been present in the connection pool longer than this number of seconds will be replaced with a new one the next time they are checked out from the pool. |
| boolean value | Add Python stack traces to SQL as comment strings. |
| integer value | If set, use this value for max_overflow with SQLAlchemy. |
| integer value | Maximum number of SQL connections to keep open in a pool. Setting a value of 0 indicates no limit. |
| integer value | Maximum number of database connection retries during startup. Set to -1 to specify an infinite retry count. |
| string value | The SQL mode to be used for MySQL sessions. This option, including the default, overrides any server-set SQL mode. To use whatever SQL mode is set by the server configuration, set this to no value. Example: mysql_sql_mode= |
| integer value | If set, use this value for pool_timeout with SQLAlchemy. |
| integer value | Interval between retries of opening a SQL connection. |
| string value | The SQLAlchemy connection string to use to connect to the slave database. |
| boolean value | If True, SQLite uses synchronous mode. |
9.1.4. barbican
The following table outlines the options available under the [barbican]
group in the /etc/nova/nova.conf
file.
Configuration option = Default value | Type | Description |
---|---|---|
| string value | Use this endpoint to connect to Keystone |
| string value | Version of the Barbican API, for example: "v1" |
| string value | Use this endpoint to connect to Barbican, for example: "http://localhost:9311/" |
| string value | Specifies the type of endpoint. Allowed values are: public, private, and admin |
| integer value | Number of times to retry poll for key creation completion |
| integer value | Number of seconds to wait before retrying poll for key creation completion |
| boolean value | Specifies whether to verify TLS (https) requests. If False, the server’s certificate will not be validated; if True, the verify_ssl_path option can be used to specify the CA bundle to validate against. |
| string value | A path to a bundle or CA certs to check against, or None for requests to attempt to locate and use certificates, when verify_ssl is True. If verify_ssl is False, this is ignored. |
9.1.5. cache
The following table outlines the options available under the [cache]
group in the /etc/nova/nova.conf
file.
Configuration option = Default value | Type | Description |
---|---|---|
| string value | Cache backend module. For eventlet-based environments or environments with hundreds of threaded servers, Memcache with pooling (oslo_cache.memcache_pool) is recommended. For environments with fewer than 100 threaded servers, Memcached (dogpile.cache.memcached) or Redis (dogpile.cache.redis) is recommended. Test environments with a single instance of the server can use the dogpile.cache.memory backend. |
| multi valued | Arguments supplied to the backend module. Specify this option once per argument to be passed to the dogpile.cache backend. Example format: "<argname>:<value>". |
| string value | Prefix for building the configuration dictionary for the cache region. This should not need to be changed unless there is another dogpile.cache region with the same configuration name. |
| floating point value | Time in seconds before attempting to add a node back in the pool in the HashClient’s internal mechanisms. |
| boolean value | Extra debugging from the cache backend (cache keys, get/set/delete/etc calls). This is only really useful if you need to see the specific cache-backend get/set/delete calls with the keys/values. Typically this should be left set to false. |
| boolean value | Enable retry client mechanisms to handle failure. Those mechanisms can be used to wrap all kinds of pymemcache clients. The wrapper allows you to define how many attempts to make and how long to wait between attempts. |
| boolean value | Global toggle for the socket keepalive of dogpile’s pymemcache backend |
| boolean value | Global toggle for caching. |
| integer value | Default TTL, in seconds, for any cached item in the dogpile.cache region. This applies to any cached method that doesn’t have an explicit cache expiration time defined for it. |
| integer value | Number of times a client should be tried before it is marked dead and removed from the pool in the HashClient’s internal mechanisms. |
| floating point value | Time in seconds that should pass between retry attempts in the HashClient’s internal mechanisms. |
| integer value | Number of seconds memcached server is considered dead before it is tried again. (dogpile.cache.memcache and oslo_cache.memcache_pool backends only). |
| integer value | Number of seconds that an operation will wait to get a memcache client connection. |
| boolean value | Global toggle if memcache will be flushed on reconnect. (oslo_cache.memcache_pool backend only). |
| integer value | Max total number of open connections to every memcached server. (oslo_cache.memcache_pool backend only). |
| integer value | Number of seconds a connection to memcached is held unused in the pool before it is closed. (oslo_cache.memcache_pool backend only). |
| list value |
Memcache servers in the format of "host:port". (dogpile.cache.memcached and oslo_cache.memcache_pool backends only). If a given host or domain refers to an IPv6 address, prefix the address with the address family (inet6), for example inet6:[::1]:11211. |
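A minimal caching sketch, assuming the standard [cache] option names enabled, backend, and memcache_servers; the server addresses are examples and the IPv6 entry shows the inet6 address-family prefix mentioned above:

```ini
[cache]
enabled = true
backend = oslo_cache.memcache_pool
# One IPv4 server and one IPv6 server (addresses are examples)
memcache_servers = 192.0.2.20:11211,inet6:[fd00::20]:11211
```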
| floating point value | Timeout in seconds for every call to a server. (dogpile.cache.memcache and oslo_cache.memcache_pool backends only). |
| list value | Proxy classes to import that will affect the way the dogpile.cache backend functions. See the dogpile.cache documentation on changing-backend-behavior. |
| integer value | Number of times to attempt an action before failing. |
| floating point value | Number of seconds to sleep between each attempt. |
| integer value | The maximum number of keepalive probes TCP should send before dropping the connection. Should be a positive integer greater than zero. |
| integer value | The time (in seconds) the connection needs to remain idle before TCP starts sending keepalive probes. Should be a positive integer greater than zero. |
| integer value | The time (in seconds) between individual keepalive probes. Should be a positive integer greater than zero. |
| string value | Set the available ciphers for sockets created with the TLS context. It should be a string in the OpenSSL cipher list format. If not specified, all OpenSSL enabled ciphers will be available. |
| string value | Path to a file of concatenated CA certificates in PEM format necessary to establish the caching servers' authenticity. If tls_enabled is False, this option is ignored. |
| string value | Path to a single file in PEM format containing the client’s certificate as well as any number of CA certificates needed to establish the certificate’s authenticity. This file is only required when client side authentication is necessary. If tls_enabled is False, this option is ignored. |
| boolean value | Global toggle for TLS usage when communicating with the caching servers. |
| string value | Path to a single file containing the client’s private key. Otherwise the private key will be taken from the file specified in tls_certfile. If tls_enabled is False, this option is ignored. |
9.1.6. cinder
The following table outlines the options available under the [cinder]
group in the /etc/nova/nova.conf
file.
Configuration option = Default value | Type | Description |
---|---|---|
| string value | Authentication URL |
| string value | Config Section from which to load plugin specific options |
| string value | Authentication type to load |
| string value | PEM encoded Certificate Authority to use when verifying HTTPs connections. |
| string value | Info to match when looking for cinder in the service catalog.
The format is <service_type>:<service_name>:<endpoint_type>. Possible values:
Note: Nova does not support the Cinder v2 API since the Nova 17.0.0 Queens release. Related options:
|
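For example, assuming the standard [cinder] option names catalog_info and os_region_name, the service catalog lookup might be configured as in this sketch:

```ini
[cinder]
# <service_type>:<service_name>:<endpoint_type>; the service name may be left empty
catalog_info = volumev3::publicURL
# Restrict the lookup to a single region (example)
os_region_name = RegionOne
```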
| string value | PEM encoded client certificate cert file |
| boolean value | Collect per-API call timing information. |
| boolean value | Allow attach between instance and volume in different availability zones. If False, volumes attached to an instance must be in the same availability zone in Cinder as the instance availability zone in Nova. This also means care should be taken when booting an instance from a volume where source is not "volume" because Nova will attempt to create a volume using the same availability zone as what is assigned to the instance.
If that AZ is not in Cinder (or allow_availability_zone_fallback=False in cinder.conf), the volume create request will fail and the instance will fail the build request. By default there is no availability zone restriction on volume attach. Related options:
|
| string value | Optional domain ID to use with v3 and v2 parameters. It will be used for both the user and project domain in v3 and ignored in v2 authentication. |
| string value | Optional domain name to use with v3 API and v2 parameters. It will be used for both the user and project domain in v3 and ignored in v2 authentication. |
| string value | Domain ID to scope to |
| string value | Domain name to scope to |
| string value | If this option is set then it will override service catalog lookup with this template for cinder endpoint Possible values:
Note: Nova does not support the Cinder v2 API since the Nova 17.0.0 Queens release. Related options:
|
| integer value | Number of times cinderclient should retry on any failed http call. 0 means connection is attempted only once. Setting it to any positive integer means that on failure connection is retried that many times e.g. setting it to 3 means total attempts to connect will be 4. Possible values:
|
| boolean value | Verify HTTPS connections. |
| string value | PEM encoded client certificate key file |
| string value | Region name of this node. This is used when picking the URL in the service catalog. Possible values:
|
| string value | User’s password |
| string value | Domain ID containing project |
| string value | Domain name containing project |
| string value | Project ID to scope to |
| string value | Project name to scope to |
| boolean value | Log requests to multiple loggers. |
| string value | Scope for system operations |
| string value | Tenant ID |
| string value | Tenant Name |
| integer value | Timeout value for http requests |
| string value | Trust ID |
| string value | User’s domain id |
| string value | User’s domain name |
| string value | User ID |
| string value | Username |
9.1.7. compute
The following table outlines the options available under the [compute]
group in the /etc/nova/nova.conf
file.
Configuration option = Default value | Type | Description |
---|---|---|
| integer value | Enables reporting of build failures to the scheduler. Any nonzero value will enable sending build failure statistics to the scheduler for use by the BuildFailureWeigher. Possible values:
Related options:
|
| string value |
Mask of host CPUs that can be used for
The behavior of this option affects the behavior of the deprecated
This behavior will be simplified in a future release when Possible values:
Related options:
|
| string value |
Mask of host CPUs that can be used for
The behavior of this option depends on the definition of the deprecated
This behavior will be simplified in a future release when Possible values:
Related options:
|
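A hedged sketch of how the two CPU masks described above can split a 16-core host between shared and dedicated guest CPUs, assuming the standard [compute] option names cpu_shared_set and cpu_dedicated_set:

```ini
[compute]
# Host CPUs used for unpinned guest vCPUs and offloaded emulator threads
cpu_shared_set = 0-3
# Host CPUs reserved for pinned (dedicated) guest vCPUs
cpu_dedicated_set = 4-15
```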
| list value | A list of image formats that should not be advertised as supported by this compute node. In some situations, it may be desirable to have a compute node refuse to support an expensive or complex image format. This factors into the decisions made by the scheduler about which compute node to select when booted with a given image. Possible values:
Related options:
|
| boolean value |
Determine if the source compute host should wait for a Note that this option is read on the destination host of a live migration. If you set this option the same on all of your compute hosts, which you should do if you use the same networking backend universally, you do not have to worry about this.
Before starting the transfer of the guest, some setup occurs on the destination compute host, including plugging virtual interfaces. Depending on the networking backend on the destination host, a
Possible values:
Related options:
|
| integer value | Number of concurrent disk-IO-intensive operations (glance image downloads, image format conversions, etc.) that we will do in parallel. If this is set too high then response time suffers. The default value of 0 means no limit. |
| integer value |
Maximum number of disk devices allowed to attach to a single server. Note that the number of disks supported by an server depends on the bus used. For example, the
Usually, disk bus is determined automatically from the device type or disk device, and the virtualization type. However, disk bus can also be specified via a block device mapping or an image property. See the
Operators changing the
Operators setting The configured maximum is not enforced on shelved offloaded servers, as they have no compute host.
Possible values:
|
| string value | Location of YAML files containing resource provider configuration data. These files allow the operator to specify additional custom inventory and traits to assign to one or more resource providers. Additional documentation is available here: https://docs.openstack.org/nova/latest/admin/managing-resource-providers.html |
| integer value | Interval for updating nova-compute-side cache of the compute node resource provider’s inventories, aggregates, and traits. This option specifies the number of seconds between attempts to update a provider’s inventories, aggregates and traits in the local cache of the compute node. A value of zero disables cache refresh completely. The cache can be cleared manually at any time by sending SIGHUP to the compute process, causing it to be repopulated the next time the data is accessed. Possible values:
|
| integer value | Time to wait in seconds before resending an ACPI shutdown signal to instances.
The overall time to wait is set by Possible values:
Related options:
|
| list value | A list of strings describing the VMDK "create-type" subformats that will be allowed. It is recommended to include only single-file-with-sparse-header variants to avoid potential host file exposure due to processing named extents. If this list is empty, then no form of VMDK image will be allowed. |
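As an illustration of how several of the [compute] options above combine in practice, the following is a minimal, hypothetical nova.conf snippet; the CPU ranges, device limit, and VMDK subformat list are placeholder values rather than recommendations:
[compute]
# Host CPUs usable for unpinned guest vCPUs and emulator threads (range syntax, ^ excludes a CPU)
cpu_shared_set = 0-7,^2
# Host CPUs reserved for pinned (dedicated) guest vCPUs
cpu_dedicated_set = 8-15
# Cap the number of disk devices a single server may have attached
max_disk_devices_to_attach = 8
# Allow only single-file sparse VMDK subformats
vmdk_allowed_types = streamOptimized,monolithicSparse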
9.1.8. conductor
The following table outlines the options available under the [conductor]
group in the /etc/nova/nova.conf
file.
Configuration option = Default value | Type | Description |
---|---|---|
| integer value | Number of workers for OpenStack Conductor service. The default will be the number of CPUs available. |
9.1.9. console
The following table outlines the options available under the [console]
group in the /etc/nova/nova.conf
file.
Configuration option = Default value | Type | Description |
---|---|---|
| list value | Adds a list of allowed origins to the console websocket proxy to allow connections from other origin hostnames. The websocket proxy matches the host header with the origin header to prevent cross-site requests. This list specifies which values other than the host are allowed in the origin header. Possible values:
|
| string value | OpenSSL cipher preference string that specifies what ciphers to allow for TLS connections from clients. For example: ssl_ciphers = "kEECDH+aECDSA+AES:kEECDH+AES+aRSA:kEDH+aRSA+AES"
See the man page for the OpenSSL https://www.openssl.org/docs/man1.1.0/man1/ciphers.html Related options:
|
| string value | Minimum allowed SSL/TLS protocol version. Related options:
|
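For illustration, the console options above can be combined as follows; the origin hostname, cipher string, and TLS version are placeholders to be adapted to your proxy deployment:
[console]
# Additional origin hostnames accepted by the console websocket proxy
allowed_origins = console.example.com
# Restrict ciphers and the minimum TLS version offered to console clients
ssl_ciphers = kEECDH+aECDSA+AES:kEECDH+AES+aRSA:kEDH+aRSA+AES
ssl_minimum_version = tlsv1_2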
9.1.10. consoleauth
The following table outlines the options available under the [consoleauth]
group in the /etc/nova/nova.conf
file.
Configuration option = Default value | Type | Description |
---|---|---|
| integer value | The lifetime of a console auth token (in seconds). A console auth token is used in authorizing console access for a user. Once the auth token time to live count has elapsed, the token is considered expired. Expired tokens are then deleted. |
9.1.11. cors
The following table outlines the options available under the [cors]
group in the /etc/nova/nova.conf
file.
Configuration option = Default value | Type | Description |
---|---|---|
| boolean value | Indicate that the actual request can include user credentials |
| list value | Indicate which header field names may be used during the actual request. |
| list value | Indicate which methods can be used during the actual request. |
| list value | Indicate whether this resource may be shared with the domain received in the request’s "origin" header. Format: "<protocol>://<host>[:<port>]", no trailing slash. Example: https://horizon.example.com |
| list value | Indicate which headers are safe to expose to the API. Defaults to HTTP Simple Headers. |
| integer value | Maximum cache age of CORS preflight requests. |
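These options map directly onto standard CORS semantics; a minimal, illustrative [cors] snippet for a single trusted dashboard origin (placeholder URL) might be:
[cors]
# Origin allowed to make cross-site requests to the API (no trailing slash)
allowed_origin = https://horizon.example.com
# Whether credentials such as cookies or auth headers may accompany the request
allow_credentials = true
# Methods and headers permitted for the actual (non-preflight) request
allow_methods = GET,PUT,POST,DELETE
allow_headers = Content-Type,X-Auth-Token
# Cache preflight responses for an hour
max_age = 3600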
9.1.12. cyborg
The following table outlines the options available under the [cyborg]
group in the /etc/nova/nova.conf
file.
Configuration option = Default value | Type | Description |
---|---|---|
| string value | PEM encoded Certificate Authority to use when verifying HTTPs connections. |
| string value | PEM encoded client certificate cert file |
| boolean value | Collect per-API call timing information. |
| integer value | The maximum number of retries that should be attempted for connection errors. |
| floating point value | Delay (in seconds) between two retries for connection errors. If not set, exponential retry starting with 0.5 seconds up to a maximum of 60 seconds is used. |
| string value |
Always use this endpoint URL for requests for this client. NOTE: The unversioned endpoint should be specified here; to request a particular API version, use the |
| boolean value | Verify HTTPS connections. |
| string value | PEM encoded client certificate key file |
| string value | The default region_name for endpoint URL discovery. |
| string value | The default service_name for endpoint URL discovery. |
| string value | The default service_type for endpoint URL discovery. |
| boolean value | Log requests to multiple loggers. |
| integer value | The maximum number of retries that should be attempted for retriable HTTP status codes. |
| floating point value | Delay (in seconds) between two retries for retriable status codes. If not set, exponential retry starting with 0.5 seconds up to a maximum of 60 seconds is used. |
| integer value | Timeout value for http requests |
| list value | List of interfaces, in order of preference, for endpoint URL. |
9.1.13. database
The following table outlines the options available under the [database]
group in the /etc/nova/nova.conf
file.
Configuration option = Default value | Type | Description |
---|---|---|
| string value | The back end to use for the database. |
| string value | The SQLAlchemy connection string to use to connect to the database. |
| integer value | Verbosity of SQL debugging information: 0=None, 100=Everything. |
`connection_parameters = ` | string value | Optional URL parameters to append onto the connection URL at connect time; specify as param1=value1&param2=value2&… |
| integer value | Connections which have been present in the connection pool longer than this number of seconds will be replaced with a new one the next time they are checked out from the pool. |
| boolean value | Add Python stack traces to SQL as comment strings. |
| boolean value | If True, increases the interval between retries of a database operation up to db_max_retry_interval. |
| integer value | Maximum retries in case of connection error or deadlock error before error is raised. Set to -1 to specify an infinite retry count. |
| integer value | If db_inc_retry_interval is set, the maximum seconds between retries of a database operation. |
| integer value | Seconds between retries of a database transaction. |
| integer value | If set, use this value for max_overflow with SQLAlchemy. |
| integer value | Maximum number of SQL connections to keep open in a pool. Setting a value of 0 indicates no limit. |
| integer value | Maximum number of database connection retries during startup. Set to -1 to specify an infinite retry count. |
| boolean value | If True, transparently enables support for handling MySQL Cluster (NDB). |
| string value | The SQL mode to be used for MySQL sessions. This option, including the default, overrides any server-set SQL mode. To use whatever SQL mode is set by the server configuration, set this to no value. Example: mysql_sql_mode= |
| integer value | If set, use this value for pool_timeout with SQLAlchemy. |
| integer value | Interval between retries of opening a SQL connection. |
| string value | The SQLAlchemy connection string to use to connect to the slave database. |
| boolean value | If True, SQLite uses synchronous mode. |
| boolean value | Enable the experimental use of database reconnect on connection lost. |
| boolean value | Enable the experimental use of thread pooling for all DB API calls |
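For illustration, a typical [database] section points the SQLAlchemy connection string at the nova database; the host names, credentials, and pool sizes below are placeholders:
[database]
# SQLAlchemy URL for the primary (writer) database
connection = mysql+pymysql://nova:NOVA_DBPASS@controller/nova
# Optional read-only replica for read-heavy operations
slave_connection = mysql+pymysql://nova:NOVA_DBPASS@replica/nova
# Recycle pooled connections after an hour and bound the pool size
connection_recycle_time = 3600
max_pool_size = 10
max_overflow = 20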
9.1.14. devices
The following table outlines the options available under the [devices]
group in the /etc/nova/nova.conf
file.
Configuration option = Default value | Type | Description |
---|---|---|
| list value | The vGPU types enabled in the compute node. Some pGPUs (e.g. NVIDIA GRID K1) support different vGPU types. Users can use this option to specify a list of enabled vGPU types that may be assigned to a guest instance.
If more than one single vGPU type is provided, then for each vGPU type an additional section,
If one or more sections are missing (meaning that a specific type is not wanted for at least one physical GPU) or if no device addresses are provided, then Nova will only use the first type that was provided by If the same PCI address is provided for two different types, nova-compute will return an InvalidLibvirtGPUConfig exception at restart.
|
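In recent nova releases the per-type sections referenced above are named after the vGPU type; assuming a group name of the form vgpu_<type> containing a device_addresses option, an illustrative snippet for two NVIDIA types might look as follows (type names and PCI addresses are placeholders):
[devices]
enabled_vgpu_types = nvidia-35, nvidia-36

[vgpu_nvidia-35]
# Physical GPUs that will expose the nvidia-35 vGPU type
device_addresses = 0000:84:00.0,0000:85:00.0

[vgpu_nvidia-36]
device_addresses = 0000:86:00.0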
9.1.15. ephemeral_storage_encryption
The following table outlines the options available under the [ephemeral_storage_encryption]
group in the /etc/nova/nova.conf
file.
Configuration option = Default value | Type | Description |
---|---|---|
| string value | Cipher-mode string to be used. The cipher and mode to be used to encrypt ephemeral storage. The set of cipher-mode combinations available depends on kernel support. According to the dm-crypt documentation, the cipher is expected to be in the format: "<cipher>-<chainmode>-<ivmode>". Possible values:
|
| boolean value | Enables/disables LVM ephemeral storage encryption. |
| integer value | Encryption key length in bits. The bit length of the encryption key to be used to encrypt ephemeral storage. In XTS mode only half of the bits are used for encryption key. |
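Taken together, a minimal illustrative [ephemeral_storage_encryption] section using the common aes-xts-plain64 cipher-mode might be (remember that in XTS mode only half of the key bits are used for encryption):
[ephemeral_storage_encryption]
enabled = true
# dm-crypt "<cipher>-<chainmode>-<ivmode>" string
cipher = aes-xts-plain64
# 512-bit key; only 256 bits encrypt data in XTS mode
key_size = 512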
9.1.16. filter_scheduler
The following table outlines the options available under the [filter_scheduler]
group in the /etc/nova/nova.conf
file.
Configuration option = Default value | Type | Description |
---|---|---|
| string value | Image property namespace for use in the host aggregate. Images and hosts can be configured so that certain images can only be scheduled to hosts in a particular aggregate. This is done with metadata values set on the host aggregate that are identified by beginning with the value of this option. If the host is part of an aggregate with such a metadata key, the image in the request spec must have the value of that metadata in its properties in order for the scheduler to consider the host as acceptable.
Note that this setting only affects scheduling if the Possible values:
Related options:
|
| string value | Separator character(s) for image property namespace and name. When using the aggregate_image_properties_isolation filter, the relevant metadata keys are prefixed with the namespace defined in the aggregate_image_properties_isolation_namespace configuration option plus a separator. This option defines the separator to be used.
Note that this setting only affects scheduling if the Possible values:
Related options:
|
| multi valued | Filters that the scheduler can use.
An unordered list of the filter classes the nova scheduler may apply. Only the filters specified in the By default, this is set to all filters that are included with nova. Possible values:
Related options:
|
| floating point value | Multiplier used for weighing hosts that have had recent build failures. This option determines how much weight is placed on a compute node with recent build failures. Build failures may indicate a failing, misconfigured, or otherwise ailing compute node, and avoiding it during scheduling may be beneficial. The weight is inversely proportional to the number of recent build failures the compute node has experienced. This value should be set to some high value to offset weight given by other enabled weighers due to available resources. To disable weighing compute hosts by the number of recent failures, set this to zero.
Note that this setting only affects scheduling if the Possible values:
Related options:
|
| floating point value | CPU weight multiplier ratio. Multiplier used for weighting free vCPUs. Negative numbers indicate stacking rather than spreading.
Note that this setting only affects scheduling if the Possible values:
Related options:
|
| floating point value | Multiplier used for weighing hosts during a cross-cell move. This option determines how much weight is placed on a host which is within the same source cell when moving a server, for example during cross-cell resize. By default, when moving an instance, the scheduler will prefer hosts within the same cell since cross-cell move operations can be slower and riskier due to the complicated nature of cross-cell migrations.
Note that this setting only affects scheduling if the
The value of this configuration option can be overridden per host aggregate by setting the aggregate metadata key with the same name ( Possible values:
Related options:
|
| floating point value | Disk weight multiplier ratio. Multiplier used for weighing free disk space. Negative numbers indicate stacking rather than spreading.
Note that this setting only affects scheduling if the Possible values:
|
| list value | Filters that the scheduler will use. An ordered list of filter class names that will be used for filtering hosts. These filters will be applied in the order they are listed so place your most restrictive filters first to make the filtering process more efficient.
All of the filters in this option must be present in the Possible values:
Related options:
|
| integer value | Size of subset of best hosts selected by scheduler. New instances will be scheduled on a host chosen randomly from a subset of the N best hosts, where N is the value set by this option. Setting this to a value greater than 1 will reduce the chance that multiple scheduler processes handling similar requests will select the same host, creating a potential race condition. By selecting a host randomly from the N hosts that best fit the request, the chance of a conflict is reduced. However, the higher you set this value, the less optimal the chosen host may be for a given request. Possible values:
|
| floating point value | Hypervisor Version weight multiplier ratio. The multiplier is used for weighting hosts based on the reported hypervisor version. Negative numbers indicate a preference for older hosts; the default is to prefer newer hosts to aid with upgrades. Possible values:
Example:
Related options:
|
| string value | The default architecture to be used when using the image properties filter.
When using the Possible values:
|
| floating point value | IO operations weight multiplier ratio. This option determines how hosts with differing workloads are weighed. Negative values, such as the default, will result in the scheduler preferring hosts with lighter workloads whereas positive values will prefer hosts with heavier workloads. Another way to look at it is that positive values for this option will tend to schedule instances onto hosts that are already busy, while negative values will tend to distribute the workload across more hosts. The absolute value, whether positive or negative, controls how strong the io_ops weigher is relative to other weighers.
Note that this setting only affects scheduling if the Possible values:
Related options:
|
| list value | List of hosts that can only run certain images. If there is a need to restrict some images to only run on certain designated hosts, list those host names here.
Note that this setting only affects scheduling if the Possible values:
Related options:
|
| list value | List of UUIDs for images that can only be run on certain hosts. If there is a need to restrict some images to only run on certain designated hosts, list those image UUIDs here.
Note that this setting only affects scheduling if the Possible values:
Related options:
|
| integer value | Maximum number of instances that can exist on a host. If you need to limit the number of instances on any given host, set this option to the maximum number of instances you want to allow. The NumInstancesFilter and AggregateNumInstancesFilter will reject any host that has at least as many instances as this option’s value.
Note that this setting only affects scheduling if the Possible values:
Related options:
|
| integer value | The number of instances that can be actively performing IO on a host. Instances performing IO include those in the following states: build, resize, snapshot, migrate, rescue, unshelve.
Note that this setting only affects scheduling if the Possible values:
Related options:
|
| floating point value | PCI device affinity weight multiplier. The PCI device affinity weighter computes a weighting based on the number of PCI devices on the host and the number of PCI devices requested by the instance.
Note that this setting only affects scheduling if the Possible values:
Related options:
|
| floating point value | RAM weight multiplier ratio. This option determines how hosts with more or less available RAM are weighed. A positive value will result in the scheduler preferring hosts with more available RAM, and a negative number will result in the scheduler preferring hosts with less available RAM. Another way to look at it is that positive values for this option will tend to spread instances across many hosts, while negative values will tend to fill up (stack) hosts as much as possible before scheduling to a less-used host. The absolute value, whether positive or negative, controls how strong the RAM weigher is relative to other weighers.
Note that this setting only affects scheduling if the Possible values:
Related options:
|
| boolean value | Prevent non-isolated images from being built on isolated hosts.
Note that this setting only affects scheduling if the Related options:
|
| boolean value | Enable spreading the instances between hosts with the same best weight.
Enabling it is beneficial for cases when |
| floating point value | Multiplier used for weighing hosts for group soft-affinity.
Note that this setting only affects scheduling if the Possible values:
Related options:
|
| floating point value | Multiplier used for weighing hosts for group soft-anti-affinity.
Note that this setting only affects scheduling if the Possible values:
Related options:
|
| boolean value | Enable querying of individual hosts for instance information. The scheduler may need information about the instances on a host in order to evaluate its filters and weighers. The most common need for this information is for the (anti-)affinity filters, which need to choose a host based on the instances already running on a host. If the configured filters and weighers do not need this information, disabling this option will improve performance. It may also be disabled when the tracking overhead proves too heavy, although this will cause classes requiring host usage data to query the database on each request instead.
Related options:
|
| list value | Weighers that the scheduler will use.
Only hosts which pass the filters are weighed. The weight for any host starts at 0, and the weighers order these hosts by adding to or subtracting from the weight assigned by the previous weigher. Weights may become negative. An instance will be scheduled to one of the N most-weighted hosts, where N is By default, this is set to all weighers that are included with Nova. Possible values:
|
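As a sketch of how the filtering and weighing options above interact, the following illustrative [filter_scheduler] snippet orders a few common filters (most restrictive first, per the guidance above) and tunes two weighers; the filter list and multipliers are placeholders, not recommended settings:
[filter_scheduler]
# Applied in order; cheap, highly selective filters should come first
enabled_filters = AvailabilityZoneFilter,ComputeFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter,ServerGroupAntiAffinityFilter,ServerGroupAffinityFilter
# Pick the final host at random from the 3 best-weighted candidates
host_subset_size = 3
# Spread instances by free RAM and strongly avoid hosts with recent build failures
ram_weight_multiplier = 1.0
build_failure_weight_multiplier = 1000000.0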
9.1.17. glance
The following table outlines the options available under the [glance]
group in the /etc/nova/nova.conf
file.
Configuration option = Default value | Type | Description |
---|---|---|
| list value | List of glance API server endpoints available to nova. Use https for SSL-based glance API servers. Note The preferred mechanism for endpoint discovery is via keystoneauth1 loading options. Only use api_servers if you need multiple endpoints and are unable to use a load balancer for some reason. Possible values:
Deprecated since: 21.0.0 Reason: Support for image service configuration via standard keystoneauth1 Adapter options was added in the 17.0.0 Queens release. The api_servers option was retained temporarily to allow consumers time to cut over to a real load balancing solution. |
| string value | PEM encoded Certificate Authority to use when verifying HTTPs connections. |
| string value | PEM encoded client certificate cert file |
| boolean value | Collect per-API call timing information. |
| integer value | The maximum number of retries that should be attempted for connection errors. |
| floating point value | Delay (in seconds) between two retries for connection errors. If not set, exponential retry starting with 0.5 seconds up to a maximum of 60 seconds is used. |
| boolean value | Enable or disable debug logging with glanceclient. |
| list value | List of certificate IDs for certificates that should be trusted. May be used as a default list of trusted certificate IDs for certificate validation. The value of this option will be ignored if the user provides a list of trusted certificate IDs with an instance API request. The value of this option will be persisted with the instance data if signature verification and certificate validation are enabled and if the user did not provide an alternative list. If left empty when certificate validation is enabled, the user must provide a list of trusted certificate IDs, otherwise certificate validation will fail. Related options:
|
| boolean value | Enable certificate validation for image signature verification. During image signature verification nova will first verify the validity of the image’s signing certificate using the set of trusted certificates associated with the instance. If certificate validation fails, signature verification will not be performed and the instance will be placed into an error state. This provides end users with stronger assurances that the image data is unmodified and trustworthy. If left disabled, image signature verification can still occur but the end user will not have any assurance that the signing certificate used to generate the image signature is still trustworthy. Related options:
Deprecated since: 16.0.0 Reason: This option is intended to ease the transition for deployments leveraging image signature verification. The intended state long-term is for signature verification and certificate validation to always happen together. |
| boolean value | Enable download of Glance images directly via RBD. Allow compute hosts to quickly download and cache images locally, directly from Ceph, rather than via slow downloads from the Glance API. This can reduce download time for images in the tens to hundreds of GBs from tens of minutes to tens of seconds, but requires a Ceph-based deployment and access from the compute nodes to Ceph. Related options:
|
| string value |
Always use this endpoint URL for requests for this client. NOTE: The unversioned endpoint should be specified here; to request a particular API version, use the |
| boolean value | Verify HTTPS connections. |
| string value | PEM encoded client certificate key file |
| integer value | Enable glance operation retries. Specifies the number of retries when uploading / downloading an image to / from glance. 0 means no retries. |
`rbd_ceph_conf = ` | string value | Path to the ceph configuration file to use. Related options:
|
| integer value | The RADOS client timeout in seconds when initially connecting to the cluster. Related options:
|
`rbd_pool = ` | string value | The RADOS pool in which the Glance images are stored as rbd volumes. Related options:
|
`rbd_user = ` | string value | The RADOS client name for accessing Glance images stored as rbd volumes. Related options:
|
| string value | The default region_name for endpoint URL discovery. |
| string value | The default service_name for endpoint URL discovery. |
| string value | The default service_type for endpoint URL discovery. |
| boolean value | Log requests to multiple loggers. |
| integer value | The maximum number of retries that should be attempted for retriable HTTP status codes. |
| floating point value | Delay (in seconds) between two retries for retriable status codes. If not set, exponential retry starting with 0.5 seconds up to a maximum of 60 seconds is used. |
| integer value | Timeout value for http requests |
| list value | List of interfaces, in order of preference, for endpoint URL. |
| boolean value | Enable image signature verification. nova uses the image signature metadata from glance and verifies the signature of a signed image while downloading that image. If the image signature cannot be verified or if the image signature metadata is either incomplete or unavailable, then nova will not boot the image and instead will place the instance into an error state. This provides end users with stronger assurances of the integrity of the image data they are using to create servers. Related options:
|
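A hypothetical [glance] snippet that enables image signature verification together with direct RBD downloads could look like the following; the certificate ID, Ceph configuration path, pool, and user are placeholders:
[glance]
# Verify signed images and validate the signing certificate
verify_glance_signatures = true
enable_certificate_validation = true
# Placeholder trusted certificate ID
default_trusted_certificate_ids = 0b5d2c72-12cc-4ba6-a8d7-3ff5cc1d8cb8
# Fetch RBD-backed images directly from Ceph instead of through the Glance API
enable_rbd_download = true
rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_pool = images
rbd_user = glance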
9.1.18. guestfs
The following table outlines the options available under the [guestfs]
group in the /etc/nova/nova.conf
file.
Configuration option = Default value | Type | Description |
---|---|---|
| boolean value | Enables/disables guestfs logging. This configures guestfs to emit debug messages and push them to the OpenStack logging system. When set to True, it traces libguestfs API calls and enables verbose debug messages. In order to use this feature, the "libguestfs" package must be installed. Related options: Since libguestfs accesses and modifies VMs managed by libvirt, the options below should be set to give access to those VMs.
|
9.1.19. healthcheck
The following table outlines the options available under the [healthcheck]
group in the /etc/nova/nova.conf
file.
Configuration option = Default value | Type | Description |
---|---|---|
| list value | Additional backends that can perform health checks and report that information back as part of a request. |
| boolean value | Show more detailed information as part of the response. Security note: Enabling this option may expose sensitive details about the service being monitored. Be sure to verify that it will not violate your security policies. |
| string value | Check the presence of a file to determine if an application is running on a port. Used by DisableByFileHealthcheck plugin. |
| list value | Check the presence of a file based on a port to determine if an application is running on a port. Expects a "port:path" list of strings. Used by DisableByFilesPortsHealthcheck plugin. |
| string value | The path to respond to healthcheck requests on. |
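For illustration, the snippet below wires the healthcheck middleware to a well-known path and a disable-by-file backend; the file path is a placeholder:
[healthcheck]
path = /healthcheck
# Report only a plain pass/fail unless detailed output is explicitly required
detailed = false
# The DisableByFileHealthcheck plugin reports failure while this file exists
backends = disable_by_file
disable_by_file_path = /etc/nova/healthcheck_disable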
9.1.20. hyperv
The following table outlines the options available under the [hyperv]
group in the /etc/nova/nova.conf
file.
Configuration option = Default value | Type | Description |
---|---|---|
| boolean value | Mount config drive as a CD drive. OpenStack can be configured to write instance metadata to a config drive, which is then attached to the instance before it boots. The config drive can be attached as a disk drive (default) or as a CD drive. Related options:
|
| boolean value | Inject password to config drive. When enabled, the admin password will be available from the config drive image. Related options:
|
| floating point value | Dynamic memory ratio Enables dynamic memory allocation (ballooning) when set to a value greater than 1. The value expresses the ratio between the total RAM assigned to an instance and its startup RAM amount. For example a ratio of 2.0 for an instance with 1024MB of RAM implies 512MB of RAM allocated at startup. Possible values:
|
| boolean value | Enable instance metrics collection Enables metrics collections for an instance by using Hyper-V’s metric APIs. Collected data can be retrieved by other apps and services, e.g.: Ceilometer. |
| boolean value | Enable RemoteFX feature This requires at least one DirectX 11 capable graphics adapter for Windows / Hyper-V Server 2012 R2 or newer, and the RDS-Virtualization feature has to be enabled. Instances with RemoteFX can be requested with the following flavor extra specs:
os:vram. Guest VM VRAM amount. Only available on Windows / Hyper-V Server 2016. Acceptable values: 64, 128, 256, 512, 1024 |
`instances_path_share = ` | string value | Instances path share The name of a Windows share mapped to the "instances_path" dir and used by the resize feature to copy files to the target host. If left blank, an administrative share (hidden network share) will be used, looking for the same "instances_path" used locally. Possible values:
Related options:
|
| list value | List of iSCSI initiators that will be used for establishing iSCSI sessions. If none are specified, the Microsoft iSCSI initiator service will choose the initiator. |
| boolean value | Limit CPU features This flag is needed to support live migration to hosts with different CPU features and checked during instance creation in order to limit the CPU features used by the instance. |
| integer value | Mounted disk query retry count The number of times to retry checking for a mounted disk. The query runs until the device can be found or the retry count is reached. Possible values:
Related options:
|
| integer value | Mounted disk query retry interval Interval between checks for a mounted disk, in seconds. Possible values:
Related options:
|
| integer value | Power state check timeframe The timeframe to be checked for instance power state changes. This option is used to fetch the state of the instance from Hyper-V through the WMI interface, within the specified timeframe. Possible values:
|
| integer value | Power state event polling interval Instance power state change event polling frequency. Sets the listener interval for power state events to the given value. This option enhances the internal lifecycle notifications of instances that reboot themselves. It is unlikely that an operator has to change this value. Possible values:
|
| string value | qemu-img command qemu-img is required for some of the image related operations like converting between different image types. You can get it from here: (http://qemu.weilnetz.de/) or you can install the Cloudbase OpenStack Hyper-V Compute Driver (https://cloudbase.it/openstack-hyperv-driver/) which automatically sets the proper path for this config option. You can either give the full path of qemu-img.exe or set its path in the PATH environment variable and leave this option to the default value. Possible values:
Related options:
|
| boolean value | Use multipath connections when attaching iSCSI or FC disks. This requires the Multipath IO Windows feature to be enabled. MPIO must be configured to claim such devices. |
| integer value | Volume attach retry count The number of times to retry attaching a volume. Volume attachment is retried until success or the given retry count is reached. Possible values:
Related options:
|
| integer value | Volume attach retry interval Interval between volume attachment attempts, in seconds. Possible values:
Related options:
|
| string value | External virtual switch name The Hyper-V Virtual Switch is a software-based layer-2 Ethernet network switch that is available with the installation of the Hyper-V server role. The switch includes programmatically managed and extensible capabilities to connect virtual machines to both virtual networks and the physical network. In addition, Hyper-V Virtual Switch provides policy enforcement for security, isolation, and service levels. The vSwitch represented by this config option must be an external one (not internal or private). Possible values:
|
| integer value | Wait soft reboot seconds Number of seconds to wait for an instance to shut down after a soft reboot request is made. We fall back to hard reboot if the instance does not shut down within this window. Possible values:
|
9.1.21. image_cache
The following table outlines the options available under the [image_cache]
group in the /etc/nova/nova.conf
file.
Configuration option = Default value | Type | Description |
---|---|---|
| integer value | Number of seconds to wait between runs of the image cache manager.
Note that when using shared storage for the Possible values:
Related options:
|
| integer value | Maximum number of compute hosts to trigger image precaching in parallel. When an image precache request is made, compute nodes will be contacted to initiate the download. This number constrains the number of those that will happen in parallel. Higher numbers will cause more computes to work in parallel and may result in reduced time to complete the operation, but may also DDoS the image service. Lower numbers will result in more sequential operation, lower image service load, but likely longer runtime to completion. |
| boolean value | Should unused base images be removed? |
| integer value | Unused unresized base images younger than this will not be removed. |
| integer value | Unused resized base images younger than this will not be removed. |
| string value | Location of cached images. This is NOT the full path - just a folder name relative to $instances_path. For per-compute-host cached images, set to base$my_ip |
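An illustrative [image_cache] section tying these options together might be as follows; the interval and age threshold are placeholder values:
[image_cache]
# Run the image cache manager every 40 minutes
manager_interval = 2400
remove_unused_base_images = true
# Keep unused, unresized base images for at least 24 hours
remove_unused_original_minimum_age_seconds = 86400
# Cache folder relative to $instances_path
subdirectory_name = _base
# Contact at most one compute host at a time when precaching images
precache_concurrency = 1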
9.1.22. ironic
The following table outlines the options available under the [ironic]
group in the /etc/nova/nova.conf
file.
Configuration option = Default value | Type | Description |
---|---|---|
| integer value | The number of times to retry when a request conflicts. If set to 0, only try once, no retries. Related options:
|
| integer value | The number of seconds to wait before retrying the request. Related options:
|
| string value | Authentication URL |
| string value | Config Section from which to load plugin specific options |
| string value | Authentication type to load |
| string value | PEM encoded Certificate Authority to use when verifying HTTPs connections. |
| string value | PEM encoded client certificate cert file |
| boolean value | Collect per-API call timing information. |
| integer value | The maximum number of retries that should be attempted for connection errors. |
| floating point value | Delay (in seconds) between two retries for connection errors. If not set, exponential retry starting with 0.5 seconds up to a maximum of 60 seconds is used. |
| string value | Domain ID to scope to |
| string value | Domain name to scope to |
| string value |
Always use this endpoint URL for requests for this client. NOTE: The unversioned endpoint should be specified here; to request a particular API version, use the |
| boolean value | Verify HTTPS connections. |
| string value | PEM encoded client certificate key file |
| string value |
Case-insensitive key to limit the set of nodes that may be managed by this service to the set of nodes in Ironic which have a matching conductor_group property. If unset, all available nodes will be eligible to be managed by this service. Note that setting this to the empty string ( |
| string value | User’s password |
| list value | List of hostnames for all nova-compute services (including this host) with this partition_key config value. Nodes matching the partition_key value will be distributed between all services specified here. If partition_key is unset, this option is ignored. |
| string value | Domain ID containing project |
| string value | Domain name containing project |
| string value | Project ID to scope to |
| string value | Project name to scope to |
| string value | The default region_name for endpoint URL discovery. |
| integer value | Timeout (seconds) to wait for node serial console state changed. Set to 0 to disable timeout. |
| string value | The default service_name for endpoint URL discovery. |
| string value | The default service_type for endpoint URL discovery. |
| boolean value | Log requests to multiple loggers. |
| integer value | The maximum number of retries that should be attempted for retriable HTTP status codes. |
| floating point value | Delay (in seconds) between two retries for retriable status codes. If not set, exponential retry starting with 0.5 seconds up to a maximum of 60 seconds is used. |
| string value | Scope for system operations |
| integer value | Timeout value for http requests |
| string value | Trust ID |
| string value | User’s domain id |
| string value | User’s domain name |
| string value | User ID |
| string value | Username |
| list value | List of interfaces, in order of preference, for endpoint URL. |
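Since the [ironic] section is largely standard keystoneauth plugin options, a minimal illustrative configuration authenticating as a service user could be the following; the URL, credentials, partition key, and peer list are placeholders:
[ironic]
auth_type = password
auth_url = http://controller:5000/v3
project_name = service
project_domain_name = Default
username = ironic
user_domain_name = Default
password = IRONIC_PASS
# Only manage Ironic nodes whose conductor_group matches this key
partition_key = rack1
# All nova-compute hosts (including this one) sharing the partition key
peer_list = compute1.example.com,compute2.example.com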
9.1.23. key_manager
The following table outlines the options available under the [key_manager]
group in the /etc/nova/nova.conf
file.
Configuration option = Default value | Type | Description |
---|---|---|
| string value | The type of authentication credential to create. Possible values are token, password, keystone_token, and keystone_password. Required if no context is passed to the credential factory. |
| string value | Use this endpoint to connect to Keystone. |
| string value | Specify the key manager implementation. Options are "barbican" and "vault". Default is "barbican". Will support the values earlier set using [key_manager]/api_class for some time. |
| string value | Domain ID for domain scoping. Optional for keystone_token and keystone_password auth_type. |
| string value | Domain name for domain scoping. Optional for keystone_token and keystone_password auth_type. |
| string value | Fixed key returned by key manager, specified in hex. Possible values:
|
| string value | Password for authentication. Required for password and keystone_password auth_type. |
| string value | Project’s domain ID for project. Optional for keystone_token and keystone_password auth_type. |
| string value | Project’s domain name for project. Optional for keystone_token and keystone_password auth_type. |
| string value | Project ID for project scoping. Optional for keystone_token and keystone_password auth_type. |
| string value | Project name for project scoping. Optional for keystone_token and keystone_password auth_type. |
| boolean value | Allow fetching a new token if the current one is going to expire. Optional for keystone_token and keystone_password auth_type. |
| string value | Token for authentication. Required for token and keystone_token auth_type if no context is passed to the credential factory. |
| string value | Trust ID for trust scoping. Optional for keystone_token and keystone_password auth_type. |
| string value | User’s domain ID for authentication. Optional for keystone_token and keystone_password auth_type. |
| string value | User’s domain name for authentication. Optional for keystone_token and keystone_password auth_type. |
| string value | User ID for authentication. Optional for keystone_token and keystone_password auth_type. |
| string value | Username for authentication. Required for password auth_type. Optional for the keystone_password auth_type. |
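For illustration, a minimal [key_manager] section selecting the Barbican backend with a keystone_password credential might look like the following; all credential values are placeholders:
[key_manager]
backend = barbican
# Credential used when no request context is passed to the credential factory
auth_type = keystone_password
auth_url = http://controller:5000/v3
username = nova
password = NOVA_PASS
project_name = service
user_domain_name = Default
project_domain_name = Default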
9.1.24. keystone
The following table outlines the options available under the [keystone]
group in the /etc/nova/nova.conf
file.
Configuration option = Default value | Type | Description |
---|---|---|
| string value | PEM encoded Certificate Authority to use when verifying HTTPs connections. |
| string value | PEM encoded client certificate cert file |
| boolean value | Collect per-API call timing information. |
| integer value | The maximum number of retries that should be attempted for connection errors. |
| floating point value | Delay (in seconds) between two retries for connection errors. If not set, exponential retry starting with 0.5 seconds up to a maximum of 60 seconds is used. |
| string value |
Always use this endpoint URL for requests for this client. NOTE: The unversioned endpoint should be specified here; to request a particular API version, use the |
| boolean value | Verify HTTPS connections. |
| string value | PEM encoded client certificate key file |
| string value | The default region_name for endpoint URL discovery. |
| string value | The default service_name for endpoint URL discovery. |
| string value | The default service_type for endpoint URL discovery. |
| boolean value | Log requests to multiple loggers. |
| integer value | The maximum number of retries that should be attempted for retriable HTTP status codes. |
| floating point value | Delay (in seconds) between two retries for retriable status codes. If not set, exponential retry starting with 0.5 seconds up to a maximum of 60 seconds is used. |
| integer value | Timeout value for http requests |
| list value | List of interfaces, in order of preference, for endpoint URL. |
9.1.25. keystone_authtoken
The following table outlines the options available under the [keystone_authtoken]
group in the /etc/nova/nova.conf
file.
Configuration option = Default value | Type | Description |
---|---|---|
| string value | Config Section from which to load plugin specific options |
| string value | Authentication type to load |
| string value | Complete "public" Identity API endpoint. This endpoint should not be an "admin" endpoint, as it should be accessible by all end users. Unauthenticated clients are redirected to this endpoint to authenticate. Although this endpoint should ideally be unversioned, client support in the wild varies. If you’re using a versioned v2 endpoint here, then this should not be the same endpoint the service user utilizes for validating tokens, because normal end users may not be able to reach that endpoint. This option is deprecated in favor of www_authenticate_uri and will be removed in the S release. Deprecated since: Queens *Reason:*The auth_uri option is deprecated in favor of www_authenticate_uri and will be removed in the S release. |
| string value | API version of the Identity API endpoint. |
| string value |
Request environment key where the Swift cache object is stored. When auth_token middleware is deployed with a Swift cache, use this option to have the middleware share a caching backend with swift. Otherwise, use the |
| string value | A PEM encoded Certificate Authority to use when verifying HTTPs connections. Defaults to system CAs. |
| string value | Required if identity server requires client certificate |
| boolean value | Do not handle authorization requests within the middleware, but delegate the authorization decision to downstream WSGI components. |
| string value | Used to control the use and type of token binding. Can be set to: "disabled" to not check token binding. "permissive" (default) to validate binding information if the bind type is of a form known to the server and ignore it if not. "strict" like "permissive" but if the bind type is unknown the token will be rejected. "required" any form of token binding is needed to be allowed. Finally the name of a binding method that must be present in tokens. |
| integer value | Request timeout value for communicating with Identity API server. |
| integer value | How many times to attempt reconnection when communicating with the Identity API Server. |
| boolean value | (Optional) Indicate whether to set the X-Service-Catalog header. If False, middleware will not ask for service catalog on token validation and will not set the X-Service-Catalog header. |
| boolean value | Verify HTTPS connections. |
| string value | Interface to use for the Identity API endpoint. Valid values are "public", "internal" (default) or "admin". |
| string value | Required if identity server requires client certificate |
| integer value | (Optional) Number of seconds that an operation will wait to get a memcached client connection from the pool. |
| integer value | (Optional) Number of seconds memcached server is considered dead before it is tried again. |
| integer value | (Optional) Maximum total number of open connections to every memcached server. |
| integer value | (Optional) Socket timeout in seconds for communicating with a memcached server. |
| integer value | (Optional) Number of seconds a connection to memcached is held unused in the pool before it is closed. |
| string value | (Optional, mandatory if memcache_security_strategy is defined) This string is used for key derivation. |
| string value | (Optional) If defined, indicate whether token data should be authenticated or authenticated and encrypted. If MAC, token data is authenticated (with HMAC) in the cache. If ENCRYPT, token data is encrypted and authenticated in the cache. If the value is not one of these options or empty, auth_token will raise an exception on initialization. |
| boolean value | (Optional) Use the advanced (eventlet safe) memcached client pool. The advanced pool will only work under python 2.x. |
| list value | Optionally specify a list of memcached server(s) to use for caching. If left undefined, tokens will instead be cached in-process. |
| string value | The region in which the identity server can be found. |
| list value | A choice of roles that must be present in a service token. Service tokens are allowed to request that an expired token can be used and so this check should tightly control that only actual services should be sending this token. Roles here are applied as an ANY check so any role in this list must be present. For backwards compatibility reasons this currently only affects the allow_expired check. |
| boolean value | For backwards compatibility reasons, valid service tokens that do not pass the service_token_roles check are still accepted as valid by default. Setting this to true enforces the check; this will become the default in a future release and should be enabled if possible. |
| string value | The name or type of the service as it appears in the service catalog. This is used to validate tokens that have restricted access rules. |
| integer value | In order to prevent excessive effort spent validating tokens, the middleware caches previously-seen tokens for a configurable duration (in seconds). Set to -1 to disable caching completely. |
| string value | Complete "public" Identity API endpoint. This endpoint should not be an "admin" endpoint, as it should be accessible by all end users. Unauthenticated clients are redirected to this endpoint to authenticate. Although this endpoint should ideally be unversioned, client support in the wild varies. If you’re using a versioned v2 endpoint here, then this should not be the same endpoint the service user utilizes for validating tokens, because normal end users may not be able to reach that endpoint. |
9.1.26. libvirt
The following table outlines the options available under the [libvirt]
group in the /etc/nova/nova.conf
file.
Configuration option = Default value | Type | Description |
---|---|---|
`connection_uri = ` | string value | Overrides the default libvirt URI of the chosen virtualization type. If set, Nova will use this URI to connect to libvirt. Possible values:
Related options:
|
| string value | Is used to set the CPU mode an instance should have.
If Related options:
|
| list value | Enable or disable guest CPU flags.
To explicitly enable or disable CPU flags, use the
[libvirt]
cpu_mode = custom
cpu_models = Cascadelake-Server
cpu_model_extra_flags = -hle, -rtm, +ssbd, mtrr
Nova will disable the
The CPU flags are case-insensitive. In the following example, the
[libvirt]
cpu_mode = custom
cpu_models = Haswell-noTSX-IBRS
cpu_model_extra_flags = -PDPE1GB, +VMX, pcid
Specifying extra CPU flags is valid in combination with all the three possible values of
There can be scenarios where you may need to configure extra CPU flags even for
The possible values for
A special note on a particular CPU flag:
The libvirt driver’s default CPU mode, Related options:
|
| list value | An ordered list of CPU models the host supports.
It is expected that the list is ordered so that the more common and less advanced CPU models are listed earlier. Here is an example: Possible values:
Related options:
|
| integer value | Maximum number of attempts the driver tries to detach a device in libvirt. Related options:
|
| integer value | Maximum number of seconds the driver waits for the success or the failure event from libvirt for a given device detach attempt before it re-triggers the detach. Related options:
|
| list value | Specific cache modes to use for different disk types. For example: file=directsync,block=none,network=writeback For local or direct-attached storage, it is recommended that you use writethrough (default) mode, as it ensures data integrity and has acceptable I/O performance for applications running in the guest, especially for read operations. However, caching mode none is recommended for remote NFS storage, because direct I/O operations (O_DIRECT) perform better than synchronous I/O operations (with O_SYNC). Caching mode none effectively turns all guest I/O operations into direct I/O operations on the host, which is the NFS client in this environment. Possible cache modes:
|
| string value | Override the default disk prefix for the devices attached to an instance. If set, this is used to identify a free disk device name for a bus. Possible values:
Related options:
|
| list value | Performance events to monitor and collect statistics for.
This will allow you to specify a list of events to monitor low-level performance of guests, and collect related statistics via the libvirt driver, which in turn uses the Linux kernel’s For example, to monitor the count of CPU cycles (total/elapsed) and the count of cache misses, enable them as follows:: [libvirt] enabled_perf_events = cpu_clock, cache_misses
Possible values: A string list. The list of supported events can be found |
| integer value | Available capacity in MiB for file-backed memory. Set to 0 to disable file-backed memory.
When enabled, instances will create memory files in the directory specified in When enabled, the value defined for this option is reported as the node memory capacity. Compute node system memory will be used as a cache for file-backed memory, via the kernel’s pagecache mechanism.
Related options:
|
| list value | List of gid targets and ranges. Syntax is guest-gid:host-gid:count. Maximum of 5 allowed. |
| string value | Discard option for nova managed disks. Requires:
|
| list value |
For qemu or KVM guests, set this option to specify a default machine type per host architecture. You can find a list of supported machine types in your environment by checking the output of the :command: |
`images_rbd_ceph_conf = ` | string value | Path to the ceph configuration file to use |
| integer value | The interval in seconds with which to poll Glance after asking for it to copy an image to the local rbd store. This affects how often we ask Glance to report on copy completion, and thus should be short enough that we notice quickly, but not so aggressive that we generate undue load on the Glance server. Related options:
|
| integer value | The overall maximum time we will wait for Glance to complete an image copy to our local rbd store. This should be long enough to allow large images to be copied over the network link between our local store and the one where images typically reside. The only downside of setting this too long is that it takes longer to detect the case where the image copy has stalled or is proceeding too slowly to be useful. Actual errors will be reported by Glance and noticed according to the poll interval.
Related options: * images_type - must be set to |
`images_rbd_glance_store_name = ` | string value | The name of the Glance store that represents the rbd cluster in use by this node. If set, this will allow Nova to request that Glance copy an image from an existing non-local store into the one named by this option before booting so that proper Copy-on-Write behavior is maintained. Related options:
|
| string value | The RADOS pool in which rbd volumes are stored |
| string value | VM Images format. If default is specified, then use_cow_images flag is used instead of this one. Related options:
|
| string value | LVM Volume Group that is used for VM images, when you specify images_type=lvm Related options:
|
| boolean value | Allow the injection of an SSH key at boot time.
There is no agent needed within the image to do this. If libguestfs is available on the host, it will be used. Otherwise nbd is used. The file system of the image will be mounted and the SSH key, which is provided in the REST API call, will be injected as the SSH key for the root user and appended to the This config option will enable directly modifying the instance disk and does not affect what cloud-init may do using data from config_drive option or the metadata service. Linux distribution guest only. Related options:
|
| integer value | Determines how the file system is chosen to inject data into it. libguestfs is used to inject data. If libguestfs is not able to determine the root partition (because there is more or less than one root partition) or cannot mount the file system, it will result in an error and the instance won’t boot. Possible values:
Linux distribution guest only. Related options:
|
| boolean value |
Allow the injection of an admin password for instance only at There is no agent needed within the image to do this. If libguestfs is available on the host, it will be used. Otherwise nbd is used. The file system of the image will be mounted and the admin password, which is provided in the REST API call, will be injected as the password for the root user. If no root user is available, the instance won’t be launched and an error is thrown. Be aware that the injection is not possible when the instance gets launched from a volume. Linux distribution guest only. Possible values:
Related options:
|
| string value | The iSCSI transport iface to use to connect to target in case offload support is desired.
Default format is of the form |
| boolean value | Use multipath connection of the iSER volume. iSER volumes can be connected as multipath devices. This will provide high availability and fault tolerance. |
| integer value | Maximum bandwidth (in MiB/s) to be used during migration. If set to 0, the hypervisor will choose a suitable default. Some hypervisors do not support this feature and will return an error if bandwidth is not 0. Please refer to the libvirt documentation for further details. |
| integer value | Time to wait, in seconds, for migration to successfully complete transferring data before aborting the operation. Value is per GiB of guest RAM + disk to be transferred, with lower bound of a minimum of 2 GiB. Should usually be larger than downtime delay * downtime steps. Set to 0 to disable timeouts. Related options:
|
| integer value | Maximum permitted downtime, in milliseconds, for live migration switchover. Will be rounded up to a minimum of 100ms. You can increase this value if you want to allow live-migrations to complete faster, or avoid live-migration timeout errors by allowing the guest to be paused for longer during the live-migration switch over. Related options:
|
| integer value | Time to wait, in seconds, between each step increase of the migration downtime. Minimum delay is 3 seconds. Value is per GiB of guest RAM + disk to be transferred, with lower bound of a minimum of 2 GiB per device. |
| integer value | Number of incremental steps to reach max downtime value. Will be rounded up to a minimum of 3 steps. |
| host address value | IP address used as the live migration address for this host. This option indicates the IP address which should be used as the target for live migration traffic when migrating to this hypervisor. This metadata is then used by the source of the live migration traffic to construct a migration URI. If this option is set to None, the hostname of the migration target compute node will be used. This option is useful in environments where the live-migration traffic can impact the network plane significantly. A separate network for live-migration traffic can then use this config option and avoids the impact on the management network. |
| boolean value | This option allows nova to start live migration with auto converge on. Auto converge throttles down CPU if a progress of on-going live migration is slow. Auto converge will only be used if this flag is set to True and post copy is not permitted or post copy is unavailable due to the version of libvirt and QEMU in use. Related options:
|
| boolean value | This option allows nova to switch an on-going live migration to post-copy mode, i.e., switch the active VM to the one on the destination node before the migration is complete, therefore ensuring an upper bound on the memory that needs to be transferred. Post-copy requires libvirt>=1.3.3 and QEMU>=2.5.0.
When permitted, post-copy mode will be automatically activated if we reach the timeout defined by The live-migration force complete API also uses post-copy when permitted. If post-copy mode is not available, force complete falls back to pausing the VM to ensure the live-migration operation will complete. When using post-copy mode, if the source and destination hosts lose network connectivity, the VM being live-migrated will need to be rebooted. For more details, please see the Administration guide. Related options:
|
| string value | URI scheme for live migration used by the source of live migration traffic. Override the default libvirt live migration scheme (which is dependent on virt_type). If this option is set to None, nova will automatically choose a sensible default based on the hypervisor. It is not recommended that you change this unless you are very sure that hypervisor supports a particular scheme. Related options:
|
| string value |
This option will be used to determine what action will be taken against a VM after Related options:
|
| boolean value | Enable tunnelled migration. This option enables the tunnelled migration feature, where migration data is transported over the libvirtd connection. If enabled, we use the VIR_MIGRATE_TUNNELLED migration flag, avoiding the need to configure the network to allow direct hypervisor to hypervisor communication. If False, use the native transport. If not set, Nova will choose a sensible default based on, for example the availability of native encryption support in the hypervisor. Enabling this option will definitely impact performance massively. Note that this option is NOT compatible with use of block migration. Deprecated since: 23.0.0 Reason: The "tunnelled live migration" has two inherent limitations: it cannot handle live migration of disks in a non-shared storage setup; and it has a huge performance cost. Both these problems are solved by ``live_migration_with_native_tls`` (requires a pre-configured TLS environment), which is the recommended approach for securing all live migration streams. |
| string value | Live migration target URI used by the source of live migration traffic.
Override the default libvirt live migration target URI (which is dependent on virt_type). Any included "%s" is replaced with the migration target hostname, or
If this option is set to None (which is the default), Nova will automatically generate the
Related options:
Deprecated since: 15.0.0 Reason: live_migration_uri is deprecated for removal in favor of two other options that allow to change live migration scheme and target URI: ``live_migration_scheme`` and ``live_migration_inbound_addr`` respectively. |
| boolean value | Use QEMU-native TLS encryption when live migrating. This option will allow both the migration stream (guest RAM plus device state) and the disk stream to be transported over native TLS, i.e. TLS support built into QEMU. Prerequisite: the TLS environment is configured correctly on all relevant Compute nodes. This means the Certificate Authority (CA), server and client certificates, their corresponding keys, and their file permissions are in place and validated. Notes:
Related options:
|
| integer value | The maximum number of virtio queue pairs that can be enabled when creating a multiqueue guest. The number of virtio queues allocated will be the lesser of the CPUs requested by the guest and the max value defined. By default, this value is set to none meaning the legacy limits based on the reported kernel major version will be used. |
| integer value | The period, in seconds, for collecting memory usage statistics. A zero or negative value disables memory usage statistics. |
| string value | Mount options passed to the NFS client. See the nfs man page for details. Mount options control the way the filesystem is mounted and how the NFS client behaves when accessing files on this mount point. Possible values:
|
| string value | Directory where the NFS volume is mounted on the compute node. The default is the mnt directory under the location where nova’s Python module is installed. NFS provides shared storage for the OpenStack Block Storage service. Possible values:
|
| integer value | Number of times to rediscover AoE target to find volume. Nova provides support for block storage attaching to hosts via AOE (ATA over Ethernet). This option allows the user to specify the maximum number of retry attempts that can be made to discover the AoE device. |
| integer value | Number of times to scan iSER target to find volume. iSER is a server network protocol that extends iSCSI protocol to use Remote Direct Memory Access (RDMA). This option allows the user to specify the maximum number of scan attempts that can be made to find iSER volume. |
| integer value | Maximum number of guests with encrypted memory which can run concurrently on this compute host. For now this is only relevant for AMD machines which support SEV (Secure Encrypted Virtualization). Such machines have a limited number of slots in their memory controller for storing encryption keys. Each running guest with encrypted memory will consume one of these slots. The option may be reused for other equivalent technologies in the future. If the machine does not support memory encryption, the option will be ignored and inventory will be set to 0.
If the machine does support memory encryption, for now a value of
Related options:
|
| integer value | Number of times to rediscover NVMe target to find volume Nova provides support for block storage attaching to hosts via NVMe (Non-Volatile Memory Express). This option allows the user to specify the maximum number of retry attempts that can be made to discover the NVMe device. |
| integer value | The number of PCIe ports an instance will get. Libvirt allows a custom number of PCIe ports (pcie-root-port controllers) to be assigned to a target instance. Some are used by default; the rest are available for hotplug. By default there are only 1-2 free ports, which limits hotplug. More info: https://github.com/qemu/qemu/blob/master/docs/pcie.txt Due to QEMU limitations, the maximum value for aarch64/virt is set to 28. The default value of 0 leaves the calculation of the number of ports to libvirt. |
| integer value | Number of times to scan given storage protocol to find volume. |
| list value | Configure persistent memory (pmem) namespaces. These namespaces must already have been created on the host. This config option is in the following format: "$LABEL:$NSNAME[|$NSNAME][,$LABEL:$NSNAME[|$NSNAME]]"
|
| string value | Path to a Quobyte Client configuration file. |
| string value | Directory where the Quobyte volume is mounted on the compute node. Nova supports the Quobyte volume driver, which enables storing Block Storage service volumes on a Quobyte storage back end. This option specifies the path of the directory where the Quobyte volume is mounted. Possible values:
|
| integer value | The RADOS client timeout in seconds when initially connecting to the cluster. |
| integer value | Number of retries to destroy a RBD volume. Related options:
|
| integer value | Number of seconds to wait between each consecutive retry to destroy a RBD volume. Related options:
|
| string value | The libvirt UUID of the secret for the rbd_user volumes. |
| string value | The RADOS client name for accessing rbd(RADOS Block Devices) volumes. Libvirt will refer to this user when connecting and authenticating with the Ceph RBD server. |
| integer value | In a realtime host context, vCPUs for the guest will run at this scheduling priority. The priority range depends on the host kernel (usually 1-99). |
| string value | libvirt’s transport method for remote file operations. Because libvirt cannot use RPC to copy files over the network to/from other compute nodes, another method must be used for:
|
| string value | The ID of the image to boot from to rescue data from a corrupted instance. If the rescue REST API operation doesn’t provide an ID of an image to use, the image which is referenced by this ID is used. If this option is not set, the image from the instance is used. Possible values:
Related options:
|
| string value | The ID of the kernel (AKI) image to use with the rescue image. If the chosen rescue image allows the separate definition of its kernel disk, the value of this option is used, if specified. This is the case when Amazon's AMI/AKI/ARI image format is used for the rescue image. Possible values:
Related options:
|
| string value | The ID of the RAM disk (ARI) image to use with the rescue image. If the chosen rescue image allows the separate definition of its RAM disk, the value of this option is used, if specified. This is the case when Amazon's AMI/AKI/ARI image format is used for the rescue image. Possible values:
Related options:
|
| string value |
The path to an RNG (Random Number Generator) device that will be used as the source of entropy on the host. Since libvirt 1.3.4, any path (that returns random numbers when read) is accepted. The recommended source of entropy is |
| integer value | Configure virtio rx queue size. This option is only usable for virtio-net devices with the vhost and vhost-user backends. Available only with QEMU/KVM. Requires libvirt v2.3 and QEMU v2.7. |
`smbfs_mount_options = ` | string value | Mount options passed to the SMBFS client.
Provide SMBFS options as a single string containing all parameters. See mount.cifs man page for details. Note that the libvirt-qemu |
| string value | Directory where the SMBFS shares are mounted on the compute node. |
| boolean value |
Enable snapshot compression for
Note: you can set Related options:
|
| string value | Determine the snapshot image format when sending to the image service. If set, this decides what format is used when sending the snapshot to the image service. If not set, defaults to same type as source image. |
| string value | Location where libvirt driver will store snapshots before uploading them to image service |
| boolean value | Create sparse logical volumes (with virtualsize) if this flag is set to True. Deprecated since: 18.0.0 Reason: Sparse logical volumes is a feature that is not tested hence not supported. LVM logical volumes are preallocated by default. If you want thin provisioning, use Cinder thin-provisioned volumes. |
| boolean value | Enable emulated TPM (Trusted Platform Module) in guests. |
| string value | Group that swtpm binary runs as.
When using emulated TPM, the In order to support cold migration and resize, nova needs to know what group the swtpm binary is running as in order to ensure that files get the proper ownership after being moved between nodes. Related options:
|
| string value | User that swtpm binary runs as.
When using emulated TPM, the In order to support cold migration and resize, nova needs to know what user the swtpm binary is running as in order to ensure that files get the proper ownership after being moved between nodes. Related options:
|
| string value |
The data source used to populate the host "serial" UUID exposed to the guest in the virtual BIOS. All choices except
| integer value | Configure virtio tx queue size. This option is only usable for virtio-net devices with the vhost-user backend. Available only with QEMU/KVM. Requires libvirt v3.7 and QEMU v2.10. |
| list value | List of uid targets and ranges. Syntax is guest-uid:host-uid:count. A maximum of 5 entries is allowed. |
| boolean value | Use virtio for bridge interfaces with KVM/QEMU |
| string value | Describes the virtualization type (or so called domain type) libvirt should use. The choice of this type must match the underlying virtualization strategy you have chosen for this host. Related options:
|
| string value | Method used to wipe ephemeral disks when they are deleted. Only takes effect if LVM is set as backing storage. Related options:
|
| integer value |
Size of area in MiB, counting from the beginning of the allocated volume, that will be cleared using method set in Possible values:
Related options:
|
| boolean value | Use multipath connections for iSCSI or FC volumes. Volumes can be connected in libvirt as multipath devices. This provides high availability and fault tolerance. |
| string value | Path to the SSD cache file. You can attach an SSD drive to a client and configure the drive to store a local cache of frequently accessed data. By having a local cache on a client’s SSD drive, you can increase the overall cluster performance by 10 times or more. WARNING! Many SSD models are not server grade and may lose an arbitrary set of data changes on power loss. Such SSDs should not be used in Vstorage and are dangerous as they may lead to data corruption and inconsistencies. Please consult the manual for which SSD models are known to be safe, or verify it using the vstorage-hwflush-check(1) utility. This option defines the path, which should include the "%(cluster_name)s" template to separate caches from multiple shares. Related options:
|
| string value | Path to vzstorage client log. This option defines the log of cluster operations, it should include "%(cluster_name)s" template to separate logs from multiple shares. Related options:
|
| string value | Mount owner group name. This option defines the owner group of Vzstorage cluster mountpoint. Related options:
|
| list value | Extra mount options for pstorage-mount For full description of them, see https://static.openvz.org/vz-man/man1/pstorage-mount.1.gz.html Format is a python string representation of arguments list, like: "[-v, -R, 500]" Shouldn’t include -c, -l, -C, -u, -g and -m as those have explicit vzstorage_* options. Related options:
|
| string value | Mount access mode. This option defines the access bits of the Vzstorage cluster mountpoint, in a format similar to that of the chmod(1) utility, for example: 0770. It consists of one to four digits ranging from 0 to 7, with missing leading digits assumed to be 0s. Related options:
|
| string value | Directory where the Virtuozzo Storage clusters are mounted on the compute node. This option defines non-standard mountpoint for Vzstorage cluster. Related options:
|
| string value | Mount owner user name. This option defines the owner user of Vzstorage cluster mountpoint. Related options:
|
| integer value | Number of seconds to wait for the instance to shut down after a soft reboot request is made. We fall back to hard reboot if the instance does not shut down within this window. |
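As an illustration of how several of the live migration options in this group fit together, the following is a minimal, hedged sketch of a [libvirt] section; the option names are the upstream nova names for this group and every value is a placeholder to adapt, not a recommended default.

[libvirt]
virt_type = kvm
live_migration_permit_auto_converge = true
live_migration_permit_post_copy = true
live_migration_inbound_addr = 192.0.2.10
live_migration_with_native_tls = true

When both are enabled and supported, post-copy takes precedence and auto converge is only used if post-copy is not permitted or not available, matching the behaviour described above.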
9.1.27. metrics
The following table outlines the options available under the [metrics]
group in the /etc/nova/nova.conf
file.
Configuration option = Default value | Type | Description |
---|---|---|
| boolean value | Whether metrics are required. This setting determines how any unavailable metrics are treated. If this option is set to True, any hosts for which a metric is unavailable will raise an exception, so it is recommended to also use the MetricFilter to filter out those hosts before weighing. Possible values:
Related options:
|
| floating point value | Multiplier used for weighing hosts based on reported metrics. When using metrics to weight the suitability of a host, you can use this option to change how the calculated weight influences the weight assigned to a host as follows:
Possible values:
Related options:
|
| floating point value | Default weight for unavailable metrics. When any of the following conditions are met, this value will be used in place of any actual metric value:
Possible values:
Related options:
|
| list value | Mapping of metric to weight modifier.
This setting specifies the metrics to be weighed and the relative ratios for each metric. This should be a single string value, consisting of a series of one or more name=ratio pairs, separated by commas, where
Note that if the ratio is set to 0, the metric value is ignored, and instead the weight will be set to the value of the As an example, let’s consider the case where this option is set to: `name1=1.0, name2=-1.3` The final weight will be: `(name1.value * 1.0) + (name2.value * -1.3)` Possible values:
Related options:
|
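A hedged example of a [metrics] section combining the options above; the option names are the upstream nova names for this group, and the metric name and ratios are purely illustrative.

[metrics]
required = false
weight_multiplier = 1.0
weight_of_unavailable = -10000.0
weight_setting = cpu.frequency=1.0

With required = false, hosts missing the configured metric are weighed using weight_of_unavailable instead of raising an exception.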
9.1.28. mks
The following table outlines the options available under the [mks]
group in the /etc/nova/nova.conf
file.
Configuration option = Default value | Type | Description |
---|---|---|
| boolean value | Enables graphical console access for virtual machines. |
| uri value | Location of the MKS web console proxy. The URL in the response points to a WebMKS proxy which starts proxying between the client and the corresponding vCenter server where the instance runs. In order to use web-based console access, a WebMKS proxy must be installed and configured. Possible values:
|
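A minimal sketch of the [mks] group for WebMKS console access; the option names follow the upstream nova names for this group and the URL is a placeholder for wherever your WebMKS proxy actually listens.

[mks]
enabled = true
mksproxy_base_url = https://proxy.example.com:6090/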
9.1.29. neutron
The following table outlines the options available under the [neutron]
group in the /etc/nova/nova.conf
file.
Configuration option = Default value | Type | Description |
---|---|---|
| string value | Authentication URL |
| string value | Config Section from which to load plugin specific options |
| string value | Authentication type to load |
| string value | PEM encoded Certificate Authority to use when verifying HTTPs connections. |
| string value | PEM encoded client certificate cert file |
| boolean value | Collect per-API call timing information. |
| integer value | The maximum number of retries that should be attempted for connection errors. |
| floating point value | Delay (in seconds) between two retries for connection errors. If not set, exponential retry starting with 0.5 seconds up to a maximum of 60 seconds is used. |
| string value | Optional domain ID to use with v3 and v2 parameters. It will be used for both the user and project domain in v3 and ignored in v2 authentication. |
| string value | Optional domain name to use with v3 API and v2 parameters. It will be used for both the user and project domain in v3 and ignored in v2 authentication. |
| string value | Default name for the floating IP pool. Specifies the name of the floating IP pool used for allocating floating IPs. This option is only used if Neutron does not specify the floating IP pool name in port binding responses. |
| string value | Domain ID to scope to |
| string value | Domain name to scope to |
| string value |
Always use this endpoint URL for requests for this client. NOTE: The unversioned endpoint should be specified here; to request a particular API version, use the |
| integer value | Integer value representing the number of seconds to wait before querying Neutron for extensions. After this number of seconds the next time Nova needs to create a resource in Neutron it will requery Neutron for the extensions that it has loaded. Setting value to 0 will refresh the extensions with no wait. |
| integer value | Number of times neutronclient should retry on any failed http call. 0 means the connection is attempted only once. Setting it to any positive integer means that on failure the connection is retried that many times, e.g. setting it to 3 means the total number of connection attempts will be 4. Possible values:
|
| boolean value | Verify HTTPS connections. |
| string value | PEM encoded client certificate key file |
`metadata_proxy_shared_secret = ` | string value | This option holds the shared secret string used to validate proxy requests to Neutron metadata requests. In order to be used, the X-Metadata-Provider-Signature header must be supplied in the request. Related options:
|
| string value | Default name for the Open vSwitch integration bridge. Specifies the name of an integration bridge interface used by OpenvSwitch. This option is only used if Neutron does not specify the OVS bridge name in port binding responses. |
| string value | User’s password |
| list value | List of physnets present on this host.
For each physnet listed, an additional section is read to define that physnet's NUMA affinity, for example:

[neutron]
physnets = foo, bar

[neutron_physnet_foo]
numa_nodes = 0

[neutron_physnet_bar]
numa_nodes = 0,1

Any physnet that is not listed using this option will be treated as having no particular NUMA node affinity.
Tunnelled networks (VXLAN, GRE, …) cannot be accounted for in this way and are instead configured using the [neutron_tunnel] group, for example:

[neutron_tunnel]
numa_nodes = 1

Related options:
|
| string value | Domain ID containing project |
| string value | Domain name containing project |
| string value | Project ID to scope to |
| string value | Project name to scope to |
| string value | The default region_name for endpoint URL discovery. |
| string value | The default service_name for endpoint URL discovery. |
| string value | The default service_type for endpoint URL discovery. |
| boolean value | When set to True, this option indicates that Neutron will be used to proxy metadata requests and resolve instance ids. Otherwise, the instance ID must be passed to the metadata request in the X-Instance-ID header. Related options:
|
| boolean value | Log requests to multiple loggers. |
| integer value | The maximum number of retries that should be attempted for retriable HTTP status codes. |
| floating point value | Delay (in seconds) between two retries for retriable status codes. If not set, exponential retry starting with 0.5 seconds up to a maximum of 60 seconds is used. |
| string value | Scope for system operations |
| string value | Tenant ID |
| string value | Tenant Name |
| integer value | Timeout value for http requests |
| string value | Trust ID |
| string value | User’s domain id |
| string value | User’s domain name |
| string value | User ID |
| string value | Username |
| list value | List of interfaces, in order of preference, for endpoint URL. |
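For orientation, a hedged sketch of a typical [neutron] section using Keystone password authentication; the option names follow the upstream nova names for this group, and every credential, URL and region below is a placeholder.

[neutron]
auth_type = password
auth_url = https://keystone.example.com:5000/v3
username = neutron
password = NEUTRON_PASSWORD
project_name = service
user_domain_name = Default
project_domain_name = Default
region_name = RegionOne
service_metadata_proxy = true
metadata_proxy_shared_secret = METADATA_SECRET

When service_metadata_proxy is enabled, the shared secret must match the one configured on the Neutron metadata agent.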
9.1.30. notifications
The following table outlines the options available under the [notifications]
group in the /etc/nova/nova.conf
file.
Configuration option = Default value | Type | Description |
---|---|---|
| boolean value | If enabled, include block device information in the versioned notification payload. Sending block device information is disabled by default as providing that information can incur some overhead on the system since the information may need to be loaded from the database. |
| string value | Default notification level for outgoing notifications. |
| string value | Specifies which notification format shall be emitted by nova. The versioned notification interface is at feature parity with the legacy interface, and the versioned interface is actively developed, so new consumers should use the versioned interface. However, the legacy interface is heavily used by ceilometer and other mature OpenStack components so it remains the default.
Note that notifications can be completely disabled by setting The list of versioned notifications is visible in https://docs.openstack.org/nova/latest/reference/notifications.html |
| string value | If set, send compute.instance.update notifications on instance state changes. Please refer to https://docs.openstack.org/nova/latest/reference/notifications.html for additional information on notifications. |
| list value | Specifies the topics for the versioned notifications issued by nova. The default value is fine for most deployments and rarely needs to be changed. However, if you have a third-party service that consumes versioned notifications, it might be worth getting a topic for that service. Nova will send a message containing a versioned notification payload to each topic queue in this list. The list of versioned notifications is visible in https://docs.openstack.org/nova/latest/reference/notifications.html |
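A hedged example of a [notifications] section; the option names follow the upstream nova names for this group and the values simply illustrate the choices discussed above.

[notifications]
notification_format = versioned
notify_on_state_change = vm_and_task_state
default_level = INFO
bdms_in_notifications = false
versioned_notifications_topics = versioned_notifications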
9.1.31. oslo_concurrency
The following table outlines the options available under the [oslo_concurrency]
group in the /etc/nova/nova.conf
file.
Configuration option = Default value | Type | Description |
---|---|---|
| boolean value | Enables or disables inter-process locks. |
| string value | Directory to use for lock files. For security, the specified directory should only be writable by the user running the processes that need locking. Defaults to environment variable OSLO_LOCK_PATH. If external locks are used, a lock path must be set. |
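A minimal sketch, assuming the conventional nova state directory; the option names are the upstream oslo.concurrency names and the path is a placeholder.

[oslo_concurrency]
disable_process_locking = false
lock_path = /var/lib/nova/tmp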
9.1.32. oslo_messaging_amqp
The following table outlines the options available under the [oslo_messaging_amqp]
group in the /etc/nova/nova.conf
file.
Configuration option = Default value | Type | Description |
---|---|---|
| string value | Indicates the addressing mode used by the driver. Permitted values:
legacy - use legacy non-routable addressing
routable - use routable addresses
dynamic - use legacy addresses if the message bus does not support routing, otherwise use routable addressing |
| string value | Appended to the address prefix when sending to a group of consumers. Used by the message bus to identify messages that should be delivered in a round-robin fashion across consumers. |
| string value | address prefix used when broadcasting to all servers |
| integer value | Increase the connection_retry_interval by this many seconds after each unsuccessful failover attempt. |
| integer value | Seconds to pause before attempting to re-connect. |
| integer value | Maximum limit for connection_retry_interval + connection_retry_backoff |
| string value | Name for the AMQP container. Must be globally unique. Defaults to a generated UUID. |
| string value | Exchange name used in notification addresses. Exchange name resolution precedence: Target.exchange if set else default_notification_exchange if set else control_exchange if set else notify |
| integer value | The deadline for a sent notification message delivery. Only used when caller does not provide a timeout expiry. |
| integer value | The maximum number of attempts to re-send a reply message which failed due to a recoverable error. |
| integer value | The deadline for an rpc reply message delivery. |
| string value | Exchange name used in RPC addresses. Exchange name resolution precedence: Target.exchange if set else default_rpc_exchange if set else control_exchange if set else rpc |
| integer value | The deadline for an rpc cast or call message delivery. Only used when caller does not provide a timeout expiry. |
| integer value | The duration to schedule a purge of idle sender links. Detach link after expiry. |
| string value | address prefix when sending to any server in group |
| integer value | Timeout for inactive connections (in seconds) |
| integer value | Time to pause between re-connecting an AMQP 1.0 link that failed due to a recoverable error. |
| string value | Appended to the address prefix when sending a fanout message. Used by the message bus to identify fanout messages. |
| string value | Address prefix for all generated Notification addresses |
| integer value | Window size for incoming Notification messages |
| multi valued | Send messages of this type pre-settled. Pre-settled messages will not receive acknowledgement from the peer. Note well: pre-settled messages may be silently discarded if the delivery fails. Permitted values:
rpc-call - send RPC Calls pre-settled
rpc-reply - send RPC Replies pre-settled
rpc-cast - send RPC Casts pre-settled
notify - send Notifications pre-settled |
| boolean value | Enable virtual host support for those message buses that do not natively support virtual hosting (such as qpidd). When set to true the virtual host name will be added to all message bus addresses, effectively creating a private subnet per virtual host. Set to False if the message bus supports virtual hosting using the hostname field in the AMQP 1.0 Open performative as the name of the virtual host. |
| integer value | Window size for incoming RPC Reply messages. |
| string value | Address prefix for all generated RPC addresses |
| integer value | Window size for incoming RPC Request messages |
`sasl_config_dir = ` | string value | Path to directory that contains the SASL configuration |
`sasl_config_name = ` | string value | Name of configuration file (without .conf suffix) |
`sasl_default_realm = ` | string value | SASL realm to use if no realm present in username |
`sasl_mechanisms = ` | string value | Space separated list of acceptable SASL mechanisms |
| string value | address prefix used when sending to a specific server |
| boolean value | Attempt to connect via SSL. If no other ssl-related parameters are given, it will use the system’s CA-bundle to verify the server’s certificate. |
`ssl_ca_file = ` | string value | CA certificate PEM file used to verify the server’s certificate |
`ssl_cert_file = ` | string value | Self-identifying certificate PEM file for client authentication |
`ssl_key_file = ` | string value | Private key PEM file used to sign ssl_cert_file certificate (optional) |
| string value | Password for decrypting ssl_key_file (if encrypted) |
| boolean value | By default SSL checks that the name in the server’s certificate matches the hostname in the transport_url. In some configurations it may be preferable to use the virtual hostname instead, for example if the server uses the Server Name Indication TLS extension (rfc6066) to provide a certificate per virtual host. Set ssl_verify_vhost to True if the server’s SSL certificate uses the virtual host name instead of the DNS name. |
| boolean value | Debug: dump AMQP frames to stdout |
| string value | Appended to the address prefix when sending to a particular RPC/Notification server. Used by the message bus to identify messages sent to a single destination. |
9.1.33. oslo_messaging_kafka
The following table outlines the options available under the [oslo_messaging_kafka]
group in the /etc/nova/nova.conf
file.
Configuration option = Default value | Type | Description |
---|---|---|
| string value | The compression codec for all data generated by the producer. If not set, compression will not be used. Note that the allowed values of this depend on the kafka version |
| integer value | The pool size limit for connections expiration policy |
| integer value | The time-to-live in sec of idle connections in the pool |
| string value | Group id for Kafka consumer. Consumers in one group will coordinate message consumption |
| boolean value | Enable asynchronous consumer commits |
| floating point value | Default timeout(s) for Kafka consumers |
| integer value | Max fetch bytes of Kafka consumer |
| integer value | The maximum number of records returned in a poll call |
| integer value | Pool Size for Kafka Consumers |
| integer value | Size of batch for the producer async send |
| floating point value | Upper bound on the delay for KafkaProducer batching in seconds |
| string value | Mechanism when security protocol is SASL |
| string value | Protocol used to communicate with brokers |
`ssl_cafile = ` | string value | CA certificate PEM file used to verify the server certificate |
`ssl_client_cert_file = ` | string value | Client certificate PEM file used for authentication. |
`ssl_client_key_file = ` | string value | Client key PEM file used for authentication. |
`ssl_client_key_password = ` | string value | Client key password file used for authentication. |
9.1.34. oslo_messaging_notifications
The following table outlines the options available under the [oslo_messaging_notifications]
group in the /etc/nova/nova.conf
file.
Configuration option = Default value | Type | Description |
---|---|---|
| multi valued | The Drivers(s) to handle sending notifications. Possible values are messaging, messagingv2, routing, log, test, noop |
| integer value | The maximum number of attempts to re-send a notification message which failed to be delivered due to a recoverable error. 0 - No retry, -1 - indefinite |
| list value | AMQP topic used for OpenStack notifications. |
| string value | A URL representing the messaging driver to use for notifications. If not set, we fall back to the same configuration used for RPC. |
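A hedged sketch of the [oslo_messaging_notifications] group; the option names are the upstream oslo.messaging names, and the driver and topic values are illustrative.

[oslo_messaging_notifications]
driver = messagingv2
topics = notifications
retry = -1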
9.1.35. oslo_messaging_rabbit
The following table outlines the options available under the [oslo_messaging_rabbit]
group in the /etc/nova/nova.conf
file.
Configuration option = Default value | Type | Description |
---|---|---|
| boolean value | Auto-delete queues in AMQP. |
| boolean value | Use durable queues in AMQP. |
| boolean value | (DEPRECATED) Enable/Disable the RabbitMQ mandatory flag for direct send. The direct send is used as reply, so the MessageUndeliverable exception is raised in case the client queue does not exist. The MessageUndeliverable exception is used to loop for a timeout to give the sender a chance to recover. This flag is deprecated and it will not be possible to deactivate this functionality anymore. |
| boolean value | Enable the x-cancel-on-ha-failover flag so that the rabbitmq server will cancel and notify consumers when the queue is down. |
| boolean value | Run the health check heartbeat thread through a native python thread by default. If this option is equal to False then the health check heartbeat will inherit the execution model from the parent process. For example if the parent process has monkey patched the stdlib by using eventlet/greenlet then the heartbeat will be run through a green thread. This option should be set to True only for the wsgi services. |
| integer value | How many times during the heartbeat_timeout_threshold we check the heartbeat. |
| integer value | Number of seconds after which the Rabbit broker is considered down if heartbeat’s keep-alive fails (0 disables heartbeat). |
| string value | EXPERIMENTAL: Possible values are: gzip, bz2. If not set compression will not be used. This option may not be available in future versions. |
| string value | Determines how the next RabbitMQ node is chosen in case the one we are currently connected to becomes unavailable. Takes effect only if more than one RabbitMQ node is provided in config. |
| integer value | How long to wait for a missing client before abandoning sending it its replies. This value should not be longer than rpc_response_timeout. |
| floating point value | How long to wait before reconnecting in response to an AMQP consumer cancel notification. |
| boolean value | Try to use HA queues in RabbitMQ (x-ha-policy: all). If you change this option, you must wipe the RabbitMQ database. In RabbitMQ 3.0, queue mirroring is no longer controlled by the x-ha-policy argument when declaring a queue. If you just want to make sure that all queues (except those with auto-generated names) are mirrored across all nodes, run: "rabbitmqctl set_policy HA ^(?!amq\.).* {"ha-mode": "all"} " |
| integer value | Maximum interval of RabbitMQ connection retries. Default is 30 seconds. |
| string value | The RabbitMQ login method. |
| integer value | Specifies the number of messages to prefetch. Setting to zero allows unlimited messages. |
| integer value | How long to backoff for between retries when connecting to RabbitMQ. |
| integer value | How frequently to retry connecting with RabbitMQ. |
| integer value | Positive integer representing duration in seconds for queue TTL (x-expires). Queues which are unused for the duration of the TTL are automatically deleted. The parameter affects only reply and fanout queues. |
| boolean value | Connect over SSL. |
`ssl_ca_file = ` | string value | SSL certification authority file (valid only if SSL enabled). |
`ssl_cert_file = ` | string value | SSL cert file (valid only if SSL enabled). |
`ssl_key_file = ` | string value | SSL key file (valid only if SSL enabled). |
`ssl_version = ` | string value | SSL version to use (valid only if SSL enabled). Valid values are TLSv1 and SSLv23. SSLv2, SSLv3, TLSv1_1, and TLSv1_2 may be available on some distributions. |
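A hedged sketch combining the more commonly tuned options from this group; the names follow the upstream oslo.messaging RabbitMQ driver and the values are placeholders.

[oslo_messaging_rabbit]
amqp_durable_queues = false
rabbit_ha_queues = false
heartbeat_timeout_threshold = 60
heartbeat_rate = 2
ssl = true
ssl_ca_file = /etc/pki/tls/certs/rabbit-ca.pem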
9.1.36. oslo_middleware
The following table outlines the options available under the [oslo_middleware]
group in the /etc/nova/nova.conf
file.
Configuration option = Default value | Type | Description |
---|---|---|
| boolean value | Whether the application is behind a proxy or not. This determines if the middleware should parse the headers or not. |
| integer value | The maximum body size for each request, in bytes. |
| string value | The HTTP Header that will be used to determine what the original request protocol scheme was, even if it was hidden by a SSL termination proxy. |
9.1.37. oslo_policy
The following table outlines the options available under the [oslo_policy]
group in the /etc/nova/nova.conf
file.
Configuration option = Default value | Type | Description |
---|---|---|
| boolean value |
This option controls whether or not to use old deprecated defaults when evaluating policies. If |
| boolean value |
This option controls whether or not to enforce scope when evaluating policies. If |
| string value | Default rule. Enforced when a requested rule is not found. |
| multi valued | Directories where policy configuration files are stored. They can be relative to any directory in the search path defined by the config_dir option, or absolute paths. The file defined by policy_file must exist for these directories to be searched. Missing or empty directories are ignored. |
| string value | The relative or absolute path of a file that maps roles to permissions for a given service. Relative paths must be specified in relation to the configuration file setting this option. |
| string value | Content Type to send and receive data for REST based policy check |
| string value | Absolute path to ca cert file for REST based policy check |
| string value | Absolute path to client cert for REST based policy check |
| string value | Absolute path to client key file for REST based policy check |
| boolean value | Server identity verification for REST based policy check |
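For reference, a minimal sketch of the [oslo_policy] group; the option names are the upstream oslo.policy names and the file path is a placeholder.

[oslo_policy]
policy_file = policy.yaml
enforce_scope = false
enforce_new_defaults = false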
9.1.38. pci
The following table outlines the options available under the [pci]
group in the /etc/nova/nova.conf
file.
Configuration option = Default value | Type | Description |
---|---|---|
| multi valued | An alias for a PCI passthrough device requirement. This allows users to specify the alias in the extra specs for a flavor, without needing to repeat all the PCI property requirements.
This should be configured for the Possible Values:
|
| multi valued | White list of PCI devices available to VMs. Possible values:
|
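A hedged sketch showing how the alias and whitelist options are typically expressed as JSON values; the vendor and product IDs below are placeholders, not recommendations.

[pci]
passthrough_whitelist = { "vendor_id": "8086", "product_id": "0443" }
alias = { "vendor_id": "8086", "product_id": "0443", "device_type": "type-PCI", "name": "my-alias" }

A flavor can then request the device with an extra spec such as pci_passthrough:alias=my-alias:1.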
9.1.39. placement
The following table outlines the options available under the [placement]
group in the /etc/nova/nova.conf
file.
Configuration option = Default value | Type | Description |
---|---|---|
| string value | Authentication URL |
| string value | Config Section from which to load plugin specific options |
| string value | Authentication type to load |
| string value | PEM encoded Certificate Authority to use when verifying HTTPs connections. |
| string value | PEM encoded client certificate cert file |
| boolean value | Collect per-API call timing information. |
| integer value | The maximum number of retries that should be attempted for connection errors. |
| floating point value | Delay (in seconds) between two retries for connection errors. If not set, exponential retry starting with 0.5 seconds up to a maximum of 60 seconds is used. |
| string value | Optional domain ID to use with v3 and v2 parameters. It will be used for both the user and project domain in v3 and ignored in v2 authentication. |
| string value | Optional domain name to use with v3 API and v2 parameters. It will be used for both the user and project domain in v3 and ignored in v2 authentication. |
| string value | Domain ID to scope to |
| string value | Domain name to scope to |
| string value |
Always use this endpoint URL for requests for this client. NOTE: The unversioned endpoint should be specified here; to request a particular API version, use the |
| boolean value | Verify HTTPS connections. |
| string value | PEM encoded client certificate key file |
| string value | User’s password |
| string value | Domain ID containing project |
| string value | Domain name containing project |
| string value | Project ID to scope to |
| string value | Project name to scope to |
| string value | The default region_name for endpoint URL discovery. |
| string value | The default service_name for endpoint URL discovery. |
| string value | The default service_type for endpoint URL discovery. |
| boolean value | Log requests to multiple loggers. |
| integer value | The maximum number of retries that should be attempted for retriable HTTP status codes. |
| floating point value | Delay (in seconds) between two retries for retriable status codes. If not set, exponential retry starting with 0.5 seconds up to a maximum of 60 seconds is used. |
| string value | Scope for system operations |
| string value | Tenant ID |
| string value | Tenant Name |
| integer value | Timeout value for http requests |
| string value | Trust ID |
| string value | User’s domain id |
| string value | User’s domain name |
| string value | User ID |
| string value | Username |
| list value | List of interfaces, in order of preference, for endpoint URL. |
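A hedged sketch of a typical [placement] section using Keystone password authentication; the option names follow the upstream nova names for this group, and all credentials and URLs are placeholders.

[placement]
auth_type = password
auth_url = https://keystone.example.com:5000/v3
username = placement
password = PLACEMENT_PASSWORD
project_name = service
user_domain_name = Default
project_domain_name = Default
region_name = RegionOne
valid_interfaces = internal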
9.1.40. powervm
The following table outlines the options available under the [powervm]
group in the /etc/nova/nova.conf
file.
Configuration option = Default value | Type | Description |
---|---|---|
| string value | The disk driver to use for PowerVM disks. PowerVM provides support for localdisk and PowerVM Shared Storage Pool disk drivers. Related options:
|
| floating point value | Factor used to calculate the amount of physical processor compute power given to each vCPU. For example, a value of 1.0 means a whole physical processor, whereas 0.05 means 1/20th of a physical processor. |
`volume_group_name = ` | string value | Volume Group to use for block device operations. If disk_driver is localdisk, then this attribute must be specified. It is strongly recommended NOT to use rootvg since that is used by the management partition and filling it will cause failures. |
9.1.41. privsep
The following table outlines the options available under the [privsep]
group in the /etc/nova/nova.conf
file.
Configuration option = Default value | Type | Description |
---|---|---|
| list value | List of Linux capabilities retained by the privsep daemon. |
| string value | Group that the privsep daemon should run as. |
| string value | Command to invoke to start the privsep daemon if not using the "fork" method. If not specified, a default is generated using "sudo privsep-helper" and arguments designed to recreate the current configuration. This command must accept suitable --privsep_context and --privsep_sock_path arguments. |
| string value | Logger name to use for this privsep context. By default all contexts log with oslo_privsep.daemon. |
| integer value | The number of threads available for privsep to concurrently run processes. Defaults to the number of CPU cores in the system. |
| string value | User that the privsep daemon should run as. |
9.1.42. profiler
The following table outlines the options available under the [profiler]
group in the /etc/nova/nova.conf
file.
Configuration option = Default value | Type | Description |
---|---|---|
| string value | Connection string for a notifier backend.
Default value is Examples of possible values:
|
| boolean value | Enable the profiling for all services on this node. Default value is False (fully disable the profiling feature). Possible values:
|
| string value | Document type for notification indexing in elasticsearch. |
| integer value | Elasticsearch splits large requests in batches. This parameter defines maximum size of each batch (for example: es_scroll_size=10000). |
| string value | This parameter is a time value parameter (for example: es_scroll_time=2m), indicating for how long the nodes that participate in the search will maintain relevant resources in order to continue and support it. |
| boolean value | Enable filter traces that contain error/exception to a separated place. Default value is set to False. Possible values:
|
| string value | Secret key(s) to use for encrypting context data for performance profiling. This string value should have the following format: <key1>[,<key2>,…<keyn>], where each key is some random string. A user who triggers the profiling via the REST API has to set one of these keys in the headers of the REST API call to include profiling results of this node for this particular project. Both "enabled" flag and "hmac_keys" config options should be set to enable profiling. Also, to generate correct profiling information across all services at least one key needs to be consistent between OpenStack projects. This ensures it can be used from client side to generate the trace, containing information from all possible resources. |
| string value |
Redissentinel uses a service name to identify a master redis service. This parameter defines the name (for example: |
| floating point value | Redissentinel provides a timeout option on the connections. This parameter defines that timeout (for example: socket_timeout=0.1). |
| boolean value | Enable SQL requests profiling in services. Default value is False (SQL requests won’t be traced). Possible values:
|
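A hedged sketch enabling OSProfiler tracing; the option names are the upstream osprofiler names, and the HMAC key and connection string are placeholders.

[profiler]
enabled = true
trace_sqlalchemy = false
hmac_keys = SECRET_KEY
connection_string = messaging://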
9.1.43. quota
The following table outlines the options available under the [quota]
group in the /etc/nova/nova.conf
file.
Configuration option = Default value | Type | Description |
---|---|---|
| integer value | The number of instance cores or vCPUs allowed per project. Possible values:
|
| boolean value | Enable the counting of quota usage from the placement service. Starting in Train, it is possible to count quota usage for cores and ram from the placement service and instances from the API database instead of counting from cell databases. This works well if there is only one Nova deployment running per placement deployment. However, if an operator is running more than one Nova deployment sharing a placement deployment, they should not set this option to True because currently the placement service has no way to partition resource providers per Nova deployment. When this option is left as the default or set to False, Nova will use the legacy counting method to count quota usage for instances, cores, and ram from its cell databases. Note that quota usage behavior related to resizes will be affected if this option is set to True. Placement resource allocations are claimed on the destination while holding allocations on the source during a resize, until the resize is confirmed or reverted. During this time, when the server is in VERIFY_RESIZE state, quota usage will reflect resource consumption on both the source and the destination. This can be beneficial as it reserves space for a revert of a downsize, but it also means quota usage will be inflated until a resize is confirmed or reverted. Behavior will also be different for unscheduled servers in ERROR state. A server in ERROR state that has never been scheduled to a compute host will not have placement allocations, so it will not consume quota usage for cores and ram. Behavior will be different for servers in SHELVED_OFFLOADED state. A server in SHELVED_OFFLOADED state will not have placement allocations, so it will not consume quota usage for cores and ram. Note that because of this, it will be possible for a request to unshelve a server to be rejected if the user does not have enough quota available to support the cores and ram needed by the server to be unshelved.
The |
| string value | Provides abstraction for quota checks. Users can configure a specific driver to use for quota checks. |
| integer value | The number of bytes allowed per injected file. Possible values:
|
| integer value | The maximum allowed injected file path length. Possible values:
|
| integer value | The number of injected files allowed.
File injection allows users to customize the personality of an instance by injecting data into it upon boot. Only text file injection is permitted: binary or ZIP files are not accepted. During file injection, any existing files that match specified files are renamed to include Possible values:
|
| integer value | The number of instances allowed per project. Possible Values
|
| integer value | The maximum number of key pairs allowed per user. Users can create at least one key pair for each project and use the key pair for multiple instances that belong to that project. Possible values:
|
| integer value | The number of metadata items allowed per instance. Users can associate metadata with an instance during instance creation. This metadata takes the form of key-value pairs. Possible values:
|
| integer value | The number of megabytes of instance RAM allowed per project. Possible values:
|
| boolean value | Recheck quota after resource creation to prevent allowing quota to be exceeded. This defaults to True (recheck quota after resource creation) but can be set to False to avoid additional load if allowing quota to be exceeded because of racing requests is considered acceptable. For example, when set to False, if a user makes highly parallel REST API requests to create servers, it will be possible for them to create more servers than their allowed quota during the race. If their quota is 10 servers, they might be able to create 50 during the burst. After the burst, they will not be able to create any more servers but they will be able to keep their 50 servers until they delete them. The initial quota check is done before resources are created, so if multiple parallel requests arrive at the same time, all could pass the quota check and create resources, potentially exceeding quota. When recheck_quota is True, quota will be checked a second time after resources have been created and if the resource is over quota, it will be deleted and OverQuota will be raised, usually resulting in a 403 response to the REST API user. This makes it impossible for a user to exceed their quota with the caveat that it will, however, be possible for a REST API user to be rejected with a 403 response in the event of a collision close to reaching their quota limit, even if the user has enough quota available when they made the request. |
| integer value | The maximum number of servers per server group. Possible values:
|
| integer value | The maximum number of server groups per project. Server groups are used to control the affinity and anti-affinity scheduling policy for a group of servers or instances. Reducing the quota will not affect any existing group, but new servers will not be allowed into groups that have become over quota. Possible values:
|
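For illustration, a hedged [quota] section setting a few of the limits described above; the option names follow the upstream nova names for this group and the values are arbitrary examples, not recommendations.

[quota]
instances = 10
cores = 20
ram = 51200
metadata_items = 128
key_pairs = 100
server_groups = 10
server_group_members = 10
count_usage_from_placement = false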
9.1.44. rdp
The following table outlines the options available under the [rdp]
group in the /etc/nova/nova.conf
file.
Configuration option = Default value | Type | Description |
---|---|---|
| boolean value | Enable Remote Desktop Protocol (RDP) related features. Hyper-V, unlike the majority of the hypervisors employed on Nova compute nodes, uses RDP instead of VNC and SPICE as a desktop sharing protocol to provide instance console access. This option enables RDP for graphical console access for virtual machines created by Hyper-V. Note: RDP should only be enabled on compute nodes that support the Hyper-V virtualization platform. Related options:
|
| uri value | The URL an end user would use to connect to the RDP HTML5 console proxy. The console proxy service is called with this token-embedded URL and establishes the connection to the proper instance. An RDP HTML5 console proxy service will need to be configured to listen on the address configured here. Typically the console proxy service would be run on a controller node. The localhost address used as default would only work in a single node environment i.e. devstack. An RDP HTML5 proxy allows a user to access via the web the text or graphical console of any Windows server or workstation using RDP. RDP HTML5 console proxy services include FreeRDP, wsgate. See https://github.com/FreeRDP/FreeRDP-WebConnect Possible values:
Related options:
|
9.1.45. remote_debug
The following table outlines the options available under the [remote_debug]
group in the /etc/nova/nova.conf
file.
Configuration option = Default value | Type | Description |
---|---|---|
| host address value | Debug host (IP or name) to connect to. This command line parameter is used when you want to connect to a nova service via a debugger running on a different host. Note that using the remote debug option changes how nova uses the eventlet library to support async IO. This could result in failures that do not occur under normal operation. Use at your own risk. Possible Values:
|
| port value | Debug port to connect to. This command line parameter allows you to specify the port you want to use to connect to a nova service via a debugger running on different host. Note that using the remote debug option changes how nova uses the eventlet library to support async IO. This could result in failures that do not occur under normal operation. Use at your own risk. Possible Values:
|
9.1.46. scheduler
The following table outlines the options available under the [scheduler]
group in the /etc/nova/nova.conf
file.
Configuration option = Default value | Type | Description |
---|---|---|
| integer value | Periodic task interval. This value controls how often (in seconds) the scheduler should attempt to discover new hosts that have been added to cells. If negative (the default), no automatic discovery will occur. Deployments where compute nodes come and go frequently may want this enabled, whereas others may prefer to manually discover hosts when one is added to avoid any overhead from constantly checking. If enabled, every run will select any unmapped hosts out of each cell database. Possible values:
|
| boolean value | Restrict use of aggregates to instances with matching metadata.
This setting allows the scheduler to restrict hosts in aggregates based on matching required traits in the aggregate metadata and the instance flavor/image. If an aggregate is configured with a property with key Possible values:
|
| boolean value | Use placement to filter hosts based on image metadata. This setting causes the scheduler to transform well known image metadata properties into placement required traits to filter host based on image metadata. This feature requires host support and is currently supported by the following compute drivers:
Possible values:
Related options:
|
| boolean value | Restrict tenants to specific placement aggregates.
This setting causes the scheduler to look up a host aggregate with the metadata key of The matching aggregate UUID must be mirrored in placement for proper operation. If no host aggregate with the tenant id is found, or that aggregate does not match one in placement, the result will be the same as not finding any suitable hosts for the request. Possible values:
Related options:
|
| integer value | The maximum number of schedule attempts.
This is the maximum number of attempts that will be made for a given instance build/move operation. It limits the number of alternate hosts returned by the scheduler. When that list of hosts is exhausted, a Possible values:
|
| integer value | The maximum number of placement results to request. This setting determines the maximum limit on results received from the placement service during a scheduling operation. It effectively limits the number of hosts that may be considered for scheduling requests that match a large number of candidates. A value of 1 (the minimum) will effectively defer scheduling to the placement service strictly on "will it fit" grounds. A higher value will put an upper cap on the number of results the scheduler will consider during the filtering and weighing process. Large deployments may need to set this lower than the total number of hosts available to limit memory consumption, network traffic, etc. of the scheduler. Possible values:
|
| boolean value | Require a placement aggregate association for all tenants. This setting, when limit_tenants_to_placement_aggregate=True, will control whether or not a tenant with no aggregate affinity will be allowed to schedule to any available node. If aggregates are used to limit some tenants but not all, then this should be False. If all tenants should be confined via aggregate, then this should be True to prevent them from receiving unrestricted scheduling to any available node. Possible values:
Related options:
|
| boolean value | Use placement to determine availability zones.
This setting causes the scheduler to look up a host aggregate with the metadata key of
The matching aggregate UUID must be mirrored in placement for proper operation. If no host aggregate with the Note that if you enable this flag, you can disable the (less efficient) AvailabilityZoneFilter in the scheduler. Possible values:
Related options:
|
| boolean value | Use placement to determine host support for the instance’s image type.
This setting causes the scheduler to ask placement only for compute hosts that support the Possible values:
|
| boolean value | Enable the scheduler to filter compute hosts affined to routed network segment aggregates. See https://docs.openstack.org/neutron/latest/admin/config-routed-networks.html for details. |
| integer value | Number of workers for the nova-scheduler service. Defaults to the number of CPUs available. Possible values:
|
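A hedged sketch of a [scheduler] section touching the options above; the names follow the upstream nova names for this group and the values are illustrative.

[scheduler]
workers = 4
max_attempts = 3
discover_hosts_in_cells_interval = 300
limit_tenants_to_placement_aggregate = false
query_placement_for_availability_zone = false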
9.1.47. serial_console
The following table outlines the options available under the [serial_console]
group in the /etc/nova/nova.conf
file.
Configuration option = Default value | Type | Description |
---|---|---|
| uri value |
The URL an end user would use to connect to the
The Related options:
|
| boolean value | Enable the serial console feature.
In order to use this feature, the service |
| string value | A range of TCP ports a guest can use for its backend. Each instance which gets created will use one port out of this range. If the range is not big enough to provide another port for a new instance, that instance won’t get launched. Possible values:
|
| string value |
The IP address to which proxy clients (like
This is typically the IP address of the host of a |
| string value |
The IP address which is used by the
The Related options:
|
| port value |
The port number which is used by the
The Related options:
|
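A hedged sketch of a [serial_console] section for a deployment running the serial console proxy on a controller; the option names follow the upstream nova names for this group and the addresses are placeholders.

[serial_console]
enabled = true
base_url = ws://controller.example.com:6083/
port_range = 10000:20000
proxyclient_address = 192.0.2.20
serialproxy_host = 0.0.0.0
serialproxy_port = 6083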
9.1.48. service_user
The following table outlines the options available under the [service_user]
group in the /etc/nova/nova.conf
file.
Configuration option = Default value | Type | Description |
---|---|---|
| string value | Authentication URL |
| string value | Config Section from which to load plugin specific options |
| string value | Authentication type to load |
| string value | PEM encoded Certificate Authority to use when verifying HTTPs connections. |
| string value | PEM encoded client certificate cert file |
| boolean value | Collect per-API call timing information. |
| string value | Optional domain ID to use with v3 and v2 parameters. It will be used for both the user and project domain in v3 and ignored in v2 authentication. |
| string value | Optional domain name to use with v3 API and v2 parameters. It will be used for both the user and project domain in v3 and ignored in v2 authentication. |
| string value | Domain ID to scope to |
| string value | Domain name to scope to |
| boolean value | Verify HTTPS connections. |
| string value | PEM encoded client certificate key file |
| string value | User’s password |
| string value | Domain ID containing project |
| string value | Domain name containing project |
| string value | Project ID to scope to |
| string value | Project name to scope to |
| boolean value | When True, if sending a user token to a REST API, also send a service token. Nova often reuses the user token provided to the nova-api to talk to other REST APIs, such as Cinder, Glance and Neutron. It is possible that while the user token was valid when the request was made to Nova, the token may expire before it reaches the other service. To avoid any failures, and to make it clear it is Nova calling the service on the user’s behalf, we include a service token along with the user token. Should the user’s token have expired, a valid service token ensures the REST API request will still be accepted by the keystone middleware. |
| boolean value | Log requests to multiple loggers. |
| string value | Scope for system operations |
| string value | Tenant ID |
| string value | Tenant Name |
| integer value | Timeout value for http requests |
| string value | Trust ID |
| string value | User’s domain id |
| string value | User’s domain name |
| string value | User ID |
| string value | Username |
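For illustration only, a typical [service_user] stanza enabling service tokens might look as follows; the option names follow the upstream keystoneauth defaults and the credentials shown are placeholders:

[service_user]
# Send a service token alongside the user token on calls to other services.
send_service_user_token = true
auth_type = password
auth_url = https://keystone.example.com/v3
username = nova
password = example-service-password
user_domain_name = Default
project_name = service
project_domain_name = Default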
9.1.49. spice
The following table outlines the options available under the [spice]
group in the /etc/nova/nova.conf
file.
Configuration option = Default value | Type | Description |
---|---|---|
| boolean value | Enable the SPICE guest agent support on the instances. The Spice agent works with the Spice protocol to offer a better guest console experience. However, the Spice console can still be used without the Spice Agent. With the Spice agent installed the following features are enabled:
|
| boolean value | Enable SPICE related features. Related options:
|
| uri value | Location of the SPICE HTML5 console proxy.
End user would use this URL to connect to the
In order to use SPICE console, the service Possible values:
Related options:
|
| host address value |
IP address or a hostname on which the Related options:
|
| port value |
Port on which the Related options:
|
| string value | The address where the SPICE server running on the instances should listen.
Typically, the Possible values:
|
| string value |
The address used by
Typically, the Possible values:
Related options:
|
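For illustration only, a minimal [spice] stanza might look as follows; the option names follow the upstream nova defaults and the addresses shown are placeholders:

[spice]
# Enable SPICE consoles; the nova-spicehtml5proxy service must also be running.
enabled = true
# Enable the SPICE guest agent for a better console experience.
agent_enabled = true
# Public URL end users use to reach the HTML5 console proxy.
html5proxy_base_url = http://203.0.113.10:6082/spice_auto.html
# Address and port the nova-spicehtml5proxy service listens on.
html5proxy_host = 0.0.0.0
html5proxy_port = 6082
# Address the instance SPICE servers listen on, and the address the proxy
# uses to reach them on this compute host.
server_listen = 127.0.0.1
server_proxyclient_address = 192.0.2.15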
9.1.50. upgrade_levels
The following table outlines the options available under the [upgrade_levels]
group in the /etc/nova/nova.conf
file.
Configuration option = Default value | Type | Description |
---|---|---|
| string value | Base API RPC API version cap. Possible values:
|
| string value | Cert RPC API version cap. Possible values:
Deprecated since: 18.0.0 Reason: The nova-cert service was removed in 16.0.0 (Pike) so this option is no longer used. |
| string value | Compute RPC API version cap. By default, we always send messages using the most recent version the client knows about. Where you have old and new compute services running, you should set this to the lowest deployed version. This is to guarantee that all services never send messages that one of the compute nodes can’t understand. Note that we only support upgrading from release N to release N+1. Set this option to "auto" if you want to let the compute RPC module automatically determine what version to use based on the service versions in the deployment. Possible values:
|
| string value | Conductor RPC API version cap. Possible values:
|
| string value | Scheduler RPC API version cap. Possible values:
|
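For illustration only, the compute RPC cap could be pinned during a rolling upgrade as follows; the option name follows the upstream nova default and "auto" lets nova derive the version from the deployed service records:

[upgrade_levels]
# Pin compute RPC to the lowest deployed version, or use "auto" to let nova
# determine the version from the service records in the deployment.
compute = auto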
9.1.51. vault
The following table outlines the options available under the [vault]
group in the /etc/nova/nova.conf
file.
Configuration option = Default value | Type | Description |
---|---|---|
| string value | AppRole role_id for authentication with vault |
| string value | AppRole secret_id for authentication with vault |
| string value | Mountpoint of KV store in Vault to use, for example: secret |
| integer value | Version of KV store in Vault to use, for example: 2 |
| string value | root token for vault |
| string value | Absolute path to ca cert file |
| boolean value | SSL Enabled/Disabled |
| string value | Use this endpoint to connect to Vault, for example: "http://127.0.0.1:8200" |
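For illustration only, a [vault] stanza might look as follows; the option names are those commonly used by the castellan Vault key manager backend (treat them as an assumption for your version) and all values are placeholders:

[vault]
# Endpoint used to reach Vault.
vault_url = http://127.0.0.1:8200
use_ssl = false
# Authenticate with a root token, or with AppRole credentials instead.
root_token_id = example-root-token
# approle_role_id = example-role-id
# approle_secret_id = example-secret-id
# KV store mountpoint and version.
kv_mountpoint = secret
kv_version = 2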
9.1.52. vendordata_dynamic_auth
The following table outlines the options available under the [vendordata_dynamic_auth]
group in the /etc/nova/nova.conf
file.
Configuration option = Default value | Type | Description |
---|---|---|
| string value | Authentication URL |
| string value | Config Section from which to load plugin specific options |
| string value | Authentication type to load |
| string value | PEM encoded Certificate Authority to use when verifying HTTPS connections. |
| string value | PEM encoded client certificate cert file |
| boolean value | Collect per-API call timing information. |
| string value | Optional domain ID to use with v3 and v2 parameters. It will be used for both the user and project domain in v3 and ignored in v2 authentication. |
| string value | Optional domain name to use with v3 API and v2 parameters. It will be used for both the user and project domain in v3 and ignored in v2 authentication. |
| string value | Domain ID to scope to |
| string value | Domain name to scope to |
| boolean value | Verify HTTPS connections. |
| string value | PEM encoded client certificate key file |
| string value | User’s password |
| string value | Domain ID containing project |
| string value | Domain name containing project |
| string value | Project ID to scope to |
| string value | Project name to scope to |
| boolean value | Log requests to multiple loggers. |
| string value | Scope for system operations |
| string value | Tenant ID |
| string value | Tenant Name |
| integer value | Timeout value for http requests |
| string value | Trust ID |
| string value | User’s domain id |
| string value | User’s domain name |
| string value | User ID |
| string value | Username |
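The [vendordata_dynamic_auth] group takes the same keystoneauth options as [service_user]. For illustration only, with placeholder credentials:

[vendordata_dynamic_auth]
auth_type = password
auth_url = https://keystone.example.com/v3
username = nova
password = example-password
user_domain_name = Default
project_name = service
project_domain_name = Default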
9.1.53. vmware
The following table outlines the options available under the [vmware]
group in the /etc/nova/nova.conf
file.
Configuration option = Default value | Type | Description |
---|---|---|
| integer value | Number of times the VMware vCenter server API is retried on connection failures, e.g. socket errors. |
| string value | Specifies the CA bundle file to be used in verifying the vCenter server certificate. |
| string value | This option adds a prefix to the folder where cached images are stored. This is not the full path, just a folder prefix. This should only be used when a datastore cache is shared between compute nodes. Note: This should only be used when the compute nodes are running on the same host or have a shared file system. Possible values:
|
| string value | Name of a VMware Cluster ComputeResource. |
| integer value | This option sets the HTTP connection pool size. The connection pool size is the maximum number of connections from nova to vSphere. It should only be increased if there are warnings indicating that the connection pool is full; otherwise, the default should suffice. |
| integer value | Set this value if affected by an increased network latency causing repeated characters when typing in a remote console. |
| string value | Regular expression pattern to match the name of a datastore. The datastore_regex setting specifies the datastores to use with Compute. For example, datastore_regex="nas.*" selects all the datastores that have a name starting with "nas". Note If no regex is given, it just picks the datastore with the most free space. Possible values:
|
| host address value | Hostname or IP address for connection to VMware vCenter host. |
| string value | Password for connection to VMware vCenter host. |
| port value | Port for connection to VMware vCenter host. |
| string value | Username for connection to VMware vCenter host. |
| boolean value | If true, the vCenter server certificate is not verified. If false, then the default CA truststore is used for verification. Related options: * ca_file: This option is ignored if "ca_file" is set. |
| string value | This option should be configured only when using the NSX-MH Neutron plugin. This is the name of the integration bridge on the ESXi server or host. This should not be set for any other Neutron plugin. Hence the default value is not set. Possible values:
|
| integer value | This option specifies the limit on the maximum number of objects to return in a single result. A positive value will cause the operation to suspend the retrieval when the count of objects reaches the specified limit. The server may still limit the count to something less than the configured value. Any remaining objects may be retrieved with additional requests. |
| string value | This option specifies the default policy to be used. If pbm_enabled is set and there is no defined storage policy for the specific request, then this policy will be used. Possible values:
Related options:
|
| boolean value | This option enables or disables storage policy based placement of instances. Related options:
|
| string value | This option specifies the PBM service WSDL file location URL. Setting this will disable storage policy based placement of instances. Possible values:
|
| string value | Specifies the directory where the Virtual Serial Port Concentrator is storing console log files. It should match the serial_log_dir config value of VSPC. |
| uri value | Identifies a proxy service that provides network access to the serial_port_service_uri. Possible values:
Related options: This option is ignored if serial_port_service_uri is not specified. * serial_port_service_uri |
| string value | Identifies the remote system where the serial port traffic will be sent. This option adds a virtual serial port which sends console output to a configurable service URI. At the service URI address there will be virtual serial port concentrator that will collect console logs. If this is not set, no serial ports will be added to the created VMs. Possible values:
|
| floating point value | Time interval in seconds to poll remote tasks invoked on VMware VC server. |
| boolean value | This option enables/disables the use of linked clone. The ESX hypervisor requires a copy of the VMDK file in order to boot up a virtual machine. The compute driver must download the VMDK via HTTP from the OpenStack Image service to a datastore that is visible to the hypervisor and cache it. Subsequent virtual machines that need the VMDK use the cached version and don’t have to copy the file again from the OpenStack Image service. If set to false, even with a cached VMDK, there is still a copy operation from the cache location to the hypervisor file directory in the shared datastore. If set to true, the above copy operation is avoided as it creates a copy of the virtual machine that shares virtual disks with its parent VM. |
| string value | Keymap for VNC. The keyboard mapping (keymap) determines which keyboard layout a VNC session should use by default. Possible values:
|
| port value | This option specifies the VNC starting port. Every VM created on an ESX host can enable a VNC client for remote connections. This option sets the default starting port for those VNC clients. Possible values:
Related options: The following options should be set to enable the VNC client. * vnc.enabled = True * vnc_port_total |
| integer value | Total number of VNC ports. |
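For illustration only, a minimal [vmware] stanza might look as follows; the option names follow the upstream nova defaults and the endpoint, credentials and paths are placeholders:

[vmware]
# vCenter endpoint and credentials.
host_ip = vcenter.example.com
host_port = 443
host_username = administrator@vsphere.local
host_password = example-password
# Verify the vCenter certificate against this CA bundle.
ca_file = /etc/nova/vcenter-ca.pem
insecure = false
# Cluster and datastores this compute service manages.
cluster_name = cluster1
datastore_regex = nas.*
# Reuse cached VMDKs via linked clones and retry transient API failures.
use_linked_clone = true
api_retry_count = 10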
9.1.54. vnc
The following table outlines the options available under the [vnc]
group in the /etc/nova/nova.conf
file.
Configuration option = Default value | Type | Description |
---|---|---|
| list value | The authentication schemes to use with the compute node. Control what RFB authentication schemes are permitted for connections between the proxy and the compute host. If multiple schemes are enabled, the first matching scheme will be used, thus the strongest schemes should be listed first. Related options:
|
| boolean value | Enable VNC related features. Guests will get created with graphical devices to support this. Clients (for example Horizon) can then establish a VNC connection to the guest. |
| uri value | Public address of noVNC VNC console proxy. The VNC proxy is an OpenStack component that enables compute service users to access their instances through VNC clients. noVNC provides VNC support through a websocket-based client. This option sets the public base URL to which client systems will connect. noVNC clients can use this address to connect to the noVNC instance and, by extension, the VNC sessions.
If using noVNC >= 1.0.0, you should use Related options:
|
| string value | IP address that the noVNC console proxy should bind to. The VNC proxy is an OpenStack component that enables compute service users to access their instances through VNC clients. noVNC provides VNC support through a websocket-based client. This option sets the private address to which the noVNC console proxy service should bind. Related options:
|
| port value | Port that the noVNC console proxy should bind to. The VNC proxy is an OpenStack component that enables compute service users to access their instances through VNC clients. noVNC provides VNC support through a websocket-based client. This option sets the private port to which the noVNC console proxy service should bind. Related options:
|
| host address value | The IP address or hostname on which an instance should listen to for incoming VNC connection requests on this node. |
| host address value | Private, internal IP address or hostname of VNC console proxy. The VNC proxy is an OpenStack component that enables compute service users to access their instances through VNC clients.
This option sets the private address to which proxy clients, such as |
| string value | The path to the CA certificate PEM file. The fully qualified path to a PEM file containing one or more x509 certificates for the certificate authorities used by the compute node VNC server. Related options:
|
| string value | The path to the client key file (for x509). The fully qualified path to a PEM file containing the private key which the VNC proxy server presents to the compute node during VNC authentication. Related options:
|
| string value | The path to the client certificate PEM file (for x509). The fully qualified path to a PEM file containing the x509 certificate which the VNC proxy server presents to the compute node during VNC authentication. Related options:
|
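For illustration only, a minimal [vnc] stanza might look as follows; the option names follow the upstream nova defaults and the addresses and paths are placeholders:

[vnc]
enabled = true
# Address the instance VNC servers listen on, and the address the noVNC
# proxy uses to reach them on this compute host.
server_listen = 127.0.0.1
server_proxyclient_address = 192.0.2.15
# Public URL end users use to reach the noVNC proxy (vnc_lite.html for noVNC >= 1.0.0).
novncproxy_base_url = http://203.0.113.10:6080/vnc_lite.html
# Address and port the nova-novncproxy service binds to.
novncproxy_host = 0.0.0.0
novncproxy_port = 6080
# Permit VeNCrypt between proxy and compute, falling back to no authentication.
auth_schemes = vencrypt,none
vencrypt_ca_certs = /etc/pki/nova-novncproxy/ca-cert.pem
vencrypt_client_cert = /etc/pki/nova-novncproxy/client-cert.pem
vencrypt_client_key = /etc/pki/nova-novncproxy/client-key.pem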
9.1.55. workarounds
The following table outlines the options available under the [workarounds]
group in the /etc/nova/nova.conf
file.
Configuration option = Default value | Type | Description |
---|---|---|
| boolean value | If this is set, the normal safety check for old compute services will be treated as a warning instead of an error. This is only to be enabled to facilitate a Fast-Forward upgrade where new control services are being started before compute nodes have been able to update their service record. In an FFU, the service records in the database will be more than one version old until the compute nodes start up, but control services need to be online first. |
| boolean value | Disable fallback request for VCPU allocations when using pinned instances.
Starting in Train, compute nodes using the libvirt virt driver can report PCPU inventory, which is used for pinned instances. Deprecated since: 20.0.0 Reason: None |
| boolean value | Disable the server group policy check upcall in compute. In order to detect races with server group affinity policy, the compute service attempts to validate that the policy was not violated by the scheduler. It does this by making an upcall to the API database to list the instances in the server group of the instance it is booting, which violates our api/cell isolation goals. Eventually this will be solved by proper affinity guarantees in the scheduler and placement service, but until then, this late check is needed to ensure proper affinity policy. Operators that desire api/cell isolation over this check should enable this flag, which will avoid making that upcall from compute. Related options:
|
| boolean value | Disable live snapshots when using the libvirt driver. Live snapshots allow the snapshot of the disk to happen without an interruption to the guest, using coordination with a guest agent to quiesce the filesystem. When using libvirt 1.2.2, live snapshots fail intermittently under load (likely related to concurrent libvirt/qemu operations). This config option provides a mechanism to disable live snapshots, in favor of cold snapshots, while this is resolved. Cold snapshots cause an instance outage while the guest is going through the snapshotting process. For more information, refer to the bug report: https://bugs.launchpad.net/nova/+bug/1334398 Possible values:
Deprecated since: 19.0.0 Reason: This option was added to work around issues with libvirt 1.2.2. We no longer support this version of libvirt, which means this workaround is no longer necessary. It will be removed in a future release. |
| boolean value | When attaching encrypted LUKSv1 Cinder volumes to instances, the libvirt driver configures the encrypted disks to be natively decrypted by QEMU. A performance issue has been discovered in the libgcrypt library used by QEMU that severely limits the I/O performance in this scenario. For more information, please refer to the following bug report: RFE: hardware accelerated AES-XTS mode https://bugzilla.redhat.com/show_bug.cgi?id=1762765 Enabling this workaround option will cause Nova to use the legacy dm-crypt based os-brick encryptor to decrypt the LUKSv1 volume.
Note that enabling this option while using volumes that do not provide a host block device such as Ceph will result in a failure to boot from or attach the volume to an instance. See the Related options:
Deprecated since: 23.0.0 Reason: The underlying performance regression within libgcrypt that prompted this workaround has been resolved as of 1.8.5 |
| boolean value | Use sudo instead of rootwrap. Allow fallback to sudo for performance reasons. For more information, refer to the bug report: https://bugs.launchpad.net/nova/+bug/1415106 Possible values:
Interdependencies to other options:
|
| boolean value | Enable live migration of instances with NUMA topologies. Live migration of instances with NUMA topologies when using the libvirt driver is only supported in deployments that have been fully upgraded to Train. In previous versions, or in mixed Stein/Train deployments with a rolling upgrade in progress, live migration of instances with NUMA topologies is disabled by default when using the libvirt driver. This includes live migration of instances with CPU pinning or hugepages. CPU pinning and huge page information for such instances is not currently re-calculated, as noted in bug #1289064. This means that if instances were already present on the destination host, the migrated instance could be placed on the same dedicated cores as these instances or use hugepages allocated for another instance. Alternately, if the host platforms were not homogeneous, the instance could be assigned to non-existent cores or be inadvertently split across host NUMA nodes. Despite these known issues, there may be cases where live migration is necessary. By enabling this option, operators that are aware of the issues and are willing to manually work around them can enable live migration support for these instances. Related options:
Deprecated since: 20.0.0 Reason: This option was added to mitigate known issues when live migrating instances with a NUMA topology with the libvirt driver. Those issues are resolved in Train. Clouds using the libvirt driver and fully upgraded to Train support NUMA-aware live migration. This option will be removed in a future release. |
| boolean value | If this is set to True, the libvirt driver will make a best-effort attempt to send the announce-self command to the QEMU monitor so that it generates RARP frames to update network switches in the post live migration phase on the destination. Please note that this causes the domain to be considered tainted by libvirt. Related options:
|
| boolean value | Ensure the instance directory is removed during clean up when using rbd.
When enabled this workaround will ensure that the instance directory is always removed during cleanup on hosts using RBD-backed ephemeral disks, avoiding the issues described in the following bug reports: https://bugs.launchpad.net/nova/+bug/1414895 https://bugs.launchpad.net/nova/+bug/1761062
Both of these bugs can result in errors being raised if the instance ever attempts to return to the host.
Related options:
|
| boolean value | Enable handling of events emitted from compute drivers. Many compute drivers emit lifecycle events, which are events that occur when, for example, an instance is starting or stopping. If the instance is going through task state changes due to an API operation, like resize, the events are ignored. This is an advanced feature which allows the hypervisor to signal to the compute service that an unexpected state change has occurred in an instance and that the instance can be shut down automatically. Unfortunately, this can race in some conditions, for example in reboot operations or when the compute service or the host is rebooted (planned or due to an outage). If such races are common, then it is advisable to disable this feature. Care should be taken when this feature is disabled and sync_power_state_interval is set to a negative value. In this case, any instances that get out of sync between the hypervisor and the Nova database will have to be synchronized manually. For more information, refer to the bug report: https://bugs.launchpad.net/bugs/1444630 Interdependencies to other options:
|
| boolean value | With some kernels, initializing the guest apic can result in a kernel hang that renders the guest unusable. This happens as a result of a kernel bug. In most cases the correct fix is to update the guest image kernel to one that is patched; however, in some cases this is not possible. This workaround allows the emulation of an apic to be disabled per host; however, it is not recommended for use outside of a CI or developer cloud. |
| boolean value | When booting from an image on a ceph-backed compute node, if the image does not already reside on the ceph cluster (as would be the case if glance is also using the same cluster), nova will download the image from glance and upload it to ceph itself. If using multiple ceph clusters, this may cause nova to unintentionally duplicate the image in a non-COW-able way in the local ceph deployment, wasting space. For more information, refer to the bug report: https://bugs.launchpad.net/nova/+bug/1858877 Enabling this option will cause nova to refuse to boot an instance if it would require downloading the image from glance and uploading it to ceph itself. Related options:
|
| boolean value | Attach RBD Cinder volumes to the compute as host block devices. When enabled this option instructs os-brick to connect RBD volumes locally on the compute host as block devices instead of natively through QEMU. This workaround does not currently support extending attached volumes. This can be used with the disable_native_luksv1 workaround configuration option to avoid the recently discovered performance issues found within the libgcrypt library. This workaround is temporary and will be removed during the W release once all impacted distributions have been able to update their versions of the libgcrypt library. Related options:
Deprecated since: 23.0.0 Reason: The underlying performance regression within libgcrypt that prompted this workaround has been resolved as of 1.8.5 |
| boolean value |
If it is set to True then the libvirt driver will reserve DISK_GB resources for the images stored in the image cache. If the instances path is on a different disk partition than the image cache directory, no resources are reserved for the cache.
Such disk reservation is done by a periodic task in the resource tracker that runs at the resource update interval. Related options:
|
| boolean value | This will skip the CPU comparison call at the startup of the Compute service and let libvirt handle it. |
| boolean value | When this is enabled, it will skip CPU comparison on the destination host. When using QEMU >= 2.9 and libvirt >= 4.4.0, libvirt will do the correct thing with respect to checking CPU compatibility on the destination host during live migration. |
| boolean value | When this is enabled, it will skip version-checking of hypervisors during live migration. |
| list value | The libvirt virt driver implements power on and hard reboot by tearing down every vif of the instance being rebooted and then plugging them again. By default nova does not wait for the network-vif-plugged event from neutron before it lets the instance run. This can cause the instance to request the IP via DHCP before the neutron backend has a chance to set up the networking backend after the vif plug. This flag defines which vifs nova expects network-vif-plugged events from during hard reboot. The possible values are neutron port vnic types:
Adding a
Please note that not all neutron networking backends send plug time events, for certain
The ml2/ovs and the networking-odl backends are known to send plug time events for ports with
The neutron in-tree SRIOV backend does not reliably send network-vif-plugged event during plug time for ports with Related options:
|
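For illustration only, a [workarounds] stanza touching some of the options above might look as follows; the option names follow the upstream nova defaults and the values shown are examples, not recommendations:

[workarounds]
# Downgrade the old-compute safety check to a warning during a fast-forward upgrade.
disable_compute_service_check_for_ffu = true
# Fall back to cold snapshots instead of live snapshots with the libvirt driver.
disable_libvirt_livesnapshot = false
# Skip the server group policy check upcall from compute to the API database.
disable_group_policy_check_upcall = false
# Refuse to boot if the image would have to be downloaded from glance and
# re-uploaded to the local ceph cluster.
never_download_image_if_on_rbd = false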
9.1.56. wsgi
The following table outlines the options available under the [wsgi]
group in the /etc/nova/nova.conf
file.
Configuration option = Default value | Type | Description |
---|---|---|
| string value | This option represents a file name for the paste.deploy config for nova-api. Possible values:
|
| integer value | This option specifies the timeout for client connections' socket operations. If an incoming connection is idle for this number of seconds it will be closed. It indicates the timeout on individual reads/writes on the socket connection. To wait forever, set it to 0. |
| integer value | This option specifies the size of the pool of greenthreads used by wsgi. It is possible to limit the number of concurrent connections using this option. |
| boolean value | This option allows using the same TCP connection to send and receive multiple HTTP requests/responses, as opposed to opening a new one for every single request/response pair. HTTP keep-alive indicates HTTP connection reuse. Possible values:
Related options:
|
| integer value | This option specifies the maximum line size of message headers to be accepted. max_header_line may need to be increased when using large tokens (typically those generated by the Keystone v3 API with big service catalogs). Since TCP is a stream based protocol, in order to reuse a connection, HTTP has to have a way to indicate the end of the previous response and the beginning of the next. Hence, in a keep_alive case, all messages must have a self-defined message length. |
| string value | This option specifies the HTTP header used to determine the protocol scheme for the original request, even if it was removed by an SSL-terminating proxy. Possible values:
Warning Do not set this unless you know what you are doing. Make sure ALL of the following are true before setting this (assuming the values from the example above):
If any of those are not true, you should keep this setting set to None. |
| string value | This option allows setting path to the CA certificate file that should be used to verify connecting clients. Possible values:
Related options:
|
| string value | This option allows setting path to the SSL certificate of API server. Possible values:
Related options:
|
| string value | This option specifies the path to the file where SSL private key of API server is stored when SSL is in effect. Possible values:
Related options:
|
| integer value | This option sets the value of TCP_KEEPIDLE in seconds for each server socket. It specifies the duration of time to keep connection active. TCP generates a KEEPALIVE transmission for an application that requests to keep connection active. Not supported on OS X. Related options:
|
| string value | It represents a python format string that is used as the template to generate log lines. The following values can be formatted into it: client_ip, date_time, request_line, status_code, body_length, wall_seconds. This option is used for building custom request loglines when running nova-api under eventlet. If used under uwsgi or apache, this option has no effect. Possible values:
Deprecated since: 16.0.0 Reason: This option only works when running nova-api under eventlet, and encodes very eventlet specific pieces of information. Starting in Pike the preferred model for running nova-api is under uwsgi or apache mod_wsgi. |
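For illustration only, a [wsgi] stanza might look as follows; the option names follow the upstream nova defaults, the paths are placeholders, and secure_proxy_ssl_header should only be set behind a trusted TLS-terminating proxy (HTTP_X_FORWARDED_PROTO is the value commonly used for that purpose):

[wsgi]
api_paste_config = api-paste.ini
# Close idle client sockets after 15 minutes; 0 waits forever.
client_socket_timeout = 900
# Greenthread pool size, i.e. the concurrent connection limit.
default_pool_size = 1000
keep_alive = true
max_header_line = 16384
# Only set this behind a proxy that always strips and re-sets the header.
secure_proxy_ssl_header = HTTP_X_FORWARDED_PROTO
# Native SSL termination by the API server, if not terminated by a proxy.
ssl_cert_file = /etc/nova/ssl/nova-api.crt
ssl_key_file = /etc/nova/ssl/nova-api.key
tcp_keepidle = 600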
9.1.57. zvm
The following table outlines the options available under the [zvm]
group in the /etc/nova/nova.conf
file.
Configuration option = Default value | Type | Description |
---|---|---|
| string value | CA certificate file used to verify the httpd server when TLS is enabled. A string; it must be a path to a CA bundle to use. |
| uri value | URL to be used to communicate with z/VM Cloud Connector. |
| string value | The path at which images will be stored (snapshot, deploy, etc). Images used for deploy and images captured via snapshot need to be stored on the local disk of the compute host. This configuration identifies the directory location. Possible values: A file system path on the host running the compute service. |
| integer value | Timeout (seconds) to wait for an instance to start. The z/VM driver relies on communication between the instance and the cloud connector. After an instance is created, it must have enough time for all the network info to be written into the user directory. The driver will keep rechecking the network status of the instance until the timeout expires. If configuring the network fails, it will notify the user that starting the instance failed and put the instance in the ERROR state. The underlying z/VM guest will then be deleted. Possible Values: Any positive integer. Recommended to be at least 300 seconds (5 minutes), but it will vary depending on instance and system load. A value of 0 is used for debugging. In this case the underlying z/VM guest will not be deleted when the instance is marked in the ERROR state. |
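For illustration only, a [zvm] stanza might look as follows; the option names are those used by the upstream z/VM driver (treat them as an assumption for your version) and all values are placeholders:

[zvm]
# z/VM Cloud Connector endpoint and the CA bundle used to verify it over TLS.
cloud_connector_url = https://zvm-connector.example.com:8080
ca_file = /etc/nova/zvm-connector-ca.pem
# Local directory where deploy and snapshot images are staged.
image_tmp_path = /var/lib/nova/images
# Seconds to wait for a new instance's networking to come up; 0 keeps a
# failed guest around for debugging instead of deleting it.
reachable_timeout = 300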