docs: Remove references to XenAPI driver

Not as many of these as I thought there would be. Also, yes, the change
to 'nova.conf.compute' is a doc change :)

Change-Id: I27626984ce94544bd81d998c5fdf141875faec92
Signed-off-by: Stephen Finucane <stephenfin@redhat.com>

parent 31b2fd114c
commit 58f7582c63
@@ -10,7 +10,7 @@ OpenStack Nova
 
 OpenStack Nova provides a cloud computing fabric controller, supporting a wide
 variety of compute technologies, including: libvirt (KVM, Xen, LXC and more),
-Hyper-V, VMware, XenServer, OpenStack Ironic and PowerVM.
+Hyper-V, VMware, OpenStack Ironic and PowerVM.
 
 Use the following resources to learn more.
 
(File diff suppressed because it is too large)

(Deleted image: 46 KiB)
@@ -53,12 +53,6 @@ the ``/etc/shadow`` file inside the virtual machine instance.
 `CentOS cloud images <http://cloud.centos.org/centos/>`_ which, by default,
 do not allow :command:`ssh` access to the instance with a password.
 
-.. rubric:: Password injection and XenAPI (XenServer/XCP)
-
-When using the XenAPI hypervisor back end, Compute uses the XenAPI agent to
-inject passwords into guests. The virtual machine image must be configured with
-the agent for password injection to work.
-
 .. rubric:: Password injection and Windows images (all hypervisors)
 
 For Windows virtual machines, configure the Windows image to retrieve the admin
@@ -43,7 +43,7 @@ Compute controls hypervisors through an API server. Selecting the best
 hypervisor to use can be difficult, and you must take budget, resource
 constraints, supported features, and required technical specifications into
 account. However, the majority of OpenStack development is done on systems
-using KVM and Xen-based hypervisors. For a detailed list of features and
+using KVM-based hypervisors. For a detailed list of features and
 support across different hypervisors, see :doc:`/user/support-matrix`.
 
 You can also orchestrate clouds using multiple hypervisors in different
@@ -72,8 +72,6 @@ availability zones. Compute supports the following hypervisors:
 
 - `Xen (using libvirt) <https://www.xenproject.org>`__
 
-- `XenServer <https://xenserver.org>`__
-
 - `zVM <https://www.ibm.com/it-infrastructure/z/zvm>`__
 
 For more information about hypervisors, see
@@ -39,11 +39,11 @@ compute host and image.
 
 .. rubric:: Compute host requirements
 
-The following virt drivers support the config drive: libvirt, XenServer,
+The following virt drivers support the config drive: libvirt,
 Hyper-V, VMware, and (since 17.0.0 Queens) PowerVM. The Bare Metal service also
 supports the config drive.
 
-- To use config drives with libvirt, XenServer, or VMware, you must first
+- To use config drives with libvirt or VMware, you must first
   install the :command:`genisoimage` package on each compute host. Use the
   :oslo.config:option:`mkisofs_cmd` config option to set the path where you
   install the :command:`genisoimage` program. If :command:`genisoimage` is in
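Spelled out, the ``mkisofs_cmd`` override described above is a one-line
``nova.conf`` setting; a minimal sketch, assuming :command:`genisoimage` is
installed at ``/usr/bin/genisoimage`` (the path is an assumption, check with
``which genisoimage``):

.. code-block:: ini

   [DEFAULT]
   # Assumed path; point this at wherever genisoimage actually lives.
   mkisofs_cmd = /usr/bin/genisoimage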
@@ -106,10 +106,4 @@ following to :file:`nova.conf`:
    when booting instances. For more information, refer to the :ref:`user guide
    <metadata-config-drive>`.
 
-.. note::
-
-   If using Xen with a config drive, you must use the
-   :oslo.config:option:`xenserver.disable_agent` config option to disable the
-   agent.
-
 .. _cloud-init: https://cloudinit.readthedocs.io/en/latest/
@@ -135,8 +135,7 @@ system or find a system with this support.
    and enable the VT option.
 
 If KVM acceleration is not supported, configure Compute to use a different
-hypervisor, such as ``QEMU`` or ``Xen``. See :ref:`compute_qemu` or
-:ref:`compute_xen_api` for details.
+hypervisor, such as :ref:`QEMU <compute_qemu>`.
 
 These procedures help you load the kernel modules for Intel-based and AMD-based
 processors if they do not load automatically during KVM installation.
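The module-loading step referenced above usually amounts to loading the generic
``kvm`` module plus the vendor-specific one; a hedged sketch (module names are
the standard Linux KVM modules, not taken from this change):

.. code-block:: console

   # modprobe kvm
   # modprobe kvm-intel    # Intel hosts; use kvm-amd on AMD hosts
   # lsmod | grep kvm      # verify the modules loaded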
@@ -1,475 +0,0 @@
-.. _compute_xen_api:
-
-=============================================
-XenServer (and other XAPI based Xen variants)
-=============================================
-
-.. deprecated:: 20.0.0
-
-   The xenapi driver is deprecated and may be removed in a future release.
-   The driver is not tested by the OpenStack project nor does it have clear
-   maintainer(s) and thus its quality can not be ensured. If you are using
-   the driver in production please let us know in freenode IRC and/or the
-   openstack-discuss mailing list.
-
-.. todo::
-
-   os-xenapi version is 0.3.1 currently.
-   This document should be modified according to the new version.
-   This todo has been reported as `bug 1718606`_.
-
-   .. _bug 1718606: https://bugs.launchpad.net/nova/+bug/1718606
-
-This section describes XAPI managed hypervisors, and how to use them with
-OpenStack.
-
-Terminology
-~~~~~~~~~~~
-
-Xen
----
-
-A hypervisor that provides the fundamental isolation between virtual machines.
-Xen is open source (GPLv2) and is managed by `XenProject.org
-<http://www.xenproject.org>`_, a cross-industry organization and a Linux
-Foundation Collaborative project.
-
-Xen is a component of many different products and projects. The hypervisor
-itself is very similar across all these projects, but the way that it is
-managed can be different, which can cause confusion if you're not clear which
-toolstack you are using. Make sure you know what `toolstack
-<http://wiki.xen.org/wiki/Choice_of_Toolstacks>`_ you want before you get
-started. If you want to use Xen with libvirt in OpenStack Compute refer to
-:doc:`hypervisor-xen-libvirt`.
-
-XAPI
-----
-
-XAPI is one of the toolstacks that can control a Xen based hypervisor.
-XAPI's role is similar to libvirt's in the KVM world. The API provided by XAPI
-is called XenAPI. To learn more about the provided interface, look at `XenAPI
-Object Model Overview <https://xapi-project.github.io/xen-api/overview.html>`_
-for definitions of XAPI specific terms such as SR, VDI, VIF and PIF.
-
-OpenStack has a compute driver which talks to XAPI, so any XAPI managed
-server can be used with OpenStack.
-
-XenAPI
-------
-
-XenAPI is the API provided by XAPI. This name is also used by the Python
-library that is a client for XAPI. A set of packages to use XenAPI on existing
-distributions can be built using the `xenserver/buildroot
-<https://github.com/xenserver/buildroot>`_ project.
-
-XenServer
----------
-
-An open source virtualization platform that delivers all features needed for
-any server and datacenter implementation, including the Xen hypervisor and
-XAPI for management. For more information and product downloads, visit
-`xenserver.org <http://xenserver.org/>`_.
-
-XCP
----
-
-XCP is no longer supported. The XCP project recommends that all XCP users
-upgrade to the latest version of XenServer by visiting `xenserver.org
-<http://xenserver.org/>`_.
-
-Privileged and unprivileged domains
------------------------------------
-
-A Xen host runs a number of virtual machines, VMs, or domains (the terms are
-synonymous on Xen). One of these is in charge of running the rest of the
-system, and is known as domain 0, or dom0. It is the first domain to boot after
-Xen, and owns the storage and networking hardware, the device drivers, and the
-primary control software. Any other VM is unprivileged, and is known as a domU
-or guest. All customer VMs are unprivileged, but you should note that on
-XenServer (and other XenAPI using hypervisors), the OpenStack Compute service
-(``nova-compute``) also runs in a domU. This gives a level of security
-isolation between the privileged system software and the OpenStack software
-(much of which is customer-facing). This architecture is described in more
-detail later.
-
-Paravirtualized versus hardware virtualized domains
----------------------------------------------------
-
-A Xen virtual machine can be paravirtualized (PV) or hardware virtualized
-(HVM). This refers to the interaction between Xen, domain 0, and the guest VM's
-kernel. PV guests are aware of the fact that they are virtualized and will
-co-operate with Xen and domain 0; this gives them better performance
-characteristics. HVM guests are not aware of their environment, and the
-hardware has to pretend that they are running on an unvirtualized machine. HVM
-guests do not need to modify the guest operating system, which is essential
-when running Windows.
-
-In OpenStack, customer VMs may run in either PV or HVM mode. However, the
-OpenStack domU (that's the one running ``nova-compute``) must be running in PV
-mode.
-
-xapi pool
----------
-
-A resource pool comprises multiple XenServer host installations, bound together
-into a single managed entity which can host virtual machines. When combined
-with shared storage, VMs can dynamically move between XenServer hosts, with
-minimal downtime since no block copying is needed.
-
-XenAPI deployment architecture
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-
-A basic OpenStack deployment on a XAPI-managed server, assuming that the
-network provider is neutron, looks like this:
-
-.. figure:: /_static/images/xenserver_architecture.png
-   :width: 100%
-
-Key things to note:
-
-* The hypervisor: Xen
-
-* Domain 0: runs XAPI and some small pieces from OpenStack,
-  the XAPI plug-ins.
-
-* OpenStack VM: The ``Compute`` service runs in a paravirtualized virtual
-  machine, on the host under management. Each host runs a local instance of
-  ``Compute``. It also runs the neutron plugin agent
-  (``neutron-openvswitch-agent``) to perform local vSwitch configuration.
-
-* OpenStack Compute uses the XenAPI Python library to talk to XAPI, and it uses
-  the Management Network to reach from the OpenStack VM to Domain 0.
-
-Some notes on the networking:
-
-* The above diagram assumes DHCP networking.
-
-* There are three main OpenStack networks:
-
-  * Management network: RabbitMQ, MySQL, inter-host communication, and
-    compute-XAPI communication. Please note that the VM images are downloaded
-    by the XenAPI plug-ins, so make sure that the OpenStack Image service is
-    accessible through this network. It usually means binding those services to
-    the management interface.
-
-  * Tenant network: controlled by neutron, this is used for tenant traffic.
-
-  * Public network: floating IPs, public API endpoints.
-
-* The networks shown here must be connected to the corresponding physical
-  networks within the data center. In the simplest case, three individual
-  physical network cards could be used. It is also possible to use VLANs to
-  separate these networks. Please note that the selected configuration must be
-  in line with the networking model selected for the cloud. (In case of VLAN
-  networking, the physical channels have to be able to forward the tagged
-  traffic.)
-
-* With the Networking service, you should enable the Linux bridge in ``Dom0``,
-  which is used by the Compute service. ``nova-compute`` will create Linux
-  bridges for security groups, and ``neutron-openvswitch-agent`` in the Compute
-  node will apply security group rules on these Linux bridges. To implement
-  this, you need to remove ``/etc/modprobe.d/blacklist-bridge*`` in ``Dom0``.
-
-Further reading
-~~~~~~~~~~~~~~~
-
-Here are some of the resources available to learn more about Xen:
-
-* `Citrix XenServer official documentation
-  <http://docs.vmd.citrix.com/XenServer/6.2.0/1.0/en_gb/>`_
-* `What is Xen? by XenProject.org
-  <http://www.xenproject.org/users/cloud.html>`_
-* `Xen Hypervisor project
-  <http://www.xenproject.org/developers/teams/hypervisor.html>`_
-* `Xapi project <http://www.xenproject.org/developers/teams/xapi.html>`_
-* `Further XenServer and OpenStack information
-  <http://wiki.openstack.org/XenServer>`_
-
-Install XenServer
-~~~~~~~~~~~~~~~~~
-
-Before you can run OpenStack with XenServer, you must install the hypervisor on
-`an appropriate server <http://docs.vmd.citrix.com/XenServer/
-6.2.0/1.0/en_gb/installation.html#sys_requirements>`_.
-
-.. note::
-
-   Xen is a type 1 hypervisor: When your server starts, Xen is the first
-   software that runs. Consequently, you must install XenServer before you
-   install the operating system where you want to run OpenStack code. You then
-   install ``nova-compute`` into a dedicated virtual machine on the host.
-
-Use the following link to download XenServer's installation media:
-
-* http://xenserver.org/open-source-virtualization-download.html
-
-When you install many servers, you might find it easier to perform `PXE boot
-installations <http://docs.vmd.citrix.com/XenServer/6.2.0/
-1.0/en_gb/installation.html#pxe_boot_install>`_. You can also package any
-post-installation changes that you want to make to your XenServer by following
-the instructions for `creating your own XenServer supplemental pack
-<http://docs.vmd.citrix.com/
-XenServer/6.2.0/1.0/en_gb/supplemental_pack_ddk.html>`_.
-
-.. important::
-
-   When using ``[xenserver]image_handler=direct_vhd`` (the default), make sure
-   you use the EXT type of storage repository (SR). Features that require
-   access to VHD files (such as copy on write, snapshot and migration) do not
-   work when you use the LVM SR. Storage repository (SR) is a XAPI-specific
-   term relating to the physical storage where virtual disks are stored.
-
-   On the XenServer installation screen, choose the :guilabel:`XenDesktop
-   Optimized` option. If you use an answer file, make sure you use
-   ``srtype="ext"`` in the ``installation`` tag of the answer file.
-
-Post-installation steps
-~~~~~~~~~~~~~~~~~~~~~~~
-
-The following steps need to be completed after the hypervisor's installation:
-
-#. For resize and migrate functionality, enable password-less SSH
-   authentication and set up the ``/images`` directory on dom0.
-
-#. Install the XAPI plug-ins.
-
-#. To support AMI type images, you must set up the ``/boot/guest``
-   symlink/directory in dom0.
-
-#. Create a paravirtualized virtual machine that can run ``nova-compute``.
-
-#. Install and configure ``nova-compute`` in the above virtual machine.
-
-#. To support live migration requiring no block device migration, you should
-   add the current host to a xapi pool using shared storage. You need to know
-   the pool master IP address, username and password:
-
-   .. code-block:: console
-
-      $ xe pool-join master-address=MASTER_IP master-username=root master-password=MASTER_PASSWORD
-
-Install XAPI plug-ins
----------------------
-
-When you use a XAPI managed hypervisor, you can install a Python script (or any
-executable) on the host side, and execute that through XenAPI. These scripts
-are called plug-ins. The OpenStack related XAPI plug-ins live in the OpenStack
-os-xenapi code repository. These plug-ins have to be copied to dom0's
-filesystem, to the appropriate directory, where XAPI can find them. It is
-important to ensure that the version of the plug-ins is in line with the
-OpenStack Compute installation you are using.
-
-The plug-ins should typically be copied from the Nova installation running in
-the Compute's DomU (``pip show os-xenapi`` to find its location), but if you
-want to download the latest version the following procedure can be used.
-
-**Manually installing the plug-ins**
-
-#. Create temporary files/directories:
-
-   .. code-block:: console
-
-      $ OS_XENAPI_TARBALL=$(mktemp)
-      $ OS_XENAPI_SOURCES=$(mktemp -d)
-
-#. Get the source from the openstack.org archives. The example assumes the
-   latest release is used, and the XenServer host is accessible as xenserver.
-   Match those parameters to your setup.
-
-   .. code-block:: console
-
-      $ OS_XENAPI_URL=https://tarballs.openstack.org/os-xenapi/os-xenapi-0.1.1.tar.gz
-      $ wget -qO "$OS_XENAPI_TARBALL" "$OS_XENAPI_URL"
-      $ tar xvf "$OS_XENAPI_TARBALL" -C "$OS_XENAPI_SOURCES"
-
-#. Copy the plug-ins to the hypervisor:
-
-   .. code-block:: console
-
-      $ PLUGINPATH=$(find $OS_XENAPI_SOURCES -path '*/xapi.d/plugins' -type d -print)
-      $ tar -czf - -C "$PLUGINPATH" ./ |
-      > ssh root@xenserver tar -xozf - -C /etc/xapi.d/plugins
-
-#. Remove temporary files/directories:
-
-   .. code-block:: console
-
-      $ rm "$OS_XENAPI_TARBALL"
-      $ rm -rf "$OS_XENAPI_SOURCES"
-
-Prepare for AMI type images
----------------------------
-
-To support AMI type images in your OpenStack installation, you must create the
-``/boot/guest`` directory on dom0. One of the OpenStack XAPI plug-ins will
-extract the kernel and ramdisk from AKI and ARI images and put them in that
-directory.
-
-OpenStack maintains the contents of this directory and its size should not
-increase during normal operation. However, in case of power failures or
-accidental shutdowns, some files might be left over. To prevent these files
-from filling up dom0's filesystem, set up this directory as a symlink that
-points to a subdirectory of the local SR.
-
-Run these commands in dom0 to achieve this setup:
-
-.. code-block:: console
-
-   # LOCAL_SR=$(xe sr-list name-label="Local storage" --minimal)
-   # LOCALPATH="/var/run/sr-mount/$LOCAL_SR/os-guest-kernels"
-   # mkdir -p "$LOCALPATH"
-   # ln -s "$LOCALPATH" /boot/guest
-
-Modify dom0 for resize/migration support
-----------------------------------------
-
-To resize servers with XenServer you must:
-
-* Establish a root trust between all hypervisor nodes of your deployment:
-
-  To do so, generate an ssh key-pair with the :command:`ssh-keygen` command.
-  Ensure that each of your dom0's ``authorized_keys`` files (located in
-  ``/root/.ssh/authorized_keys``) contains the public key (located in
-  ``/root/.ssh/id_rsa.pub``) of every other dom0.
-
-* Provide a ``/images`` mount point to the dom0 for your hypervisor:
-
-  dom0 space is at a premium, so creating a directory in dom0 is potentially
-  dangerous and likely to fail, especially when you resize large servers. The
-  least you can do is to symlink ``/images`` to your local storage SR. The
-  following instructions work for an English-based installation of XenServer
-  and in the case of ext3-based SR (with which the resize functionality is
-  known to work correctly).
-
-  .. code-block:: console
-
-     # LOCAL_SR=$(xe sr-list name-label="Local storage" --minimal)
-     # IMG_DIR="/var/run/sr-mount/$LOCAL_SR/images"
-     # mkdir -p "$IMG_DIR"
-     # ln -s "$IMG_DIR" /images
-
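As a sketch of the root-trust step above, assuming RSA keys and a second dom0
reachable as ``other-dom0`` (a hypothetical hostname):

.. code-block:: console

   # ssh-keygen -t rsa -N '' -f /root/.ssh/id_rsa
   # cat /root/.ssh/id_rsa.pub | ssh root@other-dom0 'cat >> /root/.ssh/authorized_keys'

Repeat in both directions for every pair of hypervisor nodes.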
-XenAPI configuration reference
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-
-The following section discusses some commonly changed options when using the
-XenAPI driver. The table below provides a complete reference of all
-configuration options available for configuring XAPI with OpenStack.
-
-The recommended way to use XAPI with OpenStack is through the XenAPI driver.
-To enable the XenAPI driver, add the following configuration options to
-``/etc/nova/nova.conf`` and restart ``OpenStack Compute``:
-
-.. code-block:: ini
-
-   compute_driver = xenapi.XenAPIDriver
-
-   [xenserver]
-   connection_url = http://your_xenapi_management_ip_address
-   connection_username = root
-   connection_password = your_password
-   ovs_integration_bridge = br-int
-
-These connection details are used by the OpenStack Compute service to contact
-your hypervisor and are the same details you use to connect XenCenter, the
-XenServer management console, to your XenServer node.
-
-.. note::
-
-   The ``connection_url`` is generally the management network IP
-   address of the XenServer.
-
-Networking configuration
-------------------------
-
-The Networking service in the Compute node is running
-``neutron-openvswitch-agent``, which manages ``dom0``'s OVS. Refer to the
-:neutron-doc:`openvswitch_agent.ini sample
-<configuration/samples/openvswitch-agent.html>` for details; however, there
-are several specific items to look out for.
-
-.. code-block:: ini
-
-   [agent]
-   minimize_polling = False
-   root_helper_daemon = xenapi_root_helper
-
-   [ovs]
-   of_listen_address = management_ip_address
-   ovsdb_connection = tcp:your_xenapi_management_ip_address:6640
-   bridge_mappings = <physical_network>:<physical_bridge>, ...
-   integration_bridge = br-int
-
-   [xenapi]
-   connection_url = http://your_xenapi_management_ip_address
-   connection_username = root
-   connection_password = your_pass_word
-
-.. note::
-
-   The ``ovsdb_connection`` is the connection string for the native OVSDB
-   backend; you need to open port 6640 in dom0.
-
-Agent
------
-
-The agent is a piece of software that runs on the instances, and communicates
-with OpenStack. In the case of the XenAPI driver, the agent communicates with
-OpenStack through XenStore (see `the Xen Project Wiki
-<http://wiki.xenproject.org/wiki/XenStore>`_ for more information on XenStore).
-
-If you don't have the guest agent on your VMs, it takes a long time for
-OpenStack Compute to detect that the VM has successfully started. Generally a
-large timeout is required for Windows instances, but you may want to adjust
-``agent_version_timeout`` within the ``[xenserver]`` section.
-
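For illustration, the timeout adjustment above is an ordinary ``nova.conf``
override; the value shown is an assumed example, not a recommendation:

.. code-block:: ini

   [xenserver]
   # Seconds to wait for the agent to report in; slow-booting Windows
   # images may need a larger value. 300 is an assumed example.
   agent_version_timeout = 300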
-VNC proxy address
------------------
-
-Assuming you are talking to XAPI through a management network, and XenServer
-is on the address 10.10.1.34, specify the same address for the VNC proxy
-address: ``server_proxyclient_address=10.10.1.34``
-
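Written out as a config snippet (placing the option in the ``[vnc]`` section
reflects current nova layouts and is an assumption here):

.. code-block:: ini

   [vnc]
   server_proxyclient_address = 10.10.1.34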
-Storage
--------
-
-You can specify which Storage Repository to use with nova by editing the
-following flag. To use the local storage set up by the default installer:
-
-.. code-block:: ini
-
-   sr_matching_filter = "other-config:i18n-key=local-storage"
-
-Another alternative is to use the "default" storage (for example if you have
-attached NFS or any other shared storage):
-
-.. code-block:: ini
-
-   sr_matching_filter = "default-sr:true"
-
-Use different image handler
----------------------------
-
-We support three different implementations of the glance image handler. You
-can choose a specific image handler based on the demand:
-
-* ``direct_vhd``: This image handler will call XAPI plug-ins to directly
-  process the VHD files in the XenServer SR (Storage Repository). This handler
-  only works when the host's SR type is file system based, e.g. ext or nfs.
-
-* ``vdi_local_dev``: This image handler uploads ``tgz`` compressed raw
-  disk images to the glance image service.
-
-* ``vdi_remote_stream``: With this image handler, the image data streams
-  between XenServer and the glance image service. As it uses the remote
-  APIs supported by XAPI, this plugin works for all SR types supported by
-  XenServer.
-
-``direct_vhd`` is the default image handler. If you want to use a different
-image handler, you can change the ``image_handler`` config setting within the
-``[xenserver]`` section. For example, the following setting uses
-``vdi_remote_stream`` as the image handler:
-
-.. code-block:: ini
-
-   [xenserver]
-   image_handler = vdi_remote_stream
@@ -4,11 +4,8 @@ Xen via libvirt
 
 OpenStack Compute supports the Xen Project Hypervisor (or Xen). Xen can be
 integrated with OpenStack Compute via the `libvirt <http://libvirt.org/>`_
-`toolstack <http://wiki.xen.org/wiki/Choice_of_Toolstacks>`_ or via the `XAPI
-<http://xenproject.org/developers/teams/xapi.html>`_ `toolstack
-<http://wiki.xen.org/wiki/Choice_of_Toolstacks>`_. This section describes how
-to set up OpenStack Compute with Xen and libvirt. For information on how to
-set up Xen with XAPI refer to :doc:`hypervisor-xen-api`.
+`toolstack <http://wiki.xen.org/wiki/Choice_of_Toolstacks>`_.
 
 Installing Xen with libvirt
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~
@@ -11,7 +11,6 @@ Hypervisors
    hypervisor-basics
    hypervisor-kvm
    hypervisor-qemu
-   hypervisor-xen-api
    hypervisor-xen-libvirt
    hypervisor-lxc
    hypervisor-vmware
@@ -47,10 +46,6 @@ The following hypervisors are supported:
   management interface into ``nova-compute`` to run Linux, Windows, FreeBSD and
   NetBSD virtual machines.
 
-* `XenServer`_ - XenServer, Xen Cloud Platform (XCP) and other XAPI based Xen
-  variants run Linux or Windows virtual machines. You must install the
-  ``nova-compute`` service in a para-virtualized VM.
-
 * `Hyper-V`_ - Server virtualization with Microsoft Hyper-V, used to run
   Windows, Linux, and FreeBSD virtual machines. Runs ``nova-compute`` natively
   on the Windows virtualization platform.
@@ -81,8 +76,6 @@ virt drivers:
   can be configured via the :oslo.config:option:`libvirt.virt_type` config
   option.
 
-* :oslo.config:option:`compute_driver` = ``xenapi.XenAPIDriver``
-
 * :oslo.config:option:`compute_driver` = ``ironic.IronicDriver``
 
 * :oslo.config:option:`compute_driver` = ``vmwareapi.VMwareVCDriver``
@@ -104,7 +97,6 @@ virt drivers:
 .. _QEMU: https://wiki.qemu.org/Manual
 .. _VMware vSphere: https://www.vmware.com/support/vsphere-hypervisor.html
 .. _Xen (using libvirt): https://www.xenproject.org
-.. _XenServer: https://xenserver.org
 .. _Hyper-V: https://docs.microsoft.com/en-us/windows-server/virtualization/hyper-v/hyper-v-technology-overview
 .. _Virtuozzo: https://www.virtuozzo.com/products/vz7.html
 .. _PowerVM: https://www.ibm.com/us-en/marketplace/ibm-powervm
@@ -27,12 +27,6 @@ compute host to another is needed to copy the VM file across.
 Cloud end users can find out how to resize a server by reading
 :doc:`/user/resize`.
 
-XenServer
-~~~~~~~~~
-
-To get resize to work with XenServer (and XCP), you need to establish a root
-trust between all hypervisor nodes and provide an ``/images`` mount point to
-your hypervisor's dom0.
-
 Automatic confirm
 -----------------
@@ -436,8 +436,8 @@ The image properties that the filter checks for are:
   This was previously called ``architecture``.
 
 ``img_hv_type``
-  Describes the hypervisor required by the image. Examples are ``qemu``,
-  ``xenapi``, and ``hyperv``.
+  Describes the hypervisor required by the image. Examples are ``qemu``
+  and ``hyperv``.
 
 .. note::
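For context, a property like ``img_hv_type`` is set on the image itself; a
sketch using a hypothetical image name:

.. code-block:: console

   $ openstack image set --property img_hv_type=qemu my-image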
@@ -874,13 +874,6 @@ file. For example to configure metric1 with ratio1 and metric2 with ratio2:
 
     weight_setting = "metric1=ratio1, metric2=ratio2"
 
-XenServer hypervisor pools to support live migration
-----------------------------------------------------
-
-When using the XenAPI-based hypervisor, the Compute service uses host
-aggregates to manage XenServer Resource pools, which are used in supporting
-live migration.
-
 Allocation ratios
 ~~~~~~~~~~~~~~~~~
@@ -981,19 +974,6 @@ HyperV
    account for this overhead, based on the amount of memory available
    to instances.
 
-XenAPI
-   XenServer memory overhead is proportional to the size of the VM and larger
-   flavor VMs become more efficient with respect to overhead. This overhead
-   can be calculated using the following formula::
-
-      overhead (MB) = (instance.memory * 0.00781) + (instance.vcpus * 1.5) + 3
-
-   You should configure the
-   :oslo.config:option:`reserved_host_memory_mb` config option to
-   account for this overhead, based on the size of your hosts and
-   instances. For more information, refer to
-   https://wiki.openstack.org/wiki/XenServer/Overhead.
-
 Cells considerations
 ~~~~~~~~~~~~~~~~~~~~
@@ -10,11 +10,9 @@ source host, but migration can also be useful to redistribute the load when
 many VM instances are running on a specific physical machine.
 
-This document covers live migrations using the
-:ref:`configuring-migrations-kvm-libvirt` and
-:ref:`configuring-migrations-xenserver` hypervisors.
+This document covers live migrations using the
+:ref:`configuring-migrations-kvm-libvirt` and VMWare hypervisors.
 
 .. :ref:`_configuring-migrations-kvm-libvirt`
-.. :ref:`_configuring-migrations-xenserver`
 
 .. note::
@@ -68,17 +66,17 @@ The migration types are:
   different host in the same cell, but not across cells.
 
 The following sections describe how to configure your hosts for live migrations
-using the KVM and XenServer hypervisors.
+using the libvirt virt driver and KVM hypervisor.
 
 .. _configuring-migrations-kvm-libvirt:
 
-KVM-libvirt
-~~~~~~~~~~~
+Libvirt
+-------
 
 .. _configuring-migrations-kvm-general:
 
 General configuration
----------------------
+~~~~~~~~~~~~~~~~~~~~~
 
 To enable any type of live migration, configure the compute hosts according to
 the instructions below:
@@ -135,7 +133,7 @@ the instructions below:
 .. _`configuring-migrations-securing-live-migration-streams`:
 
 Securing live migration streams
--------------------------------
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
 If your compute nodes have at least libvirt 4.4.0 and QEMU 2.11.0, it is
 strongly recommended to secure all your live migration streams by taking
@@ -148,7 +146,7 @@ on how to set this all up, refer to the
 .. _configuring-migrations-kvm-block-and-volume-migration:
 
 Block migration, volume-based live migration
---------------------------------------------
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
 If your environment satisfies the requirements for "QEMU-native TLS",
 then block migration requires some setup; refer to the above section,
@@ -161,7 +159,7 @@ Be aware that block migration adds load to the network and storage subsystems.
 .. _configuring-migrations-kvm-shared-storage:
 
 Shared storage
---------------
+~~~~~~~~~~~~~~
 
 Compute hosts have many options for sharing storage, for example NFS, shared
 disk array LUNs, Ceph or GlusterFS.
@@ -221,7 +219,7 @@ hosts.
 .. _configuring-migrations-kvm-advanced:
 
 Advanced configuration for KVM and QEMU
----------------------------------------
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
 Live migration copies the instance's memory from the source to the destination
 compute host. After a memory page has been copied, the instance may write to it
@@ -319,94 +317,16 @@ memory-intensive instances succeed.
 The full list of live migration configuration parameters is documented in the
 :doc:`Nova Configuration Options </configuration/config>`
 
-.. _configuring-migrations-xenserver:
-
-XenServer
-~~~~~~~~~
-
-.. :ref:Shared Storage
-.. :ref:Block migration
-
-.. _configuring-migrations-xenserver-shared-storage:
-
-Shared storage
---------------
-
-**Prerequisites**
-
-- **Compatible XenServer hypervisors**.
-
-  For more information, see the `Requirements for Creating Resource Pools
-  <https://docs.citrix.com/en-us/xenserver/7-1.html#pooling_homogeneity_requirements>`_
-  section of the XenServer Administrator's Guide.
-
-- **Shared storage**.
-
-  An NFS export, visible to all XenServer hosts.
-
-  .. note::
-
-     For the supported NFS versions, see the `NFS and SMB
-     <https://docs.citrix.com/en-us/xenserver/7-1.html#id1002701>`_
-     section of the XenServer Administrator's Guide.
-
-To use shared storage live migration with XenServer hypervisors, the hosts must
-be joined to a XenServer pool.
-
-.. rubric:: Using shared storage live migrations with XenServer Hypervisors
-
-#. Add an NFS VHD storage to your master XenServer, and set it as the default
-   storage repository. For more information, see NFS VHD in the XenServer
-   Administrator's Guide.
-
-#. Configure all compute nodes to use the default storage repository (``sr``)
-   for pool operations. Add this line to your ``nova.conf`` configuration files
-   on all compute nodes:
-
-   .. code-block:: ini
-
-      sr_matching_filter=default-sr:true
-
-#. To add a host to a pool, you need to know the pool master IP address,
-   username and password. Run the command below on the XenServer host:
-
-   .. code-block:: console
-
-      $ xe pool-join master-address=MASTER_IP master-username=root master-password=MASTER_PASSWORD
-
-   .. note::
-
-      The added compute node and the host will shut down to join the host to
-      the XenServer pool. The operation will fail if any server other than the
-      compute node is running or suspended on the host.
-
-.. _configuring-migrations-xenserver-block-migration:
-
-Block migration
----------------
-
-- **Compatible XenServer hypervisors**.
-
-  The hypervisors must support the Storage XenMotion feature. See your
-  XenServer manual to make sure your edition has this feature.
-
-.. note::
-
-   - To use block migration, you must use the ``--block-migrate`` parameter
-     with the live migration command.
-
-   - Block migration works only with EXT local storage repositories,
-     and the server must not have any volumes attached.
-
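The ``--block-migrate`` parameter mentioned in the removed note belongs to the
live migration command; a sketch using the legacy ``nova`` CLI of this era,
with hypothetical server and host names:

.. code-block:: console

   $ nova live-migration --block-migrate my-server target-host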
 VMware
-~~~~~~
+------
 
 .. :ref:`_configuring-migrations-vmware`
 
 .. _configuring-migrations-vmware:
 
 vSphere configuration
----------------------
+~~~~~~~~~~~~~~~~~~~~~
 
 Enable vMotion on all ESX hosts which are managed by Nova by following the
 instructions in `this <https://kb.vmware.com/s/article/2054994>`_ KB article.
@@ -2,6 +2,11 @@
 Attaching virtual GPU devices to guests
 =======================================
 
+.. important::
+
+   The functionality described below is only supported by the libvirt/KVM
+   driver.
+
 The virtual GPU feature in Nova allows a deployment to provide specific GPU
 types for instances using physical GPUs that can provide virtual devices.
@@ -10,11 +15,11 @@ Graphics Processing Unit (pGPU) can be virtualized as multiple virtual Graphics
 Processing Units (vGPUs) if the hypervisor supports the hardware driver and has
 the capability to create guests using those virtual devices.
 
-This feature is highly dependent on the hypervisor, its version and the
-physical devices present on the host. In addition, the vendor's vGPU driver software
+This feature is highly dependent on the version of libvirt and the physical
+devices present on the host. In addition, the vendor's vGPU driver software
 must be installed and configured on the host at the same time.
 
-Hypervisor-specific caveats are mentioned in the `Caveats`_ section.
+Caveats are mentioned in the `Caveats`_ section.
 
 To enable virtual GPUs, follow the steps below:
@@ -86,14 +91,15 @@ Configure a flavor to request one virtual GPU:
 
 .. note::
 
-    As of the Queens release, all hypervisors that support virtual GPUs
-    only accept a single virtual GPU per instance.
+   As of the Queens release, all hypervisors that support virtual GPUs
+   only accept a single virtual GPU per instance.
 
 The enabled vGPU types on the compute hosts are not exposed to API users.
 Flavors configured for vGPU support can be tied to host aggregates as a means
 to properly schedule those flavors onto the compute hosts that support them.
 See :doc:`/admin/aggregates` for more information.
 
 
 Create instances with virtual GPU devices
 -----------------------------------------
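The flavor configuration referenced at the top of the hunk above is normally a
``resources:VGPU`` extra spec; a sketch with a hypothetical flavor name:

.. code-block:: console

   $ openstack flavor set vgpu_1 --property "resources:VGPU=1"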
@@ -114,92 +120,36 @@ provided by compute nodes.
 How to discover a GPU type
 --------------------------
 
-Depending on your hypervisor:
+Virtual GPUs are seen as mediated devices. Physical PCI devices (the graphic
+card here) supporting virtual GPUs propose mediated device (mdev) types. Since
+mediated devices are supported by the Linux kernel through sysfs files after
+installing the vendor's virtual GPUs driver software, you can see the required
+properties as follows:
 
-- For libvirt, virtual GPUs are seen as mediated devices. Physical PCI devices
-  (the graphic card here) supporting virtual GPUs propose mediated device
-  (mdev) types. Since mediated devices are supported by the Linux kernel
-  through sysfs files after installing the vendor's virtual GPUs driver
-  software, you can see the required properties as follows:
+.. code-block:: console
 
-  .. code-block:: console
+   $ ls /sys/class/mdev_bus/*/mdev_supported_types
+   /sys/class/mdev_bus/0000:84:00.0/mdev_supported_types:
+   nvidia-35 nvidia-36 nvidia-37 nvidia-38 nvidia-39 nvidia-40 nvidia-41 nvidia-42 nvidia-43 nvidia-44 nvidia-45
+
+   /sys/class/mdev_bus/0000:85:00.0/mdev_supported_types:
+   nvidia-35 nvidia-36 nvidia-37 nvidia-38 nvidia-39 nvidia-40 nvidia-41 nvidia-42 nvidia-43 nvidia-44 nvidia-45
+
+   /sys/class/mdev_bus/0000:86:00.0/mdev_supported_types:
+   nvidia-35 nvidia-36 nvidia-37 nvidia-38 nvidia-39 nvidia-40 nvidia-41 nvidia-42 nvidia-43 nvidia-44 nvidia-45
+
+   /sys/class/mdev_bus/0000:87:00.0/mdev_supported_types:
+   nvidia-35 nvidia-36 nvidia-37 nvidia-38 nvidia-39 nvidia-40 nvidia-41 nvidia-42 nvidia-43 nvidia-44 nvidia-45
 
-    $ ls /sys/class/mdev_bus/*/mdev_supported_types
-    /sys/class/mdev_bus/0000:84:00.0/mdev_supported_types:
-    nvidia-35 nvidia-36 nvidia-37 nvidia-38 nvidia-39 nvidia-40 nvidia-41 nvidia-42 nvidia-43 nvidia-44 nvidia-45
-
-    /sys/class/mdev_bus/0000:85:00.0/mdev_supported_types:
-    nvidia-35 nvidia-36 nvidia-37 nvidia-38 nvidia-39 nvidia-40 nvidia-41 nvidia-42 nvidia-43 nvidia-44 nvidia-45
-
-    /sys/class/mdev_bus/0000:86:00.0/mdev_supported_types:
-    nvidia-35 nvidia-36 nvidia-37 nvidia-38 nvidia-39 nvidia-40 nvidia-41 nvidia-42 nvidia-43 nvidia-44 nvidia-45
-
-    /sys/class/mdev_bus/0000:87:00.0/mdev_supported_types:
-    nvidia-35 nvidia-36 nvidia-37 nvidia-38 nvidia-39 nvidia-40 nvidia-41 nvidia-42 nvidia-43 nvidia-44 nvidia-45
-- For XenServer, virtual GPU types are created by XenServer at startup
-  depending on the available hardware and config files present in dom0.
-  You can run ``xe vgpu-type-list`` from dom0 to get the available vGPU
-  types. The value of the ``model-name ( RO):`` field is the vGPU type's
-  name, which can be used to set the nova config option
-  ``[devices]/enabled_vgpu_types``. See the following example:
-
-  .. code-block:: console
-
-     [root@trailblazer-2 ~]# xe vgpu-type-list
-     uuid ( RO)          : 78d2d963-41d6-4130-8842-aedbc559709f
-        vendor-name ( RO): NVIDIA Corporation
-         model-name ( RO): GRID M60-8Q
-          max-heads ( RO): 4
-     max-resolution ( RO): 4096x2160
-
-
-     uuid ( RO)          : a1bb1692-8ce3-4577-a611-6b4b8f35a5c9
-        vendor-name ( RO): NVIDIA Corporation
-         model-name ( RO): GRID M60-0Q
-          max-heads ( RO): 2
-     max-resolution ( RO): 2560x1600
-
-
-     uuid ( RO)          : 69d03200-49eb-4002-b661-824aec4fd26f
-        vendor-name ( RO): NVIDIA Corporation
-         model-name ( RO): GRID M60-2A
-          max-heads ( RO): 1
-     max-resolution ( RO): 1280x1024
-
-
-     uuid ( RO)          : c58b1007-8b47-4336-95aa-981a5634d03d
-        vendor-name ( RO): NVIDIA Corporation
-         model-name ( RO): GRID M60-4Q
-          max-heads ( RO): 4
-     max-resolution ( RO): 4096x2160
-
-
-     uuid ( RO)          : 292a2b20-887f-4a13-b310-98a75c53b61f
-        vendor-name ( RO): NVIDIA Corporation
-         model-name ( RO): GRID M60-2Q
-          max-heads ( RO): 4
-     max-resolution ( RO): 4096x2160
-
-
-     uuid ( RO)          : d377db6b-a068-4a98-92a8-f94bd8d6cc5d
-        vendor-name ( RO): NVIDIA Corporation
-         model-name ( RO): GRID M60-0B
-          max-heads ( RO): 2
-     max-resolution ( RO): 2560x1600
-
-     ...
-
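The ``[devices]/enabled_vgpu_types`` option named in the removed bullet lives
in ``nova.conf``; a sketch, assuming the ``nvidia-35`` type from the listings
above:

.. code-block:: ini

   [devices]
   enabled_vgpu_types = nvidia-35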
 Checking allocations and inventories for virtual GPUs
 -----------------------------------------------------
 
 .. note::
 
-   The information below is only valid from the 19.0.0 Stein release and only
-   for the libvirt driver. Before this release or when using the Xen driver,
-   inventories and allocations related to a ``VGPU`` resource class are still
-   on the root resource provider related to the compute node.
+   The information below is only valid from the 19.0.0 Stein release. Before
+   this release, inventories and allocations related to a ``VGPU`` resource
+   class are still on the root resource provider related to the compute node.
    If upgrading from Rocky and using the libvirt driver, ``VGPU`` inventory and
    allocations are moved to child resource providers that represent actual
    physical GPUs.
@@ -348,8 +298,6 @@ Caveats
 This information is correct as of the 17.0.0 Queens release. Where
 improvements have been made or issues fixed, they are noted per item.
 
-For libvirt:
-
 * Suspending a guest that has vGPUs doesn't yet work because of a libvirt
   limitation (it can't hot-unplug mediated devices from a guest). Workarounds
   using other instance actions (like snapshotting the instance or shelving it)
@@ -405,28 +353,6 @@ For nested vGPUs:
 
   - You can ask for a flavor with 2 vGPU with --max 2.
   - But you can't ask for a flavor with 4 vGPU and --max 2.
 
-For XenServer:
-
-* Suspend and live migration with vGPUs attached depends on support from the
-  underlying XenServer version. Please see XenServer release notes for up to
-  date information on when a hypervisor supporting live migration and
-  suspend/resume with vGPUs is available. If a suspend or live migrate
-  operation is attempted with a XenServer version that does not support that
-  operation, an internal exception will occur that will cause nova to set the
-  instance to ERROR status. You can use
-  ``openstack server set --state active <server>`` to set it back to ACTIVE.
-
-* Resizing an instance with a new flavor that has vGPU resources doesn't
-  allocate those vGPUs to the instance (the instance is created without
-  vGPU resources). The proposed workaround is to rebuild the instance after
-  resizing it. The rebuild operation allocates vGPUs to the instance.
-
-* Cold migrating an instance to another host will have the same problem as
-  resize. If you want to migrate an instance, make sure to rebuild it after the
-  migration.
-
-* Multiple GPU types per compute node is not supported by the XenServer driver.
-
 .. _bug 1778563: https://bugs.launchpad.net/nova/+bug/1778563
 .. _bug 1762688: https://bugs.launchpad.net/nova/+bug/1762688
|
@ -176,9 +176,9 @@ from the relevant third party test, on the latest patchset, before a +2 vote
|
||||
can be applied.
|
||||
Specifically, changes to nova/virt/driver/<NNNN> need a +1 vote from the
|
||||
respective third party CI.
|
||||
For example, if you change something in the XenAPI virt driver, you must wait
|
||||
for a +1 from the XenServer CI on the latest patchset, before you can give
|
||||
that patch set a +2 vote.
|
||||
For example, if you change something in the Hyper-V virt driver, you must wait
|
||||
for a +1 from the Hyper-V CI on the latest patchset, before you can give that
|
||||
patch set a +2 vote.
|
||||
|
||||
This is important to ensure:
|
||||
|
||||
|
@@ -31,8 +31,6 @@ OpenStack Compute consists of the following areas and their components:
 A worker daemon that creates and terminates virtual machine instances through
 hypervisor APIs. For example:
 
-- XenAPI for XenServer/XCP
-
 - libvirt for KVM or QEMU
 
 - VMwareAPI for VMware
@@ -376,12 +376,12 @@ affinity check, you should set
 ``[workarounds]/disable_group_policy_check_upcall=True`` and
 ``[filter_scheduler]/track_instance_changes=False`` in ``nova.conf``.
 
-The fourth is currently only a problem when performing live migrations
-using the XenAPI driver and not specifying ``--block-migrate``. The
-driver will attempt to figure out if block migration should be performed
-based on source and destination hosts being in the same aggregate. Since
-aggregates data has migrated to the API database, the cell conductor will
-not be able to access the aggregate information and will fail.
+The fourth was previously only a problem when performing live migrations using
+the since-removed XenAPI driver and not specifying ``--block-migrate``. The
+driver would attempt to figure out if block migration should be performed based
+on source and destination hosts being in the same aggregate. Since aggregates
+data had migrated to the API database, the cell conductor would not be able to
+access the aggregate information and would fail.
 
 The fifth is a problem because when a volume is attached to an instance
 in the *nova-compute* service, and ``[cinder]/cross_az_attach=False`` in
@@ -22,10 +22,6 @@ link=https://wiki.openstack.org/wiki/ThirdPartySystems/Virtuozzo_Storage_CI
 title=libvirt+xen
 link=https://wiki.openstack.org/wiki/ThirdPartySystems/XenProject_CI
 
-[target.xenserver]
-title=XenServer CI
-link=https://wiki.openstack.org/wiki/XenServer/XenServer_CI
-
 [target.vmware]
 title=VMware CI
 link=https://wiki.openstack.org/wiki/NovaVMware/Minesweeper
@@ -76,7 +72,6 @@ driver-notes-libvirt-virtuozzo-ct=This is not tested in a CI system, but it is i
 libvirt-virtuozzo-vm=partial
 driver-notes-libvirt-virtuozzo-vm=This is not tested in a CI system, but it is implemented.
 libvirt-xen=complete
-xenserver=complete
 vmware=complete
 hyperv=complete
 ironic=unknown
@@ -98,7 +93,6 @@ driver-notes-libvirt-virtuozzo-ct=This is not tested in a CI system, but it is i
 libvirt-virtuozzo-vm=partial
 driver-notes-libvirt-virtuozzo-vm=This is not tested in a CI system, but it is implemented.
 libvirt-xen=complete
-xenserver=complete
 vmware=unknown
 hyperv=unknown
 ironic=unknown
@@ -119,7 +113,6 @@ driver-notes-libvirt-virtuozzo-ct=This is not tested in a CI system, but it is i
 libvirt-virtuozzo-vm=partial
 driver-notes-libvirt-virtuozzo-vm=This is not tested in a CI system, but it is implemented.
 libvirt-xen=complete
-xenserver=complete
 vmware=complete
 hyperv=complete
 ironic=unknown
@@ -140,7 +133,6 @@ driver-notes-libvirt-virtuozzo-ct=This is not tested in a CI system, but it is i
 libvirt-virtuozzo-vm=partial
 driver-notes-libvirt-virtuozzo-vm=This is not tested in a CI system, but it is implemented.
 libvirt-xen=complete
-xenserver=complete
 vmware=complete
 hyperv=complete
 ironic=unknown
@@ -161,7 +153,6 @@ libvirt-virtuozzo-ct=complete
 libvirt-virtuozzo-vm=partial
 driver-notes-libvirt-virtuozzo-vm=This is not tested in a CI system, but it is implemented.
 libvirt-xen=complete
-xenserver=complete
 vmware=complete
 hyperv=complete
 ironic=unknown
@@ -181,7 +172,6 @@ libvirt-kvm-s390=unknown
 libvirt-virtuozzo-ct=complete
 libvirt-virtuozzo-vm=complete
 libvirt-xen=complete
-xenserver=complete
 vmware=complete
 hyperv=complete
 ironic=missing
@@ -206,8 +196,6 @@ libvirt-kvm-s390=unknown
 libvirt-virtuozzo-ct=missing
 libvirt-virtuozzo-vm=complete
 libvirt-xen=complete
-xenserver=partial
-driver-notes-xenserver=This is not tested in a CI system, and only partially implemented.
 vmware=partial
 driver-notes-vmware=This is not tested in a CI system, but it is implemented.
 hyperv=complete:n
@@ -231,8 +219,6 @@ libvirt-virtuozzo-ct=unknown
 libvirt-virtuozzo-vm=unknown
 libvirt-xen=partial
 driver-notes-libvirt-xen=This is not tested in a CI system, but it is implemented.
-xenserver=partial
-driver-notes-xenserver=This is not tested in a CI system, but it is implemented.
 vmware=partial
 driver-notes-vmware=This is not tested in a CI system, but it is implemented.
 hyperv=partial
@@ -256,7 +242,6 @@ libvirt-virtuozzo-ct=missing
 libvirt-virtuozzo-vm=partial
 driver-notes-libvirt-virtuozzo-vm=This is not tested in a CI system, but it is implemented.
 libvirt-xen=complete
-xenserver=complete
 vmware=partial
 driver-notes-vmware=This is not tested in a CI system, but it is implemented.
 hyperv=complete
@@ -279,7 +264,6 @@ driver-notes-libvirt-virtuozzo-ct=This is not tested in a CI system, but it is i
 libvirt-virtuozzo-vm=partial
 driver-notes-libvirt-virtuozzo-vm=This is not tested in a CI system, but it is implemented.
 libvirt-xen=complete
-xenserver=complete
 vmware=complete
 hyperv=complete
 ironic=missing
@@ -299,7 +283,6 @@ libvirt-kvm-s390=unknown
 libvirt-virtuozzo-ct=unknown
 libvirt-virtuozzo-vm=unknown
 libvirt-xen=complete
-xenserver=complete
 vmware=partial
 driver-notes-vmware=This is not tested in a CI system, but it is implemented.
 hyperv=partial
@@ -323,7 +306,6 @@ libvirt-virtuozzo-ct=partial
 driver-notes-libvirt-virtuozzo-ct=This is not tested in a CI system, but it is implemented.
 libvirt-virtuozzo-vm=complete
 libvirt-xen=complete
-xenserver=complete
 vmware=complete
 hyperv=partial
 driver-notes-hyperv=This is not tested in a CI system, but it is implemented.
@@ -346,7 +328,6 @@ libvirt-virtuozzo-ct=missing
 libvirt-virtuozzo-vm=partial
 driver-notes-libvirt-virtuozzo-vm=This is not tested in a CI system, but it is implemented.
 libvirt-xen=complete
-xenserver=complete
 vmware=complete
 hyperv=complete
 ironic=partial
@@ -368,8 +349,6 @@ libvirt-kvm-s390=unknown
 libvirt-virtuozzo-ct=missing
 libvirt-virtuozzo-vm=missing
 libvirt-xen=missing
-xenserver=partial
-driver-notes-xenserver=This is not tested in a CI system, but it is implemented.
 vmware=missing
 hyperv=partial
 driver-notes-hyperv=This is not tested in a CI system, but it is implemented.
@@ -392,7 +371,6 @@ libvirt-kvm-s390=unknown
 libvirt-virtuozzo-ct=missing
 libvirt-virtuozzo-vm=complete
 libvirt-xen=complete
-xenserver=complete
 vmware=missing
 hyperv=complete
 ironic=missing
@@ -18,10 +18,6 @@ link=https://wiki.openstack.org/wiki/ThirdPartySystems/Virtuozzo_Storage_CI
 title=libvirt+xen
 link=https://wiki.openstack.org/wiki/ThirdPartySystems/XenProject_CI
 
-[target.xenserver]
-title=XenServer CI
-link=https://wiki.openstack.org/wiki/XenServer/XenServer_CI
-
 [target.vmware]
 title=VMware CI
 link=https://wiki.openstack.org/wiki/NovaVMware/Minesweeper
@@ -57,7 +53,6 @@ driver-notes-libvirt-virtuozzo-ct=This is not tested in a CI system, but it is i
 libvirt-virtuozzo-vm=partial
 driver-notes-libvirt-virtuozzo-vm=This is not tested in a CI system, but it is implemented.
 libvirt-xen=missing
-xenserver=partial:k
 vmware=missing
 hyperv=missing
 ironic=unknown
@@ -75,7 +70,6 @@ libvirt-kvm-s390=unknown
 libvirt-virtuozzo-ct=unknown
 libvirt-virtuozzo-vm=unknown
 libvirt-xen=unknown
-xenserver=partial:queens
 vmware=missing
 hyperv=missing
 ironic=missing
@ -71,9 +71,6 @@
|
||||
# document, and merge it with this when their code merges into
|
||||
# Nova core.
|
||||
|
||||
[driver.xenserver]
|
||||
title=XenServer
|
||||
|
||||
[driver.libvirt-kvm-x86]
|
||||
title=Libvirt KVM (x86)
|
||||
|
||||
@ -128,7 +125,6 @@ notes=The attach volume operation provides a means to hotplug
|
||||
is considered to be more of a pet than cattle. Therefore
|
||||
this operation is not considered to be mandatory to support.
|
||||
cli=nova volume-attach <server> <volume>
|
||||
driver.xenserver=complete
|
||||
driver.libvirt-kvm-x86=complete
|
||||
driver.libvirt-kvm-aarch64=complete
|
||||
driver.libvirt-kvm-ppc64=complete
|
||||
@ -152,7 +148,6 @@ status=optional
|
||||
notes=Attach a block device with a tag to an existing server instance. See
|
||||
"Device tags" for more information.
|
||||
cli=nova volume-attach <server> <volume> [--tag <tag>]
|
||||
driver.xenserver=missing
|
||||
driver.libvirt-kvm-x86=complete
|
||||
driver.libvirt-kvm-aarch64=complete
|
||||
driver.libvirt-kvm-ppc64=complete
|
||||
@@ -173,7 +168,6 @@ title=Detach block volume from instance
 status=optional
 notes=See notes for attach volume operation.
 cli=nova volume-detach <server> <volume>
-driver.xenserver=complete
 driver.libvirt-kvm-x86=complete
 driver.libvirt-kvm-aarch64=complete
 driver.libvirt-kvm-ppc64=complete
@@ -203,7 +197,6 @@ notes=The extend volume operation provides a means to extend
 where the instance is considered to be more of a pet than cattle.
 Therefore this operation is not considered to be mandatory to support.
 cli=cinder extend <volume> <new_size>
-driver.xenserver=missing
 driver.libvirt-kvm-x86=complete
 driver.libvirt-kvm-aarch64=complete
 driver.libvirt-kvm-ppc64=unknown
@@ -232,7 +225,6 @@ notes=The attach interface operation provides a means to hotplug
 In a cloud model it would be more typical to just spin up a
 new instance with more interfaces.
 cli=nova interface-attach <server>
-driver.xenserver=complete
 driver.libvirt-kvm-x86=complete
 driver.libvirt-kvm-aarch64=complete
 driver.libvirt-kvm-ppc64=complete
@@ -257,7 +249,6 @@ status=optional
 notes=Attach a virtual network interface with a tag to an existing
 server instance. See "Device tags" for more information.
 cli=nova interface-attach <server> [--tag <tag>]
-driver.xenserver=missing
 driver.libvirt-kvm-x86=complete
 driver.libvirt-kvm-aarch64=unknown
 driver.libvirt-kvm-ppc64=complete
@@ -278,7 +269,6 @@ title=Detach virtual network interface from instance
 status=optional
 notes=See notes for attach-interface operation.
 cli=nova interface-detach <server> <port_id>
-driver.xenserver=complete
 driver.libvirt-kvm-x86=complete
 driver.libvirt-kvm-aarch64=complete
 driver.libvirt-kvm-ppc64=complete
@@ -308,7 +298,6 @@ notes=This operation allows a host to be placed into maintenance
 The driver methods to implement are "host_maintenance_mode" and
 "set_host_enabled".
 cli=nova host-update <host>
-driver.xenserver=complete
 driver.libvirt-kvm-x86=missing
 driver.libvirt-kvm-aarch64=missing
 driver.libvirt-kvm-ppc64=missing
@@ -336,7 +325,6 @@ notes=A possible failure scenario in a cloud environment is the outage
 dropped. That happens in the same way as a rebuild.
 This is not considered to be a mandatory operation to support.
 cli=nova evacuate <server>;nova host-evacuate <host>
-driver.xenserver=unknown
 driver.libvirt-kvm-x86=complete
 driver.libvirt-kvm-aarch64=complete
 driver.libvirt-kvm-ppc64=unknown
@@ -361,7 +349,6 @@ notes=A possible use case is additional attributes need to be set
 'personalities'. Though this is not considered to be a mandatory
 operation to support.
 cli=nova rebuild <server> <image>
-driver.xenserver=complete
 driver.libvirt-kvm-x86=complete
 driver.libvirt-kvm-aarch64=complete
 driver.libvirt-kvm-ppc64=complete
@@ -385,7 +372,6 @@ notes=Provides realtime information about the power state of the guest
 tracking changes in guests, this operation is considered mandatory to
 support.
 cli=
-driver.xenserver=complete
 driver.libvirt-kvm-x86=complete
 driver.libvirt-kvm-aarch64=complete
 driver.libvirt-kvm-ppc64=complete
@@ -407,7 +393,6 @@ status=optional
 notes=Returns the result of host uptime since power on,
 it's used to report hypervisor status.
 cli=
-driver.xenserver=complete
 driver.libvirt-kvm-x86=complete
 driver.libvirt-kvm-aarch64=complete
 driver.libvirt-kvm-ppc64=complete
@@ -429,7 +414,6 @@ status=optional
 notes=Returns the ip of this host, it's used when doing
 resize and migration.
 cli=
-driver.xenserver=complete
 driver.libvirt-kvm-x86=complete
 driver.libvirt-kvm-aarch64=complete
 driver.libvirt-kvm-ppc64=complete
@@ -460,7 +444,6 @@ notes=Live migration provides a way to move an instance off one
 built on the container based virtualization. Therefore this
 operation is not considered mandatory to support.
 cli=nova live-migration <server>;nova host-evacuate-live <host>
-driver.xenserver=complete
 driver.libvirt-kvm-x86=complete
 driver.libvirt-kvm-aarch64=missing
 driver.libvirt-kvm-ppc64=complete
@@ -490,7 +473,6 @@ notes=Live migration provides a way to move a running instance to another
 a switch to post-copy mode. Otherwise the instance will be suspended
 until the migration is completed or aborted.
 cli=nova live-migration-force-complete <server> <migration>
-driver.xenserver=missing
 driver.libvirt-kvm-x86=complete
 driver.libvirt-kvm-aarch64=missing
 driver-notes.libvirt-kvm-x86=Requires libvirt>=1.3.3, qemu>=2.5.0
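The post-copy switch referenced above is gated by libvirt driver configuration; a minimal nova.conf sketch, assuming a host that meets the libvirt>=1.3.3/qemu>=2.5.0 requirement noted in the matrix:

    [libvirt]
    # Permit an in-progress live migration to switch to post-copy mode,
    # which live-migration-force-complete can then trigger.
    live_migration_permit_post_copy = True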
@@ -525,7 +507,6 @@ notes=Live migration provides a way to move a running instance to another
 the job status changes to "running", only some of the hypervisors support
 this feature.
 cli=nova live-migration-abort <server> <migration>
-driver.xenserver=missing
 driver.libvirt-kvm-x86=complete
 driver.libvirt-kvm-aarch64=missing
 driver.libvirt-kvm-ppc64=complete
@@ -548,7 +529,6 @@ notes=Importing pre-existing running virtual machines on a host is
 considered out of scope of the cloud paradigm. Therefore this
 operation is mandatory to support in drivers.
 cli=
-driver.xenserver=complete
 driver.libvirt-kvm-x86=complete
 driver.libvirt-kvm-aarch64=complete
 driver.libvirt-kvm-ppc64=complete
@@ -577,7 +557,6 @@ notes=Stopping an instances CPUs can be thought of as roughly
 implement it. Therefore this operation is considered optional
 to support in drivers.
 cli=nova pause <server>
-driver.xenserver=complete
 driver.libvirt-kvm-x86=complete
 driver.libvirt-kvm-aarch64=complete
 driver.libvirt-kvm-ppc64=complete
@@ -602,7 +581,6 @@ notes=It is reasonable for a guest OS administrator to trigger a
 reboot can be achieved by a combination of stop+start. Therefore
 this operation is considered optional.
 cli=nova reboot <server>
-driver.xenserver=complete
 driver.libvirt-kvm-x86=complete
 driver.libvirt-kvm-aarch64=complete
 driver.libvirt-kvm-ppc64=complete
@@ -630,7 +608,6 @@ notes=The rescue operation starts an instance in a special
 thrown away and a new instance created. Therefore this
 operation is considered optional to support in drivers.
 cli=nova rescue <server>
-driver.xenserver=complete
 driver.libvirt-kvm-x86=complete
 driver.libvirt-kvm-aarch64=unknown
 driver.libvirt-kvm-ppc64=complete
@@ -658,7 +635,6 @@ notes=The resize operation allows the user to change a running
 running instance. Therefore this operation is considered
 optional to support in drivers.
 cli=nova resize <server> <flavor>
-driver.xenserver=complete
 driver.libvirt-kvm-x86=complete
 driver.libvirt-kvm-aarch64=complete
 driver.libvirt-kvm-ppc64=complete
@@ -681,7 +657,6 @@ title=Restore instance
 status=optional
 notes=See notes for the suspend operation
 cli=nova resume <server>
-driver.xenserver=complete
 driver.libvirt-kvm-x86=complete
 driver.libvirt-kvm-aarch64=complete
 driver.libvirt-kvm-ppc64=complete
@@ -712,8 +687,6 @@ notes=Provides a mechanism to (re)set the password of the administrator
 this is just a convenient optimization. Therefore this operation is
 not considered mandatory for drivers to support.
 cli=nova set-password <server>
-driver.xenserver=complete
-driver-notes.xenserver=Requires XenAPI agent on the guest.
 driver.libvirt-kvm-x86=complete
 driver.libvirt-kvm-aarch64=unknown
 driver-notes.libvirt-kvm-x86=Requires libvirt>=1.2.16 and hw_qemu_guest_agent.
@@ -747,7 +720,6 @@ notes=The snapshot operation allows the current state of the
 snapshot cannot be assumed. Therefore this operation is not
 considered mandatory to support.
 cli=nova image-create <server> <name>
-driver.xenserver=complete
 driver.libvirt-kvm-x86=complete
 driver.libvirt-kvm-aarch64=complete
 driver.libvirt-kvm-ppc64=complete
@@ -787,7 +759,6 @@ notes=Suspending an instance can be thought of as roughly
 the instance instead of suspending. Therefore this operation
 is considered optional to support.
 cli=nova suspend <server>
-driver.xenserver=complete
 driver.libvirt-kvm-x86=complete
 driver.libvirt-kvm-aarch64=complete
 driver.libvirt-kvm-ppc64=complete
@@ -815,7 +786,6 @@ notes=The swap volume operation is a mechanism for changing a running
 migration to work in the volume service. This is considered optional to
 support.
 cli=nova volume-update <server> <attachment> <volume>
-driver.xenserver=missing
 driver.libvirt-kvm-x86=complete
 driver.libvirt-kvm-aarch64=unknown
 driver.libvirt-kvm-ppc64=complete
@@ -839,7 +809,6 @@ notes=The ability to terminate a virtual machine is required in
 avoid indefinitely ongoing billing. Therefore this operation
 is mandatory to support in drivers.
 cli=nova delete <server>
-driver.xenserver=complete
 driver.libvirt-kvm-x86=complete
 driver.libvirt-kvm-aarch64=complete
 driver.libvirt-kvm-ppc64=complete
@@ -868,7 +837,6 @@ notes=The trigger crash dump operation is a mechanism for triggering
 a means to dump the production memory image as a dump file which is useful
 for users. Therefore this operation is considered optional to support.
 cli=nova trigger-crash-dump <server>
-driver.xenserver=missing
 driver.libvirt-kvm-x86=complete
 driver.libvirt-kvm-aarch64=unknown
 driver.libvirt-kvm-ppc64=complete
@@ -889,7 +857,6 @@ title=Resume instance CPUs (unpause)
 status=optional
 notes=See notes for the "Stop instance CPUs" operation
 cli=nova unpause <server>
-driver.xenserver=complete
 driver.libvirt-kvm-x86=complete
 driver.libvirt-kvm-aarch64=complete
 driver.libvirt-kvm-ppc64=complete
@@ -912,7 +879,6 @@ notes=Partition and resize FS to match the size specified by
 flavors.root_gb, As this is hypervisor specific feature.
 Therefore this operation is considered optional to support.
 cli=
-driver.xenserver=complete
 driver.libvirt-kvm-x86=missing
 driver.libvirt-kvm-aarch64=missing
 driver.libvirt-kvm-ppc64=missing
@@ -938,7 +904,6 @@ notes=The ability to set rate limits on virtual disks allows for
 of doing fine grained tuning. Therefore this is not considered
 to be an mandatory configuration to support.
 cli=nova limits
-driver.xenserver=missing
 driver.libvirt-kvm-x86=complete
 driver.libvirt-kvm-aarch64=complete
 driver.libvirt-kvm-ppc64=complete
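With the libvirt driver, disk rate limits of this kind are commonly expressed as flavor extra specs; a sketch, with the flavor name and values chosen purely for illustration:

    $ openstack flavor set m1.small \
        --property quota:disk_read_bytes_sec=10240000 \
        --property quota:disk_write_iops_sec=1000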
@@ -967,7 +932,6 @@ notes=The config drive provides an information channel into
 of the guest setup mechanisms is required to be supported by
 drivers, in order to enable login access.
 cli=
-driver.xenserver=complete
 driver.libvirt-kvm-x86=complete
 driver.libvirt-kvm-aarch64=complete
 driver-notes.libvirt-kvm-aarch64=Requires kernel with proper config (oldest known: Ubuntu 4.13 HWE)
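A config drive is requested per server at boot; a minimal sketch (image, flavor, and server names are placeholders):

    $ openstack server create --image cirros --flavor m1.tiny \
        --config-drive true demo-server

Operators can instead force a config drive for every instance by setting force_config_drive = True in nova.conf.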
@@ -997,7 +961,6 @@ notes=This allows for the end user to provide data for multiple
 service or config drive. Therefore this operation is considered
 optional to support.
 cli=
-driver.xenserver=complete
 driver.libvirt-kvm-x86=complete
 driver.libvirt-kvm-aarch64=unknown
 driver.libvirt-kvm-ppc64=missing
@@ -1027,8 +990,6 @@ notes=This allows for static networking configuration (IP
 config drive. Therefore this operation is considered optional
 to support.
 cli=
-driver.xenserver=partial
-driver-notes.xenserver=Only for Debian derived guests
 driver.libvirt-kvm-x86=partial
 driver-notes.libvirt-kvm-x86=Only for Debian derived guests
 driver.libvirt-kvm-aarch64=unknown
@@ -1059,7 +1020,6 @@ notes=This allows the administrator to interact with the graphical
 mandatory, however, a driver is required to support at least one
 of the listed console access operations.
 cli=nova get-rdp-console <server> <console-type>
-driver.xenserver=missing
 driver.libvirt-kvm-x86=missing
 driver.libvirt-kvm-aarch64=missing
 driver.libvirt-kvm-ppc64=missing
@@ -1088,7 +1048,6 @@ notes=This allows the administrator to query the logs of data
 operation is not mandatory, however, a driver is required to
 support at least one of the listed console access operations.
 cli=nova console-log <server>
-driver.xenserver=complete
 driver.libvirt-kvm-x86=complete
 driver.libvirt-kvm-aarch64=complete
 driver.libvirt-kvm-ppc64=missing
@@ -1118,7 +1077,6 @@ notes=This allows the administrator to interact with the serial
 This feature was introduced in the Juno release with blueprint
 https://blueprints.launchpad.net/nova/+spec/serial-ports
 cli=nova get-serial-console <server>
-driver.xenserver=missing
 driver.libvirt-kvm-x86=complete
 driver.libvirt-kvm-aarch64=unknown
 driver.libvirt-kvm-ppc64=unknown
@@ -1146,7 +1104,6 @@ notes=This allows the administrator to interact with the graphical
 mandatory, however, a driver is required to support at least one
 of the listed console access operations.
 cli=nova get-spice-console <server> <console-type>
-driver.xenserver=missing
 driver.libvirt-kvm-x86=complete
 driver.libvirt-kvm-aarch64=unknown
 driver.libvirt-kvm-ppc64=missing
@@ -1174,7 +1131,6 @@ notes=This allows the administrator to interact with the graphical
 mandatory, however, a driver is required to support at least one
 of the listed console access operations.
 cli=nova get-vnc-console <server> <console-type>
-driver.xenserver=complete
 driver.libvirt-kvm-x86=complete
 driver.libvirt-kvm-aarch64=complete
 driver.libvirt-kvm-ppc64=missing
@@ -1204,7 +1160,6 @@ notes=Block storage provides instances with direct attached
 the network. Therefore support for this configuration is not
 considered mandatory for drivers to support.
 cli=
-driver.xenserver=complete
 driver.libvirt-kvm-x86=complete
 driver.libvirt-kvm-aarch64=complete
 driver.libvirt-kvm-ppc64=complete
@@ -1230,7 +1185,6 @@ notes=To maximise performance of the block storage, it may be desirable
 technology on the compute hosts. Since this is just a performance
 optimization of the I/O path it is not considered mandatory to support.
 cli=
-driver.xenserver=missing
 driver.libvirt-kvm-x86=complete
 driver.libvirt-kvm-aarch64=unknown
 driver.libvirt-kvm-ppc64=missing
@@ -1259,7 +1213,6 @@ notes=If the driver wishes to support block storage, it is common to
 block storage, then this is considered mandatory to support, otherwise
 it is considered optional.
 cli=
-driver.xenserver=complete
 driver.libvirt-kvm-x86=complete
 driver.libvirt-kvm-aarch64=unknown
 driver.libvirt-kvm-ppc64=complete
@@ -1283,7 +1236,6 @@ notes=If accessing the cinder iSCSI service over an untrusted LAN it
 protocol. CHAP is the commonly used authentication protocol for
 iSCSI. This is not considered mandatory to support. (?)
 cli=
-driver.xenserver=complete
 driver.libvirt-kvm-x86=complete
 driver.libvirt-kvm-aarch64=unknown
 driver.libvirt-kvm-ppc64=complete
@@ -1309,7 +1261,6 @@ notes=This refers to the ability to boot an instance from an image
 on external PXE servers is out of scope. Therefore this is considered
 a mandatory storage feature to support.
 cli=nova boot --image <image> <name>
-driver.xenserver=complete
 driver.libvirt-kvm-x86=complete
 driver.libvirt-kvm-aarch64=complete
 driver.libvirt-kvm-ppc64=complete
@@ -1330,7 +1281,6 @@ title=uefi boot
 status=optional
 notes=This allows users to boot a guest with uefi firmware.
 cli=
-driver.xenserver=missing
 driver.libvirt-kvm-x86=complete
 driver.libvirt-kvm-aarch64=complete
 driver.libvirt-kvm-ppc64=missing
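With the libvirt driver, UEFI boot is typically requested through an image property rather than a CLI flag; a sketch (image name is a placeholder):

    $ openstack image set --property hw_firmware_type=uefi demo-image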
@@ -1365,7 +1315,6 @@ notes=This allows users to set tags on virtual devices when creating a
 Instead, device role tags should be used. Device tags can be
 applied to virtual network interfaces and block devices.
 cli=nova boot
-driver.xenserver=complete
 driver.libvirt-kvm-x86=complete
 driver.libvirt-kvm-aarch64=complete
 driver.libvirt-kvm-ppc64=complete
@@ -1388,7 +1337,6 @@ notes=Quiesce the specified instance to prepare for snapshots.
 For libvirt, guest filesystems will be frozen through qemu
 agent.
 cli=
-driver.xenserver=missing
 driver.libvirt-kvm-x86=complete
 driver.libvirt-kvm-aarch64=unknown
 driver.libvirt-kvm-ppc64=complete
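Freezing filesystems through the qemu agent presupposes the agent is installed in the guest and advertised on the image; a sketch (image name is a placeholder):

    $ openstack image set --property hw_qemu_guest_agent=yes demo-image

Snapshots of instances booted from such an image can then be quiesced automatically where the driver supports it.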
@@ -1409,7 +1357,6 @@ title=unquiesce
 status=optional
 notes=See notes for the quiesce operation
 cli=
-driver.xenserver=missing
 driver.libvirt-kvm-x86=complete
 driver.libvirt-kvm-aarch64=unknown
 driver.libvirt-kvm-ppc64=complete
@@ -1435,7 +1382,6 @@ notes=The multiattach volume operation is an extension to
 Note that for the libvirt driver, this is only supported
 if qemu<2.10 or libvirt>=3.10.
 cli=nova volume-attach <server> <volume>
-driver.xenserver=missing
 driver.libvirt-kvm-x86=complete
 driver.libvirt-kvm-aarch64=unknown
 driver.libvirt-kvm-ppc64=complete
@@ -1461,7 +1407,6 @@ notes=This is the same as the attach volume operation
 volume is optional this feature is also optional for
 compute drivers to support.
 cli=nova volume-attach <server> <volume>
-driver.xenserver=missing
 driver.libvirt-kvm-x86=complete
 driver-notes.libvirt-kvm-x86=For native QEMU decryption of the
 encrypted volume (and rbd support), QEMU>=2.6.0 and libvirt>=2.2.0
@@ -1492,7 +1437,6 @@ notes=Since trusted image certification validation is configurable
 drivers cannot support the feature since it is mostly just plumbing
 user requests through the virt driver when downloading images.
 cli=nova boot --trusted-image-certificate-id ...
-driver.xenserver=missing
 driver.libvirt-kvm-x86=complete
 driver.libvirt-kvm-aarch64=complete
 driver.libvirt-kvm-ppc64=complete
@@ -1517,7 +1461,6 @@ notes=The file backed memory feature in Openstack allows a Nova node to serve
 within the libvirt memory backing directory. This is only supported if
 qemu>2.6 and libvirt>4.0.0
 cli=
-driver.xenserver=missing
 driver.libvirt-kvm-x86=complete
 driver.libvirt-kvm-aarch64=unknown
 driver.libvirt-kvm-ppc64=unknown
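File backed memory is enabled per compute node in the libvirt section of nova.conf; a minimal sketch, with the pool size chosen for illustration:

    [libvirt]
    # Serve instance memory from files in the memory backing directory.
    # The value is the size of the pool in MiB; 0 disables the feature.
    file_backed_memory = 1048576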
@@ -1540,7 +1483,6 @@ notes=The report CPU traits feature in OpenStack allows a Nova node to report
 its CPU traits according to CPU mode configuration. This gives users the ability
 to boot instances based on desired CPU traits.
 cli=
-driver.xenserver=missing
 driver.libvirt-kvm-x86=complete
 driver.libvirt-kvm-aarch64=unknown
 driver.libvirt-kvm-ppc64=complete
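Booting on desired CPU traits is expressed through flavor extra specs resolved by placement; a sketch, with the flavor name and trait chosen for illustration:

    $ openstack flavor set m1.avx2 --property trait:HW_CPU_X86_AVX2=required
    $ openstack server create --flavor m1.avx2 --image demo-image demo-server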
@@ -1565,7 +1507,6 @@ notes=To support neutron SR-IOV ports (vnic_type=direct or vnic_type=macvtap)
 key in the dict returned from the ComputeDriver.get_available_resource()
 call.
 cli=nova boot --nic port-id <neutron port with resource request> ...
-driver.xenserver=missing
 driver.libvirt-kvm-x86=complete
 driver.libvirt-kvm-aarch64=missing
 driver.libvirt-kvm-ppc64=missing
@@ -1590,7 +1531,6 @@ notes=The feature allows VMs to be booted with their memory
 other than the user of the VM. The Configuration and Security
 Guides specify usage of this feature.
 cli=openstack server create <usual server create parameters>
-driver.xenserver=missing
 driver.libvirt-kvm-x86=partial
 driver-notes.libvirt-kvm-x86=This feature is currently only
 available with hosts which support the SEV (Secure Encrypted
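As the Configuration and Security Guides referenced above describe, memory encryption is requested via a flavor extra spec (an equivalent image property also exists); a sketch with a hypothetical flavor:

    $ openstack flavor set m1.sev --property hw:mem_encryption=True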
@@ -1618,7 +1558,6 @@ notes=Drivers supporting this feature cache base images on the compute host so
 support allows priming the cache so that the first boot also benefits. Image
 caching support is tunable via config options in the [image_cache] group.
 cli=openstack server create <usual server create parameters>
-driver.xenserver=missing
 driver.libvirt-kvm-x86=complete
 driver.libvirt-kvm-aarch64=complete
 driver.libvirt-kvm-ppc64=complete
@@ -1645,7 +1584,6 @@ notes=Allows VMs to be booted with an emulated trusted platform module (TPM)
 the user's credentials are required to unlock the virtual device files on the
 host.
 cli=openstack server create <usual server create parameters>
-driver.xenserver=missing
 driver.libvirt-kvm-x86=partial
 driver-notes.libvirt-kvm-x86=Move operations are not yet supported.
 driver.libvirt-kvm-aarch64=missing

@@ -37,7 +37,6 @@ Defines which driver to use for controlling virtualization.
 Possible values:

 * ``libvirt.LibvirtDriver``
-* ``xenapi.XenAPIDriver``
 * ``fake.FakeDriver``
 * ``ironic.IronicDriver``
 * ``vmwareapi.VMwareVCDriver``
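For reference, the driver is selected in nova.conf; a minimal sketch for the common libvirt case, which also covers Xen guests via libvirt now that the XenAPI driver is gone:

    [DEFAULT]
    compute_driver = libvirt.LibvirtDriver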