Merge "libvirt: Drop support for Xen"
This commit is contained in:
commit d92c0740c6
@@ -68,7 +68,6 @@ availability zones. Compute supports the following hypervisors:
 - `VMware vSphere
   <https://www.vmware.com/support/vsphere-hypervisor.html>`__
 
-- `Xen (using libvirt) <https://www.xenproject.org>`__
 
 - `zVM <https://www.ibm.com/it-infrastructure/z/zvm>`__
 
@@ -811,7 +811,7 @@ Tag VMware images
 In a mixed hypervisor environment, OpenStack Compute uses the
 ``hypervisor_type`` tag to match images to the correct hypervisor type. For
 VMware images, set the hypervisor type to ``vmware``. Other valid hypervisor
-types include: ``hyperv``, ``ironic``, ``lxc``, ``qemu``, and ``xen``.
+types include: ``hyperv``, ``ironic``, ``lxc``, and ``qemu``.
 Note that ``qemu`` is used for both QEMU and KVM hypervisor types.
 
 .. code-block:: console
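The ``hypervisor_type`` matching this hunk documents can be sketched in a few lines of Python. This is an illustrative model only, not nova's actual ``ImagePropertiesFilter``; the ``pick_hosts`` helper and the host map are invented for the example:

```python
# Illustrative sketch of hypervisor_type-based scheduling; not nova code.
# As the surrounding text notes, a "qemu" image tag covers both QEMU and
# KVM hosts, while an untagged image may land on any host.

def pick_hosts(image_props, hosts):
    """Return the hosts whose hypervisor can run the image."""
    wanted = image_props.get("hypervisor_type")
    if wanted is None:
        return list(hosts)          # untagged images schedule anywhere
    matches = {wanted}
    if wanted == "qemu":
        matches.add("kvm")          # qemu covers QEMU and KVM
    return [h for h, htype in hosts.items() if htype in matches]

hosts = {"node1": "kvm", "node2": "vmware", "node3": "hyperv"}
print(pick_hosts({"hypervisor_type": "vmware"}, hosts))  # ['node2']
print(pick_hosts({"hypervisor_type": "qemu"}, hosts))    # ['node1']
```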
@@ -1,239 +0,0 @@
-===============
-Xen via libvirt
-===============
-
-OpenStack Compute supports the Xen Project Hypervisor (or Xen). Xen can be
-integrated with OpenStack Compute via the `libvirt <http://libvirt.org/>`_
-`toolstack <http://wiki.xen.org/wiki/Choice_of_Toolstacks>`_.
-
-Installing Xen with libvirt
-~~~~~~~~~~~~~~~~~~~~~~~~~~~
-
-At this stage we recommend using the baseline that we use for the `Xen Project
-OpenStack CI Loop
-<http://wiki.xenproject.org/wiki/OpenStack_CI_Loop_for_Xen-Libvirt>`_, which
-contains the most recent stability fixes to both Xen and libvirt.
-
-`Xen 4.5.1
-<https://xenproject.org/downloads/xen-project-archives/xen-project-4-5-series/xen-project-4-5-1/>`_
-(or newer) and `libvirt 1.2.15 <http://libvirt.org/sources/>`_ (or newer)
-contain the minimum required OpenStack improvements for Xen. Although libvirt
-1.2.15 works with Xen, libvirt 1.3.2 or newer is recommended. The necessary
-Xen changes have also been backported to the Xen 4.4.3 stable branch. Please
-check with the Linux and FreeBSD distros you are intending to use as `Dom 0
-<http://wiki.xenproject.org/wiki/Category:Host_Install>`_ whether the relevant
-versions of Xen and libvirt are available as installable packages.
-
-The latest releases of Xen and libvirt packages that fulfil the above minimum
-requirements for the various openSUSE distributions can always be found and
-installed from the `Open Build Service
-<https://build.opensuse.org/project/show/Virtualization>`_ Virtualization
-project. To install these latest packages, add the Virtualization repository
-to your software management stack and get the newest packages from there. More
-information about the latest Xen and libvirt packages is available `here
-<https://build.opensuse.org/package/show/Virtualization/xen>`__ and `here
-<https://build.opensuse.org/package/show/Virtualization/libvirt>`__.
-
-Alternatively, it is possible to use the Ubuntu LTS 14.04 Xen package
-**4.4.1-0ubuntu0.14.04.4** (Xen 4.4.1) and apply the patches outlined `here
-<http://wiki.xenproject.org/wiki/OpenStack_CI_Loop_for_Xen-Libvirt#Baseline>`__.
-You can also use the Ubuntu LTS 14.04 libvirt package **1.2.2
-libvirt_1.2.2-0ubuntu13.1.7** as baseline and update it to libvirt version
-1.2.15, or 1.2.14 with the patches outlined `here
-<http://wiki.xenproject.org/wiki/OpenStack_CI_Loop_for_Xen-Libvirt#Baseline>`__
-applied. Note that this will require rebuilding these packages partly from
-source.
-
-For further information and latest developments, you may want to consult the
-Xen Project's `mailing lists for OpenStack related issues and questions
-<http://lists.xenproject.org/cgi-bin/mailman/listinfo/wg-openstack>`_.
-
-Configuring Xen with libvirt
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-
-To enable Xen via libvirt, ensure the following options are set in
-``/etc/nova/nova.conf`` on all hosts running the ``nova-compute`` service.
-
-.. code-block:: ini
-
-   compute_driver = libvirt.LibvirtDriver
-
-   [libvirt]
-   virt_type = xen
-
-Additional configuration options
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-
-Use the following as a guideline for configuring Xen for use in OpenStack:
-
-#. **Dom0 memory**: Set it between 1GB and 4GB by adding the following
-   parameter to the Xen Boot Options in the `grub.conf
-   <http://xenbits.xen.org/docs/unstable/misc/xen-command-line.html>`_ file.
-
-   .. code-block:: ini
-
-      dom0_mem=1024M
-
-   .. note::
-
-      The above memory limits are suggestions and should be based on the
-      available compute host resources. For large hosts that will run many
-      hundreds of instances, the suggested values may need to be higher.
-
-   .. note::
-
-      The location of the grub.conf file depends on the host Linux
-      distribution that you are using. Please refer to the distro
-      documentation for more details (see `Dom 0
-      <http://wiki.xenproject.org/wiki/Category:Host_Install>`_ for more
-      resources).
-
-#. **Dom0 vcpus**: Set the virtual CPUs to 4 and employ CPU pinning by adding
-   the following parameters to the Xen Boot Options in the `grub.conf
-   <http://xenbits.xen.org/docs/unstable/misc/xen-command-line.html>`_ file.
-
-   .. code-block:: ini
-
-      dom0_max_vcpus=4 dom0_vcpus_pin
-
-   .. note::
-
-      The above virtual CPU limits are suggestions and should be based on the
-      available compute host resources. For large hosts that will run many
-      hundreds of instances, the suggested values may need to be higher.
-
-#. **PV vs HVM guests**: A Xen virtual machine can be paravirtualized (PV) or
-   hardware virtualized (HVM). The virtualization mode determines the
-   interaction between Xen, Dom 0, and the guest VM's kernel. PV guests are
-   aware of the fact that they are virtualized and will co-operate with Xen
-   and Dom 0. The choice of virtualization mode determines performance
-   characteristics. For an overview of Xen virtualization modes, see `Xen
-   Guest Types <http://wiki.xen.org/wiki/Xen_Overview#Guest_Types>`_.
-
-   In OpenStack, customer VMs may run in either PV or HVM mode. The mode is a
-   property of the operating system image used by the VM, and is changed by
-   adjusting the image metadata stored in the Image service. The image
-   metadata can be changed using the :command:`openstack` commands.
-
-   To choose one of the HVM modes (HVM, HVM with PV Drivers or PVHVM), use
-   :command:`openstack` to set the ``vm_mode`` property to ``hvm``:
-
-   .. code-block:: console
-
-      $ openstack image set --property vm_mode=hvm IMAGE
-
-   To choose PV mode, which is supported by NetBSD, FreeBSD and Linux, set
-   the ``vm_mode`` property to ``xen``:
-
-   .. code-block:: console
-
-      $ openstack image set --property vm_mode=xen IMAGE
-
-   .. note::
-
-      The default for virtualization mode in nova is PV mode.
-
-#. **Image formats**: Xen supports raw, qcow2 and vhd image formats. For more
-   information on image formats, refer to the `OpenStack Virtual Image Guide
-   <https://docs.openstack.org/image-guide/introduction.html>`__ and the
-   `Storage Options Guide on the Xen Project Wiki
-   <http://wiki.xenproject.org/wiki/Storage_options>`_.
-
-#. **Image metadata**: In addition to the ``vm_mode`` property discussed
-   above, the ``hypervisor_type`` property is another important component of
-   the image metadata, especially if your cloud contains mixed hypervisor
-   compute nodes. Setting the ``hypervisor_type`` property allows the nova
-   scheduler to select a compute node running the specified hypervisor when
-   launching instances of the image. Image metadata such as ``vm_mode``,
-   ``hypervisor_type``, architecture, and others can be set when importing
-   the image to the Image service. The metadata can also be changed using
-   the :command:`openstack` commands:
-
-   .. code-block:: console
-
-      $ openstack image set --property hypervisor_type=xen --property vm_mode=hvm IMAGE
-
-   For more information on image metadata, refer to the `OpenStack Virtual
-   Image Guide
-   <https://docs.openstack.org/image-guide/introduction.html#image-metadata>`__.
-
-#. **Libguestfs file injection**: OpenStack compute nodes can use `libguestfs
-   <http://libguestfs.org/>`_ to inject files into an instance's image prior
-   to launching the instance. libguestfs uses libvirt's QEMU driver to start
-   a qemu process, which is then used to inject files into the image. When
-   using libguestfs for file injection, the compute node must have the
-   libvirt qemu driver installed, in addition to the Xen driver. In RPM based
-   distributions, the qemu driver is provided by the ``libvirt-daemon-qemu``
-   package. In Debian and Ubuntu, the qemu driver is provided by the
-   ``libvirt-bin`` package.
-
-Troubleshoot Xen with libvirt
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-
-**Important log files**: When an instance fails to start, or when you come
-across other issues, you should first consult the following log files:
-
-* ``/var/log/nova/nova-compute.log``
-
-* ``/var/log/libvirt/libxl/libxl-driver.log``
-
-* ``/var/log/xen/qemu-dm-${instancename}.log``
-
-* ``/var/log/xen/xen-hotplug.log``
-
-* ``/var/log/xen/console/guest-${instancename}`` (to enable, see `Enabling
-  Guest Console Logs
-  <http://wiki.xen.org/wiki/Reporting_Bugs_against_Xen#Guest_console_logs>`_)
-
-* Host Console Logs (read `Enabling and Retrieving Host Console Logs
-  <http://wiki.xen.org/wiki/Reporting_Bugs_against_Xen#Host_console_logs>`_).
-
-If you need further help, you can ask questions on the mailing lists
-`xen-users@ <http://lists.xenproject.org/cgi-bin/mailman/listinfo/xen-users>`_ and
-`wg-openstack@ <http://lists.xenproject.org/cgi-bin/mailman/listinfo/wg-openstack>`_,
-or `raise a bug <http://wiki.xen.org/wiki/Reporting_Bugs_against_Xen>`_
-against Xen.
-
-Known issues
-~~~~~~~~~~~~
-
-* **Live migration**: Live migration is supported in the libvirt libxl driver
-  since version 1.2.5. However, there were a number of issues when used with
-  OpenStack, in particular with libvirt migration protocol compatibility. It
-  is worth mentioning that libvirt 1.3.0 addresses most of these issues. We
-  do however recommend using libvirt 1.3.2, which is fully supported and
-  tested as part of the Xen Project CI loop. It addresses live migration
-  monitoring related issues and adds support for peer-to-peer migration mode,
-  which nova relies on.
-
-* **Live migration monitoring**: On compute nodes running Kilo or later, live
-  migration monitoring relies on libvirt APIs that are only implemented from
-  libvirt version 1.3.1 onwards. When attempting to live migrate, the
-  migration monitoring thread would crash and leave the instance state as
-  "MIGRATING". If you experience such an issue and you are running on a
-  version released before libvirt 1.3.1, make sure you backport libvirt
-  commits ad71665 and b7b4391 from upstream.
-
-Additional information and resources
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-
-The following section contains links to other useful resources.
-
-* `wiki.xenproject.org/wiki/OpenStack
-  <http://wiki.xenproject.org/wiki/OpenStack>`_ - OpenStack Documentation on
-  the Xen Project wiki
-
-* `wiki.xenproject.org/wiki/OpenStack_CI_Loop_for_Xen-Libvirt
-  <http://wiki.xenproject.org/wiki/OpenStack_CI_Loop_for_Xen-Libvirt>`_ -
-  Information about the Xen Project OpenStack CI Loop
-
-* `wiki.xenproject.org/wiki/OpenStack_via_DevStack
-  <http://wiki.xenproject.org/wiki/OpenStack_via_DevStack>`_ - How to set up
-  OpenStack via DevStack
-
-* `Mailing lists for OpenStack related issues and questions
-  <http://lists.xenproject.org/cgi-bin/mailman/listinfo/wg-openstack>`_ - This
-  list is dedicated to coordinating bug fixes and issues across Xen, libvirt,
-  OpenStack, and the CI loop.
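The removed page's PV/HVM guidance reduces to a single image-property lookup. A minimal sketch of that rule, assuming the behaviour the page describes (``vm_mode=hvm`` selects HVM, ``vm_mode=xen`` selects PV, and an unset property falls back to the PV default); ``xen_guest_mode`` is a hypothetical helper, not nova code:

```python
# Hypothetical helper mirroring the removed page's rule: the vm_mode image
# property selects the Xen guest type, and an unset property falls back to
# PV, which the page states was nova's default.

def xen_guest_mode(image_props):
    mode = image_props.get("vm_mode", "xen")  # PV ("xen") was the default
    if mode == "hvm":
        return "HVM"
    if mode == "xen":
        return "PV"
    raise ValueError("unsupported vm_mode for Xen: %r" % (mode,))

print(xen_guest_mode({"vm_mode": "hvm"}))  # HVM
print(xen_guest_mode({}))                  # PV
```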
@@ -8,7 +8,6 @@ Hypervisors
    hypervisor-basics
    hypervisor-kvm
    hypervisor-qemu
-   hypervisor-xen-libvirt
    hypervisor-lxc
    hypervisor-vmware
    hypervisor-hyper-v
@@ -39,10 +38,6 @@ The following hypervisors are supported:
 * `VMware vSphere`_ 5.1.0 and newer - Runs VMware-based Linux and Windows
   images through a connection with a vCenter server.
 
-* `Xen (using libvirt)`_ - Xen Project Hypervisor using libvirt as
-  management interface into ``nova-compute`` to run Linux, Windows, FreeBSD and
-  NetBSD virtual machines.
-
 * `Hyper-V`_ - Server virtualization with Microsoft Hyper-V, used to run
   Windows, Linux, and FreeBSD virtual machines. Runs ``nova-compute`` natively
   on the Windows virtualization platform.
@@ -90,7 +85,6 @@ virt drivers:
 .. _LXC: https://linuxcontainers.org
 .. _QEMU: https://wiki.qemu.org/Manual
 .. _VMware vSphere: https://www.vmware.com/support/vsphere-hypervisor.html
-.. _Xen (using libvirt): https://www.xenproject.org
 .. _Hyper-V: https://docs.microsoft.com/en-us/windows-server/virtualization/hyper-v/hyper-v-technology-overview
 .. _Virtuozzo: https://www.virtuozzo.com/products/vz7.html
 .. _PowerVM: https://www.ibm.com/us-en/marketplace/ibm-powervm
@@ -18,10 +18,6 @@ link=https://wiki.openstack.org/wiki/ThirdPartySystems/Virtuozzo_CI
 title=libvirt+virtuozzo VM
 link=https://wiki.openstack.org/wiki/ThirdPartySystems/Virtuozzo_Storage_CI
 
-[target.libvirt-xen]
-title=libvirt+xen
-link=https://wiki.openstack.org/wiki/ThirdPartySystems/XenProject_CI
-
 [target.vmware]
 title=VMware CI
 link=https://wiki.openstack.org/wiki/NovaVMware/Minesweeper
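These feature-support files are plain INI, so the effect of dropping a ``[target.libvirt-xen]`` section can be checked mechanically. A sketch using Python's stdlib ``configparser``; the miniature before/after text here is illustrative, not the full file:

```python
import configparser

# Miniature before/after pair mimicking the support-matrix INI files touched
# by this commit; the real files carry many more targets and feature entries.
before = """
[target.libvirt-xen]
title=libvirt+xen

[target.vmware]
title=VMware CI
"""
after = """
[target.vmware]
title=VMware CI
"""

def targets(ini_text):
    """List the CI target sections declared in a support-matrix file."""
    cp = configparser.ConfigParser()
    cp.read_string(ini_text)
    return [s for s in cp.sections() if s.startswith("target.")]

print(targets(before))  # ['target.libvirt-xen', 'target.vmware']
print(targets(after))   # ['target.vmware']
```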
@@ -71,7 +67,6 @@ libvirt-virtuozzo-ct=partial
 driver-notes-libvirt-virtuozzo-ct=This is not tested in a CI system, but it is implemented.
 libvirt-virtuozzo-vm=partial
 driver-notes-libvirt-virtuozzo-vm=This is not tested in a CI system, but it is implemented.
-libvirt-xen=complete
 vmware=complete
 hyperv=complete
 ironic=unknown
@@ -92,7 +87,6 @@ libvirt-virtuozzo-ct=partial
 driver-notes-libvirt-virtuozzo-ct=This is not tested in a CI system, but it is implemented.
 libvirt-virtuozzo-vm=partial
 driver-notes-libvirt-virtuozzo-vm=This is not tested in a CI system, but it is implemented.
-libvirt-xen=complete
 vmware=unknown
 hyperv=unknown
 ironic=unknown
@@ -112,7 +106,6 @@ libvirt-virtuozzo-ct=partial
 driver-notes-libvirt-virtuozzo-ct=This is not tested in a CI system, but it is implemented.
 libvirt-virtuozzo-vm=partial
 driver-notes-libvirt-virtuozzo-vm=This is not tested in a CI system, but it is implemented.
-libvirt-xen=complete
 vmware=complete
 hyperv=complete
 ironic=unknown
@@ -132,7 +125,6 @@ libvirt-virtuozzo-ct=partial
 driver-notes-libvirt-virtuozzo-ct=This is not tested in a CI system, but it is implemented.
 libvirt-virtuozzo-vm=partial
 driver-notes-libvirt-virtuozzo-vm=This is not tested in a CI system, but it is implemented.
-libvirt-xen=complete
 vmware=complete
 hyperv=complete
 ironic=unknown
@@ -152,7 +144,6 @@ libvirt-kvm-s390=unknown
 libvirt-virtuozzo-ct=complete
 libvirt-virtuozzo-vm=partial
 driver-notes-libvirt-virtuozzo-vm=This is not tested in a CI system, but it is implemented.
-libvirt-xen=complete
 vmware=complete
 hyperv=complete
 ironic=unknown
@@ -171,7 +162,6 @@ libvirt-kvm=complete
 libvirt-kvm-s390=unknown
 libvirt-virtuozzo-ct=complete
 libvirt-virtuozzo-vm=complete
-libvirt-xen=complete
 vmware=complete
 hyperv=complete
 ironic=missing
@@ -195,7 +185,6 @@ libvirt-kvm=complete
 libvirt-kvm-s390=unknown
 libvirt-virtuozzo-ct=missing
 libvirt-virtuozzo-vm=complete
-libvirt-xen=complete
 vmware=partial
 driver-notes-vmware=This is not tested in a CI system, but it is implemented.
 hyperv=complete:n
@@ -217,8 +206,6 @@ libvirt-kvm=complete
 libvirt-kvm-s390=unknown
 libvirt-virtuozzo-ct=unknown
 libvirt-virtuozzo-vm=unknown
-libvirt-xen=partial
-driver-notes-libvirt-xen=This is not tested in a CI system, but it is implemented.
 vmware=partial
 driver-notes-vmware=This is not tested in a CI system, but it is implemented.
 hyperv=partial
@@ -241,7 +228,6 @@ libvirt-kvm-s390=unknown
 libvirt-virtuozzo-ct=missing
 libvirt-virtuozzo-vm=partial
 driver-notes-libvirt-virtuozzo-vm=This is not tested in a CI system, but it is implemented.
-libvirt-xen=complete
 vmware=partial
 driver-notes-vmware=This is not tested in a CI system, but it is implemented.
 hyperv=complete
@@ -263,7 +249,6 @@ libvirt-virtuozzo-ct=partial
 driver-notes-libvirt-virtuozzo-ct=This is not tested in a CI system, but it is implemented.
 libvirt-virtuozzo-vm=partial
 driver-notes-libvirt-virtuozzo-vm=This is not tested in a CI system, but it is implemented.
-libvirt-xen=complete
 vmware=complete
 hyperv=complete
 ironic=missing
@@ -282,7 +267,6 @@ libvirt-kvm=complete
 libvirt-kvm-s390=unknown
 libvirt-virtuozzo-ct=unknown
 libvirt-virtuozzo-vm=unknown
-libvirt-xen=complete
 vmware=partial
 driver-notes-vmware=This is not tested in a CI system, but it is implemented.
 hyperv=partial
@@ -305,7 +289,6 @@ libvirt-kvm-s390=unknown
 libvirt-virtuozzo-ct=partial
 driver-notes-libvirt-virtuozzo-ct=This is not tested in a CI system, but it is implemented.
 libvirt-virtuozzo-vm=complete
-libvirt-xen=complete
 vmware=complete
 hyperv=partial
 driver-notes-hyperv=This is not tested in a CI system, but it is implemented.
@@ -327,7 +310,6 @@ libvirt-kvm-s390=unknown
 libvirt-virtuozzo-ct=missing
 libvirt-virtuozzo-vm=partial
 driver-notes-libvirt-virtuozzo-vm=This is not tested in a CI system, but it is implemented.
-libvirt-xen=complete
 vmware=complete
 hyperv=complete
 ironic=partial
@@ -348,7 +330,6 @@ driver-notes-libvirt-kvm=This is not tested in a CI system, but it is implemented.
 libvirt-kvm-s390=unknown
 libvirt-virtuozzo-ct=missing
 libvirt-virtuozzo-vm=missing
-libvirt-xen=missing
 vmware=missing
 hyperv=partial
 driver-notes-hyperv=This is not tested in a CI system, but it is implemented.
@@ -370,7 +351,6 @@ libvirt-kvm=complete
 libvirt-kvm-s390=unknown
 libvirt-virtuozzo-ct=missing
 libvirt-virtuozzo-vm=complete
-libvirt-xen=complete
 vmware=missing
 hyperv=complete
 ironic=missing
@@ -14,10 +14,6 @@ link=https://wiki.openstack.org/wiki/ThirdPartySystems/Virtuozzo_CI
 title=libvirt+virtuozzo VM
 link=https://wiki.openstack.org/wiki/ThirdPartySystems/Virtuozzo_Storage_CI
 
-[target.libvirt-xen]
-title=libvirt+xen
-link=https://wiki.openstack.org/wiki/ThirdPartySystems/XenProject_CI
-
 [target.vmware]
 title=VMware CI
 link=https://wiki.openstack.org/wiki/NovaVMware/Minesweeper
@@ -52,7 +48,6 @@ libvirt-virtuozzo-ct=partial
 driver-notes-libvirt-virtuozzo-ct=This is not tested in a CI system, but it is implemented.
 libvirt-virtuozzo-vm=partial
 driver-notes-libvirt-virtuozzo-vm=This is not tested in a CI system, but it is implemented.
-libvirt-xen=missing
 vmware=missing
 hyperv=missing
 ironic=unknown
@@ -69,7 +64,6 @@ libvirt-kvm=partial:queens
 libvirt-kvm-s390=unknown
 libvirt-virtuozzo-ct=unknown
 libvirt-virtuozzo-vm=unknown
-libvirt-xen=unknown
 vmware=missing
 hyperv=missing
 ironic=missing
@@ -10,10 +10,6 @@ link=http://docs.openstack.org/infra/manual/developers.html#project-gating
 title=libvirt+kvm (s390x)
 link=http://docs.openstack.org/infra/manual/developers.html#project-gating
 
-[target.libvirt-xen]
-title=libvirt+xen
-link=https://wiki.openstack.org/wiki/ThirdPartySystems/XenProject_CI
-
 #
 # Lists all features
 #
@@ -33,7 +29,6 @@ admin_doc_link=https://docs.openstack.org/nova/latest/admin/cpu-topologies.html#
 tempest_test_uuids=9a438d88-10c6-4bcd-8b5b-5b6e25e1346f;585e934c-448e-43c4-acbf-d06a9b899997
 libvirt-kvm=partial
 libvirt-kvm-s390=unknown
-libvirt-xen=missing
 
 [operation.cpu-pinning-policy]
 title=CPU Pinning Policy
@@ -43,7 +38,6 @@ api_doc_link=https://docs.openstack.org/api-ref/compute/#create-server
 admin_doc_link=https://docs.openstack.org/nova/latest/admin/cpu-topologies.html#customizing-instance-cpu-pinning-policies
 libvirt-kvm=partial
 libvirt-kvm-s390=unknown
-libvirt-xen=missing
 
 [operation.cpu-pinning-thread-policy]
 title=CPU Pinning Thread Policy
@@ -53,4 +47,3 @@ api_doc_link=https://docs.openstack.org/api-ref/compute/#create-server
 admin_doc_link=https://docs.openstack.org/nova/latest/admin/cpu-topologies.html#customizing-instance-cpu-pinning-policies
 libvirt-kvm=partial
 libvirt-kvm-s390=unknown
-libvirt-xen=missing
@@ -42,7 +42,7 @@ hosts.
 
 This mode is only supported when using the Libvirt virt driver.
 
-This mode is not supported when using LXC or Xen hypervisors as enabled by
+This mode is not supported when using the LXC hypervisor as enabled by
 the :oslo.config:option:`libvirt.virt_type` configurable on the computes.
 
 Usage
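For context, the ``libvirt.virt_type`` option this hunk refers to is set per compute node in ``nova.conf``; after this change, ``xen`` is no longer a documented value. A representative fragment (values illustrative, not taken from this diff):

```ini
[DEFAULT]
compute_driver = libvirt.LibvirtDriver

[libvirt]
# Remaining virt_type choices after the Xen removal include kvm, qemu,
# lxc, and parallels; "xen" is dropped by this commit.
virt_type = kvm
```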
@@ -95,9 +95,6 @@ title=Libvirt Virtuozzo VM
 [driver.libvirt-vz-ct]
 title=Libvirt Virtuozzo CT
 
-[driver.libvirt-xen]
-title=Libvirt Xen
-
 [driver.vmware]
 title=VMware vCenter
 
@@ -131,7 +128,6 @@ driver.libvirt-kvm-ppc64=complete
 driver.libvirt-kvm-s390x=complete
 driver.libvirt-qemu-x86=complete
 driver.libvirt-lxc=missing
-driver.libvirt-xen=complete
 driver.vmware=complete
 driver.hyperv=complete
 driver.ironic=missing
@@ -154,7 +150,6 @@ driver.libvirt-kvm-ppc64=complete
 driver.libvirt-kvm-s390x=complete
 driver.libvirt-qemu-x86=complete
 driver.libvirt-lxc=missing
-driver.libvirt-xen=complete
 driver.vmware=missing
 driver.hyperv=missing
 driver.ironic=missing
@ -174,7 +169,6 @@ driver.libvirt-kvm-ppc64=complete
|
|||||||
driver.libvirt-kvm-s390x=complete
|
driver.libvirt-kvm-s390x=complete
|
||||||
driver.libvirt-qemu-x86=complete
|
driver.libvirt-qemu-x86=complete
|
||||||
driver.libvirt-lxc=missing
|
driver.libvirt-lxc=missing
|
||||||
driver.libvirt-xen=complete
|
|
||||||
driver.vmware=complete
|
driver.vmware=complete
|
||||||
driver.hyperv=complete
|
driver.hyperv=complete
|
||||||
driver.ironic=missing
|
driver.ironic=missing
|
||||||
@ -203,7 +197,6 @@ driver.libvirt-kvm-ppc64=unknown
|
|||||||
driver.libvirt-kvm-s390x=unknown
|
driver.libvirt-kvm-s390x=unknown
|
||||||
driver.libvirt-qemu-x86=complete
|
driver.libvirt-qemu-x86=complete
|
||||||
driver.libvirt-lxc=missing
|
driver.libvirt-lxc=missing
|
||||||
driver.libvirt-xen=unknown
|
|
||||||
driver.vmware=missing
|
driver.vmware=missing
|
||||||
driver.hyperv=missing
|
driver.hyperv=missing
|
||||||
driver.ironic=missing
|
driver.ironic=missing
|
||||||
@ -231,7 +224,6 @@ driver.libvirt-kvm-ppc64=complete
|
|||||||
driver.libvirt-kvm-s390x=complete
|
driver.libvirt-kvm-s390x=complete
|
||||||
driver.libvirt-qemu-x86=complete
|
driver.libvirt-qemu-x86=complete
|
||||||
driver.libvirt-lxc=missing
|
driver.libvirt-lxc=missing
|
||||||
driver.libvirt-xen=complete
|
|
||||||
driver.vmware=complete
|
driver.vmware=complete
|
||||||
driver.hyperv=partial
|
driver.hyperv=partial
|
||||||
driver-notes.hyperv=Works without issue if instance is off. When
|
driver-notes.hyperv=Works without issue if instance is off. When
|
||||||
@ -255,7 +247,6 @@ driver.libvirt-kvm-ppc64=complete
|
|||||||
driver.libvirt-kvm-s390x=complete
|
driver.libvirt-kvm-s390x=complete
|
||||||
driver.libvirt-qemu-x86=complete
|
driver.libvirt-qemu-x86=complete
|
||||||
driver.libvirt-lxc=missing
|
driver.libvirt-lxc=missing
|
||||||
driver.libvirt-xen=complete
|
|
||||||
driver.vmware=missing
|
driver.vmware=missing
|
||||||
driver.hyperv=complete
|
driver.hyperv=complete
|
||||||
driver.ironic=missing
|
driver.ironic=missing
|
||||||
@ -275,7 +266,6 @@ driver.libvirt-kvm-ppc64=complete
|
|||||||
driver.libvirt-kvm-s390x=complete
|
driver.libvirt-kvm-s390x=complete
|
||||||
driver.libvirt-qemu-x86=complete
|
driver.libvirt-qemu-x86=complete
|
||||||
driver.libvirt-lxc=missing
|
driver.libvirt-lxc=missing
|
||||||
driver.libvirt-xen=complete
|
|
||||||
driver.vmware=complete
|
driver.vmware=complete
|
||||||
driver.hyperv=complete
|
driver.hyperv=complete
|
||||||
driver-notes.hyperv=Works without issue if instance is off. When
|
driver-notes.hyperv=Works without issue if instance is off. When
|
||||||
@ -304,7 +294,6 @@ driver.libvirt-kvm-ppc64=missing
|
|||||||
driver.libvirt-kvm-s390x=missing
|
driver.libvirt-kvm-s390x=missing
|
||||||
driver.libvirt-qemu-x86=missing
|
driver.libvirt-qemu-x86=missing
|
||||||
driver.libvirt-lxc=missing
|
driver.libvirt-lxc=missing
|
||||||
driver.libvirt-xen=missing
|
|
||||||
driver.vmware=missing
|
driver.vmware=missing
|
||||||
driver.hyperv=missing
|
driver.hyperv=missing
|
||||||
driver.ironic=missing
|
driver.ironic=missing
|
||||||
@ -331,7 +320,6 @@ driver.libvirt-kvm-ppc64=unknown
|
|||||||
driver.libvirt-kvm-s390x=complete
|
driver.libvirt-kvm-s390x=complete
|
||||||
driver.libvirt-qemu-x86=unknown
|
driver.libvirt-qemu-x86=unknown
|
||||||
driver.libvirt-lxc=unknown
|
driver.libvirt-lxc=unknown
|
||||||
driver.libvirt-xen=unknown
|
|
||||||
driver.vmware=unknown
|
driver.vmware=unknown
|
||||||
driver.hyperv=unknown
|
driver.hyperv=unknown
|
||||||
driver.ironic=unknown
|
driver.ironic=unknown
|
||||||
@ -355,7 +343,6 @@ driver.libvirt-kvm-ppc64=complete
|
|||||||
driver.libvirt-kvm-s390x=complete
|
driver.libvirt-kvm-s390x=complete
|
||||||
driver.libvirt-qemu-x86=complete
|
driver.libvirt-qemu-x86=complete
|
||||||
driver.libvirt-lxc=complete
|
driver.libvirt-lxc=complete
|
||||||
driver.libvirt-xen=complete
|
|
||||||
driver.vmware=complete
|
driver.vmware=complete
|
||||||
driver.hyperv=complete
|
driver.hyperv=complete
|
||||||
driver.ironic=complete
|
driver.ironic=complete
|
||||||
@ -378,7 +365,6 @@ driver.libvirt-kvm-ppc64=complete
|
|||||||
driver.libvirt-kvm-s390x=complete
|
driver.libvirt-kvm-s390x=complete
|
||||||
driver.libvirt-qemu-x86=complete
|
driver.libvirt-qemu-x86=complete
|
||||||
driver.libvirt-lxc=complete
|
driver.libvirt-lxc=complete
|
||||||
driver.libvirt-xen=complete
|
|
||||||
driver.vmware=complete
|
driver.vmware=complete
|
||||||
driver.hyperv=complete
|
driver.hyperv=complete
|
||||||
driver.ironic=complete
|
driver.ironic=complete
|
||||||
@ -399,7 +385,6 @@ driver.libvirt-kvm-ppc64=complete
|
|||||||
driver.libvirt-kvm-s390x=complete
|
driver.libvirt-kvm-s390x=complete
|
||||||
driver.libvirt-qemu-x86=complete
|
driver.libvirt-qemu-x86=complete
|
||||||
driver.libvirt-lxc=complete
|
driver.libvirt-lxc=complete
|
||||||
driver.libvirt-xen=complete
|
|
||||||
driver.vmware=missing
|
driver.vmware=missing
|
||||||
driver.hyperv=complete
|
driver.hyperv=complete
|
||||||
driver.ironic=missing
|
driver.ironic=missing
|
||||||
@ -420,7 +405,6 @@ driver.libvirt-kvm-ppc64=complete
|
|||||||
driver.libvirt-kvm-s390x=complete
|
driver.libvirt-kvm-s390x=complete
|
||||||
driver.libvirt-qemu-x86=complete
|
driver.libvirt-qemu-x86=complete
|
||||||
driver.libvirt-lxc=complete
|
driver.libvirt-lxc=complete
|
||||||
driver.libvirt-xen=complete
|
|
||||||
driver.vmware=complete
|
driver.vmware=complete
|
||||||
driver.hyperv=complete
|
driver.hyperv=complete
|
||||||
driver.ironic=missing
|
driver.ironic=missing
|
||||||
@ -450,7 +434,6 @@ driver.libvirt-kvm-ppc64=complete
|
|||||||
driver.libvirt-kvm-s390x=complete
|
driver.libvirt-kvm-s390x=complete
|
||||||
driver.libvirt-qemu-x86=complete
|
driver.libvirt-qemu-x86=complete
|
||||||
driver.libvirt-lxc=missing
|
driver.libvirt-lxc=missing
|
||||||
driver.libvirt-xen=complete
|
|
||||||
driver.vmware=complete
|
driver.vmware=complete
|
||||||
driver.hyperv=complete
|
driver.hyperv=complete
|
||||||
driver.ironic=missing
|
driver.ironic=missing
|
||||||
@ -483,7 +466,6 @@ driver-notes.libvirt-kvm-s390x=Requires libvirt>=1.3.3, qemu>=2.5.0
|
|||||||
driver.libvirt-qemu-x86=complete
|
driver.libvirt-qemu-x86=complete
|
||||||
driver-notes.libvirt-qemu-x86=Requires libvirt>=1.3.3, qemu>=2.5.0
|
driver-notes.libvirt-qemu-x86=Requires libvirt>=1.3.3, qemu>=2.5.0
|
||||||
driver.libvirt-lxc=missing
|
driver.libvirt-lxc=missing
|
||||||
driver.libvirt-xen=missing
|
|
||||||
driver.vmware=missing
|
driver.vmware=missing
|
||||||
driver.hyperv=missing
|
driver.hyperv=missing
|
||||||
driver.ironic=missing
|
driver.ironic=missing
|
||||||
@ -513,7 +495,6 @@ driver.libvirt-kvm-ppc64=complete
|
|||||||
driver.libvirt-kvm-s390x=complete
|
driver.libvirt-kvm-s390x=complete
|
||||||
driver.libvirt-qemu-x86=complete
|
driver.libvirt-qemu-x86=complete
|
||||||
driver.libvirt-lxc=missing
|
driver.libvirt-lxc=missing
|
||||||
driver.libvirt-xen=complete
|
|
||||||
driver.vmware=missing
|
driver.vmware=missing
|
||||||
driver.hyperv=missing
|
driver.hyperv=missing
|
||||||
driver.ironic=missing
|
driver.ironic=missing
|
||||||
@ -535,7 +516,6 @@ driver.libvirt-kvm-ppc64=complete
|
|||||||
driver.libvirt-kvm-s390x=complete
|
driver.libvirt-kvm-s390x=complete
|
||||||
driver.libvirt-qemu-x86=complete
|
driver.libvirt-qemu-x86=complete
|
||||||
driver.libvirt-lxc=complete
|
driver.libvirt-lxc=complete
|
||||||
driver.libvirt-xen=complete
|
|
||||||
driver.vmware=complete
|
driver.vmware=complete
|
||||||
driver.hyperv=complete
|
driver.hyperv=complete
|
||||||
driver.ironic=complete
|
driver.ironic=complete
|
||||||
@ -563,7 +543,6 @@ driver.libvirt-kvm-ppc64=complete
|
|||||||
driver.libvirt-kvm-s390x=complete
|
driver.libvirt-kvm-s390x=complete
|
||||||
driver.libvirt-qemu-x86=complete
|
driver.libvirt-qemu-x86=complete
|
||||||
driver.libvirt-lxc=complete
|
driver.libvirt-lxc=complete
|
||||||
driver.libvirt-xen=complete
|
|
||||||
driver.vmware=missing
|
driver.vmware=missing
|
||||||
driver.hyperv=complete
|
driver.hyperv=complete
|
||||||
driver.ironic=missing
|
driver.ironic=missing
|
||||||
@ -587,7 +566,6 @@ driver.libvirt-kvm-ppc64=complete
|
|||||||
driver.libvirt-kvm-s390x=complete
|
driver.libvirt-kvm-s390x=complete
|
||||||
driver.libvirt-qemu-x86=complete
|
driver.libvirt-qemu-x86=complete
|
||||||
driver.libvirt-lxc=complete
|
driver.libvirt-lxc=complete
|
||||||
driver.libvirt-xen=complete
|
|
||||||
driver.vmware=complete
|
driver.vmware=complete
|
||||||
driver.hyperv=complete
|
driver.hyperv=complete
|
||||||
driver.ironic=complete
|
driver.ironic=complete
|
||||||
@ -614,7 +592,6 @@ driver.libvirt-kvm-ppc64=complete
|
|||||||
driver.libvirt-kvm-s390x=complete
|
driver.libvirt-kvm-s390x=complete
|
||||||
driver.libvirt-qemu-x86=complete
|
driver.libvirt-qemu-x86=complete
|
||||||
driver.libvirt-lxc=missing
|
driver.libvirt-lxc=missing
|
||||||
driver.libvirt-xen=complete
|
|
||||||
driver.vmware=complete
|
driver.vmware=complete
|
||||||
driver.hyperv=complete
|
driver.hyperv=complete
|
||||||
driver.ironic=complete
|
driver.ironic=complete
|
||||||
@ -641,7 +618,6 @@ driver.libvirt-kvm-ppc64=complete
|
|||||||
driver.libvirt-kvm-s390x=complete
|
driver.libvirt-kvm-s390x=complete
|
||||||
driver.libvirt-qemu-x86=complete
|
driver.libvirt-qemu-x86=complete
|
||||||
driver.libvirt-lxc=missing
|
driver.libvirt-lxc=missing
|
||||||
driver.libvirt-xen=complete
|
|
||||||
driver.vmware=complete
|
driver.vmware=complete
|
||||||
driver.hyperv=complete
|
driver.hyperv=complete
|
||||||
driver.ironic=missing
|
driver.ironic=missing
|
||||||
@ -663,7 +639,6 @@ driver.libvirt-kvm-ppc64=complete
|
|||||||
driver.libvirt-kvm-s390x=complete
|
driver.libvirt-kvm-s390x=complete
|
||||||
driver.libvirt-qemu-x86=complete
|
driver.libvirt-qemu-x86=complete
|
||||||
driver.libvirt-lxc=missing
|
driver.libvirt-lxc=missing
|
||||||
driver.libvirt-xen=complete
|
|
||||||
driver.vmware=complete
|
driver.vmware=complete
|
||||||
driver.hyperv=complete
|
driver.hyperv=complete
|
||||||
driver.ironic=missing
|
driver.ironic=missing
|
||||||
@ -695,7 +670,6 @@ driver.libvirt-kvm-s390x=missing
|
|||||||
driver.libvirt-qemu-x86=complete
|
driver.libvirt-qemu-x86=complete
|
||||||
driver-notes.libvirt-qemu-x86=Requires libvirt>=1.2.16 and hw_qemu_guest_agent.
|
driver-notes.libvirt-qemu-x86=Requires libvirt>=1.2.16 and hw_qemu_guest_agent.
|
||||||
driver.libvirt-lxc=missing
|
driver.libvirt-lxc=missing
|
||||||
driver.libvirt-xen=missing
|
|
||||||
driver.vmware=missing
|
driver.vmware=missing
|
||||||
driver.hyperv=missing
|
driver.hyperv=missing
|
||||||
driver.ironic=missing
|
driver.ironic=missing
|
||||||
@ -726,8 +700,6 @@ driver.libvirt-kvm-ppc64=complete
|
|||||||
driver.libvirt-kvm-s390x=complete
|
driver.libvirt-kvm-s390x=complete
|
||||||
driver.libvirt-qemu-x86=complete
|
driver.libvirt-qemu-x86=complete
|
||||||
driver.libvirt-lxc=missing
|
driver.libvirt-lxc=missing
|
||||||
driver.libvirt-xen=partial
|
|
||||||
driver-notes.libvirt-xen=Only cold snapshots (pause + snapshot) supported
|
|
||||||
driver.vmware=complete
|
driver.vmware=complete
|
||||||
driver.hyperv=complete
|
driver.hyperv=complete
|
||||||
driver.ironic=missing
|
driver.ironic=missing
|
||||||
@ -765,7 +737,6 @@ driver.libvirt-kvm-ppc64=complete
|
|||||||
driver.libvirt-kvm-s390x=complete
|
driver.libvirt-kvm-s390x=complete
|
||||||
driver.libvirt-qemu-x86=complete
|
driver.libvirt-qemu-x86=complete
|
||||||
driver.libvirt-lxc=missing
|
driver.libvirt-lxc=missing
|
||||||
driver.libvirt-xen=complete
|
|
||||||
driver.vmware=complete
|
driver.vmware=complete
|
||||||
driver.hyperv=complete
|
driver.hyperv=complete
|
||||||
driver.ironic=missing
|
driver.ironic=missing
|
||||||
@ -792,7 +763,6 @@ driver.libvirt-kvm-ppc64=complete
|
|||||||
driver.libvirt-kvm-s390x=complete
|
driver.libvirt-kvm-s390x=complete
|
||||||
driver.libvirt-qemu-x86=complete
|
driver.libvirt-qemu-x86=complete
|
||||||
driver.libvirt-lxc=missing
|
driver.libvirt-lxc=missing
|
||||||
driver.libvirt-xen=complete
|
|
||||||
driver.vmware=missing
|
driver.vmware=missing
|
||||||
driver.hyperv=missing
|
driver.hyperv=missing
|
||||||
driver.ironic=missing
|
driver.ironic=missing
|
||||||
@ -819,7 +789,6 @@ driver-notes.libvirt-lxc=Fails in latest Ubuntu Trusty kernel
|
|||||||
from security repository (3.13.0-76-generic), but works in upstream
|
from security repository (3.13.0-76-generic), but works in upstream
|
||||||
3.13.x kernels as well as default Ubuntu Trusty latest kernel
|
3.13.x kernels as well as default Ubuntu Trusty latest kernel
|
||||||
(3.13.0-58-generic).
|
(3.13.0-58-generic).
|
||||||
driver.libvirt-xen=complete
|
|
||||||
driver.vmware=complete
|
driver.vmware=complete
|
||||||
driver.hyperv=complete
|
driver.hyperv=complete
|
||||||
driver.ironic=complete
|
driver.ironic=complete
|
||||||
@ -843,7 +812,6 @@ driver.libvirt-kvm-ppc64=complete
|
|||||||
driver.libvirt-kvm-s390x=complete
|
driver.libvirt-kvm-s390x=complete
|
||||||
driver.libvirt-qemu-x86=complete
|
driver.libvirt-qemu-x86=complete
|
||||||
driver.libvirt-lxc=missing
|
driver.libvirt-lxc=missing
|
||||||
driver.libvirt-xen=missing
|
|
||||||
driver.vmware=missing
|
driver.vmware=missing
|
||||||
driver.hyperv=missing
|
driver.hyperv=missing
|
||||||
driver.ironic=complete
|
driver.ironic=complete
|
||||||
@ -863,7 +831,6 @@ driver.libvirt-kvm-ppc64=complete
|
|||||||
driver.libvirt-kvm-s390x=complete
|
driver.libvirt-kvm-s390x=complete
|
||||||
driver.libvirt-qemu-x86=complete
|
driver.libvirt-qemu-x86=complete
|
||||||
driver.libvirt-lxc=complete
|
driver.libvirt-lxc=complete
|
||||||
driver.libvirt-xen=complete
|
|
||||||
driver.vmware=missing
|
driver.vmware=missing
|
||||||
driver.hyperv=complete
|
driver.hyperv=complete
|
||||||
driver.ironic=missing
|
driver.ironic=missing
|
||||||
@ -885,7 +852,6 @@ driver.libvirt-kvm-ppc64=missing
|
|||||||
driver.libvirt-kvm-s390x=missing
|
driver.libvirt-kvm-s390x=missing
|
||||||
driver.libvirt-qemu-x86=missing
|
driver.libvirt-qemu-x86=missing
|
||||||
driver.libvirt-lxc=missing
|
driver.libvirt-lxc=missing
|
||||||
driver.libvirt-xen=missing
|
|
||||||
driver.vmware=missing
|
driver.vmware=missing
|
||||||
driver.hyperv=missing
|
driver.hyperv=missing
|
||||||
driver.ironic=missing
|
driver.ironic=missing
|
||||||
@ -910,7 +876,6 @@ driver.libvirt-kvm-ppc64=complete
|
|||||||
driver.libvirt-kvm-s390x=complete
|
driver.libvirt-kvm-s390x=complete
|
||||||
driver.libvirt-qemu-x86=complete
|
driver.libvirt-qemu-x86=complete
|
||||||
driver.libvirt-lxc=missing
|
driver.libvirt-lxc=missing
|
||||||
driver.libvirt-xen=missing
|
|
||||||
driver.vmware=missing
|
driver.vmware=missing
|
||||||
driver.hyperv=missing
|
driver.hyperv=missing
|
||||||
driver.ironic=missing
|
driver.ironic=missing
|
||||||
@ -939,7 +904,6 @@ driver.libvirt-kvm-ppc64=missing
|
|||||||
driver.libvirt-kvm-s390x=complete
|
driver.libvirt-kvm-s390x=complete
|
||||||
driver.libvirt-qemu-x86=complete
|
driver.libvirt-qemu-x86=complete
|
||||||
driver.libvirt-lxc=complete
|
driver.libvirt-lxc=complete
|
||||||
driver.libvirt-xen=complete
|
|
||||||
driver.vmware=complete
|
driver.vmware=complete
|
||||||
driver.hyperv=complete
|
driver.hyperv=complete
|
||||||
driver.ironic=complete
|
driver.ironic=complete
|
||||||
@ -967,7 +931,6 @@ driver.libvirt-kvm-ppc64=missing
|
|||||||
driver.libvirt-kvm-s390x=missing
|
driver.libvirt-kvm-s390x=missing
|
||||||
driver.libvirt-qemu-x86=complete
|
driver.libvirt-qemu-x86=complete
|
||||||
driver.libvirt-lxc=missing
|
driver.libvirt-lxc=missing
|
||||||
driver.libvirt-xen=missing
|
|
||||||
driver.vmware=missing
|
driver.vmware=missing
|
||||||
driver.hyperv=missing
|
driver.hyperv=missing
|
||||||
driver.ironic=missing
|
driver.ironic=missing
|
||||||
@ -998,7 +961,6 @@ driver.libvirt-kvm-s390x=missing
|
|||||||
driver.libvirt-qemu-x86=partial
|
driver.libvirt-qemu-x86=partial
|
||||||
driver-notes.libvirt-qemu-x86=Only for Debian derived guests
|
driver-notes.libvirt-qemu-x86=Only for Debian derived guests
|
||||||
driver.libvirt-lxc=missing
|
driver.libvirt-lxc=missing
|
||||||
driver.libvirt-xen=missing
|
|
||||||
driver.vmware=partial
|
driver.vmware=partial
|
||||||
driver-notes.vmware=requires vmware tools installed
|
driver-notes.vmware=requires vmware tools installed
|
||||||
driver.hyperv=missing
|
driver.hyperv=missing
|
||||||
@ -1026,7 +988,6 @@ driver.libvirt-kvm-ppc64=missing
|
|||||||
driver.libvirt-kvm-s390x=missing
|
driver.libvirt-kvm-s390x=missing
|
||||||
driver.libvirt-qemu-x86=missing
|
driver.libvirt-qemu-x86=missing
|
||||||
driver.libvirt-lxc=missing
|
driver.libvirt-lxc=missing
|
||||||
driver.libvirt-xen=missing
|
|
||||||
driver.vmware=missing
|
driver.vmware=missing
|
||||||
driver.hyperv=complete
|
driver.hyperv=complete
|
||||||
driver.ironic=missing
|
driver.ironic=missing
|
||||||
@ -1054,7 +1015,6 @@ driver.libvirt-kvm-ppc64=missing
|
|||||||
driver.libvirt-kvm-s390x=complete
|
driver.libvirt-kvm-s390x=complete
|
||||||
driver.libvirt-qemu-x86=complete
|
driver.libvirt-qemu-x86=complete
|
||||||
driver.libvirt-lxc=missing
|
driver.libvirt-lxc=missing
|
||||||
driver.libvirt-xen=complete
|
|
||||||
driver.vmware=complete
|
driver.vmware=complete
|
||||||
driver.hyperv=complete
|
driver.hyperv=complete
|
||||||
driver.ironic=missing
|
driver.ironic=missing
|
||||||
@ -1083,7 +1043,6 @@ driver.libvirt-kvm-ppc64=unknown
|
|||||||
driver.libvirt-kvm-s390x=complete
|
driver.libvirt-kvm-s390x=complete
|
||||||
driver.libvirt-qemu-x86=unknown
|
driver.libvirt-qemu-x86=unknown
|
||||||
driver.libvirt-lxc=unknown
|
driver.libvirt-lxc=unknown
|
||||||
driver.libvirt-xen=unknown
|
|
||||||
driver.vmware=missing
|
driver.vmware=missing
|
||||||
driver.hyperv=complete
|
driver.hyperv=complete
|
||||||
driver.ironic=complete
|
driver.ironic=complete
|
||||||
@ -1110,7 +1069,6 @@ driver.libvirt-kvm-ppc64=missing
|
|||||||
driver.libvirt-kvm-s390x=missing
|
driver.libvirt-kvm-s390x=missing
|
||||||
driver.libvirt-qemu-x86=complete
|
driver.libvirt-qemu-x86=complete
|
||||||
driver.libvirt-lxc=missing
|
driver.libvirt-lxc=missing
|
||||||
driver.libvirt-xen=missing
|
|
||||||
driver.vmware=missing
|
driver.vmware=missing
|
||||||
driver.hyperv=missing
|
driver.hyperv=missing
|
||||||
driver.ironic=missing
|
driver.ironic=missing
|
||||||
@ -1137,7 +1095,6 @@ driver.libvirt-kvm-ppc64=missing
|
|||||||
driver.libvirt-kvm-s390x=missing
|
driver.libvirt-kvm-s390x=missing
|
||||||
driver.libvirt-qemu-x86=complete
|
driver.libvirt-qemu-x86=complete
|
||||||
driver.libvirt-lxc=missing
|
driver.libvirt-lxc=missing
|
||||||
driver.libvirt-xen=complete
|
|
||||||
driver.vmware=complete
|
driver.vmware=complete
|
||||||
driver.hyperv=missing
|
driver.hyperv=missing
|
||||||
driver.ironic=missing
|
driver.ironic=missing
|
||||||
@ -1166,7 +1123,6 @@ driver.libvirt-kvm-ppc64=complete
|
|||||||
driver.libvirt-kvm-s390x=complete
|
driver.libvirt-kvm-s390x=complete
|
||||||
driver.libvirt-qemu-x86=complete
|
driver.libvirt-qemu-x86=complete
|
||||||
driver.libvirt-lxc=complete
|
driver.libvirt-lxc=complete
|
||||||
driver.libvirt-xen=complete
|
|
||||||
driver.vmware=complete
|
driver.vmware=complete
|
||||||
driver.hyperv=complete
|
driver.hyperv=complete
|
||||||
driver.ironic=complete
|
driver.ironic=complete
|
||||||
@ -1191,7 +1147,6 @@ driver.libvirt-kvm-ppc64=missing
|
|||||||
driver.libvirt-kvm-s390x=complete
|
driver.libvirt-kvm-s390x=complete
|
||||||
driver.libvirt-qemu-x86=complete
|
driver.libvirt-qemu-x86=complete
|
||||||
driver.libvirt-lxc=complete
|
driver.libvirt-lxc=complete
|
||||||
driver.libvirt-xen=complete
|
|
||||||
driver.vmware=missing
|
driver.vmware=missing
|
||||||
driver.hyperv=complete
|
driver.hyperv=complete
|
||||||
driver.ironic=missing
|
driver.ironic=missing
|
||||||
@ -1219,7 +1174,6 @@ driver.libvirt-kvm-ppc64=complete
|
|||||||
driver.libvirt-kvm-s390x=complete
|
driver.libvirt-kvm-s390x=complete
|
||||||
driver.libvirt-qemu-x86=complete
|
driver.libvirt-qemu-x86=complete
|
||||||
driver.libvirt-lxc=complete
|
driver.libvirt-lxc=complete
|
||||||
driver.libvirt-xen=complete
|
|
||||||
driver.vmware=complete
|
driver.vmware=complete
|
||||||
driver.hyperv=complete
|
driver.hyperv=complete
|
||||||
driver.ironic=complete
|
driver.ironic=complete
|
||||||
@ -1242,7 +1196,6 @@ driver.libvirt-kvm-ppc64=complete
|
|||||||
driver.libvirt-kvm-s390x=complete
|
driver.libvirt-kvm-s390x=complete
|
||||||
driver.libvirt-qemu-x86=complete
|
driver.libvirt-qemu-x86=complete
|
||||||
driver.libvirt-lxc=complete
|
driver.libvirt-lxc=complete
|
||||||
driver.libvirt-xen=complete
|
|
||||||
driver.vmware=missing
|
driver.vmware=missing
|
||||||
driver.hyperv=complete
|
driver.hyperv=complete
|
||||||
driver.ironic=complete
|
driver.ironic=complete
|
||||||
@ -1267,7 +1220,6 @@ driver.libvirt-kvm-ppc64=complete
|
|||||||
driver.libvirt-kvm-s390x=complete
|
driver.libvirt-kvm-s390x=complete
|
||||||
driver.libvirt-qemu-x86=complete
|
driver.libvirt-qemu-x86=complete
|
||||||
driver.libvirt-lxc=complete
|
driver.libvirt-lxc=complete
|
||||||
driver.libvirt-xen=complete
|
|
||||||
driver.vmware=complete
|
driver.vmware=complete
|
||||||
driver.hyperv=complete
|
driver.hyperv=complete
|
||||||
driver.ironic=complete
|
driver.ironic=complete
|
||||||
@ -1287,7 +1239,6 @@ driver.libvirt-kvm-ppc64=missing
|
|||||||
driver.libvirt-kvm-s390x=missing
|
driver.libvirt-kvm-s390x=missing
|
||||||
driver.libvirt-qemu-x86=complete
|
driver.libvirt-qemu-x86=complete
|
||||||
driver.libvirt-lxc=missing
|
driver.libvirt-lxc=missing
|
||||||
driver.libvirt-xen=missing
|
|
||||||
driver.vmware=complete
|
driver.vmware=complete
|
||||||
driver.hyperv=complete
|
driver.hyperv=complete
|
||||||
driver-notes.hyperv=In order to use uefi, a second generation Hyper-V vm must
|
driver-notes.hyperv=In order to use uefi, a second generation Hyper-V vm must
|
||||||
@ -1321,7 +1272,6 @@ driver.libvirt-kvm-ppc64=complete
|
|||||||
driver.libvirt-kvm-s390x=complete
|
driver.libvirt-kvm-s390x=complete
|
||||||
driver.libvirt-qemu-x86=complete
|
driver.libvirt-qemu-x86=complete
|
||||||
driver.libvirt-lxc=unknown
|
driver.libvirt-lxc=unknown
|
||||||
driver.libvirt-xen=complete
|
|
||||||
driver.vmware=missing
|
driver.vmware=missing
|
||||||
driver.hyperv=complete
|
driver.hyperv=complete
|
||||||
driver.ironic=missing
|
driver.ironic=missing
|
||||||
@ -1343,7 +1293,6 @@ driver.libvirt-kvm-ppc64=complete
|
|||||||
driver.libvirt-kvm-s390x=complete
|
driver.libvirt-kvm-s390x=complete
|
||||||
driver.libvirt-qemu-x86=complete
|
driver.libvirt-qemu-x86=complete
|
||||||
driver.libvirt-lxc=missing
|
driver.libvirt-lxc=missing
|
||||||
driver.libvirt-xen=missing
|
|
||||||
driver.vmware=missing
|
driver.vmware=missing
|
||||||
driver.hyperv=missing
|
driver.hyperv=missing
|
||||||
driver.ironic=missing
|
driver.ironic=missing
|
||||||
@ -1363,7 +1312,6 @@ driver.libvirt-kvm-ppc64=complete
|
|||||||
driver.libvirt-kvm-s390x=complete
|
driver.libvirt-kvm-s390x=complete
|
||||||
driver.libvirt-qemu-x86=complete
|
driver.libvirt-qemu-x86=complete
|
||||||
driver.libvirt-lxc=missing
|
driver.libvirt-lxc=missing
|
||||||
driver.libvirt-xen=missing
|
|
||||||
driver.vmware=missing
|
driver.vmware=missing
|
||||||
driver.hyperv=missing
|
driver.hyperv=missing
|
||||||
driver.ironic=missing
|
driver.ironic=missing
|
||||||
@ -1388,7 +1336,6 @@ driver.libvirt-kvm-ppc64=complete
|
|||||||
driver.libvirt-kvm-s390x=complete
|
driver.libvirt-kvm-s390x=complete
|
||||||
driver.libvirt-qemu-x86=complete
|
driver.libvirt-qemu-x86=complete
|
||||||
driver.libvirt-lxc=missing
|
driver.libvirt-lxc=missing
|
||||||
driver.libvirt-xen=complete
|
|
||||||
driver.vmware=missing
|
driver.vmware=missing
|
||||||
driver.hyperv=missing
|
driver.hyperv=missing
|
||||||
driver.ironic=missing
|
driver.ironic=missing
|
||||||
@ -1419,7 +1366,6 @@ driver.libvirt-kvm-s390x=unknown
|
|||||||
driver.libvirt-qemu-x86=complete
|
driver.libvirt-qemu-x86=complete
|
||||||
driver-notes.libvirt-qemu-x86=The same restrictions apply as KVM x86.
|
driver-notes.libvirt-qemu-x86=The same restrictions apply as KVM x86.
|
||||||
driver.libvirt-lxc=missing
|
driver.libvirt-lxc=missing
|
||||||
driver.libvirt-xen=unknown
|
|
||||||
driver.vmware=missing
|
driver.vmware=missing
|
||||||
driver.hyperv=missing
|
driver.hyperv=missing
|
||||||
driver.ironic=missing
|
driver.ironic=missing
|
||||||
@ -1443,7 +1389,6 @@ driver.libvirt-kvm-ppc64=complete
|
|||||||
driver.libvirt-kvm-s390x=complete
|
driver.libvirt-kvm-s390x=complete
|
||||||
driver.libvirt-qemu-x86=complete
|
driver.libvirt-qemu-x86=complete
|
||||||
driver.libvirt-lxc=complete
|
driver.libvirt-lxc=complete
|
||||||
driver.libvirt-xen=complete
|
|
||||||
driver.vmware=missing
|
driver.vmware=missing
|
||||||
driver.hyperv=missing
|
driver.hyperv=missing
|
||||||
driver.ironic=missing
|
driver.ironic=missing
|
||||||
@ -1467,7 +1412,6 @@ driver.libvirt-kvm-ppc64=unknown
|
|||||||
driver.libvirt-kvm-s390x=unknown
|
driver.libvirt-kvm-s390x=unknown
|
||||||
driver.libvirt-qemu-x86=complete
|
driver.libvirt-qemu-x86=complete
|
||||||
driver.libvirt-lxc=missing
|
driver.libvirt-lxc=missing
|
||||||
driver.libvirt-xen=missing
|
|
||||||
driver.vmware=missing
|
driver.vmware=missing
|
||||||
driver.hyperv=missing
|
driver.hyperv=missing
|
||||||
driver.ironic=missing
|
driver.ironic=missing
|
||||||
@ -1489,7 +1433,6 @@ driver.libvirt-kvm-ppc64=complete
|
|||||||
driver.libvirt-kvm-s390x=missing
|
driver.libvirt-kvm-s390x=missing
|
||||||
driver.libvirt-qemu-x86=complete
|
driver.libvirt-qemu-x86=complete
|
||||||
driver.libvirt-lxc=missing
|
driver.libvirt-lxc=missing
|
||||||
driver.libvirt-xen=missing
|
|
||||||
driver.vmware=missing
|
driver.vmware=missing
|
||||||
driver.hyperv=missing
|
driver.hyperv=missing
|
||||||
driver.ironic=missing
|
driver.ironic=missing
|
||||||
@ -1513,7 +1456,6 @@ driver.libvirt-kvm-ppc64=missing
|
|||||||
driver.libvirt-kvm-s390x=missing
|
driver.libvirt-kvm-s390x=missing
|
||||||
driver.libvirt-qemu-x86=complete
|
driver.libvirt-qemu-x86=complete
|
||||||
driver.libvirt-lxc=missing
|
driver.libvirt-lxc=missing
|
||||||
driver.libvirt-xen=missing
|
|
||||||
driver.vmware=missing
|
driver.vmware=missing
|
||||||
driver.hyperv=missing
|
driver.hyperv=missing
|
||||||
driver.ironic=missing
|
driver.ironic=missing
|
||||||
@ -1540,7 +1482,6 @@ driver.libvirt-kvm-ppc64=missing
|
|||||||
driver.libvirt-kvm-s390x=missing
|
driver.libvirt-kvm-s390x=missing
|
||||||
driver.libvirt-qemu-x86=missing
|
driver.libvirt-qemu-x86=missing
|
||||||
driver.libvirt-lxc=missing
|
driver.libvirt-lxc=missing
|
||||||
driver.libvirt-xen=missing
|
|
||||||
driver.vmware=missing
|
driver.vmware=missing
|
||||||
driver.hyperv=missing
|
driver.hyperv=missing
|
||||||
driver.ironic=missing
|
driver.ironic=missing
|
||||||
@ -1564,7 +1505,6 @@ driver.libvirt-kvm-ppc64=complete
|
|||||||
driver.libvirt-kvm-s390x=complete
|
driver.libvirt-kvm-s390x=complete
|
||||||
driver.libvirt-qemu-x86=complete
|
driver.libvirt-qemu-x86=complete
|
||||||
driver.libvirt-lxc=unknown
|
driver.libvirt-lxc=unknown
|
||||||
driver.libvirt-xen=complete
|
|
||||||
driver.vmware=partial
|
driver.vmware=partial
|
||||||
driver.hyperv=partial
|
driver.hyperv=partial
|
||||||
driver.ironic=missing
|
driver.ironic=missing
|
||||||
@ -1592,7 +1532,6 @@ driver.libvirt-kvm-s390x=missing
|
|||||||
driver.libvirt-qemu-x86=partial
|
driver.libvirt-qemu-x86=partial
|
||||||
driver-notes.libvirt-qemu-x86=Move operations are not yet supported.
|
driver-notes.libvirt-qemu-x86=Move operations are not yet supported.
|
||||||
driver.libvirt-lxc=missing
|
driver.libvirt-lxc=missing
|
||||||
driver.libvirt-xen=missing
|
|
||||||
driver.vmware=missing
|
driver.vmware=missing
|
||||||
driver.hyperv=missing
|
driver.hyperv=missing
|
||||||
driver.ironic=missing
|
driver.ironic=missing
|
||||||
|
@@ -104,7 +104,7 @@ Related options:
 """),
     cfg.StrOpt('virt_type',
                default='kvm',
-               choices=('kvm', 'lxc', 'qemu', 'xen', 'parallels'),
+               choices=('kvm', 'lxc', 'qemu', 'parallels'),
                help="""
 Describes the virtualization type (or so called domain type) libvirt should
 use.
@@ -128,7 +128,8 @@ If set, Nova will use this URI to connect to libvirt.
 
 Possible values:
 
-* An URI like ``qemu:///system`` or ``xen+ssh://oirase/`` for example.
+* An URI like ``qemu:///system``.
+
 This is only necessary if the URI differs to the commonly known URIs
 for the chosen virtualization type.
 
@@ -273,7 +274,6 @@ in following list:
 
 * 'kvm': 'qemu+tcp://%s/system'
 * 'qemu': 'qemu+tcp://%s/system'
-* 'xen': 'xenmigr://%s/system'
 * 'parallels': 'parallels+tcp://%s/system'
 
 Related options:
@@ -627,9 +627,6 @@ Related options:
         default='$instances_path/snapshots',
         help='Location where libvirt driver will store snapshots '
             'before uploading them to image service'),
-    cfg.StrOpt('xen_hvmloader_path',
-        default='/usr/lib/xen/boot/hvmloader',
-        help='Location where the Xen hvmloader is kept'),
    cfg.ListOpt('disk_cachemodes',
        default=[],
        help="""
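The narrowed `virt_type` option above can be exercised with a small standalone sketch. This is not Nova code: the option name and the choices tuple come from the diff, while `validate_virt_type` is a hypothetical helper that mimics the `choices` check oslo.config performs when the option is read.

```python
# Illustration only: mimic oslo.config's "choices" validation for the
# [libvirt]/virt_type option after 'xen' was removed from the tuple.
VIRT_TYPE_CHOICES = ('kvm', 'lxc', 'qemu', 'parallels')


def validate_virt_type(value):
    # oslo.config raises its own config error for out-of-range values;
    # a plain ValueError stands in for that here.
    if value not in VIRT_TYPE_CHOICES:
        raise ValueError(
            'invalid virt_type %r; valid values are %s'
            % (value, ', '.join(VIRT_TYPE_CHOICES)))
    return value


print(validate_virt_type('kvm'))   # accepted as before
try:
    validate_virt_type('xen')      # now rejected
except ValueError as exc:
    print('rejected:', exc)
```

A deployment that still sets `virt_type = xen` in `nova.conf` would therefore fail option validation at startup rather than being silently accepted.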
@@ -1540,11 +1540,6 @@ class InternalError(NovaException):
     msg_fmt = "%(err)s"
 
 
-class PciDevicePrepareFailed(NovaException):
-    msg_fmt = _("Failed to prepare PCI device %(id)s for instance "
-                "%(instance_uuid)s: %(reason)s")
-
-
 class PciDeviceDetachFailed(NovaException):
     msg_fmt = _("Failed to detach PCI device %(dev)s: %(reason)s")
 
@@ -191,11 +191,6 @@ def readpty(path):
         return ''
 
 
-@nova.privsep.sys_admin_pctxt.entrypoint
-def xend_probe():
-    processutils.execute('xend', 'status', check_exit_code=True)
-
-
 @nova.privsep.sys_admin_pctxt.entrypoint
 def create_mdev(physical_device, mdev_type, uuid=None):
     """Instantiate a mediated device."""
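The removed privsep helper above amounted to running `xend status` and treating a non-zero exit code as failure. A stdlib sketch of that pattern, for illustration only: `probe` is a hypothetical name, and a Python one-liner stands in for the long-gone `xend` binary.

```python
import subprocess
import sys


def probe(cmd):
    # Equivalent of processutils.execute(..., check_exit_code=True):
    # raises CalledProcessError when the command exits non-zero.
    subprocess.run(cmd, check=True)


# Stand-in commands, since 'xend' no longer exists on modern hosts.
probe([sys.executable, '-c', 'pass'])
print('probe ok')
try:
    probe([sys.executable, '-c', 'raise SystemExit(1)'])
except subprocess.CalledProcessError:
    print('probe failed')
```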
@@ -147,12 +147,6 @@ class LibvirtTestCase(test.NoDBTestCase):
                 mock_fcntl.F_SETFL, 32769 | os.O_NONBLOCK)])
             self.assertIn(mock.call('/fake/path', 'r'), mock_open.mock_calls)
 
-    @mock.patch('oslo_concurrency.processutils.execute')
-    def test_xend_probe(self, mock_execute):
-        nova.privsep.libvirt.xend_probe()
-        mock_execute.assert_called_with('xend', 'status',
-                                        check_exit_code=True)
-
     def test_create_nmdev(self):
         mock_open = mock.mock_open()
         with mock.patch('builtins.open', new=mock_open) as mock_open:
@ -79,7 +79,6 @@ def fake_kvm_guest():
|
|||||||
obj.features = [
|
obj.features = [
|
||||||
config.LibvirtConfigGuestFeatureACPI(),
|
config.LibvirtConfigGuestFeatureACPI(),
|
||||||
config.LibvirtConfigGuestFeatureAPIC(),
|
config.LibvirtConfigGuestFeatureAPIC(),
|
||||||
config.LibvirtConfigGuestFeaturePAE(),
|
|
||||||
config.LibvirtConfigGuestFeatureKvmHidden()
|
config.LibvirtConfigGuestFeatureKvmHidden()
|
||||||
]
|
]
|
||||||
|
|
||||||
@ -202,7 +201,6 @@ FAKE_KVM_GUEST = """
|
|||||||
<features>
|
<features>
|
||||||
<acpi/>
|
<acpi/>
|
||||||
<apic/>
|
<apic/>
|
||||||
<pae/>
|
|
||||||
<kvm>
|
<kvm>
|
||||||
<hidden state='on'/>
|
<hidden state='on'/>
|
||||||
</kvm>
|
</kvm>
|
||||||
|
@ -1439,12 +1439,13 @@ class Connection(object):
|
|||||||
raise ValueError("URI was None, but fake libvirt is "
|
raise ValueError("URI was None, but fake libvirt is "
|
||||||
"configured to not accept this.")
|
"configured to not accept this.")
|
||||||
|
|
||||||
uri_whitelist = ['qemu:///system',
|
uri_whitelist = [
|
||||||
'qemu:///session',
|
'qemu:///system',
|
||||||
'lxc:///', # from LibvirtDriver._uri()
|
'qemu:///session',
|
||||||
'xen:///', # from LibvirtDriver._uri()
|
'lxc:///', # from LibvirtDriver._uri()
|
||||||
'test:///default',
|
'test:///default',
|
||||||
'parallels:///system']
|
'parallels:///system',
|
||||||
|
]
|
||||||
|
|
||||||
if uri not in uri_whitelist:
|
if uri not in uri_whitelist:
|
||||||
raise make_libvirtError(
|
raise make_libvirtError(
|
||||||
|
@ -1000,12 +1000,10 @@ class LibvirtBlockInfoTest(test.NoDBTestCase):
|
|||||||
|
|
||||||
def test_success_get_disk_bus_for_disk_dev(self):
|
def test_success_get_disk_bus_for_disk_dev(self):
|
||||||
expected = (
|
expected = (
|
||||||
('ide', ("kvm", "hda")),
|
('ide', ("kvm", "hda")),
|
||||||
('scsi', ("kvm", "sdf")),
|
('scsi', ("kvm", "sdf")),
|
||||||
('virtio', ("kvm", "vds")),
|
('virtio', ("kvm", "vds")),
|
||||||
('fdc', ("kvm", "fdc")),
|
('fdc', ("kvm", "fdc")),
|
||||||
('xen', ("xen", "sdf")),
|
|
||||||
('xen', ("xen", "xvdb")),
|
|
||||||
)
|
)
|
||||||
for res, args in expected:
|
for res, args in expected:
|
||||||
self.assertEqual(res, blockinfo.get_disk_bus_for_disk_dev(*args))
|
self.assertEqual(res, blockinfo.get_disk_bus_for_disk_dev(*args))
|
||||||
@ -1219,15 +1217,6 @@ class LibvirtBlockInfoTest(test.NoDBTestCase):
|
|||||||
'device_type': 'disk'},
|
'device_type': 'disk'},
|
||||||
{}, 'virtio')
|
{}, 'virtio')
|
||||||
mock_get_info.reset_mock()
|
mock_get_info.reset_mock()
|
||||||
# xen with incompatible root_device_name/disk_bus combination
|
|
||||||
root_bdm['disk_bus'] = 'xen'
|
|
||||||
blockinfo.get_root_info(instance, 'xen', image_meta, root_bdm,
|
|
||||||
'xen', 'ide', root_device_name='sda')
|
|
||||||
mock_get_info.assert_called_once_with(instance, 'xen', image_meta,
|
|
||||||
{'device_name': 'xvda',
|
|
||||||
'disk_bus': 'xen',
|
|
||||||
'device_type': 'disk'},
|
|
||||||
{}, 'xen')
|
|
||||||
|
|
||||||
def test_get_boot_order_simple(self):
|
def test_get_boot_order_simple(self):
|
||||||
disk_info = {
|
disk_info = {
|
||||||
@ -1305,7 +1294,7 @@ class LibvirtBlockInfoTest(test.NoDBTestCase):
|
|||||||
|
|
||||||
def test_get_rescue_bus(self):
|
def test_get_rescue_bus(self):
|
||||||
# Assert that all supported device bus types are returned. Stable
|
# Assert that all supported device bus types are returned. Stable
|
||||||
# device rescue is not supported by xen or lxc so ignore these.
|
# device rescue is not supported by lxc so ignore this.
|
||||||
for virt_type in ['qemu', 'kvm', 'parallels']:
|
for virt_type in ['qemu', 'kvm', 'parallels']:
|
||||||
for bus in blockinfo.SUPPORTED_DEVICE_BUSES[virt_type]:
|
for bus in blockinfo.SUPPORTED_DEVICE_BUSES[virt_type]:
|
||||||
meta = self._get_rescue_image_meta({'hw_rescue_bus': bus})
|
meta = self._get_rescue_image_meta({'hw_rescue_bus': bus})
|
||||||
|
@ -2059,27 +2059,6 @@ class LibvirtConfigGuestInterfaceTest(LibvirtConfigBaseTest):
|
|||||||
obj2.parse_str(xml)
|
obj2.parse_str(xml)
|
||||||
self.assertXmlEqual(xml, obj2.to_xml())
|
self.assertXmlEqual(xml, obj2.to_xml())
|
||||||
|
|
||||||
def test_config_bridge_xen(self):
|
|
||||||
obj = config.LibvirtConfigGuestInterface()
|
|
||||||
obj.net_type = "bridge"
|
|
||||||
obj.source_dev = "br0"
|
|
||||||
obj.mac_addr = "CA:FE:BE:EF:CA:FE"
|
|
||||||
obj.script = "/path/to/test-vif-openstack"
|
|
||||||
|
|
||||||
xml = obj.to_xml()
|
|
||||||
self.assertXmlEqual(xml, """
|
|
||||||
<interface type="bridge">
|
|
||||||
<mac address="CA:FE:BE:EF:CA:FE"/>
|
|
||||||
<source bridge="br0"/>
|
|
||||||
<script path="/path/to/test-vif-openstack"/>
|
|
||||||
</interface>""")
|
|
||||||
|
|
||||||
# parse the xml from the first object into a new object and make sure
|
|
||||||
# they are the same
|
|
||||||
obj2 = config.LibvirtConfigGuestInterface()
|
|
||||||
obj2.parse_str(xml)
|
|
||||||
self.assertXmlEqual(xml, obj2.to_xml())
|
|
||||||
|
|
||||||
def test_config_8021Qbh(self):
|
def test_config_8021Qbh(self):
|
||||||
obj = config.LibvirtConfigGuestInterface()
|
obj = config.LibvirtConfigGuestInterface()
|
||||||
obj.net_type = "direct"
|
obj.net_type = "direct"
|
||||||
@ -2460,100 +2439,6 @@ class LibvirtConfigGuestTest(LibvirtConfigBaseTest):
|
|||||||
</idmap>
|
</idmap>
|
||||||
</domain>""", xml)
|
</domain>""", xml)
|
||||||
|
|
||||||
def test_config_xen_pv(self):
|
|
||||||
obj = config.LibvirtConfigGuest()
|
|
||||||
obj.virt_type = "xen"
|
|
||||||
obj.memory = 100 * units.Mi
|
|
||||||
obj.vcpus = 2
|
|
||||||
obj.cpuset = set([0, 1, 3, 4, 5])
|
|
||||||
obj.name = "demo"
|
|
||||||
obj.uuid = "b38a3f43-4be2-4046-897f-b67c2f5e0147"
|
|
||||||
obj.os_type = "linux"
|
|
||||||
obj.os_kernel = "/tmp/vmlinuz"
|
|
||||||
obj.os_initrd = "/tmp/ramdisk"
|
|
||||||
obj.os_cmdline = "console=xvc0"
|
|
||||||
|
|
||||||
disk = config.LibvirtConfigGuestDisk()
|
|
||||||
disk.source_type = "file"
|
|
||||||
disk.source_path = "/tmp/img"
|
|
||||||
disk.target_dev = "/dev/xvda"
|
|
||||||
disk.target_bus = "xen"
|
|
||||||
|
|
||||||
obj.add_device(disk)
|
|
||||||
|
|
||||||
xml = obj.to_xml()
|
|
||||||
self.assertXmlEqual(xml, """
|
|
||||||
<domain type="xen">
|
|
||||||
<uuid>b38a3f43-4be2-4046-897f-b67c2f5e0147</uuid>
|
|
||||||
<name>demo</name>
|
|
||||||
<memory>104857600</memory>
|
|
||||||
<vcpu cpuset="0-1,3-5">2</vcpu>
|
|
||||||
<os>
|
|
||||||
<type>linux</type>
|
|
||||||
<kernel>/tmp/vmlinuz</kernel>
|
|
||||||
<initrd>/tmp/ramdisk</initrd>
|
|
||||||
<cmdline>console=xvc0</cmdline>
|
|
||||||
</os>
|
|
||||||
<devices>
|
|
||||||
<disk type="file" device="disk">
|
|
||||||
<source file="/tmp/img"/>
|
|
||||||
<target bus="xen" dev="/dev/xvda"/>
|
|
||||||
</disk>
|
|
||||||
</devices>
|
|
||||||
</domain>""")
|
|
||||||
|
|
||||||
def test_config_xen_hvm(self):
|
|
||||||
obj = config.LibvirtConfigGuest()
|
|
||||||
obj.virt_type = "xen"
|
|
||||||
obj.memory = 100 * units.Mi
|
|
||||||
obj.vcpus = 2
|
|
||||||
obj.cpuset = set([0, 1, 3, 4, 5])
|
|
||||||
obj.name = "demo"
|
|
||||||
obj.uuid = "b38a3f43-4be2-4046-897f-b67c2f5e0147"
|
|
||||||
obj.os_type = "hvm"
|
|
||||||
obj.os_loader = '/usr/lib/xen/boot/hvmloader'
|
|
||||||
obj.os_root = "root=xvda"
|
|
||||||
obj.os_cmdline = "console=xvc0"
|
|
||||||
obj.features = [
|
|
||||||
config.LibvirtConfigGuestFeatureACPI(),
|
|
||||||
config.LibvirtConfigGuestFeatureAPIC(),
|
|
||||||
config.LibvirtConfigGuestFeaturePAE(),
|
|
||||||
]
|
|
||||||
|
|
||||||
disk = config.LibvirtConfigGuestDisk()
|
|
||||||
disk.source_type = "file"
|
|
||||||
disk.source_path = "/tmp/img"
|
|
||||||
disk.target_dev = "/dev/xvda"
|
|
||||||
disk.target_bus = "xen"
|
|
||||||
|
|
||||||
obj.add_device(disk)
|
|
||||||
|
|
||||||
xml = obj.to_xml()
|
|
||||||
self.assertXmlEqual(xml, """
|
|
||||||
<domain type="xen">
|
|
||||||
<uuid>b38a3f43-4be2-4046-897f-b67c2f5e0147</uuid>
|
|
||||||
<name>demo</name>
|
|
||||||
<memory>104857600</memory>
|
|
||||||
<vcpu cpuset="0-1,3-5">2</vcpu>
|
|
||||||
<os>
|
|
||||||
<type>hvm</type>
|
|
||||||
<loader>/usr/lib/xen/boot/hvmloader</loader>
|
|
||||||
<cmdline>console=xvc0</cmdline>
|
|
||||||
<root>root=xvda</root>
|
|
||||||
</os>
|
|
||||||
<features>
|
|
||||||
<acpi/>
|
|
||||||
<apic/>
|
|
||||||
<pae/>
|
|
||||||
</features>
|
|
||||||
<devices>
|
|
||||||
<disk type="file" device="disk">
|
|
||||||
<source file="/tmp/img"/>
|
|
||||||
<target bus="xen" dev="/dev/xvda"/>
|
|
||||||
</disk>
|
|
||||||
</devices>
|
|
||||||
</domain>""")
|
|
||||||
|
|
||||||
def test_config_kvm(self):
|
def test_config_kvm(self):
|
||||||
obj = fake_libvirt_data.fake_kvm_guest()
|
obj = fake_libvirt_data.fake_kvm_guest()
|
||||||
|
|
||||||
|
@ -1786,17 +1786,6 @@ class LibvirtConnTestCase(test.NoDBTestCase,
|
|||||||
libvirt_driver.libvirt.VIR_MIGRATE_LIVE |
|
libvirt_driver.libvirt.VIR_MIGRATE_LIVE |
|
||||||
libvirt_driver.libvirt.VIR_MIGRATE_NON_SHARED_INC))
|
libvirt_driver.libvirt.VIR_MIGRATE_NON_SHARED_INC))
|
||||||
|
|
||||||
def test_parse_migration_flags_p2p_xen(self):
|
|
||||||
self.flags(virt_type='xen', group='libvirt')
|
|
||||||
self._do_test_parse_migration_flags(
|
|
||||||
lm_expected=(libvirt_driver.libvirt.VIR_MIGRATE_UNDEFINE_SOURCE |
|
|
||||||
libvirt_driver.libvirt.VIR_MIGRATE_PERSIST_DEST |
|
|
||||||
libvirt_driver.libvirt.VIR_MIGRATE_LIVE),
|
|
||||||
bm_expected=(libvirt_driver.libvirt.VIR_MIGRATE_UNDEFINE_SOURCE |
|
|
||||||
libvirt_driver.libvirt.VIR_MIGRATE_PERSIST_DEST |
|
|
||||||
libvirt_driver.libvirt.VIR_MIGRATE_LIVE |
|
|
||||||
libvirt_driver.libvirt.VIR_MIGRATE_NON_SHARED_INC))
|
|
||||||
|
|
||||||
def test_live_migration_tunnelled_true(self):
|
def test_live_migration_tunnelled_true(self):
|
||||||
self.flags(live_migration_tunnelled=True, group='libvirt')
|
self.flags(live_migration_tunnelled=True, group='libvirt')
|
||||||
self._do_test_parse_migration_flags(
|
self._do_test_parse_migration_flags(
|
||||||
@ -2206,19 +2195,6 @@ class LibvirtConnTestCase(test.NoDBTestCase,
|
|||||||
# The error should have been logged as well.
|
# The error should have been logged as well.
|
||||||
self.assertIn(str(error), log_output)
|
self.assertIn(str(error), log_output)
|
||||||
|
|
||||||
@mock.patch.object(fakelibvirt.virConnect, "nodeDeviceLookupByName")
|
|
||||||
def test_prepare_pci_device(self, mock_lookup):
|
|
||||||
|
|
||||||
pci_devices = [dict(hypervisor_name='xxx')]
|
|
||||||
|
|
||||||
self.flags(virt_type='xen', group='libvirt')
|
|
||||||
|
|
||||||
drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), True)
|
|
||||||
conn = drvr._host.get_connection()
|
|
||||||
|
|
||||||
mock_lookup.side_effect = lambda x: fakelibvirt.NodeDevice(conn)
|
|
||||||
drvr._prepare_pci_devices_for_use(pci_devices)
|
|
||||||
|
|
||||||
@mock.patch('nova.context.get_admin_context')
|
@mock.patch('nova.context.get_admin_context')
|
||||||
@mock.patch('nova.compute.utils.notify_about_libvirt_connect_error')
|
@mock.patch('nova.compute.utils.notify_about_libvirt_connect_error')
|
||||||
def test_versioned_notification(self, mock_notify, mock_get):
|
def test_versioned_notification(self, mock_notify, mock_get):
|
||||||
@ -2238,24 +2214,6 @@ class LibvirtConnTestCase(test.NoDBTestCase,
|
|||||||
mock_notify.assert_called_once_with(self.context, ip=CONF.my_ip,
|
mock_notify.assert_called_once_with(self.context, ip=CONF.my_ip,
|
||||||
exception=fake_error)
|
exception=fake_error)
|
||||||
|
|
||||||
@mock.patch.object(fakelibvirt.virConnect, "nodeDeviceLookupByName")
|
|
||||||
@mock.patch.object(fakelibvirt.virNodeDevice, "dettach")
|
|
||||||
def test_prepare_pci_device_exception(self, mock_detach, mock_lookup):
|
|
||||||
|
|
||||||
pci_devices = [dict(hypervisor_name='xxx',
|
|
||||||
id='id1',
|
|
||||||
instance_uuid='uuid')]
|
|
||||||
|
|
||||||
self.flags(virt_type='xen', group='libvirt')
|
|
||||||
drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), True)
|
|
||||||
conn = drvr._host.get_connection()
|
|
||||||
|
|
||||||
mock_lookup.side_effect = lambda x: fakelibvirt.NodeDevice(conn)
|
|
||||||
mock_detach.side_effect = fakelibvirt.libvirtError("xxxx")
|
|
||||||
|
|
||||||
self.assertRaises(exception.PciDevicePrepareFailed,
|
|
||||||
drvr._prepare_pci_devices_for_use, pci_devices)
|
|
||||||
|
|
||||||
@mock.patch.object(host.Host, "has_min_version", return_value=False)
|
@mock.patch.object(host.Host, "has_min_version", return_value=False)
|
||||||
def test_device_metadata(self, mock_version):
|
def test_device_metadata(self, mock_version):
|
||||||
xml = """
|
xml = """
|
||||||
@ -3653,17 +3611,6 @@ class LibvirtConnTestCase(test.NoDBTestCase,
|
|||||||
exception.NUMATopologyUnsupported,
|
exception.NUMATopologyUnsupported,
|
||||||
None)
|
None)
|
||||||
|
|
||||||
def test_get_guest_config_numa_xen(self):
|
|
||||||
self.flags(virt_type='xen', group='libvirt')
|
|
||||||
self._test_get_guest_config_numa_unsupported(
|
|
||||||
versionutils.convert_version_to_int(
|
|
||||||
libvirt_driver.MIN_LIBVIRT_VERSION),
|
|
||||||
versionutils.convert_version_to_int((4, 5, 0)),
|
|
||||||
'XEN',
|
|
||||||
fields.Architecture.X86_64,
|
|
||||||
exception.NUMATopologyUnsupported,
|
|
||||||
None)
|
|
||||||
|
|
||||||
@mock.patch.object(
|
@mock.patch.object(
|
||||||
host.Host, "is_cpu_control_policy_capable", return_value=True)
|
host.Host, "is_cpu_control_policy_capable", return_value=True)
|
||||||
def test_get_guest_config_numa_host_instance_fit_w_cpu_pinset(
|
def test_get_guest_config_numa_host_instance_fit_w_cpu_pinset(
|
||||||
@ -5972,8 +5919,6 @@ class LibvirtConnTestCase(test.NoDBTestCase,
|
|||||||
"pty", vconfig.LibvirtConfigGuestSerial)
|
"pty", vconfig.LibvirtConfigGuestSerial)
|
||||||
_test_consoles(fields.Architecture.S390, True,
|
_test_consoles(fields.Architecture.S390, True,
|
||||||
"tcp", vconfig.LibvirtConfigGuestConsole)
|
"tcp", vconfig.LibvirtConfigGuestConsole)
|
||||||
_test_consoles(fields.Architecture.X86_64, False,
|
|
||||||
"pty", vconfig.LibvirtConfigGuestConsole, 'xen')
|
|
||||||
|
|
||||||
@mock.patch('nova.console.serial.acquire_port')
|
@mock.patch('nova.console.serial.acquire_port')
|
||||||
def test_get_guest_config_serial_console_through_port_rng_exhausted(
|
def test_get_guest_config_serial_console_through_port_rng_exhausted(
|
||||||
@ -6129,39 +6074,6 @@ class LibvirtConnTestCase(test.NoDBTestCase,
|
|||||||
i = drvr._get_scsi_controller_next_unit(guest)
|
i = drvr._get_scsi_controller_next_unit(guest)
|
||||||
self.assertEqual(expect_num, i)
|
self.assertEqual(expect_num, i)
|
||||||
|
|
||||||
def test_get_guest_config_with_type_xen(self):
|
|
||||||
self.flags(enabled=True, group='vnc')
|
|
||||||
self.flags(virt_type='xen', group='libvirt')
|
|
||||||
self.flags(enabled=False, group='spice')
|
|
||||||
|
|
||||||
drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), True)
|
|
||||||
instance_ref = objects.Instance(**self.test_instance)
|
|
||||||
image_meta = objects.ImageMeta.from_dict(self.test_image_meta)
|
|
||||||
|
|
||||||
disk_info = blockinfo.get_disk_info(CONF.libvirt.virt_type,
|
|
||||||
instance_ref,
|
|
||||||
image_meta)
|
|
||||||
cfg = drvr._get_guest_config(instance_ref, [],
|
|
||||||
image_meta, disk_info)
|
|
||||||
self.assertEqual(len(cfg.devices), 7)
|
|
||||||
self.assertIsInstance(cfg.devices[0],
|
|
||||||
vconfig.LibvirtConfigGuestDisk)
|
|
||||||
self.assertIsInstance(cfg.devices[1],
|
|
||||||
vconfig.LibvirtConfigGuestDisk)
|
|
||||||
self.assertIsInstance(cfg.devices[2],
|
|
||||||
vconfig.LibvirtConfigGuestConsole)
|
|
||||||
self.assertIsInstance(cfg.devices[3],
|
|
||||||
vconfig.LibvirtConfigGuestGraphics)
|
|
||||||
self.assertIsInstance(cfg.devices[4],
|
|
||||||
vconfig.LibvirtConfigGuestVideo)
|
|
||||||
self.assertIsInstance(cfg.devices[5],
|
|
||||||
vconfig.LibvirtConfigGuestUSBHostController)
|
|
||||||
self.assertIsInstance(cfg.devices[6],
|
|
||||||
vconfig.LibvirtConfigMemoryBalloon)
|
|
||||||
|
|
||||||
self.assertEqual(cfg.devices[3].type, "vnc")
|
|
||||||
self.assertEqual(cfg.devices[4].type, "xen")
|
|
||||||
|
|
||||||
@mock.patch.object(libvirt_driver.libvirt_utils, 'get_arch',
|
@mock.patch.object(libvirt_driver.libvirt_utils, 'get_arch',
|
||||||
return_value=fields.Architecture.S390X)
|
return_value=fields.Architecture.S390X)
|
||||||
def test_get_guest_config_with_type_kvm_on_s390(self, mock_get_arch):
|
def test_get_guest_config_with_type_kvm_on_s390(self, mock_get_arch):
|
||||||
@ -6207,55 +6119,6 @@ class LibvirtConnTestCase(test.NoDBTestCase,
|
|||||||
return drvr._get_guest_config(instance, [],
|
return drvr._get_guest_config(instance, [],
|
||||||
image_meta, disk_info)
|
image_meta, disk_info)
|
||||||
|
|
||||||
def test_get_guest_config_with_type_xen_pae_hvm(self):
|
|
||||||
self.flags(enabled=True, group='vnc')
|
|
||||||
self.flags(virt_type='xen', group='libvirt')
|
|
||||||
self.flags(enabled=False, group='spice')
|
|
||||||
|
|
||||||
drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), True)
|
|
||||||
instance_ref = objects.Instance(**self.test_instance)
|
|
||||||
instance_ref['vm_mode'] = fields.VMMode.HVM
|
|
||||||
image_meta = objects.ImageMeta.from_dict(self.test_image_meta)
|
|
||||||
|
|
||||||
disk_info = blockinfo.get_disk_info(CONF.libvirt.virt_type,
|
|
||||||
instance_ref,
|
|
||||||
image_meta)
|
|
||||||
|
|
||||||
cfg = drvr._get_guest_config(instance_ref, [],
|
|
||||||
image_meta, disk_info)
|
|
||||||
|
|
||||||
self.assertEqual(cfg.os_type, fields.VMMode.HVM)
|
|
||||||
self.assertEqual(cfg.os_loader, CONF.libvirt.xen_hvmloader_path)
|
|
||||||
self.assertEqual(3, len(cfg.features))
|
|
||||||
self.assertIsInstance(cfg.features[0],
|
|
||||||
vconfig.LibvirtConfigGuestFeaturePAE)
|
|
||||||
self.assertIsInstance(cfg.features[1],
|
|
||||||
vconfig.LibvirtConfigGuestFeatureACPI)
|
|
||||||
self.assertIsInstance(cfg.features[2],
|
|
||||||
vconfig.LibvirtConfigGuestFeatureAPIC)
|
|
||||||
|
|
||||||
def test_get_guest_config_with_type_xen_pae_pvm(self):
|
|
||||||
self.flags(enabled=True, group='vnc')
|
|
||||||
self.flags(virt_type='xen', group='libvirt')
|
|
||||||
self.flags(enabled=False, group='spice')
|
|
||||||
|
|
||||||
drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), True)
|
|
||||||
instance_ref = objects.Instance(**self.test_instance)
|
|
||||||
image_meta = objects.ImageMeta.from_dict(self.test_image_meta)
|
|
||||||
|
|
||||||
disk_info = blockinfo.get_disk_info(CONF.libvirt.virt_type,
|
|
||||||
instance_ref,
|
|
||||||
image_meta)
|
|
||||||
|
|
||||||
cfg = drvr._get_guest_config(instance_ref, [],
|
|
||||||
image_meta, disk_info)
|
|
||||||
|
|
||||||
self.assertEqual(cfg.os_type, fields.VMMode.XEN)
|
|
||||||
self.assertEqual(1, len(cfg.features))
|
|
||||||
self.assertIsInstance(cfg.features[0],
|
|
||||||
vconfig.LibvirtConfigGuestFeaturePAE)
|
|
||||||
|
|
||||||
def test_get_guest_config_with_vnc_and_spice(self):
|
|
||||||
self.flags(enabled=True, group='vnc')
|
self.flags(enabled=True, group='vnc')
|
||||||
self.flags(virt_type='kvm', group='libvirt')
|
self.flags(virt_type='kvm', group='libvirt')
|
||||||
self.flags(enabled=True, agent_enabled=True, group='spice')
|
self.flags(enabled=True, agent_enabled=True, group='spice')
|
||||||
@ -6429,14 +6292,6 @@ class LibvirtConnTestCase(test.NoDBTestCase,
|
|||||||
True, True, fields.VMMode.HVM, image_meta=image_meta)
|
True, True, fields.VMMode.HVM, image_meta=image_meta)
|
||||||
self.assertIsNotNone(tablet)
|
self.assertIsNotNone(tablet)
|
||||||
|
|
||||||
def test_get_guest_pointer_model_usb_tablet_image_no_HVM(self):
|
|
||||||
self.flags(pointer_model=None)
|
|
||||||
image_meta = {"properties": {"hw_pointer_model": "usbtablet"}}
|
|
||||||
self.assertRaises(
|
|
||||||
exception.UnsupportedPointerModelRequested,
|
|
||||||
self._test_get_guest_usb_tablet,
|
|
||||||
True, True, fields.VMMode.XEN, image_meta=image_meta)
|
|
||||||
|
|
||||||
def test_get_guest_config_with_watchdog_action_flavor(self):
|
def test_get_guest_config_with_watchdog_action_flavor(self):
|
||||||
self.flags(virt_type='kvm', group='libvirt')
|
self.flags(virt_type='kvm', group='libvirt')
|
||||||
|
|
||||||
@ -7514,48 +7369,6 @@ class LibvirtConnTestCase(test.NoDBTestCase,
|
|||||||
self.assertEqual(dev.function, "1")
|
self.assertEqual(dev.function, "1")
|
||||||
self.assertEqual(had_pci, 1)
|
self.assertEqual(had_pci, 1)
|
||||||
|
|
||||||
def test_get_guest_config_with_pci_passthrough_xen(self):
|
|
||||||
self.flags(virt_type='xen', group='libvirt')
|
|
||||||
service_ref, compute_ref = self._create_fake_service_compute()
|
|
||||||
|
|
||||||
instance = objects.Instance(**self.test_instance)
|
|
||||||
image_meta = objects.ImageMeta.from_dict(self.test_image_meta)
|
|
||||||
|
|
||||||
pci_device_info = dict(test_pci_device.fake_db_dev)
|
|
||||||
pci_device_info.update(compute_node_id=1,
|
|
||||||
label='fake',
|
|
||||||
status=fields.PciDeviceStatus.ALLOCATED,
|
|
||||||
address='0000:00:00.2',
|
|
||||||
compute_id=compute_ref.id,
|
|
||||||
instance_uuid=instance.uuid,
|
|
||||||
request_id=None,
|
|
||||||
extra_info={})
|
|
||||||
pci_device = objects.PciDevice(**pci_device_info)
|
|
||||||
pci_list = objects.PciDeviceList()
|
|
||||||
pci_list.objects.append(pci_device)
|
|
||||||
instance.pci_devices = pci_list
|
|
||||||
|
|
||||||
drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), True)
|
|
||||||
disk_info = blockinfo.get_disk_info(CONF.libvirt.virt_type,
|
|
||||||
instance,
|
|
||||||
image_meta)
|
|
||||||
cfg = drvr._get_guest_config(instance, [],
|
|
||||||
image_meta, disk_info)
|
|
||||||
had_pci = 0
|
|
||||||
# care only about the PCI devices
|
|
||||||
for dev in cfg.devices:
|
|
||||||
if type(dev) == vconfig.LibvirtConfigGuestHostdevPCI:
|
|
||||||
had_pci += 1
|
|
||||||
self.assertEqual(dev.type, 'pci')
|
|
||||||
self.assertEqual(dev.managed, 'no')
|
|
||||||
self.assertEqual(dev.mode, 'subsystem')
|
|
||||||
|
|
||||||
self.assertEqual(dev.domain, "0000")
|
|
||||||
self.assertEqual(dev.bus, "00")
|
|
||||||
self.assertEqual(dev.slot, "00")
|
|
||||||
self.assertEqual(dev.function, "2")
|
|
||||||
self.assertEqual(had_pci, 1)
|
|
||||||
|
|
||||||
def test_get_guest_config_os_command_line_through_image_meta(self):
|
def test_get_guest_config_os_command_line_through_image_meta(self):
|
||||||
self.flags(virt_type="kvm",
|
self.flags(virt_type="kvm",
|
||||||
cpu_mode='none',
|
cpu_mode='none',
|
||||||
@ -8545,24 +8358,6 @@ class LibvirtConnTestCase(test.NoDBTestCase,
|
|||||||
self.assertEqual('virtio', device.model)
|
self.assertEqual('virtio', device.model)
|
||||||
self.assertEqual(10, device.period)
|
self.assertEqual(10, device.period)
|
||||||
|
|
||||||
def test_get_guest_memory_balloon_config_xen(self):
|
|
||||||
self.flags(virt_type='xen', group='libvirt')
|
|
||||||
drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), True)
|
|
||||||
instance_ref = objects.Instance(**self.test_instance)
|
|
||||||
image_meta = objects.ImageMeta.from_dict(self.test_image_meta)
|
|
||||||
|
|
||||||
disk_info = blockinfo.get_disk_info(CONF.libvirt.virt_type,
|
|
||||||
instance_ref,
|
|
||||||
image_meta)
|
|
||||||
cfg = drvr._get_guest_config(instance_ref, [],
|
|
||||||
image_meta, disk_info)
|
|
||||||
for device in cfg.devices:
|
|
||||||
if device.root_name == 'memballoon':
|
|
||||||
self.assertIsInstance(device,
|
|
||||||
vconfig.LibvirtConfigMemoryBalloon)
|
|
||||||
self.assertEqual('xen', device.model)
|
|
||||||
self.assertEqual(10, device.period)
|
|
||||||
|
|
||||||
def test_get_guest_memory_balloon_config_lxc(self):
|
def test_get_guest_memory_balloon_config_lxc(self):
|
||||||
self.flags(virt_type='lxc', group='libvirt')
|
self.flags(virt_type='lxc', group='libvirt')
|
||||||
drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), True)
|
drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), True)
|
||||||
@ -8649,19 +8444,6 @@ class LibvirtConnTestCase(test.NoDBTestCase,
|
|||||||
self._check_xml_and_uri(instance_data,
|
self._check_xml_and_uri(instance_data,
|
||||||
expect_kernel=False, expect_ramdisk=False)
|
expect_kernel=False, expect_ramdisk=False)
|
||||||
|
|
||||||
def test_xml_and_uri_no_ramdisk_no_kernel_xen_hvm(self):
|
|
||||||
instance_data = dict(self.test_instance)
|
|
||||||
instance_data.update({'vm_mode': fields.VMMode.HVM})
|
|
||||||
self._check_xml_and_uri(instance_data, expect_kernel=False,
|
|
||||||
expect_ramdisk=False, expect_xen_hvm=True)
|
|
||||||
|
|
||||||
def test_xml_and_uri_no_ramdisk_no_kernel_xen_pv(self):
|
|
||||||
instance_data = dict(self.test_instance)
|
|
||||||
instance_data.update({'vm_mode': fields.VMMode.XEN})
|
|
||||||
self._check_xml_and_uri(instance_data, expect_kernel=False,
|
|
||||||
expect_ramdisk=False, expect_xen_hvm=False,
|
|
||||||
xen_only=True)
|
|
||||||
|
|
||||||
def test_xml_and_uri_no_ramdisk(self):
|
def test_xml_and_uri_no_ramdisk(self):
|
||||||
instance_data = dict(self.test_instance)
|
instance_data = dict(self.test_instance)
|
||||||
instance_data['kernel_id'] = 'aki-deadbeef'
|
instance_data['kernel_id'] = 'aki-deadbeef'
|
||||||
@ -8818,7 +8600,7 @@ class LibvirtConnTestCase(test.NoDBTestCase,
|
|||||||
self.assertEqual(names[1], vm2.name())
|
self.assertEqual(names[1], vm2.name())
|
||||||
self.assertEqual(names[2], vm3.name())
|
self.assertEqual(names[2], vm3.name())
|
||||||
self.assertEqual(names[3], vm4.name())
|
self.assertEqual(names[3], vm4.name())
|
||||||
mock_list.assert_called_with(only_guests=True, only_running=False)
|
mock_list.assert_called_with(only_running=False)
|
||||||
|
|
||||||
@mock.patch.object(host.Host, "list_instance_domains")
|
@mock.patch.object(host.Host, "list_instance_domains")
|
||||||
def test_list_instance_uuids(self, mock_list):
|
def test_list_instance_uuids(self, mock_list):
|
||||||
@ -8835,7 +8617,7 @@ class LibvirtConnTestCase(test.NoDBTestCase,
|
|||||||
self.assertEqual(uuids[1], vm2.UUIDString())
|
self.assertEqual(uuids[1], vm2.UUIDString())
|
||||||
self.assertEqual(uuids[2], vm3.UUIDString())
|
self.assertEqual(uuids[2], vm3.UUIDString())
|
||||||
self.assertEqual(uuids[3], vm4.UUIDString())
|
self.assertEqual(uuids[3], vm4.UUIDString())
|
||||||
mock_list.assert_called_with(only_guests=True, only_running=False)
|
mock_list.assert_called_with(only_running=False)
|
||||||
|
|
||||||
@mock.patch('nova.virt.libvirt.host.Host.get_online_cpus',
|
@mock.patch('nova.virt.libvirt.host.Host.get_online_cpus',
|
||||||
return_value=set([0, 1, 2, 3]))
|
return_value=set([0, 1, 2, 3]))
|
||||||
@ -10387,10 +10169,6 @@ class LibvirtConnTestCase(test.NoDBTestCase,
|
|||||||
(lambda t: t.find('.').get('type'), 'qemu'),
|
(lambda t: t.find('.').get('type'), 'qemu'),
|
||||||
(lambda t: t.find('./devices/disk/target').get('dev'),
|
(lambda t: t.find('./devices/disk/target').get('dev'),
|
||||||
_get_prefix(prefix, 'vda'))],
|
_get_prefix(prefix, 'vda'))],
|
||||||
'xen': [
|
|
||||||
(lambda t: t.find('.').get('type'), 'xen'),
|
|
||||||
(lambda t: t.find('./devices/disk/target').get('dev'),
|
|
||||||
_get_prefix(prefix, 'xvda'))],
|
|
||||||
'kvm': [
|
'kvm': [
|
||||||
(lambda t: t.find('.').get('type'), 'kvm'),
|
(lambda t: t.find('.').get('type'), 'kvm'),
|
||||||
(lambda t: t.find('./devices/disk/target').get('dev'),
|
(lambda t: t.find('./devices/disk/target').get('dev'),
|
||||||
@ -10520,15 +10298,11 @@ class LibvirtConnTestCase(test.NoDBTestCase,
|
|||||||
"_get_host_sysinfo_serial_hardware",)
|
"_get_host_sysinfo_serial_hardware",)
|
||||||
def _check_xml_and_uri(self, instance, mock_serial,
|
def _check_xml_and_uri(self, instance, mock_serial,
|
||||||
expect_ramdisk=False, expect_kernel=False,
|
expect_ramdisk=False, expect_kernel=False,
|
||||||
rescue=None, expect_xen_hvm=False, xen_only=False):
|
rescue=None):
|
||||||
mock_serial.return_value = "cef19ce0-0ca2-11df-855d-b19fbce37686"
|
mock_serial.return_value = "cef19ce0-0ca2-11df-855d-b19fbce37686"
|
||||||
instance_ref = objects.Instance(**instance)
|
instance_ref = objects.Instance(**instance)
|
||||||
image_meta = objects.ImageMeta.from_dict(self.test_image_meta)
|
image_meta = objects.ImageMeta.from_dict(self.test_image_meta)
|
||||||
|
|
||||||
xen_vm_mode = fields.VMMode.XEN
|
|
||||||
if expect_xen_hvm:
|
|
||||||
xen_vm_mode = fields.VMMode.HVM
|
|
||||||
|
|
||||||
type_uri_map = {'qemu': ('qemu:///system',
|
type_uri_map = {'qemu': ('qemu:///system',
|
||||||
[(lambda t: t.find('.').get('type'), 'qemu'),
|
[(lambda t: t.find('.').get('type'), 'qemu'),
|
||||||
(lambda t: t.find('./os/type').text,
|
(lambda t: t.find('./os/type').text,
|
||||||
@ -10539,15 +10313,9 @@ class LibvirtConnTestCase(test.NoDBTestCase,
|
|||||||
(lambda t: t.find('./os/type').text,
|
(lambda t: t.find('./os/type').text,
|
||||||
fields.VMMode.HVM),
|
fields.VMMode.HVM),
|
||||||
(lambda t: t.find('./devices/emulator'), None)]),
|
(lambda t: t.find('./devices/emulator'), None)]),
|
||||||
'xen': ('xen:///',
|
}
|
||||||
[(lambda t: t.find('.').get('type'), 'xen'),
|
|
||||||
(lambda t: t.find('./os/type').text,
|
|
||||||
xen_vm_mode)])}
|
|
||||||
|
|
||||||
if expect_xen_hvm or xen_only:
|
hypervisors_to_check = ['qemu', 'kvm']
|
||||||
hypervisors_to_check = ['xen']
|
|
||||||
else:
|
|
||||||
hypervisors_to_check = ['qemu', 'kvm', 'xen']
|
|
||||||
|
|
||||||
for hypervisor_type in hypervisors_to_check:
|
for hypervisor_type in hypervisors_to_check:
|
||||||
check_list = type_uri_map[hypervisor_type][1]
|
check_list = type_uri_map[hypervisor_type][1]
|
||||||
@ -11036,13 +10804,6 @@ class LibvirtConnTestCase(test.NoDBTestCase,
|
|||||||
instance)
|
instance)
|
||||||
self.assertIsNone(ret)
|
self.assertIsNone(ret)
|
||||||
|
|
||||||
def test_compare_cpu_virt_type_xen(self):
|
|
||||||
instance = objects.Instance(**self.test_instance)
|
|
||||||
self.flags(virt_type='xen', group='libvirt')
|
|
||||||
conn = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), False)
|
|
||||||
ret = conn._compare_cpu(None, None, instance)
|
|
||||||
self.assertIsNone(ret)
|
|
||||||
|
|
||||||
def test_compare_cpu_virt_type_qemu(self):
|
def test_compare_cpu_virt_type_qemu(self):
|
||||||
instance = objects.Instance(**self.test_instance)
|
instance = objects.Instance(**self.test_instance)
|
||||||
self.flags(virt_type='qemu', group='libvirt')
|
self.flags(virt_type='qemu', group='libvirt')
|
||||||
@ -11942,7 +11703,6 @@ class LibvirtConnTestCase(test.NoDBTestCase,
|
|||||||
)
|
)
|
||||||
|
|
||||||
hypervisor_uri_map = (
|
hypervisor_uri_map = (
|
||||||
('xen', 'xenmigr://%s/system'),
|
|
||||||
('kvm', 'qemu+tcp://%s/system'),
|
('kvm', 'qemu+tcp://%s/system'),
|
||||||
('qemu', 'qemu+tcp://%s/system'),
|
('qemu', 'qemu+tcp://%s/system'),
|
||||||
('parallels', 'parallels+tcp://%s/system'),
|
('parallels', 'parallels+tcp://%s/system'),
|
||||||
@ -11966,7 +11726,6 @@ class LibvirtConnTestCase(test.NoDBTestCase,
|
|||||||
addresses = ('::1', '0:0:0:0:0:0:0:1', u'::1')
|
addresses = ('::1', '0:0:0:0:0:0:0:1', u'::1')
|
||||||
|
|
||||||
hypervisor_uri_map = (
|
hypervisor_uri_map = (
|
||||||
('xen', 'xenmigr://[%s]/system'),
|
|
||||||
('kvm', 'qemu+tcp://[%s]/system'),
|
('kvm', 'qemu+tcp://[%s]/system'),
|
||||||
('qemu', 'qemu+tcp://[%s]/system'),
|
('qemu', 'qemu+tcp://[%s]/system'),
|
||||||
('parallels', 'parallels+tcp://[%s]/system'),
|
('parallels', 'parallels+tcp://[%s]/system'),
|
@@ -11988,14 +11747,13 @@ class LibvirtConnTestCase(test.NoDBTestCase,

     def test_live_migration_uri_forced(self):
         dest = 'destination'
-        for hyperv in ('kvm', 'xen'):
-            self.flags(virt_type=hyperv, group='libvirt')
+        self.flags(virt_type='kvm', group='libvirt')

         forced_uri = 'foo://%s/bar'
         self.flags(live_migration_uri=forced_uri, group='libvirt')

         drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), False)
         self.assertEqual(forced_uri % dest, drvr._live_migration_uri(dest))

     def test_live_migration_scheme(self):
         self.flags(live_migration_scheme='ssh', group='libvirt')
@@ -12014,7 +11772,6 @@ class LibvirtConnTestCase(test.NoDBTestCase,

     def test_migrate_uri(self):
         hypervisor_uri_map = (
-            ('xen', None),
             ('kvm', 'tcp://%s'),
             ('qemu', 'tcp://%s'),
         )
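The hunks above exercise Nova's live-migration URI selection with the `xen` entry dropped. As a rough standalone sketch (the function and map names here are illustrative stand-ins for the driver internals, not Nova's actual code), the post-change behavior the tests assert is:

```python
# Illustrative sketch: only these virt types keep a default live-migration
# URI template; a configured live_migration_uri (the "forced" URI in the
# test above) overrides the per-virt-type map.
URI_MAP = {
    'kvm': 'qemu+tcp://%s/system',
    'qemu': 'qemu+tcp://%s/system',
    'parallels': 'parallels+tcp://%s/system',
}


def live_migration_uri(virt_type, dest, forced_uri=None):
    # The forced template wins; otherwise fall back to the map.
    template = forced_uri or URI_MAP.get(virt_type)
    if template is None:
        raise ValueError('live migration not supported for %s' % virt_type)
    return template % dest
```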
@@ -16188,7 +15945,6 @@ class LibvirtConnTestCase(test.NoDBTestCase,
     @mock.patch('oslo_utils.fileutils.ensure_tree')
     @mock.patch('oslo_service.loopingcall.FixedIntervalLoopingCall')
     @mock.patch('nova.pci.manager.get_instance_pci_devs')
-    @mock.patch('nova.virt.libvirt.LibvirtDriver._prepare_pci_devices_for_use')
     @mock.patch('nova.virt.libvirt.LibvirtDriver._create_guest_with_network')
     @mock.patch('nova.virt.libvirt.LibvirtDriver._create_images_and_backing')
     @mock.patch('nova.virt.libvirt.LibvirtDriver.'
@@ -16204,7 +15960,7 @@ class LibvirtConnTestCase(test.NoDBTestCase,
             mock_get_mdev, mock_destroy, mock_get_disk_info,
             mock_get_guest_config, mock_get_instance_path,
             mock_get_instance_disk_info, mock_create_images_and_backing,
-            mock_create_domand_and_network, mock_prepare_pci_devices_for_use,
+            mock_create_domand_and_network,
             mock_get_instance_pci_devs, mock_looping_call, mock_ensure_tree):
         """For a hard reboot, we shouldn't need an additional call to glance
         to get the image metadata.
@@ -17687,21 +17443,6 @@ class LibvirtConnTestCase(test.NoDBTestCase,
         self.assertIsNone(drvr._get_host_numa_topology())
         self.assertEqual(2, get_caps.call_count)

-    @mock.patch.object(fakelibvirt.Connection, 'getType')
-    @mock.patch.object(fakelibvirt.Connection, 'getVersion')
-    @mock.patch.object(fakelibvirt.Connection, 'getLibVersion')
-    def test_get_host_numa_topology_xen(self, mock_lib_version,
-                                        mock_version, mock_type):
-        self.flags(virt_type='xen', group='libvirt')
-        drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), False)
-
-        mock_lib_version.return_value = versionutils.convert_version_to_int(
-            libvirt_driver.MIN_LIBVIRT_VERSION)
-        mock_version.return_value = versionutils.convert_version_to_int(
-            libvirt_driver.MIN_QEMU_VERSION)
-        mock_type.return_value = host.HV_DRIVER_XEN
-        self.assertIsNone(drvr._get_host_numa_topology())
-
     @mock.patch.object(host.Host, 'has_min_version', return_value=True)
     def test_get_host_numa_topology_missing_network_metadata(self,
             mock_version):
@@ -18371,7 +18112,7 @@ class LibvirtConnTestCase(test.NoDBTestCase,
         drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), False)

         self.assertEqual(6, drvr._get_vcpu_used())
-        mock_list.assert_called_with(only_guests=True, only_running=True)
+        mock_list.assert_called_with(only_running=True)

     def test_get_instance_capabilities(self):
         drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), True)
@@ -23905,11 +23646,7 @@ class LibvirtDriverTestCase(test.NoDBTestCase, TraitsComparisonMixin):
             'hw_rescue_bus': 'virtio'}}
         rescue_image_meta = objects.ImageMeta.from_dict(rescue_image_meta_dict)

-        # Assert that InstanceNotRescuable is raised for xen and lxc virt_types
-        self.flags(virt_type='xen', group='libvirt')
-        self.assertRaises(exception.InstanceNotRescuable, self.drvr.rescue,
-                          self.context, instance, network_info,
-                          rescue_image_meta, None, None)
-
+        # Assert that InstanceNotRescuable is raised for lxc virt_type
         self.flags(virt_type='lxc', group='libvirt')
         self.assertRaises(exception.InstanceNotRescuable, self.drvr.rescue,
@@ -25529,7 +25266,7 @@ class LibvirtDriverTestCase(test.NoDBTestCase, TraitsComparisonMixin):
         all_traits = set(ot.get_traits('COMPUTE_STORAGE_BUS_'))
         # ensure each virt type reports the correct bus types
         for virt_type, buses in blockinfo.SUPPORTED_DEVICE_BUSES.items():
-            if virt_type in ('qemu', 'kvm', 'uml'):
+            if virt_type in ('qemu', 'kvm', 'uml', 'xen'):
                 continue

             self.flags(virt_type=virt_type, group='libvirt')
@@ -33,7 +33,6 @@ from nova.tests.unit.virt.libvirt import fake_libvirt_data
 from nova.tests.unit.virt.libvirt import fakelibvirt
 from nova.virt import event
 from nova.virt.libvirt import config as vconfig
-from nova.virt.libvirt import driver as libvirt_driver
 from nova.virt.libvirt import guest as libvirt_guest
 from nova.virt.libvirt import host

@@ -270,20 +269,18 @@ class HostTestCase(test.NoDBTestCase):
         ev = event.LifecycleEvent(
             "cef19ce0-0ca2-11df-855d-b19fbce37686",
             event.EVENT_LIFECYCLE_STOPPED)
-        for uri in ("qemu:///system", "xen:///"):
-            spawn_after_mock = mock.Mock()
-            greenthread.spawn_after = spawn_after_mock
-            hostimpl = host.Host(uri,
-                                 lifecycle_event_handler=lambda e: None)
-            hostimpl._event_emit_delayed(ev)
-            spawn_after_mock.assert_called_once_with(
-                15, hostimpl._event_emit, ev)
+        spawn_after_mock = mock.Mock()
+        greenthread.spawn_after = spawn_after_mock
+        hostimpl = host.Host(
+            'qemu:///system', lifecycle_event_handler=lambda e: None)
+        hostimpl._event_emit_delayed(ev)
+        spawn_after_mock.assert_called_once_with(
+            15, hostimpl._event_emit, ev)

     @mock.patch.object(greenthread, 'spawn_after')
     def test_event_emit_delayed_call_delayed_pending(self, spawn_after_mock):
-        hostimpl = host.Host("xen:///",
-                             lifecycle_event_handler=lambda e: None)
+        hostimpl = host.Host(
+            'qemu:///system', lifecycle_event_handler=lambda e: None)

         uuid = "cef19ce0-0ca2-11df-855d-b19fbce37686"
         gt_mock = mock.Mock()
         hostimpl._events_delayed[uuid] = gt_mock
@@ -294,8 +291,8 @@ class HostTestCase(test.NoDBTestCase):
         self.assertTrue(spawn_after_mock.called)

     def test_event_delayed_cleanup(self):
-        hostimpl = host.Host("xen:///",
-                             lifecycle_event_handler=lambda e: None)
+        hostimpl = host.Host(
+            'qemu:///system', lifecycle_event_handler=lambda e: None)
         uuid = "cef19ce0-0ca2-11df-855d-b19fbce37686"
         ev = event.LifecycleEvent(
             uuid, event.EVENT_LIFECYCLE_STARTED)
@@ -517,14 +514,13 @@ class HostTestCase(test.NoDBTestCase):

     @mock.patch.object(fakelibvirt.Connection, "listAllDomains")
     def test_list_instance_domains(self, mock_list_all):
-        vm0 = FakeVirtDomain(id=0, name="Domain-0")  # Xen dom-0
         vm1 = FakeVirtDomain(id=3, name="instance00000001")
         vm2 = FakeVirtDomain(id=17, name="instance00000002")
         vm3 = FakeVirtDomain(name="instance00000003")
         vm4 = FakeVirtDomain(name="instance00000004")

         def fake_list_all(flags):
-            vms = [vm0]
+            vms = []
             if flags & fakelibvirt.VIR_CONNECT_LIST_DOMAINS_ACTIVE:
                 vms.extend([vm1, vm2])
             if flags & fakelibvirt.VIR_CONNECT_LIST_DOMAINS_INACTIVE:
@@ -556,26 +552,13 @@ class HostTestCase(test.NoDBTestCase):
         self.assertEqual(doms[2].name(), vm3.name())
         self.assertEqual(doms[3].name(), vm4.name())

-        doms = self.host.list_instance_domains(only_guests=False)
-
-        mock_list_all.assert_called_once_with(
-            fakelibvirt.VIR_CONNECT_LIST_DOMAINS_ACTIVE)
-        mock_list_all.reset_mock()
-
-        self.assertEqual(len(doms), 3)
-        self.assertEqual(doms[0].name(), vm0.name())
-        self.assertEqual(doms[1].name(), vm1.name())
-        self.assertEqual(doms[2].name(), vm2.name())
-
     @mock.patch.object(host.Host, "list_instance_domains")
     def test_list_guests(self, mock_list_domains):
         dom0 = mock.Mock(spec=fakelibvirt.virDomain)
         dom1 = mock.Mock(spec=fakelibvirt.virDomain)
-        mock_list_domains.return_value = [
-            dom0, dom1]
-        result = self.host.list_guests(True, False)
-        mock_list_domains.assert_called_once_with(
-            only_running=True, only_guests=False)
+        mock_list_domains.return_value = [dom0, dom1]
+        result = self.host.list_guests(True)
+        mock_list_domains.assert_called_once_with(only_running=True)
         self.assertEqual(dom0, result[0]._domain)
         self.assertEqual(dom1, result[1]._domain)

@@ -964,60 +947,6 @@ Active: 8381604 kB

         self.assertEqual(6866, self.host.get_memory_mb_used())

-    def test_sum_domain_memory_mb_xen(self):
-        class DiagFakeDomain(object):
-            def __init__(self, id, memmb):
-                self.id = id
-                self.memmb = memmb
-
-            def info(self):
-                return [0, 0, self.memmb * 1024]
-
-            def ID(self):
-                return self.id
-
-            def name(self):
-                return "instance000001"
-
-            def UUIDString(self):
-                return uuids.fake
-
-        m = mock.mock_open(read_data="""
-MemTotal:       16194180 kB
-MemFree:          233092 kB
-MemAvailable:    8892356 kB
-Buffers:          567708 kB
-Cached:          8362404 kB
-SwapCached:            0 kB
-Active:          8381604 kB
-""")
-
-        with test.nested(
-                mock.patch('builtins.open', m, create=True),
-                mock.patch.object(host.Host,
-                                  "list_guests"),
-                mock.patch.object(libvirt_driver.LibvirtDriver,
-                                  "_conn"),
-        ) as (mock_file, mock_list, mock_conn):
-            mock_list.return_value = [
-                libvirt_guest.Guest(DiagFakeDomain(0, 15814)),
-                libvirt_guest.Guest(DiagFakeDomain(1, 750)),
-                libvirt_guest.Guest(DiagFakeDomain(2, 1042))]
-            mock_conn.getInfo.return_value = [
-                obj_fields.Architecture.X86_64, 15814, 8, 1208, 1, 1, 4, 2]
-
-            self.assertEqual(8657, self.host._sum_domain_memory_mb())
-            mock_list.assert_called_with(only_guests=False)
-
-    def test_get_memory_used_xen(self):
-        self.flags(virt_type='xen', group='libvirt')
-        with mock.patch.object(
-            self.host, "_sum_domain_memory_mb"
-        ) as mock_sumDomainMemory:
-            mock_sumDomainMemory.return_value = 8192
-            self.assertEqual(8192, self.host.get_memory_mb_used())
-            mock_sumDomainMemory.assert_called_once_with(include_host=True)
-
     def test_sum_domain_memory_mb_file_backed(self):
         class DiagFakeDomain(object):
             def __init__(self, id, memmb):
@@ -1043,8 +972,7 @@ Active: 8381604 kB
                 libvirt_guest.Guest(DiagFakeDomain(2, 1024)),
                 libvirt_guest.Guest(DiagFakeDomain(3, 1024))]

-            self.assertEqual(8192,
-                self.host._sum_domain_memory_mb(include_host=False))
+            self.assertEqual(8192, self.host._sum_domain_memory_mb())

     def test_get_memory_used_file_backed(self):
         self.flags(file_backed_memory=1048576,
@@ -1055,7 +983,7 @@ Active: 8381604 kB
         ) as mock_sumDomainMemory:
             mock_sumDomainMemory.return_value = 8192
             self.assertEqual(8192, self.host.get_memory_mb_used())
-            mock_sumDomainMemory.assert_called_once_with(include_host=False)
+            mock_sumDomainMemory.assert_called_once_with()

     def test_get_cpu_stats(self):
         stats = self.host.get_cpu_stats()
@@ -22,7 +22,6 @@ import tempfile
 import ddt
 import mock
 import os_traits
-from oslo_concurrency import processutils
 from oslo_config import cfg
 from oslo_utils import fileutils
 from oslo_utils.fixture import uuidsentinel as uuids
@@ -158,76 +157,6 @@ class LibvirtUtilsTestCase(test.NoDBTestCase):
             mock.call('5G', 'expanded', expected_fs_type,
                       '/some/path/root.hds')])

-    def test_pick_disk_driver_name(self):
-        type_map = {'kvm': ([True, 'qemu'], [False, 'qemu'], [None, 'qemu']),
-                    'qemu': ([True, 'qemu'], [False, 'qemu'], [None, 'qemu']),
-                    'lxc': ([True, None], [False, None], [None, None])}
-        # NOTE(aloga): Xen is tested in test_pick_disk_driver_name_xen
-
-        version = 1005001
-        for (virt_type, checks) in type_map.items():
-            self.flags(virt_type=virt_type, group='libvirt')
-            for (is_block_dev, expected_result) in checks:
-                result = libvirt_utils.pick_disk_driver_name(version,
-                                                             is_block_dev)
-                self.assertEqual(result, expected_result)
-
-    @mock.patch('nova.privsep.libvirt.xend_probe')
-    @mock.patch('oslo_concurrency.processutils.execute')
-    def test_pick_disk_driver_name_xen(self, mock_execute, mock_xend_probe):
-
-        def execute_side_effect(*args, **kwargs):
-            if args == ('tap-ctl', 'check'):
-                if mock_execute.blktap is True:
-                    return ('ok\n', '')
-                elif mock_execute.blktap is False:
-                    return ('some error\n', '')
-                else:
-                    raise OSError(2, "No such file or directory")
-            raise Exception('Unexpected call')
-        mock_execute.side_effect = execute_side_effect
-
-        def xend_probe_side_effect():
-            if mock_execute.xend is True:
-                return ('', '')
-            elif mock_execute.xend is False:
-                raise processutils.ProcessExecutionError("error")
-            else:
-                raise OSError(2, "No such file or directory")
-        mock_xend_probe.side_effect = xend_probe_side_effect
-
-        self.flags(virt_type="xen", group='libvirt')
-        versions = [4000000, 4001000, 4002000, 4003000, 4005000]
-        for version in versions:
-            # block dev
-            result = libvirt_utils.pick_disk_driver_name(version, True)
-            self.assertEqual(result, "phy")
-            self.assertFalse(mock_execute.called)
-            mock_execute.reset_mock()
-            # file dev
-            for blktap in True, False, None:
-                mock_execute.blktap = blktap
-                for xend in True, False, None:
-                    mock_execute.xend = xend
-                    result = libvirt_utils.pick_disk_driver_name(version,
-                                                                 False)
-                    # qemu backend supported only by libxl which is
-                    # production since xen 4.2. libvirt use libxl if
-                    # xend service not started.
-                    if version >= 4002000 and xend is not True:
-                        self.assertEqual(result, 'qemu')
-                    elif blktap:
-                        if version == 4000000:
-                            self.assertEqual(result, 'tap')
-                        else:
-                            self.assertEqual(result, 'tap2')
-                    else:
-                        self.assertEqual(result, 'file')
-                    # default is_block_dev False
-                    self.assertEqual(result,
-                                     libvirt_utils.pick_disk_driver_name(version))
-            mock_execute.reset_mock()
-
     def test_copy_image(self):
         dst_fd, dst_path = tempfile.mkstemp()
         try:
@@ -906,15 +906,6 @@ class LibvirtVifTestCase(test.NoDBTestCase):

         self._assertModel(xml, network_model.VIF_MODEL_VIRTIO, "qemu")

-    def test_model_xen(self):
-        self.flags(use_virtio_for_bridges=True,
-                   virt_type='xen',
-                   group='libvirt')
-
-        d = vif.LibvirtGenericVIFDriver()
-        xml = self._get_instance_xml(d, self.vif_bridge)
-        self._assertModel(xml)
-
     def test_generic_driver_none(self):
         d = vif.LibvirtGenericVIFDriver()
         self.assertRaises(exception.NovaException,
@@ -1094,7 +1085,7 @@ class LibvirtVifTestCase(test.NoDBTestCase):
             self, mock_create_tap_dev, mock_set_mtu, mock_device_exists):

         self.flags(use_virtio_for_bridges=True,
-                   virt_type='xen',
+                   virt_type='lxc',
                    group='libvirt')

         d1 = vif.LibvirtGenericVIFDriver()
@@ -98,7 +98,7 @@ class HostOps(object):

         # NOTE(claudiub): The hypervisor_version will be stored in the database
         # as an Integer and it will be used by the scheduler, if required by
-        # the image property 'hypervisor_version_requires'.
+        # the image property 'img_hv_requested_version'.
         # The hypervisor_version will then be converted back to a version
         # by splitting the int in groups of 3 digits.
         # E.g.: hypervisor_version 6003 is converted to '6.3'.
@@ -59,7 +59,7 @@ variables / types used

 * 'disk_bus': the guest bus type ('ide', 'virtio', 'scsi', etc)

-* 'disk_dev': the device name 'vda', 'hdc', 'sdf', 'xvde' etc
+* 'disk_dev': the device name 'vda', 'hdc', 'sdf', etc

 * 'device_type': type of device eg 'disk', 'cdrom', 'floppy'

@@ -93,12 +93,12 @@ BOOT_DEV_FOR_TYPE = {'disk': 'hd', 'cdrom': 'cdrom', 'floppy': 'fd'}
 SUPPORTED_DEVICE_BUSES = {
     'qemu': ['virtio', 'scsi', 'ide', 'usb', 'fdc', 'sata'],
     'kvm': ['virtio', 'scsi', 'ide', 'usb', 'fdc', 'sata'],
-    'xen': ['xen', 'ide'],
-    # we no longer support UML, but we keep track of its bus types so we can
-    # reject them for other virt types
-    'uml': ['uml'],
     'lxc': ['lxc'],
     'parallels': ['ide', 'scsi'],
+    # we no longer support UML or Xen, but we keep track of their bus types so
+    # we can reject them for other virt types
+    'xen': ['xen', 'ide'],
+    'uml': ['uml'],
 }
 SUPPORTED_DEVICE_TYPES = ('disk', 'cdrom', 'floppy', 'lun')

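The reordered table keeps the dead `xen` and `uml` entries only so their bus types can still be rejected. A minimal self-contained sketch of how such a lookup is typically consumed (the helper name mirrors blockinfo's `is_disk_bus_valid_for_virt`, but this is a simplified stand-in, not Nova's implementation):

```python
# Table as shown in the diff: supported virt types first, then the
# unsupported ones retained purely so their buses can be rejected.
SUPPORTED_DEVICE_BUSES = {
    'qemu': ['virtio', 'scsi', 'ide', 'usb', 'fdc', 'sata'],
    'kvm': ['virtio', 'scsi', 'ide', 'usb', 'fdc', 'sata'],
    'lxc': ['lxc'],
    'parallels': ['ide', 'scsi'],
    'xen': ['xen', 'ide'],
    'uml': ['uml'],
}


def is_disk_bus_valid_for_virt(virt_type, disk_bus):
    # A bus is valid only if the configured virt type lists it, so a
    # request for the 'xen' bus under kvm is rejected.
    return disk_bus in SUPPORTED_DEVICE_BUSES.get(virt_type, [])
```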
@@ -138,8 +138,6 @@ def get_dev_prefix_for_disk_bus(disk_bus):
         return "hd"
     elif disk_bus == "virtio":
         return "vd"
-    elif disk_bus == "xen":
-        return "xvd"
     elif disk_bus == "scsi":
         return "sd"
     elif disk_bus == "usb":
@@ -244,15 +242,7 @@ def get_disk_bus_for_device_type(instance,
         return disk_bus

     # Otherwise pick a hypervisor default disk bus
-    if virt_type == "lxc":
-        return "lxc"
-    elif virt_type == "xen":
-        guest_vm_mode = obj_fields.VMMode.get_from_instance(instance)
-        if guest_vm_mode == obj_fields.VMMode.HVM:
-            return "ide"
-        else:
-            return "xen"
-    elif virt_type in ("qemu", "kvm"):
+    if virt_type in ("qemu", "kvm"):
         if device_type == "cdrom":
             guestarch = libvirt_utils.get_arch(image_meta)
             if guestarch in (
@@ -278,6 +268,8 @@ def get_disk_bus_for_device_type(instance,
             return "virtio"
         elif device_type == "floppy":
             return "fdc"
+    elif virt_type == "lxc":
+        return "lxc"
     elif virt_type == "parallels":
         if device_type == "cdrom":
             return "ide"
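With the Xen and vm_mode branches gone, default-bus selection above collapses to a qemu/kvm path plus the lxc fallback. A hedged, self-contained approximation of the post-change flow (the arch-specific cdrom handling visible in the hunk is simplified to plain `ide`, and the parallels branch is omitted):

```python
def default_disk_bus(virt_type, device_type='disk'):
    """Pick a hypervisor default disk bus (simplified approximation)."""
    if virt_type in ('qemu', 'kvm'):
        if device_type == 'cdrom':
            return 'ide'  # the real code consults the guest architecture
        if device_type == 'floppy':
            return 'fdc'
        return 'virtio'
    if virt_type == 'lxc':
        return 'lxc'
    raise ValueError('no default bus for virt type %s' % virt_type)
```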
@@ -293,12 +285,11 @@ def get_disk_bus_for_disk_dev(virt_type, disk_dev):
 def get_disk_bus_for_disk_dev(virt_type, disk_dev):
     """Determine the disk bus for a disk device.

-    Given a disk device like 'hda', 'sdf', 'xvdb', etc
-    guess what the most appropriate disk bus is for
-    the currently configured virtualization technology
+    Given a disk device like 'hda' or 'sdf', guess what the most appropriate
+    disk bus is for the currently configured virtualization technology

-    Returns the disk bus, or raises an Exception if
-    the disk device prefix is unknown.
+    :return: The preferred disk bus for the given disk prefix.
+    :raises: InternalError if the disk device prefix is unknown.
     """

     if disk_dev.startswith('hd'):
@@ -307,16 +298,11 @@ def get_disk_bus_for_disk_dev(virt_type, disk_dev):
         # Reverse mapping 'sd' is not reliable
         # there are many possible mappings. So
         # this picks the most likely mappings
-        if virt_type == "xen":
-            return "xen"
-        else:
-            return "scsi"
+        return "scsi"
     elif disk_dev.startswith('vd'):
         return "virtio"
     elif disk_dev.startswith('fd'):
         return "fdc"
-    elif disk_dev.startswith('xvd'):
-        return "xen"
     else:
         msg = _("Unable to determine disk bus for '%s'") % disk_dev[:1]
         raise exception.InternalError(msg)
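Once the Xen branches (`xvd` devices and the `sd`-under-xen special case) are removed, the reverse mapping above reduces to a plain prefix table. A standalone sketch of the post-change behavior (the dict-based rewrite is illustrative; Nova keeps the if/elif chain):

```python
# Prefix -> bus mapping left after dropping the Xen branches.  Checked in
# order, so 'hd' wins before 'sd' for names like 'hdc'.
_PREFIX_TO_BUS = {
    'hd': 'ide',
    'sd': 'scsi',   # reverse mapping for 'sd' is ambiguous; most likely bus
    'vd': 'virtio',
    'fd': 'fdc',
}


def get_disk_bus_for_disk_dev(virt_type, disk_dev):
    """Guess the disk bus from a device name like 'hda' or 'sdf'."""
    for prefix, bus in _PREFIX_TO_BUS.items():
        if disk_dev.startswith(prefix):
            return bus
    # 'xvd' names no longer map to anything and now fail like any other
    # unknown prefix.
    raise ValueError("Unable to determine disk bus for '%s'" % disk_dev[:1])
```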
@@ -455,16 +441,11 @@ def get_root_info(instance, virt_type, image_meta, root_bdm,

     if not get_device_name(root_bdm) and root_device_name:
         root_bdm = root_bdm.copy()
-        # it can happen, eg for libvirt+Xen, that the root_device_name is
-        # incompatible with the disk bus. In that case fix the root_device_name
-        if virt_type == 'xen':
-            dev_prefix = get_dev_prefix_for_disk_bus(disk_bus)
-            if not root_device_name.startswith(dev_prefix):
-                letter = block_device.get_device_letter(root_device_name)
-                root_device_name = '%s%s' % (dev_prefix, letter)
         root_bdm['device_name'] = root_device_name
-    return get_info_from_bdm(instance, virt_type, image_meta,
-                             root_bdm, {}, disk_bus)
+    return get_info_from_bdm(
+        instance, virt_type, image_meta, root_bdm, {}, disk_bus,
+    )


 def default_device_names(virt_type, context, instance, block_device_info,
@@ -2608,13 +2608,6 @@ class LibvirtConfigGuestFeatureAPIC(LibvirtConfigGuestFeature):
                                                          **kwargs)


-class LibvirtConfigGuestFeaturePAE(LibvirtConfigGuestFeature):
-
-    def __init__(self, **kwargs):
-        super(LibvirtConfigGuestFeaturePAE, self).__init__("pae",
-                                                           **kwargs)
-
-
 class LibvirtConfigGuestFeatureKvmHidden(LibvirtConfigGuestFeature):

     def __init__(self, **kwargs):
@@ -21,8 +21,7 @@
 """
 A connection to a hypervisor through libvirt.

-Supports KVM, LXC, QEMU, XEN and Parallels.
+Supports KVM, LXC, QEMU, and Parallels.

 """

 import binascii
@@ -924,10 +923,8 @@ class LibvirtDriver(driver.ComputeDriver):

         migration_flags |= libvirt.VIR_MIGRATE_LIVE

-        # Adding p2p flag only if xen is not in use, because xen does not
-        # support p2p migrations
-        if CONF.libvirt.virt_type != 'xen':
-            migration_flags |= libvirt.VIR_MIGRATE_PEER2PEER
+        # Enable support for p2p migrations
+        migration_flags |= libvirt.VIR_MIGRATE_PEER2PEER

         # Adding VIR_MIGRATE_UNDEFINE_SOURCE because, without it, migrated
         # instance will remain defined on the source host
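Since Xen was the only virt type without peer-to-peer migration support, the flag can now be set unconditionally. A compact sketch of the resulting flag assembly (the constants stand in for libvirt's real numeric flag values and are assumptions for this example):

```python
# Illustrative flag constants; libvirt defines the real values.
VIR_MIGRATE_LIVE = 1 << 0
VIR_MIGRATE_PEER2PEER = 1 << 1


def build_migration_flags():
    flags = 0
    flags |= VIR_MIGRATE_LIVE
    # p2p is now always set: the conditional existed only because Xen
    # did not support p2p migrations, and Xen support is gone.
    flags |= VIR_MIGRATE_PEER2PEER
    return flags
```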
@@ -1025,9 +1022,7 @@ class LibvirtDriver(driver.ComputeDriver):

     @staticmethod
     def _uri():
-        if CONF.libvirt.virt_type == 'xen':
-            uri = CONF.libvirt.connection_uri or 'xen:///'
-        elif CONF.libvirt.virt_type == 'lxc':
+        if CONF.libvirt.virt_type == 'lxc':
             uri = CONF.libvirt.connection_uri or 'lxc:///'
         elif CONF.libvirt.virt_type == 'parallels':
             uri = CONF.libvirt.connection_uri or 'parallels:///system'
@@ -1040,7 +1035,6 @@ class LibvirtDriver(driver.ComputeDriver):
         uris = {
             'kvm': 'qemu+%(scheme)s://%(dest)s/system',
             'qemu': 'qemu+%(scheme)s://%(dest)s/system',
-            'xen': 'xenmigr://%(dest)s/system',
             'parallels': 'parallels+tcp://%(dest)s/system',
         }
         dest = oslo_netutils.escape_ipv6(dest)
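The trimmed `_uri` above now only special-cases lxc and parallels. A standalone sketch of the selection logic (the `qemu:///system` fallback for kvm/qemu is assumed from context, since the hunk is truncated before that branch):

```python
def libvirt_uri(virt_type, connection_uri=None):
    """Default libvirt connection URI per virt type (sketch of _uri)."""
    if virt_type == 'lxc':
        return connection_uri or 'lxc:///'
    if virt_type == 'parallels':
        return connection_uri or 'parallels:///system'
    # kvm/qemu fall through to the qemu system URI (assumed default);
    # the 'xen:///' branch no longer exists.
    return connection_uri or 'qemu:///system'
```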
@@ -3230,8 +3224,6 @@
         # NOTE(vish): This actually could take slightly longer than the
         #             FLAG defines depending on how long the get_info
         #             call takes to return.
-        self._prepare_pci_devices_for_use(
-            pci_manager.get_instance_pci_devs(instance, 'all'))
         for x in range(CONF.libvirt.wait_soft_reboot_seconds):
             guest = self._host.get_guest(instance)

@@ -3324,8 +3316,6 @@
         self._create_guest_with_network(
             context, xml, instance, network_info, block_device_info,
             vifs_already_plugged=True)
-        self._prepare_pci_devices_for_use(
-            pci_manager.get_instance_pci_devs(instance, 'all'))

         def _wait_for_reboot():
             """Called at an interval until the VM is running again."""

@@ -3565,14 +3555,15 @@
         if hardware.check_hw_rescue_props(image_meta):
             LOG.info("Attempting a stable device rescue", instance=instance)
             # NOTE(lyarwood): Stable device rescue is not supported when using
-            # the LXC and Xen virt_types as they do not support the required
+            # the LXC virt_type as it does not support the required
             # <boot order=''> definitions allowing an instance to boot from the
             # rescue device added as a final device to the domain.
-            if virt_type in ('lxc', 'xen'):
-                reason = ("Stable device rescue is not supported by virt_type "
-                          "%s", virt_type)
-                raise exception.InstanceNotRescuable(instance_id=instance.uuid,
-                                                     reason=reason)
+            if virt_type == 'lxc':
+                reason = _(
+                    "Stable device rescue is not supported by virt_type '%s'"
+                )
+                raise exception.InstanceNotRescuable(
+                    instance_id=instance.uuid, reason=reason % virt_type)

             # NOTE(lyarwood): Stable device rescue provides the original disk
             # mapping of the instance with the rescue device appened to the
             # end. As a result we need to provide the original image_meta, the
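The tightened rescue guard can be exercised in isolation. This sketch uses a stand-in exception class, since `nova.exception.InstanceNotRescuable` carries more machinery than the check itself needs:

```python
class InstanceNotRescuable(Exception):
    """Stand-in for nova.exception.InstanceNotRescuable."""


def check_stable_rescue_supported(virt_type, instance_uuid):
    # Mirrors the tightened guard: only 'lxc' is rejected now that the
    # ('lxc', 'xen') membership test is gone.
    if virt_type == 'lxc':
        reason = "Stable device rescue is not supported by virt_type '%s'"
        raise InstanceNotRescuable(
            "Instance %s is not rescuable: %s"
            % (instance_uuid, reason % virt_type))
```

Note the side fix folded into the hunk: the old code built `reason` as a tuple (`("...%s", virt_type)`) instead of interpolating, which the rewrite corrects with `reason % virt_type`.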
@@ -3781,10 +3772,10 @@

         # NOTE(markus_z): The virt_types kvm and qemu are the only ones
         # which create a dedicated file device for the console logging.
-        # Other virt_types like xen, lxc, and parallels depend on the
-        # flush of that pty device into the "console.log" file to ensure
-        # that a series of "get_console_output" calls return the complete
-        # content even after rebooting a guest.
+        # Other virt_types like lxc and parallels depend on the flush of
+        # that PTY device into the "console.log" file to ensure that a
+        # series of "get_console_output" calls return the complete content
+        # even after rebooting a guest.
         nova.privsep.path.writefile(console_log, 'a+', data)

         # set console path to logfile, not to pty device

@@ -4919,7 +4910,7 @@

     def _set_managed_mode(self, pcidev):
         # only kvm support managed mode
-        if CONF.libvirt.virt_type in ('xen', 'parallels',):
+        if CONF.libvirt.virt_type in ('parallels',):
             pcidev.managed = 'no'
         if CONF.libvirt.virt_type in ('kvm', 'qemu'):
             pcidev.managed = 'yes'

@@ -5291,8 +5282,6 @@
         """Returns the guest OS type based on virt type."""
         if virt_type == "lxc":
             ret = fields.VMMode.EXE
-        elif virt_type == "xen":
-            ret = fields.VMMode.XEN
         else:
             ret = fields.VMMode.HVM
         return ret
@@ -5383,19 +5372,11 @@
             flavor.extra_specs.get('hw:hide_hypervisor_id')) or
             image_meta.properties.get('img_hide_hypervisor_id'))

-        if virt_type == "xen":
-            # PAE only makes sense in X86
-            if caps.host.cpu.arch in (fields.Architecture.I686,
-                                      fields.Architecture.X86_64):
-                guest.features.append(vconfig.LibvirtConfigGuestFeaturePAE())
-
-        if (virt_type not in ("lxc", "parallels", "xen") or
-                (virt_type == "xen" and guest.os_type == fields.VMMode.HVM)):
+        if virt_type in ('qemu', 'kvm'):
             guest.features.append(vconfig.LibvirtConfigGuestFeatureACPI())
             guest.features.append(vconfig.LibvirtConfigGuestFeatureAPIC())

-        if (virt_type in ("qemu", "kvm") and
-                os_type == 'windows'):
+        if virt_type in ('qemu', 'kvm') and os_type == 'windows':
             hv = vconfig.LibvirtConfigGuestFeatureHyperV()
             hv.relaxed = True

@@ -5456,9 +5437,7 @@
         # be overridden by the user with image_meta.properties, which
         # is carried out in the next if statement below this one.
         guestarch = libvirt_utils.get_arch(image_meta)
-        if guest.os_type == fields.VMMode.XEN:
-            video.type = 'xen'
-        elif CONF.libvirt.virt_type == 'parallels':
+        if CONF.libvirt.virt_type == 'parallels':
             video.type = 'vga'
         elif guestarch in (fields.Architecture.PPC,
                            fields.Architecture.PPC64,
@@ -5724,12 +5703,7 @@
     def _configure_guest_by_virt_type(self, guest, virt_type, caps, instance,
                                       image_meta, flavor, root_device_name,
                                       sev_enabled):
-        if virt_type == "xen":
-            if guest.os_type == fields.VMMode.HVM:
-                guest.os_loader = CONF.libvirt.xen_hvmloader_path
-            else:
-                guest.os_cmdline = CONSOLE
-        elif virt_type in ("kvm", "qemu"):
+        if virt_type in ("kvm", "qemu"):
             if caps.host.cpu.arch in (fields.Architecture.I686,
                                       fields.Architecture.X86_64):
                 guest.sysinfo = self._get_guest_config_sysinfo(instance)

@@ -6320,8 +6294,10 @@

     @staticmethod
     def _guest_add_spice_channel(guest):
-        if (CONF.spice.enabled and CONF.spice.agent_enabled and
-                guest.virt_type not in ('lxc', 'xen')):
+        if (
+            CONF.spice.enabled and CONF.spice.agent_enabled and
+            guest.virt_type != 'lxc'
+        ):
             channel = vconfig.LibvirtConfigGuestChannel()
             channel.type = 'spicevmc'
             channel.target_name = "com.redhat.spice.0"

@@ -6329,15 +6305,13 @@

     @staticmethod
     def _guest_add_memory_balloon(guest):
-        virt_type = guest.virt_type
-        # Memory balloon device only support 'qemu/kvm' and 'xen' hypervisor
-        if (virt_type in ('xen', 'qemu', 'kvm') and
-                CONF.libvirt.mem_stats_period_seconds > 0):
+        # Memory balloon device only support 'qemu/kvm' hypervisor
+        if (
+            guest.virt_type in ('qemu', 'kvm') and
+            CONF.libvirt.mem_stats_period_seconds > 0
+        ):
             balloon = vconfig.LibvirtConfigMemoryBalloon()
-            if virt_type in ('qemu', 'kvm'):
-                balloon.model = 'virtio'
-            else:
-                balloon.model = 'xen'
+            balloon.model = 'virtio'
             balloon.period = CONF.libvirt.mem_stats_period_seconds
             guest.add_device(balloon)
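The balloon logic now has a single outcome. As a standalone sketch (a hypothetical `balloon_model` helper returning the model name or `None`, rather than building a `LibvirtConfigMemoryBalloon` object):

```python
def balloon_model(virt_type, mem_stats_period_seconds):
    # Only qemu/kvm guests get a memory balloon device now, and it is
    # always virtio; the else-branch that picked the 'xen' model is gone.
    if virt_type in ('qemu', 'kvm') and mem_stats_period_seconds > 0:
        return 'virtio'
    return None
```

Setting `[libvirt] mem_stats_period_seconds = 0` still disables the balloon entirely, exactly as before.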
@@ -6360,13 +6334,12 @@

     def _guest_add_pci_devices(self, guest, instance):
         virt_type = guest.virt_type
-        if virt_type in ('xen', 'qemu', 'kvm'):
+        if virt_type in ('qemu', 'kvm'):
             # Get all generic PCI devices (non-SR-IOV).
             for pci_dev in pci_manager.get_instance_pci_devs(instance):
                 guest.add_device(self._get_guest_pci_device(pci_dev))
         else:
-            # PCI devices is only supported for hypervisors
-            # 'xen', 'qemu' and 'kvm'.
+            # PCI devices is only supported for QEMU/KVM hypervisor
             if pci_manager.get_instance_pci_devs(instance, 'all'):
                 raise exception.PciDeviceUnsupportedHypervisor(type=virt_type)

@@ -6395,7 +6368,7 @@
             graphics.listen = CONF.vnc.server_listen
             guest.add_device(graphics)
             add_video_driver = True
-        if CONF.spice.enabled and guest.virt_type not in ('lxc', 'xen'):
+        if CONF.spice.enabled and guest.virt_type != 'lxc':
             graphics = vconfig.LibvirtConfigGuestGraphics()
             graphics.type = "spice"
             graphics.listen = CONF.spice.server_listen
@@ -77,7 +77,6 @@ CONF = nova.conf.CONF
 # This list is for libvirt hypervisor drivers that need special handling.
 # This is *not* the complete list of supported hypervisor drivers.
 HV_DRIVER_QEMU = "QEMU"
-HV_DRIVER_XEN = "Xen"

 SEV_KERNEL_PARAM_FILE = '/sys/module/kvm_amd/parameters/sev'
@@ -622,31 +621,27 @@ class Host(object):
                    'ex': ex})
            raise exception.InternalError(msg)

-    def list_guests(self, only_running=True, only_guests=True):
+    def list_guests(self, only_running=True):
         """Get a list of Guest objects for nova instances

         :param only_running: True to only return running instances
-        :param only_guests: True to filter out any host domain (eg Dom-0)

         See method "list_instance_domains" for more information.

         :returns: list of Guest objects
         """
-        return [libvirt_guest.Guest(dom) for dom in self.list_instance_domains(
-            only_running=only_running, only_guests=only_guests)]
+        domains = self.list_instance_domains(only_running=only_running)
+        return [libvirt_guest.Guest(dom) for dom in domains]

-    def list_instance_domains(self, only_running=True, only_guests=True):
+    def list_instance_domains(self, only_running=True):
         """Get a list of libvirt.Domain objects for nova instances

         :param only_running: True to only return running instances
-        :param only_guests: True to filter out any host domain (eg Dom-0)

         Query libvirt to a get a list of all libvirt.Domain objects
         that correspond to nova instances. If the only_running parameter
         is true this list will only include active domains, otherwise
-        inactive domains will be included too. If the only_guests parameter
-        is true the list will have any "host" domain (aka Xen Domain-0)
-        filtered out.
+        inactive domains will be included too.

         :returns: list of libvirt.Domain objects
         """
@@ -1073,14 +1066,10 @@
         else:
             return self._get_hardware_info()[1]

-    def _sum_domain_memory_mb(self, include_host=True):
-        """Get the total memory consumed by guest domains
-
-        If include_host is True, subtract available host memory from guest 0
-        to get real used memory within dom0 within xen
-        """
+    def _sum_domain_memory_mb(self):
+        """Get the total memory consumed by guest domains."""
         used = 0
-        for guest in self.list_guests(only_guests=False):
+        for guest in self.list_guests():
             try:
                 # TODO(sahid): Use get_info...
                 dom_mem = int(guest._get_domain_info()[2])

@@ -1089,12 +1078,7 @@
                          " %(uuid)s, exception: %(ex)s",
                          {"uuid": guest.uuid, "ex": e})
                 continue
-            if include_host and guest.id == 0:
-                # Memory usage for the host domain (dom0 in xen) is the
-                # reported memory minus available memory
-                used += (dom_mem - self._get_avail_memory_kb())
-            else:
-                used += dom_mem
+            used += dom_mem
         # Convert it to MB
         return used // units.Ki
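With the dom0 special case removed, `_sum_domain_memory_mb()` reduces to summing each guest's reported memory and converting KiB to MiB. A minimal sketch over raw `virDomainGetInfo`-style tuples, where index 2 is the domain's memory in KiB:

```python
def sum_domain_memory_mb(domain_infos):
    # domain_infos: iterable of libvirt-style info tuples
    # (state, maxMem, memory, nrVirtCpu, cpuTime); memory is in KiB.
    used_kib = 0
    for info in domain_infos:
        used_kib += int(info[2])
    # Convert KiB to MiB, as the `used // units.Ki` division does in Nova.
    return used_kib // 1024
```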
@@ -1115,13 +1099,10 @@

         :returns: the total usage of memory(MB).
         """
-        if CONF.libvirt.virt_type == 'xen':
-            # For xen, report the sum of all domains, with
-            return self._sum_domain_memory_mb(include_host=True)
-        elif CONF.libvirt.file_backed_memory > 0:
+        if CONF.libvirt.file_backed_memory > 0:
             # For file_backed_memory, report the total usage of guests,
             # ignoring host memory
-            return self._sum_domain_memory_mb(include_host=False)
+            return self._sum_domain_memory_mb()
         else:
             return (self.get_memory_mb_total() -
                     (self._get_avail_memory_kb() // units.Ki))
@@ -144,6 +144,7 @@ class Image(metaclass=abc.ABCMeta):
         """
         pass

+    # TODO(stephenfin): Remove the unused hypervisor_version parameter
     def libvirt_info(self, disk_info, cache_mode,
                      extra_specs, hypervisor_version, boot_order=None,
                      disk_unit=None):

@@ -165,9 +166,10 @@
         info.driver_discard = self.discard_mode
         info.driver_io = self.driver_io
         info.driver_format = self.driver_format
-        driver_name = libvirt_utils.pick_disk_driver_name(hypervisor_version,
-                                                          self.is_block_dev)
-        info.driver_name = driver_name
+        if CONF.libvirt.virt_type in ('qemu', 'kvm'):
+            # the QEMU backend supports multiple backends, so tell libvirt
+            # which one to use
+            info.driver_name = 'qemu'
         info.source_path = self.path
         info.boot_order = boot_order
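The removed `pick_disk_driver_name()` indirection collapses to a constant choice; sketched here as a hypothetical helper:

```python
def disk_driver_name(virt_type):
    # The qemu/kvm driver exposes multiple disk backends, so libvirt
    # must be told which one to use ('qemu'); for parallels and lxc
    # libvirt needs no hint, so no driver name is set.
    if virt_type in ('qemu', 'kvm'):
        return 'qemu'
    return None
```

This is why the `hypervisor_version` parameter becomes dead weight (hence the TODO above): the old function only consulted it to pick between the Xen tap/tap2/phy/file backends.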
@@ -856,6 +858,7 @@ class Rbd(Image):

         self.discard_mode = CONF.libvirt.hw_disk_discard

+    # TODO(stephenfin): Remove the unused hypervisor_version parameter
     def libvirt_info(self, disk_info, cache_mode,
                      extra_specs, hypervisor_version, boot_order=None,
                      disk_unit=None):
@@ -18,7 +18,6 @@
 # License for the specific language governing permissions and limitations
 # under the License.

-import errno
 import grp
 import os
 import pwd
@@ -185,63 +184,6 @@ def create_ploop_image(
     nova.privsep.libvirt.ploop_init(size, disk_format, fs_type, disk_path)


-def pick_disk_driver_name(
-    hypervisor_version: int, is_block_dev: bool = False,
-) -> ty.Optional[str]:
-    """Pick the libvirt primary backend driver name
-
-    If the hypervisor supports multiple backend drivers we have to tell libvirt
-    which one should be used.
-
-    Xen supports the following drivers: "tap", "tap2", "phy", "file", or
-    "qemu", being "qemu" the preferred one. Qemu only supports "qemu".
-
-    :param is_block_dev:
-    :returns: driver_name or None
-    """
-    if CONF.libvirt.virt_type == "xen":
-        if is_block_dev:
-            return "phy"
-        else:
-            # 4002000 == 4.2.0
-            if hypervisor_version >= 4002000:
-                try:
-                    nova.privsep.libvirt.xend_probe()
-                except OSError as exc:
-                    if exc.errno == errno.ENOENT:
-                        LOG.debug("xend is not found")
-                        # libvirt will try to use libxl toolstack
-                        return 'qemu'
-                    else:
-                        raise
-                except processutils.ProcessExecutionError:
-                    LOG.debug("xend is not started")
-                    # libvirt will try to use libxl toolstack
-                    return 'qemu'
-            # libvirt will use xend/xm toolstack
-            try:
-                out, err = processutils.execute('tap-ctl', 'check',
-                                                check_exit_code=False)
-                if out == 'ok\n':
-                    # 4000000 == 4.0.0
-                    if hypervisor_version > 4000000:
-                        return "tap2"
-                    else:
-                        return "tap"
-                else:
-                    LOG.info("tap-ctl check: %s", out)
-            except OSError as exc:
-                if exc.errno == errno.ENOENT:
-                    LOG.debug("tap-ctl tool is not installed")
-                else:
-                    raise
-            return "file"
-    elif CONF.libvirt.virt_type in ('kvm', 'qemu'):
-        return "qemu"
-    else:  # parallels
-        return None


 def get_disk_size(path: str, format: ty.Optional[str] = None) -> int:
     """Get the (virtual) size of a disk image
|
|||||||
raise RuntimeError(_("Can't retrieve root device path "
|
raise RuntimeError(_("Can't retrieve root device path "
|
||||||
"from instance libvirt configuration"))
|
"from instance libvirt configuration"))
|
||||||
|
|
||||||
# This is a legacy quirk of libvirt/xen. Everything else should
|
return disk_path, disk_format
|
||||||
# report the on-disk format in type.
|
|
||||||
if disk_format == 'aio':
|
|
||||||
disk_format = 'raw'
|
|
||||||
return (disk_path, disk_format)
|
|
||||||
|
|
||||||
|
|
||||||
def get_disk_type_from_path(path: str) -> ty.Optional[str]:
|
def get_disk_type_from_path(path: str) -> ty.Optional[str]:
|
||||||
|
@@ -57,7 +57,8 @@ SUPPORTED_VIF_MODELS = {
         network_model.VIF_MODEL_E1000E,
         network_model.VIF_MODEL_LAN9118,
         network_model.VIF_MODEL_SPAPR_VLAN,
-        network_model.VIF_MODEL_VMXNET3],
+        network_model.VIF_MODEL_VMXNET3,
+    ],
     'kvm': [
         network_model.VIF_MODEL_VIRTIO,
         network_model.VIF_MODEL_NE2K_PCI,

@@ -66,18 +67,14 @@ SUPPORTED_VIF_MODELS = {
         network_model.VIF_MODEL_E1000,
         network_model.VIF_MODEL_E1000E,
         network_model.VIF_MODEL_SPAPR_VLAN,
-        network_model.VIF_MODEL_VMXNET3],
-    'xen': [
-        network_model.VIF_MODEL_NETFRONT,
-        network_model.VIF_MODEL_NE2K_PCI,
-        network_model.VIF_MODEL_PCNET,
-        network_model.VIF_MODEL_RTL8139,
-        network_model.VIF_MODEL_E1000],
+        network_model.VIF_MODEL_VMXNET3,
+    ],
     'lxc': [],
     'parallels': [
         network_model.VIF_MODEL_VIRTIO,
         network_model.VIF_MODEL_RTL8139,
-        network_model.VIF_MODEL_E1000],
+        network_model.VIF_MODEL_E1000,
+    ],
 }
@@ -18,13 +18,13 @@

 from oslo_log import log as logging

+import nova.conf
 from nova import exception
 from nova import profiler
 from nova.virt import block_device as driver_block_device
 from nova.virt.libvirt import config as vconfig
-from nova.virt.libvirt import utils as libvirt_utils


+CONF = nova.conf.CONF
 LOG = logging.getLogger(__name__)

@@ -38,10 +38,6 @@ class LibvirtBaseVolumeDriver(object):
     def get_config(self, connection_info, disk_info):
         """Returns xml for libvirt."""
         conf = vconfig.LibvirtConfigGuestDisk()
-        conf.driver_name = libvirt_utils.pick_disk_driver_name(
-            self.host.get_version(),
-            self.is_block_dev
-        )

         conf.source_device = disk_info['type']
         conf.driver_format = "raw"

@@ -50,6 +46,11 @@ class LibvirtBaseVolumeDriver(object):
         conf.target_bus = disk_info['bus']
         conf.serial = connection_info.get('serial')

+        if CONF.libvirt.virt_type in ('qemu', 'kvm'):
+            # the QEMU backend supports multiple backends, so tell libvirt
+            # which one to use
+            conf.driver_name = 'qemu'
+
         # Support for block size tuning
         data = {}
         if 'data' in connection_info:
@@ -0,0 +1,9 @@
+---
+upgrade:
+  - |
+    Support for the libvirt+xen hypervisor model has been removed. This has
+    not been validated in some time and was not supported.
+  - |
+    The ``[libvirt] xen_hvmloader_path`` config option has been removed. This
+    was only used with the libvirt+xen hypervisor, which is no longer
+    supported.