docs: Remove duplicated PCI passthrough extra spec info
There's also a PCI passthrough guide. Use that instead, allowing us to
remove the sections for various extra specs from the 'user/flavors'
guide:

- hw:pci_numa_affinity_policy
- pci_passthrough:alias

Change-Id: I5701d284c2cfdadf825f8e2f699651b3f8c0c9ab
Signed-off-by: Stephen Finucane <stephenfin@redhat.com>
commit c5ebaef610 (parent 94c03e8d66)
@@ -16,11 +16,12 @@ different guests. In the case of PCI passthrough, the full physical device is
 assigned to only one guest and cannot be shared.
 
 PCI devices are requested through flavor extra specs, specifically via the
-``pci_passthrough:alias=<alias>`` flavor extra spec. This guide demonstrates
-how to enable PCI passthrough for a type of PCI device with a vendor ID of
-``8086`` and a product ID of ``154d`` - an Intel X520 Network Adapter - by
-mapping them to the alias ``a1``. You should adjust the instructions for other
-devices with potentially different capabilities.
+:nova:extra-spec:`pci_passthrough:alias` flavor extra spec.
+This guide demonstrates how to enable PCI passthrough for a type of PCI device
+with a vendor ID of ``8086`` and a product ID of ``154d`` - an Intel X520
+Network Adapter - by mapping them to the alias ``a1``.
+You should adjust the instructions for other devices with potentially different
+capabilities.
 
 .. note::
@@ -50,9 +51,12 @@ devices with potentially different capabilities.
 Nova will ignore PCI devices reported by the hypervisor if the address is
 outside of these ranges.
 
-Configure host (Compute)
+Enabling PCI passthrough
 ------------------------
 
+Configure compute host
+~~~~~~~~~~~~~~~~~~~~~~
+
 To enable PCI passthrough on an x86, Linux-based compute node, the following
 are required:
 
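The requirements list itself falls outside this hunk, but in practice the kernel-side piece comes down to an enabled IOMMU on the host's kernel command line. A minimal sketch for an Intel host booted via GRUB; the file path, the ``iommu=pt`` option and the ``update-grub`` step are distro-specific assumptions, and AMD hosts use ``amd_iommu=on`` instead:

.. code-block:: console

   # grep GRUB_CMDLINE_LINUX /etc/default/grub
   GRUB_CMDLINE_LINUX="... intel_iommu=on iommu=pt"
   # update-grub
   # reboot

After the reboot, something like ``dmesg | grep -i -e dmar -e iommu`` should confirm the IOMMU is active.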
@@ -83,9 +87,8 @@ passthrough`__.
 
 .. __: https://devblogs.microsoft.com/scripting/passing-through-devices-to-hyper-v-vms-by-using-discrete-device-assignment/
 
-Configure ``nova-compute`` (Compute)
-------------------------------------
+Configure ``nova-compute``
+~~~~~~~~~~~~~~~~~~~~~~~~~~
 
 Once PCI passthrough has been configured for the host, :program:`nova-compute`
 must be configured to allow the PCI device to pass through to VMs. This is done
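The option that sentence goes on to describe is not visible in this hunk; around the time of this change it would have been the ``[pci] passthrough_whitelist`` option (the exact option name is an assumption here, and it has been renamed in later releases). A sketch for the sample X520 device:

.. code-block:: ini

   [pci]
   # Expose every PCI device matching the sample vendor/product pair
   # to nova-compute. The option name is an assumption for this release.
   passthrough_whitelist = { "vendor_id": "8086", "product_id": "154d" }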
@@ -115,9 +118,10 @@ In addition, it is necessary to configure the :oslo.config:option:`pci.alias`
 option, which is a JSON-style configuration option that allows you to map a
 given device type, identified by the standard PCI ``vendor_id`` and (optional)
 ``product_id`` fields, to an arbitrary name or *alias*. This alias can then be
-used to request a PCI device using the ``pci_passthrough:alias=<alias>`` flavor
-extra spec, as discussed previously. For our sample device with a vendor ID of
-``0x8086`` and a product ID of ``0x154d``, this would be:
+used to request a PCI device using the :nova:extra-spec:`pci_passthrough:alias`
+flavor extra spec, as discussed previously.
+For our sample device with a vendor ID of ``0x8086`` and a product ID of
+``0x154d``, this would be:
 
 .. code-block:: ini
 
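The ini block this hunk leads into is not shown, but it uses the same ``[pci] alias`` format that appears later in this change as context for the ``nova-api`` host. A sketch for the sample device, where ``device_type`` and ``numa_policy`` are optional keys:

.. code-block:: ini

   [pci]
   # Map the sample Intel X520 (vendor 0x8086, product 0x154d) to the alias 'a1'.
   alias = { "vendor_id":"8086", "product_id":"154d", "device_type":"type-PF", "name":"a1" }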
@@ -152,9 +156,8 @@ Refer to :oslo.config:option:`pci.alias` for syntax information.
 
 Once configured, restart the :program:`nova-compute` service.
 
-Configure ``nova-scheduler`` (Controller)
------------------------------------------
+Configure ``nova-scheduler``
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
 The :program:`nova-scheduler` service must be configured to enable the
 ``PciPassthroughFilter``. To do this, add this filter to the list of filters
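For reference, the resulting scheduler section of ``nova.conf`` usually looks something like the sketch below; the rest of the filter list is deployment-specific and only illustrative here:

.. code-block:: ini

   [filter_scheduler]
   # Make all in-tree filters available, then enable PciPassthroughFilter
   # alongside whatever filters the deployment already uses.
   available_filters = nova.scheduler.filters.all_filters
   enabled_filters = ComputeFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter,PciPassthroughFilter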
@@ -170,11 +173,8 @@ specified in :oslo.config:option:`filter_scheduler.enabled_filters` and set
 
 Once done, restart the :program:`nova-scheduler` service.
 
-.. _pci-passthrough-alias:
-
-Configure ``nova-api`` (Controller)
------------------------------------
+Configure ``nova-api``
+~~~~~~~~~~~~~~~~~~~~~~
 
 It is necessary to also configure the :oslo.config:option:`pci.alias` config
 option on the controller. This configuration should match the configuration
@@ -186,13 +186,14 @@ found on the compute nodes. For example:
    alias = { "vendor_id":"8086", "product_id":"154d", "device_type":"type-PF", "name":"a1", "numa_policy":"preferred" }
 
 Refer to :oslo.config:option:`pci.alias` for syntax information.
-Refer to :ref:`Affinity <pci_numa_affinity_policy>` for ``numa_policy`` information.
+Refer to :ref:`Affinity <pci-numa-affinity-policy>` for ``numa_policy``
+information.
 
 Once configured, restart the :program:`nova-api` service.
 
 
-Configure a flavor (API)
-------------------------
+Configuring a flavor or image
+-----------------------------
 
 Once the alias has been configured, it can be used for a flavor extra spec.
 For example, to request two of the PCI devices referenced by alias ``a1``, run:
@@ -202,15 +203,76 @@ For example, to request two of the PCI devices referenced by alias ``a1``, run:
    $ openstack flavor set m1.large --property "pci_passthrough:alias"="a1:2"
 
 For more information about the syntax for ``pci_passthrough:alias``, refer to
-:ref:`Flavors <extra-spec-pci-passthrough>`.
+:doc:`the documentation </configuration/extra-specs>`.
 
 
-Create instances with PCI passthrough devices
----------------------------------------------
+.. _pci-numa-affinity-policy:
 
-The :program:`nova-scheduler` service selects a destination host that has PCI
-devices available that match the ``alias`` specified in the flavor.
+PCI-NUMA affinity policies
+--------------------------
+
+By default, the libvirt driver enforces strict NUMA affinity for PCI devices,
+be they PCI passthrough devices or neutron SR-IOV interfaces. This means that
+by default a PCI device must be allocated from the same host NUMA node as at
+least one of the instance's CPUs. This isn't always necessary, however, and you
+can configure this policy using the
+:nova:extra-spec:`hw:pci_numa_affinity_policy` flavor extra spec or equivalent
+image metadata property. There are four possible values allowed:
+
+**required**
+  This policy means that nova will boot instances with PCI devices **only**
+  if at least one of the NUMA nodes of the instance is associated with these
+  PCI devices. It means that if NUMA node info for some PCI devices could not
+  be determined, those PCI devices wouldn't be consumable by the instance.
+  This provides maximum performance.
+
+**socket**
+  This policy means that the PCI device must be affined to the same host
+  socket as at least one of the guest NUMA nodes. For example, consider a
+  system with two sockets, each with two NUMA nodes, numbered node 0 and node
+  1 on socket 0, and node 2 and node 3 on socket 1. There is a PCI device
+  affined to node 0. A PCI instance with two guest NUMA nodes and the
+  ``socket`` policy can be affined to either:
+
+  * node 0 and node 1
+  * node 0 and node 2
+  * node 0 and node 3
+  * node 1 and node 2
+  * node 1 and node 3
+
+  The instance cannot be affined to node 2 and node 3, as neither of those
+  are on the same socket as the PCI device. If the other nodes are consumed
+  by other instances and only nodes 2 and 3 are available, the instance
+  will not boot.
+
+**preferred**
+  This policy means that ``nova-scheduler`` will choose a compute host
+  with minimal consideration for the NUMA affinity of PCI devices.
+  ``nova-compute`` will attempt a best effort selection of PCI devices
+  based on NUMA affinity, however, if this is not possible then
+  ``nova-compute`` will fall back to scheduling on a NUMA node that is not
+  associated with the PCI device.
+
+**legacy**
+  This is the default policy and it describes the current nova behavior.
+  Usually we have information about association of PCI devices with NUMA
+  nodes. However, some PCI devices do not provide such information. The
+  ``legacy`` value will mean that nova will boot instances with PCI devices
+  if either:
+
+  * The PCI device is associated with at least one NUMA node on which the
+    instance will be booted
+
+  * There is no information about PCI-NUMA affinity available
+
+For example, to configure a flavor to use the ``preferred`` PCI NUMA affinity
+policy for any neutron SR-IOV interfaces attached by the user:
 
 .. code-block:: console
 
-   # openstack server create --flavor m1.large --image cirros-0.3.5-x86_64-uec --wait test-pci
+   $ openstack flavor set $FLAVOR \
+       --property hw:pci_numa_affinity_policy=preferred
+
+You can also configure this for PCI passthrough devices by specifying the
+policy in the alias configuration via :oslo.config:option:`pci.alias`. For more
+information, refer to :oslo.config:option:`the documentation <pci.alias>`.
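The equivalent image metadata property (``hw_pci_numa_affinity_policy``, named in the flavors guide text removed later in this change) can be set in much the same way; a sketch, with ``$IMAGE`` as a placeholder for an existing image:

.. code-block:: console

   $ openstack image set $IMAGE \
       --property hw_pci_numa_affinity_policy=preferred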
@@ -184,61 +184,6 @@ Performance Monitoring Unit (vPMU)
   required, such workloads should set ``hw:pmu=False``. For most workloads
   the default of unset or enabling the vPMU ``hw:pmu=True`` will be correct.
 
-.. _pci_numa_affinity_policy:
-
-PCI NUMA Affinity Policy
-  For the libvirt driver, you can specify the NUMA affinity policy for
-  PCI passthrough devices and neutron SR-IOV interfaces via the
-  ``hw:pci_numa_affinity_policy`` flavor extra spec or
-  ``hw_pci_numa_affinity_policy`` image property. The allowed values are
-  ``required``, ``socket``, ``preferred`` or ``legacy`` (default).
-
-  **required**
-    This value will mean that nova will boot instances with PCI devices
-    **only** if at least one of the NUMA nodes of the instance is associated
-    with these PCI devices. It means that if NUMA node info for some PCI
-    devices could not be determined, those PCI devices wouldn't be consumable
-    by the instance. This provides maximum performance.
-
-  **socket**
-    This means that the PCI device must be affined to the same host socket as
-    at least one of the guest NUMA nodes. For example, consider a system with
-    two sockets, each with two NUMA nodes, numbered node 0 and node 1 on
-    socket 0, and node 2 and node 3 on socket 1. There is a PCI device
-    affined to node 0. An PCI instance with two guest NUMA nodes and the
-    ``socket`` policy can be affined to either:
-
-    * node 0 and node 1
-    * node 0 and node 2
-    * node 0 and node 3
-    * node 1 and node 2
-    * node 1 and node 3
-
-    The instance cannot be affined to node 2 and node 3, as neither of those
-    are on the same socket as the PCI device. If the other nodes are consumed
-    by other instances and only nodes 2 and 3 are available, the instance
-    will not boot.
-
-  **preferred**
-    This value will mean that ``nova-scheduler`` will choose a compute host
-    with minimal consideration for the NUMA affinity of PCI devices.
-    ``nova-compute`` will attempt a best effort selection of PCI devices
-    based on NUMA affinity, however, if this is not possible then
-    ``nova-compute`` will fall back to scheduling on a NUMA node that is not
-    associated with the PCI device.
-
-  **legacy**
-    This is the default value and it describes the current nova behavior.
-    Usually we have information about association of PCI devices with NUMA
-    nodes. However, some PCI devices do not provide such information. The
-    ``legacy`` value will mean that nova will boot instances with PCI device
-    if either:
-
-    * The PCI device is associated with at least one NUMA nodes on which the
-      instance will be booted
-
-    * There is no information about PCI-NUMA affinity available
-
 .. _extra-specs-memory-encryption:
 
 Hardware encryption of guest memory
@@ -251,24 +196,6 @@ Hardware encryption of guest memory
      $ openstack flavor set FLAVOR-NAME \
        --property hw:mem_encryption=True
 
-.. _extra-spec-pci-passthrough:
-
-PCI passthrough
-  You can assign PCI devices to a guest by specifying them in the flavor.
-
-  .. code:: console
-
-     $ openstack flavor set FLAVOR-NAME \
-       --property pci_passthrough:alias=ALIAS:COUNT
-
-  Where:
-
-  - ALIAS: (string) The alias which correspond to a particular PCI device class
-    as configured in the nova configuration file (see
-    :oslo.config:option:`pci.alias`).
-  - COUNT: (integer) The amount of PCI devices of type ALIAS to be assigned to
-    a guest.
-
 .. _extra-specs-hiding-hypervisor-signature:
 
 Hiding hypervisor signature