Remove linuxbridge driver

It has been experimental for some time. The project struggles with the
resulting matrix of configurations, obsolete documentation, and so on.
It is better to remove the code than to let it rot. Interested parties
are advised to migrate to a supported driver (preferably the default
driver for neutron, OVN), or to take over maintenance of the
linuxbridge driver out-of-tree.

Change-Id: I2b3a08352fa5935db8ecb9da312b7ea4b4f7e43a
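
For deployments that set the driver explicitly, the switch is typically a
one-line ML2 change; a minimal sketch, assuming a stock ml2_conf.ini and
the OVN defaults (type drivers vary per deployment):

    [ml2]
    # linuxbridge no longer appears here; OVN is neutron's default driver
    mechanism_drivers = ovn
    type_drivers = geneve,flat,vlan
    tenant_network_types = geneve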
@@ -8,7 +8,6 @@ omit =
    neutron/cmd/eventlet/agents/metadata.py
    neutron/cmd/eventlet/agents/ovn_metadata.py
    neutron/cmd/eventlet/agents/ovn_neutron_agent.py
-    neutron/cmd/eventlet/plugins/linuxbridge_neutron_agent.py
    neutron/cmd/eventlet/plugins/macvtap_neutron_agent.py
    neutron/cmd/eventlet/plugins/ovs_neutron_agent.py
    neutron/cmd/eventlet/plugins/sriov_nic_neutron_agent.py
@@ -14,14 +14,6 @@ function octavia_create_network_interface_device {
            openstack subnet set --gateway none lb-mgmt-subnet
        fi
        sudo ovs-vsctl -- --may-exist add-port ${OVS_BRIDGE:-br-int} $INTERFACE -- set Interface $INTERFACE type=internal -- set Interface $INTERFACE external-ids:iface-status=active -- set Interface $INTERFACE external-ids:attached-mac=$MGMT_PORT_MAC -- set Interface $INTERFACE external-ids:iface-id=$MGMT_PORT_ID -- set Interface $INTERFACE external-ids:skip_cleanup=true
-    elif [[ $NEUTRON_AGENT == "linuxbridge" || $Q_AGENT == "linuxbridge" ]]; then
-        if ! ip link show $INTERFACE ; then
-            sudo ip link add $INTERFACE type veth peer name o-bhm0
-            NETID=$(openstack network show lb-mgmt-net -c id -f value)
-            BRNAME=brq$(echo $NETID|cut -c 1-11)
-            sudo ip link set o-bhm0 master $BRNAME
-            sudo ip link set o-bhm0 up
-        fi
    else
        die "Unknown network controller - $NEUTRON_AGENT/$Q_AGENT"
    fi
@@ -31,10 +23,6 @@ function octavia_delete_network_interface_device {

    if [[ $NEUTRON_AGENT == "openvswitch" || $Q_AGENT == "openvswitch" || $NEUTRON_AGENT == "ovn" || $Q_AGENT == "ovn" ]]; then
        : # Do nothing
-    elif [[ $NEUTRON_AGENT == "linuxbridge" || $Q_AGENT == "linuxbridge" ]]; then
-        if ip link show $INTERFACE ; then
-            sudo ip link del $INTERFACE
-        fi
    else
        die "Unknown network controller - $NEUTRON_AGENT/$Q_AGENT"
    fi
@@ -78,11 +78,11 @@ if [[ "$1" == "stack" ]]; then
        if is_service_enabled q-agt neutron-agent; then
            configure_l2_agent
        fi
-        #Note: sriov agent should run with OVS or linux bridge agent
-        #because they are the mechanisms that bind the DHCP and router ports.
+        #Note: sriov agent should run with OVS agent
+        #because it is the mechanism that binds the DHCP and router ports.
        #Currently devstack lacks the option to run two agents on the same node.
-        #Therefore we create new service, q-sriov-agt, and the
-        # q-agt/neutron-agent should be OVS or linux bridge.
+        #Therefore we create new service, q-sriov-agt, and the
+        # q-agt/neutron-agent should be OVS.
        if is_service_enabled q-sriov-agt neutron-sriov-agent; then
            configure_l2_agent
            configure_l2_agent_sriovnicswitch
@@ -1,532 +0,0 @@
.. _config-dhcp-ha:

======================
DHCP High-availability
======================

This section describes how to use the agent management (alias agent) and
scheduler (alias agent_scheduler) extensions for DHCP agent scalability
and high availability (HA).

.. note::

   Use the :command:`openstack extension list` command to check if these
   extensions are enabled. Check that ``agent`` and ``agent_scheduler``
   are included in the output.

   .. code-block:: console

      $ openstack extension list --network -c Name -c Alias
      +-------------------------------------------------------------+---------------------------+
      | Name                                                        | Alias                     |
      +-------------------------------------------------------------+---------------------------+
      | Default Subnetpools                                         | default-subnetpools       |
      | Network IP Availability                                     | network-ip-availability   |
      | Network Availability Zone                                   | network_availability_zone |
      | Auto Allocated Topology Services                            | auto-allocated-topology   |
      | Neutron L3 Configurable external gateway mode               | ext-gw-mode               |
      | Port Binding                                                | binding                   |
      | Neutron Metering                                            | metering                  |
      | agent                                                       | agent                     |
      | Subnet Allocation                                           | subnet_allocation         |
      | L3 Agent Scheduler                                          | l3_agent_scheduler        |
      | Neutron external network                                    | external-net              |
      | Neutron Service Flavors                                     | flavors                   |
      | Network MTU                                                 | net-mtu                   |
      | Availability Zone                                           | availability_zone         |
      | Quota management support                                    | quotas                    |
      | HA Router extension                                         | l3-ha                     |
      | Provider Network                                            | provider                  |
      | Multi Provider Network                                      | multi-provider            |
      | Address scope                                               | address-scope             |
      | Neutron Extra Route                                         | extraroute                |
      | Subnet service types                                        | subnet-service-types      |
      | Resource timestamps                                         | standard-attr-timestamp   |
      | Neutron Service Type Management                             | service-type              |
      | Router Flavor Extension                                     | l3-flavors                |
      | Neutron Extra DHCP opts                                     | extra_dhcp_opt            |
      | Resource revision numbers                                   | standard-attr-revisions   |
      | Pagination support                                          | pagination                |
      | Sorting support                                             | sorting                   |
      | security-group                                              | security-group            |
      | DHCP Agent Scheduler                                        | dhcp_agent_scheduler      |
      | Router Availability Zone                                    | router_availability_zone  |
      | RBAC Policies                                               | rbac-policies             |
      | standard-attr-description                                   | standard-attr-description |
      | Neutron L3 Router                                           | router                    |
      | Allowed Address Pairs                                       | allowed-address-pairs     |
      | project_id field enabled                                    | project-id                |
      | Distributed Virtual Router                                  | dvr                       |
      +-------------------------------------------------------------+---------------------------+

Demo setup
~~~~~~~~~~

.. figure:: figures/demo_multiple_dhcp_agents.png

There will be three hosts in the setup.
.. list-table::
   :widths: 25 50
   :header-rows: 1

   * - Host
     - Description
   * - OpenStack controller host - controlnode
     - Runs the Networking, Identity, and Compute services that are required
       to deploy VMs. The node must have at least one network interface that
       is connected to the management network. Note that ``nova-network``
       should not be running because it is replaced by Neutron.
   * - HostA
     - Runs ``nova-compute``, the Neutron L2 agent and DHCP agent
   * - HostB
     - Same as HostA

Configuration
~~~~~~~~~~~~~

**controlnode: neutron server**

#. Neutron configuration file ``/etc/neutron/neutron.conf``:

   .. code-block:: ini

      [DEFAULT]
      core_plugin = linuxbridge
      host = controlnode
      agent_down_time = 5
      dhcp_agents_per_network = 1

      [database]
      connection = mysql+pymysql://root:root@127.0.0.1:3306/neutron

   .. note::

      In the above configuration, we use ``dhcp_agents_per_network = 1``
      for this demonstration. In usual deployments, we suggest setting
      ``dhcp_agents_per_network`` to more than one to match the number of
      DHCP agents in your deployment.
      See :ref:`conf-dhcp-agents-per-network`.

#. Update the plug-in configuration file
   ``/etc/neutron/plugins/linuxbridge/linuxbridge_conf.ini``:

   .. code-block:: ini

      [vlans]
      tenant_network_type = vlan
      network_vlan_ranges = physnet1:1000:2999
      [database]
      connection = mysql+pymysql://root:root@127.0.0.1:3306/neutron_linux_bridge
      retry_interval = 2
      [linux_bridge]
      physical_interface_mappings = physnet1:eth0

**HostA and HostB: L2 agent**

#. Neutron configuration file ``/etc/neutron/neutron.conf``:

   .. code-block:: ini

      [DEFAULT]
      # host = HostB on hostb
      host = HostA

      [database]
      connection = mysql+pymysql://root:root@127.0.0.1:3306/neutron

#. Update the plug-in configuration file
   ``/etc/neutron/plugins/linuxbridge/linuxbridge_conf.ini``:

   .. code-block:: ini

      [vlans]
      tenant_network_type = vlan
      network_vlan_ranges = physnet1:1000:2999
      [database]
      connection = mysql+pymysql://root:root@127.0.0.1:3306/neutron_linux_bridge
      retry_interval = 2
      [linux_bridge]
      physical_interface_mappings = physnet1:eth0

#. Update the nova configuration file ``/etc/nova/nova.conf``:

   .. code-block:: ini

      [neutron]
      username=neutron
      password=servicepassword
      auth_url=http://controlnode:35357/v2.0/
      auth_strategy=keystone
      project_name=servicetenant
      url=http://203.0.113.10:9696/

**HostA and HostB: DHCP agent**

- Update the DHCP configuration file ``/etc/neutron/dhcp_agent.ini``:

  .. code-block:: ini

     [DEFAULT]
     interface_driver = neutron.agent.linux.interface.BridgeInterfaceDriver

Prerequisites for demonstration
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Admin role is required to use the agent management and scheduler extensions.
Ensure you run the following commands under a project with an admin role.

To experiment, you need VMs and a neutron network:

.. code-block:: console

   $ openstack server list
   +--------------------------------------+-----------+--------+----------------+--------+----------+
   | ID                                   | Name      | Status | Networks       | Image  | Flavor   |
   +--------------------------------------+-----------+--------+----------------+--------+----------+
   | c394fcd0-0baa-43ae-a793-201815c3e8ce | myserver1 | ACTIVE | net1=192.0.2.3 | cirros | m1.tiny  |
   | 2d604e05-9a6c-4ddb-9082-8a1fbdcc797d | myserver2 | ACTIVE | net1=192.0.2.4 | ubuntu | m1.small |
   | c7c0481c-3db8-4d7a-a948-60ce8211d585 | myserver3 | ACTIVE | net1=192.0.2.5 | centos | m1.small |
   +--------------------------------------+-----------+--------+----------------+--------+----------+

   $ openstack network list
   +--------------------------------------+------+--------------------------------------+
   | ID                                   | Name | Subnets                              |
   +--------------------------------------+------+--------------------------------------+
   | ad88e059-e7fa-4cf7-8857-6731a2a3a554 | net1 | 8086db87-3a7a-4cad-88c9-7bab9bc69258 |
   +--------------------------------------+------+--------------------------------------+

Managing agents in neutron deployment
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

#. List all agents:

   .. code-block:: console

      $ openstack network agent list
      +--------------------------------------+--------------------+-------+-------------------+-------+-------+---------------------------+
      | ID                                   | Agent Type         | Host  | Availability Zone | Alive | State | Binary                    |
      +--------------------------------------+--------------------+-------+-------------------+-------+-------+---------------------------+
      | 22467163-01ea-4231-ba45-3bd316f425e6 | Linux bridge agent | HostA | None              | True  | UP    | neutron-linuxbridge-agent |
      | 2444c54d-0d28-460c-ab0f-cd1e6b5d3c7b | DHCP agent         | HostA | None              | True  | UP    | neutron-dhcp-agent        |
      | 3066d20c-9f8f-440c-ae7c-a40ffb4256b6 | Linux bridge agent | HostB | nova              | True  | UP    | neutron-linuxbridge-agent |
      | 55569f4e-6f31-41a6-be9d-526efce1f7fe | DHCP agent         | HostB | nova              | True  | UP    | neutron-dhcp-agent        |
      +--------------------------------------+--------------------+-------+-------------------+-------+-------+---------------------------+

   Every agent that supports these extensions registers itself with the
   neutron server when it starts up.

   The output shows information for four agents. The ``alive`` field shows
   ``True`` if the agent reported its state within the period defined by the
   ``agent_down_time`` option in the ``neutron.conf`` file. Otherwise,
   ``alive`` is ``False``.

#. List DHCP agents that host a specified network:

   .. code-block:: console

      $ openstack network agent list --network net1
      +--------------------------------------+---------------+----------------+-------+
      | ID                                   | Host          | Admin State Up | Alive |
      +--------------------------------------+---------------+----------------+-------+
      | 22467163-01ea-4231-ba45-3bd316f425e6 | HostA         | UP             | True  |
      +--------------------------------------+---------------+----------------+-------+

#. List the networks hosted by a given DHCP agent:

   This command shows which networks a given DHCP agent manages.

   .. code-block:: console

      $ openstack network list --agent 22467163-01ea-4231-ba45-3bd316f425e6
      +--------------------------------------+------+--------------------------------------+
      | ID                                   | Name | Subnets                              |
      +--------------------------------------+------+--------------------------------------+
      | ad88e059-e7fa-4cf7-8857-6731a2a3a554 | net1 | 8086db87-3a7a-4cad-88c9-7bab9bc69258 |
      +--------------------------------------+------+--------------------------------------+

#. Show agent details.

   The :command:`openstack network agent show` command shows details for a
   specified agent:

   .. code-block:: console

      $ openstack network agent show 2444c54d-0d28-460c-ab0f-cd1e6b5d3c7b
      +---------------------+--------------------------------------------------+
      | Field               | Value                                            |
      +---------------------+--------------------------------------------------+
      | admin_state_up      | UP                                               |
      | agent_type          | DHCP agent                                       |
      | alive               | True                                             |
      | availability_zone   | nova                                             |
      | binary              | neutron-dhcp-agent                               |
      | configurations      | dhcp_driver='neutron.agent.linux.dhcp.Dnsmasq',  |
      |                     | dhcp_lease_duration='86400',                     |
      |                     | log_agent_heartbeats='False', networks='1',      |
      |                     | notifies_port_ready='True', ports='3',           |
      |                     | subnets='1'                                      |
      | created_at          | 2016-12-14 00:25:54                              |
      | description         | None                                             |
      | last_heartbeat_at   | 2016-12-14 06:53:24                              |
      | host                | HostA                                            |
      | id                  | 2444c54d-0d28-460c-ab0f-cd1e6b5d3c7b             |
      | started_at          | 2016-12-14 00:25:54                              |
      | topic               | dhcp_agent                                       |
      +---------------------+--------------------------------------------------+

   In this output, ``last_heartbeat_at`` is the time on the neutron
   server. You do not need to synchronize all agents to this time for this
   extension to run correctly. ``configurations`` describes the static
   configuration for the agent or run-time data. This agent is a DHCP agent
   and it hosts one network, one subnet, and three ports.

   Different types of agents show different details. The following output
   shows information for a Linux bridge agent:

   .. code-block:: console

      $ openstack network agent show 22467163-01ea-4231-ba45-3bd316f425e6
      +---------------------+--------------------------------------+
      | Field               | Value                                |
      +---------------------+--------------------------------------+
      | admin_state_up      | UP                                   |
      | agent_type          | Linux bridge agent                   |
      | alive               | True                                 |
      | availability_zone   | nova                                 |
      | binary              | neutron-linuxbridge-agent            |
      | configurations      | {                                    |
      |                     |      "physnet1": "eth0",             |
      |                     |      "devices": "4"                  |
      |                     | }                                    |
      | created_at          | 2016-12-14 00:26:54                  |
      | description         | None                                 |
      | last_heartbeat_at   | 2016-12-14 06:53:24                  |
      | host                | HostA                                |
      | id                  | 22467163-01ea-4231-ba45-3bd316f425e6 |
      | started_at          | 2016-12-14T06:48:39.000000           |
      | topic               | N/A                                  |
      +---------------------+--------------------------------------+

   The output shows the bridge mapping (``physnet1`` to ``eth0``) and the
   number of virtual network devices on this L2 agent.

Managing assignment of networks to DHCP agent
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

A single network can be assigned to more than one DHCP agent, and one
DHCP agent can host more than one network.
You can add a network to a DHCP agent and remove a network from it.

#. Default scheduling.

   When you create a network with one port, the network is scheduled to
   an active DHCP agent. If several active DHCP agents are running, one is
   selected randomly. You can design more sophisticated scheduling
   algorithms, in the same way as ``nova-scheduler``, later on.

   .. code-block:: console

      $ openstack network create net2
      $ openstack subnet create --network net2 --subnet-range 198.51.100.0/24 subnet2
      $ openstack port create port2 --network net2
      $ openstack network agent list --network net2
      +--------------------------------------+---------------+----------------+-------+
      | ID                                   | Host          | Admin State Up | Alive |
      +--------------------------------------+---------------+----------------+-------+
      | 2444c54d-0d28-460c-ab0f-cd1e6b5d3c7b | HostA         | UP             | True  |
      +--------------------------------------+---------------+----------------+-------+

   The network is allocated to the DHCP agent on HostA. If you want to
   validate the behavior through the :command:`dnsmasq` command, you must
   create a subnet for the network because the DHCP agent starts the
   ``dnsmasq`` service only if there is a DHCP-enabled subnet.

#. Assign a network to a given DHCP agent.

   To add another DHCP agent to host the network, run this command:

   .. code-block:: console

      $ openstack network agent add network --dhcp \
        55569f4e-6f31-41a6-be9d-526efce1f7fe net2
      $ openstack network agent list --network net2
      +--------------------------------------+-------+----------------+--------+
      | ID                                   | Host  | Admin State Up | Alive  |
      +--------------------------------------+-------+----------------+--------+
      | 2444c54d-0d28-460c-ab0f-cd1e6b5d3c7b | HostA | UP             | True   |
      | 55569f4e-6f31-41a6-be9d-526efce1f7fe | HostB | UP             | True   |
      +--------------------------------------+-------+----------------+--------+

   Both DHCP agents host the ``net2`` network.

#. Remove a network from a specified DHCP agent.

   This command is the sibling command for the previous one. Remove
   ``net2`` from the DHCP agent for HostA:

   .. code-block:: console

      $ openstack network agent remove network --dhcp \
        2444c54d-0d28-460c-ab0f-cd1e6b5d3c7b net2
      $ openstack network agent list --network net2
      +--------------------------------------+-------+----------------+-------+
      | ID                                   | Host  | Admin State Up | Alive |
      +--------------------------------------+-------+----------------+-------+
      | 55569f4e-6f31-41a6-be9d-526efce1f7fe | HostB | UP             | True  |
      +--------------------------------------+-------+----------------+-------+

   You can see that only the DHCP agent for HostB is hosting the ``net2``
   network.

HA of DHCP agents
~~~~~~~~~~~~~~~~~

Boot a VM on ``net2``. Let both DHCP agents host ``net2``. Fail the agents
in turn to see if the VM can still get the desired IP.

#. Boot a VM on ``net2``:

   .. code-block:: console

      $ openstack network list
      +--------------------------------------+------+--------------------------------------+
      | ID                                   | Name | Subnets                              |
      +--------------------------------------+------+--------------------------------------+
      | ad88e059-e7fa-4cf7-8857-6731a2a3a554 | net1 | 8086db87-3a7a-4cad-88c9-7bab9bc69258 |
      | 9b96b14f-71b8-4918-90aa-c5d705606b1a | net2 | 6979b71a-0ae8-448c-aa87-65f68eedcaaa |
      +--------------------------------------+------+--------------------------------------+
      $ openstack server create --image tty --flavor 1 myserver4 \
        --nic net-id=9b96b14f-71b8-4918-90aa-c5d705606b1a
      ...
      $ openstack server list
      +--------------------------------------+-----------+--------+-------------------+---------+----------+
      | ID                                   | Name      | Status | Networks          | Image   | Flavor   |
      +--------------------------------------+-----------+--------+-------------------+---------+----------+
      | c394fcd0-0baa-43ae-a793-201815c3e8ce | myserver1 | ACTIVE | net1=192.0.2.3    | cirros  | m1.tiny  |
      | 2d604e05-9a6c-4ddb-9082-8a1fbdcc797d | myserver2 | ACTIVE | net1=192.0.2.4    | ubuntu  | m1.small |
      | c7c0481c-3db8-4d7a-a948-60ce8211d585 | myserver3 | ACTIVE | net1=192.0.2.5    | centos  | m1.small |
      | f62f4731-5591-46b1-9d74-f0c901de567f | myserver4 | ACTIVE | net2=198.51.100.2 | cirros1 | m1.tiny  |
      +--------------------------------------+-----------+--------+-------------------+---------+----------+

#. Make sure both DHCP agents are hosting ``net2``:

   Use the previous commands to assign the network to agents.

   .. code-block:: console

      $ openstack network agent list --network net2
      +--------------------------------------+-------+----------------+-------+
      | ID                                   | Host  | Admin State Up | Alive |
      +--------------------------------------+-------+----------------+-------+
      | 2444c54d-0d28-460c-ab0f-cd1e6b5d3c7b | HostA | UP             | True  |
      | 55569f4e-6f31-41a6-be9d-526efce1f7fe | HostB | UP             | True  |
      +--------------------------------------+-------+----------------+-------+

To test the HA of the DHCP agents:

#. Log in to the ``myserver4`` VM, and run ``udhcpc``, ``dhclient`` or
   another DHCP client (a sketch follows this list).

#. Stop the DHCP agent on HostA. Besides stopping the
   ``neutron-dhcp-agent`` binary, you must stop the ``dnsmasq`` processes.

#. Run a DHCP client in the VM to see if it can get the wanted IP.

#. Stop the DHCP agent on HostB too.

#. Run ``udhcpc`` in the VM; it cannot get the wanted IP.

#. Start the DHCP agent on HostB. The VM gets the wanted IP again.
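
A minimal sketch of the in-guest check used in these steps, assuming a
CirrOS-like image where ``udhcpc`` is available and the port is ``eth0``:

.. code-block:: console

   $ sudo udhcpc -i eth0 -n -q    # forces a fresh DHCP exchange; -n exits on failure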

No HA for metadata service on isolated networks
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

All Neutron backends using the DHCP agent can also provide `metadata service
<https://docs.openstack.org/nova/latest/user/metadata.html>`_ in isolated
networks (i.e. networks without a router). In this case the DHCP agent manages
the metadata service (see config option `enable_isolated_metadata
<https://docs.openstack.org/neutron/latest/configuration/dhcp-agent.html#DEFAULT.enable_isolated_metadata>`_).

Note however that the metadata service is only redundant for IPv4, and not
IPv6, even when the DHCP service is configured to be highly available
(config option `dhcp_agents_per_network
<https://docs.openstack.org/neutron/latest/configuration/neutron.html#DEFAULT.dhcp_agents_per_network>`_
> 1). This is because the DHCP agent inserts a route to the well-known
metadata IPv4 address (`169.254.169.254`) via its own IP address, so it will
be reachable as long as the DHCP service is available at that IP address.
This also means that recovery after a failure is tied to the renewal of the
DHCP lease, since that route will only change if the DHCP server for a VM
changes.
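
As an illustration of that injected route, from inside a hypothetical guest
(the next-hop address is whatever IP the serving DHCP port owns; output is
illustrative):

.. code-block:: console

   $ ip route | grep 169.254.169.254    # next hop = DHCP port of the serving agent
   169.254.169.254 via 192.0.2.2 dev eth0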

With IPv6, the well-known metadata IPv6 address (`fe80::a9fe:a9fe`) is used,
but directly configured in the DHCP agent network namespace.
Due to the enforcement of duplicate address detection (DAD), this address
can only be configured in at most one DHCP network namespace at any time.
See `RFC 4862 <https://www.rfc-editor.org/rfc/rfc4862#section-5.4>`_ for
details on the DAD process.

For this reason, even when you have multiple DHCP agents, an arbitrary one
(where the metadata IPv6 address is not in `dadfailed` state) will serve all
metadata requests over IPv6. When that metadata service instance goes down,
there is no failover and the service becomes unreachable.

Disabling and removing an agent
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

An administrator might want to disable an agent if a system hardware or
software upgrade is planned. Some agents that support scheduling, such as
the L3 and DHCP agents, also support disabling and enabling. After
the agent is disabled, the scheduler does not schedule new resources to
the agent.

After the agent is disabled, you can safely remove the agent.
Even after disabling the agent, resources on the agent remain assigned.
Ensure you remove the resources on the agent before you delete the agent.

Disable the DHCP agent on HostA before you stop it:

.. code-block:: console

   $ openstack network agent set 2444c54d-0d28-460c-ab0f-cd1e6b5d3c7b --disable
   $ openstack network agent list
   +--------------------------------------+--------------------+-------+-------------------+-------+-------+---------------------------+
   | ID                                   | Agent Type         | Host  | Availability Zone | Alive | State | Binary                    |
   +--------------------------------------+--------------------+-------+-------------------+-------+-------+---------------------------+
   | 22467163-01ea-4231-ba45-3bd316f425e6 | Linux bridge agent | HostA | None              | True  | UP    | neutron-linuxbridge-agent |
   | 2444c54d-0d28-460c-ab0f-cd1e6b5d3c7b | DHCP agent         | HostA | None              | True  | DOWN  | neutron-dhcp-agent        |
   | 3066d20c-9f8f-440c-ae7c-a40ffb4256b6 | Linux bridge agent | HostB | nova              | True  | UP    | neutron-linuxbridge-agent |
   | 55569f4e-6f31-41a6-be9d-526efce1f7fe | DHCP agent         | HostB | nova              | True  | UP    | neutron-dhcp-agent        |
   +--------------------------------------+--------------------+-------+-------------------+-------+-------+---------------------------+

After you stop the DHCP agent on HostA, you can delete it with the following
command:

.. code-block:: console

   $ openstack network agent delete 2444c54d-0d28-460c-ab0f-cd1e6b5d3c7b
   $ openstack network agent list
   +--------------------------------------+--------------------+-------+-------------------+-------+-------+---------------------------+
   | ID                                   | Agent Type         | Host  | Availability Zone | Alive | State | Binary                    |
   +--------------------------------------+--------------------+-------+-------------------+-------+-------+---------------------------+
   | 22467163-01ea-4231-ba45-3bd316f425e6 | Linux bridge agent | HostA | None              | True  | UP    | neutron-linuxbridge-agent |
   | 3066d20c-9f8f-440c-ae7c-a40ffb4256b6 | Linux bridge agent | HostB | nova              | True  | UP    | neutron-linuxbridge-agent |
   | 55569f4e-6f31-41a6-be9d-526efce1f7fe | DHCP agent         | HostB | nova              | True  | UP    | neutron-dhcp-agent        |
   +--------------------------------------+--------------------+-------+-------------------+-------+-------+---------------------------+

After deletion, if you restart the DHCP agent, it appears on the agent
list again.

.. _conf-dhcp-agents-per-network:

Enabling DHCP high availability by default
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

You can control the default number of DHCP agents assigned to a network
by setting the following configuration option
in the file ``/etc/neutron/neutron.conf``.

.. code-block:: ini

   dhcp_agents_per_network = 3
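
A quick way to check the effect of this option (names below are
illustrative; it assumes at least three alive DHCP agents) is to create a
network with a DHCP-enabled subnet and list its agents:

.. code-block:: console

   $ openstack network create net-ha-demo            # illustrative names
   $ openstack subnet create --network net-ha-demo --subnet-range 203.0.113.0/24 subnet-ha-demo
   $ openstack network agent list --network net-ha-demo -c ID -c Host -c Alive
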
@@ -13,8 +13,8 @@ the SNAT service to a backup DVR/SNAT router on an l3-agent running on a
different node.

SNAT high availability is implemented in a manner similar to the
-:ref:`deploy-lb-ha-vrrp` and :ref:`deploy-ovs-ha-vrrp` examples where
-``keepalived`` uses VRRP to provide quick failover of SNAT services.
+:ref:`deploy-ovs-ha-vrrp` example where ``keepalived`` uses VRRP to provide
+quick failover of SNAT services.

During normal operation, the primary router periodically transmits *heartbeat*
packets over a hidden project network that connects all HA routers for a
@@ -19,14 +19,14 @@ them in the ``experimental`` section of ``neutron.conf``.
<https://governance.openstack.org/tc/reference/projects/neutron.html>`_.

The following table shows the Neutron features currently designated as
-experimetal:
+experimental:

.. table:: **Neutron Experimental features**

   ========================= ===================================
   Feature                   Option in neutron.conf to enable
   ========================= ===================================
-   ML2 Linuxbridge driver    linuxbridge
   IPv6 Prefix Delegation    ipv6_pd_enabled
   ========================= ===================================

This is an example of how to enable the use of an experimental feature:
@@ -34,4 +34,4 @@ This is an example of how to enable the use of an experimental feature:
.. code-block:: ini

   [experimental]
-   linuxbridge = true
+   ipv6_pd_enabled = true
@@ -195,7 +195,7 @@ Project network considerations
Dataplane
---------

-All dataplane modules, including OVN, Open vSwitch and Linux bridge,
+All dataplane modules, including OVN and Open vSwitch,
support forwarding IPv6
packets amongst the guests and router ports. Similar to IPv4, there is no
special configuration or setup required to enable the dataplane to properly
@@ -198,8 +198,7 @@ To enable the logging service, follow the below steps.

.. note::

-   Fwaas v2 log is currently only supported by openvswitch, the firewall
-   logging driver of linuxbridge is not implemented.
+   Fwaas v2 log is currently only supported by openvswitch.

#. To enable logging service for ``firewall_group`` in Layer 3, add
   ``fwaas_v2_log`` to option ``extensions`` in section ``[AGENT]`` in
@@ -273,8 +272,7 @@ Service workflow for Operator
.. note::

   - In VM ports, logging for ``security_group`` currently works with the
-     ``openvswitch`` firewall driver only. ``linuxbridge`` is under
-     development.
+     ``openvswitch`` firewall driver.
   - Logging for ``firewall_group`` works on internal router ports only. VM
     ports would be supported in the future.
@@ -11,8 +11,7 @@ Consider the following attributes of this mechanism driver to determine
practicality in your environment:

* Supports only instance ports. Ports for DHCP and layer-3 (routing)
-  services must use another mechanism driver such as Linux bridge or
-  Open vSwitch (OVS).
+  services must use another mechanism driver such as Open vSwitch (OVS).

* Supports only untagged (flat) and tagged (VLAN) networks.
@@ -25,7 +24,7 @@ practicality in your environment:

* Only compute resources can be attached via macvtap. Attaching other
  resources like DHCP, Routers and others is not supported. Therefore run
-  either OVS or linux bridge in VLAN or flat mode on the controller node.
+  OVS in VLAN or flat mode on the controller node.

* Instance migration requires the same values for the
  ``physical_interface_mapping`` configuration option on each compute node.
@@ -35,13 +34,13 @@ practicality in your environment:
Prerequisites
~~~~~~~~~~~~~

-You can add this mechanism driver to an existing environment using either
-the Linux bridge or OVS mechanism drivers with only provider networks or
+You can add this mechanism driver to an existing environment using the
+OVS mechanism driver with only provider networks or
provider and self-service networks. You can change the configuration of
existing compute nodes or add compute nodes with the Macvtap mechanism
driver. The example configuration assumes addition of compute nodes with
-the Macvtap mechanism driver to the :ref:`deploy-lb-selfservice` or
-:ref:`deploy-ovs-selfservice` deployment examples.
+the Macvtap mechanism driver to the :ref:`deploy-ovs-selfservice` deployment
+example.

Add one or more compute nodes with the following components:
@@ -176,6 +175,6 @@ content for the prerequisite deployment example.
Network traffic flow
~~~~~~~~~~~~~~~~~~~~

-This mechanism driver simply removes the Linux bridge handling security
-groups on the compute nodes. Thus, you can reference the network traffic
-flow scenarios for the prerequisite deployment example.
+This mechanism driver simply removes the handling of security groups on the
+compute nodes. Thus, you can reference the network traffic flow scenario for
+the prerequisite deployment example.
@@ -58,12 +58,6 @@ ML2 driver support matrix
       - yes
       - yes
       - yes
-     * - Linux bridge
-       - yes
-       - yes
-       - yes
-       - no
-       - no
     * - OVN
       - yes
       - yes
@@ -93,10 +87,9 @@ ML2 driver support matrix

L2 population is a special mechanism driver that optimizes BUM (Broadcast,
unknown destination address, multicast) traffic in the overlay networks
-VXLAN, GRE and Geneve. It needs to be used in conjunction with either the
-Linux bridge or the Open vSwitch mechanism driver and cannot be used as
-standalone mechanism driver. For more information, see the
-*Mechanism drivers* section below.
+VXLAN, GRE and Geneve. It needs to be used in conjunction with the
+Open vSwitch mechanism driver and cannot be used as a standalone mechanism
+driver. For more information, see the *Mechanism drivers* section below.
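
A minimal sketch of that pairing, assuming the usual ``ml2_conf.ini`` and
``openvswitch_agent.ini`` files:

.. code-block:: ini

   # ml2_conf.ini -- l2population rides along with the OVS driver
   [ml2]
   mechanism_drivers = openvswitch,l2population

   # openvswitch_agent.ini -- agent-side switch for the same optimization
   [agent]
   l2_population = True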

Configuration
~~~~~~~~~~~~~
@@ -167,10 +160,6 @@ More information about provider networks see
   VXLAN multicast group configuration is not applicable for the Open
   vSwitch agent.

-   As of today it is not used in the Linux bridge agent. The Linux bridge
-   agent has its own agent specific configuration option. For more details,
-   see the `Bug 1523614 <https://bugs.launchpad.net/neutron/+bug/1523614>`__.
-
Project network types
^^^^^^^^^^^^^^^^^^^^^
@@ -227,12 +216,6 @@ To enable mechanism drivers in the ML2 plug-in, edit the
For more details, see the
`Configuration Reference <../configuration/ml2-conf.html#ml2>`__.

-* Linux bridge
-
-  No additional configurations required for the mechanism driver. Additional
-  agent configuration is required. For details, see the related *L2 agent*
-  section below.
-
* Open vSwitch

  No additional configurations required for the mechanism driver. Additional
@@ -244,7 +227,7 @@ For more details, see the
  The administrator must configure some additional configuration options for
  the mechanism driver. When this driver is used, architecture of the Neutron
  application in the cluster is different from what it is with other drivers
-  like e.g. Open vSwitch or Linuxbridge.
+  like e.g. Open vSwitch.
  For details, see :ref:`OVN reference architecture<refarch-refarch>`.

* SRIOV
@@ -292,9 +275,6 @@ mechanism driver's ``supported_vnic_types`` list.
     * - mech driver / supported_vnic_types
       - supported VNIC types
       - prohibiting available
-     * - Linux bridge
-       - normal
-       - no
     * - OVN
       - normal, direct, direct_macvtap, direct_physical
       - no
@@ -342,18 +322,6 @@ resources. It typically runs on each Network Node and on each Compute Node.
  For a detailed list of configuration options, see the related section in the
  `Configuration Reference <../configuration/openvswitch-agent.html>`__.

-* Linux bridge agent
-
-  The Linux bridge agent configures Linux bridges to realize L2 networks for
-  OpenStack resources.
-
-  Configuration for the Linux bridge agent is typically done in the
-  ``linuxbridge_agent.ini`` configuration file. Make sure that on agent start
-  you pass this configuration file as argument.
-
-  For a detailed list of configuration options, see the related section in the
-  `Configuration Reference <../configuration/linuxbridge-agent.html>`__.
-
* SRIOV Nic Switch agent

  The sriov nic switch agent configures PCI virtual functions to realize L2
@@ -465,8 +433,6 @@ implementations:
       - L2 agent
     * - Open vSwitch
       - Open vSwitch agent
-     * - Linux bridge
-       - Linux bridge agent
     * - OVN
       - No (there is ovn-controller running on nodes)
     * - SRIOV
@@ -474,7 +440,7 @@ implementations:
     * - MacVTap
       - MacVTap agent
     * - L2 population
-       - Open vSwitch agent, Linux bridge agent
+       - Open vSwitch agent

The following table shows which reference implementations support which
non-L2 neutron agents:
@@ -492,11 +458,6 @@ non-L2 neutron agents:
       - yes
       - yes
       - yes
-     * - Linux bridge & Linux bridge agent
-       - yes
-       - yes
-       - yes
-       - yes
     * - OVN
       - no (own L3 implementation)
       - no (DHCP provided by OVN, fully distributed)
@@ -532,11 +493,6 @@ This guide characterizes the L2 reference implementations that currently exist.
  Can be used for instance network attachments as well as for attachments of
  other network resources like routers, DHCP, and so on.

-* Linux bridge mechanism and Linux bridge agent
-
-  Can be used for instance network attachments as well as for attachments of
-  other network resources like routers, DHCP, and so on.
-
* OVN mechanism driver

  Can be used for instance network attachments as well as for attachments of
@@ -546,8 +502,8 @@ This guide characterizes the L2 reference implementations that currently exist.

  Can only be used for instance network attachments (device_owner = compute).

-  Is deployed besides an other mechanism driver and L2 agent such as OVS or
-  Linux bridge. It offers instances direct access to the network adapter
+  Is deployed beside another mechanism driver and L2 agent such as OVS. It
+  offers instances direct access to the network adapter
  through a PCI Virtual Function (VF). This gives an instance direct access to
  hardware capabilities and high performance networking.
@@ -564,8 +520,8 @@ This guide characterizes the L2 reference implementations that currently exist.
  Can only be used for instance network attachments (device_owner = compute)
  and not for attachment of other resources like routers, DHCP, and so on.

-  It is positioned as alternative to Open vSwitch or Linux bridge support on
-  the compute node for internal deployments.
+  It is positioned as an alternative to Open vSwitch support on the compute
+  node for internal deployments.

  MacVTap offers a direct connection with very little overhead between
  instances and down to the adapter. You can use MacVTap agent on the
@@ -55,15 +55,15 @@ traffic directions (from the VM point of view).

.. table:: **Networking back ends, supported rules, and traffic direction**

-   ==================== ============================= ======================= =================== ===================
-   Rule \\ back end     Open vSwitch                  SR-IOV                  Linux bridge        OVN
-   ==================== ============================= ======================= =================== ===================
-   Bandwidth limit      Egress \\ Ingress             Egress (1)              Egress \\ Ingress   Egress \\ Ingress
-   Packet rate limit    Egress \\ Ingress             -                       -                   -
-   Minimum bandwidth    Egress \\ Ingress (2)         Egress \\ Ingress (2)   -                   -
-   Minimum packet rate  -                             -                       -                   -
-   DSCP marking         Egress                        -                       Egress              Egress
-   ==================== ============================= ======================= =================== ===================
+   ==================== ============================= ======================= ===================
+   Rule \\ back end     Open vSwitch                  SR-IOV                  OVN
+   ==================== ============================= ======================= ===================
+   Bandwidth limit      Egress \\ Ingress             Egress (1)              Egress \\ Ingress
+   Packet rate limit    Egress \\ Ingress             -                       -
+   Minimum bandwidth    Egress \\ Ingress (2)         Egress \\ Ingress (2)   -
+   Minimum packet rate  -                             -                       -
+   DSCP marking         Egress                        -                       Egress
+   ==================== ============================= ======================= ===================

.. note::
@@ -74,12 +74,12 @@ traffic directions (from the VM point of view).

.. table:: **Neutron backends, supported directions and enforcement types for Minimum Bandwidth rule**

-   ============================ ==================== ==================== ============== =====
-   Enforcement type \ Backend   Open vSwitch         SR-IOV               Linux Bridge   OVN
-   ============================ ==================== ==================== ============== =====
-   Dataplane                    Egress (3)           Egress (1)           -              -
-   Placement                    Egress/Ingress (2)   Egress/Ingress (2)   -              -
-   ============================ ==================== ==================== ============== =====
+   ============================ ==================== ==================== =====
+   Enforcement type \ Backend   Open vSwitch         SR-IOV               OVN
+   ============================ ==================== ==================== =====
+   Dataplane                    Egress (3)           Egress (1)           -
+   Placement                    Egress/Ingress (2)   Egress/Ingress (2)   -
+   ============================ ==================== ==================== =====

.. note::
@@ -95,12 +95,12 @@ traffic directions (from the VM point of view).

.. table:: **Neutron backends, supported directions and enforcement types for Minimum Packet Rate rule**

-   ============================ ========================== ==================== ============== =====
-   Enforcement type \ Backend   Open vSwitch               SR-IOV               Linux Bridge   OVN
-   ============================ ========================== ==================== ============== =====
-   Dataplane                    -                          -                    -              -
-   Placement                    Any(1)/Egress/Ingress (2)  -                    -              -
-   ============================ ========================== ==================== ============== =====
+   ============================ ========================== ==================== =====
+   Enforcement type \ Backend   Open vSwitch               SR-IOV               OVN
+   ============================ ========================== ==================== =====
+   Dataplane                    -                          -                    -
+   Placement                    Any(1)/Egress/Ingress (2)  -                    -
+   ============================ ========================== ==================== =====

.. note::
@@ -281,8 +281,8 @@ On the network and compute nodes:

.. note::

-   QoS currently works with ml2 only (SR-IOV, Open vSwitch, and linuxbridge
-   are drivers enabled for QoS).
+   QoS currently works with ml2 only (SR-IOV and Open vSwitch are drivers
+   enabled for QoS).

DSCP marking on outer header for overlay networks
-------------------------------------------------
@@ -411,7 +411,7 @@ First, create a QoS policy and its bandwidth limit rule:
.. note::

   The QoS implementation requires a burst value to ensure proper behavior of
-   bandwidth limit rules in the Open vSwitch and Linux bridge agents.
+   bandwidth limit rules in the Open vSwitch agent.
   Configuring the proper burst value is very important. If the burst value is
   set too low, bandwidth usage will be throttled even with a proper bandwidth
   limit setting. This issue is discussed in various documentation sources, for
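
For orientation, a sketch of a rule with an explicit burst (the policy name
is hypothetical; the values only illustrate the burst/limit relationship
discussed above):

.. code-block:: console

   $ openstack network qos policy create bw-limiter                  # illustrative name
   $ openstack network qos rule create --type bandwidth-limit \
       --max-kbps 3000 --max-burst-kbits 2400 bw-limiter
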
@@ -11,7 +11,7 @@ Among those of special interest are:

#. The neutron-server that provides API endpoints and serves as a single point
   of access to the database. It usually runs on the controller nodes.
-#. Layer2 agent that can utilize Open vSwitch, Linux Bridge or other
+#. Layer2 agent that can utilize Open vSwitch or other
   vendor-specific technology to provide network segmentation and isolation
   for project networks.
   The L2 agent should run on every node where it is deemed
@@ -70,7 +70,7 @@ L2 agents

The ``admin_state_up`` field of the agent in the Neutron database is set to
``False``, but the agent is still capable of binding ports.
-This is true for openvswitch-agent, linuxbridge-agent, and sriov-agent.
+This is true for openvswitch-agent and sriov-agent.

.. note::
@@ -393,8 +393,8 @@ Enable neutron-sriov-nic-agent (Compute)
(Optional) FDB L2 agent extension
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

-Forwarding DataBase (FDB) population is an L2 agent extension to OVS agent or
-Linux bridge. Its objective is to update the FDB table for existing instance
+Forwarding DataBase (FDB) population is an L2 agent extension to OVS agent.
+Its objective is to update the FDB table for existing instances
using normal ports. This enables communication between SR-IOV instances and
normal instances. The use cases of the FDB population extension are:
@@ -407,8 +407,7 @@ For additional information describing the problem, refer to:
`Virtual switching technologies and Linux bridge.
<https://events.static.linuxfound.org/sites/events/files/slides/LinuxConJapan2014_makita_0.pdf>`_

-#. Edit the ``ovs_agent.ini`` or ``linuxbridge_agent.ini`` file on each compute
-   node. For example:
+#. Edit the ``ovs_agent.ini`` file on each compute node. For example:

   .. code-block:: console

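      # A hedged sketch; the section and option names follow the FDB
      # extension, but the mapping values are illustrative and must
      # match the local physical networks.
      [FDB]
      shared_physical_device_mappings = physnet1:eth0
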
@@ -56,12 +56,11 @@ Example configuration
The ML2 plug-in supports trunking with the following mechanism drivers:

* Open vSwitch (OVS)
-* Linux bridge
* Open Virtual Network (OVN)

-When using a ``segmentation-type`` of ``vlan``, the OVS and Linux bridge
-drivers present the network of the parent port as the untagged VLAN and all
-subports as tagged VLANs.
+When using a ``segmentation-type`` of ``vlan``, the OVS driver presents the
+network of the parent port as the untagged VLAN and all subports as tagged
+VLANs.
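
For orientation, a hypothetical trunk built on that model (port and trunk
names are illustrative):

.. code-block:: console

   $ openstack network trunk create --parent-port parent0 trunk1     # illustrative names
   $ openstack network trunk set --subport \
       port=child0,segmentation-type=vlan,segmentation-id=100 trunk1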

Controller node
---------------
@@ -14,7 +14,6 @@ Configuration
   config-bgp-dynamic-routing
   config-bgp-floating-ip-over-l2-segmented-network
   config-services-agent
-   config-dhcp-ha
   config-dns-int
   config-dns-int-ext-serv
   config-dns-res
@@ -1,182 +0,0 @@
.. _deploy-lb-ha-vrrp:

==========================================
Linux bridge: High availability using VRRP
==========================================

.. include:: shared/deploy-ha-vrrp.txt

.. warning::

   This high-availability mechanism is not compatible with the layer-2
   population mechanism. You must disable layer-2 population in the
   ``linuxbridge_agent.ini`` file and restart the Linux bridge agent
   on all existing network and compute nodes prior to deploying the example
   configuration.

Prerequisites
~~~~~~~~~~~~~

Add one network node with the following components:

* Three network interfaces: management, provider, and overlay.
* OpenStack Networking layer-2 agent, layer-3 agent, and any
  dependencies.

.. note::

   You can keep the DHCP and metadata agents on each compute node or
   move them to the network nodes.

Architecture
~~~~~~~~~~~~

.. image:: figures/deploy-lb-ha-vrrp-overview.png
   :alt: High-availability using Linux bridge with VRRP - overview

The following figure shows components and connectivity for one self-service
network and one untagged (flat) network. The master router resides on network
node 1. In this particular case, the instance resides on the same compute
node as the DHCP agent for the network. If the DHCP agent resides on another
compute node, the latter only contains a DHCP namespace and Linux bridge
with a port on the overlay physical network interface.

.. image:: figures/deploy-lb-ha-vrrp-compconn1.png
   :alt: High-availability using Linux bridge with VRRP - components and connectivity - one network

Example configuration
~~~~~~~~~~~~~~~~~~~~~

Use the following example configuration as a template to add support for
high-availability using VRRP to an existing operational environment that
supports self-service networks.

Controller node
---------------

#. In the ``neutron.conf`` file:

   * Enable VRRP.

     .. code-block:: ini

        [DEFAULT]
        l3_ha = True

#. Restart the following services:

   * Server

Network node 1
--------------

No changes.

Network node 2
--------------

#. Install the Networking service Linux bridge layer-2 agent and layer-3
   agent.

#. In the ``neutron.conf`` file, configure common options:

   .. include:: shared/deploy-config-neutron-common.txt

#. In the ``linuxbridge_agent.ini`` file, configure the layer-2 agent.

   .. code-block:: ini

      [linux_bridge]
      physical_interface_mappings = provider:PROVIDER_INTERFACE

      [vxlan]
      enable_vxlan = True
      local_ip = OVERLAY_INTERFACE_IP_ADDRESS

      [securitygroup]
      firewall_driver = iptables

   .. warning::

      By default, Linux uses UDP port ``8472`` for VXLAN tunnel traffic. This
      default value doesn't follow the IANA standard, which assigned UDP port
      ``4789`` for VXLAN communication. As a consequence, if this node is part
      of a mixed deployment, where nodes with both OVS and Linux bridge must
      communicate over VXLAN tunnels, it is recommended that a line containing
      ``udp_dstport = 4789`` be added to the ``[vxlan]`` section of all the
      Linux bridge agents. OVS follows the IANA standard.

   Replace ``PROVIDER_INTERFACE`` with the name of the underlying interface
   that handles provider networks. For example, ``eth1``.

   Replace ``OVERLAY_INTERFACE_IP_ADDRESS`` with the IP address of the
   interface that handles VXLAN overlays for self-service networks.
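
   The earlier warning's recommendation, as a one-line sketch for
   ``linuxbridge_agent.ini`` in mixed OVS/Linux bridge deployments:

   .. code-block:: ini

      [vxlan]
      # per the warning above; OVS peers expect the IANA port
      udp_dstport = 4789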

#. In the ``l3_agent.ini`` file, configure the layer-3 agent.

   .. code-block:: ini

      [DEFAULT]
      interface_driver = linuxbridge

#. Start the following services:

   * Linux bridge agent
   * Layer-3 agent

Compute nodes
-------------

No changes.

Verify service operation
------------------------

#. Source the administrative project credentials.
#. Verify presence and operation of the agents.

   .. code-block:: console

      $ openstack network agent list
      +--------------------------------------+--------------------+----------+-------------------+-------+-------+---------------------------+
      | ID                                   | Agent Type         | Host     | Availability Zone | Alive | State | Binary                    |
      +--------------------------------------+--------------------+----------+-------------------+-------+-------+---------------------------+
      | 09de6af6-c5f1-4548-8b09-18801f068c57 | Linux bridge agent | compute2 | None              | True  | UP    | neutron-linuxbridge-agent |
      | 188945d1-9e70-4803-a276-df924e0788a4 | Linux bridge agent | compute1 | None              | True  | UP    | neutron-linuxbridge-agent |
      | e76c440d-d5f6-4316-a674-d689630b629e | DHCP agent         | compute1 | nova              | True  | UP    | neutron-dhcp-agent        |
      | e67367de-6657-11e6-86a4-931cd04404bb | DHCP agent         | compute2 | nova              | True  | UP    | neutron-dhcp-agent        |
      | e8174cae-6657-11e6-89f0-534ac6d0cb5c | Metadata agent     | compute1 | None              | True  | UP    | neutron-metadata-agent    |
      | ece49ec6-6657-11e6-bafb-c7560f19197d | Metadata agent     | compute2 | None              | True  | UP    | neutron-metadata-agent    |
      | 598f6357-4331-4da5-a420-0f5be000bec9 | L3 agent           | network1 | nova              | True  | UP    | neutron-l3-agent          |
      | f4734e0f-bcd5-4922-a19d-e31d56b0a7ae | Linux bridge agent | network1 | None              | True  | UP    | neutron-linuxbridge-agent |
      | 670e5805-340b-4182-9825-fa8319c99f23 | Linux bridge agent | network2 | None              | True  | UP    | neutron-linuxbridge-agent |
      | 96224e89-7c15-42e9-89c4-8caac7abdd54 | L3 agent           | network2 | nova              | True  | UP    | neutron-l3-agent          |
      +--------------------------------------+--------------------+----------+-------------------+-------+-------+---------------------------+

Create initial networks
-----------------------

.. include:: shared/deploy-ha-vrrp-initialnetworks.txt

Verify network operation
------------------------

.. include:: shared/deploy-ha-vrrp-verifynetworkoperation.txt

Verify failover operation
-------------------------

.. include:: shared/deploy-ha-vrrp-verifyfailoveroperation.txt

Keepalived VRRP health check
----------------------------

.. include:: shared/keepalived-vrrp-healthcheck.txt

Network traffic flow
~~~~~~~~~~~~~~~~~~~~

This high-availability mechanism simply augments :ref:`deploy-lb-selfservice`
with failover of layer-3 services to another router if the master router
fails. Thus, you can reference :ref:`Self-service network traffic flow
<deploy-lb-selfservice-networktrafficflow>` for normal operation.
@ -1,365 +0,0 @@
|
||||
.. _deploy-lb-provider:
|
||||
|
||||
===============================
|
||||
Linux bridge: Provider networks
|
||||
===============================
|
||||
|
||||
The provider networks architecture example provides layer-2 connectivity
|
||||
between instances and the physical network infrastructure using VLAN
|
||||
(802.1q) tagging. It supports one untagged (flat) network and up to
|
||||
4095 tagged (VLAN) networks. The actual quantity of VLAN networks depends
|
||||
on the physical network infrastructure. For more information on provider
|
||||
networks, see :ref:`intro-os-networking-provider`.
|
||||
|
||||
Prerequisites
|
||||
~~~~~~~~~~~~~
|
||||
|
||||
One controller node with the following components:
|
||||
|
||||
* Two network interfaces: management and provider.
|
||||
* OpenStack Networking server service and ML2 plug-in.
|
||||
|
||||
Two compute nodes with the following components:
|
||||
|
||||
* Two network interfaces: management and provider.
|
||||
* OpenStack Networking Linux bridge layer-2 agent, DHCP agent, metadata agent,
|
||||
and any dependencies.
|
||||
|
||||
.. note::
|
||||
|
||||
Larger deployments typically deploy the DHCP and metadata agents on a
|
||||
subset of compute nodes to increase performance and redundancy. However,
|
||||
too many agents can overwhelm the message bus. Also, to further simplify
|
||||
any deployment, you can omit the metadata agent and use a configuration
|
||||
drive to provide metadata to instances.
|
||||
|
||||
Architecture
|
||||
~~~~~~~~~~~~
|
||||
|
||||
.. image:: figures/deploy-lb-provider-overview.png
|
||||
:alt: Provider networks using Linux bridge - overview
|
||||
|
||||
The following figure shows components and connectivity for one untagged
|
||||
(flat) network. In this particular case, the instance resides on the
|
||||
same compute node as the DHCP agent for the network. If the DHCP agent
|
||||
resides on another compute node, the latter only contains a DHCP namespace
|
||||
and Linux bridge with a port on the provider physical network interface.
|
||||
|
||||
.. image:: figures/deploy-lb-provider-compconn1.png
|
||||
:alt: Provider networks using Linux bridge - components and connectivity - one network
|
||||
|
||||
The following figure describes virtual connectivity among components for
|
||||
two tagged (VLAN) networks. Essentially, each network uses a separate
|
||||
bridge that contains a port on the VLAN sub-interface on the provider
|
||||
physical network interface. Similar to the single untagged network case,
|
||||
the DHCP agent may reside on a different compute node.
|
||||
|
||||
.. image:: figures/deploy-lb-provider-compconn2.png
|
||||
:alt: Provider networks using Linux bridge - components and connectivity - multiple networks
|
||||
|
||||
.. note::
|
||||
|
||||
These figures omit the controller node because it does not handle instance
|
||||
network traffic.
|
||||
|
||||
Example configuration
|
||||
~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
Use the following example configuration as a template to deploy provider
|
||||
networks in your environment.
|
||||
|
||||
Controller node
|
||||
---------------
|
||||
|
||||
#. Install the Networking service components that provide the
|
||||
``neutron-server`` service and ML2 plug-in.
|
||||
|
||||
#. In the ``neutron.conf`` file:
|
||||
|
||||
* Configure common options:
|
||||
|
||||
.. include:: shared/deploy-config-neutron-common.txt
|
||||
|
||||
* Disable service plug-ins because provider networks do not require
|
||||
any. However, this breaks portions of the dashboard that manage
|
||||
the Networking service. See the latest
|
||||
`Install Tutorials and Guides <../install/>`__
|
||||
for more information.
|
||||
|
||||
.. code-block:: ini
|
||||
|
||||
[DEFAULT]
|
||||
service_plugins =
|
||||
|
||||
* Enable two DHCP agents per network so both compute nodes can
|
||||
provide DHCP service for provider networks.
|
||||
|
||||
.. code-block:: ini
|
||||
|
||||
[DEFAULT]
|
||||
dhcp_agents_per_network = 2
|
||||
|
||||
* If necessary, :ref:`configure MTU <config-mtu>`.
|
||||
|
||||
#. In the ``ml2_conf.ini`` file:
|
||||
|
||||
* Configure drivers and network types:
|
||||
|
||||
.. code-block:: ini
|
||||
|
||||
[ml2]
|
||||
type_drivers = flat,vlan
|
||||
tenant_network_types =
|
||||
mechanism_drivers = linuxbridge
|
||||
extension_drivers = port_security
|
||||
|
||||
* Configure network mappings:
|
||||
|
||||
.. code-block:: ini
|
||||
|
||||
[ml2_type_flat]
|
||||
flat_networks = provider
|
||||
|
||||
[ml2_type_vlan]
|
||||
network_vlan_ranges = provider
|
||||
|
||||
.. note::
|
||||
|
||||
The ``tenant_network_types`` option contains no value because the
|
||||
architecture does not support self-service networks.
|
||||
|
||||
.. note::
|
||||
|
||||
The ``provider`` value in the ``network_vlan_ranges`` option lacks VLAN
|
||||
ID ranges to support use of arbitrary VLAN IDs.
|
||||
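If arbitrary VLAN IDs are not desirable, a hypothetical restricted mapping
would pin the usable IDs instead (the range below is illustrative):

.. code-block:: ini

   [ml2_type_vlan]
   network_vlan_ranges = provider:1001:2000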
|
||||
#. Populate the database.
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
# su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf \
|
||||
--config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron
|
||||
|
||||
#. Start the following services:
|
||||
|
||||
* Server
|
||||
|
||||
Compute nodes
|
||||
-------------
|
||||
|
||||
#. Install the Networking service Linux bridge layer-2 agent.
|
||||
|
||||
#. In the ``neutron.conf`` file, configure common options:
|
||||
|
||||
.. include:: shared/deploy-config-neutron-common.txt
|
||||
|
||||
#. In the ``linuxbridge_agent.ini`` file, configure the Linux bridge agent:
|
||||
|
||||
.. code-block:: ini
|
||||
|
||||
[linux_bridge]
|
||||
physical_interface_mappings = provider:PROVIDER_INTERFACE
|
||||
|
||||
[vxlan]
|
||||
enable_vxlan = False
|
||||
|
||||
[securitygroup]
|
||||
firewall_driver = iptables
|
||||
|
||||
Replace ``PROVIDER_INTERFACE`` with the name of the underlying interface
|
||||
that handles provider networks. For example, ``eth1``.
|
||||
|
||||
#. In the ``dhcp_agent.ini`` file, configure the DHCP agent:
|
||||
|
||||
.. code-block:: ini
|
||||
|
||||
[DEFAULT]
|
||||
interface_driver = linuxbridge
|
||||
enable_isolated_metadata = True
|
||||
force_metadata = True
|
||||
|
||||
.. note::
|
||||
|
||||
The ``force_metadata`` option forces the DHCP agent to provide
|
||||
a host route to the metadata service on ``169.254.169.254``
|
||||
regardless of whether the subnet contains an interface on a
|
||||
router, thus maintaining similar and predictable metadata behavior
|
||||
among subnets.
|
||||
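Seen from inside a guest on such a subnet, the pushed host route would
resemble the following (addresses and interface name are illustrative):

.. code-block:: console

   $ ip route
   default via 203.0.113.1 dev eth0
   169.254.169.254 via 203.0.113.100 dev eth0
   203.0.113.0/24 dev eth0 proto kernel scope link src 203.0.113.12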
|
||||
#. In the ``metadata_agent.ini`` file, configure the metadata agent:
|
||||
|
||||
.. code-block:: ini
|
||||
|
||||
[DEFAULT]
|
||||
nova_metadata_host = controller
|
||||
metadata_proxy_shared_secret = METADATA_SECRET
|
||||
|
||||
The value of ``METADATA_SECRET`` must match the value of the same option
|
||||
in the ``[neutron]`` section of the ``nova.conf`` file.
|
||||
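For reference, a matching ``nova.conf`` fragment might look like this,
assuming the metadata proxy is enabled on the Compute side as in typical
deployments (same placeholder):

.. code-block:: ini

   [neutron]
   service_metadata_proxy = true
   metadata_proxy_shared_secret = METADATA_SECRET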
|
||||
#. Start the following services:
|
||||
|
||||
* Linux bridge agent
|
||||
* DHCP agent
|
||||
* Metadata agent
|
||||
|
||||
Verify service operation
|
||||
------------------------
|
||||
|
||||
#. Source the administrative project credentials.
|
||||
#. Verify presence and operation of the agents:
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
$ openstack network agent list
|
||||
+--------------------------------------+--------------------+----------+-------------------+-------+-------+---------------------------+
|
||||
| ID | Agent Type | Host | Availability Zone | Alive | State | Binary |
|
||||
+--------------------------------------+--------------------+----------+-------------------+-------+-------+---------------------------+
|
||||
| 09de6af6-c5f1-4548-8b09-18801f068c57 | Linux bridge agent | compute2 | None | True | UP | neutron-linuxbridge-agent |
|
||||
| 188945d1-9e70-4803-a276-df924e0788a4 | Linux bridge agent | compute1 | None | True | UP | neutron-linuxbridge-agent |
|
||||
| e76c440d-d5f6-4316-a674-d689630b629e | DHCP agent | compute1 | nova | True | UP | neutron-dhcp-agent |
|
||||
| e67367de-6657-11e6-86a4-931cd04404bb | DHCP agent | compute2 | nova | True | UP | neutron-dhcp-agent |
|
||||
| e8174cae-6657-11e6-89f0-534ac6d0cb5c | Metadata agent | compute1 | None | True | UP | neutron-metadata-agent |
|
||||
| ece49ec6-6657-11e6-bafb-c7560f19197d | Metadata agent | compute2 | None | True | UP | neutron-metadata-agent |
|
||||
+--------------------------------------+--------------------+----------+-------------------+-------+-------+---------------------------+
|
||||
|
||||
Create initial networks
|
||||
-----------------------
|
||||
|
||||
.. include:: shared/deploy-provider-initialnetworks.txt
|
||||
|
||||
Verify network operation
|
||||
------------------------
|
||||
|
||||
.. include:: deploy-provider-verifynetworkoperation.txt
|
||||
|
||||
Network traffic flow
|
||||
~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
.. include:: shared/deploy-provider-networktrafficflow.txt
|
||||
|
||||
North-south scenario: Instance with a fixed IP address
|
||||
------------------------------------------------------
|
||||
|
||||
* The instance resides on compute node 1 and uses provider network 1.
|
||||
* The instance sends a packet to a host on the Internet.
|
||||
|
||||
The following steps involve compute node 1.
|
||||
|
||||
#. The instance interface (1) forwards the packet to the provider
|
||||
bridge instance port (2) via ``veth`` pair.
|
||||
#. Security group rules (3) on the provider bridge handle firewalling
|
||||
and connection tracking for the packet.
|
||||
#. The VLAN sub-interface port (4) on the provider bridge forwards
|
||||
the packet to the physical network interface (5).
|
||||
#. The physical network interface (5) adds VLAN tag 101 to the packet and
|
||||
forwards it to the physical network infrastructure switch (6).
|
||||
|
||||
The following steps involve the physical network infrastructure:
|
||||
|
||||
#. The switch removes VLAN tag 101 from the packet and forwards it to the
|
||||
router (7).
|
||||
#. The router routes the packet from the provider network (8) to the
|
||||
external network (9) and forwards the packet to the switch (10).
|
||||
#. The switch forwards the packet to the external network (11).
|
||||
#. The external network (12) receives the packet.
|
||||
|
||||
.. image:: figures/deploy-lb-provider-flowns1.png
|
||||
:alt: Provider networks using Linux bridge - network traffic flow - north/south
|
||||
|
||||
.. note::
|
||||
|
||||
Return traffic follows similar steps in reverse.
|
||||
|
||||
East-west scenario 1: Instances on the same network
|
||||
---------------------------------------------------
|
||||
|
||||
Instances on the same network communicate directly between compute nodes
|
||||
containing those instances.
|
||||
|
||||
* Instance 1 resides on compute node 1 and uses provider network 1.
|
||||
* Instance 2 resides on compute node 2 and uses provider network 1.
|
||||
* Instance 1 sends a packet to instance 2.
|
||||
|
||||
The following steps involve compute node 1:
|
||||
|
||||
#. The instance 1 interface (1) forwards the packet to the provider
|
||||
bridge instance port (2) via ``veth`` pair.
|
||||
#. Security group rules (3) on the provider bridge handle firewalling
|
||||
and connection tracking for the packet.
|
||||
#. The VLAN sub-interface port (4) on the provider bridge forwards
|
||||
the packet to the physical network interface (5).
|
||||
#. The physical network interface (5) adds VLAN tag 101 to the packet and
|
||||
forwards it to the physical network infrastructure switch (6).
|
||||
|
||||
The following steps involve the physical network infrastructure:
|
||||
|
||||
#. The switch forwards the packet from compute node 1 to compute node 2 (7).
|
||||
|
||||
The following steps involve compute node 2:
|
||||
|
||||
#. The physical network interface (8) removes VLAN tag 101 from the packet
|
||||
and forwards it to the VLAN sub-interface port (9) on the provider bridge.
|
||||
#. Security group rules (10) on the provider bridge handle firewalling
|
||||
and connection tracking for the packet.
|
||||
#. The provider bridge instance port (11) forwards the packet to
|
||||
the instance 2 interface (12) via ``veth`` pair.
|
||||
|
||||
.. image:: figures/deploy-lb-provider-flowew1.png
|
||||
:alt: Provider networks using Linux bridge - network traffic flow - east/west scenario 1
|
||||
|
||||
.. note::
|
||||
|
||||
Return traffic follows similar steps in reverse.
|
||||
|
||||
East-west scenario 2: Instances on different networks
|
||||
-----------------------------------------------------
|
||||
|
||||
Instances communicate via a router on the physical network infrastructure.
|
||||
|
||||
* Instance 1 resides on compute node 1 and uses provider network 1.
|
||||
* Instance 2 resides on compute node 1 and uses provider network 2.
|
||||
* Instance 1 sends a packet to instance 2.
|
||||
|
||||
.. note::
|
||||
|
||||
Both instances reside on the same compute node to illustrate how VLAN
|
||||
tagging enables multiple logical layer-2 networks to use the same
|
||||
physical layer-2 network.
|
||||
|
||||
The following steps involve the compute node:
|
||||
|
||||
#. The instance 1 interface (1) forwards the packet to the provider
|
||||
bridge instance port (2) via ``veth`` pair.
|
||||
#. Security group rules (3) on the provider bridge handle firewalling
|
||||
and connection tracking for the packet.
|
||||
#. The VLAN sub-interface port (4) on the provider bridge forwards
|
||||
the packet to the physical network interface (5).
|
||||
#. The physical network interface (5) adds VLAN tag 101 to the packet and
|
||||
forwards it to the physical network infrastructure switch (6).
|
||||
|
||||
The following steps involve the physical network infrastructure:
|
||||
|
||||
#. The switch removes VLAN tag 101 from the packet and forwards it to the
|
||||
router (7).
|
||||
#. The router routes the packet from provider network 1 (8) to provider
|
||||
network 2 (9).
|
||||
#. The router forwards the packet to the switch (10).
|
||||
#. The switch adds VLAN tag 102 to the packet and forwards it to compute
|
||||
node 1 (11).
|
||||
|
||||
The following steps involve the compute node:
|
||||
|
||||
#. The physical network interface (12) removes VLAN tag 102 from the packet
|
||||
and forwards it to the VLAN sub-interface port (13) on the provider bridge.
|
||||
#. Security group rules (14) on the provider bridge handle firewalling
|
||||
and connection tracking for the packet.
|
||||
#. The provider bridge instance port (15) forwards the packet to
|
||||
the instance 2 interface (16) via ``veth`` pair.
|
||||
|
||||
.. image:: figures/deploy-lb-provider-flowew2.png
|
||||
:alt: Provider networks using Linux bridge - network traffic flow - east/west scenario 2
|
||||
|
||||
.. note::
|
||||
|
||||
Return traffic follows similar steps in reverse.
|
@ -1,435 +0,0 @@
|
||||
.. _deploy-lb-selfservice:
|
||||
|
||||
===================================
|
||||
Linux bridge: Self-service networks
|
||||
===================================
|
||||
|
||||
This architecture example augments :ref:`deploy-lb-provider` to support
|
||||
a nearly limitless quantity of entirely virtual networks. Although the
|
||||
Networking service supports VLAN self-service networks, this example
|
||||
focuses on VXLAN self-service networks. For more information on
|
||||
self-service networks, see :ref:`intro-os-networking-selfservice`.
|
||||
|
||||
.. note::
|
||||
|
||||
The Linux bridge agent lacks support for other overlay protocols such
|
||||
as GRE and Geneve.
|
||||
|
||||
Prerequisites
|
||||
~~~~~~~~~~~~~
|
||||
|
||||
Add one network node with the following components:
|
||||
|
||||
* Three network interfaces: management, provider, and overlay.
|
||||
* OpenStack Networking Linux bridge layer-2 agent, layer-3 agent, and any
|
||||
dependencies.
|
||||
|
||||
Modify the compute nodes with the following components:
|
||||
|
||||
* Add one network interface: overlay.
|
||||
|
||||
.. note::
|
||||
|
||||
You can keep the DHCP and metadata agents on each compute node or
|
||||
move them to the network node.
|
||||
|
||||
Architecture
|
||||
~~~~~~~~~~~~
|
||||
|
||||
.. image:: figures/deploy-lb-selfservice-overview.png
|
||||
:alt: Self-service networks using Linux bridge - overview
|
||||
|
||||
The following figure shows components and connectivity for one self-service
|
||||
network and one untagged (flat) provider network. In this particular case, the
|
||||
instance resides on the same compute node as the DHCP agent for the network.
|
||||
If the DHCP agent resides on another compute node, the latter only contains
|
||||
a DHCP namespace and Linux bridge with a port on the overlay physical network
|
||||
interface.
|
||||
|
||||
.. image:: figures/deploy-lb-selfservice-compconn1.png
|
||||
:alt: Self-service networks using Linux bridge - components and connectivity - one network
|
||||
|
||||
Example configuration
|
||||
~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
Use the following example configuration as a template to add support for
|
||||
self-service networks to an existing operational environment that supports
|
||||
provider networks.
|
||||
|
||||
Controller node
|
||||
---------------
|
||||
|
||||
#. In the ``neutron.conf`` file:
|
||||
|
||||
* Enable routing and allow overlapping IP address ranges.
|
||||
|
||||
.. code-block:: ini
|
||||
|
||||
[DEFAULT]
|
||||
service_plugins = router
|
||||
|
||||
#. In the ``ml2_conf.ini`` file:
|
||||
|
||||
* Add ``vxlan`` to type drivers and project network types.
|
||||
|
||||
.. code-block:: ini
|
||||
|
||||
[ml2]
|
||||
type_drivers = flat,vlan,vxlan
|
||||
tenant_network_types = vxlan
|
||||
|
||||
* Enable the layer-2 population mechanism driver.
|
||||
|
||||
.. code-block:: ini
|
||||
|
||||
[ml2]
|
||||
mechanism_drivers = linuxbridge,l2population
|
||||
|
||||
* Configure the VXLAN network ID (VNI) range.
|
||||
|
||||
.. code-block:: ini
|
||||
|
||||
[ml2_type_vxlan]
|
||||
vni_ranges = VNI_START:VNI_END
|
||||
|
||||
Replace ``VNI_START`` and ``VNI_END`` with appropriate numerical
|
||||
values.
|
||||
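For example, a deployment could hypothetically reserve the first thousand
VNIs (the range is illustrative; any values within the 24-bit VNI space
work):

.. code-block:: ini

   [ml2_type_vxlan]
   vni_ranges = 1:1000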
|
||||
#. Restart the following services:
|
||||
|
||||
* Server
|
||||
|
||||
Network node
|
||||
------------
|
||||
|
||||
#. Install the Networking service layer-3 agent.
|
||||
|
||||
#. In the ``neutron.conf`` file, configure common options:
|
||||
|
||||
.. include:: shared/deploy-config-neutron-common.txt
|
||||
|
||||
#. In the ``linuxbridge_agent.ini`` file, configure the layer-2 agent.
|
||||
|
||||
.. code-block:: ini
|
||||
|
||||
[linux_bridge]
|
||||
physical_interface_mappings = provider:PROVIDER_INTERFACE
|
||||
|
||||
[vxlan]
|
||||
enable_vxlan = True
|
||||
l2_population = True
|
||||
local_ip = OVERLAY_INTERFACE_IP_ADDRESS
|
||||
|
||||
[securitygroup]
|
||||
firewall_driver = iptables
|
||||
|
||||
.. warning::
|
||||
|
||||
By default, Linux uses UDP port ``8472`` for VXLAN tunnel traffic. This
|
||||
default value doesn't follow the IANA standard, which assigned UDP port
|
||||
``4789`` for VXLAN communication. As a consequence, if this node is part
|
||||
of a mixed deployment, where nodes with both OVS and Linux bridge must
|
||||
communicate over VXLAN tunnels, it is recommended that a line containing
|
||||
``udp_dstport = 4789`` be added to the [vxlan] section of all the Linux
|
||||
bridge agents. OVS follows the IANA standard.
|
||||
|
||||
Replace ``PROVIDER_INTERFACE`` with the name of the underlying interface
|
||||
that handles provider networks. For example, ``eth1``.
|
||||
|
||||
Replace ``OVERLAY_INTERFACE_IP_ADDRESS`` with the IP address of the
|
||||
interface that handles VXLAN overlays for self-service networks.
|
||||
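Following the warning above, a mixed OVS/Linux bridge deployment would also
set the VXLAN destination port in the same ``[vxlan]`` section, for example:

.. code-block:: ini

   [vxlan]
   udp_dstport = 4789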
|
||||
#. In the ``l3_agent.ini`` file, configure the layer-3 agent.
|
||||
|
||||
.. code-block:: ini
|
||||
|
||||
[DEFAULT]
|
||||
interface_driver = linuxbridge
|
||||
|
||||
#. Start the following services:
|
||||
|
||||
* Linux bridge agent
|
||||
* Layer-3 agent
|
||||
|
||||
Compute nodes
|
||||
-------------
|
||||
|
||||
#. In the ``linuxbridge_agent.ini`` file, enable VXLAN support including
|
||||
layer-2 population.
|
||||
|
||||
.. code-block:: ini
|
||||
|
||||
[vxlan]
|
||||
enable_vxlan = True
|
||||
l2_population = True
|
||||
local_ip = OVERLAY_INTERFACE_IP_ADDRESS
|
||||
|
||||
.. warning::
|
||||
|
||||
By default, Linux uses UDP port ``8472`` for VXLAN tunnel traffic. This
|
||||
default value doesn't follow the IANA standard, which assigned UDP port
|
||||
``4789`` for VXLAN communication. As a consequence, if this node is part
|
||||
of a mixed deployment, where nodes with both OVS and Linux bridge must
|
||||
communicate over VXLAN tunnels, it is recommended that a line containing
|
||||
``udp_dstport = 4789`` be added to the [vxlan] section of all the Linux
|
||||
bridge agents. OVS follows the IANA standard.
|
||||
|
||||
Replace ``OVERLAY_INTERFACE_IP_ADDRESS`` with the IP address of the
|
||||
interface that handles VXLAN overlays for self-service networks.
|
||||
|
||||
#. Restart the following services:
|
||||
|
||||
* Linux bridge agent
|
||||
|
||||
Verify service operation
|
||||
------------------------
|
||||
|
||||
#. Source the administrative project credentials.
|
||||
#. Verify presence and operation of the agents.
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
$ openstack network agent list
|
||||
+--------------------------------------+--------------------+----------+-------------------+-------+-------+---------------------------+
|
||||
| ID | Agent Type | Host | Availability Zone | Alive | State | Binary |
|
||||
+--------------------------------------+--------------------+----------+-------------------+-------+-------+---------------------------+
|
||||
| 09de6af6-c5f1-4548-8b09-18801f068c57 | Linux bridge agent | compute2 | None | True | UP | neutron-linuxbridge-agent |
|
||||
| 188945d1-9e70-4803-a276-df924e0788a4 | Linux bridge agent | compute1 | None | True | UP | neutron-linuxbridge-agent |
|
||||
| e76c440d-d5f6-4316-a674-d689630b629e | DHCP agent | compute1 | nova | True | UP | neutron-dhcp-agent |
|
||||
| e67367de-6657-11e6-86a4-931cd04404bb | DHCP agent | compute2 | nova | True | UP | neutron-dhcp-agent |
|
||||
| e8174cae-6657-11e6-89f0-534ac6d0cb5c | Metadata agent | compute1 | None | True | UP | neutron-metadata-agent |
|
||||
| ece49ec6-6657-11e6-bafb-c7560f19197d | Metadata agent | compute2 | None | True | UP | neutron-metadata-agent |
|
||||
| 598f6357-4331-4da5-a420-0f5be000bec9 | L3 agent | network1 | nova | True | UP | neutron-l3-agent |
|
||||
| f4734e0f-bcd5-4922-a19d-e31d56b0a7ae | Linux bridge agent | network1 | None | True | UP | neutron-linuxbridge-agent |
|
||||
+--------------------------------------+--------------------+----------+-------------------+-------+-------+---------------------------+
|
||||
|
||||
Create initial networks
|
||||
-----------------------
|
||||
|
||||
.. include:: shared/deploy-selfservice-initialnetworks.txt
|
||||
|
||||
Verify network operation
|
||||
------------------------
|
||||
|
||||
.. include:: deploy-selfservice-verifynetworkoperation.txt
|
||||
|
||||
.. _deploy-lb-selfservice-networktrafficflow:
|
||||
|
||||
Network traffic flow
|
||||
~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
.. include:: shared/deploy-selfservice-networktrafficflow.txt
|
||||
|
||||
North-south scenario 1: Instance with a fixed IP address
|
||||
--------------------------------------------------------
|
||||
|
||||
For instances with a fixed IPv4 address, the network node performs SNAT
|
||||
on north-south traffic passing from self-service to external networks
|
||||
such as the Internet. For instances with a fixed IPv6 address, the network
|
||||
node performs conventional routing of traffic between self-service and
|
||||
external networks.
|
||||
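Conceptually, the IPv4 SNAT step resembles an iptables rule of the following
shape (addresses are placeholders; the L3 agent manages its own per-router
chains inside the router namespace rather than ``POSTROUTING`` directly):

.. code-block:: console

   # iptables -t nat -A POSTROUTING -s 192.0.2.0/24 -j SNAT --to-source 203.0.113.10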
|
||||
* The instance resides on compute node 1 and uses self-service network 1.
|
||||
* The instance sends a packet to a host on the Internet.
|
||||
|
||||
The following steps involve compute node 1:
|
||||
|
||||
#. The instance interface (1) forwards the packet to the self-service
|
||||
bridge instance port (2) via ``veth`` pair.
|
||||
#. Security group rules (3) on the self-service bridge handle
|
||||
firewalling and connection tracking for the packet.
|
||||
#. The self-service bridge forwards the packet to the VXLAN interface (4)
|
||||
which wraps the packet using VNI 101.
|
||||
#. The underlying physical interface (5) for the VXLAN interface forwards
|
||||
the packet to the network node via the overlay network (6).
|
||||
|
||||
The following steps involve the network node:
|
||||
|
||||
#. The underlying physical interface (7) for the VXLAN interface forwards
|
||||
the packet to the VXLAN interface (8) which unwraps the packet.
|
||||
#. The self-service bridge router port (9) forwards the packet to the
|
||||
self-service network interface (10) in the router namespace.
|
||||
|
||||
* For IPv4, the router performs SNAT on the packet which changes the
|
||||
source IP address to the router IP address on the provider network
|
||||
and sends it to the gateway IP address on the provider network via
|
||||
the gateway interface on the provider network (11).
|
||||
* For IPv6, the router sends the packet to the next-hop IP address,
|
||||
typically the gateway IP address on the provider network, via the
|
||||
provider gateway interface (11).
|
||||
|
||||
#. The router forwards the packet to the provider bridge router
|
||||
port (12).
|
||||
#. The VLAN sub-interface port (13) on the provider bridge forwards
|
||||
the packet to the provider physical network interface (14).
|
||||
#. The provider physical network interface (14) adds VLAN tag 101 to the packet
|
||||
and forwards it to the Internet via physical network infrastructure (15).
|
||||
|
||||
.. note::
|
||||
|
||||
Return traffic follows similar steps in reverse. However, without a
|
||||
floating IPv4 address, hosts on the provider or external networks cannot
|
||||
originate connections to instances on the self-service network.
|
||||
|
||||
.. image:: figures/deploy-lb-selfservice-flowns1.png
|
||||
:alt: Self-service networks using Linux bridge - network traffic flow - north/south scenario 1
|
||||
|
||||
North-south scenario 2: Instance with a floating IPv4 address
|
||||
-------------------------------------------------------------
|
||||
|
||||
For instances with a floating IPv4 address, the network node performs SNAT
|
||||
on north-south traffic passing from the instance to external networks
|
||||
such as the Internet and DNAT on north-south traffic passing from external
|
||||
networks to the instance. Floating IP addresses and NAT do not apply to IPv6.
|
||||
Thus, the network node routes IPv6 traffic in this scenario.
|
||||
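Conceptually, the DNAT step for a floating IP resembles the following
(addresses are placeholders again; the real rules live in per-router chains
inside the router namespace):

.. code-block:: console

   # iptables -t nat -A PREROUTING -d 203.0.113.50 -j DNAT --to-destination 192.0.2.12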
|
||||
* The instance resides on compute node 1 and uses self-service network 1.
|
||||
* A host on the Internet sends a packet to the instance.
|
||||
|
||||
The following steps involve the network node:
|
||||
|
||||
#. The physical network infrastructure (1) forwards the packet to the
|
||||
provider physical network interface (2).
|
||||
#. The provider physical network interface removes VLAN tag 101 and forwards
|
||||
the packet to the VLAN sub-interface on the provider bridge.
|
||||
#. The provider bridge forwards the packet to the self-service
|
||||
router gateway port on the provider network (5).
|
||||
|
||||
* For IPv4, the router performs DNAT on the packet which changes the
|
||||
destination IP address to the instance IP address on the self-service
|
||||
network and sends it to the gateway IP address on the self-service
|
||||
network via the self-service interface (6).
|
||||
* For IPv6, the router sends the packet to the next-hop IP address,
|
||||
typically the gateway IP address on the self-service network, via
|
||||
the self-service interface (6).
|
||||
|
||||
#. The router forwards the packet to the self-service bridge router
|
||||
port (7).
|
||||
#. The self-service bridge forwards the packet to the VXLAN interface (8)
|
||||
which wraps the packet using VNI 101.
|
||||
#. The underlying physical interface (9) for the VXLAN interface forwards
|
||||
the packet to the compute node via the overlay network (10).
|
||||
|
||||
The following steps involve the compute node:
|
||||
|
||||
#. The underlying physical interface (11) for the VXLAN interface forwards
|
||||
the packet to the VXLAN interface (12) which unwraps the packet.
|
||||
#. Security group rules (13) on the self-service bridge handle firewalling
|
||||
and connection tracking for the packet.
|
||||
#. The self-service bridge instance port (14) forwards the packet to
|
||||
the instance interface (15) via ``veth`` pair.
|
||||
|
||||
.. note::
|
||||
|
||||
Egress instance traffic flows similarly to north-south scenario 1, except SNAT
|
||||
changes the source IP address of the packet to the floating IPv4 address
|
||||
rather than the router IP address on the provider network.
|
||||
|
||||
.. image:: figures/deploy-lb-selfservice-flowns2.png
|
||||
:alt: Self-service networks using Linux bridge - network traffic flow - north/south scenario 2
|
||||
|
||||
East-west scenario 1: Instances on the same network
|
||||
---------------------------------------------------
|
||||
|
||||
Instances with a fixed IPv4/IPv6 or floating IPv4 address on the same network
|
||||
communicate directly between compute nodes containing those instances.
|
||||
|
||||
By default, the VXLAN protocol lacks knowledge of target location
|
||||
and uses multicast to discover it. After discovery, it stores the
|
||||
location in the local forwarding database. In large deployments,
|
||||
the discovery process can generate a significant amount of network traffic
|
||||
that all nodes must process. To eliminate the latter and generally
|
||||
increase efficiency, the Networking service includes the layer-2
|
||||
population mechanism driver that automatically populates the
|
||||
forwarding database for VXLAN interfaces. The example configuration
|
||||
enables this driver. For more information, see :ref:`config-plugin-ml2`.
|
||||
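The effect of the driver can be observed on a compute node by listing the
forwarding database of the VXLAN device; with ``l2_population`` enabled,
remote instances appear as static entries rather than being learned via
multicast (device name and addresses are illustrative):

.. code-block:: console

   $ bridge fdb show dev vxlan-101
   00:16:3e:5d:9c:aa dst 203.0.113.12 self permanent
   00:00:00:00:00:00 dst 203.0.113.12 self permanent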
|
||||
* Instance 1 resides on compute node 1 and uses self-service network 1.
|
||||
* Instance 2 resides on compute node 2 and uses self-service network 1.
|
||||
* Instance 1 sends a packet to instance 2.
|
||||
|
||||
The following steps involve compute node 1:
|
||||
|
||||
#. The instance 1 interface (1) forwards the packet to the
|
||||
self-service bridge instance port (2) via ``veth`` pair.
|
||||
#. Security group rules (3) on the self-service bridge handle firewalling
|
||||
and connection tracking for the packet.
|
||||
#. The self-service bridge forwards the packet to the VXLAN interface (4)
|
||||
which wraps the packet using VNI 101.
|
||||
#. The underlying physical interface (5) for the VXLAN interface forwards
|
||||
the packet to compute node 2 via the overlay network (6).
|
||||
|
||||
The following steps involve compute node 2:
|
||||
|
||||
#. The underlying physical interface (7) for the VXLAN interface forwards
|
||||
the packet to the VXLAN interface (8) which unwraps the packet.
|
||||
#. Security group rules (9) on the self-service bridge handle firewalling
|
||||
and connection tracking for the packet.
|
||||
#. The self-service bridge instance port (10) forwards the packet to
|
||||
the instance 2 interface (11) via ``veth`` pair.
|
||||
|
||||
.. note::
|
||||
|
||||
Return traffic follows similar steps in reverse.
|
||||
|
||||
.. image:: figures/deploy-lb-selfservice-flowew1.png
|
||||
:alt: Self-service networks using Linux bridge - network traffic flow - east/west scenario 1
|
||||
|
||||
East-west scenario 2: Instances on different networks
|
||||
-----------------------------------------------------
|
||||
|
||||
Instances using a fixed IPv4/IPv6 address or floating IPv4 address communicate
|
||||
via a router on the network node. The self-service networks must reside on the
|
||||
same router.
|
||||
|
||||
* Instance 1 resides on compute node 1 and uses self-service network 1.
|
||||
* Instance 2 resides on compute node 1 and uses self-service network 2.
|
||||
* Instance 1 sends a packet to instance 2.
|
||||
|
||||
.. note::
|
||||
|
||||
Both instances reside on the same compute node to illustrate how VXLAN
|
||||
enables multiple overlays to use the same layer-3 network.
|
||||
|
||||
The following steps involve the compute node:
|
||||
|
||||
#. The instance 1 interface (1) forwards the packet to the self-service
|
||||
bridge instance port (2) via ``veth`` pair.
|
||||
#. Security group rules (3) on the self-service bridge handle
|
||||
firewalling and connection tracking for the packet.
|
||||
#. The self-service bridge forwards the packet to the VXLAN interface (4)
|
||||
which wraps the packet using VNI 101.
|
||||
#. The underlying physical interface (5) for the VXLAN interface forwards
|
||||
the packet to the network node via the overlay network (6).
|
||||
|
||||
The following steps involve the network node:
|
||||
|
||||
#. The underlying physical interface (7) for the VXLAN interface forwards
|
||||
the packet to the VXLAN interface (8) which unwraps the packet.
|
||||
#. The self-service bridge router port (9) forwards the packet to the
|
||||
self-service network 1 interface (10) in the router namespace.
|
||||
#. The router sends the packet to the next-hop IP address, typically the
|
||||
gateway IP address on self-service network 2, via the self-service
|
||||
network 2 interface (11).
|
||||
#. The router forwards the packet to the self-service network 2 bridge router
|
||||
port (12).
|
||||
#. The self-service network 2 bridge forwards the packet to the VXLAN
|
||||
interface (13) which wraps the packet using VNI 102.
|
||||
#. The physical network interface (14) for the VXLAN interface sends the
|
||||
packet to the compute node via the overlay network (15).
|
||||
|
||||
The following steps involve the compute node:
|
||||
|
||||
#. The underlying physical interface (16) for the VXLAN interface sends
|
||||
the packet to the VXLAN interface (17) which unwraps the packet.
|
||||
#. Security group rules (18) on the self-service bridge handle firewalling
|
||||
and connection tracking for the packet.
|
||||
#. The self-service bridge instance port (19) forwards the packet to
|
||||
the instance 2 interface (20) via ``veth`` pair.
|
||||
|
||||
.. note::
|
||||
|
||||
Return traffic follows similar steps in reverse.
|
||||
|
||||
.. image:: figures/deploy-lb-selfservice-flowew2.png
|
||||
:alt: Self-service networks using Linux bridge - network traffic flow - east/west scenario 2
|
@ -1,83 +0,0 @@
|
||||
.. _deploy-lb:
|
||||
|
||||
=============================
|
||||
Linux bridge mechanism driver
|
||||
=============================
|
||||
|
||||
The Linux bridge mechanism driver uses only Linux bridges and ``veth`` pairs
|
||||
as interconnection devices. A layer-2 agent manages Linux bridges on each
|
||||
compute node and any other node that provides layer-3 (routing), DHCP,
|
||||
metadata, or other network services.
|
||||
|
||||
Compatibility with nftables
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
`nftables <https://netfilter.org/projects/nftables/>`_ replaces iptables,
|
||||
ip6tables, arptables and ebtables, in order to provide a single API for all
|
||||
``Netfilter`` operations. ``nftables`` provides a backwards compatibility set
|
||||
of tools for those replaced binaries that present the legacy API to the user
|
||||
while using the new packet classification framework. As reported in
|
||||
`LP#1915341 <https://bugs.launchpad.net/neutron/+bug/1915341>`_ and
|
||||
`LP#1922892 <https://bugs.launchpad.net/neutron/+bug/1922892>`_, the tool
|
||||
``ebtables-nft`` is not totally compatible with the legacy API and returns some
|
||||
errors. To use the Linux Bridge mechanism driver on newer operating systems that
|
||||
use ``nftables`` by default, you must switch back to the legacy tool.
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
# /usr/bin/update-alternatives --set ebtables /usr/sbin/ebtables-legacy
|
||||
|
||||
|
||||
Since `LP#1922127 <https://bugs.launchpad.net/neutron/+bug/1922127>`_ and
|
||||
`LP#1922892 <https://bugs.launchpad.net/neutron/+bug/1922892>`_ were fixed,
|
||||
Neutron Linux Bridge mechanism driver is compatible with the ``nftables``
|
||||
binaries using the legacy API.
|
||||
|
||||
.. note::
|
||||
|
||||
To avoid possible terminology confusion, these are the three
|
||||
available ``Netfilter`` framework alternatives:
|
||||
|
||||
* The legacy binaries (``iptables``, ``ip6tables``, ``arptables`` and
|
||||
``ebtables``) that use the legacy API.
|
||||
* The new ``nftables`` binaries that use the legacy API, to help in the
|
||||
transition to this new framework. Those binaries replicate the same
|
||||
commands as the legacy ones but use the new framework. The binaries
|
||||
have the same names, ending in ``-nft``.
|
||||
* The new ``nftables`` framework using the new API. All Netfilter
|
||||
operations are executed using this new API and one single binary, ``nft``.
|
||||
|
||||
Currently we support the first two options. The migration (total or partial)
|
||||
to the new API is tracked in
|
||||
`LP#1508155 <https://bugs.launchpad.net/neutron/+bug/1508155>`_.
|
||||
|
||||
|
||||
To use the ``nftables`` binaries with the legacy API, execute the following
|
||||
commands:
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
# /usr/bin/update-alternatives --set iptables /usr/sbin/iptables-nft
|
||||
# /usr/bin/update-alternatives --set ip6tables /usr/sbin/ip6tables-nft
|
||||
# /usr/bin/update-alternatives --set ebtables /usr/sbin/ebtables-nft
|
||||
# /usr/bin/update-alternatives --set arptables /usr/sbin/arptables-nft
|
||||
|
||||
|
||||
The ``ipset`` tool is not compatible with ``nftables``. To disable it,
|
||||
``enable_ipset`` must be set to ``False`` in the ML2 plugin configuration file
|
||||
``/etc/neutron/plugins/ml2/ml2_conf.ini``.
|
||||
|
||||
.. path /etc/neutron/plugins/ml2/ml2_conf.ini
|
||||
.. code-block:: ini
|
||||
|
||||
[securitygroup]
|
||||
# ...
|
||||
enable_ipset = False
|
||||
|
||||
|
||||
.. toctree::
|
||||
:maxdepth: 2
|
||||
|
||||
deploy-lb-provider
|
||||
deploy-lb-selfservice
|
||||
deploy-lb-ha-vrrp
|
@ -158,7 +158,6 @@ Verify service operation
|
||||
| 1236bbcb-e0ba-48a9-80fc-81202ca4fa51 | Metadata agent | compute2 | None | True | UP | neutron-metadata-agent |
|
||||
| 2a2e9a90-51b8-4163-a7d6-3e199ba2374b | L3 agent | compute2 | nova | True | UP | neutron-l3-agent |
|
||||
| 457d6898-b373-4bb3-b41f-59345dcfb5c5 | Open vSwitch agent | compute2 | None | True | UP | neutron-openvswitch-agent |
|
||||
| 513caa68-0391-4e53-a530-082e2c23e819 | Linux bridge agent | compute1 | None | True | UP | neutron-linuxbridge-agent |
|
||||
| 71f15e84-bc47-4c2a-b9fb-317840b2d753 | DHCP agent | compute2 | nova | True | UP | neutron-dhcp-agent |
|
||||
| 8805b962-de95-4e40-bdc2-7a0add7521e8 | L3 agent | network1 | nova | True | UP | neutron-l3-agent |
|
||||
| a33cac5a-0266-48f6-9cac-4cef4f8b0358 | Open vSwitch agent | network1 | None | True | UP | neutron-openvswitch-agent |
|
||||
|
@ -6,9 +6,9 @@ Deployment examples
|
||||
|
||||
The following deployment examples provide building blocks of increasing
|
||||
architectural complexity using the Networking service reference architecture
|
||||
which implements the Modular Layer 2 (ML2) plug-in and either the Open
|
||||
vSwitch (OVS) or Linux bridge mechanism drivers. Both mechanism drivers support
|
||||
the same basic features such as provider networks, self-service networks,
|
||||
which implements the Modular Layer 2 (ML2) plug-in with the Open
|
||||
vSwitch (OVS) mechanism driver. The mechanism driver supports
|
||||
basic features such as provider networks, self-service networks,
|
||||
and routers. However, more complex features often require a particular
|
||||
mechanism driver. Thus, you should consider the requirements (or goals) of
|
||||
your cloud before choosing a mechanism driver.
|
||||
@ -136,5 +136,4 @@ Mechanism drivers
|
||||
.. toctree::
|
||||
:maxdepth: 1
|
||||
|
||||
deploy-lb
|
||||
deploy-ovs
|
||||
|
(28 binary figure images deleted; one 25 KiB image replaced with a same-size version.)
@ -15,10 +15,10 @@ of bandwidth constraints that limit performance. However, it supports random
|
||||
distribution of routers on different network nodes to reduce the chances of
|
||||
bandwidth constraints and to improve scaling.
|
||||
|
||||
This section references parts of :ref:`deploy-lb-ha-vrrp` and
|
||||
:ref:`deploy-ovs-ha-vrrp`. For details regarding needed infrastructure and
|
||||
configuration to allow actual L3 HA deployment, read the relevant guide
|
||||
before continuing with the migration process.
|
||||
This section references parts of :ref:`deploy-ovs-ha-vrrp`. For details
|
||||
regarding needed infrastructure and configuration to allow actual L3 HA
|
||||
deployment, read the relevant guide before continuing with the migration
|
||||
process.
|
||||
|
||||
Migration
|
||||
~~~~~~~~~
|
||||
|
@ -9,5 +9,4 @@ Miscellaneous
|
||||
|
||||
fwaas-v2-scenario
|
||||
misc-libvirt
|
||||
neutron_linuxbridge
|
||||
vpnaas-scenario
|
||||
|
@ -1,35 +0,0 @@
|
||||
====================================
|
||||
neutron-linuxbridge-cleanup utility
|
||||
====================================
|
||||
|
||||
Description
|
||||
~~~~~~~~~~~
|
||||
|
||||
Automated removal of empty bridges has been disabled to fix a race condition
|
||||
between the Compute (nova) and Networking (neutron) services. Previously, it
|
||||
was possible for a bridge to be deleted during the time when the only instance
|
||||
using it was rebooted.
|
||||
|
||||
Usage
|
||||
~~~~~
|
||||
|
||||
Use this script to remove empty bridges on compute nodes by running the
|
||||
following command:
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
$ neutron-linuxbridge-cleanup
|
||||
|
||||
.. important::
|
||||
|
||||
Do not use this tool when creating or migrating an instance as it
|
||||
throws an error when the bridge does not exist.
|
||||
|
||||
.. note::
|
||||
|
||||
Using this script can still trigger the original race condition. Only
|
||||
run this script if you have evacuated all instances off a compute
|
||||
node and you want to clean up the bridges. In addition to evacuating
|
||||
all instances, you should fence off the compute node where you are going
|
||||
to run this script so new instances do not get scheduled on it.
|
||||
|
@ -164,11 +164,9 @@ the default set of quotas are enforced for all projects, so no
|
||||
|
||||
.. note::
|
||||
|
||||
Only some plug-ins support per-project quotas.
|
||||
Specifically, OVN, Open vSwitch, and Linux Bridge
|
||||
support them, but new versions of other plug-ins might
|
||||
bring additional functionality. See the documentation for
|
||||
each plug-in.
|
||||
Only some plug-ins support per-project quotas. Specifically, OVN and Open
|
||||
vSwitch support them, but new versions of other plug-ins might bring
|
||||
additional functionality. See the documentation for each plug-in.
|
||||
|
||||
#. List projects that have per-project quota support.
|
||||
|
||||
|
@ -30,7 +30,7 @@
|
||||
|
||||
.. note::
|
||||
|
||||
The namespace for router 1 from :ref:`deploy-lb-selfservice` should
|
||||
The namespace for router 1 from :ref:`deploy-ovs-selfservice` should
|
||||
only appear on network node 1 because of creation prior to enabling
|
||||
VRRP.
|
||||
|
||||
|
@ -250,7 +250,6 @@ latex_elements = {
|
||||
_config_generator_config_files = [
|
||||
'dhcp_agent.ini',
|
||||
'l3_agent.ini',
|
||||
'linuxbridge_agent.ini',
|
||||
'macvtap_agent.ini',
|
||||
'metadata_agent.ini',
|
||||
'metering_agent.ini',
|
||||
|
@ -11,7 +11,6 @@ Sample Configuration Files
|
||||
:maxdepth: 1
|
||||
|
||||
samples/ml2-conf.rst
|
||||
samples/linuxbridge-agent.rst
|
||||
samples/macvtap-agent.rst
|
||||
samples/openvswitch-agent.rst
|
||||
samples/sriov-agent.rst
|
||||
|
@ -28,7 +28,6 @@ arbitrary file names.
|
||||
:maxdepth: 1
|
||||
|
||||
ml2-conf.rst
|
||||
linuxbridge-agent.rst
|
||||
macvtap-agent.rst
|
||||
openvswitch-agent.rst
|
||||
sriov-agent.rst
|
||||
|
@ -1,6 +0,0 @@
|
||||
=====================
|
||||
linuxbridge_agent.ini
|
||||
=====================
|
||||
|
||||
.. show-options::
|
||||
:config-file: etc/oslo-config-generator/linuxbridge_agent.ini
|
@ -1,8 +0,0 @@
|
||||
============================
|
||||
Sample linuxbridge_agent.ini
|
||||
============================
|
||||
|
||||
This sample configuration can also be viewed in `the raw format
|
||||
<../../_static/config-samples/linuxbridge_agent.conf.sample>`_.
|
||||
|
||||
.. literalinclude:: ../../_static/config-samples/linuxbridge_agent.conf.sample
|
@ -45,7 +45,6 @@ Neutron Internals
|
||||
l2_agents
|
||||
l3_agent_extensions
|
||||
layer3
|
||||
linuxbridge_agent
|
||||
live_migration
|
||||
local_ips
|
||||
metadata
|
||||
|
@ -41,14 +41,3 @@ hardened bridge objects with cookie values allocated for calling extensions::
|
||||
Bridge objects returned by those methods already have new default cookie values
|
||||
allocated for extension flows. All flow management methods (add_flow, mod_flow,
|
||||
...) enforce those allocated cookies.
|
||||
|
||||
Linuxbridge agent API
|
||||
~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
* neutron.plugins.ml2.drivers.linuxbridge.agent.linuxbridge_agent_extension_api
|
||||
|
||||
The Linux bridge agent extension API object includes a method that returns an
|
||||
instance of the IptablesManager class, which is used by the L2 agent to manage
|
||||
security group rules::
|
||||
|
||||
#. get_iptables_manager
|
||||
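A minimal sketch of an extension consuming that API (only
``get_iptables_manager`` comes from the text above; the class name is
hypothetical, and the other method names follow the generic L2 agent
extension interface in ``neutron_lib``):

.. code-block:: python

   from neutron_lib.agent import l2_extension


   class IptablesAwareExtension(l2_extension.L2AgentExtension):
       """Hypothetical extension reusing the agent's IptablesManager."""

       def consume_api(self, agent_api):
           # agent_api is the agent extension API object handed over by the
           # L2 agent; keep the shared IptablesManager for later use.
           self.iptables_manager = agent_api.get_iptables_manager()

       def initialize(self, connection, driver_type):
           pass

       def handle_port(self, context, data):
           # A real extension would manage iptables rules for the port here.
           pass

       def delete_port(self, context, data):
           pass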
|
@ -4,6 +4,5 @@ L2 Agent Networking
|
||||
:maxdepth: 3
|
||||
|
||||
openvswitch_agent
|
||||
linuxbridge_agent
|
||||
sriov_nic_agent
|
||||
l2_agent_extensions
|
||||
|
@ -1,45 +0,0 @@
|
||||
..
|
||||
Licensed under the Apache License, Version 2.0 (the "License"); you may
|
||||
not use this file except in compliance with the License. You may obtain
|
||||
a copy of the License at
|
||||
|
||||
http://www.apache.org/licenses/LICENSE-2.0
|
||||
|
||||
Unless required by applicable law or agreed to in writing, software
|
||||
distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
|
||||
WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
|
||||
License for the specific language governing permissions and limitations
|
||||
under the License.
|
||||
|
||||
|
||||
Convention for heading levels in Neutron devref:
|
||||
======= Heading 0 (reserved for the title in a document)
|
||||
------- Heading 1
|
||||
~~~~~~~ Heading 2
|
||||
+++++++ Heading 3
|
||||
''''''' Heading 4
|
||||
(Avoid deeper levels because they do not render well.)
|
||||
|
||||
|
||||
Linux Bridge Networking L2 Agent
|
||||
================================
|
||||
|
||||
This Agent uses the `Linux Bridge
|
||||
<http://www.linuxfoundation.org/collaborate/workgroups/networking/bridge>`_ to
|
||||
provide L2 connectivity for VM instances running on the compute node to the
|
||||
public network. A graphical illustration of the deployment can be found in
|
||||
`Networking Guide <../../admin/deploy-lb-provider.html#architecture>`_.
|
||||
|
||||
In most common deployments, there is a compute and a network node. On both the
|
||||
compute and the network node, the Linux Bridge Agent will manage virtual
|
||||
switches, connectivity among them, and interaction via virtual ports with other
|
||||
network components such as namespaces and underlying interfaces. Additionally,
|
||||
on the compute node, the Linux Bridge Agent will manage security groups.
|
||||
|
||||
Three use cases and their packet flow are documented as follows:
|
||||
|
||||
1. `Linux Bridge: Provider networks <../../admin/deploy-lb-provider.html>`_
|
||||
|
||||
2. `Linux Bridge: Self-service networks <../../admin/deploy-lb-selfservice.html>`_
|
||||
|
||||
3. `Linux Bridge: High availability using VRRP <../../admin/deploy-lb-ha-vrrp.html>`_
|
@ -301,7 +301,6 @@ interface:
|
||||
|
||||
* Open vSwitch (QosOVSAgentDriver);
|
||||
* SR-IOV (QosSRIOVAgentDriver);
|
||||
* Linux bridge (QosLinuxbridgeAgentDriver).
|
||||
|
||||
For the Networking back ends, QoS supported rules, and traffic directions
|
||||
(from the VM point of view), please see the table:
|
||||
@ -404,10 +403,6 @@ tc. Details about how it is calculated can be found in
|
||||
`here <http://unix.stackexchange.com/a/100797>`_.
|
||||
This solution is similar to Open vSwitch implementation.
|
||||
|
||||
The Linux bridge DSCP marking implementation relies on the
|
||||
linuxbridge_extension_api to request access to the IptablesManager class
|
||||
and to manage chains in the ``mangle`` table in iptables.
|
||||
|
||||
QoS driver design
|
||||
-----------------
|
||||
|
||||
|
@ -31,9 +31,9 @@ services. Among those of special interest:
|
||||
|
||||
#. neutron-server that provides API endpoints and serves as a single point of
|
||||
access to the database. It usually runs on nodes called Controllers.
|
||||
#. Layer2 agent that can utilize Open vSwitch, Linuxbridge or other vendor
|
||||
specific technology to provide network segmentation and isolation for
|
||||
project networks. The L2 agent should run on every node where it is deemed
|
||||
#. Layer2 agent that can utilize Open vSwitch or other vendor specific
|
||||
technology to provide network segmentation and isolation for project
|
||||
networks. The L2 agent should run on every node where it is deemed
|
||||
responsible for wiring and securing virtual interfaces (usually both Compute
|
||||
and Network nodes).
|
||||
#. Layer3 agent that runs on Network node and provides East-West and
|
||||
|
@ -120,7 +120,7 @@ sent by older versions of agents which are part of the cloud.
|
||||
|
||||
The recommended order of agent upgrade (per node) is:
|
||||
|
||||
#. first, L2 agents (openvswitch, linuxbridge, sr-iov).
|
||||
#. first, L2 agents (openvswitch, sr-iov).
|
||||
#. then, all other agents (L3, DHCP, Metadata, ...).
|
||||
|
||||
The rationale of the agent upgrade order is that L2 agent is usually
|
||||
|
@ -419,8 +419,6 @@ more will be added over time if needed.
|
||||
+-------------------------------+-----------------------------------------+--------------------------+
|
||||
| lib_ | An issue affecting neutron-lib | Neutron PTL |
|
||||
+-------------------------------+-----------------------------------------+--------------------------+
|
||||
| linuxbridge_ | A bug affecting ML2/linuxbridge | N/A |
|
||||
+-------------------------------+-----------------------------------------+--------------------------+
|
||||
| loadimpact_ | Performance penalty/improvements | Miguel Lavalle/ |
|
||||
| | | Oleg Bondarev |
|
||||
+-------------------------------+-----------------------------------------+--------------------------+
|
||||
@ -648,14 +646,6 @@ Lib
|
||||
|
||||
* `Lib - All bugs <https://bugs.launchpad.net/neutron/+bugs?field.tag=lib>`_
|
||||
|
||||
.. _linuxbridge:
|
||||
|
||||
LinuxBridge
|
||||
+++++++++++
|
||||
|
||||
* `LinuxBridge - All bugs <https://bugs.launchpad.net/neutron/+bugs?field.tag=linuxbridge>`_
|
||||
* `LinuxBridge - In progress <https://bugs.launchpad.net/neutron/+bugs?field.status%3Alist=INPROGRESS&field.tag=linuxbridge>`_
|
||||
|
||||
.. _loadimpact:
|
||||
|
||||
Load Impact
|
||||
|
@ -48,12 +48,9 @@ neutron/tests/fullstack/test_connectivity.py.
|
||||
|
||||
Full stack testing can simulate multi node testing by starting an agent
|
||||
multiple times. Specifically, each node would have its own copy of the
|
||||
OVS/LinuxBridge/DHCP/L3 agents, all configured with the same "host" value.
|
||||
OVS/DHCP/L3 agents, all configured with the same "host" value.
|
||||
Each OVS agent is connected to its own pair of br-int/br-ex, and those bridges
|
||||
are then interconnected.
|
||||
For LinuxBridge agent each agent is started in its own namespace, called
|
||||
"host-<some_random_value>". Such namespaces are connected with OVS "central"
|
||||
bridge to each other.
|
||||
|
||||
.. image:: images/fullstack_multinode_simulation.png
|
||||
|
||||
|
@ -13,9 +13,6 @@
|
||||
[driver.ovs]
|
||||
title=Open vSwitch
|
||||
|
||||
[driver.linuxbridge]
|
||||
title=Linux Bridge
|
||||
|
||||
[driver.ovn]
|
||||
title=OVN
|
||||
|
||||
@ -27,7 +24,6 @@ cli=openstack network *
|
||||
notes=The ability to create, modify and delete networks.
|
||||
https://docs.openstack.org/api-ref/network/v2/#networks
|
||||
driver.ovs=complete
|
||||
driver.linuxbridge=complete
|
||||
driver.ovn=complete
|
||||
|
||||
[operation.Subnets]
|
||||
@ -38,7 +34,6 @@ cli=openstack subnet *
|
||||
notes=The ability to create and manipulate subnets and subnet pools.
|
||||
https://docs.openstack.org/api-ref/network/v2/#subnets
|
||||
driver.ovs=complete
|
||||
driver.linuxbridge=complete
|
||||
driver.ovn=complete
|
||||
|
||||
[operation.Ports]
|
||||
@ -49,7 +44,6 @@ cli=openstack port *
|
||||
notes=The ability to create and manipulate ports.
|
||||
https://docs.openstack.org/api-ref/network/v2/#ports
|
||||
driver.ovs=complete
|
||||
driver.linuxbridge=complete
|
||||
driver.ovn=complete
|
||||
|
||||
[operation.Router]
|
||||
@ -60,7 +54,6 @@ cli=openstack router *
|
||||
notes=The ability to create and manipulate routers.
|
||||
https://docs.openstack.org/api-ref/network/v2/#routers-routers
|
||||
driver.ovs=complete
|
||||
driver.linuxbridge=complete
|
||||
driver.ovn=complete
|
||||
|
||||
[operation.Security_Groups]
|
||||
@ -72,7 +65,6 @@ notes=Security groups are set by default, and can be modified to control
|
||||
ingress & egress traffic.
|
||||
https://docs.openstack.org/api-ref/network/v2/#security-groups-security-groups
|
||||
driver.ovs=complete
|
||||
driver.linuxbridge=complete
|
||||
driver.ovn=complete
|
||||
|
||||
[operation.External_Nets]
|
||||
@ -82,7 +74,6 @@ api=external-net
|
||||
notes=The ability to create an external network to provide internet access
|
||||
to and from instances using floating IP addresses and security group rules.
|
||||
driver.ovs=complete
|
||||
driver.linuxbridge=complete
|
||||
driver.ovn=complete
|
||||
|
||||
[operation.DVR]
|
||||
@ -92,7 +83,6 @@ api=dvr
|
||||
notes=The ability to support the distributed virtual routers.
|
||||
https://wiki.openstack.org/wiki/Neutron/DVR
|
||||
driver.ovs=complete
|
||||
driver.linuxbridge=missing
|
||||
driver.ovn=partial
|
||||
|
||||
[operation.L3_HA]
|
||||
@ -102,7 +92,6 @@ api=l3-ha
|
||||
notes=The ability to support the High Availability features and extensions.
|
||||
https://wiki.openstack.org/wiki/Neutron/L3_High_Availability_VRRP.
|
||||
driver.ovs=complete
|
||||
driver.linuxbridge=complete
|
||||
driver.ovn=partial
|
||||
|
||||
[operation.QoS]
|
||||
@ -112,7 +101,6 @@ api=qos
|
||||
notes=Support for Neutron Quality of Service policies and API.
|
||||
https://docs.openstack.org/api-ref/network/v2/#qos-policies-qos
|
||||
driver.ovs=complete
|
||||
driver.linuxbridge=partial
|
||||
driver.ovn=complete
|
||||
|
||||
[operation.BGP]
|
||||
@ -120,7 +108,6 @@ title=Border Gateway Protocol
|
||||
status=immature
|
||||
notes=https://docs.openstack.org/api-ref/network/v2/#bgp-mpls-vpn-interconnection
|
||||
driver.ovs=complete
|
||||
driver.linuxbridge=unknown
|
||||
driver.ovn=unknown
|
||||
|
||||
[operation.DNS]
|
||||
@ -130,7 +117,6 @@ api=dns-integration
|
||||
notes=The ability to integrate with an external DNS
|
||||
as a Service. https://docs.openstack.org/neutron/latest/admin/config-dns-int.html
|
||||
driver.ovs=complete
|
||||
driver.linuxbridge=complete
|
||||
driver.ovn=complete
|
||||
|
||||
[operation.Trunk_Ports]
|
||||
@ -141,7 +127,6 @@ notes=Neutron extension to access lots of neutron networks over
|
||||
a single vNIC as tagged/encapsulated traffic.
|
||||
https://docs.openstack.org/api-ref/network/v2/#trunk-networking
|
||||
driver.ovs=complete
|
||||
driver.linuxbridge=complete
|
||||
driver.ovn=complete
|
||||
|
||||
[operation.Metering]
|
||||
@ -151,7 +136,6 @@ api=metering
|
||||
notes=Meter traffic at the L3 router levels.
|
||||
https://docs.openstack.org/api-ref/network/v2/#metering-labels-and-rules-metering-labels-metering-label-rules
|
||||
driver.ovs=complete
|
||||
driver.linuxbridge=complete
|
||||
driver.ovn=unknown
|
||||
|
||||
[operations.Routed_Provider_Networks]
|
||||
@ -160,5 +144,4 @@ status=immature
|
||||
notes=The ability to present a multi-segment layer-3 network as a
|
||||
single entity. https://docs.openstack.org/neutron/latest/admin/config-routed-networks.html
|
||||
driver.ovs=partial
|
||||
driver.linuxbridge=partial
|
||||
driver.ovn=partial
|
||||
|
@ -13,9 +13,6 @@
[driver.ovs]
title=Open vSwitch

[driver.linuxbridge]
title=Linux Bridge

[driver.ovn]
title=OVN

@ -23,26 +20,22 @@ title=OVN
title=VLAN provider network support
status=mature
driver.ovs=complete
driver.linuxbridge=complete
driver.ovn=complete

[operation.VXLAN]
title=VXLAN provider network support
status=mature
driver.ovs=complete
driver.linuxbridge=complete
driver.ovn=missing

[operation.GRE]
title=GRE provider network support
status=immature
driver.ovs=complete
driver.linuxbridge=unknown
driver.ovn=missing

[operation.Geneve]
title=Geneve provider network support
status=immature
driver.ovs=complete
driver.linuxbridge=unknown
driver.ovn=complete

@ -17,8 +17,8 @@ OpenStack Networking plug-ins and agents
Plug and unplug ports, create networks or subnets, and provide
IP addressing. These plug-ins and agents differ depending on the
vendor and technologies used in the particular cloud. OpenStack
Networking ships with plug-ins and agents for Open vSwitch, Linux
bridging, Open Virtual Network (OVN), SR-IOV and Macvtap.
Networking ships with plug-ins and agents for Open vSwitch and
Open Virtual Network (OVN), as well as for SR-IOV and Macvtap.

The common agents are L3 (layer 3), DHCP (dynamic host IP
addressing), and a plug-in agent.

@ -1,6 +0,0 @@
[DEFAULT]
output_file = etc/neutron/plugins/ml2/linuxbridge_agent.ini.sample
wrap_width = 79

namespace = neutron.ml2.linuxbridge.agent
namespace = oslo.log

@ -25,8 +25,6 @@ from pyroute2.netlink import exceptions as netlink_exceptions

from neutron.agent.linux import bridge_lib
from neutron.conf.agent import l2_ext_fdb_population
from neutron.plugins.ml2.drivers.linuxbridge.agent.common import (
    constants as linux_bridge_constants)

l2_ext_fdb_population.register_fdb_population_opts()

@ -35,8 +33,8 @@ LOG = logging.getLogger(__name__)

class FdbPopulationAgentExtension(
        l2_extension.L2AgentExtension):
    """The FDB population is an agent extension to OVS or linux bridge
    who's objective is to update the FDB table for existing instance
    """The FDB population is an agent extension to OVS
    whose objective is to update the FDB table for existing instance
    using normal port, thus enabling communication between SR-IOV instances
    and normal instances.
    Additional information describing the problem can be found here:
@ -117,12 +115,9 @@ class FdbPopulationAgentExtension(
    # class FdbPopulationAgentExtension implementation:
    def initialize(self, connection, driver_type):
        """Perform FDB Agent Extension initialization."""
        valid_driver_types = (linux_bridge_constants.EXTENSION_DRIVER_TYPE,
                              ovs_constants.EXTENSION_DRIVER_TYPE)
        if driver_type not in valid_driver_types:
            LOG.error('FDB extension is only supported for OVS and '
                      'linux bridge agent, currently uses '
                      '%(driver_type)s', {'driver_type': driver_type})
        if driver_type != ovs_constants.EXTENSION_DRIVER_TYPE:
            LOG.error('FDB extension is only supported for OVS agent, '
                      f'currently uses {driver_type}')
            sys.exit(1)

        self.device_mappings = helpers.parse_mappings(

@ -491,49 +491,3 @@ class OVSInterfaceDriver(LinuxInterfaceDriver):
        else:
            ns_dev = ip_lib.IPWrapper(namespace=namespace).device(device_name)
        ns_dev.link.set_mtu(mtu)


class BridgeInterfaceDriver(LinuxInterfaceDriver):
    """Driver for creating bridge interfaces."""

    DEV_NAME_PREFIX = 'ns-'

    def plug_new(self, network_id, port_id, device_name, mac_address,
                 bridge=None, namespace=None, prefix=None, mtu=None):
        """Plugin the interface."""
        ip = ip_lib.IPWrapper()

        # Enable agent to define the prefix
        tap_name = device_name.replace(prefix or self.DEV_NAME_PREFIX,
                                       constants.TAP_DEVICE_PREFIX)
        # Create ns_veth in a namespace if one is configured.
        root_veth, ns_veth = ip.add_veth(tap_name, device_name,
                                         namespace2=namespace)
        root_veth.disable_ipv6()
        ns_veth.link.set_address(mac_address)

        if mtu:
            self.set_mtu(device_name, mtu, namespace=namespace, prefix=prefix)
        else:
            LOG.warning("No MTU configured for port %s", port_id)

        root_veth.link.set_up()
        ns_veth.link.set_up()

    def unplug(self, device_name, bridge=None, namespace=None, prefix=None):
        """Unplug the interface."""
        device = ip_lib.IPDevice(device_name, namespace=namespace)
        try:
            device.link.delete()
            LOG.debug("Unplugged interface '%s'", device_name)
        except RuntimeError:
            LOG.error("Failed unplugging interface '%s'",
                      device_name)

    def set_mtu(self, device_name, mtu, namespace=None, prefix=None):
        tap_name = device_name.replace(prefix or self.DEV_NAME_PREFIX,
                                       constants.TAP_DEVICE_PREFIX)
        root_dev, ns_dev = _get_veth(
            tap_name, device_name, namespace2=namespace)
        root_dev.link.set_mtu(mtu)
        ns_dev.link.set_mtu(mtu)

@ -379,8 +379,7 @@ class SecurityGroupServerAPIShim(sg_rpc_base.SecurityGroupInfoAPIMixin):
            port['fixed_ips'] = [str(f['ip_address'])
                                 for f in port['fixed_ips']]
            # NOTE(kevinbenton): this id==device is only safe for OVS. a lookup
            # will be required for linux bridge and others that don't have the
            # full port UUID
            # will be required for others that don't have the full port UUID
            port['device'] = port['id']
            port['port_security_enabled'] = getattr(
                ovo.security, 'port_security_enabled', True)

@ -1,28 +0,0 @@
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

import setproctitle

import \
    neutron.plugins.ml2.drivers.linuxbridge.agent.linuxbridge_neutron_agent \
    as agent_main
from neutron_lib import constants


def main():
    proctitle = "{} ({})".format(
        constants.AGENT_PROCESS_LINUXBRIDGE, setproctitle.getproctitle())
    setproctitle.setproctitle(proctitle)

    agent_main.main()

@ -1,75 +0,0 @@
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

import sys

from neutron_lib.utils import helpers
from oslo_config import cfg
from oslo_log import log as logging

from neutron.common import config as common_config
from neutron.conf.agent import common as config
from neutron.plugins.ml2.drivers.linuxbridge.agent \
    import linuxbridge_neutron_agent


LOG = logging.getLogger(__name__)


def remove_empty_bridges():
    try:
        interface_mappings = helpers.parse_mappings(
            cfg.CONF.LINUX_BRIDGE.physical_interface_mappings)
    except ValueError as e:
        LOG.error("Parsing physical_interface_mappings failed: %s.", e)
        sys.exit(1)
    LOG.info("Interface mappings: %s.", interface_mappings)

    try:
        bridge_mappings = helpers.parse_mappings(
            cfg.CONF.LINUX_BRIDGE.bridge_mappings)
    except ValueError as e:
        LOG.error("Parsing bridge_mappings failed: %s.", e)
        sys.exit(1)
    LOG.info("Bridge mappings: %s.", bridge_mappings)

    lb_manager = linuxbridge_neutron_agent.LinuxBridgeManager(
        bridge_mappings, interface_mappings)

    bridge_names = lb_manager.get_deletable_bridges()
    for bridge_name in bridge_names:
        if lb_manager.get_tap_devices_count(bridge_name):
            continue

        try:
            lb_manager.delete_bridge(bridge_name)
            LOG.info("Linux bridge %s deleted", bridge_name)
        except RuntimeError:
            LOG.exception("Linux bridge %s delete failed", bridge_name)
    LOG.info("Linux bridge cleanup completed successfully")


def main():
    """Main method for cleaning up empty linux bridges.

    This tool deletes every empty linux bridge managed by linuxbridge agent
    (brq.* linux bridges) except these ones defined using bridge_mappings
    option in section LINUX_BRIDGE (created by deployers).

    This tool should not be called during an instance create, migrate, etc. as
    it can delete a linux bridge about to be used by nova.
    """
    common_config.register_common_config_options()
    cfg.CONF(sys.argv[1:])
    config.setup_logging()
    config.setup_privsep()
    remove_empty_bridges()

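With the cleanup tool removed, deployers who still find stray agent-managed bridges during a migration can approximate the same check by hand. A minimal sketch (illustrative only, not the removed tool's code: it assumes the agent's brq name prefix and inspects bridge ports via sysfs, as the docstring above describes):

import os

BRIDGE_PREFIX = 'brq'  # prefix of agent-managed bridges, per the docstring

def empty_brq_bridges():
    # A Linux bridge exposes its ports under /sys/class/net/<bridge>/brif;
    # a brq bridge with no tap ports left is a deletion candidate.
    for dev in os.listdir('/sys/class/net'):
        if not dev.startswith(BRIDGE_PREFIX):
            continue
        ports = os.listdir(os.path.join('/sys/class/net', dev, 'brif'))
        if not any(p.startswith('tap') for p in ports):
            yield dev

print(list(empty_brq_bridges()))
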
@ -28,7 +28,6 @@ from neutron.conf.agent import securitygroups_rpc
from neutron.conf import common as common_config
from neutron.conf.db import l3_hamode_db
from neutron.conf.plugins.ml2 import config as ml2_conf
from neutron.conf.plugins.ml2.drivers import linuxbridge as lb_conf
from neutron.conf.plugins.ml2.drivers.mech_sriov import agent_common as \
    sriov_conf
from neutron.conf.plugins.ml2.drivers import ovs_conf
@ -40,7 +39,6 @@ LOG = logging.getLogger(__name__)
def setup_conf():
    config.register_common_config_options()
    ovs_conf.register_ovs_agent_opts(cfg.CONF)
    lb_conf.register_linuxbridge_opts(cfg.CONF)
    sriov_conf.register_agent_sriov_nic_opts(cfg.CONF)
    ml2_conf.register_ml2_plugin_opts(cfg.CONF)
    securitygroups_rpc.register_securitygroups_opts(cfg.CONF)

@ -23,8 +23,6 @@ fdb_population_opt = [
               help=_("Comma-separated list of "
                      "<physical_network>:<network_device> tuples mapping "
                      "physical network names to the agent's node-specific "
                      "shared physical network device between "
                      "SR-IOV and OVS or SR-IOV and linux bridge"))
                      "shared physical network device between SR-IOV and "
                      "OVS"))
]

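The <physical_network>:<network_device> format in this help text is consumed with the same parse_mappings helper the agent code elsewhere in this change already imports from neutron_lib. A small sketch of how such a value parses; the device names are hypothetical:

from neutron_lib.utils import helpers

# 'physnet1:ens1f0' maps physical network physnet1 to the node's shared
# SR-IOV/OVS device ens1f0 (illustrative names only).
mappings = helpers.parse_mappings(['physnet1:ens1f0', 'physnet2:ens1f1'])
print(mappings)  # {'physnet1': 'ens1f0', 'physnet2': 'ens1f1'}
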
@ -16,13 +16,8 @@ from oslo_config import cfg
from neutron._i18n import _

EXPERIMENTAL_CFG_GROUP = 'experimental'
EXPERIMENTAL_LINUXBRIDGE = 'linuxbridge'
EXPERIMENTAL_IPV6_PD = 'ipv6_pd_enabled'
experimental_opts = [
    cfg.BoolOpt(EXPERIMENTAL_LINUXBRIDGE,
                default=False,
                help=_('Enable execution of the experimental Linuxbridge '
                       'agent.')),
    cfg.BoolOpt(EXPERIMENTAL_IPV6_PD,
                default=False,
                help=_('Enable execution of the experimental IPv6 Prefix '

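After this hunk only the IPv6 prefix delegation toggle remains in the [experimental] group. A minimal sketch of registering and reading the surviving option, reusing the names defined in the module above (an illustration, not neutron's startup path):

from oslo_config import cfg

conf = cfg.ConfigOpts()
# experimental_opts and EXPERIMENTAL_CFG_GROUP come from the module above.
conf.register_opts(experimental_opts, group=EXPERIMENTAL_CFG_GROUP)
conf(args=[])
print(conf.experimental.ipv6_pd_enabled)  # False unless overridden
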
@ -1,117 +0,0 @@
# Copyright 2012 Cisco Systems, Inc. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

from oslo_config import cfg

from neutron._i18n import _

DEFAULT_BRIDGE_MAPPINGS = []
DEFAULT_INTERFACE_MAPPINGS = []
DEFAULT_VXLAN_GROUP = '224.0.0.1'
DEFAULT_KERNEL_HZ_VALUE = 250  # [Hz]
DEFAULT_TC_TBF_LATENCY = 50  # [ms]

vxlan_opts = [
    cfg.BoolOpt('enable_vxlan', default=True,
                help=_("Enable VXLAN on the agent. Can be enabled when "
                       "agent is managed by ML2 plugin using Linux bridge "
                       "mechanism driver")),
    cfg.IntOpt('ttl',
               help=_("TTL for VXLAN interface protocol packets.")),
    cfg.IntOpt('tos',
               deprecated_for_removal=True,
               help=_("TOS for VXLAN interface protocol packets. This option "
                      "is deprecated in favor of the DSCP option in the AGENT "
                      "section and will be removed in a future release. "
                      "To convert the TOS value to DSCP, divide by 4.")),
    cfg.StrOpt('vxlan_group', default=DEFAULT_VXLAN_GROUP,
               help=_("Multicast group(s) for VXLAN interface. A range of "
                      "group addresses may be specified by using CIDR "
                      "notation. Specifying a range allows different VNIs to "
                      "use different group addresses, reducing or eliminating "
                      "spurious broadcast traffic to the tunnel endpoints. "
                      "To reserve a unique group for each possible "
                      "(24-bit) VNI, use a /8 such as 239.0.0.0/8. This "
                      "setting must be the same on all the agents.")),
    cfg.IPOpt('local_ip',
              help=_("IP address of local overlay (tunnel) network endpoint. "
                     "Use either an IPv4 or IPv6 address that resides on one "
                     "of the host network interfaces. The IP version of this "
                     "value must match the value of the 'overlay_ip_version' "
                     "option in the ML2 plug-in configuration file on the "
                     "neutron server node(s).")),
    cfg.PortOpt('udp_srcport_min', default=0,
                help=_("The minimum of the UDP source port range used for "
                       "VXLAN communication.")),
    cfg.PortOpt('udp_srcport_max', default=0,
                help=_("The maximum of the UDP source port range used for "
                       "VXLAN communication.")),
    cfg.PortOpt('udp_dstport',
                help=_("The UDP port used for VXLAN communication. By "
                       "default, the Linux kernel does not use the IANA "
                       "assigned standard value, so if you want to use it, "
                       "this option must be set to 4789. It is not set by "
                       "default because of backward compatibility.")),
    cfg.BoolOpt('l2_population', default=False,
                help=_("Extension to use alongside ML2 plugin's l2population "
                       "mechanism driver. It enables the plugin to populate "
                       "the VXLAN forwarding table.")),
    cfg.BoolOpt('arp_responder', default=False,
                help=_("Enable local ARP responder which provides local "
                       "responses instead of performing ARP broadcast into "
                       "the overlay. Enabling local ARP responder is not "
                       "fully compatible with the allowed-address-pairs "
                       "extension.")
                ),
    cfg.ListOpt('multicast_ranges',
                default=[],
                help=_("Optional comma-separated list of "
                       "<multicast address>:<vni_min>:<vni_max> triples "
                       "describing how to assign a multicast address to "
                       "VXLAN according to its VNI ID.")),
]

bridge_opts = [
    cfg.ListOpt('physical_interface_mappings',
                default=DEFAULT_INTERFACE_MAPPINGS,
                help=_("Comma-separated list of "
                       "<physical_network>:<physical_interface> tuples "
                       "mapping physical network names to the agent's "
                       "node-specific physical network interfaces to be used "
                       "for flat and VLAN networks. All physical networks "
                       "listed in network_vlan_ranges on the server should "
                       "have mappings to appropriate interfaces on each "
                       "agent.")),
    cfg.ListOpt('bridge_mappings',
                default=DEFAULT_BRIDGE_MAPPINGS,
                help=_("List of <physical_network>:<physical_bridge>")),
]

qos_options = [
    cfg.IntOpt('kernel_hz', default=DEFAULT_KERNEL_HZ_VALUE,
               help=_("Value of host kernel tick rate (hz) for calculating "
                      "minimum burst value in bandwidth limit rules for "
                      "a port with QoS. See kernel configuration file for "
                      "HZ value and tc-tbf manual for more information.")),
    cfg.IntOpt('tbf_latency', default=DEFAULT_TC_TBF_LATENCY,
               help=_("Value of latency (ms) for calculating size of queue "
                      "for a port with QoS. See tc-tbf manual for more "
                      "information."))
]


def register_linuxbridge_opts(cfg=cfg.CONF):
    cfg.register_opts(vxlan_opts, "VXLAN")
    cfg.register_opts(bridge_opts, "LINUX_BRIDGE")
    cfg.register_opts(qos_options, "QOS")

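For anyone grepping old configs after the removal: the multicast_ranges help above describes a VNI-to-group lookup. A sketch of that lookup as a plain helper (illustrative, not the removed agent's implementation):

def pick_multicast_group(vni, multicast_ranges):
    # Each entry is '<multicast address>:<vni_min>:<vni_max>'.
    for entry in multicast_ranges:
        addr, vni_min, vni_max = entry.split(':')
        if int(vni_min) <= vni <= int(vni_max):
            return addr
    return None  # no matching range configured

ranges = ['239.0.0.10:1000:1999', '239.0.0.20:2000:2999']
print(pick_multicast_group(1500, ranges))  # -> 239.0.0.10
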
@ -27,7 +27,7 @@ class Agent(model_base.BASEV2, model_base.HasId):
        model_base.BASEV2.__table_args__
    )

    # L3 agent, DHCP agent, OVS agent, LinuxBridge
    # L3 agent, DHCP agent, OVS agent
    agent_type = sa.Column(sa.String(255), nullable=False)
    binary = sa.Column(sa.String(255), nullable=False)
    # TOPIC is a fanout exchange topic

@ -45,7 +45,6 @@ import neutron.conf.extensions.conntrack_helper
import neutron.conf.plugins.ml2.config
import neutron.conf.plugins.ml2.drivers.agent
import neutron.conf.plugins.ml2.drivers.driver_type
import neutron.conf.plugins.ml2.drivers.linuxbridge
import neutron.conf.plugins.ml2.drivers.macvtap
import neutron.conf.plugins.ml2.drivers.mech_sriov.agent_common
import neutron.conf.plugins.ml2.drivers.mech_sriov.mech_sriov_conf
@ -242,27 +241,6 @@ def list_dhcp_agent_opts():
    ]


def list_linux_bridge_opts():
    return [
        ('DEFAULT',
         neutron.conf.service.RPC_EXTRA_OPTS),
        ('linux_bridge',
         neutron.conf.plugins.ml2.drivers.linuxbridge.bridge_opts),
        ('vxlan',
         neutron.conf.plugins.ml2.drivers.linuxbridge.vxlan_opts),
        ('agent',
         itertools.chain(
             neutron.conf.plugins.ml2.drivers.agent.agent_opts,
             neutron.conf.agent.agent_extensions_manager.
             AGENT_EXT_MANAGER_OPTS)
         ),
        ('securitygroup',
         neutron.conf.agent.securitygroups_rpc.security_group_opts),
        ('network_log',
         neutron.conf.services.logging.log_driver_opts)
    ]


def list_l3_agent_opts():
    return [
        ('DEFAULT',

@ -1,14 +1,12 @@
The Modular Layer 2 (ML2) plugin is a framework allowing OpenStack
Networking to simultaneously utilize the variety of layer 2 networking
technologies found in complex real-world data centers. It supports the
Open vSwitch, Linux bridge, and Hyper-V L2 agents, replacing and
deprecating the monolithic plugins previously associated with those
agents, and can also support hardware devices and SDN controllers. The
ML2 framework is intended to greatly simplify adding support for new
L2 networking technologies, requiring much less initial and ongoing
effort than would be required for an additional monolithic core
plugin. It is also intended to foster innovation through its
organization as optional driver modules.
The Modular Layer 2 (ML2) plugin is a framework allowing OpenStack Networking
to simultaneously utilize the variety of layer 2 networking technologies found
in complex real-world data centers. It supports the Open vSwitch L2 agent,
replacing and deprecating the monolithic plugins previously associated with
those agents, and can also support hardware devices and SDN controllers. The
ML2 framework is intended to greatly simplify adding support for new L2
networking technologies, requiring much less initial and ongoing effort than
would be required for an additional monolithic core plugin. It is also intended
to foster innovation through its organization as optional driver modules.

The ML2 plugin supports all the non-vendor-specific neutron API
extensions, and works with the standard neutron DHCP agent. It
@ -43,7 +41,7 @@ interact with external devices and controllers. Mechanism drivers are
also called as part of the port binding process, to determine whether
the associated mechanism can provide connectivity for the network, and
if so, the network segment and VIF driver to be used. The havana
release includes mechanism drivers for the Open vSwitch, Linux bridge,
release included mechanism drivers for the Open vSwitch, Linux bridge,
and Hyper-V L2 agents, and for vendor switches/controllers/etc.
It also includes an L2 Population mechanism driver that
can help optimize tunneled virtual network traffic.

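Since the README now points at the Open vSwitch agent only, a deployment moving off linuxbridge would typically carry an ml2_conf.ini along these lines (a minimal sketch, not a complete configuration; values are illustrative):

[ml2]
mechanism_drivers = openvswitch,l2population
type_drivers = flat,vlan,vxlan
tenant_network_types = vxlan
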
@ -121,9 +121,7 @@ class CommonAgentManagerBase(metaclass=abc.ABCMeta):
        This value will be stored in the Plug-in and be part of the
        device_details.

        Typically this list is retrieved from the sysfs. E.g. for linuxbridge
        it returns all names of devices of type 'tap' that start with a certain
        prefix.
        Typically this list is retrieved from the sysfs.

        :return: set -- the set of all devices e.g. ['tap1', 'tap2']
        """
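The surviving docstring still says the device list typically comes from sysfs. A sketch of what a tap-based implementation of get_all_devices() can look like (illustrative, not the removed linuxbridge code):

import os

def get_all_devices(prefix='tap'):
    # Names of all host network devices carrying the agent's prefix,
    # as exposed under /sys/class/net.
    return {dev for dev in os.listdir('/sys/class/net')
            if dev.startswith(prefix)}

print(get_all_devices())  # e.g. {'tap1', 'tap2'}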