doc: Import installation guide

Import all docs from openstack-manuals.

Part of bp: doc-migration

Change-Id: If1fa15f5495a8a207042e7a43d34d32671c59ee1
Co-Authored-By: Stephen Finucane <sfinucan@redhat.com>
Authored by chenxing 2017-06-26 10:56:29 +00:00; committed by Sean Dague
parent 01b3ae2e70
commit 036642ce33
23 changed files with 2731 additions and 0 deletions


@ -235,6 +235,14 @@ Module Reference
.. toctree::
:hidden:
Installation Guide
==================
.. toctree::
:maxdepth: 2
install/index
Metadata
========


@ -0,0 +1,288 @@
Install and configure a compute node
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
This section describes how to install and configure the Compute service on a
compute node. The service supports several hypervisors to deploy instances or
virtual machines (VMs). For simplicity, this configuration uses the Quick
EMUlator (QEMU) hypervisor with the kernel-based VM (KVM) extension on compute
nodes that support hardware acceleration for virtual machines. On legacy
hardware, this configuration uses the generic QEMU hypervisor. You can follow
these instructions with minor modifications to horizontally scale your
environment with additional compute nodes.
.. note::
This section assumes that you are following the instructions in this guide
step-by-step to configure the first compute node. If you want to configure
additional compute nodes, prepare them in a similar fashion to the first
compute node in the :ref:`example architectures
<overview-example-architectures>` section. Each additional compute node
requires a unique IP address.
Install and configure components
--------------------------------
.. include:: shared/note_configuration_vary_by_distribution.rst
#. Install the packages:
.. code-block:: console
# zypper install openstack-nova-compute genisoimage qemu-kvm libvirt
#. Edit the ``/etc/nova/nova.conf`` file and complete the following actions:
* In the ``[DEFAULT]`` section, enable only the compute and metadata APIs:
.. path /etc/nova/nova.conf
.. code-block:: ini
[DEFAULT]
# ...
enabled_apis = osapi_compute,metadata
* In the ``[DEFAULT]`` section, set the ``compute_driver``:
.. path /etc/nova/nova.conf
.. code-block:: ini
[DEFAULT]
# ...
compute_driver = libvirt.LibvirtDriver
* In the ``[DEFAULT]`` section, configure ``RabbitMQ`` message queue access:
.. path /etc/nova/nova.conf
.. code-block:: ini
[DEFAULT]
# ...
transport_url = rabbit://openstack:RABBIT_PASS@controller
Replace ``RABBIT_PASS`` with the password you chose for the ``openstack``
account in ``RabbitMQ``.
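If you prefer to script these configuration edits rather than opening an editor, a minimal sketch using the ``crudini`` utility can set each option non-interactively. This assumes the ``crudini`` package is installed; it is not part of this guide:
.. code-block:: console
# crudini --set /etc/nova/nova.conf DEFAULT transport_url rabbit://openstack:RABBIT_PASS@controller
The same pattern applies to every option described below.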
* In the ``[api]`` and ``[keystone_authtoken]`` sections, configure Identity
service access:
.. path /etc/nova/nova.conf
.. code-block:: ini
[api]
# ...
auth_strategy = keystone
[keystone_authtoken]
# ...
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = nova
password = NOVA_PASS
Replace ``NOVA_PASS`` with the password you chose for the ``nova`` user in
the Identity service.
.. note::
Comment out or remove any other options in the ``[keystone_authtoken]``
section.
* In the ``[DEFAULT]`` section, configure the ``my_ip`` option:
.. path /etc/nova/nova.conf
.. code-block:: ini
[DEFAULT]
# ...
my_ip = MANAGEMENT_INTERFACE_IP_ADDRESS
Replace ``MANAGEMENT_INTERFACE_IP_ADDRESS`` with the IP address of the
management network interface on your compute node, typically ``10.0.0.31``
for the first node in the :ref:`example architecture
<overview-example-architectures>`.
* In the ``[DEFAULT]`` section, enable support for the Networking service:
.. path /etc/nova/nova.conf
.. code-block:: ini
[DEFAULT]
# ...
use_neutron = True
firewall_driver = nova.virt.firewall.NoopFirewallDriver
.. note::
By default, Compute uses an internal firewall service. Since
Networking includes a firewall service, you must disable the Compute
firewall service by using the
``nova.virt.firewall.NoopFirewallDriver`` firewall driver.
* In the ``[vnc]`` section, enable and configure remote console access:
.. path /etc/nova/nova.conf
.. code-block:: ini
[vnc]
# ...
enabled = True
vncserver_listen = 0.0.0.0
vncserver_proxyclient_address = $my_ip
novncproxy_base_url = http://controller:6080/vnc_auto.html
The server component listens on all IP addresses and the proxy
component only listens on the management interface IP address of
the compute node. The base URL indicates the location where you
can use a web browser to access remote consoles of instances
on this compute node.
.. note::
If the web browser you use to access remote consoles resides on
a host that cannot resolve the ``controller`` hostname,
you must replace ``controller`` with the management
interface IP address of the controller node.
* In the ``[glance]`` section, configure the location of the Image service
API:
.. path /etc/nova/nova.conf
.. code-block:: ini
[glance]
# ...
api_servers = http://controller:9292
* In the ``[oslo_concurrency]`` section, configure the lock path:
.. path /etc/nova/nova.conf
.. code-block:: ini
[oslo_concurrency]
# ...
lock_path = /var/run/nova
* In the ``[placement]`` section, configure the Placement API:
.. path /etc/nova/nova.conf
.. code-block:: ini
[placement]
# ...
os_region_name = RegionOne
project_domain_name = Default
project_name = service
auth_type = password
user_domain_name = Default
auth_url = http://controller:35357/v3
username = placement
password = PLACEMENT_PASS
Replace ``PLACEMENT_PASS`` with the password you chose for the
``placement`` user in the Identity service. Comment out any other options
in the ``[placement]`` section.
#. Ensure the kernel module ``nbd`` is loaded:
.. code-block:: console
# modprobe nbd
#. Ensure the module loads on every boot by adding ``nbd`` to the
``/etc/modules-load.d/nbd.conf`` file.
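For example, you can create that file with a single command:
.. code-block:: console
# echo nbd > /etc/modules-load.d/nbd.conf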
Finalize installation
---------------------
#. Determine whether your compute node supports hardware acceleration for
virtual machines:
.. code-block:: console
$ egrep -c '(vmx|svm)' /proc/cpuinfo
If this command returns a value of ``one or greater``, your compute node
supports hardware acceleration, which typically requires no additional
configuration.
If this command returns a value of ``zero``, your compute node does not
support hardware acceleration and you must configure ``libvirt`` to use QEMU
instead of KVM.
* Edit the ``[libvirt]`` section in the ``/etc/nova/nova.conf`` file as
follows:
.. path /etc/nova/nova.conf
.. code-block:: ini
[libvirt]
# ...
virt_type = qemu
#. Start the Compute service, including its dependencies, and configure them to
start automatically when the system boots:
.. code-block:: console
# systemctl enable libvirtd.service openstack-nova-compute.service
# systemctl start libvirtd.service openstack-nova-compute.service
.. note::
If the ``nova-compute`` service fails to start, check
``/var/log/nova/nova-compute.log``. The error message ``AMQP server on
controller:5672 is unreachable`` likely indicates that the firewall on the
controller node is preventing access to port 5672. Configure the firewall
to open port 5672 on the controller node and restart the ``nova-compute``
service on the compute node.
Add the compute node to the cell database
-----------------------------------------
.. important::
Run the following commands on the **controller** node.
#. Source the admin credentials to enable admin-only CLI commands, then confirm
there are compute hosts in the database:
.. code-block:: console
$ . admin-openrc
$ openstack compute service list --service nova-compute
+----+-------+--------------+------+-------+---------+----------------------------+
| ID | Host | Binary | Zone | State | Status | Updated At |
+----+-------+--------------+------+-------+---------+----------------------------+
| 1 | node1 | nova-compute | nova | up | enabled | 2017-04-14T15:30:44.000000 |
+----+-------+--------------+------+-------+---------+----------------------------+
#. Discover compute hosts:
.. code-block:: console
# su -s /bin/sh -c "nova-manage cell_v2 discover_hosts --verbose" nova
Found 2 cell mappings.
Skipping cell0 since it does not contain hosts.
Getting compute nodes from cell 'cell1': ad5a5985-a719-4567-98d8-8d148aaae4bc
Found 1 computes in cell: ad5a5985-a719-4567-98d8-8d148aaae4bc
Checking host mapping for compute host 'compute': fe58ddc1-1d65-4f87-9456-bc040dc106b3
Creating host mapping for compute host 'compute': fe58ddc1-1d65-4f87-9456-bc040dc106b3
.. note::
When you add new compute nodes, you must run ``nova-manage cell_v2
discover_hosts`` on the controller node to register those new compute
nodes. Alternatively, you can set an appropriate interval in
``/etc/nova/nova.conf``:
.. code-block:: ini
[scheduler]
discover_hosts_in_cells_interval = 300


@ -0,0 +1,270 @@
Install and configure a compute node
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
This section describes how to install and configure the Compute service on a
compute node. The service supports several hypervisors to deploy instances or
virtual machines (VMs). For simplicity, this configuration uses the Quick
EMUlator (QEMU) hypervisor with the kernel-based VM (KVM) extension on compute
nodes that support hardware acceleration for virtual machines. On legacy
hardware, this configuration uses the generic QEMU hypervisor. You can follow
these instructions with minor modifications to horizontally scale your
environment with additional compute nodes.
.. note::
This section assumes that you are following the instructions in this guide
step-by-step to configure the first compute node. If you want to configure
additional compute nodes, prepare them in a similar fashion to the first
compute node in the :ref:`example architectures
<overview-example-architectures>` section. Each additional compute node
requires a unique IP address.
Install and configure components
--------------------------------
.. include:: shared/note_configuration_vary_by_distribution.rst
#. Install the packages:
.. code-block:: console
# yum install openstack-nova-compute
#. Edit the ``/etc/nova/nova.conf`` file and complete the following actions:
* In the ``[DEFAULT]`` section, enable only the compute and
metadata APIs:
.. path /etc/nova/nova.conf
.. code-block:: ini
[DEFAULT]
# ...
enabled_apis = osapi_compute,metadata
* In the ``[DEFAULT]`` section, configure ``RabbitMQ`` message queue access:
.. path /etc/nova/nova.conf
.. code-block:: ini
[DEFAULT]
# ...
transport_url = rabbit://openstack:RABBIT_PASS@controller
Replace ``RABBIT_PASS`` with the password you chose for the ``openstack``
account in ``RabbitMQ``.
* In the ``[api]`` and ``[keystone_authtoken]`` sections, configure Identity
service access:
.. path /etc/nova/nova.conf
.. code-block:: ini
[api]
# ...
auth_strategy = keystone
[keystone_authtoken]
# ...
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = nova
password = NOVA_PASS
Replace ``NOVA_PASS`` with the password you chose for the ``nova`` user in
the Identity service.
.. note::
Comment out or remove any other options in the ``[keystone_authtoken]``
section.
* In the ``[DEFAULT]`` section, configure the ``my_ip`` option:
.. path /etc/nova/nova.conf
.. code-block:: ini
[DEFAULT]
# ...
my_ip = MANAGEMENT_INTERFACE_IP_ADDRESS
Replace ``MANAGEMENT_INTERFACE_IP_ADDRESS`` with the IP address of the
management network interface on your compute node, typically 10.0.0.31 for
the first node in the :ref:`example architecture
<overview-example-architectures>`.
* In the ``[DEFAULT]`` section, enable support for the Networking service:
.. path /etc/nova/nova.conf
.. code-block:: ini
[DEFAULT]
# ...
use_neutron = True
firewall_driver = nova.virt.firewall.NoopFirewallDriver
.. note::
By default, Compute uses an internal firewall service. Since Networking
includes a firewall service, you must disable the Compute firewall
service by using the ``nova.virt.firewall.NoopFirewallDriver`` firewall
driver.
* In the ``[vnc]`` section, enable and configure remote console access:
.. path /etc/nova/nova.conf
.. code-block:: ini
[vnc]
# ...
enabled = True
vncserver_listen = 0.0.0.0
vncserver_proxyclient_address = $my_ip
novncproxy_base_url = http://controller:6080/vnc_auto.html
The server component listens on all IP addresses and the proxy component
only listens on the management interface IP address of the compute node.
The base URL indicates the location where you can use a web browser to
access remote consoles of instances on this compute node.
.. note::
If the web browser you use to access remote consoles resides on a host that
cannot resolve the ``controller`` hostname, you must replace
``controller`` with the management interface IP address of the
controller node.
* In the ``[glance]`` section, configure the location of the Image service
API:
.. path /etc/nova/nova.conf
.. code-block:: ini
[glance]
# ...
api_servers = http://controller:9292
* In the ``[oslo_concurrency]`` section, configure the lock path:
.. path /etc/nova/nova.conf
.. code-block:: ini
[oslo_concurrency]
# ...
lock_path = /var/lib/nova/tmp
* In the ``[placement]`` section, configure the Placement API:
.. path /etc/nova/nova.conf
.. code-block:: ini
[placement]
# ...
os_region_name = RegionOne
project_domain_name = Default
project_name = service
auth_type = password
user_domain_name = Default
auth_url = http://controller:35357/v3
username = placement
password = PLACEMENT_PASS
Replace ``PLACEMENT_PASS`` with the password you chose for the
``placement`` user in the Identity service. Comment out any other options
in the ``[placement]`` section.
Finalize installation
---------------------
#. Determine whether your compute node supports hardware acceleration for
virtual machines:
.. code-block:: console
$ egrep -c '(vmx|svm)' /proc/cpuinfo
If this command returns a value of ``one or greater``, your compute node
supports hardware acceleration, which typically requires no additional
configuration.
If this command returns a value of ``zero``, your compute node does not
support hardware acceleration and you must configure ``libvirt`` to use QEMU
instead of KVM.
* Edit the ``[libvirt]`` section in the ``/etc/nova/nova.conf`` file as
follows:
.. path /etc/nova/nova.conf
.. code-block:: ini
[libvirt]
# ...
virt_type = qemu
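As an optional cross-check of the ``cpuinfo`` test above, you can list the loaded KVM kernel modules; on nodes with hardware acceleration, ``kvm`` plus ``kvm_intel`` or ``kvm_amd`` should normally appear:
.. code-block:: console
$ lsmod | grep kvm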
#. Start the Compute service, including its dependencies, and configure them to
start automatically when the system boots:
.. code-block:: console
# systemctl enable libvirtd.service openstack-nova-compute.service
# systemctl start libvirtd.service openstack-nova-compute.service
.. note::
If the ``nova-compute`` service fails to start, check
``/var/log/nova/nova-compute.log``. The error message ``AMQP server on
controller:5672 is unreachable`` likely indicates that the firewall on the
controller node is preventing access to port 5672. Configure the firewall
to open port 5672 on the controller node and restart the ``nova-compute``
service on the compute node.
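For example, if the controller node runs ``firewalld`` (an assumption; use the equivalent commands for whatever firewall your distribution ships), you can open the port as follows:
.. code-block:: console
# firewall-cmd --permanent --add-port=5672/tcp
# firewall-cmd --reload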
Add the compute node to the cell database
-----------------------------------------
.. important::
Run the following commands on the **controller** node.
#. Source the admin credentials to enable admin-only CLI commands, then confirm
there are compute hosts in the database:
.. code-block:: console
$ . admin-openrc
$ openstack compute service list --service nova-compute
+----+-------+--------------+------+-------+---------+----------------------------+
| ID | Host | Binary | Zone | State | Status | Updated At |
+----+-------+--------------+------+-------+---------+----------------------------+
| 1 | node1 | nova-compute | nova | up | enabled | 2017-04-14T15:30:44.000000 |
+----+-------+--------------+------+-------+---------+----------------------------+
#. Discover compute hosts:
.. code-block:: console
# su -s /bin/sh -c "nova-manage cell_v2 discover_hosts --verbose" nova
Found 2 cell mappings.
Skipping cell0 since it does not contain hosts.
Getting compute nodes from cell 'cell1': ad5a5985-a719-4567-98d8-8d148aaae4bc
Found 1 computes in cell: ad5a5985-a719-4567-98d8-8d148aaae4bc
Checking host mapping for compute host 'compute': fe58ddc1-1d65-4f87-9456-bc040dc106b3
Creating host mapping for compute host 'compute': fe58ddc1-1d65-4f87-9456-bc040dc106b3
.. note::
When you add new compute nodes, you must run ``nova-manage cell_v2
discover_hosts`` on the controller node to register those new compute
nodes. Alternatively, you can set an appropriate interval in
``/etc/nova/nova.conf``:
.. code-block:: ini
[scheduler]
discover_hosts_in_cells_interval = 300


@ -0,0 +1,265 @@
Install and configure a compute node
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
This section describes how to install and configure the Compute service on a
compute node. The service supports several hypervisors to deploy instances or
virtual machines (VMs). For simplicity, this configuration uses the Quick
EMUlator (QEMU) hypervisor with the kernel-based VM (KVM) extension on compute
nodes that support hardware acceleration for virtual machines. On legacy
hardware, this configuration uses the generic QEMU hypervisor. You can follow
these instructions with minor modifications to horizontally scale your
environment with additional compute nodes.
.. note::
This section assumes that you are following the instructions in this guide
step-by-step to configure the first compute node. If you want to configure
additional compute nodes, prepare them in a similar fashion to the first
compute node in the :ref:`example architectures
<overview-example-architectures>` section. Each additional compute node
requires a unique IP address.
Install and configure components
--------------------------------
.. include:: shared/note_configuration_vary_by_distribution.rst
#. Install the packages:
.. code-block:: console
# apt install nova-compute
#. Edit the ``/etc/nova/nova.conf`` file and complete the following actions:
* In the ``[DEFAULT]`` section, configure ``RabbitMQ`` message queue access:
.. path /etc/nova/nova.conf
.. code-block:: ini
[DEFAULT]
# ...
transport_url = rabbit://openstack:RABBIT_PASS@controller
Replace ``RABBIT_PASS`` with the password you chose for the ``openstack``
account in ``RabbitMQ``.
* In the ``[api]`` and ``[keystone_authtoken]`` sections, configure Identity
service access:
.. path /etc/nova/nova.conf
.. code-block:: ini
[api]
# ...
auth_strategy = keystone
[keystone_authtoken]
# ...
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = nova
password = NOVA_PASS
Replace ``NOVA_PASS`` with the password you chose for the ``nova`` user in
the Identity service.
.. note::
Comment out or remove any other options in the
``[keystone_authtoken]`` section.
* In the ``[DEFAULT]`` section, configure the ``my_ip`` option:
.. path /etc/nova/nova.conf
.. code-block:: ini
[DEFAULT]
# ...
my_ip = MANAGEMENT_INTERFACE_IP_ADDRESS
Replace ``MANAGEMENT_INTERFACE_IP_ADDRESS`` with the IP address of the
management network interface on your compute node, typically 10.0.0.31 for
the first node in the :ref:`example architecture
<overview-example-architectures>`.
* In the ``[DEFAULT]`` section, enable support for the Networking service:
.. path /etc/nova/nova.conf
.. code-block:: ini
[DEFAULT]
# ...
use_neutron = True
firewall_driver = nova.virt.firewall.NoopFirewallDriver
.. note::
By default, Compute uses an internal firewall service. Since Networking
includes a firewall service, you must disable the Compute firewall
service by using the ``nova.virt.firewall.NoopFirewallDriver`` firewall
driver.
* In the ``[vnc]`` section, enable and configure remote console access:
.. path /etc/nova/nova.conf
.. code-block:: ini
[vnc]
# ...
enabled = True
vncserver_listen = 0.0.0.0
vncserver_proxyclient_address = $my_ip
novncproxy_base_url = http://controller:6080/vnc_auto.html
The server component listens on all IP addresses and the proxy component
only listens on the management interface IP address of the compute node.
The base URL indicates the location where you can use a web browser to
access remote consoles of instances on this compute node.
.. note::
If the web browser you use to access remote consoles resides on a host that
cannot resolve the ``controller`` hostname, you must replace
``controller`` with the management interface IP address of the
controller node.
* In the ``[glance]`` section, configure the location of the Image service
API:
.. path /etc/nova/nova.conf
.. code-block:: ini
[glance]
# ...
api_servers = http://controller:9292
* In the ``[oslo_concurrency]`` section, configure the lock path:
.. path /etc/nova/nova.conf
.. code-block:: ini
[oslo_concurrency]
# ...
lock_path = /var/lib/nova/tmp
.. todo::
https://bugs.launchpad.net/ubuntu/+source/nova/+bug/1506667
* Due to a packaging bug, remove the ``log_dir`` option from the
``[DEFAULT]`` section.
* In the ``[placement]`` section, configure the Placement API:
.. path /etc/nova/nova.conf
.. code-block:: ini
[placement]
# ...
os_region_name = RegionOne
project_domain_name = Default
project_name = service
auth_type = password
user_domain_name = Default
auth_url = http://controller:35357/v3
username = placement
password = PLACEMENT_PASS
Replace ``PLACEMENT_PASS`` with the password you chose for the
``placement`` user in the Identity service. Comment out any other options
in the ``[placement]`` section.
Finalize installation
---------------------
#. Determine whether your compute node supports hardware acceleration for
virtual machines:
.. code-block:: console
$ egrep -c '(vmx|svm)' /proc/cpuinfo
If this command returns a value of ``one or greater``, your compute node
supports hardware acceleration, which typically requires no additional
configuration.
If this command returns a value of ``zero``, your compute node does not
support hardware acceleration and you must configure ``libvirt`` to use QEMU
instead of KVM.
* Edit the ``[libvirt]`` section in the ``/etc/nova/nova-compute.conf`` file as
follows:
.. path /etc/nova/nova-compute.conf
.. code-block:: ini
[libvirt]
# ...
virt_type = qemu
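As an alternative to the ``cpuinfo`` test above, Ubuntu provides the ``kvm-ok`` utility in the ``cpu-checker`` package (assuming that package is installed), which reports whether KVM acceleration can be used:
.. code-block:: console
$ kvm-ok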
#. Restart the Compute service:
.. code-block:: console
# service nova-compute restart
.. note::
If the ``nova-compute`` service fails to start, check
``/var/log/nova/nova-compute.log``. The error message ``AMQP server on
controller:5672 is unreachable`` likely indicates that the firewall on the
controller node is preventing access to port 5672. Configure the firewall
to open port 5672 on the controller node and restart the ``nova-compute``
service on the compute node.
Add the compute node to the cell database
-----------------------------------------
.. important::
Run the following commands on the **controller** node.
#. Source the admin credentials to enable admin-only CLI commands, then confirm
there are compute hosts in the database:
.. code-block:: console
$ . admin-openrc
$ openstack compute service list --service nova-compute
+----+-------+--------------+------+-------+---------+----------------------------+
| ID | Host | Binary | Zone | State | Status | Updated At |
+----+-------+--------------+------+-------+---------+----------------------------+
| 1 | node1 | nova-compute | nova | up | enabled | 2017-04-14T15:30:44.000000 |
+----+-------+--------------+------+-------+---------+----------------------------+
#. Discover compute hosts:
.. code-block:: console
# su -s /bin/sh -c "nova-manage cell_v2 discover_hosts --verbose" nova
Found 2 cell mappings.
Skipping cell0 since it does not contain hosts.
Getting compute nodes from cell 'cell1': ad5a5985-a719-4567-98d8-8d148aaae4bc
Found 1 computes in cell: ad5a5985-a719-4567-98d8-8d148aaae4bc
Checking host mapping for compute host 'compute': fe58ddc1-1d65-4f87-9456-bc040dc106b3
Creating host mapping for compute host 'compute': fe58ddc1-1d65-4f87-9456-bc040dc106b3
.. note::
When you add new compute nodes, you must run ``nova-manage cell_v2
discover_hosts`` on the controller node to register those new compute
nodes. Alternatively, you can set an appropriate interval in
``/etc/nova/nova.conf``:
.. code-block:: ini
[scheduler]
discover_hosts_in_cells_interval = 300

View File

@ -0,0 +1,25 @@
Install and configure a compute node
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
This section describes how to install and configure the Compute service on a
compute node. The service supports several hypervisors to deploy instances or
virtual machines (VMs). For simplicity, this configuration uses the Quick
EMUlator (QEMU) hypervisor with the kernel-based VM (KVM) extension on compute
nodes that support hardware acceleration for virtual machines. On legacy
hardware, this configuration uses the generic QEMU hypervisor. You can follow
these instructions with minor modifications to horizontally scale your
environment with additional compute nodes.
.. note::
This section assumes that you are following the instructions in this guide
step-by-step to configure the first compute node. If you want to configure
additional compute nodes, prepare them in a similar fashion to the first
compute node in the :ref:`example architectures
<overview-example-architectures>` section. Each additional compute node
requires a unique IP address.
.. toctree::
:glob:
compute-install-*


@ -0,0 +1,486 @@
Install and configure controller node
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
This section describes how to install and configure the Compute service,
code-named nova, on the controller node.
Prerequisites
-------------
Before you install and configure the Compute service, you must create
databases, service credentials, and API endpoints.
#. To create the databases, complete these steps:
* Use the database access client to connect to the database server as the
``root`` user:
.. code-block:: console
$ mysql -u root -p
* Create the ``nova_api``, ``nova``, and ``nova_cell0`` databases:
.. code-block:: console
MariaDB [(none)]> CREATE DATABASE nova_api;
MariaDB [(none)]> CREATE DATABASE nova;
MariaDB [(none)]> CREATE DATABASE nova_cell0;
* Grant proper access to the databases:
.. code-block:: console
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'localhost' \
IDENTIFIED BY 'NOVA_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'%' \
IDENTIFIED BY 'NOVA_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' \
IDENTIFIED BY 'NOVA_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' \
IDENTIFIED BY 'NOVA_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'localhost' \
IDENTIFIED BY 'NOVA_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'%' \
IDENTIFIED BY 'NOVA_DBPASS';
Replace ``NOVA_DBPASS`` with a suitable password.
* Exit the database access client.
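As an optional sanity check, you can reconnect as the ``nova`` user to confirm the grants took effect; this sketch assumes you run it on the controller node and enter ``NOVA_DBPASS`` at the prompt:
.. code-block:: console
$ mysql -u nova -p -e "SHOW DATABASES;"
The ``nova_api``, ``nova``, and ``nova_cell0`` databases should appear in the output.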
#. Source the ``admin`` credentials to gain access to admin-only CLI commands:
.. code-block:: console
$ . admin-openrc
#. Create the Compute service credentials:
* Create the ``nova`` user:
.. code-block:: console
$ openstack user create --domain default --password-prompt nova
User Password:
Repeat User Password:
+---------------------+----------------------------------+
| Field | Value |
+---------------------+----------------------------------+
| domain_id | default |
| enabled | True |
| id | 8a7dbf5279404537b1c7b86c033620fe |
| name | nova |
| options | {} |
| password_expires_at | None |
+---------------------+----------------------------------+
* Add the ``admin`` role to the ``nova`` user:
.. code-block:: console
$ openstack role add --project service --user nova admin
.. note::
This command provides no output.
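If you want to confirm the assignment anyway, you can list the role assignments for the user; this is a hedged check and the output format varies between releases:
.. code-block:: console
$ openstack role assignment list --user nova --project service --names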
* Create the ``nova`` service entity:
.. code-block:: console
$ openstack service create --name nova \
--description "OpenStack Compute" compute
+-------------+----------------------------------+
| Field | Value |
+-------------+----------------------------------+
| description | OpenStack Compute |
| enabled | True |
| id | 060d59eac51b4594815603d75a00aba2 |
| name | nova |
| type | compute |
+-------------+----------------------------------+
#. Create the Compute API service endpoints:
.. code-block:: console
$ openstack endpoint create --region RegionOne \
compute public http://controller:8774/v2.1
+--------------+-------------------------------------------+
| Field | Value |
+--------------+-------------------------------------------+
| enabled | True |
| id | 3c1caa473bfe4390a11e7177894bcc7b |
| interface | public |
| region | RegionOne |
| region_id | RegionOne |
| service_id | 060d59eac51b4594815603d75a00aba2 |
| service_name | nova |
| service_type | compute |
| url | http://controller:8774/v2.1 |
+--------------+-------------------------------------------+
$ openstack endpoint create --region RegionOne \
compute internal http://controller:8774/v2.1
+--------------+-------------------------------------------+
| Field | Value |
+--------------+-------------------------------------------+
| enabled | True |
| id | e3c918de680746a586eac1f2d9bc10ab |
| interface | internal |
| region | RegionOne |
| region_id | RegionOne |
| service_id | 060d59eac51b4594815603d75a00aba2 |
| service_name | nova |
| service_type | compute |
| url | http://controller:8774/v2.1 |
+--------------+-------------------------------------------+
$ openstack endpoint create --region RegionOne \
compute admin http://controller:8774/v2.1
+--------------+-------------------------------------------+
| Field | Value |
+--------------+-------------------------------------------+
| enabled | True |
| id | 38f7af91666a47cfb97b4dc790b94424 |
| interface | admin |
| region | RegionOne |
| region_id | RegionOne |
| service_id | 060d59eac51b4594815603d75a00aba2 |
| service_name | nova |
| service_type | compute |
| url | http://controller:8774/v2.1 |
+--------------+-------------------------------------------+
#. Create a Placement service user using your chosen ``PLACEMENT_PASS``:
.. code-block:: console
$ openstack user create --domain default --password-prompt placement
User Password:
Repeat User Password:
+---------------------+----------------------------------+
| Field | Value |
+---------------------+----------------------------------+
| domain_id | default |
| enabled | True |
| id | fa742015a6494a949f67629884fc7ec8 |
| name | placement |
| options | {} |
| password_expires_at | None |
+---------------------+----------------------------------+
#. Add the Placement user to the service project with the admin role:
.. code-block:: console
$ openstack role add --project service --user placement admin
.. note::
This command provides no output.
#. Create the Placement API entry in the service catalog:
.. code-block:: console
$ openstack service create --name placement --description "Placement API" placement
+-------------+----------------------------------+
| Field | Value |
+-------------+----------------------------------+
| description | Placement API |
| enabled | True |
| id | 2d1a27022e6e4185b86adac4444c495f |
| name | placement |
| type | placement |
+-------------+----------------------------------+
#. Create the Placement API service endpoints:
.. code-block:: console
$ openstack endpoint create --region RegionOne placement public http://controller:8778
+--------------+----------------------------------+
| Field | Value |
+--------------+----------------------------------+
| enabled | True |
| id | 2b1b2637908b4137a9c2e0470487cbc0 |
| interface | public |
| region | RegionOne |
| region_id | RegionOne |
| service_id | 2d1a27022e6e4185b86adac4444c495f |
| service_name | placement |
| service_type | placement |
| url | http://controller:8778 |
+--------------+----------------------------------+
$ openstack endpoint create --region RegionOne placement internal http://controller:8778
+--------------+----------------------------------+
| Field | Value |
+--------------+----------------------------------+
| enabled | True |
| id | 02bcda9a150a4bd7993ff4879df971ab |
| interface | internal |
| region | RegionOne |
| region_id | RegionOne |
| service_id | 2d1a27022e6e4185b86adac4444c495f |
| service_name | placement |
| service_type | placement |
| url | http://controller:8778 |
+--------------+----------------------------------+
$ openstack endpoint create --region RegionOne placement admin http://controller:8778
+--------------+----------------------------------+
| Field | Value |
+--------------+----------------------------------+
| enabled | True |
| id | 3d71177b9e0f406f98cbff198d74b182 |
| interface | admin |
| region | RegionOne |
| region_id | RegionOne |
| service_id | 2d1a27022e6e4185b86adac4444c495f |
| service_name | placement |
| service_type | placement |
| url | http://controller:8778 |
+--------------+----------------------------------+
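At this point, an optional way to confirm that the ``nova`` and ``placement`` services and their endpoints registered correctly is to inspect the service catalog:
.. code-block:: console
$ openstack catalog list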
Install and configure components
--------------------------------
.. include:: shared/note_configuration_vary_by_distribution.rst
.. note::
As of the Newton release, SUSE OpenStack packages are shipped with the
upstream default configuration files. For example, ``/etc/nova/nova.conf``
has customizations in ``/etc/nova/nova.conf.d/010-nova.conf``. While the
following instructions modify the default configuration file, adding a new
file in ``/etc/nova/nova.conf.d`` achieves the same result.
#. Install the packages:
.. code-block:: console
# zypper install openstack-nova-api openstack-nova-scheduler \
openstack-nova-conductor openstack-nova-consoleauth \
openstack-nova-novncproxy openstack-nova-placement-api \
iptables
#. Edit the ``/etc/nova/nova.conf`` file and complete the following actions:
* In the ``[DEFAULT]`` section, enable only the compute and metadata
APIs:
.. path /etc/nova/nova.conf
.. code-block:: ini
[DEFAULT]
# ...
enabled_apis = osapi_compute,metadata
* In the ``[api_database]`` and ``[database]`` sections, configure database
access:
.. path /etc/nova/nova.conf
.. code-block:: ini
[api_database]
# ...
connection = mysql+pymysql://nova:NOVA_DBPASS@controller/nova_api
[database]
# ...
connection = mysql+pymysql://nova:NOVA_DBPASS@controller/nova
Replace ``NOVA_DBPASS`` with the password you chose for the Compute
databases.
* In the ``[DEFAULT]`` section, configure ``RabbitMQ`` message queue access:
.. path /etc/nova/nova.conf
.. code-block:: ini
[DEFAULT]
# ...
transport_url = rabbit://openstack:RABBIT_PASS@controller
Replace ``RABBIT_PASS`` with the password you chose for the ``openstack``
account in ``RabbitMQ``.
* In the ``[api]`` and ``[keystone_authtoken]`` sections, configure Identity
service access:
.. path /etc/nova/nova.conf
.. code-block:: ini
[api]
# ...
auth_strategy = keystone
[keystone_authtoken]
# ...
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = nova
password = NOVA_PASS
Replace ``NOVA_PASS`` with the password you chose for the ``nova`` user in
the Identity service.
.. note::
Comment out or remove any other options in the ``[keystone_authtoken]``
section.
* In the ``[DEFAULT]`` section, configure the ``my_ip`` option to use the
management interface IP address of the controller node:
.. path /etc/nova/nova.conf
.. code-block:: ini
[DEFAULT]
# ...
my_ip = 10.0.0.11
* In the ``[DEFAULT]`` section, enable support for the Networking service:
.. path /etc/nova/nova.conf
.. code-block:: ini
[DEFAULT]
# ...
use_neutron = True
firewall_driver = nova.virt.firewall.NoopFirewallDriver
.. note::
By default, Compute uses an internal firewall driver. Since the
Networking service includes a firewall driver, you must disable the
Compute firewall driver by using the
``nova.virt.firewall.NoopFirewallDriver`` firewall driver.
* In the ``[vnc]`` section, configure the VNC proxy to use the management
interface IP address of the controller node:
.. path /etc/nova/nova.conf
.. code-block:: ini
[vnc]
enabled = true
# ...
vncserver_listen = $my_ip
vncserver_proxyclient_address = $my_ip
* In the ``[glance]`` section, configure the location of the Image service
API:
.. path /etc/nova/nova.conf
.. code-block:: ini
[glance]
# ...
api_servers = http://controller:9292
* In the ``[oslo_concurrency]`` section, configure the lock path:
.. path /etc/nova/nova.conf
.. code-block:: ini
[oslo_concurrency]
# ...
lock_path = /var/run/nova
* In the ``[placement]`` section, configure the Placement API:
.. path /etc/nova/nova.conf
.. code-block:: ini
[placement]
# ...
os_region_name = RegionOne
project_domain_name = Default
project_name = service
auth_type = password
user_domain_name = Default
auth_url = http://controller:35357/v3
username = placement
password = PLACEMENT_PASS
Replace ``PLACEMENT_PASS`` with the password you chose for the
``placement`` user in the Identity service. Comment out any other options
in the ``[placement]`` section.
#. Populate the ``nova-api`` database:
.. code-block:: console
# su -s /bin/sh -c "nova-manage api_db sync" nova
.. note::
Ignore any deprecation messages in this output.
#. Register the ``cell0`` database:
.. code-block:: console
# su -s /bin/sh -c "nova-manage cell_v2 map_cell0" nova
#. Create the ``cell1`` cell:
.. code-block:: console
# su -s /bin/sh -c "nova-manage cell_v2 create_cell --name=cell1 --verbose" nova
109e1d4b-536a-40d0-83c6-5f121b82b650
#. Populate the nova database:
.. code-block:: console
# su -s /bin/sh -c "nova-manage db sync" nova
#. Verify nova cell0 and cell1 are registered correctly:
.. code-block:: console
# nova-manage cell_v2 list_cells
+-------+--------------------------------------+
| Name | UUID |
+-------+--------------------------------------+
| cell1 | 109e1d4b-536a-40d0-83c6-5f121b82b650 |
| cell0 | 00000000-0000-0000-0000-000000000000 |
+-------+--------------------------------------+
Finalize installation
---------------------
* Enable the placement API Apache vhost:
.. code-block:: console
# mv /etc/apache2/vhosts.d/nova-placement-api.conf.sample /etc/apache2/vhosts.d/nova-placement-api.conf
# systemctl reload apache2.service
* Start the Compute services and configure them to start when the system boots:
.. code-block:: console
# systemctl enable openstack-nova-api.service \
openstack-nova-consoleauth.service openstack-nova-scheduler.service \
openstack-nova-conductor.service openstack-nova-novncproxy.service
# systemctl start openstack-nova-api.service \
openstack-nova-consoleauth.service openstack-nova-scheduler.service \
openstack-nova-conductor.service openstack-nova-novncproxy.service


@ -0,0 +1,493 @@
Install and configure controller node
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
This section describes how to install and configure the Compute service,
code-named nova, on the controller node.
Prerequisites
-------------
Before you install and configure the Compute service, you must create
databases, service credentials, and API endpoints.
#. To create the databases, complete these steps:
* Use the database access client to connect to the database server as the
``root`` user:
.. code-block:: console
$ mysql -u root -p
* Create the ``nova_api``, ``nova``, and ``nova_cell0`` databases:
.. code-block:: console
MariaDB [(none)]> CREATE DATABASE nova_api;
MariaDB [(none)]> CREATE DATABASE nova;
MariaDB [(none)]> CREATE DATABASE nova_cell0;
* Grant proper access to the databases:
.. code-block:: console
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'localhost' \
IDENTIFIED BY 'NOVA_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'%' \
IDENTIFIED BY 'NOVA_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' \
IDENTIFIED BY 'NOVA_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' \
IDENTIFIED BY 'NOVA_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'localhost' \
IDENTIFIED BY 'NOVA_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'%' \
IDENTIFIED BY 'NOVA_DBPASS';
Replace ``NOVA_DBPASS`` with a suitable password.
* Exit the database access client.
#. Source the ``admin`` credentials to gain access to admin-only CLI commands:
.. code-block:: console
$ . admin-openrc
#. Create the Compute service credentials:
* Create the ``nova`` user:
.. code-block:: console
$ openstack user create --domain default --password-prompt nova
User Password:
Repeat User Password:
+---------------------+----------------------------------+
| Field | Value |
+---------------------+----------------------------------+
| domain_id | default |
| enabled | True |
| id | 8a7dbf5279404537b1c7b86c033620fe |
| name | nova |
| options | {} |
| password_expires_at | None |
+---------------------+----------------------------------+
* Add the ``admin`` role to the ``nova`` user:
.. code-block:: console
$ openstack role add --project service --user nova admin
.. note::
This command provides no output.
* Create the ``nova`` service entity:
.. code-block:: console
$ openstack service create --name nova \
--description "OpenStack Compute" compute
+-------------+----------------------------------+
| Field | Value |
+-------------+----------------------------------+
| description | OpenStack Compute |
| enabled | True |
| id | 060d59eac51b4594815603d75a00aba2 |
| name | nova |
| type | compute |
+-------------+----------------------------------+
#. Create the Compute API service endpoints:
.. code-block:: console
$ openstack endpoint create --region RegionOne \
compute public http://controller:8774/v2.1
+--------------+-------------------------------------------+
| Field | Value |
+--------------+-------------------------------------------+
| enabled | True |
| id | 3c1caa473bfe4390a11e7177894bcc7b |
| interface | public |
| region | RegionOne |
| region_id | RegionOne |
| service_id | 060d59eac51b4594815603d75a00aba2 |
| service_name | nova |
| service_type | compute |
| url | http://controller:8774/v2.1 |
+--------------+-------------------------------------------+
$ openstack endpoint create --region RegionOne \
compute internal http://controller:8774/v2.1
+--------------+-------------------------------------------+
| Field | Value |
+--------------+-------------------------------------------+
| enabled | True |
| id | e3c918de680746a586eac1f2d9bc10ab |
| interface | internal |
| region | RegionOne |
| region_id | RegionOne |
| service_id | 060d59eac51b4594815603d75a00aba2 |
| service_name | nova |
| service_type | compute |
| url | http://controller:8774/v2.1 |
+--------------+-------------------------------------------+
$ openstack endpoint create --region RegionOne \
compute admin http://controller:8774/v2.1
+--------------+-------------------------------------------+
| Field | Value |
+--------------+-------------------------------------------+
| enabled | True |
| id | 38f7af91666a47cfb97b4dc790b94424 |
| interface | admin |
| region | RegionOne |
| region_id | RegionOne |
| service_id | 060d59eac51b4594815603d75a00aba2 |
| service_name | nova |
| service_type | compute |
| url | http://controller:8774/v2.1 |
+--------------+-------------------------------------------+
#. Create a Placement service user using your chosen ``PLACEMENT_PASS``:
.. code-block:: console
$ openstack user create --domain default --password-prompt placement
User Password:
Repeat User Password:
+---------------------+----------------------------------+
| Field | Value |
+---------------------+----------------------------------+
| domain_id | default |
| enabled | True |
| id | fa742015a6494a949f67629884fc7ec8 |
| name | placement |
| options | {} |
| password_expires_at | None |
+---------------------+----------------------------------+
#. Add the Placement user to the service project with the admin role:
.. code-block:: console
$ openstack role add --project service --user placement admin
.. note::
This command provides no output.
#. Create the Placement API entry in the service catalog:
.. code-block:: console
$ openstack service create --name placement --description "Placement API" placement
+-------------+----------------------------------+
| Field | Value |
+-------------+----------------------------------+
| description | Placement API |
| enabled | True |
| id | 2d1a27022e6e4185b86adac4444c495f |
| name | placement |
| type | placement |
+-------------+----------------------------------+
#. Create the Placement API service endpoints:
.. code-block:: console
$ openstack endpoint create --region RegionOne placement public http://controller:8778
+--------------+----------------------------------+
| Field | Value |
+--------------+----------------------------------+
| enabled | True |
| id | 2b1b2637908b4137a9c2e0470487cbc0 |
| interface | public |
| region | RegionOne |
| region_id | RegionOne |
| service_id | 2d1a27022e6e4185b86adac4444c495f |
| service_name | placement |
| service_type | placement |
| url | http://controller:8778 |
+--------------+----------------------------------+
$ openstack endpoint create --region RegionOne placement internal http://controller:8778
+--------------+----------------------------------+
| Field | Value |
+--------------+----------------------------------+
| enabled | True |
| id | 02bcda9a150a4bd7993ff4879df971ab |
| interface | internal |
| region | RegionOne |
| region_id | RegionOne |
| service_id | 2d1a27022e6e4185b86adac4444c495f |
| service_name | placement |
| service_type | placement |
| url | http://controller:8778 |
+--------------+----------------------------------+
$ openstack endpoint create --region RegionOne placement admin http://controller:8778
+--------------+----------------------------------+
| Field | Value |
+--------------+----------------------------------+
| enabled | True |
| id | 3d71177b9e0f406f98cbff198d74b182 |
| interface | admin |
| region | RegionOne |
| region_id | RegionOne |
| service_id | 2d1a27022e6e4185b86adac4444c495f |
| service_name | placement |
| service_type | placement |
| url | http://controller:8778 |
+--------------+----------------------------------+
Install and configure components
--------------------------------
.. include:: shared/note_configuration_vary_by_distribution.rst
#. Install the packages:
.. code-block:: console
# yum install openstack-nova-api openstack-nova-conductor \
openstack-nova-console openstack-nova-novncproxy \
openstack-nova-scheduler openstack-nova-placement-api
#. Edit the ``/etc/nova/nova.conf`` file and complete the following actions:
* In the ``[DEFAULT]`` section, enable only the compute and metadata APIs:
.. path /etc/nova/nova.conf
.. code-block:: ini
[DEFAULT]
# ...
enabled_apis = osapi_compute,metadata
* In the ``[api_database]`` and ``[database]`` sections, configure database
access:
.. path /etc/nova/nova.conf
.. code-block:: ini
[api_database]
# ...
connection = mysql+pymysql://nova:NOVA_DBPASS@controller/nova_api
[database]
# ...
connection = mysql+pymysql://nova:NOVA_DBPASS@controller/nova
Replace ``NOVA_DBPASS`` with the password you chose for the Compute
databases.
* In the ``[DEFAULT]`` section, configure ``RabbitMQ`` message queue access:
.. path /etc/nova/nova.conf
.. code-block:: ini
[DEFAULT]
# ...
transport_url = rabbit://openstack:RABBIT_PASS@controller
Replace ``RABBIT_PASS`` with the password you chose for the ``openstack``
account in ``RabbitMQ``.
* In the ``[api]`` and ``[keystone_authtoken]`` sections, configure Identity
service access:
.. path /etc/nova/nova.conf
.. code-block:: ini
[api]
# ...
auth_strategy = keystone
[keystone_authtoken]
# ...
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = nova
password = NOVA_PASS
Replace ``NOVA_PASS`` with the password you chose for the ``nova`` user in
the Identity service.
.. note::
Comment out or remove any other options in the ``[keystone_authtoken]``
section.
* In the ``[DEFAULT]`` section, configure the ``my_ip`` option to use the
management interface IP address of the controller node:
.. path /etc/nova/nova.conf
.. code-block:: ini
[DEFAULT]
# ...
my_ip = 10.0.0.11
* In the ``[DEFAULT]`` section, enable support for the Networking service:
.. path /etc/nova/nova.conf
.. code-block:: ini
[DEFAULT]
# ...
use_neutron = True
firewall_driver = nova.virt.firewall.NoopFirewallDriver
.. note::
By default, Compute uses an internal firewall driver. Since the
Networking service includes a firewall driver, you must disable the
Compute firewall driver by using the
``nova.virt.firewall.NoopFirewallDriver`` firewall driver.
* In the ``[vnc]`` section, configure the VNC proxy to use the management
interface IP address of the controller node:
.. path /etc/nova/nova.conf
.. code-block:: ini
[vnc]
enabled = true
# ...
vncserver_listen = $my_ip
vncserver_proxyclient_address = $my_ip
* In the ``[glance]`` section, configure the location of the Image service
API:
.. path /etc/nova/nova.conf
.. code-block:: ini
[glance]
# ...
api_servers = http://controller:9292
* In the ``[oslo_concurrency]`` section, configure the lock path:
.. path /etc/nova/nova.conf
.. code-block:: ini
[oslo_concurrency]
# ...
lock_path = /var/lib/nova/tmp
* In the ``[placement]`` section, configure the Placement API:
.. path /etc/nova/nova.conf
.. code-block:: ini
[placement]
# ...
os_region_name = RegionOne
project_domain_name = Default
project_name = service
auth_type = password
user_domain_name = Default
auth_url = http://controller:35357/v3
username = placement
password = PLACEMENT_PASS
Replace ``PLACEMENT_PASS`` with the password you chose for the
``placement`` user in the Identity service. Comment out any other options
in the ``[placement]`` section.
* Due to a `packaging bug
<https://bugzilla.redhat.com/show_bug.cgi?id=1430540>`_, you must enable
access to the Placement API by adding the following configuration to
``/etc/httpd/conf.d/00-nova-placement-api.conf``:
.. path /etc/httpd/conf.d/00-nova-placement-api.conf
.. code-block:: ini
<Directory /usr/bin>
<IfVersion >= 2.4>
Require all granted
</IfVersion>
<IfVersion < 2.4>
Order allow,deny
Allow from all
</IfVersion>
</Directory>
* Restart the httpd service:
.. code-block:: console
# systemctl restart httpd
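After the restart, a quick sanity check is to request the Placement API root, which should return a small JSON version document rather than a ``403 Forbidden`` error; this assumes the ``curl`` utility is installed:
.. code-block:: console
$ curl http://controller:8778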
#. Populate the ``nova-api`` database:
.. code-block:: console
# su -s /bin/sh -c "nova-manage api_db sync" nova
.. note::
Ignore any deprecation messages in this output.
#. Register the ``cell0`` database:
.. code-block:: console
# su -s /bin/sh -c "nova-manage cell_v2 map_cell0" nova
#. Create the ``cell1`` cell:
.. code-block:: console
# su -s /bin/sh -c "nova-manage cell_v2 create_cell --name=cell1 --verbose" nova
109e1d4b-536a-40d0-83c6-5f121b82b650
#. Populate the nova database:
.. code-block:: console
# su -s /bin/sh -c "nova-manage db sync" nova
#. Verify nova cell0 and cell1 are registered correctly:
.. code-block:: console
# nova-manage cell_v2 list_cells
+-------+--------------------------------------+
| Name | UUID |
+-------+--------------------------------------+
| cell1 | 109e1d4b-536a-40d0-83c6-5f121b82b650 |
| cell0 | 00000000-0000-0000-0000-000000000000 |
+-------+--------------------------------------+
Finalize installation
---------------------
* Start the Compute services and configure them to start when the system boots:
.. code-block:: console
# systemctl enable openstack-nova-api.service \
openstack-nova-consoleauth.service openstack-nova-scheduler.service \
openstack-nova-conductor.service openstack-nova-novncproxy.service
# systemctl start openstack-nova-api.service \
openstack-nova-consoleauth.service openstack-nova-scheduler.service \
openstack-nova-conductor.service openstack-nova-novncproxy.service
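Once the services are running, you can optionally confirm that the controller services registered themselves; exact host names and timestamps will differ in your environment:
.. code-block:: console
$ openstack compute service list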


@ -0,0 +1,465 @@
Install and configure controller node
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
This section describes how to install and configure the Compute service,
code-named nova, on the controller node.
Prerequisites
-------------
Before you install and configure the Compute service, you must create
databases, service credentials, and API endpoints.
#. To create the databases, complete these steps:
* Use the database access client to connect to the database
server as the ``root`` user:
.. code-block:: console
# mysql
* Create the ``nova_api``, ``nova``, and ``nova_cell0`` databases:
.. code-block:: console
MariaDB [(none)]> CREATE DATABASE nova_api;
MariaDB [(none)]> CREATE DATABASE nova;
MariaDB [(none)]> CREATE DATABASE nova_cell0;
* Grant proper access to the databases:
.. code-block:: console
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'localhost' \
IDENTIFIED BY 'NOVA_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'%' \
IDENTIFIED BY 'NOVA_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' \
IDENTIFIED BY 'NOVA_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' \
IDENTIFIED BY 'NOVA_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'localhost' \
IDENTIFIED BY 'NOVA_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'%' \
IDENTIFIED BY 'NOVA_DBPASS';
Replace ``NOVA_DBPASS`` with a suitable password.
* Exit the database access client.
#. Source the ``admin`` credentials to gain access to admin-only CLI commands:
.. code-block:: console
$ . admin-openrc
#. Create the Compute service credentials:
* Create the ``nova`` user:
.. code-block:: console
$ openstack user create --domain default --password-prompt nova
User Password:
Repeat User Password:
+---------------------+----------------------------------+
| Field | Value |
+---------------------+----------------------------------+
| domain_id | default |
| enabled | True |
| id | 8a7dbf5279404537b1c7b86c033620fe |
| name | nova |
| options | {} |
| password_expires_at | None |
+---------------------+----------------------------------+
* Add the ``admin`` role to the ``nova`` user:
.. code-block:: console
$ openstack role add --project service --user nova admin
.. note::
This command provides no output.
* Create the ``nova`` service entity:
.. code-block:: console
$ openstack service create --name nova \
--description "OpenStack Compute" compute
+-------------+----------------------------------+
| Field | Value |
+-------------+----------------------------------+
| description | OpenStack Compute |
| enabled | True |
| id | 060d59eac51b4594815603d75a00aba2 |
| name | nova |
| type | compute |
+-------------+----------------------------------+
#. Create the Compute API service endpoints:
.. code-block:: console
$ openstack endpoint create --region RegionOne \
compute public http://controller:8774/v2.1
+--------------+-------------------------------------------+
| Field | Value |
+--------------+-------------------------------------------+
| enabled | True |
| id | 3c1caa473bfe4390a11e7177894bcc7b |
| interface | public |
| region | RegionOne |
| region_id | RegionOne |
| service_id | 060d59eac51b4594815603d75a00aba2 |
| service_name | nova |
| service_type | compute |
| url | http://controller:8774/v2.1 |
+--------------+-------------------------------------------+
$ openstack endpoint create --region RegionOne \
compute internal http://controller:8774/v2.1
+--------------+-------------------------------------------+
| Field | Value |
+--------------+-------------------------------------------+
| enabled | True |
| id | e3c918de680746a586eac1f2d9bc10ab |
| interface | internal |
| region | RegionOne |
| region_id | RegionOne |
| service_id | 060d59eac51b4594815603d75a00aba2 |
| service_name | nova |
| service_type | compute |
| url | http://controller:8774/v2.1 |
+--------------+-------------------------------------------+
$ openstack endpoint create --region RegionOne \
compute admin http://controller:8774/v2.1
+--------------+-------------------------------------------+
| Field | Value |
+--------------+-------------------------------------------+
| enabled | True |
| id | 38f7af91666a47cfb97b4dc790b94424 |
| interface | admin |
| region | RegionOne |
| region_id | RegionOne |
| service_id | 060d59eac51b4594815603d75a00aba2 |
| service_name | nova |
| service_type | compute |
| url | http://controller:8774/v2.1 |
+--------------+-------------------------------------------+
#. Create a Placement service user using your chosen ``PLACEMENT_PASS``:
.. code-block:: console
$ openstack user create --domain default --password-prompt placement
User Password:
Repeat User Password:
+---------------------+----------------------------------+
| Field | Value |
+---------------------+----------------------------------+
| domain_id | default |
| enabled | True |
| id | fa742015a6494a949f67629884fc7ec8 |
| name | placement |
| options | {} |
| password_expires_at | None |
+---------------------+----------------------------------+
#. Add the Placement user to the service project with the admin role:
.. code-block:: console
$ openstack role add --project service --user placement admin
.. note::
This command provides no output.
#. Create the Placement API entry in the service catalog:
.. code-block:: console
$ openstack service create --name placement --description "Placement API" placement
+-------------+----------------------------------+
| Field | Value |
+-------------+----------------------------------+
| description | Placement API |
| enabled | True |
| id | 2d1a27022e6e4185b86adac4444c495f |
| name | placement |
| type | placement |
+-------------+----------------------------------+
#. Create the Placement API service endpoints:
.. code-block:: console
$ openstack endpoint create --region RegionOne placement public http://controller:8778
+--------------+----------------------------------+
| Field | Value |
+--------------+----------------------------------+
| enabled | True |
| id | 2b1b2637908b4137a9c2e0470487cbc0 |
| interface | public |
| region | RegionOne |
| region_id | RegionOne |
| service_id | 2d1a27022e6e4185b86adac4444c495f |
| service_name | placement |
| service_type | placement |
| url | http://controller:8778 |
+--------------+----------------------------------+
$ openstack endpoint create --region RegionOne placement internal http://controller:8778
+--------------+----------------------------------+
| Field | Value |
+--------------+----------------------------------+
| enabled | True |
| id | 02bcda9a150a4bd7993ff4879df971ab |
| interface | internal |
| region | RegionOne |
| region_id | RegionOne |
| service_id | 2d1a27022e6e4185b86adac4444c495f |
| service_name | placement |
| service_type | placement |
| url | http://controller:8778 |
+--------------+----------------------------------+
$ openstack endpoint create --region RegionOne placement admin http://controller:8778
+--------------+----------------------------------+
| Field | Value |
+--------------+----------------------------------+
| enabled | True |
| id | 3d71177b9e0f406f98cbff198d74b182 |
| interface | admin |
| region | RegionOne |
| region_id | RegionOne |
| service_id | 2d1a27022e6e4185b86adac4444c495f |
| service_name | placement |
| service_type | placement |
| url | http://controller:8778 |
+--------------+----------------------------------+
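Optionally, confirm that the catalog entry resolves as expected:
.. code-block:: console
$ openstack catalog show placement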
Install and configure components
--------------------------------
.. include:: shared/note_configuration_vary_by_distribution.rst
#. Install the packages:
.. code-block:: console
# apt install nova-api nova-conductor nova-consoleauth \
nova-novncproxy nova-scheduler nova-placement-api
#. Edit the ``/etc/nova/nova.conf`` file and complete the following actions:
* In the ``[api_database]`` and ``[database]`` sections, configure database
access:
.. path /etc/nova/nova.conf
.. code-block:: ini
[api_database]
# ...
connection = mysql+pymysql://nova:NOVA_DBPASS@controller/nova_api
[database]
# ...
connection = mysql+pymysql://nova:NOVA_DBPASS@controller/nova
Replace ``NOVA_DBPASS`` with the password you chose for the Compute
databases.
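If the database synchronization steps later in this section fail, a quick sanity check (assuming the MariaDB client is installed and the ``nova`` databases were created during environment setup) is to connect directly:
.. code-block:: console
$ mysql -u nova -p -h controller nova_api -e "SELECT 1;"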
* In the ``[DEFAULT]`` section, configure ``RabbitMQ`` message queue access:
.. path /etc/nova/nova.conf
.. code-block:: ini
[DEFAULT]
# ...
transport_url = rabbit://openstack:RABBIT_PASS@controller
Replace ``RABBIT_PASS`` with the password you chose for the ``openstack``
account in ``RabbitMQ``.
* In the ``[api]`` and ``[keystone_authtoken]`` sections, configure Identity
service access:
.. path /etc/nova/nova.conf
.. code-block:: ini
[api]
# ...
auth_strategy = keystone
[keystone_authtoken]
# ...
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = nova
password = NOVA_PASS
Replace ``NOVA_PASS`` with the password you chose for the ``nova`` user in
the Identity service.
.. note::
Comment out or remove any other options in the ``[keystone_authtoken]``
section.
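To verify the credentials themselves, you can request a token as the ``nova`` user; this is a sketch that assumes the Identity endpoints used throughout this guide:
.. code-block:: console
$ openstack --os-auth-url http://controller:35357/v3 \
  --os-identity-api-version 3 \
  --os-project-domain-name default --os-user-domain-name default \
  --os-project-name service --os-username nova --os-password NOVA_PASS \
  token issue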
* In the ``[DEFAULT]`` section, configure the ``my_ip`` option to use the
management interface IP address of the controller node:
.. path /etc/nova/nova.conf
.. code-block:: ini
[DEFAULT]
# ...
my_ip = 10.0.0.11
* In the ``[DEFAULT]`` section, enable support for the Networking service:
.. path /etc/nova/nova.conf
.. code-block:: ini
[DEFAULT]
# ...
use_neutron = True
firewall_driver = nova.virt.firewall.NoopFirewallDriver
.. note::
By default, Compute uses an internal firewall driver. Since the
Networking service includes a firewall driver, you must disable the
Compute firewall driver by using the
``nova.virt.firewall.NoopFirewallDriver`` firewall driver.
* In the ``[vnc]`` section, configure the VNC proxy to use the management
interface IP address of the controller node:
.. path /etc/nova/nova.conf
.. code-block:: ini
[vnc]
enabled = true
# ...
vncserver_listen = $my_ip
vncserver_proxyclient_address = $my_ip
* In the ``[glance]`` section, configure the location of the Image service
API:
.. path /etc/nova/nova.conf
.. code-block:: ini
[glance]
# ...
api_servers = http://controller:9292
* In the ``[oslo_concurrency]`` section, configure the lock path:
.. path /etc/nova/nova.conf
.. code-block:: ini
[oslo_concurrency]
# ...
lock_path = /var/lib/nova/tmp
.. todo::
https://bugs.launchpad.net/ubuntu/+source/nova/+bug/1506667
* Due to a packaging bug, remove the ``log_dir`` option from the
``[DEFAULT]`` section.
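One way to do this (a sketch that assumes the option appears uncommented at the start of a line in ``/etc/nova/nova.conf``) is to comment it out with ``sed``:
.. code-block:: console
# sed -i 's/^log_dir/#log_dir/' /etc/nova/nova.conf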
* In the ``[placement]`` section, configure the Placement API:
.. path /etc/nova/nova.conf
.. code-block:: ini
[placement]
# ...
os_region_name = RegionOne
project_domain_name = Default
project_name = service
auth_type = password
user_domain_name = Default
auth_url = http://controller:35357/v3
username = placement
password = PLACEMENT_PASS
Replace ``PLACEMENT_PASS`` with the password you chose for the
``placement`` user in the Identity service. Comment out any other options
in the ``[placement]`` section.
#. Populate the nova-api database:
.. code-block:: console
# su -s /bin/sh -c "nova-manage api_db sync" nova
.. note::
Ignore any deprecation messages in this output.
#. Register the ``cell0`` database:
.. code-block:: console
# su -s /bin/sh -c "nova-manage cell_v2 map_cell0" nova
#. Create the ``cell1`` cell:
.. code-block:: console
# su -s /bin/sh -c "nova-manage cell_v2 create_cell --name=cell1 --verbose" nova
109e1d4b-536a-40d0-83c6-5f121b82b650
#. Populate the nova database:
.. code-block:: console
# su -s /bin/sh -c "nova-manage db sync" nova
#. Verify that nova cell0 and cell1 are registered correctly:
.. code-block:: console
# nova-manage cell_v2 list_cells
+-------+--------------------------------------+
| Name | UUID |
+-------+--------------------------------------+
| cell1 | 109e1d4b-536a-40d0-83c6-5f121b82b650 |
| cell0 | 00000000-0000-0000-0000-000000000000 |
+-------+--------------------------------------+
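For more detail, the same command accepts a ``--verbose`` flag that also prints each cell's transport and database connection URLs:
.. code-block:: console
# nova-manage cell_v2 list_cells --verbose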
Finalize installation
---------------------
* Restart the Compute services:
.. code-block:: console
# service nova-api restart
# service nova-consoleauth restart
# service nova-scheduler restart
# service nova-conductor restart
# service nova-novncproxy restart
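If a service fails to start, checking its status and recent log output usually reveals the cause (assuming systemd, as on recent Ubuntu releases):
.. code-block:: console
# systemctl status nova-api
# tail -n 50 /var/log/nova/nova-api.log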
@ -0,0 +1,10 @@
Install and configure controller node
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
This section describes how to install and configure the Compute service,
code-named nova, on the controller node.
.. toctree::
:glob:
controller-install-*

(This commit also adds the binary figure files referenced below; image diffs are not shown.)
@ -0,0 +1,102 @@
========================
Compute service overview
========================
.. todo:: Update a lot of the links in here.
Use OpenStack Compute to host and manage cloud computing systems. OpenStack
Compute is a major part of an Infrastructure-as-a-Service (IaaS) system. The
main modules are implemented in Python.
OpenStack Compute interacts with OpenStack Identity for authentication;
OpenStack Image service for disk and server images; and OpenStack Dashboard for
the user and administrative interface. Image access is limited by projects, and
by users; quotas are limited per project (the number of instances, for
example). OpenStack Compute can scale horizontally on standard hardware, and
download images to launch instances.
OpenStack Compute consists of the following areas and their components:
``nova-api`` service
Accepts and responds to end user compute API calls. The service supports the
OpenStack Compute API, the Amazon EC2 API, and a special Admin API for
privileged users to perform administrative actions. It enforces some policies
and initiates most orchestration activities, such as running an instance.
``nova-api-metadata`` service
Accepts metadata requests from instances. The ``nova-api-metadata`` service
is generally used when you run in multi-host mode with ``nova-network``
installations. For details, see `Metadata service
<https://docs.openstack.org/admin-guide/compute-networking-nova.html#metadata-service>`__
in the OpenStack Administrator Guide.
``nova-compute`` service
A worker daemon that creates and terminates virtual machine instances through
hypervisor APIs. For example:
- XenAPI for XenServer/XCP
- libvirt for KVM or QEMU
- VMwareAPI for VMware
Processing is fairly complex. Basically, the daemon accepts actions from the
queue and performs a series of system commands such as launching a KVM
instance and updating its state in the database.
``nova-placement-api`` service
Tracks the inventory and usage of each resource provider. For details, see `Placement
API <https://docs.openstack.org/developer/nova/placement.html>`__.
``nova-scheduler`` service
Takes a virtual machine instance request from the queue and determines on
which compute server host it runs.
``nova-conductor`` module
Mediates interactions between the ``nova-compute`` service and the database.
It eliminates direct accesses to the cloud database made by the
``nova-compute`` service. The ``nova-conductor`` module scales horizontally.
However, do not deploy it on nodes where the ``nova-compute`` service runs.
For more information, see `Configuration Reference Guide
<https://docs.openstack.org/ocata/config-reference/compute/config-options.html#nova-conductor>`__.
``nova-consoleauth`` daemon
Authorizes tokens for users that console proxies provide. See
``nova-novncproxy`` and ``nova-xvpvncproxy``. This service must be running
for console proxies to work. You can run proxies of either type against a
single ``nova-consoleauth`` service in a cluster configuration. For information,
see `About nova-consoleauth
<https://docs.openstack.org/admin-guide/compute-remote-console-access.html#about-nova-consoleauth>`__.
``nova-novncproxy`` daemon
Provides a proxy for accessing running instances through a VNC connection.
Supports browser-based novnc clients.
``nova-spicehtml5proxy`` daemon
Provides a proxy for accessing running instances through a SPICE connection.
Supports a browser-based HTML5 client.
``nova-xvpvncproxy`` daemon
Provides a proxy for accessing running instances through a VNC connection.
Supports an OpenStack-specific Java client.
The queue
A central hub for passing messages between daemons. It is usually implemented
with `RabbitMQ <https://www.rabbitmq.com/>`__, but it can also be implemented
with another message queue, such as `ZeroMQ <http://www.zeromq.org/>`__.
SQL database
Stores most build-time and run-time states for a cloud infrastructure,
including:
- Available instance types
- Instances in use
- Available networks
- Projects
Theoretically, OpenStack Compute can support any database that SQLAlchemy
supports. Common databases are SQLite3 for test and development work, MySQL,
MariaDB, and PostgreSQL.
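As an illustration, the backend is selected purely by the SQLAlchemy connection URL in the ``[database]`` section; the MySQL form is the one used in this guide, while the SQLite form (shown with a hypothetical file path) would only suit test and development work:
.. code-block:: ini
[database]
# MariaDB/MySQL, as configured in this guide
connection = mysql+pymysql://nova:NOVA_DBPASS@controller/nova
# SQLite, for test and development only (hypothetical path)
# connection = sqlite:////var/lib/nova/nova.sqlite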
@ -0,0 +1,11 @@
===============
Compute service
===============
.. toctree::
overview.rst
get-started-compute.rst
controller-install.rst
compute-install.rst
verify.rst
@ -0,0 +1,174 @@
========
Overview
========
The OpenStack project is an open source cloud computing platform that supports
all types of cloud environments. The project aims for simple implementation,
massive scalability, and a rich set of features. Cloud computing experts from
around the world contribute to the project.
OpenStack provides an Infrastructure-as-a-Service (IaaS) solution through a
variety of complementary services. Each service offers an Application
Programming Interface (API) that facilitates this integration.
This guide covers step-by-step deployment of the major OpenStack services using
a functional example architecture suitable for new users of OpenStack with
sufficient Linux experience. This guide is not intended to be used for
production system installations, but to create a minimum proof-of-concept for
the purpose of learning about OpenStack.
After becoming familiar with basic installation, configuration, operation, and
troubleshooting of these OpenStack services, you should consider the following
steps toward deployment using a production architecture:
* Determine and implement the necessary core and optional services to meet
performance and redundancy requirements.
* Increase security using methods such as firewalls, encryption, and service
policies.
* Implement a deployment tool such as Ansible, Chef, Puppet, or Salt to
automate deployment and management of the production environment.
.. _overview-example-architectures:
Example architecture
~~~~~~~~~~~~~~~~~~~~
The example architecture requires at least two nodes (hosts) to launch a basic
virtual machine (VM) or instance. Optional services such as Block Storage and
Object Storage require additional nodes.
.. important::
The example architecture used in this guide is a minimum configuration, and
is not intended for production system installations. It is designed to
provide a minimum proof-of-concept for the purpose of learning about
OpenStack. For information on creating architectures for specific use cases,
or how to determine which architecture is required, see the `Architecture
Design Guide <https://docs.openstack.org/arch-design/>`_.
This example architecture differs from a minimal production architecture as
follows:
* Networking agents reside on the controller node instead of one or more
dedicated network nodes.
* Overlay (tunnel) traffic for self-service networks traverses the management
network instead of a dedicated network.
For more information on production architectures, see the `Architecture Design
Guide <https://docs.openstack.org/arch-design/>`_, `OpenStack Operations Guide
<https://docs.openstack.org/ops-guide/>`_, and `OpenStack Networking Guide
<https://docs.openstack.org/ocata/networking-guide/>`_.
.. _figure-hwreqs:
.. figure:: figures/hwreqs.png
:alt: Hardware requirements
**Hardware requirements**
Controller
----------
The controller node runs the Identity service, Image service, management
portions of Compute, management portion of Networking, various Networking
agents, and the Dashboard. It also includes supporting services such as an SQL
database, message queue, and Network Time Protocol (NTP).
Optionally, the controller node runs portions of the Block Storage, Object
Storage, Orchestration, and Telemetry services.
The controller node requires a minimum of two network interfaces.
Compute
-------
The compute node runs the hypervisor portion of Compute that operates
instances. By default, Compute uses the kernel-based VM (KVM) hypervisor. The
compute node also runs a Networking service agent that connects instances to
virtual networks and provides firewalling services to instances via security
groups.
You can deploy more than one compute node. Each node requires a minimum of two
network interfaces.
Block Storage
-------------
The optional Block Storage node contains the disks that the Block Storage and
Shared File System services provision for instances.
For simplicity, service traffic between compute nodes and this node uses the
management network. Production environments should implement a separate storage
network to increase performance and security.
You can deploy more than one block storage node. Each node requires a minimum
of one network interface.
Object Storage
--------------
The optional Object Storage node contains the disks that the Object Storage
service uses for storing accounts, containers, and objects.
For simplicity, service traffic between compute nodes and this node uses the
management network. Production environments should implement a separate storage
network to increase performance and security.
This service requires two nodes. Each node requires a minimum of one network
interface. You can deploy more than two object storage nodes.
Networking
~~~~~~~~~~
Choose one of the following virtual networking options.
.. _network1:
Networking Option 1: Provider networks
--------------------------------------
The provider networks option deploys the OpenStack Networking service in the
simplest way possible with primarily layer-2 (bridging/switching) services and
VLAN segmentation of networks. Essentially, it bridges virtual networks to
physical networks and relies on physical network infrastructure for layer-3
(routing) services. Additionally, a Dynamic Host Configuration Protocol
(DHCP) service provides IP address information to instances.
The OpenStack user needs more information about the underlying network
infrastructure to create a virtual network that exactly matches the infrastructure.
.. warning::
This option lacks support for self-service (private) networks, layer-3
(routing) services, and advanced services such as Load-Balancer-as-a-Service
(LBaaS) and FireWall-as-a-Service (FWaaS). Consider the self-service
networks option below if you desire these features.
.. _figure-network1-services:
.. figure:: figures/network1-services.png
:alt: Networking Option 1: Provider networks - Service layout
.. _network2:
Networking Option 2: Self-service networks
------------------------------------------
The self-service networks option augments the provider networks option with
layer-3 (routing) services that enable self-service networks using overlay
segmentation methods such as Virtual Extensible LAN (VXLAN). Essentially, it
routes virtual networks to physical networks using Network Address Translation
(NAT). Additionally, this option provides the foundation for advanced services
such as LBaaS and FWaaS.
The OpenStack user can create virtual networks without knowledge of the
underlying infrastructure on the data network. This can also include VLAN
networks if the layer-2 plug-in is configured accordingly.
.. _figure-network2-services:
.. figure:: figures/network2-services.png
:alt: Networking Option 2: Self-service networks - Service layout
@ -0,0 +1,6 @@
.. note::
Default configuration files vary by distribution. You might need to add
these sections and options rather than modifying existing sections and
options. Also, an ellipsis (``...``) in the configuration snippets indicates
potential default configuration options that you should retain.
@ -0,0 +1,119 @@
Verify operation
~~~~~~~~~~~~~~~~
Verify operation of the Compute service.
.. note::
Perform these commands on the controller node.
#. Source the ``admin`` credentials to gain access to admin-only CLI commands:
.. code-block:: console
$ . admin-openrc
#. List service components to verify successful launch and registration of each
process:
.. code-block:: console
$ openstack compute service list
+----+--------------------+------------+----------+---------+-------+----------------------------+
| Id | Binary | Host | Zone | Status | State | Updated At |
+----+--------------------+------------+----------+---------+-------+----------------------------+
| 1 | nova-consoleauth | controller | internal | enabled | up | 2016-02-09T23:11:15.000000 |
| 2 | nova-scheduler | controller | internal | enabled | up | 2016-02-09T23:11:15.000000 |
| 3 | nova-conductor | controller | internal | enabled | up | 2016-02-09T23:11:16.000000 |
| 4 | nova-compute | compute1 | nova | enabled | up | 2016-02-09T23:11:20.000000 |
+----+--------------------+------------+----------+---------+-------+----------------------------+
.. note::
This output should indicate three service components enabled on the
controller node and one service component enabled on the compute node.
#. List API endpoints in the Identity service to verify connectivity with the
Identity service:
.. note::
The endpoint list below may differ depending on which OpenStack
components are installed.
.. code-block:: console
$ openstack catalog list
+-----------+-----------+-----------------------------------------+
| Name | Type | Endpoints |
+-----------+-----------+-----------------------------------------+
| keystone | identity | RegionOne |
| | | public: http://controller:5000/v3/ |
| | | RegionOne |
| | | internal: http://controller:5000/v3/ |
| | | RegionOne |
| | | admin: http://controller:35357/v3/ |
| | | |
| glance | image | RegionOne |
| | | admin: http://controller:9292 |
| | | RegionOne |
| | | public: http://controller:9292 |
| | | RegionOne |
| | | internal: http://controller:9292 |
| | | |
| nova | compute | RegionOne |
| | | admin: http://controller:8774/v2.1 |
| | | RegionOne |
| | | internal: http://controller:8774/v2.1 |
| | | RegionOne |
| | | public: http://controller:8774/v2.1 |
| | | |
| placement | placement | RegionOne |
| | | public: http://controller:8778 |
| | | RegionOne |
| | | admin: http://controller:8778 |
| | | RegionOne |
| | | internal: http://controller:8778 |
| | | |
+-----------+-----------+-----------------------------------------+
.. note::
Ignore any warnings in this output.
#. List images in the Image service to verify connectivity with the Image
service:
.. code-block:: console
$ openstack image list
+--------------------------------------+-------------+-------------+
| ID | Name | Status |
+--------------------------------------+-------------+-------------+
| 9a76d9f9-9620-4f2e-8c69-6c5691fae163 | cirros | active |
+--------------------------------------+-------------+-------------+
#. Check that the cells and the Placement API are working successfully:
.. code-block:: console
# nova-status upgrade check
+---------------------------+
| Upgrade Check Results |
+---------------------------+
| Check: Cells v2 |
| Result: Success |
| Details: None |
+---------------------------+
| Check: Placement API |
| Result: Success |
| Details: None |
+---------------------------+
| Check: Resource Providers |
| Result: Success |
| Details: None |
+---------------------------+