diff --git a/doc/source/index.rst b/doc/source/index.rst index 46bae298f7f5..91c65ccdad8b 100644 --- a/doc/source/index.rst +++ b/doc/source/index.rst @@ -235,6 +235,14 @@ Module Reference .. toctree:: :hidden: +Installation Guide +================== + +.. toctree:: + :maxdepth: 2 + + install/index + Metadata ======== diff --git a/doc/source/install/compute-install-obs.rst b/doc/source/install/compute-install-obs.rst new file mode 100644 index 000000000000..bd190932ef2c --- /dev/null +++ b/doc/source/install/compute-install-obs.rst @@ -0,0 +1,288 @@ +Install and configure a compute node +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +This section describes how to install and configure the Compute service on a +compute node. The service supports several hypervisors to deploy instances or +virtual machines (VMs). For simplicity, this configuration uses the Quick +EMUlator (QEMU) hypervisor with the kernel-based VM (KVM) extension on compute +nodes that support hardware acceleration for virtual machines. On legacy +hardware, this configuration uses the generic QEMU hypervisor. You can follow +these instructions with minor modifications to horizontally scale your +environment with additional compute nodes. + +.. note:: + + This section assumes that you are following the instructions in this guide + step-by-step to configure the first compute node. If you want to configure + additional compute nodes, prepare them in a similar fashion to the first + compute node in the :ref:`example architectures + ` section. Each additional compute node + requires a unique IP address. + +Install and configure components +-------------------------------- + +.. include:: shared/note_configuration_vary_by_distribution.rst + +#. Install the packages: + + .. code-block:: console + + # zypper install openstack-nova-compute genisoimage qemu-kvm libvirt + +#. 
Edit the ``/etc/nova/nova.conf`` file and complete the following actions: + + * In the ``[DEFAULT]`` section, enable only the compute and metadata APIs: + + .. path /etc/nova/nova.conf + .. code-block:: ini + + [DEFAULT] + # ... + enabled_apis = osapi_compute,metadata + + * In the ``[DEFAULT]`` section, set the ``compute_driver``: + + .. path /etc/nova/nova.conf + .. code-block:: ini + + [DEFAULT] + # ... + compute_driver = libvirt.LibvirtDriver + + * In the ``[DEFAULT]`` section, configure ``RabbitMQ`` message queue access: + + .. path /etc/nova/nova.conf + .. code-block:: ini + + [DEFAULT] + # ... + transport_url = rabbit://openstack:RABBIT_PASS@controller + + Replace ``RABBIT_PASS`` with the password you chose for the ``openstack`` + account in ``RabbitMQ``. + + * In the ``[api]`` and ``[keystone_authtoken]`` sections, configure Identity + service access: + + .. path /etc/nova/nova.conf + .. code-block:: ini + + [api] + # ... + auth_strategy = keystone + + [keystone_authtoken] + # ... + auth_uri = http://controller:5000 + auth_url = http://controller:35357 + memcached_servers = controller:11211 + auth_type = password + project_domain_name = default + user_domain_name = default + project_name = service + username = nova + password = NOVA_PASS + + Replace ``NOVA_PASS`` with the password you chose for the ``nova`` user in + the Identity service. + + .. note:: + + Comment out or remove any other options in the ``[keystone_authtoken]`` + section. + + * In the ``[DEFAULT]`` section, configure the ``my_ip`` option: + + .. path /etc/nova/nova.conf + .. code-block:: ini + + [DEFAULT] + # ... + my_ip = MANAGEMENT_INTERFACE_IP_ADDRESS + + Replace ``MANAGEMENT_INTERFACE_IP_ADDRESS`` with the IP address of the + management network interface on your compute node, typically ``10.0.0.31`` + for the first node in the :ref:`example architecture + `. + + * In the ``[DEFAULT]`` section, enable support for the Networking service: + + .. path /etc/nova/nova.conf + .. 
code-block:: ini + + [DEFAULT] + # ... + use_neutron = True + firewall_driver = nova.virt.firewall.NoopFirewallDriver + + .. note:: + + By default, Compute uses an internal firewall service. Since + Networking includes a firewall service, you must disable the Compute + firewall service by using the + ``nova.virt.firewall.NoopFirewallDriver`` firewall driver. + + * In the ``[vnc]`` section, enable and configure remote console access: + + .. path /etc/nova/nova.conf + .. code-block:: ini + + [vnc] + # ... + enabled = True + vncserver_listen = 0.0.0.0 + vncserver_proxyclient_address = $my_ip + novncproxy_base_url = http://controller:6080/vnc_auto.html + + The server component listens on all IP addresses and the proxy + component only listens on the management interface IP address of + the compute node. The base URL indicates the location where you + can use a web browser to access remote consoles of instances + on this compute node. + + .. note:: + + If the web browser to access remote consoles resides on + a host that cannot resolve the ``controller`` hostname, + you must replace ``controller`` with the management + interface IP address of the controller node. + + * In the ``[glance]`` section, configure the location of the Image service + API: + + .. path /etc/nova/nova.conf + .. code-block:: ini + + [glance] + # ... + api_servers = http://controller:9292 + + * In the ``[oslo_concurrency]`` section, configure the lock path: + + .. path /etc/nova/nova.conf + .. code-block:: ini + + [oslo_concurrency] + # ... + lock_path = /var/run/nova + + * In the ``[placement]`` section, configure the Placement API: + + .. path /etc/nova/nova.conf + .. code-block:: ini + + [placement] + # ... 
+ os_region_name = RegionOne + project_domain_name = Default + project_name = service + auth_type = password + user_domain_name = Default + auth_url = http://controller:35357/v3 + username = placement + password = PLACEMENT_PASS + + Replace ``PLACEMENT_PASS`` with the password you chose for the + ``placement`` user in the Identity service. Comment out any other options + in the ``[placement]`` section. + +#. Ensure the kernel module ``nbd`` is loaded. + + .. code-block:: console + + # modprobe nbd + +#. Ensure the module loads on every boot by adding ``nbd`` to the + ``/etc/modules-load.d/nbd.conf`` file. + +Finalize installation +--------------------- + +#. Determine whether your compute node supports hardware acceleration for + virtual machines: + + .. code-block:: console + + $ egrep -c '(vmx|svm)' /proc/cpuinfo + + If this command returns a value of ``one or greater``, your compute node + supports hardware acceleration, which typically requires no additional + configuration. + + If this command returns a value of ``zero``, your compute node does not + support hardware acceleration and you must configure ``libvirt`` to use QEMU + instead of KVM. + + * Edit the ``[libvirt]`` section in the ``/etc/nova/nova.conf`` file as + follows: + + .. path /etc/nova/nova.conf + .. code-block:: ini + + [libvirt] + # ... + virt_type = qemu + +#. Start the Compute service including its dependencies and configure them to + start automatically when the system boots: + + .. code-block:: console + + # systemctl enable libvirtd.service openstack-nova-compute.service + # systemctl start libvirtd.service openstack-nova-compute.service + +.. note:: + + If the ``nova-compute`` service fails to start, check + ``/var/log/nova/nova-compute.log``. The error message ``AMQP server on + controller:5672 is unreachable`` likely indicates that the firewall on the + controller node is preventing access to port 5672.
Configure the firewall + to open port 5672 on the controller node and restart the ``nova-compute`` + service on the compute node. + +Add the compute node to the cell database +----------------------------------------- + +.. important:: + + Run the following commands on the **controller** node. + +#. Source the admin credentials to enable admin-only CLI commands, then confirm + there are compute hosts in the database: + + .. code-block:: console + + $ . admin-openrc + + $ openstack compute service list --service nova-compute + +----+-------+--------------+------+-------+---------+----------------------------+ + | ID | Host | Binary | Zone | State | Status | Updated At | + +----+-------+--------------+------+-------+---------+----------------------------+ + | 1 | node1 | nova-compute | nova | up | enabled | 2017-04-14T15:30:44.000000 | + +----+-------+--------------+------+-------+---------+----------------------------+ + +#. Discover compute hosts: + + .. code-block:: console + + # su -s /bin/sh -c "nova-manage cell_v2 discover_hosts --verbose" nova + + Found 2 cell mappings. + Skipping cell0 since it does not contain hosts. + Getting compute nodes from cell 'cell1': ad5a5985-a719-4567-98d8-8d148aaae4bc + Found 1 computes in cell: ad5a5985-a719-4567-98d8-8d148aaae4bc + Checking host mapping for compute host 'compute': fe58ddc1-1d65-4f87-9456-bc040dc106b3 + Creating host mapping for compute host 'compute': fe58ddc1-1d65-4f87-9456-bc040dc106b3 + + .. note:: + + When you add new compute nodes, you must run ``nova-manage cell_v2 + discover_hosts`` on the controller node to register those new compute + nodes. Alternatively, you can set an appropriate interval in + ``/etc/nova/nova.conf``: + + ..
code-block:: ini + + [scheduler] + discover_hosts_in_cells_interval = 300 diff --git a/doc/source/install/compute-install-rdo.rst b/doc/source/install/compute-install-rdo.rst new file mode 100644 index 000000000000..3d228826e5ea --- /dev/null +++ b/doc/source/install/compute-install-rdo.rst @@ -0,0 +1,270 @@ +Install and configure a compute node +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +This section describes how to install and configure the Compute service on a +compute node. The service supports several hypervisors to deploy instances or +virtual machines (VMs). For simplicity, this configuration uses the Quick +EMUlator (QEMU) hypervisor with the kernel-based VM (KVM) extension on compute +nodes that support hardware acceleration for virtual machines. On legacy +hardware, this configuration uses the generic QEMU hypervisor. You can follow +these instructions with minor modifications to horizontally scale your +environment with additional compute nodes. + +.. note:: + + This section assumes that you are following the instructions in this guide + step-by-step to configure the first compute node. If you want to configure + additional compute nodes, prepare them in a similar fashion to the first + compute node in the :ref:`example architectures + ` section. Each additional compute node + requires a unique IP address. + +Install and configure components +-------------------------------- + +.. include:: shared/note_configuration_vary_by_distribution.rst + +#. Install the packages: + + .. code-block:: console + + # yum install openstack-nova-compute + +#. Edit the ``/etc/nova/nova.conf`` file and complete the following actions: + + * In the ``[DEFAULT]`` section, enable only the compute and + metadata APIs: + + .. path /etc/nova/nova.conf + .. code-block:: ini + + [DEFAULT] + # ... + enabled_apis = osapi_compute,metadata + + * In the ``[DEFAULT]`` section, configure ``RabbitMQ`` message queue access: + + .. path /etc/nova/nova.conf + .. 
code-block:: ini + + [DEFAULT] + # ... + transport_url = rabbit://openstack:RABBIT_PASS@controller + + Replace ``RABBIT_PASS`` with the password you chose for the ``openstack`` + account in ``RabbitMQ``. + + * In the ``[api]`` and ``[keystone_authtoken]`` sections, configure Identity + service access: + + .. path /etc/nova/nova.conf + .. code-block:: ini + + [api] + # ... + auth_strategy = keystone + + [keystone_authtoken] + # ... + auth_uri = http://controller:5000 + auth_url = http://controller:35357 + memcached_servers = controller:11211 + auth_type = password + project_domain_name = default + user_domain_name = default + project_name = service + username = nova + password = NOVA_PASS + + Replace ``NOVA_PASS`` with the password you chose for the ``nova`` user in + the Identity service. + + .. note:: + + Comment out or remove any other options in the ``[keystone_authtoken]`` + section. + + * In the ``[DEFAULT]`` section, configure the ``my_ip`` option: + + .. path /etc/nova/nova.conf + .. code-block:: ini + + [DEFAULT] + # ... + my_ip = MANAGEMENT_INTERFACE_IP_ADDRESS + + Replace ``MANAGEMENT_INTERFACE_IP_ADDRESS`` with the IP address of the + management network interface on your compute node, typically 10.0.0.31 for + the first node in the :ref:`example architecture + `. + + * In the ``[DEFAULT]`` section, enable support for the Networking service: + + .. path /etc/nova/nova.conf + .. code-block:: ini + + [DEFAULT] + # ... + use_neutron = True + firewall_driver = nova.virt.firewall.NoopFirewallDriver + + .. note:: + + By default, Compute uses an internal firewall service. Since Networking + includes a firewall service, you must disable the Compute firewall + service by using the ``nova.virt.firewall.NoopFirewallDriver`` firewall + driver. + + * In the ``[vnc]`` section, enable and configure remote console access: + + .. path /etc/nova/nova.conf + .. code-block:: ini + + [vnc] + # ... 
+ enabled = True + vncserver_listen = 0.0.0.0 + vncserver_proxyclient_address = $my_ip + novncproxy_base_url = http://controller:6080/vnc_auto.html + + The server component listens on all IP addresses and the proxy component + only listens on the management interface IP address of the compute node. + The base URL indicates the location where you can use a web browser to + access remote consoles of instances on this compute node. + + .. note:: + + If the web browser to access remote consoles resides on a host that + cannot resolve the ``controller`` hostname, you must replace + ``controller`` with the management interface IP address of the + controller node. + + * In the ``[glance]`` section, configure the location of the Image service + API: + + .. path /etc/nova/nova.conf + .. code-block:: ini + + [glance] + # ... + api_servers = http://controller:9292 + + * In the ``[oslo_concurrency]`` section, configure the lock path: + + .. path /etc/nova/nova.conf + .. code-block:: ini + + [oslo_concurrency] + # ... + lock_path = /var/lib/nova/tmp + + * In the ``[placement]`` section, configure the Placement API: + + .. path /etc/nova/nova.conf + .. code-block:: ini + + [placement] + # ... + os_region_name = RegionOne + project_domain_name = Default + project_name = service + auth_type = password + user_domain_name = Default + auth_url = http://controller:35357/v3 + username = placement + password = PLACEMENT_PASS + + Replace ``PLACEMENT_PASS`` with the password you chose for the + ``placement`` user in the Identity service. Comment out any other options + in the ``[placement]`` section. + +Finalize installation +--------------------- + +#. Determine whether your compute node supports hardware acceleration for + virtual machines: + + .. code-block:: console + + $ egrep -c '(vmx|svm)' /proc/cpuinfo + + If this command returns a value of ``one or greater``, your compute node + supports hardware acceleration, which typically requires no additional + configuration.
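The hardware-acceleration check above can be sketched as a small shell helper. This is an illustrative sketch only, not part of the guide: the cpuinfo path is taken as a parameter (a hypothetical choice made so the logic can be exercised against any file), whereas the guide simply runs ``egrep`` against ``/proc/cpuinfo``.

```shell
# Sketch of the check above: count vmx/svm flags in a cpuinfo-style file
# and pick the libvirt virt_type accordingly. The file argument is a
# hypothetical parameter for illustration; the guide reads /proc/cpuinfo.
count_virt_flags() {
    # grep -c prints 0 and exits non-zero when nothing matches; keep exit 0
    grep -Ec '(vmx|svm)' "$1" || true
}

pick_virt_type() {
    # one or greater -> KVM hardware acceleration; zero -> plain QEMU
    if [ "$(count_virt_flags "$1")" -ge 1 ]; then
        echo kvm
    else
        echo qemu
    fi
}
```

With the argument set to ``/proc/cpuinfo``, ``pick_virt_type`` prints ``kvm`` on hosts with hardware acceleration and ``qemu`` otherwise, matching the two cases the text describes.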
+ + If this command returns a value of ``zero``, your compute node does not + support hardware acceleration and you must configure ``libvirt`` to use QEMU + instead of KVM. + + * Edit the ``[libvirt]`` section in the ``/etc/nova/nova.conf`` file as + follows: + + .. path /etc/nova/nova.conf + .. code-block:: ini + + [libvirt] + # ... + virt_type = qemu + +#. Start the Compute service including its dependencies and configure them to + start automatically when the system boots: + + .. code-block:: console + + # systemctl enable libvirtd.service openstack-nova-compute.service + # systemctl start libvirtd.service openstack-nova-compute.service + +.. note:: + + If the ``nova-compute`` service fails to start, check + ``/var/log/nova/nova-compute.log``. The error message ``AMQP server on + controller:5672 is unreachable`` likely indicates that the firewall on the + controller node is preventing access to port 5672. Configure the firewall + to open port 5672 on the controller node and restart the ``nova-compute`` + service on the compute node. + +Add the compute node to the cell database +----------------------------------------- + +.. important:: + + Run the following commands on the **controller** node. + +#. Source the admin credentials to enable admin-only CLI commands, then confirm + there are compute hosts in the database: + + .. code-block:: console + + $ . admin-openrc + + $ openstack compute service list --service nova-compute + +----+-------+--------------+------+-------+---------+----------------------------+ + | ID | Host | Binary | Zone | State | Status | Updated At | + +----+-------+--------------+------+-------+---------+----------------------------+ + | 1 | node1 | nova-compute | nova | up | enabled | 2017-04-14T15:30:44.000000 | + +----+-------+--------------+------+-------+---------+----------------------------+ + +#. Discover compute hosts: + + ..
code-block:: console + + # su -s /bin/sh -c "nova-manage cell_v2 discover_hosts --verbose" nova + + Found 2 cell mappings. + Skipping cell0 since it does not contain hosts. + Getting compute nodes from cell 'cell1': ad5a5985-a719-4567-98d8-8d148aaae4bc + Found 1 computes in cell: ad5a5985-a719-4567-98d8-8d148aaae4bc + Checking host mapping for compute host 'compute': fe58ddc1-1d65-4f87-9456-bc040dc106b3 + Creating host mapping for compute host 'compute': fe58ddc1-1d65-4f87-9456-bc040dc106b3 + + .. note:: + + When you add new compute nodes, you must run ``nova-manage cell_v2 + discover_hosts`` on the controller node to register those new compute + nodes. Alternatively, you can set an appropriate interval in + ``/etc/nova/nova.conf``: + + .. code-block:: ini + + [scheduler] + discover_hosts_in_cells_interval = 300 diff --git a/doc/source/install/compute-install-ubuntu.rst b/doc/source/install/compute-install-ubuntu.rst new file mode 100644 index 000000000000..fdfa6760e63d --- /dev/null +++ b/doc/source/install/compute-install-ubuntu.rst @@ -0,0 +1,265 @@ +Install and configure a compute node +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +This section describes how to install and configure the Compute service on a +compute node. The service supports several hypervisors to deploy instances or +virtual machines (VMs). For simplicity, this configuration uses the Quick +EMUlator (QEMU) hypervisor with the kernel-based VM (KVM) extension on compute +nodes that support hardware acceleration for virtual machines. On legacy +hardware, this configuration uses the generic QEMU hypervisor. You can follow +these instructions with minor modifications to horizontally scale your +environment with additional compute nodes. + +.. note:: + + This section assumes that you are following the instructions in this guide + step-by-step to configure the first compute node. 
If you want to configure + additional compute nodes, prepare them in a similar fashion to the first + compute node in the :ref:`example architectures + ` section. Each additional compute node + requires a unique IP address. + +Install and configure components +-------------------------------- + +.. include:: shared/note_configuration_vary_by_distribution.rst + +#. Install the packages: + + .. code-block:: console + + # apt install nova-compute + +2. Edit the ``/etc/nova/nova.conf`` file and complete the following actions: + + * In the ``[DEFAULT]`` section, configure ``RabbitMQ`` message queue access: + + .. path /etc/nova/nova.conf + .. code-block:: ini + + [DEFAULT] + # ... + transport_url = rabbit://openstack:RABBIT_PASS@controller + + Replace ``RABBIT_PASS`` with the password you chose for the ``openstack`` + account in ``RabbitMQ``. + + * In the ``[api]`` and ``[keystone_authtoken]`` sections, configure Identity + service access: + + .. path /etc/nova/nova.conf + .. code-block:: ini + + [api] + # ... + auth_strategy = keystone + + [keystone_authtoken] + # ... + auth_uri = http://controller:5000 + auth_url = http://controller:35357 + memcached_servers = controller:11211 + auth_type = password + project_domain_name = default + user_domain_name = default + project_name = service + username = nova + password = NOVA_PASS + + Replace ``NOVA_PASS`` with the password you chose for the ``nova`` user in + the Identity service. + + .. note:: + + Comment out or remove any other options in the + ``[keystone_authtoken]`` section. + + * In the ``[DEFAULT]`` section, configure the ``my_ip`` option: + + .. path /etc/nova/nova.conf + .. code-block:: ini + + [DEFAULT] + # ... + my_ip = MANAGEMENT_INTERFACE_IP_ADDRESS + + Replace ``MANAGEMENT_INTERFACE_IP_ADDRESS`` with the IP address of the + management network interface on your compute node, typically 10.0.0.31 for + the first node in the :ref:`example architecture + `. 
+ + * In the ``[DEFAULT]`` section, enable support for the Networking service: + + .. path /etc/nova/nova.conf + .. code-block:: ini + + [DEFAULT] + # ... + use_neutron = True + firewall_driver = nova.virt.firewall.NoopFirewallDriver + + .. note:: + + By default, Compute uses an internal firewall service. Since Networking + includes a firewall service, you must disable the Compute firewall + service by using the ``nova.virt.firewall.NoopFirewallDriver`` firewall + driver. + + * In the ``[vnc]`` section, enable and configure remote console access: + + .. path /etc/nova/nova.conf + .. code-block:: ini + + [vnc] + # ... + enabled = True + vncserver_listen = 0.0.0.0 + vncserver_proxyclient_address = $my_ip + novncproxy_base_url = http://controller:6080/vnc_auto.html + + The server component listens on all IP addresses and the proxy component + only listens on the management interface IP address of the compute node. + The base URL indicates the location where you can use a web browser to + access remote consoles of instances on this compute node. + + .. note:: + + If the web browser to access remote consoles resides on a host that + cannot resolve the ``controller`` hostname, you must replace + ``controller`` with the management interface IP address of the + controller node. + + * In the ``[glance]`` section, configure the location of the Image service + API: + + .. path /etc/nova/nova.conf + .. code-block:: ini + + [glance] + # ... + api_servers = http://controller:9292 + + * In the ``[oslo_concurrency]`` section, configure the lock path: + + .. path /etc/nova/nova.conf + .. code-block:: ini + + [oslo_concurrency] + # ... + lock_path = /var/lib/nova/tmp + +.. todo:: + + https://bugs.launchpad.net/ubuntu/+source/nova/+bug/1506667 + + * Due to a packaging bug, remove the ``log_dir`` option from the + ``[DEFAULT]`` section. + + * In the ``[placement]`` section, configure the Placement API: + + .. path /etc/nova/nova.conf + .. code-block:: ini + + [placement] + # ... 
+ os_region_name = RegionOne + project_domain_name = Default + project_name = service + auth_type = password + user_domain_name = Default + auth_url = http://controller:35357/v3 + username = placement + password = PLACEMENT_PASS + + Replace ``PLACEMENT_PASS`` with the password you chose for the + ``placement`` user in the Identity service. Comment out any other options + in the ``[placement]`` section. + +Finalize installation +--------------------- + +#. Determine whether your compute node supports hardware acceleration for + virtual machines: + + .. code-block:: console + + $ egrep -c '(vmx|svm)' /proc/cpuinfo + + If this command returns a value of ``one or greater``, your compute node + supports hardware acceleration, which typically requires no additional + configuration. + + If this command returns a value of ``zero``, your compute node does not + support hardware acceleration and you must configure ``libvirt`` to use QEMU + instead of KVM. + + * Edit the ``[libvirt]`` section in the ``/etc/nova/nova-compute.conf`` file as + follows: + + .. path /etc/nova/nova-compute.conf + .. code-block:: ini + + [libvirt] + # ... + virt_type = qemu + +#. Restart the Compute service: + + .. code-block:: console + + # service nova-compute restart + +.. note:: + + If the ``nova-compute`` service fails to start, check + ``/var/log/nova/nova-compute.log``. The error message ``AMQP server on + controller:5672 is unreachable`` likely indicates that the firewall on the + controller node is preventing access to port 5672. Configure the firewall + to open port 5672 on the controller node and restart the ``nova-compute`` + service on the compute node. + +Add the compute node to the cell database +----------------------------------------- + +.. important:: + + Run the following commands on the **controller** node. + +#. Source the admin credentials to enable admin-only CLI commands, then confirm + there are compute hosts in the database: + + .. code-block:: console + + $ .
admin-openrc + + $ openstack compute service list --service nova-compute + +----+-------+--------------+------+-------+---------+----------------------------+ + | ID | Host | Binary | Zone | State | Status | Updated At | + +----+-------+--------------+------+-------+---------+----------------------------+ + | 1 | node1 | nova-compute | nova | up | enabled | 2017-04-14T15:30:44.000000 | + +----+-------+--------------+------+-------+---------+----------------------------+ + +#. Discover compute hosts: + + .. code-block:: console + + # su -s /bin/sh -c "nova-manage cell_v2 discover_hosts --verbose" nova + + Found 2 cell mappings. + Skipping cell0 since it does not contain hosts. + Getting compute nodes from cell 'cell1': ad5a5985-a719-4567-98d8-8d148aaae4bc + Found 1 computes in cell: ad5a5985-a719-4567-98d8-8d148aaae4bc + Checking host mapping for compute host 'compute': fe58ddc1-1d65-4f87-9456-bc040dc106b3 + Creating host mapping for compute host 'compute': fe58ddc1-1d65-4f87-9456-bc040dc106b3 + + .. note:: + + When you add new compute nodes, you must run ``nova-manage cell_v2 + discover_hosts`` on the controller node to register those new compute + nodes. Alternatively, you can set an appropriate interval in + ``/etc/nova/nova.conf``: + + .. code-block:: ini + + [scheduler] + discover_hosts_in_cells_interval = 300 diff --git a/doc/source/install/compute-install.rst b/doc/source/install/compute-install.rst new file mode 100644 index 000000000000..69217d949bfd --- /dev/null +++ b/doc/source/install/compute-install.rst @@ -0,0 +1,25 @@ +Install and configure a compute node +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +This section describes how to install and configure the Compute service on a +compute node. The service supports several hypervisors to deploy instances or +virtual machines (VMs). 
For simplicity, this configuration uses the Quick +EMUlator (QEMU) hypervisor with the kernel-based VM (KVM) extension on compute +nodes that support hardware acceleration for virtual machines. On legacy +hardware, this configuration uses the generic QEMU hypervisor. You can follow +these instructions with minor modifications to horizontally scale your +environment with additional compute nodes. + +.. note:: + + This section assumes that you are following the instructions in this guide + step-by-step to configure the first compute node. If you want to configure + additional compute nodes, prepare them in a similar fashion to the first + compute node in the :ref:`example architectures + ` section. Each additional compute node + requires a unique IP address. + +.. toctree:: + :glob: + + compute-install-* diff --git a/doc/source/install/controller-install-obs.rst b/doc/source/install/controller-install-obs.rst new file mode 100644 index 000000000000..623c674ebd30 --- /dev/null +++ b/doc/source/install/controller-install-obs.rst @@ -0,0 +1,486 @@ +Install and configure controller node +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +This section describes how to install and configure the Compute service, +code-named nova, on the controller node. + +Prerequisites +------------- + +Before you install and configure the Compute service, you must create +databases, service credentials, and API endpoints. + +#. To create the databases, complete these steps: + + * Use the database access client to connect to the database server as the + ``root`` user: + + .. code-block:: console + + $ mysql -u root -p + + * Create the ``nova_api``, ``nova``, and ``nova_cell0`` databases: + + .. code-block:: console + + MariaDB [(none)]> CREATE DATABASE nova_api; + MariaDB [(none)]> CREATE DATABASE nova; + MariaDB [(none)]> CREATE DATABASE nova_cell0; + + * Grant proper access to the databases: + + .. 
code-block:: console + + MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'localhost' \ + IDENTIFIED BY 'NOVA_DBPASS'; + MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'%' \ + IDENTIFIED BY 'NOVA_DBPASS'; + + MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' \ + IDENTIFIED BY 'NOVA_DBPASS'; + MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' \ + IDENTIFIED BY 'NOVA_DBPASS'; + + MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'localhost' \ + IDENTIFIED BY 'NOVA_DBPASS'; + MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'%' \ + IDENTIFIED BY 'NOVA_DBPASS'; + + Replace ``NOVA_DBPASS`` with a suitable password. + + * Exit the database access client. + +#. Source the ``admin`` credentials to gain access to admin-only CLI commands: + + .. code-block:: console + + $ . admin-openrc + +#. Create the Compute service credentials: + + * Create the ``nova`` user: + + .. code-block:: console + + $ openstack user create --domain default --password-prompt nova + + User Password: + Repeat User Password: + +---------------------+----------------------------------+ + | Field | Value | + +---------------------+----------------------------------+ + | domain_id | default | + | enabled | True | + | id | 8a7dbf5279404537b1c7b86c033620fe | + | name | nova | + | options | {} | + | password_expires_at | None | + +---------------------+----------------------------------+ + + * Add the ``admin`` role to the ``nova`` user: + + .. code-block:: console + + $ openstack role add --project service --user nova admin + + .. note:: + + This command provides no output. + + * Create the ``nova`` service entity: + + .. 
code-block:: console + + $ openstack service create --name nova \ + --description "OpenStack Compute" compute + + +-------------+----------------------------------+ + | Field | Value | + +-------------+----------------------------------+ + | description | OpenStack Compute | + | enabled | True | + | id | 060d59eac51b4594815603d75a00aba2 | + | name | nova | + | type | compute | + +-------------+----------------------------------+ + +#. Create the Compute API service endpoints: + + .. code-block:: console + + $ openstack endpoint create --region RegionOne \ + compute public http://controller:8774/v2.1 + + +--------------+-------------------------------------------+ + | Field | Value | + +--------------+-------------------------------------------+ + | enabled | True | + | id | 3c1caa473bfe4390a11e7177894bcc7b | + | interface | public | + | region | RegionOne | + | region_id | RegionOne | + | service_id | 060d59eac51b4594815603d75a00aba2 | + | service_name | nova | + | service_type | compute | + | url | http://controller:8774/v2.1 | + +--------------+-------------------------------------------+ + + $ openstack endpoint create --region RegionOne \ + compute internal http://controller:8774/v2.1 + + +--------------+-------------------------------------------+ + | Field | Value | + +--------------+-------------------------------------------+ + | enabled | True | + | id | e3c918de680746a586eac1f2d9bc10ab | + | interface | internal | + | region | RegionOne | + | region_id | RegionOne | + | service_id | 060d59eac51b4594815603d75a00aba2 | + | service_name | nova | + | service_type | compute | + | url | http://controller:8774/v2.1 | + +--------------+-------------------------------------------+ + + $ openstack endpoint create --region RegionOne \ + compute admin http://controller:8774/v2.1 + + +--------------+-------------------------------------------+ + | Field | Value | + +--------------+-------------------------------------------+ + | enabled | True | + | id | 
38f7af91666a47cfb97b4dc790b94424 | + | interface | admin | + | region | RegionOne | + | region_id | RegionOne | + | service_id | 060d59eac51b4594815603d75a00aba2 | + | service_name | nova | + | service_type | compute | + | url | http://controller:8774/v2.1 | + +--------------+-------------------------------------------+ + +#. Create a Placement service user using your chosen ``PLACEMENT_PASS``: + + .. code-block:: console + + $ openstack user create --domain default --password-prompt placement + + User Password: + Repeat User Password: + +---------------------+----------------------------------+ + | Field | Value | + +---------------------+----------------------------------+ + | domain_id | default | + | enabled | True | + | id | fa742015a6494a949f67629884fc7ec8 | + | name | placement | + | options | {} | + | password_expires_at | None | + +---------------------+----------------------------------+ + +#. Add the Placement user to the service project with the admin role: + + .. code-block:: console + + $ openstack role add --project service --user placement admin + + .. note:: + + This command provides no output. + +#. Create the Placement API entry in the service catalog: + + .. code-block:: console + + $ openstack service create --name placement --description "Placement API" placement + +-------------+----------------------------------+ + | Field | Value | + +-------------+----------------------------------+ + | description | Placement API | + | enabled | True | + | id | 2d1a27022e6e4185b86adac4444c495f | + | name | placement | + | type | placement | + +-------------+----------------------------------+ + +#. Create the Placement API service endpoints: + + .. 
code-block:: console + + $ openstack endpoint create --region RegionOne placement public http://controller:8778 + +--------------+----------------------------------+ + | Field | Value | + +--------------+----------------------------------+ + | enabled | True | + | id | 2b1b2637908b4137a9c2e0470487cbc0 | + | interface | public | + | region | RegionOne | + | region_id | RegionOne | + | service_id | 2d1a27022e6e4185b86adac4444c495f | + | service_name | placement | + | service_type | placement | + | url | http://controller:8778 | + +--------------+----------------------------------+ + + $ openstack endpoint create --region RegionOne placement internal http://controller:8778 + +--------------+----------------------------------+ + | Field | Value | + +--------------+----------------------------------+ + | enabled | True | + | id | 02bcda9a150a4bd7993ff4879df971ab | + | interface | internal | + | region | RegionOne | + | region_id | RegionOne | + | service_id | 2d1a27022e6e4185b86adac4444c495f | + | service_name | placement | + | service_type | placement | + | url | http://controller:8778 | + +--------------+----------------------------------+ + + $ openstack endpoint create --region RegionOne placement admin http://controller:8778 + +--------------+----------------------------------+ + | Field | Value | + +--------------+----------------------------------+ + | enabled | True | + | id | 3d71177b9e0f406f98cbff198d74b182 | + | interface | admin | + | region | RegionOne | + | region_id | RegionOne | + | service_id | 2d1a27022e6e4185b86adac4444c495f | + | service_name | placement | + | service_type | placement | + | url | http://controller:8778 | + +--------------+----------------------------------+ + +Install and configure components +-------------------------------- + +.. include:: shared/note_configuration_vary_by_distribution.rst + +.. note:: + + As of the Newton release, SUSE OpenStack packages are shipped with the + upstream default configuration files. 
For example, ``/etc/nova/nova.conf`` + has customizations in ``/etc/nova/nova.conf.d/010-nova.conf``. While the + following instructions modify the default configuration file, adding a new + file in ``/etc/nova/nova.conf.d`` achieves the same result. + +#. Install the packages: + + .. code-block:: console + + # zypper install openstack-nova-api openstack-nova-scheduler \ + openstack-nova-conductor openstack-nova-consoleauth \ + openstack-nova-novncproxy openstack-nova-placement-api \ + iptables + +#. Edit the ``/etc/nova/nova.conf`` file and complete the following actions: + + * In the ``[DEFAULT]`` section, enable only the compute and metadata + APIs: + + .. path /etc/nova/nova.conf + .. code-block:: ini + + [DEFAULT] + # ... + enabled_apis = osapi_compute,metadata + + * In the ``[api_database]`` and ``[database]`` sections, configure database + access: + + .. path /etc/nova/nova.conf + .. code-block:: ini + + [api_database] + # ... + connection = mysql+pymysql://nova:NOVA_DBPASS@controller/nova_api + + [database] + # ... + connection = mysql+pymysql://nova:NOVA_DBPASS@controller/nova + + Replace ``NOVA_DBPASS`` with the password you chose for the Compute + databases. + + * In the ``[DEFAULT]`` section, configure ``RabbitMQ`` message queue access: + + .. path /etc/nova/nova.conf + .. code-block:: ini + + [DEFAULT] + # ... + transport_url = rabbit://openstack:RABBIT_PASS@controller + + Replace ``RABBIT_PASS`` with the password you chose for the ``openstack`` + account in ``RabbitMQ``. + + * In the ``[api]`` and ``[keystone_authtoken]`` sections, configure Identity + service access: + + .. path /etc/nova/nova.conf + .. code-block:: ini + + [api] + # ... + auth_strategy = keystone + + [keystone_authtoken] + # ... 
+ auth_uri = http://controller:5000 + auth_url = http://controller:35357 + memcached_servers = controller:11211 + auth_type = password + project_domain_name = default + user_domain_name = default + project_name = service + username = nova + password = NOVA_PASS + + Replace ``NOVA_PASS`` with the password you chose for the ``nova`` user in + the Identity service. + + .. note:: + + Comment out or remove any other options in the ``[keystone_authtoken]`` + section. + + * In the ``[DEFAULT]`` section, configure the ``my_ip`` option to use the + management interface IP address of the controller node: + + .. path /etc/nova/nova.conf + .. code-block:: ini + + [DEFAULT] + # ... + my_ip = 10.0.0.11 + + * In the ``[DEFAULT]`` section, enable support for the Networking service: + + .. path /etc/nova/nova.conf + .. code-block:: ini + + [DEFAULT] + # ... + use_neutron = True + firewall_driver = nova.virt.firewall.NoopFirewallDriver + + .. note:: + + By default, Compute uses an internal firewall driver. Since the + Networking service includes a firewall driver, you must disable the + Compute firewall driver by using the + ``nova.virt.firewall.NoopFirewallDriver`` firewall driver. + + * In the ``[vnc]`` section, configure the VNC proxy to use the management + interface IP address of the controller node: + + .. path /etc/nova/nova.conf + .. code-block:: ini + + [vnc] + enabled = true + # ... + vncserver_listen = $my_ip + vncserver_proxyclient_address = $my_ip + + * In the ``[glance]`` section, configure the location of the Image service + API: + + .. path /etc/nova/nova.conf + .. code-block:: ini + + [glance] + # ... + api_servers = http://controller:9292 + + * In the ``[oslo_concurrency]`` section, configure the lock path: + + .. path /etc/nova/nova.conf + .. code-block:: ini + + [oslo_concurrency] + # ... + lock_path = /var/run/nova + + * In the ``[placement]`` section, configure the Placement API: + + .. path /etc/nova/nova.conf + .. code-block:: ini + + [placement] + # ... 
+ os_region_name = RegionOne + project_domain_name = Default + project_name = service + auth_type = password + user_domain_name = Default + auth_url = http://controller:35357/v3 + username = placement + password = PLACEMENT_PASS + + Replace ``PLACEMENT_PASS`` with the password you choose for the + ``placement`` user in the Identity service. Comment out any other options + in the ``[placement]`` section. + +#. Populate the ``nova-api`` database: + + .. code-block:: console + + # su -s /bin/sh -c "nova-manage api_db sync" nova + + .. note:: + + Ignore any deprecation messages in this output. + +#. Register the ``cell0`` database: + + .. code-block:: console + + # su -s /bin/sh -c "nova-manage cell_v2 map_cell0" nova + +#. Create the ``cell1`` cell: + + .. code-block:: console + + # su -s /bin/sh -c "nova-manage cell_v2 create_cell --name=cell1 --verbose" nova + 109e1d4b-536a-40d0-83c6-5f121b82b650 + +#. Populate the nova database: + + .. code-block:: console + + # su -s /bin/sh -c "nova-manage db sync" nova + +#. Verify nova cell0 and cell1 are registered correctly: + + .. code-block:: console + + # nova-manage cell_v2 list_cells + +-------+--------------------------------------+ + | Name | UUID | + +-------+--------------------------------------+ + | cell1 | 109e1d4b-536a-40d0-83c6-5f121b82b650 | + | cell0 | 00000000-0000-0000-0000-000000000000 | + +-------+--------------------------------------+ + +Finalize installation +--------------------- + +* Enable the placement API Apache vhost: + + .. code-block:: console + + # mv /etc/apache2/vhosts.d/nova-placement-api.conf.sample /etc/apache2/vhosts.d/nova-placement-api.conf + # systemctl reload apache2.service + +* Start the Compute services and configure them to start when the system boots: + + .. 
code-block:: console + + # systemctl enable openstack-nova-api.service \ + openstack-nova-consoleauth.service openstack-nova-scheduler.service \ + openstack-nova-conductor.service openstack-nova-novncproxy.service + # systemctl start openstack-nova-api.service \ + openstack-nova-consoleauth.service openstack-nova-scheduler.service \ + openstack-nova-conductor.service openstack-nova-novncproxy.service diff --git a/doc/source/install/controller-install-rdo.rst b/doc/source/install/controller-install-rdo.rst new file mode 100644 index 000000000000..54f59deab59e --- /dev/null +++ b/doc/source/install/controller-install-rdo.rst @@ -0,0 +1,493 @@ +Install and configure controller node +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +This section describes how to install and configure the Compute service, +code-named nova, on the controller node. + +Prerequisites +------------- + +Before you install and configure the Compute service, you must create +databases, service credentials, and API endpoints. + +#. To create the databases, complete these steps: + + * Use the database access client to connect to the database server as the + ``root`` user: + + .. code-block:: console + + $ mysql -u root -p + + * Create the ``nova_api``, ``nova``, and ``nova_cell0`` databases: + + .. code-block:: console + + MariaDB [(none)]> CREATE DATABASE nova_api; + MariaDB [(none)]> CREATE DATABASE nova; + MariaDB [(none)]> CREATE DATABASE nova_cell0; + + * Grant proper access to the databases: + + .. 
code-block:: console + + MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'localhost' \ + IDENTIFIED BY 'NOVA_DBPASS'; + MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'%' \ + IDENTIFIED BY 'NOVA_DBPASS'; + + MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' \ + IDENTIFIED BY 'NOVA_DBPASS'; + MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' \ + IDENTIFIED BY 'NOVA_DBPASS'; + + MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'localhost' \ + IDENTIFIED BY 'NOVA_DBPASS'; + MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'%' \ + IDENTIFIED BY 'NOVA_DBPASS'; + + Replace ``NOVA_DBPASS`` with a suitable password. + + * Exit the database access client. + +#. Source the ``admin`` credentials to gain access to admin-only CLI commands: + + .. code-block:: console + + $ . admin-openrc + +#. Create the Compute service credentials: + + * Create the ``nova`` user: + + .. code-block:: console + + $ openstack user create --domain default --password-prompt nova + + User Password: + Repeat User Password: + +---------------------+----------------------------------+ + | Field | Value | + +---------------------+----------------------------------+ + | domain_id | default | + | enabled | True | + | id | 8a7dbf5279404537b1c7b86c033620fe | + | name | nova | + | options | {} | + | password_expires_at | None | + +---------------------+----------------------------------+ + + * Add the ``admin`` role to the ``nova`` user: + + .. code-block:: console + + $ openstack role add --project service --user nova admin + + .. note:: + + This command provides no output. + + * Create the ``nova`` service entity: + + .. 
code-block:: console + + $ openstack service create --name nova \ + --description "OpenStack Compute" compute + + +-------------+----------------------------------+ + | Field | Value | + +-------------+----------------------------------+ + | description | OpenStack Compute | + | enabled | True | + | id | 060d59eac51b4594815603d75a00aba2 | + | name | nova | + | type | compute | + +-------------+----------------------------------+ + +#. Create the Compute API service endpoints: + + .. code-block:: console + + $ openstack endpoint create --region RegionOne \ + compute public http://controller:8774/v2.1 + + +--------------+-------------------------------------------+ + | Field | Value | + +--------------+-------------------------------------------+ + | enabled | True | + | id | 3c1caa473bfe4390a11e7177894bcc7b | + | interface | public | + | region | RegionOne | + | region_id | RegionOne | + | service_id | 060d59eac51b4594815603d75a00aba2 | + | service_name | nova | + | service_type | compute | + | url | http://controller:8774/v2.1 | + +--------------+-------------------------------------------+ + + $ openstack endpoint create --region RegionOne \ + compute internal http://controller:8774/v2.1 + + +--------------+-------------------------------------------+ + | Field | Value | + +--------------+-------------------------------------------+ + | enabled | True | + | id | e3c918de680746a586eac1f2d9bc10ab | + | interface | internal | + | region | RegionOne | + | region_id | RegionOne | + | service_id | 060d59eac51b4594815603d75a00aba2 | + | service_name | nova | + | service_type | compute | + | url | http://controller:8774/v2.1 | + +--------------+-------------------------------------------+ + + $ openstack endpoint create --region RegionOne \ + compute admin http://controller:8774/v2.1 + + +--------------+-------------------------------------------+ + | Field | Value | + +--------------+-------------------------------------------+ + | enabled | True | + | id | 
38f7af91666a47cfb97b4dc790b94424 | + | interface | admin | + | region | RegionOne | + | region_id | RegionOne | + | service_id | 060d59eac51b4594815603d75a00aba2 | + | service_name | nova | + | service_type | compute | + | url | http://controller:8774/v2.1 | + +--------------+-------------------------------------------+ + +#. Create a Placement service user using your chosen ``PLACEMENT_PASS``: + + .. code-block:: console + + $ openstack user create --domain default --password-prompt placement + + User Password: + Repeat User Password: + +---------------------+----------------------------------+ + | Field | Value | + +---------------------+----------------------------------+ + | domain_id | default | + | enabled | True | + | id | fa742015a6494a949f67629884fc7ec8 | + | name | placement | + | options | {} | + | password_expires_at | None | + +---------------------+----------------------------------+ + +#. Add the Placement user to the service project with the admin role: + + .. code-block:: console + + $ openstack role add --project service --user placement admin + + .. note:: + + This command provides no output. + +#. Create the Placement API entry in the service catalog: + + .. code-block:: console + + $ openstack service create --name placement --description "Placement API" placement + +-------------+----------------------------------+ + | Field | Value | + +-------------+----------------------------------+ + | description | Placement API | + | enabled | True | + | id | 2d1a27022e6e4185b86adac4444c495f | + | name | placement | + | type | placement | + +-------------+----------------------------------+ + +#. Create the Placement API service endpoints: + + .. 
code-block:: console + + $ openstack endpoint create --region RegionOne placement public http://controller:8778 + +--------------+----------------------------------+ + | Field | Value | + +--------------+----------------------------------+ + | enabled | True | + | id | 2b1b2637908b4137a9c2e0470487cbc0 | + | interface | public | + | region | RegionOne | + | region_id | RegionOne | + | service_id | 2d1a27022e6e4185b86adac4444c495f | + | service_name | placement | + | service_type | placement | + | url | http://controller:8778 | + +--------------+----------------------------------+ + + $ openstack endpoint create --region RegionOne placement internal http://controller:8778 + +--------------+----------------------------------+ + | Field | Value | + +--------------+----------------------------------+ + | enabled | True | + | id | 02bcda9a150a4bd7993ff4879df971ab | + | interface | internal | + | region | RegionOne | + | region_id | RegionOne | + | service_id | 2d1a27022e6e4185b86adac4444c495f | + | service_name | placement | + | service_type | placement | + | url | http://controller:8778 | + +--------------+----------------------------------+ + + $ openstack endpoint create --region RegionOne placement admin http://controller:8778 + +--------------+----------------------------------+ + | Field | Value | + +--------------+----------------------------------+ + | enabled | True | + | id | 3d71177b9e0f406f98cbff198d74b182 | + | interface | admin | + | region | RegionOne | + | region_id | RegionOne | + | service_id | 2d1a27022e6e4185b86adac4444c495f | + | service_name | placement | + | service_type | placement | + | url | http://controller:8778 | + +--------------+----------------------------------+ + +Install and configure components +-------------------------------- + +.. include:: shared/note_configuration_vary_by_distribution.rst + +#. Install the packages: + + .. 
code-block:: console + + # yum install openstack-nova-api openstack-nova-conductor \ + openstack-nova-console openstack-nova-novncproxy \ + openstack-nova-scheduler openstack-nova-placement-api + +#. Edit the ``/etc/nova/nova.conf`` file and complete the following actions: + + * In the ``[DEFAULT]`` section, enable only the compute and metadata APIs: + + .. path /etc/nova/nova.conf + .. code-block:: ini + + [DEFAULT] + # ... + enabled_apis = osapi_compute,metadata + + * In the ``[api_database]`` and ``[database]`` sections, configure database + access: + + .. path /etc/nova/nova.conf + .. code-block:: ini + + [api_database] + # ... + connection = mysql+pymysql://nova:NOVA_DBPASS@controller/nova_api + + [database] + # ... + connection = mysql+pymysql://nova:NOVA_DBPASS@controller/nova + + Replace ``NOVA_DBPASS`` with the password you chose for the Compute + databases. + + * In the ``[DEFAULT]`` section, configure ``RabbitMQ`` message queue access: + + .. path /etc/nova/nova.conf + .. code-block:: ini + + [DEFAULT] + # ... + transport_url = rabbit://openstack:RABBIT_PASS@controller + + Replace ``RABBIT_PASS`` with the password you chose for the ``openstack`` + account in ``RabbitMQ``. + + * In the ``[api]`` and ``[keystone_authtoken]`` sections, configure Identity + service access: + + .. path /etc/nova/nova.conf + .. code-block:: ini + + [api] + # ... + auth_strategy = keystone + + [keystone_authtoken] + # ... + auth_uri = http://controller:5000 + auth_url = http://controller:35357 + memcached_servers = controller:11211 + auth_type = password + project_domain_name = default + user_domain_name = default + project_name = service + username = nova + password = NOVA_PASS + + Replace ``NOVA_PASS`` with the password you chose for the ``nova`` user in + the Identity service. + + .. note:: + + Comment out or remove any other options in the ``[keystone_authtoken]`` + section. 
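As an illustrative aside (not an installation step): the ``connection`` and ``transport_url`` values above follow fixed URL patterns, ``dialect+driver://user:password@host/database`` for SQLAlchemy and ``rabbit://user:password@host`` for oslo.messaging. This sketch, with hypothetical helper names that are not part of nova, assembles those values with the Python standard library and checks their shape:

```python
from configparser import ConfigParser
from urllib.parse import urlsplit

# Hypothetical helpers for illustration only; nova does not ship these.
def db_url(user, password, host, database):
    # SQLAlchemy-style URL: dialect+driver://user:password@host/database
    return f"mysql+pymysql://{user}:{password}@{host}/{database}"

def rabbit_url(user, password, host):
    # oslo.messaging transport URL: rabbit://user:password@host
    return f"rabbit://{user}:{password}@{host}"

# Mirror the options shown above (NOVA_DBPASS/RABBIT_PASS are placeholders).
cfg = ConfigParser()
cfg["DEFAULT"] = {"transport_url": rabbit_url("openstack", "RABBIT_PASS", "controller")}
cfg["api_database"] = {"connection": db_url("nova", "NOVA_DBPASS", "controller", "nova_api")}
cfg["database"] = {"connection": db_url("nova", "NOVA_DBPASS", "controller", "nova")}

# urlsplit can take these apart again: the scheme carries both the SQL
# dialect (mysql) and the DBAPI driver (pymysql).
parts = urlsplit(cfg["database"]["connection"])
print(parts.scheme)    # mysql+pymysql
print(parts.hostname)  # controller
```

A practical consequence of the shared pattern: a typo in the host or password shows up identically in both databases' URLs, so checking one parsed URL usually catches mistakes in the other.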
+ + * In the ``[DEFAULT]`` section, configure the ``my_ip`` option to use the + management interface IP address of the controller node: + + .. path /etc/nova/nova.conf + .. code-block:: ini + + [DEFAULT] + # ... + my_ip = 10.0.0.11 + + * In the ``[DEFAULT]`` section, enable support for the Networking service: + + .. path /etc/nova/nova.conf + .. code-block:: ini + + [DEFAULT] + # ... + use_neutron = True + firewall_driver = nova.virt.firewall.NoopFirewallDriver + + .. note:: + + By default, Compute uses an internal firewall driver. Since the + Networking service includes a firewall driver, you must disable the + Compute firewall driver by using the + ``nova.virt.firewall.NoopFirewallDriver`` firewall driver. + + * In the ``[vnc]`` section, configure the VNC proxy to use the management + interface IP address of the controller node: + + .. path /etc/nova/nova.conf + .. code-block:: ini + + [vnc] + enabled = true + # ... + vncserver_listen = $my_ip + vncserver_proxyclient_address = $my_ip + + * In the ``[glance]`` section, configure the location of the Image service + API: + + .. path /etc/nova/nova.conf + .. code-block:: ini + + [glance] + # ... + api_servers = http://controller:9292 + + * In the ``[oslo_concurrency]`` section, configure the lock path: + + .. path /etc/nova/nova.conf + .. code-block:: ini + + [oslo_concurrency] + # ... + lock_path = /var/lib/nova/tmp + + * In the ``[placement]`` section, configure the Placement API: + + .. path /etc/nova/nova.conf + .. code-block:: ini + + [placement] + # ... + os_region_name = RegionOne + project_domain_name = Default + project_name = service + auth_type = password + user_domain_name = Default + auth_url = http://controller:35357/v3 + username = placement + password = PLACEMENT_PASS + + Replace ``PLACEMENT_PASS`` with the password you choose for the + ``placement`` user in the Identity service. Comment out any other options + in the ``[placement]`` section. 
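As a rough orientation (not an installation step): the ``[placement]`` options above are consumed by the keystoneauth library, which uses them to request a project-scoped token from the Identity service. The sketch below approximates the Keystone v3 password-auth request body those options map onto; the real request is built internally by keystoneauth, so treat this as a hedged illustration rather than nova's actual code:

```python
# Values from the [placement] section above; PLACEMENT_PASS is the
# placeholder from the text, not a real password.
opts = {
    "auth_url": "http://controller:35357/v3",
    "username": "placement",
    "password": "PLACEMENT_PASS",
    "user_domain_name": "Default",
    "project_name": "service",
    "project_domain_name": "Default",
}

# Approximate shape of the Keystone v3 password-authentication body.
body = {
    "auth": {
        "identity": {
            "methods": ["password"],
            "password": {
                "user": {
                    "name": opts["username"],
                    "domain": {"name": opts["user_domain_name"]},
                    "password": opts["password"],
                }
            },
        },
        # Scoping to a project yields a token carrying the service catalog.
        "scope": {
            "project": {
                "name": opts["project_name"],
                "domain": {"name": opts["project_domain_name"]},
            }
        },
    }
}

# Tokens are requested from the auth_url's /auth/tokens resource.
print(opts["auth_url"] + "/auth/tokens")  # http://controller:35357/v3/auth/tokens
```

This is why every name in the section matters: ``user_domain_name`` and ``project_domain_name`` scope the user and project lookups separately, so a wrong domain fails authentication even when the username and password are correct.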
+ + * Due to a `packaging bug + `_, you must enable + access to the Placement API by adding the following configuration to + ``/etc/httpd/conf.d/00-nova-placement-api.conf``: + + .. path /etc/httpd/conf.d/00-nova-placement-api.conf + .. code-block:: ini + + <Directory /usr/bin> + <IfVersion >= 2.4> + Require all granted + </IfVersion> + <IfVersion < 2.4> + Order allow,deny + Allow from all + </IfVersion> + </Directory> + + * Restart the httpd service: + + .. code-block:: console + + # systemctl restart httpd + +#. Populate the ``nova-api`` database: + + .. code-block:: console + + # su -s /bin/sh -c "nova-manage api_db sync" nova + + .. note:: + + Ignore any deprecation messages in this output. + +#. Register the ``cell0`` database: + + .. code-block:: console + + # su -s /bin/sh -c "nova-manage cell_v2 map_cell0" nova + +#. Create the ``cell1`` cell: + + .. code-block:: console + + # su -s /bin/sh -c "nova-manage cell_v2 create_cell --name=cell1 --verbose" nova + 109e1d4b-536a-40d0-83c6-5f121b82b650 + +#. Populate the ``nova`` database: + + .. code-block:: console + + # su -s /bin/sh -c "nova-manage db sync" nova + +#. Verify that the nova ``cell0`` and ``cell1`` cells are registered correctly: + + .. code-block:: console + + # nova-manage cell_v2 list_cells + +-------+--------------------------------------+ + | Name | UUID | + +-------+--------------------------------------+ + | cell1 | 109e1d4b-536a-40d0-83c6-5f121b82b650 | + | cell0 | 00000000-0000-0000-0000-000000000000 | + +-------+--------------------------------------+ + +Finalize installation + --------------------- + + * Start the Compute services and configure them to start when the system boots: + + ..
code-block:: console + + # systemctl enable openstack-nova-api.service \ + openstack-nova-consoleauth.service openstack-nova-scheduler.service \ + openstack-nova-conductor.service openstack-nova-novncproxy.service + # systemctl start openstack-nova-api.service \ + openstack-nova-consoleauth.service openstack-nova-scheduler.service \ + openstack-nova-conductor.service openstack-nova-novncproxy.service diff --git a/doc/source/install/controller-install-ubuntu.rst b/doc/source/install/controller-install-ubuntu.rst new file mode 100644 index 000000000000..d6aa010c5eaf --- /dev/null +++ b/doc/source/install/controller-install-ubuntu.rst @@ -0,0 +1,465 @@ +Install and configure controller node +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +This section describes how to install and configure the Compute service, +code-named nova, on the controller node. + +Prerequisites +------------- + +Before you install and configure the Compute service, you must create +databases, service credentials, and API endpoints. + +#. To create the databases, complete these steps: + + * Use the database access client to connect to the database + server as the ``root`` user: + + .. code-block:: console + + # mysql + + * Create the ``nova_api``, ``nova``, and ``nova_cell0`` databases: + + .. code-block:: console + + MariaDB [(none)]> CREATE DATABASE nova_api; + MariaDB [(none)]> CREATE DATABASE nova; + MariaDB [(none)]> CREATE DATABASE nova_cell0; + + * Grant proper access to the databases: + + .. 
code-block:: console + + MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'localhost' \ + IDENTIFIED BY 'NOVA_DBPASS'; + MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'%' \ + IDENTIFIED BY 'NOVA_DBPASS'; + + MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' \ + IDENTIFIED BY 'NOVA_DBPASS'; + MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' \ + IDENTIFIED BY 'NOVA_DBPASS'; + + MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'localhost' \ + IDENTIFIED BY 'NOVA_DBPASS'; + MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'%' \ + IDENTIFIED BY 'NOVA_DBPASS'; + + Replace ``NOVA_DBPASS`` with a suitable password. + + * Exit the database access client. + +#. Source the ``admin`` credentials to gain access to admin-only CLI commands: + + .. code-block:: console + + $ . admin-openrc + +#. Create the Compute service credentials: + + * Create the ``nova`` user: + + .. code-block:: console + + $ openstack user create --domain default --password-prompt nova + + User Password: + Repeat User Password: + +---------------------+----------------------------------+ + | Field | Value | + +---------------------+----------------------------------+ + | domain_id | default | + | enabled | True | + | id | 8a7dbf5279404537b1c7b86c033620fe | + | name | nova | + | options | {} | + | password_expires_at | None | + +---------------------+----------------------------------+ + + * Add the ``admin`` role to the ``nova`` user: + + .. code-block:: console + + $ openstack role add --project service --user nova admin + + .. note:: + + This command provides no output. + + * Create the ``nova`` service entity: + + .. 
code-block:: console + + $ openstack service create --name nova \ + --description "OpenStack Compute" compute + + +-------------+----------------------------------+ + | Field | Value | + +-------------+----------------------------------+ + | description | OpenStack Compute | + | enabled | True | + | id | 060d59eac51b4594815603d75a00aba2 | + | name | nova | + | type | compute | + +-------------+----------------------------------+ + +#. Create the Compute API service endpoints: + + .. code-block:: console + + $ openstack endpoint create --region RegionOne \ + compute public http://controller:8774/v2.1 + + +--------------+-------------------------------------------+ + | Field | Value | + +--------------+-------------------------------------------+ + | enabled | True | + | id | 3c1caa473bfe4390a11e7177894bcc7b | + | interface | public | + | region | RegionOne | + | region_id | RegionOne | + | service_id | 060d59eac51b4594815603d75a00aba2 | + | service_name | nova | + | service_type | compute | + | url | http://controller:8774/v2.1 | + +--------------+-------------------------------------------+ + + $ openstack endpoint create --region RegionOne \ + compute internal http://controller:8774/v2.1 + + +--------------+-------------------------------------------+ + | Field | Value | + +--------------+-------------------------------------------+ + | enabled | True | + | id | e3c918de680746a586eac1f2d9bc10ab | + | interface | internal | + | region | RegionOne | + | region_id | RegionOne | + | service_id | 060d59eac51b4594815603d75a00aba2 | + | service_name | nova | + | service_type | compute | + | url | http://controller:8774/v2.1 | + +--------------+-------------------------------------------+ + + $ openstack endpoint create --region RegionOne \ + compute admin http://controller:8774/v2.1 + + +--------------+-------------------------------------------+ + | Field | Value | + +--------------+-------------------------------------------+ + | enabled | True | + | id | 
38f7af91666a47cfb97b4dc790b94424 | + | interface | admin | + | region | RegionOne | + | region_id | RegionOne | + | service_id | 060d59eac51b4594815603d75a00aba2 | + | service_name | nova | + | service_type | compute | + | url | http://controller:8774/v2.1 | + +--------------+-------------------------------------------+ + +#. Create a Placement service user using your chosen ``PLACEMENT_PASS``: + + .. code-block:: console + + $ openstack user create --domain default --password-prompt placement + + User Password: + Repeat User Password: + +---------------------+----------------------------------+ + | Field | Value | + +---------------------+----------------------------------+ + | domain_id | default | + | enabled | True | + | id | fa742015a6494a949f67629884fc7ec8 | + | name | placement | + | options | {} | + | password_expires_at | None | + +---------------------+----------------------------------+ + +#. Add the Placement user to the service project with the admin role: + + .. code-block:: console + + $ openstack role add --project service --user placement admin + + .. note:: + + This command provides no output. + +#. Create the Placement API entry in the service catalog: + + .. code-block:: console + + $ openstack service create --name placement --description "Placement API" placement + +-------------+----------------------------------+ + | Field | Value | + +-------------+----------------------------------+ + | description | Placement API | + | enabled | True | + | id | 2d1a27022e6e4185b86adac4444c495f | + | name | placement | + | type | placement | + +-------------+----------------------------------+ + +#. Create the Placement API service endpoints: + + .. 
code-block:: console + + $ openstack endpoint create --region RegionOne placement public http://controller:8778 + +--------------+----------------------------------+ + | Field | Value | + +--------------+----------------------------------+ + | enabled | True | + | id | 2b1b2637908b4137a9c2e0470487cbc0 | + | interface | public | + | region | RegionOne | + | region_id | RegionOne | + | service_id | 2d1a27022e6e4185b86adac4444c495f | + | service_name | placement | + | service_type | placement | + | url | http://controller:8778 | + +--------------+----------------------------------+ + + $ openstack endpoint create --region RegionOne placement internal http://controller:8778 + +--------------+----------------------------------+ + | Field | Value | + +--------------+----------------------------------+ + | enabled | True | + | id | 02bcda9a150a4bd7993ff4879df971ab | + | interface | internal | + | region | RegionOne | + | region_id | RegionOne | + | service_id | 2d1a27022e6e4185b86adac4444c495f | + | service_name | placement | + | service_type | placement | + | url | http://controller:8778 | + +--------------+----------------------------------+ + + $ openstack endpoint create --region RegionOne placement admin http://controller:8778 + +--------------+----------------------------------+ + | Field | Value | + +--------------+----------------------------------+ + | enabled | True | + | id | 3d71177b9e0f406f98cbff198d74b182 | + | interface | admin | + | region | RegionOne | + | region_id | RegionOne | + | service_id | 2d1a27022e6e4185b86adac4444c495f | + | service_name | placement | + | service_type | placement | + | url | http://controller:8778 | + +--------------+----------------------------------+ + +Install and configure components +-------------------------------- + +.. include:: shared/note_configuration_vary_by_distribution.rst + +#. Install the packages: + + .. 
code-block:: console + + # apt install nova-api nova-conductor nova-consoleauth \ + nova-novncproxy nova-scheduler nova-placement-api + +#. Edit the ``/etc/nova/nova.conf`` file and complete the following actions: + + * In the ``[api_database]`` and ``[database]`` sections, configure database + access: + + .. path /etc/nova/nova.conf + .. code-block:: ini + + [api_database] + # ... + connection = mysql+pymysql://nova:NOVA_DBPASS@controller/nova_api + + [database] + # ... + connection = mysql+pymysql://nova:NOVA_DBPASS@controller/nova + + Replace ``NOVA_DBPASS`` with the password you chose for the Compute + databases. + + * In the ``[DEFAULT]`` section, configure ``RabbitMQ`` message queue access: + + .. path /etc/nova/nova.conf + .. code-block:: ini + + [DEFAULT] + # ... + transport_url = rabbit://openstack:RABBIT_PASS@controller + + Replace ``RABBIT_PASS`` with the password you chose for the ``openstack`` + account in ``RabbitMQ``. + + * In the ``[api]`` and ``[keystone_authtoken]`` sections, configure Identity + service access: + + .. path /etc/nova/nova.conf + .. code-block:: ini + + [api] + # ... + auth_strategy = keystone + + [keystone_authtoken] + # ... + auth_uri = http://controller:5000 + auth_url = http://controller:35357 + memcached_servers = controller:11211 + auth_type = password + project_domain_name = default + user_domain_name = default + project_name = service + username = nova + password = NOVA_PASS + + Replace ``NOVA_PASS`` with the password you chose for the ``nova`` user in + the Identity service. + + .. note:: + + Comment out or remove any other options in the ``[keystone_authtoken]`` + section. + + * In the ``[DEFAULT]`` section, configure the ``my_ip`` option to use the + management interface IP address of the controller node: + + .. path /etc/nova/nova.conf + .. code-block:: ini + + [DEFAULT] + # ... + my_ip = 10.0.0.11 + + * In the ``[DEFAULT]`` section, enable support for the Networking service: + + .. path /etc/nova/nova.conf + .. 
code-block:: ini
+
+        [DEFAULT]
+        # ...
+        use_neutron = True
+        firewall_driver = nova.virt.firewall.NoopFirewallDriver
+
+     .. note::
+
+        By default, Compute uses an internal firewall driver. Since the
+        Networking service includes a firewall driver, you must disable the
+        Compute firewall driver by using the
+        ``nova.virt.firewall.NoopFirewallDriver`` firewall driver.
+
+   * In the ``[vnc]`` section, configure the VNC proxy to use the management
+     interface IP address of the controller node:
+
+     .. path /etc/nova/nova.conf
+     .. code-block:: ini
+
+        [vnc]
+        enabled = true
+        # ...
+        vncserver_listen = $my_ip
+        vncserver_proxyclient_address = $my_ip
+
+   * In the ``[glance]`` section, configure the location of the Image service
+     API:
+
+     .. path /etc/nova/nova.conf
+     .. code-block:: ini
+
+        [glance]
+        # ...
+        api_servers = http://controller:9292
+
+   * In the ``[oslo_concurrency]`` section, configure the lock path:
+
+     .. path /etc/nova/nova.conf
+     .. code-block:: ini
+
+        [oslo_concurrency]
+        # ...
+        lock_path = /var/lib/nova/tmp
+
+   .. todo::
+
+      https://bugs.launchpad.net/ubuntu/+source/nova/+bug/1506667
+
+   * Due to a packaging bug, remove the ``log_dir`` option from the
+     ``[DEFAULT]`` section.
+
+   * In the ``[placement]`` section, configure the Placement API:
+
+     .. path /etc/nova/nova.conf
+     .. code-block:: ini
+
+        [placement]
+        # ...
+        os_region_name = RegionOne
+        project_domain_name = Default
+        project_name = service
+        auth_type = password
+        user_domain_name = Default
+        auth_url = http://controller:35357/v3
+        username = placement
+        password = PLACEMENT_PASS
+
+     Replace ``PLACEMENT_PASS`` with the password you chose for the
+     ``placement`` user in the Identity service. Comment out any other options
+     in the ``[placement]`` section.
+
+#. Populate the nova-api database:
+
+   .. code-block:: console
+
+      # su -s /bin/sh -c "nova-manage api_db sync" nova
+
+   .. note::
+
+      Ignore any deprecation messages in this output.
+
+#. Register the ``cell0`` database:
+
+   .. 
code-block:: console + + # su -s /bin/sh -c "nova-manage cell_v2 map_cell0" nova + +#. Create the ``cell1`` cell: + + .. code-block:: console + + # su -s /bin/sh -c "nova-manage cell_v2 create_cell --name=cell1 --verbose" nova + 109e1d4b-536a-40d0-83c6-5f121b82b650 + +#. Populate the nova database: + + .. code-block:: console + + # su -s /bin/sh -c "nova-manage db sync" nova + +#. Verify nova cell0 and cell1 are registered correctly: + + .. code-block:: console + + # nova-manage cell_v2 list_cells + +-------+--------------------------------------+ + | Name | UUID | + +-------+--------------------------------------+ + | cell1 | 109e1d4b-536a-40d0-83c6-5f121b82b650 | + | cell0 | 00000000-0000-0000-0000-000000000000 | + +-------+--------------------------------------+ + +Finalize installation +--------------------- + +* Restart the Compute services: + + .. code-block:: console + + # service nova-api restart + # service nova-consoleauth restart + # service nova-scheduler restart + # service nova-conductor restart + # service nova-novncproxy restart diff --git a/doc/source/install/controller-install.rst b/doc/source/install/controller-install.rst new file mode 100644 index 000000000000..fbba43eeb6ff --- /dev/null +++ b/doc/source/install/controller-install.rst @@ -0,0 +1,10 @@ +Install and configure controller node +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +This section describes how to install and configure the Compute service, +code-named nova, on the controller node. + +.. 
toctree:: + :glob: + + controller-install-* diff --git a/doc/source/install/figures/hwreqs.graffle b/doc/source/install/figures/hwreqs.graffle new file mode 100644 index 000000000000..522bb03cba53 Binary files /dev/null and b/doc/source/install/figures/hwreqs.graffle differ diff --git a/doc/source/install/figures/hwreqs.png b/doc/source/install/figures/hwreqs.png new file mode 100644 index 000000000000..5c7e2d0e8bf6 Binary files /dev/null and b/doc/source/install/figures/hwreqs.png differ diff --git a/doc/source/install/figures/hwreqs.svg b/doc/source/install/figures/hwreqs.svg new file mode 100644 index 000000000000..0b58db752fe0 --- /dev/null +++ b/doc/source/install/figures/hwreqs.svg @@ -0,0 +1,3 @@ + + + Produced by OmniGraffle 6.5.2 2016-04-26 14:57:28 +0000Canvas 1Layer 1Controller NodeCompute Node 11-2CPUBlock Storage Node 1Object Storage Node 1Object Storage Node 2Hardware RequirementsCore componentOptional component8 GBRAM100 GBStorage2-4+CPU8+ GBRAM100+ GBStorage1-2CPU4 GBRAM2NIC2NIC1NIC1NIC4+ GBRAM1-2CPU1NIC100+ GBStorage100+ GBStorage/dev/sdb/dev/sdb/dev/sdc/dev/sdb/dev/sdc1-2CPU4+ GBRAM100+ GBStorage/dev/sdc diff --git a/doc/source/install/figures/network1-services.graffle b/doc/source/install/figures/network1-services.graffle new file mode 100644 index 000000000000..3e5bea9c616c Binary files /dev/null and b/doc/source/install/figures/network1-services.graffle differ diff --git a/doc/source/install/figures/network1-services.png b/doc/source/install/figures/network1-services.png new file mode 100644 index 000000000000..e83bf5bbf6d5 Binary files /dev/null and b/doc/source/install/figures/network1-services.png differ diff --git a/doc/source/install/figures/network1-services.svg b/doc/source/install/figures/network1-services.svg new file mode 100644 index 000000000000..153385b16a69 --- /dev/null +++ b/doc/source/install/figures/network1-services.svg @@ -0,0 +1,3 @@ + + + Produced by OmniGraffle 6.5.2 2016-04-26 14:56:09 +0000Canvas 1Layer 1 Controller 
NodeSQL DatabaseServiceBlock Storage Nodes Object Storage NodesNetworking Option 1: Provider NetworksService LayoutCore componentOptional componentMessage QueueIdentityImage ServiceComputeManagementNetworkingManagementBlock StorageManagementNetwork Time ServiceOrchestrationTelemetryManagementObject StorageProxy ServiceNetworkingDHCP Agent Compute NodesKVM HypervisorComputeNetworkingLinux Bridge AgentTelemetryAgentTelemetryAgent(s)NetworkingML2 Plug-inObject StorageAccount ServiceObject StorageContainer ServiceObject StorageObject ServiceBlock StorageVolume ServiceTelemetryAgentiSCSI TargetServiceNetworkingLinux Bridge AgentLinux NetworkUtilitiesLinux NetworkUtilitiesShared File SystemServiceShared File SystemManagementNoSQL DatabaseServiceNetworkingMetadata AgentDatabaseManagement diff --git a/doc/source/install/figures/network2-services.graffle b/doc/source/install/figures/network2-services.graffle new file mode 100644 index 000000000000..3642050ea6f5 Binary files /dev/null and b/doc/source/install/figures/network2-services.graffle differ diff --git a/doc/source/install/figures/network2-services.png b/doc/source/install/figures/network2-services.png new file mode 100644 index 000000000000..72b1fc915bf3 Binary files /dev/null and b/doc/source/install/figures/network2-services.png differ diff --git a/doc/source/install/figures/network2-services.svg b/doc/source/install/figures/network2-services.svg new file mode 100644 index 000000000000..4ff05a0c904f --- /dev/null +++ b/doc/source/install/figures/network2-services.svg @@ -0,0 +1,3 @@ + + + Produced by OmniGraffle 6.5.2 2016-04-26 14:55:33 +0000Canvas 1Layer 1 Controller NodeSQL DatabaseServiceBlock Storage Nodes Object Storage NodesNetworking Option 2: Self-Service NetworksService LayoutCore componentOptional componentMessage QueueIdentityImage ServiceComputeManagementNetworkingManagementBlock StorageManagementNetwork Time ServiceOrchestrationDatabaseManagementObject StorageProxy ServiceNetworkingL3 
AgentNetworkingDHCP Agent Compute NodesKVM HypervisorComputeNetworkingLinux Bridge AgentTelemetryAgentTelemetryAgent(s)NetworkingML2 Plug-inObject StorageAccount ServiceObject StorageContainer ServiceObject StorageObject ServiceBlock StorageVolume ServiceShared File SystemServiceiSCSI TargetServiceNetworkingMetadata AgentNetworkingLinux Bridge AgentLinux NetworkUtilitiesLinux NetworkUtilitiesShared File SystemManagementTelemetryAgentNoSQL DatabaseServiceTelemetryManagement diff --git a/doc/source/install/get-started-compute.rst b/doc/source/install/get-started-compute.rst new file mode 100644 index 000000000000..349e1891b163 --- /dev/null +++ b/doc/source/install/get-started-compute.rst @@ -0,0 +1,102 @@ +======================== +Compute service overview +======================== + +.. todo:: Update a lot of the links in here. + +Use OpenStack Compute to host and manage cloud computing systems. OpenStack +Compute is a major part of an Infrastructure-as-a-Service (IaaS) system. The +main modules are implemented in Python. + +OpenStack Compute interacts with OpenStack Identity for authentication; +OpenStack Image service for disk and server images; and OpenStack Dashboard for +the user and administrative interface. Image access is limited by projects, and +by users; quotas are limited per project (the number of instances, for +example). OpenStack Compute can scale horizontally on standard hardware, and +download images to launch instances. + +OpenStack Compute consists of the following areas and their components: + +``nova-api`` service + Accepts and responds to end user compute API calls. The service supports the + OpenStack Compute API, the Amazon EC2 API, and a special Admin API for + privileged users to perform administrative actions. It enforces some policies + and initiates most orchestration activities, such as running an instance. + +``nova-api-metadata`` service + Accepts metadata requests from instances. 
The ``nova-api-metadata`` service
+  is generally used when you run in multi-host mode with ``nova-network``
+  installations. For details, see `Metadata service
+  `__
+  in the OpenStack Administrator Guide.
+
+``nova-compute`` service
+  A worker daemon that creates and terminates virtual machine instances through
+  hypervisor APIs. For example:
+
+  - XenAPI for XenServer/XCP
+
+  - libvirt for KVM or QEMU
+
+  - VMwareAPI for VMware
+
+  Processing is fairly complex. Basically, the daemon accepts actions from the
+  queue and performs a series of system commands such as launching a KVM
+  instance and updating its state in the database.
+
+``nova-placement-api`` service
+  Tracks the inventory and usage of each provider. For details, see `Placement
+  API `__.
+
+``nova-scheduler`` service
+  Takes a virtual machine instance request from the queue and determines on
+  which compute server host it runs.
+
+``nova-conductor`` module
+  Mediates interactions between the ``nova-compute`` service and the database.
+  It eliminates direct accesses to the cloud database made by the
+  ``nova-compute`` service. The ``nova-conductor`` module scales horizontally.
+  However, do not deploy it on nodes where the ``nova-compute`` service runs.
+  For more information, see `Configuration Reference Guide
+  `__.
+
+``nova-consoleauth`` daemon
+  Authorizes tokens for users that console proxies provide. See
+  ``nova-novncproxy`` and ``nova-xvpvncproxy``. This service must be running
+  for console proxies to work. You can run proxies of either type against a
+  single nova-consoleauth service in a cluster configuration. For information,
+  see `About nova-consoleauth
+  `__.
+
+``nova-novncproxy`` daemon
+  Provides a proxy for accessing running instances through a VNC connection.
+  Supports browser-based novnc clients.
+
+``nova-spicehtml5proxy`` daemon
+  Provides a proxy for accessing running instances through a SPICE connection.
+  Supports browser-based HTML5 clients. 
+ +``nova-xvpvncproxy`` daemon + Provides a proxy for accessing running instances through a VNC connection. + Supports an OpenStack-specific Java client. + +The queue + A central hub for passing messages between daemons. Usually implemented with + `RabbitMQ `__, also can be implemented with + another AMQP message queue, such as `ZeroMQ `__. + +SQL database + Stores most build-time and run-time states for a cloud infrastructure, + including: + + - Available instance types + + - Instances in use + + - Available networks + + - Projects + + Theoretically, OpenStack Compute can support any database that SQLAlchemy + supports. Common databases are SQLite3 for test and development work, MySQL, + MariaDB, and PostgreSQL. diff --git a/doc/source/install/index.rst b/doc/source/install/index.rst new file mode 100644 index 000000000000..ee6c0a7c42a7 --- /dev/null +++ b/doc/source/install/index.rst @@ -0,0 +1,11 @@ +=============== +Compute service +=============== + +.. toctree:: + + overview.rst + get-started-compute.rst + controller-install.rst + compute-install.rst + verify.rst diff --git a/doc/source/install/overview.rst b/doc/source/install/overview.rst new file mode 100644 index 000000000000..c8ff14146408 --- /dev/null +++ b/doc/source/install/overview.rst @@ -0,0 +1,174 @@ +======== +Overview +======== + +The OpenStack project is an open source cloud computing platform that supports +all types of cloud environments. The project aims for simple implementation, +massive scalability, and a rich set of features. Cloud computing experts from +around the world contribute to the project. + +OpenStack provides an Infrastructure-as-a-Service (IaaS) solution through a +variety of complementary services. Each service offers an Application +Programming Interface (API) that facilitates this integration. 
+ +This guide covers step-by-step deployment of the major OpenStack services using +a functional example architecture suitable for new users of OpenStack with +sufficient Linux experience. This guide is not intended to be used for +production system installations, but to create a minimum proof-of-concept for +the purpose of learning about OpenStack. + +After becoming familiar with basic installation, configuration, operation, and +troubleshooting of these OpenStack services, you should consider the following +steps toward deployment using a production architecture: + +* Determine and implement the necessary core and optional services to meet + performance and redundancy requirements. + +* Increase security using methods such as firewalls, encryption, and service + policies. + +* Implement a deployment tool such as Ansible, Chef, Puppet, or Salt to + automate deployment and management of the production environment. + +.. _overview-example-architectures: + +Example architecture +~~~~~~~~~~~~~~~~~~~~ + +The example architecture requires at least two nodes (hosts) to launch a basic +virtual machine (VM) or instance. Optional services such as Block Storage and +Object Storage require additional nodes. + +.. important:: + + The example architecture used in this guide is a minimum configuration, and + is not intended for production system installations. It is designed to + provide a minimum proof-of-concept for the purpose of learning about + OpenStack. For information on creating architectures for specific use cases, + or how to determine which architecture is required, see the `Architecture + Design Guide `_. + +This example architecture differs from a minimal production architecture as +follows: + +* Networking agents reside on the controller node instead of one or more + dedicated network nodes. + +* Overlay (tunnel) traffic for self-service networks traverses the management + network instead of a dedicated network. 
+
+For more information on production architectures, see the `Architecture Design
+Guide `_, `OpenStack Operations Guide
+`_, and `OpenStack Networking Guide
+`_.
+
+.. _figure-hwreqs:
+
+.. figure:: figures/hwreqs.png
+   :alt: Hardware requirements
+
+   **Hardware requirements**
+
+Controller
+----------
+
+The controller node runs the Identity service, Image service, the management
+portions of Compute and Networking, various Networking agents, and the
+Dashboard. It also includes supporting services such as an SQL database,
+message queue, and Network Time Protocol (NTP).
+
+Optionally, the controller node runs portions of the Block Storage, Object
+Storage, Orchestration, and Telemetry services.
+
+The controller node requires a minimum of two network interfaces.
+
+Compute
+-------
+
+The compute node runs the hypervisor portion of Compute that operates
+instances. By default, Compute uses the kernel-based VM (KVM) hypervisor. The
+compute node also runs a Networking service agent that connects instances to
+virtual networks and provides firewalling services to instances via security
+groups.
+
+You can deploy more than one compute node. Each node requires a minimum of two
+network interfaces.
+
+Block Storage
+-------------
+
+The optional Block Storage node contains the disks that the Block Storage and
+Shared File System services provision for instances.
+
+For simplicity, service traffic between compute nodes and this node uses the
+management network. Production environments should implement a separate storage
+network to increase performance and security.
+
+You can deploy more than one block storage node. Each node requires a minimum
+of one network interface.
+
+Object Storage
+--------------
+
+The optional Object Storage node contains the disks that the Object Storage
+service uses for storing accounts, containers, and objects.
+
+For simplicity, service traffic between compute nodes and this node uses the
+management network. 
Production environments should implement a separate storage +network to increase performance and security. + +This service requires two nodes. Each node requires a minimum of one network +interface. You can deploy more than two object storage nodes. + +Networking +~~~~~~~~~~ + +Choose one of the following virtual networking options. + +.. _network1: + +Networking Option 1: Provider networks +-------------------------------------- + +The provider networks option deploys the OpenStack Networking service in the +simplest way possible with primarily layer-2 (bridging/switching) services and +VLAN segmentation of networks. Essentially, it bridges virtual networks to +physical networks and relies on physical network infrastructure for layer-3 +(routing) services. Additionally, a DHCP