Install and configure a compute node

This section describes how to install and configure the Compute service on a compute node. The service supports several hypervisors for deploying instances, also known as virtual machines (VMs). For simplicity, this configuration uses the Quick EMUlator (QEMU) hypervisor with the kernel-based VM (KVM) extension on compute nodes that support hardware acceleration for virtual machines. On legacy hardware, this configuration uses the generic QEMU hypervisor. You can follow these instructions with minor modifications to horizontally scale your environment with additional compute nodes.

Note

This section assumes that you are following the instructions in this guide step-by-step to configure the first compute node. If you want to configure additional compute nodes, prepare them in a similar fashion to the first compute node described in the example architectures section. Each additional compute node requires a unique IP address.

Install and configure components

  1. Install the packages:

    # apt install nova-compute

    Respond to the debconf prompts.
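
    If you prefer an unattended installation, apt can be run non-interactively so that debconf falls back to its defaults and every setting is supplied by hand in the next step. This uses a standard Debian mechanism rather than anything specific to this guide:

    # DEBIAN_FRONTEND=noninteractive apt install -y nova-compute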

  2. Edit the /etc/nova/nova.conf file and complete the following actions (a scripted, non-interactive version of these edits is sketched after this list):
    • In the [DEFAULT] section, configure RabbitMQ message queue access:

      [DEFAULT]
      # ...
      transport_url = rabbit://openstack:RABBIT_PASS@controller

      Replace RABBIT_PASS with the password you chose for the openstack account in RabbitMQ.

    • In the [api] and [keystone_authtoken] sections, configure Identity service access:

      [api]
      # ...
      auth_strategy = keystone
      
      [keystone_authtoken]
      # ...
      auth_uri = http://controller:5000
      auth_url = http://controller:35357
      memcached_servers = controller:11211
      auth_type = password
      project_domain_name = default
      user_domain_name = default
      project_name = service
      username = nova
      password = NOVA_PASS

      Replace NOVA_PASS with the password you chose for the nova user in the Identity service.

      Note

      Comment out or remove any other options in the [keystone_authtoken] section.

    • In the [DEFAULT] section, check that the my_ip option is correctly set (this value is handled by the config and postinst scripts of the nova-common package using debconf):

      [DEFAULT]
      # ...
      my_ip = MANAGEMENT_INTERFACE_IP_ADDRESS

      Replace MANAGEMENT_INTERFACE_IP_ADDRESS with the IP address of the management network interface on your compute node, typically 10.0.0.31 for the first node in the example architecture.

    • In the [vnc] section, enable and configure remote console access:

      [vnc]
      # ...
      enabled = True
      vncserver_listen = 0.0.0.0
      vncserver_proxyclient_address = $my_ip
      novncproxy_base_url = http://controller:6080/vnc_auto.html

      The server component listens on all IP addresses and the proxy component only listens on the management interface IP address of the compute node. The base URL indicates the location where you can use a web browser to access remote consoles of instances on this compute node.

      Note

      If the web browser to access remote consoles resides on a host that cannot resolve the controller hostname, you must replace controller with the management interface IP address of the controller node.

    • In the [glance] section, configure the location of the Image service API:

      [glance]
      # ...
      api_servers = http://controller:9292
    • In the [placement] section, configure the Placement API:

      [placement]
      # ...
      os_region_name = RegionOne
      project_domain_name = Default
      project_name = service
      auth_type = password
      user_domain_name = Default
      auth_url = http://controller:35357/v3
      username = placement
      password = PLACEMENT_PASS

      Replace PLACEMENT_PASS with the password you chose for the placement user in the Identity service. Comment out any other options in the [placement] section.

  3. Ensure the kernel module nbd is loaded:

    # modprobe nbd

  4. Ensure the module loads on every boot by adding nbd to the /etc/modules-load.d/nbd.conf file.
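
The preceding edits can also be applied in one pass. The sketch below is an optional convenience, not part of the official procedure: it assumes the crudini utility (packaged on Debian as crudini, and not otherwise used in this guide), must run as root, and expects RABBIT_PASS, NOVA_PASS, PLACEMENT_PASS, and MANAGEMENT_INTERFACE_IP_ADDRESS to be replaced with your actual values:

    #!/bin/sh
    # Apply the nova.conf settings from the steps above in one pass.
    CONF=/etc/nova/nova.conf
    crudini --set $CONF DEFAULT transport_url rabbit://openstack:RABBIT_PASS@controller
    crudini --set $CONF DEFAULT my_ip MANAGEMENT_INTERFACE_IP_ADDRESS
    crudini --set $CONF api auth_strategy keystone
    crudini --set $CONF keystone_authtoken auth_uri http://controller:5000
    crudini --set $CONF keystone_authtoken auth_url http://controller:35357
    crudini --set $CONF keystone_authtoken memcached_servers controller:11211
    crudini --set $CONF keystone_authtoken auth_type password
    crudini --set $CONF keystone_authtoken project_domain_name default
    crudini --set $CONF keystone_authtoken user_domain_name default
    crudini --set $CONF keystone_authtoken project_name service
    crudini --set $CONF keystone_authtoken username nova
    crudini --set $CONF keystone_authtoken password NOVA_PASS
    crudini --set $CONF vnc enabled True
    crudini --set $CONF vnc vncserver_listen 0.0.0.0
    # Quoted so the shell passes the literal string; nova expands $my_ip itself.
    crudini --set $CONF vnc vncserver_proxyclient_address '$my_ip'
    crudini --set $CONF vnc novncproxy_base_url http://controller:6080/vnc_auto.html
    crudini --set $CONF glance api_servers http://controller:9292
    crudini --set $CONF placement os_region_name RegionOne
    crudini --set $CONF placement project_domain_name Default
    crudini --set $CONF placement project_name service
    crudini --set $CONF placement auth_type password
    crudini --set $CONF placement user_domain_name Default
    crudini --set $CONF placement auth_url http://controller:35357/v3
    crudini --set $CONF placement username placement
    crudini --set $CONF placement password PLACEMENT_PASS
    # Load the nbd module now and on every boot (steps 3 and 4 above).
    modprobe nbd
    echo nbd > /etc/modules-load.d/nbd.conf

crudini leaves unrelated options in place, so review the resulting file against the listings above and comment out any conflicting entries in the [keystone_authtoken] section by hand.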
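
Before moving on, it can also save time to confirm that the compute node can actually reach the services referenced above. A quick probe with netcat (any equivalent tool works):

    # nc -zv controller 5672
    # nc -zv controller 5000
    # nc -zv controller 9292
    # nc -zv controller 11211

These check the RabbitMQ (5672), Identity (5000), Image service (9292), and memcached (11211) ports used in the configuration.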

Finalize installation

  1. Determine whether your compute node supports hardware acceleration for virtual machines:

    $ egrep -c '(vmx|svm)' /proc/cpuinfo

    If this command returns a value of one or greater, your compute node supports hardware acceleration, which typically requires no additional configuration.

    If this command returns a value of zero, your compute node does not support hardware acceleration and you must configure libvirt to use QEMU instead of KVM.

    • Replace the nova-compute-kvm package with nova-compute-qemu, which automatically changes the /etc/nova/nova-compute.conf file and installs the necessary dependencies:

      # apt install nova-compute-qemu

  2. Restart the Compute service:

    # service nova-compute restart
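
To cross-check the hardware acceleration result from step 1 and confirm that the service came back up, the following commands may help. The /dev/kvm device exists only when KVM acceleration is usable by the kernel; kvm-ok is an optional extra from the Debian cpu-checker package, which this guide does not otherwise install:

    # ls -l /dev/kvm
    # kvm-ok
    # service nova-compute status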

Note

If the nova-compute service fails to start, check /var/log/nova/nova-compute.log. The error message "AMQP server on controller:5672 is unreachable" likely indicates that the firewall on the controller node is preventing access to port 5672. Configure the firewall to open port 5672 on the controller node and restart the nova-compute service on the compute node.
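
For example, on a controller node protected by plain iptables, a rule such as the following opens the RabbitMQ port; it is inserted at the head of the INPUT chain so it takes effect before any reject rules, and making it persistent across reboots is distribution-specific:

    # iptables -I INPUT -p tcp --dport 5672 -j ACCEPT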

Add the compute node to the cell database

Important

Run the following commands on the controller node.

  1. Source the admin credentials to enable admin-only CLI commands, then confirm there are compute hosts in the database:

    $ . admin-openrc
    
    $ openstack compute service list --service nova-compute
    +----+-------+--------------+------+-------+---------+----------------------------+
    | ID | Host  | Binary       | Zone | State | Status  | Updated At                 |
    +----+-------+--------------+------+-------+---------+----------------------------+
    | 1  | node1 | nova-compute | nova | up    | enabled | 2017-04-14T15:30:44.000000 |
    +----+-------+--------------+------+-------+---------+----------------------------+
  2. Discover compute hosts:

    # su -s /bin/sh -c "nova-manage cell_v2 discover_hosts --verbose" nova
    
    Found 2 cell mappings.
    Skipping cell0 since it does not contain hosts.
    Getting compute nodes from cell 'cell1': ad5a5985-a719-4567-98d8-8d148aaae4bc
    Found 1 computes in cell: ad5a5985-a719-4567-98d8-8d148aaae4bc
    Checking host mapping for compute host 'compute': fe58ddc1-1d65-4f87-9456-bc040dc106b3
    Creating host mapping for compute host 'compute': fe58ddc1-1d65-4f87-9456-bc040dc106b3

    Note

    When you add new compute nodes, you must run nova-manage cell_v2 discover_hosts on the controller node to register those new compute nodes. Alternatively, you can set an appropriate interval in /etc/nova/nova.conf:

    [scheduler]
    discover_hosts_in_cells_interval = 300
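
    If you want to confirm that the cell mappings used by discover_hosts are in place, you can list the registered cells with the same nova-manage cell_v2 tooling; the output should include cell0 and cell1:

    # su -s /bin/sh -c "nova-manage cell_v2 list_cells" nova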