
Nova System Architecture
Nova comprises multiple server processes, each performing different functions. The user-facing interface is a REST API, while internally Nova components communicate via an RPC message passing mechanism.
The API servers process REST requests, which typically involve database reads/writes, optionally sending RPC messages to other Nova services, and generating responses to the REST calls. RPC messaging is done via the oslo.messaging library, an abstraction on top of message queues. Most of the major Nova components can be run on multiple servers, and have a manager that is listening for RPC messages. The one major exception is nova-compute, where a single process runs on the hypervisor it is managing (except when using the VMware or Ironic drivers). The manager also, optionally, has periodic tasks. For more details on our RPC system, please see: /reference/rpc
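The topic-based RPC pattern described above can be illustrated with a small sketch. This is not the real oslo.messaging API; the `MessageBus`, `register`, and `call` names here are invented purely to show the idea of a manager listening on a topic and an API service calling it over the queue:

```python
# Toy illustration of topic-based RPC dispatch, loosely modelled on how
# Nova services listen on a message queue. This is NOT oslo.messaging;
# all names here (MessageBus, register, call) are invented for clarity.

class MessageBus:
    """A trivial in-process stand-in for a message queue."""

    def __init__(self):
        self._managers = {}  # topic -> manager object

    def register(self, topic, manager):
        # A real service (e.g. nova-scheduler) would start listening
        # on its topic; here we simply record the manager.
        self._managers[topic] = manager

    def call(self, topic, method, **kwargs):
        # Route the "RPC" to the manager listening on the topic and
        # synchronously return its result.
        manager = self._managers[topic]
        return getattr(manager, method)(**kwargs)


class SchedulerManager:
    """Stands in for a manager that handles scheduler RPC methods."""

    def select_destination(self, instance):
        # A real scheduler would filter and weigh candidate hosts.
        return "compute-1"


bus = MessageBus()
bus.register("scheduler", SchedulerManager())

# An "API service" sends an RPC over the bus instead of calling directly.
host = bus.call("scheduler", "select_destination", instance="vm-42")
print(host)  # -> compute-1
```

Because callers only know the topic, not the process behind it, any number of manager replicas can consume from the same queue, which is what makes the services horizontally scalable.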
Nova also uses a central database that is (logically) shared between all components. However, to aid upgrades, the DB is accessed through an object layer that ensures an upgraded control plane can still communicate with a nova-compute running the previous release. To make this possible, nova-compute proxies DB requests over RPC to a central manager called nova-conductor.
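The role of that object layer can be sketched with a toy example. This is not oslo.versionedobjects (the library Nova actually uses); the `Instance` class and its `to_primitive` method are simplified inventions to show how a newer control plane can serialize an object at an older version for a previous-release nova-compute:

```python
# Toy illustration of why an object layer aids upgrades: a newer control
# plane can serialize an object at an older version so that a
# nova-compute from the previous release still understands the payload.
# This is NOT oslo.versionedobjects; everything here is simplified.

class Instance:
    VERSION = "1.1"  # pretend version 1.1 added the 'tags' field

    def __init__(self, uuid, host, tags=None):
        self.uuid = uuid
        self.host = host
        self.tags = tags or []

    def to_primitive(self, target_version="1.1"):
        data = {"uuid": self.uuid, "host": self.host, "tags": self.tags}
        if target_version == "1.0":
            # Backport: drop fields the older release doesn't know about.
            data.pop("tags")
        return {"version": target_version, "data": data}


inst = Instance("abc-123", "compute-1", tags=["prod"])
old = inst.to_primitive(target_version="1.0")
print(old)  # the 1.0 payload carries no 'tags' field
```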
To horizontally expand Nova deployments, we have a deployment
sharding concept called cells. For more information please see: /admin/cells
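As a rough sketch of the sharding idea: a top-level mapping records which cell owns each instance, so a request can be routed to that cell's database and message queue. The table and field names below are invented for illustration; the real mappings live in Nova's API database:

```python
# Toy illustration of cells-style sharding: an instance-to-cell mapping
# lets the API layer find the right cell's DB and message queue.
# Names and URL formats here are invented for clarity.

CELL_MAPPINGS = {
    "cell1": {"db": "mysql://cell1", "mq": "rabbit://cell1"},
    "cell2": {"db": "mysql://cell2", "mq": "rabbit://cell2"},
}

INSTANCE_MAPPINGS = {
    "vm-1": "cell1",
    "vm-2": "cell2",
}

def connections_for(instance_uuid):
    """Look up the cell that owns an instance and return its endpoints."""
    cell = INSTANCE_MAPPINGS[instance_uuid]
    return CELL_MAPPINGS[cell]

print(connections_for("vm-2")["db"])  # -> mysql://cell2
```

Each cell then carries only its own share of instances, which is how the database and message queue load is spread horizontally.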
Components
Below you will find a helpful explanation of the key components of a typical Nova deployment.
- DB: sql database for data storage.
- API: component that receives HTTP requests, converts commands and communicates with other components via the oslo.messaging queue or HTTP.
- Scheduler: decides which host gets each instance.
- Compute: manages communication with hypervisor and virtual machines.
- Conductor: handles requests that need coordination (build/resize), acts as a database proxy, or handles object conversions.
- Placement: tracks resource provider inventories and usages.
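To make the scheduler's role concrete, here is a toy host-selection sketch. Real Nova scheduling uses configurable filters and weighers; comparing free RAM is just one illustrative criterion, and all names below are invented:

```python
# Toy illustration of the scheduler's job: given candidate hosts and
# their free resources, pick one for the new instance. Real Nova uses
# configurable filters and weighers; this sketch only compares free RAM.

def select_host(hosts, ram_required):
    """Return the name of the host with the most free RAM that fits."""
    candidates = [h for h in hosts if h["free_ram_mb"] >= ram_required]
    if not candidates:
        raise RuntimeError("No valid host found")
    return max(candidates, key=lambda h: h["free_ram_mb"])["name"]

hosts = [
    {"name": "compute-1", "free_ram_mb": 2048},
    {"name": "compute-2", "free_ram_mb": 8192},
    {"name": "compute-3", "free_ram_mb": 512},
]

print(select_host(hosts, ram_required=1024))  # -> compute-2
```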
While all services are designed to be horizontally scalable, you should have significantly more computes than anything else.