split install guide into separate files by OS

Provide a script for interpreting the "only" directives and splitting the
existing content into standalone files for each OS, so that project teams
can copy the parts they need into their own project documentation trees
without requiring separate platform builds. The files have been hand-edited
to pass the niceness check and to allow the install guide to build. The
script for building the guide has been changed to no longer build separate
copies per OS.

Change-Id: Ib88f373190e2a4fbf14186418852d971b33dca85
Signed-off-by: Doug Hellmann <doug@doughellmann.com>
This commit is contained in: parent 82292f74fb, commit c7bfdbb44f
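The splitting approach the commit message describes can be sketched roughly as follows. This is a hypothetical reconstruction, not the actual script (which is not part of this excerpt): it keeps lines that apply to one OS tag, drops ``.. only::`` / ``.. endonly`` wrappers, and assumes no nested blocks and plain ``or``-separated tag expressions.

```python
import re

def split_for_os(lines, tag):
    """Keep only the lines of an RST source that apply to `tag`,
    stripping ".. only::" / ".. endonly" wrappers (simplified sketch)."""
    out = []
    keeping = True          # do current lines apply to `tag`?
    in_only = False         # are we inside a ".. only::" block?
    for line in lines:
        m = re.match(r'\s*\.\.\s+only::\s+(.*)', line)
        if m:
            # tag expressions are assumed to be plain "a or b" lists
            tags = re.split(r'\s+or\s+', m.group(1).strip())
            in_only, keeping = True, tag in tags
            continue
        if re.match(r'\s*\.\.\s+endonly\s*$', line):
            in_only, keeping = False, True
            continue
        if keeping:
            # dedent the body of a kept only-block by one level (3 spaces)
            out.append(line[3:] if in_only and line.startswith('   ') else line)
    return out
```

Run once per OS tag (``obs``, ``rdo``, ``ubuntu``, ``debian``) to produce the standalone per-OS files.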
71	doc/install-guide/source/cinder-backup-install-debian.rst	Normal file
@@ -0,0 +1,71 @@
:orphan:

Install and configure the backup service
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Optionally, install and configure the backup service. For simplicity,
this configuration uses the Block Storage node and the Object Storage
(swift) driver, thus depending on the
`Object Storage service <https://docs.openstack.org/project-install-guide/object-storage/ocata/>`_.

.. note::

   You must :ref:`install and configure a storage node <cinder-storage>` prior
   to installing and configuring the backup service.

Install and configure components
--------------------------------

.. note::

   Perform these steps on the Block Storage node.

#. Install the packages:

   .. code-block:: console

      # apt install cinder-backup

   .. end

2. Edit the ``/etc/cinder/cinder.conf`` file
   and complete the following actions:

   * In the ``[DEFAULT]`` section, configure backup options:

     .. path /etc/cinder/cinder.conf
     .. code-block:: ini

        [DEFAULT]
        # ...
        backup_driver = cinder.backup.drivers.swift
        backup_swift_url = SWIFT_URL

     .. end

     Replace ``SWIFT_URL`` with the URL of the Object Storage service. The
     URL can be found by showing the object-store API endpoints:

     .. code-block:: console

        $ openstack catalog show object-store

     .. end

Finalize installation
---------------------

Restart the Block Storage backup service:

.. code-block:: console

   # service cinder-backup restart

.. end
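The ``openstack catalog show object-store`` step above prints a table; a small hypothetical helper can pull the public endpoint out of text shaped like that table's ``endpoints`` cell. The sample text below is an assumption about the output format, not output captured from a live cloud:

```python
import re

def public_object_store_url(catalog_text):
    """Return the first 'public:' endpoint URL found in catalog output,
    or None if there is none (format assumed, see note above)."""
    m = re.search(r'public:\s*(\S+)', catalog_text)
    return m.group(1) if m else None

# Assumed sample of the "endpoints" cell from `openstack catalog show object-store`
sample = """\
| endpoints | RegionOne
|           |   public: http://controller:8080/v1/AUTH_%(tenant_id)s
|           |   internal: http://controller:8080/v1/AUTH_%(tenant_id)s
"""
```

The extracted value is what would go into ``backup_swift_url`` in place of ``SWIFT_URL``.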
73	doc/install-guide/source/cinder-backup-install-obs.rst	Normal file
@@ -0,0 +1,73 @@
:orphan:

Install and configure the backup service
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Optionally, install and configure the backup service. For simplicity,
this configuration uses the Block Storage node and the Object Storage
(swift) driver, thus depending on the
`Object Storage service <https://docs.openstack.org/project-install-guide/object-storage/ocata/>`_.

.. note::

   You must :ref:`install and configure a storage node <cinder-storage>` prior
   to installing and configuring the backup service.

Install and configure components
--------------------------------

.. note::

   Perform these steps on the Block Storage node.

#. Install the packages:

   .. code-block:: console

      # zypper install openstack-cinder-backup

   .. end

2. Edit the ``/etc/cinder/cinder.conf`` file
   and complete the following actions:

   * In the ``[DEFAULT]`` section, configure backup options:

     .. path /etc/cinder/cinder.conf
     .. code-block:: ini

        [DEFAULT]
        # ...
        backup_driver = cinder.backup.drivers.swift
        backup_swift_url = SWIFT_URL

     .. end

     Replace ``SWIFT_URL`` with the URL of the Object Storage service. The
     URL can be found by showing the object-store API endpoints:

     .. code-block:: console

        $ openstack catalog show object-store

     .. end

Finalize installation
---------------------

Start the Block Storage backup service and configure it to
start when the system boots:

.. code-block:: console

   # systemctl enable openstack-cinder-backup.service
   # systemctl start openstack-cinder-backup.service

.. end
73	doc/install-guide/source/cinder-backup-install-rdo.rst	Normal file
@@ -0,0 +1,73 @@
:orphan:

Install and configure the backup service
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Optionally, install and configure the backup service. For simplicity,
this configuration uses the Block Storage node and the Object Storage
(swift) driver, thus depending on the
`Object Storage service <https://docs.openstack.org/project-install-guide/object-storage/ocata/>`_.

.. note::

   You must :ref:`install and configure a storage node <cinder-storage>` prior
   to installing and configuring the backup service.

Install and configure components
--------------------------------

.. note::

   Perform these steps on the Block Storage node.

#. Install the packages:

   .. code-block:: console

      # yum install openstack-cinder

   .. end

2. Edit the ``/etc/cinder/cinder.conf`` file
   and complete the following actions:

   * In the ``[DEFAULT]`` section, configure backup options:

     .. path /etc/cinder/cinder.conf
     .. code-block:: ini

        [DEFAULT]
        # ...
        backup_driver = cinder.backup.drivers.swift
        backup_swift_url = SWIFT_URL

     .. end

     Replace ``SWIFT_URL`` with the URL of the Object Storage service. The
     URL can be found by showing the object-store API endpoints:

     .. code-block:: console

        $ openstack catalog show object-store

     .. end

Finalize installation
---------------------

Start the Block Storage backup service and configure it to
start when the system boots:

.. code-block:: console

   # systemctl enable openstack-cinder-backup.service
   # systemctl start openstack-cinder-backup.service

.. end
71	doc/install-guide/source/cinder-backup-install-ubuntu.rst	Normal file
@@ -0,0 +1,71 @@
:orphan:

Install and configure the backup service
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Optionally, install and configure the backup service. For simplicity,
this configuration uses the Block Storage node and the Object Storage
(swift) driver, thus depending on the
`Object Storage service <https://docs.openstack.org/project-install-guide/object-storage/ocata/>`_.

.. note::

   You must :ref:`install and configure a storage node <cinder-storage>` prior
   to installing and configuring the backup service.

Install and configure components
--------------------------------

.. note::

   Perform these steps on the Block Storage node.

#. Install the packages:

   .. code-block:: console

      # apt install cinder-backup

   .. end

2. Edit the ``/etc/cinder/cinder.conf`` file
   and complete the following actions:

   * In the ``[DEFAULT]`` section, configure backup options:

     .. path /etc/cinder/cinder.conf
     .. code-block:: ini

        [DEFAULT]
        # ...
        backup_driver = cinder.backup.drivers.swift
        backup_swift_url = SWIFT_URL

     .. end

     Replace ``SWIFT_URL`` with the URL of the Object Storage service. The
     URL can be found by showing the object-store API endpoints:

     .. code-block:: console

        $ openstack catalog show object-store

     .. end

Finalize installation
---------------------

Restart the Block Storage backup service:

.. code-block:: console

   # service cinder-backup restart

.. end
@@ -5,108 +5,7 @@

Install and configure the backup service
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Optionally, install and configure the backup service. For simplicity,
this configuration uses the Block Storage node and the Object Storage
(swift) driver, thus depending on the
`Object Storage service <https://docs.openstack.org/project-install-guide/object-storage/ocata/>`_.

.. toctree::
   :glob:

.. note::

   You must :ref:`install and configure a storage node <cinder-storage>` prior
   to installing and configuring the backup service.

Install and configure components
--------------------------------

.. note::

   Perform these steps on the Block Storage node.

.. only:: obs

   #. Install the packages:

      .. code-block:: console

         # zypper install openstack-cinder-backup

      .. end

.. endonly

.. only:: rdo

   #. Install the packages:

      .. code-block:: console

         # yum install openstack-cinder

      .. end

.. endonly

.. only:: ubuntu or debian

   #. Install the packages:

      .. code-block:: console

         # apt install cinder-backup

      .. end

.. endonly

2. Edit the ``/etc/cinder/cinder.conf`` file
   and complete the following actions:

   * In the ``[DEFAULT]`` section, configure backup options:

     .. path /etc/cinder/cinder.conf
     .. code-block:: ini

        [DEFAULT]
        # ...
        backup_driver = cinder.backup.drivers.swift
        backup_swift_url = SWIFT_URL

     .. end

     Replace ``SWIFT_URL`` with the URL of the Object Storage service. The
     URL can be found by showing the object-store API endpoints:

     .. code-block:: console

        $ openstack catalog show object-store

     .. end

Finalize installation
---------------------

.. only:: obs or rdo

   Start the Block Storage backup service and configure it to
   start when the system boots:

   .. code-block:: console

      # systemctl enable openstack-cinder-backup.service
      # systemctl start openstack-cinder-backup.service

   .. end

.. endonly

.. only:: ubuntu or debian

   Restart the Block Storage backup service:

   .. code-block:: console

      # service cinder-backup restart

   .. end

.. endonly

   cinder-backup-install-*
394	doc/install-guide/source/cinder-controller-install-debian.rst	Normal file
@@ -0,0 +1,394 @@
Install and configure controller node
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

This section describes how to install and configure the Block
Storage service, code-named cinder, on the controller node. This
service requires at least one additional storage node that provides
volumes to instances.

Prerequisites
-------------

Before you install and configure the Block Storage service, you
must create a database, service credentials, and API endpoints.

#. To create the database, complete these steps:

   * Use the database access client to connect to the database
     server as the ``root`` user:

     .. code-block:: console

        $ mysql -u root -p

     .. end

   * Create the ``cinder`` database:

     .. code-block:: console

        MariaDB [(none)]> CREATE DATABASE cinder;

     .. end

   * Grant proper access to the ``cinder`` database:

     .. code-block:: console

        MariaDB [(none)]> GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'localhost' \
          IDENTIFIED BY 'CINDER_DBPASS';
        MariaDB [(none)]> GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'%' \
          IDENTIFIED BY 'CINDER_DBPASS';

     .. end

     Replace ``CINDER_DBPASS`` with a suitable password.

   * Exit the database access client.

#. Source the ``admin`` credentials to gain access to admin-only
   CLI commands:

   .. code-block:: console

      $ . admin-openrc

   .. end

#. To create the service credentials, complete these steps:

   * Create a ``cinder`` user:

     .. code-block:: console

        $ openstack user create --domain default --password-prompt cinder

        User Password:
        Repeat User Password:
        +---------------------+----------------------------------+
        | Field               | Value                            |
        +---------------------+----------------------------------+
        | domain_id           | default                          |
        | enabled             | True                             |
        | id                  | 9d7e33de3e1a498390353819bc7d245d |
        | name                | cinder                           |
        | options             | {}                               |
        | password_expires_at | None                             |
        +---------------------+----------------------------------+

     .. end

   * Add the ``admin`` role to the ``cinder`` user:

     .. code-block:: console

        $ openstack role add --project service --user cinder admin

     .. end

     .. note::

        This command provides no output.

   * Create the ``cinderv2`` and ``cinderv3`` service entities:

     .. code-block:: console

        $ openstack service create --name cinderv2 \
          --description "OpenStack Block Storage" volumev2

        +-------------+----------------------------------+
        | Field       | Value                            |
        +-------------+----------------------------------+
        | description | OpenStack Block Storage          |
        | enabled     | True                             |
        | id          | eb9fd245bdbc414695952e93f29fe3ac |
        | name        | cinderv2                         |
        | type        | volumev2                         |
        +-------------+----------------------------------+

     .. end

     .. code-block:: console

        $ openstack service create --name cinderv3 \
          --description "OpenStack Block Storage" volumev3

        +-------------+----------------------------------+
        | Field       | Value                            |
        +-------------+----------------------------------+
        | description | OpenStack Block Storage          |
        | enabled     | True                             |
        | id          | ab3bbbef780845a1a283490d281e7fda |
        | name        | cinderv3                         |
        | type        | volumev3                         |
        +-------------+----------------------------------+

     .. end

     .. note::

        The Block Storage services require two service entities.

#. Create the Block Storage service API endpoints:

   .. code-block:: console

      $ openstack endpoint create --region RegionOne \
        volumev2 public http://controller:8776/v2/%\(project_id\)s

      +--------------+------------------------------------------+
      | Field        | Value                                    |
      +--------------+------------------------------------------+
      | enabled      | True                                     |
      | id           | 513e73819e14460fb904163f41ef3759         |
      | interface    | public                                   |
      | region       | RegionOne                                |
      | region_id    | RegionOne                                |
      | service_id   | eb9fd245bdbc414695952e93f29fe3ac         |
      | service_name | cinderv2                                 |
      | service_type | volumev2                                 |
      | url          | http://controller:8776/v2/%(project_id)s |
      +--------------+------------------------------------------+

      $ openstack endpoint create --region RegionOne \
        volumev2 internal http://controller:8776/v2/%\(project_id\)s

      +--------------+------------------------------------------+
      | Field        | Value                                    |
      +--------------+------------------------------------------+
      | enabled      | True                                     |
      | id           | 6436a8a23d014cfdb69c586eff146a32         |
      | interface    | internal                                 |
      | region       | RegionOne                                |
      | region_id    | RegionOne                                |
      | service_id   | eb9fd245bdbc414695952e93f29fe3ac         |
      | service_name | cinderv2                                 |
      | service_type | volumev2                                 |
      | url          | http://controller:8776/v2/%(project_id)s |
      +--------------+------------------------------------------+

      $ openstack endpoint create --region RegionOne \
        volumev2 admin http://controller:8776/v2/%\(project_id\)s

      +--------------+------------------------------------------+
      | Field        | Value                                    |
      +--------------+------------------------------------------+
      | enabled      | True                                     |
      | id           | e652cf84dd334f359ae9b045a2c91d96         |
      | interface    | admin                                    |
      | region       | RegionOne                                |
      | region_id    | RegionOne                                |
      | service_id   | eb9fd245bdbc414695952e93f29fe3ac         |
      | service_name | cinderv2                                 |
      | service_type | volumev2                                 |
      | url          | http://controller:8776/v2/%(project_id)s |
      +--------------+------------------------------------------+

   .. end

   .. code-block:: console

      $ openstack endpoint create --region RegionOne \
        volumev3 public http://controller:8776/v3/%\(project_id\)s

      +--------------+------------------------------------------+
      | Field        | Value                                    |
      +--------------+------------------------------------------+
      | enabled      | True                                     |
      | id           | 03fa2c90153546c295bf30ca86b1344b         |
      | interface    | public                                   |
      | region       | RegionOne                                |
      | region_id    | RegionOne                                |
      | service_id   | ab3bbbef780845a1a283490d281e7fda         |
      | service_name | cinderv3                                 |
      | service_type | volumev3                                 |
      | url          | http://controller:8776/v3/%(project_id)s |
      +--------------+------------------------------------------+

      $ openstack endpoint create --region RegionOne \
        volumev3 internal http://controller:8776/v3/%\(project_id\)s

      +--------------+------------------------------------------+
      | Field        | Value                                    |
      +--------------+------------------------------------------+
      | enabled      | True                                     |
      | id           | 94f684395d1b41068c70e4ecb11364b2         |
      | interface    | internal                                 |
      | region       | RegionOne                                |
      | region_id    | RegionOne                                |
      | service_id   | ab3bbbef780845a1a283490d281e7fda         |
      | service_name | cinderv3                                 |
      | service_type | volumev3                                 |
      | url          | http://controller:8776/v3/%(project_id)s |
      +--------------+------------------------------------------+

      $ openstack endpoint create --region RegionOne \
        volumev3 admin http://controller:8776/v3/%\(project_id\)s

      +--------------+------------------------------------------+
      | Field        | Value                                    |
      +--------------+------------------------------------------+
      | enabled      | True                                     |
      | id           | 4511c28a0f9840c78bacb25f10f62c98         |
      | interface    | admin                                    |
      | region       | RegionOne                                |
      | region_id    | RegionOne                                |
      | service_id   | ab3bbbef780845a1a283490d281e7fda         |
      | service_name | cinderv3                                 |
      | service_type | volumev3                                 |
      | url          | http://controller:8776/v3/%(project_id)s |
      +--------------+------------------------------------------+

   .. end

   .. note::

      The Block Storage services require endpoints for each service
      entity.

Install and configure components
--------------------------------

#. Install the packages:

   .. code-block:: console

      # apt install cinder-api cinder-scheduler

   .. end

2. Edit the ``/etc/cinder/cinder.conf`` file and complete the
   following actions:

   * In the ``[database]`` section, configure database access:

     .. path /etc/cinder/cinder.conf
     .. code-block:: ini

        [database]
        # ...
        connection = mysql+pymysql://cinder:CINDER_DBPASS@controller/cinder

     .. end

     Replace ``CINDER_DBPASS`` with the password you chose for the
     Block Storage database.

   * In the ``[DEFAULT]`` section, configure ``RabbitMQ``
     message queue access:

     .. path /etc/cinder/cinder.conf
     .. code-block:: ini

        [DEFAULT]
        # ...
        transport_url = rabbit://openstack:RABBIT_PASS@controller

     .. end

     Replace ``RABBIT_PASS`` with the password you chose for the
     ``openstack`` account in ``RabbitMQ``.

   * In the ``[DEFAULT]`` and ``[keystone_authtoken]`` sections,
     configure Identity service access:

     .. path /etc/cinder/cinder.conf
     .. code-block:: ini

        [DEFAULT]
        # ...
        auth_strategy = keystone

        [keystone_authtoken]
        # ...
        auth_uri = http://controller:5000
        auth_url = http://controller:35357
        memcached_servers = controller:11211
        auth_type = password
        project_domain_name = default
        user_domain_name = default
        project_name = service
        username = cinder
        password = CINDER_PASS

     .. end

     Replace ``CINDER_PASS`` with the password you chose for
     the ``cinder`` user in the Identity service.

     .. note::

        Comment out or remove any other options in the
        ``[keystone_authtoken]`` section.

   * In the ``[DEFAULT]`` section, configure the ``my_ip`` option to
     use the management interface IP address of the controller node:

     .. path /etc/cinder/cinder.conf
     .. code-block:: ini

        [DEFAULT]
        # ...
        my_ip = 10.0.0.11

     .. end

3. Populate the Block Storage database:

   .. code-block:: console

      # su -s /bin/sh -c "cinder-manage db sync" cinder

   .. end

   .. note::

      Ignore any deprecation messages in this output.

Configure Compute to use Block Storage
--------------------------------------

* Edit the ``/etc/nova/nova.conf`` file and add the following
  to it:

  .. path /etc/nova/nova.conf
  .. code-block:: ini

     [cinder]
     os_region_name = RegionOne

  .. end

Finalize installation
---------------------

#. Restart the Compute API service:

   .. code-block:: console

      # service nova-api restart

   .. end

#. Restart the Block Storage services:

   .. code-block:: console

      # service cinder-scheduler restart
      # service apache2 restart

   .. end
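The six ``openstack endpoint create`` calls in the controller sections follow one pattern: three interfaces for each of the two volume services. A small sketch that generates the command strings (service names, region, and the controller URL are taken from the guide; this only builds strings and does not call the CLI):

```python
def endpoint_commands(host="controller"):
    """Build the six `openstack endpoint create` commands used in the
    guide: {volumev2, volumev3} x {public, internal, admin}."""
    cmds = []
    for version in ("v2", "v3"):
        service = "volume" + version
        for interface in ("public", "internal", "admin"):
            cmds.append(
                "openstack endpoint create --region RegionOne "
                f"{service} {interface} "
                # %\(project_id\)s is escaped for the shell, as in the guide
                f"http://{host}:8776/{version}/%\\(project_id\\)s"
            )
    return cmds
```

Printing the list reproduces the commands shown above, which can be pasted into a shell one at a time.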
394
doc/install-guide/source/cinder-controller-install-obs.rst
Normal file
394
doc/install-guide/source/cinder-controller-install-obs.rst
Normal file
@ -0,0 +1,394 @@
|
||||
Install and configure controller node
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
This section describes how to install and configure the Block
|
||||
Storage service, code-named cinder, on the controller node. This
|
||||
service requires at least one additional storage node that provides
|
||||
volumes to instances.
|
||||
|
||||
Prerequisites
|
||||
-------------
|
||||
|
||||
Before you install and configure the Block Storage service, you
|
||||
must create a database, service credentials, and API endpoints.
|
||||
|
||||
#. To create the database, complete these steps:
|
||||
|
||||
|
||||
|
||||
* Use the database access client to connect to the database
|
||||
server as the ``root`` user:
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
$ mysql -u root -p
|
||||
|
||||
.. end
|
||||
|
||||
|
||||
* Create the ``cinder`` database:
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
MariaDB [(none)]> CREATE DATABASE cinder;
|
||||
|
||||
.. end
|
||||
|
||||
* Grant proper access to the ``cinder`` database:
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
MariaDB [(none)]> GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'localhost' \
|
||||
IDENTIFIED BY 'CINDER_DBPASS';
|
||||
MariaDB [(none)]> GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'%' \
|
||||
IDENTIFIED BY 'CINDER_DBPASS';
|
||||
|
||||
.. end
|
||||
|
||||
Replace ``CINDER_DBPASS`` with a suitable password.
|
||||
|
||||
* Exit the database access client.
|
||||
|
||||
#. Source the ``admin`` credentials to gain access to admin-only
|
||||
CLI commands:
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
$ . admin-openrc
|
||||
|
||||
.. end
|
||||
|
||||
#. To create the service credentials, complete these steps:
|
||||
|
||||
* Create a ``cinder`` user:
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
$ openstack user create --domain default --password-prompt cinder
|
||||
|
||||
User Password:
|
||||
Repeat User Password:
|
||||
+---------------------+----------------------------------+
|
||||
| Field | Value |
|
||||
+---------------------+----------------------------------+
|
||||
| domain_id | default |
|
||||
| enabled | True |
|
||||
| id | 9d7e33de3e1a498390353819bc7d245d |
|
||||
| name | cinder |
|
||||
| options | {} |
|
||||
| password_expires_at | None |
|
||||
+---------------------+----------------------------------+
|
||||
|
||||
.. end
|
||||
|
||||
* Add the ``admin`` role to the ``cinder`` user:
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
$ openstack role add --project service --user cinder admin
|
||||
|
||||
.. end
|
||||
|
||||
.. note::
|
||||
|
||||
This command provides no output.
|
||||
|
||||
* Create the ``cinderv2`` and ``cinderv3`` service entities:
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
$ openstack service create --name cinderv2 \
|
||||
--description "OpenStack Block Storage" volumev2
|
||||
|
||||
+-------------+----------------------------------+
|
||||
| Field | Value |
|
||||
+-------------+----------------------------------+
|
||||
| description | OpenStack Block Storage |
|
||||
| enabled | True |
|
||||
| id | eb9fd245bdbc414695952e93f29fe3ac |
|
||||
| name | cinderv2 |
|
||||
| type | volumev2 |
|
||||
+-------------+----------------------------------+
|
||||
|
||||
.. end
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
$ openstack service create --name cinderv3 \
|
||||
--description "OpenStack Block Storage" volumev3
|
||||
|
||||
+-------------+----------------------------------+
|
||||
| Field | Value |
|
||||
+-------------+----------------------------------+
|
||||
| description | OpenStack Block Storage |
|
||||
| enabled | True |
|
||||
| id | ab3bbbef780845a1a283490d281e7fda |
|
||||
| name | cinderv3 |
|
||||
| type | volumev3 |
|
||||
+-------------+----------------------------------+
|
||||
|
||||
.. end
|
||||
|
||||
.. note::
|
||||
|
||||
The Block Storage services require two service entities.
|
||||
|
||||
#. Create the Block Storage service API endpoints:
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
$ openstack endpoint create --region RegionOne \
|
||||
volumev2 public http://controller:8776/v2/%\(project_id\)s
|
||||
|
||||
+--------------+------------------------------------------+
|
||||
| Field | Value |
|
||||
+--------------+------------------------------------------+
|
||||
| enabled | True |
|
||||
| id | 513e73819e14460fb904163f41ef3759 |
|
||||
| interface | public |
|
||||
| region | RegionOne |
|
||||
| region_id | RegionOne |
|
||||
| service_id | eb9fd245bdbc414695952e93f29fe3ac |
|
||||
| service_name | cinderv2 |
|
||||
| service_type | volumev2 |
|
||||
| url | http://controller:8776/v2/%(project_id)s |
|
||||
+--------------+------------------------------------------+
|
||||
|
||||
$ openstack endpoint create --region RegionOne \
|
||||
volumev2 internal http://controller:8776/v2/%\(project_id\)s
|
||||
|
||||
+--------------+------------------------------------------+
|
||||
| Field | Value |
|
||||
+--------------+------------------------------------------+
|
||||
| enabled | True |
|
||||
| id | 6436a8a23d014cfdb69c586eff146a32 |
|
||||
| interface | internal |
|
||||
| region | RegionOne |
|
||||
| region_id | RegionOne |
|
||||
| service_id | eb9fd245bdbc414695952e93f29fe3ac |
|
||||
| service_name | cinderv2 |
|
||||
| service_type | volumev2 |
|
||||
| url | http://controller:8776/v2/%(project_id)s |
|
||||
+--------------+------------------------------------------+
|
||||
|
||||
$ openstack endpoint create --region RegionOne \
|
||||
volumev2 admin http://controller:8776/v2/%\(project_id\)s
|
||||
|
||||
+--------------+------------------------------------------+
|
||||
| Field | Value |
|
||||
+--------------+------------------------------------------+
|
||||
| enabled | True |
|
||||
| id | e652cf84dd334f359ae9b045a2c91d96 |
|
||||
| interface | admin |
|
||||
| region | RegionOne |
|
||||
| region_id | RegionOne |
|
||||
      | service_id   | eb9fd245bdbc414695952e93f29fe3ac         |
      | service_name | cinderv2                                 |
      | service_type | volumev2                                 |
      | url          | http://controller:8776/v2/%(project_id)s |
      +--------------+------------------------------------------+

   .. end

   .. code-block:: console

      $ openstack endpoint create --region RegionOne \
        volumev3 public http://controller:8776/v3/%\(project_id\)s

      +--------------+------------------------------------------+
      | Field        | Value                                    |
      +--------------+------------------------------------------+
      | enabled      | True                                     |
      | id           | 03fa2c90153546c295bf30ca86b1344b         |
      | interface    | public                                   |
      | region       | RegionOne                                |
      | region_id    | RegionOne                                |
      | service_id   | ab3bbbef780845a1a283490d281e7fda         |
      | service_name | cinderv3                                 |
      | service_type | volumev3                                 |
      | url          | http://controller:8776/v3/%(project_id)s |
      +--------------+------------------------------------------+

      $ openstack endpoint create --region RegionOne \
        volumev3 internal http://controller:8776/v3/%\(project_id\)s

      +--------------+------------------------------------------+
      | Field        | Value                                    |
      +--------------+------------------------------------------+
      | enabled      | True                                     |
      | id           | 94f684395d1b41068c70e4ecb11364b2         |
      | interface    | internal                                 |
      | region       | RegionOne                                |
      | region_id    | RegionOne                                |
      | service_id   | ab3bbbef780845a1a283490d281e7fda         |
      | service_name | cinderv3                                 |
      | service_type | volumev3                                 |
      | url          | http://controller:8776/v3/%(project_id)s |
      +--------------+------------------------------------------+

      $ openstack endpoint create --region RegionOne \
        volumev3 admin http://controller:8776/v3/%\(project_id\)s

      +--------------+------------------------------------------+
      | Field        | Value                                    |
      +--------------+------------------------------------------+
      | enabled      | True                                     |
      | id           | 4511c28a0f9840c78bacb25f10f62c98         |
      | interface    | admin                                    |
      | region       | RegionOne                                |
      | region_id    | RegionOne                                |
      | service_id   | ab3bbbef780845a1a283490d281e7fda         |
      | service_name | cinderv3                                 |
      | service_type | volumev3                                 |
      | url          | http://controller:8776/v3/%(project_id)s |
      +--------------+------------------------------------------+

   .. end

   .. note::

      The Block Storage services require endpoints for each service
      entity.
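   As a quick sanity check (this step is not part of the original guide),
   you can confirm the new endpoints are registered in the catalog; the
   IDs in your output will differ:

   .. code-block:: console

      $ openstack endpoint list --service volumev3

   .. end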
Install and configure components
--------------------------------

#. Install the packages:

   .. code-block:: console

      # zypper install openstack-cinder-api openstack-cinder-scheduler

   .. end

2. Edit the ``/etc/cinder/cinder.conf`` file and complete the
   following actions:

   * In the ``[database]`` section, configure database access:

     .. path /etc/cinder/cinder.conf
     .. code-block:: ini

        [database]
        # ...
        connection = mysql+pymysql://cinder:CINDER_DBPASS@controller/cinder

     .. end

     Replace ``CINDER_DBPASS`` with the password you chose for the
     Block Storage database.

   * In the ``[DEFAULT]`` section, configure ``RabbitMQ``
     message queue access:

     .. path /etc/cinder/cinder.conf
     .. code-block:: ini

        [DEFAULT]
        # ...
        transport_url = rabbit://openstack:RABBIT_PASS@controller

     .. end

     Replace ``RABBIT_PASS`` with the password you chose for the
     ``openstack`` account in ``RabbitMQ``.

   * In the ``[DEFAULT]`` and ``[keystone_authtoken]`` sections,
     configure Identity service access:

     .. path /etc/cinder/cinder.conf
     .. code-block:: ini

        [DEFAULT]
        # ...
        auth_strategy = keystone

        [keystone_authtoken]
        # ...
        auth_uri = http://controller:5000
        auth_url = http://controller:35357
        memcached_servers = controller:11211
        auth_type = password
        project_domain_name = default
        user_domain_name = default
        project_name = service
        username = cinder
        password = CINDER_PASS

     .. end

     Replace ``CINDER_PASS`` with the password you chose for
     the ``cinder`` user in the Identity service.

     .. note::

        Comment out or remove any other options in the
        ``[keystone_authtoken]`` section.

   * In the ``[DEFAULT]`` section, configure the ``my_ip`` option to
     use the management interface IP address of the controller node:

     .. path /etc/cinder/cinder.conf
     .. code-block:: ini

        [DEFAULT]
        # ...
        my_ip = 10.0.0.11

     .. end

   * In the ``[oslo_concurrency]`` section, configure the lock path:

     .. path /etc/cinder/cinder.conf
     .. code-block:: ini

        [oslo_concurrency]
        # ...
        lock_path = /var/lib/cinder/tmp

     .. end

Configure Compute to use Block Storage
--------------------------------------

* Edit the ``/etc/nova/nova.conf`` file and add the following
  to it:

  .. path /etc/nova/nova.conf
  .. code-block:: ini

     [cinder]
     os_region_name = RegionOne

  .. end

Finalize installation
---------------------

#. Restart the Compute API service:

   .. code-block:: console

      # systemctl restart openstack-nova-api.service

   .. end

#. Start the Block Storage services and configure them to start when
   the system boots:

   .. code-block:: console

      # systemctl enable openstack-cinder-api.service openstack-cinder-scheduler.service
      # systemctl start openstack-cinder-api.service openstack-cinder-scheduler.service

   .. end
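As a quick follow-up check (not part of the original guide), you can
confirm the units were enabled and that the scheduler registered itself;
exact output depends on your deployment:

.. code-block:: console

   $ systemctl is-enabled openstack-cinder-api.service openstack-cinder-scheduler.service
   $ openstack volume service list

.. end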
407
doc/install-guide/source/cinder-controller-install-rdo.rst
Normal file
@ -0,0 +1,407 @@
Install and configure controller node
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

This section describes how to install and configure the Block
Storage service, code-named cinder, on the controller node. This
service requires at least one additional storage node that provides
volumes to instances.

Prerequisites
-------------

Before you install and configure the Block Storage service, you
must create a database, service credentials, and API endpoints.

#. To create the database, complete these steps:

   * Use the database access client to connect to the database
     server as the ``root`` user:

     .. code-block:: console

        $ mysql -u root -p

     .. end

   * Create the ``cinder`` database:

     .. code-block:: console

        MariaDB [(none)]> CREATE DATABASE cinder;

     .. end

   * Grant proper access to the ``cinder`` database:

     .. code-block:: console

        MariaDB [(none)]> GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'localhost' \
          IDENTIFIED BY 'CINDER_DBPASS';
        MariaDB [(none)]> GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'%' \
          IDENTIFIED BY 'CINDER_DBPASS';

     .. end

     Replace ``CINDER_DBPASS`` with a suitable password.

   * Exit the database access client.

#. Source the ``admin`` credentials to gain access to admin-only
   CLI commands:

   .. code-block:: console

      $ . admin-openrc

   .. end

#. To create the service credentials, complete these steps:

   * Create a ``cinder`` user:

     .. code-block:: console

        $ openstack user create --domain default --password-prompt cinder

        User Password:
        Repeat User Password:
        +---------------------+----------------------------------+
        | Field               | Value                            |
        +---------------------+----------------------------------+
        | domain_id           | default                          |
        | enabled             | True                             |
        | id                  | 9d7e33de3e1a498390353819bc7d245d |
        | name                | cinder                           |
        | options             | {}                               |
        | password_expires_at | None                             |
        +---------------------+----------------------------------+

     .. end

   * Add the ``admin`` role to the ``cinder`` user:

     .. code-block:: console

        $ openstack role add --project service --user cinder admin

     .. end

     .. note::

        This command provides no output.

   * Create the ``cinderv2`` and ``cinderv3`` service entities:

     .. code-block:: console

        $ openstack service create --name cinderv2 \
          --description "OpenStack Block Storage" volumev2

        +-------------+----------------------------------+
        | Field       | Value                            |
        +-------------+----------------------------------+
        | description | OpenStack Block Storage          |
        | enabled     | True                             |
        | id          | eb9fd245bdbc414695952e93f29fe3ac |
        | name        | cinderv2                         |
        | type        | volumev2                         |
        +-------------+----------------------------------+

     .. end

     .. code-block:: console

        $ openstack service create --name cinderv3 \
          --description "OpenStack Block Storage" volumev3

        +-------------+----------------------------------+
        | Field       | Value                            |
        +-------------+----------------------------------+
        | description | OpenStack Block Storage          |
        | enabled     | True                             |
        | id          | ab3bbbef780845a1a283490d281e7fda |
        | name        | cinderv3                         |
        | type        | volumev3                         |
        +-------------+----------------------------------+

     .. end

     .. note::

        The Block Storage services require two service entities.

#. Create the Block Storage service API endpoints:

   .. code-block:: console

      $ openstack endpoint create --region RegionOne \
        volumev2 public http://controller:8776/v2/%\(project_id\)s

      +--------------+------------------------------------------+
      | Field        | Value                                    |
      +--------------+------------------------------------------+
      | enabled      | True                                     |
      | id           | 513e73819e14460fb904163f41ef3759         |
      | interface    | public                                   |
      | region       | RegionOne                                |
      | region_id    | RegionOne                                |
      | service_id   | eb9fd245bdbc414695952e93f29fe3ac         |
      | service_name | cinderv2                                 |
      | service_type | volumev2                                 |
      | url          | http://controller:8776/v2/%(project_id)s |
      +--------------+------------------------------------------+

      $ openstack endpoint create --region RegionOne \
        volumev2 internal http://controller:8776/v2/%\(project_id\)s

      +--------------+------------------------------------------+
      | Field        | Value                                    |
      +--------------+------------------------------------------+
      | enabled      | True                                     |
      | id           | 6436a8a23d014cfdb69c586eff146a32         |
      | interface    | internal                                 |
      | region       | RegionOne                                |
      | region_id    | RegionOne                                |
      | service_id   | eb9fd245bdbc414695952e93f29fe3ac         |
      | service_name | cinderv2                                 |
      | service_type | volumev2                                 |
      | url          | http://controller:8776/v2/%(project_id)s |
      +--------------+------------------------------------------+

      $ openstack endpoint create --region RegionOne \
        volumev2 admin http://controller:8776/v2/%\(project_id\)s

      +--------------+------------------------------------------+
      | Field        | Value                                    |
      +--------------+------------------------------------------+
      | enabled      | True                                     |
      | id           | e652cf84dd334f359ae9b045a2c91d96         |
      | interface    | admin                                    |
      | region       | RegionOne                                |
      | region_id    | RegionOne                                |
      | service_id   | eb9fd245bdbc414695952e93f29fe3ac         |
      | service_name | cinderv2                                 |
      | service_type | volumev2                                 |
      | url          | http://controller:8776/v2/%(project_id)s |
      +--------------+------------------------------------------+

   .. end

   .. code-block:: console

      $ openstack endpoint create --region RegionOne \
        volumev3 public http://controller:8776/v3/%\(project_id\)s

      +--------------+------------------------------------------+
      | Field        | Value                                    |
      +--------------+------------------------------------------+
      | enabled      | True                                     |
      | id           | 03fa2c90153546c295bf30ca86b1344b         |
      | interface    | public                                   |
      | region       | RegionOne                                |
      | region_id    | RegionOne                                |
      | service_id   | ab3bbbef780845a1a283490d281e7fda         |
      | service_name | cinderv3                                 |
      | service_type | volumev3                                 |
      | url          | http://controller:8776/v3/%(project_id)s |
      +--------------+------------------------------------------+

      $ openstack endpoint create --region RegionOne \
        volumev3 internal http://controller:8776/v3/%\(project_id\)s

      +--------------+------------------------------------------+
      | Field        | Value                                    |
      +--------------+------------------------------------------+
      | enabled      | True                                     |
      | id           | 94f684395d1b41068c70e4ecb11364b2         |
      | interface    | internal                                 |
      | region       | RegionOne                                |
      | region_id    | RegionOne                                |
      | service_id   | ab3bbbef780845a1a283490d281e7fda         |
      | service_name | cinderv3                                 |
      | service_type | volumev3                                 |
      | url          | http://controller:8776/v3/%(project_id)s |
      +--------------+------------------------------------------+

      $ openstack endpoint create --region RegionOne \
        volumev3 admin http://controller:8776/v3/%\(project_id\)s

      +--------------+------------------------------------------+
      | Field        | Value                                    |
      +--------------+------------------------------------------+
      | enabled      | True                                     |
      | id           | 4511c28a0f9840c78bacb25f10f62c98         |
      | interface    | admin                                    |
      | region       | RegionOne                                |
      | region_id    | RegionOne                                |
      | service_id   | ab3bbbef780845a1a283490d281e7fda         |
      | service_name | cinderv3                                 |
      | service_type | volumev3                                 |
      | url          | http://controller:8776/v3/%(project_id)s |
      +--------------+------------------------------------------+

   .. end

   .. note::

      The Block Storage services require endpoints for each service
      entity.
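The ``%\(project_id\)s`` in the commands above is shell-escaped so that
the literal template ``%(project_id)s`` is stored in the catalog; clients
substitute the caller's project ID at request time. A rough illustration
of that substitution (illustrative only, not cinder's actual code):

.. code-block:: python

   # Illustrative only: how a stored endpoint URL template expands.
   url = 'http://controller:8776/v3/%(project_id)s'
   print(url % {'project_id': 'ab3bbbef780845a1a283490d281e7fda'})
   # http://controller:8776/v3/ab3bbbef780845a1a283490d281e7fda

.. end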
Install and configure components
--------------------------------

#. Install the packages:

   .. code-block:: console

      # yum install openstack-cinder

   .. end

2. Edit the ``/etc/cinder/cinder.conf`` file and complete the
   following actions:

   * In the ``[database]`` section, configure database access:

     .. path /etc/cinder/cinder.conf
     .. code-block:: ini

        [database]
        # ...
        connection = mysql+pymysql://cinder:CINDER_DBPASS@controller/cinder

     .. end

     Replace ``CINDER_DBPASS`` with the password you chose for the
     Block Storage database.

   * In the ``[DEFAULT]`` section, configure ``RabbitMQ``
     message queue access:

     .. path /etc/cinder/cinder.conf
     .. code-block:: ini

        [DEFAULT]
        # ...
        transport_url = rabbit://openstack:RABBIT_PASS@controller

     .. end

     Replace ``RABBIT_PASS`` with the password you chose for the
     ``openstack`` account in ``RabbitMQ``.

   * In the ``[DEFAULT]`` and ``[keystone_authtoken]`` sections,
     configure Identity service access:

     .. path /etc/cinder/cinder.conf
     .. code-block:: ini

        [DEFAULT]
        # ...
        auth_strategy = keystone

        [keystone_authtoken]
        # ...
        auth_uri = http://controller:5000
        auth_url = http://controller:35357
        memcached_servers = controller:11211
        auth_type = password
        project_domain_name = default
        user_domain_name = default
        project_name = service
        username = cinder
        password = CINDER_PASS

     .. end

     Replace ``CINDER_PASS`` with the password you chose for
     the ``cinder`` user in the Identity service.

     .. note::

        Comment out or remove any other options in the
        ``[keystone_authtoken]`` section.

   * In the ``[DEFAULT]`` section, configure the ``my_ip`` option to
     use the management interface IP address of the controller node:

     .. path /etc/cinder/cinder.conf
     .. code-block:: ini

        [DEFAULT]
        # ...
        my_ip = 10.0.0.11

     .. end

   * In the ``[oslo_concurrency]`` section, configure the lock path:

     .. path /etc/cinder/cinder.conf
     .. code-block:: ini

        [oslo_concurrency]
        # ...
        lock_path = /var/lib/cinder/tmp

     .. end

3. Populate the Block Storage database:

   .. code-block:: console

      # su -s /bin/sh -c "cinder-manage db sync" cinder

   .. end

   .. note::

      Ignore any deprecation messages in this output.
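   As an optional check (not part of the original guide), you can
   confirm that the sync created the schema by listing the tables in
   the ``cinder`` database:

   .. code-block:: console

      $ mysql -u cinder -p cinder -e 'SHOW TABLES;'

   .. end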
Configure Compute to use Block Storage
--------------------------------------

* Edit the ``/etc/nova/nova.conf`` file and add the following
  to it:

  .. path /etc/nova/nova.conf
  .. code-block:: ini

     [cinder]
     os_region_name = RegionOne

  .. end

Finalize installation
---------------------

#. Restart the Compute API service:

   .. code-block:: console

      # systemctl restart openstack-nova-api.service

   .. end

#. Start the Block Storage services and configure them to start when
   the system boots:

   .. code-block:: console

      # systemctl enable openstack-cinder-api.service openstack-cinder-scheduler.service
      # systemctl start openstack-cinder-api.service openstack-cinder-scheduler.service

   .. end
406
doc/install-guide/source/cinder-controller-install-ubuntu.rst
Normal file
@ -0,0 +1,406 @@
Install and configure controller node
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

This section describes how to install and configure the Block
Storage service, code-named cinder, on the controller node. This
service requires at least one additional storage node that provides
volumes to instances.

Prerequisites
-------------

Before you install and configure the Block Storage service, you
must create a database, service credentials, and API endpoints.

#. To create the database, complete these steps:

   * Use the database access client to connect to the database
     server as the ``root`` user:

     .. code-block:: console

        # mysql

     .. end

   * Create the ``cinder`` database:

     .. code-block:: console

        MariaDB [(none)]> CREATE DATABASE cinder;

     .. end

   * Grant proper access to the ``cinder`` database:

     .. code-block:: console

        MariaDB [(none)]> GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'localhost' \
          IDENTIFIED BY 'CINDER_DBPASS';
        MariaDB [(none)]> GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'%' \
          IDENTIFIED BY 'CINDER_DBPASS';

     .. end

     Replace ``CINDER_DBPASS`` with a suitable password.

   * Exit the database access client.

#. Source the ``admin`` credentials to gain access to admin-only
   CLI commands:

   .. code-block:: console

      $ . admin-openrc

   .. end

#. To create the service credentials, complete these steps:

   * Create a ``cinder`` user:

     .. code-block:: console

        $ openstack user create --domain default --password-prompt cinder

        User Password:
        Repeat User Password:
        +---------------------+----------------------------------+
        | Field               | Value                            |
        +---------------------+----------------------------------+
        | domain_id           | default                          |
        | enabled             | True                             |
        | id                  | 9d7e33de3e1a498390353819bc7d245d |
        | name                | cinder                           |
        | options             | {}                               |
        | password_expires_at | None                             |
        +---------------------+----------------------------------+

     .. end

   * Add the ``admin`` role to the ``cinder`` user:

     .. code-block:: console

        $ openstack role add --project service --user cinder admin

     .. end

     .. note::

        This command provides no output.

   * Create the ``cinderv2`` and ``cinderv3`` service entities:

     .. code-block:: console

        $ openstack service create --name cinderv2 \
          --description "OpenStack Block Storage" volumev2

        +-------------+----------------------------------+
        | Field       | Value                            |
        +-------------+----------------------------------+
        | description | OpenStack Block Storage          |
        | enabled     | True                             |
        | id          | eb9fd245bdbc414695952e93f29fe3ac |
        | name        | cinderv2                         |
        | type        | volumev2                         |
        +-------------+----------------------------------+

     .. end

     .. code-block:: console

        $ openstack service create --name cinderv3 \
          --description "OpenStack Block Storage" volumev3

        +-------------+----------------------------------+
        | Field       | Value                            |
        +-------------+----------------------------------+
        | description | OpenStack Block Storage          |
        | enabled     | True                             |
        | id          | ab3bbbef780845a1a283490d281e7fda |
        | name        | cinderv3                         |
        | type        | volumev3                         |
        +-------------+----------------------------------+

     .. end

     .. note::

        The Block Storage services require two service entities.

#. Create the Block Storage service API endpoints:

   .. code-block:: console

      $ openstack endpoint create --region RegionOne \
        volumev2 public http://controller:8776/v2/%\(project_id\)s

      +--------------+------------------------------------------+
      | Field        | Value                                    |
      +--------------+------------------------------------------+
      | enabled      | True                                     |
      | id           | 513e73819e14460fb904163f41ef3759         |
      | interface    | public                                   |
      | region       | RegionOne                                |
      | region_id    | RegionOne                                |
      | service_id   | eb9fd245bdbc414695952e93f29fe3ac         |
      | service_name | cinderv2                                 |
      | service_type | volumev2                                 |
      | url          | http://controller:8776/v2/%(project_id)s |
      +--------------+------------------------------------------+

      $ openstack endpoint create --region RegionOne \
        volumev2 internal http://controller:8776/v2/%\(project_id\)s

      +--------------+------------------------------------------+
      | Field        | Value                                    |
      +--------------+------------------------------------------+
      | enabled      | True                                     |
      | id           | 6436a8a23d014cfdb69c586eff146a32         |
      | interface    | internal                                 |
      | region       | RegionOne                                |
      | region_id    | RegionOne                                |
      | service_id   | eb9fd245bdbc414695952e93f29fe3ac         |
      | service_name | cinderv2                                 |
      | service_type | volumev2                                 |
      | url          | http://controller:8776/v2/%(project_id)s |
      +--------------+------------------------------------------+

      $ openstack endpoint create --region RegionOne \
        volumev2 admin http://controller:8776/v2/%\(project_id\)s

      +--------------+------------------------------------------+
      | Field        | Value                                    |
      +--------------+------------------------------------------+
      | enabled      | True                                     |
      | id           | e652cf84dd334f359ae9b045a2c91d96         |
      | interface    | admin                                    |
      | region       | RegionOne                                |
      | region_id    | RegionOne                                |
      | service_id   | eb9fd245bdbc414695952e93f29fe3ac         |
      | service_name | cinderv2                                 |
      | service_type | volumev2                                 |
      | url          | http://controller:8776/v2/%(project_id)s |
      +--------------+------------------------------------------+

   .. end

   .. code-block:: console

      $ openstack endpoint create --region RegionOne \
        volumev3 public http://controller:8776/v3/%\(project_id\)s

      +--------------+------------------------------------------+
      | Field        | Value                                    |
      +--------------+------------------------------------------+
      | enabled      | True                                     |
      | id           | 03fa2c90153546c295bf30ca86b1344b         |
      | interface    | public                                   |
      | region       | RegionOne                                |
      | region_id    | RegionOne                                |
      | service_id   | ab3bbbef780845a1a283490d281e7fda         |
      | service_name | cinderv3                                 |
      | service_type | volumev3                                 |
      | url          | http://controller:8776/v3/%(project_id)s |
      +--------------+------------------------------------------+

      $ openstack endpoint create --region RegionOne \
        volumev3 internal http://controller:8776/v3/%\(project_id\)s

      +--------------+------------------------------------------+
      | Field        | Value                                    |
      +--------------+------------------------------------------+
      | enabled      | True                                     |
      | id           | 94f684395d1b41068c70e4ecb11364b2         |
      | interface    | internal                                 |
      | region       | RegionOne                                |
      | region_id    | RegionOne                                |
      | service_id   | ab3bbbef780845a1a283490d281e7fda         |
      | service_name | cinderv3                                 |
      | service_type | volumev3                                 |
      | url          | http://controller:8776/v3/%(project_id)s |
      +--------------+------------------------------------------+

      $ openstack endpoint create --region RegionOne \
        volumev3 admin http://controller:8776/v3/%\(project_id\)s

      +--------------+------------------------------------------+
      | Field        | Value                                    |
      +--------------+------------------------------------------+
      | enabled      | True                                     |
      | id           | 4511c28a0f9840c78bacb25f10f62c98         |
      | interface    | admin                                    |
      | region       | RegionOne                                |
      | region_id    | RegionOne                                |
      | service_id   | ab3bbbef780845a1a283490d281e7fda         |
      | service_name | cinderv3                                 |
      | service_type | volumev3                                 |
      | url          | http://controller:8776/v3/%(project_id)s |
      +--------------+------------------------------------------+

   .. end

   .. note::

      The Block Storage services require endpoints for each service
      entity.

Install and configure components
--------------------------------

#. Install the packages:

   .. code-block:: console

      # apt install cinder-api cinder-scheduler

   .. end

2. Edit the ``/etc/cinder/cinder.conf`` file and complete the
   following actions:

   * In the ``[database]`` section, configure database access:

     .. path /etc/cinder/cinder.conf
     .. code-block:: ini

        [database]
        # ...
        connection = mysql+pymysql://cinder:CINDER_DBPASS@controller/cinder

     .. end

     Replace ``CINDER_DBPASS`` with the password you chose for the
     Block Storage database.

   * In the ``[DEFAULT]`` section, configure ``RabbitMQ``
     message queue access:

     .. path /etc/cinder/cinder.conf
     .. code-block:: ini

        [DEFAULT]
        # ...
        transport_url = rabbit://openstack:RABBIT_PASS@controller

     .. end

     Replace ``RABBIT_PASS`` with the password you chose for the
     ``openstack`` account in ``RabbitMQ``.

   * In the ``[DEFAULT]`` and ``[keystone_authtoken]`` sections,
     configure Identity service access:

     .. path /etc/cinder/cinder.conf
     .. code-block:: ini

        [DEFAULT]
        # ...
        auth_strategy = keystone

        [keystone_authtoken]
        # ...
        auth_uri = http://controller:5000
        auth_url = http://controller:35357
        memcached_servers = controller:11211
        auth_type = password
        project_domain_name = default
        user_domain_name = default
        project_name = service
        username = cinder
        password = CINDER_PASS

     .. end

     Replace ``CINDER_PASS`` with the password you chose for
     the ``cinder`` user in the Identity service.

     .. note::

        Comment out or remove any other options in the
        ``[keystone_authtoken]`` section.

   * In the ``[DEFAULT]`` section, configure the ``my_ip`` option to
     use the management interface IP address of the controller node:

     .. path /etc/cinder/cinder.conf
     .. code-block:: ini

        [DEFAULT]
        # ...
        my_ip = 10.0.0.11

     .. end

   * In the ``[oslo_concurrency]`` section, configure the lock path:

     .. path /etc/cinder/cinder.conf
     .. code-block:: ini

        [oslo_concurrency]
        # ...
        lock_path = /var/lib/cinder/tmp

     .. end

3. Populate the Block Storage database:

   .. code-block:: console

      # su -s /bin/sh -c "cinder-manage db sync" cinder

   .. end

   .. note::

      Ignore any deprecation messages in this output.

Configure Compute to use Block Storage
--------------------------------------

* Edit the ``/etc/nova/nova.conf`` file and add the following
  to it:

  .. path /etc/nova/nova.conf
  .. code-block:: ini

     [cinder]
     os_region_name = RegionOne

  .. end

Finalize installation
---------------------

#. Restart the Compute API service:

   .. code-block:: console

      # service nova-api restart

   .. end

#. Restart the Block Storage services:

   .. code-block:: console

      # service cinder-scheduler restart
      # service apache2 restart

   .. end
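After the restarts, a simple way to confirm the controller services are
healthy (this check is not part of the original guide) is to list the
volume services; ``cinder-scheduler`` should report state ``up``:

.. code-block:: console

   $ openstack volume service list

.. end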
@ -3,471 +3,7 @@

Install and configure controller node
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

This section describes how to install and configure the Block
Storage service, code-named cinder, on the controller node. This
service requires at least one additional storage node that provides
volumes to instances.

.. toctree::
   :glob:

Prerequisites
-------------

Before you install and configure the Block Storage service, you
must create a database, service credentials, and API endpoints.

#. To create the database, complete these steps:

   .. only:: ubuntu

      * Use the database access client to connect to the database
        server as the ``root`` user:

        .. code-block:: console

           # mysql

        .. end

   .. endonly

   .. only:: rdo or debian or obs

      * Use the database access client to connect to the database
        server as the ``root`` user:

        .. code-block:: console

           $ mysql -u root -p

        .. end

   .. endonly

   * Create the ``cinder`` database:

     .. code-block:: console

        MariaDB [(none)]> CREATE DATABASE cinder;

     .. end

   * Grant proper access to the ``cinder`` database:

     .. code-block:: console

        MariaDB [(none)]> GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'localhost' \
          IDENTIFIED BY 'CINDER_DBPASS';
        MariaDB [(none)]> GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'%' \
          IDENTIFIED BY 'CINDER_DBPASS';

     .. end

     Replace ``CINDER_DBPASS`` with a suitable password.

   * Exit the database access client.

#. Source the ``admin`` credentials to gain access to admin-only
   CLI commands:

   .. code-block:: console

      $ . admin-openrc

   .. end

#. To create the service credentials, complete these steps:

   * Create a ``cinder`` user:

     .. code-block:: console

        $ openstack user create --domain default --password-prompt cinder

        User Password:
        Repeat User Password:
        +---------------------+----------------------------------+
        | Field               | Value                            |
        +---------------------+----------------------------------+
        | domain_id           | default                          |
        | enabled             | True                             |
        | id                  | 9d7e33de3e1a498390353819bc7d245d |
        | name                | cinder                           |
        | options             | {}                               |
        | password_expires_at | None                             |
        +---------------------+----------------------------------+

     .. end

   * Add the ``admin`` role to the ``cinder`` user:

     .. code-block:: console

        $ openstack role add --project service --user cinder admin

     .. end

     .. note::

        This command provides no output.

   * Create the ``cinderv2`` and ``cinderv3`` service entities:

     .. code-block:: console

        $ openstack service create --name cinderv2 \
          --description "OpenStack Block Storage" volumev2

        +-------------+----------------------------------+
        | Field       | Value                            |
        +-------------+----------------------------------+
        | description | OpenStack Block Storage          |
        | enabled     | True                             |
        | id          | eb9fd245bdbc414695952e93f29fe3ac |
        | name        | cinderv2                         |
        | type        | volumev2                         |
        +-------------+----------------------------------+

     .. end

     .. code-block:: console

        $ openstack service create --name cinderv3 \
          --description "OpenStack Block Storage" volumev3

        +-------------+----------------------------------+
        | Field       | Value                            |
        +-------------+----------------------------------+
        | description | OpenStack Block Storage          |
        | enabled     | True                             |
        | id          | ab3bbbef780845a1a283490d281e7fda |
        | name        | cinderv3                         |
        | type        | volumev3                         |
        +-------------+----------------------------------+

     .. end

     .. note::

        The Block Storage services require two service entities.

#. Create the Block Storage service API endpoints:

   .. code-block:: console

      $ openstack endpoint create --region RegionOne \
        volumev2 public http://controller:8776/v2/%\(project_id\)s

      +--------------+------------------------------------------+
      | Field        | Value                                    |
      +--------------+------------------------------------------+
      | enabled      | True                                     |
      | id           | 513e73819e14460fb904163f41ef3759         |
      | interface    | public                                   |
      | region       | RegionOne                                |
      | region_id    | RegionOne                                |
      | service_id   | eb9fd245bdbc414695952e93f29fe3ac
|
||||
| service_name | cinderv2 |
|
||||
| service_type | volumev2 |
|
||||
| url | http://controller:8776/v2/%(project_id)s |
|
||||
+--------------+------------------------------------------+
|
||||
|
||||
$ openstack endpoint create --region RegionOne \
|
||||
volumev2 internal http://controller:8776/v2/%\(project_id\)s
|
||||
|
||||
+--------------+------------------------------------------+
|
||||
| Field | Value |
|
||||
+--------------+------------------------------------------+
|
||||
| enabled | True |
|
||||
| id | 6436a8a23d014cfdb69c586eff146a32 |
|
||||
| interface | internal |
|
||||
| region | RegionOne |
|
||||
| region_id | RegionOne |
|
||||
| service_id | eb9fd245bdbc414695952e93f29fe3ac |
|
||||
| service_name | cinderv2 |
|
||||
| service_type | volumev2 |
|
||||
| url | http://controller:8776/v2/%(project_id)s |
|
||||
+--------------+------------------------------------------+
|
||||
|
||||
$ openstack endpoint create --region RegionOne \
|
||||
volumev2 admin http://controller:8776/v2/%\(project_id\)s
|
||||
|
||||
+--------------+------------------------------------------+
|
||||
| Field | Value |
|
||||
+--------------+------------------------------------------+
|
||||
| enabled | True |
|
||||
| id | e652cf84dd334f359ae9b045a2c91d96 |
|
||||
| interface | admin |
|
||||
| region | RegionOne |
|
||||
| region_id | RegionOne |
|
||||
| service_id | eb9fd245bdbc414695952e93f29fe3ac |
|
||||
| service_name | cinderv2 |
|
||||
| service_type | volumev2 |
|
||||
| url | http://controller:8776/v2/%(project_id)s |
|
||||
+--------------+------------------------------------------+
|
||||
|
||||
.. end
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
$ openstack endpoint create --region RegionOne \
|
||||
volumev3 public http://controller:8776/v3/%\(project_id\)s
|
||||
|
||||
+--------------+------------------------------------------+
|
||||
| Field | Value |
|
||||
+--------------+------------------------------------------+
|
||||
| enabled | True |
|
||||
| id | 03fa2c90153546c295bf30ca86b1344b |
|
||||
| interface | public |
|
||||
| region | RegionOne |
|
||||
| region_id | RegionOne |
|
||||
| service_id | ab3bbbef780845a1a283490d281e7fda |
|
||||
| service_name | cinderv3 |
|
||||
| service_type | volumev3 |
|
||||
| url | http://controller:8776/v3/%(project_id)s |
|
||||
+--------------+------------------------------------------+
|
||||
|
||||
$ openstack endpoint create --region RegionOne \
|
||||
volumev3 internal http://controller:8776/v3/%\(project_id\)s
|
||||
|
||||
+--------------+------------------------------------------+
|
||||
| Field | Value |
|
||||
+--------------+------------------------------------------+
|
||||
| enabled | True |
|
||||
| id | 94f684395d1b41068c70e4ecb11364b2 |
|
||||
| interface | internal |
|
||||
| region | RegionOne |
|
||||
| region_id | RegionOne |
|
||||
| service_id | ab3bbbef780845a1a283490d281e7fda |
|
||||
| service_name | cinderv3 |
|
||||
| service_type | volumev3 |
|
||||
| url | http://controller:8776/v3/%(project_id)s |
|
||||
+--------------+------------------------------------------+
|
||||
|
||||
$ openstack endpoint create --region RegionOne \
|
||||
volumev3 admin http://controller:8776/v3/%\(project_id\)s
|
||||
|
||||
+--------------+------------------------------------------+
|
||||
| Field | Value |
|
||||
+--------------+------------------------------------------+
|
||||
| enabled | True |
|
||||
| id | 4511c28a0f9840c78bacb25f10f62c98 |
|
||||
| interface | admin |
|
||||
| region | RegionOne |
|
||||
| region_id | RegionOne |
|
||||
| service_id | ab3bbbef780845a1a283490d281e7fda |
|
||||
| service_name | cinderv3 |
|
||||
| service_type | volumev3 |
|
||||
| url | http://controller:8776/v3/%(project_id)s |
|
||||
+--------------+------------------------------------------+
|
||||
|
||||
.. end
|
||||
|
||||
.. note::
|
||||
|
||||
The Block Storage services require endpoints for each service
|
||||
entity.
|
||||
|
||||
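The ``%\(project_id\)s`` portion of each endpoint URL is a Python-style string-interpolation placeholder that the service fills in with the caller's project ID at request time; the backslashes only keep the shell from interpreting the parentheses. A minimal sketch of the substitution follows (the project ID value is a made-up example, and whether the service uses exactly this formatting call internally is an implementation detail):

```python
# Endpoint URL as stored by the Identity service, i.e. after the shell
# has removed the backslash escapes.
template = "http://controller:8776/v3/%(project_id)s"

# The placeholder expands with ordinary Python %-formatting.
# The project ID below is a made-up example value.
url = template % {"project_id": "a1b2c3d4e5f6a7b8c9d0e1f2a3b4c5d6"}
print(url)
```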
Install and configure components
--------------------------------

.. only:: obs

   #. Install the packages:

      .. code-block:: console

         # zypper install openstack-cinder-api openstack-cinder-scheduler

      .. end

.. endonly

.. only:: rdo

   #. Install the packages:

      .. code-block:: console

         # yum install openstack-cinder

      .. end

.. endonly

.. only:: ubuntu or debian

   #. Install the packages:

      .. code-block:: console

         # apt install cinder-api cinder-scheduler

      .. end

.. endonly

2. Edit the ``/etc/cinder/cinder.conf`` file and complete the
   following actions:

   * In the ``[database]`` section, configure database access:

     .. path /etc/cinder/cinder.conf
     .. code-block:: ini

        [database]
        # ...
        connection = mysql+pymysql://cinder:CINDER_DBPASS@controller/cinder

     .. end

     Replace ``CINDER_DBPASS`` with the password you chose for the
     Block Storage database.

   * In the ``[DEFAULT]`` section, configure ``RabbitMQ``
     message queue access:

     .. path /etc/cinder/cinder.conf
     .. code-block:: ini

        [DEFAULT]
        # ...
        transport_url = rabbit://openstack:RABBIT_PASS@controller

     .. end

     Replace ``RABBIT_PASS`` with the password you chose for the
     ``openstack`` account in ``RabbitMQ``.

   * In the ``[DEFAULT]`` and ``[keystone_authtoken]`` sections,
     configure Identity service access:

     .. path /etc/cinder/cinder.conf
     .. code-block:: ini

        [DEFAULT]
        # ...
        auth_strategy = keystone

        [keystone_authtoken]
        # ...
        auth_uri = http://controller:5000
        auth_url = http://controller:35357
        memcached_servers = controller:11211
        auth_type = password
        project_domain_name = default
        user_domain_name = default
        project_name = service
        username = cinder
        password = CINDER_PASS

     .. end

     Replace ``CINDER_PASS`` with the password you chose for
     the ``cinder`` user in the Identity service.

     .. note::

        Comment out or remove any other options in the
        ``[keystone_authtoken]`` section.

   * In the ``[DEFAULT]`` section, configure the ``my_ip`` option to
     use the management interface IP address of the controller node:

     .. path /etc/cinder/cinder.conf
     .. code-block:: ini

        [DEFAULT]
        # ...
        my_ip = 10.0.0.11

     .. end

.. only:: obs or rdo or ubuntu

   * In the ``[oslo_concurrency]`` section, configure the lock path:

     .. path /etc/cinder/cinder.conf
     .. code-block:: ini

        [oslo_concurrency]
        # ...
        lock_path = /var/lib/cinder/tmp

     .. end

.. endonly

.. only:: rdo or ubuntu or debian

   3. Populate the Block Storage database:

      .. code-block:: console

         # su -s /bin/sh -c "cinder-manage db sync" cinder

      .. end

      .. note::

         Ignore any deprecation messages in this output.

.. endonly
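Both the ``connection`` and ``transport_url`` options set above are ordinary URLs, so a quick way to double-check the credentials and host embedded in them is to parse them with the standard library. A small sketch, using the placeholder passwords from this guide:

```python
from urllib.parse import urlparse

# The two URL-valued options configured in /etc/cinder/cinder.conf.
# CINDER_DBPASS and RABBIT_PASS are the placeholders from this guide.
for option in (
    "mysql+pymysql://cinder:CINDER_DBPASS@controller/cinder",
    "rabbit://openstack:RABBIT_PASS@controller",
):
    parts = urlparse(option)
    # scheme = driver, username/password = credentials, hostname = server
    print(parts.scheme, parts.username, parts.password, parts.hostname)
```

If a password contains URL-reserved characters, it must be percent-encoded before being embedded in either option.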
Configure Compute to use Block Storage
--------------------------------------

* Edit the ``/etc/nova/nova.conf`` file and add the following
  to it:

  .. path /etc/nova/nova.conf
  .. code-block:: ini

     [cinder]
     os_region_name = RegionOne

  .. end

Finalize installation
---------------------

.. only:: obs or rdo

   #. Restart the Compute API service:

      .. code-block:: console

         # systemctl restart openstack-nova-api.service

      .. end

   #. Start the Block Storage services and configure them to start when
      the system boots:

      .. code-block:: console

         # systemctl enable openstack-cinder-api.service openstack-cinder-scheduler.service
         # systemctl start openstack-cinder-api.service openstack-cinder-scheduler.service

      .. end

.. endonly

.. only:: ubuntu or debian

   #. Restart the Compute API service:

      .. code-block:: console

         # service nova-api restart

      .. end

   #. Restart the Block Storage services:

      .. code-block:: console

         # service cinder-scheduler restart
         # service apache2 restart

      .. end

.. endonly
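After the restart, ``cinder-api`` should be listening on port 8776 of the controller. A hedged sketch of a simple reachability check (the hostname ``controller`` and port come from this guide; the helper name is made up for illustration):

```python
import socket

def port_open(host, port, timeout=2.0):
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Example: check the Block Storage API endpoint on the controller node.
# print(port_open("controller", 8776))
```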
cinder-controller-install-*

263 doc/install-guide/source/cinder-storage-install-debian.rst (Normal file)
@@ -0,0 +1,263 @@
Install and configure a storage node
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

This section describes how to install and configure storage nodes
for the Block Storage service. For simplicity, this configuration
references one storage node with an empty local block storage device.
The instructions use ``/dev/sdb``, but you can substitute a different
value for your particular node.

The service provisions logical volumes on this device using the
:term:`LVM <Logical Volume Manager (LVM)>` driver and provides them
to instances via :term:`iSCSI <iSCSI Qualified Name (IQN)>` transport.
You can follow these instructions with minor modifications to horizontally
scale your environment with additional storage nodes.

Prerequisites
-------------

Before you install and configure the Block Storage service on the
storage node, you must prepare the storage device.

.. note::

   Perform these steps on the storage node.

#. Install the supporting utility packages:

   .. note::

      Some distributions include LVM by default.

#. Create the LVM physical volume ``/dev/sdb``:

   .. code-block:: console

      # pvcreate /dev/sdb

      Physical volume "/dev/sdb" successfully created

   .. end

#. Create the LVM volume group ``cinder-volumes``:

   .. code-block:: console

      # vgcreate cinder-volumes /dev/sdb

      Volume group "cinder-volumes" successfully created

   .. end

   The Block Storage service creates logical volumes in this volume group.

#. Only instances can access Block Storage volumes. However, the
   underlying operating system manages the devices associated with
   the volumes. By default, the LVM volume scanning tool scans the
   ``/dev`` directory for block storage devices that
   contain volumes. If projects use LVM on their volumes, the scanning
   tool detects these volumes and attempts to cache them, which can cause
   a variety of problems with both the underlying operating system
   and project volumes. You must reconfigure LVM to scan only the devices
   that contain the ``cinder-volumes`` volume group. Edit the
   ``/etc/lvm/lvm.conf`` file and complete the following actions:

   * In the ``devices`` section, add a filter that accepts the
     ``/dev/sdb`` device and rejects all other devices:

     .. path /etc/lvm/lvm.conf
     .. code-block:: none

        devices {
        ...
        filter = [ "a/sdb/", "r/.*/"]

     .. end

     Each item in the filter array begins with ``a`` for **accept** or
     ``r`` for **reject** and includes a regular expression for the
     device name. The array must end with ``r/.*/`` to reject any
     remaining devices. You can use the :command:`vgs -vvvv` command
     to test filters.

     .. warning::

        If your storage nodes use LVM on the operating system disk, you
        must also add the associated device to the filter. For example,
        if the ``/dev/sda`` device contains the operating system:

        .. ignore_path /etc/lvm/lvm.conf
        .. code-block:: ini

           filter = [ "a/sda/", "a/sdb/", "r/.*/"]

        .. end

        Similarly, if your compute nodes use LVM on the operating
        system disk, you must also modify the filter in the
        ``/etc/lvm/lvm.conf`` file on those nodes to include only
        the operating system disk. For example, if the ``/dev/sda``
        device contains the operating system:

        .. ignore_path /etc/lvm/lvm.conf
        .. code-block:: ini

           filter = [ "a/sda/", "r/.*/"]

        .. end
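The first filter entry whose regular expression matches a device decides whether that device is accepted or rejected, which is why the catch-all ``r/.*/`` entry must come last. A small sketch of that first-match rule (the helper function is illustrative only, not part of LVM, and devices matched by no entry are accepted by default):

```python
import re

def lvm_accepts(device, filters):
    """Apply LVM-style filter entries: the first matching entry wins."""
    for entry in filters:                 # e.g. "a/sdb/" or "r/.*/"
        action, pattern = entry[0], entry[2:-1]
        if re.search(pattern, device):
            return action == "a"
    return True                           # no entry matched: accept

filters = ["a/sdb/", "r/.*/"]
print(lvm_accepts("/dev/sdb", filters))   # matches "a/sdb/", accepted
print(lvm_accepts("/dev/sda", filters))   # falls through to "r/.*/", rejected
```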
Install and configure components
--------------------------------

#. Install the packages:

   .. code-block:: console

      # apt install cinder-volume

   .. end

2. Edit the ``/etc/cinder/cinder.conf`` file
   and complete the following actions:

   * In the ``[database]`` section, configure database access:

     .. path /etc/cinder/cinder.conf
     .. code-block:: ini

        [database]
        # ...
        connection = mysql+pymysql://cinder:CINDER_DBPASS@controller/cinder

     .. end

     Replace ``CINDER_DBPASS`` with the password you chose for
     the Block Storage database.

   * In the ``[DEFAULT]`` section, configure ``RabbitMQ``
     message queue access:

     .. path /etc/cinder/cinder.conf
     .. code-block:: ini

        [DEFAULT]
        # ...
        transport_url = rabbit://openstack:RABBIT_PASS@controller

     .. end

     Replace ``RABBIT_PASS`` with the password you chose for
     the ``openstack`` account in ``RabbitMQ``.

   * In the ``[DEFAULT]`` and ``[keystone_authtoken]`` sections,
     configure Identity service access:

     .. path /etc/cinder/cinder.conf
     .. code-block:: ini

        [DEFAULT]
        # ...
        auth_strategy = keystone

        [keystone_authtoken]
        # ...
        auth_uri = http://controller:5000
        auth_url = http://controller:35357
        memcached_servers = controller:11211
        auth_type = password
        project_domain_name = default
        user_domain_name = default
        project_name = service
        username = cinder
        password = CINDER_PASS

     .. end

     Replace ``CINDER_PASS`` with the password you chose for the
     ``cinder`` user in the Identity service.

     .. note::

        Comment out or remove any other options in the
        ``[keystone_authtoken]`` section.

   * In the ``[DEFAULT]`` section, configure the ``my_ip`` option:

     .. path /etc/cinder/cinder.conf
     .. code-block:: ini

        [DEFAULT]
        # ...
        my_ip = MANAGEMENT_INTERFACE_IP_ADDRESS

     .. end

     Replace ``MANAGEMENT_INTERFACE_IP_ADDRESS`` with the IP address
     of the management network interface on your storage node,
     typically 10.0.0.41 for the first node in the
     :ref:`example architecture <overview-example-architectures>`.

   * In the ``[DEFAULT]`` section, enable the LVM back end:

     .. path /etc/cinder/cinder.conf
     .. code-block:: ini

        [DEFAULT]
        # ...
        enabled_backends = lvm

     .. end

     .. note::

        Back-end names are arbitrary. As an example, this guide
        uses the name of the driver as the name of the back end.

   * In the ``[DEFAULT]`` section, configure the location of the
     Image service API:

     .. path /etc/cinder/cinder.conf
     .. code-block:: ini

        [DEFAULT]
        # ...
        glance_api_servers = http://controller:9292

     .. end

   * In the ``[oslo_concurrency]`` section, configure the lock path:

     .. path /etc/cinder/cinder.conf
     .. code-block:: ini

        [oslo_concurrency]
        # ...
        lock_path = /var/lib/cinder/tmp

     .. end

Finalize installation
---------------------

#. Restart the Block Storage volume service including its dependencies:

   .. code-block:: console

      # service tgt restart
      # service cinder-volume restart

   .. end
309 doc/install-guide/source/cinder-storage-install-obs.rst (Normal file)
@@ -0,0 +1,309 @@
Install and configure a storage node
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

This section describes how to install and configure storage nodes
for the Block Storage service. For simplicity, this configuration
references one storage node with an empty local block storage device.
The instructions use ``/dev/sdb``, but you can substitute a different
value for your particular node.

The service provisions logical volumes on this device using the
:term:`LVM <Logical Volume Manager (LVM)>` driver and provides them
to instances via :term:`iSCSI <iSCSI Qualified Name (IQN)>` transport.
You can follow these instructions with minor modifications to horizontally
scale your environment with additional storage nodes.

Prerequisites
-------------

Before you install and configure the Block Storage service on the
storage node, you must prepare the storage device.

.. note::

   Perform these steps on the storage node.

#. Install the supporting utility packages:

   * Install the LVM packages:

     .. code-block:: console

        # zypper install lvm2

     .. end

   * (Optional) If you intend to use non-raw image types such as QCOW2
     and VMDK, install the QEMU package:

     .. code-block:: console

        # zypper install qemu

     .. end

   .. note::

      Some distributions include LVM by default.

#. Create the LVM physical volume ``/dev/sdb``:

   .. code-block:: console

      # pvcreate /dev/sdb

      Physical volume "/dev/sdb" successfully created

   .. end

#. Create the LVM volume group ``cinder-volumes``:

   .. code-block:: console

      # vgcreate cinder-volumes /dev/sdb

      Volume group "cinder-volumes" successfully created

   .. end

   The Block Storage service creates logical volumes in this volume group.

#. Only instances can access Block Storage volumes. However, the
   underlying operating system manages the devices associated with
   the volumes. By default, the LVM volume scanning tool scans the
   ``/dev`` directory for block storage devices that
   contain volumes. If projects use LVM on their volumes, the scanning
   tool detects these volumes and attempts to cache them, which can cause
   a variety of problems with both the underlying operating system
   and project volumes. You must reconfigure LVM to scan only the devices
   that contain the ``cinder-volumes`` volume group. Edit the
   ``/etc/lvm/lvm.conf`` file and complete the following actions:

   * In the ``devices`` section, add a filter that accepts the
     ``/dev/sdb`` device and rejects all other devices:

     .. path /etc/lvm/lvm.conf
     .. code-block:: none

        devices {
        ...
        filter = [ "a/sdb/", "r/.*/"]

     .. end

     Each item in the filter array begins with ``a`` for **accept** or
     ``r`` for **reject** and includes a regular expression for the
     device name. The array must end with ``r/.*/`` to reject any
     remaining devices. You can use the :command:`vgs -vvvv` command
     to test filters.

     .. warning::

        If your storage nodes use LVM on the operating system disk, you
        must also add the associated device to the filter. For example,
        if the ``/dev/sda`` device contains the operating system:

        .. ignore_path /etc/lvm/lvm.conf
        .. code-block:: ini

           filter = [ "a/sda/", "a/sdb/", "r/.*/"]

        .. end

        Similarly, if your compute nodes use LVM on the operating
        system disk, you must also modify the filter in the
        ``/etc/lvm/lvm.conf`` file on those nodes to include only
        the operating system disk. For example, if the ``/dev/sda``
        device contains the operating system:

        .. ignore_path /etc/lvm/lvm.conf
        .. code-block:: ini

           filter = [ "a/sda/", "r/.*/"]

        .. end

Install and configure components
--------------------------------

#. Install the packages:

   .. code-block:: console

      # zypper install openstack-cinder-volume tgt

   .. end

2. Edit the ``/etc/cinder/cinder.conf`` file
   and complete the following actions:

   * In the ``[database]`` section, configure database access:

     .. path /etc/cinder/cinder.conf
     .. code-block:: ini

        [database]
        # ...
        connection = mysql+pymysql://cinder:CINDER_DBPASS@controller/cinder

     .. end

     Replace ``CINDER_DBPASS`` with the password you chose for
     the Block Storage database.

   * In the ``[DEFAULT]`` section, configure ``RabbitMQ``
     message queue access:

     .. path /etc/cinder/cinder.conf
     .. code-block:: ini

        [DEFAULT]
        # ...
        transport_url = rabbit://openstack:RABBIT_PASS@controller

     .. end

     Replace ``RABBIT_PASS`` with the password you chose for
     the ``openstack`` account in ``RabbitMQ``.

   * In the ``[DEFAULT]`` and ``[keystone_authtoken]`` sections,
     configure Identity service access:

     .. path /etc/cinder/cinder.conf
     .. code-block:: ini

        [DEFAULT]
        # ...
        auth_strategy = keystone

        [keystone_authtoken]
        # ...
        auth_uri = http://controller:5000
        auth_url = http://controller:35357
        memcached_servers = controller:11211
        auth_type = password
        project_domain_name = default
        user_domain_name = default
        project_name = service
        username = cinder
        password = CINDER_PASS

     .. end

     Replace ``CINDER_PASS`` with the password you chose for the
     ``cinder`` user in the Identity service.

     .. note::

        Comment out or remove any other options in the
        ``[keystone_authtoken]`` section.

   * In the ``[DEFAULT]`` section, configure the ``my_ip`` option:

     .. path /etc/cinder/cinder.conf
     .. code-block:: ini

        [DEFAULT]
        # ...
        my_ip = MANAGEMENT_INTERFACE_IP_ADDRESS

     .. end

     Replace ``MANAGEMENT_INTERFACE_IP_ADDRESS`` with the IP address
     of the management network interface on your storage node,
     typically 10.0.0.41 for the first node in the
     :ref:`example architecture <overview-example-architectures>`.

   * In the ``[lvm]`` section, configure the LVM back end with the
     LVM driver, ``cinder-volumes`` volume group, iSCSI protocol,
     and appropriate iSCSI service:

     .. path /etc/cinder/cinder.conf
     .. code-block:: ini

        [lvm]
        # ...
        volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
        volume_group = cinder-volumes
        iscsi_protocol = iscsi
        iscsi_helper = tgtadm

     .. end

   * In the ``[DEFAULT]`` section, enable the LVM back end:

     .. path /etc/cinder/cinder.conf
     .. code-block:: ini

        [DEFAULT]
        # ...
        enabled_backends = lvm

     .. end

     .. note::

        Back-end names are arbitrary. As an example, this guide
        uses the name of the driver as the name of the back end.

   * In the ``[DEFAULT]`` section, configure the location of the
     Image service API:

     .. path /etc/cinder/cinder.conf
     .. code-block:: ini

        [DEFAULT]
        # ...
        glance_api_servers = http://controller:9292

     .. end

   * In the ``[oslo_concurrency]`` section, configure the lock path:

     .. path /etc/cinder/cinder.conf
     .. code-block:: ini

        [oslo_concurrency]
        # ...
        lock_path = /var/lib/cinder/tmp

     .. end

3. Create the ``/etc/tgt/conf.d/cinder.conf`` file
   with the following data:

   .. code-block:: shell

      include /var/lib/cinder/volumes/*

   .. end

Finalize installation
---------------------

* Start the Block Storage volume service including its dependencies
  and configure them to start when the system boots:

  .. code-block:: console

     # systemctl enable openstack-cinder-volume.service tgtd.service
     # systemctl start openstack-cinder-volume.service tgtd.service

  .. end
300 doc/install-guide/source/cinder-storage-install-rdo.rst (Normal file)
@@ -0,0 +1,300 @@
Install and configure a storage node
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
This section describes how to install and configure storage nodes
|
||||
for the Block Storage service. For simplicity, this configuration
|
||||
references one storage node with an empty local block storage device.
|
||||
The instructions use ``/dev/sdb``, but you can substitute a different
|
||||
value for your particular node.
|
||||
|
||||
The service provisions logical volumes on this device using the
|
||||
:term:`LVM <Logical Volume Manager (LVM)>` driver and provides them
|
||||
to instances via :term:`iSCSI <iSCSI Qualified Name (IQN)>` transport.
|
||||
You can follow these instructions with minor modifications to horizontally
|
||||
scale your environment with additional storage nodes.
|
||||
|
||||
Prerequisites
|
||||
-------------
|
||||
|
||||
Before you install and configure the Block Storage service on the
|
||||
storage node, you must prepare the storage device.
|
||||
|
||||
.. note::
|
||||
|
||||
Perform these steps on the storage node.
|
||||
|
||||
#. Install the supporting utility packages:
|
||||
|
||||
|
||||
|
||||
* Install the LVM packages:
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
# yum install lvm2
|
||||
|
||||
.. end
|
||||
|
||||
* Start the LVM metadata service and configure it to start when the
|
||||
system boots:
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
# systemctl enable lvm2-lvmetad.service
|
||||
# systemctl start lvm2-lvmetad.service
|
||||
|
||||
.. end
|
||||
|
||||
|
||||
|
||||
.. note::
|
||||
|
||||
Some distributions include LVM by default.
|
||||
|
||||
#. Create the LVM physical volume ``/dev/sdb``:
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
# pvcreate /dev/sdb
|
||||
|
||||
Physical volume "/dev/sdb" successfully created
|
||||
|
||||
.. end
|
||||
|
||||
#. Create the LVM volume group ``cinder-volumes``:

   .. code-block:: console

      # vgcreate cinder-volumes /dev/sdb

      Volume group "cinder-volumes" successfully created

   .. end

   The Block Storage service creates logical volumes in this volume group.

#. Only instances can access Block Storage volumes. However, the
   underlying operating system manages the devices associated with
   the volumes. By default, the LVM volume scanning tool scans the
   ``/dev`` directory for block storage devices that
   contain volumes. If projects use LVM on their volumes, the scanning
   tool detects these volumes and attempts to cache them, which can cause
   a variety of problems with both the underlying operating system
   and project volumes. You must reconfigure LVM to scan only the devices
   that contain the ``cinder-volumes`` volume group. Edit the
   ``/etc/lvm/lvm.conf`` file and complete the following actions:

   * In the ``devices`` section, add a filter that accepts the
     ``/dev/sdb`` device and rejects all other devices:

     .. path /etc/lvm/lvm.conf
     .. code-block:: none

        devices {
        ...
        filter = [ "a/sdb/", "r/.*/"]

     .. end

   Each item in the filter array begins with ``a`` for **accept** or
   ``r`` for **reject** and includes a regular expression for the
   device name. The array must end with ``r/.*/`` to reject any
   remaining devices. You can use the :command:`vgs -vvvv` command
   to test filters.

   .. warning::

      If your storage nodes use LVM on the operating system disk, you
      must also add the associated device to the filter. For example,
      if the ``/dev/sda`` device contains the operating system:

      .. ignore_path /etc/lvm/lvm.conf
      .. code-block:: ini

         filter = [ "a/sda/", "a/sdb/", "r/.*/"]

      .. end

      Similarly, if your compute nodes use LVM on the operating
      system disk, you must also modify the filter in the
      ``/etc/lvm/lvm.conf`` file on those nodes to include only
      the operating system disk. For example, if the ``/dev/sda``
      device contains the operating system:

      .. path /etc/lvm/lvm.conf
      .. code-block:: ini

         filter = [ "a/sda/", "r/.*/"]

      .. end
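The accept/reject semantics of the filter array can be modeled in a few lines of Python. This is a simplified sketch of the behavior described above (not LVM's actual implementation), assuming plain regular-expression matching on the device name:

```python
import re

def lvm_filter_accepts(device, filters):
    """Simplified model of an LVM device filter: the first pattern that
    matches decides; 'a' accepts, 'r' rejects; a device that matches no
    pattern is accepted, which is why the array should end with r/.*/."""
    for item in filters:
        action, pattern = item[0], item[2:-1]  # "a/sdb/" -> ("a", "sdb")
        if re.search(pattern, device):
            return action == "a"
    return True

# The filter recommended above: accept /dev/sdb, reject everything else.
print(lvm_filter_accepts("/dev/sdb", ["a/sdb/", "r/.*/"]))  # True
print(lvm_filter_accepts("/dev/sdc", ["a/sdb/", "r/.*/"]))  # False
```

On a real node, :command:`vgs -vvvv` remains the authoritative way to see which devices the configured filter actually admits.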
Install and configure components
--------------------------------

#. Install the packages:

   .. code-block:: console

      # yum install openstack-cinder targetcli python-keystone

   .. end

2. Edit the ``/etc/cinder/cinder.conf`` file
   and complete the following actions:

   * In the ``[database]`` section, configure database access:

     .. path /etc/cinder/cinder.conf
     .. code-block:: ini

        [database]
        # ...
        connection = mysql+pymysql://cinder:CINDER_DBPASS@controller/cinder

     .. end

     Replace ``CINDER_DBPASS`` with the password you chose for
     the Block Storage database.

   * In the ``[DEFAULT]`` section, configure ``RabbitMQ``
     message queue access:

     .. path /etc/cinder/cinder.conf
     .. code-block:: ini

        [DEFAULT]
        # ...
        transport_url = rabbit://openstack:RABBIT_PASS@controller

     .. end

     Replace ``RABBIT_PASS`` with the password you chose for
     the ``openstack`` account in ``RabbitMQ``.

   * In the ``[DEFAULT]`` and ``[keystone_authtoken]`` sections,
     configure Identity service access:

     .. path /etc/cinder/cinder.conf
     .. code-block:: ini

        [DEFAULT]
        # ...
        auth_strategy = keystone

        [keystone_authtoken]
        # ...
        auth_uri = http://controller:5000
        auth_url = http://controller:35357
        memcached_servers = controller:11211
        auth_type = password
        project_domain_name = default
        user_domain_name = default
        project_name = service
        username = cinder
        password = CINDER_PASS

     .. end

     Replace ``CINDER_PASS`` with the password you chose for the
     ``cinder`` user in the Identity service.

     .. note::

        Comment out or remove any other options in the
        ``[keystone_authtoken]`` section.

   * In the ``[DEFAULT]`` section, configure the ``my_ip`` option:

     .. path /etc/cinder/cinder.conf
     .. code-block:: ini

        [DEFAULT]
        # ...
        my_ip = MANAGEMENT_INTERFACE_IP_ADDRESS

     .. end

     Replace ``MANAGEMENT_INTERFACE_IP_ADDRESS`` with the IP address
     of the management network interface on your storage node,
     typically 10.0.0.41 for the first node in the
     :ref:`example architecture <overview-example-architectures>`.

   * In the ``[lvm]`` section, configure the LVM back end with the
     LVM driver, ``cinder-volumes`` volume group, iSCSI protocol,
     and appropriate iSCSI service. If the ``[lvm]`` section does not
     exist, create it:

     .. path /etc/cinder/cinder.conf
     .. code-block:: ini

        [lvm]
        volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
        volume_group = cinder-volumes
        iscsi_protocol = iscsi
        iscsi_helper = lioadm

     .. end

   * In the ``[DEFAULT]`` section, enable the LVM back end:

     .. path /etc/cinder/cinder.conf
     .. code-block:: ini

        [DEFAULT]
        # ...
        enabled_backends = lvm

     .. end

     .. note::

        Back-end names are arbitrary. As an example, this guide
        uses the name of the driver as the name of the back end.

   * In the ``[DEFAULT]`` section, configure the location of the
     Image service API:

     .. path /etc/cinder/cinder.conf
     .. code-block:: ini

        [DEFAULT]
        # ...
        glance_api_servers = http://controller:9292

     .. end

   * In the ``[oslo_concurrency]`` section, configure the lock path:

     .. path /etc/cinder/cinder.conf
     .. code-block:: ini

        [oslo_concurrency]
        # ...
        lock_path = /var/lib/cinder/tmp

     .. end
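The ``enabled_backends`` option and the ``[lvm]`` section configured above are linked by name: each entry in ``enabled_backends`` selects the configuration section with the same name. A short Python sketch of that lookup (a simplified model, not cinder's actual code):

```python
import configparser

# Assumed minimal excerpt mirroring the cinder.conf edits above.
sample = (
    "[DEFAULT]\n"
    "enabled_backends = lvm\n"
    "\n"
    "[lvm]\n"
    "volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver\n"
    "volume_group = cinder-volumes\n"
)

conf = configparser.ConfigParser()
conf.read_string(sample)

# Each back-end name maps to a same-named section holding its driver options.
backends = [name.strip() for name in conf["DEFAULT"]["enabled_backends"].split(",")]
drivers = {name: conf[name]["volume_driver"] for name in backends}
print(drivers)  # {'lvm': 'cinder.volume.drivers.lvm.LVMVolumeDriver'}
```

Adding a second back end later means extending ``enabled_backends`` with another name and creating a matching section for it.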
Finalize installation
---------------------

* Start the Block Storage volume service including its dependencies
  and configure them to start when the system boots:

  .. code-block:: console

     # systemctl enable openstack-cinder-volume.service target.service
     # systemctl start openstack-cinder-volume.service target.service

  .. end
287
doc/install-guide/source/cinder-storage-install-ubuntu.rst
Normal file
@ -0,0 +1,287 @@
Install and configure a storage node
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

This section describes how to install and configure storage nodes
for the Block Storage service. For simplicity, this configuration
references one storage node with an empty local block storage device.
The instructions use ``/dev/sdb``, but you can substitute a different
value for your particular node.

The service provisions logical volumes on this device using the
:term:`LVM <Logical Volume Manager (LVM)>` driver and provides them
to instances via :term:`iSCSI <iSCSI Qualified Name (IQN)>` transport.
You can follow these instructions with minor modifications to horizontally
scale your environment with additional storage nodes.

Prerequisites
-------------

Before you install and configure the Block Storage service on the
storage node, you must prepare the storage device.

.. note::

   Perform these steps on the storage node.

#. Install the supporting utility packages:

   .. code-block:: console

      # apt install lvm2

   .. end

   .. note::

      Some distributions include LVM by default.

#. Create the LVM physical volume ``/dev/sdb``:

   .. code-block:: console

      # pvcreate /dev/sdb

      Physical volume "/dev/sdb" successfully created

   .. end

#. Create the LVM volume group ``cinder-volumes``:

   .. code-block:: console

      # vgcreate cinder-volumes /dev/sdb

      Volume group "cinder-volumes" successfully created

   .. end

   The Block Storage service creates logical volumes in this volume group.

#. Only instances can access Block Storage volumes. However, the
   underlying operating system manages the devices associated with
   the volumes. By default, the LVM volume scanning tool scans the
   ``/dev`` directory for block storage devices that
   contain volumes. If projects use LVM on their volumes, the scanning
   tool detects these volumes and attempts to cache them, which can cause
   a variety of problems with both the underlying operating system
   and project volumes. You must reconfigure LVM to scan only the devices
   that contain the ``cinder-volumes`` volume group. Edit the
   ``/etc/lvm/lvm.conf`` file and complete the following actions:

   * In the ``devices`` section, add a filter that accepts the
     ``/dev/sdb`` device and rejects all other devices:

     .. path /etc/lvm/lvm.conf
     .. code-block:: none

        devices {
        ...
        filter = [ "a/sdb/", "r/.*/"]

     .. end

   Each item in the filter array begins with ``a`` for **accept** or
   ``r`` for **reject** and includes a regular expression for the
   device name. The array must end with ``r/.*/`` to reject any
   remaining devices. You can use the :command:`vgs -vvvv` command
   to test filters.

   .. warning::

      If your storage nodes use LVM on the operating system disk, you
      must also add the associated device to the filter. For example,
      if the ``/dev/sda`` device contains the operating system:

      .. ignore_path /etc/lvm/lvm.conf
      .. code-block:: ini

         filter = [ "a/sda/", "a/sdb/", "r/.*/"]

      .. end

      Similarly, if your compute nodes use LVM on the operating
      system disk, you must also modify the filter in the
      ``/etc/lvm/lvm.conf`` file on those nodes to include only
      the operating system disk. For example, if the ``/dev/sda``
      device contains the operating system:

      .. path /etc/lvm/lvm.conf
      .. code-block:: ini

         filter = [ "a/sda/", "r/.*/"]

      .. end

Install and configure components
--------------------------------

#. Install the packages:

   .. code-block:: console

      # apt install cinder-volume

   .. end

2. Edit the ``/etc/cinder/cinder.conf`` file
   and complete the following actions:

   * In the ``[database]`` section, configure database access:

     .. path /etc/cinder/cinder.conf
     .. code-block:: ini

        [database]
        # ...
        connection = mysql+pymysql://cinder:CINDER_DBPASS@controller/cinder

     .. end

     Replace ``CINDER_DBPASS`` with the password you chose for
     the Block Storage database.

   * In the ``[DEFAULT]`` section, configure ``RabbitMQ``
     message queue access:

     .. path /etc/cinder/cinder.conf
     .. code-block:: ini

        [DEFAULT]
        # ...
        transport_url = rabbit://openstack:RABBIT_PASS@controller

     .. end

     Replace ``RABBIT_PASS`` with the password you chose for
     the ``openstack`` account in ``RabbitMQ``.

   * In the ``[DEFAULT]`` and ``[keystone_authtoken]`` sections,
     configure Identity service access:

     .. path /etc/cinder/cinder.conf
     .. code-block:: ini

        [DEFAULT]
        # ...
        auth_strategy = keystone

        [keystone_authtoken]
        # ...
        auth_uri = http://controller:5000
        auth_url = http://controller:35357
        memcached_servers = controller:11211
        auth_type = password
        project_domain_name = default
        user_domain_name = default
        project_name = service
        username = cinder
        password = CINDER_PASS

     .. end

     Replace ``CINDER_PASS`` with the password you chose for the
     ``cinder`` user in the Identity service.

     .. note::

        Comment out or remove any other options in the
        ``[keystone_authtoken]`` section.

   * In the ``[DEFAULT]`` section, configure the ``my_ip`` option:

     .. path /etc/cinder/cinder.conf
     .. code-block:: ini

        [DEFAULT]
        # ...
        my_ip = MANAGEMENT_INTERFACE_IP_ADDRESS

     .. end

     Replace ``MANAGEMENT_INTERFACE_IP_ADDRESS`` with the IP address
     of the management network interface on your storage node,
     typically 10.0.0.41 for the first node in the
     :ref:`example architecture <overview-example-architectures>`.

   * In the ``[lvm]`` section, configure the LVM back end with the
     LVM driver, ``cinder-volumes`` volume group, iSCSI protocol,
     and appropriate iSCSI service:

     .. path /etc/cinder/cinder.conf
     .. code-block:: ini

        [lvm]
        # ...
        volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
        volume_group = cinder-volumes
        iscsi_protocol = iscsi
        iscsi_helper = tgtadm

     .. end

   * In the ``[DEFAULT]`` section, enable the LVM back end:

     .. path /etc/cinder/cinder.conf
     .. code-block:: ini

        [DEFAULT]
        # ...
        enabled_backends = lvm

     .. end

     .. note::

        Back-end names are arbitrary. As an example, this guide
        uses the name of the driver as the name of the back end.

   * In the ``[DEFAULT]`` section, configure the location of the
     Image service API:

     .. path /etc/cinder/cinder.conf
     .. code-block:: ini

        [DEFAULT]
        # ...
        glance_api_servers = http://controller:9292

     .. end

   * In the ``[oslo_concurrency]`` section, configure the lock path:

     .. path /etc/cinder/cinder.conf
     .. code-block:: ini

        [oslo_concurrency]
        # ...
        lock_path = /var/lib/cinder/tmp

     .. end
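The ``connection`` value set earlier is a standard URL, so its parts can be checked mechanically. A sketch using Python's standard library (the host and placeholders are the ones used in this guide):

```python
from urllib.parse import urlsplit

# SQLAlchemy-style database URL from the [database] section above.
url = urlsplit("mysql+pymysql://cinder:CINDER_DBPASS@controller/cinder")

parts = {
    "driver": url.scheme,              # "mysql+pymysql"
    "user": url.username,              # "cinder"
    "password": url.password,          # "CINDER_DBPASS"
    "host": url.hostname,              # "controller"
    "database": url.path.lstrip("/"),  # "cinder"
}
```

Running this kind of check against a hand-edited value catches a misplaced ``@`` or ``/`` before the service fails to start.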
Finalize installation
---------------------

#. Restart the Block Storage volume service including its dependencies:

   .. code-block:: console

      # service tgt restart
      # service cinder-volume restart

   .. end
@ -3,415 +3,7 @@
Install and configure a storage node
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

This section describes how to install and configure storage nodes
for the Block Storage service. For simplicity, this configuration
references one storage node with an empty local block storage device.
The instructions use ``/dev/sdb``, but you can substitute a different
value for your particular node.

.. toctree::
   :glob:

The service provisions logical volumes on this device using the
:term:`LVM <Logical Volume Manager (LVM)>` driver and provides them
to instances via :term:`iSCSI <iSCSI Qualified Name (IQN)>` transport.
You can follow these instructions with minor modifications to horizontally
scale your environment with additional storage nodes.

Prerequisites
-------------

Before you install and configure the Block Storage service on the
storage node, you must prepare the storage device.

.. note::

   Perform these steps on the storage node.

#. Install the supporting utility packages:

   .. only:: obs

      * Install the LVM packages:

        .. code-block:: console

           # zypper install lvm2

        .. end

      * (Optional) If you intend to use non-raw image types such as QCOW2
        and VMDK, install the QEMU package:

        .. code-block:: console

           # zypper install qemu

        .. end

   .. endonly

   .. only:: rdo

      * Install the LVM packages:

        .. code-block:: console

           # yum install lvm2

        .. end

      * Start the LVM metadata service and configure it to start when the
        system boots:

        .. code-block:: console

           # systemctl enable lvm2-lvmetad.service
           # systemctl start lvm2-lvmetad.service

        .. end

   .. endonly

   .. only:: ubuntu

      .. code-block:: console

         # apt install lvm2

      .. end

   .. endonly

   .. note::

      Some distributions include LVM by default.

#. Create the LVM physical volume ``/dev/sdb``:

   .. code-block:: console

      # pvcreate /dev/sdb

      Physical volume "/dev/sdb" successfully created

   .. end

#. Create the LVM volume group ``cinder-volumes``:

   .. code-block:: console

      # vgcreate cinder-volumes /dev/sdb

      Volume group "cinder-volumes" successfully created

   .. end

   The Block Storage service creates logical volumes in this volume group.

#. Only instances can access Block Storage volumes. However, the
   underlying operating system manages the devices associated with
   the volumes. By default, the LVM volume scanning tool scans the
   ``/dev`` directory for block storage devices that
   contain volumes. If projects use LVM on their volumes, the scanning
   tool detects these volumes and attempts to cache them, which can cause
   a variety of problems with both the underlying operating system
   and project volumes. You must reconfigure LVM to scan only the devices
   that contain the ``cinder-volumes`` volume group. Edit the
   ``/etc/lvm/lvm.conf`` file and complete the following actions:

   * In the ``devices`` section, add a filter that accepts the
     ``/dev/sdb`` device and rejects all other devices:

     .. path /etc/lvm/lvm.conf
     .. code-block:: none

        devices {
        ...
        filter = [ "a/sdb/", "r/.*/"]

     .. end

   Each item in the filter array begins with ``a`` for **accept** or
   ``r`` for **reject** and includes a regular expression for the
   device name. The array must end with ``r/.*/`` to reject any
   remaining devices. You can use the :command:`vgs -vvvv` command
   to test filters.

   .. warning::

      If your storage nodes use LVM on the operating system disk, you
      must also add the associated device to the filter. For example,
      if the ``/dev/sda`` device contains the operating system:

      .. ignore_path /etc/lvm/lvm.conf
      .. code-block:: ini

         filter = [ "a/sda/", "a/sdb/", "r/.*/"]

      .. end

      Similarly, if your compute nodes use LVM on the operating
      system disk, you must also modify the filter in the
      ``/etc/lvm/lvm.conf`` file on those nodes to include only
      the operating system disk. For example, if the ``/dev/sda``
      device contains the operating system:

      .. path /etc/lvm/lvm.conf
      .. code-block:: ini

         filter = [ "a/sda/", "r/.*/"]

      .. end

Install and configure components
--------------------------------

.. only:: obs

   #. Install the packages:

      .. code-block:: console

         # zypper install openstack-cinder-volume tgt

      .. end

.. endonly

.. only:: rdo

   #. Install the packages:

      .. code-block:: console

         # yum install openstack-cinder targetcli python-keystone

      .. end

.. endonly

.. only:: ubuntu or debian

   #. Install the packages:

      .. code-block:: console

         # apt install cinder-volume

      .. end

.. endonly

2. Edit the ``/etc/cinder/cinder.conf`` file
   and complete the following actions:

   * In the ``[database]`` section, configure database access:

     .. path /etc/cinder/cinder.conf
     .. code-block:: ini

        [database]
        # ...
        connection = mysql+pymysql://cinder:CINDER_DBPASS@controller/cinder

     .. end

     Replace ``CINDER_DBPASS`` with the password you chose for
     the Block Storage database.

   * In the ``[DEFAULT]`` section, configure ``RabbitMQ``
     message queue access:

     .. path /etc/cinder/cinder.conf
     .. code-block:: ini

        [DEFAULT]
        # ...
        transport_url = rabbit://openstack:RABBIT_PASS@controller

     .. end

     Replace ``RABBIT_PASS`` with the password you chose for
     the ``openstack`` account in ``RabbitMQ``.

   * In the ``[DEFAULT]`` and ``[keystone_authtoken]`` sections,
     configure Identity service access:

     .. path /etc/cinder/cinder.conf
     .. code-block:: ini

        [DEFAULT]
        # ...
        auth_strategy = keystone

        [keystone_authtoken]
        # ...
        auth_uri = http://controller:5000
        auth_url = http://controller:35357
        memcached_servers = controller:11211
        auth_type = password
        project_domain_name = default
        user_domain_name = default
        project_name = service
        username = cinder
        password = CINDER_PASS

     .. end

     Replace ``CINDER_PASS`` with the password you chose for the
     ``cinder`` user in the Identity service.

     .. note::

        Comment out or remove any other options in the
        ``[keystone_authtoken]`` section.

   * In the ``[DEFAULT]`` section, configure the ``my_ip`` option:

     .. path /etc/cinder/cinder.conf
     .. code-block:: ini

        [DEFAULT]
        # ...
        my_ip = MANAGEMENT_INTERFACE_IP_ADDRESS

     .. end

     Replace ``MANAGEMENT_INTERFACE_IP_ADDRESS`` with the IP address
     of the management network interface on your storage node,
     typically 10.0.0.41 for the first node in the
     :ref:`example architecture <overview-example-architectures>`.

   .. only:: obs or ubuntu

      * In the ``[lvm]`` section, configure the LVM back end with the
        LVM driver, ``cinder-volumes`` volume group, iSCSI protocol,
        and appropriate iSCSI service:

        .. path /etc/cinder/cinder.conf
        .. code-block:: ini

           [lvm]
           # ...
           volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
           volume_group = cinder-volumes
           iscsi_protocol = iscsi
           iscsi_helper = tgtadm

        .. end

   .. endonly

   .. only:: rdo

      * In the ``[lvm]`` section, configure the LVM back end with the
        LVM driver, ``cinder-volumes`` volume group, iSCSI protocol,
        and appropriate iSCSI service. If the ``[lvm]`` section does not
        exist, create it:

        .. path /etc/cinder/cinder.conf
        .. code-block:: ini

           [lvm]
           volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
           volume_group = cinder-volumes
           iscsi_protocol = iscsi
           iscsi_helper = lioadm

        .. end

   .. endonly

   * In the ``[DEFAULT]`` section, enable the LVM back end:

     .. path /etc/cinder/cinder.conf
     .. code-block:: ini

        [DEFAULT]
        # ...
        enabled_backends = lvm

     .. end

     .. note::

        Back-end names are arbitrary. As an example, this guide
        uses the name of the driver as the name of the back end.

   * In the ``[DEFAULT]`` section, configure the location of the
     Image service API:

     .. path /etc/cinder/cinder.conf
     .. code-block:: ini

        [DEFAULT]
        # ...
        glance_api_servers = http://controller:9292

     .. end

   * In the ``[oslo_concurrency]`` section, configure the lock path:

     .. path /etc/cinder/cinder.conf
     .. code-block:: ini

        [oslo_concurrency]
        # ...
        lock_path = /var/lib/cinder/tmp

     .. end

.. only:: obs

   3. Create the ``/etc/tgt/conf.d/cinder.conf`` file
      with the following data:

      .. code-block:: shell

         include /var/lib/cinder/volumes/*

      .. end

.. endonly

Finalize installation
---------------------

.. only:: obs

   * Start the Block Storage volume service including its dependencies
     and configure them to start when the system boots:

     .. code-block:: console

        # systemctl enable openstack-cinder-volume.service tgtd.service
        # systemctl start openstack-cinder-volume.service tgtd.service

     .. end

.. endonly

.. only:: rdo

   * Start the Block Storage volume service including its dependencies
     and configure them to start when the system boots:

     .. code-block:: console

        # systemctl enable openstack-cinder-volume.service target.service
        # systemctl start openstack-cinder-volume.service target.service

     .. end

.. endonly

.. only:: ubuntu or debian

   #. Restart the Block Storage volume service including its dependencies:

      .. code-block:: console

         # service tgt restart
         # service cinder-volume restart

      .. end

.. endonly

cinder-storage-install-*
76
doc/install-guide/source/environment-debian.rst
Normal file
@ -0,0 +1,76 @@
===========
Environment
===========

This section explains how to configure the controller node and one compute
node using the example architecture.

Although most environments include Identity, Image service, Compute, at least
one networking service, and the Dashboard, the Object Storage service can
operate independently. If your use case only involves Object Storage, you can
skip to the `Object Storage Installation Guide
<https://docs.openstack.org/project-install-guide/object-storage/draft/>`_
after configuring the appropriate nodes for it.

You must use an account with administrative privileges to configure each node.
Either run the commands as the ``root`` user or configure the ``sudo``
utility.

For best performance, we recommend that your environment meets or exceeds
the hardware requirements in :ref:`figure-hwreqs`.

The following minimum requirements should support a proof-of-concept
environment with core services and several :term:`CirrOS` instances:

* Controller Node: 1 processor, 4 GB memory, and 5 GB storage

* Compute Node: 1 processor, 2 GB memory, and 10 GB storage

As the number of OpenStack services and virtual machines increases, so do the
hardware requirements for the best performance. If performance degrades after
enabling additional services or virtual machines, consider adding hardware
resources to your environment.

To minimize clutter and provide more resources for OpenStack, we recommend
a minimal installation of your Linux distribution. Also, you must install a
64-bit version of your distribution on each node.

A single disk partition on each node works for most basic installations.
However, you should consider :term:`Logical Volume Manager (LVM)` for
installations with optional services such as Block Storage.

For first-time installation and testing purposes, many users choose to build
each host as a :term:`virtual machine (VM)`. The primary benefits of VMs
include the following:

* One physical server can support multiple nodes, each with almost any
  number of network interfaces.

* Ability to take periodic "snapshots" throughout the installation
  process and "roll back" to a working configuration in the event of a
  problem.

However, VMs will reduce the performance of your instances, particularly if
your hypervisor and/or processor lacks support for hardware acceleration
of nested VMs.

.. note::

   If you choose to install on VMs, make sure your hypervisor provides
   a way to disable MAC address filtering on the provider network
   interface.

For more information about system requirements, see the `OpenStack
Operations Guide <https://docs.openstack.org/ops-guide/>`_.

.. toctree::
   :maxdepth: 1

   environment-security.rst
   environment-networking.rst
   environment-ntp.rst
   environment-packages.rst
   environment-sql-database.rst
   environment-messaging.rst
   environment-memcached.rst
54
doc/install-guide/source/environment-memcached-debian.rst
Normal file
@ -0,0 +1,54 @@
Memcached
~~~~~~~~~

The Identity service authentication mechanism for services uses Memcached
to cache tokens. The memcached service typically runs on the controller
node. For production deployments, we recommend enabling a combination of
firewalling, authentication, and encryption to secure it.

Install and configure components
--------------------------------

#. Install the packages:

   .. code-block:: console

      # apt install memcached python-memcache

   .. end

2. Edit the ``/etc/memcached.conf`` file and configure the
   service to use the management IP address of the controller node.
   This is to enable access by other nodes via the management network:

   .. code-block:: none

      -l 10.0.0.11

   .. end

   .. note::

      Change the existing line that had ``-l 127.0.0.1``.
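``/etc/memcached.conf`` holds one command-line option per line, so the edit above can be sanity-checked with a few lines of Python (an illustrative sketch, not part of memcached):

```python
def listen_addresses(conf_text):
    """Collect the arguments of every -l (listen) option in
    memcached.conf-style text; memcached can listen on several."""
    return [line.strip()[2:].strip()
            for line in conf_text.splitlines()
            if line.strip().startswith("-l")]

# A fragment of the edited file, using the controller's management IP.
sample = "-d\n-m 64\n-p 11211\n-l 10.0.0.11\n"
print(listen_addresses(sample))  # ['10.0.0.11']
```

If the result still shows ``127.0.0.1``, other nodes will not be able to reach the service over the management network.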
Finalize installation
---------------------

* Restart the Memcached service:

  .. code-block:: console

     # service memcached restart

  .. end
59
doc/install-guide/source/environment-memcached-obs.rst
Normal file
@ -0,0 +1,59 @@
Memcached
~~~~~~~~~

The Identity service authentication mechanism for services uses Memcached
to cache tokens. The memcached service typically runs on the controller
node. For production deployments, we recommend enabling a combination of
firewalling, authentication, and encryption to secure it.

Install and configure components
--------------------------------

#. Install the packages:

   .. code-block:: console

      # zypper install memcached python-python-memcached

   .. end

2. Edit the ``/etc/sysconfig/memcached`` file and complete the
   following actions:

   * Configure the service to use the management IP address of the
     controller node. This is to enable access by other nodes via
     the management network:

     .. code-block:: none

        MEMCACHED_PARAMS="-l 10.0.0.11"

     .. end

     .. note::

        Change the existing line ``MEMCACHED_PARAMS="-l 127.0.0.1,::1"``.

Finalize installation
---------------------

* Start the Memcached service and configure it to start when the system
  boots:

  .. code-block:: console

     # systemctl enable memcached.service
     # systemctl start memcached.service

  .. end
59
doc/install-guide/source/environment-memcached-rdo.rst
Normal file
@ -0,0 +1,59 @@
Memcached
|
||||
~~~~~~~~~
|
||||
|
||||
The Identity service authentication mechanism for services uses Memcached
|
||||
to cache tokens. The memcached service typically runs on the controller
|
||||
node. For production deployments, we recommend enabling a combination of
|
||||
firewalling, authentication, and encryption to secure it.
|
||||
|
||||
Install and configure components
|
||||
--------------------------------
|
||||
|
||||
#. Install the packages:
|
||||
|
||||
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
# yum install memcached python-memcached
|
||||
|
||||
.. end
|
||||
|
||||
|
||||
|
||||
|
||||
|
||||
2. Edit the ``/etc/sysconfig/memcached`` file and complete the
|
||||
following actions:
|
||||
|
||||
* Configure the service to use the management IP address of the
|
||||
controller node. This is to enable access by other nodes via
|
||||
the management network:
|
||||
|
||||
.. code-block:: none
|
||||
|
||||
OPTIONS="-l 127.0.0.1,::1,controller"
|
||||
|
||||
.. end
|
||||
|
||||
.. note::
|
||||
|
||||
Change the existing line ``OPTIONS="-l 127.0.0.1,::1"``.
|
||||
|
||||
|
||||
|
||||
Finalize installation
|
||||
---------------------
|
||||
|
||||
|
||||
|
||||
* Start the Memcached service and configure it to start when the system
|
||||
boots:
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
# systemctl enable memcached.service
|
||||
# systemctl start memcached.service
|
||||
|
||||
.. end
|
||||
|
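The ``OPTIONS`` value above is a comma-separated list of listen addresses
passed to memcached's ``-l`` flag. As a quick sanity check, a small
hypothetical helper (not part of the guide) can parse such a line and confirm
that the ``controller`` name is present alongside the loopback addresses:

```python
def parse_listen_addresses(options_line):
    """Parse a line like OPTIONS="-l 127.0.0.1,::1,controller" and
    return the list of addresses given to memcached's -l flag."""
    value = options_line.split("=", 1)[1].strip('"')  # -l 127.0.0.1,::1,controller
    flag, addresses = value.split(" ", 1)             # separate the -l flag
    assert flag == "-l"
    return addresses.split(",")

addrs = parse_listen_addresses('OPTIONS="-l 127.0.0.1,::1,controller"')
print(addrs)  # ['127.0.0.1', '::1', 'controller']
```

If ``controller`` is missing from the parsed list, other nodes on the
management network cannot reach the cache.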
54
doc/install-guide/source/environment-memcached-ubuntu.rst
Normal file
@ -0,0 +1,54 @@
Memcached
~~~~~~~~~

The Identity service authentication mechanism for services uses Memcached
to cache tokens. The memcached service typically runs on the controller
node. For production deployments, we recommend enabling a combination of
firewalling, authentication, and encryption to secure it.

Install and configure components
--------------------------------

#. Install the packages:

   .. code-block:: console

      # apt install memcached python-memcache

   .. end

2. Edit the ``/etc/memcached.conf`` file and configure the
   service to use the management IP address of the controller node.
   This is to enable access by other nodes via the management network:

   .. code-block:: none

      -l 10.0.0.11

   .. end

   .. note::

      Change the existing line that had ``-l 127.0.0.1``.

Finalize installation
---------------------

* Restart the Memcached service:

  .. code-block:: console

     # service memcached restart

  .. end
@ -6,126 +6,7 @@ to cache tokens. The memcached service typically runs on the controller
node. For production deployments, we recommend enabling a combination of
firewalling, authentication, and encryption to secure it.

Install and configure components
--------------------------------

.. toctree::
   :glob:

#. Install the packages:

.. only:: ubuntu or debian

   .. code-block:: console

      # apt install memcached python-memcache

   .. end

.. endonly

.. only:: rdo

   .. code-block:: console

      # yum install memcached python-memcached

   .. end

.. endonly

.. only:: obs

   .. code-block:: console

      # zypper install memcached python-python-memcached

   .. end

.. endonly

.. only:: ubuntu or debian

   2. Edit the ``/etc/memcached.conf`` file and configure the
      service to use the management IP address of the controller node.
      This is to enable access by other nodes via the management network:

      .. code-block:: none

         -l 10.0.0.11

      .. end

      .. note::

         Change the existing line that had ``-l 127.0.0.1``.

.. endonly

.. only:: rdo

   2. Edit the ``/etc/sysconfig/memcached`` file and complete the
      following actions:

      * Configure the service to use the management IP address of the
        controller node. This is to enable access by other nodes via
        the management network:

        .. code-block:: none

           OPTIONS="-l 127.0.0.1,::1,controller"

        .. end

        .. note::

           Change the existing line ``OPTIONS="-l 127.0.0.1,::1"``.

.. endonly

.. only:: obs

   2. Edit the ``/etc/sysconfig/memcached`` file and complete the
      following actions:

      * Configure the service to use the management IP address of the
        controller node. This is to enable access by other nodes via
        the management network:

        .. code-block:: none

           MEMCACHED_PARAMS="-l 127.0.0.1"

        .. end

        .. note::

           Change the existing line ``MEMCACHED_PARAMS="-l 127.0.0.1,::1"``.

.. endonly

Finalize installation
---------------------

.. only:: ubuntu or debian

   * Restart the Memcached service:

     .. code-block:: console

        # service memcached restart

     .. end

.. endonly

.. only:: rdo or obs

   * Start the Memcached service and configure it to start when the system
     boots:

     .. code-block:: console

        # systemctl enable memcached.service
        # systemctl start memcached.service

     .. end

.. endonly

   environment-memcached-*
56
doc/install-guide/source/environment-messaging-debian.rst
Normal file
@ -0,0 +1,56 @@
Message queue
~~~~~~~~~~~~~

OpenStack uses a :term:`message queue` to coordinate operations and
status information among services. The message queue service typically
runs on the controller node. OpenStack supports several message queue
services including `RabbitMQ <https://www.rabbitmq.com>`__,
`Qpid <https://qpid.apache.org>`__, and `ZeroMQ <http://zeromq.org>`__.
However, most distributions that package OpenStack support a particular
message queue service. This guide implements the RabbitMQ message queue
service because most distributions support it. If you prefer to
implement a different message queue service, consult the documentation
associated with it.

The message queue runs on the controller node.

Install and configure components
--------------------------------

1. Install the package:

   .. code-block:: console

      # apt install rabbitmq-server

   .. end

2. Add the ``openstack`` user:

   .. code-block:: console

      # rabbitmqctl add_user openstack RABBIT_PASS
      Creating user "openstack" ...

   .. end

   Replace ``RABBIT_PASS`` with a suitable password.

3. Permit configuration, write, and read access for the
   ``openstack`` user:

   .. code-block:: console

      # rabbitmqctl set_permissions openstack ".*" ".*" ".*"
      Setting permissions for user "openstack" in vhost "/" ...

   .. end
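The three ``".*"`` arguments to ``rabbitmqctl set_permissions`` are regular
expressions matched against resource (exchange and queue) names, granting
configure, write, and read access respectively. A minimal sketch of that
semantics, purely for illustration:

```python
import re

# Each permission is a regex over resource names; ".*" matches everything,
# so the openstack user can configure, write, and read any resource.
configure_re = write_re = read_re = re.compile(r".*")

resources = ["nova", "neutron", "amq.topic"]
granted = [r for r in resources
           if configure_re.fullmatch(r)
           and write_re.fullmatch(r)
           and read_re.fullmatch(r)]
print(granted)  # ['nova', 'neutron', 'amq.topic']
```

A narrower pattern (for example ``"^nova.*"``) would restrict the user to
matching resources only.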
66
doc/install-guide/source/environment-messaging-obs.rst
Normal file
@ -0,0 +1,66 @@
Message queue
~~~~~~~~~~~~~

OpenStack uses a :term:`message queue` to coordinate operations and
status information among services. The message queue service typically
runs on the controller node. OpenStack supports several message queue
services including `RabbitMQ <https://www.rabbitmq.com>`__,
`Qpid <https://qpid.apache.org>`__, and `ZeroMQ <http://zeromq.org>`__.
However, most distributions that package OpenStack support a particular
message queue service. This guide implements the RabbitMQ message queue
service because most distributions support it. If you prefer to
implement a different message queue service, consult the documentation
associated with it.

The message queue runs on the controller node.

Install and configure components
--------------------------------

1. Install the package:

   .. code-block:: console

      # zypper install rabbitmq-server

   .. end

2. Start the message queue service and configure it to start when the
   system boots:

   .. code-block:: console

      # systemctl enable rabbitmq-server.service
      # systemctl start rabbitmq-server.service

   .. end

3. Add the ``openstack`` user:

   .. code-block:: console

      # rabbitmqctl add_user openstack RABBIT_PASS
      Creating user "openstack" ...

   .. end

   Replace ``RABBIT_PASS`` with a suitable password.

4. Permit configuration, write, and read access for the
   ``openstack`` user:

   .. code-block:: console

      # rabbitmqctl set_permissions openstack ".*" ".*" ".*"
      Setting permissions for user "openstack" in vhost "/" ...

   .. end
66
doc/install-guide/source/environment-messaging-rdo.rst
Normal file
@ -0,0 +1,66 @@
Message queue
~~~~~~~~~~~~~

OpenStack uses a :term:`message queue` to coordinate operations and
status information among services. The message queue service typically
runs on the controller node. OpenStack supports several message queue
services including `RabbitMQ <https://www.rabbitmq.com>`__,
`Qpid <https://qpid.apache.org>`__, and `ZeroMQ <http://zeromq.org>`__.
However, most distributions that package OpenStack support a particular
message queue service. This guide implements the RabbitMQ message queue
service because most distributions support it. If you prefer to
implement a different message queue service, consult the documentation
associated with it.

The message queue runs on the controller node.

Install and configure components
--------------------------------

1. Install the package:

   .. code-block:: console

      # yum install rabbitmq-server

   .. end

2. Start the message queue service and configure it to start when the
   system boots:

   .. code-block:: console

      # systemctl enable rabbitmq-server.service
      # systemctl start rabbitmq-server.service

   .. end

3. Add the ``openstack`` user:

   .. code-block:: console

      # rabbitmqctl add_user openstack RABBIT_PASS
      Creating user "openstack" ...

   .. end

   Replace ``RABBIT_PASS`` with a suitable password.

4. Permit configuration, write, and read access for the
   ``openstack`` user:

   .. code-block:: console

      # rabbitmqctl set_permissions openstack ".*" ".*" ".*"
      Setting permissions for user "openstack" in vhost "/" ...

   .. end
56
doc/install-guide/source/environment-messaging-ubuntu.rst
Normal file
@ -0,0 +1,56 @@
Message queue
~~~~~~~~~~~~~

OpenStack uses a :term:`message queue` to coordinate operations and
status information among services. The message queue service typically
runs on the controller node. OpenStack supports several message queue
services including `RabbitMQ <https://www.rabbitmq.com>`__,
`Qpid <https://qpid.apache.org>`__, and `ZeroMQ <http://zeromq.org>`__.
However, most distributions that package OpenStack support a particular
message queue service. This guide implements the RabbitMQ message queue
service because most distributions support it. If you prefer to
implement a different message queue service, consult the documentation
associated with it.

The message queue runs on the controller node.

Install and configure components
--------------------------------

1. Install the package:

   .. code-block:: console

      # apt install rabbitmq-server

   .. end

2. Add the ``openstack`` user:

   .. code-block:: console

      # rabbitmqctl add_user openstack RABBIT_PASS
      Creating user "openstack" ...

   .. end

   Replace ``RABBIT_PASS`` with a suitable password.

3. Permit configuration, write, and read access for the
   ``openstack`` user:

   .. code-block:: console

      # rabbitmqctl set_permissions openstack ".*" ".*" ".*"
      Setting permissions for user "openstack" in vhost "/" ...

   .. end
@ -14,101 +14,7 @@ associated with it.

The message queue runs on the controller node.

Install and configure components
--------------------------------

.. toctree::
   :glob:

1. Install the package:

.. only:: ubuntu or debian

   .. code-block:: console

      # apt install rabbitmq-server

   .. end

.. endonly

.. only:: rdo

   .. code-block:: console

      # yum install rabbitmq-server

   .. end

.. endonly

.. only:: obs

   .. code-block:: console

      # zypper install rabbitmq-server

   .. end

.. endonly

.. only:: rdo or obs

   2. Start the message queue service and configure it to start when the
      system boots:

      .. code-block:: console

         # systemctl enable rabbitmq-server.service
         # systemctl start rabbitmq-server.service

      .. end

   3. Add the ``openstack`` user:

      .. code-block:: console

         # rabbitmqctl add_user openstack RABBIT_PASS
         Creating user "openstack" ...

      .. end

      Replace ``RABBIT_PASS`` with a suitable password.

   4. Permit configuration, write, and read access for the
      ``openstack`` user:

      .. code-block:: console

         # rabbitmqctl set_permissions openstack ".*" ".*" ".*"
         Setting permissions for user "openstack" in vhost "/" ...

      .. end

.. endonly

.. only:: ubuntu or debian

   2. Add the ``openstack`` user:

      .. code-block:: console

         # rabbitmqctl add_user openstack RABBIT_PASS
         Creating user "openstack" ...

      .. end

      Replace ``RABBIT_PASS`` with a suitable password.

   3. Permit configuration, write, and read access for the
      ``openstack`` user:

      .. code-block:: console

         # rabbitmqctl set_permissions openstack ".*" ".*" ".*"
         Setting permissions for user "openstack" in vhost "/" ...

      .. end

.. endonly

   environment-messaging-*
@ -0,0 +1,50 @@
Compute node
~~~~~~~~~~~~

Configure network interfaces
----------------------------

#. Configure the first interface as the management interface:

   IP address: 10.0.0.31

   Network mask: 255.255.255.0 (or /24)

   Default gateway: 10.0.0.1

   .. note::

      Additional compute nodes should use 10.0.0.32, 10.0.0.33, and so on.

#. The provider interface uses a special configuration without an IP
   address assigned to it. Configure the second interface as the provider
   interface:

   Replace ``INTERFACE_NAME`` with the actual interface name. For example,
   *eth1* or *ens224*.

   * Edit the ``/etc/network/interfaces`` file to contain the following:

     .. path /etc/network/interfaces
     .. code-block:: bash

        # The provider network interface
        auto INTERFACE_NAME
        iface INTERFACE_NAME inet manual
        up ip link set dev $IFACE up
        down ip link set dev $IFACE down

     .. end

#. Reboot the system to activate the changes.

Configure name resolution
-------------------------

#. Set the hostname of the node to ``compute1``.

#. .. include:: shared/edit_hosts_file.txt
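The management addresses above all sit in the example 10.0.0.0/24 network.
A short sketch with Python's standard ``ipaddress`` module (illustration
only, not part of the guide) confirms the mask/prefix equivalence and shows
how the additional compute-node addresses continue the sequence:

```python
import ipaddress

# Example-architecture management network: 10.0.0.0/24,
# with the first compute node at 10.0.0.31.
mgmt = ipaddress.ip_network("10.0.0.0/24")
compute1 = ipaddress.ip_interface("10.0.0.31/24")

# /24 and 255.255.255.0 describe the same mask.
assert compute1.network == mgmt
assert str(compute1.netmask) == "255.255.255.0"

# Additional compute nodes continue at 10.0.0.32, 10.0.0.33, ...
extra_nodes = [str(compute1.ip + i) for i in range(1, 3)]
print(extra_nodes)  # ['10.0.0.32', '10.0.0.33']
```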
@ -0,0 +1,48 @@
Compute node
~~~~~~~~~~~~

Configure network interfaces
----------------------------

#. Configure the first interface as the management interface:

   IP address: 10.0.0.31

   Network mask: 255.255.255.0 (or /24)

   Default gateway: 10.0.0.1

   .. note::

      Additional compute nodes should use 10.0.0.32, 10.0.0.33, and so on.

#. The provider interface uses a special configuration without an IP
   address assigned to it. Configure the second interface as the provider
   interface:

   Replace ``INTERFACE_NAME`` with the actual interface name. For example,
   *eth1* or *ens224*.

   * Edit the ``/etc/sysconfig/network/ifcfg-INTERFACE_NAME`` file to
     contain the following:

     .. path /etc/sysconfig/network/ifcfg-INTERFACE_NAME
     .. code-block:: bash

        STARTMODE='auto'
        BOOTPROTO='static'

     .. end

#. Reboot the system to activate the changes.

Configure name resolution
-------------------------

#. Set the hostname of the node to ``compute1``.

#. .. include:: shared/edit_hosts_file.txt
@ -0,0 +1,52 @@
Compute node
~~~~~~~~~~~~

Configure network interfaces
----------------------------

#. Configure the first interface as the management interface:

   IP address: 10.0.0.31

   Network mask: 255.255.255.0 (or /24)

   Default gateway: 10.0.0.1

   .. note::

      Additional compute nodes should use 10.0.0.32, 10.0.0.33, and so on.

#. The provider interface uses a special configuration without an IP
   address assigned to it. Configure the second interface as the provider
   interface:

   Replace ``INTERFACE_NAME`` with the actual interface name. For example,
   *eth1* or *ens224*.

   * Edit the ``/etc/sysconfig/network-scripts/ifcfg-INTERFACE_NAME`` file
     to contain the following:

     Do not change the ``HWADDR`` and ``UUID`` keys.

     .. path /etc/sysconfig/network-scripts/ifcfg-INTERFACE_NAME
     .. code-block:: bash

        DEVICE=INTERFACE_NAME
        TYPE=Ethernet
        ONBOOT="yes"
        BOOTPROTO="none"

     .. end

#. Reboot the system to activate the changes.

Configure name resolution
-------------------------

#. Set the hostname of the node to ``compute1``.

#. .. include:: shared/edit_hosts_file.txt
@ -0,0 +1,50 @@
Compute node
~~~~~~~~~~~~

Configure network interfaces
----------------------------

#. Configure the first interface as the management interface:

   IP address: 10.0.0.31

   Network mask: 255.255.255.0 (or /24)

   Default gateway: 10.0.0.1

   .. note::

      Additional compute nodes should use 10.0.0.32, 10.0.0.33, and so on.

#. The provider interface uses a special configuration without an IP
   address assigned to it. Configure the second interface as the provider
   interface:

   Replace ``INTERFACE_NAME`` with the actual interface name. For example,
   *eth1* or *ens224*.

   * Edit the ``/etc/network/interfaces`` file to contain the following:

     .. path /etc/network/interfaces
     .. code-block:: bash

        # The provider network interface
        auto INTERFACE_NAME
        iface INTERFACE_NAME inet manual
        up ip link set dev $IFACE up
        down ip link set dev $IFACE down

     .. end

#. Reboot the system to activate the changes.

Configure name resolution
-------------------------

#. Set the hostname of the node to ``compute1``.

#. .. include:: shared/edit_hosts_file.txt
@ -1,84 +1,7 @@
Compute node
~~~~~~~~~~~~

Configure network interfaces
----------------------------

.. toctree::
   :glob:

#. Configure the first interface as the management interface:

   IP address: 10.0.0.31

   Network mask: 255.255.255.0 (or /24)

   Default gateway: 10.0.0.1

   .. note::

      Additional compute nodes should use 10.0.0.32, 10.0.0.33, and so on.

#. The provider interface uses a special configuration without an IP
   address assigned to it. Configure the second interface as the provider
   interface:

   Replace ``INTERFACE_NAME`` with the actual interface name. For example,
   *eth1* or *ens224*.

.. only:: ubuntu or debian

   * Edit the ``/etc/network/interfaces`` file to contain the following:

     .. path /etc/network/interfaces
     .. code-block:: bash

        # The provider network interface
        auto INTERFACE_NAME
        iface INTERFACE_NAME inet manual
        up ip link set dev $IFACE up
        down ip link set dev $IFACE down

     .. end

.. endonly

.. only:: rdo

   * Edit the ``/etc/sysconfig/network-scripts/ifcfg-INTERFACE_NAME`` file
     to contain the following:

     Do not change the ``HWADDR`` and ``UUID`` keys.

     .. path /etc/sysconfig/network-scripts/ifcfg-INTERFACE_NAME
     .. code-block:: bash

        DEVICE=INTERFACE_NAME
        TYPE=Ethernet
        ONBOOT="yes"
        BOOTPROTO="none"

     .. end

.. endonly

.. only:: obs

   * Edit the ``/etc/sysconfig/network/ifcfg-INTERFACE_NAME`` file to
     contain the following:

     .. path /etc/sysconfig/network/ifcfg-INTERFACE_NAME
     .. code-block:: bash

        STARTMODE='auto'
        BOOTPROTO='static'

     .. end

.. endonly

#. Reboot the system to activate the changes.

Configure name resolution
-------------------------

#. Set the hostname of the node to ``compute1``.

#. .. include:: shared/edit_hosts_file.txt

   environment-networking-compute-*
@ -0,0 +1,46 @@
Controller node
~~~~~~~~~~~~~~~

Configure network interfaces
----------------------------

#. Configure the first interface as the management interface:

   IP address: 10.0.0.11

   Network mask: 255.255.255.0 (or /24)

   Default gateway: 10.0.0.1

#. The provider interface uses a special configuration without an IP
   address assigned to it. Configure the second interface as the provider
   interface:

   Replace ``INTERFACE_NAME`` with the actual interface name. For example,
   *eth1* or *ens224*.

   * Edit the ``/etc/network/interfaces`` file to contain the following:

     .. path /etc/network/interfaces
     .. code-block:: bash

        # The provider network interface
        auto INTERFACE_NAME
        iface INTERFACE_NAME inet manual
        up ip link set dev $IFACE up
        down ip link set dev $IFACE down

     .. end

#. Reboot the system to activate the changes.

Configure name resolution
-------------------------

#. Set the hostname of the node to ``controller``.

#. .. include:: shared/edit_hosts_file.txt
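The authoritative host entries come from ``shared/edit_hosts_file.txt``; as a
rough, hypothetical sketch of the kind of ``/etc/hosts`` mapping the guide
sets up (names and addresses from the example architecture):

```python
# Hypothetical rendering of example-architecture host entries; the
# authoritative list lives in shared/edit_hosts_file.txt.
hosts = {
    "controller": "10.0.0.11",
    "compute1": "10.0.0.31",
}

# /etc/hosts format: "ADDRESS NAME", one entry per line.
lines = ["{} {}".format(ip, name) for name, ip in hosts.items()]
print("\n".join(lines))
```

With entries like these in place, each node can resolve the other nodes'
management addresses by name.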
@ -0,0 +1,44 @@
Controller node
~~~~~~~~~~~~~~~

Configure network interfaces
----------------------------

#. Configure the first interface as the management interface:

   IP address: 10.0.0.11

   Network mask: 255.255.255.0 (or /24)

   Default gateway: 10.0.0.1

#. The provider interface uses a special configuration without an IP
   address assigned to it. Configure the second interface as the provider
   interface:

   Replace ``INTERFACE_NAME`` with the actual interface name. For example,
   *eth1* or *ens224*.

   * Edit the ``/etc/sysconfig/network/ifcfg-INTERFACE_NAME`` file to
     contain the following:

     .. path /etc/sysconfig/network/ifcfg-INTERFACE_NAME
     .. code-block:: ini

        STARTMODE='auto'
        BOOTPROTO='static'

     .. end

#. Reboot the system to activate the changes.

Configure name resolution
-------------------------

#. Set the hostname of the node to ``controller``.

#. .. include:: shared/edit_hosts_file.txt
|
@ -0,0 +1,48 @@
|
||||
Controller node
|
||||
~~~~~~~~~~~~~~~
|
||||
|
||||
Configure network interfaces
|
||||
----------------------------
|
||||
|
||||
#. Configure the first interface as the management interface:
|
||||
|
||||
IP address: 10.0.0.11
|
||||
|
||||
Network mask: 255.255.255.0 (or /24)
|
||||
|
||||
Default gateway: 10.0.0.1
|
||||
|
||||
#. The provider interface uses a special configuration without an IP
|
||||
address assigned to it. Configure the second interface as the provider
|
||||
interface:
|
||||
|
||||
Replace ``INTERFACE_NAME`` with the actual interface name. For example,
|
||||
*eth1* or *ens224*.
|
||||
|
||||
|
||||
|
||||
* Edit the ``/etc/sysconfig/network-scripts/ifcfg-INTERFACE_NAME`` file
|
||||
to contain the following:
|
||||
|
||||
Do not change the ``HWADDR`` and ``UUID`` keys.
|
||||
|
||||
.. path /etc/sysconfig/network-scripts/ifcfg-INTERFACE_NAME
|
||||
.. code-block:: ini
|
||||
|
||||
DEVICE=INTERFACE_NAME
|
||||
TYPE=Ethernet
|
||||
ONBOOT="yes"
|
||||
BOOTPROTO="none"
|
||||
|
||||
.. end
|
||||
|
||||
|
||||
|
||||
#. Reboot the system to activate the changes.
|
||||
|
||||
Configure name resolution
|
||||
-------------------------
|
||||
|
||||
#. Set the hostname of the node to ``controller``.
|
||||
|
||||
#. .. include:: shared/edit_hosts_file.txt
|
@ -0,0 +1,46 @@
|
||||
Controller node
|
||||
~~~~~~~~~~~~~~~
|
||||
|
||||
Configure network interfaces
|
||||
----------------------------
|
||||
|
||||
#. Configure the first interface as the management interface:
|
||||
|
||||
IP address: 10.0.0.11
|
||||
|
||||
Network mask: 255.255.255.0 (or /24)
|
||||
|
||||
Default gateway: 10.0.0.1
|
||||
|
||||
#. The provider interface uses a special configuration without an IP
|
||||
address assigned to it. Configure the second interface as the provider
|
||||
interface:
|
||||
|
||||
Replace ``INTERFACE_NAME`` with the actual interface name. For example,
|
||||
*eth1* or *ens224*.
|
||||
|
||||
|
||||
* Edit the ``/etc/network/interfaces`` file to contain the following:
|
||||
|
||||
.. path /etc/network/interfaces
|
||||
.. code-block:: bash
|
||||
|
||||
# The provider network interface
|
||||
auto INTERFACE_NAME
|
||||
iface INTERFACE_NAME inet manual
|
||||
up ip link set dev $IFACE up
|
||||
down ip link set dev $IFACE down
|
||||
|
||||
.. end
|
||||
|
||||
|
||||
|
||||
|
||||
#. Reboot the system to activate the changes.
|
||||
|
||||
Configure name resolution
|
||||
-------------------------
|
||||
|
||||
#. Set the hostname of the node to ``controller``.
|
||||
|
||||
#. .. include:: shared/edit_hosts_file.txt
|
@ -1,80 +1,7 @@
|
||||
Controller node
|
||||
~~~~~~~~~~~~~~~
|
||||
|
||||
Configure network interfaces
|
||||
----------------------------
|
||||
.. toctree::
|
||||
:glob:
|
||||
|
||||
#. Configure the first interface as the management interface:
|
||||
|
||||
IP address: 10.0.0.11
|
||||
|
||||
Network mask: 255.255.255.0 (or /24)
|
||||
|
||||
Default gateway: 10.0.0.1
|
||||
|
||||
#. The provider interface uses a special configuration without an IP
|
||||
address assigned to it. Configure the second interface as the provider
|
||||
interface:
|
||||
|
||||
Replace ``INTERFACE_NAME`` with the actual interface name. For example,
|
||||
*eth1* or *ens224*.
|
||||
|
||||
.. only:: ubuntu or debian
|
||||
|
||||
* Edit the ``/etc/network/interfaces`` file to contain the following:
|
||||
|
||||
.. path /etc/network/interfaces
|
||||
.. code-block:: bash
|
||||
|
||||
# The provider network interface
|
||||
auto INTERFACE_NAME
|
||||
iface INTERFACE_NAME inet manual
|
||||
up ip link set dev $IFACE up
|
||||
down ip link set dev $IFACE down
|
||||
|
||||
.. end
|
||||
|
||||
.. endonly
|
||||
|
||||
.. only:: rdo
|
||||
|
||||
* Edit the ``/etc/sysconfig/network-scripts/ifcfg-INTERFACE_NAME`` file
|
||||
to contain the following:
|
||||
|
||||
Do not change the ``HWADDR`` and ``UUID`` keys.
|
||||
|
||||
.. path /etc/sysconfig/network-scripts/ifcfg-INTERFACE_NAME
|
||||
.. code-block:: ini
|
||||
|
||||
DEVICE=INTERFACE_NAME
|
||||
TYPE=Ethernet
|
||||
ONBOOT="yes"
|
||||
BOOTPROTO="none"
|
||||
|
||||
.. end
|
||||
|
||||
.. endonly
|
||||
|
||||
.. only:: obs
|
||||
|
||||
* Edit the ``/etc/sysconfig/network/ifcfg-INTERFACE_NAME`` file to
|
||||
contain the following:
|
||||
|
||||
.. path /etc/sysconfig/network/ifcfg-INTERFACE_NAME
|
||||
.. code-block:: ini
|
||||
|
||||
STARTMODE='auto'
|
||||
BOOTPROTO='static'
|
||||
|
||||
.. end
|
||||
|
||||
.. endonly
|
||||
|
||||
#. Reboot the system to activate the changes.
|
||||
|
||||
Configure name resolution
|
||||
-------------------------
|
||||
|
||||
#. Set the hostname of the node to ``controller``.
|
||||
|
||||
#. .. include:: shared/edit_hosts_file.txt
|
||||
environment-networking-controller-*
|
||||
|
91
doc/install-guide/source/environment-networking-debian.rst
Normal file
@ -0,0 +1,91 @@

Host networking
~~~~~~~~~~~~~~~

After installing the operating system on each node for the architecture
that you choose to deploy, you must configure the network interfaces. We
recommend that you disable any automated network management tools and
manually edit the appropriate configuration files for your distribution.
For more information on how to configure networking on your
distribution, see the `documentation
<https://wiki.debian.org/NetworkConfiguration>`__.

All nodes require Internet access for administrative purposes such as package
installation, security updates, :term:`DNS <Domain Name System (DNS)>`, and
:term:`NTP <Network Time Protocol (NTP)>`. In most cases, nodes should obtain
Internet access through the management network interface.
To highlight the importance of network separation, the example architectures
use `private address space <https://tools.ietf.org/html/rfc1918>`__ for the
management network and assume that the physical network infrastructure
provides Internet access via :term:`NAT <Network Address Translation (NAT)>`
or other methods. The example architectures use routable IP address space for
the provider (external) network and assume that the physical network
infrastructure provides direct Internet access.

In the provider networks architecture, all instances attach directly
to the provider network. In the self-service (private) networks architecture,
instances can attach to a self-service or provider network. Self-service
networks can reside entirely within OpenStack or provide some level of external
network access using :term:`NAT <Network Address Translation (NAT)>` through
the provider network.

.. _figure-networklayout:

.. figure:: figures/networklayout.png
   :alt: Network layout

The example architectures assume use of the following networks:

* Management on 10.0.0.0/24 with gateway 10.0.0.1

  This network requires a gateway to provide Internet access to all
  nodes for administrative purposes such as package installation,
  security updates, :term:`DNS <Domain Name System (DNS)>`, and
  :term:`NTP <Network Time Protocol (NTP)>`.

* Provider on 203.0.113.0/24 with gateway 203.0.113.1

  This network requires a gateway to provide Internet access to
  instances in your OpenStack environment.

You can modify these ranges and gateways to work with your particular
network infrastructure.

Network interface names vary by distribution. Traditionally,
interfaces use ``eth`` followed by a sequential number. To cover all
variations, this guide refers to the first interface as the
interface with the lowest number and the second interface as the
interface with the highest number.
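The "lowest/highest numbered interface" convention above can be resolved mechanically on a given node. The following is an illustrative sketch only (the ``ens33``/``ens34`` names and the sample ``ip -o link show`` output are assumptions, not from the guide); it uses a plain lexical sort, which is adequate for typical two-interface names:

```shell
#!/bin/sh
# Sketch: pick the guide's "first" and "second" interfaces from an
# interface listing. The sample string below stands in for the output
# of `ip -o link show` so the script runs anywhere; the loopback
# device is excluded.
sample='1: lo: <LOOPBACK,UP> ...
2: ens33: <BROADCAST,MULTICAST,UP> ...
3: ens34: <BROADCAST,MULTICAST,UP> ...'

ifaces=$(printf '%s\n' "$sample" |
    awk -F': ' '$2 != "lo" {print $2}' | sort)
first=$(printf '%s\n' "$ifaces" | head -n 1)
second=$(printf '%s\n' "$ifaces" | tail -n 1)
echo "first interface:  $first"
echo "second interface: $second"
```

On a real node, replace the sample string with the live ``ip -o link show`` output; note that a lexical sort mis-orders names like ``eth10`` before ``eth2``, so double-check nodes with many interfaces.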

Unless you intend to use the exact configuration provided in this
example architecture, you must modify the networks in this procedure to
match your environment. Each node must resolve the other nodes by
name in addition to IP address. For example, the ``controller`` name must
resolve to ``10.0.0.11``, the IP address of the management interface on
the controller node.

.. warning::

   Reconfiguring network interfaces will interrupt network
   connectivity. We recommend using a local terminal session for these
   procedures.

.. note::

   Your distribution does not enable a restrictive :term:`firewall` by
   default. For more information about securing your environment,
   refer to the `OpenStack Security Guide
   <https://docs.openstack.org/security-guide/>`_.

.. toctree::
   :maxdepth: 1

   environment-networking-controller.rst
   environment-networking-compute.rst
   environment-networking-storage-cinder.rst
   environment-networking-verify.rst
96
doc/install-guide/source/environment-networking-obs.rst
Normal file
@ -0,0 +1,96 @@

Host networking
~~~~~~~~~~~~~~~

After installing the operating system on each node for the architecture
that you choose to deploy, you must configure the network interfaces. We
recommend that you disable any automated network management tools and
manually edit the appropriate configuration files for your distribution.
For more information on how to configure networking on your
distribution, see the `SLES 12
<https://www.suse.com/documentation/sles-12/book_sle_admin/data/sec_basicnet_manconf.html>`__
or `openSUSE
<https://doc.opensuse.org/documentation/leap/reference/html/book.opensuse.reference/cha.basicnet.html>`__
documentation.

All nodes require Internet access for administrative purposes such as package
installation, security updates, :term:`DNS <Domain Name System (DNS)>`, and
:term:`NTP <Network Time Protocol (NTP)>`. In most cases, nodes should obtain
Internet access through the management network interface.
To highlight the importance of network separation, the example architectures
use `private address space <https://tools.ietf.org/html/rfc1918>`__ for the
management network and assume that the physical network infrastructure
provides Internet access via :term:`NAT <Network Address Translation (NAT)>`
or other methods. The example architectures use routable IP address space for
the provider (external) network and assume that the physical network
infrastructure provides direct Internet access.

In the provider networks architecture, all instances attach directly
to the provider network. In the self-service (private) networks architecture,
instances can attach to a self-service or provider network. Self-service
networks can reside entirely within OpenStack or provide some level of external
network access using :term:`NAT <Network Address Translation (NAT)>` through
the provider network.

.. _figure-networklayout:

.. figure:: figures/networklayout.png
   :alt: Network layout

The example architectures assume use of the following networks:

* Management on 10.0.0.0/24 with gateway 10.0.0.1

  This network requires a gateway to provide Internet access to all
  nodes for administrative purposes such as package installation,
  security updates, :term:`DNS <Domain Name System (DNS)>`, and
  :term:`NTP <Network Time Protocol (NTP)>`.

* Provider on 203.0.113.0/24 with gateway 203.0.113.1

  This network requires a gateway to provide Internet access to
  instances in your OpenStack environment.

You can modify these ranges and gateways to work with your particular
network infrastructure.

Network interface names vary by distribution. Traditionally,
interfaces use ``eth`` followed by a sequential number. To cover all
variations, this guide refers to the first interface as the
interface with the lowest number and the second interface as the
interface with the highest number.

Unless you intend to use the exact configuration provided in this
example architecture, you must modify the networks in this procedure to
match your environment. Each node must resolve the other nodes by
name in addition to IP address. For example, the ``controller`` name must
resolve to ``10.0.0.11``, the IP address of the management interface on
the controller node.

.. warning::

   Reconfiguring network interfaces will interrupt network
   connectivity. We recommend using a local terminal session for these
   procedures.

.. note::

   Your distribution enables a restrictive :term:`firewall` by
   default. During the installation process, certain steps will fail
   unless you alter or disable the firewall. For more information
   about securing your environment, refer to the `OpenStack Security
   Guide <https://docs.openstack.org/security-guide/>`_.

.. toctree::
   :maxdepth: 1

   environment-networking-controller.rst
   environment-networking-compute.rst
   environment-networking-storage-cinder.rst
   environment-networking-verify.rst
93
doc/install-guide/source/environment-networking-rdo.rst
Normal file
@ -0,0 +1,93 @@

Host networking
~~~~~~~~~~~~~~~

After installing the operating system on each node for the architecture
that you choose to deploy, you must configure the network interfaces. We
recommend that you disable any automated network management tools and
manually edit the appropriate configuration files for your distribution.
For more information on how to configure networking on your
distribution, see the `documentation
<https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/7/html/Networking_Guide/sec-Using_the_Command_Line_Interface.html>`__.

All nodes require Internet access for administrative purposes such as package
installation, security updates, :term:`DNS <Domain Name System (DNS)>`, and
:term:`NTP <Network Time Protocol (NTP)>`. In most cases, nodes should obtain
Internet access through the management network interface.
To highlight the importance of network separation, the example architectures
use `private address space <https://tools.ietf.org/html/rfc1918>`__ for the
management network and assume that the physical network infrastructure
provides Internet access via :term:`NAT <Network Address Translation (NAT)>`
or other methods. The example architectures use routable IP address space for
the provider (external) network and assume that the physical network
infrastructure provides direct Internet access.

In the provider networks architecture, all instances attach directly
to the provider network. In the self-service (private) networks architecture,
instances can attach to a self-service or provider network. Self-service
networks can reside entirely within OpenStack or provide some level of external
network access using :term:`NAT <Network Address Translation (NAT)>` through
the provider network.

.. _figure-networklayout:

.. figure:: figures/networklayout.png
   :alt: Network layout

The example architectures assume use of the following networks:

* Management on 10.0.0.0/24 with gateway 10.0.0.1

  This network requires a gateway to provide Internet access to all
  nodes for administrative purposes such as package installation,
  security updates, :term:`DNS <Domain Name System (DNS)>`, and
  :term:`NTP <Network Time Protocol (NTP)>`.

* Provider on 203.0.113.0/24 with gateway 203.0.113.1

  This network requires a gateway to provide Internet access to
  instances in your OpenStack environment.

You can modify these ranges and gateways to work with your particular
network infrastructure.

Network interface names vary by distribution. Traditionally,
interfaces use ``eth`` followed by a sequential number. To cover all
variations, this guide refers to the first interface as the
interface with the lowest number and the second interface as the
interface with the highest number.

Unless you intend to use the exact configuration provided in this
example architecture, you must modify the networks in this procedure to
match your environment. Each node must resolve the other nodes by
name in addition to IP address. For example, the ``controller`` name must
resolve to ``10.0.0.11``, the IP address of the management interface on
the controller node.

.. warning::

   Reconfiguring network interfaces will interrupt network
   connectivity. We recommend using a local terminal session for these
   procedures.

.. note::

   Your distribution enables a restrictive :term:`firewall` by
   default. During the installation process, certain steps will fail
   unless you alter or disable the firewall. For more information
   about securing your environment, refer to the `OpenStack Security
   Guide <https://docs.openstack.org/security-guide/>`_.

.. toctree::
   :maxdepth: 1

   environment-networking-controller.rst
   environment-networking-compute.rst
   environment-networking-storage-cinder.rst
   environment-networking-verify.rst
90
doc/install-guide/source/environment-networking-ubuntu.rst
Normal file
@ -0,0 +1,90 @@

Host networking
~~~~~~~~~~~~~~~

After installing the operating system on each node for the architecture
that you choose to deploy, you must configure the network interfaces. We
recommend that you disable any automated network management tools and
manually edit the appropriate configuration files for your distribution.
For more information on how to configure networking on your
distribution, see the `documentation <https://help.ubuntu.com/lts/serverguide/network-configuration.html>`_.

All nodes require Internet access for administrative purposes such as package
installation, security updates, :term:`DNS <Domain Name System (DNS)>`, and
:term:`NTP <Network Time Protocol (NTP)>`. In most cases, nodes should obtain
Internet access through the management network interface.
To highlight the importance of network separation, the example architectures
use `private address space <https://tools.ietf.org/html/rfc1918>`__ for the
management network and assume that the physical network infrastructure
provides Internet access via :term:`NAT <Network Address Translation (NAT)>`
or other methods. The example architectures use routable IP address space for
the provider (external) network and assume that the physical network
infrastructure provides direct Internet access.

In the provider networks architecture, all instances attach directly
to the provider network. In the self-service (private) networks architecture,
instances can attach to a self-service or provider network. Self-service
networks can reside entirely within OpenStack or provide some level of external
network access using :term:`NAT <Network Address Translation (NAT)>` through
the provider network.

.. _figure-networklayout:

.. figure:: figures/networklayout.png
   :alt: Network layout

The example architectures assume use of the following networks:

* Management on 10.0.0.0/24 with gateway 10.0.0.1

  This network requires a gateway to provide Internet access to all
  nodes for administrative purposes such as package installation,
  security updates, :term:`DNS <Domain Name System (DNS)>`, and
  :term:`NTP <Network Time Protocol (NTP)>`.

* Provider on 203.0.113.0/24 with gateway 203.0.113.1

  This network requires a gateway to provide Internet access to
  instances in your OpenStack environment.

You can modify these ranges and gateways to work with your particular
network infrastructure.

Network interface names vary by distribution. Traditionally,
interfaces use ``eth`` followed by a sequential number. To cover all
variations, this guide refers to the first interface as the
interface with the lowest number and the second interface as the
interface with the highest number.

Unless you intend to use the exact configuration provided in this
example architecture, you must modify the networks in this procedure to
match your environment. Each node must resolve the other nodes by
name in addition to IP address. For example, the ``controller`` name must
resolve to ``10.0.0.11``, the IP address of the management interface on
the controller node.

.. warning::

   Reconfiguring network interfaces will interrupt network
   connectivity. We recommend using a local terminal session for these
   procedures.

.. note::

   Your distribution does not enable a restrictive :term:`firewall` by
   default. For more information about securing your environment,
   refer to the `OpenStack Security Guide
   <https://docs.openstack.org/security-guide/>`_.

.. toctree::
   :maxdepth: 1

   environment-networking-controller.rst
   environment-networking-compute.rst
   environment-networking-storage-cinder.rst
   environment-networking-verify.rst
@ -0,0 +1,87 @@

Verify connectivity
-------------------

We recommend that you verify network connectivity to the Internet and
among the nodes before proceeding further.

#. From the *controller* node, test access to the Internet:

   .. code-block:: console

      # ping -c 4 openstack.org

      PING openstack.org (174.143.194.225) 56(84) bytes of data.
      64 bytes from 174.143.194.225: icmp_seq=1 ttl=54 time=18.3 ms
      64 bytes from 174.143.194.225: icmp_seq=2 ttl=54 time=17.5 ms
      64 bytes from 174.143.194.225: icmp_seq=3 ttl=54 time=17.5 ms
      64 bytes from 174.143.194.225: icmp_seq=4 ttl=54 time=17.4 ms

      --- openstack.org ping statistics ---
      4 packets transmitted, 4 received, 0% packet loss, time 3022ms
      rtt min/avg/max/mdev = 17.489/17.715/18.346/0.364 ms

   .. end

#. From the *controller* node, test access to the management interface on the
   *compute* node:

   .. code-block:: console

      # ping -c 4 compute1

      PING compute1 (10.0.0.31) 56(84) bytes of data.
      64 bytes from compute1 (10.0.0.31): icmp_seq=1 ttl=64 time=0.263 ms
      64 bytes from compute1 (10.0.0.31): icmp_seq=2 ttl=64 time=0.202 ms
      64 bytes from compute1 (10.0.0.31): icmp_seq=3 ttl=64 time=0.203 ms
      64 bytes from compute1 (10.0.0.31): icmp_seq=4 ttl=64 time=0.202 ms

      --- compute1 ping statistics ---
      4 packets transmitted, 4 received, 0% packet loss, time 3000ms
      rtt min/avg/max/mdev = 0.202/0.217/0.263/0.030 ms

   .. end

#. From the *compute* node, test access to the Internet:

   .. code-block:: console

      # ping -c 4 openstack.org

      PING openstack.org (174.143.194.225) 56(84) bytes of data.
      64 bytes from 174.143.194.225: icmp_seq=1 ttl=54 time=18.3 ms
      64 bytes from 174.143.194.225: icmp_seq=2 ttl=54 time=17.5 ms
      64 bytes from 174.143.194.225: icmp_seq=3 ttl=54 time=17.5 ms
      64 bytes from 174.143.194.225: icmp_seq=4 ttl=54 time=17.4 ms

      --- openstack.org ping statistics ---
      4 packets transmitted, 4 received, 0% packet loss, time 3022ms
      rtt min/avg/max/mdev = 17.489/17.715/18.346/0.364 ms

   .. end

#. From the *compute* node, test access to the management interface on the
   *controller* node:

   .. code-block:: console

      # ping -c 4 controller

      PING controller (10.0.0.11) 56(84) bytes of data.
      64 bytes from controller (10.0.0.11): icmp_seq=1 ttl=64 time=0.263 ms
      64 bytes from controller (10.0.0.11): icmp_seq=2 ttl=64 time=0.202 ms
      64 bytes from controller (10.0.0.11): icmp_seq=3 ttl=64 time=0.203 ms
      64 bytes from controller (10.0.0.11): icmp_seq=4 ttl=64 time=0.202 ms

      --- controller ping statistics ---
      4 packets transmitted, 4 received, 0% packet loss, time 3000ms
      rtt min/avg/max/mdev = 0.202/0.217/0.263/0.030 ms

   .. end

.. note::

   Your distribution does not enable a restrictive :term:`firewall` by
   default. For more information about securing your environment,
   refer to the `OpenStack Security Guide
   <https://docs.openstack.org/security-guide/>`_.
@ -0,0 +1,89 @@

Verify connectivity
-------------------

We recommend that you verify network connectivity to the Internet and
among the nodes before proceeding further.

#. From the *controller* node, test access to the Internet:

   .. code-block:: console

      # ping -c 4 openstack.org

      PING openstack.org (174.143.194.225) 56(84) bytes of data.
      64 bytes from 174.143.194.225: icmp_seq=1 ttl=54 time=18.3 ms
      64 bytes from 174.143.194.225: icmp_seq=2 ttl=54 time=17.5 ms
      64 bytes from 174.143.194.225: icmp_seq=3 ttl=54 time=17.5 ms
      64 bytes from 174.143.194.225: icmp_seq=4 ttl=54 time=17.4 ms

      --- openstack.org ping statistics ---
      4 packets transmitted, 4 received, 0% packet loss, time 3022ms
      rtt min/avg/max/mdev = 17.489/17.715/18.346/0.364 ms

   .. end

#. From the *controller* node, test access to the management interface on the
   *compute* node:

   .. code-block:: console

      # ping -c 4 compute1

      PING compute1 (10.0.0.31) 56(84) bytes of data.
      64 bytes from compute1 (10.0.0.31): icmp_seq=1 ttl=64 time=0.263 ms
      64 bytes from compute1 (10.0.0.31): icmp_seq=2 ttl=64 time=0.202 ms
      64 bytes from compute1 (10.0.0.31): icmp_seq=3 ttl=64 time=0.203 ms
      64 bytes from compute1 (10.0.0.31): icmp_seq=4 ttl=64 time=0.202 ms

      --- compute1 ping statistics ---
      4 packets transmitted, 4 received, 0% packet loss, time 3000ms
      rtt min/avg/max/mdev = 0.202/0.217/0.263/0.030 ms

   .. end

#. From the *compute* node, test access to the Internet:

   .. code-block:: console

      # ping -c 4 openstack.org

      PING openstack.org (174.143.194.225) 56(84) bytes of data.
      64 bytes from 174.143.194.225: icmp_seq=1 ttl=54 time=18.3 ms
      64 bytes from 174.143.194.225: icmp_seq=2 ttl=54 time=17.5 ms
      64 bytes from 174.143.194.225: icmp_seq=3 ttl=54 time=17.5 ms
      64 bytes from 174.143.194.225: icmp_seq=4 ttl=54 time=17.4 ms

      --- openstack.org ping statistics ---
      4 packets transmitted, 4 received, 0% packet loss, time 3022ms
      rtt min/avg/max/mdev = 17.489/17.715/18.346/0.364 ms

   .. end

#. From the *compute* node, test access to the management interface on the
   *controller* node:

   .. code-block:: console

      # ping -c 4 controller

      PING controller (10.0.0.11) 56(84) bytes of data.
      64 bytes from controller (10.0.0.11): icmp_seq=1 ttl=64 time=0.263 ms
      64 bytes from controller (10.0.0.11): icmp_seq=2 ttl=64 time=0.202 ms
      64 bytes from controller (10.0.0.11): icmp_seq=3 ttl=64 time=0.203 ms
      64 bytes from controller (10.0.0.11): icmp_seq=4 ttl=64 time=0.202 ms

      --- controller ping statistics ---
      4 packets transmitted, 4 received, 0% packet loss, time 3000ms
      rtt min/avg/max/mdev = 0.202/0.217/0.263/0.030 ms

   .. end

.. note::

   Your distribution enables a restrictive :term:`firewall` by
   default. During the installation process, certain steps will fail
   unless you alter or disable the firewall. For more information
   about securing your environment, refer to the `OpenStack Security
   Guide <https://docs.openstack.org/security-guide/>`_.
@ -0,0 +1,89 @@

Verify connectivity
-------------------

We recommend that you verify network connectivity to the Internet and
among the nodes before proceeding further.

#. From the *controller* node, test access to the Internet:

   .. code-block:: console

      # ping -c 4 openstack.org

      PING openstack.org (174.143.194.225) 56(84) bytes of data.
      64 bytes from 174.143.194.225: icmp_seq=1 ttl=54 time=18.3 ms
      64 bytes from 174.143.194.225: icmp_seq=2 ttl=54 time=17.5 ms
      64 bytes from 174.143.194.225: icmp_seq=3 ttl=54 time=17.5 ms
      64 bytes from 174.143.194.225: icmp_seq=4 ttl=54 time=17.4 ms

      --- openstack.org ping statistics ---
      4 packets transmitted, 4 received, 0% packet loss, time 3022ms
      rtt min/avg/max/mdev = 17.489/17.715/18.346/0.364 ms

   .. end

#. From the *controller* node, test access to the management interface on the
   *compute* node:

   .. code-block:: console

      # ping -c 4 compute1

      PING compute1 (10.0.0.31) 56(84) bytes of data.
      64 bytes from compute1 (10.0.0.31): icmp_seq=1 ttl=64 time=0.263 ms
      64 bytes from compute1 (10.0.0.31): icmp_seq=2 ttl=64 time=0.202 ms
      64 bytes from compute1 (10.0.0.31): icmp_seq=3 ttl=64 time=0.203 ms
      64 bytes from compute1 (10.0.0.31): icmp_seq=4 ttl=64 time=0.202 ms

      --- compute1 ping statistics ---
      4 packets transmitted, 4 received, 0% packet loss, time 3000ms
      rtt min/avg/max/mdev = 0.202/0.217/0.263/0.030 ms

   .. end

#. From the *compute* node, test access to the Internet:

   .. code-block:: console

      # ping -c 4 openstack.org

      PING openstack.org (174.143.194.225) 56(84) bytes of data.
      64 bytes from 174.143.194.225: icmp_seq=1 ttl=54 time=18.3 ms
      64 bytes from 174.143.194.225: icmp_seq=2 ttl=54 time=17.5 ms
      64 bytes from 174.143.194.225: icmp_seq=3 ttl=54 time=17.5 ms
      64 bytes from 174.143.194.225: icmp_seq=4 ttl=54 time=17.4 ms

      --- openstack.org ping statistics ---
      4 packets transmitted, 4 received, 0% packet loss, time 3022ms
      rtt min/avg/max/mdev = 17.489/17.715/18.346/0.364 ms

   .. end

#. From the *compute* node, test access to the management interface on the
   *controller* node:

   .. code-block:: console

      # ping -c 4 controller

      PING controller (10.0.0.11) 56(84) bytes of data.
      64 bytes from controller (10.0.0.11): icmp_seq=1 ttl=64 time=0.263 ms
      64 bytes from controller (10.0.0.11): icmp_seq=2 ttl=64 time=0.202 ms
      64 bytes from controller (10.0.0.11): icmp_seq=3 ttl=64 time=0.203 ms
      64 bytes from controller (10.0.0.11): icmp_seq=4 ttl=64 time=0.202 ms

      --- controller ping statistics ---
      4 packets transmitted, 4 received, 0% packet loss, time 3000ms
      rtt min/avg/max/mdev = 0.202/0.217/0.263/0.030 ms

   .. end

.. note::

   Your distribution enables a restrictive :term:`firewall` by
   default. During the installation process, certain steps will fail
   unless you alter or disable the firewall. For more information
   about securing your environment, refer to the `OpenStack Security
   Guide <https://docs.openstack.org/security-guide/>`_.
@ -0,0 +1,87 @@

Verify connectivity
-------------------

We recommend that you verify network connectivity to the Internet and
among the nodes before proceeding further.

#. From the *controller* node, test access to the Internet:

   .. code-block:: console

      # ping -c 4 openstack.org

      PING openstack.org (174.143.194.225) 56(84) bytes of data.
      64 bytes from 174.143.194.225: icmp_seq=1 ttl=54 time=18.3 ms
      64 bytes from 174.143.194.225: icmp_seq=2 ttl=54 time=17.5 ms
      64 bytes from 174.143.194.225: icmp_seq=3 ttl=54 time=17.5 ms
      64 bytes from 174.143.194.225: icmp_seq=4 ttl=54 time=17.4 ms

      --- openstack.org ping statistics ---
      4 packets transmitted, 4 received, 0% packet loss, time 3022ms
      rtt min/avg/max/mdev = 17.489/17.715/18.346/0.364 ms

   .. end

#. From the *controller* node, test access to the management interface on the
   *compute* node:

   .. code-block:: console

      # ping -c 4 compute1

      PING compute1 (10.0.0.31) 56(84) bytes of data.
      64 bytes from compute1 (10.0.0.31): icmp_seq=1 ttl=64 time=0.263 ms
      64 bytes from compute1 (10.0.0.31): icmp_seq=2 ttl=64 time=0.202 ms
      64 bytes from compute1 (10.0.0.31): icmp_seq=3 ttl=64 time=0.203 ms
      64 bytes from compute1 (10.0.0.31): icmp_seq=4 ttl=64 time=0.202 ms

      --- compute1 ping statistics ---
      4 packets transmitted, 4 received, 0% packet loss, time 3000ms
      rtt min/avg/max/mdev = 0.202/0.217/0.263/0.030 ms

   .. end

#. From the *compute* node, test access to the Internet:

   .. code-block:: console

      # ping -c 4 openstack.org

      PING openstack.org (174.143.194.225) 56(84) bytes of data.
      64 bytes from 174.143.194.225: icmp_seq=1 ttl=54 time=18.3 ms
      64 bytes from 174.143.194.225: icmp_seq=2 ttl=54 time=17.5 ms
      64 bytes from 174.143.194.225: icmp_seq=3 ttl=54 time=17.5 ms
      64 bytes from 174.143.194.225: icmp_seq=4 ttl=54 time=17.4 ms

      --- openstack.org ping statistics ---
      4 packets transmitted, 4 received, 0% packet loss, time 3022ms
      rtt min/avg/max/mdev = 17.489/17.715/18.346/0.364 ms

   .. end

#. From the *compute* node, test access to the management interface on the
   *controller* node:

   .. code-block:: console

      # ping -c 4 controller

      PING controller (10.0.0.11) 56(84) bytes of data.
      64 bytes from controller (10.0.0.11): icmp_seq=1 ttl=64 time=0.263 ms
      64 bytes from controller (10.0.0.11): icmp_seq=2 ttl=64 time=0.202 ms
      64 bytes from controller (10.0.0.11): icmp_seq=3 ttl=64 time=0.203 ms
      64 bytes from controller (10.0.0.11): icmp_seq=4 ttl=64 time=0.202 ms

      --- controller ping statistics ---
      4 packets transmitted, 4 received, 0% packet loss, time 3000ms
      rtt min/avg/max/mdev = 0.202/0.217/0.263/0.030 ms

   .. end

.. note::

   Your distribution does not enable a restrictive :term:`firewall` by
   default. For more information about securing your environment,
   refer to the `OpenStack Security Guide
   <https://docs.openstack.org/security-guide/>`_.
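The per-node checks in the verification files above can be wrapped in a small loop when you have many nodes. This sketch is not part of the guide: the target list matches the guide's example names, and the ``DRY_RUN`` switch (an assumption added here) only prints the commands so the script is safe to run anywhere; set it to ``0`` on a real node to actually ping.

```shell
#!/bin/sh
# Sketch: run the guide's connectivity checks one target after
# another. DRY_RUN=1 prints the commands instead of executing them.
DRY_RUN=1
targets="openstack.org controller compute1"
for t in $targets; do
    if [ "$DRY_RUN" = 1 ]; then
        echo "would run: ping -c 4 $t"
    else
        ping -c 4 "$t" || echo "FAILED: $t"
    fi
done
```

Any ``FAILED`` line points at either a hosts-file entry missing on that node or a firewall blocking ICMP, which are the two failure modes the surrounding notes call out.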
@@ -1,100 +1,7 @@
Verify connectivity
-------------------

We recommend that you verify network connectivity to the Internet and
among the nodes before proceeding further.

.. toctree::
   :glob:

#. From the *controller* node, test access to the Internet:

   .. code-block:: console

      # ping -c 4 openstack.org

      PING openstack.org (174.143.194.225) 56(84) bytes of data.
      64 bytes from 174.143.194.225: icmp_seq=1 ttl=54 time=18.3 ms
      64 bytes from 174.143.194.225: icmp_seq=2 ttl=54 time=17.5 ms
      64 bytes from 174.143.194.225: icmp_seq=3 ttl=54 time=17.5 ms
      64 bytes from 174.143.194.225: icmp_seq=4 ttl=54 time=17.4 ms

      --- openstack.org ping statistics ---
      4 packets transmitted, 4 received, 0% packet loss, time 3022ms
      rtt min/avg/max/mdev = 17.489/17.715/18.346/0.364 ms

   .. end

#. From the *controller* node, test access to the management interface on the
   *compute* node:

   .. code-block:: console

      # ping -c 4 compute1

      PING compute1 (10.0.0.31) 56(84) bytes of data.
      64 bytes from compute1 (10.0.0.31): icmp_seq=1 ttl=64 time=0.263 ms
      64 bytes from compute1 (10.0.0.31): icmp_seq=2 ttl=64 time=0.202 ms
      64 bytes from compute1 (10.0.0.31): icmp_seq=3 ttl=64 time=0.203 ms
      64 bytes from compute1 (10.0.0.31): icmp_seq=4 ttl=64 time=0.202 ms

      --- compute1 ping statistics ---
      4 packets transmitted, 4 received, 0% packet loss, time 3000ms
      rtt min/avg/max/mdev = 0.202/0.217/0.263/0.030 ms

   .. end

#. From the *compute* node, test access to the Internet:

   .. code-block:: console

      # ping -c 4 openstack.org

      PING openstack.org (174.143.194.225) 56(84) bytes of data.
      64 bytes from 174.143.194.225: icmp_seq=1 ttl=54 time=18.3 ms
      64 bytes from 174.143.194.225: icmp_seq=2 ttl=54 time=17.5 ms
      64 bytes from 174.143.194.225: icmp_seq=3 ttl=54 time=17.5 ms
      64 bytes from 174.143.194.225: icmp_seq=4 ttl=54 time=17.4 ms

      --- openstack.org ping statistics ---
      4 packets transmitted, 4 received, 0% packet loss, time 3022ms
      rtt min/avg/max/mdev = 17.489/17.715/18.346/0.364 ms

   .. end

#. From the *compute* node, test access to the management interface on the
   *controller* node:

   .. code-block:: console

      # ping -c 4 controller

      PING controller (10.0.0.11) 56(84) bytes of data.
      64 bytes from controller (10.0.0.11): icmp_seq=1 ttl=64 time=0.263 ms
      64 bytes from controller (10.0.0.11): icmp_seq=2 ttl=64 time=0.202 ms
      64 bytes from controller (10.0.0.11): icmp_seq=3 ttl=64 time=0.203 ms
      64 bytes from controller (10.0.0.11): icmp_seq=4 ttl=64 time=0.202 ms

      --- controller ping statistics ---
      4 packets transmitted, 4 received, 0% packet loss, time 3000ms
      rtt min/avg/max/mdev = 0.202/0.217/0.263/0.030 ms

   .. end

.. note::

   .. only:: rdo or obs

      Your distribution enables a restrictive :term:`firewall` by
      default. During the installation process, certain steps will
      fail unless you alter or disable the firewall. For more
      information about securing your environment, refer to the
      `OpenStack Security Guide <https://docs.openstack.org/security-guide/>`_.

   .. endonly

   .. only:: ubuntu or debian

      Your distribution does not enable a restrictive :term:`firewall`
      by default. For more information about securing your environment,
      refer to the
      `OpenStack Security Guide <https://docs.openstack.org/security-guide/>`_.

   .. endonly

   environment-networking-verify-*
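The verification steps above boil down to checking the packet-loss figure in each ping summary. A minimal sketch of that check, with a hard-coded sample transcript standing in for live ``ping`` output (the parsing approach is an illustration, not part of the guide):

```shell
#!/bin/sh
# Check a captured `ping -c 4 <host>` transcript for 0% packet loss.
# The transcript is hard-coded for illustration; in a real check you
# would capture it with: output=$(ping -c 4 controller)
output='--- controller ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3000ms'

# Extract the number before "% packet loss".
loss=$(printf '%s\n' "$output" |
    sed -n 's/.* \([0-9.]*\)% packet loss.*/\1/p')

if [ "$loss" = "0" ]; then
    echo "connectivity OK (${loss}% loss)"
else
    echo "connectivity degraded (${loss}% loss)" >&2
    exit 1
fi
```

The same one-liner works for each of the four ping tests, since the statistics line format is identical.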
@@ -3,141 +3,9 @@
Host networking
~~~~~~~~~~~~~~~

.. only:: ubuntu

   After installing the operating system on each node for the architecture
   that you choose to deploy, you must configure the network interfaces. We
   recommend that you disable any automated network management tools and
   manually edit the appropriate configuration files for your distribution.
   For more information on how to configure networking on your
   distribution, see the `documentation <https://help.ubuntu.com/lts/serverguide/network-configuration.html>`_.

.. endonly

.. only:: debian

   After installing the operating system on each node for the architecture
   that you choose to deploy, you must configure the network interfaces. We
   recommend that you disable any automated network management tools and
   manually edit the appropriate configuration files for your distribution.
   For more information on how to configure networking on your
   distribution, see the `documentation
   <https://wiki.debian.org/NetworkConfiguration>`__.

.. endonly

.. only:: rdo

   After installing the operating system on each node for the architecture
   that you choose to deploy, you must configure the network interfaces. We
   recommend that you disable any automated network management tools and
   manually edit the appropriate configuration files for your distribution.
   For more information on how to configure networking on your
   distribution, see the `documentation
   <https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/7/html/Networking_Guide/sec-Using_the_Command_Line_Interface.html>`__.

.. endonly

.. only:: obs

   After installing the operating system on each node for the architecture
   that you choose to deploy, you must configure the network interfaces. We
   recommend that you disable any automated network management tools and
   manually edit the appropriate configuration files for your distribution.
   For more information on how to configure networking on your
   distribution, see the `SLES 12
   <https://www.suse.com/documentation/sles-12/book_sle_admin/data/sec_basicnet_manconf.html>`__
   or `openSUSE
   <https://doc.opensuse.org/documentation/leap/reference/html/book.opensuse.reference/cha.basicnet.html>`__
   documentation.

.. endonly

All nodes require Internet access for administrative purposes such as package
installation, security updates, :term:`DNS <Domain Name System (DNS)>`, and
:term:`NTP <Network Time Protocol (NTP)>`. In most cases, nodes should obtain
Internet access through the management network interface.
To highlight the importance of network separation, the example architectures
use `private address space <https://tools.ietf.org/html/rfc1918>`__ for the
management network and assume that the physical network infrastructure
provides Internet access via :term:`NAT <Network Address Translation (NAT)>`
or other methods. The example architectures use routable IP address space for
the provider (external) network and assume that the physical network
infrastructure provides direct Internet access.

In the provider networks architecture, all instances attach directly
to the provider network. In the self-service (private) networks architecture,
instances can attach to a self-service or provider network. Self-service
networks can reside entirely within OpenStack or provide some level of external
network access using :term:`NAT <Network Address Translation (NAT)>` through
the provider network.

.. _figure-networklayout:

.. figure:: figures/networklayout.png
   :alt: Network layout

The example architectures assume use of the following networks:

* Management on 10.0.0.0/24 with gateway 10.0.0.1

  This network requires a gateway to provide Internet access to all
  nodes for administrative purposes such as package installation,
  security updates, :term:`DNS <Domain Name System (DNS)>`, and
  :term:`NTP <Network Time Protocol (NTP)>`.

* Provider on 203.0.113.0/24 with gateway 203.0.113.1

  This network requires a gateway to provide Internet access to
  instances in your OpenStack environment.

You can modify these ranges and gateways to work with your particular
network infrastructure.

Network interface names vary by distribution. Traditionally,
interfaces use ``eth`` followed by a sequential number. To cover all
variations, this guide refers to the first interface as the
interface with the lowest number and the second interface as the
interface with the highest number.

Unless you intend to use the exact configuration provided in this
example architecture, you must modify the networks in this procedure to
match your environment. Each node must resolve the other nodes by
name in addition to IP address. For example, the ``controller`` name must
resolve to ``10.0.0.11``, the IP address of the management interface on
the controller node.

.. warning::

   Reconfiguring network interfaces will interrupt network
   connectivity. We recommend using a local terminal session for these
   procedures.

.. note::

   .. only:: rdo or obs

      Your distribution enables a restrictive :term:`firewall` by
      default. During the installation process, certain steps will
      fail unless you alter or disable the firewall. For more
      information about securing your environment, refer to the
      `OpenStack Security Guide <https://docs.openstack.org/security-guide/>`_.

   .. endonly

   .. only:: ubuntu or debian

      Your distribution does not enable a restrictive :term:`firewall`
      by default. For more information about securing your environment,
      refer to the
      `OpenStack Security Guide <https://docs.openstack.org/security-guide/>`_.

   .. endonly

.. toctree::
   :maxdepth: 1

   environment-networking-controller.rst
   environment-networking-compute.rst
   environment-networking-storage-cinder.rst
   environment-networking-verify.rst
   environment-networking-debian
   environment-networking-obs
   environment-networking-rdo
   environment-networking-ubuntu
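The name-resolution requirement above (``controller`` must resolve to ``10.0.0.11``, and so on) can be sanity-checked mechanically. A sketch against a hard-coded hosts fragment; on a real node you would query the resolver instead, for example with ``getent hosts controller``:

```shell
#!/bin/sh
# Verify the example name/address mappings are consistent. The hosts
# fragment is hard-coded for illustration only.
hosts='10.0.0.11 controller
10.0.0.31 compute1'

for pair in "controller 10.0.0.11" "compute1 10.0.0.31"; do
    name=${pair%% *}   # first word: hostname
    want=${pair##* }   # last word: expected address
    got=$(printf '%s\n' "$hosts" | awk -v n="$name" '$2 == n {print $1}')
    if [ "$got" = "$want" ]; then
        echo "$name resolves to $got"
    else
        echo "$name resolves to '$got', expected $want" >&2
        exit 1
    fi
done
```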
@@ -0,0 +1,59 @@
Controller node
~~~~~~~~~~~~~~~

Perform these steps on the controller node.

Install and configure components
--------------------------------

1. Install the packages:

   .. code-block:: console

      # apt install chrony

   .. end

2. Edit the ``/etc/chrony/chrony.conf`` file and add, change, or remove
   these keys as necessary for your environment:

   .. code-block:: shell

      server NTP_SERVER iburst

   .. end

   Replace ``NTP_SERVER`` with the hostname or IP address of a suitable more
   accurate (lower stratum) NTP server. The configuration supports multiple
   ``server`` keys.

   .. note::

      By default, the controller node synchronizes the time via a pool of
      public servers. However, you can optionally configure alternative
      servers such as those provided by your organization.

3. To enable other nodes to connect to the chrony daemon on the
   controller node, add this key to the ``/etc/chrony/chrony.conf``
   file:

   .. code-block:: shell

      allow 10.0.0.0/24

   .. end

4. Restart the NTP service:

   .. code-block:: console

      # service chrony restart

   .. end
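Steps 2 and 3 above amount to two one-line additions to the configuration file. A hedged sketch that applies them idempotently to a scratch copy; the ``ntp.example.org`` server name and the scratch path are illustrative, not from the guide:

```shell
#!/bin/sh
# Apply the `server` and `allow` keys from steps 2-3 to a scratch copy
# of chrony.conf. Server name and path are illustrative assumptions.
conf=$(mktemp)
printf 'pool 2.debian.pool.ntp.org offline iburst\n' > "$conf"

# Append each key only if it is not already present (idempotent).
grep -q '^server ntp.example.org iburst$' "$conf" ||
    printf 'server ntp.example.org iburst\n' >> "$conf"
grep -q '^allow 10.0.0.0/24$' "$conf" ||
    printf 'allow 10.0.0.0/24\n' >> "$conf"

# Count the keys we manage: one server key and one allow key.
grep -Ec '^(server|allow) ' "$conf"
rm -f "$conf"
```

Re-running the script leaves the file unchanged, which is the property you want when a configuration step may be executed more than once.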
61
doc/install-guide/source/environment-ntp-controller-obs.rst
Normal file
@@ -0,0 +1,61 @@
Controller node
~~~~~~~~~~~~~~~

Perform these steps on the controller node.

Install and configure components
--------------------------------

1. Install the packages:

   .. code-block:: console

      # zypper install chrony

   .. end

2. Edit the ``/etc/chrony.conf`` file and add, change, or remove these
   keys as necessary for your environment:

   .. code-block:: shell

      server NTP_SERVER iburst

   .. end

   Replace ``NTP_SERVER`` with the hostname or IP address of a suitable more
   accurate (lower stratum) NTP server. The configuration supports multiple
   ``server`` keys.

   .. note::

      By default, the controller node synchronizes the time via a pool of
      public servers. However, you can optionally configure alternative
      servers such as those provided by your organization.

3. To enable other nodes to connect to the chrony daemon on the
   controller node, add this key to the ``/etc/chrony.conf`` file:

   .. code-block:: shell

      allow 10.0.0.0/24

   .. end

   If necessary, replace ``10.0.0.0/24`` with a description of your subnet.

4. Start the NTP service and configure it to start when the system boots:

   .. code-block:: console

      # systemctl enable chronyd.service
      # systemctl start chronyd.service

   .. end
61
doc/install-guide/source/environment-ntp-controller-rdo.rst
Normal file
@@ -0,0 +1,61 @@
Controller node
~~~~~~~~~~~~~~~

Perform these steps on the controller node.

Install and configure components
--------------------------------

1. Install the packages:

   .. code-block:: console

      # yum install chrony

   .. end

2. Edit the ``/etc/chrony.conf`` file and add, change, or remove these
   keys as necessary for your environment:

   .. code-block:: shell

      server NTP_SERVER iburst

   .. end

   Replace ``NTP_SERVER`` with the hostname or IP address of a suitable more
   accurate (lower stratum) NTP server. The configuration supports multiple
   ``server`` keys.

   .. note::

      By default, the controller node synchronizes the time via a pool of
      public servers. However, you can optionally configure alternative
      servers such as those provided by your organization.

3. To enable other nodes to connect to the chrony daemon on the
   controller node, add this key to the ``/etc/chrony.conf`` file:

   .. code-block:: shell

      allow 10.0.0.0/24

   .. end

   If necessary, replace ``10.0.0.0/24`` with a description of your subnet.

4. Start the NTP service and configure it to start when the system boots:

   .. code-block:: console

      # systemctl enable chronyd.service
      # systemctl start chronyd.service

   .. end
@@ -0,0 +1,59 @@
Controller node
~~~~~~~~~~~~~~~

Perform these steps on the controller node.

Install and configure components
--------------------------------

1. Install the packages:

   .. code-block:: console

      # apt install chrony

   .. end

2. Edit the ``/etc/chrony/chrony.conf`` file and add, change, or remove
   these keys as necessary for your environment:

   .. code-block:: shell

      server NTP_SERVER iburst

   .. end

   Replace ``NTP_SERVER`` with the hostname or IP address of a suitable more
   accurate (lower stratum) NTP server. The configuration supports multiple
   ``server`` keys.

   .. note::

      By default, the controller node synchronizes the time via a pool of
      public servers. However, you can optionally configure alternative
      servers such as those provided by your organization.

3. To enable other nodes to connect to the chrony daemon on the
   controller node, add this key to the ``/etc/chrony/chrony.conf``
   file:

   .. code-block:: shell

      allow 10.0.0.0/24

   .. end

4. Restart the NTP service:

   .. code-block:: console

      # service chrony restart

   .. end
@@ -3,122 +3,7 @@
Controller node
~~~~~~~~~~~~~~~

Perform these steps on the controller node.

.. toctree::
   :glob:

Install and configure components
--------------------------------

1. Install the packages:

   .. only:: ubuntu or debian

      .. code-block:: console

         # apt install chrony

      .. end

   .. endonly

   .. only:: rdo

      .. code-block:: console

         # yum install chrony

      .. end

   .. endonly

   .. only:: obs

      .. code-block:: console

         # zypper install chrony

      .. end

   .. endonly

.. only:: ubuntu or debian

   2. Edit the ``/etc/chrony/chrony.conf`` file and add, change, or remove
      these keys as necessary for your environment:

      .. code-block:: shell

         server NTP_SERVER iburst

      .. end

      Replace ``NTP_SERVER`` with the hostname or IP address of a suitable more
      accurate (lower stratum) NTP server. The configuration supports multiple
      ``server`` keys.

      .. note::

         By default, the controller node synchronizes the time via a pool of
         public servers. However, you can optionally configure alternative
         servers such as those provided by your organization.

   3. To enable other nodes to connect to the chrony daemon on the controller node,
      add this key to the ``/etc/chrony/chrony.conf`` file:

      .. code-block:: shell

         allow 10.0.0.0/24

      .. end

   4. Restart the NTP service:

      .. code-block:: console

         # service chrony restart

      .. end

.. endonly

.. only:: rdo or obs

   2. Edit the ``/etc/chrony.conf`` file and add, change, or remove these
      keys as necessary for your environment:

      .. code-block:: shell

         server NTP_SERVER iburst

      .. end

      Replace ``NTP_SERVER`` with the hostname or IP address of a suitable more
      accurate (lower stratum) NTP server. The configuration supports multiple
      ``server`` keys.

      .. note::

         By default, the controller node synchronizes the time via a pool of
         public servers. However, you can optionally configure alternative
         servers such as those provided by your organization.

   3. To enable other nodes to connect to the chrony daemon on the controller node,
      add this key to the ``/etc/chrony.conf`` file:

      .. code-block:: shell

         allow 10.0.0.0/24

      .. end

      If necessary, replace ``10.0.0.0/24`` with a description of your subnet.

   4. Start the NTP service and configure it to start when the system boots:

      .. code-block:: console

         # systemctl enable chronyd.service
         # systemctl start chronyd.service

      .. end

.. endonly

   environment-ntp-controller-*
43
doc/install-guide/source/environment-ntp-other-debian.rst
Normal file
@@ -0,0 +1,43 @@
Other nodes
~~~~~~~~~~~

Other nodes reference the controller node for clock synchronization.
Perform these steps on all other nodes.

Install and configure components
--------------------------------

1. Install the packages:

   .. code-block:: console

      # apt install chrony

   .. end

2. Edit the ``/etc/chrony/chrony.conf`` file and comment out or remove all
   but one ``server`` key. Change it to reference the controller node:

   .. path /etc/chrony/chrony.conf
   .. code-block:: shell

      server controller iburst

   .. end

3. Comment out the ``pool 2.debian.pool.ntp.org offline iburst`` line.

4. Restart the NTP service:

   .. code-block:: console

      # service chrony restart

   .. end
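Steps 2 and 3 above can be scripted as two substitutions. A sketch against a scratch copy of the file; note that ``sed -i`` without a backup suffix assumes GNU sed:

```shell
#!/bin/sh
# Point a scratch chrony.conf copy at the controller and comment out the
# distribution pool line (steps 2-3). The scratch file is illustrative.
conf=$(mktemp)
printf 'pool 2.debian.pool.ntp.org offline iburst\nserver 0.debian.pool.ntp.org iburst\n' > "$conf"

# Rewrite the remaining server key, comment out the pool key.
sed -i -e 's/^server .*/server controller iburst/' \
       -e 's/^pool /# pool /' "$conf"

cat "$conf"
rm -f "$conf"
```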
42
doc/install-guide/source/environment-ntp-other-obs.rst
Normal file
@@ -0,0 +1,42 @@
Other nodes
~~~~~~~~~~~

Other nodes reference the controller node for clock synchronization.
Perform these steps on all other nodes.

Install and configure components
--------------------------------

1. Install the packages:

   .. code-block:: console

      # zypper install chrony

   .. end

2. Edit the ``/etc/chrony.conf`` file and comment out or remove all but one
   ``server`` key. Change it to reference the controller node:

   .. path /etc/chrony.conf
   .. code-block:: shell

      server controller iburst

   .. end

3. Start the NTP service and configure it to start when the system boots:

   .. code-block:: console

      # systemctl enable chronyd.service
      # systemctl start chronyd.service

   .. end
42
doc/install-guide/source/environment-ntp-other-rdo.rst
Normal file
@@ -0,0 +1,42 @@
Other nodes
~~~~~~~~~~~

Other nodes reference the controller node for clock synchronization.
Perform these steps on all other nodes.

Install and configure components
--------------------------------

1. Install the packages:

   .. code-block:: console

      # yum install chrony

   .. end

2. Edit the ``/etc/chrony.conf`` file and comment out or remove all but one
   ``server`` key. Change it to reference the controller node:

   .. path /etc/chrony.conf
   .. code-block:: shell

      server controller iburst

   .. end

3. Start the NTP service and configure it to start when the system boots:

   .. code-block:: console

      # systemctl enable chronyd.service
      # systemctl start chronyd.service

   .. end
43
doc/install-guide/source/environment-ntp-other-ubuntu.rst
Normal file
@@ -0,0 +1,43 @@
Other nodes
~~~~~~~~~~~

Other nodes reference the controller node for clock synchronization.
Perform these steps on all other nodes.

Install and configure components
--------------------------------

1. Install the packages:

   .. code-block:: console

      # apt install chrony

   .. end

2. Edit the ``/etc/chrony/chrony.conf`` file and comment out or remove all
   but one ``server`` key. Change it to reference the controller node:

   .. path /etc/chrony/chrony.conf
   .. code-block:: shell

      server controller iburst

   .. end

3. Comment out the ``pool 2.debian.pool.ntp.org offline iburst`` line.

4. Restart the NTP service:

   .. code-block:: console

      # service chrony restart

   .. end
@@ -6,84 +6,7 @@ Other nodes
Other nodes reference the controller node for clock synchronization.
Perform these steps on all other nodes.

Install and configure components
--------------------------------

.. toctree::
   :glob:

1. Install the packages:

   .. only:: ubuntu or debian

      .. code-block:: console

         # apt install chrony

      .. end

   .. endonly

   .. only:: rdo

      .. code-block:: console

         # yum install chrony

      .. end

   .. endonly

   .. only:: obs

      .. code-block:: console

         # zypper install chrony

      .. end

   .. endonly

.. only:: ubuntu or debian

   2. Edit the ``/etc/chrony/chrony.conf`` file and comment out or remove all
      but one ``server`` key. Change it to reference the controller node:

      .. path /etc/chrony/chrony.conf
      .. code-block:: shell

         server controller iburst

      .. end

   3. Comment out the ``pool 2.debian.pool.ntp.org offline iburst`` line.

   4. Restart the NTP service:

      .. code-block:: console

         # service chrony restart

      .. end

.. endonly

.. only:: rdo or obs

   2. Edit the ``/etc/chrony.conf`` file and comment out or remove all but one
      ``server`` key. Change it to reference the controller node:

      .. path /etc/chrony.conf
      .. code-block:: shell

         server controller iburst

      .. end

   3. Start the NTP service and configure it to start when the system boots:

      .. code-block:: console

         # systemctl enable chronyd.service
         # systemctl start chronyd.service

      .. end

.. endonly

   environment-ntp-other-*
81
doc/install-guide/source/environment-obs.rst
Normal file
@@ -0,0 +1,81 @@
===========
Environment
===========

This section explains how to configure the controller node and one compute
node using the example architecture.

Although most environments include Identity, Image service, Compute, at least
one networking service, and the Dashboard, the Object Storage service can
operate independently. If your use case only involves Object Storage, you can
skip to the `Object Storage Installation Guide
<https://docs.openstack.org/project-install-guide/object-storage/draft/>`_
after configuring the appropriate nodes for it.

You must use an account with administrative privileges to configure each node.
Either run the commands as the ``root`` user or configure the ``sudo``
utility.

The :command:`systemctl enable` call on openSUSE outputs a warning message
when the service uses SysV Init scripts instead of native systemd files. This
warning can be ignored.

For best performance, we recommend that your environment meets or exceeds
the hardware requirements in :ref:`figure-hwreqs`.

The following minimum requirements should support a proof-of-concept
environment with core services and several :term:`CirrOS` instances:

* Controller Node: 1 processor, 4 GB memory, and 5 GB storage

* Compute Node: 1 processor, 2 GB memory, and 10 GB storage

As the number of OpenStack services and virtual machines increases, so do the
hardware requirements for the best performance. If performance degrades after
enabling additional services or virtual machines, consider adding hardware
resources to your environment.

To minimize clutter and provide more resources for OpenStack, we recommend
a minimal installation of your Linux distribution. Also, you must install a
64-bit version of your distribution on each node.

A single disk partition on each node works for most basic installations.
However, you should consider :term:`Logical Volume Manager (LVM)` for
installations with optional services such as Block Storage.

For first-time installation and testing purposes, many users choose to build
each host as a :term:`virtual machine (VM)`. The primary benefits of VMs
include the following:

* One physical server can support multiple nodes, each with almost any
  number of network interfaces.

* Ability to take periodic "snapshots" throughout the installation
  process and "roll back" to a working configuration in the event of a
  problem.

However, VMs will reduce performance of your instances, particularly if
your hypervisor and/or processor lacks support for hardware acceleration
of nested VMs.

.. note::

   If you choose to install on VMs, make sure your hypervisor provides
   a way to disable MAC address filtering on the provider network
   interface.

For more information about system requirements, see the `OpenStack
Operations Guide <https://docs.openstack.org/ops-guide/>`_.

.. toctree::
   :maxdepth: 1

   environment-security.rst
   environment-networking.rst
   environment-ntp.rst
   environment-packages.rst
   environment-sql-database.rst
   environment-messaging.rst
   environment-memcached.rst
88
doc/install-guide/source/environment-packages-debian.rst
Normal file
@@ -0,0 +1,88 @@
OpenStack packages
~~~~~~~~~~~~~~~~~~

Distributions release OpenStack packages as part of the distribution or
using other methods because of differing release schedules. Perform
these procedures on all nodes.

.. note::

   The setup of OpenStack packages described here needs to be done on
   all nodes: controller, compute, and Block Storage nodes.

.. warning::

   Your hosts must contain the latest versions of base installation
   packages available for your distribution before proceeding further.

.. note::

   Disable or remove any automatic update services because they can
   impact your OpenStack environment.

Enable the backports repository
-------------------------------

The Newton release is available directly through the official
Debian backports repository. To use this repository, follow
the instructions from the official
`Debian website <https://backports.debian.org/Instructions/>`_,
which basically suggest the following steps:

#. On all nodes, add the Debian 8 (Jessie) backports repository to
   the source list:

   .. code-block:: console

      # echo "deb http://http.debian.net/debian jessie-backports main" \
        >>/etc/apt/sources.list

   .. end

   .. note::

      Later you can use the following command to install a package:

      .. code-block:: console

         # apt -t jessie-backports install PACKAGE

      .. end

Finalize the installation
-------------------------

1. Upgrade the packages on all nodes:

   .. code-block:: console

      # apt update && apt dist-upgrade

   .. end

   .. note::

      If the upgrade process includes a new kernel, reboot your host
      to activate it.

2. Install the OpenStack client:

   .. code-block:: console

      # apt install python-openstackclient

   .. end
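The ``echo ... >> /etc/apt/sources.list`` step above appends the repository line unconditionally, so re-running it duplicates the entry. A hedged, idempotent variant against a scratch file standing in for ``/etc/apt/sources.list``:

```shell
#!/bin/sh
# Append the jessie-backports line only if it is not already present.
# A scratch temp file stands in for /etc/apt/sources.list here.
list=$(mktemp)
line='deb http://http.debian.net/debian jessie-backports main'

# Run the guarded append twice to show it never duplicates the entry.
for i in 1 2; do
    grep -qxF "$line" "$list" || printf '%s\n' "$line" >> "$list"
done

grep -cxF "$line" "$list"   # the line appears exactly once
rm -f "$list"
```

`grep -qxF` matches the whole line as a fixed string, which avoids both partial matches and regex surprises from the URL's dots and slashes.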
110
doc/install-guide/source/environment-packages-obs.rst
Normal file
@ -0,0 +1,110 @@
|
||||
OpenStack packages
~~~~~~~~~~~~~~~~~~

Distributions release OpenStack packages as part of the distribution or
using other methods because of differing release schedules. Perform
these procedures on all nodes.

.. note::

   The setup of OpenStack packages described here needs to be done on
   all nodes: controller, compute, and Block Storage nodes.

.. warning::

   Your hosts must contain the latest versions of base installation
   packages available for your distribution before proceeding further.

.. note::

   Disable or remove any automatic update services because they can
   impact your OpenStack environment.

Enable the OpenStack repository
-------------------------------

* Enable the Open Build Service repositories based on your openSUSE or
  SLES version:

  **On openSUSE:**

  .. code-block:: console

     # zypper addrepo -f obs://Cloud:OpenStack:Ocata/openSUSE_Leap_42.2 Ocata

  .. end

  .. note::

     The openSUSE distribution uses the concept of patterns to
     represent collections of packages. If you selected 'Minimal
     Server Selection (Text Mode)' during the initial installation,
     you may be presented with a dependency conflict when you
     attempt to install the OpenStack packages. To avoid this,
     remove the minimal\_base-conflicts package:

     .. code-block:: console

        # zypper rm patterns-openSUSE-minimal_base-conflicts

     .. end

  **On SLES:**

  .. code-block:: console

     # zypper addrepo -f obs://Cloud:OpenStack:Ocata/SLE_12_SP2 Ocata

  .. end

  .. note::

     The packages are signed by GPG key ``D85F9316``. You should
     verify the fingerprint of the imported GPG key before using it.

     .. code-block:: console

        Key Name:        Cloud:OpenStack OBS Project <Cloud:OpenStack@build.opensuse.org>
        Key Fingerprint: 35B34E18 ABC1076D 66D5A86B 893A90DA D85F9316
        Key Created:     2015-12-16T16:48:37 CET
        Key Expires:     2018-02-23T16:48:37 CET

     .. end
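When comparing the documented fingerprint against one printed by your key tooling, spacing and letter case often differ between tools. A hypothetical comparison helper (not part of the guide; the `seen` value below is an example pasted string, not real output):

```shell
# Normalize both fingerprints (drop spaces, uppercase hex) before comparing,
# so purely cosmetic formatting differences do not cause a false mismatch.
expected='35B34E18 ABC1076D 66D5A86B 893A90DA D85F9316'
seen='35b34e18abc1076d66d5a86b893a90dad85f9316'   # example value pasted by hand
normalize() { printf '%s' "$1" | tr -d ' ' | tr 'a-f' 'A-F'; }
if [ "$(normalize "$expected")" = "$(normalize "$seen")" ]; then
    echo "fingerprint matches"
else
    echo "fingerprint MISMATCH - do not trust the key"
fi
```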
Finalize the installation
-------------------------

1. Upgrade the packages on all nodes:

   .. code-block:: console

      # zypper refresh && zypper dist-upgrade

   .. end

   .. note::

      If the upgrade process includes a new kernel, reboot your host
      to activate it.

2. Install the OpenStack client:

   .. code-block:: console

      # zypper install python-openstackclient

   .. end
127 doc/install-guide/source/environment-packages-rdo.rst Normal file
@@ -0,0 +1,127 @@
OpenStack packages
~~~~~~~~~~~~~~~~~~

Distributions release OpenStack packages as part of the distribution or
using other methods because of differing release schedules. Perform
these procedures on all nodes.

.. note::

   The setup of OpenStack packages described here needs to be done on
   all nodes: controller, compute, and Block Storage nodes.

.. warning::

   Your hosts must contain the latest versions of base installation
   packages available for your distribution before proceeding further.

.. note::

   Disable or remove any automatic update services because they can
   impact your OpenStack environment.

Prerequisites
-------------

.. warning::

   We recommend disabling EPEL when using RDO packages, because updates
   in EPEL break backwards compatibility. Or, preferably, pin package
   versions using the ``yum-versionlock`` plugin.

.. note::

   The following steps apply to RHEL only. CentOS does not require these
   steps.

#. When using RHEL, it is assumed that you have registered your system using
   Red Hat Subscription Management and that you have the
   ``rhel-7-server-rpms`` repository enabled by default.

   For more information on registering the system, see the
   `Red Hat Enterprise Linux 7 System Administrator's Guide
   <https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/7/html/System_Administrators_Guide/part-Subscription_and_Support.html>`_.

#. In addition to ``rhel-7-server-rpms``, you also need to have the
   ``rhel-7-server-optional-rpms``, ``rhel-7-server-extras-rpms``, and
   ``rhel-7-server-rh-common-rpms`` repositories enabled:

   .. code-block:: console

      # subscription-manager repos --enable=rhel-7-server-optional-rpms \
        --enable=rhel-7-server-extras-rpms --enable=rhel-7-server-rh-common-rpms

   .. end

Enable the OpenStack repository
-------------------------------

* On CentOS, the ``extras`` repository provides the RPM that enables the
  OpenStack repository. CentOS includes the ``extras`` repository by
  default, so you can simply install the package to enable the OpenStack
  repository:

  .. code-block:: console

     # yum install centos-release-openstack-ocata

  .. end

* On RHEL, download and install the RDO repository RPM to enable the
  OpenStack repository:

  .. code-block:: console

     # yum install https://rdoproject.org/repos/rdo-release.rpm

  .. end

Finalize the installation
-------------------------

1. Upgrade the packages on all nodes:

   .. code-block:: console

      # yum upgrade

   .. end

   .. note::

      If the upgrade process includes a new kernel, reboot your host
      to activate it.

2. Install the OpenStack client:

   .. code-block:: console

      # yum install python-openstackclient

   .. end

3. RHEL and CentOS enable :term:`SELinux` by default. Install the
   ``openstack-selinux`` package to automatically manage security
   policies for OpenStack services:

   .. code-block:: console

      # yum install openstack-selinux

   .. end
69 doc/install-guide/source/environment-packages-ubuntu.rst Normal file
@@ -0,0 +1,69 @@
OpenStack packages
~~~~~~~~~~~~~~~~~~

Distributions release OpenStack packages as part of the distribution or
using other methods because of differing release schedules. Perform
these procedures on all nodes.

.. note::

   The setup of OpenStack packages described here needs to be done on
   all nodes: controller, compute, and Block Storage nodes.

.. warning::

   Your hosts must contain the latest versions of base installation
   packages available for your distribution before proceeding further.

.. note::

   Disable or remove any automatic update services because they can
   impact your OpenStack environment.

Enable the OpenStack repository
-------------------------------

.. code-block:: console

   # apt install software-properties-common
   # add-apt-repository cloud-archive:ocata

.. end

Finalize the installation
-------------------------

1. Upgrade the packages on all nodes:

   .. code-block:: console

      # apt update && apt dist-upgrade

   .. end

   .. note::

      If the upgrade process includes a new kernel, reboot your host
      to activate it.

2. Install the OpenStack client:

   .. code-block:: console

      # apt install python-openstackclient

   .. end
@@ -20,252 +20,7 @@ these procedures on all nodes.
    Disable or remove any automatic update services because they can
    impact your OpenStack environment.
 
-.. only:: ubuntu
+.. toctree::
+   :glob:
-
-   Enable the OpenStack repository
-   -------------------------------
-
-   .. code-block:: console
-
-      # apt install software-properties-common
-      # add-apt-repository cloud-archive:ocata
-
-   .. end
-
-.. endonly
-
-.. only:: rdo
-
-   Prerequisites
-   -------------
-
-   .. warning::
-
-      We recommend disabling EPEL when using RDO packages due to updates
-      in EPEL breaking backwards compatibility. Or, preferably pin package
-      versions using the ``yum-versionlock`` plugin.
-
-   .. note::
-
-      The following steps apply to RHEL only. CentOS does not require these
-      steps.
-
-   #. When using RHEL, it is assumed that you have registered your system using
-      Red Hat Subscription Management and that you have the
-      ``rhel-7-server-rpms`` repository enabled by default.
-
-      For more information on registering the system, see the
-      `Red Hat Enterprise Linux 7 System Administrator's Guide
-      <https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/7/html/System_Administrators_Guide/part-Subscription_and_Support.html>`_.
-
-   #. In addition to ``rhel-7-server-rpms``, you also need to have the
-      ``rhel-7-server-optional-rpms``, ``rhel-7-server-extras-rpms``, and
-      ``rhel-7-server-rh-common-rpms`` repositories enabled:
-
-      .. code-block:: console
-
-         # subscription-manager repos --enable=rhel-7-server-optional-rpms \
-           --enable=rhel-7-server-extras-rpms --enable=rhel-7-server-rh-common-rpms
-
-      .. end
-
-.. endonly
-
-.. only:: rdo
-
-   Enable the OpenStack repository
-   -------------------------------
-
-   * On CentOS, the ``extras`` repository provides the RPM that enables the
-     OpenStack repository. CentOS includes the ``extras`` repository by
-     default, so you can simply install the package to enable the OpenStack
-     repository.
-
-     .. code-block:: console
-
-        # yum install centos-release-openstack-ocata
-
-     .. end
-
-   * On RHEL, download and install the RDO repository RPM to enable the
-     OpenStack repository.
-
-     .. code-block:: console
-
-        # yum install https://rdoproject.org/repos/rdo-release.rpm
-
-     .. end
-
-.. only:: obs
-
-   Enable the OpenStack repository
-   -------------------------------
-
-   * Enable the Open Build Service repositories based on your openSUSE or
-     SLES version:
-
-     **On openSUSE:**
-
-     .. code-block:: console
-
-        # zypper addrepo -f obs://Cloud:OpenStack:Ocata/openSUSE_Leap_42.2 Ocata
-
-     .. end
-
-     .. note::
-
-        The openSUSE distribution uses the concept of patterns to
-        represent collections of packages. If you selected 'Minimal
-        Server Selection (Text Mode)' during the initial installation,
-        you may be presented with a dependency conflict when you
-        attempt to install the OpenStack packages. To avoid this,
-        remove the minimal\_base-conflicts package:
-
-        .. code-block:: console
-
-           # zypper rm patterns-openSUSE-minimal_base-conflicts
-
-        .. end
-
-     **On SLES:**
-
-     .. code-block:: console
-
-        # zypper addrepo -f obs://Cloud:OpenStack:Ocata/SLE_12_SP2 Ocata
-
-     .. end
-
-     .. note::
-
-        The packages are signed by GPG key ``D85F9316``. You should
-        verify the fingerprint of the imported GPG key before using it.
-
-        .. code-block:: console
-
-           Key Name:        Cloud:OpenStack OBS Project <Cloud:OpenStack@build.opensuse.org>
-           Key Fingerprint: 35B34E18 ABC1076D 66D5A86B 893A90DA D85F9316
-           Key Created:     2015-12-16T16:48:37 CET
-           Key Expires:     2018-02-23T16:48:37 CET
-
-        .. end
-
-.. endonly
-
-.. only:: debian
-
-   Enable the backports repository
-   -------------------------------
-
-   The Newton release is available directly through the official
-   Debian backports repository. To use this repository, follow
-   the instruction from the official
-   `Debian website <https://backports.debian.org/Instructions/>`_,
-   which basically suggest doing the following steps:
-
-   #. On all nodes, adding the Debian 8 (Jessie) backport repository to
-      the source list:
-
-      .. code-block:: console
-
-         # echo "deb http://http.debian.net/debian jessie-backports main" \
-           >>/etc/apt/sources.list
-
-      .. end
-
-      .. note::
-
-         Later you can use the following command to install a package:
-
-         .. code-block:: console
-
-            # apt -t jessie-backports install ``PACKAGE``
-
-         .. end
-
-.. endonly
-
-Finalize the installation
--------------------------
-
-1. Upgrade the packages on all nodes:
-
-   .. only:: ubuntu or debian
-
-      .. code-block:: console
-
-         # apt update && apt dist-upgrade
-
-      .. end
-
-   .. endonly
-
-   .. only:: rdo
-
-      .. code-block:: console
-
-         # yum upgrade
-
-      .. end
-
-   .. endonly
-
-   .. only:: obs
-
-      .. code-block:: console
-
-         # zypper refresh && zypper dist-upgrade
-
-      .. end
-
-   .. endonly
-
-   .. note::
-
-      If the upgrade process includes a new kernel, reboot your host
-      to activate it.
-
-2. Install the OpenStack client:
-
-   .. only:: debian or ubuntu
-
-      .. code-block:: console
-
-         # apt install python-openstackclient
-
-      .. end
-
-   .. endonly
-
-   .. only:: rdo
-
-      .. code-block:: console
-
-         # yum install python-openstackclient
-
-      .. end
-
-   .. endonly
-
-   .. only:: obs
-
-      .. code-block:: console
-
-         # zypper install python-openstackclient
-
-      .. end
-
-   .. endonly
-
-.. only:: rdo
-
-   3. RHEL and CentOS enable :term:`SELinux` by default. Install the
-      ``openstack-selinux`` package to automatically manage security
-      policies for OpenStack services:
-
-      .. code-block:: console
-
-         # yum install openstack-selinux
-
-      .. end
-
-.. endonly
+
+   environment-packages-*
76 doc/install-guide/source/environment-rdo.rst Normal file
@@ -0,0 +1,76 @@
===========
Environment
===========

This section explains how to configure the controller node and one compute
node using the example architecture.

Although most environments include Identity, Image service, Compute, at least
one networking service, and the Dashboard, the Object Storage service can
operate independently. If your use case only involves Object Storage, you can
skip to the `Object Storage Installation Guide
<https://docs.openstack.org/project-install-guide/object-storage/draft/>`_
after configuring the appropriate nodes for it.

You must use an account with administrative privileges to configure each node.
Either run the commands as the ``root`` user or configure the ``sudo``
utility.

For best performance, we recommend that your environment meets or exceeds
the hardware requirements in :ref:`figure-hwreqs`.

The following minimum requirements should support a proof-of-concept
environment with core services and several :term:`CirrOS` instances:

* Controller Node: 1 processor, 4 GB memory, and 5 GB storage

* Compute Node: 1 processor, 2 GB memory, and 10 GB storage
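The minimums above are easy to check before installing anything. A rough pre-flight sketch (illustrative, not a guide step; the controller thresholds of 1 CPU and 4 GB RAM are taken from the list above, and the script assumes a Linux host with `nproc` and `/proc/meminfo`):

```shell
# Compare this host's CPU count and total memory against the
# controller-node minimums listed in the guide.
min_cpus=1
min_mem_kb=$((4 * 1024 * 1024))   # 4 GB, expressed in kB as /proc/meminfo reports
cpus=$(nproc)
mem_kb=$(awk '/^MemTotal:/ {print $2}' /proc/meminfo)
if [ "$cpus" -ge "$min_cpus" ] && [ "$mem_kb" -ge "$min_mem_kb" ]; then
    echo "host meets controller minimums"
else
    echo "host is below controller minimums"
fi
```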
As the number of OpenStack services and virtual machines increases, so do
the hardware requirements for the best performance. If performance degrades
after enabling additional services or virtual machines, consider adding
hardware resources to your environment.

To minimize clutter and provide more resources for OpenStack, we recommend
a minimal installation of your Linux distribution. Also, you must install a
64-bit version of your distribution on each node.

A single disk partition on each node works for most basic installations.
However, you should consider :term:`Logical Volume Manager (LVM)` for
installations with optional services such as Block Storage.

For first-time installation and testing purposes, many users choose to build
each host as a :term:`virtual machine (VM)`. The primary benefits of VMs
include the following:

* One physical server can support multiple nodes, each with almost any
  number of network interfaces.

* Ability to take periodic "snapshots" throughout the installation
  process and "roll back" to a working configuration in the event of a
  problem.

However, VMs will reduce performance of your instances, particularly if
your hypervisor and/or processor lacks support for hardware acceleration
of nested VMs.

.. note::

   If you choose to install on VMs, make sure your hypervisor provides
   a way to disable MAC address filtering on the provider network
   interface.

For more information about system requirements, see the `OpenStack
Operations Guide <https://docs.openstack.org/ops-guide/>`_.

.. toctree::
   :maxdepth: 1

   environment-security.rst
   environment-networking.rst
   environment-ntp.rst
   environment-packages.rst
   environment-sql-database.rst
   environment-messaging.rst
   environment-memcached.rst
68 doc/install-guide/source/environment-sql-database-debian.rst Normal file
@@ -0,0 +1,68 @@
SQL database
~~~~~~~~~~~~

Most OpenStack services use an SQL database to store information. The
database typically runs on the controller node. The procedures in this
guide use MariaDB or MySQL depending on the distribution. OpenStack
services also support other SQL databases including
`PostgreSQL <https://www.postgresql.org/>`__.

Install and configure components
--------------------------------

#. Install the packages:

   .. code-block:: console

      # apt install mysql-server python-pymysql

   .. end

2. Create and edit the ``/etc/mysql/conf.d/openstack.cnf`` file
   and complete the following actions:

   - Create a ``[mysqld]`` section, and set the ``bind-address``
     key to the management IP address of the controller node to
     enable access by other nodes via the management network. Set
     additional keys to enable useful options and the UTF-8
     character set:

     .. path /etc/mysql/conf.d/openstack.cnf
     .. code-block:: ini

        [mysqld]
        bind-address = 10.0.0.11

        default-storage-engine = innodb
        innodb_file_per_table = on
        max_connections = 4096
        collation-server = utf8_general_ci
        character-set-server = utf8

     .. end
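A quick sanity check on the option file is cheap and catches copy-paste mistakes before the service is restarted. An illustrative sketch (not an official step; `/tmp/openstack.cnf` stands in for the real `/etc/mysql/conf.d/openstack.cnf`, and `10.0.0.11` is the example management IP from the guide):

```shell
# Write a copy of the option file, then extract bind-address and confirm
# it matches the controller's management IP.
cat > /tmp/openstack.cnf <<'EOF'
[mysqld]
bind-address = 10.0.0.11
default-storage-engine = innodb
EOF
awk -F' = ' '$1 == "bind-address" {print $2}' /tmp/openstack.cnf
```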
Finalize installation
---------------------

#. Restart the database service:

   .. code-block:: console

      # service mysql restart

   .. end
82 doc/install-guide/source/environment-sql-database-obs.rst Normal file
@@ -0,0 +1,82 @@
SQL database
~~~~~~~~~~~~

Most OpenStack services use an SQL database to store information. The
database typically runs on the controller node. The procedures in this
guide use MariaDB or MySQL depending on the distribution. OpenStack
services also support other SQL databases including
`PostgreSQL <https://www.postgresql.org/>`__.

Install and configure components
--------------------------------

#. Install the packages:

   .. code-block:: console

      # zypper install mariadb-client mariadb python-PyMySQL

   .. end

2. Create and edit the ``/etc/my.cnf.d/openstack.cnf`` file
   and complete the following actions:

   - Create a ``[mysqld]`` section, and set the ``bind-address``
     key to the management IP address of the controller node to
     enable access by other nodes via the management network. Set
     additional keys to enable useful options and the UTF-8
     character set:

     .. path /etc/my.cnf.d/openstack.cnf
     .. code-block:: ini

        [mysqld]
        bind-address = 10.0.0.11

        default-storage-engine = innodb
        innodb_file_per_table = on
        max_connections = 4096
        collation-server = utf8_general_ci
        character-set-server = utf8

     .. end

Finalize installation
---------------------

#. Start the database service and configure it to start when the system
   boots:

   .. code-block:: console

      # systemctl enable mysql.service
      # systemctl start mysql.service

   .. end

2. Secure the database service by running the ``mysql_secure_installation``
   script. In particular, choose a suitable password for the database
   ``root`` account:

   .. code-block:: console

      # mysql_secure_installation

   .. end
82 doc/install-guide/source/environment-sql-database-rdo.rst Normal file
@@ -0,0 +1,82 @@
SQL database
~~~~~~~~~~~~

Most OpenStack services use an SQL database to store information. The
database typically runs on the controller node. The procedures in this
guide use MariaDB or MySQL depending on the distribution. OpenStack
services also support other SQL databases including
`PostgreSQL <https://www.postgresql.org/>`__.

Install and configure components
--------------------------------

#. Install the packages:

   .. code-block:: console

      # yum install mariadb mariadb-server python2-PyMySQL

   .. end

2. Create and edit the ``/etc/my.cnf.d/openstack.cnf`` file
   and complete the following actions:

   - Create a ``[mysqld]`` section, and set the ``bind-address``
     key to the management IP address of the controller node to
     enable access by other nodes via the management network. Set
     additional keys to enable useful options and the UTF-8
     character set:

     .. path /etc/my.cnf.d/openstack.cnf
     .. code-block:: ini

        [mysqld]
        bind-address = 10.0.0.11

        default-storage-engine = innodb
        innodb_file_per_table = on
        max_connections = 4096
        collation-server = utf8_general_ci
        character-set-server = utf8

     .. end

Finalize installation
---------------------

#. Start the database service and configure it to start when the system
   boots:

   .. code-block:: console

      # systemctl enable mariadb.service
      # systemctl start mariadb.service

   .. end

2. Secure the database service by running the ``mysql_secure_installation``
   script. In particular, choose a suitable password for the database
   ``root`` account:

   .. code-block:: console

      # mysql_secure_installation

   .. end
87 doc/install-guide/source/environment-sql-database-ubuntu.rst Normal file
@@ -0,0 +1,87 @@
SQL database
~~~~~~~~~~~~

Most OpenStack services use an SQL database to store information. The
database typically runs on the controller node. The procedures in this
guide use MariaDB or MySQL depending on the distribution. OpenStack
services also support other SQL databases including
`PostgreSQL <https://www.postgresql.org/>`__.

.. note::

   As of Ubuntu 16.04, MariaDB was changed to use
   the "unix_socket Authentication Plugin". Local authentication is
   now performed using the user credentials (UID), and password
   authentication is no longer used by default. This means that
   the root user no longer uses a password for local access to
   the server.

Install and configure components
--------------------------------

#. Install the packages:

   .. code-block:: console

      # apt install mariadb-server python-pymysql

   .. end

2. Create and edit the ``/etc/mysql/mariadb.conf.d/99-openstack.cnf`` file
   and complete the following actions:

   - Create a ``[mysqld]`` section, and set the ``bind-address``
     key to the management IP address of the controller node to
     enable access by other nodes via the management network. Set
     additional keys to enable useful options and the UTF-8
     character set:

     .. code-block:: ini

        [mysqld]
        bind-address = 10.0.0.11

        default-storage-engine = innodb
        innodb_file_per_table = on
        max_connections = 4096
        collation-server = utf8_general_ci
        character-set-server = utf8

     .. end

Finalize installation
---------------------

#. Restart the database service:

   .. code-block:: console

      # service mysql restart

   .. end

2. Secure the database service by running the ``mysql_secure_installation``
   script. In particular, choose a suitable password for the database
   ``root`` account:

   .. code-block:: console

      # mysql_secure_installation

   .. end
@ -7,195 +7,8 @@ guide use MariaDB or MySQL depending on the distribution. OpenStack
|
||||
services also support other SQL databases including
|
||||
`PostgreSQL <https://www.postgresql.org/>`__.
|
||||
|
||||
.. only:: ubuntu
|
||||
.. toctree::
|
||||
:glob:
|
||||
|
||||
.. note::
|
||||
environment-sql-database-*
|
||||
|
||||
As of Ubuntu 16.04, MariaDB was changed to use
|
||||
the "unix_socket Authentication Plugin". Local authentication is
|
||||
now performed using the user credentials (UID), and password
|
||||
authentication is no longer used by default. This means that
|
||||
the root user no longer uses a password for local access to
|
||||
the server.
|
||||
|
||||
.. endonly
|
||||
|
||||
Install and configure components
|
||||
--------------------------------
|
||||
|
||||
#. Install the packages:
|
||||
|
||||
.. only:: ubuntu
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
# apt install mariadb-server python-pymysql
|
||||
|
||||
.. end
|
||||
|
||||
.. endonly
|
||||
|
||||
.. only:: debian
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
# apt install mysql-server python-pymysql
|
||||
|
||||
.. end
|
||||
|
||||
.. endonly
|
||||
|
||||
.. only:: rdo
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
# yum install mariadb mariadb-server python2-PyMySQL
|
||||
|
||||
.. end
|
||||
|
||||
.. endonly
|
||||
|
||||
.. only:: obs
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
# zypper install mariadb-client mariadb python-PyMySQL
|
||||
|
||||
.. end
|
||||
|
||||
.. endonly
|
||||
|
||||
.. only:: debian
|
||||
|
||||
2. Create and edit the ``/etc/mysql/conf.d/openstack.cnf`` file
|
||||
and complete the following actions:
|
||||
|
||||
- Create a ``[mysqld]`` section, and set the ``bind-address``
|
||||
key to the management IP address of the controller node to
|
||||
enable access by other nodes via the management network. Set
|
||||
additional keys to enable useful options and the UTF-8
|
||||
character set:
|
||||
|
||||
.. path /etc/mysql/conf.d/openstack.cnf
|
||||
.. code-block:: ini
|
||||
|
||||
[mysqld]
|
||||
bind-address = 10.0.0.11
|
||||
|
||||
default-storage-engine = innodb
|
||||
innodb_file_per_table = on
|
||||
max_connections = 4096
|
||||
collation-server = utf8_general_ci
|
||||
character-set-server = utf8
|
||||
|
||||
.. end
|
||||
|
||||
.. endonly
|
||||
|
||||
.. only:: ubuntu
|
||||
|
||||
   2. Create and edit the ``/etc/mysql/mariadb.conf.d/99-openstack.cnf`` file
      and complete the following actions:

      - Create a ``[mysqld]`` section, and set the ``bind-address``
        key to the management IP address of the controller node to
        enable access by other nodes via the management network. Set
        additional keys to enable useful options and the UTF-8
        character set:

        .. code-block:: ini

           [mysqld]
           bind-address = 10.0.0.11

           default-storage-engine = innodb
           innodb_file_per_table = on
           max_connections = 4096
           collation-server = utf8_general_ci
           character-set-server = utf8

        .. end

.. endonly

.. only:: obs or rdo

   2. Create and edit the ``/etc/my.cnf.d/openstack.cnf`` file
      and complete the following actions:

      - Create a ``[mysqld]`` section, and set the ``bind-address``
        key to the management IP address of the controller node to
        enable access by other nodes via the management network. Set
        additional keys to enable useful options and the UTF-8
        character set:

        .. path /etc/my.cnf.d/openstack.cnf
        .. code-block:: ini

           [mysqld]
           bind-address = 10.0.0.11

           default-storage-engine = innodb
           innodb_file_per_table = on
           max_connections = 4096
           collation-server = utf8_general_ci
           character-set-server = utf8

        .. end

.. endonly

Finalize installation
---------------------

.. only:: ubuntu or debian

   #. Restart the database service:

      .. code-block:: console

         # service mysql restart

      .. end

.. endonly

.. only:: rdo or obs

   #. Start the database service and configure it to start when the system
      boots:

      .. only:: rdo

         .. code-block:: console

            # systemctl enable mariadb.service
            # systemctl start mariadb.service

         .. end

      .. endonly

      .. only:: obs

         .. code-block:: console

            # systemctl enable mysql.service
            # systemctl start mysql.service

         .. end

      .. endonly

.. only:: rdo or obs or ubuntu

   2. Secure the database service by running the ``mysql_secure_installation``
      script. In particular, choose a suitable password for the database
      ``root`` account:

      .. code-block:: console

         # mysql_secure_installation

      .. end

.. endonly
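
Before restarting the database service, you can optionally confirm that the
option file contains the settings above. This check is not part of the guide;
the helper below is illustrative and uses only the Python standard library:

```python
# Illustrative sanity check (not part of the guide): parse the MariaDB
# option file written above and confirm the [mysqld] keys are present.
import configparser

def check_openstack_cnf(path):
    cfg = configparser.ConfigParser()
    with open(path) as f:
        cfg.read_file(f)
    mysqld = cfg["mysqld"]
    # bind-address must be the controller's management IP, not 0.0.0.0
    assert mysqld["bind-address"] == "10.0.0.11"
    assert mysqld["max_connections"] == "4096"
    assert mysqld["character-set-server"] == "utf8"
    return dict(mysqld)
```

Point the helper at ``/etc/mysql/mariadb.conf.d/99-openstack.cnf`` on
Ubuntu/Debian, or at ``/etc/my.cnf.d/openstack.cnf`` on RDO and openSUSE.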
76
doc/install-guide/source/environment-ubuntu.rst
Normal file
@ -0,0 +1,76 @@
===========
Environment
===========

This section explains how to configure the controller node and one compute
node using the example architecture.

Although most environments include Identity, Image service, Compute, at least
one networking service, and the Dashboard, the Object Storage service can
operate independently. If your use case only involves Object Storage, you can
skip to the `Object Storage Installation Guide
<https://docs.openstack.org/project-install-guide/object-storage/draft/>`_
after configuring the appropriate nodes for it.

You must use an account with administrative privileges to configure each node.
Either run the commands as the ``root`` user or configure the ``sudo``
utility.

For best performance, we recommend that your environment meets or exceeds
the hardware requirements in :ref:`figure-hwreqs`.

The following minimum requirements should support a proof-of-concept
environment with core services and several :term:`CirrOS` instances:

* Controller Node: 1 processor, 4 GB memory, and 5 GB storage

* Compute Node: 1 processor, 2 GB memory, and 10 GB storage

As the number of OpenStack services and virtual machines increases, so do the
hardware requirements for the best performance. If performance degrades after
enabling additional services or virtual machines, consider adding hardware
resources to your environment.

To minimize clutter and provide more resources for OpenStack, we recommend
a minimal installation of your Linux distribution. Also, you must install a
64-bit version of your distribution on each node.

A single disk partition on each node works for most basic installations.
However, you should consider :term:`Logical Volume Manager (LVM)` for
installations with optional services such as Block Storage.

For first-time installation and testing purposes, many users choose to build
each host as a :term:`virtual machine (VM)`. The primary benefits of VMs
include the following:

* One physical server can support multiple nodes, each with almost any
  number of network interfaces.

* Ability to take periodic "snapshots" throughout the installation
  process and "roll back" to a working configuration in the event of a
  problem.

However, VMs will reduce the performance of your instances, particularly if
your hypervisor and/or processor lacks support for hardware acceleration
of nested VMs.

.. note::

   If you choose to install on VMs, make sure your hypervisor provides
   a way to disable MAC address filtering on the provider network
   interface.

For more information about system requirements, see the `OpenStack
Operations Guide <https://docs.openstack.org/ops-guide/>`_.

.. toctree::
   :maxdepth: 1

   environment-security.rst
   environment-networking.rst
   environment-ntp.rst
   environment-packages.rst
   environment-sql-database.rst
   environment-messaging.rst
   environment-memcached.rst
@ -4,82 +4,9 @@
Environment
===========

This section explains how to configure the controller node and one compute
node using the example architecture.

Although most environments include Identity, Image service, Compute, at least
one networking service, and the Dashboard, the Object Storage service can
operate independently. If your use case only involves Object Storage, you can
skip to the `Object Storage Installation Guide
<https://docs.openstack.org/project-install-guide/object-storage/draft/>`_
after configuring the appropriate nodes for it.

You must use an account with administrative privileges to configure each node.
Either run the commands as the ``root`` user or configure the ``sudo``
utility.

.. only:: obs

   The :command:`systemctl enable` call on openSUSE outputs a warning message
   when the service uses SysV Init scripts instead of native systemd files.
   This warning can be ignored.

.. endonly

For best performance, we recommend that your environment meets or exceeds
the hardware requirements in :ref:`figure-hwreqs`.

The following minimum requirements should support a proof-of-concept
environment with core services and several :term:`CirrOS` instances:

* Controller Node: 1 processor, 4 GB memory, and 5 GB storage

* Compute Node: 1 processor, 2 GB memory, and 10 GB storage

As the number of OpenStack services and virtual machines increases, so do the
hardware requirements for the best performance. If performance degrades after
enabling additional services or virtual machines, consider adding hardware
resources to your environment.

To minimize clutter and provide more resources for OpenStack, we recommend
a minimal installation of your Linux distribution. Also, you must install a
64-bit version of your distribution on each node.

A single disk partition on each node works for most basic installations.
However, you should consider :term:`Logical Volume Manager (LVM)` for
installations with optional services such as Block Storage.

For first-time installation and testing purposes, many users choose to build
each host as a :term:`virtual machine (VM)`. The primary benefits of VMs
include the following:

* One physical server can support multiple nodes, each with almost any
  number of network interfaces.

* Ability to take periodic "snapshots" throughout the installation
  process and "roll back" to a working configuration in the event of a
  problem.

However, VMs will reduce the performance of your instances, particularly if
your hypervisor and/or processor lacks support for hardware acceleration
of nested VMs.

.. note::

   If you choose to install on VMs, make sure your hypervisor provides
   a way to disable MAC address filtering on the provider network
   interface.

For more information about system requirements, see the `OpenStack
Operations Guide <https://docs.openstack.org/ops-guide/>`_.

.. toctree::
   :maxdepth: 1

   environment-security.rst
   environment-networking.rst
   environment-ntp.rst
   environment-packages.rst
   environment-sql-database.rst
   environment-messaging.rst
   environment-memcached.rst
   environment-debian
   environment-obs
   environment-rdo
   environment-ubuntu
329
doc/install-guide/source/glance-install-debian.rst
Normal file
@ -0,0 +1,329 @@
Install and configure
~~~~~~~~~~~~~~~~~~~~~

This section describes how to install and configure the Image service,
code-named glance, on the controller node. For simplicity, this
configuration stores images on the local file system.

Prerequisites
-------------

Before you install and configure the Image service, you must
create a database, service credentials, and API endpoints.

#. To create the database, complete these steps:

   * Use the database access client to connect to the database
     server as the ``root`` user:

     .. code-block:: console

        $ mysql -u root -p

     .. end

   * Create the ``glance`` database:

     .. code-block:: console

        MariaDB [(none)]> CREATE DATABASE glance;

     .. end

   * Grant proper access to the ``glance`` database:

     .. code-block:: console

        MariaDB [(none)]> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' \
          IDENTIFIED BY 'GLANCE_DBPASS';
        MariaDB [(none)]> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' \
          IDENTIFIED BY 'GLANCE_DBPASS';

     .. end

     Replace ``GLANCE_DBPASS`` with a suitable password.

   * Exit the database access client.

#. Source the ``admin`` credentials to gain access to
   admin-only CLI commands:

   .. code-block:: console

      $ . admin-openrc

   .. end

#. To create the service credentials, complete these steps:

   * Create the ``glance`` user:

     .. code-block:: console

        $ openstack user create --domain default --password-prompt glance

        User Password:
        Repeat User Password:
        +---------------------+----------------------------------+
        | Field               | Value                            |
        +---------------------+----------------------------------+
        | domain_id           | default                          |
        | enabled             | True                             |
        | id                  | 3f4e777c4062483ab8d9edd7dff829df |
        | name                | glance                           |
        | options             | {}                               |
        | password_expires_at | None                             |
        +---------------------+----------------------------------+

     .. end

   * Add the ``admin`` role to the ``glance`` user and
     ``service`` project:

     .. code-block:: console

        $ openstack role add --project service --user glance admin

     .. end

     .. note::

        This command provides no output.

   * Create the ``glance`` service entity:

     .. code-block:: console

        $ openstack service create --name glance \
          --description "OpenStack Image" image

        +-------------+----------------------------------+
        | Field       | Value                            |
        +-------------+----------------------------------+
        | description | OpenStack Image                  |
        | enabled     | True                             |
        | id          | 8c2c7f1b9b5049ea9e63757b5533e6d2 |
        | name        | glance                           |
        | type        | image                            |
        +-------------+----------------------------------+

     .. end

#. Create the Image service API endpoints:

   .. code-block:: console

      $ openstack endpoint create --region RegionOne \
        image public http://controller:9292

      +--------------+----------------------------------+
      | Field        | Value                            |
      +--------------+----------------------------------+
      | enabled      | True                             |
      | id           | 340be3625e9b4239a6415d034e98aace |
      | interface    | public                           |
      | region       | RegionOne                        |
      | region_id    | RegionOne                        |
      | service_id   | 8c2c7f1b9b5049ea9e63757b5533e6d2 |
      | service_name | glance                           |
      | service_type | image                            |
      | url          | http://controller:9292           |
      +--------------+----------------------------------+

      $ openstack endpoint create --region RegionOne \
        image internal http://controller:9292

      +--------------+----------------------------------+
      | Field        | Value                            |
      +--------------+----------------------------------+
      | enabled      | True                             |
      | id           | a6e4b153c2ae4c919eccfdbb7dceb5d2 |
      | interface    | internal                         |
      | region       | RegionOne                        |
      | region_id    | RegionOne                        |
      | service_id   | 8c2c7f1b9b5049ea9e63757b5533e6d2 |
      | service_name | glance                           |
      | service_type | image                            |
      | url          | http://controller:9292           |
      +--------------+----------------------------------+

      $ openstack endpoint create --region RegionOne \
        image admin http://controller:9292

      +--------------+----------------------------------+
      | Field        | Value                            |
      +--------------+----------------------------------+
      | enabled      | True                             |
      | id           | 0c37ed58103f4300a84ff125a539032d |
      | interface    | admin                            |
      | region       | RegionOne                        |
      | region_id    | RegionOne                        |
      | service_id   | 8c2c7f1b9b5049ea9e63757b5533e6d2 |
      | service_name | glance                           |
      | service_type | image                            |
      | url          | http://controller:9292           |
      +--------------+----------------------------------+

   .. end
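
Beyond eyeballing the tables above, the endpoint set can be checked
programmatically. This is not part of the guide; the helper below assumes
JSON produced by ``openstack endpoint list --service image -f json``, whose
column names ("Service Name", "Interface", "URL") follow the
openstackclient formatter:

```python
# Illustrative helper (not part of the guide): given the JSON emitted by
# ``openstack endpoint list --service image -f json``, verify that the
# public, internal, and admin endpoints all use the expected URL.
import json

def check_image_endpoints(json_text, expected_url="http://controller:9292"):
    rows = json.loads(json_text)
    found = {row["Interface"]: row["URL"]
             for row in rows if row.get("Service Name") == "glance"}
    missing = {"public", "internal", "admin"} - set(found)
    assert not missing, "missing endpoints: %s" % sorted(missing)
    assert all(url == expected_url for url in found.values())
    return found
```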
Install and configure components
--------------------------------

.. include:: shared/note_configuration_vary_by_distribution.rst

#. Install the packages:

   .. code-block:: console

      # apt install glance

   .. end

2. Edit the ``/etc/glance/glance-api.conf`` file and complete the
   following actions:

   * In the ``[database]`` section, configure database access:

     .. path /etc/glance/glance-api.conf
     .. code-block:: ini

        [database]
        # ...
        connection = mysql+pymysql://glance:GLANCE_DBPASS@controller/glance

     .. end

     Replace ``GLANCE_DBPASS`` with the password you chose for the
     Image service database.

   * In the ``[keystone_authtoken]`` and ``[paste_deploy]`` sections,
     configure Identity service access:

     .. path /etc/glance/glance-api.conf
     .. code-block:: ini

        [keystone_authtoken]
        # ...
        auth_uri = http://controller:5000
        auth_url = http://controller:35357
        memcached_servers = controller:11211
        auth_type = password
        project_domain_name = default
        user_domain_name = default
        project_name = service
        username = glance
        password = GLANCE_PASS

        [paste_deploy]
        # ...
        flavor = keystone

     .. end

     Replace ``GLANCE_PASS`` with the password you chose for the
     ``glance`` user in the Identity service.

     .. note::

        Comment out or remove any other options in the
        ``[keystone_authtoken]`` section.

   * In the ``[glance_store]`` section, configure the local file
     system store and location of image files:

     .. path /etc/glance/glance-api.conf
     .. code-block:: ini

        [glance_store]
        # ...
        stores = file,http
        default_store = file
        filesystem_store_datadir = /var/lib/glance/images/

     .. end

3. Edit the ``/etc/glance/glance-registry.conf`` file and complete
   the following actions:

   * In the ``[database]`` section, configure database access:

     .. path /etc/glance/glance-registry.conf
     .. code-block:: ini

        [database]
        # ...
        connection = mysql+pymysql://glance:GLANCE_DBPASS@controller/glance

     .. end

     Replace ``GLANCE_DBPASS`` with the password you chose for the
     Image service database.

   * In the ``[keystone_authtoken]`` and ``[paste_deploy]`` sections,
     configure Identity service access:

     .. path /etc/glance/glance-registry.conf
     .. code-block:: ini

        [keystone_authtoken]
        # ...
        auth_uri = http://controller:5000
        auth_url = http://controller:35357
        memcached_servers = controller:11211
        auth_type = password
        project_domain_name = default
        user_domain_name = default
        project_name = service
        username = glance
        password = GLANCE_PASS

        [paste_deploy]
        # ...
        flavor = keystone

     .. end

     Replace ``GLANCE_PASS`` with the password you chose for the
     ``glance`` user in the Identity service.

     .. note::

        Comment out or remove any other options in the
        ``[keystone_authtoken]`` section.

4. Populate the Image service database:

   .. code-block:: console

      # su -s /bin/sh -c "glance-manage db_sync" glance

   .. end

   .. note::

      Ignore any deprecation messages in this output.

Finalize installation
---------------------

#. Restart the Image services:

   .. code-block:: console

      # service glance-registry restart
      # service glance-api restart

   .. end
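
A caveat on the ``connection`` option used in the configuration above: the
password is embedded in a URL, so reserved characters such as ``@`` or ``/``
in ``GLANCE_DBPASS`` must be percent-encoded. A minimal sketch of how the
URL is composed (the helper name is illustrative, not an OpenStack API):

```python
# Illustrative sketch (not part of the guide): compose the SQLAlchemy URL
# used by the [database] ``connection`` option, percent-encoding the
# password so characters such as '@' or '/' do not break URL parsing.
from urllib.parse import quote_plus

def glance_db_url(password, user="glance", host="controller", db="glance"):
    return "mysql+pymysql://%s:%s@%s/%s" % (user, quote_plus(password), host, db)
```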
333
doc/install-guide/source/glance-install-obs.rst
Normal file
@ -0,0 +1,333 @@
Install and configure
~~~~~~~~~~~~~~~~~~~~~

This section describes how to install and configure the Image service,
code-named glance, on the controller node. For simplicity, this
configuration stores images on the local file system.

Prerequisites
-------------

Before you install and configure the Image service, you must
create a database, service credentials, and API endpoints.

#. To create the database, complete these steps:

   * Use the database access client to connect to the database
     server as the ``root`` user:

     .. code-block:: console

        $ mysql -u root -p

     .. end

   * Create the ``glance`` database:

     .. code-block:: console

        MariaDB [(none)]> CREATE DATABASE glance;

     .. end

   * Grant proper access to the ``glance`` database:

     .. code-block:: console

        MariaDB [(none)]> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' \
          IDENTIFIED BY 'GLANCE_DBPASS';
        MariaDB [(none)]> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' \
          IDENTIFIED BY 'GLANCE_DBPASS';

     .. end

     Replace ``GLANCE_DBPASS`` with a suitable password.

   * Exit the database access client.

#. Source the ``admin`` credentials to gain access to
   admin-only CLI commands:

   .. code-block:: console

      $ . admin-openrc

   .. end

#. To create the service credentials, complete these steps:

   * Create the ``glance`` user:

     .. code-block:: console

        $ openstack user create --domain default --password-prompt glance

        User Password:
        Repeat User Password:
        +---------------------+----------------------------------+
        | Field               | Value                            |
        +---------------------+----------------------------------+
        | domain_id           | default                          |
        | enabled             | True                             |
        | id                  | 3f4e777c4062483ab8d9edd7dff829df |
        | name                | glance                           |
        | options             | {}                               |
        | password_expires_at | None                             |
        +---------------------+----------------------------------+

     .. end

   * Add the ``admin`` role to the ``glance`` user and
     ``service`` project:

     .. code-block:: console

        $ openstack role add --project service --user glance admin

     .. end

     .. note::

        This command provides no output.

   * Create the ``glance`` service entity:

     .. code-block:: console

        $ openstack service create --name glance \
          --description "OpenStack Image" image

        +-------------+----------------------------------+
        | Field       | Value                            |
        +-------------+----------------------------------+
        | description | OpenStack Image                  |
        | enabled     | True                             |
        | id          | 8c2c7f1b9b5049ea9e63757b5533e6d2 |
        | name        | glance                           |
        | type        | image                            |
        +-------------+----------------------------------+

     .. end

#. Create the Image service API endpoints:

   .. code-block:: console

      $ openstack endpoint create --region RegionOne \
        image public http://controller:9292

      +--------------+----------------------------------+
      | Field        | Value                            |
      +--------------+----------------------------------+
      | enabled      | True                             |
      | id           | 340be3625e9b4239a6415d034e98aace |
      | interface    | public                           |
      | region       | RegionOne                        |
      | region_id    | RegionOne                        |
      | service_id   | 8c2c7f1b9b5049ea9e63757b5533e6d2 |
      | service_name | glance                           |
      | service_type | image                            |
      | url          | http://controller:9292           |
      +--------------+----------------------------------+

      $ openstack endpoint create --region RegionOne \
        image internal http://controller:9292

      +--------------+----------------------------------+
      | Field        | Value                            |
      +--------------+----------------------------------+
      | enabled      | True                             |
      | id           | a6e4b153c2ae4c919eccfdbb7dceb5d2 |
      | interface    | internal                         |
      | region       | RegionOne                        |
      | region_id    | RegionOne                        |
      | service_id   | 8c2c7f1b9b5049ea9e63757b5533e6d2 |
      | service_name | glance                           |
      | service_type | image                            |
      | url          | http://controller:9292           |
      +--------------+----------------------------------+

      $ openstack endpoint create --region RegionOne \
        image admin http://controller:9292

      +--------------+----------------------------------+
      | Field        | Value                            |
      +--------------+----------------------------------+
      | enabled      | True                             |
      | id           | 0c37ed58103f4300a84ff125a539032d |
      | interface    | admin                            |
      | region       | RegionOne                        |
      | region_id    | RegionOne                        |
      | service_id   | 8c2c7f1b9b5049ea9e63757b5533e6d2 |
      | service_name | glance                           |
      | service_type | image                            |
      | url          | http://controller:9292           |
      +--------------+----------------------------------+

   .. end

Install and configure components
--------------------------------

.. include:: shared/note_configuration_vary_by_distribution.rst

.. note::

   Starting with the Newton release, SUSE OpenStack packages are shipping
   with the upstream default configuration files, for example
   ``/etc/glance/glance-api.conf`` or
   ``/etc/glance/glance-registry.conf``, with customizations in
   ``/etc/glance/glance-api.conf.d/`` or
   ``/etc/glance/glance-registry.conf.d/``. While the following
   instructions modify the default configuration files, adding new files
   in ``/etc/glance/glance-api.conf.d`` or
   ``/etc/glance/glance-registry.conf.d`` achieves the same result.
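
Following the note above, a setting could be carried in a drop-in snippet
instead of the main file. The file name below is illustrative; any snippet
placed in the directory is merged in:

.. code-block:: ini

   # Illustrative drop-in, e.g. /etc/glance/glance-api.conf.d/010-database.conf
   # (the file name is an example; see the note above)
   [database]
   connection = mysql+pymysql://glance:GLANCE_DBPASS@controller/glance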

#. Install the packages:

   .. code-block:: console

      # zypper install openstack-glance \
        openstack-glance-api openstack-glance-registry

   .. end

2. Edit the ``/etc/glance/glance-api.conf`` file and complete the
   following actions:

   * In the ``[database]`` section, configure database access:

     .. path /etc/glance/glance-api.conf
     .. code-block:: ini

        [database]
        # ...
        connection = mysql+pymysql://glance:GLANCE_DBPASS@controller/glance

     .. end

     Replace ``GLANCE_DBPASS`` with the password you chose for the
     Image service database.

   * In the ``[keystone_authtoken]`` and ``[paste_deploy]`` sections,
     configure Identity service access:

     .. path /etc/glance/glance-api.conf
     .. code-block:: ini

        [keystone_authtoken]
        # ...
        auth_uri = http://controller:5000
        auth_url = http://controller:35357
        memcached_servers = controller:11211
        auth_type = password
        project_domain_name = default
        user_domain_name = default
        project_name = service
        username = glance
        password = GLANCE_PASS

        [paste_deploy]
        # ...
        flavor = keystone

     .. end

     Replace ``GLANCE_PASS`` with the password you chose for the
     ``glance`` user in the Identity service.

     .. note::

        Comment out or remove any other options in the
        ``[keystone_authtoken]`` section.

   * In the ``[glance_store]`` section, configure the local file
     system store and location of image files:

     .. path /etc/glance/glance-api.conf
     .. code-block:: ini

        [glance_store]
        # ...
        stores = file,http
        default_store = file
        filesystem_store_datadir = /var/lib/glance/images/

     .. end

3. Edit the ``/etc/glance/glance-registry.conf`` file and complete
   the following actions:

   * In the ``[database]`` section, configure database access:

     .. path /etc/glance/glance-registry.conf
     .. code-block:: ini

        [database]
        # ...
        connection = mysql+pymysql://glance:GLANCE_DBPASS@controller/glance

     .. end

     Replace ``GLANCE_DBPASS`` with the password you chose for the
     Image service database.

   * In the ``[keystone_authtoken]`` and ``[paste_deploy]`` sections,
     configure Identity service access:

     .. path /etc/glance/glance-registry.conf
     .. code-block:: ini

        [keystone_authtoken]
        # ...
        auth_uri = http://controller:5000
        auth_url = http://controller:35357
        memcached_servers = controller:11211
        auth_type = password
        project_domain_name = default
        user_domain_name = default
        project_name = service
        username = glance
        password = GLANCE_PASS

        [paste_deploy]
        # ...
        flavor = keystone

     .. end

     Replace ``GLANCE_PASS`` with the password you chose for the
     ``glance`` user in the Identity service.

     .. note::

        Comment out or remove any other options in the
        ``[keystone_authtoken]`` section.

Finalize installation
---------------------

* Start the Image services and configure them to start when
  the system boots:

  .. code-block:: console

     # systemctl enable openstack-glance-api.service \
       openstack-glance-registry.service
     # systemctl start openstack-glance-api.service \
       openstack-glance-registry.service

  .. end
332
doc/install-guide/source/glance-install-rdo.rst
Normal file
@ -0,0 +1,332 @@
Install and configure
~~~~~~~~~~~~~~~~~~~~~

This section describes how to install and configure the Image service,
code-named glance, on the controller node. For simplicity, this
configuration stores images on the local file system.

Prerequisites
-------------

Before you install and configure the Image service, you must
create a database, service credentials, and API endpoints.

#. To create the database, complete these steps:

   * Use the database access client to connect to the database
     server as the ``root`` user:

     .. code-block:: console

        $ mysql -u root -p

     .. end

   * Create the ``glance`` database:

     .. code-block:: console

        MariaDB [(none)]> CREATE DATABASE glance;

     .. end

   * Grant proper access to the ``glance`` database:

     .. code-block:: console

        MariaDB [(none)]> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' \
          IDENTIFIED BY 'GLANCE_DBPASS';
        MariaDB [(none)]> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' \
          IDENTIFIED BY 'GLANCE_DBPASS';

     .. end

     Replace ``GLANCE_DBPASS`` with a suitable password.

   * Exit the database access client.

#. Source the ``admin`` credentials to gain access to
   admin-only CLI commands:

   .. code-block:: console

      $ . admin-openrc

   .. end

#. To create the service credentials, complete these steps:

   * Create the ``glance`` user:

     .. code-block:: console

        $ openstack user create --domain default --password-prompt glance

        User Password:
        Repeat User Password:
        +---------------------+----------------------------------+
        | Field               | Value                            |
        +---------------------+----------------------------------+
        | domain_id           | default                          |
        | enabled             | True                             |
        | id                  | 3f4e777c4062483ab8d9edd7dff829df |
        | name                | glance                           |
        | options             | {}                               |
        | password_expires_at | None                             |
        +---------------------+----------------------------------+

     .. end

   * Add the ``admin`` role to the ``glance`` user and
     ``service`` project:

     .. code-block:: console

        $ openstack role add --project service --user glance admin

     .. end

     .. note::

        This command provides no output.

   * Create the ``glance`` service entity:

     .. code-block:: console

        $ openstack service create --name glance \
          --description "OpenStack Image" image

        +-------------+----------------------------------+
        | Field       | Value                            |
        +-------------+----------------------------------+
        | description | OpenStack Image                  |
        | enabled     | True                             |
        | id          | 8c2c7f1b9b5049ea9e63757b5533e6d2 |
        | name        | glance                           |
        | type        | image                            |
        +-------------+----------------------------------+

     .. end

#. Create the Image service API endpoints:

   .. code-block:: console

      $ openstack endpoint create --region RegionOne \
        image public http://controller:9292

      +--------------+----------------------------------+
      | Field        | Value                            |
      +--------------+----------------------------------+
      | enabled      | True                             |
      | id           | 340be3625e9b4239a6415d034e98aace |
      | interface    | public                           |
      | region       | RegionOne                        |
      | region_id    | RegionOne                        |
      | service_id   | 8c2c7f1b9b5049ea9e63757b5533e6d2 |
      | service_name | glance                           |
      | service_type | image                            |
      | url          | http://controller:9292           |
      +--------------+----------------------------------+

      $ openstack endpoint create --region RegionOne \
        image internal http://controller:9292

      +--------------+----------------------------------+
      | Field        | Value                            |
      +--------------+----------------------------------+
      | enabled      | True                             |
      | id           | a6e4b153c2ae4c919eccfdbb7dceb5d2 |
      | interface    | internal                         |
      | region       | RegionOne                        |
      | region_id    | RegionOne                        |
      | service_id   | 8c2c7f1b9b5049ea9e63757b5533e6d2 |
      | service_name | glance                           |
      | service_type | image                            |
      | url          | http://controller:9292           |
      +--------------+----------------------------------+

      $ openstack endpoint create --region RegionOne \
        image admin http://controller:9292

      +--------------+----------------------------------+
      | Field        | Value                            |
      +--------------+----------------------------------+
      | enabled      | True                             |
      | id           | 0c37ed58103f4300a84ff125a539032d |
      | interface    | admin                            |
      | region       | RegionOne                        |
      | region_id    | RegionOne                        |
      | service_id   | 8c2c7f1b9b5049ea9e63757b5533e6d2 |
      | service_name | glance                           |
      | service_type | image                            |
      | url          | http://controller:9292           |
      +--------------+----------------------------------+

   .. end

Install and configure components
--------------------------------

.. include:: shared/note_configuration_vary_by_distribution.rst

#. Install the packages:

   .. code-block:: console

      # yum install openstack-glance

   .. end

2. Edit the ``/etc/glance/glance-api.conf`` file and complete the
   following actions:

   * In the ``[database]`` section, configure database access:

     .. path /etc/glance/glance-api.conf
     .. code-block:: ini

        [database]
        # ...
        connection = mysql+pymysql://glance:GLANCE_DBPASS@controller/glance

     .. end

     Replace ``GLANCE_DBPASS`` with the password you chose for the
     Image service database.

   * In the ``[keystone_authtoken]`` and ``[paste_deploy]`` sections,
     configure Identity service access:

     .. path /etc/glance/glance-api.conf
     .. code-block:: ini

        [keystone_authtoken]
        # ...
        auth_uri = http://controller:5000
        auth_url = http://controller:35357
        memcached_servers = controller:11211
        auth_type = password
        project_domain_name = default
user_domain_name = default
|
||||
project_name = service
|
||||
username = glance
|
||||
password = GLANCE_PASS
|
||||
|
||||
[paste_deploy]
|
||||
# ...
|
||||
flavor = keystone
|
||||
|
||||
.. end
|
||||
|
||||
Replace ``GLANCE_PASS`` with the password you chose for the
|
||||
``glance`` user in the Identity service.
|
||||
|
||||
.. note::
|
||||
|
||||
Comment out or remove any other options in the
|
||||
``[keystone_authtoken]`` section.
|
||||
|
||||
* In the ``[glance_store]`` section, configure the local file
|
||||
system store and location of image files:
|
||||
|
||||
.. path /etc/glance/glance.conf
|
||||
.. code-block:: ini
|
||||
|
||||
[glance_store]
|
||||
# ...
|
||||
stores = file,http
|
||||
default_store = file
|
||||
filesystem_store_datadir = /var/lib/glance/images/
|
||||
|
||||
.. end
|
||||
|
||||
3. Edit the ``/etc/glance/glance-registry.conf`` file and complete
|
||||
the following actions:
|
||||
|
||||
* In the ``[database]`` section, configure database access:
|
||||
|
||||
.. path /etc/glance/glance-registry.conf
|
||||
.. code-block:: ini
|
||||
|
||||
[database]
|
||||
# ...
|
||||
connection = mysql+pymysql://glance:GLANCE_DBPASS@controller/glance
|
||||
|
||||
.. end
|
||||
|
||||
Replace ``GLANCE_DBPASS`` with the password you chose for the
|
||||
Image service database.
|
||||
|
||||
* In the ``[keystone_authtoken]`` and ``[paste_deploy]`` sections,
|
||||
configure Identity service access:
|
||||
|
||||
.. path /etc/glance/glance-registry.conf
|
||||
.. code-block:: ini
|
||||
|
||||
[keystone_authtoken]
|
||||
# ...
|
||||
auth_uri = http://controller:5000
|
||||
auth_url = http://controller:35357
|
||||
memcached_servers = controller:11211
|
||||
auth_type = password
|
||||
project_domain_name = default
|
||||
user_domain_name = default
|
||||
project_name = service
|
||||
username = glance
|
||||
password = GLANCE_PASS
|
||||
|
||||
[paste_deploy]
|
||||
# ...
|
||||
flavor = keystone
|
||||
|
||||
.. end
|
||||
|
||||
Replace ``GLANCE_PASS`` with the password you chose for the
|
||||
``glance`` user in the Identity service.
|
||||
|
||||
.. note::
|
||||
|
||||
Comment out or remove any other options in the
|
||||
``[keystone_authtoken]`` section.
|
||||
|
||||
|
||||
4. Populate the Image service database:
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
# su -s /bin/sh -c "glance-manage db_sync" glance
|
||||
|
||||
.. end
|
||||
|
||||
.. note::
|
||||
|
||||
Ignore any deprecation messages in this output.
|
||||
|
||||
|
||||
Finalize installation
|
||||
---------------------
|
||||
|
||||
|
||||
* Start the Image services and configure them to start when
|
||||
the system boots:
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
# systemctl enable openstack-glance-api.service \
|
||||
openstack-glance-registry.service
|
||||
# systemctl start openstack-glance-api.service \
|
||||
openstack-glance-registry.service
|
||||
|
||||
.. end
|
||||
|
||||
|
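The ``[database]``, ``[keystone_authtoken]``, and ``[paste_deploy]`` edits shown above can also be applied programmatically. A minimal sketch using Python's ``configparser``, with the guide's placeholder values (``GLANCE_DBPASS`` and ``GLANCE_PASS`` are the documentation's placeholders, not real credentials; on a real node you would ``read()`` and write ``/etc/glance/glance-api.conf`` instead of an in-memory dict):

```python
import configparser

# Build the glance-api.conf settings from this section in memory;
# conf.read("/etc/glance/glance-api.conf") would load the real file.
conf = configparser.ConfigParser()
conf.read_dict({
    "database": {
        "connection": "mysql+pymysql://glance:GLANCE_DBPASS@controller/glance",
    },
    "keystone_authtoken": {
        "auth_uri": "http://controller:5000",
        "auth_url": "http://controller:35357",
        "memcached_servers": "controller:11211",
        "auth_type": "password",
        "project_domain_name": "default",
        "user_domain_name": "default",
        "project_name": "service",
        "username": "glance",
        "password": "GLANCE_PASS",
    },
    "paste_deploy": {"flavor": "keystone"},
})

# Read an option back to confirm the INI structure matches the guide.
print(conf["paste_deploy"]["flavor"])  # keystone
```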
329 doc/install-guide/source/glance-install-ubuntu.rst  Normal file

@@ -0,0 +1,329 @@
Install and configure
~~~~~~~~~~~~~~~~~~~~~

This section describes how to install and configure the Image service,
code-named glance, on the controller node. For simplicity, this
configuration stores images on the local file system.

Prerequisites
-------------

Before you install and configure the Image service, you must
create a database, service credentials, and API endpoints.

#. To create the database, complete these steps:

   * Use the database access client to connect to the database
     server as the ``root`` user:

     .. code-block:: console

        # mysql

     .. end

   * Create the ``glance`` database:

     .. code-block:: console

        MariaDB [(none)]> CREATE DATABASE glance;

     .. end

   * Grant proper access to the ``glance`` database:

     .. code-block:: console

        MariaDB [(none)]> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' \
          IDENTIFIED BY 'GLANCE_DBPASS';
        MariaDB [(none)]> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' \
          IDENTIFIED BY 'GLANCE_DBPASS';

     .. end

     Replace ``GLANCE_DBPASS`` with a suitable password.

   * Exit the database access client.

#. Source the ``admin`` credentials to gain access to
   admin-only CLI commands:

   .. code-block:: console

      $ . admin-openrc

   .. end

#. To create the service credentials, complete these steps:

   * Create the ``glance`` user:

     .. code-block:: console

        $ openstack user create --domain default --password-prompt glance

        User Password:
        Repeat User Password:
        +---------------------+----------------------------------+
        | Field               | Value                            |
        +---------------------+----------------------------------+
        | domain_id           | default                          |
        | enabled             | True                             |
        | id                  | 3f4e777c4062483ab8d9edd7dff829df |
        | name                | glance                           |
        | options             | {}                               |
        | password_expires_at | None                             |
        +---------------------+----------------------------------+

     .. end

   * Add the ``admin`` role to the ``glance`` user and
     ``service`` project:

     .. code-block:: console

        $ openstack role add --project service --user glance admin

     .. end

     .. note::

        This command provides no output.

   * Create the ``glance`` service entity:

     .. code-block:: console

        $ openstack service create --name glance \
          --description "OpenStack Image" image

        +-------------+----------------------------------+
        | Field       | Value                            |
        +-------------+----------------------------------+
        | description | OpenStack Image                  |
        | enabled     | True                             |
        | id          | 8c2c7f1b9b5049ea9e63757b5533e6d2 |
        | name        | glance                           |
        | type        | image                            |
        +-------------+----------------------------------+

     .. end

#. Create the Image service API endpoints:

   .. code-block:: console

      $ openstack endpoint create --region RegionOne \
        image public http://controller:9292

      +--------------+----------------------------------+
      | Field        | Value                            |
      +--------------+----------------------------------+
      | enabled      | True                             |
      | id           | 340be3625e9b4239a6415d034e98aace |
      | interface    | public                           |
      | region       | RegionOne                        |
      | region_id    | RegionOne                        |
      | service_id   | 8c2c7f1b9b5049ea9e63757b5533e6d2 |
      | service_name | glance                           |
      | service_type | image                            |
      | url          | http://controller:9292           |
      +--------------+----------------------------------+

      $ openstack endpoint create --region RegionOne \
        image internal http://controller:9292

      +--------------+----------------------------------+
      | Field        | Value                            |
      +--------------+----------------------------------+
      | enabled      | True                             |
      | id           | a6e4b153c2ae4c919eccfdbb7dceb5d2 |
      | interface    | internal                         |
      | region       | RegionOne                        |
      | region_id    | RegionOne                        |
      | service_id   | 8c2c7f1b9b5049ea9e63757b5533e6d2 |
      | service_name | glance                           |
      | service_type | image                            |
      | url          | http://controller:9292           |
      +--------------+----------------------------------+

      $ openstack endpoint create --region RegionOne \
        image admin http://controller:9292

      +--------------+----------------------------------+
      | Field        | Value                            |
      +--------------+----------------------------------+
      | enabled      | True                             |
      | id           | 0c37ed58103f4300a84ff125a539032d |
      | interface    | admin                            |
      | region       | RegionOne                        |
      | region_id    | RegionOne                        |
      | service_id   | 8c2c7f1b9b5049ea9e63757b5533e6d2 |
      | service_name | glance                           |
      | service_type | image                            |
      | url          | http://controller:9292           |
      +--------------+----------------------------------+

   .. end

Install and configure components
--------------------------------

.. include:: shared/note_configuration_vary_by_distribution.rst

#. Install the packages:

   .. code-block:: console

      # apt install glance

   .. end

2. Edit the ``/etc/glance/glance-api.conf`` file and complete the
   following actions:

   * In the ``[database]`` section, configure database access:

     .. path /etc/glance/glance.conf
     .. code-block:: ini

        [database]
        # ...
        connection = mysql+pymysql://glance:GLANCE_DBPASS@controller/glance

     .. end

     Replace ``GLANCE_DBPASS`` with the password you chose for the
     Image service database.

   * In the ``[keystone_authtoken]`` and ``[paste_deploy]`` sections,
     configure Identity service access:

     .. path /etc/glance/glance.conf
     .. code-block:: ini

        [keystone_authtoken]
        # ...
        auth_uri = http://controller:5000
        auth_url = http://controller:35357
        memcached_servers = controller:11211
        auth_type = password
        project_domain_name = default
        user_domain_name = default
        project_name = service
        username = glance
        password = GLANCE_PASS

        [paste_deploy]
        # ...
        flavor = keystone

     .. end

     Replace ``GLANCE_PASS`` with the password you chose for the
     ``glance`` user in the Identity service.

     .. note::

        Comment out or remove any other options in the
        ``[keystone_authtoken]`` section.

   * In the ``[glance_store]`` section, configure the local file
     system store and location of image files:

     .. path /etc/glance/glance.conf
     .. code-block:: ini

        [glance_store]
        # ...
        stores = file,http
        default_store = file
        filesystem_store_datadir = /var/lib/glance/images/

     .. end

3. Edit the ``/etc/glance/glance-registry.conf`` file and complete
   the following actions:

   * In the ``[database]`` section, configure database access:

     .. path /etc/glance/glance-registry.conf
     .. code-block:: ini

        [database]
        # ...
        connection = mysql+pymysql://glance:GLANCE_DBPASS@controller/glance

     .. end

     Replace ``GLANCE_DBPASS`` with the password you chose for the
     Image service database.

   * In the ``[keystone_authtoken]`` and ``[paste_deploy]`` sections,
     configure Identity service access:

     .. path /etc/glance/glance-registry.conf
     .. code-block:: ini

        [keystone_authtoken]
        # ...
        auth_uri = http://controller:5000
        auth_url = http://controller:35357
        memcached_servers = controller:11211
        auth_type = password
        project_domain_name = default
        user_domain_name = default
        project_name = service
        username = glance
        password = GLANCE_PASS

        [paste_deploy]
        # ...
        flavor = keystone

     .. end

     Replace ``GLANCE_PASS`` with the password you chose for the
     ``glance`` user in the Identity service.

     .. note::

        Comment out or remove any other options in the
        ``[keystone_authtoken]`` section.

4. Populate the Image service database:

   .. code-block:: console

      # su -s /bin/sh -c "glance-manage db_sync" glance

   .. end

   .. note::

      Ignore any deprecation messages in this output.

Finalize installation
---------------------

#. Restart the Image services:

   .. code-block:: console

      # service glance-registry restart
      # service glance-api restart

   .. end
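The ``connection`` option used throughout these files is a standard SQLAlchemy-style database URL of the form ``dialect+driver://user:password@host/database``. A small, purely illustrative sketch decomposing the guide's example URL with the standard library (``GLANCE_DBPASS`` is the documentation's placeholder):

```python
from urllib.parse import urlsplit

url = "mysql+pymysql://glance:GLANCE_DBPASS@controller/glance"
parts = urlsplit(url)

# The scheme carries both the SQL dialect (mysql) and the Python
# driver (pymysql); the rest is ordinary URL syntax.
dialect, _, driver = parts.scheme.partition("+")
print(dialect, driver, parts.username, parts.hostname, parts.path.lstrip("/"))
```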
@@ -5,398 +5,7 @@ This section describes how to install and configure the Image service,
code-named glance, on the controller node. For simplicity, this
configuration stores images on the local file system.

Prerequisites
-------------

.. toctree::
   :glob:

Before you install and configure the Image service, you must
create a database, service credentials, and API endpoints.

#. To create the database, complete these steps:

   .. only:: ubuntu

      * Use the database access client to connect to the database
        server as the ``root`` user:

        .. code-block:: console

           # mysql

        .. end

   .. endonly

   .. only:: rdo or debian or obs

      * Use the database access client to connect to the database
        server as the ``root`` user:

        .. code-block:: console

           $ mysql -u root -p

        .. end

   .. endonly

   * Create the ``glance`` database:

     .. code-block:: console

        MariaDB [(none)]> CREATE DATABASE glance;

     .. end

   * Grant proper access to the ``glance`` database:

     .. code-block:: console

        MariaDB [(none)]> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' \
          IDENTIFIED BY 'GLANCE_DBPASS';
        MariaDB [(none)]> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' \
          IDENTIFIED BY 'GLANCE_DBPASS';

     .. end

     Replace ``GLANCE_DBPASS`` with a suitable password.

   * Exit the database access client.

#. Source the ``admin`` credentials to gain access to
   admin-only CLI commands:

   .. code-block:: console

      $ . admin-openrc

   .. end

#. To create the service credentials, complete these steps:

   * Create the ``glance`` user:

     .. code-block:: console

        $ openstack user create --domain default --password-prompt glance

        User Password:
        Repeat User Password:
        +---------------------+----------------------------------+
        | Field               | Value                            |
        +---------------------+----------------------------------+
        | domain_id           | default                          |
        | enabled             | True                             |
        | id                  | 3f4e777c4062483ab8d9edd7dff829df |
        | name                | glance                           |
        | options             | {}                               |
        | password_expires_at | None                             |
        +---------------------+----------------------------------+

     .. end

   * Add the ``admin`` role to the ``glance`` user and
     ``service`` project:

     .. code-block:: console

        $ openstack role add --project service --user glance admin

     .. end

     .. note::

        This command provides no output.

   * Create the ``glance`` service entity:

     .. code-block:: console

        $ openstack service create --name glance \
          --description "OpenStack Image" image

        +-------------+----------------------------------+
        | Field       | Value                            |
        +-------------+----------------------------------+
        | description | OpenStack Image                  |
        | enabled     | True                             |
        | id          | 8c2c7f1b9b5049ea9e63757b5533e6d2 |
        | name        | glance                           |
        | type        | image                            |
        +-------------+----------------------------------+

     .. end

#. Create the Image service API endpoints:

   .. code-block:: console

      $ openstack endpoint create --region RegionOne \
        image public http://controller:9292

      +--------------+----------------------------------+
      | Field        | Value                            |
      +--------------+----------------------------------+
      | enabled      | True                             |
      | id           | 340be3625e9b4239a6415d034e98aace |
      | interface    | public                           |
      | region       | RegionOne                        |
      | region_id    | RegionOne                        |
      | service_id   | 8c2c7f1b9b5049ea9e63757b5533e6d2 |
      | service_name | glance                           |
      | service_type | image                            |
      | url          | http://controller:9292           |
      +--------------+----------------------------------+

      $ openstack endpoint create --region RegionOne \
        image internal http://controller:9292

      +--------------+----------------------------------+
      | Field        | Value                            |
      +--------------+----------------------------------+
      | enabled      | True                             |
      | id           | a6e4b153c2ae4c919eccfdbb7dceb5d2 |
      | interface    | internal                         |
      | region       | RegionOne                        |
      | region_id    | RegionOne                        |
      | service_id   | 8c2c7f1b9b5049ea9e63757b5533e6d2 |
      | service_name | glance                           |
      | service_type | image                            |
      | url          | http://controller:9292           |
      +--------------+----------------------------------+

      $ openstack endpoint create --region RegionOne \
        image admin http://controller:9292

      +--------------+----------------------------------+
      | Field        | Value                            |
      +--------------+----------------------------------+
      | enabled      | True                             |
      | id           | 0c37ed58103f4300a84ff125a539032d |
      | interface    | admin                            |
      | region       | RegionOne                        |
      | region_id    | RegionOne                        |
      | service_id   | 8c2c7f1b9b5049ea9e63757b5533e6d2 |
      | service_name | glance                           |
      | service_type | image                            |
      | url          | http://controller:9292           |
      +--------------+----------------------------------+

   .. end

Install and configure components
--------------------------------

.. include:: shared/note_configuration_vary_by_distribution.rst

.. only:: obs

   .. note::

      Starting with the Newton release, SUSE OpenStack packages are shipping
      with the upstream default configuration files. For example
      ``/etc/glance/glance-api.conf`` or
      ``/etc/glance/glance-registry.conf``, with customizations in
      ``/etc/glance/glance-api.conf.d/`` or
      ``/etc/glance/glance-registry.conf.d/``. While the following
      instructions modify the default configuration files, adding new files
      in ``/etc/glance/glance-api.conf.d`` or
      ``/etc/glance/glance-registry.conf.d`` achieves the same result.

.. endonly

.. only:: obs

   #. Install the packages:

      .. code-block:: console

         # zypper install openstack-glance \
           openstack-glance-api openstack-glance-registry

      .. end

.. endonly

.. only:: rdo

   #. Install the packages:

      .. code-block:: console

         # yum install openstack-glance

      .. end

.. endonly

.. only:: ubuntu or debian

   #. Install the packages:

      .. code-block:: console

         # apt install glance

      .. end

.. endonly

2. Edit the ``/etc/glance/glance-api.conf`` file and complete the
   following actions:

   * In the ``[database]`` section, configure database access:

     .. path /etc/glance/glance.conf
     .. code-block:: ini

        [database]
        # ...
        connection = mysql+pymysql://glance:GLANCE_DBPASS@controller/glance

     .. end

     Replace ``GLANCE_DBPASS`` with the password you chose for the
     Image service database.

   * In the ``[keystone_authtoken]`` and ``[paste_deploy]`` sections,
     configure Identity service access:

     .. path /etc/glance/glance.conf
     .. code-block:: ini

        [keystone_authtoken]
        # ...
        auth_uri = http://controller:5000
        auth_url = http://controller:35357
        memcached_servers = controller:11211
        auth_type = password
        project_domain_name = default
        user_domain_name = default
        project_name = service
        username = glance
        password = GLANCE_PASS

        [paste_deploy]
        # ...
        flavor = keystone

     .. end

     Replace ``GLANCE_PASS`` with the password you chose for the
     ``glance`` user in the Identity service.

     .. note::

        Comment out or remove any other options in the
        ``[keystone_authtoken]`` section.

   * In the ``[glance_store]`` section, configure the local file
     system store and location of image files:

     .. path /etc/glance/glance.conf
     .. code-block:: ini

        [glance_store]
        # ...
        stores = file,http
        default_store = file
        filesystem_store_datadir = /var/lib/glance/images/

     .. end

3. Edit the ``/etc/glance/glance-registry.conf`` file and complete
   the following actions:

   * In the ``[database]`` section, configure database access:

     .. path /etc/glance/glance-registry.conf
     .. code-block:: ini

        [database]
        # ...
        connection = mysql+pymysql://glance:GLANCE_DBPASS@controller/glance

     .. end

     Replace ``GLANCE_DBPASS`` with the password you chose for the
     Image service database.

   * In the ``[keystone_authtoken]`` and ``[paste_deploy]`` sections,
     configure Identity service access:

     .. path /etc/glance/glance-registry.conf
     .. code-block:: ini

        [keystone_authtoken]
        # ...
        auth_uri = http://controller:5000
        auth_url = http://controller:35357
        memcached_servers = controller:11211
        auth_type = password
        project_domain_name = default
        user_domain_name = default
        project_name = service
        username = glance
        password = GLANCE_PASS

        [paste_deploy]
        # ...
        flavor = keystone

     .. end

     Replace ``GLANCE_PASS`` with the password you chose for the
     ``glance`` user in the Identity service.

     .. note::

        Comment out or remove any other options in the
        ``[keystone_authtoken]`` section.

.. only:: rdo or ubuntu or debian

   4. Populate the Image service database:

      .. code-block:: console

         # su -s /bin/sh -c "glance-manage db_sync" glance

      .. end

      .. note::

         Ignore any deprecation messages in this output.

.. endonly

Finalize installation
---------------------

.. only:: obs or rdo

   * Start the Image services and configure them to start when
     the system boots:

     .. code-block:: console

        # systemctl enable openstack-glance-api.service \
          openstack-glance-registry.service
        # systemctl start openstack-glance-api.service \
          openstack-glance-registry.service

     .. end

.. endonly

.. only:: ubuntu or debian

   #. Restart the Image services:

      .. code-block:: console

         # service glance-registry restart
         # service glance-api restart

      .. end

.. endonly

   glance-install-*
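The splitting described in the commit message turns on evaluating ``.. only::`` expressions such as ``rdo or ubuntu or debian`` against a single target tag. A minimal sketch of that evaluation for the simple ``a or b or c`` form used in this guide (the function name is hypothetical, not the actual splitting script):

```python
def only_matches(expression: str, tag: str) -> bool:
    """Evaluate a Sphinx ``.. only::`` expression for one build tag.

    Handles only the flat ``a or b or c`` alternatives this guide uses,
    not arbitrary boolean expressions.
    """
    alternatives = [term.strip() for term in expression.split(" or ")]
    return tag in alternatives

# A block guarded by ``.. only:: rdo or ubuntu or debian`` is kept when
# splitting out the Ubuntu file but dropped for the obs (SUSE) file.
print(only_matches("rdo or ubuntu or debian", "ubuntu"))  # True
print(only_matches("rdo or ubuntu or debian", "obs"))     # False
```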
212 doc/install-guide/source/horizon-install-debian.rst  Normal file

@@ -0,0 +1,212 @@
Install and configure
~~~~~~~~~~~~~~~~~~~~~

This section describes how to install and configure the dashboard
on the controller node.

The only core service required by the dashboard is the Identity service.
You can use the dashboard in combination with other services, such as
Image service, Compute, and Networking. You can also use the dashboard
in environments with stand-alone services such as Object Storage.

.. note::

   This section assumes proper installation, configuration, and operation
   of the Identity service using the Apache HTTP server and Memcached
   service as described in the :ref:`Install and configure the Identity
   service <keystone-install>` section.

Install and configure components
--------------------------------

.. include:: shared/note_configuration_vary_by_distribution.rst

1. Install the packages:

   .. code-block:: console

      # apt install openstack-dashboard-apache

   .. end

2. Respond to prompts for web server configuration.

   .. note::

      The automatic configuration process generates a self-signed
      SSL certificate. Consider obtaining an official certificate
      for production environments.

   .. note::

      There are two modes of installation. One using ``/horizon`` as the URL,
      keeping your default vhost and only adding an Alias directive: this is
      the default. The other mode will remove the default Apache vhost and install
      the dashboard on the webroot. It was the only available option
      before the Liberty release. If you prefer to set the Apache configuration
      manually, install the ``openstack-dashboard`` package instead of
      ``openstack-dashboard-apache``.

3. Edit the
   ``/etc/openstack-dashboard/local_settings.py``
   file and complete the following actions:

   * Configure the dashboard to use OpenStack services on the
     ``controller`` node:

     .. path /etc/openstack-dashboard/local_settings.py
     .. code-block:: python

        OPENSTACK_HOST = "controller"

     .. end

   * In the Dashboard configuration section, allow your hosts to access
     Dashboard:

     .. path /etc/openstack-dashboard/local_settings.py
     .. code-block:: python

        ALLOWED_HOSTS = ['one.example.com', 'two.example.com']

     .. end

     .. note::

        - Do not edit the ``ALLOWED_HOSTS`` parameter under the Ubuntu
          configuration section.
        - ``ALLOWED_HOSTS`` can also be ``['*']`` to accept all hosts. This
          may be useful for development work, but is potentially insecure
          and should not be used in production. See the
          `Django documentation
          <https://docs.djangoproject.com/en/dev/ref/settings/#allowed-hosts>`_
          for further information.

   * Configure the ``memcached`` session storage service:

     .. path /etc/openstack-dashboard/local_settings.py
     .. code-block:: python

        SESSION_ENGINE = 'django.contrib.sessions.backends.cache'

        CACHES = {
            'default': {
                'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
                'LOCATION': 'controller:11211',
            }
        }

     .. end

     .. note::

        Comment out any other session storage configuration.

   * Enable the Identity API version 3:

     .. path /etc/openstack-dashboard/local_settings.py
     .. code-block:: python

        OPENSTACK_KEYSTONE_URL = "http://%s:5000/v3" % OPENSTACK_HOST

     .. end

   * Enable support for domains:

     .. path /etc/openstack-dashboard/local_settings.py
     .. code-block:: python

        OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT = True

     .. end

   * Configure API versions:

     .. path /etc/openstack-dashboard/local_settings.py
     .. code-block:: python

        OPENSTACK_API_VERSIONS = {
            "identity": 3,
            "image": 2,
            "volume": 2,
        }

     .. end

   * Configure ``Default`` as the default domain for users that you create
     via the dashboard:

     .. path /etc/openstack-dashboard/local_settings.py
     .. code-block:: python

        OPENSTACK_KEYSTONE_DEFAULT_DOMAIN = "Default"

     .. end

   * Configure ``user`` as the default role for
     users that you create via the dashboard:

     .. path /etc/openstack-dashboard/local_settings.py
     .. code-block:: python

        OPENSTACK_KEYSTONE_DEFAULT_ROLE = "user"

     .. end

   * If you chose networking option 1, disable support for layer-3
     networking services:

     .. path /etc/openstack-dashboard/local_settings.py
     .. code-block:: python

        OPENSTACK_NEUTRON_NETWORK = {
            ...
            'enable_router': False,
            'enable_quotas': False,
            'enable_ipv6': False,
            'enable_distributed_router': False,
            'enable_ha_router': False,
            'enable_lb': False,
            'enable_firewall': False,
            'enable_vpn': False,
            'enable_fip_topology_check': False,
        }

     .. end

   * Optionally, configure the time zone:

     .. path /etc/openstack-dashboard/local_settings.py
     .. code-block:: python

        TIME_ZONE = "TIME_ZONE"

     .. end

     Replace ``TIME_ZONE`` with an appropriate time zone identifier.
     For more information, see the `list of time zones
     <https://en.wikipedia.org/wiki/List_of_tz_database_time_zones>`__.

Finalize installation
---------------------

* Reload the web server configuration:

  .. code-block:: console

     # service apache2 reload

  .. end
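Because ``local_settings.py`` is ordinary Python, the session-storage settings above can be sanity-checked the same way Django loads them. A minimal sketch mirroring the guide's values (``controller:11211`` is the Memcached endpoint configured earlier in the guide; the consistency check at the end is illustrative, not part of Horizon):

```python
# The dashboard settings from this section, as Django would see them.
SESSION_ENGINE = 'django.contrib.sessions.backends.cache'
CACHES = {
    'default': {
        'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
        'LOCATION': 'controller:11211',
    }
}

# When sessions are stored in the cache, a 'default' cache must exist,
# otherwise Django raises InvalidCacheBackendError at runtime.
assert SESSION_ENGINE.endswith('backends.cache') and 'default' in CACHES
print(CACHES['default']['LOCATION'])  # controller:11211
```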
204 doc/install-guide/source/horizon-install-obs.rst  Normal file

@@ -0,0 +1,204 @@
Install and configure
~~~~~~~~~~~~~~~~~~~~~

This section describes how to install and configure the dashboard
on the controller node.

The only core service required by the dashboard is the Identity service.
You can use the dashboard in combination with other services, such as
Image service, Compute, and Networking. You can also use the dashboard
in environments with stand-alone services such as Object Storage.

.. note::

   This section assumes proper installation, configuration, and operation
   of the Identity service using the Apache HTTP server and Memcached
   service as described in the :ref:`Install and configure the Identity
   service <keystone-install>` section.

Install and configure components
--------------------------------

.. include:: shared/note_configuration_vary_by_distribution.rst

1. Install the packages:

   .. code-block:: console

      # zypper install openstack-dashboard

   .. end

2. Configure the web server:

   .. code-block:: console

      # cp /etc/apache2/conf.d/openstack-dashboard.conf.sample \
        /etc/apache2/conf.d/openstack-dashboard.conf
      # a2enmod rewrite

   .. end

3. Edit the
   ``/srv/www/openstack-dashboard/openstack_dashboard/local/local_settings.py``
   file and complete the following actions:

   * Configure the dashboard to use OpenStack services on the
     ``controller`` node:

     .. path /srv/www/openstack-dashboard/openstack_dashboard/local/local_settings.py
     .. code-block:: python

        OPENSTACK_HOST = "controller"

     .. end

   * Allow your hosts to access the dashboard:

     .. path /srv/www/openstack-dashboard/openstack_dashboard/local/local_settings.py
     .. code-block:: python

        ALLOWED_HOSTS = ['one.example.com', 'two.example.com']

     .. end

     .. note::

        ``ALLOWED_HOSTS`` can also be ``['*']`` to accept all hosts. This may be
        useful for development work, but is potentially insecure and should
        not be used in production. See the `Django documentation
        <https://docs.djangoproject.com/en/dev/ref/settings/#allowed-hosts>`_
        for further information.

   * Configure the ``memcached`` session storage service:

     .. path /srv/www/openstack-dashboard/openstack_dashboard/local/local_settings.py
     .. code-block:: python

        SESSION_ENGINE = 'django.contrib.sessions.backends.cache'

        CACHES = {
            'default': {
                'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
                'LOCATION': 'controller:11211',
            }
        }

     .. end

     .. note::

        Comment out any other session storage configuration.

   * Enable the Identity API version 3:

     .. path /srv/www/openstack-dashboard/openstack_dashboard/local/local_settings.py
     .. code-block:: python

        OPENSTACK_KEYSTONE_URL = "http://%s:5000/v3" % OPENSTACK_HOST

     .. end

   * Enable support for domains:

     .. path /srv/www/openstack-dashboard/openstack_dashboard/local/local_settings.py
     .. code-block:: python

        OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT = True

     .. end

   * Configure API versions:

     .. path /srv/www/openstack-dashboard/openstack_dashboard/local/local_settings.py
     .. code-block:: python

        OPENSTACK_API_VERSIONS = {
            "identity": 3,
            "image": 2,
            "volume": 2,
        }

     .. end

   * Configure ``Default`` as the default domain for users that you create
     via the dashboard:

     .. path /srv/www/openstack-dashboard/openstack_dashboard/local/local_settings.py
     .. code-block:: python

        OPENSTACK_KEYSTONE_DEFAULT_DOMAIN = "Default"

     .. end

   * Configure ``user`` as the default role for users that you create
     via the dashboard:

     .. path /srv/www/openstack-dashboard/openstack_dashboard/local/local_settings.py
     .. code-block:: python

        OPENSTACK_KEYSTONE_DEFAULT_ROLE = "user"

     .. end

   * If you chose networking option 1, disable support for layer-3
     networking services:

     .. path /srv/www/openstack-dashboard/openstack_dashboard/local/local_settings.py
     .. code-block:: python

        OPENSTACK_NEUTRON_NETWORK = {
            ...
            'enable_router': False,
            'enable_quotas': False,
            'enable_distributed_router': False,
            'enable_ha_router': False,
            'enable_lb': False,
            'enable_firewall': False,
            'enable_vpn': False,
            'enable_fip_topology_check': False,
        }

     .. end

   * Optionally, configure the time zone:

     .. path /srv/www/openstack-dashboard/openstack_dashboard/local/local_settings.py
     .. code-block:: python

        TIME_ZONE = "TIME_ZONE"

     .. end

     Replace ``TIME_ZONE`` with an appropriate time zone identifier.
     For more information, see the `list of time zones
     <https://en.wikipedia.org/wiki/List_of_tz_database_time_zones>`__.

Finalize installation
---------------------

* Restart the web server and session storage service:

  .. code-block:: console

     # systemctl restart apache2.service memcached.service

  .. end

  .. note::

     The ``systemctl restart`` command starts each service if
     not currently running.
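The ``CACHES`` setting above points session storage at memcached on ``controller:11211``. A quick way to confirm that the endpoint is reachable and answering (a standard-library sketch, not part of the guide; the host and port mirror the guide's example and may differ in your deployment):

```python
# Sketch: check that the memcached endpoint named in
# CACHES['default']['LOCATION'] answers the memcached "version" command.
# Standard library only; "controller"/11211 are the guide's example values.
import socket

def memcached_alive(host: str = "controller", port: int = 11211,
                    timeout: float = 2.0) -> bool:
    try:
        with socket.create_connection((host, port), timeout=timeout) as sock:
            sock.sendall(b"version\r\n")
            return sock.recv(64).startswith(b"VERSION")
    except OSError:
        return False

# memcached_alive() returns False until memcached is running and reachable.
```

If this returns ``False`` after the services are started, sessions will silently fall back to cache misses, so it is worth checking before troubleshooting login problems in the dashboard itself.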
194 doc/install-guide/source/horizon-install-rdo.rst Normal file
@@ -0,0 +1,194 @@
Install and configure
~~~~~~~~~~~~~~~~~~~~~

This section describes how to install and configure the dashboard
on the controller node.

The only core service required by the dashboard is the Identity service.
You can use the dashboard in combination with other services, such as
Image service, Compute, and Networking. You can also use the dashboard
in environments with stand-alone services such as Object Storage.

.. note::

   This section assumes proper installation, configuration, and operation
   of the Identity service using the Apache HTTP server and Memcached
   service as described in the :ref:`Install and configure the Identity
   service <keystone-install>` section.

Install and configure components
--------------------------------

.. include:: shared/note_configuration_vary_by_distribution.rst

1. Install the packages:

   .. code-block:: console

      # yum install openstack-dashboard

   .. end

2. Edit the
   ``/etc/openstack-dashboard/local_settings``
   file and complete the following actions:

   * Configure the dashboard to use OpenStack services on the
     ``controller`` node:

     .. path /etc/openstack-dashboard/local_settings
     .. code-block:: python

        OPENSTACK_HOST = "controller"

     .. end

   * Allow your hosts to access the dashboard:

     .. path /etc/openstack-dashboard/local_settings
     .. code-block:: python

        ALLOWED_HOSTS = ['one.example.com', 'two.example.com']

     .. end

     .. note::

        ``ALLOWED_HOSTS`` can also be ``['*']`` to accept all hosts. This may be
        useful for development work, but is potentially insecure and should
        not be used in production. See the `Django documentation
        <https://docs.djangoproject.com/en/dev/ref/settings/#allowed-hosts>`_
        for further information.

   * Configure the ``memcached`` session storage service:

     .. path /etc/openstack-dashboard/local_settings
     .. code-block:: python

        SESSION_ENGINE = 'django.contrib.sessions.backends.cache'

        CACHES = {
            'default': {
                'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
                'LOCATION': 'controller:11211',
            }
        }

     .. end

     .. note::

        Comment out any other session storage configuration.

   * Enable the Identity API version 3:

     .. path /etc/openstack-dashboard/local_settings
     .. code-block:: python

        OPENSTACK_KEYSTONE_URL = "http://%s:5000/v3" % OPENSTACK_HOST

     .. end

   * Enable support for domains:

     .. path /etc/openstack-dashboard/local_settings
     .. code-block:: python

        OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT = True

     .. end

   * Configure API versions:

     .. path /etc/openstack-dashboard/local_settings
     .. code-block:: python

        OPENSTACK_API_VERSIONS = {
            "identity": 3,
            "image": 2,
            "volume": 2,
        }

     .. end

   * Configure ``Default`` as the default domain for users that you create
     via the dashboard:

     .. path /etc/openstack-dashboard/local_settings
     .. code-block:: python

        OPENSTACK_KEYSTONE_DEFAULT_DOMAIN = "Default"

     .. end

   * Configure ``user`` as the default role for users that you create
     via the dashboard:

     .. path /etc/openstack-dashboard/local_settings
     .. code-block:: python

        OPENSTACK_KEYSTONE_DEFAULT_ROLE = "user"

     .. end

   * If you chose networking option 1, disable support for layer-3
     networking services:

     .. path /etc/openstack-dashboard/local_settings
     .. code-block:: python

        OPENSTACK_NEUTRON_NETWORK = {
            ...
            'enable_router': False,
            'enable_quotas': False,
            'enable_distributed_router': False,
            'enable_ha_router': False,
            'enable_lb': False,
            'enable_firewall': False,
            'enable_vpn': False,
            'enable_fip_topology_check': False,
        }

     .. end

   * Optionally, configure the time zone:

     .. path /etc/openstack-dashboard/local_settings
     .. code-block:: python

        TIME_ZONE = "TIME_ZONE"

     .. end

     Replace ``TIME_ZONE`` with an appropriate time zone identifier.
     For more information, see the `list of time zones
     <https://en.wikipedia.org/wiki/List_of_tz_database_time_zones>`__.

Finalize installation
---------------------

* Restart the web server and session storage service:

  .. code-block:: console

     # systemctl restart httpd.service memcached.service

  .. end

  .. note::

     The ``systemctl restart`` command starts each service if
     not currently running.
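The ``OPENSTACK_KEYSTONE_URL`` setting shown above is built from ``OPENSTACK_HOST`` with plain ``%``-interpolation, which is why repointing the dashboard at a different controller only requires changing ``OPENSTACK_HOST``. A minimal illustration:

```python
# Illustration: how local_settings composes the Keystone endpoint from
# OPENSTACK_HOST. "controller" is the hostname used throughout this guide.
OPENSTACK_HOST = "controller"
OPENSTACK_KEYSTONE_URL = "http://%s:5000/v3" % OPENSTACK_HOST
print(OPENSTACK_KEYSTONE_URL)  # http://controller:5000/v3
```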
194 doc/install-guide/source/horizon-install-ubuntu.rst Normal file
@@ -0,0 +1,194 @@
Install and configure
~~~~~~~~~~~~~~~~~~~~~

This section describes how to install and configure the dashboard
on the controller node.

The only core service required by the dashboard is the Identity service.
You can use the dashboard in combination with other services, such as
Image service, Compute, and Networking. You can also use the dashboard
in environments with stand-alone services such as Object Storage.

.. note::

   This section assumes proper installation, configuration, and operation
   of the Identity service using the Apache HTTP server and Memcached
   service as described in the :ref:`Install and configure the Identity
   service <keystone-install>` section.

Install and configure components
--------------------------------

.. include:: shared/note_configuration_vary_by_distribution.rst

1. Install the packages:

   .. code-block:: console

      # apt install openstack-dashboard

   .. end

2. Edit the
   ``/etc/openstack-dashboard/local_settings.py``
   file and complete the following actions:

   * Configure the dashboard to use OpenStack services on the
     ``controller`` node:

     .. path /etc/openstack-dashboard/local_settings.py
     .. code-block:: python

        OPENSTACK_HOST = "controller"

     .. end

   * In the Dashboard configuration section, allow your hosts to access
     the dashboard:

     .. path /etc/openstack-dashboard/local_settings.py
     .. code-block:: python

        ALLOWED_HOSTS = ['one.example.com', 'two.example.com']

     .. end

     .. note::

        - Do not edit the ``ALLOWED_HOSTS`` parameter under the Ubuntu
          configuration section.
        - ``ALLOWED_HOSTS`` can also be ``['*']`` to accept all hosts. This
          may be useful for development work, but is potentially insecure
          and should not be used in production. See the
          `Django documentation
          <https://docs.djangoproject.com/en/dev/ref/settings/#allowed-hosts>`_
          for further information.

   * Configure the ``memcached`` session storage service:

     .. path /etc/openstack-dashboard/local_settings.py
     .. code-block:: python

        SESSION_ENGINE = 'django.contrib.sessions.backends.cache'

        CACHES = {
            'default': {
                'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
                'LOCATION': 'controller:11211',
            }
        }

     .. end

     .. note::

        Comment out any other session storage configuration.

   * Enable the Identity API version 3:

     .. path /etc/openstack-dashboard/local_settings.py
     .. code-block:: python

        OPENSTACK_KEYSTONE_URL = "http://%s:5000/v3" % OPENSTACK_HOST

     .. end

   * Enable support for domains:

     .. path /etc/openstack-dashboard/local_settings.py
     .. code-block:: python

        OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT = True

     .. end

   * Configure API versions:

     .. path /etc/openstack-dashboard/local_settings.py
     .. code-block:: python

        OPENSTACK_API_VERSIONS = {
            "identity": 3,
            "image": 2,
            "volume": 2,
        }

     .. end

   * Configure ``Default`` as the default domain for users that you create
     via the dashboard:

     .. path /etc/openstack-dashboard/local_settings.py
     .. code-block:: python

        OPENSTACK_KEYSTONE_DEFAULT_DOMAIN = "Default"

     .. end

   * Configure ``user`` as the default role for users that you create
     via the dashboard:

     .. path /etc/openstack-dashboard/local_settings.py
     .. code-block:: python

        OPENSTACK_KEYSTONE_DEFAULT_ROLE = "user"

     .. end

   * If you chose networking option 1, disable support for layer-3
     networking services:

     .. path /etc/openstack-dashboard/local_settings.py
     .. code-block:: python

        OPENSTACK_NEUTRON_NETWORK = {
            ...
            'enable_router': False,
            'enable_quotas': False,
            'enable_ipv6': False,
            'enable_distributed_router': False,
            'enable_ha_router': False,
            'enable_lb': False,
            'enable_firewall': False,
            'enable_vpn': False,
            'enable_fip_topology_check': False,
        }

     .. end

   * Optionally, configure the time zone:

     .. path /etc/openstack-dashboard/local_settings.py
     .. code-block:: python

        TIME_ZONE = "TIME_ZONE"

     .. end

     Replace ``TIME_ZONE`` with an appropriate time zone identifier.
     For more information, see the `list of time zones
     <https://en.wikipedia.org/wiki/List_of_tz_database_time_zones>`__.

Finalize installation
---------------------

* Reload the web server configuration:

  .. code-block:: console

     # service apache2 reload

  .. end
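The ``ALLOWED_HOSTS`` behaviour described in the note above is easy to misread. A simplified sketch of the matching rule follows; this is not Django's actual ``validate_host`` implementation (which also normalizes ports and trailing dots), only an illustration of why ``['*']`` accepts every ``Host`` header:

```python
# Simplified sketch of Django-style ALLOWED_HOSTS matching (not Django code).
def host_allowed(host: str, allowed: list) -> bool:
    for pattern in allowed:
        if pattern == "*" or pattern == host:
            return True
        # Leading-dot patterns match any subdomain, e.g. ".example.com".
        if pattern.startswith(".") and host.endswith(pattern):
            return True
    return False

print(host_allowed("one.example.com", ["one.example.com", "two.example.com"]))  # True
print(host_allowed("evil.example.org", ["one.example.com"]))                    # False
print(host_allowed("anything", ["*"]))                                          # True
```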
@@ -16,554 +16,7 @@ in environments with stand-alone services such as Object Storage.
   service as described in the :ref:`Install and configure the Identity
   service <keystone-install>` section.

Install and configure components
--------------------------------

.. toctree::
   :glob:

.. include:: shared/note_configuration_vary_by_distribution.rst
.. only:: obs

   1. Install the packages:

      .. code-block:: console

         # zypper install openstack-dashboard

      .. end

.. endonly

.. only:: rdo

   1. Install the packages:

      .. code-block:: console

         # yum install openstack-dashboard

      .. end

.. endonly

.. only:: ubuntu

   1. Install the packages:

      .. code-block:: console

         # apt install openstack-dashboard

      .. end

.. endonly

.. only:: debian

   1. Install the packages:

      .. code-block:: console

         # apt install openstack-dashboard-apache

      .. end

   2. Respond to prompts for web server configuration.

      .. note::

         The automatic configuration process generates a self-signed
         SSL certificate. Consider obtaining an official certificate
         for production environments.

      .. note::

         There are two modes of installation. One uses ``/horizon`` as the URL,
         keeping your default vhost and only adding an Alias directive: this is
         the default. The other mode removes the default Apache vhost and
         installs the dashboard on the webroot. It was the only available option
         before the Liberty release. If you prefer to set the Apache
         configuration manually, install the ``openstack-dashboard`` package
         instead of ``openstack-dashboard-apache``.

.. endonly
.. only:: obs

   2. Configure the web server:

      .. code-block:: console

         # cp /etc/apache2/conf.d/openstack-dashboard.conf.sample \
           /etc/apache2/conf.d/openstack-dashboard.conf
         # a2enmod rewrite

      .. end

   3. Edit the
      ``/srv/www/openstack-dashboard/openstack_dashboard/local/local_settings.py``
      file and complete the following actions:

      * Configure the dashboard to use OpenStack services on the
        ``controller`` node:

        .. path /srv/www/openstack-dashboard/openstack_dashboard/local/local_settings.py
        .. code-block:: python

           OPENSTACK_HOST = "controller"

        .. end

      * Allow your hosts to access the dashboard:

        .. path /srv/www/openstack-dashboard/openstack_dashboard/local/local_settings.py
        .. code-block:: python

           ALLOWED_HOSTS = ['one.example.com', 'two.example.com']

        .. end

        .. note::

           ``ALLOWED_HOSTS`` can also be ``['*']`` to accept all hosts. This may be
           useful for development work, but is potentially insecure and should
           not be used in production. See the `Django documentation
           <https://docs.djangoproject.com/en/dev/ref/settings/#allowed-hosts>`_
           for further information.

      * Configure the ``memcached`` session storage service:

        .. path /srv/www/openstack-dashboard/openstack_dashboard/local/local_settings.py
        .. code-block:: python

           SESSION_ENGINE = 'django.contrib.sessions.backends.cache'

           CACHES = {
               'default': {
                   'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
                   'LOCATION': 'controller:11211',
               }
           }

        .. end

        .. note::

           Comment out any other session storage configuration.

      * Enable the Identity API version 3:

        .. path /srv/www/openstack-dashboard/openstack_dashboard/local/local_settings.py
        .. code-block:: python

           OPENSTACK_KEYSTONE_URL = "http://%s:5000/v3" % OPENSTACK_HOST

        .. end

      * Enable support for domains:

        .. path /srv/www/openstack-dashboard/openstack_dashboard/local/local_settings.py
        .. code-block:: python

           OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT = True

        .. end

      * Configure API versions:

        .. path /srv/www/openstack-dashboard/openstack_dashboard/local/local_settings.py
        .. code-block:: python

           OPENSTACK_API_VERSIONS = {
               "identity": 3,
               "image": 2,
               "volume": 2,
           }

        .. end

      * Configure ``Default`` as the default domain for users that you create
        via the dashboard:

        .. path /srv/www/openstack-dashboard/openstack_dashboard/local/local_settings.py
        .. code-block:: python

           OPENSTACK_KEYSTONE_DEFAULT_DOMAIN = "Default"

        .. end

      * Configure ``user`` as the default role for users that you create
        via the dashboard:

        .. path /srv/www/openstack-dashboard/openstack_dashboard/local/local_settings.py
        .. code-block:: python

           OPENSTACK_KEYSTONE_DEFAULT_ROLE = "user"

        .. end

      * If you chose networking option 1, disable support for layer-3
        networking services:

        .. path /srv/www/openstack-dashboard/openstack_dashboard/local/local_settings.py
        .. code-block:: python

           OPENSTACK_NEUTRON_NETWORK = {
               ...
               'enable_router': False,
               'enable_quotas': False,
               'enable_distributed_router': False,
               'enable_ha_router': False,
               'enable_lb': False,
               'enable_firewall': False,
               'enable_vpn': False,
               'enable_fip_topology_check': False,
           }

        .. end

      * Optionally, configure the time zone:

        .. path /srv/www/openstack-dashboard/openstack_dashboard/local/local_settings.py
        .. code-block:: python

           TIME_ZONE = "TIME_ZONE"

        .. end

        Replace ``TIME_ZONE`` with an appropriate time zone identifier.
        For more information, see the `list of time zones
        <https://en.wikipedia.org/wiki/List_of_tz_database_time_zones>`__.

.. endonly
.. only:: rdo

   2. Edit the
      ``/etc/openstack-dashboard/local_settings``
      file and complete the following actions:

      * Configure the dashboard to use OpenStack services on the
        ``controller`` node:

        .. path /etc/openstack-dashboard/local_settings
        .. code-block:: python

           OPENSTACK_HOST = "controller"

        .. end

      * Allow your hosts to access the dashboard:

        .. path /etc/openstack-dashboard/local_settings
        .. code-block:: python

           ALLOWED_HOSTS = ['one.example.com', 'two.example.com']

        .. end

        .. note::

           ``ALLOWED_HOSTS`` can also be ``['*']`` to accept all hosts. This may be
           useful for development work, but is potentially insecure and should
           not be used in production. See the `Django documentation
           <https://docs.djangoproject.com/en/dev/ref/settings/#allowed-hosts>`_
           for further information.

      * Configure the ``memcached`` session storage service:

        .. path /etc/openstack-dashboard/local_settings
        .. code-block:: python

           SESSION_ENGINE = 'django.contrib.sessions.backends.cache'

           CACHES = {
               'default': {
                   'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
                   'LOCATION': 'controller:11211',
               }
           }

        .. end

        .. note::

           Comment out any other session storage configuration.

      * Enable the Identity API version 3:

        .. path /etc/openstack-dashboard/local_settings
        .. code-block:: python

           OPENSTACK_KEYSTONE_URL = "http://%s:5000/v3" % OPENSTACK_HOST

        .. end

      * Enable support for domains:

        .. path /etc/openstack-dashboard/local_settings
        .. code-block:: python

           OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT = True

        .. end

      * Configure API versions:

        .. path /etc/openstack-dashboard/local_settings
        .. code-block:: python

           OPENSTACK_API_VERSIONS = {
               "identity": 3,
               "image": 2,
               "volume": 2,
           }

        .. end

      * Configure ``Default`` as the default domain for users that you create
        via the dashboard:

        .. path /etc/openstack-dashboard/local_settings
        .. code-block:: python

           OPENSTACK_KEYSTONE_DEFAULT_DOMAIN = "Default"

        .. end

      * Configure ``user`` as the default role for users that you create
        via the dashboard:

        .. path /etc/openstack-dashboard/local_settings
        .. code-block:: python

           OPENSTACK_KEYSTONE_DEFAULT_ROLE = "user"

        .. end

      * If you chose networking option 1, disable support for layer-3
        networking services:

        .. path /etc/openstack-dashboard/local_settings
        .. code-block:: python

           OPENSTACK_NEUTRON_NETWORK = {
               ...
               'enable_router': False,
               'enable_quotas': False,
               'enable_distributed_router': False,
               'enable_ha_router': False,
               'enable_lb': False,
               'enable_firewall': False,
               'enable_vpn': False,
               'enable_fip_topology_check': False,
           }

        .. end

      * Optionally, configure the time zone:

        .. path /etc/openstack-dashboard/local_settings
        .. code-block:: python

           TIME_ZONE = "TIME_ZONE"

        .. end

        Replace ``TIME_ZONE`` with an appropriate time zone identifier.
        For more information, see the `list of time zones
        <https://en.wikipedia.org/wiki/List_of_tz_database_time_zones>`__.

.. endonly
.. only:: ubuntu or debian

   2. Edit the
      ``/etc/openstack-dashboard/local_settings.py``
      file and complete the following actions:

      * Configure the dashboard to use OpenStack services on the
        ``controller`` node:

        .. path /etc/openstack-dashboard/local_settings.py
        .. code-block:: python

           OPENSTACK_HOST = "controller"

        .. end

      * In the Dashboard configuration section, allow your hosts to access
        the dashboard:

        .. path /etc/openstack-dashboard/local_settings.py
        .. code-block:: python

           ALLOWED_HOSTS = ['one.example.com', 'two.example.com']

        .. end

        .. note::

           - Do not edit the ``ALLOWED_HOSTS`` parameter under the Ubuntu
             configuration section.
           - ``ALLOWED_HOSTS`` can also be ``['*']`` to accept all hosts. This
             may be useful for development work, but is potentially insecure
             and should not be used in production. See the
             `Django documentation
             <https://docs.djangoproject.com/en/dev/ref/settings/#allowed-hosts>`_
             for further information.

      * Configure the ``memcached`` session storage service:

        .. path /etc/openstack-dashboard/local_settings.py
        .. code-block:: python

           SESSION_ENGINE = 'django.contrib.sessions.backends.cache'

           CACHES = {
               'default': {
                   'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
                   'LOCATION': 'controller:11211',
               }
           }

        .. end

        .. note::

           Comment out any other session storage configuration.

      * Enable the Identity API version 3:

        .. path /etc/openstack-dashboard/local_settings.py
        .. code-block:: python

           OPENSTACK_KEYSTONE_URL = "http://%s:5000/v3" % OPENSTACK_HOST

        .. end

      * Enable support for domains:

        .. path /etc/openstack-dashboard/local_settings.py
        .. code-block:: python

           OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT = True

        .. end

      * Configure API versions:

        .. path /etc/openstack-dashboard/local_settings.py
        .. code-block:: python

           OPENSTACK_API_VERSIONS = {
               "identity": 3,
               "image": 2,
               "volume": 2,
           }

        .. end

      * Configure ``Default`` as the default domain for users that you create
        via the dashboard:

        .. path /etc/openstack-dashboard/local_settings.py
        .. code-block:: python

           OPENSTACK_KEYSTONE_DEFAULT_DOMAIN = "Default"

        .. end

      * Configure ``user`` as the default role for users that you create
        via the dashboard:

        .. path /etc/openstack-dashboard/local_settings.py
        .. code-block:: python

           OPENSTACK_KEYSTONE_DEFAULT_ROLE = "user"

        .. end

      * If you chose networking option 1, disable support for layer-3
        networking services:

        .. path /etc/openstack-dashboard/local_settings.py
        .. code-block:: python

           OPENSTACK_NEUTRON_NETWORK = {
               ...
               'enable_router': False,
               'enable_quotas': False,
               'enable_ipv6': False,
               'enable_distributed_router': False,
               'enable_ha_router': False,
               'enable_lb': False,
               'enable_firewall': False,
               'enable_vpn': False,
               'enable_fip_topology_check': False,
           }

        .. end

      * Optionally, configure the time zone:

        .. path /etc/openstack-dashboard/local_settings.py
        .. code-block:: python

           TIME_ZONE = "TIME_ZONE"

        .. end

        Replace ``TIME_ZONE`` with an appropriate time zone identifier.
        For more information, see the `list of time zones
        <https://en.wikipedia.org/wiki/List_of_tz_database_time_zones>`__.

.. endonly
Finalize installation
---------------------

.. only:: ubuntu or debian

   * Reload the web server configuration:

     .. code-block:: console

        # service apache2 reload

     .. end

.. endonly

.. only:: obs

   * Restart the web server and session storage service:

     .. code-block:: console

        # systemctl restart apache2.service memcached.service

     .. end

     .. note::

        The ``systemctl restart`` command starts each service if
        not currently running.

.. endonly

.. only:: rdo

   * Restart the web server and session storage service:

     .. code-block:: console

        # systemctl restart httpd.service memcached.service

     .. end

     .. note::

        The ``systemctl restart`` command starts each service if
        not currently running.

.. endonly

   horizon-install-*
14
doc/install-guide/source/horizon-verify-debian.rst
Normal file
@ -0,0 +1,14 @@
Verify operation
~~~~~~~~~~~~~~~~

Verify operation of the dashboard.

Access the dashboard using a web browser at
``http://controller/``.

Authenticate using the ``admin`` or ``demo`` user
and ``default`` domain credentials.
14
doc/install-guide/source/horizon-verify-obs.rst
Normal file
@ -0,0 +1,14 @@
Verify operation
~~~~~~~~~~~~~~~~

Verify operation of the dashboard.

Access the dashboard using a web browser at
``http://controller/``.

Authenticate using the ``admin`` or ``demo`` user
and ``default`` domain credentials.
14
doc/install-guide/source/horizon-verify-rdo.rst
Normal file
@ -0,0 +1,14 @@
Verify operation
~~~~~~~~~~~~~~~~

Verify operation of the dashboard.

Access the dashboard using a web browser at
``http://controller/dashboard``.

Authenticate using the ``admin`` or ``demo`` user
and ``default`` domain credentials.
14
doc/install-guide/source/horizon-verify-ubuntu.rst
Normal file
@ -0,0 +1,14 @@
Verify operation
~~~~~~~~~~~~~~~~

Verify operation of the dashboard.

Access the dashboard using a web browser at
``http://controller/horizon``.

Authenticate using the ``admin`` or ``demo`` user
and ``default`` domain credentials.
@ -1,28 +1,7 @@
Verify operation
~~~~~~~~~~~~~~~~

Verify operation of the dashboard.

.. toctree::
   :glob:

.. only:: obs or debian

   Access the dashboard using a web browser at
   ``http://controller/``.

.. endonly

.. only:: rdo

   Access the dashboard using a web browser at
   ``http://controller/dashboard``.

.. endonly

.. only:: ubuntu

   Access the dashboard using a web browser at
   ``http://controller/horizon``.

.. endonly

Authenticate using the ``admin`` or ``demo`` user
and ``default`` domain credentials.

   horizon-verify-*
84
doc/install-guide/source/index-debian.rst
Normal file
@ -0,0 +1,84 @@
==========================================
OpenStack Installation Tutorial for Debian
==========================================

Abstract
~~~~~~~~

The OpenStack system consists of several key services that are separately
installed. These services work together depending on your cloud
needs and include the Compute, Identity, Networking, Image, Block Storage,
Object Storage, Telemetry, Orchestration, and Database services. You
can install any of these projects separately and configure them stand-alone
or as connected entities.

This guide walks through an installation by using packages
available through Debian 8 (code name: Jessie).

.. note::

   This guide uses installation with debconf set to non-interactive
   mode. That is, there will be no debconf prompt. To configure a computer
   to use this mode, run the following command:

   .. code-block:: console

      # dpkg-reconfigure debconf

   .. end

   If you prefer to use debconf, refer to the debconf
   install-guide for Debian.

Explanations of configuration options and sample configuration files
are included.

.. note::
   The Training Labs scripts provide an automated way of deploying the
   cluster described in this Installation Guide into VirtualBox or KVM
   VMs. You will need a desktop computer or a laptop with at least 8
   GB memory and 20 GB free storage running Linux, macOS, or Windows.
   Please see the
   `OpenStack Training Labs <https://docs.openstack.org/training_labs/>`_.

This guide documents the OpenStack Ocata release.

.. warning::

   This guide is a work-in-progress and is subject to frequent updates.
   Pre-release packages have been used for testing, and some instructions
   may not work with final versions. Please help us make this guide better
   by reporting any errors you encounter.

Contents
~~~~~~~~

.. toctree::
   :maxdepth: 2

   common/conventions.rst
   overview.rst
   environment.rst
   keystone.rst
   glance.rst
   nova.rst
   neutron.rst
   horizon.rst
   cinder.rst
   additional-services.rst
   launch-instance.rst
   common/appendix.rst

.. Pseudo only directive for each distribution used by the build tool.
   This pseudo only directive for toctree only works fine with Tox.
   When you directly build this guide with Sphinx,
   some navigation menu may not work properly.
.. Keep this pseudo only directive not to break translation tool chain
   at the openstack-doc-tools repo until it is changed.
.. end of contents
72
doc/install-guide/source/index-obs.rst
Normal file
@ -0,0 +1,72 @@
======================================================================
OpenStack Installation Tutorial for openSUSE and SUSE Linux Enterprise
======================================================================

Abstract
~~~~~~~~

The OpenStack system consists of several key services that are separately
installed. These services work together depending on your cloud
needs and include the Compute, Identity, Networking, Image, Block Storage,
Object Storage, Telemetry, Orchestration, and Database services. You
can install any of these projects separately and configure them stand-alone
or as connected entities.

This guide will show you how to install OpenStack by using packages
on openSUSE Leap 42.2 and SUSE Linux Enterprise Server 12 - for
both SP1 and SP2 - through the Open Build Service Cloud repository.

Explanations of configuration options and sample configuration files
are included.

.. note::
   The Training Labs scripts provide an automated way of deploying the
   cluster described in this Installation Guide into VirtualBox or KVM
   VMs. You will need a desktop computer or a laptop with at least 8
   GB memory and 20 GB free storage running Linux, macOS, or Windows.
   Please see the
   `OpenStack Training Labs <https://docs.openstack.org/training_labs/>`_.

This guide documents the OpenStack Ocata release.

.. warning::

   This guide is a work-in-progress and is subject to frequent updates.
   Pre-release packages have been used for testing, and some instructions
   may not work with final versions. Please help us make this guide better
   by reporting any errors you encounter.

Contents
~~~~~~~~

.. toctree::
   :maxdepth: 2

   common/conventions.rst
   overview.rst
   environment.rst
   keystone.rst
   glance.rst
   nova.rst
   neutron.rst
   horizon.rst
   cinder.rst
   additional-services.rst
   launch-instance.rst
   common/appendix.rst

.. Pseudo only directive for each distribution used by the build tool.
   This pseudo only directive for toctree only works fine with Tox.
   When you directly build this guide with Sphinx,
   some navigation menu may not work properly.
.. Keep this pseudo only directive not to break translation tool chain
   at the openstack-doc-tools repo until it is changed.
.. end of contents
73
doc/install-guide/source/index-rdo.rst
Normal file
@ -0,0 +1,73 @@
=======================================================================
OpenStack Installation Tutorial for Red Hat Enterprise Linux and CentOS
=======================================================================

Abstract
~~~~~~~~

The OpenStack system consists of several key services that are separately
installed. These services work together depending on your cloud
needs and include the Compute, Identity, Networking, Image, Block Storage,
Object Storage, Telemetry, Orchestration, and Database services. You
can install any of these projects separately and configure them stand-alone
or as connected entities.

This guide will show you how to install OpenStack by using packages
available on Red Hat Enterprise Linux 7 and its derivatives through
the RDO repository.

Explanations of configuration options and sample configuration files
are included.

.. note::
   The Training Labs scripts provide an automated way of deploying the
   cluster described in this Installation Guide into VirtualBox or KVM
   VMs. You will need a desktop computer or a laptop with at least 8
   GB memory and 20 GB free storage running Linux, macOS, or Windows.
   Please see the
   `OpenStack Training Labs <https://docs.openstack.org/training_labs/>`_.

This guide documents the OpenStack Ocata release.

.. warning::

   This guide is a work-in-progress and is subject to frequent updates.
   Pre-release packages have been used for testing, and some instructions
   may not work with final versions. Please help us make this guide better
   by reporting any errors you encounter.

Contents
~~~~~~~~

.. toctree::
   :maxdepth: 2

   common/conventions.rst
   overview.rst
   environment.rst
   keystone.rst
   glance.rst
   nova.rst
   neutron.rst
   horizon.rst
   cinder.rst
   additional-services.rst
   launch-instance.rst
   common/appendix.rst

.. Pseudo only directive for each distribution used by the build tool.
   This pseudo only directive for toctree only works fine with Tox.
   When you directly build this guide with Sphinx,
   some navigation menu may not work properly.
.. Keep this pseudo only directive not to break translation tool chain
   at the openstack-doc-tools repo until it is changed.
.. end of contents
71
doc/install-guide/source/index-ubuntu.rst
Normal file
@ -0,0 +1,71 @@
==========================================
OpenStack Installation Tutorial for Ubuntu
==========================================

Abstract
~~~~~~~~

The OpenStack system consists of several key services that are separately
installed. These services work together depending on your cloud
needs and include the Compute, Identity, Networking, Image, Block Storage,
Object Storage, Telemetry, Orchestration, and Database services. You
can install any of these projects separately and configure them stand-alone
or as connected entities.

This guide will walk through an installation by using packages
available through Canonical's Ubuntu Cloud archive repository for
Ubuntu 16.04 (LTS).

Explanations of configuration options and sample configuration files
are included.

.. note::
   The Training Labs scripts provide an automated way of deploying the
   cluster described in this Installation Guide into VirtualBox or KVM
   VMs. You will need a desktop computer or a laptop with at least 8
   GB memory and 20 GB free storage running Linux, macOS, or Windows.
   Please see the
   `OpenStack Training Labs <https://docs.openstack.org/training_labs/>`_.

This guide documents the OpenStack Ocata release.

.. warning::

   This guide is a work-in-progress and is subject to frequent updates.
   Pre-release packages have been used for testing, and some instructions
   may not work with final versions. Please help us make this guide better
   by reporting any errors you encounter.

Contents
~~~~~~~~

.. toctree::
   :maxdepth: 2

   common/conventions.rst
   overview.rst
   environment.rst
   keystone.rst
   glance.rst
   nova.rst
   neutron.rst
   horizon.rst
   cinder.rst
   additional-services.rst
   launch-instance.rst
   common/appendix.rst

.. Pseudo only directive for each distribution used by the build tool.
   This pseudo only directive for toctree only works fine with Tox.
   When you directly build this guide with Sphinx,
   some navigation menu may not work properly.
.. Keep this pseudo only directive not to break translation tool chain
   at the openstack-doc-tools repo until it is changed.
.. end of contents
@ -1,140 +1,11 @@
.. title:: OpenStack Installation Tutorial

.. Don't remove or change title tag manually, which is used by the build tool.

.. only:: rdo

   =======================================================================
   OpenStack Installation Tutorial for Red Hat Enterprise Linux and CentOS
   =======================================================================

.. endonly

.. only:: obs

   ======================================================================
   OpenStack Installation Tutorial for openSUSE and SUSE Linux Enterprise
   ======================================================================

.. endonly

.. only:: ubuntu

   ==========================================
   OpenStack Installation Tutorial for Ubuntu
   ==========================================

.. endonly

.. only:: debian

   ==========================================
   OpenStack Installation Tutorial for Debian
   ==========================================

.. endonly

Abstract
~~~~~~~~

The OpenStack system consists of several key services that are separately
installed. These services work together depending on your cloud
needs and include the Compute, Identity, Networking, Image, Block Storage,
Object Storage, Telemetry, Orchestration, and Database services. You
can install any of these projects separately and configure them stand-alone
or as connected entities.

.. only:: rdo

   This guide will show you how to install OpenStack by using packages
   available on Red Hat Enterprise Linux 7 and its derivatives through
   the RDO repository.

.. endonly

.. only:: ubuntu

   This guide will walk through an installation by using packages
   available through Canonical's Ubuntu Cloud archive repository for
   Ubuntu 16.04 (LTS).

.. endonly

.. only:: obs

   This guide will show you how to install OpenStack by using packages
   on openSUSE Leap 42.2 and SUSE Linux Enterprise Server 12 - for
   both SP1 and SP2 - through the Open Build Service Cloud repository.

.. endonly

.. only:: debian

   This guide walks through an installation by using packages
   available through Debian 8 (code name: Jessie).

   .. note::

      This guide uses installation with debconf set to non-interactive
      mode. That is, there will be no debconf prompt. To configure a computer
      to use this mode, run the following command:

      .. code-block:: console

         # dpkg-reconfigure debconf

      .. end

      If you prefer to use debconf, refer to the debconf
      install-guide for Debian.

.. endonly

Explanations of configuration options and sample configuration files
are included.

.. note::
   The Training Labs scripts provide an automated way of deploying the
   cluster described in this Installation Guide into VirtualBox or KVM
   VMs. You will need a desktop computer or a laptop with at least 8
   GB memory and 20 GB free storage running Linux, macOS, or Windows.
   Please see the
   `OpenStack Training Labs <https://docs.openstack.org/training_labs/>`_.

This guide documents the OpenStack Ocata release.

.. warning::

   This guide is a work-in-progress and is subject to frequent updates.
   Pre-release packages have been used for testing, and some instructions
   may not work with final versions. Please help us make this guide better
   by reporting any errors you encounter.

Contents
~~~~~~~~
=================================
  OpenStack Installation Tutorial
=================================

.. toctree::
   :maxdepth: 2
   :maxdepth: 3

   common/conventions.rst
   overview.rst
   environment.rst
   keystone.rst
   glance.rst
   nova.rst
   neutron.rst
   horizon.rst
   cinder.rst
   additional-services.rst
   launch-instance.rst
   common/appendix.rst

.. Pseudo only directive for each distribution used by the build tool.
   This pseudo only directive for toctree only works fine with Tox.
   When you directly build this guide with Sphinx,
   some navigation menu may not work properly.
.. Keep this pseudo only directive not to break translation tool chain
   at the openstack-doc-tools repo until it is changed.
.. only:: obs or rdo or ubuntu
.. only:: debian
.. end of contents

   index-debian
   index-obs
   index-rdo
   index-ubuntu
197
doc/install-guide/source/keystone-install-debian.rst
Normal file
@ -0,0 +1,197 @@
Install and configure
~~~~~~~~~~~~~~~~~~~~~

This section describes how to install and configure the OpenStack
Identity service, code-named keystone, on the controller node. For
scalability purposes, this configuration deploys Fernet tokens and
the Apache HTTP server to handle requests.

Prerequisites
-------------

Before you install and configure the Identity service, you must
create a database.

#. Use the database access client to connect to the database
   server as the ``root`` user:

   .. code-block:: console

      $ mysql -u root -p

   .. end

2. Create the ``keystone`` database:

   .. code-block:: console

      MariaDB [(none)]> CREATE DATABASE keystone;

   .. end

#. Grant proper access to the ``keystone`` database:

   .. code-block:: console

      MariaDB [(none)]> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' \
        IDENTIFIED BY 'KEYSTONE_DBPASS';
      MariaDB [(none)]> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' \
        IDENTIFIED BY 'KEYSTONE_DBPASS';

   .. end

   Replace ``KEYSTONE_DBPASS`` with a suitable password.

#. Exit the database access client.
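A "suitable password" here means one that is long and random rather than guessable. A minimal sketch (not part of the guide; the length and alphabet are illustrative choices) for generating one with Python's standard library:

```python
# Hypothetical helper: generate a random password for KEYSTONE_DBPASS.
# secrets (not random) is the stdlib module intended for security tokens.
import secrets
import string

def generate_password(length: int = 24) -> str:
    alphabet = string.ascii_letters + string.digits
    return ''.join(secrets.choice(alphabet) for _ in range(length))

password = generate_password()
print(len(password))  # 24
```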
.. _keystone-install-configure-debian:

Install and configure components
--------------------------------

.. include:: shared/note_configuration_vary_by_distribution.rst

.. note::

   This guide uses the Apache HTTP server with ``mod_wsgi`` to serve
   Identity service requests on ports 5000 and 35357. By default, the
   keystone service still listens on these ports. The package handles
   all of the Apache configuration for you (including the activation of
   the ``mod_wsgi`` apache2 module and keystone configuration in Apache).

#. Run the following command to install the packages:

   .. code-block:: console

      # apt install keystone

   .. end

2. Edit the ``/etc/keystone/keystone.conf`` file and complete the following
   actions:

   * In the ``[database]`` section, configure database access:

     .. path /etc/keystone/keystone.conf
     .. code-block:: ini

        [database]
        # ...
        connection = mysql+pymysql://keystone:KEYSTONE_DBPASS@controller/keystone

     .. end

     Replace ``KEYSTONE_DBPASS`` with the password you chose for the database.

     .. note::

        Comment out or remove any other ``connection`` options in the
        ``[database]`` section.

   * In the ``[token]`` section, configure the Fernet token provider:

     .. path /etc/keystone/keystone.conf
     .. code-block:: ini

        [token]
        # ...
        provider = fernet

     .. end

3. Populate the Identity service database:

   .. code-block:: console

      # su -s /bin/sh -c "keystone-manage db_sync" keystone

   .. end

4. Initialize Fernet key repositories:

   .. code-block:: console

      # keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone
      # keystone-manage credential_setup --keystone-user keystone --keystone-group keystone

   .. end

5. Bootstrap the Identity service:

   .. code-block:: console

      # keystone-manage bootstrap --bootstrap-password ADMIN_PASS \
        --bootstrap-admin-url http://controller:35357/v3/ \
        --bootstrap-internal-url http://controller:5000/v3/ \
        --bootstrap-public-url http://controller:5000/v3/ \
        --bootstrap-region-id RegionOne

   .. end

   Replace ``ADMIN_PASS`` with a suitable password for an administrative user.
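Note that the ``connection`` option is a URL, so a database password containing reserved characters such as ``@``, ``/``, or ``:`` must be percent-encoded before it is placed in ``keystone.conf``. A small illustrative sketch (not from the guide; the helper name is hypothetical) using the standard library:

```python
# Hypothetical helper: build the SQLAlchemy-style connection URL used in
# keystone.conf, percent-encoding the password so reserved characters
# do not break URL parsing.
from urllib.parse import quote_plus

def keystone_db_url(password: str, host: str = "controller") -> str:
    return f"mysql+pymysql://keystone:{quote_plus(password)}@{host}/keystone"

print(keystone_db_url("KEYSTONE_DBPASS"))
# mysql+pymysql://keystone:KEYSTONE_DBPASS@controller/keystone
print(keystone_db_url("p@ss/word"))
# mysql+pymysql://keystone:p%40ss%2Fword@controller/keystone
```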
Configure the Apache HTTP server
--------------------------------

#. Edit the ``/etc/apache2/apache2.conf`` file and configure the
   ``ServerName`` option to reference the controller node:

   .. path /etc/apache2/apache2.conf
   .. code-block:: apache

      ServerName controller

   .. end

   .. note::

      The Debian package performs the following operations for you:

      .. code-block:: console

         # a2enmod wsgi
         # a2ensite wsgi-keystone.conf
         # invoke-rc.d apache2 restart

      .. end

Finalize the installation
-------------------------

2. Configure the administrative account:

   .. code-block:: console

      $ export OS_USERNAME=admin
      $ export OS_PASSWORD=ADMIN_PASS
      $ export OS_PROJECT_NAME=admin
      $ export OS_USER_DOMAIN_NAME=Default
      $ export OS_PROJECT_DOMAIN_NAME=Default
      $ export OS_AUTH_URL=http://controller:35357/v3
      $ export OS_IDENTITY_API_VERSION=3

   .. end

   Replace ``ADMIN_PASS`` with the password used in the
   ``keystone-manage bootstrap`` command in `keystone-install-configure-debian`_.
261
doc/install-guide/source/keystone-install-obs.rst
Normal file
@ -0,0 +1,261 @@
Install and configure
~~~~~~~~~~~~~~~~~~~~~

This section describes how to install and configure the OpenStack
Identity service, code-named keystone, on the controller node. For
scalability purposes, this configuration deploys Fernet tokens and
the Apache HTTP server to handle requests.

Prerequisites
-------------

Before you install and configure the Identity service, you must
create a database.

.. note::

   Before you begin, ensure you have the most recent version of
   ``python-pyasn1`` `installed <https://pypi.python.org/pypi/pyasn1>`_.

#. Use the database access client to connect to the database
   server as the ``root`` user:

   .. code-block:: console

      $ mysql -u root -p

   .. end

2. Create the ``keystone`` database:

   .. code-block:: console

      MariaDB [(none)]> CREATE DATABASE keystone;

   .. end

#. Grant proper access to the ``keystone`` database:

   .. code-block:: console

      MariaDB [(none)]> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' \
        IDENTIFIED BY 'KEYSTONE_DBPASS';
      MariaDB [(none)]> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' \
        IDENTIFIED BY 'KEYSTONE_DBPASS';

   .. end

   Replace ``KEYSTONE_DBPASS`` with a suitable password.

#. Exit the database access client.

.. _keystone-install-configure-obs:

Install and configure components
--------------------------------

.. include:: shared/note_configuration_vary_by_distribution.rst

.. note::

   This guide uses the Apache HTTP server with ``mod_wsgi`` to serve
   Identity service requests on ports 5000 and 35357. By default, the
   keystone service still listens on these ports. Therefore, this guide
   manually disables the keystone service.

.. note::

   Starting with the Newton release, SUSE OpenStack packages ship
   with the upstream default configuration files. For example
   ``/etc/keystone/keystone.conf``, with customizations in
   ``/etc/keystone/keystone.conf.d/010-keystone.conf``. While the
   following instructions modify the default configuration file, adding a
   new file in ``/etc/keystone/keystone.conf.d`` achieves the same
   result.

#. Run the following command to install the packages:

   .. code-block:: console

      # zypper install openstack-keystone apache2-mod_wsgi

   .. end

2. Edit the ``/etc/keystone/keystone.conf`` file and complete the following
   actions:

   * In the ``[database]`` section, configure database access:

     .. path /etc/keystone/keystone.conf
     .. code-block:: ini

        [database]
        # ...
        connection = mysql+pymysql://keystone:KEYSTONE_DBPASS@controller/keystone

     .. end

     Replace ``KEYSTONE_DBPASS`` with the password you chose for the database.

     .. note::

        Comment out or remove any other ``connection`` options in the
        ``[database]`` section.

   * In the ``[token]`` section, configure the Fernet token provider:

     .. path /etc/keystone/keystone.conf
     .. code-block:: ini

        [token]
        # ...
        provider = fernet

     .. end

3. Populate the Identity service database:

   .. code-block:: console

      # su -s /bin/sh -c "keystone-manage db_sync" keystone

   .. end

4. Initialize Fernet key repositories:

   .. code-block:: console

      # keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone
      # keystone-manage credential_setup --keystone-user keystone --keystone-group keystone

   .. end

5. Bootstrap the Identity service:

   .. code-block:: console

      # keystone-manage bootstrap --bootstrap-password ADMIN_PASS \
        --bootstrap-admin-url http://controller:35357/v3/ \
        --bootstrap-internal-url http://controller:5000/v3/ \
        --bootstrap-public-url http://controller:5000/v3/ \
        --bootstrap-region-id RegionOne

   .. end

   Replace ``ADMIN_PASS`` with a suitable password for an administrative user.

Configure the Apache HTTP server
--------------------------------

#. Edit the ``/etc/sysconfig/apache2`` file and configure the
   ``APACHE_SERVERNAME`` option to reference the controller node:

   .. path /etc/sysconfig/apache2
   .. code-block:: shell

      APACHE_SERVERNAME="controller"

   .. end

#. Create the ``/etc/apache2/conf.d/wsgi-keystone.conf`` file
   with the following content:

   .. path /etc/apache2/conf.d/wsgi-keystone.conf
   .. code-block:: apache

      Listen 5000
      Listen 35357

      <VirtualHost *:5000>
          WSGIDaemonProcess keystone-public processes=5 threads=1 user=keystone group=keystone display-name=%{GROUP}
          WSGIProcessGroup keystone-public
          WSGIScriptAlias / /usr/bin/keystone-wsgi-public
          WSGIApplicationGroup %{GLOBAL}
          WSGIPassAuthorization On
          ErrorLogFormat "%{cu}t %M"
          ErrorLog /var/log/apache2/keystone.log
          CustomLog /var/log/apache2/keystone_access.log combined

          <Directory /usr/bin>
              Require all granted
          </Directory>
      </VirtualHost>

      <VirtualHost *:35357>
          WSGIDaemonProcess keystone-admin processes=5 threads=1 user=keystone group=keystone display-name=%{GROUP}
          WSGIProcessGroup keystone-admin
          WSGIScriptAlias / /usr/bin/keystone-wsgi-admin
          WSGIApplicationGroup %{GLOBAL}
          WSGIPassAuthorization On
          ErrorLogFormat "%{cu}t %M"
          ErrorLog /var/log/apache2/keystone.log
          CustomLog /var/log/apache2/keystone_access.log combined

          <Directory /usr/bin>
              Require all granted
          </Directory>
      </VirtualHost>

   .. end

#. Recursively change the ownership of the ``/etc/keystone`` directory:

   .. code-block:: console

      # chown -R keystone:keystone /etc/keystone

   .. end
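The ``/usr/bin/keystone-wsgi-public`` and ``/usr/bin/keystone-wsgi-admin`` scripts referenced by ``WSGIScriptAlias`` are ordinary WSGI entry points that ``mod_wsgi`` loads. A toy sketch (not keystone's actual code) of the interface such a script must expose:

```python
# Toy WSGI application illustrating what mod_wsgi expects from scripts
# like keystone-wsgi-public: a module-level callable named `application`
# taking (environ, start_response).  Not keystone code.
def application(environ, start_response):
    body = b'{"status": "ok"}'
    start_response("200 OK", [("Content-Type", "application/json"),
                              ("Content-Length", str(len(body)))])
    return [body]

# Exercise the callable directly, without Apache:
captured = []
body = b"".join(application({}, lambda status, headers: captured.append(status)))
print(captured[0])  # 200 OK
print(body)         # b'{"status": "ok"}'
```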
|
||||
|
||||
|
||||
|
||||
Finalize the installation
|
||||
-------------------------
|
||||
|
||||
|
||||
|
||||
|
||||
#. Start the Apache HTTP service and configure it to start when the system
|
||||
boots:
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
# systemctl enable apache2.service
|
||||
# systemctl start apache2.service
|
||||
|
||||
.. end
|
||||
|
||||
|
||||
2. Configure the administrative account
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
$ export OS_USERNAME=admin
|
||||
$ export OS_PASSWORD=ADMIN_PASS
|
||||
$ export OS_PROJECT_NAME=admin
|
||||
$ export OS_USER_DOMAIN_NAME=Default
|
||||
$ export OS_PROJECT_DOMAIN_NAME=Default
|
||||
$ export OS_AUTH_URL=http://controller:35357/v3
|
||||
$ export OS_IDENTITY_API_VERSION=3
|
||||
|
||||
.. end
|
||||
|
||||
Replace ``ADMIN_PASS`` with the password used in the
|
||||
``keystone-manage bootstrap`` command in `keystone-install-configure-obs`_.
|
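The seven ``export`` lines above only last for the current shell session. A common convenience is to keep them in a small rc file that later shells can re-load instead of re-typing each export. A minimal sketch, assuming a file named ``admin-openrc`` in the current directory (the file name is an assumption for illustration, not part of this guide; ``ADMIN_PASS`` remains a placeholder to replace):

```shell
# Sketch: persist the administrative credentials in an rc file.
# ADMIN_PASS is still a placeholder and must be replaced, as above.
cat > admin-openrc <<'EOF'
export OS_USERNAME=admin
export OS_PASSWORD=ADMIN_PASS
export OS_PROJECT_NAME=admin
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_DOMAIN_NAME=Default
export OS_AUTH_URL=http://controller:35357/v3
export OS_IDENTITY_API_VERSION=3
EOF

# Load the credentials in any later shell before running openstack commands:
. ./admin-openrc
echo "$OS_AUTH_URL"
```

Sourcing the file (``. ./admin-openrc``) rather than executing it is what makes the variables visible in the calling shell.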
203 doc/install-guide/source/keystone-install-rdo.rst Normal file
@@ -0,0 +1,203 @@
Install and configure
~~~~~~~~~~~~~~~~~~~~~

This section describes how to install and configure the OpenStack
Identity service, code-named keystone, on the controller node. For
scalability purposes, this configuration deploys Fernet tokens and
the Apache HTTP server to handle requests.

Prerequisites
-------------

Before you install and configure the Identity service, you must
create a database.

#. Use the database access client to connect to the database
   server as the ``root`` user:

   .. code-block:: console

      $ mysql -u root -p

   .. end

2. Create the ``keystone`` database:

   .. code-block:: console

      MariaDB [(none)]> CREATE DATABASE keystone;

   .. end

#. Grant proper access to the ``keystone`` database:

   .. code-block:: console

      MariaDB [(none)]> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' \
        IDENTIFIED BY 'KEYSTONE_DBPASS';
      MariaDB [(none)]> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' \
        IDENTIFIED BY 'KEYSTONE_DBPASS';

   .. end

   Replace ``KEYSTONE_DBPASS`` with a suitable password.

#. Exit the database access client.
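A suitable value for ``KEYSTONE_DBPASS`` can be generated rather than invented. A small sketch (assumes ``openssl`` is available on the node; any random-string generator works equally well):

```shell
# Generate a 20-character hexadecimal password for the keystone database user.
KEYSTONE_DBPASS=$(openssl rand -hex 10)
echo "$KEYSTONE_DBPASS"
```

The generated value is then used both in the ``GRANT`` statements above and in the ``connection`` option configured later.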
.. _keystone-install-configure-rdo:

Install and configure components
--------------------------------

.. include:: shared/note_configuration_vary_by_distribution.rst

.. note::

   This guide uses the Apache HTTP server with ``mod_wsgi`` to serve
   Identity service requests on ports 5000 and 35357. By default, the
   keystone service still listens on these ports. Therefore, this guide
   manually disables the keystone service.

#. Run the following command to install the packages:

   .. code-block:: console

      # yum install openstack-keystone httpd mod_wsgi

   .. end

2. Edit the ``/etc/keystone/keystone.conf`` file and complete the following
   actions:

   * In the ``[database]`` section, configure database access:

     .. path /etc/keystone/keystone.conf
     .. code-block:: ini

        [database]
        # ...
        connection = mysql+pymysql://keystone:KEYSTONE_DBPASS@controller/keystone

     .. end

     Replace ``KEYSTONE_DBPASS`` with the password you chose for the database.

     .. note::

        Comment out or remove any other ``connection`` options in the
        ``[database]`` section.

   * In the ``[token]`` section, configure the Fernet token provider:

     .. path /etc/keystone/keystone.conf
     .. code-block:: ini

        [token]
        # ...
        provider = fernet

     .. end

3. Populate the Identity service database:

   .. code-block:: console

      # su -s /bin/sh -c "keystone-manage db_sync" keystone

   .. end

4. Initialize Fernet key repositories:

   .. code-block:: console

      # keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone
      # keystone-manage credential_setup --keystone-user keystone --keystone-group keystone

   .. end

5. Bootstrap the Identity service:

   .. code-block:: console

      # keystone-manage bootstrap --bootstrap-password ADMIN_PASS \
        --bootstrap-admin-url http://controller:35357/v3/ \
        --bootstrap-internal-url http://controller:5000/v3/ \
        --bootstrap-public-url http://controller:5000/v3/ \
        --bootstrap-region-id RegionOne

   .. end

   Replace ``ADMIN_PASS`` with a suitable password for an administrative user.

Configure the Apache HTTP server
--------------------------------

#. Edit the ``/etc/httpd/conf/httpd.conf`` file and configure the
   ``ServerName`` option to reference the controller node:

   .. path /etc/httpd/conf/httpd.conf
   .. code-block:: apache

      ServerName controller

   .. end

#. Create a link to the ``/usr/share/keystone/wsgi-keystone.conf`` file:

   .. code-block:: console

      # ln -s /usr/share/keystone/wsgi-keystone.conf /etc/httpd/conf.d/

   .. end

Finalize the installation
-------------------------

#. Start the Apache HTTP service and configure it to start when the system
   boots:

   .. code-block:: console

      # systemctl enable httpd.service
      # systemctl start httpd.service

   .. end

2. Configure the administrative account

   .. code-block:: console

      $ export OS_USERNAME=admin
      $ export OS_PASSWORD=ADMIN_PASS
      $ export OS_PROJECT_NAME=admin
      $ export OS_USER_DOMAIN_NAME=Default
      $ export OS_PROJECT_DOMAIN_NAME=Default
      $ export OS_AUTH_URL=http://controller:35357/v3
      $ export OS_IDENTITY_API_VERSION=3

   .. end

   Replace ``ADMIN_PASS`` with the password used in the
   ``keystone-manage bootstrap`` command in `keystone-install-configure-rdo`_.
193 doc/install-guide/source/keystone-install-ubuntu.rst Normal file
@@ -0,0 +1,193 @@
Install and configure
~~~~~~~~~~~~~~~~~~~~~

This section describes how to install and configure the OpenStack
Identity service, code-named keystone, on the controller node. For
scalability purposes, this configuration deploys Fernet tokens and
the Apache HTTP server to handle requests.

Prerequisites
-------------

Before you install and configure the Identity service, you must
create a database.

#. Use the database access client to connect to the database
   server as the ``root`` user:

   .. code-block:: console

      # mysql

   .. end

2. Create the ``keystone`` database:

   .. code-block:: console

      MariaDB [(none)]> CREATE DATABASE keystone;

   .. end

#. Grant proper access to the ``keystone`` database:

   .. code-block:: console

      MariaDB [(none)]> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' \
        IDENTIFIED BY 'KEYSTONE_DBPASS';
      MariaDB [(none)]> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' \
        IDENTIFIED BY 'KEYSTONE_DBPASS';

   .. end

   Replace ``KEYSTONE_DBPASS`` with a suitable password.

#. Exit the database access client.

.. _keystone-install-configure-ubuntu:

Install and configure components
--------------------------------

.. include:: shared/note_configuration_vary_by_distribution.rst

.. note::

   This guide uses the Apache HTTP server with ``mod_wsgi`` to serve
   Identity service requests on ports 5000 and 35357. By default, the
   keystone service still listens on these ports. The package handles
   all of the Apache configuration for you (including the activation of
   the ``mod_wsgi`` apache2 module and keystone configuration in Apache).

#. Run the following command to install the packages:

   .. code-block:: console

      # apt install keystone

   .. end

2. Edit the ``/etc/keystone/keystone.conf`` file and complete the following
   actions:

   * In the ``[database]`` section, configure database access:

     .. path /etc/keystone/keystone.conf
     .. code-block:: ini

        [database]
        # ...
        connection = mysql+pymysql://keystone:KEYSTONE_DBPASS@controller/keystone

     .. end

     Replace ``KEYSTONE_DBPASS`` with the password you chose for the database.

     .. note::

        Comment out or remove any other ``connection`` options in the
        ``[database]`` section.

   * In the ``[token]`` section, configure the Fernet token provider:

     .. path /etc/keystone/keystone.conf
     .. code-block:: ini

        [token]
        # ...
        provider = fernet

     .. end

3. Populate the Identity service database:

   .. code-block:: console

      # su -s /bin/sh -c "keystone-manage db_sync" keystone

   .. end

4. Initialize Fernet key repositories:

   .. code-block:: console

      # keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone
      # keystone-manage credential_setup --keystone-user keystone --keystone-group keystone

   .. end

5. Bootstrap the Identity service:

   .. code-block:: console

      # keystone-manage bootstrap --bootstrap-password ADMIN_PASS \
        --bootstrap-admin-url http://controller:35357/v3/ \
        --bootstrap-internal-url http://controller:5000/v3/ \
        --bootstrap-public-url http://controller:5000/v3/ \
        --bootstrap-region-id RegionOne

   .. end

   Replace ``ADMIN_PASS`` with a suitable password for an administrative user.

Configure the Apache HTTP server
--------------------------------

#. Edit the ``/etc/apache2/apache2.conf`` file and configure the
   ``ServerName`` option to reference the controller node:

   .. path /etc/apache2/apache2.conf
   .. code-block:: apache

      ServerName controller

   .. end

Finalize the installation
-------------------------

#. Restart the Apache service:

   .. code-block:: console

      # service apache2 restart

   .. end

2. Configure the administrative account

   .. code-block:: console

      $ export OS_USERNAME=admin
      $ export OS_PASSWORD=ADMIN_PASS
      $ export OS_PROJECT_NAME=admin
      $ export OS_USER_DOMAIN_NAME=Default
      $ export OS_PROJECT_DOMAIN_NAME=Default
      $ export OS_AUTH_URL=http://controller:35357/v3
      $ export OS_IDENTITY_API_VERSION=3

   .. end

   Replace ``ADMIN_PASS`` with the password used in the
   ``keystone-manage bootstrap`` command in `keystone-install-configure-ubuntu`_.
@@ -8,385 +8,7 @@ Identity service, code-named keystone, on the controller node. For
scalability purposes, this configuration deploys Fernet tokens and
the Apache HTTP server to handle requests.

Prerequisites
-------------

.. toctree::
   :glob:

   keystone-install-*

Before you install and configure the Identity service, you must
create a database.

.. only:: obs

   .. note::

      Before you begin, ensure you have the most recent version of
      ``python-pyasn1`` `installed <https://pypi.python.org/pypi/pyasn1>`_.

.. endonly

.. only:: ubuntu

   #. Use the database access client to connect to the database
      server as the ``root`` user:

      .. code-block:: console

         # mysql

      .. end

.. endonly

.. only:: rdo or debian or obs

   #. Use the database access client to connect to the database
      server as the ``root`` user:

      .. code-block:: console

         $ mysql -u root -p

      .. end

.. endonly

2. Create the ``keystone`` database:

   .. code-block:: console

      MariaDB [(none)]> CREATE DATABASE keystone;

   .. end

#. Grant proper access to the ``keystone`` database:

   .. code-block:: console

      MariaDB [(none)]> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' \
        IDENTIFIED BY 'KEYSTONE_DBPASS';
      MariaDB [(none)]> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' \
        IDENTIFIED BY 'KEYSTONE_DBPASS';

   .. end

   Replace ``KEYSTONE_DBPASS`` with a suitable password.

#. Exit the database access client.

.. _keystone-install-configure:

Install and configure components
--------------------------------

.. include:: shared/note_configuration_vary_by_distribution.rst

.. only:: obs or rdo

   .. note::

      This guide uses the Apache HTTP server with ``mod_wsgi`` to serve
      Identity service requests on ports 5000 and 35357. By default, the
      keystone service still listens on these ports. Therefore, this guide
      manually disables the keystone service.

.. endonly

.. only:: obs

   .. note::

      Starting with the Newton release, SUSE OpenStack packages are shipping
      with the upstream default configuration files. For example
      ``/etc/keystone/keystone.conf``, with customizations in
      ``/etc/keystone/keystone.conf.d/010-keystone.conf``. While the
      following instructions modify the default configuration file, adding a
      new file in ``/etc/keystone/keystone.conf.d`` achieves the same
      result.

.. endonly

.. only:: ubuntu or debian

   .. note::

      This guide uses the Apache HTTP server with ``mod_wsgi`` to serve
      Identity service requests on ports 5000 and 35357. By default, the
      keystone service still listens on these ports. The package handles
      all of the Apache configuration for you (including the activation of
      the ``mod_wsgi`` apache2 module and keystone configuration in Apache).

   #. Run the following command to install the packages:

      .. code-block:: console

         # apt install keystone

      .. end

.. endonly

.. only:: rdo

   #. Run the following command to install the packages:

      .. code-block:: console

         # yum install openstack-keystone httpd mod_wsgi

      .. end

.. endonly

.. only:: obs

   #. Run the following command to install the packages:

      .. code-block:: console

         # zypper install openstack-keystone apache2-mod_wsgi

      .. end

.. endonly

2. Edit the ``/etc/keystone/keystone.conf`` file and complete the following
   actions:

   * In the ``[database]`` section, configure database access:

     .. path /etc/keystone/keystone.conf
     .. code-block:: ini

        [database]
        # ...
        connection = mysql+pymysql://keystone:KEYSTONE_DBPASS@controller/keystone

     .. end

     Replace ``KEYSTONE_DBPASS`` with the password you chose for the database.

     .. note::

        Comment out or remove any other ``connection`` options in the
        ``[database]`` section.

   * In the ``[token]`` section, configure the Fernet token provider:

     .. path /etc/keystone/keystone.conf
     .. code-block:: ini

        [token]
        # ...
        provider = fernet

     .. end

3. Populate the Identity service database:

   .. code-block:: console

      # su -s /bin/sh -c "keystone-manage db_sync" keystone

   .. end

4. Initialize Fernet key repositories:

   .. code-block:: console

      # keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone
      # keystone-manage credential_setup --keystone-user keystone --keystone-group keystone

   .. end

5. Bootstrap the Identity service:

   .. code-block:: console

      # keystone-manage bootstrap --bootstrap-password ADMIN_PASS \
        --bootstrap-admin-url http://controller:35357/v3/ \
        --bootstrap-internal-url http://controller:5000/v3/ \
        --bootstrap-public-url http://controller:5000/v3/ \
        --bootstrap-region-id RegionOne

   .. end

   Replace ``ADMIN_PASS`` with a suitable password for an administrative user.

Configure the Apache HTTP server
--------------------------------

.. only:: rdo

   #. Edit the ``/etc/httpd/conf/httpd.conf`` file and configure the
      ``ServerName`` option to reference the controller node:

      .. path /etc/httpd/conf/httpd.conf
      .. code-block:: apache

         ServerName controller

      .. end

   #. Create a link to the ``/usr/share/keystone/wsgi-keystone.conf`` file:

      .. code-block:: console

         # ln -s /usr/share/keystone/wsgi-keystone.conf /etc/httpd/conf.d/

      .. end

.. endonly

.. only:: ubuntu or debian

   #. Edit the ``/etc/apache2/apache2.conf`` file and configure the
      ``ServerName`` option to reference the controller node:

      .. path /etc/apache2/apache2.conf
      .. code-block:: apache

         ServerName controller

      .. end

.. endonly

.. only:: debian

   .. note::

      The Debian package will perform the operations below for you:

      .. code-block:: console

         # a2enmod wsgi
         # a2ensite wsgi-keystone.conf
         # invoke-rc.d apache2 restart

      .. end

.. endonly

.. only:: obs

   #. Edit the ``/etc/sysconfig/apache2`` file and configure the
      ``APACHE_SERVERNAME`` option to reference the controller node:

      .. path /etc/sysconfig/apache2
      .. code-block:: shell

         APACHE_SERVERNAME="controller"

      .. end

   #. Create the ``/etc/apache2/conf.d/wsgi-keystone.conf`` file
      with the following content:

      .. path /etc/apache2/conf.d/wsgi-keystone.conf
      .. code-block:: apache

         Listen 5000
         Listen 35357

         <VirtualHost *:5000>
             WSGIDaemonProcess keystone-public processes=5 threads=1 user=keystone group=keystone display-name=%{GROUP}
             WSGIProcessGroup keystone-public
             WSGIScriptAlias / /usr/bin/keystone-wsgi-public
             WSGIApplicationGroup %{GLOBAL}
             WSGIPassAuthorization On
             ErrorLogFormat "%{cu}t %M"
             ErrorLog /var/log/apache2/keystone.log
             CustomLog /var/log/apache2/keystone_access.log combined

             <Directory /usr/bin>
                 Require all granted
             </Directory>
         </VirtualHost>

         <VirtualHost *:35357>
             WSGIDaemonProcess keystone-admin processes=5 threads=1 user=keystone group=keystone display-name=%{GROUP}
             WSGIProcessGroup keystone-admin
             WSGIScriptAlias / /usr/bin/keystone-wsgi-admin
             WSGIApplicationGroup %{GLOBAL}
             WSGIPassAuthorization On
             ErrorLogFormat "%{cu}t %M"
             ErrorLog /var/log/apache2/keystone.log
             CustomLog /var/log/apache2/keystone_access.log combined

             <Directory /usr/bin>
                 Require all granted
             </Directory>
         </VirtualHost>

      .. end

   #. Recursively change the ownership of the ``/etc/keystone`` directory:

      .. code-block:: console

         # chown -R keystone:keystone /etc/keystone

      .. end

.. endonly

Finalize the installation
-------------------------

.. only:: ubuntu

   #. Restart the Apache service:

      .. code-block:: console

         # service apache2 restart

      .. end

.. endonly

.. only:: rdo

   #. Start the Apache HTTP service and configure it to start when the system
      boots:

      .. code-block:: console

         # systemctl enable httpd.service
         # systemctl start httpd.service

      .. end

.. endonly

.. only:: obs

   #. Start the Apache HTTP service and configure it to start when the system
      boots:

      .. code-block:: console

         # systemctl enable apache2.service
         # systemctl start apache2.service

      .. end

.. endonly

2. Configure the administrative account

   .. code-block:: console

      $ export OS_USERNAME=admin
      $ export OS_PASSWORD=ADMIN_PASS
      $ export OS_PROJECT_NAME=admin
      $ export OS_USER_DOMAIN_NAME=Default
      $ export OS_PROJECT_DOMAIN_NAME=Default
      $ export OS_AUTH_URL=http://controller:35357/v3
      $ export OS_IDENTITY_API_VERSION=3

   .. end

   Replace ``ADMIN_PASS`` with the password used in the
   ``keystone-manage bootstrap`` command in `keystone-install-configure`_.
74 doc/install-guide/source/keystone-verify-debian.rst Normal file
@@ -0,0 +1,74 @@
Verify operation
~~~~~~~~~~~~~~~~

Verify operation of the Identity service before installing other
services.

.. note::

   Perform these commands on the controller node.

2. Unset the temporary ``OS_AUTH_URL`` and ``OS_PASSWORD``
   environment variables:

   .. code-block:: console

      $ unset OS_AUTH_URL OS_PASSWORD

   .. end

3. As the ``admin`` user, request an authentication token:

   .. code-block:: console

      $ openstack --os-auth-url http://controller:35357/v3 \
        --os-project-domain-name Default --os-user-domain-name Default \
        --os-project-name admin --os-username admin token issue

      Password:
      +------------+-----------------------------------------------------------------+
      | Field      | Value                                                           |
      +------------+-----------------------------------------------------------------+
      | expires    | 2016-02-12T20:14:07.056119Z                                     |
      | id         | gAAAAABWvi7_B8kKQD9wdXac8MoZiQldmjEO643d-e_j-XXq9AmIegIbA7UHGPv |
      |            | atnN21qtOMjCFWX7BReJEQnVOAj3nclRQgAYRsfSU_MrsuWb4EDtnjU7HEpoBb4 |
      |            | o6ozsA_NmFWEpLeKy0uNn_WeKbAhYygrsmQGA49dclHVnz-OMVLiyM9ws       |
      | project_id | 343d245e850143a096806dfaefa9afdc                                |
      | user_id    | ac3377633149401296f6c0d92d79dc16                                |
      +------------+-----------------------------------------------------------------+

   .. end

   .. note::

      This command uses the password for the ``admin`` user.

4. As the ``demo`` user, request an authentication token:

   .. code-block:: console

      $ openstack --os-auth-url http://controller:5000/v3 \
        --os-project-domain-name Default --os-user-domain-name Default \
        --os-project-name demo --os-username demo token issue

      Password:
      +------------+-----------------------------------------------------------------+
      | Field      | Value                                                           |
      +------------+-----------------------------------------------------------------+
      | expires    | 2016-02-12T20:15:39.014479Z                                     |
      | id         | gAAAAABWvi9bsh7vkiby5BpCCnc-JkbGhm9wH3fabS_cY7uabOubesi-Me6IGWW |
      |            | yQqNegDDZ5jw7grI26vvgy1J5nCVwZ_zFRqPiz_qhbq29mgbQLglbkq6FQvzBRQ |
      |            | JcOzq3uwhzNxszJWmzGC7rJE_H0A_a3UFhqv8M4zMRYSbS2YF0MyFmp_U       |
      | project_id | ed0b60bf607743088218b0a533d5943f                                |
      | user_id    | 58126687cbcc4888bfa9ab73a2256f27                                |
      +------------+-----------------------------------------------------------------+

   .. end

   .. note::

      This command uses the password for the ``demo``
      user and API port 5000 which only allows regular (non-admin)
      access to the Identity service API.
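The ``unset`` step above matters because the ``openstack`` client reads credentials from ``OS_*`` environment variables; once ``OS_PASSWORD`` is removed from the environment, the client prompts for the password interactively, which is exactly what these verification steps rely on. A minimal shell sketch of the behaviour (``ADMIN_PASS`` is the placeholder used throughout this guide):

```shell
export OS_PASSWORD=ADMIN_PASS
# unset removes the variable from the environment entirely, so child
# processes such as the openstack client can no longer read it.
unset OS_AUTH_URL OS_PASSWORD
echo "${OS_PASSWORD:-unset}"   # prints: unset
```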
83 doc/install-guide/source/keystone-verify-obs.rst Normal file
@@ -0,0 +1,83 @@
Verify operation
~~~~~~~~~~~~~~~~

Verify operation of the Identity service before installing other
services.

.. note::

   Perform these commands on the controller node.

#. For security reasons, disable the temporary authentication
   token mechanism:

   Edit the ``/etc/keystone/keystone-paste.ini``
   file and remove ``admin_token_auth`` from the
   ``[pipeline:public_api]``, ``[pipeline:admin_api]``,
   and ``[pipeline:api_v3]`` sections.

2. Unset the temporary ``OS_AUTH_URL`` and ``OS_PASSWORD``
   environment variables:

   .. code-block:: console

      $ unset OS_AUTH_URL OS_PASSWORD

   .. end

3. As the ``admin`` user, request an authentication token:

   .. code-block:: console

      $ openstack --os-auth-url http://controller:35357/v3 \
        --os-project-domain-name Default --os-user-domain-name Default \
        --os-project-name admin --os-username admin token issue

      Password:
      +------------+-----------------------------------------------------------------+
      | Field      | Value                                                           |
      +------------+-----------------------------------------------------------------+
      | expires    | 2016-02-12T20:14:07.056119Z                                     |
      | id         | gAAAAABWvi7_B8kKQD9wdXac8MoZiQldmjEO643d-e_j-XXq9AmIegIbA7UHGPv |
      |            | atnN21qtOMjCFWX7BReJEQnVOAj3nclRQgAYRsfSU_MrsuWb4EDtnjU7HEpoBb4 |
      |            | o6ozsA_NmFWEpLeKy0uNn_WeKbAhYygrsmQGA49dclHVnz-OMVLiyM9ws       |
      | project_id | 343d245e850143a096806dfaefa9afdc                                |
      | user_id    | ac3377633149401296f6c0d92d79dc16                                |
      +------------+-----------------------------------------------------------------+

   .. end

   .. note::

      This command uses the password for the ``admin`` user.

4. As the ``demo`` user, request an authentication token:

   .. code-block:: console

      $ openstack --os-auth-url http://controller:5000/v3 \
        --os-project-domain-name Default --os-user-domain-name Default \
        --os-project-name demo --os-username demo token issue

      Password:
      +------------+-----------------------------------------------------------------+
      | Field      | Value                                                           |
      +------------+-----------------------------------------------------------------+
      | expires    | 2016-02-12T20:15:39.014479Z                                     |
      | id         | gAAAAABWvi9bsh7vkiby5BpCCnc-JkbGhm9wH3fabS_cY7uabOubesi-Me6IGWW |
      |            | yQqNegDDZ5jw7grI26vvgy1J5nCVwZ_zFRqPiz_qhbq29mgbQLglbkq6FQvzBRQ |
      |            | JcOzq3uwhzNxszJWmzGC7rJE_H0A_a3UFhqv8M4zMRYSbS2YF0MyFmp_U       |
      | project_id | ed0b60bf607743088218b0a533d5943f                                |
      | user_id    | 58126687cbcc4888bfa9ab73a2256f27                                |
      +------------+-----------------------------------------------------------------+

   .. end

   .. note::

      This command uses the password for the ``demo``
      user and API port 5000 which only allows regular (non-admin)
      access to the Identity service API.
83 doc/install-guide/source/keystone-verify-rdo.rst Normal file
@@ -0,0 +1,83 @@
Verify operation
~~~~~~~~~~~~~~~~

Verify operation of the Identity service before installing other
services.

.. note::

   Perform these commands on the controller node.

#. For security reasons, disable the temporary authentication
   token mechanism:

   Edit the ``/etc/keystone/keystone-paste.ini``
   file and remove ``admin_token_auth`` from the
   ``[pipeline:public_api]``, ``[pipeline:admin_api]``,
   and ``[pipeline:api_v3]`` sections.

2. Unset the temporary ``OS_AUTH_URL`` and ``OS_PASSWORD``
   environment variables:

   .. code-block:: console

      $ unset OS_AUTH_URL OS_PASSWORD

   .. end

3. As the ``admin`` user, request an authentication token:

   .. code-block:: console

      $ openstack --os-auth-url http://controller:35357/v3 \
        --os-project-domain-name Default --os-user-domain-name Default \
        --os-project-name admin --os-username admin token issue

      Password:
      +------------+-----------------------------------------------------------------+
      | Field      | Value                                                           |
      +------------+-----------------------------------------------------------------+
      | expires    | 2016-02-12T20:14:07.056119Z                                     |
      | id         | gAAAAABWvi7_B8kKQD9wdXac8MoZiQldmjEO643d-e_j-XXq9AmIegIbA7UHGPv |
      |            | atnN21qtOMjCFWX7BReJEQnVOAj3nclRQgAYRsfSU_MrsuWb4EDtnjU7HEpoBb4 |
      |            | o6ozsA_NmFWEpLeKy0uNn_WeKbAhYygrsmQGA49dclHVnz-OMVLiyM9ws       |
      | project_id | 343d245e850143a096806dfaefa9afdc                                |
      | user_id    | ac3377633149401296f6c0d92d79dc16                                |
      +------------+-----------------------------------------------------------------+

   .. end

   .. note::

      This command uses the password for the ``admin`` user.

4. As the ``demo`` user, request an authentication token:

   .. code-block:: console

      $ openstack --os-auth-url http://controller:5000/v3 \
        --os-project-domain-name Default --os-user-domain-name Default \
        --os-project-name demo --os-username demo token issue

      Password:
      +------------+-----------------------------------------------------------------+
      | Field      | Value                                                           |
      +------------+-----------------------------------------------------------------+
      | expires    | 2016-02-12T20:15:39.014479Z                                     |
      | id         | gAAAAABWvi9bsh7vkiby5BpCCnc-JkbGhm9wH3fabS_cY7uabOubesi-Me6IGWW |
      |            | yQqNegDDZ5jw7grI26vvgy1J5nCVwZ_zFRqPiz_qhbq29mgbQLglbkq6FQvzBRQ |
      |            | JcOzq3uwhzNxszJWmzGC7rJE_H0A_a3UFhqv8M4zMRYSbS2YF0MyFmp_U       |
      | project_id | ed0b60bf607743088218b0a533d5943f                                |
      | user_id    | 58126687cbcc4888bfa9ab73a2256f27                                |
      +------------+-----------------------------------------------------------------+

   .. end

   .. note::

      This command uses the password for the ``demo``
      user and API port 5000 which only allows regular (non-admin)
      access to the Identity service API.
83 doc/install-guide/source/keystone-verify-ubuntu.rst Normal file
@@ -0,0 +1,83 @@
Verify operation
~~~~~~~~~~~~~~~~

Verify operation of the Identity service before installing other
services.

.. note::

   Perform these commands on the controller node.

#. For security reasons, disable the temporary authentication
   token mechanism:

   Edit the ``/etc/keystone/keystone-paste.ini``
   file and remove ``admin_token_auth`` from the
   ``[pipeline:public_api]``, ``[pipeline:admin_api]``,
   and ``[pipeline:api_v3]`` sections.

2. Unset the temporary ``OS_AUTH_URL`` and ``OS_PASSWORD``
   environment variables:

   .. code-block:: console

      $ unset OS_AUTH_URL OS_PASSWORD

   .. end
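The unset step matters because the client only falls back to prompting for a password once those variables are gone. A small sketch of confirming the environment is clean, using placeholder values for the temporary credentials:

```shell
# Seed the temporary variables the way earlier steps in the guide do
# (placeholder values, not the guide's real ADMIN_TOKEN/password).
OS_AUTH_URL=http://controller:35357/v3
OS_PASSWORD=placeholder

# Clear them; openstack commands will now prompt for a password instead.
unset OS_AUTH_URL OS_PASSWORD
if [ -z "${OS_AUTH_URL:-}" ] && [ -z "${OS_PASSWORD:-}" ]; then
  echo "temporary credentials cleared"
fi
```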
3. As the ``admin`` user, request an authentication token:

   .. code-block:: console

      $ openstack --os-auth-url http://controller:35357/v3 \
        --os-project-domain-name Default --os-user-domain-name Default \
        --os-project-name admin --os-username admin token issue

      Password:
      +------------+-----------------------------------------------------------------+
      | Field      | Value                                                           |
      +------------+-----------------------------------------------------------------+
      | expires    | 2016-02-12T20:14:07.056119Z                                     |
      | id         | gAAAAABWvi7_B8kKQD9wdXac8MoZiQldmjEO643d-e_j-XXq9AmIegIbA7UHGPv |
      |            | atnN21qtOMjCFWX7BReJEQnVOAj3nclRQgAYRsfSU_MrsuWb4EDtnjU7HEpoBb4 |
      |            | o6ozsA_NmFWEpLeKy0uNn_WeKbAhYygrsmQGA49dclHVnz-OMVLiyM9ws       |
      | project_id | 343d245e850143a096806dfaefa9afdc                                |
      | user_id    | ac3377633149401296f6c0d92d79dc16                                |
      +------------+-----------------------------------------------------------------+

   .. end

   .. note::

      This command uses the password for the ``admin`` user.

4. As the ``demo`` user, request an authentication token:

   .. code-block:: console

      $ openstack --os-auth-url http://controller:5000/v3 \
        --os-project-domain-name Default --os-user-domain-name Default \
        --os-project-name demo --os-username demo token issue

      Password:
      +------------+-----------------------------------------------------------------+
      | Field      | Value                                                           |
      +------------+-----------------------------------------------------------------+
      | expires    | 2016-02-12T20:15:39.014479Z                                     |
      | id         | gAAAAABWvi9bsh7vkiby5BpCCnc-JkbGhm9wH3fabS_cY7uabOubesi-Me6IGWW |
      |            | yQqNegDDZ5jw7grI26vvgy1J5nCVwZ_zFRqPiz_qhbq29mgbQLglbkq6FQvzBRQ |
      |            | JcOzq3uwhzNxszJWmzGC7rJE_H0A_a3UFhqv8M4zMRYSbS2YF0MyFmp_U       |
      | project_id | ed0b60bf607743088218b0a533d5943f                                |
      | user_id    | 58126687cbcc4888bfa9ab73a2256f27                                |
      +------------+-----------------------------------------------------------------+

   .. end

   .. note::

      This command uses the password for the ``demo``
      user and API port 5000, which only allows regular (non-admin)
      access to the Identity service API.
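The ``keystone-paste.ini`` edit in step 1 can also be scripted. This sketch tries the same removal on a stand-in file with a shortened pipeline line; the real file lives at ``/etc/keystone/keystone-paste.ini`` and its pipelines carry more filters than shown here:

```shell
# Stand-in for /etc/keystone/keystone-paste.ini with an abbreviated pipeline.
cat > /tmp/keystone-paste-sample.ini <<'EOF'
[pipeline:public_api]
pipeline = request_id admin_token_auth token_auth public_service
EOF

# Drop the admin_token_auth filter from the pipeline line in place.
sed -i 's/ admin_token_auth//' /tmp/keystone-paste-sample.ini
cat /tmp/keystone-paste-sample.ini
```

Against the real file the same substitution would need to be checked by hand afterwards, since all three pipeline sections mentioned in step 1 must lose the filter.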
@ -8,89 +8,7 @@ services.
 
    Perform these commands on the controller node.
 
-.. only:: obs or ubuntu
-
-   #. For security reasons, disable the temporary authentication
-      token mechanism:
-
-      Edit the ``/etc/keystone/keystone-paste.ini``
-      file and remove ``admin_token_auth`` from the
-      ``[pipeline:public_api]``, ``[pipeline:admin_api]``,
-      and ``[pipeline:api_v3]`` sections.
-
-.. endonly
-
-.. only:: rdo
-
-   #. For security reasons, disable the temporary authentication
-      token mechanism:
-
-      Edit the ``/etc/keystone/keystone-paste.ini``
-      file and remove ``admin_token_auth`` from the
-      ``[pipeline:public_api]``, ``[pipeline:admin_api]``,
-      and ``[pipeline:api_v3]`` sections.
-
-.. endonly
-
-2. Unset the temporary ``OS_AUTH_URL`` and ``OS_PASSWORD``
-   environment variable:
-
-   .. code-block:: console
-
-      $ unset OS_AUTH_URL OS_PASSWORD
-
-   .. end
-
-3. As the ``admin`` user, request an authentication token:
-
-   .. code-block:: console
-
-      $ openstack --os-auth-url http://controller:35357/v3 \
-        --os-project-domain-name Default --os-user-domain-name Default \
-        --os-project-name admin --os-username admin token issue
-
-      Password:
-      +------------+-----------------------------------------------------------------+
-      | Field      | Value                                                           |
-      +------------+-----------------------------------------------------------------+
-      | expires    | 2016-02-12T20:14:07.056119Z                                     |
-      | id         | gAAAAABWvi7_B8kKQD9wdXac8MoZiQldmjEO643d-e_j-XXq9AmIegIbA7UHGPv |
-      |            | atnN21qtOMjCFWX7BReJEQnVOAj3nclRQgAYRsfSU_MrsuWb4EDtnjU7HEpoBb4 |
-      |            | o6ozsA_NmFWEpLeKy0uNn_WeKbAhYygrsmQGA49dclHVnz-OMVLiyM9ws       |
-      | project_id | 343d245e850143a096806dfaefa9afdc                                |
-      | user_id    | ac3377633149401296f6c0d92d79dc16                                |
-      +------------+-----------------------------------------------------------------+
-
-   .. end
-
-   .. note::
-
-      This command uses the password for the ``admin`` user.
-
-4. As the ``demo`` user, request an authentication token:
-
-   .. code-block:: console
-
-      $ openstack --os-auth-url http://controller:5000/v3 \
-        --os-project-domain-name Default --os-user-domain-name Default \
-        --os-project-name demo --os-username demo token issue
-
-      Password:
-      +------------+-----------------------------------------------------------------+
-      | Field      | Value                                                           |
-      +------------+-----------------------------------------------------------------+
-      | expires    | 2016-02-12T20:15:39.014479Z                                     |
-      | id         | gAAAAABWvi9bsh7vkiby5BpCCnc-JkbGhm9wH3fabS_cY7uabOubesi-Me6IGWW |
-      |            | yQqNegDDZ5jw7grI26vvgy1J5nCVwZ_zFRqPiz_qhbq29mgbQLglbkq6FQvzBRQ |
-      |            | JcOzq3uwhzNxszJWmzGC7rJE_H0A_a3UFhqv8M4zMRYSbS2YF0MyFmp_U       |
-      | project_id | ed0b60bf607743088218b0a533d5943f                                |
-      | user_id    | 58126687cbcc4888bfa9ab73a2256f27                                |
-      +------------+-----------------------------------------------------------------+
-
-   .. end
-
-   .. note::
-
-      This command uses the password for the ``demo``
-      user and API port 5000 which only allows regular (non-admin)
-      access to the Identity service API.
+.. toctree::
+   :glob:
+
+   keystone-verify-*
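The ``:glob:`` entry added above pulls in one page per distribution without listing each file by name. A sketch of how the pattern resolves, using placeholder files that follow this commit's naming scheme:

```shell
# Create placeholder per-OS pages in a scratch directory.
tmp=$(mktemp -d)
touch "$tmp/keystone-verify-debian.rst" "$tmp/keystone-verify-obs.rst" \
      "$tmp/keystone-verify-rdo.rst" "$tmp/keystone-verify-ubuntu.rst"

# The same shell glob Sphinx's :glob: option mirrors, matched in sorted order.
files=$(cd "$tmp" && ls keystone-verify-*.rst)
echo "$files"
rm -r "$tmp"
```

Sphinx sorts glob matches alphabetically, so the per-OS pages appear in a stable order in the rendered toctree.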
Some files were not shown because too many files have changed in this diff.