
# Getting Started

## Development

### Deployment using Vagrant

#### Initial Setup of Vagrant
Deployment using Vagrant uses KVM instead of VirtualBox, because KVM offers better disk and network performance, both of which have a significant impact on the stability of etcd clusters.
Make sure you have [Vagrant](https://vagrantup.com) installed, then run `./tools/vagrant/full-vagrant-setup.sh`, which will do the following:
- Install Vagrant libvirt plugin and its dependencies
- Install NFS dependencies for Vagrant volume sharing
- Install [packer](https://packer.io) and build a KVM image for Ubuntu 16.04
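
To sanity-check the setup before proceeding, Vagrant's own plugin and box listings (standard Vagrant commands, not part of the setup script) should show the libvirt plugin and the newly built Ubuntu 16.04 box:

```bash
# The libvirt provider plugin should appear in this list
vagrant plugin list

# The packer-built Ubuntu 16.04 KVM box should appear here
vagrant box list
```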
#### Deployment
A complete set of configuration that works with the `Vagrantfile` in the top-level directory is provided in the `example` directory.
To exercise that example, first generate certs and combine the configuration into usable parts:

```bash
./tools/build-example.sh
```
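
The per-node scripts used in the following steps are expected to land under `example/scripts` (inferred from the paths in the commands below); listing that directory is a quick way to confirm the build produced them:

```bash
# genesis.sh and the join-n*.sh scripts should now be present
ls example/scripts/
```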
Start the VMs:

```bash
vagrant up --parallel
```
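
While the VMs boot, `vagrant status` reports the state of each machine; all four nodes defined by the Vagrantfile (n0 through n3) should reach the running state:

```bash
# n0, n1, n2, and n3 should all report "running"
vagrant status
```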
Then bring up the genesis node:

```bash
vagrant ssh n0 -c 'sudo /vagrant/example/scripts/genesis.sh'
```
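
Genesis bootstraps the Kubernetes control plane on n0. One way to watch it converge (kubectl on the node is assumed here, as the teardown step below also uses it) is to list pods across namespaces:

```bash
# Control plane pods should progress to Running as genesis completes
vagrant ssh n0 -c 'sudo kubectl get pods --all-namespaces'
```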
Join additional master nodes:

```bash
vagrant ssh n1 -c 'sudo /vagrant/example/scripts/join-n1.sh'
vagrant ssh n2 -c 'sudo /vagrant/example/scripts/join-n2.sh'
```
Re-provision the genesis node as a normal master:

```bash
vagrant ssh n0 -c 'sudo promenade-teardown'
vagrant ssh n1 -c 'sudo kubectl delete node n0'
vagrant destroy -f n0
vagrant up n0
vagrant ssh n0 -c 'sudo /vagrant/example/scripts/join-n0.sh'
```
Join the remaining worker:

```bash
vagrant ssh n3 -c 'sudo /vagrant/example/scripts/join-n3.sh'
```
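
At this point all four nodes should be members of the cluster; an optional final check (not part of the example scripts) is to list the nodes from one of the masters and wait for them to report Ready:

```bash
# n0-n3 should all be listed and eventually report Ready
vagrant ssh n1 -c 'sudo kubectl get nodes'
```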
### Building the image
To build the image directly, you can use the standard Docker build command:

```bash
docker build -t promenade:local .
```
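
If the build succeeds, the tagged image appears in the local image list (a routine Docker check, included here only for convenience):

```bash
# The freshly built image should be listed with the local tag
docker images promenade:local
```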
To build the image from behind a proxy, set the proxy environment variables and pass them as build arguments:

```bash
export http_proxy=...
export no_proxy=...
docker build --build-arg http_proxy=$http_proxy --build-arg https_proxy=$http_proxy --build-arg no_proxy=$no_proxy -t promenade:local .
```
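
For illustration only, with hypothetical proxy values (substitute the settings for your environment):

```bash
# Hypothetical proxy endpoint -- replace with your own
export http_proxy=http://proxy.example.com:8080
export no_proxy=localhost,127.0.0.1
docker build --build-arg http_proxy=$http_proxy --build-arg https_proxy=$http_proxy --build-arg no_proxy=$no_proxy -t promenade:local .
```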
For convenience, there is a script which builds an image from the current code, then uses it to construct scripts for the example:

```bash
./tools/dev-build.sh
```
NOTE: the `dev-build.sh` script puts Promenade in debug mode, which will instruct it to use Vagrant's shared directory to source local charts.
## Using Promenade Behind a Proxy
To use Promenade from behind a proxy, use the proxy settings described in `configuration/kubernetes-network`.