# Vagrant
Assuming you have Vagrant 2.0+ installed with VirtualBox, libvirt/qemu or VMware (the latter is untested), you should be able to launch a 3-node Kubernetes cluster by simply running `vagrant up`. This will spin up 3 VMs and install Kubernetes on them. Once they are up, you can connect to any of them by running `vagrant ssh k8s-[1..3]`.
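A minimal end-to-end session, using the default `k8s-` instance name prefix, might then look like this:
```ShellSession
# bring up the VMs and run the Kubespray provisioning
vagrant up
# check the state of the VMs
vagrant status
# open a shell on the first node
vagrant ssh k8s-1
```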
To give an estimate of the expected duration of a provisioning run: on a dual-core i5-6300U laptop with an SSD, provisioning takes around 13 to 15 minutes once the container images and other files are cached. Note that libvirt/qemu is recommended over VirtualBox as it is quite a bit faster, especially during boot-up time.
For proper performance a minimum of 12GB RAM is recommended. It is possible to run a 3-node cluster on a laptop with 8GB of RAM using the default `Vagrantfile`, provided you have 8GB of zram swap configured and not much more than a browser and a mail client running. If you decide to run on such a machine, make sure that any mounted tmpfs devices are mostly empty, and disable any swap files on HDD/SSD, or you will be in for some serious swap madness. Things can get a bit sluggish during provisioning, but once that is done the system will actually perform quite well.
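If you are not sure whether your machine is in that state, a quick check before running `vagrant up` could look like this:
```ShellSession
# total and available memory
free -h
# active swap devices; ideally only zram should show up here
swapon --show
# current usage of tmpfs mounts
df -h -t tmpfs
```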
## Customize Vagrant
You can override the default settings in the `Vagrantfile` either by directly modifying the `Vagrantfile` or through an override file. In the same directory as the `Vagrantfile`, create a folder called `vagrant` and create a `config.rb` file in it. An example of how to configure this file is given below.
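For example, assuming your Kubespray version's `Vagrantfile` exposes `$vm_memory` and `$vm_cpus` (check the variable names at the top of the `Vagrantfile`), you could give each VM more resources like this:
```ShellSession
mkdir -p vagrant
# assumed variable names; confirm them against your Vagrantfile
echo '$vm_memory = 4096' >> vagrant/config.rb
echo '$vm_cpus = 2' >> vagrant/config.rb
```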
## Use alternative OS for Vagrant
By default, Vagrant uses an Ubuntu 18.04 box to provision a local cluster. You may use an alternative supported operating system for your local cluster.
Customize the `$os` variable in the `Vagrantfile`, or set it in the override file, e.g.:
```ShellSession
echo '$os = "coreos-stable"' >> vagrant/config.rb
```
The supported operating systems for Vagrant are defined in the `SUPPORTED_OS` constant in the `Vagrantfile`.
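To see exactly which values `$os` accepts in your checkout, inspect that constant directly:
```ShellSession
# show the SUPPORTED_OS definitions (-A 20 is just an arbitrary amount of context)
grep -A 20 'SUPPORTED_OS' Vagrantfile
```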
## File and image caching
Kubespray can take quite a while to start on a laptop. To improve provisioning speed, the variable `download_run_once` is set. This makes Kubespray download all files and container images just once and then redistribute them to the other nodes; as a bonus, all downloads are also cached locally and re-used on the next provisioning run. For more information on download settings see [download documentation](/docs/downloads.md).
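If you want to tune this behaviour for your own inventory, the download flags go into the inventory group vars. The snippet below is only a sketch; verify the exact variable names and file layout against the download documentation linked above:
```ShellSession
# sketch: assumed file path and variable names, verify before use
cat << EOF >> inventory/my_lab/group_vars/all/all.yml
download_run_once: true
download_localhost: true
EOF
```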
## Example use of Vagrant
The following is an example of setting up and running Kubespray using `vagrant`. For repeated runs, you could save the script to a file in the root of the Kubespray repository and run it by executing `source <name_of_the_file>`.
```ShellSession
# use virtualenv to install all python requirements
VENVDIR=venv
virtualenv --python=/usr/bin/python3.7 $VENVDIR
source $VENVDIR/bin/activate
pip install -r requirements.txt
# prepare an inventory to test with
INV=inventory/my_lab
rm -rf ${INV}.bak &> /dev/null
mv ${INV} ${INV}.bak &> /dev/null
cp -a inventory/sample ${INV}
rm -f ${INV}/hosts.ini
# customize the vagrant environment
mkdir vagrant
cat << EOF > vagrant/config.rb
\$instance_name_prefix = "kub"
\$vm_cpus = 1
\$num_instances = 3
\$os = "centos-bento"
\$subnet = "10.0.20"
\$network_plugin = "flannel"
\$inventory = "$INV"
\$shared_folders = { 'temp/docker_rpms' => "/var/cache/yum/x86_64/7/docker-ce/packages" }
EOF
# make the rpm cache
mkdir -p temp/docker_rpms
vagrant up
# make a copy of the downloaded docker rpms, to speed up the next provisioning run
scp kub-1:/var/cache/yum/x86_64/7/docker-ce/packages/* temp/docker_rpms/
# copy the kubectl access configuration in place
mkdir $HOME/.kube/ &> /dev/null
ln -s $INV/artifacts/admin.conf $HOME/.kube/config
# make the kubectl binary available
sudo ln -s $INV/artifacts/kubectl /usr/local/bin/kubectl
# or
export PATH=$PATH:$INV/artifacts
```
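If you saved the script above to a file, a repeated run is then just a matter of sourcing it again (the file name below is only a placeholder):
```ShellSession
# "setup-lab.sh" is a hypothetical name for the script above
source setup-lab.sh
```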
If a Vagrant run failed and you have made changes to fix the issue that caused the failure, here is how you would re-run Ansible:
```ShellSession
ansible-playbook -vvv -i .vagrant/provisioners/ansible/inventory/vagrant_ansible_inventory cluster.yml
```
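Alternatively, you can let Vagrant re-run its Ansible provisioner against the existing VMs, though the output is less verbose than the `-vvv` invocation above:
```ShellSession
# re-run the provisioners defined in the Vagrantfile on the running VMs
vagrant provision
```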
If all went well, you can check whether everything is working as expected:
```ShellSession
kubectl get nodes
```
The output should look like this:
```ShellSession
$ kubectl get nodes
NAME    STATUS   ROLES    AGE   VERSION
kub-1   Ready    master   32m   v1.14.1
kub-2   Ready    master   31m   v1.14.1
kub-3   Ready    <none>   31m   v1.14.1
```
Another nice test is the following:
```ShellSession
kubectl get po --all-namespaces -o wide
```
This should yield something like the following:
```ShellSession
NAMESPACE     NAME                                    READY   STATUS    RESTARTS   AGE   IP            NODE     NOMINATED NODE   READINESS GATES
kube-system   coredns-97c4b444f-9wm86                 1/1     Running   0          31m   10.233.66.2   kub-3    <none>           <none>
kube-system   coredns-97c4b444f-g7hqx                 0/1     Pending   0          30m   <none>        <none>   <none>           <none>
kube-system   dns-autoscaler-5fc5fdbf6-5c48k          1/1     Running   0          31m   10.233.66.3   kub-3    <none>           <none>
kube-system   kube-apiserver-kub-1                    1/1     Running   0          32m   10.0.20.101   kub-1    <none>           <none>
kube-system   kube-apiserver-kub-2                    1/1     Running   0          32m   10.0.20.102   kub-2    <none>           <none>
kube-system   kube-controller-manager-kub-1           1/1     Running   0          32m   10.0.20.101   kub-1    <none>           <none>
kube-system   kube-controller-manager-kub-2           1/1     Running   0          32m   10.0.20.102   kub-2    <none>           <none>
kube-system   kube-flannel-8tgcn                      2/2     Running   0          31m   10.0.20.103   kub-3    <none>           <none>
kube-system   kube-flannel-b2hgt                      2/2     Running   0          31m   10.0.20.101   kub-1    <none>           <none>
kube-system   kube-flannel-zx4bc                      2/2     Running   0          31m   10.0.20.102   kub-2    <none>           <none>
kube-system   kube-proxy-4bjdn                        1/1     Running   0          31m   10.0.20.102   kub-2    <none>           <none>
kube-system   kube-proxy-l5tt5                        1/1     Running   0          31m   10.0.20.103   kub-3    <none>           <none>
kube-system   kube-proxy-x59q8                        1/1     Running   0          31m   10.0.20.101   kub-1    <none>           <none>
kube-system   kube-scheduler-kub-1                    1/1     Running   0          32m   10.0.20.101   kub-1    <none>           <none>
kube-system   kube-scheduler-kub-2                    1/1     Running   0          32m   10.0.20.102   kub-2    <none>           <none>
kube-system   kubernetes-dashboard-6c7466966c-jqz42   1/1     Running   0          31m   10.233.66.4   kub-3    <none>           <none>
kube-system   nginx-proxy-kub-3                       1/1     Running   0          32m   10.0.20.103   kub-3    <none>           <none>
kube-system   nodelocaldns-2x7vh                      1/1     Running   0          31m   10.0.20.102   kub-2    <none>           <none>
kube-system   nodelocaldns-fpvnz                      1/1     Running   0          31m   10.0.20.103   kub-3    <none>           <none>
kube-system   nodelocaldns-h2f42                      1/1     Running   0          31m   10.0.20.101   kub-1    <none>           <none>
```
Create the cluster-admin RBAC resources and get the login token for the dashboard:
```ShellSession
kubectl create -f contrib/misc/clusteradmin-rbac.yml
kubectl -n kube-system describe secret kubernetes-dashboard-token | grep 'token:' | grep -o '[^ ]\+$'
```
Copy the token to the clipboard and log in to the [dashboard](https://10.0.20.101:6443/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/#!/login).
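When you are done with the test cluster, you can remove the VMs again:
```ShellSession
# destroy all VMs defined in the Vagrantfile without asking for confirmation
vagrant destroy -f
```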