# Vagrant

Assuming you have Vagrant 2.0+ installed with VirtualBox or libvirt/qemu
(VMware may work, but is untested), you should be able to launch a 3-node
Kubernetes cluster by simply running `vagrant up`. This will spin up 3 VMs
and install Kubernetes on them. Once provisioning completes, you can connect
to any of them by running `vagrant ssh k8s-[1..3]`.

To give an estimate of the expected duration of a provisioning run: on a
dual-core i5-6300U laptop with an SSD, provisioning takes around 13 to 15
minutes once the container images and other files are cached. Note that
libvirt/qemu is recommended over VirtualBox as it is quite a bit faster,
especially during boot-up.

For proper performance, a minimum of 12 GB RAM is recommended. It is possible
to run a 3-node cluster on a laptop with 8 GB of RAM using the default
`Vagrantfile`, provided you have 8 GB of zram swap configured and not much
more than a browser and a mail client running. If you decide to run on such a
machine, also make sure that any mounted tmpfs devices are mostly empty, and
disable any swap files on HDD/SSD, or you will be in for some serious swap
madness. Things can get a bit sluggish during provisioning, but once that's
done, the system will actually perform quite well.
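Before provisioning on such a low-memory machine, it can help to check which swap devices are actually active; on Linux, `/proc/swaps` lists them:

```ShellSession
# list active swap devices; on a low-RAM host, ideally only a zram device shows up
cat /proc/swaps
```

Any disk-backed entry here (a swap file, or a partition on HDD/SSD) is a candidate to disable with `swapoff` before running `vagrant up`.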
## Customize Vagrant

You can override the default settings in the `Vagrantfile` either by
directly modifying the `Vagrantfile` or through an override file.
In the same directory as the `Vagrantfile`, create a folder called
`vagrant` and create a `config.rb` file in it.
An example of how to configure this file is given below.
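For instance, a minimal override file could look like the following. The variable names are the ones used in the larger example further down this page; the values are illustrative, and the full set of overridable `$`-variables is defined in the `Vagrantfile` itself:

```ShellSession
mkdir -p vagrant
cat << 'EOF' > vagrant/config.rb
# illustrative overrides; see the Vagrantfile for the full list of variables
$num_instances = 2    # number of VMs to create
$vm_memory = 2048     # RAM per VM, in MB
EOF
```

Quoting the heredoc delimiter (`'EOF'`) stops the shell from expanding the Ruby `$`-variables, which avoids the backslash escapes used in the larger example below.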
## Use alternative OS for Vagrant

By default, Vagrant uses an Ubuntu 18.04 box to provision a local cluster.
You may use an alternative supported operating system for your local cluster.
Customize the `$os` variable in the `Vagrantfile`, or set it in an override, e.g.:

```ShellSession
echo '$os = "flatcar-stable"' >> vagrant/config.rb
```

The supported operating systems for Vagrant are defined in the `SUPPORTED_OS`
constant in the `Vagrantfile`.
## File and image caching

Kubespray can take quite a while to start on a laptop. To improve provisioning
speed, the variable `download_run_once` is set. This makes Kubespray download
all files and container images just once and then redistribute them to the
other nodes, and, as a bonus, also cache all downloads locally and re-use
them on the next provisioning run. For more information on download settings,
see the [download documentation](/docs/advanced/downloads.md).
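As a sketch, these download settings can also be pinned explicitly in the inventory's group vars. The variable names below come from the download documentation; the inventory path and file name are illustrative:

```ShellSession
# fetch files and images once, cache them on the Ansible host,
# and push them to the nodes instead of downloading on each one
mkdir -p inventory/my_lab/group_vars/all
cat << 'EOF' >> inventory/my_lab/group_vars/all/download.yml
download_run_once: true
download_localhost: true
EOF
```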
## Example use of Vagrant

The following is an example of setting up and running Kubespray using `vagrant`.
For repeated runs, you could save the script to a file in the root of the
Kubespray repository and run it by executing `source <name_of_the_file>`.

```ShellSession
# use virtualenv to install all python requirements
VENVDIR=venv
virtualenv --python=/usr/bin/python3.7 $VENVDIR
source $VENVDIR/bin/activate
pip install -r requirements.txt

# prepare an inventory to test with
INV=inventory/my_lab
rm -rf ${INV}.bak &> /dev/null
mv ${INV} ${INV}.bak &> /dev/null
cp -a inventory/sample ${INV}
rm -f ${INV}/hosts.ini

# customize the vagrant environment
mkdir vagrant
cat << EOF > vagrant/config.rb
\$instance_name_prefix = "kub"
\$vm_cpus = 1
\$num_instances = 3
\$os = "centos-bento"
\$subnet = "10.0.20"
\$network_plugin = "flannel"
\$inventory = "$INV"
\$shared_folders = { 'temp/docker_rpms' => "/var/cache/yum/x86_64/7/docker-ce/packages" }
\$extra_vars = {
  dns_domain: my.custom.domain
}
# or
\$extra_vars = "path/to/extra/vars/file.yml"
EOF

# make the rpm cache
mkdir -p temp/docker_rpms

vagrant up

# make a copy of the downloaded docker rpms, to speed up the next provisioning run
scp kub-1:/var/cache/yum/x86_64/7/docker-ce/packages/* temp/docker_rpms/

# copy the kubectl access configuration in place
mkdir $HOME/.kube/ &> /dev/null
ln -s $PWD/$INV/artifacts/admin.conf $HOME/.kube/config

# make the kubectl binary available
sudo ln -s $PWD/$INV/artifacts/kubectl /usr/local/bin/kubectl
# or
export PATH=$PATH:$PWD/$INV/artifacts
```
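If you prefer not to touch `$HOME/.kube/config`, pointing the `KUBECONFIG` environment variable at the generated `admin.conf` works just as well (the path matches the inventory used in the example above):

```ShellSession
# point kubectl at the generated kubeconfig for this shell session only
export KUBECONFIG=$PWD/inventory/my_lab/artifacts/admin.conf
```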
If a Vagrant run failed and you have made changes to fix the issue that
caused the failure, here is how to re-run Ansible:

```ShellSession
ansible-playbook -vvv -i .vagrant/provisioners/ansible/inventory/vagrant_ansible_inventory cluster.yml
```
If all went well, you can check that everything is working as expected:

```ShellSession
kubectl get nodes
```

The output should look like this:

```ShellSession
$ kubectl get nodes
NAME    STATUS   ROLES                  AGE     VERSION
kub-1   Ready    control-plane,master   4m37s   v1.22.5
kub-2   Ready    control-plane,master   4m7s    v1.22.5
kub-3   Ready    <none>                 3m7s    v1.22.5
```
Another nice test is the following:

```ShellSession
kubectl get pods --all-namespaces -o wide
```

This should yield something like the following:

```ShellSession
$ kubectl get pods --all-namespaces -o wide
NAMESPACE            NAME                                      READY   STATUS    RESTARTS   AGE     IP            NODE    NOMINATED NODE   READINESS GATES
kube-system          coredns-8474476ff8-m2469                  1/1     Running   0          2m45s   10.233.65.2   kub-2   <none>           <none>
kube-system          coredns-8474476ff8-v5wzj                  1/1     Running   0          2m41s   10.233.64.3   kub-1   <none>           <none>
kube-system          dns-autoscaler-5ffdc7f89d-76tnv           1/1     Running   0          2m43s   10.233.64.2   kub-1   <none>           <none>
kube-system          kube-apiserver-kub-1                      1/1     Running   1          4m54s   10.0.20.101   kub-1   <none>           <none>
kube-system          kube-apiserver-kub-2                      1/1     Running   1          4m33s   10.0.20.102   kub-2   <none>           <none>
kube-system          kube-controller-manager-kub-1             1/1     Running   1          5m1s    10.0.20.101   kub-1   <none>           <none>
kube-system          kube-controller-manager-kub-2             1/1     Running   1          4m33s   10.0.20.102   kub-2   <none>           <none>
kube-system          kube-flannel-9xgf5                        1/1     Running   0          3m10s   10.0.20.102   kub-2   <none>           <none>
kube-system          kube-flannel-l8jbl                        1/1     Running   0          3m10s   10.0.20.101   kub-1   <none>           <none>
kube-system          kube-flannel-zss4t                        1/1     Running   0          3m10s   10.0.20.103   kub-3   <none>           <none>
kube-system          kube-multus-ds-amd64-bhpc9                1/1     Running   0          3m2s    10.0.20.103   kub-3   <none>           <none>
kube-system          kube-multus-ds-amd64-n6vl8                1/1     Running   0          3m2s    10.0.20.102   kub-2   <none>           <none>
kube-system          kube-multus-ds-amd64-qttgs                1/1     Running   0          3m2s    10.0.20.101   kub-1   <none>           <none>
kube-system          kube-proxy-2x4jl                          1/1     Running   0          3m33s   10.0.20.101   kub-1   <none>           <none>
kube-system          kube-proxy-d48r7                          1/1     Running   0          3m33s   10.0.20.103   kub-3   <none>           <none>
kube-system          kube-proxy-f45lp                          1/1     Running   0          3m33s   10.0.20.102   kub-2   <none>           <none>
kube-system          kube-scheduler-kub-1                      1/1     Running   1          4m54s   10.0.20.101   kub-1   <none>           <none>
kube-system          kube-scheduler-kub-2                      1/1     Running   1          4m33s   10.0.20.102   kub-2   <none>           <none>
kube-system          nginx-proxy-kub-3                         1/1     Running   0          3m33s   10.0.20.103   kub-3   <none>           <none>
kube-system          nodelocaldns-cg9tz                        1/1     Running   0          2m41s   10.0.20.102   kub-2   <none>           <none>
kube-system          nodelocaldns-htswt                        1/1     Running   0          2m41s   10.0.20.103   kub-3   <none>           <none>
kube-system          nodelocaldns-nsp7s                        1/1     Running   0          2m41s   10.0.20.101   kub-1   <none>           <none>
local-path-storage   local-path-provisioner-66df45bfdd-km4zg   1/1     Running   0          2m54s   10.233.66.2   kub-3   <none>           <none>
```