Added file and container image caching (#4828)

* File and container image downloads are now cached locally, so that repeated vagrant up/down runs do not trigger downloading those files again. This is especially useful on laptops that run Kubernetes locally in VMs. The total size of the cache after an Ansible run is currently around 800 MB, so the bandwidth (and time) savings can be significant.
* When download_run_once is false, the default is still not to cache, but setting download_force_cache will still enable caching.
* The local cache location can be set with download_cache_dir and defaults to /tmp/kubernetes_cache.
* A local docker instance is no longer required to cache docker images; images are cached to file. A local docker instance is still required, though, if you wish to download images on localhost.
* Fixed a FIXME where the argument was that delegate_to doesn't play nice with omit. That is a correct observation, and the fix is to use default(inventory_host) instead of default(omit). See ansible/ansible#26009.
* Removed the "Register docker images info" task from download_container and set_docker_image_facts because it was faulty and unused.
* Removed redundant when: download.{container,enabled,run_once} conditions from {sync,download}_container.yml.
* All features of commit d6fd0d2acaec9f53e75d82db30411f96a5bf2cc9 by Timoses <timosesu@gmail.com>, merged May 1st 2019, are included in this patch. Not all code was included verbatim, but each feature of that commit was checked to be working in this patch. One notable change: the actual downloading of the kubeadm images was moved to {download,sync}_container to enable caching.

Note 1: I considered splitting this patch, but most changes that are not directly related to caching are a pleasant by-product of implementing the caching code, so splitting would be impractical.
Note 2: I have my doubts about the usefulness of the upload, download and upgrade tags in the download role. Must they remain, or can they be removed? If anybody knows, please speak up.
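The caching behaviour described above is driven by a handful of download-role variables that the Vagrantfile below simply forwards to Ansible through the host_vars hash defined near the end of the file. As a quick reference, here is a minimal excerpt of just those entries; the caching_vars name is only illustrative, and in the Vagrantfile they sit alongside the other per-node variables.

# Caching-related settings, excerpted from the host_vars hash built further down.
# Booleans are passed as strings, matching how the Vagrantfile encodes them.
caching_vars = {
  "download_run_once": "True",                             # download artifacts once, then distribute them to the other nodes
  "download_localhost": "False",                           # "True" would download on the Ansible host itself, which requires a local docker instance
  "download_force_cache": "True",                          # keep caching even when download_run_once is false
  "download_cache_dir": ENV['HOME'] + "/kubespray_cache",  # overrides the role default of /tmp/kubernetes_cache
  "download_keep_remote_cache": "False",                   # "False" = do not keep a per-node copy of the cache (keeping it can speed up re-runs while debugging)
}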
# -*- mode: ruby -*-
# vi: set ft=ruby :

# For help on using kubespray with vagrant, check out docs/vagrant.md

require 'fileutils'

Vagrant.require_version ">= 2.0.0"

CONFIG = File.join(File.dirname(__FILE__), "vagrant/config.rb")

COREOS_URL_TEMPLATE = "https://storage.googleapis.com/%s.release.core-os.net/amd64-usr/current/coreos_production_vagrant.json"

# Unique disk UUID for libvirt
DISK_UUID = Time.now.utc.to_i

SUPPORTED_OS = {
  "coreos-stable"       => {box: "coreos-stable",      user: "core", box_url: COREOS_URL_TEMPLATE % ["stable"]},
  "coreos-alpha"        => {box: "coreos-alpha",       user: "core", box_url: COREOS_URL_TEMPLATE % ["alpha"]},
  "coreos-beta"         => {box: "coreos-beta",        user: "core", box_url: COREOS_URL_TEMPLATE % ["beta"]},
  "ubuntu1604"          => {box: "generic/ubuntu1604", user: "vagrant"},
  "ubuntu1804"          => {box: "generic/ubuntu1804", user: "vagrant"},
  "centos"              => {box: "centos/7",           user: "vagrant"},
  "centos-bento"        => {box: "bento/centos-7.6",   user: "vagrant"},
  "centos8"             => {box: "centos/8",           user: "vagrant"},
  "centos8-bento"       => {box: "bento/centos-8",     user: "vagrant"},
  "fedora"              => {box: "fedora/28-cloud-base", user: "vagrant"},
  "opensuse"            => {box: "opensuse/openSUSE-15.0-x86_64", user: "vagrant"},
  "opensuse-tumbleweed" => {box: "opensuse/openSUSE-Tumbleweed-x86_64", user: "vagrant"},
  "oraclelinux"         => {box: "generic/oracle7", user: "vagrant"},
}

# Defaults for config options defined in CONFIG
$num_instances = 3
$instance_name_prefix = "k8s"
$vm_gui = false
$vm_memory = 2048
$vm_cpus = 1
$shared_folders = {}
$forwarded_ports = {}
$subnet = "172.17.8"
$os = "ubuntu1804"
$network_plugin = "flannel"
# Setting multi_networking to true will install Multus: https://github.com/intel/multus-cni
$multi_networking = false
# The first three nodes are etcd servers
$etcd_instances = $num_instances
# The first two nodes are kube masters
$kube_master_instances = $num_instances == 1 ? $num_instances : ($num_instances - 1)
# All nodes are kube nodes
$kube_node_instances = $num_instances
# The following only works when using the libvirt provider
$kube_node_instances_with_disks = false
$kube_node_instances_with_disks_size = "20G"
$kube_node_instances_with_disks_number = 2
$override_disk_size = false
$disk_size = "20GB"
$local_path_provisioner_enabled = false
$local_path_provisioner_claim_root = "/opt/local-path-provisioner/"

$playbook = "cluster.yml"

host_vars = {}

if File.exist?(CONFIG)
  require CONFIG
end

$box = SUPPORTED_OS[$os][:box]
# if $inventory is not set, try to use example
$inventory = "inventory/sample" if ! $inventory
$inventory = File.absolute_path($inventory, File.dirname(__FILE__))

# if $inventory has a hosts.ini file use it, otherwise copy over
# vars etc to where vagrant expects dynamic inventory to be
if ! File.exist?(File.join(File.dirname($inventory), "hosts.ini"))
  $vagrant_ansible = File.join(File.dirname(__FILE__), ".vagrant", "provisioners", "ansible")
  FileUtils.mkdir_p($vagrant_ansible) if ! File.exist?($vagrant_ansible)
  if ! File.exist?(File.join($vagrant_ansible, "inventory"))
    FileUtils.ln_s($inventory, File.join($vagrant_ansible, "inventory"))
  end
end

if Vagrant.has_plugin?("vagrant-proxyconf")
  $no_proxy = ENV['NO_PROXY'] || ENV['no_proxy'] || "127.0.0.1,localhost"
  (1..$num_instances).each do |i|
    $no_proxy += ",#{$subnet}.#{i+100}"
  end
end

Vagrant.configure("2") do |config|
  config.vm.box = $box
  if SUPPORTED_OS[$os].has_key? :box_url
    config.vm.box_url = SUPPORTED_OS[$os][:box_url]
  end
  config.ssh.username = SUPPORTED_OS[$os][:user]

  # plugin conflict
  if Vagrant.has_plugin?("vagrant-vbguest") then
    config.vbguest.auto_update = false
  end

  # always use Vagrant's insecure key
  config.ssh.insert_key = false

  if ($override_disk_size)
    unless Vagrant.has_plugin?("vagrant-disksize")
      system "vagrant plugin install vagrant-disksize"
    end
    config.disksize.size = $disk_size
  end

  (1..$num_instances).each do |i|
    config.vm.define vm_name = "%s-%01d" % [$instance_name_prefix, i] do |node|
      node.vm.hostname = vm_name

      if Vagrant.has_plugin?("vagrant-proxyconf")
        node.proxy.http = ENV['HTTP_PROXY'] || ENV['http_proxy'] || ""
        node.proxy.https = ENV['HTTPS_PROXY'] || ENV['https_proxy'] || ""
        node.proxy.no_proxy = $no_proxy
      end

      ["vmware_fusion", "vmware_workstation"].each do |vmware|
        node.vm.provider vmware do |v|
          v.vmx['memsize'] = $vm_memory
          v.vmx['numvcpus'] = $vm_cpus
        end
      end

      node.vm.provider :virtualbox do |vb|
        vb.memory = $vm_memory
        vb.cpus = $vm_cpus
        vb.gui = $vm_gui
        vb.linked_clone = true
        vb.customize ["modifyvm", :id, "--vram", "8"] # ubuntu defaults to 256 MB which is a waste of precious RAM
      end

      node.vm.provider :libvirt do |lv|
        lv.memory = $vm_memory
        lv.cpus = $vm_cpus
        lv.default_prefix = 'kubespray'
        # Fix kernel panic on fedora 28
        if $os == "fedora"
          lv.cpu_mode = "host-passthrough"
        end
      end

      if $kube_node_instances_with_disks
        # Libvirt
        driverletters = ('a'..'z').to_a
        node.vm.provider :libvirt do |lv|
          # always make /dev/sd{a/b/c} so that CI can ensure that
          # virtualbox and libvirt will have the same devices to use for OSDs
          (1..$kube_node_instances_with_disks_number).each do |d|
            lv.storage :file, :device => "hd#{driverletters[d]}", :path => "disk-#{i}-#{d}-#{DISK_UUID}.disk", :size => $kube_node_instances_with_disks_size, :bus => "ide"
          end
        end
      end

      if $expose_docker_tcp
        node.vm.network "forwarded_port", guest: 2375, host: ($expose_docker_tcp + i - 1), auto_correct: true
      end

      $forwarded_ports.each do |guest, host|
        node.vm.network "forwarded_port", guest: guest, host: host, auto_correct: true
      end

      node.vm.synced_folder ".", "/vagrant", disabled: false, type: "rsync", rsync__args: ['--verbose', '--archive', '--delete', '-z'], rsync__exclude: ['.git', 'venv']
      $shared_folders.each do |src, dst|
        node.vm.synced_folder src, dst, type: "rsync", rsync__args: ['--verbose', '--archive', '--delete', '-z']
      end

      ip = "#{$subnet}.#{i+100}"
      node.vm.network :private_network, ip: ip

      # Disable swap for each vm
      node.vm.provision "shell", inline: "swapoff -a"

      host_vars[vm_name] = {
        "ip": ip,
        "flannel_interface": "eth1",
        "kube_network_plugin": $network_plugin,
        "kube_network_plugin_multus": $multi_networking,
        "download_run_once": "True",
        "download_localhost": "False",
        "download_cache_dir": ENV['HOME'] + "/kubespray_cache",
        # Make kubespray cache even when download_run_once is false
        "download_force_cache": "True",
        # Keeping the cache on the nodes can improve provisioning speed while debugging kubespray
        "download_keep_remote_cache": "False",
        "docker_keepcache": "1",
        # These two settings will put kubectl and admin.conf in $inventory/artifacts
        "kubeconfig_localhost": "True",
        "kubectl_localhost": "True",
        "local_path_provisioner_enabled": "#{$local_path_provisioner_enabled}",
        "local_path_provisioner_claim_root": "#{$local_path_provisioner_claim_root}",
        "ansible_ssh_user": SUPPORTED_OS[$os][:user]
      }

      # Only execute the Ansible provisioner once, when all the machines are up and ready.
      if i == $num_instances
        node.vm.provision "ansible" do |ansible|
          ansible.playbook = $playbook
          $ansible_inventory_path = File.join($inventory, "hosts.ini")
          if File.exist?($ansible_inventory_path)
            ansible.inventory_path = $ansible_inventory_path
          end
          ansible.become = true
          ansible.limit = "all,localhost"
          ansible.host_key_checking = false
          ansible.raw_arguments = ["--forks=#{$num_instances}", "--flush-cache", "-e ansible_become_pass=vagrant"]
          ansible.host_vars = host_vars
          #ansible.tags = ['download']
          ansible.groups = {
            "etcd" => ["#{$instance_name_prefix}-[1:#{$etcd_instances}]"],
            "kube-master" => ["#{$instance_name_prefix}-[1:#{$kube_master_instances}]"],
            "kube-node" => ["#{$instance_name_prefix}-[1:#{$kube_node_instances}]"],
            "k8s-cluster:children" => ["kube-master", "kube-node"],
          }
        end
      end
    end
  end
end
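Every $-prefixed default near the top of the Vagrantfile can be overridden without editing it: when vagrant/config.rb exists, it is loaded via require CONFIG before any machines are defined, so assignments there replace the defaults. A minimal sketch of such an override file follows; the specific values are examples only, not recommended settings, and inventory/mycluster is a hypothetical path.

# vagrant/config.rb -- picked up automatically by the Vagrantfile when present.
# Reassign any of the defaults defined in the Vagrantfile; example values only.
$num_instances = 4
$os = "centos"                            # must be a key of SUPPORTED_OS
$vm_memory = 4096
$vm_cpus = 2
$kube_node_instances_with_disks = true    # libvirt provider only
$inventory = "inventory/mycluster"        # hypothetical path; inventory/sample is used when unset

With a file like this in place, a plain vagrant up picks up the overrides, since the Vagrantfile only keeps its defaults for variables that config.rb does not reassign.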