Use supported version of fedora in CI (#10108)

* tests: replace fedora35 with fedora37
* tests: replace fedora36 with fedora38
* docs: update fedora version in docs
* molecule: upgrade fedora version
* tests: upgrade fedora images for vagrant and kubevirt
* vagrant: workaround to fix private network ip address in fedora. Fedora stopped supporting the sysconfig network scripts, so we added the workaround from https://github.com/hashicorp/vagrant/issues/12762#issuecomment-1535957837 to fix it.
* networkmanager: do not configure dns if using systemd-resolved. We should not configure dns if we point to systemd-resolved: systemd-resolved uses NetworkManager to infer the upstream DNS server, so setting NetworkManager to 127.0.0.53 would prevent systemd-resolved from getting the correct network DNS server. Thus, in that case we simply don't set this setting.
* image-builder: update centos7 image
* gitlab-ci: mark fedora packet jobs as allow failure. Fedora networking is still broken on Packet, so let's mark these jobs as allow failure for now.

Signed-off-by: Arthur Outhenin-Chalandre <arthur.outhenin-chalandre@proton.ch>

1 year ago
Added file and container image caching (#4828)

* File and container image downloads are now cached locally, so that repeated vagrant up/down runs do not trigger downloading of those files. This is especially useful on laptops running kubernetes locally in VMs. The total size of the cache after an ansible run is currently around 800 MB, so bandwidth (= time) savings can be quite significant.
* When download_run_once is false, the default is still not to cache, but setting download_force_cache will still enable caching.
* The local cache location can be set with download_cache_dir and defaults to /tmp/kubernetes_cache.
* A local docker instance is no longer required to cache docker images; images are cached to file. A local docker instance is still required, though, if you wish to download images on localhost.
* Fixed a FIXME, where the argument was that delegate_to doesn't play nice with omit. That is a correct observation, and the fix is to use default(inventory_host) instead of default(omit). See ansible/ansible#26009.
* Removed the "Register docker images info" task from download_container and set_docker_image_facts because it was faulty and unused.
* Removed redundant when: download.{container,enabled,run_once} conditions from {sync,download}_container.yml.
* All features of commit d6fd0d2acaec9f53e75d82db30411f96a5bf2cc9 by Timoses <timosesu@gmail.com>, merged May 1st 2019, are included in this patch. Not all code was included verbatim, but each feature of that commit was checked to be working in this patch. One notable change: the actual downloading of the kubeadm images was moved to {download,sync}_container to enable caching.

Note 1: I considered splitting this patch, but most changes that are not directly related to caching are a pleasant by-product of implementing the caching code, so splitting would be impractical.

Note 2: I have my doubts about the usefulness of the upload, download and upgrade tags in the download role. Must they remain or can they be removed? If anybody knows, then please speak up.

5 years ago
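The caching and instance-count defaults described in the commit messages above are all set with `||=` in the Vagrantfile below, so they can be overridden from the `vagrant/config.rb` file it loads via `require CONFIG`. A minimal sketch (the values here are illustrative choices, not project defaults):

```ruby
# Hypothetical vagrant/config.rb: because the Vagrantfile requires this
# file *before* applying its `||=` defaults, plain global assignments
# here take precedence.
$num_instances = 2                # two VMs instead of the default three
$os = "fedora38"                  # must be a key of SUPPORTED_OS
$download_run_once = "True"       # download artifacts once, then sync to nodes
$download_force_cache = "True"    # cache even when download_run_once is false
$vm_memory = 4096                 # MiB per VM
```

The path can also be relocated by setting the `KUBESPRAY_VAGRANT_CONFIG` environment variable before running `vagrant up`.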
# -*- mode: ruby -*-
# vi: set ft=ruby :

# For help on using kubespray with vagrant, check out docs/developers/vagrant.md

require 'fileutils'

Vagrant.require_version ">= 2.0.0"

CONFIG = File.join(File.dirname(__FILE__), ENV['KUBESPRAY_VAGRANT_CONFIG'] || 'vagrant/config.rb')

FLATCAR_URL_TEMPLATE = "https://%s.release.flatcar-linux.net/amd64-usr/current/flatcar_production_vagrant.json"

# Unique disk UUID for libvirt
DISK_UUID = Time.now.utc.to_i

SUPPORTED_OS = {
  "flatcar-stable" => {box: "flatcar-stable", user: "core", box_url: FLATCAR_URL_TEMPLATE % ["stable"]},
  "flatcar-beta" => {box: "flatcar-beta", user: "core", box_url: FLATCAR_URL_TEMPLATE % ["beta"]},
  "flatcar-alpha" => {box: "flatcar-alpha", user: "core", box_url: FLATCAR_URL_TEMPLATE % ["alpha"]},
  "flatcar-edge" => {box: "flatcar-edge", user: "core", box_url: FLATCAR_URL_TEMPLATE % ["edge"]},
  "ubuntu2004" => {box: "generic/ubuntu2004", user: "vagrant"},
  "ubuntu2204" => {box: "generic/ubuntu2204", user: "vagrant"},
  "ubuntu2404" => {box: "bento/ubuntu-24.04", user: "vagrant"},
  "centos" => {box: "centos/7", user: "vagrant"},
  "centos-bento" => {box: "bento/centos-7.6", user: "vagrant"},
  "centos8" => {box: "centos/8", user: "vagrant"},
  "centos8-bento" => {box: "bento/centos-8", user: "vagrant"},
  "almalinux8" => {box: "almalinux/8", user: "vagrant"},
  "almalinux8-bento" => {box: "bento/almalinux-8", user: "vagrant"},
  "rockylinux8" => {box: "rockylinux/8", user: "vagrant"},
  "rockylinux9" => {box: "rockylinux/9", user: "vagrant"},
  "fedora37" => {box: "fedora/37-cloud-base", user: "vagrant"},
  "fedora38" => {box: "fedora/38-cloud-base", user: "vagrant"},
  "opensuse" => {box: "opensuse/Leap-15.4.x86_64", user: "vagrant"},
  "opensuse-tumbleweed" => {box: "opensuse/Tumbleweed.x86_64", user: "vagrant"},
  "oraclelinux" => {box: "generic/oracle7", user: "vagrant"},
  "oraclelinux8" => {box: "generic/oracle8", user: "vagrant"},
  "rhel7" => {box: "generic/rhel7", user: "vagrant"},
  "rhel8" => {box: "generic/rhel8", user: "vagrant"},
  "debian11" => {box: "debian/bullseye64", user: "vagrant"},
  "debian12" => {box: "debian/bookworm64", user: "vagrant"},
}
if File.exist?(CONFIG)
  require CONFIG
end

# Defaults for config options defined in CONFIG
$num_instances ||= 3
$instance_name_prefix ||= "k8s"
$vm_gui ||= false
$vm_memory ||= 2048
$vm_cpus ||= 2
$shared_folders ||= {}
$forwarded_ports ||= {}
$subnet ||= "172.18.8"
$subnet_ipv6 ||= "fd3c:b398:0698:0756"
$os ||= "ubuntu2004"
$network_plugin ||= "flannel"
# Setting multi_networking to true will install Multus: https://github.com/k8snetworkplumbingwg/multus-cni
$multi_networking ||= "False"
$download_run_once ||= "True"
$download_force_cache ||= "False"
# The first three nodes are etcd servers
$etcd_instances ||= [$num_instances, 3].min
# The first two nodes are kube masters
$kube_master_instances ||= [$num_instances, 2].min
# All nodes are kube nodes
$kube_node_instances ||= $num_instances
# The following only works when using the libvirt provider
$kube_node_instances_with_disks ||= false
$kube_node_instances_with_disks_size ||= "20G"
$kube_node_instances_with_disks_number ||= 2
$override_disk_size ||= false
$disk_size ||= "20GB"
$local_path_provisioner_enabled ||= "False"
$local_path_provisioner_claim_root ||= "/opt/local-path-provisioner/"
$libvirt_nested ||= false
# boolean or string (e.g. "-vvv")
$ansible_verbosity ||= false
$ansible_tags ||= ENV['VAGRANT_ANSIBLE_TAGS'] || ""

$vagrant_dir ||= File.join(File.dirname(__FILE__), ".vagrant")

$playbook ||= "cluster.yml"
$extra_vars ||= {}

host_vars = {}

# throw error if os is not supported
if ! SUPPORTED_OS.key?($os)
  puts "Unsupported OS: #{$os}"
  puts "Supported OS are: #{SUPPORTED_OS.keys.join(', ')}"
  exit 1
end

$box = SUPPORTED_OS[$os][:box]
# if $inventory is not set, try to use example
$inventory = "inventory/sample" if ! $inventory
$inventory = File.absolute_path($inventory, File.dirname(__FILE__))

# if $inventory has a hosts.ini file use it, otherwise copy over
# vars etc to where vagrant expects dynamic inventory to be
if ! File.exist?(File.join(File.dirname($inventory), "hosts.ini"))
  $vagrant_ansible = File.join(File.absolute_path($vagrant_dir), "provisioners", "ansible")
  FileUtils.mkdir_p($vagrant_ansible) if ! File.exist?($vagrant_ansible)
  $vagrant_inventory = File.join($vagrant_ansible, "inventory")
  FileUtils.rm_f($vagrant_inventory)
  FileUtils.ln_s($inventory, $vagrant_inventory)
end

if Vagrant.has_plugin?("vagrant-proxyconf")
  $no_proxy = ENV['NO_PROXY'] || ENV['no_proxy'] || "127.0.0.1,localhost"
  (1..$num_instances).each do |i|
    $no_proxy += ",#{$subnet}.#{i+100}"
  end
end
Vagrant.configure("2") do |config|
  config.vm.box = $box
  if SUPPORTED_OS[$os].has_key? :box_url
    config.vm.box_url = SUPPORTED_OS[$os][:box_url]
  end
  config.ssh.username = SUPPORTED_OS[$os][:user]

  # plugin conflict
  if Vagrant.has_plugin?("vagrant-vbguest") then
    config.vbguest.auto_update = false
  end

  # always use Vagrant's insecure key
  config.ssh.insert_key = false

  if ($override_disk_size)
    unless Vagrant.has_plugin?("vagrant-disksize")
      system "vagrant plugin install vagrant-disksize"
    end
    config.disksize.size = $disk_size
  end

  (1..$num_instances).each do |i|
    config.vm.define vm_name = "%s-%01d" % [$instance_name_prefix, i] do |node|
      node.vm.hostname = vm_name

      if Vagrant.has_plugin?("vagrant-proxyconf")
        node.proxy.http = ENV['HTTP_PROXY'] || ENV['http_proxy'] || ""
        node.proxy.https = ENV['HTTPS_PROXY'] || ENV['https_proxy'] || ""
        node.proxy.no_proxy = $no_proxy
      end

      ["vmware_fusion", "vmware_workstation"].each do |vmware|
        node.vm.provider vmware do |v|
          v.vmx['memsize'] = $vm_memory
          v.vmx['numvcpus'] = $vm_cpus
        end
      end

      node.vm.provider :virtualbox do |vb|
        vb.memory = $vm_memory
        vb.cpus = $vm_cpus
        vb.gui = $vm_gui
        vb.linked_clone = true
        vb.customize ["modifyvm", :id, "--vram", "8"] # ubuntu defaults to 256 MB which is a waste of precious RAM
        vb.customize ["modifyvm", :id, "--audio", "none"]
      end

      node.vm.provider :libvirt do |lv|
        lv.nested = $libvirt_nested
        lv.cpu_mode = "host-model"
        lv.memory = $vm_memory
        lv.cpus = $vm_cpus
        lv.default_prefix = 'kubespray'
        # Fix kernel panic on fedora 28
        if $os == "fedora"
          lv.cpu_mode = "host-passthrough"
        end
      end
      if $kube_node_instances_with_disks
        # Libvirt
        driverletters = ('a'..'z').to_a
        node.vm.provider :libvirt do |lv|
          # always make /dev/sd{a/b/c} so that CI can ensure that
          # virtualbox and libvirt will have the same devices to use for OSDs
          (1..$kube_node_instances_with_disks_number).each do |d|
            lv.storage :file, :device => "hd#{driverletters[d]}", :path => "disk-#{i}-#{d}-#{DISK_UUID}.disk", :size => $kube_node_instances_with_disks_size, :bus => "scsi"
          end
        end
        node.vm.provider :virtualbox do |vb|
          # always make /dev/sd{a/b/c} so that CI can ensure that
          # virtualbox and libvirt will have the same devices to use for OSDs
          (1..$kube_node_instances_with_disks_number).each do |d|
            vb.customize ['createhd', '--filename', "disk-#{i}-#{driverletters[d]}-#{DISK_UUID}.disk", '--size', $kube_node_instances_with_disks_size] # 10GB disk
            vb.customize ['storageattach', :id, '--storagectl', 'SATA Controller', '--port', d, '--device', 0, '--type', 'hdd', '--medium', "disk-#{i}-#{driverletters[d]}-#{DISK_UUID}.disk", '--nonrotational', 'on', '--mtype', 'normal']
          end
        end
      end

      if $expose_docker_tcp
        node.vm.network "forwarded_port", guest: 2375, host: ($expose_docker_tcp + i - 1), auto_correct: true
      end

      $forwarded_ports.each do |guest, host|
        node.vm.network "forwarded_port", guest: guest, host: host, auto_correct: true
      end

      if ["rhel7", "rhel8"].include? $os
        # Vagrant synced_folder rsync options cannot be used for RHEL boxes as Rsync package cannot
        # be installed until the host is registered with a valid Red Hat support subscription
        node.vm.synced_folder ".", "/vagrant", disabled: false
        $shared_folders.each do |src, dst|
          node.vm.synced_folder src, dst
        end
      else
        node.vm.synced_folder ".", "/vagrant", disabled: false, type: "rsync", rsync__args: ['--verbose', '--archive', '--delete', '-z'], rsync__exclude: ['.git', 'venv']
        $shared_folders.each do |src, dst|
          node.vm.synced_folder src, dst, type: "rsync", rsync__args: ['--verbose', '--archive', '--delete', '-z']
        end
      end
      ip = "#{$subnet}.#{i+100}"
      node.vm.network :private_network,
        :ip => ip,
        :libvirt__guest_ipv6 => 'yes',
        :libvirt__ipv6_address => "#{$subnet_ipv6}::#{i+100}",
        :libvirt__ipv6_prefix => "64",
        :libvirt__forward_mode => "none",
        :libvirt__dhcp_enabled => false

      # Disable swap for each vm
      node.vm.provision "shell", inline: "swapoff -a"

      # ubuntu2004 and ubuntu2204 have IPv6 explicitly disabled. This undoes that.
      if ["ubuntu2004", "ubuntu2204"].include? $os
        node.vm.provision "shell", inline: "rm -f /etc/modprobe.d/local.conf"
        node.vm.provision "shell", inline: "sed -i '/net.ipv6.conf.all.disable_ipv6/d' /etc/sysctl.d/99-sysctl.conf /etc/sysctl.conf"
      end

      # Hack for fedora37/38 to get the IP address of the second interface
      if ["fedora37", "fedora38"].include? $os
        config.vm.provision "shell", inline: <<-SHELL
          nmcli conn modify 'Wired connection 2' ipv4.addresses $(cat /etc/sysconfig/network-scripts/ifcfg-eth1 | grep IPADDR | cut -d "=" -f2)
          nmcli conn modify 'Wired connection 2' ipv4.method manual
          service NetworkManager restart
        SHELL
      end

      # Rockylinux boxes need UEFI
      if ["rockylinux8", "rockylinux9"].include? $os
        config.vm.provider "libvirt" do |domain|
          domain.loader = "/usr/share/OVMF/x64/OVMF_CODE.fd"
        end
      end

      # Disable firewalld on oraclelinux/redhat vms
      if ["oraclelinux", "oraclelinux8", "rhel7", "rhel8", "rockylinux8"].include? $os
        node.vm.provision "shell", inline: "systemctl stop firewalld; systemctl disable firewalld"
      end

      host_vars[vm_name] = {
        "ip": ip,
        "flannel_interface": "eth1",
        "kube_network_plugin": $network_plugin,
        "kube_network_plugin_multus": $multi_networking,
        "download_run_once": $download_run_once,
        "download_localhost": "False",
        "download_cache_dir": ENV['HOME'] + "/kubespray_cache",
        # Make kubespray cache even when download_run_once is false
        "download_force_cache": $download_force_cache,
        # Keeping the cache on the nodes can improve provisioning speed while debugging kubespray
        "download_keep_remote_cache": "False",
        "docker_rpm_keepcache": "1",
        # These two settings will put kubectl and admin.conf in $inventory/artifacts
        "kubeconfig_localhost": "True",
        "kubectl_localhost": "True",
        "local_path_provisioner_enabled": "#{$local_path_provisioner_enabled}",
        "local_path_provisioner_claim_root": "#{$local_path_provisioner_claim_root}",
        "ansible_ssh_user": SUPPORTED_OS[$os][:user],
        "ansible_ssh_private_key_file": File.join(Dir.home, ".vagrant.d", "insecure_private_key"),
        "unsafe_show_logs": "True"
      }
      # Only execute the Ansible provisioner once, when all the machines are up and ready.
      # And limit the action to gathering facts; the full playbook is going to be run by testcases_run.sh
      if i == $num_instances
        node.vm.provision "ansible" do |ansible|
          ansible.playbook = $playbook
          ansible.compatibility_mode = "2.0"
          ansible.verbose = $ansible_verbosity
          $ansible_inventory_path = File.join($inventory, "hosts.ini")
          if File.exist?($ansible_inventory_path)
            ansible.inventory_path = $ansible_inventory_path
          end
          ansible.become = true
          ansible.limit = "all,localhost"
          ansible.host_key_checking = false
          ansible.raw_arguments = ["--forks=#{$num_instances}", "--flush-cache", "-e ansible_become_pass=vagrant"]
          ansible.host_vars = host_vars
          ansible.extra_vars = $extra_vars
          if $ansible_tags != ""
            ansible.tags = [$ansible_tags]
          end
          ansible.groups = {
            "etcd" => ["#{$instance_name_prefix}-[1:#{$etcd_instances}]"],
            "kube_control_plane" => ["#{$instance_name_prefix}-[1:#{$kube_master_instances}]"],
            "kube_node" => ["#{$instance_name_prefix}-[1:#{$kube_node_instances}]"],
            "k8s_cluster:children" => ["kube_control_plane", "kube_node"],
          }
        end
      end
    end
  end
end
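The `ansible.groups` ranges handed to the Ansible provisioner at the end of the Vagrantfile use Ansible's bracketed host-range syntax, and the etcd/control-plane counts are capped with `.min`. A standalone sketch (not part of the Vagrantfile) that reproduces those patterns with the default settings:

```ruby
# Standalone illustration of the group patterns built for the Ansible
# provisioner, using the defaults ($num_instances = 3, prefix "k8s").
num_instances = 3
prefix = "k8s"
etcd_instances = [num_instances, 3].min        # first three nodes run etcd
kube_master_instances = [num_instances, 2].min # first two nodes are control plane

groups = {
  "etcd" => ["#{prefix}-[1:#{etcd_instances}]"],
  "kube_control_plane" => ["#{prefix}-[1:#{kube_master_instances}]"],
  "kube_node" => ["#{prefix}-[1:#{num_instances}]"],
  "k8s_cluster:children" => ["kube_control_plane", "kube_node"],
}

puts groups["etcd"].first  # "k8s-[1:3]": Ansible expands this to k8s-1, k8s-2, k8s-3
```

With fewer instances than the caps (e.g. `$num_instances = 1`), the `.min` calls shrink the etcd and control-plane groups accordingly, so a single-node cluster still gets a valid inventory.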