# Ansible variables

## Inventory

The inventory is composed of 3 groups:

* **kube_node**: list of Kubernetes nodes where the pods will run.
* **kube_control_plane**: list of servers where the Kubernetes control plane components (apiserver, scheduler, controller-manager) will run.
* **etcd**: list of servers that compose the etcd cluster. You should have at least 3 servers for failover purposes.

Note: do not modify the children of _k8s_cluster_, like putting
the _etcd_ group into _k8s_cluster_, unless you are certain you
want to do that and the group is fully contained in the latter:

```ShellSession
etcd ⊂ k8s_cluster => kube_node ∩ etcd = etcd
```

When _kube_node_ contains _etcd_, your etcd cluster is also schedulable for Kubernetes workloads.
If you want a standalone etcd cluster, make sure those groups do not intersect.
If you want a server to act both as control plane and node, it must be defined
in both groups _kube_control_plane_ and _kube_node_. If you want a standalone and
unschedulable control plane, the server must be defined only in _kube_control_plane_ and
not in _kube_node_.

There are also two special groups:

* **calico_rr**: explained in [advanced Calico networking cases](/docs/calico.md)
* **bastion**: configure a bastion host if your nodes are not directly reachable

Below is a complete inventory example:

```ini
## Configure 'ip' variable to bind kubernetes services on a
## different ip than the default iface
node1 ansible_host=95.54.0.12 ip=10.3.0.1
node2 ansible_host=95.54.0.13 ip=10.3.0.2
node3 ansible_host=95.54.0.14 ip=10.3.0.3
node4 ansible_host=95.54.0.15 ip=10.3.0.4
node5 ansible_host=95.54.0.16 ip=10.3.0.5
node6 ansible_host=95.54.0.17 ip=10.3.0.6

[kube_control_plane]
node1
node2

[etcd]
node1
node2
node3

[kube_node]
node2
node3
node4
node5
node6

[k8s_cluster:children]
kube_node
kube_control_plane
```
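
To sanity-check which hosts end up in which groups before running anything, you can ask Ansible to render the resolved inventory. This is only an optional verification step, not something Kubespray requires:

```ShellSession
# Show the resolved group/host tree for the inventory above;
# kube_node, kube_control_plane and etcd should list the expected nodes.
ansible-inventory -i inventory/sample/hosts.ini --graph

# Dump all variables resolved for a single host (group_vars, host_vars, etc.)
ansible-inventory -i inventory/sample/hosts.ini --host node1
```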

## Group vars and overriding variables precedence

The group variables that control the main deployment options are located in the directory ``inventory/sample/group_vars``.
Optional variables are located in `inventory/sample/group_vars/all.yml`.
Mandatory variables that are common to at least one role (or a node group) can be found in
`inventory/sample/group_vars/k8s_cluster.yml`.

There are also role vars for the docker, kubernetes preinstall and control plane roles.
According to the [ansible docs](https://docs.ansible.com/ansible/latest/playbooks_variables.html#variable-precedence-where-should-i-put-a-variable),
those cannot be overridden from group vars. To override them, use
the `-e` runtime flag (the simplest way) or one of the other layers described in the docs.

Kubespray uses only a few layers to override things (or expects them to
be overridden for roles):

Layer | Comment
------|--------
**role defaults** | Provides the best UX to override things for Kubespray deployments
inventory vars | Unused
**inventory group_vars** | Expects users to use ``all.yml``, ``k8s_cluster.yml`` etc. to override things
inventory host_vars | Unused
playbook group_vars | Unused
playbook host_vars | Unused
**host facts** | Kubespray overrides for internal roles' logic, like state flags
play vars | Unused
play vars_prompt | Unused
play vars_files | Unused
registered vars | Unused
set_facts | Kubespray overrides these in some places
**role and include vars** | Provides bad UX for overriding things! Use extra vars to enforce
block vars (only for tasks in block) | Kubespray overrides for internal roles' logic
task vars (only for the task) | Unused for roles, only for helper scripts
**extra vars** (always win precedence) | Override with ``ansible-playbook -e @foo.yml``
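
For a concrete sense of the two recommended override layers, the sketch below shows an inventory group_vars override next to an extra-vars override. The variable `kube_version` and the file name `override.yml` are only illustrative examples, not values this document mandates:

```ShellSession
# Option 1: set the variable in inventory group_vars (picked up automatically).
# Illustrative variable and value; adjust to the variable you actually need.
echo 'kube_version: v1.29.5' >> inventory/sample/group_vars/k8s_cluster.yml

# Option 2: pass extra vars from a file; extra vars always win precedence.
cat > override.yml <<EOF
kube_version: v1.29.5
EOF
ansible-playbook -i inventory/sample/hosts.ini cluster.yml -e @override.yml
```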

## Ansible tags

The following tags are defined in playbooks:

| Tag name | Used for
|--------------------------------|---------
| annotate | Create kube-router annotation
| apps | K8s apps definitions
| asserts | Check tasks for download role
| aws-ebs-csi-driver | Configuring csi driver: aws-ebs
| azure-csi-driver | Configuring csi driver: azure
| bastion | Setup ssh config for bastion
| bootstrap-os | Anything related to host OS configuration
| calico | Network plugin Calico
| calico_rr | Configuring Calico route reflector
| canal | Network plugin Canal
| cephfs-provisioner | Configuring CephFS
| cert-manager | Configuring certificate manager for K8s
| cilium | Network plugin Cilium
| cinder-csi-driver | Configuring csi driver: cinder
| client | Kubernetes clients role
| cloud-provider | Cloud-provider related tasks
| cluster-roles | Configuring cluster-wide applications (psp ...)
| cni | CNI plugins for Network Plugins
| containerd | Configuring containerd engine runtime for hosts
| container_engine_accelerator | Enable nvidia accelerator for runtimes
| container-engine | Configuring container engines
| container-runtimes | Configuring container runtimes
| coredns | Configuring coredns deployment
| crio | Configuring crio container engine for hosts
| crun | Configuring crun runtime
| csi-driver | Configuring csi driver
| dashboard | Installing and configuring the Kubernetes Dashboard
| dns | Remove dns entries when resetting
| docker | Configuring docker engine runtime for hosts
| download | Fetching container images to a delegate host
| etcd | Configuring etcd cluster
| etcd-secrets | Configuring etcd certs/keys
| etchosts | Configuring /etc/hosts entries for hosts
| external-cloud-controller | Configure cloud controllers
| external-openstack | Cloud controller: openstack
| external-provisioner | Configure external provisioners
| external-vsphere | Cloud controller: vsphere
| facts | Gathering facts and misc check results
| files | Remove files when resetting
| flannel | Network plugin flannel
| gce | Cloud-provider GCP
| gcp-pd-csi-driver | Configuring csi driver: gcp-pd
| gvisor | Configuring gvisor runtime
| helm | Installing and configuring Helm
| ingress-controller | Configure ingress controllers
| ingress_alb | AWS ALB Ingress Controller
| init | Windows kubernetes init nodes
| iptables | Flush and clear iptables when resetting
| k8s-pre-upgrade | Upgrading K8s cluster
| k8s-secrets | Configuring K8s certs/keys
| k8s-gen-tokens | Configuring K8s tokens
| kata-containers | Configuring kata-containers runtime
| krew | Install and manage krew
| kubeadm | Roles linked to kubeadm tasks
| kube-apiserver | Configuring static pod kube-apiserver
| kube-controller-manager | Configuring static pod kube-controller-manager
| kube-vip | Installing and configuring kube-vip
| kubectl | Installing kubectl and bash completion
| kubelet | Configuring kubelet service
| kube-ovn | Network plugin kube-ovn
| kube-router | Network plugin kube-router
| kube-proxy | Configuring static pod kube-proxy
| localhost | Special steps for the localhost (ansible runner)
| local-path-provisioner | Configure external provisioner: local-path
| local-volume-provisioner | Configure external provisioner: local-volume
| macvlan | Network plugin macvlan
| master | Configuring K8s master node role
| metallb | Installing and configuring metallb
| metrics_server | Configuring metrics_server
| netchecker | Installing netchecker K8s app
| network | Configuring networking plugins for K8s
| mounts | Unmount kubelet dirs when resetting
| multus | Network plugin multus
| nginx | Configuring LB for kube-apiserver instances
| node | Configuring K8s minion (compute) node role
| nodelocaldns | Configuring nodelocaldns daemonset
| node-label | Tasks linked to labeling of nodes
| node-webhook | Tasks linked to webhook (granting access to resources)
| nvidia_gpu | Enable nvidia accelerator for runtimes
| oci | Cloud provider: oci
| persistent_volumes | Configure csi volumes
| persistent_volumes_aws_ebs_csi | Configuring csi driver: aws-ebs
| persistent_volumes_cinder_csi | Configuring csi driver: cinder
| persistent_volumes_gcp_pd_csi | Configuring csi driver: gcp-pd
| persistent_volumes_openstack | Configuring csi driver: openstack
| policy-controller | Configuring Calico policy controller
| post-remove | Tasks running post-remove operation
| post-upgrade | Tasks running post-upgrade operation
| pre-remove | Tasks running pre-remove operation
| pre-upgrade | Tasks running pre-upgrade operation
| preinstall | Preliminary configuration steps
| registry | Configuring local docker registry
| reset | Tasks run during the node reset
| resolvconf | Configuring /etc/resolv.conf for hosts/apps
| rbd-provisioner | Configure external provisioner: rbd
| services | Remove services (etcd, kubelet, etc.) when resetting
| snapshot | Enabling csi snapshot
| snapshot-controller | Configuring csi snapshot controller
| upgrade | Upgrading, e.g. container images/binaries
| upload | Distributing images/binaries across hosts
| vsphere-csi-driver | Configuring csi driver: vsphere
| weave | Network plugin Weave
| win_nodes | Running windows specific tasks
| youki | Configuring youki runtime

Note: Use the ``bash scripts/gen_tags.sh`` command to generate a list of all
tags found in the codebase. New tags will be listed with an empty "Used for"
field.
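
If you want to cross-check which tags are attached to the plays you are about to run, `ansible-playbook` can also list them without executing anything; this is a quick sanity check complementary to `scripts/gen_tags.sh`:

```ShellSession
# List the tags attached to each play in cluster.yml without running it
ansible-playbook -i inventory/sample/hosts.ini cluster.yml --list-tags
```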

## Example commands

Example command to filter and apply only DNS configuration tasks and skip
everything else related to host OS configuration and downloading container images:

```ShellSession
ansible-playbook -i inventory/sample/hosts.ini cluster.yml --tags preinstall,facts --skip-tags=download,bootstrap-os
```

And this play only removes the K8s cluster DNS resolver IP from the hosts' /etc/resolv.conf files:

```ShellSession
ansible-playbook -i inventory/sample/hosts.ini -e dns_mode='none' cluster.yml --tags resolvconf
```

And this prepares all container images locally (on the ansible runner node) without installing
or upgrading anything and without trying to upload containers to the K8s cluster nodes:

```ShellSession
ansible-playbook -i inventory/sample/hosts.ini cluster.yml \
  -e download_run_once=true -e download_localhost=true \
  --tags download --skip-tags upload,upgrade
```

Note: use `--tags` and `--skip-tags` wisely and only if you're 100% sure what you're doing.
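
Before committing to a filtered run, it can help to preview exactly which tasks a given tag selection would execute; `--list-tasks` does this without changing anything on the hosts. A minimal sketch, reusing the resolvconf example above:

```ShellSession
# Preview the tasks the resolvconf-tagged run would execute, without running them
ansible-playbook -i inventory/sample/hosts.ini cluster.yml --tags resolvconf --list-tasks
```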

## Bastion host

If you prefer not to make your nodes publicly accessible (nodes with private IPs only),
you can use a so-called *bastion* host to connect to your nodes. To specify and use a bastion,
simply add the following lines to your inventory, replacing x.x.x.x with the public IP of the
bastion host.

```ShellSession
[bastion]
bastion ansible_host=x.x.x.x
```

For more information about Ansible and bastion hosts, read
[Running Ansible Through an SSH Bastion Host](https://blog.scottlowe.org/2015/12/24/running-ansible-through-ssh-bastion-host/)

## Mitogen

Mitogen support is deprecated; please see the [mitogen related docs](/docs/mitogen.md) for usage and the reasons for deprecation.

## Beyond ansible 2.9

The Ansible project has decided, in order to ease its maintenance burden, to split into
two projects which are now joined under the Ansible umbrella.

Ansible-base (2.10.x branch) will contain just the ansible language implementation, while
the ansible modules that were previously bundled into a single repository will be part of the
ansible 3.x package. Please see [this blog post](https://blog.while-true-do.io/ansible-release-3-0-0/)
that explains in detail the need and the evolution plan.

**Note:** this change means that ansible virtual envs cannot be upgraded with `pip install -U`.
You first need to uninstall your old ansible (pre 2.10) version and install the new one.

```ShellSession
pip uninstall ansible ansible-base ansible-core
cd kubespray/
pip install -U .
```

**Note:** some changes needed to support ansible 2.10+ are not backwards compatible with 2.9.
Kubespray needs to evolve and keep pace with upstream ansible and will eventually be forced to
drop 2.9 support. Kubespray CI uses only the ansible version specified in `requirements.txt`,
and while `ansible_version.yml` may allow older versions to be used, these are not
exercised in the CI and compatibility is not guaranteed.
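
To stay on the CI-tested version, one straightforward approach is to install the pinned dependencies from the repository's `requirements.txt` into a fresh virtual environment. A sketch; the venv path is just an example:

```ShellSession
# Create a clean virtualenv and install the ansible version pinned in requirements.txt
python3 -m venv ~/.venvs/kubespray   # example path, choose your own
source ~/.venvs/kubespray/bin/activate
cd kubespray/
pip install -r requirements.txt
ansible --version   # verify the installed version matches the pinned one
```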