# Ansible

## Installing Ansible

Kubespray supports multiple ansible versions and ships different `requirements.txt` files for them.
Depending on your available python version, you may be limited in choosing which ansible version to use.
It is recommended to deploy the ansible version used by kubespray into a python virtual environment.
```ShellSession
VENVDIR=kubespray-venv
KUBESPRAYDIR=kubespray
ANSIBLE_VERSION=2.12
virtualenv --python=$(which python3) $VENVDIR
source $VENVDIR/bin/activate
cd $KUBESPRAYDIR
pip install -U -r requirements-$ANSIBLE_VERSION.txt
test -f requirements-$ANSIBLE_VERSION.yml && \
  ansible-galaxy role install -r requirements-$ANSIBLE_VERSION.yml && \
  ansible-galaxy collection install -r requirements-$ANSIBLE_VERSION.yml
```
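
To confirm that the environment picked up the expected releases (assuming the virtual environment created above is still activated), you can check what Ansible and Python report:

```ShellSession
ansible --version
python3 --version
```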
### Ansible Python Compatibility

Based on the table below and the available python version for your ansible host, you should choose the appropriate ansible version to use with kubespray.

| Ansible Version | Python Version |
| --------------- | -------------- |
| 2.11 | 2.7,3.5-3.9 |
| 2.12 | 3.8-3.10 |
## Inventory

The inventory is composed of 3 groups:

* **kube_node** : list of kubernetes nodes where the pods will run.
* **kube_control_plane** : list of servers where kubernetes control plane components (apiserver, scheduler, controller) will run.
* **etcd**: list of servers to compose the etcd server. You should have at least 3 servers for failover purposes.

Note: do not modify the children of _k8s_cluster_, like putting
the _etcd_ group into the _k8s_cluster_, unless you are certain
that is what you want and the group is fully contained in the latter:
```ShellSession
etcd ⊂ k8s_cluster => kube_node ∩ etcd = etcd
```
When _kube_node_ contains _etcd_, your etcd cluster is also schedulable for Kubernetes workloads.
If you want a standalone etcd cluster, make sure those groups do not intersect.
If you want a server to act both as control-plane and node, it must be defined
in both groups _kube_control_plane_ and _kube_node_. If you want a standalone and
unschedulable control plane, the server must be defined only in _kube_control_plane_ and
not in _kube_node_.
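
As an illustration, here is a minimal sketch of an inventory with a standalone, unschedulable control plane (hostnames and addresses are made up for the example):

```ini
node1 ansible_host=10.0.0.1
node2 ansible_host=10.0.0.2

[kube_control_plane]
node1

[etcd]
node1

[kube_node]
node2

[k8s_cluster:children]
kube_control_plane
kube_node
```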
There are also two special groups:

* **calico_rr** : explained for [advanced Calico networking cases](/docs/calico.md)
* **bastion** : configure a bastion host if your nodes are not directly reachable

Below is a complete inventory example:
```ini
## Configure 'ip' variable to bind kubernetes services on a
## different ip than the default iface
node1 ansible_host=95.54.0.12 ip=10.3.0.1
node2 ansible_host=95.54.0.13 ip=10.3.0.2
node3 ansible_host=95.54.0.14 ip=10.3.0.3
node4 ansible_host=95.54.0.15 ip=10.3.0.4
node5 ansible_host=95.54.0.16 ip=10.3.0.5
node6 ansible_host=95.54.0.17 ip=10.3.0.6

[kube_control_plane]
node1
node2

[etcd]
node1
node2
node3

[kube_node]
node2
node3
node4
node5
node6

[k8s_cluster:children]
kube_node
kube_control_plane
```
## Group vars and overriding variables precedence

The group variables to control main deployment options are located in the directory ``inventory/sample/group_vars``.
Optional variables are located in `inventory/sample/group_vars/all.yml`.
Mandatory variables that are common for at least one role (or a node group) can be found in
`inventory/sample/group_vars/k8s_cluster.yml`.
There are also role vars for docker, kubernetes preinstall and control plane roles.
According to the [ansible docs](https://docs.ansible.com/ansible/latest/playbooks_variables.html#variable-precedence-where-should-i-put-a-variable),
those cannot be overridden from the group vars. In order to override them, one should use
the `-e` runtime flags (the simplest way) or the other layers described in the docs.

Kubespray uses only a few layers to override things (or expects them to
be overridden for roles):
Layer | Comment
------|--------
**role defaults** | provides best UX to override things for Kubespray deployments
inventory vars | Unused
**inventory group_vars** | Expects users to use ``all.yml``,``k8s_cluster.yml`` etc. to override things
inventory host_vars | Unused
playbook group_vars | Unused
playbook host_vars | Unused
**host facts** | Kubespray overrides for internal roles' logic, like state flags
play vars | Unused
play vars_prompt | Unused
play vars_files | Unused
registered vars | Unused
set_facts | Kubespray overrides those, for some places
**role and include vars** | Provides bad UX to override things! Use extra vars to enforce
block vars (only for tasks in block) | Kubespray overrides for internal roles' logic
task vars (only for the task) | Unused for roles, but only for helper scripts
**extra vars** (always win precedence) | override with ``ansible-playbook -e @foo.yml``
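
For instance, overrides can be collected in a small YAML file and passed as extra vars; the file name and the variable values below are only an illustration of the mechanism:

```yaml
# foo.yml -- example extra-vars file (values are illustrative)
kube_network_plugin: calico
kube_service_addresses: 10.233.0.0/18
```

```ShellSession
ansible-playbook -i inventory/sample/hosts.ini cluster.yml -e @foo.yml
```

Because extra vars always win precedence, values passed this way take effect regardless of what the inventory group_vars define.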
## Ansible tags

The following tags are defined in playbooks:

| Tag name | Used for
|--------------------------------|---------
| annotate | Create kube-router annotation
| apps | K8s apps definitions
| asserts | Check tasks for download role
| aws-ebs-csi-driver | Configuring csi driver: aws-ebs
| azure-csi-driver | Configuring csi driver: azure
| bastion | Setup ssh config for bastion
| bootstrap-os | Anything related to host OS configuration
| calico | Network plugin Calico
| calico_rr | Configuring Calico route reflector
| canal | Network plugin Canal
| cephfs-provisioner | Configuring CephFS
| cert-manager | Configuring certificate manager for K8s
| cilium | Network plugin Cilium
| cinder-csi-driver | Configuring csi driver: cinder
| client | Kubernetes clients role
| cloud-provider | Cloud-provider related tasks
| cluster-roles | Configuring cluster wide application (psp ...)
| cni | CNI plugins for Network Plugins
| containerd | Configuring containerd engine runtime for hosts
| container_engine_accelerator | Enable nvidia accelerator for runtimes
| container-engine | Configuring container engines
| container-runtimes | Configuring container runtimes
| coredns | Configuring coredns deployment
| crio | Configuring crio container engine for hosts
| crun | Configuring crun runtime
| csi-driver | Configuring csi driver
| dashboard | Installing and configuring the Kubernetes Dashboard
| dns | Remove dns entries when resetting
| docker | Configuring docker engine runtime for hosts
| download | Fetching container images to a delegate host
| etcd | Configuring etcd cluster
| etcd-secrets | Configuring etcd certs/keys
| etchosts | Configuring /etc/hosts entries for hosts
| external-cloud-controller | Configure cloud controllers
| external-openstack | Cloud controller : openstack
| external-provisioner | Configure external provisioners
| external-vsphere | Cloud controller : vsphere
| facts | Gathering facts and misc check results
| files | Remove files when resetting
| flannel | Network plugin flannel
| gce | Cloud-provider GCP
| gcp-pd-csi-driver | Configuring csi driver: gcp-pd
| gvisor | Configuring gvisor runtime
| helm | Installing and configuring Helm
| ingress-controller | Configure ingress controllers
| ingress_alb | AWS ALB Ingress Controller
| init | Windows kubernetes init nodes
| iptables | Flush and clear iptables rules when resetting
| k8s-pre-upgrade | Upgrading K8s cluster
| k8s-secrets | Configuring K8s certs/keys
| k8s-gen-tokens | Configuring K8s tokens
| kata-containers | Configuring kata-containers runtime
| krew | Install and manage krew
| kubeadm | Roles linked to kubeadm tasks
| kube-apiserver | Configuring static pod kube-apiserver
| kube-controller-manager | Configuring static pod kube-controller-manager
| kube-vip | Installing and configuring kube-vip
| kubectl | Installing kubectl and bash completion
| kubelet | Configuring kubelet service
| kube-ovn | Network plugin kube-ovn
| kube-router | Network plugin kube-router
| kube-proxy | Configuring static pod kube-proxy
| localhost | Special steps for the localhost (ansible runner)
| local-path-provisioner | Configure External provisioner: local-path
| local-volume-provisioner | Configure External provisioner: local-volume
| macvlan | Network plugin macvlan
| master | Configuring K8s master node role
| metallb | Installing and configuring metallb
| metrics_server | Configuring metrics_server
| netchecker | Installing netchecker K8s app
| network | Configuring networking plugins for K8s
| mounts | Unmount kubelet dirs when resetting
| multus | Network plugin multus
| nginx | Configuring LB for kube-apiserver instances
| node | Configuring K8s minion (compute) node role
| nodelocaldns | Configuring nodelocaldns daemonset
| node-label | Tasks linked to labeling of nodes
| node-webhook | Tasks linked to webhook (granting access to resources)
| nvidia_gpu | Enable nvidia accelerator for runtimes
| oci | Cloud provider: oci
| persistent_volumes | Configure csi volumes
| persistent_volumes_aws_ebs_csi | Configuring csi driver: aws-ebs
| persistent_volumes_cinder_csi | Configuring csi driver: cinder
| persistent_volumes_gcp_pd_csi | Configuring csi driver: gcp-pd
| persistent_volumes_openstack | Configuring csi driver: openstack
| policy-controller | Configuring Calico policy controller
| post-remove | Tasks running post-remove operation
| post-upgrade | Tasks running post-upgrade operation
| pre-remove | Tasks running pre-remove operation
| pre-upgrade | Tasks running pre-upgrade operation
| preinstall | Preliminary configuration steps
| registry | Configuring local docker registry
| reset | Tasks run during the node reset
| resolvconf | Configuring /etc/resolv.conf for hosts/apps
| rbd-provisioner | Configure External provisioner: rbd
| services | Remove services (etcd, kubelet etc...) when resetting
| snapshot | Enabling csi snapshot
| snapshot-controller | Configuring csi snapshot controller
| upgrade | Upgrading, e.g. container images/binaries
| upload | Distributing images/binaries across hosts
| vsphere-csi-driver | Configuring csi driver: vsphere
| weave | Network plugin Weave
| win_nodes | Running windows specific tasks
| youki | Configuring youki runtime
Note: Use the ``bash scripts/gen_tags.sh`` command to generate a list of all
tags found in the codebase. New tags will be listed with an empty "Used for"
field.
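
You can also ask Ansible itself which tags a playbook exposes (assuming the sample inventory path used elsewhere in this document):

```ShellSession
ansible-playbook -i inventory/sample/hosts.ini cluster.yml --list-tags
```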
## Example commands

Example command to filter and apply only DNS configuration tasks and skip
everything else related to host OS configuration and downloading container images:

```ShellSession
ansible-playbook -i inventory/sample/hosts.ini cluster.yml --tags preinstall,facts --skip-tags=download,bootstrap-os
```
And this play only removes the K8s cluster DNS resolver IP from hosts' /etc/resolv.conf files:

```ShellSession
ansible-playbook -i inventory/sample/hosts.ini -e dns_mode='none' cluster.yml --tags resolvconf
```
And this prepares all container images locally (on the ansible runner node) without installing
or upgrading related stuff or trying to upload containers to the K8s cluster nodes:

```ShellSession
ansible-playbook -i inventory/sample/hosts.ini cluster.yml \
  -e download_run_once=true -e download_localhost=true \
  --tags download --skip-tags upload,upgrade
```

Note: use `--tags` and `--skip-tags` wisely and only if you're 100% sure what you're doing.
## Bastion host

If you prefer not to make your nodes publicly accessible (nodes with private IPs only),
you can use a so-called _bastion_ host to connect to your nodes. To specify and use a bastion,
simply add the following lines to your inventory, replacing x.x.x.x with the public IP of the
bastion host.

```ShellSession
[bastion]
bastion ansible_host=x.x.x.x
```

For more information about Ansible and bastion hosts, read
[Running Ansible Through an SSH Bastion Host](https://blog.scottlowe.org/2015/12/24/running-ansible-through-ssh-bastion-host/)
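
For background only, the general Ansible pattern the linked post describes is to route SSH through the bastion with a ProxyCommand; a generic sketch (not Kubespray-specific configuration, user and x.x.x.x are placeholders) would look like:

```ini
[k8s_cluster:vars]
ansible_ssh_common_args='-o ProxyCommand="ssh -W %h:%p -q user@x.x.x.x"'
```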
## Mitogen

Mitogen support is deprecated; please see the [mitogen related docs](/docs/mitogen.md) for usage and reasons for deprecation.
## Beyond ansible 2.9

The Ansible project has decided, in order to ease its maintenance burden, to split into
two projects, which are now joined under the Ansible umbrella.

Ansible-base (2.10.x branch) will contain just the ansible language implementation, while
ansible modules that were previously bundled into a single repository will be part of the
ansible 3.x package. Please see [this blog post](https://blog.while-true-do.io/ansible-release-3-0-0/)
that explains in detail the need and the evolution plan.

**Note:** this change means that ansible virtual envs cannot be upgraded with `pip install -U`.
You first need to uninstall your old ansible (pre 2.10) version and install the new one.
```ShellSession
pip uninstall ansible ansible-base ansible-core
cd kubespray/
pip install -U .
```