Ansible variables
=================
Inventory
---------

The inventory is composed of three groups:

* **kube-node**: list of Kubernetes nodes where the pods will run.
* **kube-master**: list of servers where the Kubernetes master components (apiserver, scheduler, controller) will run.
* **etcd**: list of servers composing the etcd cluster. You should have at least three servers for failover purposes.
Note: do not modify the children of _k8s-cluster_, such as putting the
_etcd_ group into _k8s-cluster_, unless you are certain you want that and
the group is fully contained in the latter:

```
etcd ⊂ k8s-cluster => kube-node ∩ etcd = etcd
```

When _kube-node_ contains _etcd_, your etcd cluster is also schedulable for Kubernetes workloads.
If you want a standalone etcd cluster, make sure those groups do not intersect.
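A quick way to verify that the groups do not overlap is to parse the inventory and intersect them. The following is a minimal sketch (the inventory content is inlined here for illustration; in practice you would read your own `hosts.ini`):

```python
import configparser

# Ansible INI inventories list bare hostnames, so allow keys without values.
parser = configparser.ConfigParser(allow_no_value=True)
parser.read_string("""
[kube-node]
node2
node3

[etcd]
node1
node2
node3
""")

kube_node = set(parser["kube-node"])
etcd = set(parser["etcd"])

# Any host appearing in both groups runs etcd *and* schedules workloads.
overlap = kube_node & etcd
print(sorted(overlap))  # → ['node2', 'node3']
```

An empty intersection means your etcd cluster is standalone and will not receive Kubernetes workloads.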
If you want a server to act as both master and node, it must be defined in
both the _kube-master_ and _kube-node_ groups. If you want a standalone,
unschedulable master, define the server only in _kube-master_ and not in
_kube-node_.
There are also two special groups:

* **calico-rr**: explained in [advanced Calico networking cases](calico.md)
* **bastion**: configure a bastion host if your nodes are not directly reachable
Below is a complete inventory example:

```
## Configure 'ip' variable to bind kubernetes services on a
## different ip than the default iface
node1 ansible_ssh_host=95.54.0.12 ip=10.3.0.1
node2 ansible_ssh_host=95.54.0.13 ip=10.3.0.2
node3 ansible_ssh_host=95.54.0.14 ip=10.3.0.3
node4 ansible_ssh_host=95.54.0.15 ip=10.3.0.4
node5 ansible_ssh_host=95.54.0.16 ip=10.3.0.5
node6 ansible_ssh_host=95.54.0.17 ip=10.3.0.6

[kube-master]
node1
node2

[etcd]
node1
node2
node3

[kube-node]
node2
node3
node4
node5
node6

[k8s-cluster:children]
kube-node
kube-master
```
Group vars and overriding variables precedence
----------------------------------------------

The group variables controlling the main deployment options are located in the directory ``inventory/sample/group_vars``.
Optional variables are located in `inventory/sample/group_vars/all.yml`.
Mandatory variables that are common to at least one role (or a node group) can be found in
`inventory/sample/group_vars/k8s-cluster.yml`.
There are also role vars for the docker, rkt, kubernetes preinstall and master roles.
According to the [ansible docs](http://docs.ansible.com/ansible/playbooks_variables.html#variable-precedence-where-should-i-put-a-variable),
those cannot be overridden from the group vars. To override them, use the
`-e` runtime flag (the simplest way) or one of the other layers described in the docs.

Kubespray uses only a few layers to override things (or expects them to
be overridden for roles):
Layer | Comment
------|--------
**role defaults** | provides best UX to override things for Kubespray deployments
inventory vars | Unused
**inventory group_vars** | Expects users to use ``all.yml``, ``k8s-cluster.yml`` etc. to override things
inventory host_vars | Unused
playbook group_vars | Unused
playbook host_vars | Unused
**host facts** | Kubespray overrides for internal roles' logic, like state flags
play vars | Unused
play vars_prompt | Unused
play vars_files | Unused
registered vars | Unused
set_facts | Kubespray overrides these in some places
**role and include vars** | Provides bad UX for overriding things! Use extra vars to enforce
block vars (only for tasks in block) | Kubespray overrides for internal roles' logic
task vars (only for the task) | Unused for roles, but used for helper scripts
**extra vars** (always win precedence) | override with ``ansible-playbook -e @foo.yml``
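For example, several overrides can be collected in a YAML file and applied with `-e @file.yml`, which always wins precedence. This is a sketch: the file name is arbitrary, and the override values are illustrative, reusing variables mentioned elsewhere in this document:

```shell
# Collect overrides in a YAML file; extra vars take precedence over all other layers.
cat > extra-vars.yml <<'EOF'
download_run_once: true
dnsmasq_dns_server: ""
EOF

# Then pass the file as extra vars (commented out here, as it requires a prepared inventory):
# ansible-playbook -i inventory/sample/hosts.ini cluster.yml -e @extra-vars.yml
echo "wrote extra-vars.yml"
```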
Ansible tags
------------

The following tags are defined in playbooks:
| Tag name | Used for
|--------------------------|---------
| apps | K8s apps definitions
| azure | Cloud-provider Azure
| bastion | Setup ssh config for bastion
| bootstrap-os | Anything related to host OS configuration
| calico | Network plugin Calico
| canal | Network plugin Canal
| cloud-provider | Cloud-provider related tasks
| dnsmasq | Configuring DNS stack for hosts and K8s apps
| docker | Configuring docker for hosts
| download | Fetching container images to a delegate host
| etcd | Configuring etcd cluster
| etcd-pre-upgrade | Upgrading etcd cluster
| etcd-secrets | Configuring etcd certs/keys
| etchosts | Configuring /etc/hosts entries for hosts
| facts | Gathering facts and misc check results
| flannel | Network plugin flannel
| gce | Cloud-provider GCP
| hyperkube | Manipulations with K8s hyperkube image
| k8s-pre-upgrade | Upgrading K8s cluster
| k8s-secrets | Configuring K8s certs/keys
| kube-apiserver | Configuring static pod kube-apiserver
| kube-controller-manager | Configuring static pod kube-controller-manager
| kubectl | Installing kubectl and bash completion
| kubelet | Configuring kubelet service
| kube-proxy | Configuring static pod kube-proxy
| kube-scheduler | Configuring static pod kube-scheduler
| localhost | Special steps for the localhost (ansible runner)
| master | Configuring K8s master node role
| netchecker | Installing netchecker K8s app
| network | Configuring networking plugins for K8s
| nginx | Configuring LB for kube-apiserver instances
| node | Configuring K8s minion (compute) node role
| openstack | Cloud-provider OpenStack
| preinstall | Preliminary configuration steps
| resolvconf | Configuring /etc/resolv.conf for hosts/apps
| upgrade | Upgrading, e.g. container images/binaries
| upload | Distributing images/binaries across hosts
| weave | Network plugin Weave
Note: Use the ``bash scripts/gen_tags.sh`` command to generate a list of all
tags found in the codebase. New tags will be listed with an empty "Used for"
field.
Example commands
----------------

Example command to apply only the DNS configuration tasks and skip everything
else related to host OS configuration and downloading container images:

```
ansible-playbook -i inventory/sample/hosts.ini cluster.yml --tags preinstall,dnsmasq,facts --skip-tags=download,bootstrap-os
```
This play only removes the K8s cluster DNS resolver IP from the hosts' /etc/resolv.conf files:

```
ansible-playbook -i inventory/sample/hosts.ini -e dnsmasq_dns_server='' cluster.yml --tags resolvconf
```
And this prepares all container images locally (on the Ansible runner node) without installing
or upgrading related components, and without uploading containers to the K8s cluster nodes:

```
ansible-playbook -i inventory/sample/hosts.ini cluster.yml \
    -e download_run_once=true -e download_localhost=true \
    --tags download --skip-tags upload,upgrade
```
Note: use `--tags` and `--skip-tags` wisely and only if you're 100% sure what you're doing.
Bastion host
------------

If you prefer not to make your nodes publicly accessible (nodes with private IPs only),
you can use a so-called *bastion* host to connect to them. To specify and use a bastion,
add a line like the following to your inventory, replacing x.x.x.x with the public IP of the
bastion host:

```
bastion ansible_ssh_host=x.x.x.x
```

For more information about Ansible and bastion hosts, read
[Running Ansible Through an SSH Bastion Host](http://blog.scottlowe.org/2015/12/24/running-ansible-through-ssh-bastion-host/).
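Conceptually, the bastion entry makes Ansible tunnel its SSH connections through that host. A roughly equivalent manual SSH client configuration (a sketch with hypothetical host names and addresses; the inventory entry above is all Kubespray itself needs) would look like:

```
# ~/.ssh/config (sketch)
Host node1
    HostName 10.3.0.1
    ProxyCommand ssh -W %h:%p user@x.x.x.x
```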