# Overview

Distributed systems such as Kubernetes are designed to be resilient to
failures. More details about Kubernetes High-Availability (HA) can be found in
[Building High-Availability Clusters](https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/high-availability/).
To keep the picture simple, most parts of HA are skipped below; only the
Kubelet <-> Controller Manager communication is described.

By default the normal behavior looks like this:

1. Kubelet updates its status to the API Server periodically, as specified by
`--node-status-update-frequency`. The default value is **10s**.
2. The Kubernetes controller manager checks the Kubelet statuses every
`--node-monitor-period`. The default value is **5s**.
3. If the status was updated within `--node-monitor-grace-period`, the
Kubernetes controller manager considers the Kubelet healthy. The
default value is **40s**.
> The Kubernetes controller manager and Kubelet work asynchronously. This means
> that the delay may include network latency, API Server latency, etcd latency,
> latency caused by load on the control plane nodes, and so on. So if
> `--node-status-update-frequency` is set to 5s, the update may in reality
> appear in etcd after 6-7 seconds, or even longer when etcd cannot commit
> data to the quorum nodes.
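As a rough illustration of the default timings above, here is a small sketch. The constants simply mirror the default flag values listed earlier, and the arithmetic deliberately ignores the real-world latencies mentioned in the note:

```python
# Defaults of the flags described above (all in seconds).
NODE_STATUS_UPDATE_FREQUENCY = 10  # kubelet: --node-status-update-frequency
NODE_MONITOR_PERIOD = 5            # controller manager: --node-monitor-period
NODE_MONITOR_GRACE_PERIOD = 40     # controller manager: --node-monitor-grace-period

# How many status updates the kubelet can normally post within one grace
# period, i.e. how many chances it gets before being considered unhealthy.
updates_within_grace = NODE_MONITOR_GRACE_PERIOD // NODE_STATUS_UPDATE_FREQUENCY
print(updates_within_grace)  # 4
```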
## Failure

Kubelet will make `nodeStatusUpdateRetry` post attempts. Currently
`nodeStatusUpdateRetry` is a constant set to 5 in
[kubelet.go](https://github.com/kubernetes/kubernetes/blob/release-1.5/pkg/kubelet/kubelet.go#L102).
Kubelet tries to update the status in the
[tryUpdateNodeStatus](https://github.com/kubernetes/kubernetes/blob/release-1.5/pkg/kubelet/kubelet_node_status.go#L312)
function. Kubelet uses Golang's `http.Client`, but with no timeout
specified. Thus there may be glitches when the API Server is overloaded while
the TCP connection is being established.

So, there will be `nodeStatusUpdateRetry` * `--node-status-update-frequency`
attempts to set the status of a node.

At the same time, the Kubernetes controller manager will try to check
`nodeStatusUpdateRetry` times every `--node-monitor-period`. After
`--node-monitor-grace-period` it will consider the node unhealthy. Pods will then be rescheduled based on the
[Taint Based Eviction](https://kubernetes.io/docs/concepts/scheduling-eviction/taint-and-toleration/#taint-based-evictions)
timers that you set on them individually, or the API Server's global timers: `--default-not-ready-toleration-seconds` and
`--default-unreachable-toleration-seconds`.

Kube-proxy has a watcher over the API. Once pods are evicted, kube-proxy will
notice and update the iptables rules of the node, removing the endpoints from
services so that pods from the failed node are no longer accessible.
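The two stages described above can be summed up in a hypothetical helper (a toy model of the worst case; it ignores the latency effects discussed earlier):

```python
def seconds_until_eviction(node_monitor_grace_period: int,
                           toleration_seconds: int) -> int:
    """Rough worst-case delay from a node failing until its pods are evicted:
    the controller manager first waits out the grace period before marking
    the node unhealthy, then the taint toleration timer runs before pods
    are actually evicted."""
    return node_monitor_grace_period + toleration_seconds

# With the upstream defaults (40s grace period, 300s toleration):
print(seconds_until_eviction(40, 300))  # 340
```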
## Recommendations for different cases

## Fast Update and Fast Reaction

Let's set `--node-status-update-frequency` to **4s** (10s is the default),
`--node-monitor-period` to **2s** (5s is the default),
`--node-monitor-grace-period` to **20s** (40s is the default), and
`--default-not-ready-toleration-seconds` and `--default-unreachable-toleration-seconds` to **30**
(300 seconds is the default). Note that these two values must be plain integers
representing the number of seconds (suffixes such as "s" or "m" are not accepted).

In such a scenario, pods will be evicted in **50s**: the node will be
considered down after **20s**, and `--default-not-ready-toleration-seconds` or
`--default-unreachable-toleration-seconds` kick in after **30s** more. However, this scenario creates an overhead on
etcd, as every node will try to update its status every 4 seconds.

If the environment has 1000 nodes, there will be 15000 node updates per
minute, which may require large etcd containers or even dedicated nodes for etcd.
> If we calculate the number of tries, the division gives 5, but in reality
> there will be from 3 to 5 tries, each with `nodeStatusUpdateRetry` attempts.
> The total number of attempts will vary from 15 to 25 due to the latency of
> all components.
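The etcd load figure above follows from simple arithmetic; a hypothetical helper makes the calculation explicit (the function name is an illustration, not a real API):

```python
def status_updates_per_minute(nodes: int, update_frequency_s: int) -> int:
    # Every node posts one status update per update_frequency_s seconds.
    return nodes * 60 // update_frequency_s

# Fast-update scenario: 1000 nodes posting every 4 seconds.
print(status_updates_per_minute(1000, 4))   # 15000
# Default frequency (10s) for comparison.
print(status_updates_per_minute(1000, 10))  # 6000
```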
## Medium Update and Average Reaction

Let's set `--node-status-update-frequency` to **20s**,
`--node-monitor-grace-period` to **2m**, and `--default-not-ready-toleration-seconds` and
`--default-unreachable-toleration-seconds` to **60**.

In that case, Kubelet will try to update the status every 20s, so there will be
6 * 5 = 30 attempts before the Kubernetes controller manager considers the node
unhealthy. After 1m more it will evict all pods. The total time before the
eviction process starts will be 3m.

Such a scenario is good for medium environments, as 1000 nodes will produce 3000
etcd updates per minute.
> In reality, there will be from 4 to 6 node update tries. The total number of
> attempts will vary from 20 to 30.
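The medium-scenario numbers can be checked with the same kind of toy arithmetic as before (the constants restate the flag values chosen in this scenario):

```python
NODE_STATUS_UPDATE_FREQUENCY = 20   # seconds: --node-status-update-frequency
NODE_MONITOR_GRACE_PERIOD = 120     # seconds (2m): --node-monitor-grace-period
TOLERATION_SECONDS = 60             # both default toleration flags
NODE_STATUS_UPDATE_RETRY = 5        # constant in kubelet.go

tries = NODE_MONITOR_GRACE_PERIOD // NODE_STATUS_UPDATE_FREQUENCY  # 6 update tries
attempts = tries * NODE_STATUS_UPDATE_RETRY                        # 30 post attempts
total_seconds = NODE_MONITOR_GRACE_PERIOD + TOLERATION_SECONDS     # 180s, i.e. 3m
print(tries, attempts, total_seconds)  # 6 30 180
```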
## Low Update and Slow Reaction

Let's set `--node-status-update-frequency` to **1m**,
`--node-monitor-grace-period` to **5m**, and `--default-not-ready-toleration-seconds` and
`--default-unreachable-toleration-seconds` to **60**. In this scenario, every Kubelet will try to update the status
every minute. There will be 5 * 5 = 25 attempts before the node is considered unhealthy. After 5m,
the Kubernetes controller manager will set the unhealthy status. This means that pods
will be evicted 1m after the node is marked unhealthy (6m in total).

> In reality, there will be from 3 to 5 tries. The total number of attempts will
> vary from 15 to 25.

There can be different combinations, such as Fast Update with Slow Reaction, to
satisfy specific cases.