# Overview

Distributed systems such as Kubernetes are designed to be resilient to
failures. More details about Kubernetes High-Availability (HA) may be found at
[Building High-Availability Clusters](https://kubernetes.io/docs/admin/high-availability/).
To keep the view simple, most HA details are skipped here and only the
Kubelet <-> Controller Manager communication is described.

By default the normal behavior looks like:

1. Kubelet updates its status to the apiserver periodically, as specified by
   `--node-status-update-frequency`. The default value is **10s**.
2. Kubernetes controller manager checks the statuses of Kubelets every
   `--node-monitor-period`. The default value is **5s**.
3. If the status was updated within `--node-monitor-grace-period`,
   Kubernetes controller manager considers the Kubelet healthy. The
   default value is **40s**. A minimal sketch of this check follows the
   note below.
> Kubernetes controller manager and Kubelets work asynchronously. It means that
> the delay may include any network latency, API Server latency, etcd latency,
> latency caused by load on the master nodes and so on. So if
> `--node-status-update-frequency` is set to 5s, in reality it may appear in
> etcd in 6-7 seconds or even longer when etcd cannot commit data to quorum
> nodes.
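
To make the steps above concrete, here is a minimal sketch in Go of the
decision rule the controller manager applies. The constant and function names
are invented for illustration and are not taken from the Kubernetes source;
they only model the "last update within the grace period" check.

```go
package main

import (
	"fmt"
	"time"
)

// Illustrative names only; these mirror the flag defaults described above.
const (
	nodeStatusUpdateFrequency = 10 * time.Second // --node-status-update-frequency (Kubelet)
	nodeMonitorPeriod         = 5 * time.Second  // --node-monitor-period (controller manager)
	nodeMonitorGracePeriod    = 40 * time.Second // --node-monitor-grace-period (controller manager)
)

// nodeHealthy models step 3: a node is considered healthy as long as its
// last status update is within the grace period.
func nodeHealthy(lastStatusUpdate, now time.Time) bool {
	return now.Sub(lastStatusUpdate) <= nodeMonitorGracePeriod
}

func main() {
	lastUpdate := time.Now().Add(-45 * time.Second) // pretend the last heartbeat is 45s old
	fmt.Println("healthy:", nodeHealthy(lastUpdate, time.Now())) // false: 45s > 40s
}
```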
# Failure

Kubelet will make `nodeStatusUpdateRetry` post attempts. Currently
`nodeStatusUpdateRetry` is a constant set to 5 in
[kubelet.go](https://github.com/kubernetes/kubernetes/blob/release-1.5/pkg/kubelet/kubelet.go#L102).
Kubelet tries to update the status in the
[tryUpdateNodeStatus](https://github.com/kubernetes/kubernetes/blob/release-1.5/pkg/kubelet/kubelet_node_status.go#L312)
function. Kubelet uses Go's `http.Client`, but no timeout is specified, so
there may be glitches when the API Server is overloaded while the TCP
connection is being established.

So, Kubelet makes up to `nodeStatusUpdateRetry` attempts every
`--node-status-update-frequency` to set the status of the node.
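
The retry behavior can be pictured as a simple loop. The sketch below only
illustrates the shape of the loop around `tryUpdateNodeStatus`; it is not the
actual Kubelet code, and `postNodeStatus` is a hypothetical stand-in for the
call to the API Server.

```go
package main

import (
	"errors"
	"fmt"
)

const nodeStatusUpdateRetry = 5 // same value as the constant in kubelet.go

// postNodeStatus is a hypothetical stand-in for posting the node status to
// the API Server; here it always fails so the retry path is visible.
func postNodeStatus() error {
	return errors.New("apiserver unreachable")
}

// updateNodeStatus retries the post up to nodeStatusUpdateRetry times before
// giving up until the next --node-status-update-frequency tick.
func updateNodeStatus() error {
	for i := 0; i < nodeStatusUpdateRetry; i++ {
		if err := postNodeStatus(); err == nil {
			return nil
		}
	}
	return fmt.Errorf("update node status exceeds retry count %d", nodeStatusUpdateRetry)
}

func main() {
	fmt.Println(updateNodeStatus())
}
```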
At the same time, Kubernetes controller manager will try to check
`nodeStatusUpdateRetry` times every `--node-monitor-period`. After
`--node-monitor-grace-period` it will consider the node unhealthy and will
remove its pods based on `--pod-eviction-timeout`.

Kube proxy has a watcher over the API. Once pods are evicted, Kube proxy will
notice and will update the iptables of the node, removing the endpoints from
services so that pods from the failed node are not accessible anymore.
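
The watch mechanism itself can be sketched with client-go. The snippet below
is a generic Endpoints watch for illustration, assuming a recent client-go
version and in-cluster credentials; it is not kube-proxy's actual
implementation.

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	// Assumes the program runs inside a cluster; error handling is minimal.
	config, err := rest.InClusterConfig()
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	// Watch Endpoints in the default namespace; kube-proxy reacts to events
	// like these to keep the node's iptables rules in sync.
	w, err := clientset.CoreV1().Endpoints("default").Watch(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for event := range w.ResultChan() {
		fmt.Println("endpoints event:", event.Type)
	}
}
```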
# Recommendations for different cases

## Fast Update and Fast Reaction

Suppose `--node-status-update-frequency` is set to **4s** (10s is default),
`--node-monitor-period` to **2s** (5s is default),
`--node-monitor-grace-period` to **20s** (40s is default), and
`--pod-eviction-timeout` to **30s** (5m is default).
In this scenario, pods will be evicted in **50s** because the node will be
considered down after **20s**, and `--pod-eviction-timeout` occurs after
**30s** more. However, this scenario creates an overhead on etcd, as every node
will try to update its status every 4 seconds.

If the environment has 1000 nodes, there will be 15000 node updates per
minute, which may require large etcd containers or even dedicated nodes for
etcd.
> If we calculate the number of tries, the division will give 5, but in reality
> it will be from 3 to 5 tries, each with `nodeStatusUpdateRetry` attempts. The
> total number of attempts will vary from 15 to 25 due to latency of all
> components.
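
The numbers for this scenario can be reproduced with a little arithmetic; the
sketch below simply restates the calculation given above.

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// Fast Update and Fast Reaction settings.
	nodeStatusUpdateFrequency := 4 * time.Second
	nodeMonitorGracePeriod := 20 * time.Second
	podEvictionTimeout := 30 * time.Second
	nodes := 1000

	// Worst-case delay from the last successful status update to eviction.
	timeToEviction := nodeMonitorGracePeriod + podEvictionTimeout
	fmt.Println("time to eviction:", timeToEviction) // 50s

	// etcd load: one status write per node per update period.
	updatesPerMinute := nodes * int(time.Minute/nodeStatusUpdateFrequency)
	fmt.Println("etcd updates per minute:", updatesPerMinute) // 15000
}
```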
## Medium Update and Average Reaction

Let's set `--node-status-update-frequency` to **20s**,
`--node-monitor-grace-period` to **2m** and `--pod-eviction-timeout` to **1m**.
In that case, Kubelet will try to update the status every 20s. So, it will be
6 * 5 = 30 attempts before Kubernetes controller manager considers the node
unhealthy. After 1m more it will evict all pods. The total time before the
eviction process starts will be 3m.

Such a scenario is good for medium environments, as 1000 nodes will produce
3000 etcd updates per minute.

> In reality, there will be from 4 to 6 node update tries. The total number
> of attempts will vary from 20 to 30.
## Low Update and Slow Reaction

Let's set `--node-status-update-frequency` to **1m**,
`--node-monitor-grace-period` to **5m** and `--pod-eviction-timeout`
to **1m**. In this scenario, every Kubelet will try to update the status every
minute. There will be 5 * 5 = 25 attempts before the node is considered
unhealthy. After 5m, Kubernetes controller manager will set the unhealthy
status, and pods will be evicted 1m after the node is marked unhealthy
(6m in total).

> In reality, there will be from 3 to 5 tries. The total number of attempts
> will vary from 15 to 25.
Different combinations, such as Fast Update with Slow Reaction, can be used to
satisfy specific cases.
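
As a rough aid for exploring such combinations, here is a hedged sketch that
applies the same arithmetic as the scenarios above to arbitrary values. The
`scenario` type and its method names are invented for illustration; the
example evaluates a "Fast Update with Slow Reaction" mix.

```go
package main

import (
	"fmt"
	"time"
)

// scenario bundles the flags discussed in this document.
type scenario struct {
	statusUpdateFrequency time.Duration // --node-status-update-frequency
	monitorGracePeriod    time.Duration // --node-monitor-grace-period
	podEvictionTimeout    time.Duration // --pod-eviction-timeout
}

// timeToEviction approximates the delay between the last successful status
// update and pod eviction.
func (s scenario) timeToEviction() time.Duration {
	return s.monitorGracePeriod + s.podEvictionTimeout
}

// etcdUpdatesPerMinute approximates the status-write load a cluster of the
// given size places on etcd.
func (s scenario) etcdUpdatesPerMinute(nodes int) int {
	return nodes * int(time.Minute/s.statusUpdateFrequency)
}

func main() {
	// Example: Fast Update with Slow Reaction.
	s := scenario{
		statusUpdateFrequency: 4 * time.Second,
		monitorGracePeriod:    5 * time.Minute,
		podEvictionTimeout:    1 * time.Minute,
	}
	fmt.Println("time to eviction:", s.timeToEviction())                          // 6m0s
	fmt.Println("etcd updates/min for 1000 nodes:", s.etcdUpdatesPerMinute(1000)) // 15000
}
```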