# Adding/replacing a node

Modified from [comments in #3471](https://github.com/kubernetes-sigs/kubespray/issues/3471#issuecomment-530036084)

## Limitation: Removal of first kube-master and etcd-master

Currently you can't remove the first node in your kube-master and etcd-master list. If you still want to remove this node you have to:

### 1) Change order of current masters

Modify the order of your master list by pushing your first entry to any other position, e.g. if you want to remove `node-1` from the following example:
```yaml
children:
  kube-master:
    hosts:
      node-1:
      node-2:
      node-3:
  kube-node:
    hosts:
      node-1:
      node-2:
      node-3:
  etcd:
    hosts:
      node-1:
      node-2:
      node-3:
```
Change your inventory to:

```yaml
children:
  kube-master:
    hosts:
      node-2:
      node-3:
      node-1:
  kube-node:
    hosts:
      node-2:
      node-3:
      node-1:
  etcd:
    hosts:
      node-2:
      node-3:
      node-1:
```
### 2) Upgrade the cluster

Run `upgrade-cluster.yml` or `cluster.yml`. Now you are good to go on with the removal.
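A minimal invocation might look like the following sketch; the inventory path and the `-b` (become) flag are assumptions that depend on your setup:

```sh
# hypothetical inventory path -- adjust to your own
ansible-playbook -i inventory/mycluster/hosts.yml upgrade-cluster.yml -b
```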
## Adding/replacing a worker node

This should be the easiest.

### 1) Add new node to the inventory

### 2) Run `scale.yml`

You can use `--limit=NODE_NAME` to limit Kubespray to avoid disturbing other nodes in the cluster.

Before using `--limit`, run the `facts.yml` playbook without the limit to refresh the facts cache for all nodes.
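For example, with a hypothetical inventory path and node name (the location of `facts.yml` may differ between Kubespray versions):

```sh
# refresh the facts cache for all nodes first (no --limit here)
ansible-playbook -i inventory/mycluster/hosts.yml facts.yml
# then scale, restricted to the new node
ansible-playbook -i inventory/mycluster/hosts.yml scale.yml -b --limit=node-4
```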
### 3) Remove an old node with remove-node.yml

With the old node still in the inventory, run `remove-node.yml`. You need to pass `-e node=NODE_NAME` to the playbook to limit the execution to the node being removed.

If the node you want to remove is not online, you should add `reset_nodes=false` to your extra-vars: `-e "node=NODE_NAME reset_nodes=false"`.

Use this flag even when you remove other types of nodes, such as master or etcd nodes.
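A sketch of both cases, with a hypothetical node name and inventory path:

```sh
# remove a node that is still reachable
ansible-playbook -i inventory/mycluster/hosts.yml remove-node.yml -b -e node=node-4
# remove a node that is offline, skipping the reset tasks
ansible-playbook -i inventory/mycluster/hosts.yml remove-node.yml -b -e "node=node-4 reset_nodes=false"
```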
### 4) Remove the node from the inventory

That's it.
## Adding/replacing a master node

### 1) Run `cluster.yml`

Append the new host to the inventory and run `cluster.yml`. You can NOT use `scale.yml` for that.
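For instance, assuming the same hypothetical inventory path as above:

```sh
# the new master must already be present in the inventory
ansible-playbook -i inventory/mycluster/hosts.yml cluster.yml -b
```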
### 2) Restart kube-system/nginx-proxy

On all hosts, restart the nginx-proxy pod. This pod is a local proxy for the apiserver. Kubespray will update its static config, but it needs to be restarted in order to reload it.
```sh
# run on every host
docker ps | grep k8s_nginx-proxy_nginx-proxy | awk '{print $1}' | xargs docker restart
```
### 3) Remove old master nodes

With the old node still in the inventory, run `remove-node.yml`. You need to pass `-e node=NODE_NAME` to the playbook to limit the execution to the node being removed.

If the node you want to remove is not online, you should add `reset_nodes=false` to your extra-vars.
## Adding an etcd node

You need to make sure there is always an odd number of etcd nodes in the cluster. Because of that, this is always either a replacement or a scale-up operation: either add two new nodes or remove an old one.

### 1) Add the new node running cluster.yml

Update the inventory and run `cluster.yml` passing `--limit=etcd,kube-master -e ignore_assert_errors=yes`.

If the node you want to add as an etcd node is already a worker or master node in your cluster, you have to remove it first using `remove-node.yml`.

Run `upgrade-cluster.yml` also passing `--limit=etcd,kube-master -e ignore_assert_errors=yes`. This is necessary to update all etcd configuration in the cluster.

At this point, you will have an even number of nodes.

Everything should still be working, and you should only have problems if the cluster decides to elect a new etcd leader before you remove a node. Even so, running applications should continue to be available.

If you add multiple etcd nodes in one run, you might want to append `-e etcd_retries=10` to increase the number of retries between each etcd node join. Otherwise the etcd cluster might still be processing the first join and fail on subsequent nodes. `etcd_retries=10` might work to join 3 new nodes.
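Putting both runs together, a sketch with an assumed inventory path (`etcd_retries` is only needed when joining several nodes at once):

```sh
# join the new etcd node(s)
ansible-playbook -i inventory/mycluster/hosts.yml cluster.yml -b \
  --limit=etcd,kube-master -e ignore_assert_errors=yes -e etcd_retries=10
# then refresh the etcd configuration across the cluster
ansible-playbook -i inventory/mycluster/hosts.yml upgrade-cluster.yml -b \
  --limit=etcd,kube-master -e ignore_assert_errors=yes
```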
## Removing an etcd node

### 1) Remove old etcd members from the cluster runtime

Acquire a shell prompt into one of the etcd containers and use etcdctl to remove the old member. Use an etcd master that will not be removed for that.
```sh
# list all members
etcdctl member list

# run remove for each member you want to pass to remove-node.yml in step 2
# careful! if you remove the wrong member you will be in trouble
etcdctl member remove MEMBER_ID

# wait until you no longer get a 'Failed' output from
etcdctl member list

# note: these command lines are actually much longer if you are not inside
# an etcd container, since you need to pass all certificates to etcdctl
```
You can get into an etcd container by running `docker exec -it $(docker ps --filter "name=etcd" --format "{{.ID}}") sh` on one of the etcd masters.
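If you prefer to run etcdctl from the host instead, you have to pass the certificates explicitly. The endpoint and certificate paths below follow Kubespray's usual layout but are assumptions; check `/etc/ssl/etcd/ssl/` on your etcd masters for the real file names:

```sh
# hypothetical host name (node-2) in the cert file names -- substitute your own
export ETCDCTL_API=3
etcdctl --endpoints=https://127.0.0.1:2379 \
  --cacert=/etc/ssl/etcd/ssl/ca.pem \
  --cert=/etc/ssl/etcd/ssl/admin-node-2.pem \
  --key=/etc/ssl/etcd/ssl/admin-node-2-key.pem \
  member list
```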
### 2) Remove an old etcd node

With the node still in the inventory, run `remove-node.yml` passing `-e node=NODE_NAME` as the name of the node that should be removed.

If the node you want to remove is not online, you should add `reset_nodes=false` to your extra-vars.

### 3) Make sure only remaining nodes are in your inventory

Remove `NODE_NAME` from your inventory file.

### 4) Update kubernetes and network configuration files with the valid list of etcd members

Run `cluster.yml` to regenerate the configuration files on all remaining nodes.

### 5) Shutdown the old instance

That's it.