# Upgrading Kubernetes in Kubespray

Kubespray handles upgrades the same way it handles initial deployment. That is to
say that each component is laid down in a fixed order.

You can also individually control versions of components by explicitly defining their
versions. Here are all version vars for each component:

* docker_version
* kube_version
* etcd_version
* calico_version
* calico_cni_version
* weave_version
* flannel_version
* kubedns_version

:warning: [Attempting to upgrade from an older release straight to the latest release is unsupported and likely to break something](https://github.com/kubernetes-sigs/kubespray/issues/3849#issuecomment-451386515) :warning:

See [Multiple Upgrades](#multiple-upgrades) for how to upgrade from older releases to the latest release.
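If you would rather not pass `-e` overrides on every run, one option is to pin the version vars in your inventory's group vars instead. A minimal sketch, assuming the sample inventory layout (the file path and the version values below are illustrative, not recommendations):

```shell
# Append illustrative version pins to the sample inventory's cluster vars.
# Path and values are assumptions -- adjust to your inventory and kubespray tag.
mkdir -p inventory/sample/group_vars
cat >> inventory/sample/group_vars/k8s-cluster.yml <<'EOF'
kube_version: v1.4.6
etcd_version: v3.0.17
EOF
```

With the vars pinned, a plain `ansible-playbook cluster.yml -i inventory/sample/hosts.ini` run targets those versions without extra flags.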
## Unsafe upgrade example

If you wanted to upgrade just kube_version from v1.4.3 to v1.4.6, you could
deploy the following way:

```ShellSession
ansible-playbook cluster.yml -i inventory/sample/hosts.ini -e kube_version=v1.4.3
```

And then repeat with v1.4.6 as kube_version:

```ShellSession
ansible-playbook cluster.yml -i inventory/sample/hosts.ini -e kube_version=v1.4.6
```
## Graceful upgrade

Kubespray also supports cordoning, draining and uncordoning of nodes when performing
a cluster upgrade. There is a separate playbook used for this purpose. It is
important to note that upgrade-cluster.yml can only be used for upgrading an
existing cluster. That means there must be at least 1 kube-master already
deployed.

```ShellSession
ansible-playbook upgrade-cluster.yml -b -i inventory/sample/hosts.ini -e kube_version=v1.6.0
```

After a successful upgrade, the Server Version should be updated:

```ShellSession
$ kubectl version
Client Version: version.Info{Major:"1", Minor:"6", GitVersion:"v1.6.0", GitCommit:"fff5156092b56e6bd60fff75aad4dc9de6b6ef37", GitTreeState:"clean", BuildDate:"2017-03-28T19:15:41Z", GoVersion:"go1.8", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"6", GitVersion:"v1.6.0+coreos.0", GitCommit:"8031716957d697332f9234ddf85febb07ac6c3e3", GitTreeState:"clean", BuildDate:"2017-03-29T04:33:09Z", GoVersion:"go1.7.5", Compiler:"gc", Platform:"linux/amd64"}
```
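If an upgrade run is interrupted mid-drain, a node can be left cordoned. A small sketch for spotting and fixing that by hand (`KUBECTL` defaults to a dry-run `echo` here so the commands print without needing a cluster; the node name is a placeholder):

```shell
# Dry-run by default; set KUBECTL=kubectl to run against a real cluster.
KUBECTL=${KUBECTL:-echo kubectl}
# A cordoned node shows SchedulingDisabled in the STATUS column.
$KUBECTL get nodes --no-headers
# Manually uncordon any node an interrupted playbook run left cordoned.
$KUBECTL uncordon apollo   # "apollo" is a placeholder node name
```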
## Multiple upgrades

:warning: [Do not skip releases when upgrading--upgrade by one tag at a time.](https://github.com/kubernetes-sigs/kubespray/issues/3849#issuecomment-451386515) :warning:

For instance, if you're on v2.6.0, then check out v2.7.0, run the upgrade, check out the next tag, and run the next upgrade, etc.

Assuming you don't explicitly define a kubernetes version in your k8s-cluster.yml, you simply check out the next tag and run the upgrade-cluster.yml playbook.

* If you do define kubernetes version in your inventory (e.g. group_vars/k8s-cluster.yml) then either make sure to update it before running upgrade-cluster, or specify the new version you're upgrading to: `ansible-playbook -i inventory/mycluster/hosts.ini -b upgrade-cluster.yml -e kube_version=v1.11.3`

Otherwise, the upgrade will leave your cluster at the same k8s version defined in your inventory vars.
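The per-tag routine can be sketched as a loop. This is an illustration, not a supported automation: the tag list below is an example, and you should verify cluster health (e.g. `kubectl get node`) between iterations rather than running unattended:

```shell
# Walk release tags one at a time; never skip a tag.
# TAGS is illustrative -- list the actual tags between your current and target release.
TAGS="v2.7.0 v2.8.0 v2.8.5 v2.9.0 v2.10.0"
RUN=${RUN:-echo}   # dry-run by default; set RUN= (empty) to really execute
for tag in $TAGS; do
  $RUN git checkout "$tag"
  $RUN pip3 install -r requirements.txt   # requirements can change between tags (may need sudo)
  $RUN ansible-playbook -i inventory/mycluster/hosts.ini -b upgrade-cluster.yml
done
```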
The below example shows taking a cluster that was set up for v2.6.0 up to v2.10.0.

```ShellSession
$ kubectl get node
NAME      STATUS   ROLES         AGE   VERSION
apollo    Ready    master,node   1h    v1.10.4
boomer    Ready    master,node   42m   v1.10.4
caprica   Ready    master,node   42m   v1.10.4
$ git describe --tags
v2.6.0
$ git tag
...
v2.6.0
v2.7.0
v2.8.0
v2.8.1
v2.8.2
...
$ git checkout v2.7.0
Previous HEAD position was 8b3ce6e4 bump upgrade tests to v2.5.0 commit (#3087)
HEAD is now at 05dabb7e Fix Bionic networking restart error #3430 (#3431)
# NOTE: May need to sudo pip3 install -r requirements.txt when upgrading.
$ ansible-playbook -i inventory/mycluster/hosts.ini -b upgrade-cluster.yml
...
$ kubectl get node
NAME      STATUS   ROLES         AGE   VERSION
apollo    Ready    master,node   1h    v1.11.3
boomer    Ready    master,node   1h    v1.11.3
caprica   Ready    master,node   1h    v1.11.3
$ git checkout v2.8.0
Previous HEAD position was 05dabb7e Fix Bionic networking restart error #3430 (#3431)
HEAD is now at 9051aa52 Fix ubuntu-contiv test failed (#3808)
```
:info: NOTE: Review changes between the sample inventory and your inventory when upgrading versions. :info:

There are deprecations between some versions that mean you can't just upgrade straight from 2.7.0 to 2.8.0 if you started with the sample inventory.
In this case, I set "kubeadm_enabled" to false, knowing that it is deprecated and removed by 2.9.0, to delay converting the cluster to kubeadm as long as I could.
```ShellSession
$ ansible-playbook -i inventory/mycluster/hosts.ini -b upgrade-cluster.yml
...
"msg": "DEPRECATION: non-kubeadm deployment is deprecated from v2.9. Will be removed in next release."
...
Are you sure you want to deploy cluster using the deprecated non-kubeadm mode. (output is hidden):
yes
...
$ kubectl get node
NAME      STATUS   ROLES         AGE    VERSION
apollo    Ready    master,node   114m   v1.12.3
boomer    Ready    master,node   114m   v1.12.3
caprica   Ready    master,node   114m   v1.12.3
$ git checkout v2.8.1
Previous HEAD position was 9051aa52 Fix ubuntu-contiv test failed (#3808)
HEAD is now at 2ac1c756 More Feature/2.8 backports for 2.8.1 (#3911)
$ ansible-playbook -i inventory/mycluster/hosts.ini -b upgrade-cluster.yml
...
"msg": "DEPRECATION: non-kubeadm deployment is deprecated from v2.9. Will be removed in next release."
...
Are you sure you want to deploy cluster using the deprecated non-kubeadm mode. (output is hidden):
yes
...
$ kubectl get node
NAME      STATUS   ROLES         AGE     VERSION
apollo    Ready    master,node   2h36m   v1.12.4
boomer    Ready    master,node   2h36m   v1.12.4
caprica   Ready    master,node   2h36m   v1.12.4
$ git checkout v2.8.2
Previous HEAD position was 2ac1c756 More Feature/2.8 backports for 2.8.1 (#3911)
HEAD is now at 4167807f Upgrade to 1.12.5 (#4066)
$ ansible-playbook -i inventory/mycluster/hosts.ini -b upgrade-cluster.yml
...
"msg": "DEPRECATION: non-kubeadm deployment is deprecated from v2.9. Will be removed in next release."
...
Are you sure you want to deploy cluster using the deprecated non-kubeadm mode. (output is hidden):
yes
...
$ kubectl get node
NAME      STATUS   ROLES         AGE    VERSION
apollo    Ready    master,node   3h3m   v1.12.5
boomer    Ready    master,node   3h3m   v1.12.5
caprica   Ready    master,node   3h3m   v1.12.5
$ git checkout v2.8.3
Previous HEAD position was 4167807f Upgrade to 1.12.5 (#4066)
HEAD is now at ea41fc5e backport cve-2019-5736 to release-2.8 (#4234)
$ ansible-playbook -i inventory/mycluster/hosts.ini -b upgrade-cluster.yml
...
"msg": "DEPRECATION: non-kubeadm deployment is deprecated from v2.9. Will be removed in next release."
...
Are you sure you want to deploy cluster using the deprecated non-kubeadm mode. (output is hidden):
yes
...
$ kubectl get node
NAME      STATUS   ROLES         AGE     VERSION
apollo    Ready    master,node   5h18m   v1.12.5
boomer    Ready    master,node   5h18m   v1.12.5
caprica   Ready    master,node   5h18m   v1.12.5
$ git checkout v2.8.4
Previous HEAD position was ea41fc5e backport cve-2019-5736 to release-2.8 (#4234)
HEAD is now at 3901480b go to k8s 1.12.7 (#4400)
$ ansible-playbook -i inventory/mycluster/hosts.ini -b upgrade-cluster.yml
...
"msg": "DEPRECATION: non-kubeadm deployment is deprecated from v2.9. Will be removed in next release."
...
Are you sure you want to deploy cluster using the deprecated non-kubeadm mode. (output is hidden):
yes
...
$ kubectl get node
NAME      STATUS   ROLES         AGE     VERSION
apollo    Ready    master,node   5h37m   v1.12.7
boomer    Ready    master,node   5h37m   v1.12.7
caprica   Ready    master,node   5h37m   v1.12.7
$ git checkout v2.8.5
Previous HEAD position was 3901480b go to k8s 1.12.7 (#4400)
HEAD is now at 6f97687d Release 2.8 robust san handling (#4478)
$ ansible-playbook -i inventory/mycluster/hosts.ini -b upgrade-cluster.yml
...
"msg": "DEPRECATION: non-kubeadm deployment is deprecated from v2.9. Will be removed in next release."
...
Are you sure you want to deploy cluster using the deprecated non-kubeadm mode. (output is hidden):
yes
...
$ kubectl get node
NAME      STATUS   ROLES         AGE     VERSION
apollo    Ready    master,node   5h45m   v1.12.7
boomer    Ready    master,node   5h45m   v1.12.7
caprica   Ready    master,node   5h45m   v1.12.7
$ git checkout v2.9.0
Previous HEAD position was 6f97687d Release 2.8 robust san handling (#4478)
HEAD is now at a4e65c7c Upgrade to Ansible >2.7.0 (#4471)
```
:warning: IMPORTANT: Some of the variable formats changed in the k8s-cluster.yml between 2.8.5 and 2.9.0 :warning:

If you do not keep your inventory copy up to date, **your upgrade will fail** and your first master will be left non-functional until fixed and re-run.

It is at this point the cluster was upgraded from non-kubeadm to kubeadm as per the deprecation warning.

```ShellSession
$ ansible-playbook -i inventory/mycluster/hosts.ini -b upgrade-cluster.yml
...
$ kubectl get node
NAME      STATUS   ROLES         AGE     VERSION
apollo    Ready    master,node   6h54m   v1.13.5
boomer    Ready    master,node   6h55m   v1.13.5
caprica   Ready    master,node   6h54m   v1.13.5
# Watch out: 2.10.0 is hiding between 2.1.2 and 2.2.0
$ git tag
...
v2.1.0
v2.1.1
v2.1.2
v2.10.0
v2.2.0
...
$ git checkout v2.10.0
Previous HEAD position was a4e65c7c Upgrade to Ansible >2.7.0 (#4471)
HEAD is now at dcd9c950 Add etcd role dependency on kube user to avoid etcd role failure when running scale.yml with a fresh node. (#3240) (#4479)
$ ansible-playbook -i inventory/mycluster/hosts.ini -b upgrade-cluster.yml
...
$ kubectl get node
NAME      STATUS   ROLES         AGE     VERSION
apollo    Ready    master,node   7h40m   v1.14.1
boomer    Ready    master,node   7h40m   v1.14.1
caprica   Ready    master,node   7h40m   v1.14.1
```
## Upgrade order

As mentioned above, components are upgraded in the order in which they were
installed in the Ansible playbook. The order of component installation is as
follows:

* Docker
* etcd
* kubelet and kube-proxy
* network_plugin (such as Calico or Weave)
* kube-apiserver, kube-scheduler, and kube-controller-manager
* Add-ons (such as KubeDNS)
## Upgrade considerations

Kubespray supports rotating certificates used for etcd and Kubernetes
components, but some manual steps may be required. If you have a pod that
requires use of a service token and is deployed in a namespace other than
`kube-system`, you will need to manually delete the affected pods after
rotating certificates. This is because all service account tokens are dependent
on the apiserver token that is used to generate them. When the certificate
rotates, all service account tokens must be rotated as well. During the
kubernetes-apps/rotate_tokens role, only pods in kube-system are destroyed and
recreated. All other invalidated service account tokens are cleaned up
automatically, but other pods are not deleted out of an abundance of caution
for impact to user deployed pods.
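For pods outside `kube-system` that mount service account tokens, the manual cleanup can look like the following sketch (dry-run by default via `echo`; the namespace list is a placeholder -- substitute the namespaces actually affected in your cluster):

```shell
# Restart pods outside kube-system so they re-mount fresh service account tokens.
# Dry-run by default; set KUBECTL=kubectl to execute for real.
KUBECTL=${KUBECTL:-echo kubectl}
for ns in default monitoring; do   # placeholder namespaces -- use your own
  $KUBECTL -n "$ns" delete pods --all
done
```

Deleting the pods forces their controllers (Deployments, DaemonSets, etc.) to recreate them with tokens signed by the new certificate.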
### Component-based upgrades

A deployer may want to upgrade specific components in order to minimize risk
or save time. This strategy is not covered by CI as of this writing, so it is
not guaranteed to work.

These commands are useful only for upgrading fully-deployed, healthy, existing
hosts. This will definitely not work for undeployed or partially deployed
hosts.

Upgrade docker:

```ShellSession
ansible-playbook -b -i inventory/sample/hosts.ini cluster.yml --tags=docker
```

Upgrade etcd:

```ShellSession
ansible-playbook -b -i inventory/sample/hosts.ini cluster.yml --tags=etcd
```

Upgrade vault:

```ShellSession
ansible-playbook -b -i inventory/sample/hosts.ini cluster.yml --tags=vault
```

Upgrade kubelet:

```ShellSession
ansible-playbook -b -i inventory/sample/hosts.ini cluster.yml --tags=node --skip-tags=k8s-gen-certs,k8s-gen-tokens
```

Upgrade Kubernetes master components:

```ShellSession
ansible-playbook -b -i inventory/sample/hosts.ini cluster.yml --tags=master
```

Upgrade network plugins:

```ShellSession
ansible-playbook -b -i inventory/sample/hosts.ini cluster.yml --tags=network
```

Upgrade all add-ons:

```ShellSession
ansible-playbook -b -i inventory/sample/hosts.ini cluster.yml --tags=apps
```

Upgrade just helm (assuming `helm_enabled` is true):

```ShellSession
ansible-playbook -b -i inventory/sample/hosts.ini cluster.yml --tags=helm
```