OpenStack
===============

To deploy Kubespray on [OpenStack](https://www.openstack.org/), uncomment the `cloud_provider` option in `group_vars/all.yml` and set it to `'openstack'`.

After that, make sure to source your OpenStack credentials as you would when using `nova-client` or `neutron-client`, e.g. with `source path/to/your/openstack-rc` or `. path/to/your/openstack-rc`.
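For example, to load the credentials and check that they work (the RC file path is a placeholder for your own):

```bash
# Load the OpenStack credentials into the current shell
source path/to/your/openstack-rc

# Optional sanity check: this should print a token if the credentials are valid
openstack token issue
```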
For those who prefer to pass the OpenStack CA certificate as a string, one can base64-encode the cacert file and store it in the variable `openstack_cacert`.
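A minimal sketch of how that could look, assuming the CA bundle lives at `/etc/ssl/certs/openstack-ca.pem` (the path is a placeholder):

```bash
# Base64-encode the CA certificate as a single line (-w0 disables wrapping in GNU coreutils)
base64 -w0 /etc/ssl/certs/openstack-ca.pem
# Copy the output into your group_vars, e.g.:
#   openstack_cacert: "<base64 string from the command above>"
```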
The next step is to make sure the hostnames in your `inventory` file are identical to your instance names in OpenStack.
Otherwise [cinder](https://wiki.openstack.org/wiki/Cinder) won't work as expected.
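As an illustration, a YAML inventory entry would use the same names as the OpenStack instances (the hostnames and addresses below are placeholders):

```yaml
# inventory/mycluster/hosts.yaml (excerpt) -- host names match the OpenStack instance names
all:
  hosts:
    k8s-1:
      ansible_host: 10.0.0.11
    k8s-2:
      ansible_host: 10.0.0.12
```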
Unless you are using Calico or kube-router, you can now run the playbook.
**Additional step needed when using Calico or kube-router:**

Being L3 CNIs, Calico and kube-router do not encapsulate all packets with the hosts' IP addresses. Instead, the packets are routed with the pods' IP addresses directly.

OpenStack will filter and drop all packets from IPs it does not know, to prevent spoofing.

In order to make L3 CNIs work on OpenStack, you need to tell OpenStack to allow pod packets by allowing the networks they use.

First you need the IDs of your OpenStack instances that will run Kubernetes:
```bash
openstack server list --project YOUR_PROJECT
+--------------------------------------+--------+----------------------------------+--------+-------------+
| ID                                   | Name   | Tenant ID                        | Status | Power State |
+--------------------------------------+--------+----------------------------------+--------+-------------+
| e1f48aad-df96-4bce-bf61-62ae12bf3f95 | k8s-1  | fba478440cb2444a9e5cf03717eb5d6f | ACTIVE | Running     |
| 725cd548-6ea3-426b-baaa-e7306d3c8052 | k8s-2  | fba478440cb2444a9e5cf03717eb5d6f | ACTIVE | Running     |
+--------------------------------------+--------+----------------------------------+--------+-------------+
```
Then you can use the instance IDs to find the connected [neutron](https://wiki.openstack.org/wiki/Neutron) ports (nowadays managed through the unified `openstack` CLI):
```bash
openstack port list -c id -c device_id --project YOUR_PROJECT
+--------------------------------------+--------------------------------------+
| id                                   | device_id                            |
+--------------------------------------+--------------------------------------+
| 5662a4e0-e646-47f0-bf88-d80fbd2d99ef | e1f48aad-df96-4bce-bf61-62ae12bf3f95 |
| e5ae2045-a1e1-4e99-9aac-4353889449a7 | 725cd548-6ea3-426b-baaa-e7306d3c8052 |
+--------------------------------------+--------------------------------------+
```
Given the port IDs on the left, you can set the two `allowed-address`(es) in OpenStack. Note that you have to allow both `kube_service_addresses` (default `10.233.0.0/18`) and `kube_pods_subnet` (default `10.233.64.0/18`).
```bash
# Allow the kube_service_addresses and kube_pods_subnet networks
openstack port set 5662a4e0-e646-47f0-bf88-d80fbd2d99ef --allowed-address ip-address=10.233.0.0/18 --allowed-address ip-address=10.233.64.0/18
openstack port set e5ae2045-a1e1-4e99-9aac-4353889449a7 --allowed-address ip-address=10.233.0.0/18 --allowed-address ip-address=10.233.64.0/18
```
If all the VMs in the tenant belong to the Kubespray deployment, you can apply the above to every port in one sweep:
```bash
openstack port list --device-owner=compute:nova -c ID -f value | xargs -tI@ openstack port set @ --allowed-address ip-address=10.233.0.0/18 --allowed-address ip-address=10.233.64.0/18
```
Now you can finally run the playbook.
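For example, a typical invocation looks like this (the inventory path is a placeholder for your own):

```bash
# Deploy the cluster; -b enables privilege escalation on the target hosts
ansible-playbook -i inventory/mycluster/hosts.yaml -b cluster.yml
```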
Upgrade from the in-tree to the external cloud provider
---------------

The in-tree cloud provider is deprecated and will be removed in a future version of Kubernetes. The target release for removing all remaining in-tree cloud providers is set to 1.21.

In Kubespray, the new external cloud provider is configured to use Octavia by default.
- Change the cloud provider from `cloud_provider: openstack` to the new external cloud provider:

```yaml
cloud_provider: external
external_cloud_provider: openstack
```
- Enable Cinder CSI:

```yaml
cinder_csi_enabled: true
```
- Enable topology support (optional). If your OpenStack provider has custom zone names, you can override the default "nova" zone by setting the variable `cinder_topology_zones`:

```yaml
cinder_topology: true
```
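For example (the zone names below are placeholders; adapt them to the zones your provider actually exposes):

```yaml
cinder_topology: true
# Override the default "nova" zone with the zones exposed by your provider
cinder_topology_zones:
  - zone-a
  - zone-b
```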
- If you are using OpenStack load balancer(s), replace `openstack_lbaas_subnet_id` with the new `external_openstack_lbaas_subnet_id`. **Note:** the new cloud provider uses Octavia instead of Neutron LBaaS by default! For example:
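```yaml
# Before (in-tree provider):
# openstack_lbaas_subnet_id: "<your-subnet-uuid>"
# After (external provider, Octavia by default):
external_openstack_lbaas_subnet_id: "<your-subnet-uuid>"
```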
- Enable 3 feature gates to allow migration of all volumes and storage classes (if you already have feature gates set, just add the 3 listed below):

```yaml
kube_feature_gates:
- CSIMigration=true
- CSIMigrationOpenStack=true
- ExpandCSIVolumes=true
```
- Run the `upgrade-cluster.yml` playbook.
- Run the cleanup playbook `extra_playbooks/migrate_openstack_provider.yml` (this will clean up all resources used by the old cloud provider). Example commands for both playbook runs follow this list.
- You can now remove the feature gates for volume migration. If you want to keep the ability to expand CSI volumes, you can leave the `ExpandCSIVolumes=true` feature gate in place.
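As a sketch of those last two playbook runs (the inventory path is a placeholder for your own):

```bash
# Switch the cluster to the external cloud provider
ansible-playbook -i inventory/mycluster/hosts.yaml -b upgrade-cluster.yml

# Remove resources left behind by the old in-tree provider
ansible-playbook -i inventory/mycluster/hosts.yaml -b extra_playbooks/migrate_openstack_provider.yml
```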