# Kubernetes on Exoscale with Terraform

Provision a Kubernetes cluster on [Exoscale](https://www.exoscale.com/) using Terraform and Kubespray.

## Overview

The setup looks like the following:
```text
                            Kubernetes cluster
                        +-----------------------+
+---------------+       |   +--------------+    |
|               |       |   | +--------------+  |
| API server LB +---------> | |              |  |
|               |       |   | | Master/etcd  |  |
+---------------+       |   | | node(s)      |  |
                        |   +-+              |  |
                        |     +--------------+  |
                        |           ^           |
                        |           |           |
                        |           v           |
+---------------+       |   +--------------+    |
|               |       |   | +--------------+  |
| Ingress LB    +---------> | |              |  |
|               |       |   | | Worker       |  |
+---------------+       |   | | node(s)      |  |
                        |   +-+              |  |
                        |     +--------------+  |
                        +-----------------------+
```
## Requirements

* Terraform 0.13.0 or newer

  *0.12 also works if you modify the provider block to include version and remove all `versions.tf` files*

## Quickstart

NOTE: *Assumes you are at the root of the kubespray repo.*
Copy the sample inventory for your cluster and copy the default terraform variables.

```bash
CLUSTER=my-exoscale-cluster
cp -r inventory/sample inventory/$CLUSTER
cp contrib/terraform/exoscale/default.tfvars inventory/$CLUSTER/
cd inventory/$CLUSTER
```
Edit `default.tfvars` to match your setup. You MUST, at the very least, change `ssh_public_keys`.

```bash
# Ensure $EDITOR points to your favorite editor, e.g., vim, emacs, VS Code, etc.
$EDITOR default.tfvars
```
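As a rough illustration of that edit, the file might start like the sketch below. All values are placeholders, not defaults: the zone name must be one your Exoscale account can use, and the CIDR ranges should be your own admin network.

```hcl
# Illustrative values only -- replace with your own.
prefix = "my-exoscale-cluster"
zone   = "ch-gva-2"            # any Exoscale zone you have access to

ssh_public_keys = [
  # Paste the contents of e.g. ~/.ssh/id_ed25519.pub here
  "ssh-ed25519 AAAA... operator@example.com",
]

ssh_whitelist        = ["198.51.100.0/24"]  # your admin network
api_server_whitelist = ["198.51.100.0/24"]
nodeport_whitelist   = ["198.51.100.0/24"]
```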
For authentication you can use the credentials file `~/.cloudstack.ini` or `./cloudstack.ini`.
The file should look something like this:

```ini
[cloudstack]
key = <API key>
secret = <API secret>
```

Follow the [Exoscale IAM Quick-start](https://community.exoscale.com/documentation/iam/quick-start/) to learn how to generate API keys.
### Encrypted credentials

To have the credentials encrypted at rest, you can use [sops](https://github.com/mozilla/sops) and only decrypt the credentials at runtime.

```bash
cat << EOF > cloudstack.ini
[cloudstack]
key =
secret =
EOF
sops --encrypt --in-place --pgp <PGP key fingerprint> cloudstack.ini
sops cloudstack.ini
```
Run terraform to create the infrastructure:

```bash
terraform init ../../contrib/terraform/exoscale
terraform apply -var-file default.tfvars ../../contrib/terraform/exoscale
```
If your cloudstack credentials file is encrypted using sops, run the following:

```bash
terraform init ../../contrib/terraform/exoscale
sops exec-file -no-fifo cloudstack.ini 'CLOUDSTACK_CONFIG={} terraform apply -var-file default.tfvars ../../contrib/terraform/exoscale'
```
You should now have an inventory file named `inventory.ini` that you can use with kubespray.
You can now copy your inventory file and use it with kubespray to set up a cluster.
You can run `terraform output` to find out the IP addresses of the nodes, as well as the control-plane and data-plane load balancers.

It is a good idea to check that you have basic SSH connectivity to the nodes. You can do that by running:

```bash
ansible -i inventory.ini -m ping all
```
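If you want to script further checks against the generated inventory, note that it is plain INI. A minimal sketch (Python, assuming the usual `name ansible_host=IP ...` host-line format that kubespray inventories use; `SAMPLE` below is a hypothetical file, not real output):

```python
# Extract node IPs from an inventory.ini-style file.
# Assumes host lines of the form "name ansible_host=IP ..."; adjust the
# pattern if your generated inventory uses different variable names.
import re

SAMPLE = """\
[all]
master-0 ansible_host=203.0.113.10
worker-0 ansible_host=203.0.113.20

[kube_control_plane]
master-0

[kube_node]
worker-0
"""

def parse_hosts(text):
    """Return {hostname: ip} for every line carrying an ansible_host var."""
    hosts = {}
    for line in text.splitlines():
        m = re.match(r"^(\S+)\s+.*ansible_host=(\S+)", line)
        if m:
            hosts[m.group(1)] = m.group(2)
    return hosts

print(parse_hosts(SAMPLE))
```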
Example to use this with the default sample inventory:

```bash
ansible-playbook -i inventory.ini ../../cluster.yml -b -v
```
## Teardown

The Kubernetes cluster cannot create any load balancers or disks; hence, teardown is as simple as running `terraform destroy`:

```bash
terraform destroy -var-file default.tfvars ../../contrib/terraform/exoscale
```
## Variables

### Required

* `ssh_public_keys`: List of public SSH keys to install on all machines
* `zone`: The zone where to run the cluster
* `machines`: Machines to provision. Key of this object will be used as the name of the machine
  * `node_type`: The role of this node *(master|worker)*
  * `size`: The size to use
  * `boot_disk`: The boot disk to use
    * `image_name`: Name of the image
    * `root_partition_size`: Size *(in GB)* for the root partition
    * `ceph_partition_size`: Size *(in GB)* for the partition for rook to use as ceph storage *(set to 0 to disable)*
    * `node_local_partition_size`: Size *(in GB)* for the partition for node-local-storage *(set to 0 to disable)*
* `ssh_whitelist`: List of IP ranges (CIDR) that will be allowed to SSH to the nodes
* `api_server_whitelist`: List of IP ranges (CIDR) that will be allowed to connect to the API server
* `nodeport_whitelist`: List of IP ranges (CIDR) that will be allowed to connect to the kubernetes nodes on ports 30000-32767 (kubernetes nodeports)
### Optional

* `prefix`: Prefix to use for all resources, required to be unique for all clusters in the same project *(Defaults to `default`)*

An example variables file can be found in `default.tfvars`.
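To show how the `machines` fields above nest, here is a sketch of one possible entry. The instance sizes and image name are illustrative only; check what your Exoscale zone actually offers before using them.

```hcl
# Illustrative machines map -- sizes and image name are examples.
machines = {
  "master-0" = {
    node_type = "master"
    size      = "Medium"
    boot_disk = {
      image_name                = "Linux Ubuntu 20.04 LTS 64-bit"
      root_partition_size       = 50
      ceph_partition_size       = 0   # 0 disables the ceph partition
      node_local_partition_size = 0   # 0 disables node-local storage
    }
  }
  "worker-0" = {
    node_type = "worker"
    size      = "Large"
    boot_disk = {
      image_name                = "Linux Ubuntu 20.04 LTS 64-bit"
      root_partition_size       = 50
      ceph_partition_size       = 0
      node_local_partition_size = 0
    }
  }
}
```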
## Known limitations

### Only single disk

Since Exoscale doesn't support mounting additional disks onto an instance, this script has the ability to create partitions for [Rook](https://rook.io/) and [node-local-storage](https://kubernetes.io/docs/concepts/storage/volumes/#local).

### No Kubernetes API

The current solution doesn't use the [Exoscale Kubernetes cloud controller](https://github.com/exoscale/exoscale-cloud-controller-manager).
This means that we need to set up an HTTP(S) load balancer in front of all workers and set the Ingress controller to DaemonSet mode.
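If you use kubespray's bundled ingress-nginx addon, that setup might be configured in `inventory/$CLUSTER/group_vars/k8s_cluster/addons.yml` along these lines. The variable names below are assumptions that may differ between kubespray releases, so verify them against your checkout:

```yaml
# Hypothetical addon settings -- check the exact variable names in your
# kubespray release's addons.yml before use.
ingress_nginx_enabled: true
ingress_nginx_host_network: true
```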