# Kubernetes on OpenStack with Terraform

Provision a Kubernetes cluster with [Terraform](https://www.terraform.io) on
OpenStack.

## Status

This will install a Kubernetes cluster on an OpenStack cloud. It should work on
most modern installs of OpenStack that support the basic services.

## Approach

The Terraform configuration inspects variables found in
[variables.tf](variables.tf) to create resources in your OpenStack cluster.
There is a [python script](../terraform.py) that reads the generated `.tfstate`
file to produce a dynamic inventory, which is consumed by the main Ansible
playbook to actually install Kubernetes and stand up the cluster.

### Networking

The configuration includes creating a private subnet with a router to the
external net. It will allocate floating IPs from a pool and assign them to the
hosts where that makes sense. You also have the option of creating bastion hosts
inside the private subnet to access the nodes there.

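As an illustration, the networking-related variables might be set in your `.tfvars` file along these lines (the pool name and external network UUID below are placeholders; use the values from your own OpenStack install):

```
network_name       = "example-network"
dns_nameservers    = ["8.8.8.8", "8.8.4.4"]
floatingip_pool    = "external"                               # your floating-ip pool
external_net       = "0aba16cf-6f08-4a3e-b0c4-6ecf8a43ca23"   # UUID of your external network
number_of_bastions = 1                                        # set to 0 to skip the bastion host
```
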
### Kubernetes Nodes

You can create many different Kubernetes topologies by setting the number of
different classes of hosts. For each class there are options for allocating
floating IP addresses or not.

- Master nodes with etcd
- Master nodes without etcd
- Standalone etcd hosts
- Kubernetes worker nodes

Note that the Ansible script will report an error if you wind up with an even
number of etcd instances, since that is not a valid configuration.

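As a sketch, a small highly available topology with three combined master/etcd nodes and two workers could be expressed with the count variables from [variables.tf](variables.tf), keeping the etcd count odd:

```
number_of_k8s_masters              = 3   # masters that also run etcd (odd number)
number_of_k8s_masters_no_etcd      = 0
number_of_etcd                     = 0   # no standalone etcd hosts
number_of_k8s_nodes                = 2   # workers with floating IPs
number_of_k8s_nodes_no_floating_ip = 0
```
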
### GlusterFS

The Terraform configuration supports provisioning of an optional GlusterFS
shared file system based on a separate set of VMs. To enable this, you need to
specify:

- the number of Gluster hosts
- the size of the non-ephemeral volumes to be attached to store the GlusterFS bricks
- other properties related to provisioning the hosts

Even if you are using Container Linux by CoreOS for your cluster, you will still
need the GlusterFS VMs to be based on either Debian- or RedHat-based images.
Container Linux by CoreOS cannot serve GlusterFS, but it can connect to it through
binaries available on hyperkube v1.4.3_coreos.0 or higher.

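For example, the GlusterFS-related variables could look like this (the image name and ssh user are placeholders and depend on what you have loaded into Glance):

```
number_of_gfs_nodes_no_floating_ip = 3
gfs_volume_size_in_gb              = 50
image_gfs                          = "ubuntu-16.04"   # Debian- or RedHat-based image
ssh_user_gfs                       = "ubuntu"
```
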
## Requirements

- [Install Terraform](https://www.terraform.io/intro/getting-started/install.html)
- [Install Ansible](http://docs.ansible.com/ansible/latest/intro_installation.html)
- you already have a suitable OS image in Glance
- you already have a floating-IP pool created
- you have security groups enabled
- you have a key pair generated that can be used to secure the new hosts

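If you want to sanity-check these prerequisites from the command line, something along these lines should work, assuming the `openstack` CLI is installed:

```bash
$ openstack image list                 # confirm a suitable OS image is in glance
$ openstack network list --external    # confirm the external network / floating-ip pool
$ ssh-keygen -t rsa -f ~/.ssh/id_rsa   # generate a key pair if you do not have one
```
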
## Module Architecture

The configuration is divided into three modules:

- Network
- IPs
- Compute

The main reason for splitting the configuration up in this way is to easily
accommodate situations where floating IPs are limited by a quota or if you have
any external references to the floating IPs (e.g. DNS) that would otherwise have
to be updated.

You can force your existing IPs by modifying the compute variables in
`kubespray.tf` as

```
k8s_master_fips = ["151.101.129.67"]
k8s_node_fips = ["151.101.129.68"]
```

## Terraform

Terraform will be used to provision all of the OpenStack resources. It is also
used to deploy and provision the software requirements.

### Prep

#### OpenStack

Ensure your OpenStack **Identity v2** credentials are loaded in environment
variables. This can be done by downloading a credentials `.rc` file from your
OpenStack dashboard and sourcing it:

```
$ source ~/.stackrc
```

Ensure that you have your OpenStack credentials loaded into Terraform
environment variables, likely via a command similar to:

```
$ echo Setting up Terraform creds && \
  export TF_VAR_username=${OS_USERNAME} && \
  export TF_VAR_password=${OS_PASSWORD} && \
  export TF_VAR_tenant=${OS_TENANT_NAME} && \
  export TF_VAR_auth_url=${OS_AUTH_URL}
```

### Terraform Variables

The construction of the cluster is driven by values found in
[variables.tf](variables.tf).

The best way to set these values is to create a file in the project's root
directory called something like `my-terraform-vars.tfvars`. Many of the
variables are obvious. Here is a summary of some of the more interesting
ones:

|Variable | Description |
|---------|-------------|
|`cluster_name` | All OpenStack resources will use the Terraform variable `cluster_name` (default `example`) in their name to make it easier to track. For example the first compute resource will be named `example-kubernetes-1`. |
|`network_name` | The name to be given to the internal network that will be generated |
|`dns_nameservers`| An array of DNS name servers to be used by hosts in the internal subnet. |
|`floatingip_pool` | Name of the pool from which floating IPs will be allocated |
|`external_net` | UUID of the external network that will be routed to |
|`flavor_k8s_master`, `flavor_k8s_node`, `flavor_etcd`, `flavor_bastion`, `flavor_gfs_node` | Flavor depends on your OpenStack installation; you can get available flavor IDs through `nova flavor-list` |
|`image`, `image_gfs` | Name of the image to use in provisioning the compute resources. Should already be loaded into Glance. |
|`ssh_user`, `ssh_user_gfs` | The username to ssh into the image with. This usually depends on the image you have selected |
|`public_key_path` | Path on your local workstation to the public key file you wish to use in creating the key pairs |
|`number_of_k8s_masters`, `number_of_k8s_masters_no_floating_ip` | Number of nodes that serve as both master and etcd. These can be provisioned with or without floating IP addresses |
|`number_of_k8s_masters_no_etcd`, `number_of_k8s_masters_no_floating_ip_no_etcd` | Number of nodes that serve as just master with no etcd. These can be provisioned with or without floating IP addresses |
|`number_of_etcd` | Number of pure etcd nodes |
|`number_of_k8s_nodes`, `number_of_k8s_nodes_no_floating_ip` | Kubernetes worker nodes. These can be provisioned with or without floating IP addresses. |
|`number_of_bastions` | Number of bastion hosts to create. Scripts assume this is really just zero or one |
|`number_of_gfs_nodes_no_floating_ip` | Number of Gluster servers to provision. |
|`gfs_volume_size_in_gb` | Size of the non-ephemeral volumes to be attached to store the GlusterFS bricks |

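Combined with the count and networking values shown earlier, a complete `my-terraform-vars.tfvars` might look something like the following (all values are illustrative and must be adapted to your cloud):

```
cluster_name      = "example"
public_key_path   = "~/.ssh/id_rsa.pub"
image             = "ubuntu-16.04"
ssh_user          = "ubuntu"
flavor_k8s_master = "3"                 # flavor ID from `nova flavor-list`
flavor_k8s_node   = "3"
```
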
## Initializing Terraform

Before Terraform can operate on your cluster you need to install the required
plugins. This is accomplished with the command:

```bash
$ terraform init contrib/terraform/openstack
```

## Provisioning Cluster with Terraform

You can apply the Terraform config to your cluster with the following command
issued from the project's root directory:

```bash
$ terraform apply -state=contrib/terraform/openstack/terraform.tfstate -var-file=my-terraform-vars.tfvars contrib/terraform/openstack
```

If you chose to create a bastion host, this script will create
`contrib/terraform/openstack/k8s-cluster.yml` with an ssh command for Ansible to
be able to access your machines, tunneling through the bastion's IP address. If
you want to manually handle the ssh tunneling to these machines, please delete
or move that file. If you want to use this, just leave it there, as Ansible will
pick it up automatically.

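For reference, the generated file carries the ssh proxy settings that let Ansible reach the private hosts through the bastion. Its content is roughly of the following form (illustrative only; the real file is written by Terraform):

```
ansible_ssh_common_args: "-o ProxyCommand='ssh -o StrictHostKeyChecking=no -W %h:%p [ssh-user]@[bastion-floating-ip]'"
```
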
## Destroying Cluster with Terraform

You can destroy a config deployed to your cluster with the following command
issued from the project's root directory:

```bash
$ terraform destroy -state=contrib/terraform/openstack/terraform.tfstate -var-file=my-terraform-vars.tfvars contrib/terraform/openstack
```

## Debugging Cluster Provisioning

You can enable debugging output from Terraform by setting
`OS_DEBUG` to 1 and `TF_LOG` to `DEBUG` before running the terraform command.

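For example:

```bash
$ export OS_DEBUG=1
$ export TF_LOG=DEBUG
$ terraform apply -state=contrib/terraform/openstack/terraform.tfstate -var-file=my-terraform-vars.tfvars contrib/terraform/openstack
```
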
# Running the Ansible Script

Ensure your local ssh-agent is running and your ssh key has been added. This
step is required by the Terraform provisioner:

```
$ eval $(ssh-agent -s)
$ ssh-add ~/.ssh/id_rsa
```

Make sure you can connect to the hosts:

```
$ ansible -i contrib/terraform/openstack/hosts -m ping all
example-k8s_node-1 | SUCCESS => {
    "changed": false,
    "ping": "pong"
}
example-etcd-1 | SUCCESS => {
    "changed": false,
    "ping": "pong"
}
example-k8s-master-1 | SUCCESS => {
    "changed": false,
    "ping": "pong"
}
```

If you are deploying a system that needs bootstrapping, like Container Linux by
CoreOS, these hosts might show a state of `FAILED` due to Container Linux by
CoreOS not having python. As long as the state is not `UNREACHABLE`, this is fine.

If it fails, try to connect manually via SSH; it could be something as simple as a stale host key.

## Configure Cluster variables

Edit `inventory/group_vars/all.yml`:

- Set variable **bootstrap_os** according to the selected image

```
# Valid bootstrap options (required): ubuntu, coreos, centos, none
bootstrap_os: coreos
```

- **bin_dir**

```
# Directory where the binaries will be installed
# Default:
# bin_dir: /usr/local/bin
# For Container Linux by CoreOS:
bin_dir: /opt/bin
```

- and **cloud_provider**

```
cloud_provider: openstack
```

Edit `inventory/group_vars/k8s-cluster.yml`:

- Set variable **kube_network_plugin** according to the selected networking plugin

```
# Choose network plugin (calico, weave or flannel)
# Can also be set to 'cloud', which lets the cloud provider setup appropriate routing
kube_network_plugin: flannel
```

> flannel works out-of-the-box
> calico requires allowing the service's and pod's subnets on the corresponding OpenStack Neutron ports

- Set variable **resolvconf_mode**

```
# Can be docker_dns, host_resolvconf or none
# Default:
# resolvconf_mode: docker_dns
# For Container Linux by CoreOS:
resolvconf_mode: host_resolvconf
```

For calico, configure the OpenStack Neutron ports: [OpenStack](/docs/openstack.md)

## Deploy Kubernetes

```
$ ansible-playbook --become -i contrib/terraform/openstack/hosts cluster.yml
```

## Set up local kubectl

1. Install kubectl on your workstation:
   [Install and Set Up kubectl](https://kubernetes.io/docs/tasks/tools/install-kubectl/)
2. Add a route to the internal IP of the master node (if needed):

```
sudo route add [master-internal-ip] gw [router-ip]
```

or

```
sudo route add -net [internal-subnet]/24 gw [router-ip]
```

3. List the Kubernetes certs & keys:

```
ssh [os-user]@[master-ip] sudo ls /etc/kubernetes/ssl/
```

4. Get the admin's certs & key:

```
ssh [os-user]@[master-ip] sudo cat /etc/kubernetes/ssl/admin-[cluster_name]-k8s-master-1-key.pem > admin-key.pem
ssh [os-user]@[master-ip] sudo cat /etc/kubernetes/ssl/admin-[cluster_name]-k8s-master-1.pem > admin.pem
ssh [os-user]@[master-ip] sudo cat /etc/kubernetes/ssl/ca.pem > ca.pem
```

5. Configure kubectl:

```
kubectl config set-cluster default-cluster --server=https://[master-internal-ip]:6443 \
    --certificate-authority=ca.pem

kubectl config set-credentials default-admin \
    --client-key=admin-key.pem \
    --client-certificate=admin.pem

kubectl config set-context default-system --cluster=default-cluster --user=default-admin
kubectl config use-context default-system
```

6. Check it:

```
kubectl version
```

If you are using floating IP addresses then you may get this error:

```
Unable to connect to the server: x509: certificate is valid for 10.0.0.6, 10.0.0.6, 10.233.0.1, 127.0.0.1, not 132.249.238.25
```

You can tell kubectl to ignore this condition by adding the
`--insecure-skip-tls-verify` option.

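For example:

```
kubectl --insecure-skip-tls-verify version
```
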
## GlusterFS

GlusterFS is not deployed by the standard `cluster.yml` playbook; see the
[glusterfs playbook documentation](../../network-storage/glusterfs/README.md)
for instructions.

Basically you will install Gluster as:

```bash
$ ansible-playbook --become -i contrib/terraform/openstack/hosts ./contrib/network-storage/glusterfs/glusterfs.yml
```

# What's next

[Start Hello Kubernetes Service](https://kubernetes.io/docs/tasks/access-application-cluster/service-access-application-cluster/)