# Kubernetes on AWS with Terraform

## Overview

This project will create:

- VPC with Public and Private Subnets in # Availability Zones
- Bastion Hosts and NAT Gateways in the Public Subnet
- A dynamic number of masters, etcd, and worker nodes in the Private Subnet
  - evenly distributed over the # of Availability Zones
- AWS ELB in the Public Subnet for accessing the Kubernetes API from the internet

## Requirements

- Terraform 0.12.0 or newer

## How to Use

- Export the variables for your AWS credentials or edit `credentials.tfvars`:
```commandline
export TF_VAR_AWS_ACCESS_KEY_ID="www"
export TF_VAR_AWS_SECRET_ACCESS_KEY="xxx"
export TF_VAR_AWS_SSH_KEY_NAME="yyy"
export TF_VAR_AWS_DEFAULT_REGION="zzz"
```
- Update `contrib/terraform/aws/terraform.tfvars` with your data. By default, the Terraform scripts use Ubuntu 18.04 LTS (Bionic) as the base image. If you want to change this behaviour, see the note "Using a distribution other than Ubuntu" below. A purely illustrative sketch of such a file follows:
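The variable names in this sketch are only indicative; the authoritative list of variables and their defaults is in `terraform.tfvars` and `variables.tf`.
```ini
# Illustrative values only -- verify the variable names against variables.tf
# before applying.
aws_cluster_name     = "devtest"
aws_kube_master_num  = 1
aws_kube_master_size = "t3.medium"
aws_etcd_num         = 3
aws_etcd_size        = "t3.medium"
aws_kube_worker_num  = 4
aws_kube_worker_size = "t3.medium"
```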
- Create an AWS EC2 SSH Key
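If you do not have a key pair in the target region yet, one way to create one is with the AWS CLI; the key name below is just a placeholder:
```commandline
aws ec2 create-key-pair --region <region_name> --key-name <key_name> --query 'KeyMaterial' --output text > <key_name>.pem
chmod 600 <key_name>.pem
```
Set `TF_VAR_AWS_SSH_KEY_NAME` (or the corresponding entry in `credentials.tfvars`) to the name of this key.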
- Run with `terraform apply --var-file="credentials.tfvars"` or `terraform apply`, depending on whether you exported your AWS credentials.
Example:
```commandline
terraform apply -var-file=credentials.tfvars
```
- Terraform automatically creates an Ansible inventory file called `hosts` describing the created infrastructure in the directory `inventory` (an abbreviated, purely illustrative example follows).
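Host names and addresses here are placeholders; the real file also contains the other groups (such as `[kube_node]` and `[etcd]`) that Kubespray expects.
```ini
[all]
ip-10-250-192-10.eu-central-1.compute.internal ansible_host=10.250.192.10
ip-10-250-193-20.eu-central-1.compute.internal ansible_host=10.250.193.20

[kube_control_plane]
ip-10-250-192-10.eu-central-1.compute.internal

[k8s_cluster:vars]
apiserver_loadbalancer_domain_name="kubernetes-elb-devtest-0123456789.eu-central-1.elb.amazonaws.com"
```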
- Ansible automatically generates an SSH config file for your bastion hosts. To connect to hosts over SSH through the bastion, use the generated `ssh-bastion.conf`; Ansible detects the bastion automatically and adjusts `ssh_args` accordingly.
```commandline
ssh -F ./ssh-bastion.conf user@$ip
```
- Once the infrastructure is created, you can run the Kubespray playbooks and supply `inventory/hosts` with the `-i` flag.
Example (this one assumes you are using Ubuntu):
```commandline
ansible-playbook -i ./inventory/hosts ./cluster.yml -e ansible_user=ubuntu -b --become-user=root --flush-cache
```
***Using a distribution other than Ubuntu***

If you want to use a distribution other than Ubuntu 18.04 LTS (Bionic), you can modify the search filters of the `data "aws_ami" "distro"` block in `variables.tf`.
For example, to use:
- Debian Jessie, replace `data "aws_ami" "distro"` in `variables.tf` with:
```ini
data "aws_ami" "distro" {
  most_recent = true

  filter {
    name   = "name"
    values = ["debian-jessie-amd64-hvm-*"]
  }

  filter {
    name   = "virtualization-type"
    values = ["hvm"]
  }

  owners = ["379101102735"]
}
```
- Ubuntu 16.04, replace `data "aws_ami" "distro"` in `variables.tf` with:
```ini
data "aws_ami" "distro" {
  most_recent = true

  filter {
    name   = "name"
    values = ["ubuntu/images/hvm-ssd/ubuntu-xenial-16.04-amd64-*"]
  }

  filter {
    name   = "virtualization-type"
    values = ["hvm"]
  }

  owners = ["099720109477"]
}
```
- CentOS 7, replace `data "aws_ami" "distro"` in `variables.tf` with:
```ini
data "aws_ami" "distro" {
  most_recent = true

  filter {
    name   = "name"
    values = ["dcos-centos7-*"]
  }

  filter {
    name   = "virtualization-type"
    values = ["hvm"]
  }

  owners = ["688023202711"]
}
```
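Note that the default SSH user differs between image families, so the `ansible_user` you pass to the playbook has to match the AMI you selected; with the CentOS image above that would typically be `centos` rather than `ubuntu`, for example:
```commandline
ansible-playbook -i ./inventory/hosts ./cluster.yml -e ansible_user=centos -b --become-user=root --flush-cache
```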
## Connecting to Kubernetes

You can use the following set of commands to get the kubeconfig file from your newly created cluster. Before running the commands, make sure you are in the project's root folder.
```commandline
# Get the controller's host name and IP address.
CONTROLLER_HOST_NAME=$(cat ./inventory/hosts | grep "\[kube_control_plane\]" -A 1 | tail -n 1)
CONTROLLER_IP=$(cat ./inventory/hosts | grep $CONTROLLER_HOST_NAME | grep ansible_host | cut -d'=' -f2)
# Get the hostname of the load balancer.
LB_HOST=$(cat inventory/hosts | grep apiserver_loadbalancer_domain_name | cut -d'"' -f2)
# Add the controller's SSH host key to known_hosts.
ssh-keygen -R $CONTROLLER_IP > /dev/null 2>&1
ssh-keyscan -H $CONTROLLER_IP >> ~/.ssh/known_hosts 2>/dev/null
# Fetch the kubeconfig from the controller and point it at the load balancer.
mkdir -p ~/.kube
ssh -F ssh-bastion.conf centos@$CONTROLLER_IP "sudo chmod 644 /etc/kubernetes/admin.conf"
scp -F ssh-bastion.conf centos@$CONTROLLER_IP:/etc/kubernetes/admin.conf ~/.kube/config
sed -i "s^server:.*^server: https://$LB_HOST:6443^" ~/.kube/config
kubectl get nodes
```
## Troubleshooting

### Remaining AWS IAM Instance Profile

If the cluster was destroyed without using Terraform, it is possible that the AWS IAM Instance Profiles still remain. To delete them, you can use the `AWS CLI` with the following command:
```commandline
aws iam delete-instance-profile --region <region_name> --instance-profile-name <profile_name>
```
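If you are not sure which profiles were left behind, you can list them first and pick the ones matching your cluster name:
```commandline
aws iam list-instance-profiles --query 'InstanceProfiles[].InstanceProfileName' --output text
```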
### Ansible Inventory doesn't get created

It can happen that Terraform doesn't create an Ansible inventory file automatically. If this is the case, copy the output after `inventory=`, create a file named `hosts` in the directory `inventory`, and paste the inventory into that file.
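Alternatively, assuming the inventory is exposed as a Terraform output named `inventory` (check the project's output definitions; this name is an assumption), you can write it to the file directly:
```commandline
# Assumes an output named "inventory" exists; verify with `terraform output` first.
terraform output inventory > ./inventory/hosts
```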
## Architecture

Pictured below is an AWS infrastructure created with this Terraform project, distributed over two Availability Zones.

![AWS Infrastructure with Terraform](docs/aws_kubespray.png)