# AWS
To deploy kubespray on [AWS](https://aws.amazon.com/), uncomment the `cloud_provider` option in `group_vars/all.yml` and set it to `'aws'`. Refer to the [Kubespray Configuration](#kubespray-configuration) for customizing the provider.
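For example, the relevant lines in `group_vars/all.yml` look roughly like this (a minimal sketch; the exact file layout depends on your inventory):

```yml
## Uncomment and set to enable the AWS cloud provider
cloud_provider: 'aws'
```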
Prior to creating your instances, you **must** ensure that you have created IAM roles and policies for both "kubernetes-master" and "kubernetes-node". You can find the IAM policies [here](https://github.com/kubernetes-sigs/kubespray/tree/master/contrib/aws_iam/). See the [IAM Documentation](https://aws.amazon.com/documentation/iam/) if guidance is needed on how to set these up. When you bring your instances online, associate them with the respective IAM role. Nodes that are only to be used for Etcd do not need a role.
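As an illustration, the "kubernetes-master" role and its instance profile could be created with the AWS CLI roughly as follows; the `file://` names are placeholders for the policy documents from `contrib/aws_iam/`, and the same steps apply analogously to "kubernetes-node":

```ShellSession
aws iam create-role --role-name kubernetes-master \
  --assume-role-policy-document file://kubernetes-master-role.json
aws iam put-role-policy --role-name kubernetes-master --policy-name kubernetes-master \
  --policy-document file://kubernetes-master-policy.json
aws iam create-instance-profile --instance-profile-name kubernetes-master
aws iam add-role-to-instance-profile --instance-profile-name kubernetes-master \
  --role-name kubernetes-master
```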
You will also need to tag the resources in your VPC so that the aws provider can utilize them. Tag the subnets, route tables and all instances that kubernetes will be run on with key `kubernetes.io/cluster/$cluster_name` (`$cluster_name` must be a unique identifier for the cluster). Tag the subnets that must be targeted by external ELBs with the key `kubernetes.io/role/elb` and internal ELBs with the key `kubernetes.io/role/internal-elb`.
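One way to apply the tags with the AWS CLI (resource IDs and the cluster name are placeholders):

```ShellSession
# Tag subnets, route tables and instances with the cluster identifier
aws ec2 create-tags --resources subnet-0123456789abcdef0 rtb-0123456789abcdef0 i-0123456789abcdef0 \
  --tags Key=kubernetes.io/cluster/my-cluster,Value=shared
# Mark a subnet for external ELBs (use kubernetes.io/role/internal-elb for internal ELBs)
aws ec2 create-tags --resources subnet-0123456789abcdef0 \
  --tags Key=kubernetes.io/role/elb,Value=1
```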
Make sure your VPC has both DNS Hostnames support and Private DNS enabled.
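If they are not already enabled, both attributes can be turned on with the AWS CLI (the VPC ID is a placeholder; each attribute must be set in a separate call):

```ShellSession
aws ec2 modify-vpc-attribute --vpc-id vpc-0123456789abcdef0 --enable-dns-support "{\"Value\":true}"
aws ec2 modify-vpc-attribute --vpc-id vpc-0123456789abcdef0 --enable-dns-hostnames "{\"Value\":true}"
```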
The next step is to make sure the hostnames in your `inventory` file are identical to your internal hostnames in AWS. This may look something like `ip-111-222-333-444.us-west-2.compute.internal`. You can then specify how Ansible connects to these instances with `ansible_ssh_host` and `ansible_ssh_user`.
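A static inventory entry might then look like the following sketch (hostnames, IPs and the SSH user are illustrative):

```ini
[kube_control_plane]
ip-111-222-333-444.us-west-2.compute.internal ansible_ssh_host=111.222.333.444 ansible_ssh_user=ubuntu

[kube-node]
ip-111-222-333-555.us-west-2.compute.internal ansible_ssh_host=111.222.333.555 ansible_ssh_user=ubuntu
```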
You can now create your cluster!
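With the inventory in place, the cluster is typically created with something like the following (the inventory path and privilege-escalation flags depend on your setup):

```ShellSession
ansible-playbook -i inventory/mycluster/hosts.ini --become --become-user=root cluster.yml
```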
## Dynamic Inventory
There is also a dynamic inventory script for AWS that can be used if desired. However, be aware that it makes certain assumptions about how you'll create your inventory. It also does not handle all use cases and groups that we may use as part of more advanced deployments. Additions are welcome.
This produces an inventory that is passed into Ansible and looks like the following:
```json
{
  "_meta": {
    "hostvars": {
      "ip-172-31-3-xxx.us-east-2.compute.internal": {
        "ansible_ssh_host": "172.31.3.xxx"
      },
      "ip-172-31-8-xxx.us-east-2.compute.internal": {
        "ansible_ssh_host": "172.31.8.xxx"
      }
    }
  },
  "etcd": [
    "ip-172-31-3-xxx.us-east-2.compute.internal"
  ],
  "k8s-cluster": {
    "children": [
      "kube_control_plane",
      "kube-node"
    ]
  },
  "kube_control_plane": [
    "ip-172-31-3-xxx.us-east-2.compute.internal"
  ],
  "kube-node": [
    "ip-172-31-8-xxx.us-east-2.compute.internal"
  ]
}
```
Guide:
- Create instances in AWS as needed.
- Either during or after creation, add tags to the instances with a key of `kubespray-role` and a value of `kube_control_plane`, `etcd`, or `kube-node`. A single instance can also combine roles in the tag value, e.g. `kube_control_plane, etcd`.
- Copy the `kubespray-aws-inventory.py` script from `kubespray/contrib/aws_inventory` to the `kubespray/inventory` directory.
- Set the following AWS credentials and info as environment variables in your terminal:
```ShellSession
export AWS_ACCESS_KEY_ID="xxxxx"
export AWS_SECRET_ACCESS_KEY="yyyyy"
export REGION="us-east-2"
```
- We will now create our cluster. There are one or two small changes to the usual command. The first is that we specify `-i inventory/kubespray-aws-inventory.py` as our inventory script. The other is conditional: if your AWS instances are public facing, you can set the `VPC_VISIBILITY` variable to `public`, which causes public IPs and DNS names to be passed into the inventory. Your cluster.yml command then looks like `VPC_VISIBILITY="public" ansible-playbook ... cluster.yml`. A full invocation is sketched below.
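Putting the steps together, an invocation might look like the following sketch (run from the kubespray checkout, with the AWS credentials from the previous step already exported; flags are illustrative):

```ShellSession
cp contrib/aws_inventory/kubespray-aws-inventory.py inventory/
VPC_VISIBILITY="public" ansible-playbook -i inventory/kubespray-aws-inventory.py --become cluster.yml
```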
## Kubespray configuration
Declare the cloud config variables for the `aws` provider as follows. Setting these variables is optional and depends on your use case.
Variable|Type|Comment
---|---|---
aws_zone|string|Force set the AWS zone. Recommended to leave blank.
aws_vpc|string|The AWS VPC flag makes it possible to run the master components on a different AWS account, on a different cloud provider, or on-premise. If the flag is set, the KubernetesClusterTag must also be provided.
aws_subnet_id|string|SubnetID selects a specific subnet to use for ELBs.
aws_route_table_id|string|RouteTableID selects a specific RouteTable to use.
aws_role_arn|string|RoleARN is the IAM role to assume when interacting with AWS APIs.
aws_kubernetes_cluster_tag|string|KubernetesClusterTag is the legacy cluster id we'll use to identify our cluster resources.
aws_kubernetes_cluster_id|string|KubernetesClusterID is the cluster id we'll use to identify our cluster resources.
aws_disable_security_group_ingress|bool|The aws provider creates an inbound rule per load balancer on the node security group. However, this can run into the AWS security group rule limit of 50 if many LoadBalancers are created. This flag disables the automatic ingress creation. It requires that the user has set up a rule that allows inbound traffic on kubelet ports from the local VPC subnet (so load balancers can access it), e.g. 10.82.0.0/16 30000-32000.
aws_elb_security_group|string|Only for Kubelet version >= 1.7: AWS has a hard limit of 500 security groups. For large clusters, creating a security group for each ELB can cause the maximum number of security groups to be reached. If this is set, this security group is used instead of creating a new one for each ELB.
aws_disable_strict_zone_check|bool|During the instantiation of a new AWS cloud provider, the detected region is validated against a known set of regions. In a non-standard, AWS-like environment (e.g. Eucalyptus), this check may be undesirable. Setting this to true disables the check and emits a warning that the check was skipped. Please note that this is an experimental, work-in-progress feature.
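For reference, a sketch of how a few of these variables might be declared alongside `cloud_provider` in `group_vars/all.yml` (values are illustrative only):

```yml
cloud_provider: aws
aws_zone: ""
aws_subnet_id: "subnet-0123456789abcdef0"
aws_kubernetes_cluster_id: "my-cluster"
aws_disable_security_group_ingress: false
```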