Kubernetes on GCP with Terraform
Provision a Kubernetes cluster on GCP using Terraform and Kubespray
Overview
The setup looks like the following:
```
                           Kubernetes cluster
                        +-----------------------+
+---------------+       |   +--------------+    |
|               |       |   | +--------------+  |
| API server LB +---------> | |              |  |
|               |       |   | | Master/etcd  |  |
+---------------+       |   | | node(s)      |  |
                        |   +-+              |  |
                        |     +--------------+  |
                        |            ^          |
                        |            |          |
                        |            v          |
+---------------+       |   +--------------+    |
|               |       |   | +--------------+  |
|   Ingress LB  +---------> | |              |  |
|               |       |   | |    Worker    |  |
+---------------+       |   | |   node(s)    |  |
                        |   +-+              |  |
                        |     +--------------+  |
                        +-----------------------+
```
Requirements
- Terraform 0.12.0 or newer
Quickstart
To get a cluster up and running you will need a JSON keyfile. Set the path to the file in the tfvars.json file and run the following:

```bash
terraform apply -var-file tfvars.json \
  -state dev-cluster.tfstate \
  -var gcp_project_id=<ID of your GCP project> \
  -var keyfile_location=<location of the json keyfile>
```
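If you do not already have a JSON keyfile, one way to create one is with the gcloud CLI. This is a sketch only; the service account name `kubespray-terraform` and the `roles/editor` binding are illustrative assumptions, not something this repo prescribes:

```bash
# Hypothetical service account name; adjust to your project's conventions.
gcloud iam service-accounts create kubespray-terraform \
  --project <ID of your GCP project>

# The account needs permissions to create the cluster resources;
# roles/editor is a broad choice used here purely for illustration.
gcloud projects add-iam-policy-binding <ID of your GCP project> \
  --member serviceAccount:kubespray-terraform@<ID of your GCP project>.iam.gserviceaccount.com \
  --role roles/editor

# Download the JSON keyfile referenced by keyfile_location.
gcloud iam service-accounts keys create keyfile.json \
  --iam-account kubespray-terraform@<ID of your GCP project>.iam.gserviceaccount.com
```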
To generate a Kubespray inventory based on the Terraform state file, run the following:

```bash
./generate-inventory.sh dev-cluster.tfstate > inventory.ini
```
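The exact contents depend on your `machines` definition; as a rough sketch (host names and IPs below are invented, and group names follow Kubespray's usual conventions rather than anything verified against this script), the generated file looks something like:

```ini
# Illustrative only -- actual hosts and addresses come from the Terraform state.
[all]
master-0 ansible_host=203.0.113.10
worker-0 ansible_host=203.0.113.20

[kube_control_plane]
master-0

[etcd]
master-0

[kube_node]
worker-0

[k8s_cluster:children]
kube_control_plane
kube_node
```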
You should now have an inventory file named inventory.ini that you can use with Kubespray, e.g.

```bash
ansible-playbook -i contrib/terraform/gcs/inventory.ini cluster.yml -b -v
```
Variables
Required
- `keyfile_location`: Location of the JSON keyfile to use as credentials for the Google Terraform provider
- `gcp_project_id`: ID of the GCP project to deploy the cluster in
- `ssh_pub_key`: Path to the public SSH key to use for all machines
- `region`: The region where to run the cluster
- `machines`: Machines to provision. The key of this object will be used as the name of the machine
  - `node_type`: The role of this node (master|worker)
  - `size`: The machine size to use
  - `zone`: The zone the machine should run in
  - `additional_disks`: Extra disks to add to the machine. The key of this object will be used as the disk name
    - `size`: Size of the disk (in GB)
  - `boot_disk`: The boot disk to use
    - `image_name`: Name of the image
    - `size`: Size of the boot disk (in GB)
- `ssh_whitelist`: List of IP ranges (CIDR) that will be allowed to SSH to the nodes
- `api_server_whitelist`: List of IP ranges (CIDR) that will be allowed to connect to the API server
- `nodeport_whitelist`: List of IP ranges (CIDR) that will be allowed to connect to the Kubernetes nodes on ports 30000-32767 (Kubernetes NodePorts)
- `ingress_whitelist`: List of IP ranges (CIDR) that will be allowed to connect to ingress on ports 80 and 443
- `extra_ingress_firewalls`: Additional ingress firewall rules. The key will be used as the name of the rule; a combined example follows this list
  - `source_ranges`: List of IP ranges (CIDR). Example: `["8.8.8.8"]`
  - `protocol`: Protocol. Example: `"tcp"`
  - `ports`: List of ports, as strings. Example: `["53"]`
  - `target_tags`: List of target tags (either the machine name, `control-plane`, or `worker`). Example: `["control-plane", "worker-0"]`
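Since the per-field examples above are scattered, here is how they might combine into one rule inside tfvars.json; the rule name `allow-dns` and its values are illustrative assumptions:

```json
"extra_ingress_firewalls": {
  "allow-dns": {
    "source_ranges": ["8.8.8.8"],
    "protocol": "tcp",
    "ports": ["53"],
    "target_tags": ["control-plane", "worker-0"]
  }
}
```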
Optional
- `prefix`: Prefix to use for all resources, required to be unique for all clusters in the same project (Defaults to `default`)
- `master_sa_email`: Service account email to use for the control plane nodes (Defaults to `""`, which auto-generates one)
- `master_sa_scopes`: Service account scopes to use for the control plane nodes (Defaults to `["https://www.googleapis.com/auth/cloud-platform"]`)
- `master_preemptible`: Enable preemptible instances for the control plane nodes (Defaults to `false`)
- `master_additional_disk_type`: Disk type for extra disks added on the control plane nodes (Defaults to `"pd-ssd"`)
- `worker_sa_email`: Service account email to use for the worker nodes (Defaults to `""`, which auto-generates one)
- `worker_sa_scopes`: Service account scopes to use for the worker nodes (Defaults to `["https://www.googleapis.com/auth/cloud-platform"]`)
- `worker_preemptible`: Enable preemptible instances for the worker nodes (Defaults to `false`)
- `worker_additional_disk_type`: Disk type for extra disks added on the worker nodes (Defaults to `"pd-ssd"`)
An example variables file can be found in tfvars.json.
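As a minimal sketch of such a file (the project ID, machine names, sizes, zone, image name, and CIDR ranges below are invented for illustration; consult the shipped tfvars.json for real values):

```json
{
  "gcp_project_id": "my-project",
  "region": "us-central1",
  "ssh_pub_key": "~/.ssh/id_rsa.pub",
  "keyfile_location": "keyfile.json",

  "prefix": "dev",

  "machines": {
    "master-0": {
      "node_type": "master",
      "size": "n1-standard-2",
      "zone": "us-central1-a",
      "additional_disks": {},
      "boot_disk": {
        "image_name": "ubuntu-os-cloud/ubuntu-2004-lts",
        "size": 50
      }
    },
    "worker-0": {
      "node_type": "worker",
      "size": "n1-standard-4",
      "zone": "us-central1-a",
      "additional_disks": {
        "extra-disk": {
          "size": 100
        }
      },
      "boot_disk": {
        "image_name": "ubuntu-os-cloud/ubuntu-2004-lts",
        "size": 50
      }
    }
  },

  "ssh_whitelist": ["203.0.113.0/24"],
  "api_server_whitelist": ["203.0.113.0/24"],
  "nodeport_whitelist": ["203.0.113.0/24"],
  "ingress_whitelist": ["0.0.0.0/0"]
}
```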
Known limitations
This solution does not support using a bastion host. All nodes must therefore expose a public IP for Kubespray to work.