Merge pull request #285 from paulczar/contrib_terraform_openstack
WIP: terraform openstack
Smaine Kahlouch · 8 years ago · committed by GitHub
8 changed files with 1416 additions and 0 deletions
- 137 contrib/terraform/openstack/README.md
- 136 contrib/terraform/openstack/group_vars/all.yml
- 1 contrib/terraform/openstack/hosts
- 94 contrib/terraform/openstack/kubespray.tf
- 238 contrib/terraform/openstack/terraform.tfstate
- 13 contrib/terraform/openstack/terraform.tfstate.backup
- 61 contrib/terraform/openstack/variables.tf
- 736 contrib/terraform/terraform.py
@ -0,0 +1,137 @@ |
|||
# Kubernetes on OpenStack with Terraform |
|||
|
|||
Provision a Kubernetes cluster with [Terraform](https://www.terraform.io) on |
|||
Openstack. |
|||
|
|||
## Status |
|||
|
|||
This will install a Kubernetes cluster on an OpenStack cloud. It is tested on an |
|||
OpenStack cloud provided by [BlueBox](https://www.blueboxcloud.com/) and |
|||
should work on most modern installs of OpenStack that support the basic |
|||
services. |
|||
|
|||
A few assumptions are made to ensure it will work on your OpenStack cluster: |
|||
|
|||
* floating-ips are used for access |
|||
* you already have a suitable OS image in glance |
|||
* you already have both an internal network and a floating-ip pool created |
|||
* you have security-groups enabled |
|||
|
|||
|
|||
## Requirements |
|||
|
|||
- [Install Terraform](https://www.terraform.io/intro/getting-started/install.html) |
|||
|
|||
## Terraform |
|||
|
|||
Terraform will be used to provision all of the OpenStack resources required to |
|||
run Kubernetes. It is also used to deploy and provision the software |
|||
requirements. |
|||
|
|||
### Prep |
|||
|
|||
#### OpenStack |
|||
|
|||
Ensure your OpenStack credentials are loaded in environment variables. This is |
|||
how I do it: |
|||
|
|||
``` |
|||
$ source ~/.stackrc |
|||
``` |
|||
|
|||
You will need two networks before installing: an internal network and |
|||
an external (floating IP pool) network. The internal network can be shared, as |
|||
we use security groups to provide network segregation. Due to the many |
|||
differences between OpenStack installs, Terraform does not attempt to create |
|||
these for you. |
|||
|
|||
By default Terraform will expect that your networks are called `internal` and |
|||
`external`. You can change this by altering the Terraform variables `network_name` and `floatingip_pool`. |
|||
|
|||
A full list of variables you can change can be found at [variables.tf](variables.tf). |
|||
|
|||
All OpenStack resources will use the Terraform variable `cluster_name` |
|||
(default `example`) in their name to make them easier to track. For example, the |
|||
first master will be named `example-k8s-master-1`. |
|||
|
|||
#### Terraform |
|||
|
|||
Ensure your local ssh-agent is running and your ssh key has been added. This |
|||
step is required by the Terraform provisioner: |
|||
|
|||
``` |
|||
$ eval $(ssh-agent -s) |
|||
$ ssh-add ~/.ssh/id_rsa |
|||
``` |
|||
|
|||
|
|||
Ensure that you have your OpenStack credentials loaded into Terraform |
|||
environment variables, likely via a command similar to: |
|||
|
|||
``` |
|||
$ echo Setting up Terraform creds && \ |
|||
export TF_VAR_username=${OS_USERNAME} && \ |
|||
export TF_VAR_password=${OS_PASSWORD} && \ |
|||
export TF_VAR_tenant=${OS_TENANT_NAME} && \ |
|||
export TF_VAR_auth_url=${OS_AUTH_URL} |
|||
``` |
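The same wiring can be scripted from Python; a minimal sketch (the `terraform_creds` helper is hypothetical, but the OS_*/TF_VAR_* names follow the exports above):

```python
import os

# Map the standard OpenStack client variables onto the TF_VAR_* names
# that variables.tf declares (username, password, tenant, auth_url).
OS_TO_TF = {
    "OS_USERNAME": "TF_VAR_username",
    "OS_PASSWORD": "TF_VAR_password",
    "OS_TENANT_NAME": "TF_VAR_tenant",
    "OS_AUTH_URL": "TF_VAR_auth_url",
}

def terraform_creds(environ=os.environ):
    """Return the TF_VAR_* settings derived from the OS_* credentials."""
    return {tf: environ[os_key]
            for os_key, tf in OS_TO_TF.items() if os_key in environ}
```

Exporting the result into the process environment before invoking `terraform` has the same effect as the shell snippet.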
|||
|
|||
## Provision a Kubernetes Cluster on OpenStack |
|||
|
|||
``` |
|||
$ terraform apply -state=contrib/terraform/openstack/terraform.tfstate contrib/terraform/openstack |
|||
openstack_compute_secgroup_v2.k8s_master: Creating... |
|||
description: "" => "example - Kubernetes Master" |
|||
name: "" => "example-k8s-master" |
|||
rule.#: "" => "<computed>" |
|||
... |
|||
... |
|||
Apply complete! Resources: 9 added, 0 changed, 0 destroyed. |
|||
|
|||
The state of your infrastructure has been saved to the path |
|||
below. This state is required to modify and destroy your |
|||
infrastructure, so keep it safe. To inspect the complete state |
|||
use the `terraform show` command. |
|||
|
|||
State path: contrib/terraform/openstack/terraform.tfstate |
|||
``` |
|||
|
|||
Make sure you can connect to the hosts: |
|||
|
|||
``` |
|||
$ ansible -i contrib/terraform/openstack/hosts -m ping all |
|||
example-k8s_node-1 | SUCCESS => { |
|||
"changed": false, |
|||
"ping": "pong" |
|||
} |
|||
example-etcd-1 | SUCCESS => { |
|||
"changed": false, |
|||
"ping": "pong" |
|||
} |
|||
example-k8s-master-1 | SUCCESS => { |
|||
"changed": false, |
|||
"ping": "pong" |
|||
} |
|||
``` |
|||
|
|||
If it fails, try to connect manually via SSH; it could be something as simple as a stale host key. |
|||
|
|||
Deploy Kubernetes: |
|||
|
|||
``` |
|||
$ ansible-playbook --become -i contrib/terraform/openstack/hosts cluster.yml |
|||
``` |
|||
|
|||
## Clean up |
|||
|
|||
``` |
|||
$ terraform destroy |
|||
Do you really want to destroy? |
|||
Terraform will delete all your managed infrastructure. |
|||
There is no undo. Only 'yes' will be accepted to confirm. |
|||
|
|||
Enter a value: yes |
|||
... |
|||
... |
|||
Apply complete! Resources: 0 added, 0 changed, 12 destroyed. |
|||
``` |
@ -0,0 +1,136 @@ |
|||
# Directory where the binaries will be installed |
|||
bin_dir: /usr/local/bin |
|||
|
|||
# Where the binaries will be downloaded. |
|||
# Note: ensure that you have enough disk space (about 1G) |
|||
local_release_dir: "/tmp/releases" |
|||
|
|||
# Uncomment this line for CoreOS only. |
|||
# Directory where python binary is installed |
|||
# ansible_python_interpreter: "/opt/bin/python" |
|||
|
|||
# This is the group that the cert creation scripts chgrp the |
|||
# cert files to. Not really changeable... |
|||
kube_cert_group: kube-cert |
|||
|
|||
# Cluster Loglevel configuration |
|||
kube_log_level: 2 |
|||
|
|||
# Users to create for basic auth in Kubernetes API via HTTP |
|||
kube_api_pwd: "changeme" |
|||
kube_users: |
|||
kube: |
|||
pass: "{{kube_api_pwd}}" |
|||
role: admin |
|||
root: |
|||
pass: "changeme" |
|||
role: admin |
|||
|
|||
# Kubernetes cluster name, also will be used as DNS domain |
|||
cluster_name: cluster.local |
|||
|
|||
# For some environments, each node has a publicly accessible |
|||
# address and an address it should bind services to. These are |
|||
# really inventory level variables, but described here for consistency. |
|||
# |
|||
# When advertising access, the access_ip will be used, but will defer to |
|||
# ip and then the default ansible ip when unspecified. |
|||
# |
|||
# When binding to restrict access, the ip variable will be used, but will |
|||
# defer to the default ansible ip when unspecified. |
|||
# |
|||
# The ip variable is used for specific address binding, e.g. listen address |
|||
# for etcd. This is used to help with environments like Vagrant or multi-NIC |
|||
# systems where one address should be preferred over another. |
|||
# ip: 10.2.2.2 |
|||
# |
|||
# The access_ip variable is used to define how other nodes should access |
|||
# the node. This is used in flannel to allow other flannel nodes to see |
|||
# this node, for example. The access_ip is really useful in AWS and Google |
|||
# environments where the nodes are accessed remotely by the "public" ip, |
|||
# but don't know about that address themselves. |
|||
# access_ip: 1.1.1.1 |
|||
|
|||
# Choose network plugin (calico, weave or flannel) |
|||
kube_network_plugin: flannel |
|||
|
|||
# Kubernetes internal network for services, unused block of space. |
|||
kube_service_addresses: 10.233.0.0/18 |
|||
|
|||
# internal network. When used, it will assign IP |
|||
# addresses from this range to individual pods. |
|||
# This network must be unused in your network infrastructure! |
|||
kube_pods_subnet: 10.233.64.0/18 |
|||
|
|||
# internal network total size (optional). This is the prefix of the |
|||
# entire network. Must be unused in your environment. |
|||
# kube_network_prefix: 18 |
|||
|
|||
# internal network node size allocation (optional). This is the size allocated |
|||
# to each node on your network. With these defaults you should have |
|||
# room for 64 nodes with 254 pods per node. |
|||
kube_network_node_prefix: 24 |
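The node capacity implied by these prefixes can be sanity-checked with a little subnet arithmetic (a sketch; `capacity` is a hypothetical helper, not part of the playbooks):

```python
def capacity(network_prefix, node_prefix):
    """Node subnets inside the pod network, and usable pod IPs per node."""
    nodes = 2 ** (node_prefix - network_prefix)   # e.g. /24 subnets in a /18
    pods_per_node = 2 ** (32 - node_prefix) - 2   # minus network/broadcast
    return nodes, pods_per_node

print(capacity(18, 24))  # (64, 254) with the defaults above
```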
|||
|
|||
# With calico it is possible to distribute routes with the border routers of the datacenter. |
|||
peer_with_router: false |
|||
# Warning : enabling router peering will disable calico's default behavior ('node mesh'). |
|||
# The subnets of each node will be distributed by the datacenter router. |
|||
|
|||
# The IP address and ports the API Server will be listening on. |
|||
kube_apiserver_ip: "{{ kube_service_addresses|ipaddr('net')|ipaddr(1)|ipaddr('address') }}" |
|||
kube_apiserver_port: 443 # (https) |
|||
kube_apiserver_insecure_port: 8080 # (http) |
|||
|
|||
# Internal DNS configuration. |
|||
# Kubernetes can create and maintain its own DNS server to resolve service names |
|||
# into appropriate IP addresses. It's highly advisable to run such a DNS server, |
|||
# as it greatly simplifies configuration of your applications - you can use |
|||
# service names instead of magic environment variables. |
|||
# You still must manually configure all your containers to use this DNS server, |
|||
# Kubernetes won't do this for you (yet). |
|||
|
|||
# Upstream dns servers used by dnsmasq |
|||
upstream_dns_servers: |
|||
- 8.8.8.8 |
|||
- 8.8.4.4 |
|||
# |
|||
# # Use dns server : https://github.com/ansibl8s/k8s-skydns/blob/master/skydns-README.md |
|||
dns_setup: true |
|||
dns_domain: "{{ cluster_name }}" |
|||
# |
|||
# # Ip address of the kubernetes skydns service |
|||
skydns_server: "{{ kube_service_addresses|ipaddr('net')|ipaddr(3)|ipaddr('address') }}" |
|||
dns_server: "{{ kube_service_addresses|ipaddr('net')|ipaddr(2)|ipaddr('address') }}" |
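The `ipaddr` filter chains above simply pick fixed offsets inside the service range; the same addresses can be reproduced with Python's `ipaddress` module (a sketch of the arithmetic, not of the Jinja2 filter itself):

```python
import ipaddress

def service_ip(cidr, offset):
    """Return the address at the given offset inside the service network."""
    net = ipaddress.ip_network(cidr)
    return str(net.network_address + offset)

# With kube_service_addresses: 10.233.0.0/18
print(service_ip("10.233.0.0/18", 1))  # kube_apiserver_ip -> 10.233.0.1
print(service_ip("10.233.0.0/18", 2))  # dns_server        -> 10.233.0.2
print(service_ip("10.233.0.0/18", 3))  # skydns_server     -> 10.233.0.3
```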
|||
|
|||
# There are some changes specific to the cloud providers |
|||
# for instance we need to encapsulate packets with some network plugins |
|||
# If set, the possible values are 'gce', 'aws' or 'openstack' |
|||
# When openstack is used make sure to source in the openstack credentials |
|||
# like you would do when using nova-client before starting the playbook. |
|||
# cloud_provider: |
|||
|
|||
# For multi masters architecture: |
|||
# kube-proxy doesn't support multiple apiservers for the time being, so you'll need to configure your own loadbalancer |
|||
# This domain name will be inserted into the /etc/hosts file of all servers |
|||
# configuration example with haproxy : |
|||
# listen kubernetes-apiserver-https |
|||
# bind 10.99.0.21:8383 |
|||
# option ssl-hello-chk |
|||
# mode tcp |
|||
# timeout client 3h |
|||
# timeout server 3h |
|||
# server master1 10.99.0.26:443 |
|||
# server master2 10.99.0.27:443 |
|||
# balance roundrobin |
|||
# apiserver_loadbalancer_domain_name: "lb-apiserver.kubernetes.local" |
|||
|
|||
## Set these proxy values in order to update docker daemon to use proxies |
|||
# http_proxy: "" |
|||
# https_proxy: "" |
|||
# no_proxy: "" |
|||
|
|||
## A string of extra options to pass to the docker daemon. |
|||
## This string should be exactly as you wish it to appear. |
|||
## An obvious use case is allowing insecure-registry access |
|||
## to self-hosted registries like so: |
|||
docker_options: "--insecure-registry={{ kube_service_addresses }}" |
@ -0,0 +1 @@ |
|||
../terraform.py |
@ -0,0 +1,94 @@ |
|||
resource "openstack_networking_floatingip_v2" "k8s_master" { |
|||
count = "${var.number_of_k8s_masters}" |
|||
pool = "${var.floatingip_pool}" |
|||
} |
|||
|
|||
resource "openstack_networking_floatingip_v2" "k8s_node" { |
|||
count = "${var.number_of_k8s_nodes}" |
|||
pool = "${var.floatingip_pool}" |
|||
} |
|||
|
|||
|
|||
resource "openstack_compute_keypair_v2" "k8s" { |
|||
name = "kubernetes-${var.cluster_name}" |
|||
public_key = "${file(var.public_key_path)}" |
|||
} |
|||
|
|||
resource "openstack_compute_secgroup_v2" "k8s_master" { |
|||
name = "${var.cluster_name}-k8s-master" |
|||
description = "${var.cluster_name} - Kubernetes Master" |
|||
} |
|||
|
|||
resource "openstack_compute_secgroup_v2" "k8s" { |
|||
name = "${var.cluster_name}-k8s" |
|||
description = "${var.cluster_name} - Kubernetes" |
|||
rule { |
|||
ip_protocol = "tcp" |
|||
from_port = "22" |
|||
to_port = "22" |
|||
cidr = "0.0.0.0/0" |
|||
} |
|||
rule { |
|||
ip_protocol = "icmp" |
|||
from_port = "-1" |
|||
to_port = "-1" |
|||
cidr = "0.0.0.0/0" |
|||
} |
|||
rule { |
|||
ip_protocol = "tcp" |
|||
from_port = "1" |
|||
to_port = "65535" |
|||
self = true |
|||
} |
|||
rule { |
|||
ip_protocol = "udp" |
|||
from_port = "1" |
|||
to_port = "65535" |
|||
self = true |
|||
} |
|||
rule { |
|||
ip_protocol = "icmp" |
|||
from_port = "-1" |
|||
to_port = "-1" |
|||
self = true |
|||
} |
|||
} |
|||
|
|||
resource "openstack_compute_instance_v2" "k8s_master" { |
|||
name = "${var.cluster_name}-k8s-master-${count.index+1}" |
|||
count = "${var.number_of_k8s_masters}" |
|||
image_name = "${var.image}" |
|||
flavor_id = "${var.flavor_k8s_master}" |
|||
key_pair = "${openstack_compute_keypair_v2.k8s.name}" |
|||
network { |
|||
name = "${var.network_name}" |
|||
} |
|||
security_groups = [ "${openstack_compute_secgroup_v2.k8s_master.name}", |
|||
"${openstack_compute_secgroup_v2.k8s.name}" ] |
|||
floating_ip = "${element(openstack_networking_floatingip_v2.k8s_master.*.address, count.index)}" |
|||
metadata = { |
|||
ssh_user = "${var.ssh_user}" |
|||
kubespray_groups = "etcd,kube-master,kube-node,k8s-cluster" |
|||
} |
|||
} |
|||
|
|||
resource "openstack_compute_instance_v2" "k8s_node" { |
|||
name = "${var.cluster_name}-k8s-node-${count.index+1}" |
|||
count = "${var.number_of_k8s_nodes}" |
|||
image_name = "${var.image}" |
|||
flavor_id = "${var.flavor_k8s_node}" |
|||
key_pair = "${openstack_compute_keypair_v2.k8s.name}" |
|||
network { |
|||
name = "${var.network_name}" |
|||
} |
|||
security_groups = ["${openstack_compute_secgroup_v2.k8s.name}" ] |
|||
floating_ip = "${element(openstack_networking_floatingip_v2.k8s_node.*.address, count.index)}" |
|||
metadata = { |
|||
ssh_user = "${var.ssh_user}" |
|||
kubespray_groups = "kube-node,k8s-cluster" |
|||
} |
|||
} |
|||
|
|||
#output "msg" { |
|||
# value = "Your hosts are ready to go!\nYour ssh hosts are: ${join(", ", openstack_networking_floatingip_v2.k8s_master.*.address )}" |
|||
#} |
@ -0,0 +1,238 @@ |
|||
{ |
|||
"version": 1, |
|||
"serial": 17, |
|||
"modules": [ |
|||
{ |
|||
"path": [ |
|||
"root" |
|||
], |
|||
"outputs": {}, |
|||
"resources": { |
|||
"openstack_compute_instance_v2.k8s_master.0": { |
|||
"type": "openstack_compute_instance_v2", |
|||
"depends_on": [ |
|||
"openstack_compute_keypair_v2.k8s", |
|||
"openstack_compute_secgroup_v2.k8s", |
|||
"openstack_compute_secgroup_v2.k8s_master", |
|||
"openstack_networking_floatingip_v2.k8s_master" |
|||
], |
|||
"primary": { |
|||
"id": "f4a44f6e-33ff-4e35-b593-34f3dfd80dc9", |
|||
"attributes": { |
|||
"access_ip_v4": "173.247.105.12", |
|||
"access_ip_v6": "", |
|||
"flavor_id": "3", |
|||
"flavor_name": "m1.medium", |
|||
"floating_ip": "173.247.105.12", |
|||
"id": "f4a44f6e-33ff-4e35-b593-34f3dfd80dc9", |
|||
"image_id": "1525c3f3-1224-4958-bd07-da9feaedf18b", |
|||
"image_name": "ubuntu-14.04", |
|||
"key_pair": "kubernetes-example", |
|||
"metadata.#": "2", |
|||
"metadata.kubespray_groups": "etcd,kube-master,kube-node,k8s-cluster", |
|||
"metadata.ssh_user": "ubuntu", |
|||
"name": "example-k8s-master-1", |
|||
"network.#": "1", |
|||
"network.0.access_network": "false", |
|||
"network.0.fixed_ip_v4": "10.230.7.86", |
|||
"network.0.fixed_ip_v6": "", |
|||
"network.0.floating_ip": "173.247.105.12", |
|||
"network.0.mac": "fa:16:3e:fb:82:1d", |
|||
"network.0.name": "internal", |
|||
"network.0.port": "", |
|||
"network.0.uuid": "ba0fdd03-72b5-41eb-bb67-fef437fd6cb4", |
|||
"security_groups.#": "2", |
|||
"security_groups.2779334175": "example-k8s", |
|||
"security_groups.3772290257": "example-k8s-master", |
|||
"volume.#": "0" |
|||
} |
|||
} |
|||
}, |
|||
"openstack_compute_instance_v2.k8s_master.1": { |
|||
"type": "openstack_compute_instance_v2", |
|||
"depends_on": [ |
|||
"openstack_compute_keypair_v2.k8s", |
|||
"openstack_compute_secgroup_v2.k8s", |
|||
"openstack_compute_secgroup_v2.k8s_master", |
|||
"openstack_networking_floatingip_v2.k8s_master" |
|||
], |
|||
"primary": { |
|||
"id": "cbb565fe-a3b6-44ff-8f81-8ec29704d11b", |
|||
"attributes": { |
|||
"access_ip_v4": "173.247.105.70", |
|||
"access_ip_v6": "", |
|||
"flavor_id": "3", |
|||
"flavor_name": "m1.medium", |
|||
"floating_ip": "173.247.105.70", |
|||
"id": "cbb565fe-a3b6-44ff-8f81-8ec29704d11b", |
|||
"image_id": "1525c3f3-1224-4958-bd07-da9feaedf18b", |
|||
"image_name": "ubuntu-14.04", |
|||
"key_pair": "kubernetes-example", |
|||
"metadata.#": "2", |
|||
"metadata.kubespray_groups": "etcd,kube-master,kube-node,k8s-cluster", |
|||
"metadata.ssh_user": "ubuntu", |
|||
"name": "example-k8s-master-2", |
|||
"network.#": "1", |
|||
"network.0.access_network": "false", |
|||
"network.0.fixed_ip_v4": "10.230.7.85", |
|||
"network.0.fixed_ip_v6": "", |
|||
"network.0.floating_ip": "173.247.105.70", |
|||
"network.0.mac": "fa:16:3e:33:98:e6", |
|||
"network.0.name": "internal", |
|||
"network.0.port": "", |
|||
"network.0.uuid": "ba0fdd03-72b5-41eb-bb67-fef437fd6cb4", |
|||
"security_groups.#": "2", |
|||
"security_groups.2779334175": "example-k8s", |
|||
"security_groups.3772290257": "example-k8s-master", |
|||
"volume.#": "0" |
|||
} |
|||
} |
|||
}, |
|||
"openstack_compute_instance_v2.k8s_node": { |
|||
"type": "openstack_compute_instance_v2", |
|||
"depends_on": [ |
|||
"openstack_compute_keypair_v2.k8s", |
|||
"openstack_compute_secgroup_v2.k8s", |
|||
"openstack_networking_floatingip_v2.k8s_node" |
|||
], |
|||
"primary": { |
|||
"id": "39deed7e-8307-4b62-b56c-ce2b405a03fa", |
|||
"attributes": { |
|||
"access_ip_v4": "173.247.105.76", |
|||
"access_ip_v6": "", |
|||
"flavor_id": "3", |
|||
"flavor_name": "m1.medium", |
|||
"floating_ip": "173.247.105.76", |
|||
"id": "39deed7e-8307-4b62-b56c-ce2b405a03fa", |
|||
"image_id": "1525c3f3-1224-4958-bd07-da9feaedf18b", |
|||
"image_name": "ubuntu-14.04", |
|||
"key_pair": "kubernetes-example", |
|||
"metadata.#": "2", |
|||
"metadata.kubespray_groups": "kube-node,k8s-cluster", |
|||
"metadata.ssh_user": "ubuntu", |
|||
"name": "example-k8s-node-1", |
|||
"network.#": "1", |
|||
"network.0.access_network": "false", |
|||
"network.0.fixed_ip_v4": "10.230.7.84", |
|||
"network.0.fixed_ip_v6": "", |
|||
"network.0.floating_ip": "173.247.105.76", |
|||
"network.0.mac": "fa:16:3e:53:57:bc", |
|||
"network.0.name": "internal", |
|||
"network.0.port": "", |
|||
"network.0.uuid": "ba0fdd03-72b5-41eb-bb67-fef437fd6cb4", |
|||
"security_groups.#": "1", |
|||
"security_groups.2779334175": "example-k8s", |
|||
"volume.#": "0" |
|||
} |
|||
} |
|||
}, |
|||
"openstack_compute_keypair_v2.k8s": { |
|||
"type": "openstack_compute_keypair_v2", |
|||
"primary": { |
|||
"id": "kubernetes-example", |
|||
"attributes": { |
|||
"id": "kubernetes-example", |
|||
"name": "kubernetes-example", |
|||
"public_key": "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC9nU6RPYCabjLH1LvJfpp9L8r8q5RZ6niS92zD95xpm2b2obVydWe0tCSFdmULBuvT8Q8YQ4qOG2g/oJlsGOsia+4CQjYEUV9CgTH9H5HK3vUOwtO5g2eFnYKSmI/4znHa0WYpQFnQK2kSSeCs2beTlJhc8vjfN/2HHmuny6SxNSbnCk/nZdwamxEONIVdjlm3CSBlq4PChT/D/uUqm/nOm0Zqdk9ZlTBkucsjiOCJeEzg4HioKmIH8ewqsKuS7kMADHPH98JMdBhTKbYbLrxTC/RfiaON58WJpmdOA935TT5Td5aVQZoqe/i/5yFRp5fMG239jtfbM0Igu44TEIib pczarkowski@Pauls-MacBook-Pro.local\n" |
|||
} |
|||
} |
|||
}, |
|||
"openstack_compute_secgroup_v2.k8s": { |
|||
"type": "openstack_compute_secgroup_v2", |
|||
"primary": { |
|||
"id": "418394e2-b4be-4953-b7a3-b309bf28fbdb", |
|||
"attributes": { |
|||
"description": "example - Kubernetes", |
|||
"id": "418394e2-b4be-4953-b7a3-b309bf28fbdb", |
|||
"name": "example-k8s", |
|||
"rule.#": "5", |
|||
"rule.112275015.cidr": "", |
|||
"rule.112275015.from_group_id": "", |
|||
"rule.112275015.from_port": "1", |
|||
"rule.112275015.id": "597170c9-b35a-45c0-8717-652a342f3fd6", |
|||
"rule.112275015.ip_protocol": "tcp", |
|||
"rule.112275015.self": "true", |
|||
"rule.112275015.to_port": "65535", |
|||
"rule.2180185248.cidr": "0.0.0.0/0", |
|||
"rule.2180185248.from_group_id": "", |
|||
"rule.2180185248.from_port": "-1", |
|||
"rule.2180185248.id": "ffdcdd5e-f18b-4537-b502-8849affdfed9", |
|||
"rule.2180185248.ip_protocol": "icmp", |
|||
"rule.2180185248.self": "false", |
|||
"rule.2180185248.to_port": "-1", |
|||
"rule.3267409695.cidr": "", |
|||
"rule.3267409695.from_group_id": "", |
|||
"rule.3267409695.from_port": "-1", |
|||
"rule.3267409695.id": "4f91d9ca-940c-4f4d-9ce1-024cbd7d9c54", |
|||
"rule.3267409695.ip_protocol": "icmp", |
|||
"rule.3267409695.self": "true", |
|||
"rule.3267409695.to_port": "-1", |
|||
"rule.635693822.cidr": "", |
|||
"rule.635693822.from_group_id": "", |
|||
"rule.635693822.from_port": "1", |
|||
"rule.635693822.id": "c6816e5b-a1a4-4071-acce-d09b92d14d49", |
|||
"rule.635693822.ip_protocol": "udp", |
|||
"rule.635693822.self": "true", |
|||
"rule.635693822.to_port": "65535", |
|||
"rule.836640770.cidr": "0.0.0.0/0", |
|||
"rule.836640770.from_group_id": "", |
|||
"rule.836640770.from_port": "22", |
|||
"rule.836640770.id": "8845acba-636b-4c23-b9e2-5bff76d9008d", |
|||
"rule.836640770.ip_protocol": "tcp", |
|||
"rule.836640770.self": "false", |
|||
"rule.836640770.to_port": "22" |
|||
} |
|||
} |
|||
}, |
|||
"openstack_compute_secgroup_v2.k8s_master": { |
|||
"type": "openstack_compute_secgroup_v2", |
|||
"primary": { |
|||
"id": "c74aed25-6161-46c4-a488-dfc7f49a228e", |
|||
"attributes": { |
|||
"description": "example - Kubernetes Master", |
|||
"id": "c74aed25-6161-46c4-a488-dfc7f49a228e", |
|||
"name": "example-k8s-master", |
|||
"rule.#": "0" |
|||
} |
|||
} |
|||
}, |
|||
"openstack_networking_floatingip_v2.k8s_master.0": { |
|||
"type": "openstack_networking_floatingip_v2", |
|||
"primary": { |
|||
"id": "2a320c67-214d-4631-a840-2de82505ed3f", |
|||
"attributes": { |
|||
"address": "173.247.105.12", |
|||
"id": "2a320c67-214d-4631-a840-2de82505ed3f", |
|||
"pool": "external", |
|||
"port_id": "" |
|||
} |
|||
} |
|||
}, |
|||
"openstack_networking_floatingip_v2.k8s_master.1": { |
|||
"type": "openstack_networking_floatingip_v2", |
|||
"primary": { |
|||
"id": "3adbfc13-e7ae-4bcf-99d3-3ba9db056e1f", |
|||
"attributes": { |
|||
"address": "173.247.105.70", |
|||
"id": "3adbfc13-e7ae-4bcf-99d3-3ba9db056e1f", |
|||
"pool": "external", |
|||
"port_id": "" |
|||
} |
|||
} |
|||
}, |
|||
"openstack_networking_floatingip_v2.k8s_node": { |
|||
"type": "openstack_networking_floatingip_v2", |
|||
"primary": { |
|||
"id": "a3f77aa6-5c3a-4edf-b97e-ee211dfa81e1", |
|||
"attributes": { |
|||
"address": "173.247.105.76", |
|||
"id": "a3f77aa6-5c3a-4edf-b97e-ee211dfa81e1", |
|||
"pool": "external", |
|||
"port_id": "" |
|||
} |
|||
} |
|||
} |
|||
} |
|||
} |
|||
] |
|||
} |
@ -0,0 +1,13 @@ |
|||
{ |
|||
"version": 1, |
|||
"serial": 16, |
|||
"modules": [ |
|||
{ |
|||
"path": [ |
|||
"root" |
|||
], |
|||
"outputs": {}, |
|||
"resources": {} |
|||
} |
|||
] |
|||
} |
@ -0,0 +1,61 @@ |
|||
variable "cluster_name" { |
|||
default = "example" |
|||
} |
|||
|
|||
variable "number_of_k8s_masters" { |
|||
default = 2 |
|||
} |
|||
|
|||
variable "number_of_k8s_nodes" { |
|||
default = 1 |
|||
} |
|||
|
|||
variable "public_key_path" { |
|||
description = "The path of the ssh pub key" |
|||
default = "~/.ssh/id_rsa.pub" |
|||
} |
|||
|
|||
variable "image" { |
|||
description = "the image to use" |
|||
default = "ubuntu-14.04" |
|||
} |
|||
|
|||
variable "ssh_user" { |
|||
description = "used to fill out tags for ansible inventory" |
|||
default = "ubuntu" |
|||
} |
|||
|
|||
variable "flavor_k8s_master" { |
|||
default = 3 |
|||
} |
|||
|
|||
variable "flavor_k8s_node" { |
|||
default = 3 |
|||
} |
|||
|
|||
|
|||
variable "network_name" { |
|||
description = "name of the internal network to use" |
|||
default = "internal" |
|||
} |
|||
|
|||
variable "floatingip_pool" { |
|||
description = "name of the floating ip pool to use" |
|||
default = "external" |
|||
} |
|||
|
|||
variable "username" { |
|||
description = "Your openstack username" |
|||
} |
|||
|
|||
variable "password" { |
|||
description = "Your openstack password" |
|||
} |
|||
|
|||
variable "tenant" { |
|||
description = "Your openstack tenant/project" |
|||
} |
|||
|
|||
variable "auth_url" { |
|||
description = "Your openstack auth URL" |
|||
} |
@ -0,0 +1,736 @@ |
|||
#!/usr/bin/env python |
|||
# |
|||
# Copyright 2015 Cisco Systems, Inc. |
|||
# |
|||
# Licensed under the Apache License, Version 2.0 (the "License"); |
|||
# you may not use this file except in compliance with the License. |
|||
# You may obtain a copy of the License at |
|||
# |
|||
# http://www.apache.org/licenses/LICENSE-2.0 |
|||
# |
|||
# Unless required by applicable law or agreed to in writing, software |
|||
# distributed under the License is distributed on an "AS IS" BASIS, |
|||
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. |
|||
# See the License for the specific language governing permissions and |
|||
# limitations under the License. |
|||
# |
|||
# original: https://github.com/CiscoCloud/terraform.py |
|||
|
|||
"""\ |
|||
Dynamic inventory for Terraform - finds all `.tfstate` files below the working |
|||
directory and generates an inventory based on them. |
|||
""" |
|||
from __future__ import unicode_literals, print_function |
|||
import argparse |
|||
from collections import defaultdict |
|||
from functools import wraps |
|||
import json |
|||
import os |
|||
import re |
|||
|
|||
VERSION = '0.3.0pre' |
|||
|
|||
|
|||
def tfstates(root=None): |
|||
root = root or os.getcwd() |
|||
for dirpath, _, filenames in os.walk(root): |
|||
for name in filenames: |
|||
if os.path.splitext(name)[-1] == '.tfstate': |
|||
yield os.path.join(dirpath, name) |
|||
|
|||
|
|||
def iterresources(filenames): |
|||
for filename in filenames: |
|||
with open(filename, 'r') as json_file: |
|||
state = json.load(json_file) |
|||
for module in state['modules']: |
|||
name = module['path'][-1] |
|||
for key, resource in module['resources'].items(): |
|||
yield name, key, resource |
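The two generators can be exercised against a minimal state file; a self-contained sketch (the sample state below is hypothetical, its shape follows the tfstate in this change):

```python
import json
import os
import tempfile

def iterresources(filenames):
    # Same logic as above: walk each state's modules and yield
    # (module name, resource key, resource) triples.
    for filename in filenames:
        with open(filename, 'r') as json_file:
            state = json.load(json_file)
        for module in state['modules']:
            name = module['path'][-1]
            for key, resource in module['resources'].items():
                yield name, key, resource

state = {"modules": [{"path": ["root"], "resources": {
    "openstack_compute_instance_v2.k8s_node": {
        "type": "openstack_compute_instance_v2"}}}]}

with tempfile.TemporaryDirectory() as tmp:
    path = os.path.join(tmp, "terraform.tfstate")
    with open(path, "w") as f:
        json.dump(state, f)
    found = list(iterresources([path]))

print(found[0][:2])  # ('root', 'openstack_compute_instance_v2.k8s_node')
```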
|||
|
|||
## READ RESOURCES |
|||
PARSERS = {} |
|||
|
|||
|
|||
def _clean_dc(dcname): |
|||
# Consul DCs are strictly alphanumeric with underscores and hyphens - |
|||
# ensure that the consul_dc attribute meets these requirements. |
|||
    return re.sub(r'[^\w_\-]', '-', dcname) |
|||
|
|||
|
|||
def iterhosts(resources): |
|||
'''yield host tuples of (name, attributes, groups)''' |
|||
for module_name, key, resource in resources: |
|||
resource_type, name = key.split('.', 1) |
|||
try: |
|||
parser = PARSERS[resource_type] |
|||
except KeyError: |
|||
continue |
|||
|
|||
yield parser(resource, module_name) |
|||
|
|||
|
|||
def parses(prefix): |
|||
def inner(func): |
|||
PARSERS[prefix] = func |
|||
return func |
|||
|
|||
return inner |
|||
|
|||
|
|||
def calculate_mantl_vars(func): |
|||
"""calculate Mantl vars""" |
|||
|
|||
@wraps(func) |
|||
def inner(*args, **kwargs): |
|||
name, attrs, groups = func(*args, **kwargs) |
|||
|
|||
# attrs |
|||
if attrs.get('role', '') == 'control': |
|||
attrs['consul_is_server'] = True |
|||
else: |
|||
attrs['consul_is_server'] = False |
|||
|
|||
# groups |
|||
if attrs.get('publicly_routable', False): |
|||
groups.append('publicly_routable') |
|||
|
|||
return name, attrs, groups |
|||
|
|||
return inner |
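The decorator's effect on a parser can be seen with a stub (a sketch; `fake_parser` is hypothetical, and the wrapper body is a condensed copy of the logic above):

```python
from functools import wraps

def calculate_mantl_vars(func):
    """Post-process the (name, attrs, groups) tuple a parser returns."""
    @wraps(func)
    def inner(*args, **kwargs):
        name, attrs, groups = func(*args, **kwargs)
        # control nodes double as consul servers
        attrs['consul_is_server'] = attrs.get('role', '') == 'control'
        if attrs.get('publicly_routable', False):
            groups.append('publicly_routable')
        return name, attrs, groups
    return inner

@calculate_mantl_vars
def fake_parser(resource, module_name):
    return 'node-1', {'role': 'control', 'publicly_routable': True}, []

name, attrs, groups = fake_parser({}, 'root')
print(attrs['consul_is_server'], groups)  # True ['publicly_routable']
```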
|||
|
|||
|
|||
def _parse_prefix(source, prefix, sep='.'): |
|||
for compkey, value in source.items(): |
|||
try: |
|||
curprefix, rest = compkey.split(sep, 1) |
|||
except ValueError: |
|||
continue |
|||
|
|||
if curprefix != prefix or rest == '#': |
|||
continue |
|||
|
|||
yield rest, value |
|||
|
|||
|
|||
def parse_attr_list(source, prefix, sep='.'): |
|||
attrs = defaultdict(dict) |
|||
for compkey, value in _parse_prefix(source, prefix, sep): |
|||
idx, key = compkey.split(sep, 1) |
|||
attrs[idx][key] = value |
|||
|
|||
return attrs.values() |
|||
|
|||
|
|||
def parse_dict(source, prefix, sep='.'): |
|||
return dict(_parse_prefix(source, prefix, sep)) |
|||
|
|||
|
|||
def parse_list(source, prefix, sep='.'): |
|||
return [value for _, value in _parse_prefix(source, prefix, sep)] |
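Applied to the flattened attribute maps Terraform stores in the tfstate (`network.0.name`, `rule.#`, ...), these helpers recover the nested structure. A self-contained copy for illustration (the sample attributes mirror the state file earlier in this change):

```python
from collections import defaultdict

def _parse_prefix(source, prefix, sep='.'):
    # Yield (rest-of-key, value) for keys under the prefix, skipping counters.
    for compkey, value in source.items():
        try:
            curprefix, rest = compkey.split(sep, 1)
        except ValueError:
            continue
        if curprefix != prefix or rest == '#':
            continue
        yield rest, value

def parse_attr_list(source, prefix, sep='.'):
    # Group "prefix.N.key" entries into one dict per index N.
    attrs = defaultdict(dict)
    for compkey, value in _parse_prefix(source, prefix, sep):
        idx, key = compkey.split(sep, 1)
        attrs[idx][key] = value
    return attrs.values()

raw = {"network.#": "1",
       "network.0.name": "internal",
       "network.0.fixed_ip_v4": "10.230.7.86",
       "metadata.ssh_user": "ubuntu"}
networks = list(parse_attr_list(raw, "network"))
print(networks)  # [{'name': 'internal', 'fixed_ip_v4': '10.230.7.86'}]
```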
|||
|
|||
|
|||
def parse_bool(string_form): |
|||
token = string_form.lower()[0] |
|||
|
|||
if token == 't': |
|||
return True |
|||
elif token == 'f': |
|||
return False |
|||
else: |
|||
raise ValueError('could not convert %r to a bool' % string_form) |
|||
|
|||
|
|||
@parses('triton_machine') |
|||
@calculate_mantl_vars |
|||
def triton_machine(resource, module_name): |
|||
raw_attrs = resource['primary']['attributes'] |
|||
name = raw_attrs.get('name') |
|||
groups = [] |
|||
|
|||
attrs = { |
|||
'id': raw_attrs['id'], |
|||
'dataset': raw_attrs['dataset'], |
|||
'disk': raw_attrs['disk'], |
|||
'firewall_enabled': parse_bool(raw_attrs['firewall_enabled']), |
|||
'image': raw_attrs['image'], |
|||
'ips': parse_list(raw_attrs, 'ips'), |
|||
'memory': raw_attrs['memory'], |
|||
'name': raw_attrs['name'], |
|||
'networks': parse_list(raw_attrs, 'networks'), |
|||
'package': raw_attrs['package'], |
|||
'primary_ip': raw_attrs['primaryip'], |
|||
'root_authorized_keys': raw_attrs['root_authorized_keys'], |
|||
'state': raw_attrs['state'], |
|||
'tags': parse_dict(raw_attrs, 'tags'), |
|||
'type': raw_attrs['type'], |
|||
'user_data': raw_attrs['user_data'], |
|||
'user_script': raw_attrs['user_script'], |
|||
|
|||
# ansible |
|||
'ansible_ssh_host': raw_attrs['primaryip'], |
|||
'ansible_ssh_port': 22, |
|||
'ansible_ssh_user': 'root', # it's "root" on Triton by default |
|||
|
|||
# generic |
|||
'public_ipv4': raw_attrs['primaryip'], |
|||
'provider': 'triton', |
|||
} |
|||
|
|||
# private IPv4 |
|||
for ip in attrs['ips']: |
|||
if ip.startswith('10') or ip.startswith('192.168'): # private IPs |
|||
attrs['private_ipv4'] = ip |
|||
break |
|||
|
|||
if 'private_ipv4' not in attrs: |
|||
attrs['private_ipv4'] = attrs['public_ipv4'] |
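The `startswith` test above is an approximation (it would also match addresses like `100.x.x.x` as `10.*`, and misses `172.16/12`); the stdlib `ipaddress` module offers a stricter check. A sketch with a hypothetical helper:

```python
import ipaddress

def first_private(ips, fallback):
    """Return the first RFC 1918 private address, else the public fallback."""
    for ip in ips:
        if ipaddress.ip_address(ip).is_private:
            return ip
    return fallback

# 100.64.1.1 is shared address space, not RFC 1918 private
print(first_private(["100.64.1.1", "192.168.0.5"], "8.8.8.8"))  # 192.168.0.5
```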
|||
|
|||
# attrs specific to Mantl |
|||
attrs.update({ |
|||
'consul_dc': _clean_dc(attrs['tags'].get('dc', 'none')), |
|||
'role': attrs['tags'].get('role', 'none'), |
|||
'ansible_python_interpreter': attrs['tags'].get('python_bin', 'python') |
|||
}) |
|||
|
|||
# add groups based on attrs |
|||
groups.append('triton_image=' + attrs['image']) |
|||
groups.append('triton_package=' + attrs['package']) |
|||
groups.append('triton_state=' + attrs['state']) |
|||
groups.append('triton_firewall_enabled=%s' % attrs['firewall_enabled']) |
|||
groups.extend('triton_tags_%s=%s' % item |
|||
for item in attrs['tags'].items()) |
|||
groups.extend('triton_network=' + network |
|||
for network in attrs['networks']) |
|||
|
|||
# groups specific to Mantl |
|||
groups.append('role=' + attrs['role']) |
|||
groups.append('dc=' + attrs['consul_dc']) |
|||
|
|||
return name, attrs, groups |
|||
|
|||
|
|||
@parses('digitalocean_droplet')
@calculate_mantl_vars
def digitalocean_host(resource, tfvars=None):
    raw_attrs = resource['primary']['attributes']
    name = raw_attrs['name']
    groups = []

    attrs = {
        'id': raw_attrs['id'],
        'image': raw_attrs['image'],
        'ipv4_address': raw_attrs['ipv4_address'],
        'locked': parse_bool(raw_attrs['locked']),
        'metadata': json.loads(raw_attrs.get('user_data', '{}')),
        'region': raw_attrs['region'],
        'size': raw_attrs['size'],
        'ssh_keys': parse_list(raw_attrs, 'ssh_keys'),
        'status': raw_attrs['status'],
        # ansible
        'ansible_ssh_host': raw_attrs['ipv4_address'],
        'ansible_ssh_port': 22,
        'ansible_ssh_user': 'root',  # it's always "root" on DO
        # generic
        'public_ipv4': raw_attrs['ipv4_address'],
        'private_ipv4': raw_attrs.get('ipv4_address_private',
                                      raw_attrs['ipv4_address']),
        'provider': 'digitalocean',
    }

    # attrs specific to Mantl
    attrs.update({
        'consul_dc': _clean_dc(attrs['metadata'].get('dc', attrs['region'])),
        'role': attrs['metadata'].get('role', 'none'),
        'ansible_python_interpreter': attrs['metadata'].get('python_bin', 'python')
    })

    # add groups based on attrs
    groups.append('do_image=' + attrs['image'])
    groups.append('do_locked=%s' % attrs['locked'])
    groups.append('do_region=' + attrs['region'])
    groups.append('do_size=' + attrs['size'])
    groups.append('do_status=' + attrs['status'])
    groups.extend('do_metadata_%s=%s' % item
                  for item in attrs['metadata'].items())

    # groups specific to Mantl
    groups.append('role=' + attrs['role'])
    groups.append('dc=' + attrs['consul_dc'])

    return name, attrs, groups

@parses('softlayer_virtualserver')
@calculate_mantl_vars
def softlayer_host(resource, module_name):
    raw_attrs = resource['primary']['attributes']
    name = raw_attrs['name']
    groups = []

    attrs = {
        'id': raw_attrs['id'],
        'image': raw_attrs['image'],
        'ipv4_address': raw_attrs['ipv4_address'],
        'metadata': json.loads(raw_attrs.get('user_data', '{}')),
        'region': raw_attrs['region'],
        'ram': raw_attrs['ram'],
        'cpu': raw_attrs['cpu'],
        'ssh_keys': parse_list(raw_attrs, 'ssh_keys'),
        'public_ipv4': raw_attrs['ipv4_address'],
        'private_ipv4': raw_attrs['ipv4_address_private'],
        'ansible_ssh_host': raw_attrs['ipv4_address'],
        'ansible_ssh_port': 22,
        'ansible_ssh_user': 'root',
        'provider': 'softlayer',
    }

    # attrs specific to Mantl
    attrs.update({
        'consul_dc': _clean_dc(attrs['metadata'].get('dc', attrs['region'])),
        'role': attrs['metadata'].get('role', 'none'),
        'ansible_python_interpreter': attrs['metadata'].get('python_bin', 'python')
    })

    # groups specific to Mantl
    groups.append('role=' + attrs['role'])
    groups.append('dc=' + attrs['consul_dc'])

    return name, attrs, groups

@parses('openstack_compute_instance_v2')
@calculate_mantl_vars
def openstack_host(resource, module_name):
    raw_attrs = resource['primary']['attributes']
    name = raw_attrs['name']
    groups = []

    attrs = {
        'access_ip_v4': raw_attrs['access_ip_v4'],
        'access_ip_v6': raw_attrs['access_ip_v6'],
        'flavor': parse_dict(raw_attrs, 'flavor',
                             sep='_'),
        'id': raw_attrs['id'],
        'image': parse_dict(raw_attrs, 'image',
                            sep='_'),
        'key_pair': raw_attrs['key_pair'],
        'metadata': parse_dict(raw_attrs, 'metadata'),
        'network': parse_attr_list(raw_attrs, 'network'),
        'region': raw_attrs.get('region', ''),
        'security_groups': parse_list(raw_attrs, 'security_groups'),
        # ansible
        'ansible_ssh_port': 22,
        # workaround for an OpenStack bug where hosts have a different domain
        # after they're restarted
        'host_domain': 'novalocal',
        'use_host_domain': True,
        # generic
        'public_ipv4': raw_attrs['access_ip_v4'],
        'private_ipv4': raw_attrs['access_ip_v4'],
        'provider': 'openstack',
    }

    if 'floating_ip' in raw_attrs:
        attrs['private_ipv4'] = raw_attrs['network.0.fixed_ip_v4']

    try:
        attrs.update({
            'ansible_ssh_host': raw_attrs['access_ip_v4'],
            'publicly_routable': True,
        })
    except (KeyError, ValueError):
        attrs.update({'ansible_ssh_host': '', 'publicly_routable': False})

    # attrs specific to Ansible
    if 'metadata.ssh_user' in raw_attrs:
        attrs['ansible_ssh_user'] = raw_attrs['metadata.ssh_user']

    # attrs specific to Mantl
    attrs.update({
        'consul_dc': _clean_dc(attrs['metadata'].get('dc', module_name)),
        'role': attrs['metadata'].get('role', 'none'),
        'ansible_python_interpreter': attrs['metadata'].get('python_bin', 'python')
    })

    # add groups based on attrs
    groups.append('os_image=' + attrs['image']['name'])
    groups.append('os_flavor=' + attrs['flavor']['name'])
    groups.extend('os_metadata_%s=%s' % item
                  for item in attrs['metadata'].items())
    groups.append('os_region=' + attrs['region'])

    # groups specific to Mantl
    groups.append('role=' + attrs['metadata'].get('role', 'none'))
    groups.append('dc=' + attrs['consul_dc'])

    # groups specific to kubespray
    for group in attrs['metadata'].get('kubespray_groups', "").split(","):
        groups.append(group)

    return name, attrs, groups

@parses('aws_instance')
@calculate_mantl_vars
def aws_host(resource, module_name):
    name = resource['primary']['attributes']['tags.Name']
    raw_attrs = resource['primary']['attributes']

    groups = []

    attrs = {
        'ami': raw_attrs['ami'],
        'availability_zone': raw_attrs['availability_zone'],
        'ebs_block_device': parse_attr_list(raw_attrs, 'ebs_block_device'),
        'ebs_optimized': parse_bool(raw_attrs['ebs_optimized']),
        'ephemeral_block_device': parse_attr_list(raw_attrs,
                                                  'ephemeral_block_device'),
        'id': raw_attrs['id'],
        'key_name': raw_attrs['key_name'],
        'private': parse_dict(raw_attrs, 'private',
                              sep='_'),
        'public': parse_dict(raw_attrs, 'public',
                             sep='_'),
        'root_block_device': parse_attr_list(raw_attrs, 'root_block_device'),
        'security_groups': parse_list(raw_attrs, 'security_groups'),
        'subnet': parse_dict(raw_attrs, 'subnet',
                             sep='_'),
        'tags': parse_dict(raw_attrs, 'tags'),
        'tenancy': raw_attrs['tenancy'],
        'vpc_security_group_ids': parse_list(raw_attrs,
                                             'vpc_security_group_ids'),
        # ansible-specific
        'ansible_ssh_port': 22,
        'ansible_ssh_host': raw_attrs['public_ip'],
        # generic
        'public_ipv4': raw_attrs['public_ip'],
        'private_ipv4': raw_attrs['private_ip'],
        'provider': 'aws',
    }

    # attrs specific to Ansible
    if 'tags.sshUser' in raw_attrs:
        attrs['ansible_ssh_user'] = raw_attrs['tags.sshUser']
    if 'tags.sshPrivateIp' in raw_attrs:
        attrs['ansible_ssh_host'] = raw_attrs['private_ip']

    # attrs specific to Mantl
    attrs.update({
        'consul_dc': _clean_dc(attrs['tags'].get('dc', module_name)),
        'role': attrs['tags'].get('role', 'none'),
        'ansible_python_interpreter': attrs['tags'].get('python_bin', 'python')
    })

    # add groups based on attrs
    groups.extend(['aws_ami=' + attrs['ami'],
                   'aws_az=' + attrs['availability_zone'],
                   'aws_key_name=' + attrs['key_name'],
                   'aws_tenancy=' + attrs['tenancy']])
    groups.extend('aws_tag_%s=%s' % item for item in attrs['tags'].items())
    groups.extend('aws_vpc_security_group=' + group
                  for group in attrs['vpc_security_group_ids'])
    groups.extend('aws_subnet_%s=%s' % subnet
                  for subnet in attrs['subnet'].items())

    # groups specific to Mantl
    groups.append('role=' + attrs['role'])
    groups.append('dc=' + attrs['consul_dc'])

    return name, attrs, groups

@parses('google_compute_instance')
@calculate_mantl_vars
def gce_host(resource, module_name):
    name = resource['primary']['id']
    raw_attrs = resource['primary']['attributes']
    groups = []

    # network interfaces
    interfaces = parse_attr_list(raw_attrs, 'network_interface')
    for interface in interfaces:
        interface['access_config'] = parse_attr_list(interface,
                                                     'access_config')
        # copy the keys so entries can be deleted while iterating
        for key in list(interface.keys()):
            if '.' in key:
                del interface[key]

    # general attrs
    attrs = {
        'can_ip_forward': raw_attrs['can_ip_forward'] == 'true',
        'disks': parse_attr_list(raw_attrs, 'disk'),
        'machine_type': raw_attrs['machine_type'],
        'metadata': parse_dict(raw_attrs, 'metadata'),
        'network': parse_attr_list(raw_attrs, 'network'),
        'network_interface': interfaces,
        'self_link': raw_attrs['self_link'],
        'service_account': parse_attr_list(raw_attrs, 'service_account'),
        'tags': parse_list(raw_attrs, 'tags'),
        'zone': raw_attrs['zone'],
        # ansible
        'ansible_ssh_port': 22,
        'provider': 'gce',
    }

    # attrs specific to Ansible
    if 'metadata.ssh_user' in raw_attrs:
        attrs['ansible_ssh_user'] = raw_attrs['metadata.ssh_user']

    # attrs specific to Mantl
    attrs.update({
        'consul_dc': _clean_dc(attrs['metadata'].get('dc', module_name)),
        'role': attrs['metadata'].get('role', 'none'),
        'ansible_python_interpreter': attrs['metadata'].get('python_bin', 'python')
    })

    try:
        attrs.update({
            'ansible_ssh_host': interfaces[0]['access_config'][0]['nat_ip']
            or interfaces[0]['access_config'][0]['assigned_nat_ip'],
            'public_ipv4': interfaces[0]['access_config'][0]['nat_ip']
            or interfaces[0]['access_config'][0]['assigned_nat_ip'],
            'private_ipv4': interfaces[0]['address'],
            'publicly_routable': True,
        })
    except (KeyError, ValueError):
        attrs.update({'ansible_ssh_host': '', 'publicly_routable': False})

    # add groups based on attrs
    groups.extend('gce_image=' + disk['image'] for disk in attrs['disks'])
    groups.append('gce_machine_type=' + attrs['machine_type'])
    groups.extend('gce_metadata_%s=%s' % (key, value)
                  for (key, value) in attrs['metadata'].items()
                  if key not in set(['sshKeys']))
    groups.extend('gce_tag=' + tag for tag in attrs['tags'])
    groups.append('gce_zone=' + attrs['zone'])

    if attrs['can_ip_forward']:
        groups.append('gce_ip_forward')
    if attrs['publicly_routable']:
        groups.append('gce_publicly_routable')

    # groups specific to Mantl
    groups.append('role=' + attrs['metadata'].get('role', 'none'))
    groups.append('dc=' + attrs['consul_dc'])

    return name, attrs, groups

@parses('vsphere_virtual_machine')
@calculate_mantl_vars
def vsphere_host(resource, module_name):
    raw_attrs = resource['primary']['attributes']
    network_attrs = parse_dict(raw_attrs, 'network_interface')
    network = parse_dict(network_attrs, '0')
    ip_address = network.get('ipv4_address', network['ip_address'])
    name = raw_attrs['name']
    groups = []

    attrs = {
        'id': raw_attrs['id'],
        'ip_address': ip_address,
        'private_ipv4': ip_address,
        'public_ipv4': ip_address,
        'metadata': parse_dict(raw_attrs, 'custom_configuration_parameters'),
        'ansible_ssh_port': 22,
        'provider': 'vsphere',
    }

    try:
        attrs.update({
            'ansible_ssh_host': ip_address,
        })
    except (KeyError, ValueError):
        attrs.update({'ansible_ssh_host': ''})

    # attrs specific to Mantl
    attrs.update({
        'consul_dc': _clean_dc(attrs['metadata'].get('consul_dc', module_name)),
        'role': attrs['metadata'].get('role', 'none'),
        'ansible_python_interpreter': attrs['metadata'].get('python_bin', 'python')
    })

    # attrs specific to Ansible
    if 'ssh_user' in attrs['metadata']:
        attrs['ansible_ssh_user'] = attrs['metadata']['ssh_user']

    # groups specific to Mantl
    groups.append('role=' + attrs['role'])
    groups.append('dc=' + attrs['consul_dc'])

    return name, attrs, groups

@parses('azure_instance')
@calculate_mantl_vars
def azure_host(resource, module_name):
    name = resource['primary']['attributes']['name']
    raw_attrs = resource['primary']['attributes']

    groups = []

    attrs = {
        'automatic_updates': raw_attrs['automatic_updates'],
        'description': raw_attrs['description'],
        'hosted_service_name': raw_attrs['hosted_service_name'],
        'id': raw_attrs['id'],
        'image': raw_attrs['image'],
        'ip_address': raw_attrs['ip_address'],
        'location': raw_attrs['location'],
        'name': raw_attrs['name'],
        'reverse_dns': raw_attrs['reverse_dns'],
        'security_group': raw_attrs['security_group'],
        'size': raw_attrs['size'],
        'ssh_key_thumbprint': raw_attrs['ssh_key_thumbprint'],
        'subnet': raw_attrs['subnet'],
        'username': raw_attrs['username'],
        'vip_address': raw_attrs['vip_address'],
        'virtual_network': raw_attrs['virtual_network'],
        'endpoint': parse_attr_list(raw_attrs, 'endpoint'),
        # ansible
        'ansible_ssh_port': 22,
        'ansible_ssh_user': raw_attrs['username'],
        'ansible_ssh_host': raw_attrs['vip_address'],
    }

    # attrs specific to Mantl
    attrs.update({
        'consul_dc': attrs['location'].lower().replace(" ", "-"),
        'role': attrs['description']
    })

    # add groups based on attrs
    groups.extend(['azure_image=' + attrs['image'],
                   'azure_location=' + attrs['location'].lower().replace(" ", "-"),
                   'azure_username=' + attrs['username'],
                   'azure_security_group=' + attrs['security_group']])

    # groups specific to Mantl
    groups.append('role=' + attrs['role'])
    groups.append('dc=' + attrs['consul_dc'])

    return name, attrs, groups

@parses('clc_server')
@calculate_mantl_vars
def clc_server(resource, module_name):
    raw_attrs = resource['primary']['attributes']
    name = raw_attrs.get('id')
    groups = []
    md = parse_dict(raw_attrs, 'metadata')
    attrs = {
        'metadata': md,
        'ansible_ssh_port': md.get('ssh_port', 22),
        'ansible_ssh_user': md.get('ssh_user', 'root'),
        'provider': 'clc',
        'publicly_routable': False,
    }

    try:
        attrs.update({
            'public_ipv4': raw_attrs['public_ip_address'],
            'private_ipv4': raw_attrs['private_ip_address'],
            'ansible_ssh_host': raw_attrs['public_ip_address'],
            'publicly_routable': True,
        })
    except (KeyError, ValueError):
        attrs.update({
            'ansible_ssh_host': raw_attrs['private_ip_address'],
            'private_ipv4': raw_attrs['private_ip_address'],
        })

    # attrs specific to Mantl
    attrs.update({
        'consul_dc': _clean_dc(attrs['metadata'].get('dc', module_name)),
        'role': attrs['metadata'].get('role', 'none'),
    })

    # groups specific to Mantl
    groups.append('role=' + attrs['role'])
    groups.append('dc=' + attrs['consul_dc'])
    return name, attrs, groups

## QUERY TYPES
def query_host(hosts, target):
    for name, attrs, _ in hosts:
        if name == target:
            return attrs

    return {}


def query_list(hosts):
    groups = defaultdict(dict)
    meta = {}

    for name, attrs, hostgroups in hosts:
        for group in set(hostgroups):
            groups[group].setdefault('hosts', [])
            groups[group]['hosts'].append(name)

        meta[name] = attrs

    groups['_meta'] = {'hostvars': meta}
    return groups


def query_hostfile(hosts):
    out = ['## begin hosts generated by terraform.py ##']
    out.extend(
        '{}\t{}'.format(attrs['ansible_ssh_host'].ljust(16), name)
        for name, attrs, _ in hosts
    )

    out.append('## end hosts generated by terraform.py ##')
    return '\n'.join(out)

def main():
    parser = argparse.ArgumentParser(
        __file__, __doc__,
        formatter_class=argparse.ArgumentDefaultsHelpFormatter, )
    modes = parser.add_mutually_exclusive_group(required=True)
    modes.add_argument('--list',
                       action='store_true',
                       help='list all variables')
    modes.add_argument('--host', help='list variables for a single host')
    modes.add_argument('--version',
                       action='store_true',
                       help='print version and exit')
    modes.add_argument('--hostfile',
                       action='store_true',
                       help='print hosts as a /etc/hosts snippet')
    parser.add_argument('--pretty',
                        action='store_true',
                        help='pretty-print output JSON')
    parser.add_argument('--nometa',
                        action='store_true',
                        help='with --list, exclude hostvars')
    default_root = os.environ.get('TERRAFORM_STATE_ROOT',
                                  os.path.abspath(os.path.join(os.path.dirname(__file__),
                                                               '..', '..', )))
    parser.add_argument('--root',
                        default=default_root,
                        help='custom root to search for `.tfstate`s in')

    args = parser.parse_args()

    if args.version:
        print('%s %s' % (__file__, VERSION))
        parser.exit()

    hosts = iterhosts(iterresources(tfstates(args.root)))
    if args.list:
        output = query_list(hosts)
        if args.nometa:
            del output['_meta']
        print(json.dumps(output, indent=4 if args.pretty else None))
    elif args.host:
        output = query_host(hosts, args.host)
        print(json.dumps(output, indent=4 if args.pretty else None))
    elif args.hostfile:
        output = query_hostfile(hosts)
        print(output)

    parser.exit()

if __name__ == '__main__':
    main()
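# Example invocations (illustrative; the flags are the ones defined in main()
# above, and the playbook name below is hypothetical):
#
#   TERRAFORM_STATE_ROOT=. ./terraform.py --list --pretty
#   ./terraform.py --host some-node-name
#   ./terraform.py --hostfile
#
# As an Ansible dynamic inventory, the script is typically passed straight to
# ansible-playbook, e.g.:
#
#   ansible-playbook -i terraform.py cluster.yml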