
GlusterFS with external VMs, terraform/os included

pull/641/head
Pablo Moreno · 8 years ago · commit 27e239c8d6
28 changed files with 595 additions and 1 deletion
  1.   2  cluster.yml
  2.  92  contrib/network-storage/glusterfs/README.md
  3.  17  contrib/network-storage/glusterfs/glusterfs.yml
  4.  44  contrib/network-storage/glusterfs/inventory.example
  5.  44  contrib/network-storage/glusterfs/roles/glusterfs/README.md
  6.  11  contrib/network-storage/glusterfs/roles/glusterfs/client/defaults/main.yml
  7.  30  contrib/network-storage/glusterfs/roles/glusterfs/client/meta/main.yml
  8.  16  contrib/network-storage/glusterfs/roles/glusterfs/client/tasks/main.yml
  9.  24  contrib/network-storage/glusterfs/roles/glusterfs/client/tasks/setup-Debian.yml
 10.  10  contrib/network-storage/glusterfs/roles/glusterfs/client/tasks/setup-RedHat.yml
 11.  13  contrib/network-storage/glusterfs/roles/glusterfs/server/defaults/main.yml
 12.  30  contrib/network-storage/glusterfs/roles/glusterfs/server/meta/main.yml
 13.  82  contrib/network-storage/glusterfs/roles/glusterfs/server/tasks/main.yml
 14.  26  contrib/network-storage/glusterfs/roles/glusterfs/server/tasks/setup-Debian.yml
 15.  11  contrib/network-storage/glusterfs/roles/glusterfs/server/tasks/setup-RedHat.yml
 16.   1  contrib/network-storage/glusterfs/roles/glusterfs/server/templates/test-file.txt
 17.   5  contrib/network-storage/glusterfs/roles/glusterfs/server/tests/test.yml
 18.   2  contrib/network-storage/glusterfs/roles/glusterfs/server/vars/Debian.yml
 19.   2  contrib/network-storage/glusterfs/roles/glusterfs/server/vars/RedHat.yml
 20.  19  contrib/network-storage/glusterfs/roles/kubernetes-pv/ansible/tasks/main.yaml
 21.  24  contrib/network-storage/glusterfs/roles/kubernetes-pv/ansible/templates/glusterfs-kubernetes-endpoint.json.j2
 22.  14  contrib/network-storage/glusterfs/roles/kubernetes-pv/ansible/templates/glusterfs-kubernetes-pv.yml.j2
 23.   1  contrib/network-storage/glusterfs/roles/kubernetes-pv/lib
 24.   2  contrib/network-storage/glusterfs/roles/kubernetes-pv/meta/main.yaml
 25.  15  contrib/terraform/openstack/README.md
 26.  29  contrib/terraform/openstack/kubespray.tf
 27.  21  contrib/terraform/openstack/variables.tf
 28.   9  contrib/terraform/terraform.py

cluster.yml (2 changes)

@@ -12,7 +12,7 @@
   any_errors_fatal: true
   gather_facts: true
 
-- hosts: all
+- hosts: all:!network-storage
   any_errors_fatal: true
   roles:
     - { role: kubernetes/preinstall, tags: preinstall }

contrib/network-storage/glusterfs/README.md (new file, 92 lines)

@@ -0,0 +1,92 @@
# Deploying a Kargo Kubernetes Cluster with GlusterFS
You can either deploy using Ansible on its own by supplying your own inventory file, or use Terraform to create the VMs and then provide a dynamic inventory to Ansible. The following two sections are self-contained; you don't need to go through one to use the other. So, if you want to provision with Terraform, you can skip the **Using an Ansible inventory** section, and if you want to provision with a pre-built Ansible inventory, you can skip the **Using Terraform and Ansible** section.
## Using an Ansible inventory
In the same directory as this README you should find a file named `inventory.example`, which contains an example setup. Please note that, in addition to the Kubernetes nodes/masters, we define a set of machines for GlusterFS and add them to the group `[gfs-cluster]`, which in turn is added to the larger `[network-storage]` group as a child group.
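The relevant part of the group layout looks like this (condensed from `inventory.example`):
```
[gfs-cluster]
gfs_node1
gfs_node2
gfs_node3

[network-storage:children]
gfs-cluster
```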
Change that file to reflect your local setup (adding or removing machines and setting the appropriate IP addresses), and save it as `inventory/k8s_gfs_inventory`. Make sure that the settings in `inventory/group_vars/all.yml` make sense for your deployment. Then change to the kargo root directory and execute (assuming the machines are all running Ubuntu):
```
ansible-playbook -b --become-user=root -i inventory/k8s_gfs_inventory --user=ubuntu ./cluster.yml
```
This will provision your Kubernetes cluster. Then, to provision and configure the GlusterFS cluster, from the same directory execute:
```
ansible-playbook -b --become-user=root -i inventory/k8s_gfs_inventory --user=ubuntu ./contrib/network-storage/glusterfs/glusterfs.yml
```
If your machines are not running Ubuntu, change `--user=ubuntu` to the correct user. Alternatively, if your Kubernetes machines use one OS and your GlusterFS machines another, you can instead set the `ansible_ssh_user=<correct-user>` variable per machine/VM in the inventory file that you just created:
```
k8s-master-1 ansible_ssh_host=192.168.0.147 ip=192.168.0.147 ansible_ssh_user=core
k8s-master-node-1 ansible_ssh_host=192.168.0.148 ip=192.168.0.148 ansible_ssh_user=core
k8s-master-node-2 ansible_ssh_host=192.168.0.146 ip=192.168.0.146 ansible_ssh_user=core
```
## Using Terraform and Ansible
The first step is to fill in a `my-kargo-gluster-cluster.tfvars` file with the desired specification for your cluster. An example with all required variables looks like this:
```
cluster_name = "cluster1"
number_of_k8s_masters = "1"
number_of_k8s_masters_no_floating_ip = "2"
number_of_k8s_nodes_no_floating_ip = "0"
number_of_k8s_nodes = "0"
public_key_path = "~/.ssh/my-desired-key.pub"
image = "Ubuntu 16.04"
ssh_user = "ubuntu"
flavor_k8s_node = "node-flavor-id-in-your-openstack"
flavor_k8s_master = "master-flavor-id-in-your-openstack"
network_name = "k8s-network"
floatingip_pool = "net_external"
# GlusterFS variables
flavor_gfs_node = "gluster-flavor-id-in-your-openstack"
image_gfs = "Ubuntu 16.04"
number_of_gfs_nodes_no_floating_ip = "3"
gfs_volume_size_in_gb = "50"
ssh_user_gfs = "ubuntu"
```
As explained in the general terraform/openstack guide, you need to source your OpenStack credentials file, add your ssh key to the ssh-agent, and set up the environment variables for Terraform:
```
$ source ~/.stackrc
$ eval $(ssh-agent -s)
$ ssh-add ~/.ssh/my-desired-key
$ echo Setting up Terraform creds && \
export TF_VAR_username=${OS_USERNAME} && \
export TF_VAR_password=${OS_PASSWORD} && \
export TF_VAR_tenant=${OS_TENANT_NAME} && \
export TF_VAR_auth_url=${OS_AUTH_URL}
```
Then, from the kargo directory (the root of the Git checkout), issue the following Terraform command to create the VMs for the cluster:
```
terraform apply -state=contrib/terraform/openstack/terraform.tfstate -var-file=my-kargo-gluster-cluster.tfvars contrib/terraform/openstack
```
This will create both your Kubernetes and GlusterFS VMs. Make sure that the Ansible file `contrib/terraform/openstack/group_vars/all.yml` includes any Ansible variables that you want to set (for instance, the OS used for bootstrapping).
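For instance, a minimal tweak to that file might look like the following sketch (the `bootstrap_os` variable name is an assumption drawn from kargo's group_vars of that era; verify it against the `group_vars/all.yml` shipped in your checkout):
```
# Assumed kargo variable selecting the bootstrap flavour; adjust to your version.
bootstrap_os: ubuntu
```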
Then, provision your Kubernetes (Kargo) cluster with the following ansible call:
```
ansible-playbook -b --become-user=root -i contrib/terraform/openstack/hosts ./cluster.yml
```
Finally, provision the GlusterFS nodes and add the PersistentVolume setup for GlusterFS in Kubernetes through the following Ansible call:
```
ansible-playbook -b --become-user=root -i contrib/terraform/openstack/hosts ./contrib/network-storage/glusterfs/glusterfs.yml
```
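After this playbook completes, the cluster contains a `glusterfs` Endpoints object and a PersistentVolume named `glusterfs`, sized from the detected Gluster mount. A claim along the following lines (a sketch only; the claim name and requested size are illustrative) should bind to that volume:
```
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: gluster-claim
spec:
  accessModes:
    - ReadWriteMany   # matches the access mode of the provisioned PV
  resources:
    requests:
      storage: 10Gi   # must fit within the PV capacity
```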
If you need to destroy the cluster, you can run:
```
terraform destroy -state=contrib/terraform/openstack/terraform.tfstate -var-file=my-kargo-gluster-cluster.tfvars contrib/terraform/openstack
```

contrib/network-storage/glusterfs/glusterfs.yml (new file, 17 lines)

@@ -0,0 +1,17 @@
---
- hosts: all
  gather_facts: true

- hosts: gfs-cluster
  roles:
    - { role: glusterfs/server }

- hosts: k8s-cluster
  roles:
    - { role: glusterfs/client }

- hosts: kube-master[0]
  roles:
    - { role: kubernetes-pv/lib }
    - { role: kubernetes-pv }

contrib/network-storage/glusterfs/inventory.example (new file, 44 lines)

@@ -0,0 +1,44 @@
# ## Configure 'ip' variable to bind kubernetes services on a
# ## different ip than the default iface
# node1 ansible_ssh_host=95.54.0.12 # ip=10.3.0.1
# node2 ansible_ssh_host=95.54.0.13 # ip=10.3.0.2
# node3 ansible_ssh_host=95.54.0.14 # ip=10.3.0.3
# node4 ansible_ssh_host=95.54.0.15 # ip=10.3.0.4
# node5 ansible_ssh_host=95.54.0.16 # ip=10.3.0.5
# node6 ansible_ssh_host=95.54.0.17 # ip=10.3.0.6
#
# ## GlusterFS nodes
# ## Set disk_volume_device_1 to desired device for gluster brick, if different to /dev/vdb (default).
# ## As in the previous case, you can set ip to give direct communication on internal IPs
# gfs_node1 ansible_ssh_host=95.54.0.18 # disk_volume_device_1=/dev/vdc ip=10.3.0.7
# gfs_node2 ansible_ssh_host=95.54.0.19 # disk_volume_device_1=/dev/vdc ip=10.3.0.8
# gfs_node3 ansible_ssh_host=95.54.0.20 # disk_volume_device_1=/dev/vdc ip=10.3.0.9
# [kube-master]
# node1
# node2
# [etcd]
# node1
# node2
# node3
# [kube-node]
# node2
# node3
# node4
# node5
# node6
# [k8s-cluster:children]
# kube-node
# kube-master
# [gfs-cluster]
# gfs_node1
# gfs_node2
# gfs_node3
# [network-storage:children]
# gfs-cluster

contrib/network-storage/glusterfs/roles/glusterfs/README.md (new file, 44 lines)

@@ -0,0 +1,44 @@
# Ansible Role: GlusterFS
[![Build Status](https://travis-ci.org/geerlingguy/ansible-role-glusterfs.svg?branch=master)](https://travis-ci.org/geerlingguy/ansible-role-glusterfs)
Installs and configures GlusterFS on Linux.
## Requirements
For GlusterFS to communicate between servers, TCP ports `24007`, `24008`, and `24009`/`49152`+ (that base port, plus one additional port for each additional server in the cluster; the `49152`+ range applies to GlusterFS 3.4+), as well as TCP/UDP port `111`, must be open. You can open these using whatever firewall you wish (this can easily be configured using the `geerlingguy.firewall` role).
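If you use the `geerlingguy.firewall` role for this, the relevant variables might look like the following sketch (variable names are that role's documented defaults; the brick ports assume a two-brick GlusterFS 3.4+ cluster and must be adjusted to yours):
```
firewall_allowed_tcp_ports:
  - 111      # rpcbind/portmapper
  - 24007    # GlusterFS daemon
  - 24008    # GlusterFS management
  - 49152    # brick ports, one per brick on GlusterFS 3.4+
  - 49153
firewall_allowed_udp_ports:
  - 111
```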
This role performs basic installation and setup of Gluster, but it does not configure or mount bricks (volumes), since that step is easier to do in a series of plays in your own playbook. Ansible 1.9+ includes the [`gluster_volume`](https://docs.ansible.com/gluster_volume_module.html) module to ease the management of Gluster volumes.
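A play that creates and starts a replicated volume with `gluster_volume` could look roughly like this (a sketch only; the `gluster` group, volume name, and brick path are hypothetical, and the parameters mirror those used elsewhere in this contrib):
```
- hosts: gluster
  become: true
  tasks:
    - name: Create and start a replicated Gluster volume.
      gluster_volume:
        state: present
        name: examplevol
        brick: /srv/gluster/brick
        replicas: 2
        cluster: "{{ groups['gluster'] | join(',') }}"
        host: "{{ inventory_hostname }}"
        force: yes
      run_once: true
```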
## Role Variables
Available variables are listed below, along with default values (see `defaults/main.yml`):
```
glusterfs_default_release: ""
```
You can specify a `default_release` for apt on Debian/Ubuntu by overriding this variable. This is helpful if you need a different package or version for the main GlusterFS packages (e.g. GlusterFS 3.5.x instead of 3.2.x with the `wheezy-backports` default release on Debian Wheezy).
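For the Debian Wheezy scenario just described, the override would simply be (value taken from the example above):
```
glusterfs_default_release: wheezy-backports
```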
```
glusterfs_ppa_use: yes
glusterfs_ppa_version: "3.5"
```
For Ubuntu, specify whether to use the official Gluster PPA, and which version of the PPA to use. See Gluster's [Getting Started Guide](http://www.gluster.org/community/documentation/index.php/Getting_started_install) for more info.
## Dependencies
None.
## Example Playbook
```
- hosts: server
  roles:
    - geerlingguy.glusterfs
```
For a real-world use example, read through [Simple GlusterFS Setup with Ansible](http://www.jeffgeerling.com/blog/simple-glusterfs-setup-ansible), a blog post by this role's author, which is included in Chapter 8 of [Ansible for DevOps](https://www.ansiblefordevops.com/).
## License
MIT / BSD
## Author Information
This role was created in 2015 by [Jeff Geerling](http://www.jeffgeerling.com/), author of [Ansible for DevOps](https://www.ansiblefordevops.com/).

contrib/network-storage/glusterfs/roles/glusterfs/client/defaults/main.yml (new file, 11 lines)

@@ -0,0 +1,11 @@
---
# For Ubuntu.
glusterfs_default_release: ""
glusterfs_ppa_use: yes
glusterfs_ppa_version: "3.8"
# Gluster configuration.
gluster_mount_dir: /mnt/gluster
gluster_volume_node_mount_dir: /mnt/xfs-drive-gluster
gluster_brick_dir: "{{ gluster_volume_node_mount_dir }}/brick"
gluster_brick_name: gluster

contrib/network-storage/glusterfs/roles/glusterfs/client/meta/main.yml (new file, 30 lines)

@@ -0,0 +1,30 @@
---
dependencies: []

galaxy_info:
  author: geerlingguy
  description: GlusterFS installation for Linux.
  company: "Midwestern Mac, LLC"
  license: "license (BSD, MIT)"
  min_ansible_version: 2.0
  platforms:
    - name: EL
      versions:
        - 6
        - 7
    - name: Ubuntu
      versions:
        - precise
        - trusty
        - xenial
    - name: Debian
      versions:
        - wheezy
        - jessie
  galaxy_tags:
    - system
    - networking
    - cloud
    - clustering
    - files
    - sharing

contrib/network-storage/glusterfs/roles/glusterfs/client/tasks/main.yml (new file, 16 lines)

@@ -0,0 +1,16 @@
---
# This is meant for Ubuntu and RedHat installations, where apparently the glusterfs-client
# is not used from inside hyperkube and needs to be installed as part of the system.

# Setup/install tasks.
- include: setup-RedHat.yml
  when: ansible_os_family == 'RedHat' and groups['gfs-cluster'] is defined

- include: setup-Debian.yml
  when: ansible_os_family == 'Debian' and groups['gfs-cluster'] is defined

- name: Ensure Gluster mount directories exist.
  file: "path={{ item }} state=directory mode=0775"
  with_items:
    - "{{ gluster_mount_dir }}"
  when: ansible_os_family in ["Debian","RedHat"] and groups['gfs-cluster'] is defined

contrib/network-storage/glusterfs/roles/glusterfs/client/tasks/setup-Debian.yml (new file, 24 lines)

@@ -0,0 +1,24 @@
---
- name: Add PPA for GlusterFS.
  apt_repository:
    repo: 'ppa:gluster/glusterfs-{{ glusterfs_ppa_version }}'
    state: present
    update_cache: yes
  register: glusterfs_ppa_added
  when: glusterfs_ppa_use

- name: Ensure GlusterFS client will reinstall if the PPA was just added.
  apt:
    name: "{{ item }}"
    state: absent
  with_items:
    - glusterfs-client
  when: glusterfs_ppa_added.changed

- name: Ensure GlusterFS client is installed.
  apt:
    name: "{{ item }}"
    state: installed
    default_release: "{{ glusterfs_default_release }}"
  with_items:
    - glusterfs-client

contrib/network-storage/glusterfs/roles/glusterfs/client/tasks/setup-RedHat.yml (new file, 10 lines)

@@ -0,0 +1,10 @@
---
- name: Install Prerequisites
  yum: name={{ item }} state=present
  with_items:
    - "centos-release-gluster{{ glusterfs_default_release }}"

- name: Install Packages
  yum: name={{ item }} state=present
  with_items:
    - glusterfs-client

contrib/network-storage/glusterfs/roles/glusterfs/server/defaults/main.yml (new file, 13 lines)

@@ -0,0 +1,13 @@
---
# For Ubuntu.
glusterfs_default_release: ""
glusterfs_ppa_use: yes
glusterfs_ppa_version: "3.8"
# Gluster configuration.
gluster_mount_dir: /mnt/gluster
gluster_volume_node_mount_dir: /mnt/xfs-drive-gluster
gluster_brick_dir: "{{ gluster_volume_node_mount_dir }}/brick"
gluster_brick_name: gluster
# Default device to mount for xfs formatting, terraform overrides this by setting the variable in the inventory.
disk_volume_device_1: /dev/vdb

contrib/network-storage/glusterfs/roles/glusterfs/server/meta/main.yml (new file, 30 lines)

@@ -0,0 +1,30 @@
---
dependencies: []

galaxy_info:
  author: geerlingguy
  description: GlusterFS installation for Linux.
  company: "Midwestern Mac, LLC"
  license: "license (BSD, MIT)"
  min_ansible_version: 2.0
  platforms:
    - name: EL
      versions:
        - 6
        - 7
    - name: Ubuntu
      versions:
        - precise
        - trusty
        - xenial
    - name: Debian
      versions:
        - wheezy
        - jessie
  galaxy_tags:
    - system
    - networking
    - cloud
    - clustering
    - files
    - sharing

contrib/network-storage/glusterfs/roles/glusterfs/server/tasks/main.yml (new file, 82 lines)

@@ -0,0 +1,82 @@
---
# Include variables and define needed variables.
- name: Include OS-specific variables.
  include_vars: "{{ ansible_os_family }}.yml"

# Install xfs package
- name: install xfs Debian
  apt: name=xfsprogs state=present
  when: ansible_os_family == "Debian"

- name: install xfs RedHat
  yum: name=xfsprogs state=present
  when: ansible_os_family == "RedHat"

# Format external volumes in xfs
- name: Format volumes in xfs
  filesystem: "fstype=xfs dev={{ disk_volume_device_1 }}"

# Mount external volumes
- name: mounting new xfs filesystem
  mount: "name={{ gluster_volume_node_mount_dir }} src={{ disk_volume_device_1 }} fstype=xfs state=mounted"

# Setup/install tasks.
- include: setup-RedHat.yml
  when: ansible_os_family == 'RedHat'

- include: setup-Debian.yml
  when: ansible_os_family == 'Debian'

- name: Ensure GlusterFS is started and enabled at boot.
  service: "name={{ glusterfs_daemon }} state=started enabled=yes"

- name: Ensure Gluster brick and mount directories exist.
  file: "path={{ item }} state=directory mode=0775"
  with_items:
    - "{{ gluster_brick_dir }}"
    - "{{ gluster_mount_dir }}"

- name: Configure Gluster volume.
  gluster_volume:
    state: present
    name: "{{ gluster_brick_name }}"
    brick: "{{ gluster_brick_dir }}"
    replicas: "{{ groups['gfs-cluster'] | length }}"
    cluster: "{% for item in groups['gfs-cluster'] -%}{{ hostvars[item]['ip']|default(hostvars[item].ansible_default_ipv4['address']) }}{% if not loop.last %},{% endif %}{%- endfor %}"
    host: "{{ inventory_hostname }}"
    force: yes
  run_once: true

- name: Mount glusterfs to retrieve disk size
  mount:
    name: "{{ gluster_mount_dir }}"
    src: "{{ ip|default(ansible_default_ipv4['address']) }}:/gluster"
    fstype: glusterfs
    opts: "defaults,_netdev"
    state: mounted
  when: groups['gfs-cluster'] is defined and inventory_hostname == groups['gfs-cluster'][0]

- name: Get Gluster disk size
  setup: filter=ansible_mounts
  register: mounts_data
  when: groups['gfs-cluster'] is defined and inventory_hostname == groups['gfs-cluster'][0]

- name: Set Gluster disk size to variable
  set_fact:
    gluster_disk_size_gb: "{{ (mounts_data.ansible_facts.ansible_mounts | selectattr('mount', 'equalto', gluster_mount_dir) | map(attribute='size_total') | first | int / (1024*1024*1024)) | int }}"
  when: groups['gfs-cluster'] is defined and inventory_hostname == groups['gfs-cluster'][0]

- name: Create file on GlusterFS
  template:
    dest: "{{ gluster_mount_dir }}/.test-file.txt"
    src: test-file.txt
  when: groups['gfs-cluster'] is defined and inventory_hostname == groups['gfs-cluster'][0]

- name: Unmount glusterfs
  mount:
    name: "{{ gluster_mount_dir }}"
    fstype: glusterfs
    src: "{{ ip|default(ansible_default_ipv4['address']) }}:/gluster"
    state: unmounted
  when: groups['gfs-cluster'] is defined and inventory_hostname == groups['gfs-cluster'][0]

contrib/network-storage/glusterfs/roles/glusterfs/server/tasks/setup-Debian.yml (new file, 26 lines)

@@ -0,0 +1,26 @@
---
- name: Add PPA for GlusterFS.
  apt_repository:
    repo: 'ppa:gluster/glusterfs-{{ glusterfs_ppa_version }}'
    state: present
    update_cache: yes
  register: glusterfs_ppa_added
  when: glusterfs_ppa_use

- name: Ensure GlusterFS will reinstall if the PPA was just added.
  apt:
    name: "{{ item }}"
    state: absent
  with_items:
    - glusterfs-server
    - glusterfs-client
  when: glusterfs_ppa_added.changed

- name: Ensure GlusterFS is installed.
  apt:
    name: "{{ item }}"
    state: installed
    default_release: "{{ glusterfs_default_release }}"
  with_items:
    - glusterfs-server
    - glusterfs-client

contrib/network-storage/glusterfs/roles/glusterfs/server/tasks/setup-RedHat.yml (new file, 11 lines)

@@ -0,0 +1,11 @@
---
- name: Install Prerequisites
  yum: name={{ item }} state=present
  with_items:
    - "centos-release-gluster{{ glusterfs_default_release }}"

- name: Install Packages
  yum: name={{ item }} state=present
  with_items:
    - glusterfs-server
    - glusterfs-client

contrib/network-storage/glusterfs/roles/glusterfs/server/templates/test-file.txt (new file, 1 line)

@@ -0,0 +1 @@
test file

contrib/network-storage/glusterfs/roles/glusterfs/server/tests/test.yml (new file, 5 lines)

@@ -0,0 +1,5 @@
---
- hosts: all
  roles:
    - role_under_test

contrib/network-storage/glusterfs/roles/glusterfs/server/vars/Debian.yml (new file, 2 lines)

@@ -0,0 +1,2 @@
---
glusterfs_daemon: glusterfs-server

contrib/network-storage/glusterfs/roles/glusterfs/server/vars/RedHat.yml (new file, 2 lines)

@@ -0,0 +1,2 @@
---
glusterfs_daemon: glusterd

contrib/network-storage/glusterfs/roles/kubernetes-pv/ansible/tasks/main.yaml (new file, 19 lines)

@@ -0,0 +1,19 @@
---
- name: Kubernetes Apps | Lay Down k8s GlusterFS Endpoint and PV
  template: src={{item.file}} dest=/etc/kubernetes/{{item.dest}}
  with_items:
    - { file: glusterfs-kubernetes-endpoint.json.j2, type: ep, dest: glusterfs-kubernetes-endpoint.json }
    - { file: glusterfs-kubernetes-pv.yml.j2, type: pv, dest: glusterfs-kubernetes-pv.yml }
  register: gluster_pv
  when: inventory_hostname == groups['kube-master'][0] and groups['gfs-cluster'] is defined and hostvars[groups['gfs-cluster'][0]].gluster_disk_size_gb is defined

- name: Kubernetes Apps | Set GlusterFS endpoint and PV
  kube:
    name: glusterfs
    namespace: default
    kubectl: "{{bin_dir}}/kubectl"
    resource: "{{item.item.type}}"
    filename: "/etc/kubernetes/{{item.item.dest}}"
    state: "{{item.changed | ternary('latest','present') }}"
  with_items: "{{ gluster_pv.results }}"
  when: inventory_hostname == groups['kube-master'][0] and groups['gfs-cluster'] is defined

contrib/network-storage/glusterfs/roles/kubernetes-pv/ansible/templates/glusterfs-kubernetes-endpoint.json.j2 (new file, 24 lines)

@@ -0,0 +1,24 @@
{
  "kind": "Endpoints",
  "apiVersion": "v1",
  "metadata": {
    "name": "glusterfs"
  },
  "subsets": [
    {% for host in groups['gfs-cluster'] %}
    {
      "addresses": [
        {
          "ip": "{{hostvars[host]['ip']|default(hostvars[host].ansible_default_ipv4['address'])}}"
        }
      ],
      "ports": [
        {
          "port": 1
        }
      ]
    }{%- if not loop.last %}, {% endif -%}
    {% endfor %}
  ]
}

contrib/network-storage/glusterfs/roles/kubernetes-pv/ansible/templates/glusterfs-kubernetes-pv.yml.j2 (new file, 14 lines)

@@ -0,0 +1,14 @@
apiVersion: v1
kind: PersistentVolume
metadata:
  name: glusterfs
spec:
  capacity:
    storage: "{{ hostvars[groups['gfs-cluster'][0]].gluster_disk_size_gb }}Gi"
  accessModes:
    - ReadWriteMany
  glusterfs:
    endpoints: glusterfs
    path: gluster
    readOnly: false
  persistentVolumeReclaimPolicy: Retain

contrib/network-storage/glusterfs/roles/kubernetes-pv/lib (new symlink, 1 line)

@@ -0,0 +1 @@
../../../../../roles/kubernetes-apps/lib

contrib/network-storage/glusterfs/roles/kubernetes-pv/meta/main.yaml (new file, 2 lines)

@@ -0,0 +1,2 @@
dependencies:
  - { role: kubernetes-pv/ansible, tags: apps }

contrib/terraform/openstack/README.md (15 changes)

@@ -83,6 +83,21 @@ number_of_k8s_nodes = "0"
```
This will provision one VM as master using a floating ip, two additional masters using no floating ips (these will only have private ips inside your tenancy) and one VM as node, again without a floating ip.
Additionally, the Terraform-based installation now supports provisioning a GlusterFS shared file system on a separate set of VMs, running either a Debian- or RedHat-based image. To enable this, add the following variables to your `my-terraform-vars.tfvars`:
```
# The flavor depends on your OpenStack installation; you can list available flavors with `nova flavor-list`
flavor_gfs_node = "af659280-5b8a-42b5-8865-a703775911da"
# This is the name of an image already available in your openstack installation.
image_gfs = "Ubuntu 15.10"
number_of_gfs_nodes_no_floating_ip = "3"
# This is the size of the non-ephemeral volumes to be attached to store the GlusterFS bricks.
gfs_volume_size_in_gb = "50"
# The user needed for the image chosen for GlusterFS.
ssh_user_gfs = "ubuntu"
```
If these variables are provided, a new Ansible group called `gfs-cluster` is created, for which we have added Ansible roles that execute in the Ansible provisioning step. If you are using CoreOS, these GlusterFS VMs must still be Debian- or RedHat-based: CoreOS cannot serve GlusterFS, but it can connect to it through the binaries available in hyperkube v1.4.3_coreos.0 or higher.
# Provision a Kubernetes Cluster on OpenStack

contrib/terraform/openstack/kubespray.tf (29 changes)

@@ -130,6 +130,35 @@ resource "openstack_compute_instance_v2" "k8s_node_no_floating_ip" {
  }
}

resource "openstack_blockstorage_volume_v2" "glusterfs_volume" {
  name        = "${var.cluster_name}-gfs-nephe-vol-${count.index+1}"
  count       = "${var.number_of_gfs_nodes_no_floating_ip}"
  description = "Non-ephemeral volume for GlusterFS"
  size        = "${var.gfs_volume_size_in_gb}"
}

resource "openstack_compute_instance_v2" "glusterfs_node_no_floating_ip" {
  name       = "${var.cluster_name}-gfs-node-nf-${count.index+1}"
  count      = "${var.number_of_gfs_nodes_no_floating_ip}"
  image_name = "${var.image_gfs}"
  flavor_id  = "${var.flavor_gfs_node}"
  key_pair   = "${openstack_compute_keypair_v2.k8s.name}"

  network {
    name = "${var.network_name}"
  }

  security_groups = ["${openstack_compute_secgroup_v2.k8s.name}"]

  metadata = {
    ssh_user         = "${var.ssh_user_gfs}"
    kubespray_groups = "gfs-cluster,network-storage"
  }

  volume {
    volume_id = "${element(openstack_blockstorage_volume_v2.glusterfs_volume.*.id, count.index)}"
  }

  provisioner "local-exec" {
    command = "sed s/USER/${var.ssh_user}/ contrib/terraform/openstack/ansible_bastion_template.txt | sed s/BASTION_ADDRESS/${element(openstack_networking_floatingip_v2.k8s_master.*.address, 0)}/ > contrib/terraform/openstack/group_vars/gfs-cluster.yml"
  }
}

contrib/terraform/openstack/variables.tf (21 changes)

@@ -18,6 +18,14 @@ variable "number_of_k8s_nodes_no_floating_ip" {
  default = 1
}

variable "number_of_gfs_nodes_no_floating_ip" {
  default = 0
}

variable "gfs_volume_size_in_gb" {
  default = 75
}

variable "public_key_path" {
  description = "The path of the ssh pub key"
  default = "~/.ssh/id_rsa.pub"

@@ -28,11 +36,21 @@ variable "image" {
  default = "ubuntu-14.04"
}

variable "image_gfs" {
  description = "Glance image to use for GlusterFS"
  default = "ubuntu-16.04"
}

variable "ssh_user" {
  description = "used to fill out tags for ansible inventory"
  default = "ubuntu"
}

variable "ssh_user_gfs" {
  description = "used to fill out tags for ansible inventory"
  default = "ubuntu"
}

variable "flavor_k8s_master" {
  default = 3
}

@@ -41,6 +59,9 @@ variable "flavor_k8s_node" {
  default = 3
}

variable "flavor_gfs_node" {
  default = 3
}

variable "network_name" {
  description = "name of the internal network to use"

contrib/terraform/terraform.py (9 changes)

@@ -347,6 +347,15 @@ def openstack_host(resource, module_name):
    if 'metadata.ssh_user' in raw_attrs:
        attrs['ansible_ssh_user'] = raw_attrs['metadata.ssh_user']

    if 'volume.#' in raw_attrs.keys() and int(raw_attrs['volume.#']) > 0:
        device_index = 1
        for key, value in raw_attrs.items():
            match = re.search("^volume.*.device$", key)
            if match:
                attrs['disk_volume_device_'+str(device_index)] = value
                device_index += 1

    # attrs specific to Mantl
    attrs.update({
        'consul_dc': _clean_dc(attrs['metadata'].get('dc', module_name)),
