Pablo Moreno
8 years ago
28 changed files with 595 additions and 1 deletions
- cluster.yml (2)
- contrib/network-storage/glusterfs/README.md (92)
- contrib/network-storage/glusterfs/glusterfs.yml (17)
- contrib/network-storage/glusterfs/inventory.example (44)
- contrib/network-storage/glusterfs/roles/glusterfs/README.md (44)
- contrib/network-storage/glusterfs/roles/glusterfs/client/defaults/main.yml (11)
- contrib/network-storage/glusterfs/roles/glusterfs/client/meta/main.yml (30)
- contrib/network-storage/glusterfs/roles/glusterfs/client/tasks/main.yml (16)
- contrib/network-storage/glusterfs/roles/glusterfs/client/tasks/setup-Debian.yml (24)
- contrib/network-storage/glusterfs/roles/glusterfs/client/tasks/setup-RedHat.yml (10)
- contrib/network-storage/glusterfs/roles/glusterfs/server/defaults/main.yml (13)
- contrib/network-storage/glusterfs/roles/glusterfs/server/meta/main.yml (30)
- contrib/network-storage/glusterfs/roles/glusterfs/server/tasks/main.yml (82)
- contrib/network-storage/glusterfs/roles/glusterfs/server/tasks/setup-Debian.yml (26)
- contrib/network-storage/glusterfs/roles/glusterfs/server/tasks/setup-RedHat.yml (11)
- contrib/network-storage/glusterfs/roles/glusterfs/server/templates/test-file.txt (1)
- contrib/network-storage/glusterfs/roles/glusterfs/server/tests/test.yml (5)
- contrib/network-storage/glusterfs/roles/glusterfs/server/vars/Debian.yml (2)
- contrib/network-storage/glusterfs/roles/glusterfs/server/vars/RedHat.yml (2)
- contrib/network-storage/glusterfs/roles/kubernetes-pv/ansible/tasks/main.yaml (19)
- contrib/network-storage/glusterfs/roles/kubernetes-pv/ansible/templates/glusterfs-kubernetes-endpoint.json.j2 (24)
- contrib/network-storage/glusterfs/roles/kubernetes-pv/ansible/templates/glusterfs-kubernetes-pv.yml.j2 (14)
- contrib/network-storage/glusterfs/roles/kubernetes-pv/lib (1)
- contrib/network-storage/glusterfs/roles/kubernetes-pv/meta/main.yaml (2)
- contrib/terraform/openstack/README.md (15)
- contrib/terraform/openstack/kubespray.tf (29)
- contrib/terraform/openstack/variables.tf (21)
- contrib/terraform/terraform.py (9)
@@ -0,0 +1,92 @@ contrib/network-storage/glusterfs/README.md (new file)

# Deploying a Kargo Kubernetes Cluster with GlusterFS

You can deploy either with Ansible alone, supplying your own inventory file, or with Terraform, which creates the VMs and then provides a dynamic inventory to Ansible. The following two sections are self-contained; you don't need to go through one to use the other. If you want to provision with Terraform, you can skip the **Using an Ansible inventory** section, and if you want to provision with a pre-built Ansible inventory, you can skip the **Using Terraform and Ansible** section.

## Using an Ansible inventory

In the same directory as this README you should find a file named `inventory.example`, which contains an example setup. Please note that, in addition to the Kubernetes nodes/masters, we define a set of machines for GlusterFS and add them to the group `[gfs-cluster]`, which in turn is added to the larger `[network-storage]` group as a child group.

Change that file to reflect your local setup (adding or removing machines and setting the appropriate IP addresses), and save it to `inventory/k8s_gfs_inventory`. Make sure that the settings in `inventory/group_vars/all.yml` make sense for your deployment. Then change to the kargo root folder and execute (assuming the machines are all running Ubuntu):

```
ansible-playbook -b --become-user=root -i inventory/k8s_gfs_inventory --user=ubuntu ./cluster.yml
```

This will provision your Kubernetes cluster. Then, to provision and configure the GlusterFS cluster, execute from the same directory:

```
ansible-playbook -b --become-user=root -i inventory/k8s_gfs_inventory --user=ubuntu ./contrib/network-storage/glusterfs/glusterfs.yml
```

If your machines are not running Ubuntu, change `--user=ubuntu` to the correct user. Alternatively, if your Kubernetes machines use one OS and your GlusterFS machines another, you can instead set the `ansible_ssh_user=<correct-user>` variable in the inventory file that you just created, for each machine/VM:

```
k8s-master-1 ansible_ssh_host=192.168.0.147 ip=192.168.0.147 ansible_ssh_user=core
k8s-master-node-1 ansible_ssh_host=192.168.0.148 ip=192.168.0.148 ansible_ssh_user=core
k8s-master-node-2 ansible_ssh_host=192.168.0.146 ip=192.168.0.146 ansible_ssh_user=core
```

## Using Terraform and Ansible

The first step is to fill in a `my-kargo-gluster-cluster.tfvars` file with the desired specification for your cluster. An example with all required variables looks like:

```
cluster_name = "cluster1"
number_of_k8s_masters = "1"
number_of_k8s_masters_no_floating_ip = "2"
number_of_k8s_nodes_no_floating_ip = "0"
number_of_k8s_nodes = "0"
public_key_path = "~/.ssh/my-desired-key.pub"
image = "Ubuntu 16.04"
ssh_user = "ubuntu"
flavor_k8s_node = "node-flavor-id-in-your-openstack"
flavor_k8s_master = "master-flavor-id-in-your-openstack"
network_name = "k8s-network"
floatingip_pool = "net_external"

# GlusterFS variables
flavor_gfs_node = "gluster-flavor-id-in-your-openstack"
image_gfs = "Ubuntu 16.04"
number_of_gfs_nodes_no_floating_ip = "3"
gfs_volume_size_in_gb = "50"
ssh_user_gfs = "ubuntu"
```

As explained in the general terraform/openstack guide, you need to source your OpenStack credentials file, add your ssh key to the ssh-agent, and set up environment variables for Terraform:

```
$ source ~/.stackrc
$ eval $(ssh-agent -s)
$ ssh-add ~/.ssh/my-desired-key
$ echo Setting up Terraform creds && \
  export TF_VAR_username=${OS_USERNAME} && \
  export TF_VAR_password=${OS_PASSWORD} && \
  export TF_VAR_tenant=${OS_TENANT_NAME} && \
  export TF_VAR_auth_url=${OS_AUTH_URL}
```

Then, from the kargo directory (the root of the Git checkout), issue the following Terraform command to create the VMs for the cluster:

```
terraform apply -state=contrib/terraform/openstack/terraform.tfstate -var-file=my-kargo-gluster-cluster.tfvars contrib/terraform/openstack
```

This will create both your Kubernetes and Gluster VMs. Make sure that the Ansible file `contrib/terraform/openstack/group_vars/all.yml` includes any Ansible variable you want to set (for instance, the type of machine for bootstrapping).

Then provision your Kubernetes (Kargo) cluster with the following Ansible call:

```
ansible-playbook -b --become-user=root -i contrib/terraform/openstack/hosts ./cluster.yml
```

Finally, provision the GlusterFS nodes and add the Persistent Volume setup for GlusterFS in Kubernetes with the following Ansible call:

```
ansible-playbook -b --become-user=root -i contrib/terraform/openstack/hosts ./contrib/network-storage/glusterfs/glusterfs.yml
```

If you need to destroy the cluster, you can run:

```
terraform destroy -state=contrib/terraform/openstack/terraform.tfstate -var-file=my-kargo-gluster-cluster.tfvars contrib/terraform/openstack
```
@@ -0,0 +1,17 @@ contrib/network-storage/glusterfs/glusterfs.yml (new file)

---
- hosts: all
  gather_facts: true

- hosts: gfs-cluster
  roles:
    - { role: glusterfs/server }

- hosts: k8s-cluster
  roles:
    - { role: glusterfs/client }

- hosts: kube-master[0]
  roles:
    - { role: kubernetes-pv/lib }
    - { role: kubernetes-pv }
@@ -0,0 +1,44 @@ contrib/network-storage/glusterfs/inventory.example (new file)

# ## Configure 'ip' variable to bind kubernetes services on a
# ## different ip than the default iface
# node1 ansible_ssh_host=95.54.0.12 # ip=10.3.0.1
# node2 ansible_ssh_host=95.54.0.13 # ip=10.3.0.2
# node3 ansible_ssh_host=95.54.0.14 # ip=10.3.0.3
# node4 ansible_ssh_host=95.54.0.15 # ip=10.3.0.4
# node5 ansible_ssh_host=95.54.0.16 # ip=10.3.0.5
# node6 ansible_ssh_host=95.54.0.17 # ip=10.3.0.6
#
# ## GlusterFS nodes
# ## Set disk_volume_device_1 to the desired device for the gluster brick, if different from /dev/vdb (default).
# ## As in the previous case, you can set ip for direct communication over internal IPs
# gfs_node1 ansible_ssh_host=95.54.0.18 # disk_volume_device_1=/dev/vdc ip=10.3.0.7
# gfs_node2 ansible_ssh_host=95.54.0.19 # disk_volume_device_1=/dev/vdc ip=10.3.0.8
# gfs_node3 ansible_ssh_host=95.54.0.20 # disk_volume_device_1=/dev/vdc ip=10.3.0.9

# [kube-master]
# node1
# node2

# [etcd]
# node1
# node2
# node3

# [kube-node]
# node2
# node3
# node4
# node5
# node6

# [k8s-cluster:children]
# kube-node
# kube-master

# [gfs-cluster]
# gfs_node1
# gfs_node2
# gfs_node3

# [network-storage:children]
# gfs-cluster
@@ -0,0 +1,44 @@ contrib/network-storage/glusterfs/roles/glusterfs/README.md (new file)

# Ansible Role: GlusterFS

[![Build Status](https://travis-ci.org/geerlingguy/ansible-role-glusterfs.svg?branch=master)](https://travis-ci.org/geerlingguy/ansible-role-glusterfs)

Installs and configures GlusterFS on Linux.

## Requirements

For GlusterFS to connect between servers, TCP ports `24007`, `24008`, and `24009`/`49152`+ (that port, plus one additional incremented port for each additional server in the cluster; the latter range applies if GlusterFS is version 3.4+), and TCP/UDP port `111` must be open. You can open these using whatever firewall you wish (this can easily be configured using the `geerlingguy.firewall` role).

This role performs basic installation and setup of Gluster, but it does not configure or mount bricks (volumes), since that step is easier to do in a series of plays in your own playbook. Ansible 1.9+ includes the [`gluster_volume`](https://docs.ansible.com/gluster_volume_module.html) module to ease the management of Gluster volumes.

## Role Variables

Available variables are listed below, along with default values (see `defaults/main.yml`):

    glusterfs_default_release: ""

You can specify a `default_release` for apt on Debian/Ubuntu by overriding this variable. This is helpful if you need a different package or version for the main GlusterFS packages (e.g. GlusterFS 3.5.x instead of 3.2.x with the `wheezy-backports` default release on Debian Wheezy).

    glusterfs_ppa_use: yes
    glusterfs_ppa_version: "3.5"

For Ubuntu, specify whether to use the official Gluster PPA, and which version of the PPA to use. See Gluster's [Getting Started Guide](http://www.gluster.org/community/documentation/index.php/Getting_started_install) for more info.

## Dependencies

None.

## Example Playbook

    - hosts: server
      roles:
        - geerlingguy.glusterfs

For a real-world use example, read through [Simple GlusterFS Setup with Ansible](http://www.jeffgeerling.com/blog/simple-glusterfs-setup-ansible), a blog post by this role's author, which is included in Chapter 8 of [Ansible for DevOps](https://www.ansiblefordevops.com/).

## License

MIT / BSD

## Author Information

This role was created in 2015 by [Jeff Geerling](http://www.jeffgeerling.com/), author of [Ansible for DevOps](https://www.ansiblefordevops.com/).
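As a concrete illustration of the `gluster_volume` module this README points to, here is a minimal sketch of a play that creates a two-way replicated volume after the role has installed Gluster (the host group name, volume name, and brick path are assumptions, not part of this repository):

```yaml
# Sketch only: assumes an inventory group named 'gluster' with two hosts
# and a formatted, mounted brick directory at /srv/gluster/brick.
- hosts: gluster
  become: true
  tasks:
    - name: Create a replicated Gluster volume across the group.
      gluster_volume:
        state: present
        name: examplevol
        brick: /srv/gluster/brick
        replicas: 2
        cluster: "{{ groups['gluster'] | join(',') }}"
        host: "{{ inventory_hostname }}"
      run_once: true
```

The `run_once: true` matters: volume creation is a cluster-wide operation, so it should be issued from a single host, exactly as the `server/tasks/main.yml` in this PR does.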
@@ -0,0 +1,11 @@ contrib/network-storage/glusterfs/roles/glusterfs/client/defaults/main.yml (new file)

---
# For Ubuntu.
glusterfs_default_release: ""
glusterfs_ppa_use: yes
glusterfs_ppa_version: "3.8"

# Gluster configuration.
gluster_mount_dir: /mnt/gluster
gluster_volume_node_mount_dir: /mnt/xfs-drive-gluster
gluster_brick_dir: "{{ gluster_volume_node_mount_dir }}/brick"
gluster_brick_name: gluster
@@ -0,0 +1,30 @@ contrib/network-storage/glusterfs/roles/glusterfs/client/meta/main.yml (new file)

---
dependencies: []

galaxy_info:
  author: geerlingguy
  description: GlusterFS installation for Linux.
  company: "Midwestern Mac, LLC"
  license: "license (BSD, MIT)"
  min_ansible_version: 2.0
  platforms:
    - name: EL
      versions:
        - 6
        - 7
    - name: Ubuntu
      versions:
        - precise
        - trusty
        - xenial
    - name: Debian
      versions:
        - wheezy
        - jessie
  galaxy_tags:
    - system
    - networking
    - cloud
    - clustering
    - files
    - sharing
@@ -0,0 +1,16 @@ contrib/network-storage/glusterfs/roles/glusterfs/client/tasks/main.yml (new file)

---
# This is meant for Ubuntu and RedHat installations, where apparently the glusterfs-client is not used from inside
# hyperkube and needs to be installed as part of the system.

# Setup/install tasks.
- include: setup-RedHat.yml
  when: ansible_os_family == 'RedHat' and groups['gfs-cluster'] is defined

- include: setup-Debian.yml
  when: ansible_os_family == 'Debian' and groups['gfs-cluster'] is defined

- name: Ensure Gluster mount directories exist.
  file: "path={{ item }} state=directory mode=0775"
  with_items:
    - "{{ gluster_mount_dir }}"
  when: ansible_os_family in ["Debian","RedHat"] and groups['gfs-cluster'] is defined
@@ -0,0 +1,24 @@ contrib/network-storage/glusterfs/roles/glusterfs/client/tasks/setup-Debian.yml (new file)

---
- name: Add PPA for GlusterFS.
  apt_repository:
    repo: 'ppa:gluster/glusterfs-{{ glusterfs_ppa_version }}'
    state: present
    update_cache: yes
  register: glusterfs_ppa_added
  when: glusterfs_ppa_use

- name: Ensure GlusterFS client will reinstall if the PPA was just added.
  apt:
    name: "{{ item }}"
    state: absent
  with_items:
    - glusterfs-client
  when: glusterfs_ppa_added.changed

- name: Ensure GlusterFS client is installed.
  apt:
    name: "{{ item }}"
    state: installed
    default_release: "{{ glusterfs_default_release }}"
  with_items:
    - glusterfs-client
@@ -0,0 +1,10 @@ contrib/network-storage/glusterfs/roles/glusterfs/client/tasks/setup-RedHat.yml (new file)

---
- name: Install Prerequisites
  yum: name={{ item }} state=present
  with_items:
    - "centos-release-gluster{{ glusterfs_default_release }}"

- name: Install Packages
  yum: name={{ item }} state=present
  with_items:
    - glusterfs-client
@@ -0,0 +1,13 @@ contrib/network-storage/glusterfs/roles/glusterfs/server/defaults/main.yml (new file)

---
# For Ubuntu.
glusterfs_default_release: ""
glusterfs_ppa_use: yes
glusterfs_ppa_version: "3.8"

# Gluster configuration.
gluster_mount_dir: /mnt/gluster
gluster_volume_node_mount_dir: /mnt/xfs-drive-gluster
gluster_brick_dir: "{{ gluster_volume_node_mount_dir }}/brick"
gluster_brick_name: gluster
# Default device to mount for xfs formatting; terraform overrides this by setting the variable in the inventory.
disk_volume_device_1: /dev/vdb
@@ -0,0 +1,30 @@ contrib/network-storage/glusterfs/roles/glusterfs/server/meta/main.yml (new file)

---
dependencies: []

galaxy_info:
  author: geerlingguy
  description: GlusterFS installation for Linux.
  company: "Midwestern Mac, LLC"
  license: "license (BSD, MIT)"
  min_ansible_version: 2.0
  platforms:
    - name: EL
      versions:
        - 6
        - 7
    - name: Ubuntu
      versions:
        - precise
        - trusty
        - xenial
    - name: Debian
      versions:
        - wheezy
        - jessie
  galaxy_tags:
    - system
    - networking
    - cloud
    - clustering
    - files
    - sharing
@@ -0,0 +1,82 @@ contrib/network-storage/glusterfs/roles/glusterfs/server/tasks/main.yml (new file)

---
# Include variables and define needed variables.
- name: Include OS-specific variables.
  include_vars: "{{ ansible_os_family }}.yml"

# Install xfs package
- name: install xfs Debian
  apt: name=xfsprogs state=present
  when: ansible_os_family == "Debian"

- name: install xfs RedHat
  yum: name=xfsprogs state=present
  when: ansible_os_family == "RedHat"

# Format external volumes in xfs
- name: Format volumes in xfs
  filesystem: "fstype=xfs dev={{ disk_volume_device_1 }}"

# Mount external volumes
- name: mounting new xfs filesystem
  mount: "name={{ gluster_volume_node_mount_dir }} src={{ disk_volume_device_1 }} fstype=xfs state=mounted"

# Setup/install tasks.
- include: setup-RedHat.yml
  when: ansible_os_family == 'RedHat'

- include: setup-Debian.yml
  when: ansible_os_family == 'Debian'

- name: Ensure GlusterFS is started and enabled at boot.
  service: "name={{ glusterfs_daemon }} state=started enabled=yes"

- name: Ensure Gluster brick and mount directories exist.
  file: "path={{ item }} state=directory mode=0775"
  with_items:
    - "{{ gluster_brick_dir }}"
    - "{{ gluster_mount_dir }}"

- name: Configure Gluster volume.
  gluster_volume:
    state: present
    name: "{{ gluster_brick_name }}"
    brick: "{{ gluster_brick_dir }}"
    replicas: "{{ groups['gfs-cluster'] | length }}"
    cluster: "{% for item in groups['gfs-cluster'] -%}{{ hostvars[item]['ip']|default(hostvars[item].ansible_default_ipv4['address']) }}{% if not loop.last %},{% endif %}{%- endfor %}"
    host: "{{ inventory_hostname }}"
    force: yes
  run_once: true

- name: Mount glusterfs to retrieve disk size
  mount:
    name: "{{ gluster_mount_dir }}"
    src: "{{ ip|default(ansible_default_ipv4['address']) }}:/gluster"
    fstype: glusterfs
    opts: "defaults,_netdev"
    state: mounted
  when: groups['gfs-cluster'] is defined and inventory_hostname == groups['gfs-cluster'][0]

- name: Get Gluster disk size
  setup: filter=ansible_mounts
  register: mounts_data
  when: groups['gfs-cluster'] is defined and inventory_hostname == groups['gfs-cluster'][0]

- name: Set Gluster disk size to variable
  set_fact:
    gluster_disk_size_gb: "{{ (mounts_data.ansible_facts.ansible_mounts | selectattr('mount', 'equalto', gluster_mount_dir) | map(attribute='size_total') | first | int / (1024*1024*1024)) | int }}"
  when: groups['gfs-cluster'] is defined and inventory_hostname == groups['gfs-cluster'][0]

- name: Create file on GlusterFS
  template:
    dest: "{{ gluster_mount_dir }}/.test-file.txt"
    src: test-file.txt
  when: groups['gfs-cluster'] is defined and inventory_hostname == groups['gfs-cluster'][0]

- name: Unmount glusterfs
  mount:
    name: "{{ gluster_mount_dir }}"
    fstype: glusterfs
    src: "{{ ip|default(ansible_default_ipv4['address']) }}:/gluster"
    state: unmounted
  when: groups['gfs-cluster'] is defined and inventory_hostname == groups['gfs-cluster'][0]
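The `cluster` field in the `gluster_volume` task above is built by a dense inline Jinja2 loop. The following standalone sketch (plain Python standing in for Jinja2, with hypothetical host data) shows the string it produces: for each host in the `gfs-cluster` group, take `hostvars[host]['ip']` when set, otherwise fall back to the host's default IPv4 address, and join the results with commas.

```python
# Hypothetical hostvars data mirroring the structure Ansible exposes.
hostvars = {
    "gfs_node1": {"ip": "10.3.0.7"},  # internal 'ip' set in the inventory
    "gfs_node2": {},                  # no 'ip' -> fall back to default IPv4
}
default_ipv4 = {"gfs_node1": "95.54.0.18", "gfs_node2": "95.54.0.19"}

def cluster_string(group):
    """Replicate the Jinja2 loop: prefer the inventory 'ip' variable,
    else the host's ansible_default_ipv4 address, comma-joined."""
    return ",".join(hostvars[h].get("ip", default_ipv4[h]) for h in group)

print(cluster_string(["gfs_node1", "gfs_node2"]))  # -> 10.3.0.7,95.54.0.19
```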
@@ -0,0 +1,26 @@ contrib/network-storage/glusterfs/roles/glusterfs/server/tasks/setup-Debian.yml (new file)

---
- name: Add PPA for GlusterFS.
  apt_repository:
    repo: 'ppa:gluster/glusterfs-{{ glusterfs_ppa_version }}'
    state: present
    update_cache: yes
  register: glusterfs_ppa_added
  when: glusterfs_ppa_use

- name: Ensure GlusterFS will reinstall if the PPA was just added.
  apt:
    name: "{{ item }}"
    state: absent
  with_items:
    - glusterfs-server
    - glusterfs-client
  when: glusterfs_ppa_added.changed

- name: Ensure GlusterFS is installed.
  apt:
    name: "{{ item }}"
    state: installed
    default_release: "{{ glusterfs_default_release }}"
  with_items:
    - glusterfs-server
    - glusterfs-client
@@ -0,0 +1,11 @@ contrib/network-storage/glusterfs/roles/glusterfs/server/tasks/setup-RedHat.yml (new file)

---
- name: Install Prerequisites
  yum: name={{ item }} state=present
  with_items:
    - "centos-release-gluster{{ glusterfs_default_release }}"

- name: Install Packages
  yum: name={{ item }} state=present
  with_items:
    - glusterfs-server
    - glusterfs-client
@@ -0,0 +1 @@ contrib/network-storage/glusterfs/roles/glusterfs/server/templates/test-file.txt (new file)

test file
@@ -0,0 +1,5 @@ contrib/network-storage/glusterfs/roles/glusterfs/server/tests/test.yml (new file)

---
- hosts: all

  roles:
    - role_under_test
@@ -0,0 +1,2 @@ contrib/network-storage/glusterfs/roles/glusterfs/server/vars/Debian.yml (new file)

---
glusterfs_daemon: glusterfs-server
@@ -0,0 +1,2 @@ contrib/network-storage/glusterfs/roles/glusterfs/server/vars/RedHat.yml (new file)

---
glusterfs_daemon: glusterd
@@ -0,0 +1,19 @@ contrib/network-storage/glusterfs/roles/kubernetes-pv/ansible/tasks/main.yaml (new file)

---
- name: Kubernetes Apps | Lay Down k8s GlusterFS Endpoint and PV
  template: src={{item.file}} dest=/etc/kubernetes/{{item.dest}}
  with_items:
    - { file: glusterfs-kubernetes-endpoint.json.j2, type: ep, dest: glusterfs-kubernetes-endpoint.json }
    - { file: glusterfs-kubernetes-pv.yml.j2, type: pv, dest: glusterfs-kubernetes-pv.yml }
  register: gluster_pv
  when: inventory_hostname == groups['kube-master'][0] and groups['gfs-cluster'] is defined and hostvars[groups['gfs-cluster'][0]].gluster_disk_size_gb is defined

- name: Kubernetes Apps | Set GlusterFS endpoint and PV
  kube:
    name: glusterfs
    namespace: default
    kubectl: "{{ bin_dir }}/kubectl"
    resource: "{{ item.item.type }}"
    filename: "/etc/kubernetes/{{ item.item.dest }}"
    state: "{{ item.changed | ternary('latest','present') }}"
  with_items: "{{ gluster_pv.results }}"
  when: inventory_hostname == groups['kube-master'][0] and groups['gfs-cluster'] is defined
@@ -0,0 +1,24 @@ contrib/network-storage/glusterfs/roles/kubernetes-pv/ansible/templates/glusterfs-kubernetes-endpoint.json.j2 (new file)

{
  "kind": "Endpoints",
  "apiVersion": "v1",
  "metadata": {
    "name": "glusterfs"
  },
  "subsets": [
    {% for host in groups['gfs-cluster'] %}
    {
      "addresses": [
        {
          "ip": "{{hostvars[host]['ip']|default(hostvars[host].ansible_default_ipv4['address'])}}"
        }
      ],
      "ports": [
        {
          "port": 1
        }
      ]
    }{%- if not loop.last %}, {% endif -%}
    {% endfor %}
  ]
}
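For reference, with a hypothetical two-node `gfs-cluster` whose internal IPs are 10.3.0.7 and 10.3.0.8, the template above would render to roughly this Endpoints object (the port value of 1 is a placeholder; Kubernetes only requires some port to be present):

```json
{
  "kind": "Endpoints",
  "apiVersion": "v1",
  "metadata": { "name": "glusterfs" },
  "subsets": [
    { "addresses": [{ "ip": "10.3.0.7" }], "ports": [{ "port": 1 }] },
    { "addresses": [{ "ip": "10.3.0.8" }], "ports": [{ "port": 1 }] }
  ]
}
```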
@@ -0,0 +1,14 @@ contrib/network-storage/glusterfs/roles/kubernetes-pv/ansible/templates/glusterfs-kubernetes-pv.yml.j2 (new file)

apiVersion: v1
kind: PersistentVolume
metadata:
  name: glusterfs
spec:
  capacity:
    storage: "{{ hostvars[groups['gfs-cluster'][0]].gluster_disk_size_gb }}Gi"
  accessModes:
    - ReadWriteMany
  glusterfs:
    endpoints: glusterfs
    path: gluster
    readOnly: false
  persistentVolumeReclaimPolicy: Retain
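A pod would consume this volume through a claim. A minimal sketch of a matching PersistentVolumeClaim follows; the claim name and requested size are illustrative assumptions, not part of this PR:

```yaml
# Hypothetical claim; ReadWriteMany matches the PV's access mode above,
# and the request must not exceed the PV's computed capacity.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: glusterfs-claim
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 10Gi
```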
@@ -0,0 +1 @@ contrib/network-storage/glusterfs/roles/kubernetes-pv/lib (new symlink)

../../../../../roles/kubernetes-apps/lib
@@ -0,0 +1,2 @@ contrib/network-storage/glusterfs/roles/kubernetes-pv/meta/main.yaml (new file)

dependencies:
  - { role: kubernetes-pv/ansible, tags: apps }