Merge pull request #11748 from VannTen/cleanup/remove_inventory_builder
Remove inventory_builder and re-organize docs
30 changed files with 139 additions and 1976 deletions
Changed files (lines changed in parentheses):

- .pre-commit-config.yaml (8)
- README.md (65)
- contrib/dind/README.md (177)
- contrib/dind/dind-cluster.yaml (11)
- contrib/dind/group_vars/all/all.yaml (3)
- contrib/dind/group_vars/all/distro.yaml (41)
- contrib/dind/hosts (15)
- contrib/dind/kubespray-dind.yaml (22)
- contrib/dind/requirements.txt (1)
- contrib/dind/roles/dind-cluster/tasks/main.yaml (73)
- contrib/dind/roles/dind-host/tasks/main.yaml (87)
- contrib/dind/roles/dind-host/templates/inventory_builder.sh.j2 (3)
- contrib/dind/run-test-distros.sh (93)
- contrib/dind/test-most_distros-some_CNIs.env (11)
- contrib/dind/test-some_distros-kube_router_combo.env (6)
- contrib/dind/test-some_distros-most_CNIs.env (8)
- contrib/inventory_builder/inventory.py (480)
- contrib/inventory_builder/requirements.txt (3)
- contrib/inventory_builder/setup.cfg (3)
- contrib/inventory_builder/setup.py (29)
- contrib/inventory_builder/test-requirements.txt (3)
- contrib/inventory_builder/tests/test_inventory.py (595)
- contrib/inventory_builder/tox.ini (34)
- docs/_sidebar.md (1)
- docs/ansible/ansible.md (135)
- docs/ansible/inventory.md (71)
- docs/getting_started/getting-started.md (42)
- docs/getting_started/setting-up-your-first-cluster.md (16)
- docs/operations/mirror.md (40)
- inventory/sample/inventory.ini (39)
contrib/dind/README.md
@@ -1,177 +0,0 @@
# Kubespray DIND experimental setup

This ansible playbook creates local docker containers
to serve as Kubernetes "nodes", which in turn will run
"normal" Kubernetes docker containers, a mode usually
called DIND (Docker-IN-Docker).

The playbook has two roles:

- dind-host: creates the "nodes" as containers on localhost, with
  appropriate settings for DIND (privileged, volume mapping for dind
  storage, etc.).
- dind-cluster: customizes each node container to have the required
  system packages installed, and some utils (swapoff, lsattr)
  symlinked to /bin/true to ease mimicking a real node.

This playbook has been tested with Ubuntu 16.04 as the host and ubuntu:16.04
as the docker image (note that dind-cluster has specific customization
for these images).

The playbook also creates a `/tmp/kubespray.dind.inventory_builder.sh`
helper (which wraps running `contrib/inventory_builder/inventory.py` with
the node containers' IPs and prefix).
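For reference, the generated helper is a one-liner along these lines (see the `inventory_builder.sh.j2` template further down in this diff; the container IPs shown here are illustrative):

```shell
# /tmp/kubespray.dind.inventory_builder.sh, as rendered (illustrative IPs)
HOST_PREFIX=kube-node python3 contrib/inventory_builder/inventory.py 172.17.0.2 172.17.0.3 172.17.0.4 172.17.0.5 172.17.0.6
```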
## Deploying

See below for a complete successful run:

1. Create the node containers

```shell
# From the kubespray root dir
cd contrib/dind
pip install -r requirements.txt

ansible-playbook -i hosts dind-cluster.yaml

# Back to kubespray root
cd ../..
```

NOTE: if the playbook run fails with an error message like the one below,
you may need to explicitly set `ansible_python_interpreter`;
see the `./hosts` file for an example expanded localhost entry.

```shell
failed: [localhost] (item=kube-node1) => {"changed": false, "item": "kube-node1", "msg": "Failed to import docker or docker-py - No module named requests.exceptions. Try `pip install docker` or `pip install docker-py` (Python 2.6)"}
```

2. Customize kubespray-dind.yaml

Note that there is coupling between the node containers created above
and the `kubespray-dind.yaml` settings, in particular regarding the selected `node_distro`
(as set in `group_vars/all/all.yaml`) and the docker settings.

```shell
$EDITOR contrib/dind/kubespray-dind.yaml
```

3. Prepare the inventory and run the playbook

```shell
INVENTORY_DIR=inventory/local-dind
mkdir -p ${INVENTORY_DIR}
rm -f ${INVENTORY_DIR}/hosts.ini
CONFIG_FILE=${INVENTORY_DIR}/hosts.ini /tmp/kubespray.dind.inventory_builder.sh

ansible-playbook --become -e ansible_ssh_user=debian -i ${INVENTORY_DIR}/hosts.ini cluster.yml --extra-vars @contrib/dind/kubespray-dind.yaml
```

NOTE: You can also test other distros without editing files by
passing `--extra-vars` as in the command line below,
replacing `DISTRO` with one of `debian`, `ubuntu`, `centos`, `fedora`:

```shell
cd contrib/dind
ansible-playbook -i hosts dind-cluster.yaml --extra-vars node_distro=DISTRO

cd ../..
CONFIG_FILE=inventory/local-dind/hosts.ini /tmp/kubespray.dind.inventory_builder.sh
ansible-playbook --become -e ansible_ssh_user=DISTRO -i inventory/local-dind/hosts.ini cluster.yml --extra-vars @contrib/dind/kubespray-dind.yaml --extra-vars bootstrap_os=DISTRO
```

## Resulting deployment

See below to get an idea of what a completed deployment looks like,
from the host where you ran the kubespray playbooks.

### node_distro: debian

Running from an Ubuntu Xenial host:

```shell
$ uname -a
Linux ip-xx-xx-xx-xx 4.4.0-1069-aws #79-Ubuntu SMP Mon Sep 24
15:01:41 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux

$ docker ps
CONTAINER ID   IMAGE        COMMAND                  CREATED          STATUS          PORTS   NAMES
1835dd183b75   debian:9.5   "sh -c 'apt-get -qy …"   43 minutes ago   Up 43 minutes           kube-node5
30b0af8d2924   debian:9.5   "sh -c 'apt-get -qy …"   43 minutes ago   Up 43 minutes           kube-node4
3e0d1510c62f   debian:9.5   "sh -c 'apt-get -qy …"   43 minutes ago   Up 43 minutes           kube-node3
738993566f94   debian:9.5   "sh -c 'apt-get -qy …"   44 minutes ago   Up 44 minutes           kube-node2
c581ef662ed2   debian:9.5   "sh -c 'apt-get -qy …"   44 minutes ago   Up 44 minutes           kube-node1

$ docker exec kube-node1 kubectl get node
NAME         STATUS   ROLES         AGE   VERSION
kube-node1   Ready    master,node   18m   v1.12.1
kube-node2   Ready    master,node   17m   v1.12.1
kube-node3   Ready    node          17m   v1.12.1
kube-node4   Ready    node          17m   v1.12.1
kube-node5   Ready    node          17m   v1.12.1

$ docker exec kube-node1 kubectl get pod --all-namespaces
NAMESPACE     NAME                                    READY   STATUS    RESTARTS   AGE
default       netchecker-agent-67489                  1/1     Running   0          2m51s
default       netchecker-agent-6qq6s                  1/1     Running   0          2m51s
default       netchecker-agent-fsw92                  1/1     Running   0          2m51s
default       netchecker-agent-fw6tl                  1/1     Running   0          2m51s
default       netchecker-agent-hostnet-8f2zb          1/1     Running   0          3m
default       netchecker-agent-hostnet-gq7ml          1/1     Running   0          3m
default       netchecker-agent-hostnet-jfkgv          1/1     Running   0          3m
default       netchecker-agent-hostnet-kwfwx          1/1     Running   0          3m
default       netchecker-agent-hostnet-r46nm          1/1     Running   0          3m
default       netchecker-agent-lxdrn                  1/1     Running   0          2m51s
default       netchecker-server-864bd4c897-9vstl      1/1     Running   0          2m40s
default       sh-68fcc6db45-qf55h                     1/1     Running   1          12m
kube-system   coredns-7598f59475-6vknq                1/1     Running   0          14m
kube-system   coredns-7598f59475-l5q5x                1/1     Running   0          14m
kube-system   kube-apiserver-kube-node1               1/1     Running   0          17m
kube-system   kube-apiserver-kube-node2               1/1     Running   0          18m
kube-system   kube-controller-manager-kube-node1      1/1     Running   0          18m
kube-system   kube-controller-manager-kube-node2      1/1     Running   0          18m
kube-system   kube-proxy-5xx9d                        1/1     Running   0          17m
kube-system   kube-proxy-cdqq4                        1/1     Running   0          17m
kube-system   kube-proxy-n64ls                        1/1     Running   0          17m
kube-system   kube-proxy-pswmj                        1/1     Running   0          18m
kube-system   kube-proxy-x89qw                        1/1     Running   0          18m
kube-system   kube-scheduler-kube-node1               1/1     Running   4          17m
kube-system   kube-scheduler-kube-node2               1/1     Running   4          18m
kube-system   kubernetes-dashboard-5db4d9f45f-548rl   1/1     Running   0          14m
kube-system   nginx-proxy-kube-node3                  1/1     Running   4          17m
kube-system   nginx-proxy-kube-node4                  1/1     Running   4          17m
kube-system   nginx-proxy-kube-node5                  1/1     Running   4          17m
kube-system   weave-net-42bfr                         2/2     Running   0          16m
kube-system   weave-net-6gt8m                         2/2     Running   0          16m
kube-system   weave-net-88nnc                         2/2     Running   0          16m
kube-system   weave-net-shckr                         2/2     Running   0          16m
kube-system   weave-net-xr46t                         2/2     Running   0          16m

$ docker exec kube-node1 curl -s http://localhost:31081/api/v1/connectivity_check
{"Message":"All 10 pods successfully reported back to the server","Absent":null,"Outdated":null}
```

## Using ./run-test-distros.sh

You can use `./run-test-distros.sh` to run a set of tests via DIND;
see this excerpt from the script to get an idea:

```shell
# The SPEC file(s) must have two arrays, e.g.
# DISTROS=(debian centos)
# EXTRAS=(
#   'kube_network_plugin=calico'
#   'kube_network_plugin=flannel'
#   'kube_network_plugin=weave'
# )
# that will be tested in a "combinatory" way (e.g. from the above there'll
# be 6 test runs), creating a sequenced <spec_filename>-nn.out with each output.
#
# Each $EXTRAS element will be whitespace split, and passed as --extra-vars
# to the main kubespray ansible-playbook run.
```

See e.g. `test-some_distros-most_CNIs.env` and
`test-some_distros-kube_router_combo.env` in particular for a richer
set of CNI-specific `--extra-vars` combos.
contrib/dind/dind-cluster.yaml
@@ -1,11 +0,0 @@
---
- name: Create nodes as docker containers
  hosts: localhost
  gather_facts: false
  roles:
    - { role: dind-host }

- name: Customize each node container
  hosts: containers
  roles:
    - { role: dind-cluster }
contrib/dind/group_vars/all/all.yaml
@@ -1,3 +0,0 @@
---
# See distro.yaml for supported node_distro images
node_distro: debian
contrib/dind/group_vars/all/distro.yaml
@@ -1,41 +0,0 @@
---
distro_settings:
  debian: &DEBIAN
    image: "debian:9.5"
    user: "debian"
    pid1_exe: /lib/systemd/systemd
    init: |
      sh -c "apt-get -qy update && apt-get -qy install systemd-sysv dbus && exec /sbin/init"
    raw_setup: apt-get -qy update && apt-get -qy install dbus python sudo iproute2
    raw_setup_done: test -x /usr/bin/sudo
    agetty_svc: getty@*
    ssh_service: ssh
    extra_packages: []
  ubuntu:
    <<: *DEBIAN
    image: "ubuntu:16.04"
    user: "ubuntu"
    init: |
      /sbin/init
  centos: &CENTOS
    image: "centos:8"
    user: "centos"
    pid1_exe: /usr/lib/systemd/systemd
    init: |
      /sbin/init
    raw_setup: yum -qy install policycoreutils dbus python sudo iproute iptables
    raw_setup_done: test -x /usr/bin/sudo
    agetty_svc: getty@* serial-getty@*
    ssh_service: sshd
    extra_packages: []
  fedora:
    <<: *CENTOS
    image: "fedora:latest"
    user: "fedora"
    raw_setup: yum -qy install policycoreutils dbus python sudo iproute iptables; mkdir -p /etc/modules-load.d
    extra_packages:
      - hostname
      - procps
      - findutils
      - kmod
      - iputils
contrib/dind/hosts
@@ -1,15 +0,0 @@
[local]
# If you created a virtualenv for ansible, you may need to specify running the
# python binary from there instead:
#localhost ansible_connection=local ansible_python_interpreter=/home/user/kubespray/.venv/bin/python
localhost ansible_connection=local

[containers]
kube-node1
kube-node2
kube-node3
kube-node4
kube-node5

[containers:vars]
ansible_connection=docker
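Since the `[containers]` group connects via the docker connection plugin, a quick sanity check of this inventory (a sketch, assuming the node containers are already up) is:

```shell
# Ping every node container through the docker connection plugin
ansible -i contrib/dind/hosts containers -m ping
```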
contrib/dind/kubespray-dind.yaml
@@ -1,22 +0,0 @@
---
# kubespray-dind.yaml: minimal kubespray ansible playbook usable for DIND
# See contrib/dind/README.md
kube_api_anonymous_auth: true

kubelet_fail_swap_on: false

# Docker nodes need to have been created with the same "node_distro: debian"
# as set in contrib/dind/group_vars/all/all.yaml
bootstrap_os: debian

docker_version: latest

docker_storage_options: -s overlay2 --storage-opt overlay2.override_kernel_check=true -g /dind/docker

dns_mode: coredns

deploy_netchecker: true
netcheck_agent_image_repo: quay.io/l23network/k8s-netchecker-agent
netcheck_server_image_repo: quay.io/l23network/k8s-netchecker-server
netcheck_agent_image_tag: v1.0
netcheck_server_image_tag: v1.0
contrib/dind/requirements.txt
@@ -1 +0,0 @@
docker
contrib/dind/roles/dind-cluster/tasks/main.yaml
@@ -1,73 +0,0 @@
---
- name: Set_fact distro_setup
  set_fact:
    distro_setup: "{{ distro_settings[node_distro] }}"

- name: Set_fact other distro settings
  set_fact:
    distro_user: "{{ distro_setup['user'] }}"
    distro_ssh_service: "{{ distro_setup['ssh_service'] }}"
    distro_extra_packages: "{{ distro_setup['extra_packages'] }}"

- name: Null-ify some linux tools to ease DIND
  file:
    src: "/bin/true"
    dest: "{{ item }}"
    state: link
    force: true
  with_items:
    # The DIND box may have swap enabled; don't bother
    - /sbin/swapoff
    # /etc/hosts handling would fail when trying to copy file attributes on edit;
    # void it by successfully returning nil output
    - /usr/bin/lsattr
    # disable selinux-isms, especially needed if running on a non-SELinux host
    - /usr/sbin/semodule

- name: Void installing dpkg docs and man pages on Debian based distros
  copy:
    content: |
      # Delete locales
      path-exclude=/usr/share/locale/*
      # Delete man pages
      path-exclude=/usr/share/man/*
      # Delete docs
      path-exclude=/usr/share/doc/*
      path-include=/usr/share/doc/*/copyright
    dest: /etc/dpkg/dpkg.cfg.d/01_nodoc
    mode: "0644"
  when:
    - ansible_os_family == 'Debian'

- name: Install system packages to better match a full-fledged node
  package:
    name: "{{ item }}"
    state: present
  with_items: "{{ distro_extra_packages + ['rsyslog', 'openssh-server'] }}"

- name: Start needed services
  service:
    name: "{{ item }}"
    state: started
  with_items:
    - rsyslog
    - "{{ distro_ssh_service }}"

- name: Create distro user "{{ distro_user }}"
  user:
    name: "{{ distro_user }}"
    uid: 1000
    # groups: sudo
    append: true

- name: Allow password-less sudo to "{{ distro_user }}"
  copy:
    content: "{{ distro_user }} ALL=(ALL) NOPASSWD:ALL"
    dest: "/etc/sudoers.d/{{ distro_user }}"
    mode: "0640"

- name: "Add my pubkey to {{ distro_user }} user authorized keys"
  ansible.posix.authorized_key:
    user: "{{ distro_user }}"
    state: present
    key: "{{ lookup('file', lookup('env', 'HOME') + '/.ssh/id_rsa.pub') }}"
contrib/dind/roles/dind-host/tasks/main.yaml
@@ -1,87 +0,0 @@
---
- name: Set_fact distro_setup
  set_fact:
    distro_setup: "{{ distro_settings[node_distro] }}"

- name: Set_fact other distro settings
  set_fact:
    distro_image: "{{ distro_setup['image'] }}"
    distro_init: "{{ distro_setup['init'] }}"
    distro_pid1_exe: "{{ distro_setup['pid1_exe'] }}"
    distro_raw_setup: "{{ distro_setup['raw_setup'] }}"
    distro_raw_setup_done: "{{ distro_setup['raw_setup_done'] }}"
    distro_agetty_svc: "{{ distro_setup['agetty_svc'] }}"

- name: Create dind node containers from "containers" inventory section
  community.docker.docker_container:
    image: "{{ distro_image }}"
    name: "{{ item }}"
    state: started
    hostname: "{{ item }}"
    command: "{{ distro_init }}"
    # recreate: true
    privileged: true
    tmpfs:
      - /sys/module/nf_conntrack/parameters
    volumes:
      - /boot:/boot
      - /lib/modules:/lib/modules
      - "{{ item }}:/dind/docker"
  register: containers
  with_items: "{{ groups.containers }}"
  tags:
    - addresses

- name: Gather list of container IPs
  set_fact:
    addresses: "{{ containers.results | map(attribute='ansible_facts') | map(attribute='docker_container') | map(attribute='NetworkSettings') | map(attribute='IPAddress') | list }}"
  tags:
    - addresses

- name: Create inventory_builder helper already set with the list of node containers' IPs
  template:
    src: inventory_builder.sh.j2
    dest: /tmp/kubespray.dind.inventory_builder.sh
    mode: "0755"
  tags:
    - addresses

- name: Install needed packages into node containers via raw; need to wait for possible systemd packages to finish installing
  raw: |
    # agetty processes churn a lot of cpu time failing on nonexistent ttys; STOP them early, to reap them in the task below
    pkill -STOP agetty || true
    {{ distro_raw_setup_done }} && echo SKIPPED && exit 0
    until [ "$(readlink /proc/1/exe)" = "{{ distro_pid1_exe }}" ] ; do sleep 1; done
    {{ distro_raw_setup }}
  delegate_to: "{{ item._ansible_item_label | default(item.item) }}"
  with_items: "{{ containers.results }}"
  register: result
  changed_when: result.stdout.find("SKIPPED") < 0

- name: Remove gettys from node containers
  raw: |
    until test -S /var/run/dbus/system_bus_socket; do sleep 1; done
    systemctl disable {{ distro_agetty_svc }}
    systemctl stop {{ distro_agetty_svc }}
  delegate_to: "{{ item._ansible_item_label | default(item.item) }}"
  with_items: "{{ containers.results }}"
  changed_when: false

# Running systemd-machine-id-setup doesn't create a unique id for each node container on Debian;
# handle it manually
- name: Re-create unique machine-id (as we may just get what comes in the docker image), needed by some CNIs for mac address seeding (notably weave)
  raw: |
    echo {{ item | hash('sha1') }} > /etc/machine-id.new
    mv -b /etc/machine-id.new /etc/machine-id
    cmp /etc/machine-id /etc/machine-id~ || true
    systemctl daemon-reload
  delegate_to: "{{ item._ansible_item_label | default(item.item) }}"
  with_items: "{{ containers.results }}"

- name: Early hack image install to adapt for DIND
  raw: |
    rm -fv /usr/bin/udevadm /usr/sbin/udevadm
  delegate_to: "{{ item._ansible_item_label | default(item.item) }}"
  with_items: "{{ containers.results }}"
  register: result
  changed_when: result.stdout.find("removed") >= 0
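To spot-check that the machine-id seeding above produced a distinct id per container (an illustrative verification, not part of the role):

```shell
# Each node container should print a different sha1-derived machine-id
for n in kube-node1 kube-node2 kube-node3; do
  docker exec "$n" cat /etc/machine-id
done
```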
contrib/dind/roles/dind-host/templates/inventory_builder.sh.j2
@@ -1,3 +0,0 @@
#!/bin/bash
# NOTE: if you change HOST_PREFIX, you also need to edit the ./hosts [containers] section
HOST_PREFIX=kube-node python3 contrib/inventory_builder/inventory.py {% for ip in addresses %} {{ ip }} {% endfor %}
contrib/dind/run-test-distros.sh
@@ -1,93 +0,0 @@
#!/bin/bash
# Q&D test 'em all: creates full DIND kubespray deploys
# for each distro, verifying them via netchecker.

info() {
    local msg="$*"
    local date="$(date -Isec)"
    echo "INFO: [$date] $msg"
}
pass_or_fail() {
    local rc="$?"
    local msg="$*"
    local date="$(date -Isec)"
    [ $rc -eq 0 ] && echo "PASS: [$date] $msg" || echo "FAIL: [$date] $msg"
    return $rc
}
test_distro() {
    local distro=${1:?};shift
    local extra="${*:-}"
    local prefix="${distro}[${extra}]"
    ansible-playbook -i hosts dind-cluster.yaml -e node_distro=$distro
    pass_or_fail "$prefix: dind-nodes" || return 1
    (cd ../..
        INVENTORY_DIR=inventory/local-dind
        mkdir -p ${INVENTORY_DIR}
        rm -f ${INVENTORY_DIR}/hosts.ini
        CONFIG_FILE=${INVENTORY_DIR}/hosts.ini /tmp/kubespray.dind.inventory_builder.sh
        # expand $extra with -e in front of each word
        extra_args=""; for extra_arg in $extra; do extra_args="$extra_args -e $extra_arg"; done
        ansible-playbook --become -e ansible_ssh_user=$distro -i \
            ${INVENTORY_DIR}/hosts.ini cluster.yml \
            -e @contrib/dind/kubespray-dind.yaml -e bootstrap_os=$distro ${extra_args}
        pass_or_fail "$prefix: kubespray"
    ) || return 1
    local node0=${NODES[0]}
    docker exec ${node0} kubectl get pod --all-namespaces
    pass_or_fail "$prefix: kube-api" || return 1
    let retries=60
    while ((retries--)); do
        # Some CNIs may set the NodePort on the "main" node interface address (thus no localhost NodePort)
        # e.g. kube-router: https://github.com/cloudnativelabs/kube-router/pull/217
        docker exec ${node0} curl -m2 -s http://${NETCHECKER_HOST:?}:31081/api/v1/connectivity_check | grep successfully && break
        sleep 2
    done
    [ $retries -ge 0 ]
    pass_or_fail "$prefix: netcheck" || return 1
}

NODES=($(egrep ^kube-node hosts))
NETCHECKER_HOST=localhost

: ${OUTPUT_DIR:=./out}
mkdir -p ${OUTPUT_DIR}

# The SPEC file(s) must have two arrays, e.g.
# DISTROS=(debian centos)
# EXTRAS=(
#   'kube_network_plugin=calico'
#   'kube_network_plugin=flannel'
#   'kube_network_plugin=weave'
# )
# that will be tested in a "combinatory" way (e.g. from the above there'll
# be 6 test runs), creating a sequenced <spec_filename>-nn.out with each output.
#
# Each $EXTRAS element will be whitespace split, and passed as --extra-vars
# to the main kubespray ansible-playbook run.

SPECS=${*:?Missing SPEC files, e.g. test-most_distros-some_CNIs.env}
for spec in ${SPECS}; do
    unset DISTROS EXTRAS
    echo "Loading file=${spec} ..."
    . ${spec} || continue
    : ${DISTROS:?} || continue
    echo "DISTROS:" "${DISTROS[@]}"
    echo "EXTRAS->"
    printf "  %s\n" "${EXTRAS[@]}"
    let n=1
    for distro in "${DISTROS[@]}"; do
        for extra in "${EXTRAS[@]:-NULL}"; do
            # Magic value to let this loop run once:
            [[ ${extra} == NULL ]] && unset extra
            docker rm -f "${NODES[@]}"
            printf -v file_out "%s/%s-%02d.out" ${OUTPUT_DIR} ${spec} $((n++))
            {
                info "${distro}[${extra}] START: file_out=${file_out}"
                time test_distro ${distro} ${extra}
            } |& tee ${file_out}
            # sleeping for the sake of the human, to verify if they want
            sleep 2m
        done
    done
done
egrep -H '^(....:|real)' $(ls -tr ${OUTPUT_DIR}/*.out)
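Putting it together, a typical invocation passes one or more SPEC files and then inspects the per-run logs (a sketch based on the defaults above):

```shell
# Run the distro/CNI matrix described by a SPEC file
./run-test-distros.sh test-most_distros-some_CNIs.env

# Each run's output lands in ./out/<spec_filename>-NN.out
grep -HE '^(PASS|FAIL):' out/*.out
```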
contrib/dind/test-most_distros-some_CNIs.env
@@ -1,11 +0,0 @@
# Test spec file: used from ./run-test-distros.sh, will run
# each distro in $DISTROS, overloading the main kubespray ansible-playbook run
# Get all DISTROS from distro.yaml (shame no yaml parsing, but 'nuff anyway)
# DISTROS="${*:-$(egrep -o '^  \w+' group_vars/all/distro.yaml|paste -s)}"
DISTROS=(debian ubuntu centos fedora)

# Each line below will be added as --extra-vars to the main playbook run
EXTRAS=(
  'kube_network_plugin=calico'
  'kube_network_plugin=weave'
)
contrib/dind/test-some_distros-kube_router_combo.env
@@ -1,6 +0,0 @@
DISTROS=(debian centos)
NETCHECKER_HOST=${NODES[0]}
EXTRAS=(
  'kube_network_plugin=kube-router {"kube_router_run_service_proxy":false}'
  'kube_network_plugin=kube-router {"kube_router_run_service_proxy":true}'
)
contrib/dind/test-some_distros-most_CNIs.env
@@ -1,8 +0,0 @@
DISTROS=(debian centos)
EXTRAS=(
  'kube_network_plugin=calico {}'
  'kube_network_plugin=canal {}'
  'kube_network_plugin=cilium {}'
  'kube_network_plugin=flannel {}'
  'kube_network_plugin=weave {}'
)
contrib/inventory_builder/inventory.py
@@ -1,480 +0,0 @@
#!/usr/bin/env python3
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
# Usage: inventory.py ip1 [ip2 ...]
# Examples: inventory.py 10.10.1.3 10.10.1.4 10.10.1.5
#
# Advanced usage:
# Add another host after initial creation: inventory.py 10.10.1.5
# Add range of hosts: inventory.py 10.10.1.3-10.10.1.5
# Add hosts with different ip and access ip:
# inventory.py 10.0.0.1,192.168.10.1 10.0.0.2,192.168.10.2 10.0.0.3,192.168.1.3
# Add hosts with a specific hostname, ip, and optional access ip:
# inventory.py first,10.0.0.1,192.168.10.1 second,10.0.0.2 last,10.0.0.3
# Delete a host: inventory.py -10.10.1.3
# Delete a host by id: inventory.py -node1
#
# Load a YAML or JSON file with inventory data: inventory.py load hosts.yaml
# YAML file should be in the following format:
# group1:
#   host1:
#     ip: X.X.X.X
#     var: val
# group2:
#   host2:
#     ip: X.X.X.X

from collections import OrderedDict
from ipaddress import ip_address
from ruamel.yaml import YAML

import os
import re
import subprocess
import sys

ROLES = ['all', 'kube_control_plane', 'kube_node', 'etcd', 'k8s_cluster',
         'calico_rr']
PROTECTED_NAMES = ROLES
AVAILABLE_COMMANDS = ['help', 'print_cfg', 'print_ips', 'print_hostnames',
                      'load', 'add']
_boolean_states = {'1': True, 'yes': True, 'true': True, 'on': True,
                   '0': False, 'no': False, 'false': False, 'off': False}
yaml = YAML()
yaml.Representer.add_representer(OrderedDict, yaml.Representer.represent_dict)


def get_var_as_bool(name, default):
    value = os.environ.get(name, '')
    return _boolean_states.get(value.lower(), default)


# Configurable as shell vars start

CONFIG_FILE = os.environ.get("CONFIG_FILE", "./inventory/sample/hosts.yaml")
# Remove the reference of KUBE_MASTERS after some deprecation cycles.
KUBE_CONTROL_HOSTS = int(os.environ.get("KUBE_CONTROL_HOSTS",
                                        os.environ.get("KUBE_MASTERS", 2)))
# Reconfigures cluster distribution at scale
SCALE_THRESHOLD = int(os.environ.get("SCALE_THRESHOLD", 50))
MASSIVE_SCALE_THRESHOLD = int(os.environ.get("MASSIVE_SCALE_THRESHOLD", 200))

DEBUG = get_var_as_bool("DEBUG", True)
HOST_PREFIX = os.environ.get("HOST_PREFIX", "node")
USE_REAL_HOSTNAME = get_var_as_bool("USE_REAL_HOSTNAME", False)

# Configurable as shell vars end


class KubesprayInventory(object):

    def __init__(self, changed_hosts=None, config_file=None):
        self.config_file = config_file
        self.yaml_config = {}
        loadPreviousConfig = False
        printHostnames = False
        # See whether there are any commands to process
        if changed_hosts and changed_hosts[0] in AVAILABLE_COMMANDS:
            if changed_hosts[0] == "add":
                loadPreviousConfig = True
                changed_hosts = changed_hosts[1:]
            elif changed_hosts[0] == "print_hostnames":
                loadPreviousConfig = True
                printHostnames = True
            else:
                self.parse_command(changed_hosts[0], changed_hosts[1:])
                sys.exit(0)

        # If the user wants to remove a node, we need to load the config anyway
        if changed_hosts and changed_hosts[0][0] == "-":
            loadPreviousConfig = True

        if self.config_file and loadPreviousConfig:  # Load previous YAML file
            try:
                self.hosts_file = open(config_file, 'r')
                self.yaml_config = yaml.load(self.hosts_file)
            except OSError as e:
                # I am assuming we are catching "cannot open file" exceptions
                print(e)
                sys.exit(1)

        if printHostnames:
            self.print_hostnames()
            sys.exit(0)

        self.ensure_required_groups(ROLES)

        if changed_hosts:
            changed_hosts = self.range2ips(changed_hosts)
            self.hosts = self.build_hostnames(changed_hosts,
                                              loadPreviousConfig)
            self.purge_invalid_hosts(self.hosts.keys(), PROTECTED_NAMES)
            self.set_all(self.hosts)
            self.set_k8s_cluster()
            etcd_hosts_count = 3 if len(self.hosts.keys()) >= 3 else 1
            self.set_etcd(list(self.hosts.keys())[:etcd_hosts_count])
            if len(self.hosts) >= SCALE_THRESHOLD:
                self.set_kube_control_plane(list(self.hosts.keys())[
                    etcd_hosts_count:(etcd_hosts_count + KUBE_CONTROL_HOSTS)])
            else:
                self.set_kube_control_plane(
                    list(self.hosts.keys())[:KUBE_CONTROL_HOSTS])
            self.set_kube_node(self.hosts.keys())
            if len(self.hosts) >= SCALE_THRESHOLD:
                self.set_calico_rr(list(self.hosts.keys())[:etcd_hosts_count])
        else:  # Show help if no options
            self.show_help()
            sys.exit(0)

        self.write_config(self.config_file)

    def write_config(self, config_file):
        if config_file:
            with open(self.config_file, 'w') as f:
                yaml.dump(self.yaml_config, f)
        else:
            print("WARNING: Unable to save config. Make sure you set "
                  "CONFIG_FILE env var.")

    def debug(self, msg):
        if DEBUG:
            print("DEBUG: {0}".format(msg))

    def get_ip_from_opts(self, optstring):
        if 'ip' in optstring:
            return optstring['ip']
        else:
            raise ValueError("IP parameter not found in options")

    def ensure_required_groups(self, groups):
        for group in groups:
            if group == 'all':
                self.debug("Adding group {0}".format(group))
                if group not in self.yaml_config:
                    all_dict = OrderedDict([('hosts', OrderedDict({})),
                                            ('children', OrderedDict({}))])
                    self.yaml_config = {'all': all_dict}
            else:
                self.debug("Adding group {0}".format(group))
                if group not in self.yaml_config['all']['children']:
                    self.yaml_config['all']['children'][group] = {'hosts': {}}

    def get_host_id(self, host):
        '''Returns integer host ID (without padding) from a given hostname.'''
        try:
            short_hostname = host.split('.')[0]
            return int(re.findall("\\d+$", short_hostname)[-1])
        except IndexError:
            raise ValueError("Host name must end in an integer")

    # Keeps already specified hosts,
    # and adds or removes the hosts provided as an argument
    def build_hostnames(self, changed_hosts, loadPreviousConfig=False):
        existing_hosts = OrderedDict()
        highest_host_id = 0
        # Load already existing hosts from the YAML
        if loadPreviousConfig:
            try:
                for host in self.yaml_config['all']['hosts']:
                    # Read configuration of an existing host
                    hostConfig = self.yaml_config['all']['hosts'][host]
                    existing_hosts[host] = hostConfig
                    # If the existing host seems
                    # to have been created automatically, detect its ID
                    if host.startswith(HOST_PREFIX):
                        host_id = self.get_host_id(host)
                        if host_id > highest_host_id:
                            highest_host_id = host_id
            except Exception as e:
                # I am assuming we are catching automatically
                # created hosts without IDs
                print(e)
                sys.exit(1)

        # FIXME(mattymo): Fix condition where delete then add reuses highest id
        next_host_id = highest_host_id + 1
        next_host = ""

        all_hosts = existing_hosts.copy()
        for host in changed_hosts:
            # Delete the host from the config if the hostname/IP has a "-" prefix
            if host[0] == "-":
                realhost = host[1:]
                if self.exists_hostname(all_hosts, realhost):
                    self.debug("Marked {0} for deletion.".format(realhost))
                    all_hosts.pop(realhost)
                elif self.exists_ip(all_hosts, realhost):
                    self.debug("Marked {0} for deletion.".format(realhost))
                    self.delete_host_by_ip(all_hosts, realhost)
            # If the host/argument starts with a digit,
            # then we assume it's an IP address
            elif host[0].isdigit():
                if ',' in host:
                    ip, access_ip = host.split(',')
                else:
                    ip = host
                    access_ip = host
                if self.exists_hostname(all_hosts, host):
                    self.debug("Skipping existing host {0}.".format(host))
                    continue
                elif self.exists_ip(all_hosts, ip):
                    self.debug("Skipping existing host {0}.".format(ip))
                    continue

                if USE_REAL_HOSTNAME:
                    cmd = ("ssh -oStrictHostKeyChecking=no "
                           + access_ip + " 'hostname -s'")
                    next_host = subprocess.check_output(cmd, shell=True)
                    next_host = next_host.strip().decode('ascii')
                else:
                    # Generates a hostname because we have only an IP address
                    next_host = "{0}{1}".format(HOST_PREFIX, next_host_id)
                    next_host_id += 1
                # Uses the automatically generated node name
                # in case we don't provide it.
                all_hosts[next_host] = {'ansible_host': access_ip,
                                        'ip': ip,
                                        'access_ip': access_ip}
            # If the host/argument starts with a letter, we assume it's a hostname
            elif host[0].isalpha():
                if ',' in host:
                    try:
                        hostname, ip, access_ip = host.split(',')
                    except Exception:
                        hostname, ip = host.split(',')
                        access_ip = ip
                if self.exists_hostname(all_hosts, host):
                    self.debug("Skipping existing host {0}.".format(host))
                    continue
                elif self.exists_ip(all_hosts, ip):
                    self.debug("Skipping existing host {0}.".format(ip))
                    continue
                all_hosts[hostname] = {'ansible_host': access_ip,
                                       'ip': ip,
                                       'access_ip': access_ip}
        return all_hosts

    # Expand IP ranges into individual addresses
    def range2ips(self, hosts):
        reworked_hosts = []

        def ips(start_address, end_address):
            try:
                # Python 3.x
                start = int(ip_address(start_address))
                end = int(ip_address(end_address))
            except Exception:
                # Python 2.7
                start = int(ip_address(str(start_address)))
                end = int(ip_address(str(end_address)))
            return [ip_address(ip).exploded for ip in range(start, end + 1)]

        for host in hosts:
            if '-' in host and not (host.startswith('-') or host[0].isalpha()):
                start, end = host.strip().split('-')
                try:
                    reworked_hosts.extend(ips(start, end))
                except ValueError:
                    raise Exception("Range of ip_addresses isn't valid")
            else:
                reworked_hosts.append(host)
        return reworked_hosts

    def exists_hostname(self, existing_hosts, hostname):
        return hostname in existing_hosts.keys()

    def exists_ip(self, existing_hosts, ip):
        for host_opts in existing_hosts.values():
            if ip == self.get_ip_from_opts(host_opts):
                return True
        return False

    def delete_host_by_ip(self, existing_hosts, ip):
        for hostname, host_opts in existing_hosts.items():
            if ip == self.get_ip_from_opts(host_opts):
                del existing_hosts[hostname]
                return
        raise ValueError("Unable to find host by IP: {0}".format(ip))

    def purge_invalid_hosts(self, hostnames, protected_names=[]):
        for role in self.yaml_config['all']['children']:
            if role != 'k8s_cluster' and self.yaml_config['all']['children'][role]['hosts']:  # noqa
                all_hosts = self.yaml_config['all']['children'][role]['hosts'].copy()  # noqa
                for host in all_hosts.keys():
                    if host not in hostnames and host not in protected_names:
                        self.debug(
                            "Host {0} removed from role {1}".format(host, role))  # noqa
                        del self.yaml_config['all']['children'][role]['hosts'][host]  # noqa
        # purge from all
        if self.yaml_config['all']['hosts']:
            all_hosts = self.yaml_config['all']['hosts'].copy()
            for host in all_hosts.keys():
                if host not in hostnames and host not in protected_names:
                    self.debug("Host {0} removed from role all".format(host))
                    del self.yaml_config['all']['hosts'][host]

    def add_host_to_group(self, group, host, opts=""):
        self.debug("adding host {0} to group {1}".format(host, group))
        if group == 'all':
            if self.yaml_config['all']['hosts'] is None:
                self.yaml_config['all']['hosts'] = {host: None}
            self.yaml_config['all']['hosts'][host] = opts
        elif group != 'k8s_cluster:children':
            if self.yaml_config['all']['children'][group]['hosts'] is None:
                self.yaml_config['all']['children'][group]['hosts'] = {
                    host: None}
            else:
                self.yaml_config['all']['children'][group]['hosts'][host] = None  # noqa

    def set_kube_control_plane(self, hosts):
        for host in hosts:
            self.add_host_to_group('kube_control_plane', host)

    def set_all(self, hosts):
        for host, opts in hosts.items():
            self.add_host_to_group('all', host, opts)

    def set_k8s_cluster(self):
        k8s_cluster = {'children': {'kube_control_plane': None,
                                    'kube_node': None}}
        self.yaml_config['all']['children']['k8s_cluster'] = k8s_cluster

    def set_calico_rr(self, hosts):
        for host in hosts:
            if host in self.yaml_config['all']['children']['kube_control_plane']:  # noqa
                self.debug("Not adding {0} to calico_rr group because it "
                           "conflicts with kube_control_plane "
                           "group".format(host))
                continue
            if host in self.yaml_config['all']['children']['kube_node']:
                self.debug("Not adding {0} to calico_rr group because it "
                           "conflicts with kube_node group".format(host))
                continue
            self.add_host_to_group('calico_rr', host)

    def set_kube_node(self, hosts):
        for host in hosts:
            if len(self.yaml_config['all']['hosts']) >= SCALE_THRESHOLD:
                if host in self.yaml_config['all']['children']['etcd']['hosts']:  # noqa
                    self.debug("Not adding {0} to kube_node group because of "
                               "scale deployment and host is in etcd "
                               "group.".format(host))
                    continue
            if len(self.yaml_config['all']['hosts']) >= MASSIVE_SCALE_THRESHOLD:  # noqa
                if host in self.yaml_config['all']['children']['kube_control_plane']['hosts']:  # noqa
                    self.debug("Not adding {0} to kube_node group because of "
                               "scale deployment and host is in "
                               "kube_control_plane group.".format(host))
                    continue
            self.add_host_to_group('kube_node', host)

    def set_etcd(self, hosts):
        for host in hosts:
            self.add_host_to_group('etcd', host)

    def load_file(self, files=None):
        '''Directly loads JSON to inventory.'''

        if not files:
            raise Exception("No input file specified.")

        import json

        for filename in list(files):
            # Try JSON
            try:
                with open(filename, 'r') as f:
                    data = json.load(f)
            except ValueError:
                raise Exception("Cannot read %s as JSON, or CSV", filename)

            self.ensure_required_groups(ROLES)
            self.set_k8s_cluster()
            for group, hosts in data.items():
                self.ensure_required_groups([group])
                for host, opts in hosts.items():
                    optstring = {'ansible_host': opts['ip'],
                                 'ip': opts['ip'],
                                 'access_ip': opts['ip']}
                    self.add_host_to_group('all', host, optstring)
                    self.add_host_to_group(group, host)
            self.write_config(self.config_file)

    def parse_command(self, command, args=None):
        if command == 'help':
            self.show_help()
        elif command == 'print_cfg':
            self.print_config()
        elif command == 'print_ips':
            self.print_ips()
        elif command == 'print_hostnames':
            self.print_hostnames()
        elif command == 'load':
            self.load_file(args)
        else:
            raise Exception("Invalid command specified.")

    def show_help(self):
        help_text = '''Usage: inventory.py ip1 [ip2 ...]
Examples: inventory.py 10.10.1.3 10.10.1.4 10.10.1.5

Available commands:
help - Display this message
print_cfg - Write inventory file to stdout
print_ips - Write a space-delimited list of IPs from "all" group
print_hostnames - Write a space-delimited list of Hostnames from "all" group
add - Adds specified hosts into an already existing inventory

Advanced usage:
Create new or overwrite old inventory file: inventory.py 10.10.1.5
Add another host after initial creation: inventory.py add 10.10.1.6
Add range of hosts: inventory.py 10.10.1.3-10.10.1.5
Add hosts with different ip and access ip: inventory.py 10.0.0.1,192.168.10.1 10.0.0.2,192.168.10.2 10.0.0.3,192.168.10.3
Add hosts with a specific hostname, ip, and optional access ip: first,10.0.0.1,192.168.10.1 second,10.0.0.2 last,10.0.0.3
Delete a host: inventory.py -10.10.1.3
Delete a host by id: inventory.py -node1

Configurable env vars:
DEBUG                   Enable debug printing. Default: True
CONFIG_FILE             File to write config to. Default: ./inventory/sample/hosts.yaml
HOST_PREFIX             Host prefix for generated hosts. Default: node
KUBE_CONTROL_HOSTS      Set the number of kube-control-planes. Default: 2
SCALE_THRESHOLD         Separate ETCD role if # of nodes >= 50
MASSIVE_SCALE_THRESHOLD Separate K8s control-plane and ETCD if # of nodes >= 200
'''  # noqa
        print(help_text)

    def print_config(self):
        yaml.dump(self.yaml_config, sys.stdout)

    def print_hostnames(self):
        print(' '.join(self.yaml_config['all']['hosts'].keys()))

    def print_ips(self):
        ips = []
        for host, opts in self.yaml_config['all']['hosts'].items():
            ips.append(self.get_ip_from_opts(opts))
        print(' '.join(ips))


def main(argv=None):
    if not argv:
        argv = sys.argv[1:]
    KubesprayInventory(argv, CONFIG_FILE)
    return 0


if __name__ == "__main__":
    sys.exit(main())
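For reference, a minimal session with this script looks roughly like the following (a sketch derived from the code above; the exact YAML layout produced by ruamel may differ slightly):

```shell
# Generate a 3-node inventory; with 3 hosts, all of them join etcd,
# the first two become kube_control_plane, and all three are kube_node
CONFIG_FILE=inventory/mycluster/hosts.yaml \
  python3 contrib/inventory_builder/inventory.py 10.10.1.3 10.10.1.4 10.10.1.5

cat inventory/mycluster/hosts.yaml
# all:
#   hosts:
#     node1: {ansible_host: 10.10.1.3, ip: 10.10.1.3, access_ip: 10.10.1.3}
#     node2: {ansible_host: 10.10.1.4, ip: 10.10.1.4, access_ip: 10.10.1.4}
#     node3: {ansible_host: 10.10.1.5, ip: 10.10.1.5, access_ip: 10.10.1.5}
#   children:
#     kube_control_plane: {hosts: {node1:, node2:}}
#     kube_node: {hosts: {node1:, node2:, node3:}}
#     etcd: {hosts: {node1:, node2:, node3:}}
#     k8s_cluster: {children: {kube_control_plane:, kube_node:}}
#     calico_rr: {hosts: {}}
```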
contrib/inventory_builder/requirements.txt
@@ -1,3 +0,0 @@
configparser>=3.3.0
ipaddress
ruamel.yaml>=0.15.88
contrib/inventory_builder/setup.cfg
@@ -1,3 +0,0 @@
[metadata]
name = kubespray-inventory-builder
version = 0.1
contrib/inventory_builder/setup.py
@@ -1,29 +0,0 @@
# Copyright (c) 2013 Hewlett-Packard Development Company, L.P.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.

# THIS FILE IS MANAGED BY THE GLOBAL REQUIREMENTS REPO - DO NOT EDIT
import setuptools

# In python < 2.7.4, a lazy loading of package `pbr` will break
# setuptools if some other modules registered functions in `atexit`.
# solution from: http://bugs.python.org/issue15881#msg170215
try:
    import multiprocessing  # noqa
except ImportError:
    pass

setuptools.setup(
    setup_requires=[],
    pbr=False)
contrib/inventory_builder/test-requirements.txt
@@ -1,3 +0,0 @@
hacking>=0.10.2
mock>=1.3.0
pytest>=2.8.0
contrib/inventory_builder/tests/test_inventory.py
@@ -1,595 +0,0 @@
# Copyright 2016 Mirantis, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

import inventory
from io import StringIO
import unittest
from unittest import mock

from collections import OrderedDict
import sys

path = "./contrib/inventory_builder/"
if path not in sys.path:
    sys.path.append(path)

import inventory  # noqa


class TestInventoryPrintHostnames(unittest.TestCase):

    @mock.patch('ruamel.yaml.YAML.load')
    def test_print_hostnames(self, load_mock):
        mock_io = mock.mock_open(read_data='')
        load_mock.return_value = OrderedDict({'all': {'hosts': {
            'node1': {'ansible_host': '10.90.0.2',
                      'ip': '10.90.0.2',
                      'access_ip': '10.90.0.2'},
            'node2': {'ansible_host': '10.90.0.3',
                      'ip': '10.90.0.3',
                      'access_ip': '10.90.0.3'}}}})
        with mock.patch('builtins.open', mock_io):
            with self.assertRaises(SystemExit) as cm:
                with mock.patch('sys.stdout', new_callable=StringIO) as stdout:
                    inventory.KubesprayInventory(
                        changed_hosts=["print_hostnames"],
                        config_file="file")
            self.assertEqual("node1 node2\n", stdout.getvalue())
            self.assertEqual(cm.exception.code, 0)


class TestInventory(unittest.TestCase):
    @mock.patch('inventory.sys')
    def setUp(self, sys_mock):
        sys_mock.exit = mock.Mock()
        super(TestInventory, self).setUp()
        self.data = ['10.90.3.2', '10.90.3.3', '10.90.3.4']
        self.inv = inventory.KubesprayInventory()

    def test_get_ip_from_opts(self):
        optstring = {'ansible_host': '10.90.3.2',
                     'ip': '10.90.3.2',
                     'access_ip': '10.90.3.2'}
        expected = "10.90.3.2"
        result = self.inv.get_ip_from_opts(optstring)
        self.assertEqual(expected, result)

    def test_get_ip_from_opts_invalid(self):
        optstring = "notanaddr=value something random!chars:D"
        self.assertRaisesRegex(ValueError, "IP parameter not found",
                               self.inv.get_ip_from_opts, optstring)

    def test_ensure_required_groups(self):
        groups = ['group1', 'group2']
        self.inv.ensure_required_groups(groups)
        for group in groups:
            self.assertIn(group, self.inv.yaml_config['all']['children'])

    def test_get_host_id(self):
        hostnames = ['node99', 'no99de01', '01node01', 'node1.domain',
                     'node3.xyz123.aaa']
        expected = [99, 1, 1, 1, 3]
        for hostname, expected in zip(hostnames, expected):
            result = self.inv.get_host_id(hostname)
            self.assertEqual(expected, result)

    def test_get_host_id_invalid(self):
        bad_hostnames = ['node', 'no99de', '01node', 'node.111111']
        for hostname in bad_hostnames:
            self.assertRaisesRegex(ValueError, "Host name must end in an",
                                   self.inv.get_host_id, hostname)

    def test_build_hostnames_add_duplicate(self):
        changed_hosts = ['10.90.0.2']
        expected = OrderedDict([('node3',
                                 {'ansible_host': '10.90.0.2',
                                  'ip': '10.90.0.2',
                                  'access_ip': '10.90.0.2'})])
        self.inv.yaml_config['all']['hosts'] = expected
        result = self.inv.build_hostnames(changed_hosts, True)
        self.assertEqual(expected, result)

    def test_build_hostnames_add_two(self):
        changed_hosts = ['10.90.0.2', '10.90.0.3']
        expected = OrderedDict([
            ('node1', {'ansible_host': '10.90.0.2',
                       'ip': '10.90.0.2',
                       'access_ip': '10.90.0.2'}),
            ('node2', {'ansible_host': '10.90.0.3',
                       'ip': '10.90.0.3',
                       'access_ip': '10.90.0.3'})])
        self.inv.yaml_config['all']['hosts'] = OrderedDict()
        result = self.inv.build_hostnames(changed_hosts)
        self.assertEqual(expected, result)

    def test_build_hostnames_add_three(self):
        changed_hosts = ['10.90.0.2', '10.90.0.3', '10.90.0.4']
        expected = OrderedDict([
            ('node1', {'ansible_host': '10.90.0.2',
                       'ip': '10.90.0.2',
                       'access_ip': '10.90.0.2'}),
            ('node2', {'ansible_host': '10.90.0.3',
                       'ip': '10.90.0.3',
                       'access_ip': '10.90.0.3'}),
            ('node3', {'ansible_host': '10.90.0.4',
                       'ip': '10.90.0.4',
                       'access_ip': '10.90.0.4'})])
        result = self.inv.build_hostnames(changed_hosts)
        self.assertEqual(expected, result)

    def test_build_hostnames_add_one(self):
        changed_hosts = ['10.90.0.2']
        expected = OrderedDict([('node1',
                                 {'ansible_host': '10.90.0.2',
                                  'ip': '10.90.0.2',
                                  'access_ip': '10.90.0.2'})])
        result = self.inv.build_hostnames(changed_hosts)
        self.assertEqual(expected, result)

    def test_build_hostnames_delete_first(self):
        changed_hosts = ['-10.90.0.2']
        existing_hosts = OrderedDict([
            ('node1', {'ansible_host': '10.90.0.2',
                       'ip': '10.90.0.2',
                       'access_ip': '10.90.0.2'}),
            ('node2', {'ansible_host': '10.90.0.3',
                       'ip': '10.90.0.3',
                       'access_ip': '10.90.0.3'})])
        self.inv.yaml_config['all']['hosts'] = existing_hosts
        expected = OrderedDict([
            ('node2', {'ansible_host': '10.90.0.3',
                       'ip': '10.90.0.3',
                       'access_ip': '10.90.0.3'})])
        result = self.inv.build_hostnames(changed_hosts, True)
        self.assertEqual(expected, result)

    def test_build_hostnames_delete_by_hostname(self):
        changed_hosts = ['-node1']
        existing_hosts = OrderedDict([
            ('node1', {'ansible_host': '10.90.0.2',
                       'ip': '10.90.0.2',
                       'access_ip': '10.90.0.2'}),
            ('node2', {'ansible_host': '10.90.0.3',
                       'ip': '10.90.0.3',
                       'access_ip': '10.90.0.3'})])
        self.inv.yaml_config['all']['hosts'] = existing_hosts
        expected = OrderedDict([
            ('node2', {'ansible_host': '10.90.0.3',
                       'ip': '10.90.0.3',
                       'access_ip': '10.90.0.3'})])
        result = self.inv.build_hostnames(changed_hosts, True)
        self.assertEqual(expected, result)

    def test_exists_hostname_positive(self):
        hostname = 'node1'
        expected = True
        existing_hosts = OrderedDict([
            ('node1', {'ansible_host': '10.90.0.2',
                       'ip': '10.90.0.2',
                       'access_ip': '10.90.0.2'}),
            ('node2', {'ansible_host': '10.90.0.3',
                       'ip': '10.90.0.3',
                       'access_ip': '10.90.0.3'})])
        result = self.inv.exists_hostname(existing_hosts, hostname)
        self.assertEqual(expected, result)

    def test_exists_hostname_negative(self):
        hostname = 'node99'
        expected = False
        existing_hosts = OrderedDict([
            ('node1', {'ansible_host': '10.90.0.2',
                       'ip': '10.90.0.2',
                       'access_ip': '10.90.0.2'}),
            ('node2', {'ansible_host': '10.90.0.3',
                       'ip': '10.90.0.3',
                       'access_ip': '10.90.0.3'})])
        result = self.inv.exists_hostname(existing_hosts, hostname)
        self.assertEqual(expected, result)

    def test_exists_ip_positive(self):
        ip = '10.90.0.2'
        expected = True
        existing_hosts = OrderedDict([
            ('node1', {'ansible_host': '10.90.0.2',
                       'ip': '10.90.0.2',
                       'access_ip': '10.90.0.2'}),
            ('node2', {'ansible_host': '10.90.0.3',
                       'ip': '10.90.0.3',
                       'access_ip': '10.90.0.3'})])
        result = self.inv.exists_ip(existing_hosts, ip)
        self.assertEqual(expected, result)

    def test_exists_ip_negative(self):
        ip = '10.90.0.200'
        expected = False
        existing_hosts = OrderedDict([
            ('node1', {'ansible_host': '10.90.0.2',
                       'ip': '10.90.0.2',
                       'access_ip': '10.90.0.2'}),
            ('node2', {'ansible_host': '10.90.0.3',
                       'ip': '10.90.0.3',
                       'access_ip': '10.90.0.3'})])
        result = self.inv.exists_ip(existing_hosts, ip)
        self.assertEqual(expected, result)

    def test_delete_host_by_ip_positive(self):
        ip = '10.90.0.2'
        expected = OrderedDict([
            ('node2', {'ansible_host': '10.90.0.3',
                       'ip': '10.90.0.3',
                       'access_ip': '10.90.0.3'})])
        existing_hosts = OrderedDict([
            ('node1', {'ansible_host': '10.90.0.2',
                       'ip': '10.90.0.2',
                       'access_ip': '10.90.0.2'}),
            ('node2', {'ansible_host': '10.90.0.3',
                       'ip': '10.90.0.3',
                       'access_ip': '10.90.0.3'})])
        self.inv.delete_host_by_ip(existing_hosts, ip)
        self.assertEqual(expected, existing_hosts)

    def test_delete_host_by_ip_negative(self):
        ip = '10.90.0.200'
        existing_hosts = OrderedDict([
            ('node1', {'ansible_host': '10.90.0.2',
                       'ip': '10.90.0.2',
                       'access_ip': '10.90.0.2'}),
            ('node2', {'ansible_host': '10.90.0.3',
                       'ip': '10.90.0.3',
                       'access_ip': '10.90.0.3'})])
        self.assertRaisesRegex(ValueError, "Unable to find host",
                               self.inv.delete_host_by_ip, existing_hosts, ip)

    def test_purge_invalid_hosts(self):
        proper_hostnames = ['node1', 'node2']
        bad_host = 'doesnotbelong2'
        existing_hosts = OrderedDict([
            ('node1', {'ansible_host': '10.90.0.2',
                       'ip': '10.90.0.2',
                       'access_ip': '10.90.0.2'}),
            ('node2', {'ansible_host': '10.90.0.3',
                       'ip': '10.90.0.3',
                       'access_ip': '10.90.0.3'}),
            ('doesnotbelong2', {'whateveropts=ilike'})])
        self.inv.yaml_config['all']['hosts'] = existing_hosts
        self.inv.purge_invalid_hosts(proper_hostnames)
        self.assertNotIn(
            bad_host, self.inv.yaml_config['all']['hosts'].keys())

    def test_add_host_to_group(self):
        group = 'etcd'
        host = 'node1'
        opts = {'ip': '10.90.0.2'}

        self.inv.add_host_to_group(group, host, opts)
        self.assertEqual(
            self.inv.yaml_config['all']['children'][group]['hosts'].get(host),
            None)

    def test_set_kube_control_plane(self):
        group = 'kube_control_plane'
        host = 'node1'

        self.inv.set_kube_control_plane([host])
        self.assertIn(
            host, self.inv.yaml_config['all']['children'][group]['hosts'])

    def test_set_all(self):
        hosts = OrderedDict([
            ('node1', 'opt1'),
            ('node2', 'opt2')])

        self.inv.set_all(hosts)
        for host, opt in hosts.items():
            self.assertEqual(
                self.inv.yaml_config['all']['hosts'].get(host), opt)

    def test_set_k8s_cluster(self):
        group = 'k8s_cluster'
        expected_hosts = ['kube_node', 'kube_control_plane']

        self.inv.set_k8s_cluster()
        for host in expected_hosts:
            self.assertIn(
                host,
                self.inv.yaml_config['all']['children'][group]['children'])

    def test_set_kube_node(self):
        group = 'kube_node'
        host = 'node1'

        self.inv.set_kube_node([host])
        self.assertIn(
            host, self.inv.yaml_config['all']['children'][group]['hosts'])

    def test_set_etcd(self):
        group = 'etcd'
        host = 'node1'

        self.inv.set_etcd([host])
        self.assertIn(
            host, self.inv.yaml_config['all']['children'][group]['hosts'])

    def test_scale_scenario_one(self):
        num_nodes = 50
        hosts = OrderedDict()

        for hostid in range(1, num_nodes+1):
            hosts["node" + str(hostid)] = ""

        self.inv.set_all(hosts)
        self.inv.set_etcd(list(hosts.keys())[0:3])
        self.inv.set_kube_control_plane(list(hosts.keys())[0:2])
        self.inv.set_kube_node(hosts.keys())
        for h in range(3):
            self.assertFalse(
                list(hosts.keys())[h] in
                self.inv.yaml_config['all']['children']['kube_node']['hosts'])

    def test_scale_scenario_two(self):
        num_nodes = 500
        hosts = OrderedDict()

        for hostid in range(1, num_nodes+1):
            hosts["node" + str(hostid)] = ""

        self.inv.set_all(hosts)
        self.inv.set_etcd(list(hosts.keys())[0:3])
        self.inv.set_kube_control_plane(list(hosts.keys())[3:5])
        self.inv.set_kube_node(hosts.keys())
        for h in range(5):
            self.assertFalse(
                list(hosts.keys())[h] in
                self.inv.yaml_config['all']['children']['kube_node']['hosts'])

    def test_range2ips_range(self):
        changed_hosts = ['10.90.0.2', '10.90.0.4-10.90.0.6', '10.90.0.8']
        expected = ['10.90.0.2',
                    '10.90.0.4',
                    '10.90.0.5',
                    '10.90.0.6',
                    '10.90.0.8']
        result = self.inv.range2ips(changed_hosts)
        self.assertEqual(expected, result)

    def test_range2ips_incorrect_range(self):
        host_range = ['10.90.0.4-a.9b.c.e']
        self.assertRaisesRegex(Exception, "Range of ip_addresses isn't valid",
                               self.inv.range2ips, host_range)

    def test_build_hostnames_create_with_one_different_ips(self):
        changed_hosts = ['10.90.0.2,192.168.0.2']
        expected = OrderedDict([('node1',
                                 {'ansible_host': '192.168.0.2',
                                  'ip': '10.90.0.2',
                                  'access_ip': '192.168.0.2'})])
        result = self.inv.build_hostnames(changed_hosts)
        self.assertEqual(expected, result)

    def test_build_hostnames_create_with_two_different_ips(self):
        changed_hosts = ['10.90.0.2,192.168.0.2', '10.90.0.3,192.168.0.3']
        expected = OrderedDict([
            ('node1', {'ansible_host': '192.168.0.2',
                       'ip': '10.90.0.2',
                       'access_ip': '192.168.0.2'}),
            ('node2', {'ansible_host': '192.168.0.3',
                       'ip': '10.90.0.3',
                       'access_ip': '192.168.0.3'})])
        result = self.inv.build_hostnames(changed_hosts)
        self.assertEqual(expected, result)

    def test_build_hostnames_create_with_three_different_ips(self):
        changed_hosts = ['10.90.0.2,192.168.0.2',
                         '10.90.0.3,192.168.0.3',
                         '10.90.0.4,192.168.0.4']
        expected = OrderedDict([
            ('node1', {'ansible_host': '192.168.0.2',
                       'ip': '10.90.0.2',
                       'access_ip': '192.168.0.2'}),
            ('node2', {'ansible_host': '192.168.0.3',
                       'ip': '10.90.0.3',
                       'access_ip': '192.168.0.3'}),
            ('node3', {'ansible_host': '192.168.0.4',
                       'ip': '10.90.0.4',
                       'access_ip': '192.168.0.4'})])
        result = self.inv.build_hostnames(changed_hosts)
        self.assertEqual(expected, result)

    def test_build_hostnames_overwrite_one_with_different_ips(self):
        changed_hosts = ['10.90.0.2,192.168.0.2']
        expected = OrderedDict([('node1',
                                 {'ansible_host': '192.168.0.2',
                                  'ip': '10.90.0.2',
                                  'access_ip': '192.168.0.2'})])
        existing = OrderedDict([('node5',
                                 {'ansible_host': '192.168.0.5',
                                  'ip': '10.90.0.5',
                                  'access_ip': '192.168.0.5'})])
        self.inv.yaml_config['all']['hosts'] = existing
        result = self.inv.build_hostnames(changed_hosts)
        self.assertEqual(expected, result)

    def test_build_hostnames_overwrite_three_with_different_ips(self):
        changed_hosts = ['10.90.0.2,192.168.0.2']
        expected = OrderedDict([('node1',
                                 {'ansible_host': '192.168.0.2',
                                  'ip': '10.90.0.2',
                                  'access_ip': '192.168.0.2'})])
        existing = OrderedDict([
            ('node3', {'ansible_host': '192.168.0.3',
                       'ip': '10.90.0.3',
                       'access_ip': '192.168.0.3'}),
            ('node4', {'ansible_host': '192.168.0.4',
                       'ip': '10.90.0.4',
                       'access_ip': '192.168.0.4'}),
            ('node5', {'ansible_host': '192.168.0.5',
                       'ip': '10.90.0.5',
                       'access_ip': '192.168.0.5'})])
        self.inv.yaml_config['all']['hosts'] = existing
        result = self.inv.build_hostnames(changed_hosts)
        self.assertEqual(expected, result)

    def test_build_hostnames_different_ips_add_duplicate(self):
        changed_hosts = ['10.90.0.2,192.168.0.2']
        expected = OrderedDict([('node3',
                                 {'ansible_host': '192.168.0.2',
                                  'ip': '10.90.0.2',
                                  'access_ip': '192.168.0.2'})])
        existing = expected
        self.inv.yaml_config['all']['hosts'] = existing
        result = self.inv.build_hostnames(changed_hosts, True)
        self.assertEqual(expected, result)

    def test_build_hostnames_add_two_different_ips_into_one_existing(self):
        changed_hosts = ['10.90.0.3,192.168.0.3', '10.90.0.4,192.168.0.4']
        expected = OrderedDict([
            ('node2', {'ansible_host': '192.168.0.2',
                       'ip': '10.90.0.2',
                       'access_ip': '192.168.0.2'}),
            ('node3', {'ansible_host': '192.168.0.3',
                       'ip': '10.90.0.3',
                       'access_ip': '192.168.0.3'}),
            ('node4', {'ansible_host': '192.168.0.4',
                       'ip': '10.90.0.4',
                       'access_ip': '192.168.0.4'})])

        existing = OrderedDict([
            ('node2', {'ansible_host': '192.168.0.2',
                       'ip': '10.90.0.2',
                       'access_ip': '192.168.0.2'})])
        self.inv.yaml_config['all']['hosts'] = existing
        result = self.inv.build_hostnames(changed_hosts, True)
        self.assertEqual(expected, result)

    def test_build_hostnames_add_two_different_ips_into_two_existing(self):
        changed_hosts = ['10.90.0.4,192.168.0.4', '10.90.0.5,192.168.0.5']
        expected = OrderedDict([
            ('node2', {'ansible_host': '192.168.0.2',
                       'ip': '10.90.0.2',
                       'access_ip': '192.168.0.2'}),
            ('node3', {'ansible_host': '192.168.0.3',
                       'ip': '10.90.0.3',
                       'access_ip': '192.168.0.3'}),
|||
('node4', {'ansible_host': '192.168.0.4', |
|||
'ip': '10.90.0.4', |
|||
'access_ip': '192.168.0.4'}), |
|||
('node5', {'ansible_host': '192.168.0.5', |
|||
'ip': '10.90.0.5', |
|||
'access_ip': '192.168.0.5'})]) |
|||
|
|||
existing = OrderedDict([ |
|||
('node2', {'ansible_host': '192.168.0.2', |
|||
'ip': '10.90.0.2', |
|||
'access_ip': '192.168.0.2'}), |
|||
('node3', {'ansible_host': '192.168.0.3', |
|||
'ip': '10.90.0.3', |
|||
'access_ip': '192.168.0.3'})]) |
|||
self.inv.yaml_config['all']['hosts'] = existing |
|||
result = self.inv.build_hostnames(changed_hosts, True) |
|||
self.assertEqual(expected, result) |
|||
|
|||
def test_build_hostnames_add_two_different_ips_into_three_existing(self): |
|||
changed_hosts = ['10.90.0.5,192.168.0.5', '10.90.0.6,192.168.0.6'] |
|||
expected = OrderedDict([ |
|||
('node2', {'ansible_host': '192.168.0.2', |
|||
'ip': '10.90.0.2', |
|||
'access_ip': '192.168.0.2'}), |
|||
('node3', {'ansible_host': '192.168.0.3', |
|||
'ip': '10.90.0.3', |
|||
'access_ip': '192.168.0.3'}), |
|||
('node4', {'ansible_host': '192.168.0.4', |
|||
'ip': '10.90.0.4', |
|||
'access_ip': '192.168.0.4'}), |
|||
('node5', {'ansible_host': '192.168.0.5', |
|||
'ip': '10.90.0.5', |
|||
'access_ip': '192.168.0.5'}), |
|||
('node6', {'ansible_host': '192.168.0.6', |
|||
'ip': '10.90.0.6', |
|||
'access_ip': '192.168.0.6'})]) |
|||
|
|||
existing = OrderedDict([ |
|||
('node2', {'ansible_host': '192.168.0.2', |
|||
'ip': '10.90.0.2', |
|||
'access_ip': '192.168.0.2'}), |
|||
('node3', {'ansible_host': '192.168.0.3', |
|||
'ip': '10.90.0.3', |
|||
'access_ip': '192.168.0.3'}), |
|||
('node4', {'ansible_host': '192.168.0.4', |
|||
'ip': '10.90.0.4', |
|||
'access_ip': '192.168.0.4'})]) |
|||
self.inv.yaml_config['all']['hosts'] = existing |
|||
result = self.inv.build_hostnames(changed_hosts, True) |
|||
self.assertEqual(expected, result) |
|||
|
|||
# Add two IP addresses into a config that has |
|||
# three already defined IP addresses. One of the IP addresses |
|||
# is a duplicate. |
|||
def test_build_hostnames_add_two_duplicate_one_overlap(self): |
|||
changed_hosts = ['10.90.0.4,192.168.0.4', '10.90.0.5,192.168.0.5'] |
|||
expected = OrderedDict([ |
|||
('node2', {'ansible_host': '192.168.0.2', |
|||
'ip': '10.90.0.2', |
|||
'access_ip': '192.168.0.2'}), |
|||
('node3', {'ansible_host': '192.168.0.3', |
|||
'ip': '10.90.0.3', |
|||
'access_ip': '192.168.0.3'}), |
|||
('node4', {'ansible_host': '192.168.0.4', |
|||
'ip': '10.90.0.4', |
|||
'access_ip': '192.168.0.4'}), |
|||
('node5', {'ansible_host': '192.168.0.5', |
|||
'ip': '10.90.0.5', |
|||
'access_ip': '192.168.0.5'})]) |
|||
|
|||
existing = OrderedDict([ |
|||
('node2', {'ansible_host': '192.168.0.2', |
|||
'ip': '10.90.0.2', |
|||
'access_ip': '192.168.0.2'}), |
|||
('node3', {'ansible_host': '192.168.0.3', |
|||
'ip': '10.90.0.3', |
|||
'access_ip': '192.168.0.3'}), |
|||
('node4', {'ansible_host': '192.168.0.4', |
|||
'ip': '10.90.0.4', |
|||
'access_ip': '192.168.0.4'})]) |
|||
self.inv.yaml_config['all']['hosts'] = existing |
|||
result = self.inv.build_hostnames(changed_hosts, True) |
|||
self.assertEqual(expected, result) |
|||
|
|||
# Add two duplicate IP addresses into a config that has |
|||
# three already defined IP addresses |
|||
def test_build_hostnames_add_two_duplicate_two_overlap(self): |
|||
changed_hosts = ['10.90.0.3,192.168.0.3', '10.90.0.4,192.168.0.4'] |
|||
expected = OrderedDict([ |
|||
('node2', {'ansible_host': '192.168.0.2', |
|||
'ip': '10.90.0.2', |
|||
'access_ip': '192.168.0.2'}), |
|||
('node3', {'ansible_host': '192.168.0.3', |
|||
'ip': '10.90.0.3', |
|||
'access_ip': '192.168.0.3'}), |
|||
('node4', {'ansible_host': '192.168.0.4', |
|||
'ip': '10.90.0.4', |
|||
'access_ip': '192.168.0.4'})]) |
|||
|
|||
existing = OrderedDict([ |
|||
('node2', {'ansible_host': '192.168.0.2', |
|||
'ip': '10.90.0.2', |
|||
'access_ip': '192.168.0.2'}), |
|||
('node3', {'ansible_host': '192.168.0.3', |
|||
'ip': '10.90.0.3', |
|||
'access_ip': '192.168.0.3'}), |
|||
('node4', {'ansible_host': '192.168.0.4', |
|||
'ip': '10.90.0.4', |
|||
'access_ip': '192.168.0.4'})]) |
|||
self.inv.yaml_config['all']['hosts'] = existing |
|||
result = self.inv.build_hostnames(changed_hosts, True) |
|||
self.assertEqual(expected, result) |
@ -1,34 +0,0 @@ |
[tox]
minversion = 1.6
skipsdist = True
envlist = pep8

[testenv]
allowlist_externals = py.test
usedevelop = True
deps =
    -r{toxinidir}/requirements.txt
    -r{toxinidir}/test-requirements.txt
setenv = VIRTUAL_ENV={envdir}
passenv =
    http_proxy
    HTTP_PROXY
    https_proxy
    HTTPS_PROXY
    no_proxy
    NO_PROXY
commands = pytest -vv #{posargs:./tests}

[testenv:pep8]
usedevelop = False
allowlist_externals = bash
commands =
    bash -c "find {toxinidir}/* -type f -name '*.py' -print0 | xargs -0 flake8"

[testenv:venv]
commands = {posargs}

[flake8]
show-source = true
builtins = _
exclude=.venv,.git,.tox,dist,doc,*lib/python*,*egg
@ -0,0 +1,71 @@
# Inventory

The inventory is composed of 3 groups:

* **kube_node**: list of Kubernetes nodes where the pods will run.
* **kube_control_plane**: list of servers where the Kubernetes control plane components (apiserver, scheduler, controller-manager) will run.
* **etcd**: list of servers composing the etcd cluster. You should have at least 3 servers for failover purposes.

When _kube_node_ shares hosts with _etcd_, those etcd servers are also schedulable for Kubernetes workloads.
If you want a standalone etcd cluster, make sure those groups do not intersect.
If you want a server to act as both control plane and node, it must be defined
in both _kube_control_plane_ and _kube_node_. If you want a standalone,
unschedulable control plane, define the server only in _kube_control_plane_ and
not in _kube_node_.

There are also two special groups:

* **calico_rr**: explained for [advanced Calico networking cases](/docs/CNI/calico.md)
* **bastion**: configure a bastion host if your nodes are not directly reachable

Lastly, the **k8s_cluster** group is dynamically defined as the union of **kube_node**, **kube_control_plane** and **calico_rr**.
It is used internally and for defining cluster-wide variables (`<inventory>/group_vars/k8s_cluster/*.yml`).
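For example, a cluster-wide setting can be changed by dropping a YAML file into that directory. A minimal sketch (the file name and both variables mirror Kubespray's sample `group_vars`; treat them as illustrative and check the sample shipped with your Kubespray version):

```yaml
# <inventory>/group_vars/k8s_cluster/k8s-cluster.yml
# These values apply to every host in kube_node, kube_control_plane and calico_rr.
kube_network_plugin: calico    # CNI plugin deployed by Kubespray
cluster_name: cluster.local    # DNS domain used for cluster-internal names
```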

Below is a complete inventory example:

```ini
## Configure 'ip' variable to bind kubernetes services on a
## different ip than the default iface
node1 ansible_host=95.54.0.12 ip=10.3.0.1
node2 ansible_host=95.54.0.13 ip=10.3.0.2
node3 ansible_host=95.54.0.14 ip=10.3.0.3
node4 ansible_host=95.54.0.15 ip=10.3.0.4
node5 ansible_host=95.54.0.16 ip=10.3.0.5
node6 ansible_host=95.54.0.17 ip=10.3.0.6

[kube_control_plane]
node1
node2

[etcd]
node1
node2
node3

[kube_node]
node2
node3
node4
node5
node6
```

## Inventory customization

See [Customize Ansible vars](/docs/ansible/ansible.md#customize-ansible-vars)
and the [Ansible documentation on group_vars](https://docs.ansible.com/ansible/latest/inventory_guide/intro_inventory.html#assigning-a-variable-to-many-machines-group-variables).
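Variables can also be scoped to a single group rather than the whole cluster. A minimal sketch, assuming you want etcd managed directly on the hosts (`etcd_deployment_type` is a standard Kubespray variable, but verify the accepted values for your release):

```yaml
# <inventory>/group_vars/etcd.yml
# Applies only to hosts in the etcd group.
etcd_deployment_type: host
```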

## Bastion host

If you prefer not to make your nodes publicly accessible (nodes with private IPs only),
you can use a so-called _bastion_ host to connect to your nodes. To specify and use a bastion,
simply add a line to your inventory, replacing x.x.x.x with the public IP of the
bastion host.

```ini
[bastion]
bastion ansible_host=x.x.x.x
```

For more information about Ansible and bastion hosts, read
[Running Ansible Through an SSH Bastion Host](https://blog.scottlowe.org/2015/12/24/running-ansible-through-ssh-bastion-host/)
@ -1,31 +1,20 @@
# ## Configure 'ip' variable to bind kubernetes services on a
# ## different ip than the default iface
# ## We should set etcd_member_name for etcd cluster. A node that is not an etcd member does not need to set the value, or can set the empty string value.
[all]
# This inventory describes an HA topology with stacked etcd (== same nodes as control plane)
# and 3 worker nodes
# See https://docs.ansible.com/ansible/latest/inventory_guide/intro_inventory.html
# for tips on building your inventory

# Configure 'ip' variable to bind kubernetes services on a different ip than the default iface
# We should set etcd_member_name for etcd cluster. Nodes that are not etcd members do not need to set the value,
# or can set the empty string value.
[kube_control_plane]
# node1 ansible_host=95.54.0.12 # ip=10.3.0.1 etcd_member_name=etcd1
# node2 ansible_host=95.54.0.13 # ip=10.3.0.2 etcd_member_name=etcd2
# node3 ansible_host=95.54.0.14 # ip=10.3.0.3 etcd_member_name=etcd3
# node4 ansible_host=95.54.0.15 # ip=10.3.0.4 etcd_member_name=etcd4
# node5 ansible_host=95.54.0.16 # ip=10.3.0.5 etcd_member_name=etcd5
# node6 ansible_host=95.54.0.17 # ip=10.3.0.6 etcd_member_name=etcd6

# ## configure a bastion host if your nodes are not directly reachable
# [bastion]
# bastion ansible_host=x.x.x.x ansible_user=some_user

[kube_control_plane]
# node1
# node2
# node3

[etcd]
# node1
# node2
# node3
[etcd:children]
kube_control_plane

[kube_node]
# node2
# node3
# node4
# node5
# node6
# node4 ansible_host=95.54.0.15 # ip=10.3.0.4
# node5 ansible_host=95.54.0.16 # ip=10.3.0.5
# node6 ansible_host=95.54.0.17 # ip=10.3.0.6