
CI: cleanup '-scale' tests infra (#11535)

There is actually no test using this since ad6fecefa8,
so there is no reason to keep that infra in our test scripts.
Max Gautier 2 months ago
committed by GitHub
commit 76c42b4d3f
5 changed files with 6 additions and 18 deletions
  1. docs/developers/test_cases.md (7 changes)
  2. tests/cloud_playbooks/roles/packet-ci/templates/inventory.j2 (6 changes)
  3. tests/cloud_playbooks/roles/packet-ci/vars/main.yml (2 changes)
  4. tests/scripts/testcases_run.sh (6 changes)
  5. tests/templates/fake_hosts.yml.j2 (3 changes)

docs/developers/test_cases.md (7 changes)

@@ -1,6 +1,6 @@
# Node Layouts
-There are six node layout types: `default`, `separate`, `ha`, `scale`, `all-in-one`, and `node-etcd-client`.
+There are five node layout types: `default`, `separate`, `ha`, `all-in-one`, and `node-etcd-client`.
`default` is a non-HA two nodes setup with one separate `kube_node`
and the `etcd` group merged with the `kube_control_plane`.
@@ -11,11 +11,6 @@ and the `etcd` group merged with the `kube_control_plane`.
`ha` layout consists of two etcd nodes, two control planes and a single worker node,
with role intersection.
-`scale` layout can be combined with above layouts (`ha-scale`, `separate-scale`). It includes 200 fake hosts
-in the Ansible inventory. This helps test TLS certificate generation at scale
-to prevent regressions and profile certain long-running tasks. These nodes are
-never actually deployed, but certificates are generated for them.
`all-in-one` layout uses a single node with `kube_control_plane`, `etcd` and `kube_node` merged.
`node-etcd-client` layout consists of a 4 nodes cluster, all of them in `kube_node`, first 3 in `etcd` and only one `kube_control_plane`.
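
The paragraph removed above documents why the fake hosts forced an exclusion on every real run. A minimal sketch of the pattern this commit deletes, assuming a standard Kubespray checkout (inventory path and playbook are placeholders):

```sh
# Fake hosts existed only in the inventory, so every playbook invocation had
# to exclude them explicitly; certificates were still generated for them.
ansible-playbook -i inventory.ini --limit "all:!fake_hosts" cluster.yml
```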

tests/cloud_playbooks/roles/packet-ci/templates/inventory.j2 (6 changes)

@@ -3,7 +3,7 @@
instance-{{ loop.index }} ansible_host={{instance.stdout}}
{% endfor %}
-{% if mode is defined and mode in ["separate", "separate-scale"] %}
+{% if mode == "separate" %}
[kube_control_plane]
instance-1
@@ -12,7 +12,7 @@ instance-2
[etcd]
instance-3
-{% elif mode is defined and mode in ["ha", "ha-scale"] %}
+{% elif mode == "ha" %}
[kube_control_plane]
instance-1
instance-2
@@ -103,5 +103,3 @@ kube_control_plane
calico_rr

[calico_rr]
-
-[fake_hosts]
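
For orientation, a hedged reconstruction of what this template renders for `mode=separate`, pieced together from the hunks above; the `[kube_node]` group name and the addresses are assumptions:

```sh
# Approximate rendered inventory for mode=separate (placeholder addresses;
# group membership reconstructed from the hunks above, abbreviated).
cat <<'EOF' > inventory.ini
instance-1 ansible_host=192.0.2.1
instance-2 ansible_host=192.0.2.2
instance-3 ansible_host=192.0.2.3

[kube_control_plane]
instance-1

[kube_node]
instance-2

[etcd]
instance-3
EOF
```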

tests/cloud_playbooks/roles/packet-ci/vars/main.yml (2 changes)

@@ -1,9 +1,7 @@
---
_vm_count_dict:
  separate: 3
-  separate-scale: 3
  ha: 3
-  ha-scale: 3
  ha-recover: 3
  ha-recover-noquorum: 3
  all-in-one: 1
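
`_vm_count_dict` maps a CI test mode to the number of VMs to provision, so with no job running a `-scale` mode those two keys were dead entries. A quick way to see the lookup behavior with an ad-hoc `debug` call (the wiring in the real packet-ci role may differ):

```sh
# Resolves the VM count for mode=ha (prints 3).
ansible localhost -m debug \
  -e '{"_vm_count_dict": {"separate": 3, "ha": 3, "all-in-one": 1}, "mode": "ha"}' \
  -a "msg={{ _vm_count_dict[mode] }}"
```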

tests/scripts/testcases_run.sh (6 changes)

@@ -54,7 +54,7 @@ run_playbook () {
playbook=$1
shift
# We can set --limit here and still pass it as supplemental args because `--limit` is a 'last one wins' option
-ansible-playbook --limit "all:!fake_hosts" \
+ansible-playbook \
$ANSIBLE_LOG_LEVEL \
-e @${CI_TEST_SETTING} \
-e @${CI_TEST_REGISTRY_MIRROR} \
@@ -85,8 +85,8 @@ fi
# Test control plane recovery
if [ "${RECOVER_CONTROL_PLANE_TEST}" != "false" ]; then
-run_playbook reset.yml --limit "${RECOVER_CONTROL_PLANE_TEST_GROUPS}:!fake_hosts" -e reset_confirmation=yes
-run_playbook recover-control-plane.yml -e etcd_retries=10 --limit "etcd:kube_control_plane:!fake_hosts"
+run_playbook reset.yml --limit "${RECOVER_CONTROL_PLANE_TEST_GROUPS}" -e reset_confirmation=yes
+run_playbook recover-control-plane.yml -e etcd_retries=10 --limit "etcd:kube_control_plane"
fi
# Test collection build and install by installing our collection, emptying our repository, adding
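
Dropping the default `--limit` is safe because of the behavior the comment in the first hunk relies on: when `ansible-playbook` receives several `--limit` flags, the last one wins rather than intersecting. A quick check (playbook name is a placeholder):

```sh
# Only the last --limit takes effect: the listed hosts match "etcd",
# not "all:!fake_hosts"; --list-hosts previews the selection without running.
ansible-playbook --limit "all:!fake_hosts" --limit "etcd" cluster.yml --list-hosts
```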

tests/templates/fake_hosts.yml.j2 (3 changes)

@@ -1,3 +0,0 @@
-ansible_default_ipv4:
-  address: 255.255.255.255
-ansible_hostname: "{{ '{{' }}inventory_hostname }}"
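
The deleted template used a double-render trick: `{{ '{{' }}` makes the CI-side render emit a literal `{{`, so the generated vars file still carries a live `inventory_hostname` expression for Ansible to resolve per fake host. A small sketch of that first pass, assuming the `jinja2` Python package is installed:

```sh
# First-pass render of the deleted line: the escaped braces survive as a
# literal "{{", leaving an expression for Ansible's own templating pass.
python3 - <<'PY'
from jinja2 import Template
line = 'ansible_hostname: "{{ \'{{\' }}inventory_hostname }}"'
print(Template(line).render())
# -> ansible_hostname: "{{inventory_hostname }}"
PY
```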