* Feat: add external OCI cloud controller manager template & variable
Signed-off-by: tico88612 <17496418+tico88612@users.noreply.github.com>
* Feat: add external OCI cloud controller manager workflow
Signed-off-by: tico88612 <17496418+tico88612@users.noreply.github.com>
* Feat: migrate external OCI CCM config check from OCI cloud provider
Signed-off-by: tico88612 <17496418+tico88612@users.noreply.github.com>
* cloud_controller: oracle: simpler asserts
Make the assert checks for the Oracle Cloud Infrastructure external cloud
controller more compact, and hence more readable.
This allows putting them back in the main tasks, reducing back and forth
when reading the code (a compact form is sketched below).
---------
Signed-off-by: tico88612 <17496418+tico88612@users.noreply.github.com>
Co-authored-by: Max Gautier <mg@max.gautier.name>
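As referenced above, a compact assert might look roughly like this (a sketch
only; the variable names are illustrative, not the actual Kubespray ones):
    # One inline assert task in the main tasks, one condition per line.
    - name: External OCI Cloud Controller | Check required variables
      ansible.builtin.assert:
        that:
          - external_oci_cloud_controller_image_tag is defined  # illustrative name
          - external_oci_compartment_id is defined              # illustrative name
          - external_oci_vcn_id is defined                      # illustrative name
        fail_msg: External OCI cloud controller variables are missing or invalid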
This reverts commit 275c54e810.
Static tokens are no longer created automatically for service accounts in
Kubernetes. Instead, tokens are dynamically injected into pods using a
projected volume.
Thus there is no longer a need to check for this (it didn't work anyway,
since the describe output actually contains <none> when there are no
tokens:
{
    "attempts": 1,
    "changed": false,
    "cmd": "set -o pipefail && /usr/local/bin/kubectl describe serviceaccounts default --namespace test | grep Tokens | awk '{print $2}'",
    "delta": "0:00:00.075633",
    "end": "2024-10-19 14:25:04.858871",
    "msg": "",
    "rc": 0,
    "start": "2024-10-19 14:25:04.783238",
    "stderr": "",
    "stderr_lines": [],
    "stdout": "<none>",
    "stdout_lines": [
        "<none>"
    ]
}
)
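For reference, tokens are now injected through a projected volume; a minimal
sketch of such a pod spec (the pod and volume names here are illustrative):
    apiVersion: v1
    kind: Pod
    metadata:
      name: token-demo
      namespace: test
    spec:
      serviceAccountName: default
      containers:
        - name: app
          image: busybox
          command: ["sleep", "3600"]
          volumeMounts:
            - name: sa-token
              mountPath: /var/run/secrets/tokens
      volumes:
        - name: sa-token
          projected:
            sources:
              # The kubelet requests and rotates this token; nothing is stored in
              # a Secret, so `kubectl describe serviceaccounts` lists no Tokens.
              - serviceAccountToken:
                  path: token
                  expirationSeconds: 3600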
* Feat: bump CoreDNS version to v1.11.3
Signed-off-by: ChengHao Yang <17496418+tico88612@users.noreply.github.com>
* Docs: update README.md CoreDNS version to v1.11.3
Signed-off-by: ChengHao Yang <17496418+tico88612@users.noreply.github.com>
---------
Signed-off-by: ChengHao Yang <17496418+tico88612@users.noreply.github.com>
This leverages the Kubernetes GC to delete kubevirt VMs by using
ownerReferences, with the CI pod running the playbook as the owner.
Concretely, this means that the control plane in our CI cluster will
delete the kubevirt VMs associated with a particular CI job as soon as
that job's pod is deleted, which usually happens when the job terminates
(barring errors, which will be addressed in the cluster directly).
Upgrade to kubevirt.io/v1 for the VirtualMachine manifests, since the
alpha version is deprecated.
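The ownership wiring looks roughly like this (a sketch; how the owning pod's
name and UID are obtained is illustrative, not the exact CI implementation):
    apiVersion: kubevirt.io/v1
    kind: VirtualMachine
    metadata:
      name: ci-test-vm
      ownerReferences:
        # The CI pod running the playbook owns the VM, so the Kubernetes GC
        # deletes the VM as soon as that pod is deleted.
        - apiVersion: v1
          kind: Pod
          name: "{{ ci_pod_name }}"  # illustrative: the CI job's own pod name
          uid: "{{ ci_pod_uid }}"    # illustrative: looked up from the API server
    spec:
      # ... rest of the VirtualMachine spec unchanged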
Before these changes, `ansible_facts.services["containerd.service"]` was not defined, so the check that triggers the container stop and delete behavior failed.
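A minimal sketch of the resulting guard (task names and the cleanup command
are illustrative, not the exact reset tasks):
    - name: Gather service facts so ansible_facts.services is populated
      ansible.builtin.service_facts:
    - name: Stop and remove containers only when containerd is actually present
      ansible.builtin.command: crictl rm -f -a
      when:
        - "'containerd.service' in ansible_facts.services"
        - ansible_facts.services['containerd.service'].state == 'running'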
Signed-off-by: ChengHao Yang <17496418+tico88612@users.noreply.github.com>
Simplify registry mirror rendering in config.toml.
The map filter can extract the host list from mirrors, so we can
just deduplicate them with unique and render them without needing to
construct intermediate vars.
For the registry mirror tls section, we can first extract the mirrors
from the dict, then filter on only the ones having skip_verify defined,
and then filter on the ones where it is true (as the dict might not
have skip_verify defined, and that would cause undefined-variable errors).
This will speed up and simplify the templating.
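Roughly, the template ends up along these lines (a sketch, not the verbatim
config.toml.j2; the variable name and layout are assumed from the description
above):
    {# Mirror hosts: extract with map, deduplicate with unique, no helper vars. #}
    {% for host in containerd_registries_mirrors | map(attribute='mirrors')
          | flatten | map(attribute='host') | unique %}
    # mirror entry rendered here for {{ host }}
    {% endfor %}
    {# TLS: keep only mirrors that define skip_verify, then only those where it
       is true, so a mirror without skip_verify never raises an undefined error. #}
    {% for mirror in containerd_registries_mirrors | map(attribute='mirrors')
          | flatten | selectattr('skip_verify', 'defined') | selectattr('skip_verify') %}
    # insecure tls entry rendered here for {{ mirror.host }}
    {% endfor %}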
Signed-off-by: Seena Fallah <seenafallah@gmail.com>
Dropping the ansible dependencies for ansible-lint will allow us to
catch missing collection dependencies in galaxy.yml. For collections
needed only for contrib/ or tests/ (i.e., not part of core kubespray
dependencies), we can just configure ansible-lint to mock them.
This means it won't check the mocked modules' parameters, but for those
areas of the code base it's an acceptable trade-off.
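For example, such collections can be declared as mocks in .ansible-lint along
these lines (the names below are placeholders, not the actual list):
    # .ansible-lint
    mock_modules:
      - community.general.some_module  # placeholder: used only under contrib/ or tests/
    mock_roles:
      - some_namespace.some_role       # placeholder: used only under contrib/ or tests/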
The fallback_ips tasks essentially serialize the gathering of one
fact across all the hosts, which can have dramatic performance implications
on large clusters (several minutes).
This is essentially a reversal of 35f248dff0.
Being able to run without refreshing the cached facts is not worth it.
We keep fallback_ip for now, simply changing the access to a normal
hostvars variable instead of a custom dictionary.
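Concretely, lookups change roughly like this (a sketch; the `address` key and
the `host` loop variable are illustrative):
    # before: look up another host's value in the custom dictionary
    address: "{{ fallback_ips[host] }}"
    # after: read the same value as an ordinary per-host variable via hostvars
    address: "{{ hostvars[host]['fallback_ip'] }}"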
Using the hosts directive at the play level prevents those tasks from
being run when using --limit and the group in question is not part of
the limit (e.g., running scale.yml on new worker nodes only).
Instead, run on all hosts and, for each group, partition between that
group and '_' (a generic group name which is not used; using an empty
string as the group is not supported by ansible.builtin.group_by).
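The pattern looks roughly like this (a sketch; the group name is illustrative):
    - name: Sort hosts into their group or into the unused '_' bucket
      hosts: all
      gather_facts: false
      tasks:
        - name: Partition between kube_control_plane and '_'
          ansible.builtin.group_by:
            key: "{{ 'kube_control_plane' if 'kube_control_plane' in group_names else '_' }}"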
Reported-by: asteppat <asteppat@cisco.com>
* Add Fedora 39/40 to Vagrantfile
Signed-off-by: ChengHao Yang <17496418+tico88612@users.noreply.github.com>
* Add CI tests for Fedora 39/40
Signed-off-by: ChengHao Yang <17496418+tico88612@users.noreply.github.com>
* Update CI tests documentation
Signed-off-by: ChengHao Yang <17496418+tico88612@users.noreply.github.com>
* Update support OS version in README.md
Signed-off-by: ChengHao Yang <17496418+tico88612@users.noreply.github.com>
---------
Signed-off-by: ChengHao Yang <17496418+tico88612@users.noreply.github.com>