* Add vars for configuring cilium IP load balancer pools and bgp peer policies
* Cilium 1.16+ Support - Add vars for configuring cilium bgpv2 api & handle cilium_kube_proxy_replacement unsupported values
When using

  dns_upstream_forward_extra_opts:
    prefer_udp: ""  # the option has no value, so use an empty string to
                    # just put the key
This is rendered in the dns configmap as ($ for end-of-line)
...
prefer_udp $
...
Note the trailing space.
This triggers https://github.com/kubernetes/kubernetes/issues/36222,
which makes the configmap hard to read when editing it manually or
simply dumping it to a yaml file for inspection.
Trim the concatenation of option + value to get rid of any trailing
space.
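A minimal sketch of the fix, assuming the CoreDNS ConfigMap template loops over the extra options (the loop variable names here are illustrative, not necessarily the ones used in the actual template):

  {% for optname, optvalue in dns_upstream_forward_extra_opts.items() %}
  {# trim drops the trailing space when optvalue is empty #}
  {{ (optname ~ ' ' ~ optvalue) | trim }}
  {% endfor %}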
* Make Helm's 'atomic' parameter configurable from role variables
* Configure Helm with 'atomic' and 'wait' set to false for generic CNI to prevent kubelet-csr-approver installation failures
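A hedged sketch of what the resulting task could look like with the kubernetes.core.helm module (the variable names and chart reference are illustrative, not the exact kubespray ones):

  - name: Install kubelet-csr-approver
    kubernetes.core.helm:
      name: kubelet-csr-approver
      chart_ref: kubelet-csr-approver/kubelet-csr-approver   # illustrative chart ref
      release_namespace: kube-system
      # default both to false for the generic CNI case so a slow rollout
      # does not make the whole installation fail
      atomic: "{{ kubelet_csr_approver_helm_atomic | default(false) }}"
      wait: "{{ kubelet_csr_approver_helm_wait | default(false) }}"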
We use shell scripts and conf files in some roles (notably certificate
provisioning), so we need to include them in the collection build for the
collection to work with configurations that depend on those roles.
* kubeadm: do not ignore preflight errors blindly
The "ignoring all errors" seems to date back to the inception of the
kubeadm support (it was --skip-preflight-check before).
This can mask real errors and prevent users from seeing them.
Do not ignore any errors by default and make the set of ignored errors
configurable.
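For illustration, the configurable set could look roughly like this (the variable name is an assumption; the kubeadm flag itself is real):

  # group_vars (illustrative variable name):
  kubeadm_ignore_preflight_errors: []    # e.g. ["Mem"] to skip the memory check

  # which would end up on the kubeadm command line roughly as:
  #   --ignore-preflight-errors={{ kubeadm_ignore_preflight_errors | join(',') }}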
* download/kubeadm: remove redundant task
The mode is already set by the previous `copy` task.
* Validate kubeadm configs
This should help us fail early when we have invalid kubeadm configs (from
a kubespray bug or a misconfiguration).
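A sketch of how this can be done with kubeadm's own validator (the task name and config path are illustrative):

  - name: Validate kubeadm config
    ansible.builtin.command:
      cmd: kubeadm config validate --config /etc/kubernetes/kubeadm-config.yaml
    changed_when: false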
* kubeadm-upgrade: remove unnecessary bool cast
* Convert kubeadm join discovery timeout to v1beta4 config
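Roughly, the same setting moves from the v1beta3 discovery block to the v1beta4 timeouts block (field placement as documented upstream; the value is an example):

  # v1beta3
  apiVersion: kubeadm.k8s.io/v1beta3
  kind: JoinConfiguration
  discovery:
    timeout: 5m0s
  ---
  # v1beta4
  apiVersion: kubeadm.k8s.io/v1beta4
  kind: JoinConfiguration
  timeouts:
    discovery: 5m0s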
* CI: Ignore kubeadm:Mem errors on some setups.
The new CI does not define k8s_cluster group, so it relies on
kubernetes-sigs/kubespray#11559.
This does not work for upgrade testing (which uses the previous release).
We can revert this commit after 2.27.0.
We should not roll back our test setup during upgrade tests.
The only reason to do that would be incompatible changes in the test
inventory, and we already check out master for those (${CI_JOB_NAME}.yml).
Also do some cleanup by removing unnecessary intermediate variables.
VirtualMachineInstance resources sometimes temporarily lose their
IP (at least as far as the kubevirt controllers can see).
See https://github.com/kubevirt/kubevirt/issues/12698 for the upstream
bug.
This does not seem to affect actual connections (if it did, our current
CI would not work).
However, our CI executes multiple playbooks, in particular:
1. The provisioning playbook (which checks that the IPs have been
provisioned by querying the K8S API)
2. Kubespray itself
If any of the VirtualMachineInstances loses its IP after 1 has
checked for it and before 2 starts, the dynamic inventory (which is
invoked when the playbook is launched by ansible-playbook) will not have
an IP for that host, and will fall back to the host name for ssh, which
of course will not work.
Instead, when we have a valid state during provisioning (all IPs
present), use it to construct a static inventory (see the sketch below)
which will be used for the rest of the CI run.
This allows a single source of truth for the virtual machines in a
kubevirt ci-run.
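A rough sketch of the idea, writing the observed IPs to an ini inventory at the end of provisioning (the destination path, group name and the vmi_ip fact are all illustrative):

  - name: Write a static inventory from the IPs observed at provisioning time
    ansible.builtin.copy:
      dest: "{{ playbook_dir }}/inventory.ini"
      content: |
        [k8s_cluster]
        {% for host in groups['all'] %}
        {{ host }} ansible_host={{ hostvars[host].vmi_ip }}
        {% endfor %}
    delegate_to: localhost
    run_once: true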
`etcd_member_name` should be correctly handled in kubespray-defaults for
testing the recover cases.
Not constraining the inventory to .ini allows us to use dynamic
inventory, which is needed to simplify the kubevirt jobs' inventory.
This also reduces the scope of the ANSIBLE_INVENTORY variable.
VMIs in KubeVirt are the abstraction below VirtualMachine.
- We don't really need the extra abstraction of VirtualMachine objects
- Convert the wait for VM IP addresses to use kubernetes.core.k8s_info
  instead of a shell pipeline (sketched below)
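Approximately, the wait can be expressed with kubernetes.core.k8s_info and a retry loop instead of kubectl piped through shell (the namespace variable and retry counts are illustrative; the exact status fields depend on the KubeVirt version):

  - name: Wait for all VirtualMachineInstances to report an IP
    kubernetes.core.k8s_info:
      api_version: kubevirt.io/v1
      kind: VirtualMachineInstance
      namespace: "{{ test_namespace }}"   # illustrative variable
    register: vmis
    until: >-
      vmis.resources | length > 0 and
      (vmis.resources
       | selectattr('status.interfaces', 'defined')
       | selectattr('status.interfaces')
       | list | length) == (vmis.resources | length)
    retries: 30
    delay: 10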
We're still getting bug reports that circumvent the bug report template
and omit information.
Blank issues (i.e., not using the form templates) can still be created
using the gh CLI, the API, etc. This only disables the possibility in the
web UI.
- Lookup was not returning a list, making the difference filter spit out
  garbage -> query always returns a list
- hostvars is a dictionary, so convert it to a list before selectattr and
  map back to only get the keys (see the sketch below)
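Illustrative before/after of the two patterns (the group, variable and attribute names are made up for the example):

  # query() always returns a list, unlike lookup(), which can collapse
  # multiple results into a comma-separated string
  etcd_hosts: "{{ query('inventory_hostnames', 'etcd') }}"

  # hostvars is a mapping, not a list: convert, filter, then map back to the keys
  hosts_with_feature: >-
    {{ hostvars | dict2items
       | selectattr('value.feature_enabled', 'defined')
       | selectattr('value.feature_enabled')
       | map(attribute='key') | list }}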
Currently there is not much difference between the files; if there are more changes in the future,
please use different files to distinguish them (you can use the kubeadm_config_api_version variable).
Signed-off-by: ChengHao Yang <17496418+tico88612@users.noreply.github.com>
Currently there is not much difference between the files; if there are more changes in the future,
please use different files to distinguish them (you can use the kubeadm_config_api_version variable).
Signed-off-by: ChengHao Yang <17496418+tico88612@users.noreply.github.com>
Remove kubeadm api version condition.
Currently there is not much difference between the files; if there are more changes in the future,
please use different files to distinguish them (you can use the kubeadm_config_api_version variable).
Signed-off-by: ChengHao Yang <17496418+tico88612@users.noreply.github.com>
I added the kubeadm_config_api_version variable in the previous commit,
and removed the kubeadm API version condition.
Signed-off-by: ChengHao Yang <17496418+tico88612@users.noreply.github.com>
v1beta4 has changed a lot in this file (e.g. ExtraArgs), so it was implemented in separate files.
Signed-off-by: ChengHao Yang <17496418+tico88612@users.noreply.github.com>
If kube_version is v1.31 or higher, it will be v1beta4, otherwise it
will be v1beta3.
Signed-off-by: ChengHao Yang <17496418+tico88612@users.noreply.github.com>
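A sketch of the selection logic (assuming kube_version carries the leading "v", as in kubespray defaults; the actual expression may be written differently):

  kubeadm_config_api_version: >-
    {{ 'v1beta4' if kube_version is version('v1.31.0', '>=') else 'v1beta3' }}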
Since a2019c1c2 (Add a JSON schema describing the packages install
structure, 2024-04-25), we use a custom structure to select which
packages should be installed on a particular host OS.
This has proven too rigid in practice, and the query is pretty
complicated.
Replace this with an array of Jinja conditions per package (see the sketch
below), which should be easier for everyone to understand and more
flexible.
Also remove the associated schema and validation which are no longer
needed.
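A hypothetical illustration of the new shape (package names and conditions are examples, not the literal kubespray defaults): each package maps to a list of Jinja conditions and is installed only when all of them evaluate to true.

  pkgs:
    curl: []                                        # no condition: always installed
    bash-completion: []
    apparmor: ["ansible_os_family == 'Debian'"]
    nss: ["ansible_os_family == 'RedHat'"]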
* etcd: throttle restart for availability
During upgrade, etcd members are restarted all at once.
This can impact the availability of the etcd cluster and subsequently of
the Kubernetes cluster.
Limit the concurrent restarts so that the etcd cluster keeps quorum.
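One way to express this in Ansible is the throttle keyword on the restart task (a sketch; the actual handler layout in kubespray may differ):

  - name: Restart etcd
    ansible.builtin.service:
      name: etcd
      state: restarted
    throttle: 1   # restart one member at a time so the cluster keeps quorum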
* Simplify etcd handlers
For this change, `external_cloud_provider`'s default value is set to an empty
string.
Signed-off-by: ChengHao Yang <17496418+tico88612@users.noreply.github.com>