* Automatically derive default versions from checksums
Currently, when updating checksums, we manually update the default
versions.
However, AFAICT, for all components where we have checksums, we're using
the newest version out of those checksums.
Codify this in the `_version` defaults variable definitions to make the
process automatic and reduce manual steps (as well as the diff size
during reviews).
We assume the versions are sorted, with newest first. This should be
guaranteed by the pre-commit hooks.
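A rough illustration of the idea in Python (the actual change expresses
this as Jinja in the `_version` defaults, and the layout of the
checksums data shown here is an assumption): the newest version is
simply the first recorded checksum key.

```python
# Illustration only: assumed layout is component -> arch -> version -> hash,
# with versions ordered newest first (enforced by pre-commit).
checksums = {
    "kubelet": {
        "amd64": {
            "v1.29.2": "sha256-aaa...",  # hypothetical entries
            "v1.29.1": "sha256-bbb...",
        },
    },
}

def default_version(component: str, arch: str = "amd64") -> str:
    """Newest version == first key, relying on the enforced ordering."""
    return next(iter(checksums[component][arch]))

print(default_version("kubelet"))  # -> v1.29.2
```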
* Validate checksums are ordered by versions, newest first
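A minimal sketch of such a check, assuming the same
component -> arch -> version -> hash layout as above (the real hook may
differ):

```python
# Pre-commit style check: fail if any component's versions are not
# sorted newest first.
import sys

import yaml
from packaging.version import Version

def unsorted_components(checksums: dict) -> list[str]:
    errors = []
    for component, archs in checksums.items():
        for arch, versions in archs.items():
            keys = [str(v) for v in versions]
            expected = sorted(keys, key=lambda v: Version(v.lstrip("v")),
                              reverse=True)
            if keys != expected:
                errors.append(f"{component}/{arch}: not sorted newest first")
    return errors

if __name__ == "__main__":
    data = yaml.safe_load(open(sys.argv[1]))  # e.g. the checksums defaults
    problems = unsorted_components(data)
    print("\n".join(problems))
    sys.exit(1 if problems else 0)
```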
* Generalize render-readme-versions hook for other static files
The pre-commit hook introduced in a142f40e2 (Update versions in
README.md with pre-commit, 2025-01-21) allows updating our README with
new versions.
It turns out other "static" files (i.e., files which don't interpret
Ansible variables) also use the default versions (in this case, our
Dockerfiles, but there might be others).
The Dockerfile breaks if the variable it uses (`kube_version`) is a
Jinja template.
To help with automatic version upgrades, generalize the hook to deal
with other static files, and make a template out of the Dockerfile.
* Dockerfile: template kube_version with pre-commit instead of runtime
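A minimal sketch of the generalized hook, with hypothetical template
and output names; the committed Dockerfile then carries a plain version
string instead of a Jinja expression:

```python
# Render "static" files from templates using the default versions, so
# the committed files don't need to interpret Ansible variables.
from pathlib import Path

from jinja2 import Template

# In the real hook the versions come from the Ansible defaults; they are
# hardcoded here only to keep the sketch self-contained.
versions = {"kube_version": "v1.29.2"}

# Hypothetical mapping of template -> rendered static file.
STATIC_TEMPLATES = {
    "pipeline.Dockerfile.j2": "pipeline.Dockerfile",
}

for src, dst in STATIC_TEMPLATES.items():
    Path(dst).write_text(Template(Path(src).read_text()).render(**versions))
```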
* Validate all versions/checksums are strings in pre-commit
All the Ansible/Python tooling for versions works on version strings.
YAML unhelpfully interprets some of them as numbers, so enforce that
they are strings.
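A sketch of the check, again assuming a nested-dict layout: anything
YAML loaded as a number instead of a string gets reported.

```python
# Walk the versions/checksums data and reject keys or values that YAML
# parsed as numbers instead of strings.
import sys

import yaml

def non_strings(node, path=""):
    if isinstance(node, dict):
        for key, value in node.items():
            if not isinstance(key, str):
                yield f"{path}/{key} (key)"
            yield from non_strings(value, f"{path}/{key}")
    elif isinstance(node, list):
        for i, item in enumerate(node):
            yield from non_strings(item, f"{path}[{i}]")
    elif not isinstance(node, str):
        yield f"{path} = {node!r}"

if __name__ == "__main__":
    data = yaml.safe_load(open(sys.argv[1]))
    bad = list(non_strings(data))
    print("\n".join(bad))
    sys.exit(1 if bad else 0)
```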
* Stringify checksums versions
* Remove krew installation support
Krew exists fundamentally to install kubectl plugins, which are
eminently a client-side concern.
It's also not difficult to install on a client machine.
* Remove krew cleanup
Currently, versions in README.md need to be manually updated, and we
check it's done with a bash script.
Add a small utility playbook to update versions in README.md from
their actual default values, automatically.
This is done in pre-commit and replaces the scripted check; instead,
the hook autofixes README.md and fails in CI if needed.
We move markdownlint after the local hooks to give it the opportunity
to catch problems with the rendering.
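This is not the actual implementation (that is a small Ansible playbook
driven by pre-commit); the generic autofix-and-fail pattern the hook
relies on looks roughly like this:

```python
# Rewrite README.md from the current defaults; pre-commit picks up the
# modification locally, and a non-zero exit makes CI fail.
import sys
from pathlib import Path

def render_readme(text: str) -> str:
    # Placeholder: the real playbook substitutes the default versions here.
    return text.replace("v1.29.1", "v1.29.2")

readme = Path("README.md")
before = readme.read_text()
after = render_readme(before)

if after != before:
    readme.write_text(after)  # autofix in place...
    sys.exit(1)               # ...and report failure so CI catches it
```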
This is handy when some component's release is buggy (missing files at
the download links), so that it doesn't block everything else.
Move the filtering up the stack so we don't have to do it multiple
times.
Gvisor releases, besides only being tags, have some particularities:
- they are of the form yyyymmdd.p -> this gets interpreted as a YAML
float, so we need to explicitly convert it to a string to make it work.
- there is no semver-like scheme attached to the version numbers, but
the API (= OCI container runtime interface) is expected to be stable
(see the linked discussion)
- some older tags don't have hashes for some archs
Link: https://groups.google.com/g/gvisor-users/c/SxMeHt0Yb6Y/m/Xtv7seULCAAJ
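The YAML behaviour can be seen directly (the tag value is just an
example):

```python
import yaml

# A bare yyyymmdd.p tag is valid YAML for a float, not a string:
print(yaml.safe_load("version: 20240212.0"))    # {'version': 20240212.0}
print(yaml.safe_load("version: '20240212.0'"))  # {'version': '20240212.0'}

# So versions gathered from gvisor tags must be forced to strings (and
# quoted when dumped) before being used as version numbers.
version = str(20240212.0)  # what a naive load would otherwise hand us
```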
Gvisor is the only one of our deployed components which uses tags
instead of proper releases. So the tag scraping support will, for now,
cater to gvisor's particularities, notably the tag name format and the
fact that some older releases don't use the same URL scheme.
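A sketch of the tag filtering; the regex is an assumption about how the
yyyymmdd.p format gets matched:

```python
import re

# Only keep tags following gvisor's date-based naming when scraping.
GVISOR_TAG = re.compile(r"^\d{8}\.\d+$")

tags = ["20240212.0", "20231218.0", "some-experimental-tag"]
print([t for t in tags if GVISOR_TAG.match(t)])  # ['20240212.0', '20231218.0']
```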
Containerd uses the same repository for releases of its gRPC API
(which we are not interested in).
Conveniently, those releases have tags which are not valid version
numbers (being prefixed with 'api/').
This could also be potentially useful for similar cases.
The risk of missing releases because of this is low, since it would
require that a project issue a new release with an invalid format, then
switch back to the previous format (or that we miss the fact it's not
updating for a long period of time).
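A sketch of how such tags end up skipped: anything that does not parse
as a version is ignored (tag names below are examples):

```python
from packaging.version import InvalidVersion, Version

def is_version(tag: str) -> bool:
    """Tags like 'api/1.8.0' fail to parse and are simply dropped."""
    try:
        Version(tag.removeprefix("v"))
        return True
    except InvalidVersion:
        return False

tags = ["v1.7.14", "api/1.8.0", "v1.6.31"]
print([t for t in tags if is_version(t)])  # ['v1.7.14', 'v1.6.31']
```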
The GitHub GraphQL API needs node IDs for querying a variable array of
repositories.
Use a dict for components instead of an array of URLs and record the
corresponding node ID for each component (there are duplicates because
some binaries are provided by the same project/repository).
Allow the script to be called with a list of components, to only
download new version checksums for those.
By default, we get new version checksums for all components supported
by the script.
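A sketch of the component registry and the selection logic; the node
IDs are placeholders and the query assumes the GraphQL nodes(ids:)
lookup:

```python
import argparse

# Duplicate node IDs are expected: several binaries can come from the
# same project/repository.
COMPONENTS = {
    "runc": {"repo": "opencontainers/runc",
             "node_id": "PLACEHOLDER_NODE_ID_1"},
    "containerd": {"repo": "containerd/containerd",
                   "node_id": "PLACEHOLDER_NODE_ID_2"},
}

QUERY = """
query($ids: [ID!]!) {
  nodes(ids: $ids) {
    ... on Repository {
      nameWithOwner
      releases(first: 50) { nodes { tagName } }
    }
  }
}
"""

parser = argparse.ArgumentParser()
parser.add_argument("components", nargs="*", default=list(COMPONENTS),
                    help="components to update (default: all supported)")
args = parser.parse_args()

node_ids = sorted({COMPONENTS[c]["node_id"] for c in args.components})
# The query would then be POSTed to https://api.github.com/graphql with
# {"query": QUERY, "variables": {"ids": node_ids}}.
print(node_ids)
```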
runc upstream does not provide one hash file per asset in their
releases, but one file with all the hashes.
To handle this (and/or any arbitrary format from upstreams), add a
dictionary mapping the name of the download to a lambda function which
transforms the file provided by upstream into a dictionary of hashes,
keyed by architecture.
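A sketch of the mapping, assuming runc's combined hash file contains
one "<hash>  runc.<arch>" line per architecture:

```python
# Map a download name to a function turning the upstream-provided hash
# file into an {arch: hash} dict.
ARCHS = ("amd64", "arm64", "arm", "ppc64le", "s390x")

HASH_PARSERS = {
    "runc": lambda text: {
        line.split()[1].rsplit(".", 1)[-1]: line.split()[0]
        for line in text.splitlines()
        if line.strip().endswith(ARCHS)
    },
}

sample = "aaa111  runc.amd64\nbbb222  runc.arm64\n"
print(HASH_PARSERS["runc"](sample))  # {'amd64': 'aaa111', 'arm64': 'bbb222'}
```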
The script is currently limited to one hardcoded URL for
Kubernetes-related binaries, and a fixed set of architectures.
The solution is three-fold (sketched below):
1. Use a URL template dictionary for each download -> this allows
easily adding support for new downloads.
2. Source the architectures to search from the existing data.
3. Enumerate the existing versions in the data and start searching from
the last one until no newer version is found (newer in the version
order sense, irrespective of actual age).
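A rough sketch of those three steps; the URL templates, the data layout
and the patch-only probing are simplifying assumptions (the real script
also has to handle minor bumps and per-component quirks):

```python
import urllib.error
import urllib.request

from packaging.version import Version

# 1. one URL template per download makes adding a component a one-liner
URL_TEMPLATES = {
    "kubectl": "https://dl.k8s.io/release/v{version}/bin/linux/{arch}/kubectl.sha256",
    "runc": "https://github.com/opencontainers/runc/releases/download/v{version}/runc.sha256sum",
}

def exists(url: str) -> bool:
    try:
        urllib.request.urlopen(urllib.request.Request(url, method="HEAD"))
        return True
    except urllib.error.HTTPError:
        return False

def newer_versions(component: str, checksums: dict) -> list[str]:
    # 2. the architectures to search come from the data we already have
    archs = list(checksums[component])
    # 3. start from the newest known version and probe until nothing
    #    newer is found (only patch bumps shown here, for brevity)
    latest = Version(next(iter(checksums[component][archs[0]])).lstrip("v"))
    found = []
    candidate = Version(f"{latest.major}.{latest.minor}.{latest.micro + 1}")
    while exists(URL_TEMPLATES[component].format(version=candidate, arch=archs[0])):
        found.append(str(candidate))
        candidate = Version(f"{candidate.major}.{candidate.minor}.{candidate.micro + 1}")
    return found
```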