
fix: missing 'v' prefix in offline image tags (#12086)

Signed-off-by: bo.jiang <bo.jiang@daocloud.io>
Authored by ERIK 2 weeks ago, committed by GitHub
commit a4843eaf5e
10 changed files with 23 additions and 22 deletions
1. RELEASE.md (2)
2. contrib/offline/generate_list.sh (2)
3. docs/CNI/cilium.md (2)
4. docs/operations/upgrades.md (18)
5. inventory/sample/group_vars/k8s_cluster/addons.yml (4)
6. inventory/sample/group_vars/k8s_cluster/k8s-cluster.yml (2)
7. inventory/sample/group_vars/k8s_cluster/k8s-net-cilium.yml (2)
8. inventory/sample/group_vars/k8s_cluster/k8s-net-kube-router.yml (4)
9. roles/kubernetes-apps/metallb/templates/metallb.yaml.j2 (4)
10. roles/kubespray-defaults/defaults/main/download.yml (5)

RELEASE.md (2 changes)

@@ -45,7 +45,7 @@ The Kubespray Project is released on an as-needed basis. The process is as follo
* Minor releases can change components' versions, but not the major `kube_version`.
Greater `kube_version` requires a new major or minor release. For example, if Kubespray v2.0.0
-is bound to `kube_version: 1.4.x`, `calico_version: 0.22.0`, `etcd_version: v3.0.6`,
+is bound to `kube_version: 1.4.x`, `calico_version: 0.22.0`, `etcd_version: 3.0.6`,
then Kubespray v2.1.0 may be bound to only minor changes to `kube_version`, like v1.5.1
and *any* changes to other components, like etcd v4, or calico 1.2.3.
And Kubespray v3.x.x shall be bound to `kube_version: 2.x.x` respectively.

contrib/offline/generate_list.sh (2 changes)

@@ -24,7 +24,7 @@ sed -n '/^downloads:/,/download_defaults:/p' ${REPO_ROOT_DIR}/${DOWNLOAD_YML} \
# list separately.
KUBE_IMAGES="kube-apiserver kube-controller-manager kube-scheduler kube-proxy"
for i in $KUBE_IMAGES; do
echo "{{ kube_image_repo }}/$i:{{ kube_version }}" >> ${TEMP_DIR}/images.list.template
echo "{{ kube_image_repo }}/$i:v{{ kube_version }}" >> ${TEMP_DIR}/images.list.template
done
# run ansible to expand templates
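For illustration: with `kube_version: 1.32.2` and the default `kube_image_repo` (assumed here to be `registry.k8s.io`), the corrected template line expands to entries like these in the generated image list:

```
registry.k8s.io/kube-apiserver:v1.32.2
registry.k8s.io/kube-controller-manager:v1.32.2
registry.k8s.io/kube-scheduler:v1.32.2
registry.k8s.io/kube-proxy:v1.32.2
```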

docs/CNI/cilium.md (2 changes)

@@ -233,7 +233,7 @@ cilium_operator_extra_volume_mounts:
## Choose Cilium version
```yml
-cilium_version: v1.12.1
+cilium_version: "1.15.9"
```
## Add variable to config
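Note the now-quoted, prefix-free value. Assuming the Cilium image tag is derived the same way as the MetalLB tag below (a `v` prepended where the image reference is built), the rendered reference would look roughly like this sketch (repository path illustrative):

```yml
image: "quay.io/cilium/cilium:v1.15.9"
```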

docs/operations/upgrades.md (18 changes)

@@ -28,13 +28,13 @@ If you wanted to upgrade just kube_version from v1.18.10 to v1.19.7, you could
deploy the following way:
```ShellSession
-ansible-playbook cluster.yml -i inventory/sample/hosts.ini -e kube_version=v1.18.10 -e upgrade_cluster_setup=true
+ansible-playbook cluster.yml -i inventory/sample/hosts.ini -e kube_version=1.18.10 -e upgrade_cluster_setup=true
```
-And then repeat with v1.19.7 as kube_version:
+And then repeat with 1.19.7 as kube_version:
```ShellSession
-ansible-playbook cluster.yml -i inventory/sample/hosts.ini -e kube_version=v1.19.7 -e upgrade_cluster_setup=true
+ansible-playbook cluster.yml -i inventory/sample/hosts.ini -e kube_version=1.19.7 -e upgrade_cluster_setup=true
```
The variable ```-e upgrade_cluster_setup=true``` needs to be set in order to immediately migrate the deployments of e.g. kube-apiserver inside the cluster, which is usually only done during a graceful upgrade. (Refer to [#4139](https://github.com/kubernetes-sigs/kubespray/issues/4139) and [#4736](https://github.com/kubernetes-sigs/kubespray/issues/4736))
@@ -48,7 +48,7 @@ existing cluster. That means there must be at least 1 kube_control_plane already
deployed.
```ShellSession
-ansible-playbook upgrade-cluster.yml -b -i inventory/sample/hosts.ini -e kube_version=v1.19.7
+ansible-playbook upgrade-cluster.yml -b -i inventory/sample/hosts.ini -e kube_version=1.19.7
```
After a successful upgrade, the Server Version should be updated:
@@ -62,7 +62,7 @@ Server Version: version.Info{Major:"1", Minor:"19", GitVersion:"v1.19.7", GitCom
You can control how many nodes are upgraded at the same time by modifying the ansible variable named `serial`, as explained [here](https://docs.ansible.com/ansible/latest/playbook_guide/playbooks_strategies.html#setting-the-batch-size-with-serial). If you don't set this variable, it will upgrade the cluster nodes in batches of 20% of the available nodes. Setting `serial=1` would mean upgrade one node at a time.
```ShellSession
-ansible-playbook upgrade-cluster.yml -b -i inventory/sample/hosts.ini -e kube_version=v1.20.7 -e "serial=1"
+ansible-playbook upgrade-cluster.yml -b -i inventory/sample/hosts.ini -e kube_version=1.20.7 -e "serial=1"
```
### Pausing the upgrade
@@ -90,14 +90,14 @@ ansible-playbook facts.yml -b -i inventory/sample/hosts.ini
After this upgrade control plane and etcd groups [#5147](https://github.com/kubernetes-sigs/kubespray/issues/5147):
```ShellSession
-ansible-playbook upgrade-cluster.yml -b -i inventory/sample/hosts.ini -e kube_version=v1.20.7 --limit "kube_control_plane:etcd"
+ansible-playbook upgrade-cluster.yml -b -i inventory/sample/hosts.ini -e kube_version=1.20.7 --limit "kube_control_plane:etcd"
```
Now you can upgrade other nodes in any order and quantity:
```ShellSession
-ansible-playbook upgrade-cluster.yml -b -i inventory/sample/hosts.ini -e kube_version=v1.20.7 --limit "node4:node6:node7:node12"
-ansible-playbook upgrade-cluster.yml -b -i inventory/sample/hosts.ini -e kube_version=v1.20.7 --limit "node5*"
+ansible-playbook upgrade-cluster.yml -b -i inventory/sample/hosts.ini -e kube_version=1.20.7 --limit "node4:node6:node7:node12"
+ansible-playbook upgrade-cluster.yml -b -i inventory/sample/hosts.ini -e kube_version=1.20.7 --limit "node5*"
```
## Multiple upgrades
@@ -126,7 +126,7 @@ v2.22.0 -> v2.24.0 : ✕
Assuming you don't explicitly define a kubernetes version in your k8s_cluster.yml, you simply check out the next tag and run the upgrade-cluster.yml playbook
-* If you do define kubernetes version in your inventory (e.g. group_vars/k8s_cluster.yml) then either make sure to update it before running upgrade-cluster, or specify the new version you're upgrading to: `ansible-playbook -i inventory/mycluster/hosts.ini -b upgrade-cluster.yml -e kube_version=v1.11.3`
+* If you do define kubernetes version in your inventory (e.g. group_vars/k8s_cluster.yml) then either make sure to update it before running upgrade-cluster, or specify the new version you're upgrading to: `ansible-playbook -i inventory/mycluster/hosts.ini -b upgrade-cluster.yml -e kube_version=1.11.3`
Otherwise, the upgrade will leave your cluster at the same k8s version defined in your inventory vars.
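Combining the flags shown above, a cautious upgrade of a specific set of nodes, one node at a time, could look like the following sketch (node names illustrative):

```ShellSession
ansible-playbook upgrade-cluster.yml -b -i inventory/sample/hosts.ini -e kube_version=1.20.7 -e "serial=1" --limit "node4:node6"
```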

inventory/sample/group_vars/k8s_cluster/addons.yml (4 changes)

@@ -180,7 +180,7 @@ cert_manager_enabled: false
metallb_enabled: false
metallb_speaker_enabled: "{{ metallb_enabled }}"
metallb_namespace: "metallb-system"
-# metallb_version: v0.13.9
+# metallb_version: 0.13.9
# metallb_protocol: "layer2"
# metallb_port: "7472"
# metallb_memberlist_port: "7946"
@@ -242,7 +242,7 @@ metallb_namespace: "metallb-system"
# - pool2
argocd_enabled: false
-# argocd_version: v2.14.5
+# argocd_version: 2.14.5
# argocd_namespace: argocd
# Default password:
# - https://argo-cd.readthedocs.io/en/stable/getting_started/#4-login-using-the-cli
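Both addon examples now follow the same convention: the version variable is prefix-free, and any `v` required by the registry tag is added where the image reference is assembled. A minimal sketch of enabling MetalLB under this convention (values illustrative):

```yml
metallb_enabled: true
metallb_version: 0.13.9  # no 'v' prefix; the image tag becomes v0.13.9 via metallb_image_tag
```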

inventory/sample/group_vars/k8s_cluster/k8s-cluster.yml (2 changes)

@@ -17,7 +17,7 @@ kube_token_dir: "{{ kube_config_dir }}/tokens"
kube_api_anonymous_auth: true
## Change this to use another Kubernetes version, e.g. a current beta release
-kube_version: v1.32.2
+kube_version: 1.32.2
# Where the binaries will be downloaded.
# Note: ensure that you've enough disk space (about 1G)

inventory/sample/group_vars/k8s_cluster/k8s-net-cilium.yml (2 changes)

@@ -1,5 +1,5 @@
---
-# cilium_version: "v1.15.9"
+# cilium_version: "1.15.9"
# Log-level
# cilium_debug: false

inventory/sample/group_vars/k8s_cluster/k8s-net-kube-router.yml (4 changes)

@@ -2,9 +2,9 @@
# Kube router version
# Default to v2
-# kube_router_version: "v2.0.0"
+# kube_router_version: "2.0.0"
# Uncomment to use v1 (Deprecated)
-# kube_router_version: "v1.6.0"
+# kube_router_version: "1.6.0"
# Enables Pod Networking -- Advertises and learns the routes to Pods via iBGP
# kube_router_run_router: true

roles/kubernetes-apps/metallb/templates/metallb.yaml.j2 (4 changes)

@@ -1716,7 +1716,7 @@ spec:
value: memberlist
- name: METALLB_DEPLOYMENT
value: controller
image: "{{ metallb_controller_image_repo }}:v{{ metallb_version }}"
image: "{{ metallb_controller_image_repo }}:{{ metallb_image_tag }}"
livenessProbe:
failureThreshold: 3
httpGet:
@@ -1824,7 +1824,7 @@ spec:
secretKeyRef:
key: secretkey
name: memberlist
image: "{{ metallb_speaker_image_repo }}:v{{ metallb_version }}"
image: "{{ metallb_speaker_image_repo }}:{{ metallb_image_tag }}"
livenessProbe:
failureThreshold: 3
httpGet:
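With the defaults added in `download.yml` below (`metallb_version: 0.13.9`, hence `metallb_image_tag: "v0.13.9"`) and assuming the stock `quay_image_repo` of `quay.io`, the controller line would render as:

```yml
image: "quay.io/metallb/controller:v0.13.9"
```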

roles/kubespray-defaults/defaults/main/download.yml (5 changes)

@@ -398,6 +398,7 @@ dashboard_metrics_scraper_tag: "v1.0.8"
metallb_speaker_image_repo: "{{ quay_image_repo }}/metallb/speaker"
metallb_controller_image_repo: "{{ quay_image_repo }}/metallb/controller"
metallb_version: 0.13.9
+metallb_image_tag: "v{{ metallb_version }}"
node_feature_discovery_version: 0.16.4
node_feature_discovery_image_repo: "{{ kube_image_repo }}/nfd/node-feature-discovery"
@@ -1112,7 +1113,7 @@ downloads:
enabled: "{{ metallb_speaker_enabled }}"
container: true
repo: "{{ metallb_speaker_image_repo }}"
tag: "{{ metallb_version }}"
tag: "{{ metallb_image_tag }}"
checksum: "{{ metallb_speaker_digest_checksum | default(None) }}"
groups:
- kube_control_plane
@@ -1121,7 +1122,7 @@ downloads:
enabled: "{{ metallb_enabled }}"
container: true
repo: "{{ metallb_controller_image_repo }}"
tag: "{{ metallb_version }}"
tag: "{{ metallb_image_tag }}"
checksum: "{{ metallb_controller_digest_checksum | default(None) }}"
groups:
- kube_control_plane
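The net effect of these hunks: `metallb_version` stays prefix-free, and the `v` lives in a single derived variable, so the download entries and the manifest template can no longer drift apart. Rendered with the defaults above, the values work out to:

```yml
metallb_version: 0.13.9
metallb_image_tag: v0.13.9  # "v{{ metallb_version }}" after templating
```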
