
Partial Cilium 1.16+ Support & Add vars for configuring cilium IP load balancer pools and bgp v1 & v2 apis (#11620)

* Add vars for configuring cilium IP load balancer pools and bgp peer policies

* Cilium 1.16+ Support - Add vars for configuring cilium bgpv2 api & handle cilium_kube_proxy_replacement unsupported values
logicsys committed 5 days ago via GitHub (commit b8541962f3)
16 changed files with 550 additions and 5 deletions
  1. docs/CNI/cilium.md (136 changes)
  2. inventory/sample/group_vars/k8s_cluster/k8s-net-cilium.yml (95 changes)
  3. roles/kubespray-defaults/defaults/main/main.yml (2 changes)
  4. roles/network_plugin/cilium/defaults/main.yml (22 changes)
  5. roles/network_plugin/cilium/tasks/apply.yml (216 changes)
  6. roles/network_plugin/cilium/templates/cilium-operator/cr.yml.j2 (5 changes)
  7. roles/network_plugin/cilium/templates/cilium-operator/deploy.yml.j2 (2 changes)
  8. roles/network_plugin/cilium/templates/cilium/cilium-bgp-advertisement.yml.j2 (12 changes)
  9. roles/network_plugin/cilium/templates/cilium/cilium-bgp-cluster-config.yml.j2 (9 changes)
  10. roles/network_plugin/cilium/templates/cilium/cilium-bgp-node-config-override.yml.j2 (9 changes)
  11. roles/network_plugin/cilium/templates/cilium/cilium-bgp-peer-config.yml.j2 (9 changes)
  12. roles/network_plugin/cilium/templates/cilium/cilium-bgp-peering-policy.yml.j2 (9 changes)
  13. roles/network_plugin/cilium/templates/cilium/cilium-loadbalancer-ip-pool.yml.j2 (12 changes)
  14. roles/network_plugin/cilium/templates/cilium/config.yml.j2 (9 changes)
  15. roles/network_plugin/cilium/templates/cilium/cr.yml.j2 (4 changes)
  16. roles/network_plugin/cilium/templates/cilium/ds.yml.j2 (4 changes)

docs/CNI/cilium.md (136 changes)

@@ -45,10 +45,144 @@ cilium_pool_mask_size Specifies the size allocated to node.ipam.podCIDRs from cl
cilium_pool_mask_size_ipv6: "120"
```
### IP Load Balancer Pools
Cilium's IP Load Balancer Pools can be configured with the `cilium_loadbalancer_ip_pools` variable:
```yml
cilium_loadbalancer_ip_pools:
  - name: "blue-pool"
    cidrs:
      - "10.0.10.0/24"
```
For further information, check [LB IPAM documentation](https://docs.cilium.io/en/stable/network/lb-ipam/)
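Each pool entry is rendered by the template added in this PR into a `CiliumLoadBalancerIPPool` resource; for the `blue-pool` example above, the rendered manifest looks roughly like:

```yml
apiVersion: "cilium.io/v2alpha1"
kind: CiliumLoadBalancerIPPool
metadata:
  name: "blue-pool"
spec:
  blocks:
    - cidr: "10.0.10.0/24"
```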
### BGP Control Plane
Cilium's BGP Control Plane can be enabled by setting `cilium_enable_bgp_control_plane` to `true`:
```yml
cilium_enable_bgp_control_plane: true
```
For further information, check [BGP Peering Policy documentation](https://docs.cilium.io/en/latest/network/bgp-control-plane/bgp-control-plane-v1/)
### BGP Control Plane Resources (New bgpv2 API v1.16+)
Cilium's BGP control plane is managed by a set of custom resources that provide a flexible way to configure BGP peers, policies, and advertisements.
Cilium's BGP Instances can be configured with the `cilium_bgp_cluster_configs` variable:
```yml
cilium_bgp_cluster_configs:
  - name: "cilium-bgp"
    spec:
      bgpInstances:
        - name: "instance-64512"
          localASN: 64512
          peers:
            - name: "peer-64512-tor1"
              peerASN: 64512
              peerAddress: '10.47.1.1'
              peerConfigRef:
                name: "cilium-peer"
      nodeSelector:
        matchExpressions:
          - {key: somekey, operator: NotIn, values: ['never-used-value']}
```
Cilium's BGP Peers can be configured with the `cilium_bgp_peer_configs` variable:
```yml
cilium_bgp_peer_configs:
  - name: cilium-peer
    spec:
      # authSecretRef: bgp-auth-secret
      gracefulRestart:
        enabled: true
        restartTimeSeconds: 15
      families:
        - afi: ipv4
          safi: unicast
          advertisements:
            matchLabels:
              advertise: "bgp"
        - afi: ipv6
          safi: unicast
          advertisements:
            matchLabels:
              advertise: "bgp"
```
Cilium's BGP Advertisements can be configured with the `cilium_bgp_advertisements` variable:
```yml
cilium_bgp_advertisements:
  - name: bgp-advertisements
    labels:
      advertise: bgp
    spec:
      advertisements:
        - advertisementType: "PodCIDR"
          attributes:
            communities:
              standard: [ "64512:99" ]
        - advertisementType: "Service"
          service:
            addresses:
              - ClusterIP
              - ExternalIP
              - LoadBalancerIP
          selector:
            matchExpressions:
              - {key: somekey, operator: NotIn, values: ['never-used-value']}
```
Cilium's BGP Node Config Overrides can be configured with the `cilium_bgp_node_config_overrides` variable:
```yml
cilium_bgp_node_config_overrides:
  - name: bgpv2-cplane-dev-multi-homing-worker
    spec:
      bgpInstances:
        - name: "instance-65000"
          routerID: "192.168.10.1"
          localPort: 1790
          peers:
            - name: "peer-65000-tor1"
              localAddress: fd00:10:0:2::2
            - name: "peer-65000-tor2"
              localAddress: fd00:11:0:2::2
```
For further information, check [BGP Control Plane Resources documentation](https://docs.cilium.io/en/latest/network/bgp-control-plane/bgp-control-plane-v2/)
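The bgpv2 resources reference each other by name: each peer's `peerConfigRef` must match a `CiliumBGPPeerConfig`. A small sanity check over the variables can catch dangling references before deploying. A minimal sketch, assuming the variable shapes shown above (`undefined_peer_config_refs` is a hypothetical helper, not part of kubespray):

```python
def undefined_peer_config_refs(cluster_configs, peer_configs):
    """Return peerConfigRef names used by BGP instances that have no
    matching entry in cilium_bgp_peer_configs (illustrative helper)."""
    defined = {p["name"] for p in peer_configs}
    missing = set()
    for cfg in cluster_configs:
        for instance in cfg["spec"]["bgpInstances"]:
            for peer in instance.get("peers", []):
                ref = peer.get("peerConfigRef", {}).get("name")
                if ref and ref not in defined:
                    missing.add(ref)
    return sorted(missing)

# Data mirrors the documentation examples above.
cluster_configs = [{"name": "cilium-bgp", "spec": {"bgpInstances": [
    {"name": "instance-64512", "localASN": 64512,
     "peers": [{"name": "peer-64512-tor1", "peerASN": 64512,
                "peerAddress": "10.47.1.1",
                "peerConfigRef": {"name": "cilium-peer"}}]}]}}]
peer_configs = [{"name": "cilium-peer", "spec": {}}]
print(undefined_peer_config_refs(cluster_configs, peer_configs))  # []
```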
### BGP Peering Policies (Legacy < v1.16)
Cilium's BGP Peering Policies can be configured with the `cilium_bgp_peering_policies` variable:
```yml
cilium_bgp_peering_policies:
  - name: "01-bgp-peering-policy"
    spec:
      virtualRouters:
        - localASN: 64512
          exportPodCIDR: false
          neighbors:
            - peerAddress: '10.47.1.1/24'
              peerASN: 64512
      serviceSelector:
        matchExpressions:
          - {key: somekey, operator: NotIn, values: ['never-used-value']}
```
For further information, check [BGP Peering Policy documentation](https://docs.cilium.io/en/latest/network/bgp-control-plane/bgp-control-plane-v1/#bgp-peering-policy-legacy)
## Kube-proxy replacement with Cilium
Cilium can run without kube-proxy by setting `cilium_kube_proxy_replacement`
-to `strict`.
+to `strict` (< v1.16) or `true` (Cilium v1.16+ no longer accepts `strict`; kubespray converts it to `true` when running v1.16+).
Without kube-proxy, cilium needs to know the address of the kube-apiserver
and this must be set globally for all Cilium components (agents and operators).
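The conversion rules described above can be pictured with a minimal Python sketch of the truthiness logic kubespray's templates apply (function name and version parsing are illustrative, not kubespray code):

```python
def normalize_kube_proxy_replacement(value, cilium_version):
    """Approximate the template logic: Cilium < 1.16 passes the value
    through unchanged; 1.16+ only accepts "true"/"false", so legacy
    values such as "strict" are converted to "true"."""
    major, minor = (int(p) for p in cilium_version.lstrip("v").split(".")[:2])
    if (major, minor) < (1, 16):
        return str(value)  # legacy values like "strict" pass through
    # Mirrors: (value == 'strict') or (value | bool) or (lower(value) == 'true')
    truthy = str(value).strip().lower() in ("strict", "true", "yes", "1")
    return "true" if truthy else "false"

print(normalize_kube_proxy_replacement("strict", "v1.15.5"))  # strict
print(normalize_kube_proxy_replacement("strict", "v1.16.0"))  # true
print(normalize_kube_proxy_replacement(False, "v1.16.0"))     # false
```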

inventory/sample/group_vars/k8s_cluster/k8s-net-cilium.yml (95 changes)

@@ -247,6 +247,101 @@ cilium_l2announcements: false
# -- Enable native IP masquerade support in eBPF
# cilium_enable_bpf_masquerade: false
# -- Enable BGP Control Plane
# cilium_enable_bgp_control_plane: false
# -- Configure Loadbalancer IP Pools
# cilium_loadbalancer_ip_pools:
#   - name: "blue-pool"
#     cidrs:
#       - "10.0.10.0/24"
# -- Configure BGP Instances (New bgpv2 API v1.16+)
# cilium_bgp_cluster_configs:
#   - name: "cilium-bgp"
#     spec:
#       bgpInstances:
#         - name: "instance-64512"
#           localASN: 64512
#           peers:
#             - name: "peer-64512-tor1"
#               peerASN: 64512
#               peerAddress: '10.47.1.1'
#               peerConfigRef:
#                 name: "cilium-peer"
#       nodeSelector:
#         matchExpressions:
#           - {key: somekey, operator: NotIn, values: ['never-used-value']}
# -- Configure BGP Peers (New bgpv2 API v1.16+)
# cilium_bgp_peer_configs:
#   - name: cilium-peer
#     spec:
#       # authSecretRef: bgp-auth-secret
#       gracefulRestart:
#         enabled: true
#         restartTimeSeconds: 15
#       families:
#         - afi: ipv4
#           safi: unicast
#           advertisements:
#             matchLabels:
#               advertise: "bgp"
#         - afi: ipv6
#           safi: unicast
#           advertisements:
#             matchLabels:
#               advertise: "bgp"
# -- Configure BGP Advertisements (New bgpv2 API v1.16+)
# cilium_bgp_advertisements:
#   - name: bgp-advertisements
#     labels:
#       advertise: bgp
#     spec:
#       advertisements:
#         # - advertisementType: "PodCIDR"
#         #   attributes:
#         #     communities:
#         #       standard: [ "64512:99" ]
#         - advertisementType: "Service"
#           service:
#             addresses:
#               - ClusterIP
#               - ExternalIP
#               - LoadBalancerIP
#           selector:
#             matchExpressions:
#               - {key: somekey, operator: NotIn, values: ['never-used-value']}
# -- Configure BGP Node Config Overrides (New bgpv2 API v1.16+)
# cilium_bgp_node_config_overrides:
#   - name: bgp-node-config-override
#     spec:
#       bgpInstances:
#         - name: "instance-65000"
#           routerID: "192.168.10.1"
#           localPort: 1790
#           peers:
#             - name: "peer-65000-tor1"
#               localAddress: fd00:10:0:2::2
#             - name: "peer-65000-tor2"
#               localAddress: fd00:11:0:2::2
# -- Configure BGP Peering Policies (Legacy < v1.16)
# cilium_bgp_peering_policies:
#   - name: "01-bgp-peering-policy"
#     spec:
#       virtualRouters:
#         - localASN: 64512
#           exportPodCIDR: false
#           neighbors:
#             - peerAddress: '10.47.1.1/24'
#               peerASN: 64512
#       serviceSelector:
#         matchExpressions:
#           - {key: somekey, operator: NotIn, values: ['never-used-value']}
# -- Configure whether direct routing mode should route traffic via
# host stack (true) or directly and more efficiently out of BPF (false) if
# the kernel supports it. The latter has the implication that it will also

roles/kubespray-defaults/defaults/main/main.yml (2 changes)

@@ -43,7 +43,7 @@ kubeadm_init_phases_skip_default: [ "addon/coredns" ]
kubeadm_init_phases_skip: >-
{%- if kube_network_plugin == 'kube-router' and (kube_router_run_service_proxy is defined and kube_router_run_service_proxy) -%}
{{ kubeadm_init_phases_skip_default + ["addon/kube-proxy"] }}
-{%- elif kube_network_plugin == 'cilium' and (cilium_kube_proxy_replacement is defined and cilium_kube_proxy_replacement == 'strict') -%}
+{%- elif kube_network_plugin == 'cilium' and (cilium_kube_proxy_replacement is defined and (cilium_kube_proxy_replacement == 'strict' or (cilium_kube_proxy_replacement | bool) or (cilium_kube_proxy_replacement | string | lower == 'true'))) -%}
{{ kubeadm_init_phases_skip_default + ["addon/kube-proxy"] }}
{%- elif kube_network_plugin == 'calico' and (calico_bpf_enabled is defined and calico_bpf_enabled) -%}
{{ kubeadm_init_phases_skip_default + ["addon/kube-proxy"] }}

roles/network_plugin/cilium/defaults/main.yml (22 changes)

@@ -46,6 +46,9 @@ cilium_tunnel_mode: vxlan
# LoadBalancer Mode (snat/dsr/hybrid) Ref: https://docs.cilium.io/en/stable/network/kubernetes/kubeproxy-free/#dsr-mode
cilium_loadbalancer_mode: snat
# -- Configure Loadbalancer IP Pools
cilium_loadbalancer_ip_pools: []
# Optional features
cilium_enable_prometheus: false
# Enable if you want to make use of hostPort mappings
@@ -277,6 +280,25 @@ cilium_monitor_aggregation_flags: "all"
cilium_enable_bpf_clock_probe: true
# -- Enable BGP Control Plane
cilium_enable_bgp_control_plane: false
# -- Configure BGP Instances (New bgpv2 API v1.16+)
cilium_bgp_cluster_configs: []
# -- Configure BGP Peers (New bgpv2 API v1.16+)
cilium_bgp_peer_configs: []
# -- Configure BGP Advertisements (New bgpv2 API v1.16+)
cilium_bgp_advertisements: []
# -- Configure BGP Node Config Overrides (New bgpv2 API v1.16+)
cilium_bgp_node_config_overrides: []
# -- Configure BGP Peers (Legacy < v1.16)
cilium_bgp_peering_policies: []
# -- Whether to enable CNP status updates.
cilium_disable_cnp_status_updates: true

roles/network_plugin/cilium/tasks/apply.yml (216 changes)

@@ -31,3 +31,219 @@
  when:
    - inventory_hostname == groups['kube_control_plane'][0] and not item is skipped
    - cilium_enable_hubble and cilium_hubble_install
- name: Cilium | Wait for CiliumLoadBalancerIPPool CRD to be present
  command: "{{ kubectl }} wait --for condition=established --timeout=60s crd/ciliumloadbalancerippools.cilium.io"
  register: cillium_lbippool_crd_ready
  retries: "{{ cilium_rolling_restart_wait_retries_count | int }}"
  delay: "{{ cilium_rolling_restart_wait_retries_delay_seconds | int }}"
  failed_when: false
  when:
    - inventory_hostname == groups['kube_control_plane'][0]
    - cilium_loadbalancer_ip_pools is defined and (cilium_loadbalancer_ip_pools | length > 0)

- name: Cilium | Create CiliumLoadBalancerIPPool manifests
  template:
    src: "{{ item.name }}/{{ item.file }}.j2"
    dest: "{{ kube_config_dir }}/{{ item.name }}-{{ item.file }}"
    mode: "0644"
  with_items:
    - {name: cilium, file: cilium-loadbalancer-ip-pool.yml, type: CiliumLoadBalancerIPPool}
  when:
    - inventory_hostname == groups['kube_control_plane'][0]
    - cillium_lbippool_crd_ready is defined and cillium_lbippool_crd_ready.rc is defined and cillium_lbippool_crd_ready.rc == 0
    - cilium_loadbalancer_ip_pools is defined and (cilium_loadbalancer_ip_pools | length > 0)

- name: Cilium | Apply CiliumLoadBalancerIPPool from cilium_loadbalancer_ip_pools
  kube:
    name: "{{ item.name }}"
    kubectl: "{{ bin_dir }}/kubectl"
    resource: "{{ item.type }}"
    filename: "{{ kube_config_dir }}/{{ item.name }}-{{ item.file }}"
    state: "latest"
  loop:
    - {name: cilium, file: cilium-loadbalancer-ip-pool.yml, type: CiliumLoadBalancerIPPool}
  when:
    - inventory_hostname == groups['kube_control_plane'][0]
    - cillium_lbippool_crd_ready is defined and cillium_lbippool_crd_ready.rc is defined and cillium_lbippool_crd_ready.rc == 0
    - cilium_loadbalancer_ip_pools is defined and (cilium_loadbalancer_ip_pools | length > 0)
- name: Cilium | Wait for CiliumBGPPeeringPolicy CRD to be present
  command: "{{ kubectl }} wait --for condition=established --timeout=60s crd/ciliumbgppeeringpolicies.cilium.io"
  register: cillium_bgpppolicy_crd_ready
  retries: "{{ cilium_rolling_restart_wait_retries_count | int }}"
  delay: "{{ cilium_rolling_restart_wait_retries_delay_seconds | int }}"
  failed_when: false
  when:
    - inventory_hostname == groups['kube_control_plane'][0]
    - cilium_bgp_peering_policies is defined and (cilium_bgp_peering_policies | length > 0)

- name: Cilium | Create CiliumBGPPeeringPolicy manifests
  template:
    src: "{{ item.name }}/{{ item.file }}.j2"
    dest: "{{ kube_config_dir }}/{{ item.name }}-{{ item.file }}"
    mode: "0644"
  with_items:
    - {name: cilium, file: cilium-bgp-peering-policy.yml, type: CiliumBGPPeeringPolicy}
  when:
    - inventory_hostname == groups['kube_control_plane'][0]
    - cillium_bgpppolicy_crd_ready is defined and cillium_bgpppolicy_crd_ready.rc is defined and cillium_bgpppolicy_crd_ready.rc == 0
    - cilium_bgp_peering_policies is defined and (cilium_bgp_peering_policies | length > 0)

- name: Cilium | Apply CiliumBGPPeeringPolicy from cilium_bgp_peering_policies
  kube:
    name: "{{ item.name }}"
    kubectl: "{{ bin_dir }}/kubectl"
    resource: "{{ item.type }}"
    filename: "{{ kube_config_dir }}/{{ item.name }}-{{ item.file }}"
    state: "latest"
  loop:
    - {name: cilium, file: cilium-bgp-peering-policy.yml, type: CiliumBGPPeeringPolicy}
  when:
    - inventory_hostname == groups['kube_control_plane'][0]
    - cillium_bgpppolicy_crd_ready is defined and cillium_bgpppolicy_crd_ready.rc is defined and cillium_bgpppolicy_crd_ready.rc == 0
    - cilium_bgp_peering_policies is defined and (cilium_bgp_peering_policies | length > 0)
- name: Cilium | Wait for CiliumBGPClusterConfig CRD to be present
  command: "{{ kubectl }} wait --for condition=established --timeout=60s crd/ciliumbgpclusterconfigs.cilium.io"
  register: cillium_bgpcconfig_crd_ready
  retries: "{{ cilium_rolling_restart_wait_retries_count | int }}"
  delay: "{{ cilium_rolling_restart_wait_retries_delay_seconds | int }}"
  failed_when: false
  when:
    - inventory_hostname == groups['kube_control_plane'][0]
    - cilium_bgp_cluster_configs is defined and (cilium_bgp_cluster_configs | length > 0)

- name: Cilium | Create CiliumBGPClusterConfig manifests
  template:
    src: "{{ item.name }}/{{ item.file }}.j2"
    dest: "{{ kube_config_dir }}/{{ item.name }}-{{ item.file }}"
    mode: "0644"
  with_items:
    - {name: cilium, file: cilium-bgp-cluster-config.yml, type: CiliumBGPClusterConfig}
  when:
    - inventory_hostname == groups['kube_control_plane'][0]
    - cillium_bgpcconfig_crd_ready is defined and cillium_bgpcconfig_crd_ready.rc is defined and cillium_bgpcconfig_crd_ready.rc == 0
    - cilium_bgp_cluster_configs is defined and (cilium_bgp_cluster_configs | length > 0)

- name: Cilium | Apply CiliumBGPClusterConfig from cilium_bgp_cluster_configs
  kube:
    name: "{{ item.name }}"
    kubectl: "{{ bin_dir }}/kubectl"
    resource: "{{ item.type }}"
    filename: "{{ kube_config_dir }}/{{ item.name }}-{{ item.file }}"
    state: "latest"
  loop:
    - {name: cilium, file: cilium-bgp-cluster-config.yml, type: CiliumBGPClusterConfig}
  when:
    - inventory_hostname == groups['kube_control_plane'][0]
    - cillium_bgpcconfig_crd_ready is defined and cillium_bgpcconfig_crd_ready.rc is defined and cillium_bgpcconfig_crd_ready.rc == 0
    - cilium_bgp_cluster_configs is defined and (cilium_bgp_cluster_configs | length > 0)
- name: Cilium | Wait for CiliumBGPPeerConfig CRD to be present
  command: "{{ kubectl }} wait --for condition=established --timeout=60s crd/ciliumbgppeerconfigs.cilium.io"
  register: cillium_bgppconfig_crd_ready
  retries: "{{ cilium_rolling_restart_wait_retries_count | int }}"
  delay: "{{ cilium_rolling_restart_wait_retries_delay_seconds | int }}"
  failed_when: false
  when:
    - inventory_hostname == groups['kube_control_plane'][0]
    - cilium_bgp_peer_configs is defined and (cilium_bgp_peer_configs | length > 0)

- name: Cilium | Create CiliumBGPPeerConfig manifests
  template:
    src: "{{ item.name }}/{{ item.file }}.j2"
    dest: "{{ kube_config_dir }}/{{ item.name }}-{{ item.file }}"
    mode: "0644"
  with_items:
    - {name: cilium, file: cilium-bgp-peer-config.yml, type: CiliumBGPPeerConfig}
  when:
    - inventory_hostname == groups['kube_control_plane'][0]
    - cillium_bgppconfig_crd_ready is defined and cillium_bgppconfig_crd_ready.rc is defined and cillium_bgppconfig_crd_ready.rc == 0
    - cilium_bgp_peer_configs is defined and (cilium_bgp_peer_configs | length > 0)

- name: Cilium | Apply CiliumBGPPeerConfig from cilium_bgp_peer_configs
  kube:
    name: "{{ item.name }}"
    kubectl: "{{ bin_dir }}/kubectl"
    resource: "{{ item.type }}"
    filename: "{{ kube_config_dir }}/{{ item.name }}-{{ item.file }}"
    state: "latest"
  loop:
    - {name: cilium, file: cilium-bgp-peer-config.yml, type: CiliumBGPPeerConfig}
  when:
    - inventory_hostname == groups['kube_control_plane'][0]
    - cillium_bgppconfig_crd_ready is defined and cillium_bgppconfig_crd_ready.rc is defined and cillium_bgppconfig_crd_ready.rc == 0
    - cilium_bgp_peer_configs is defined and (cilium_bgp_peer_configs | length > 0)
- name: Cilium | Wait for CiliumBGPAdvertisement CRD to be present
  command: "{{ kubectl }} wait --for condition=established --timeout=60s crd/ciliumbgpadvertisements.cilium.io"
  register: cillium_bgpadvert_crd_ready
  retries: "{{ cilium_rolling_restart_wait_retries_count | int }}"
  delay: "{{ cilium_rolling_restart_wait_retries_delay_seconds | int }}"
  failed_when: false
  when:
    - inventory_hostname == groups['kube_control_plane'][0]
    - cilium_bgp_advertisements is defined and (cilium_bgp_advertisements | length > 0)

- name: Cilium | Create CiliumBGPAdvertisement manifests
  template:
    src: "{{ item.name }}/{{ item.file }}.j2"
    dest: "{{ kube_config_dir }}/{{ item.name }}-{{ item.file }}"
    mode: "0644"
  with_items:
    - {name: cilium, file: cilium-bgp-advertisement.yml, type: CiliumBGPAdvertisement}
  when:
    - inventory_hostname == groups['kube_control_plane'][0]
    - cillium_bgpadvert_crd_ready is defined and cillium_bgpadvert_crd_ready.rc is defined and cillium_bgpadvert_crd_ready.rc == 0
    - cilium_bgp_advertisements is defined and (cilium_bgp_advertisements | length > 0)

- name: Cilium | Apply CiliumBGPAdvertisement from cilium_bgp_advertisements
  kube:
    name: "{{ item.name }}"
    kubectl: "{{ bin_dir }}/kubectl"
    resource: "{{ item.type }}"
    filename: "{{ kube_config_dir }}/{{ item.name }}-{{ item.file }}"
    state: "latest"
  loop:
    - {name: cilium, file: cilium-bgp-advertisement.yml, type: CiliumBGPAdvertisement}
  when:
    - inventory_hostname == groups['kube_control_plane'][0]
    - cillium_bgpadvert_crd_ready is defined and cillium_bgpadvert_crd_ready.rc is defined and cillium_bgpadvert_crd_ready.rc == 0
    - cilium_bgp_advertisements is defined and (cilium_bgp_advertisements | length > 0)
- name: Cilium | Wait for CiliumBGPNodeConfigOverride CRD to be present
  command: "{{ kubectl }} wait --for condition=established --timeout=60s crd/ciliumbgpnodeconfigoverrides.cilium.io"
  register: cilium_bgp_node_config_crd_ready
  retries: "{{ cilium_rolling_restart_wait_retries_count | int }}"
  delay: "{{ cilium_rolling_restart_wait_retries_delay_seconds | int }}"
  failed_when: false
  when:
    - inventory_hostname == groups['kube_control_plane'][0]
    - cilium_bgp_node_config_overrides is defined and (cilium_bgp_node_config_overrides | length > 0)

- name: Cilium | Create CiliumBGPNodeConfigOverride manifests
  template:
    src: "{{ item.name }}/{{ item.file }}.j2"
    dest: "{{ kube_config_dir }}/{{ item.name }}-{{ item.file }}"
    mode: "0644"
  with_items:
    - {name: cilium, file: cilium-bgp-node-config-override.yml, type: CiliumBGPNodeConfigOverride}
  when:
    - inventory_hostname == groups['kube_control_plane'][0]
    - cilium_bgp_node_config_crd_ready is defined and cilium_bgp_node_config_crd_ready.rc is defined and cilium_bgp_node_config_crd_ready.rc == 0
    - cilium_bgp_node_config_overrides is defined and (cilium_bgp_node_config_overrides | length > 0)

- name: Cilium | Apply CiliumBGPNodeConfigOverride from cilium_bgp_node_config_overrides
  kube:
    name: "{{ item.name }}"
    kubectl: "{{ bin_dir }}/kubectl"
    resource: "{{ item.type }}"
    filename: "{{ kube_config_dir }}/{{ item.name }}-{{ item.file }}"
    state: "latest"
  loop:
    - {name: cilium, file: cilium-bgp-node-config-override.yml, type: CiliumBGPNodeConfigOverride}
  when:
    - inventory_hostname == groups['kube_control_plane'][0]
    - cilium_bgp_node_config_crd_ready is defined and cilium_bgp_node_config_crd_ready.rc is defined and cilium_bgp_node_config_crd_ready.rc == 0
    - cilium_bgp_node_config_overrides is defined and (cilium_bgp_node_config_overrides | length > 0)

roles/network_plugin/cilium/templates/cilium-operator/cr.yml.j2 (5 changes)

@@ -102,6 +102,11 @@ rules:
- ciliumbgppeerconfigs
- ciliumbgpadvertisements
- ciliumbgpnodeconfigs
{% endif %}
{% if cilium_version | regex_replace('v') is version('1.16', '>=') %}
- ciliumbgpclusterconfigs
- ciliumbgpclusterconfigs/status
- ciliumbgpnodeconfigoverrides
{% endif %}
verbs:
- '*'
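The `cilium_version | regex_replace('v') is version('1.16', '>=')` guard above compares versions numerically, not lexically. A stdlib sketch of the same comparison, for illustration (helper name is an assumption, not kubespray code):

```python
def cilium_at_least(version, minimum="1.16"):
    """Numeric dotted-version comparison, mirroring Jinja's version() test
    after stripping the leading "v"."""
    parse = lambda v: tuple(int(p) for p in v.lstrip("v").split(".")[:2])
    return parse(version) >= parse(minimum)

print(cilium_at_least("v1.16.3"))  # True
print(cilium_at_least("v1.9.2"))   # False: (1, 9) < (1, 16), unlike a string compare
```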

roles/network_plugin/cilium/templates/cilium-operator/deploy.yml.j2 (2 changes)

@@ -84,7 +84,7 @@ spec:
name: cilium-aws
key: AWS_DEFAULT_REGION
optional: true
-{% if cilium_kube_proxy_replacement == 'strict' %}
+{% if (cilium_kube_proxy_replacement == 'strict') or (cilium_kube_proxy_replacement | bool) or (cilium_kube_proxy_replacement | string | lower == 'true') %}
- name: KUBERNETES_SERVICE_HOST
value: "{{ kube_apiserver_global_endpoint | urlsplit('hostname') }}"
- name: KUBERNETES_SERVICE_PORT

roles/network_plugin/cilium/templates/cilium/cilium-bgp-advertisement.yml.j2 (12 changes)

@@ -0,0 +1,12 @@
{% for cilium_bgp_advertisement in cilium_bgp_advertisements %}
---
apiVersion: "cilium.io/v2alpha1"
kind: CiliumBGPAdvertisement
metadata:
  name: "{{ cilium_bgp_advertisement.name }}"
{% if cilium_bgp_advertisement.labels %}
  labels: {{ cilium_bgp_advertisement.labels | to_yaml }}
{% endif %}
spec:
    {{ cilium_bgp_advertisement.spec | to_yaml | indent(4) }}
{% endfor %}

roles/network_plugin/cilium/templates/cilium/cilium-bgp-cluster-config.yml.j2 (9 changes)

@@ -0,0 +1,9 @@
{% for cilium_bgp_cluster_config in cilium_bgp_cluster_configs %}
---
apiVersion: "cilium.io/v2alpha1"
kind: CiliumBGPClusterConfig
metadata:
  name: "{{ cilium_bgp_cluster_config.name }}"
spec:
  {{ cilium_bgp_cluster_config.spec | to_yaml | indent(2) }}
{% endfor %}

roles/network_plugin/cilium/templates/cilium/cilium-bgp-node-config-override.yml.j2 (9 changes)

@@ -0,0 +1,9 @@
{% for cilium_bgp_node_config_override in cilium_bgp_node_config_overrides %}
---
apiVersion: "cilium.io/v2alpha1"
kind: CiliumBGPNodeConfigOverride
metadata:
  name: "{{ cilium_bgp_node_config_override.name }}"
spec:
  {{ cilium_bgp_node_config_override.spec | to_yaml | indent(2) }}
{% endfor %}

roles/network_plugin/cilium/templates/cilium/cilium-bgp-peer-config.yml.j2 (9 changes)

@@ -0,0 +1,9 @@
{% for cilium_bgp_peer_config in cilium_bgp_peer_configs %}
---
apiVersion: "cilium.io/v2alpha1"
kind: CiliumBGPPeerConfig
metadata:
  name: "{{ cilium_bgp_peer_config.name }}"
spec:
  {{ cilium_bgp_peer_config.spec | to_yaml | indent(2) }}
{% endfor %}

roles/network_plugin/cilium/templates/cilium/cilium-bgp-peering-policy.yml.j2 (9 changes)

@@ -0,0 +1,9 @@
{% for cilium_bgp_peering_policy in cilium_bgp_peering_policies %}
---
apiVersion: "cilium.io/v2alpha1"
kind: CiliumBGPPeeringPolicy
metadata:
  name: "{{ cilium_bgp_peering_policy.name }}"
spec:
  {{ cilium_bgp_peering_policy.spec | to_yaml | indent(2) }}
{% endfor %}

roles/network_plugin/cilium/templates/cilium/cilium-loadbalancer-ip-pool.yml.j2 (12 changes)

@@ -0,0 +1,12 @@
{% for cilium_loadbalancer_ip_pool in cilium_loadbalancer_ip_pools %}
---
apiVersion: "cilium.io/v2alpha1"
kind: CiliumLoadBalancerIPPool
metadata:
  name: "{{ cilium_loadbalancer_ip_pool.name }}"
spec:
  blocks:
{% for cblock in cilium_loadbalancer_ip_pool.cidrs %}
    - cidr: "{{ cblock }}"
{% endfor %}
{% endfor %}

roles/network_plugin/cilium/templates/cilium/config.yml.j2 (9 changes)

@@ -167,7 +167,14 @@ data:
wait-bpf-mount: "false"
{% endif %}
# `kube-proxy-replacement=partial|strict|disabled` is deprecated since January 2024 and unsupported in 1.16.
# Replaced by `kube-proxy-replacement=true|false`
# https://github.com/cilium/cilium/pull/31286
{% if cilium_version | regex_replace('v') is version('1.16', '<') %}
kube-proxy-replacement: "{{ cilium_kube_proxy_replacement }}"
{% else %}
kube-proxy-replacement: "{% if (cilium_kube_proxy_replacement == 'strict') or (cilium_kube_proxy_replacement | bool) or (cilium_kube_proxy_replacement | string | lower == 'true') %}true{% else %}false{% endif %}"
{% endif %}
# `native-routing-cidr` is deprecated in 1.10, removed in 1.12.
# Replaced by `ipv4-native-routing-cidr`
@@ -267,6 +274,8 @@ data:
enable-bpf-clock-probe: "{{ cilium_enable_bpf_clock_probe }}"
enable-bgp-control-plane: "{{ cilium_enable_bgp_control_plane }}"
disable-cnp-status-updates: "{{ cilium_disable_cnp_status_updates }}"
{% if cilium_ip_masq_agent_enable %}
---

roles/network_plugin/cilium/templates/cilium/cr.yml.j2 (4 changes)

@@ -124,6 +124,9 @@ rules:
- ciliumbgpnodeconfigs/status
- ciliumbgpadvertisements
- ciliumbgppeerconfigs
{% endif %}
{% if cilium_version | regex_replace('v') is version('1.16', '>=') %}
- ciliumbgpclusterconfigs
{% endif %}
verbs:
- '*'
@@ -145,6 +148,7 @@ rules:
- ciliumcidrgroups
- ciliuml2announcementpolicies
- ciliumpodippools
- ciliumloadbalancerippools
- ciliuml2announcementpolicies/status
verbs:
- list

roles/network_plugin/cilium/templates/cilium/ds.yml.j2 (4 changes)

@@ -96,7 +96,7 @@ spec:
fieldPath: metadata.namespace
- name: CILIUM_CLUSTERMESH_CONFIG
value: /var/lib/cilium/clustermesh/
-{% if cilium_kube_proxy_replacement == 'strict' %}
+{% if (cilium_kube_proxy_replacement == 'strict') or (cilium_kube_proxy_replacement | bool) or (cilium_kube_proxy_replacement | string | lower == 'true') %}
- name: KUBERNETES_SERVICE_HOST
value: "{{ kube_apiserver_global_endpoint | urlsplit('hostname') }}"
- name: KUBERNETES_SERVICE_PORT
@@ -285,7 +285,7 @@ spec:
name: cilium-config
optional: true
{% endif %}
-{% if cilium_kube_proxy_replacement == 'strict' %}
+{% if (cilium_kube_proxy_replacement == 'strict') or (cilium_kube_proxy_replacement | bool) or (cilium_kube_proxy_replacement | string | lower == 'true') %}
- name: KUBERNETES_SERVICE_HOST
value: "{{ kube_apiserver_global_endpoint | urlsplit('hostname') }}"
- name: KUBERNETES_SERVICE_PORT
