# Cilium
## IP Address Management (IPAM)
IP Address Management (IPAM) is responsible for the allocation and management of IP addresses used by network endpoints (containers and others) managed by Cilium. The default mode is "Cluster Scope".
You can set the mode with `cilium_ipam_mode`, for example `cluster-pool` or `kubernetes`:
```yml
cilium_ipam_mode: cluster-pool
```
### Set the cluster Pod CIDRs
Cluster Pod CIDRs default to the `kube_pods_subnet` value if not set. If your node network is in the same range, you will lose connectivity to other nodes.
You can set them explicitly with the following parameter:
```yml
cilium_pool_cidr: 10.233.64.0/18
```
When `cilium_enable_ipv6` is used, you also need to set the IPv6 value (defaults to `kube_pods_subnet_ipv6` if not set):
```yml
cilium_pool_cidr_ipv6: fd85:ee78:d8a6:8607::1:0000/112
```
### Set the Pod CIDR size of a node
When Cilium IPAM uses the "Cluster Scope" mode, it pre-allocates a segment of IPs to each node; Pods scheduled on a node are then allocated IPs from that node's segment. `cilium_pool_mask_size` specifies the prefix size allocated from the cluster Pod CIDR to `node.ipam.podCIDRs`.
Defaults to `kube_network_node_prefix` if not set.
```yml
cilium_pool_mask_size: "24"
```
`cilium_pool_mask_size_ipv6` specifies the prefix size allocated to `node.ipam.podCIDRs` from the cluster Pod IPv6 CIDR. Defaults to `kube_network_node_prefix_ipv6` if not set.
```yml
cilium_pool_mask_size_ipv6: "120"
```
## Kube-proxy replacement with Cilium
Cilium can run without kube-proxy by setting `cilium_kube_proxy_replacement` to `strict`.
Without kube-proxy, Cilium needs to know the address of the kube-apiserver, and this must be set globally for all Cilium components (agents and operators).
The localhost apiserver loadbalancer can only be used in this mode if it uses the same port as the kube-apiserver (by default it does).
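For example:
```yml
cilium_kube_proxy_replacement: strict
```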
## Cilium Operator
Unlike some operators, Cilium Operator does not exist for installation purposes.
> The Cilium Operator is responsible for managing duties in the cluster which should logically be handled once for the entire cluster, rather than once for each node in the cluster.
### Adding custom flags to the Cilium Operator
You can set additional cilium-operator container arguments using `cilium_operator_custom_args`.
This is an advanced option, and you should only use it if you know what you are doing.
Accepts an array or a string.
```yml
cilium_operator_custom_args: ["--foo=bar", "--baz=qux"]
```
or
```yml
cilium_operator_custom_args: "--foo=bar"
```
You do not need to add a custom flag to enable debugging. Instead, feel free to use the `CILIUM_DEBUG` variable.
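A minimal sketch, assuming this setting is exposed as the lowercase `cilium_debug` variable in the Cilium role defaults:
```yml
cilium_debug: true
```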
### Adding extra volumes and mounting them
You can use `cilium_operator_extra_volumes` to add extra volumes to the Cilium Operator, and use `cilium_operator_extra_volume_mounts` to mount those volumes.
This is an advanced option, and you should only use it if you know what you are doing.
```yml
cilium_operator_extra_volumes:
  - configMap:
      name: foo
    name: foo-mount-path
cilium_operator_extra_volume_mounts:
  - mountPath: /tmp/foo/bar
    name: foo-mount-path
    readOnly: true
```
## Choose Cilium version
```yml
cilium_version: v1.12.1
```
## Add variable to config
Use the `cilium_config_extra_vars` variable to add extra options to the Cilium configuration. Example:
```yml
cilium_config_extra_vars:
  enable-endpoint-routes: true
```
## Change Identity Allocation Mode
Cilium assigns an identity to each endpoint. This identity is used to enforce basic connectivity between endpoints.
Cilium currently supports two different identity allocation modes:
- "crd" stores identities in Kubernetes as CRDs (custom resource definitions).
  - These can be queried with `kubectl get ciliumid`
- "kvstore" stores identities in an etcd kvstore.
## Enable Transparent Encryption
Cilium supports transparent encryption of Cilium-managed host traffic and traffic between Cilium-managed endpoints, using either IPsec or WireGuard.
The WireGuard option is only available in Cilium 1.10.0 and newer.
### IPsec Encryption
For further information, make sure to check the official [Cilium documentation](https://docs.cilium.io/en/stable/security/network/encryption-ipsec/).
To enable IPsec encryption, you just need to set three variables.
```yml
cilium_encryption_enabled: true
cilium_encryption_type: "ipsec"
```
The third variable is `cilium_ipsec_key`. You need to create a secret key string for this variable.
Kubespray does not automate this process.
Cilium documentation currently recommends creating a key using the following command:
```shell
echo "3 rfc4106(gcm(aes)) $(echo $(dd if=/dev/urandom count=20 bs=1 2> /dev/null | xxd -p -c 64)) 128"
```
Note that Kubespray handles the secret creation, so you only need to pass the key as the `cilium_ipsec_key` variable, base64 encoded:
```shell
echo "cilium_ipsec_key: "$(echo -n "3 rfc4106(gcm(aes)) $(echo $(dd if=/dev/urandom count=20 bs=1 2> /dev/null | xxd -p -c 64)) 128" | base64 -w0)
```
### WireGuard Encryption
For further information, make sure to check the official [Cilium documentation](https://docs.cilium.io/en/stable/security/network/encryption-wireguard/).
To enable WireGuard encryption, you just need to set two variables.
```yml
cilium_encryption_enabled: true
cilium_encryption_type: "wireguard"
```
Kubespray currently supports Linux distributions with WireGuard kernel mode on Linux 5.6 and newer.
## Bandwidth Manager
Cilium's bandwidth manager supports the `kubernetes.io/egress-bandwidth` Pod annotation.
Bandwidth enforcement currently does not work in combination with L7 Cilium Network Policies: if an L7 policy selects a Pod at egress, bandwidth enforcement is disabled for that Pod.
The Bandwidth Manager requires a v5.1.x or more recent Linux kernel.
For further information, make sure to check the official [Cilium documentation](https://docs.cilium.io/en/latest/network/kubernetes/bandwidth-manager/).
To use this function, set the following parameter:
```yml
cilium_enable_bandwidth_manager: true
```
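Once enabled, a Pod can request an egress limit via the annotation mentioned above. A minimal sketch (the Pod name, image, and limit value are illustrative):
```yml
apiVersion: v1
kind: Pod
metadata:
  name: egress-limited
  annotations:
    # limit egress traffic of this Pod to 10 Mbit/s
    kubernetes.io/egress-bandwidth: "10M"
spec:
  containers:
    - name: app
      image: nginx
```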
## Host Firewall
The Host Firewall enforces security policies for Kubernetes nodes. It is disabled by default, since it can break cluster connectivity.
```yaml
cilium_enable_host_firewall: true
```
For further information, check the [host firewall documentation](https://docs.cilium.io/en/latest/security/host-firewall/).
## Policy Audit Mode
When _Policy Audit Mode_ is enabled, no network policy is enforced. This feature helps to validate the impact of host policies before enforcing them.
```yaml
cilium_policy_audit_mode: true
```
It is disabled by default, and should not be enabled in production.
## Install Cilium Hubble
Set the following variables in `k8s-net-cilium.yml`:
```yml
cilium_enable_hubble: true ## enable Hubble support in Cilium
cilium_hubble_install: true ## install hubble-relay, hubble-ui
cilium_hubble_tls_generate: true ## install hubble-certgen and generate certificates
```
To validate that Hubble UI is properly configured, set up port forwarding for the hubble-ui service:
```shell
kubectl port-forward -n kube-system svc/hubble-ui 12000:80
```
and then open [http://localhost:12000/](http://localhost:12000/).
## Hubble metrics
```yml
cilium_enable_hubble_metrics: true
cilium_hubble_metrics:
  - dns
  - drop
  - tcp
  - flow
  - icmp
  - http
```
[More](https://docs.cilium.io/en/v1.9/operations/metrics/#hubble-exported-metrics)
## Upgrade considerations
### Rolling-restart timeouts
Cilium relies on the kernel's BPF support, which is extremely fast at runtime but incurs a compilation penalty on initialization and update.
As a result, the Cilium DaemonSet pods can take a significant time to start, which scales with the number of nodes and endpoints in your cluster.
As part of `cluster.yml`, this DaemonSet is restarted, and Kubespray's [default timeouts for this operation](../roles/network_plugin/cilium/defaults/main.yml) are not appropriate for large clusters.
This means that you will likely want to update these timeouts to a value more in line with your cluster's number of nodes and their respective CPU performance.
This is configured by the following values:
```yaml
# Configure how long to wait for the Cilium DaemonSet to be ready again
cilium_rolling_restart_wait_retries_count: 30
cilium_rolling_restart_wait_retries_delay_seconds: 10
```
The total time allowed (count * delay) should be at least `($number_of_nodes_in_cluster * $cilium_pod_start_time)` for successful rolling updates. There are no drawbacks to making it higher and giving yourself a time buffer to accommodate transient slowdowns.
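As a worked example (the node count and start time are illustrative): with 200 nodes and a Cilium pod start time of roughly 15 seconds, the total wait should be at least 200 * 15 = 3000 seconds, for instance:
```yaml
# 300 retries * 10 seconds = 3000 seconds of total wait time
cilium_rolling_restart_wait_retries_count: 300
cilium_rolling_restart_wait_retries_delay_seconds: 10
```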
Note: To find the `$cilium_pod_start_time` for your cluster, you can simply restart a Cilium pod on a node of your choice and look at how long it takes for it to become ready.
Note 2: The default CPU requests/limits for Cilium pods are set to a very conservative 100m:500m, which will likely yield very slow startup for Cilium pods. You probably want to significantly increase the CPU limit specifically, if short bursts of CPU from Cilium are acceptable to you.
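A sketch of how that could look, assuming your Kubespray version exposes `cilium_cpu_requests` and `cilium_cpu_limit` in the Cilium role defaults (the values are illustrative):
```yaml
# raise the CPU request and limit of the Cilium agent pods
cilium_cpu_requests: 300m
cilium_cpu_limit: 2000m
```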