# Calico

Check if the calico-node container is running

```ShellSession
docker ps | grep calico
```

**calicoctl.sh** is a wrapper script around the `calicoctl` command with access credentials preconfigured; it lets you check the status of the network workloads.

* Check the status of Calico nodes

```ShellSession
calicoctl.sh node status
```

* Show the configured network subnet for containers

```ShellSession
calicoctl.sh get ippool -o wide
```

* Show the workloads (IP addresses of containers and their location)

```ShellSession
calicoctl.sh get workloadEndpoint -o wide
```

and

```ShellSession
calicoctl.sh get hostEndpoint -o wide
```

## Configuration

### Optional : Define datastore type

The default datastore, the Kubernetes API datastore, is recommended for on-premises deployments and supports only Kubernetes workloads; etcd is the best datastore for hybrid deployments.

Allowed values are `kdd` (default) and `etcd`.

Note: when using `kdd` with more than 50 nodes, consider enabling the `typha` daemon to provide scaling (see the sketch below).

To re-define it, edit the inventory and add a group variable `calico_datastore`:

```yml
calico_datastore: kdd
```
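
A minimal sketch of enabling Typha in group_vars (k8s_cluster/k8s-net-calico.yml); `typha_enabled` and `typha_replicas` are assumed to be the relevant Kubespray variables, and the replica count is only an illustration:

```yml
# Run Typha so calico-node instances do not all watch the API server directly
typha_enabled: true
# Illustrative replica count; size it for your cluster
typha_replicas: 3
```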

### Optional : Define network backend

In some cases you may want to define the Calico network backend. Allowed values are `bird`, `vxlan` or `none`. `bird` is the default value.

To re-define it, edit the inventory and add a group variable `calico_network_backend`:

```yml
calico_network_backend: none
```

### Optional : Define the default pool CIDRs

By default, `kube_pods_subnet` is used as the IP range CIDR for the default IP Pool, and `kube_pods_subnet_ipv6` for IPv6.

In some cases you may want to add several pools and not have them considered by Kubernetes as external (which means that they must be within or equal to the range defined in `kube_pods_subnet` and `kube_pods_subnet_ipv6`). This starts with the default IP Pools, whose IP range CIDRs can be defined in group_vars (k8s_cluster/k8s-net-calico.yml):

```yml
calico_pool_cidr: 10.233.64.0/20
calico_pool_cidr_ipv6: fd85:ee78:d8a6:8607::1:0000/112
```

### Optional : BGP Peering with border routers

In some cases you may want to route the pods subnet directly, so that NAT is not needed on the nodes.

For instance, if you have a cluster spread across different locations and you want your pods to talk to each other no matter where they are located.

The following variables need to be set:

`peer_with_router` to enable the peering with the datacenter's border router (default value: false).

You'll also need to edit the inventory and add a hostvar `local_as` per node.

```ini
node1 ansible_ssh_host=95.54.0.12 local_as=xxxxxx
```
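
For example, in group_vars (k8s_cluster/k8s-net-calico.yml):

```yml
# Enable peering with the datacenter's border router
peer_with_router: true
```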

### Optional : Defining BGP peers

Peers can be defined using the `peers` variable (see the examples in docs/calico_peer_example).

In order to define global peers, the `peers` variable can be defined in group_vars with the "scope" attribute of each global peer set to "global".

In order to define peers on a per-node basis, the `peers` variable must be defined in hostvars.

NB: Ansible's `hash_behaviour` is by default set to "replace", thus defining both global and per-node peers would end up with only per-node peers. If having both global and per-node peers defined is intended, global peers have to be defined in hostvars for each host (as well as the per-node peers).
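
A minimal sketch of a global peer definition in group_vars; the `router_id` and `as` keys follow the docs/calico_peer_example files, and the address and AS number below are placeholders:

```yml
peers:
  # Global BGP peer (placeholder address and AS number)
  - router_id: "10.99.0.34"
    as: "65xxx"
    scope: "global"
```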

Since Calico 3.4, Calico supports advertising Kubernetes service cluster IPs over BGP, just as it advertises pod IPs.

This can be enabled by setting the following variable in group_vars (k8s_cluster/k8s-net-calico.yml):

```yml
calico_advertise_cluster_ips: true
```

Since Calico 3.10, Calico supports advertising Kubernetes service ExternalIPs over BGP in addition to cluster IP advertising.

This can be enabled by setting the following variable in group_vars (k8s_cluster/k8s-net-calico.yml):

```yml
calico_advertise_service_external_ips:
- x.x.x.x/24
- y.y.y.y/32
```

### Optional : Define global AS number

Optional parameter `global_as_num` defines Calico global AS number (`/calico/bgp/v1/global/as_num` etcd key).

It defaults to "64512".
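
For example, in group_vars (k8s_cluster/k8s-net-calico.yml); the value below matches the AS number assumed in the route reflector example further down:

```yml
global_as_num: "65400"
```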

### Optional : BGP Peering with route reflectors

At large scale you may want to disable full node-to-node mesh in order to
optimize your BGP topology and improve `calico-node` containers' start times.

To do so you can deploy BGP route reflectors and peer `calico-node` with them as
recommended here:

* <https://hub.docker.com/r/calico/routereflector/>
* <https://docs.projectcalico.org/v3.1/reference/private-cloud/l3-interconnect-fabric>

You need to edit your inventory and add:

* `calico_rr` group with nodes in it. `calico_rr` can be combined with
  `kube_node` and/or `kube_control_plane`. `calico_rr` group also must be a child
  group of `k8s_cluster` group.
* `cluster_id` by route reflector node/group (see details
  [here](https://hub.docker.com/r/calico/routereflector/))

Here's an example of Kubespray inventory with standalone route reflectors:

```ini
[all]
rr0 ansible_ssh_host=10.210.1.10 ip=10.210.1.10
rr1 ansible_ssh_host=10.210.1.11 ip=10.210.1.11
node2 ansible_ssh_host=10.210.1.12 ip=10.210.1.12
node3 ansible_ssh_host=10.210.1.13 ip=10.210.1.13
node4 ansible_ssh_host=10.210.1.14 ip=10.210.1.14
node5 ansible_ssh_host=10.210.1.15 ip=10.210.1.15

[kube_control_plane]
node2
node3

[etcd]
node2
node3
node4

[kube_node]
node2
node3
node4
node5

[k8s_cluster:children]
kube_node
kube_control_plane
calico_rr

[calico_rr]
rr0
rr1

[rack0]
rr0
rr1
node2
node3
node4
node5

[rack0:vars]
cluster_id="1.0.0.1"
```

The inventory above will deploy the following topology assuming that calico's
`global_as_num` is set to `65400`:

![Image](figures/kubespray-calico-rr.png?raw=true)

### Optional : Define default endpoint to host action

By default Calico blocks traffic from endpoints to the host itself by using an iptables DROP action. When using Calico with Kubernetes, the action has to be changed to RETURN (default in Kubespray) or ACCEPT (see <https://github.com/projectcalico/felix/issues/660> and <https://github.com/projectcalico/calicoctl/issues/1389>). Otherwise all network packets from pods (with hostNetwork=False) to service endpoints (with hostNetwork=True) within the same node are dropped.

To re-define the default action, set the following variable in your inventory:

```yml
calico_endpoint_to_host_action: "ACCEPT"
```

### Optional : Define address on which Felix will respond to health requests

Since Calico 3.2.0, the health check's default behavior changed from listening on all interfaces to listening only on localhost.

To re-define the health host, set the following variable in your inventory:

```yml
calico_healthhost: "0.0.0.0"
```

### Optional : Configure Calico Node probe timeouts

Under certain conditions a deployer may need to tune the Calico liveness and readiness probes timeout settings. These can be configured like this:

```yml
calico_node_livenessprobe_timeout: 10
calico_node_readinessprobe_timeout: 10
```

## Config encapsulation for cross server traffic

Calico supports two types of encapsulation: [VXLAN and IP in IP](https://docs.projectcalico.org/v3.11/networking/vxlan-ipip). VXLAN is supported in some environments where IP in IP is not (for example, Azure).

*IP in IP* and *VXLAN* are mutually exclusive modes.

Configure IP in IP mode. Possible values are `Always`, `CrossSubnet`, `Never`.

```yml
calico_ipip_mode: 'Always'
```

Configure VXLAN mode. Possible values are `Always`, `CrossSubnet`, `Never`.

```yml
calico_vxlan_mode: 'Never'
```

If you use VXLAN mode, BGP networking is not required. You can disable BGP to reduce the moving parts in your cluster by setting `calico_network_backend: vxlan`.
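
For example, a VXLAN-only configuration in group_vars (k8s_cluster/k8s-net-calico.yml) combines the variables described above:

```yml
# Encapsulate all cross-node traffic with VXLAN and disable IP in IP
calico_ipip_mode: 'Never'
calico_vxlan_mode: 'Always'
# BGP is not required in VXLAN mode
calico_network_backend: vxlan
```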

## Configuring interface MTU

This is an advanced topic and should usually not be modified unless you know exactly what you are doing. Calico is smart enough to deal with the defaults and calculate the proper MTU. If you do need to set up a custom MTU you can change `calico_veth_mtu` as follows:

* If Wireguard is enabled, subtract 60 from your network MTU (i.e. 1500-60=1440)
* If VXLAN or BPF mode is enabled, subtract 50 from your network MTU (i.e. 1500-50=1450)
* If using IPIP, subtract 20 from your network MTU (i.e. 1500-20=1480)
* If not using any encapsulation, set to your network MTU (i.e. 1500 or 9000)

```yaml
calico_veth_mtu: 1440
```

## Cloud providers configuration

Please refer to the official documentation, for example [GCE configuration](http://docs.projectcalico.org/v1.5/getting-started/docker/installation/gce) requires a security rule for calico ip-ip tunnels. Note that calico is always configured with ``calico_ipip_mode: Always`` if a cloud provider was defined.

### Optional : Ignore kernel's RPF check setting

By default the felix agent (calico-node) will abort if the kernel RPF setting is not 'strict'. If you want Calico to ignore the kernel setting:

```yml
calico_node_ignorelooserpf: true
```

Note that in OpenStack you must allow `ipip` traffic in your security groups,
otherwise you will experience timeouts.

To do this you must add a rule which allows it, for example:

```ShellSession
neutron security-group-rule-create --protocol 4 --direction egress k8s-a0tp4t
neutron security-group-rule-create --protocol 4 --direction ingress k8s-a0tp4t
```

### Optional : Felix configuration via extraenvs of calico node

Possible environment variable parameters for [configuring Felix](https://docs.projectcalico.org/reference/felix/configuration)

```yml
calico_node_extra_envs:
  FELIX_DEVICEROUTESOURCEADDRESS: 172.17.0.1
```

### Optional : Use Calico CNI host-local IPAM plugin

Calico currently supports two types of CNI IPAM plugins, `host-local` and `calico-ipam` (default).

To allow Calico to determine the subnet to use from the Kubernetes API based on the `Node.podCIDR` field, enable the following setting.

```yml
calico_ipam_host_local: true
```

Refer to Project Calico section [Using host-local IPAM](https://docs.projectcalico.org/reference/cni-plugin/configuration#using-host-local-ipam) for further information.

## eBPF Support

Calico supports eBPF for its data plane; see [an introduction to the Calico eBPF Dataplane](https://www.projectcalico.org/introducing-the-calico-ebpf-dataplane/) for further information.

Note that it is advisable to always use the latest version of Calico when using the eBPF dataplane.
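
If you need to bump the Calico version deployed by Kubespray, it can be overridden in group_vars; this is only a sketch, assuming `calico_version` is the relevant variable in your Kubespray release, and the value below is a placeholder:

```yml
# Placeholder version; pick a release supported by your Kubespray version
calico_version: "v3.x.x"
```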

### Enabling eBPF support

To enable eBPF dataplane support, add the following to your inventory. Note that kube-proxy is incompatible with running Calico in eBPF mode and should be removed from the system.

```yaml
calico_bpf_enabled: true
kube_proxy_remove: true
```

### Cleaning up after kube-proxy

Calico node cannot clean up after kube-proxy has run in ipvs mode. If you are converting an existing cluster to eBPF you will need to ensure the `kube-proxy` DaemonSet is deleted and that ipvs rules are cleaned.

To check that kube-proxy was running in ipvs mode:

```ShellSession
# ipvsadm -l
```

To clean up any ipvs leftovers:

```ShellSession
# ipvsadm -C
```
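
To delete the `kube-proxy` DaemonSet itself, a minimal sketch (assuming it runs in the `kube-system` namespace, as in a default Kubespray deployment):

```ShellSession
kubectl -n kube-system delete daemonset kube-proxy
```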

### Calico access to the kube-api

Calico node, typha and kube-controllers need to be able to talk to the kubernetes API. Please reference the [Enabling eBPF Calico Docs](https://docs.projectcalico.org/maintenance/ebpf/enabling-bpf) for guidelines on how to do this.

Kubespray sets up the `kubernetes-services-endpoint` configmap based on the contents of the `loadbalancer_apiserver` inventory variable documented in [HA Mode](./ha-mode.md).
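
For example, when an external loadbalancer fronts the apiserver, the variable looks roughly like this (a sketch; the address and port are placeholders, see [HA Mode](./ha-mode.md) for the authoritative description):

```yaml
loadbalancer_apiserver:
  address: 1.2.3.4
  port: 6443
```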

If no external loadbalancer is used, Calico eBPF can also use the localhost loadbalancer option. In this case Calico Automatic Host Endpoints need to be enabled to allow services like `coredns` and `metrics-server` to communicate with the kubernetes host endpoint. See [this blog post](https://www.projectcalico.org/securing-kubernetes-nodes-with-calico-automatic-host-endpoints/) on enabling automatic host endpoints.

```yaml
loadbalancer_apiserver_localhost: true
use_localhost_as_kubeapi_loadbalancer: true
```

### Tunneled versus Direct Server Return

By default Calico uses the tunneled service mode, but it can use direct server return (DSR) in order to optimize the return path for a service.

To configure DSR:

```yaml
calico_bpf_service_mode: "DSR"
```

### eBPF Logging and Troubleshooting

In order to enable Calico eBPF mode logging:

```yaml
calico_bpf_log_level: "Debug"
```

To view the logs you need to use the `tc` command to read the kernel trace buffer:

```ShellSession
tc exec bpf debug
```

Please see the [Calico eBPF troubleshooting guide](https://docs.projectcalico.org/maintenance/troubleshoot/troubleshoot-ebpf#ebpf-program-debug-logs).

## Wireguard Encryption

Calico supports using Wireguard for encryption. Please see the docs on [encrypting cluster pod traffic](https://docs.projectcalico.org/security/encrypt-cluster-pod-traffic).

To enable Wireguard support:

```yaml
calico_wireguard_enabled: true
```

The following OSes will require enabling the EPEL repo in order to bring in wireguard tools:

* CentOS 7 & 8
* AlmaLinux 8
* Amazon Linux 2

```yaml
epel_enabled: true
```