When using resolvconf_mode host_resolvconf, there is an early DNS
config stage where the Kubernetes cluster DNS is not yet injected
into the host DNS. Later, the cluster DNS is enabled, but we do not
need to run every task from the kubernetes/preinstall role.
ensure there is a pin priority for the docker package to avoid upgrading docker to an incompatible version
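A minimal sketch of how such a pin could be expressed as an Ansible task; the `docker-ce` package name and the `docker_version` variable are assumptions, not necessarily what the role uses:

```yaml
# Hypothetical sketch: write an apt preferences file that pins the docker
# package to the validated version, so apt upgrades cannot pull in a newer,
# incompatible release. Package name and docker_version are assumptions.
- name: Pin docker package version
  copy:
    dest: /etc/apt/preferences.d/docker
    mode: "0644"
    content: |
      Package: docker-ce
      Pin: version {{ docker_version }}.*
      Pin-Priority: 1001
```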
remove empty when line
force kubeadm upgrade, since it fails without the --force flag
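For reference, a hedged sketch of forcing the upgrade from an Ansible task; the task name and the `kube_version` variable are assumptions:

```yaml
# Hypothetical sketch: run kubeadm non-interactively (-y) and pass --force
# so the upgrade proceeds even when some preflight requirements are not met.
- name: Kubeadm | Upgrade first master
  command: "kubeadm upgrade apply -y {{ kube_version }} --force"
```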
added nodeSelector for compatibility with hybrid clusters that include Windows nodes; also a fix for downloads with a missing container type
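A hedged sketch of the kind of nodeSelector that keeps Linux-only workloads off Windows nodes; the exact label key depends on the Kubernetes version (older clusters used beta.kubernetes.io/os), so treat it as an assumption:

```yaml
# Hypothetical pod spec fragment: schedule only on Linux nodes in a hybrid
# cluster that also contains Windows worker nodes.
spec:
  nodeSelector:
    kubernetes.io/os: linux
```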
fix syntax and use LF line endings in files
fix on yamllint check
some cleanup of unnecessary lines
remove conditions for nodeselector
The current way to set up the etcd cluster is messy and buggy.
- It checks whether the cluster is healthy before the cluster is even created.
- The unit files are started from handlers, not in the tasks, so you have to mess with "flush handlers" (see the sketch after this list).
- The join_member.yml file is not used.
- The etcd events cluster is not configured for kubeadm.
- remove duplicate runs between running the role on etcd nodes and k8s nodes
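A minimal sketch of starting the unit directly in a task instead of relying on handlers; the unit name and task wording are assumptions:

```yaml
# Hypothetical sketch: start/enable the etcd unit in the task flow itself,
# so the play does not depend on "flush handlers" ordering.
- name: Ensure etcd is running
  systemd:
    name: etcd
    state: started
    enabled: true
    daemon_reload: true
```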
When proxy vars are set, `uri` module tasks will attempt to route traffic through the proxy. This causes the "Wait for" tasks in the `etcd` and `kubernetes/master` roles to hang, as localhost connections struggle with a proxy.
As far as I know these roles only need local/cluster networking, so a proxy doesn't apply here anyway.
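A hedged sketch of a local health check that bypasses the proxy via `use_proxy: no` on the `uri` module; the endpoint, port, and retry values are assumptions:

```yaml
# Hypothetical "Wait for" style task: poll a localhost endpoint without
# routing the request through the configured HTTP proxy.
- name: Wait for the apiserver to come up
  uri:
    url: "https://localhost:6443/healthz"
    validate_certs: no
    use_proxy: no
  register: result
  until: result.status == 200
  retries: 20
  delay: 5
```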
* Refactor downloads to use download role directly
Also disable fact delegation so the download delegate works across OSes.
* clean up bools and ansible_os_family conditionals
This role only supports Red Hat-type distros and is neither
maintained nor used by many users. It should be removed because it
creates feature disparity between the supported OSes.
The value cannot be determined properly via local facts, so
checking the k8s API is the most reliable way to look up which
hostname is used when running with a cloudprovider.
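A minimal sketch of looking the hostname up from the API instead of from local facts; the kubectl invocation and jsonpath expression are assumptions:

```yaml
# Hypothetical sketch: ask the apiserver for the registered node names,
# which reflect the hostnames assigned by the cloudprovider.
- name: Look up node names known to the Kubernetes API
  command: kubectl get nodes -o jsonpath='{.items[*].metadata.name}'
  register: registered_node_names
  changed_when: false
```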
New files: /etc/kubernetes/admin.conf
/root/.kube/config
$GITDIR/artifacts/{kubectl,admin.conf}
Optional method to download kubectl and admin.conf if
kubeconfig_localhost is set to true (default false).
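A hedged usage sketch, assuming the variable is set in group vars (or passed with -e):

```yaml
# Hypothetical group_vars snippet: fetch admin.conf and kubectl to the
# machine running Ansible, under $GITDIR/artifacts/.
kubeconfig_localhost: true
```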
* kubeadm support
* move k8s master to a subtask
* disable k8s secrets when using kubeadm
* fix etcd cert serial var
* move simple auth users to master role
* make a kubeadm-specific env file for kubelet
* add non-ha CI job
* change ci boolean vars to json format
* fixup
* Update create-gce.yml
By default, Calico CNI does not create any network access policies
or profiles if 'policy' is enabled in the CNI config. Without any
policies/profiles, network access to/from pods is blocked.
In that case the K8s-related policies are created by
calico-policy-controller, so we need to start it as soon as
possible, before any real workloads.
This patch also fixes the kube-api port in the
calico-policy-controller yaml template.
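For context, a hedged sketch of a CNI config fragment with 'policy' enabled, as consumed by the Calico CNI plugin; the network name and kubeconfig path are assumptions:

```json
{
  "name": "calico-k8s-network",
  "type": "calico",
  "policy": {
    "type": "k8s"
  },
  "kubernetes": {
    "kubeconfig": "/etc/cni/net.d/calico-kubeconfig"
  }
}
```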
Closes #1132
* Re-enable ansible_ssh_pipelining as expected for cluster.yml
* Do not use the 'all' wildcard for hosts; limit to the k8s-cluster, etcd,
and calico-rr groups instead. Other nodes in the inventory are out of Kargo's
scope and it's up to users how to manage them.
Signed-off-by: Bogdan Dobrelya <bogdando@mail.ru>
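A minimal sketch of the intended play targeting; the group names follow the inventory groups mentioned above, the task itself is only illustrative:

```yaml
# Hypothetical play header: limit the play to the groups Kargo manages
# rather than the 'all' wildcard.
- hosts: k8s-cluster:etcd:calico-rr
  tasks:
    - name: Show which hosts are in scope
      debug:
        msg: "{{ inventory_hostname }} is managed by Kargo"
```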
Add BGP route reflector support in order to optimize the BGP
topology for deployments with the Calico network plugin.
Also bump the version of calico/ctl for some bug fixes.
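A hedged inventory sketch of where dedicated route reflector nodes would be declared; the host names are placeholders:

```ini
# Hypothetical inventory fragment: nodes in the calico-rr group act as BGP
# route reflectors, so Calico peers with them instead of a full node mesh.
[calico-rr]
rr0
rr1
```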