
Docs: Replace master with control plane (#7767)

This replaces "master" with "control plane" in the Kubespray docs
because of [1].

[1]: https://github.com/kubernetes/enhancements/blob/master/keps/sig-cluster-lifecycle/kubeadm/2067-rename-master-label-taint/README.md#motivation
Kenichi Omichi authored 3 years ago, committed by GitHub
commit b77f207512
10 changed files with 24 additions and 24 deletions
1. docs/ansible.md (4 changed lines)
2. docs/getting-started.md (12 changed lines)
3. docs/kubernetes-reliability.md (2 changed lines)
4. docs/large-deployments.md (2 changed lines)
5. docs/nodes.md (16 changed lines)
6. docs/proxy.md (2 changed lines)
7. docs/recover-control-plane.md (2 changed lines)
8. docs/roadmap.md (2 changed lines)
9. docs/test_cases.md (2 changed lines)
10. docs/vars.md (4 changed lines)

docs/ansible.md (4 changed lines)

@@ -20,7 +20,7 @@ When _kube_node_ contains _etcd_, you define your etcd cluster to be as well sch
 If you want it a standalone, make sure those groups do not intersect.
 If you want the server to act both as control-plane and node, the server must be defined
 on both groups _kube_control_plane_ and _kube_node_. If you want a standalone and
-unschedulable master, the server must be defined only in the _kube_control_plane_ and
+unschedulable control plane, the server must be defined only in the _kube_control_plane_ and
 not _kube_node_.
 There are also two special groups:
@@ -67,7 +67,7 @@ The group variables to control main deployment options are located in the direct
 Optional variables are located in the `inventory/sample/group_vars/all.yml`.
 Mandatory variables that are common for at least one role (or a node group) can be found in the
 `inventory/sample/group_vars/k8s_cluster.yml`.
-There are also role vars for docker, kubernetes preinstall and master roles.
+There are also role vars for docker, kubernetes preinstall and control plane roles.
 According to the [ansible docs](https://docs.ansible.com/ansible/latest/playbooks_variables.html#variable-precedence-where-should-i-put-a-variable),
 those cannot be overridden from the group vars. In order to override, one should use
 the `-e` runtime flags (most simple way) or other layers described in the docs.
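As an illustration of the override behaviour described in this hunk, a minimal sketch, not part of the commit; the variable name `kube_version` and the inventory path are assumptions:

```bash
# Illustrative only: role defaults (here kube_version, assumed) cannot be
# overridden from group_vars, but -e extra-vars take the highest precedence.
ansible-playbook -i inventory/mycluster/hosts.yml cluster.yml -b \
  -e kube_version=v1.21.1
```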

docs/getting-started.md (12 changed lines)

@@ -11,7 +11,7 @@ You can use an
 to create or modify an Ansible inventory. Currently, it is limited in
 functionality and is only used for configuring a basic Kubespray cluster inventory, but it does
 support creating inventory file for large clusters as well. It now supports
-separated ETCD and Kubernetes master roles from node role if the size exceeds a
+separated ETCD and Kubernetes control plane roles from node role if the size exceeds a
 certain threshold. Run `python3 contrib/inventory_builder/inventory.py help` for more information.
 Example inventory generator usage:
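For context, a sketch of the inventory builder invocation the surrounding text refers to; the node IPs and the `CONFIG_FILE` path are illustrative assumptions, not taken from this commit:

```bash
# Hypothetical node IPs; CONFIG_FILE tells the builder where to write the inventory.
declare -a IPS=(10.10.1.3 10.10.1.4 10.10.1.5)
CONFIG_FILE=inventory/mycluster/hosts.yml python3 contrib/inventory_builder/inventory.py "${IPS[@]}"
```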
@@ -40,7 +40,7 @@ See more details in the [ansible guide](/docs/ansible.md).
 ### Adding nodes
-You may want to add worker, master or etcd nodes to your existing cluster. This can be done by re-running the `cluster.yml` playbook, or you can target the bare minimum needed to get kubelet installed on the worker and talking to your masters. This is especially helpful when doing something like autoscaling your clusters.
+You may want to add worker, control plane or etcd nodes to your existing cluster. This can be done by re-running the `cluster.yml` playbook, or you can target the bare minimum needed to get kubelet installed on the worker and talking to your control planes. This is especially helpful when doing something like autoscaling your clusters.
 - Add the new worker node to your inventory in the appropriate group (or utilize a [dynamic inventory](https://docs.ansible.com/ansible/latest/user_guide/intro_inventory.html)).
 - Run the ansible-playbook command, substituting `cluster.yml` for `scale.yml`:
@@ -52,7 +52,7 @@ ansible-playbook -i inventory/mycluster/hosts.yml scale.yml -b -v \
 ### Remove nodes
-You may want to remove **master**, **worker**, or **etcd** nodes from your
+You may want to remove **control plane**, **worker**, or **etcd** nodes from your
 existing cluster. This can be done by re-running the `remove-node.yml`
 playbook. First, all specified nodes will be drained, then stop some
 kubernetes services and delete some certificates,
@@ -108,11 +108,11 @@ Accessing through Ingress is highly recommended. For proxy access, please note t
 For token authentication, guide to create Service Account is provided in [dashboard sample user](https://github.com/kubernetes/dashboard/blob/master/docs/user/access-control/creating-sample-user.md) doc. Still take care of default namespace.
-Access can also by achieved via ssh tunnel on a master :
+Access can also by achieved via ssh tunnel on a control plane :
 ```bash
-# localhost:8081 will be sent to master-1's own localhost:8081
-ssh -L8001:localhost:8001 user@master-1
+# localhost:8081 will be sent to control-plane-1's own localhost:8081
+ssh -L8001:localhost:8001 user@control-plane-1
 sudo -i
 kubectl proxy
 ```

docs/kubernetes-reliability.md (2 changed lines)

@@ -21,7 +21,7 @@ By default the normal behavior looks like:
 > Kubernetes controller manager and Kubelet work asynchronously. It means that
 > the delay may include any network latency, API Server latency, etcd latency,
-> latency caused by load on one's master nodes and so on. So if
+> latency caused by load on one's control plane nodes and so on. So if
 > `--node-status-update-frequency` is set to 5s in reality it may appear in
 > etcd in 6-7 seconds or even longer when etcd cannot commit data to quorum
 > nodes.
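To make the heartbeat delay this quote describes observable, a hedged kubectl sketch; `NODE_NAME` is a placeholder and nothing here is part of the commit:

```bash
# Print when the control plane last received a Ready heartbeat from a node.
kubectl get node NODE_NAME \
  -o jsonpath='{.status.conditions[?(@.type=="Ready")].lastHeartbeatTime}{"\n"}'
```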

docs/large-deployments.md (2 changed lines)

@@ -12,7 +12,7 @@ For a large scaled deployments, consider the following configuration changes:
 See download modes for details.
 * Adjust the `retry_stagger` global var as appropriate. It should provide sane
-load on a delegate (the first K8s master node) then retrying failed
+load on a delegate (the first K8s control plane node) then retrying failed
 push or download operations.
 * Tune parameters for DNS related applications
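A sketch of adjusting the `retry_stagger` variable mentioned in the list above, assuming it is passed as an extra-var in seconds; the value and inventory path are illustrative:

```bash
# Illustrative: spread retries of push/download tasks to reduce load on the
# delegate (the first control plane node).
ansible-playbook -i inventory/mycluster/hosts.yml cluster.yml -b \
  -e retry_stagger=10
```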

docs/nodes.md (16 changed lines)

@@ -6,9 +6,9 @@ Modified from [comments in #3471](https://github.com/kubernetes-sigs/kubespray/i
 Currently you can't remove the first node in your kube_control_plane and etcd-master list. If you still want to remove this node you have to:
-### 1) Change order of current masters
+### 1) Change order of current control planes
-Modify the order of your master list by pushing your first entry to any other position. E.g. if you want to remove `node-1` of the following example:
+Modify the order of your control plane list by pushing your first entry to any other position. E.g. if you want to remove `node-1` of the following example:
 ```yaml
 children:
@@ -71,13 +71,13 @@ Before using `--limit` run playbook `facts.yml` without the limit to refresh fac
 With the old node still in the inventory, run `remove-node.yml`. You need to pass `-e node=NODE_NAME` to the playbook to limit the execution to the node being removed.
 If the node you want to remove is not online, you should add `reset_nodes=false` to your extra-vars: `-e node=NODE_NAME -e reset_nodes=false`.
-Use this flag even when you remove other types of nodes like a master or etcd nodes.
+Use this flag even when you remove other types of nodes like a control plane or etcd nodes.
 ### 4) Remove the node from the inventory
 That's it.
-## Adding/replacing a master node
+## Adding/replacing a control plane node
 ### 1) Run `cluster.yml`
@@ -92,7 +92,7 @@ In all hosts, restart nginx-proxy pod. This pod is a local proxy for the apiserv
 docker ps | grep k8s_nginx-proxy_nginx-proxy | awk '{print $1}' | xargs docker restart
 ```
-### 3) Remove old master nodes
+### 3) Remove old control plane nodes
 With the old node still in the inventory, run `remove-node.yml`. You need to pass `-e node=NODE_NAME` to the playbook to limit the execution to the node being removed.
 If the node you want to remove is not online, you should add `reset_nodes=false` to your extra-vars.
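A minimal sketch of the `remove-node.yml` invocation described in this hunk; the inventory path and `NODE_NAME` are placeholders, not part of the commit:

```bash
# Drain and remove one node; NODE_NAME is a placeholder inventory hostname.
# Add -e reset_nodes=false if the node is already offline.
ansible-playbook -i inventory/mycluster/hosts.yml remove-node.yml -b -v \
  -e node=NODE_NAME
```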
@@ -104,7 +104,7 @@ You need to make sure there are always an odd number of etcd nodes in the cluste
 ### 1) Add the new node running cluster.yml
 Update the inventory and run `cluster.yml` passing `--limit=etcd,kube_control_plane -e ignore_assert_errors=yes`.
-If the node you want to add as an etcd node is already a worker or master node in your cluster, you have to remove him first using `remove-node.yml`.
+If the node you want to add as an etcd node is already a worker or control plane node in your cluster, you have to remove him first using `remove-node.yml`.
 Run `upgrade-cluster.yml` also passing `--limit=etcd,kube_control_plane -e ignore_assert_errors=yes`. This is necessary to update all etcd configuration in the cluster.
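The two invocations this hunk describes, written out as a hedged sketch; only the inventory path is an assumption:

```bash
# Add the new etcd member first...
ansible-playbook -i inventory/mycluster/hosts.yml cluster.yml -b \
  --limit=etcd,kube_control_plane -e ignore_assert_errors=yes
# ...then refresh etcd configuration across the cluster.
ansible-playbook -i inventory/mycluster/hosts.yml upgrade-cluster.yml -b \
  --limit=etcd,kube_control_plane -e ignore_assert_errors=yes
```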
@@ -117,7 +117,7 @@ Otherwise the etcd cluster might still be processing the first join and fail on
 ### 2) Add the new node to apiserver config
-In every master node, edit `/etc/kubernetes/manifests/kube-apiserver.yaml`. Make sure the new etcd nodes are present in the apiserver command line parameter `--etcd-servers=...`.
+In every control plane node, edit `/etc/kubernetes/manifests/kube-apiserver.yaml`. Make sure the new etcd nodes are present in the apiserver command line parameter `--etcd-servers=...`.
 ## Removing an etcd node
@@ -136,7 +136,7 @@ Run `cluster.yml` to regenerate the configuration files on all remaining nodes.
 ### 4) Remove the old etcd node from apiserver config
-In every master node, edit `/etc/kubernetes/manifests/kube-apiserver.yaml`. Make sure only active etcd nodes are still present in the apiserver command line parameter `--etcd-servers=...`.
+In every control plane node, edit `/etc/kubernetes/manifests/kube-apiserver.yaml`. Make sure only active etcd nodes are still present in the apiserver command line parameter `--etcd-servers=...`.
 ### 5) Shutdown the old instance
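To double-check the `--etcd-servers` edits from steps 2) and 4) above, a hedged sketch to run on each control plane node; the manifest path comes from the hunks themselves:

```bash
# Print the etcd endpoints currently configured for the apiserver static pod.
grep -- '--etcd-servers' /etc/kubernetes/manifests/kube-apiserver.yaml
```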

docs/proxy.md (2 changed lines)

@@ -18,6 +18,6 @@ If you set http and https proxy, all nodes and loadbalancer will be excluded fro
 ## Exclude workers from no_proxy
 Since workers are included in the no_proxy variable, by default, docker engine will be restarted on all nodes (all
-pods will restart) when adding or removing workers. To override this behaviour by only including master nodes in the
+pods will restart) when adding or removing workers. To override this behaviour by only including control plane nodes in the
 no_proxy variable, set:
 `no_proxy_exclude_workers: true`
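A hedged sketch of applying the override from this hunk at runtime; the variable name comes from the hunk, the inventory path is a placeholder:

```bash
# Keep workers out of no_proxy so scaling workers does not restart docker cluster-wide.
ansible-playbook -i inventory/mycluster/hosts.yml cluster.yml -b \
  -e no_proxy_exclude_workers=true
```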

docs/recover-control-plane.md (2 changed lines)

@@ -20,7 +20,7 @@ __Note that you need at least one functional node to be able to recover using th
 ## Runbook
 * Move any broken etcd nodes into the "broken\_etcd" group, make sure the "etcd\_member\_name" variable is set.
-* Move any broken master nodes into the "broken\_kube\_control\_plane" group.
+* Move any broken control plane nodes into the "broken\_kube\_control\_plane" group.
 Then run the playbook with ```--limit etcd,kube_control_plane``` and increase the number of ETCD retries by setting ```-e etcd_retries=10``` or something even larger. The amount of retries required is difficult to predict.
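A sketch of the recovery invocation this runbook describes, assuming the playbook file is `recover-control-plane.yml` and using a placeholder inventory path:

```bash
# Recover using the surviving nodes; raise etcd_retries if member joins keep timing out.
ansible-playbook -i inventory/mycluster/hosts.yml recover-control-plane.yml -b \
  --limit etcd,kube_control_plane -e etcd_retries=10
```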

docs/roadmap.md (2 changed lines)

@@ -28,7 +28,7 @@
 - [x] Run kubernetes e2e tests
 - [ ] Test idempotency on single OS but for all network plugins/container engines
 - [ ] single test on AWS per day
-- [ ] test scale up cluster: +1 etcd, +1 master, +1 node
+- [ ] test scale up cluster: +1 etcd, +1 control plane, +1 node
 - [x] Reorganize CI test vars into group var files
 ## Lifecycle

docs/test_cases.md (2 changed lines)

@@ -8,7 +8,7 @@ and the `etcd` group merged with the `kube_control_plane`.
 `separate` layout is when there is only node of each type, which includes
 a kube_control_plane, kube_node, and etcd cluster member.
-`ha` layout consists of two etcd nodes, two masters and a single worker node,
+`ha` layout consists of two etcd nodes, two control planes and a single worker node,
 with role intersection.
 `scale` layout can be combined with above layouts (`ha-scale`, `separate-scale`). It includes 200 fake hosts

docs/vars.md (4 changed lines)

@@ -180,7 +180,7 @@ node_taints:
 For all kube components, custom flags can be passed in. This allows for edge cases where users need changes to the default deployment that may not be applicable to all deployments.
 Extra flags for the kubelet can be specified using these variables,
-in the form of dicts of key-value pairs of configuration parameters that will be inserted into the kubelet YAML config file. The `kubelet_node_config_extra_args` apply kubelet settings only to nodes and not masters. Example:
+in the form of dicts of key-value pairs of configuration parameters that will be inserted into the kubelet YAML config file. The `kubelet_node_config_extra_args` apply kubelet settings only to nodes and not control planes. Example:
 ```yml
 kubelet_config_extra_args:
@@ -202,7 +202,7 @@ Previously, the same parameters could be passed as flags to kubelet binary with
 * *kubelet_custom_flags*
 * *kubelet_node_custom_flags*
-The `kubelet_node_custom_flags` apply kubelet settings only to nodes and not masters. Example:
+The `kubelet_node_custom_flags` apply kubelet settings only to nodes and not control planes. Example:
 ```yml
 kubelet_custom_flags:
