
modify doc structure and update existing doc-links as preparation for new doc generation script

Payback159, 6 months ago
Branch: pull/11128/head
Commit: 4dbfd42f1d
82 changed files with 70 additions and 70 deletions
  1. README.md (102)
  2. contrib/azurerm/README.md (2)
  3. contrib/terraform/equinix/README.md (2)
  4. contrib/terraform/openstack/README.md (2)
  5. docs/CNI/calico.md (2)
  6. docs/CNI/cilium.md (2)
  7. docs/CNI/cni.md (0)
  8. docs/CNI/flannel.md (0)
  9. docs/CNI/kube-ovn.md (0)
  10. docs/CNI/kube-router.md (0)
  11. docs/CNI/macvlan.md (0)
  12. docs/CNI/multus.md (0)
  13. docs/CNI/weave.md (0)
  14. docs/CRI/containerd.md (0)
  15. docs/CRI/cri-o.md (0)
  16. docs/CRI/docker.md (0)
  17. docs/CRI/gvisor.md (0)
  18. docs/CRI/kata-containers.md (0)
  19. docs/CSI/aws-ebs-csi.md (0)
  20. docs/CSI/azure-csi.md (0)
  21. docs/CSI/cinder-csi.md (0)
  22. docs/CSI/gcp-pd-csi.md (0)
  23. docs/CSI/vsphere-csi.md (0)
  24. docs/advanced/arch.md (0)
  25. docs/advanced/cert_manager.md (0)
  26. docs/advanced/dns-stack.md (0)
  27. docs/advanced/downloads.md (0)
  28. docs/advanced/gcp-lb.md (0)
  29. docs/advanced/kubernetes-reliability.md (0)
  30. docs/advanced/mitogen.md (0)
  31. docs/advanced/netcheck.md (0)
  32. docs/advanced/ntp.md (0)
  33. docs/advanced/proxy.md (0)
  34. docs/advanced/registry.md (0)
  35. docs/ansible/ansible.md (4)
  36. docs/ansible/ansible_collection.md (0)
  37. docs/ansible/vars.md (4)
  38. docs/cloud_providers/aws.md (0)
  39. docs/cloud_providers/azure.md (0)
  40. docs/cloud_providers/cloud.md (0)
  41. docs/cloud_providers/equinix-metal.md (2)
  42. docs/cloud_providers/openstack.md (0)
  43. docs/cloud_providers/vsphere.md (2)
  44. docs/developers/ci-setup.md (0)
  45. docs/developers/ci.md (0)
  46. docs/developers/test_cases.md (2)
  47. docs/developers/vagrant.md (2)
  48. docs/external_storage_provisioners/cephfs_provisioner.md (0)
  49. docs/external_storage_provisioners/local_volume_provisioner.md (0)
  50. docs/external_storage_provisioners/rbd_provisioner.md (0)
  51. docs/external_storage_provisioners/scheduler_plugins.md (0)
  52. docs/getting_started/comparisons.md (0)
  53. docs/getting_started/getting-started.md (6)
  54. docs/getting_started/setting-up-your-first-cluster.md (0)
  55. docs/ingress/alb_ingress_controller.md (0)
  56. docs/ingress/ingress_nginx.md (0)
  57. docs/ingress/kube-vip.md (0)
  58. docs/ingress/metallb.md (0)
  59. docs/operating_systems/amazonlinux.md (0)
  60. docs/operating_systems/bootstrap-os.md (0)
  61. docs/operating_systems/centos.md (0)
  62. docs/operating_systems/fcos.md (0)
  63. docs/operating_systems/flatcar.md (0)
  64. docs/operating_systems/kylinlinux.md (0)
  65. docs/operating_systems/openeuler.md (0)
  66. docs/operating_systems/opensuse.md (0)
  67. docs/operating_systems/rhel.md (0)
  68. docs/operating_systems/uoslinux.md (0)
  69. docs/operations/cgroups.md (0)
  70. docs/operations/encrypting-secret-data-at-rest.md (0)
  71. docs/operations/etcd.md (0)
  72. docs/operations/ha-mode.md (0)
  73. docs/operations/hardening.md (0)
  74. docs/operations/integration.md (0)
  75. docs/operations/large-deployments.md (6)
  76. docs/operations/mirror.md (0)
  77. docs/operations/nodes.md (0)
  78. docs/operations/offline-environment.md (0)
  79. docs/operations/port-requirements.md (0)
  80. docs/operations/recover-control-plane.md (0)
  81. docs/operations/upgrades.md (0)
  82. docs/roadmap/roadmap.md (0)
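The moves above pair each file relocation with a rewrite of every link that pointed at the old path. As a rough illustration only (this is not the script from this PR, and the commit message says a dedicated doc generation script is still to come), a batch of such moves can be driven by a small shell loop; the `repo/` sandbox and the flannel example below are hypothetical stand-ins:

```shell
# Illustrative sketch, not the actual tooling from this PR: move a doc into
# its new category directory and rewrite references to the old path, as this
# commit does by hand for 82 files.
set -eu
mkdir -p repo/docs/CNI
printf 'See [flannel](docs/flannel.md) for details.\n' > repo/README.md
printf '# flannel\n' > repo/docs/flannel.md

old='docs/flannel.md'
new='docs/CNI/flannel.md'
mv "repo/$old" "repo/$new"
# Rewrite every markdown file that still references the old location.
grep -rl --include='*.md' "$old" repo | while read -r f; do
  sed -i "s#${old}#${new}#g" "$f"
done
grep flannel repo/README.md
```

In a real repository `git mv` would be preferable to plain `mv` so the rename is tracked in history.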

README.md (102)

@@ -5,7 +5,7 @@
 If you have questions, check the documentation at [kubespray.io](https://kubespray.io) and join us on the [kubernetes slack](https://kubernetes.slack.com), channel **\#kubespray**.
 You can get your invite [here](http://slack.k8s.io/)
-- Can be deployed on **[AWS](docs/aws.md), GCE, [Azure](docs/azure.md), [OpenStack](docs/openstack.md), [vSphere](docs/vsphere.md), [Equinix Metal](docs/equinix-metal.md) (bare metal), Oracle Cloud Infrastructure (Experimental), or Baremetal**
+- Can be deployed on **[AWS](docs/cloud_providers/aws.md), GCE, [Azure](docs/cloud_providers/azure.md), [OpenStack](docs/cloud_providers/openstack.md), [vSphere](docs/cloud_providers/vsphere.md), [Equinix Metal](docs/cloud_providers/equinix-metal.md) (bare metal), Oracle Cloud Infrastructure (Experimental), or Baremetal**
 - **Highly available** cluster
 - **Composable** (Choice of the network plugin for instance)
 - Supports most popular **Linux distributions**
@@ -19,7 +19,7 @@ Below are several ways to use Kubespray to deploy a Kubernetes cluster.
 #### Usage
-Install Ansible according to [Ansible installation guide](/docs/ansible.md#installing-ansible)
+Install Ansible according to [Ansible installation guide](/docs/ansible/ansible.md#installing-ansible)
 then run the following steps:
 ```ShellSession
@@ -86,7 +86,7 @@ ansible-playbook -i /inventory/inventory.ini --private-key /root/.ssh/id_rsa clu
 #### Collection
-See [here](docs/ansible_collection.md) if you wish to use this repository as an Ansible collection
+See [here](docs/ansible/ansible_collection.md) if you wish to use this repository as an Ansible collection
 ### Vagrant
@@ -99,7 +99,7 @@ python -V && pip -V
 If this returns the version of the software, you're good to go. If not, download and install Python from here <https://www.python.org/downloads/source/>
-Install Ansible according to [Ansible installation guide](/docs/ansible.md#installing-ansible)
+Install Ansible according to [Ansible installation guide](/docs/ansible/ansible.md#installing-ansible)
 then run the following step:
 ```ShellSession
@@ -109,51 +109,51 @@ vagrant up
 ## Documents
 - [Requirements](#requirements)
-- [Kubespray vs ...](docs/comparisons.md)
-- [Getting started](docs/getting-started.md)
-- [Setting up your first cluster](docs/setting-up-your-first-cluster.md)
-- [Ansible inventory and tags](docs/ansible.md)
-- [Integration with existing ansible repo](docs/integration.md)
-- [Deployment data variables](docs/vars.md)
-- [DNS stack](docs/dns-stack.md)
-- [HA mode](docs/ha-mode.md)
+- [Kubespray vs ...](docs/getting_started/comparisons.md)
+- [Getting started](docs/getting_started/getting-started.md)
+- [Setting up your first cluster](docs/getting_started/setting-up-your-first-cluster.md)
+- [Ansible inventory and tags](docs/ansible/ansible.md)
+- [Integration with existing ansible repo](docs/operations/integration.md)
+- [Deployment data variables](docs/ansible/vars.md)
+- [DNS stack](docs/advanced/dns-stack.md)
+- [HA mode](docs/operations/ha-mode.md)
 - [Network plugins](#network-plugins)
-- [Vagrant install](docs/vagrant.md)
-- [Flatcar Container Linux bootstrap](docs/flatcar.md)
-- [Fedora CoreOS bootstrap](docs/fcos.md)
-- [openSUSE setup](docs/opensuse.md)
-- [Downloaded artifacts](docs/downloads.md)
-- [Cloud providers](docs/cloud.md)
-- [OpenStack](docs/openstack.md)
-- [AWS](docs/aws.md)
-- [Azure](docs/azure.md)
-- [vSphere](docs/vsphere.md)
-- [Equinix Metal](docs/equinix-metal.md)
-- [Large deployments](docs/large-deployments.md)
-- [Adding/replacing a node](docs/nodes.md)
-- [Upgrades basics](docs/upgrades.md)
-- [Air-Gap installation](docs/offline-environment.md)
-- [NTP](docs/ntp.md)
-- [Hardening](docs/hardening.md)
-- [Mirror](docs/mirror.md)
-- [Roadmap](docs/roadmap.md)
+- [Vagrant install](docs/developers/vagrant.md)
+- [Flatcar Container Linux bootstrap](docs/operating_systems/flatcar.md)
+- [Fedora CoreOS bootstrap](docs/operating_systems/fcos.md)
+- [openSUSE setup](docs/operating_systems/opensuse.md)
+- [Downloaded artifacts](docs/advanced/downloads.md)
+- [Cloud providers](docs/cloud_providers/cloud.md)
+- [OpenStack](docs/cloud_providers/openstack.md)
+- [AWS](docs/cloud_providers/aws.md)
+- [Azure](docs/cloud_providers/azure.md)
+- [vSphere](docs/cloud_providers/vsphere.md)
+- [Equinix Metal](docs/cloud_providers/equinix-metal.md)
+- [Large deployments](docs/operations/large-deployments.md)
+- [Adding/replacing a node](docs/operations/nodes.md)
+- [Upgrades basics](docs/operations/upgrades.md)
+- [Air-Gap installation](docs/operations/offline-environment.md)
+- [NTP](docs/advanced/ntp.md)
+- [Hardening](docs/operations/hardening.md)
+- [Mirror](docs/operations/mirror.md)
+- [Roadmap](docs/roadmap/roadmap.md)
 ## Supported Linux Distributions
 - **Flatcar Container Linux by Kinvolk**
 - **Debian** Bookworm, Bullseye, Buster
 - **Ubuntu** 20.04, 22.04
-- **CentOS/RHEL** 7, [8, 9](docs/centos.md#centos-8)
+- **CentOS/RHEL** 7, [8, 9](docs/operating_systems/centos.md#centos-8)
 - **Fedora** 37, 38
-- **Fedora CoreOS** (see [fcos Note](docs/fcos.md))
+- **Fedora CoreOS** (see [fcos Note](docs/operating_systems/fcos.md))
 - **openSUSE** Leap 15.x/Tumbleweed
-- **Oracle Linux** 7, [8, 9](docs/centos.md#centos-8)
-- **Alma Linux** [8, 9](docs/centos.md#centos-8)
-- **Rocky Linux** [8, 9](docs/centos.md#centos-8)
-- **Kylin Linux Advanced Server V10** (experimental: see [kylin linux notes](docs/kylinlinux.md))
-- **Amazon Linux 2** (experimental: see [amazon linux notes](docs/amazonlinux.md))
-- **UOS Linux** (experimental: see [uos linux notes](docs/uoslinux.md))
-- **openEuler** (experimental: see [openEuler notes](docs/openeuler.md))
+- **Oracle Linux** 7, [8, 9](docs/operating_systems/centos.md#centos-8)
+- **Alma Linux** [8, 9](docs/operating_systems/centos.md#centos-8)
+- **Rocky Linux** [8, 9](docs/operating_systems/centos.md#centos-8)
+- **Kylin Linux Advanced Server V10** (experimental: see [kylin linux notes](docs/operating_systems/kylinlinux.md))
+- **Amazon Linux 2** (experimental: see [amazon linux notes](docs/operating_systems/amazonlinux.md))
+- **UOS Linux** (experimental: see [uos linux notes](docs/operating_systems/uoslinux.md))
+- **openEuler** (experimental: see [openEuler notes](docs/operating_systems/openeuler.md))
 Note: Upstart/SysV init based OS types are not supported.
@@ -164,7 +164,7 @@ Note: Upstart/SysV init based OS types are not supported.
 - [etcd](https://github.com/etcd-io/etcd) v3.5.12
 - [docker](https://www.docker.com/) v24.0 (see [Note](#container-runtime-notes))
 - [containerd](https://containerd.io/) v1.7.16
-- [cri-o](http://cri-o.io/) v1.29.1 (experimental: see [CRI-O Note](docs/cri-o.md). Only on fedora, ubuntu and centos based OS)
+- [cri-o](http://cri-o.io/) v1.29.1 (experimental: see [CRI-O Note](docs/CRI/cri-o.md). Only on fedora, ubuntu and centos based OS)
 - Network Plugin
 - [cni-plugins](https://github.com/containernetworking/plugins) v1.2.0
 - [calico](https://github.com/projectcalico/calico) v3.27.3
@@ -204,7 +204,7 @@ Note: Upstart/SysV init based OS types are not supported.
 - **Minimum required version of Kubernetes is v1.27**
 - **Ansible v2.14+, Jinja 2.11+ and python-netaddr is installed on the machine that will run Ansible commands**
-- The target servers must have **access to the Internet** in order to pull docker images. Otherwise, additional configuration is required (See [Offline Environment](docs/offline-environment.md))
+- The target servers must have **access to the Internet** in order to pull docker images. Otherwise, additional configuration is required (See [Offline Environment](docs/operations/offline-environment.md))
 - The target servers are configured to allow **IPv4 forwarding**.
 - If using IPv6 for pods and services, the target servers are configured to allow **IPv6 forwarding**.
 - The **firewalls are not managed**, you'll need to implement your own rules the way you used to.
@@ -225,7 +225,7 @@ These limits are safeguarded by Kubespray. Actual requirements for your workload
 You can choose among ten network plugins. (default: `calico`, except Vagrant uses `flannel`)
-- [flannel](docs/flannel.md): gre/vxlan (layer 2) networking.
+- [flannel](docs/CNI/flannel.md): gre/vxlan (layer 2) networking.
 - [Calico](https://docs.tigera.io/calico/latest/about/) is a networking and network policy provider. Calico supports a flexible set of networking options
 designed to give you the most efficient networking across a range of situations, including non-overlay
@@ -234,32 +234,32 @@ You can choose among ten network plugins. (default: `calico`, except Vagrant use
 - [cilium](http://docs.cilium.io/en/latest/): layer 3/4 networking (as well as layer 7 to protect and secure application protocols), supports dynamic insertion of BPF bytecode into the Linux kernel to implement security services, networking and visibility logic.
-- [weave](docs/weave.md): Weave is a lightweight container overlay network that doesn't require an external K/V database cluster.
+- [weave](docs/CNI/weave.md): Weave is a lightweight container overlay network that doesn't require an external K/V database cluster.
 (Please refer to `weave` [troubleshooting documentation](https://www.weave.works/docs/net/latest/troubleshooting/)).
-- [kube-ovn](docs/kube-ovn.md): Kube-OVN integrates the OVN-based Network Virtualization with Kubernetes. It offers an advanced Container Network Fabric for Enterprises.
+- [kube-ovn](docs/CNI/kube-ovn.md): Kube-OVN integrates the OVN-based Network Virtualization with Kubernetes. It offers an advanced Container Network Fabric for Enterprises.
-- [kube-router](docs/kube-router.md): Kube-router is a L3 CNI for Kubernetes networking aiming to provide operational
+- [kube-router](docs/CNI/kube-router.md): Kube-router is a L3 CNI for Kubernetes networking aiming to provide operational
 simplicity and high performance: it uses IPVS to provide Kube Services Proxy (if setup to replace kube-proxy),
 iptables for network policies, and BGP for pods L3 networking (with optionally BGP peering with out-of-cluster BGP peers).
 It can also optionally advertise routes to Kubernetes cluster Pods CIDRs, ClusterIPs, ExternalIPs and LoadBalancerIPs.
-- [macvlan](docs/macvlan.md): Macvlan is a Linux network driver. Pods have their own unique Mac and Ip address, connected directly the physical (layer 2) network.
+- [macvlan](docs/CNI/macvlan.md): Macvlan is a Linux network driver. Pods have their own unique Mac and Ip address, connected directly the physical (layer 2) network.
-- [multus](docs/multus.md): Multus is a meta CNI plugin that provides multiple network interface support to pods. For each interface Multus delegates CNI calls to secondary CNI plugins such as Calico, macvlan, etc.
+- [multus](docs/CNI/multus.md): Multus is a meta CNI plugin that provides multiple network interface support to pods. For each interface Multus delegates CNI calls to secondary CNI plugins such as Calico, macvlan, etc.
 - [custom_cni](roles/network-plugin/custom_cni/) : You can specify some manifests that will be applied to the clusters to bring you own CNI and use non-supported ones by Kubespray.
 See `tests/files/custom_cni/README.md` and `tests/files/custom_cni/values.yaml` for an example with a CNI provided by a Helm Chart.
 The network plugin to use is defined by the variable `kube_network_plugin`. There is also an
 option to leverage built-in cloud provider networking instead.
-See also [Network checker](docs/netcheck.md).
+See also [Network checker](docs/advanced/netcheck.md).
 ## Ingress Plugins
 - [nginx](https://kubernetes.github.io/ingress-nginx): the NGINX Ingress Controller.
-- [metallb](docs/metallb.md): the MetalLB bare-metal service LoadBalancer provider.
+- [metallb](docs/ingress/metallb.md): the MetalLB bare-metal service LoadBalancer provider.
@@ -280,4 +280,4 @@ See also [Network checker](docs/netcheck.md).
 CI/end-to-end tests sponsored by: [CNCF](https://cncf.io), [Equinix Metal](https://metal.equinix.com/), [OVHcloud](https://www.ovhcloud.com/), [ELASTX](https://elastx.se/).
-See the [test matrix](docs/test_cases.md) for details.
+See the [test matrix](docs/developers/test_cases.md) for details.

contrib/azurerm/README.md (2)

@@ -49,7 +49,7 @@ If you need to delete all resources from a resource group, simply call:
 ## Installing Ansible and the dependencies
-Install Ansible according to [Ansible installation guide](/docs/ansible.md#installing-ansible)
+Install Ansible according to [Ansible installation guide](/docs/ansible/ansible.md#installing-ansible)
 ## Generating an inventory for kubespray

contrib/terraform/equinix/README.md (2)

@@ -35,7 +35,7 @@ now six total etcd replicas.
 ## Requirements
 - [Install Terraform](https://www.terraform.io/intro/getting-started/install.html)
-- [Install Ansible dependencies](/docs/ansible.md#installing-ansible)
+- [Install Ansible dependencies](/docs/ansible/ansible.md#installing-ansible)
 - Account with Equinix Metal
 - An SSH key pair

contrib/terraform/openstack/README.md (2)

@@ -619,7 +619,7 @@ Edit `inventory/$CLUSTER/group_vars/k8s_cluster/k8s_cluster.yml`:
 - Set variable **kube_network_plugin** to your desired networking plugin.
 - **flannel** works out-of-the-box
-- **calico** requires [configuring OpenStack Neutron ports](/docs/openstack.md) to allow service and pod subnets
+- **calico** requires [configuring OpenStack Neutron ports](/docs/cloud_providers/openstack.md) to allow service and pod subnets
 ```yml
 # Choose network plugin (calico, weave or flannel)

docs/calico.md → docs/CNI/calico.md

@@ -382,7 +382,7 @@ To clean up any ipvs leftovers:
 Calico node, typha and kube-controllers need to be able to talk to the kubernetes API. Please reference the [Enabling eBPF Calico Docs](https://docs.projectcalico.org/maintenance/ebpf/enabling-bpf) for guidelines on how to do this.
-Kubespray sets up the `kubernetes-services-endpoint` configmap based on the contents of the `loadbalancer_apiserver` inventory variable documented in [HA Mode](/docs/ha-mode.md).
+Kubespray sets up the `kubernetes-services-endpoint` configmap based on the contents of the `loadbalancer_apiserver` inventory variable documented in [HA Mode](/docs/operations/ha-mode.md).
 If no external loadbalancer is used, Calico eBPF can also use the localhost loadbalancer option. We are able to do so only if you use the same port for the localhost apiserver loadbalancer and the kube-apiserver. In this case Calico Automatic Host Endpoints need to be enabled to allow services like `coredns` and `metrics-server` to communicate with the kubernetes host endpoint. See [this blog post](https://www.projectcalico.org/securing-kubernetes-nodes-with-calico-automatic-host-endpoints/) on enabling automatic host endpoints.

docs/cilium.md → docs/CNI/cilium.md

@@ -99,7 +99,7 @@ cilium_operator_extra_volume_mounts:
 ## Choose Cilium version
 ```yml
-cilium_version: v1.15.4
+cilium_version: v1.12.1
 ```
 ## Add variable to config

docs/cni.md → docs/CNI/cni.md

docs/flannel.md → docs/CNI/flannel.md

docs/kube-ovn.md → docs/CNI/kube-ovn.md

docs/kube-router.md → docs/CNI/kube-router.md

docs/macvlan.md → docs/CNI/macvlan.md

docs/multus.md → docs/CNI/multus.md

docs/weave.md → docs/CNI/weave.md

docs/containerd.md → docs/CRI/containerd.md

docs/cri-o.md → docs/CRI/cri-o.md

docs/docker.md → docs/CRI/docker.md

docs/gvisor.md → docs/CRI/gvisor.md

docs/kata-containers.md → docs/CRI/kata-containers.md

docs/aws-ebs-csi.md → docs/CSI/aws-ebs-csi.md

docs/azure-csi.md → docs/CSI/azure-csi.md

docs/cinder-csi.md → docs/CSI/cinder-csi.md

docs/gcp-pd-csi.md → docs/CSI/gcp-pd-csi.md

docs/vsphere-csi.md → docs/CSI/vsphere-csi.md

docs/arch.md → docs/advanced/arch.md

docs/cert_manager.md → docs/advanced/cert_manager.md

docs/dns-stack.md → docs/advanced/dns-stack.md

docs/downloads.md → docs/advanced/downloads.md

docs/gcp-lb.md → docs/advanced/gcp-lb.md

docs/kubernetes-reliability.md → docs/advanced/kubernetes-reliability.md

docs/mitogen.md → docs/advanced/mitogen.md

docs/netcheck.md → docs/advanced/netcheck.md

docs/ntp.md → docs/advanced/ntp.md

docs/proxy.md → docs/advanced/proxy.md

docs/kubernetes-apps/registry.md → docs/advanced/registry.md

docs/ansible.md → docs/ansible/ansible.md

@@ -59,7 +59,7 @@ not _kube_node_.
 There are also two special groups:
-* **calico_rr** : explained for [advanced Calico networking cases](/docs/calico.md)
+* **calico_rr** : explained for [advanced Calico networking cases](/docs/CNI/calico.md)
 * **bastion** : configure a bastion host if your nodes are not directly reachable
 Below is a complete inventory example:
@@ -285,7 +285,7 @@ For more information about Ansible and bastion hosts, read
 ## Mitogen
-Mitogen support is deprecated, please see [mitogen related docs](/docs/mitogen.md) for usage and reasons for deprecation.
+Mitogen support is deprecated, please see [mitogen related docs](/docs/advanced/mitogen.md) for usage and reasons for deprecation.
 ## Beyond ansible 2.9

docs/ansible_collection.md → docs/ansible/ansible_collection.md

docs/vars.md → docs/ansible/vars.md

@@ -46,11 +46,11 @@ Some variables of note include:
 * *loadbalancer_apiserver* - If defined, all hosts will connect to this
 address instead of localhost for kube_control_planes and kube_control_plane[0] for
 kube_nodes. See more details in the
-[HA guide](/docs/ha-mode.md).
+[HA guide](/docs/operations/ha-mode.md).
 * *loadbalancer_apiserver_localhost* - makes all hosts to connect to
 the apiserver internally load balanced endpoint. Mutual exclusive to the
 `loadbalancer_apiserver`. See more details in the
-[HA guide](/docs/ha-mode.md).
+[HA guide](/docs/operations/ha-mode.md).
 ## Cluster variables

docs/aws.md → docs/cloud_providers/aws.md

docs/azure.md → docs/cloud_providers/azure.md

docs/cloud.md → docs/cloud_providers/cloud.md

docs/equinix-metal.md → docs/cloud_providers/equinix-metal.md

@@ -54,7 +54,7 @@ cd kubespray
 ## Install Ansible
-Install Ansible according to [Ansible installation guide](/docs/ansible.md#installing-ansible)
+Install Ansible according to [Ansible installation guide](/docs/ansible/ansible.md#installing-ansible)
 ## Cluster Definition

docs/openstack.md → docs/cloud_providers/openstack.md

docs/vsphere.md → docs/cloud_providers/vsphere.md

@@ -54,7 +54,7 @@ external_vsphere_kubernetes_cluster_id: "kubernetes-cluster-id"
 vsphere_csi_enabled: true
 ```
-For a more fine-grained CSI setup, refer to the [vsphere-csi](/docs/vsphere-csi.md) documentation.
+For a more fine-grained CSI setup, refer to the [vsphere-csi](/docs/CSI/vsphere-csi.md) documentation.
 ### Deployment

docs/ci-setup.md → docs/developers/ci-setup.md

docs/ci.md → docs/developers/ci.md

docs/test_cases.md → docs/developers/test_cases.md

@@ -25,7 +25,7 @@ Note, the canal network plugin deploys flannel as well plus calico policy contro
 ## Test cases
-The [CI Matrix](/docs/ci.md) displays OS, Network Plugin and Container Manager tested.
+The [CI Matrix](/docs/developers/ci.md) displays OS, Network Plugin and Container Manager tested.
 All tests are breakdown into 3 "stages" ("Stage" means a build step of the build pipeline) as follows:

docs/vagrant.md → docs/developers/vagrant.md

@@ -52,7 +52,7 @@ speed, the variable 'download_run_once' is set. This will make kubespray
 download all files and containers just once and then redistributes them to
 the other nodes and as a bonus, also cache all downloads locally and re-use
 them on the next provisioning run. For more information on download settings
-see [download documentation](/docs/downloads.md).
+see [download documentation](/docs/advanced/downloads.md).
 ## Example use of Vagrant

docs/kubernetes-apps/cephfs_provisioner.md → docs/external_storage_provisioners/cephfs_provisioner.md

docs/kubernetes-apps/local_volume_provisioner.md → docs/external_storage_provisioners/local_volume_provisioner.md

docs/kubernetes-apps/rbd_provisioner.md → docs/external_storage_provisioners/rbd_provisioner.md

docs/kubernetes-apps/scheduler_plugins.md → docs/external_storage_provisioners/scheduler_plugins.md

docs/comparisons.md → docs/getting_started/comparisons.md

docs/getting-started.md → docs/getting_started/getting-started.md

@@ -36,7 +36,7 @@ ansible-playbook -i inventory/mycluster/hosts.yml cluster.yml -b -v \
 --private-key=~/.ssh/private_key
 ```
-See more details in the [ansible guide](/docs/ansible.md).
+See more details in the [ansible guide](/docs/ansible/ansible.md).
 ### Adding nodes
@@ -81,7 +81,7 @@ kube-apiserver via port 8080. A kubeconfig file is not necessary in this case,
 because kubectl will use <http://localhost:8080> to connect. The kubeconfig files
 generated will point to localhost (on kube_control_planes) and kube_node hosts will
 connect either to a localhost nginx proxy or to a loadbalancer if configured.
-More details on this process are in the [HA guide](/docs/ha-mode.md).
+More details on this process are in the [HA guide](/docs/operations/ha-mode.md).
 Kubespray permits connecting to the cluster remotely on any IP of any
 kube_control_plane host on port 6443 by default. However, this requires
@@ -140,5 +140,5 @@ If desired, copy admin.conf to ~/.kube/config.
 ## Setting up your first cluster
-[Setting up your first cluster](/docs/setting-up-your-first-cluster.md) is an
+[Setting up your first cluster](/docs/getting_started/setting-up-your-first-cluster.md) is an
 applied step-by-step guide for setting up your first cluster with Kubespray.

docs/setting-up-your-first-cluster.md → docs/getting_started/setting-up-your-first-cluster.md

docs/ingress_controller/alb_ingress_controller.md → docs/ingress/alb_ingress_controller.md

docs/ingress_controller/ingress_nginx.md → docs/ingress/ingress_nginx.md

docs/kube-vip.md → docs/ingress/kube-vip.md

docs/metallb.md → docs/ingress/metallb.md

docs/amazonlinux.md → docs/operating_systems/amazonlinux.md

docs/bootstrap-os.md → docs/operating_systems/bootstrap-os.md

docs/centos.md → docs/operating_systems/centos.md

docs/fcos.md → docs/operating_systems/fcos.md

docs/flatcar.md → docs/operating_systems/flatcar.md

docs/kylinlinux.md → docs/operating_systems/kylinlinux.md

docs/openeuler.md → docs/operating_systems/openeuler.md

docs/opensuse.md → docs/operating_systems/opensuse.md

docs/rhel.md → docs/operating_systems/rhel.md

docs/uoslinux.md → docs/operating_systems/uoslinux.md

docs/cgroups.md → docs/operations/cgroups.md

docs/encrypting-secret-data-at-rest.md → docs/operations/encrypting-secret-data-at-rest.md

docs/etcd.md → docs/operations/etcd.md

docs/ha-mode.md → docs/operations/ha-mode.md

docs/hardening.md → docs/operations/hardening.md

docs/integration.md → docs/operations/integration.md

docs/large-deployments.md → docs/operations/large-deployments.md

@@ -9,7 +9,7 @@ For a large scaled deployments, consider the following configuration changes:
 * Override containers' `foo_image_repo` vars to point to intranet registry.
 * Override the ``download_run_once: true`` and/or ``download_localhost: true``.
-See [Downloading binaries and containers](/docs/downloads.md) for details.
+See [Downloading binaries and containers](/docs/advanced/downloads.md) for details.
 * Adjust the `retry_stagger` global var as appropriate. It should provide sane
 load on a delegate (the first K8s control plane node) then retrying failed
@@ -32,7 +32,7 @@ For a large scaled deployments, consider the following configuration changes:
 ``kube_controller_node_monitor_period``,
 ``kube_apiserver_pod_eviction_not_ready_timeout_seconds`` &
 ``kube_apiserver_pod_eviction_unreachable_timeout_seconds`` for better Kubernetes reliability.
-Check out [Kubernetes Reliability](/docs/kubernetes-reliability.md)
+Check out [Kubernetes Reliability](/docs/advanced/kubernetes-reliability.md)
 * Tune network prefix sizes. Those are ``kube_network_node_prefix``,
 ``kube_service_addresses`` and ``kube_pods_subnet``.
@@ -41,7 +41,7 @@ For a large scaled deployments, consider the following configuration changes:
 from host/network interruption much quicker with calico_rr.
 * Check out the
-[Inventory](/docs/getting-started.md#building-your-own-inventory)
+[Inventory](/docs/getting_started/getting-started.md#building-your-own-inventory)
 section of the Getting started guide for tips on creating a large scale
 Ansible inventory.

docs/mirror.md → docs/operations/mirror.md

docs/nodes.md → docs/operations/nodes.md

docs/offline-environment.md → docs/operations/offline-environment.md

docs/port-requirements.md → docs/operations/port-requirements.md

docs/recover-control-plane.md → docs/operations/recover-control-plane.md

docs/upgrades.md → docs/operations/upgrades.md

docs/roadmap.md → docs/roadmap/roadmap.md
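With this many renames, a stale reference (like `docs/roadmap.md` still appearing somewhere after the move to `docs/roadmap/roadmap.md`) is easy to miss. A hypothetical quick check, not part of this commit, is to extract relative `.md` link targets and verify each one exists; the `site/` sandbox below is an illustrative stand-in for the repository root:

```shell
# Hypothetical post-move link check: flag markdown links whose target file
# no longer exists at the referenced path. Illustrative sandbox only.
set -eu
mkdir -p site/docs/roadmap
printf '# roadmap\n' > site/docs/roadmap/roadmap.md
# One stale link (old flat path) and one valid link (new path).
printf '[Roadmap](docs/roadmap.md)\n[Roadmap](docs/roadmap/roadmap.md)\n' > site/README.md

cd site
# Pull out "](path.md" targets, strip the "](", and test each path.
grep -oE '\]\([^)#]+\.md' README.md | sed 's/^](//' | while read -r target; do
  [ -e "$target" ] || echo "stale link: $target"
done > ../stale.txt
cd ..
cat stale.txt
```

The same scan, pointed at every `*.md` file rather than a single README, would confirm whether all 70 rewritten links in this commit landed correctly.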
