
Merge pull request #862 from bogdando/docs

Update docs
Bogdan Dobrelya committed 8 years ago (via GitHub) · commit a5f93d6013
3 changed files with 98 additions and 34 deletions:
  1. README.md (69 changes)
  2. docs/ansible.md (39 changes)
  3. docs/comparisons.md (24 changes)

README.md (69 changes)

@@ -1,4 +1,4 @@
-![Kubespray Logo](http://s9.postimg.org/md5dyjl67/kubespray_logoandkubespray_small.png)
+![Kubernetes Logo](https://s28.postimg.org/lf3q4ocpp/k8s.png)
 ##Deploy a production ready kubernetes cluster
@@ -14,75 +14,88 @@ If you have questions, join us on the [kubernetes slack](https://slack.k8s.io),
 To deploy the cluster you can use :
 [**kargo-cli**](https://github.com/kubespray/kargo-cli) <br>
-**Ansible** usual commands <br>
+**Ansible** usual commands and [**inventory builder**](https://github.com/kubernetes-incubator/kargo/blob/master/contrib/inventory_builder/inventory.py) <br>
 **vagrant** by simply running `vagrant up` (for tests purposes) <br>
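As a quick illustration of the inventory builder workflow referenced above, a minimal sketch (node IPs and the inventory path are placeholders; check the builder's own README for its exact options):
```
# Generate an Ansible inventory from a list of node IPs, then run the main playbook.
CONFIG_FILE=inventory/inventory.cfg python3 contrib/inventory_builder/inventory.py 10.10.1.3 10.10.1.4 10.10.1.5
ansible-playbook -i inventory/inventory.cfg cluster.yml -b -v
```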
 * [Requirements](#requirements)
+* [Kargo vs ...](docs/comparisons.md)
 * [Getting started](docs/getting-started.md)
+* [Ansible inventory and tags](docs/ansible.md)
+* [Deployment data variables](docs/vars.md)
+* [DNS stack](docs/dns-stack.md)
+* [HA mode](docs/ha-mode.md)
+* [Network plugins](#network-plugins)
 * [Vagrant install](docs/vagrant.md)
 * [CoreOS bootstrap](docs/coreos.md)
-* [Ansible variables](docs/ansible.md)
+* [Downloaded artifacts](docs/downloads.md)
 * [Cloud providers](docs/cloud.md)
 * [OpenStack](docs/openstack.md)
 * [AWS](docs/aws.md)
 * [Azure](docs/azure.md)
-* [Network plugins](#network-plugins)
+* [Large deployments](docs/large-deployments.md)
+* [Upgrades basics](docs/upgrades.md)
 * [Roadmap](docs/roadmap.md)
 Supported Linux distributions
 ===============
-* **CoreOS**
-* **Debian** Wheezy, Jessie
-* **Ubuntu** 14.10, 15.04, 15.10, 16.04
-* **Fedora** 23
+* **Container Linux by CoreOS**
+* **Debian** Jessie
+* **Ubuntu** 16.04
 * **CentOS/RHEL** 7
-Versions
---------------
-[kubernetes](https://github.com/kubernetes/kubernetes/releases) v1.4.6 <br>
+Note: Upstart/SysV init based OS types are not supported.
+Versions of supported components
+--------------------------------
+[kubernetes](https://github.com/kubernetes/kubernetes/releases) v1.5.1 <br>
 [etcd](https://github.com/coreos/etcd/releases) v3.0.6 <br>
 [flanneld](https://github.com/coreos/flannel/releases) v0.6.2 <br>
-[calicoctl](https://github.com/projectcalico/calico-docker/releases) v0.22.0 <br>
+[calicoctl](https://github.com/projectcalico/calico-docker/releases) v0.23.0 <br>
+[canal](https://github.com/projectcalico/canal) (given calico/flannel versions) <br>
 [weave](http://weave.works/) v1.6.1 <br>
-[docker](https://www.docker.com/) v1.10.3 <br>
+[docker](https://www.docker.com/) v1.12.5 <br>
+[rkt](https://coreos.com/rkt/docs/latest/) v1.21.0 <br>
+Note: rkt support as docker alternative is limited to control plane (etcd and
+kubelet). Docker is still used for Kubernetes cluster workloads and network
+plugins' related OS services. Also note, only one of the supported network
+plugins can be deployed for a given single cluster.
 Requirements
 --------------
 * The target servers must have **access to the Internet** in order to pull docker images.
 * The **firewalls are not managed**, you'll need to implement your own rules the way you used to.
-in order to avoid any issue during deployment you should disable your firewall
+in order to avoid any issue during deployment you should disable your firewall.
+* The target servers are configured to allow **IPv4 forwarding**.
 * **Copy your ssh keys** to all the servers part of your inventory.
-* **Ansible v2.1 (or newer) and python-netaddr**
+* **Ansible v2.2 (or newer) and python-netaddr**
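A rough preparation sketch for the requirements above (commands are illustrative only; adapt users, addresses and package management to your environment):
```
# On each target server: enable IPv4 forwarding (persist it in /etc/sysctl.conf as needed)
sudo sysctl -w net.ipv4.ip_forward=1
# From the deployment host: copy your ssh key to every server in the inventory
ssh-copy-id ubuntu@10.10.1.3
# On the deployment host: Ansible v2.2 or newer plus python-netaddr
sudo pip install "ansible>=2.2.0" netaddr
```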
 ## Network plugins
-You can choose between 3 network plugins. (default: `flannel` with vxlan backend)
+You can choose between 4 network plugins. (default: `flannel` with vxlan backend)
 * [**flannel**](docs/flannel.md): gre/vxlan (layer 2) networking.
 * [**calico**](docs/calico.md): bgp (layer 3) networking.
-* **weave**: Weave is a lightweight container overlay network that doesn't require an external K/V database cluster. <br>
-(Please refer to `weave` [troubleshooting documentation](http://docs.weave.works/weave/latest_release/troubleshooting.html))
-The choice is defined with the variable `kube_network_plugin`
+* [**canal**](https://github.com/projectcalico/canal): a composition of calico and flannel plugins.
+* **weave**: Weave is a lightweight container overlay network that doesn't require an external K/V database cluster. <br>
+(Please refer to `weave` [troubleshooting documentation](http://docs.weave.works/weave/latest_release/troubleshooting.html)).
+The choice is defined with the variable `kube_network_plugin`. There is also an
+option to leverage built-in cloud provider networking instead.
+See also [Network checker](docs/netcheck.md).
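To illustrate how the plugin choice is made, a hedged example of setting `kube_network_plugin` at run time (it can equally be set in the inventory group vars; the inventory path is a placeholder):
```
# Deploy with calico instead of the default flannel
ansible-playbook -i inventory/inventory.cfg cluster.yml -b -e kube_network_plugin=calico
```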
 ## CI Tests
-[![Build Status](https://travis-ci.org/kubernetes-incubator/kargo.svg)](https://travis-ci.org/kubernetes-incubator/kargo) </br>
-### Google Compute Engine
-| Calico | Flannel | Weave |
-------------- | ------------- | ------------- | ------------- |
-Ubuntu Xenial |[![Build Status](https://ci.kubespray.io/job/kargo-gce-xenial-calico/badge/icon)](https://ci.kubespray.io/job/kargo-gce-xenial-calico/)|[![Build Status](https://ci.kubespray.io/job/kargo-gce-xenial-flannel/badge/icon)](https://ci.kubespray.io/job/kargo-gce-xenial-flannel/)|[![Build Status](https://ci.kubespray.io/job/kargo-gce-xenial-weave/badge/icon)](https://ci.kubespray.io/job/kargo-gce-xenial-weave)|
-CentOS 7 |[![Build Status](https://ci.kubespray.io/job/kargo-gce-centos7-calico/badge/icon)](https://ci.kubespray.io/job/kargo-gce-centos7-calico/)|[![Build Status](https://ci.kubespray.io/job/kargo-gce-centos7-flannel/badge/icon)](https://ci.kubespray.io/job/kargo-gce-centos7-flannel/)|[![Build Status](https://ci.kubespray.io/job/kargo-gce-centos7-weave/badge/icon)](https://ci.kubespray.io/job/kargo-gce-centos7-weave/)|
-CoreOS (stable) |[![Build Status](https://ci.kubespray.io/job/kargo-gce-coreos-calico/badge/icon)](https://ci.kubespray.io/job/kargo-gce-coreos-calico/)|[![Build Status](https://ci.kubespray.io/job/kargo-gce-coreos-flannel/badge/icon)](https://ci.kubespray.io/job/kargo-gce-coreos-flannel/)|[![Build Status](https://ci.kubespray.io/job/kargo-gce-coreos-weave/badge/icon)](https://ci.kubespray.io/job/kargo-gce-coreos-weave/)|
-CI tests sponsored by Google (GCE), and [teuto.net](https://teuto.net/) for OpenStack.
+![Gitlab Logo](https://s27.postimg.org/wmtaig1wz/gitlabci.png)
+[![Build graphs](https://gitlab.com/kargo-ci/kubernetes-incubator__kargo/badges/master/build.svg)](https://gitlab.com/kargo-ci/kubernetes-incubator__kargo/pipelines) </br>
+CI/end-to-end tests sponsored by Google (GCE), and [teuto.net](https://teuto.net/) for OpenStack.
+See the [test matrix](docs/test_cases.md) for details.

docs/ansible.md (39 changes)

@@ -45,9 +45,38 @@ kube-master
 etcd
 ```
-Group vars
---------------
-The main variables to change are located in the directory ```inventory/group_vars/all.yml```.
+Group vars and overriding variables precedence
+----------------------------------------------
+The group variables to control main deployment options are located in the directory
+```inventory/group_vars```.
+There are also role vars for docker, rkt, kubernetes preinstall and master roles.
+According to the [ansible docs](http://docs.ansible.com/ansible/playbooks_variables.html#variable-precedence-where-should-i-put-a-variable),
+those cannot be overridden from the group vars. In order to override, one should use
+the `-e` runtime flag (the simplest way) or other layers described in the docs.
+Kargo uses only a few layers to override things (or expects them to
+be overridden for roles):
+Layer | Comment
+------|--------
+**role defaults** | provides best UX to override things for Kargo deployments
+inventory vars | Unused
+**inventory group_vars** | Expects users to use ``all.yml``,``k8s-cluster.yml`` etc. to override things
+inventory host_vars | Unused
+playbook group_vars | Unused
+playbook host_vars | Unused
+**host facts** | Kargo overrides for internal roles' logic, like state flags
+play vars | Unused
+play vars_prompt | Unused
+play vars_files | Unused
+registered vars | Unused
+set_facts | Kargo overrides those, for some places
+**role and include vars** | Provides bad UX to override things! Use extra vars to enforce
+block vars (only for tasks in block) | Kargo overrides for internal roles' logic
+task vars (only for the task) | Unused for roles, but used by helper scripts
+**extra vars** (always win precedence) | override with ``ansible-playbook -e @foo.yml``
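As a usage sketch for the extra-vars layer recommended in the table above (the file name and variable values are examples only):
```
# Collect overrides in a file and pass it as extra vars, which always win precedence
cat > my-overrides.yml <<EOF
kube_network_plugin: weave
EOF
ansible-playbook -i inventory/inventory.cfg cluster.yml -b -e @my-overrides.yml
```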
 Ansible tags
 ------------
@@ -132,5 +161,5 @@ bastion host.
 bastion ansible_ssh_host=x.x.x.x
 ```
-For more information about Ansible and bastion hosts, read
-[Running Ansible Through an SSH Bastion Host](http://blog.scottlowe.org/2015/12/24/running-ansible-through-ssh-bastion-host/)
+For more information about Ansible and bastion hosts, read
+[Running Ansible Through an SSH Bastion Host](http://blog.scottlowe.org/2015/12/24/running-ansible-through-ssh-bastion-host/)
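For the bastion setup shown in this hunk, one generic Ansible approach (not necessarily how Kargo wires it up internally) is to route SSH to the nodes through the bastion via `ansible_ssh_common_args`, for example:
```
# Append a ProxyCommand for the cluster hosts to a group_vars file of your choice
# (the path, user and bastion address below are placeholders)
cat >> inventory/group_vars/all.yml <<'EOF'
ansible_ssh_common_args: '-o ProxyCommand="ssh -W %h:%p -q admin@x.x.x.x"'
EOF
```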

docs/comparisons.md (24 changes)

@@ -1,3 +1,25 @@
 Kargo vs [Kops](https://github.com/kubernetes/kops)
 ---------------
-Kargo runs on bare metal and most clouds, using Ansible as its substrate for provisioning and orchestration. Kops performs the provisioning and orchestration itself, and as such is less flexible in deployment platforms. For people with familiarity with Ansible, existing Ansible deployments or the desire to run a Kubernetes cluster across multiple platforms, Kargo is a good choice. Kops, however, iss more tightly integrated with the unique features of the clouds it supports so it could be a better choice if you know that you will only be using one platform for the foreseeable future.
+Kargo runs on bare metal and most clouds, using Ansible as its substrate for
+provisioning and orchestration. Kops performs the provisioning and orchestration
+itself, and as such is less flexible in deployment platforms. For people with
+familiarity with Ansible, existing Ansible deployments or the desire to run a
+Kubernetes cluster across multiple platforms, Kargo is a good choice. Kops,
+however, is more tightly integrated with the unique features of the clouds it
+supports so it could be a better choice if you know that you will only be using
+one platform for the foreseeable future.
+Kargo vs [Kubeadm](https://github.com/kubernetes/kubeadm)
+------------------
+Kubeadm provides domain knowledge of Kubernetes clusters' life cycle
+management, including self-hosted layouts, dynamic discovery services and so
+on. Had it belonged to the new [operators world](https://coreos.com/blog/introducing-operators.html),
+it would've likely been named a "Kubernetes cluster operator". Kargo, however,
+performs generic configuration management tasks from the "OS operators" Ansible
+world, plus some initial K8s clustering (with networking plugins included) and
+control plane bootstrapping. Kargo [strives](https://github.com/kubernetes-incubator/kargo/issues/553)
+to adopt kubeadm as a tool in order to consume life cycle management domain
+knowledge from it and offload generic OS configuration things from it, which
+hopefully benefits both sides.