Private Docker Registry in Kubernetes
=====================================

Kubernetes offers an optional private Docker registry addon, which you can turn
on when you bring up a cluster or install later. This gives you a place to
store truly private Docker images for your cluster.

How it works
------------

The private registry runs as a `Pod` in your cluster. It does not currently
support SSL or authentication, which triggers Docker's "insecure registry"
logic. To work around this, we run a proxy on each node in the cluster,
exposing a port onto the node (via a hostPort), which Docker accepts as
"secure", since it is accessed by `localhost`.

Turning it on
-------------

Some cluster installs (e.g. GCE) support this as a cluster-birth flag. The
`ENABLE_CLUSTER_REGISTRY` variable in `cluster/gce/config-default.sh` governs
whether the registry is run or not. To set this flag, you can specify
`KUBE_ENABLE_CLUSTER_REGISTRY=true` when running `kube-up.sh`. If your cluster
does not include this flag, the following steps should work. Note that some of
this is cloud-provider specific, so you may have to customize it a bit.
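
For example, bringing up a cluster with the registry enabled from a Kubernetes
source checkout might look like this (the exact invocation depends on your
provider setup):

``` console
# Enable the cluster registry addon at cluster-birth time.
$ KUBE_ENABLE_CLUSTER_REGISTRY=true cluster/kube-up.sh
```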

### Make some storage

The primary job of the registry is to store data. To do that we have to decide
where to store it. For cloud environments that have networked storage, we can
use Kubernetes's `PersistentVolume` abstraction. The following template is
expanded by `salt` in the GCE cluster turnup, but can easily be adapted to
other situations:
  27. <!-- BEGIN MUNGE: EXAMPLE registry-pv.yaml.in -->
  28. ``` yaml
  29. kind: PersistentVolume
  30. apiVersion: v1
  31. metadata:
  32. name: kube-system-kube-registry-pv
  33. labels:
  34. kubernetes.io/cluster-service: "true"
  35. spec:
  36. {% if pillar.get('cluster_registry_disk_type', '') == 'gce' %}
  37. capacity:
  38. storage: {{ pillar['cluster_registry_disk_size'] }}
  39. accessModes:
  40. - ReadWriteOnce
  41. gcePersistentDisk:
  42. pdName: "{{ pillar['cluster_registry_disk_name'] }}"
  43. fsType: "ext4"
  44. {% endif %}
  45. ```
  46. <!-- END MUNGE: EXAMPLE registry-pv.yaml.in -->

If, for example, you wanted to use NFS you would just need to change the
`gcePersistentDisk` block to `nfs`. See
[here](https://kubernetes.io/docs/user-guide/volumes.md) for more details on volumes.

Note that in any case, the storage (in the case of GCE, the `PersistentDisk`) must be
created independently - this is not something Kubernetes manages for you (yet).
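
As a concrete sketch of that manual step on GCE, you might create the disk with
`gcloud` and then create the expanded `PersistentVolume`. The disk name, size,
zone, and file name below are illustrative, not the ones your cluster
necessarily uses:

``` console
# Create the underlying GCE persistent disk by hand (name, size, and zone are examples).
$ gcloud compute disks create kube-registry-disk --size=200GB --zone=us-central1-b

# Create the PersistentVolume from the expanded template (file name is an example).
$ kubectl create -f registry-pv.yaml
```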

### I don't want or don't have persistent storage

If you are running in a place that doesn't have networked storage, or if you
just want to kick the tires on this without committing to it, you can easily
adapt the `ReplicationController` specification below to use a simple
`emptyDir` volume instead of a `persistentVolumeClaim`.

Claim the storage
-----------------

Now that the Kubernetes cluster knows that some storage exists, you can put a
claim on that storage. As with the `PersistentVolume` above, you can start
with the `salt` template:
  62. <!-- BEGIN MUNGE: EXAMPLE registry-pvc.yaml.in -->
  63. ``` yaml
  64. kind: PersistentVolumeClaim
  65. apiVersion: v1
  66. metadata:
  67. name: kube-registry-pvc
  68. namespace: kube-system
  69. labels:
  70. kubernetes.io/cluster-service: "true"
  71. spec:
  72. accessModes:
  73. - ReadWriteOnce
  74. resources:
  75. requests:
  76. storage: {{ pillar['cluster_registry_disk_size'] }}
  77. ```
  78. <!-- END MUNGE: EXAMPLE registry-pvc.yaml.in -->

This tells Kubernetes that you want to use storage, and the `PersistentVolume`
you created before will be bound to this claim (unless you have other
`PersistentVolumes`, in which case those might get bound instead). This claim
gives you the right to use this storage until you release the claim.
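
A quick way to check that the claim actually bound, assuming the expanded
manifest is saved as `registry-pvc.yaml` (an example file name):

``` console
# Create the claim, then confirm its STATUS becomes "Bound".
$ kubectl create -f registry-pvc.yaml
$ kubectl get pvc kube-registry-pvc --namespace=kube-system
```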

Run the registry
----------------

Now we can run a Docker registry:
  86. <!-- BEGIN MUNGE: EXAMPLE registry-rc.yaml -->
  87. ``` yaml
  88. apiVersion: v1
  89. kind: ReplicationController
  90. metadata:
  91. name: kube-registry-v0
  92. namespace: kube-system
  93. labels:
  94. k8s-app: registry
  95. version: v0
  96. kubernetes.io/cluster-service: "true"
  97. spec:
  98. replicas: 1
  99. selector:
  100. k8s-app: registry
  101. version: v0
  102. template:
  103. metadata:
  104. labels:
  105. k8s-app: registry
  106. version: v0
  107. kubernetes.io/cluster-service: "true"
  108. spec:
  109. containers:
  110. - name: registry
  111. image: registry:2
  112. resources:
  113. limits:
  114. cpu: 100m
  115. memory: 100Mi
  116. env:
  117. - name: REGISTRY_HTTP_ADDR
  118. value: :5000
  119. - name: REGISTRY_STORAGE_FILESYSTEM_ROOTDIRECTORY
  120. value: /var/lib/registry
  121. volumeMounts:
  122. - name: image-store
  123. mountPath: /var/lib/registry
  124. ports:
  125. - containerPort: 5000
  126. name: registry
  127. protocol: TCP
  128. volumes:
  129. - name: image-store
  130. persistentVolumeClaim:
  131. claimName: kube-registry-pvc
  132. ```
  133. <!-- END MUNGE: EXAMPLE registry-rc.yaml -->

Expose the registry in the cluster
----------------------------------

Now that we have a registry `Pod` running, we can expose it as a Service:
  137. <!-- BEGIN MUNGE: EXAMPLE registry-svc.yaml -->
  138. ``` yaml
  139. apiVersion: v1
  140. kind: Service
  141. metadata:
  142. name: kube-registry
  143. namespace: kube-system
  144. labels:
  145. k8s-app: registry
  146. kubernetes.io/cluster-service: "true"
  147. kubernetes.io/name: "KubeRegistry"
  148. spec:
  149. selector:
  150. k8s-app: registry
  151. ports:
  152. - name: registry
  153. port: 5000
  154. protocol: TCP
  155. ```
  156. <!-- END MUNGE: EXAMPLE registry-svc.yaml -->

Expose the registry on each node
--------------------------------

Now that we have a running `Service`, we need to expose it onto each Kubernetes
`Node` so that Docker will see it as `localhost`. We can load a `Pod` on every
node by creating the following `DaemonSet`:
  162. <!-- BEGIN MUNGE: EXAMPLE ../../saltbase/salt/kube-registry-proxy/kube-registry-proxy.yaml -->
  163. ``` yaml
  164. apiVersion: extensions/v1beta1
  165. kind: DaemonSet
  166. metadata:
  167. name: kube-registry-proxy
  168. namespace: kube-system
  169. labels:
  170. k8s-app: kube-registry-proxy
  171. kubernetes.io/cluster-service: "true"
  172. version: v0.4
  173. spec:
  174. template:
  175. metadata:
  176. labels:
  177. k8s-app: kube-registry-proxy
  178. kubernetes.io/name: "kube-registry-proxy"
  179. kubernetes.io/cluster-service: "true"
  180. version: v0.4
  181. spec:
  182. containers:
  183. - name: kube-registry-proxy
  184. image: gcr.io/google_containers/kube-registry-proxy:0.4
  185. resources:
  186. limits:
  187. cpu: 100m
  188. memory: 50Mi
  189. env:
  190. - name: REGISTRY_HOST
  191. value: kube-registry.kube-system.svc.cluster.local
  192. - name: REGISTRY_PORT
  193. value: "5000"
  194. ports:
  195. - name: registry
  196. containerPort: 80
  197. hostPort: 5000
  198. ```
  199. <!-- END MUNGE: EXAMPLE ../../saltbase/salt/kube-registry-proxy/kube-registry-proxy.yaml -->

When modifying the replication controller, service, and daemon-set definitions,
take care to use *unique* identifiers for the rc-svc pair and for the
daemon-set. Failing to do so will register the localhost proxy daemon-sets with
the upstream service. As a result they will then try to proxy themselves, which
will, for obvious reasons, not work.

This ensures that port 5000 on each node is directed to the registry `Service`.
You should be able to verify that it is running by hitting port 5000 with a web
browser and getting a 404 error:

``` console
$ curl localhost:5000
404 page not found
```

Using the registry
------------------

To use an image hosted by this registry, simply say this in your `Pod`'s
`spec.containers[].image` field:

``` yaml
image: localhost:5000/user/container
```

Before you can use the registry, you have to be able to get images into it,
though. If you are building an image on your Kubernetes `Node`, you can spell
out `localhost:5000` when you build and push. More likely, though, you are
building locally and want to push to your cluster.

You can use `kubectl` to set up a port-forward from your local node to a
running Pod:

``` console
$ POD=$(kubectl get pods --namespace kube-system -l k8s-app=registry \
        -o template --template '{{range .items}}{{.metadata.name}} {{.status.phase}}{{"\n"}}{{end}}' \
        | grep Running | head -1 | cut -f1 -d' ')

$ kubectl port-forward --namespace kube-system $POD 5000:5000 &
```

Now you can build and push images on your local computer as
`localhost:5000/yourname/container` and those images will be available inside
your Kubernetes cluster with the same name.
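
With the port-forward in place, a local build-and-push might look like this
(the image name is just an example):

``` console
# Build locally and push through the forwarded port; the in-cluster proxy
# makes the same name resolve on every node.
$ docker build -t localhost:5000/yourname/example-app .
$ docker push localhost:5000/yourname/example-app
```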

More Extensions
===============

- [Use GCS as storage backend](gcs/README.md)
- [Enable TLS/SSL](tls/README.md)
- [Enable Authentication](auth/README.md)

Future improvements
-------------------

- Allow port-forwarding to a Service rather than a pod (\#15180)