Private Docker Registry in Kubernetes
=====================================

Kubernetes offers an optional private Docker registry addon, which you can turn
on when you bring up a cluster or install later. This gives you a place to
store truly private Docker images for your cluster.

How it works
------------

The private registry runs as a `Pod` in your cluster. It does not currently
support SSL or authentication, which triggers Docker's "insecure registry"
logic. To work around this, we run a proxy on each node in the cluster,
exposing a port onto the node (via a hostPort), which Docker accepts as
"secure", since it is accessed by `localhost`.

Turning it on
-------------

Some cluster installs (e.g. GCE) support this as a cluster-birth flag. The
`ENABLE_CLUSTER_REGISTRY` variable in `cluster/gce/config-default.sh` governs
whether the registry is run or not. To set this flag, you can specify
`KUBE_ENABLE_CLUSTER_REGISTRY=true` when running `kube-up.sh`. If your cluster
does not include this flag, the following steps should work. Note that some of
this is cloud-provider specific, so you may have to customize it a bit.
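
For example, on GCE a cluster with the registry enabled can be brought up like
this (a sketch; the exact invocation depends on how you run `kube-up.sh`):

``` console
$ KUBE_ENABLE_CLUSTER_REGISTRY=true ./cluster/kube-up.sh
```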

### Make some storage

The primary job of the registry is to store data. To do that we have to decide
where to store it. For cloud environments that have networked storage, we can
use Kubernetes's `PersistentVolume` abstraction. The following template is
expanded by `salt` in the GCE cluster turnup, but can easily be adapted to
other situations:

<!-- BEGIN MUNGE: EXAMPLE registry-pv.yaml.in -->

``` yaml
kind: PersistentVolume
apiVersion: v1
metadata:
  name: kube-system-kube-registry-pv
spec:
{% if pillar.get('cluster_registry_disk_type', '') == 'gce' %}
  capacity:
    storage: {{ pillar['cluster_registry_disk_size'] }}
  accessModes:
    - ReadWriteOnce
  gcePersistentDisk:
    pdName: "{{ pillar['cluster_registry_disk_name'] }}"
    fsType: "ext4"
{% endif %}
```

<!-- END MUNGE: EXAMPLE registry-pv.yaml.in -->

If, for example, you wanted to use NFS you would just need to change the
`gcePersistentDisk` block to `nfs`. See
[here](https://kubernetes.io/docs/concepts/storage/volumes/) for more details on volumes.
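
For illustration, here is a minimal sketch of that NFS variant, assuming a
hypothetical server `nfs.example.com` exporting `/exports/registry` and a
100Gi size (substitute your own server, path, and size):

``` yaml
kind: PersistentVolume
apiVersion: v1
metadata:
  name: kube-system-kube-registry-pv
spec:
  capacity:
    storage: 100Gi
  accessModes:
    - ReadWriteOnce
  nfs:
    # Hypothetical NFS export -- substitute your own.
    server: nfs.example.com
    path: /exports/registry
```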

Note that in any case, the storage (in this case the GCE PersistentDisk) must be
created independently - this is not something Kubernetes manages for you (yet).

### I don't want or don't have persistent storage

If you are running in a place that doesn't have networked storage, or if you
just want to kick the tires on this without committing to it, you can easily
adapt the `ReplicationController` specification below to use a simple
`emptyDir` volume instead of a `persistentVolumeClaim`.
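
Concretely, that means replacing the `volumes` entry at the bottom of the
`ReplicationController` with something like the following (note that an
`emptyDir` lives and dies with its Pod, so any pushed images are lost when the
Pod is rescheduled):

``` yaml
      volumes:
      - name: image-store
        # Ephemeral storage; fine for kicking the tires, not for real use.
        emptyDir: {}
```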

Claim the storage
-----------------

Now that the Kubernetes cluster knows that some storage exists, you can put a
claim on that storage. As with the `PersistentVolume` above, you can start
with the `salt` template:

<!-- BEGIN MUNGE: EXAMPLE registry-pvc.yaml.in -->

``` yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: kube-registry-pvc
  namespace: kube-system
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: {{ pillar['cluster_registry_disk_size'] }}
```

<!-- END MUNGE: EXAMPLE registry-pvc.yaml.in -->

This tells Kubernetes that you want to use storage, and the `PersistentVolume`
you created before will be bound to this claim (unless you have other
`PersistentVolumes`, in which case those might get bound instead). This claim
gives you the right to use this storage until you release the claim.
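
Assuming you have saved the expanded templates to files and created them with
`kubectl`, you can check that the claim bound as expected (its status should
read `Bound`):

``` console
$ kubectl create -f registry-pv.yaml -f registry-pvc.yaml
$ kubectl get pvc kube-registry-pvc --namespace=kube-system
```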

Run the registry
----------------

Now we can run a Docker registry:

<!-- BEGIN MUNGE: EXAMPLE registry-rc.yaml -->

``` yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: kube-registry-v0
  namespace: kube-system
  labels:
    k8s-app: registry
    version: v0
spec:
  replicas: 1
  selector:
    k8s-app: registry
    version: v0
  template:
    metadata:
      labels:
        k8s-app: registry
        version: v0
    spec:
      containers:
      - name: registry
        image: registry:2
        resources:
          limits:
            cpu: 100m
            memory: 100Mi
        env:
        - name: REGISTRY_HTTP_ADDR
          value: :5000
        - name: REGISTRY_STORAGE_FILESYSTEM_ROOTDIRECTORY
          value: /var/lib/registry
        volumeMounts:
        - name: image-store
          mountPath: /var/lib/registry
        ports:
        - containerPort: 5000
          name: registry
          protocol: TCP
      volumes:
      - name: image-store
        persistentVolumeClaim:
          claimName: kube-registry-pvc
```

<!-- END MUNGE: EXAMPLE registry-rc.yaml -->
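
As with the other objects, a sketch of creating it and checking that the
registry `Pod` is up:

``` console
$ kubectl create -f registry-rc.yaml
$ kubectl get pods --namespace=kube-system -l k8s-app=registry
```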

Expose the registry in the cluster
----------------------------------

Now that we have a registry `Pod` running, we can expose it as a Service:

<!-- BEGIN MUNGE: EXAMPLE registry-svc.yaml -->

``` yaml
apiVersion: v1
kind: Service
metadata:
  name: kube-registry
  namespace: kube-system
  labels:
    k8s-app: registry
    kubernetes.io/name: "KubeRegistry"
spec:
  selector:
    k8s-app: registry
  ports:
  - name: registry
    port: 5000
    protocol: TCP
```

<!-- END MUNGE: EXAMPLE registry-svc.yaml -->
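
A quick sanity check that the `Service` exists and received a cluster IP; its
DNS name, `kube-registry.kube-system.svc.cluster.local`, is exactly what the
proxy below is pointed at:

``` console
$ kubectl create -f registry-svc.yaml
$ kubectl get svc kube-registry --namespace=kube-system
```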

Expose the registry on each node
--------------------------------

Now that we have a running `Service`, we need to expose it onto each Kubernetes
`Node` so that Docker will see it as `localhost`. We can load a `Pod` on every
node by creating the following `DaemonSet`:

<!-- BEGIN MUNGE: EXAMPLE ../../saltbase/salt/kube-registry-proxy/kube-registry-proxy.yaml -->

``` yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-registry-proxy
  namespace: kube-system
  labels:
    k8s-app: kube-registry-proxy
    version: v0.4
spec:
  # apps/v1 DaemonSets require an explicit selector.
  selector:
    matchLabels:
      k8s-app: kube-registry-proxy
  template:
    metadata:
      labels:
        k8s-app: kube-registry-proxy
        kubernetes.io/name: "kube-registry-proxy"
        version: v0.4
    spec:
      containers:
      - name: kube-registry-proxy
        image: gcr.io/google_containers/kube-registry-proxy:0.4
        resources:
          limits:
            cpu: 100m
            memory: 50Mi
        env:
        - name: REGISTRY_HOST
          value: kube-registry.kube-system.svc.cluster.local
        - name: REGISTRY_PORT
          value: "5000"
        ports:
        - name: registry
          containerPort: 80
          hostPort: 5000
```

<!-- END MUNGE: EXAMPLE ../../saltbase/salt/kube-registry-proxy/kube-registry-proxy.yaml -->

When modifying the replication controller, service, and DaemonSet definitions,
take care to keep the label selectors for the rc-svc couple and for the
DaemonSet *unique*. If they overlap, the localhost proxy Pods will be
registered as endpoints of the upstream service; they will then try to proxy
to themselves, which will, for obvious reasons, not work.

This ensures that port 5000 on each node is directed to the registry `Service`.
You should be able to verify that it is running by hitting port 5000 with a web
browser and getting a 404 error:

``` console
$ curl localhost:5000
404 page not found
```

Using the registry
------------------

To use an image hosted by this registry, simply say this in your `Pod`'s
`spec.containers[].image` field:

``` yaml
image: localhost:5000/user/container
```

Before you can use the registry, you have to be able to get images into it,
though. If you are building an image on your Kubernetes `Node`, you can spell
out `localhost:5000` when you build and push. More likely, though, you are
building locally and want to push to your cluster.

You can use `kubectl` to set up a port-forward from your local node to a
running Pod:

``` console
$ POD=$(kubectl get pods --namespace kube-system -l k8s-app=registry \
        -o template --template '{{range .items}}{{.metadata.name}} {{.status.phase}}{{"\n"}}{{end}}' \
        | grep Running | head -1 | cut -f1 -d' ')

$ kubectl port-forward --namespace kube-system $POD 5000:5000 &
```

Now you can build and push images on your local computer as
`localhost:5000/yourname/container` and those images will be available inside
your Kubernetes cluster with the same name.
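
For example, with the port-forward in place, a build-and-push from your local
machine might look like this (the image name is illustrative):

``` console
$ docker build -t localhost:5000/yourname/container .
$ docker push localhost:5000/yourname/container
```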

More Extensions
===============

- [Use GCS as storage backend](gcs/README.md)
- [Enable TLS/SSL](tls/README.md)
- [Enable Authentication](auth/README.md)

Future improvements
-------------------

- Allow port-forwarding to a Service rather than a pod (#15180)