# Private Docker Registry in Kubernetes

Kubernetes offers an optional private Docker registry addon, which you can turn
on when you bring up a cluster or install later. This gives you a place to
store truly private Docker images for your cluster.

## How it works

The private registry runs as a `Pod` in your cluster. It does not currently
support SSL or authentication, which triggers Docker's "insecure registry"
logic. To work around this, we run a proxy on each node in the cluster,
exposing a port onto the node (via a `hostPort`), which Docker accepts as
"secure", since it is accessed by `localhost`.

## Turning it on

Some cluster installs (e.g. GCE) support this as a cluster-birth flag. The
`ENABLE_CLUSTER_REGISTRY` variable in `cluster/gce/config-default.sh` governs
whether the registry is run or not. To set this flag, you can specify
`KUBE_ENABLE_CLUSTER_REGISTRY=true` when running `kube-up.sh`. If your cluster
does not include this flag, the following steps should work. Note that some of
this is cloud-provider specific, so you may have to customize it a bit.
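On GCE, for example, the flag can be set inline when bringing up the cluster
(this assumes you are running from the root of a Kubernetes source tree):

```console
$ KUBE_ENABLE_CLUSTER_REGISTRY=true ./cluster/kube-up.sh
```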
### Make some storage

The primary job of the registry is to store data. To do that we have to decide
where to store it. For cloud environments that have networked storage, we can
use Kubernetes's `PersistentVolume` abstraction. The following template is
expanded by `salt` in the GCE cluster turnup, but can easily be adapted to
other situations:
<!-- BEGIN MUNGE: EXAMPLE registry-pv.yaml.in -->

```yaml
kind: PersistentVolume
apiVersion: v1
metadata:
  name: kube-system-kube-registry-pv
  labels:
    kubernetes.io/cluster-service: "true"
spec:
{% if pillar.get('cluster_registry_disk_type', '') == 'gce' %}
  capacity:
    storage: {{ pillar['cluster_registry_disk_size'] }}
  accessModes:
    - ReadWriteOnce
  gcePersistentDisk:
    pdName: "{{ pillar['cluster_registry_disk_name'] }}"
    fsType: "ext4"
{% endif %}
```

<!-- END MUNGE: EXAMPLE registry-pv.yaml.in -->
If, for example, you wanted to use NFS you would just need to change the
`gcePersistentDisk` block to `nfs`. See
[here](https://kubernetes.io/docs/user-guide/volumes.md) for more details on volumes.
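As a sketch, an NFS-backed version of the volume would replace the
`gcePersistentDisk` block with something like the following (the server name
and export path here are placeholders for your own NFS setup):

```yaml
nfs:
  server: nfs-server.example.com  # placeholder: your NFS server
  path: "/exports/registry"       # placeholder: your exported directory
```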
Note that in any case, the storage (in this case the GCE `PersistentDisk`) must be
created independently - this is not something Kubernetes manages for you (yet).
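On GCE, for example, the disk could be created ahead of time with `gcloud`
(the disk name and size here are illustrative and must match what your
`PersistentVolume` references):

```console
$ gcloud compute disks create kube-system-kube-registry-pd --size 200GB
```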
### I don't want or don't have persistent storage

If you are running in a place that doesn't have networked storage, or if you
just want to kick the tires on this without committing to it, you can easily
adapt the `ReplicationController` specification below to use a simple
`emptyDir` volume instead of a `persistentVolumeClaim`.
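In that case, the `volumes` section of the `ReplicationController` below would
look something like this (note that anything stored in an `emptyDir` volume is
lost when the `Pod` goes away):

```yaml
volumes:
- name: image-store
  emptyDir: {}
```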
## Claim the storage

Now that the Kubernetes cluster knows that some storage exists, you can put a
claim on that storage. As with the `PersistentVolume` above, you can start
with the `salt` template:
<!-- BEGIN MUNGE: EXAMPLE registry-pvc.yaml.in -->

```yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: kube-registry-pvc
  namespace: kube-system
  labels:
    kubernetes.io/cluster-service: "true"
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: {{ pillar['cluster_registry_disk_size'] }}
```

<!-- END MUNGE: EXAMPLE registry-pvc.yaml.in -->
This tells Kubernetes that you want to use storage, and the `PersistentVolume`
you created before will be bound to this claim (unless you have other
`PersistentVolumes` in which case those might get bound instead). This claim
gives you the right to use this storage until you release the claim.
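You can check that the claim bound to your volume with `kubectl` (a bound
claim shows `Bound` in the `STATUS` column):

```console
$ kubectl get pvc kube-registry-pvc --namespace=kube-system
```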
## Run the registry

Now we can run a Docker registry:
<!-- BEGIN MUNGE: EXAMPLE registry-rc.yaml -->

```yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: kube-registry-v0
  namespace: kube-system
  labels:
    k8s-app: kube-registry-upstream
    version: v0
    kubernetes.io/cluster-service: "true"
spec:
  replicas: 1
  selector:
    k8s-app: kube-registry-upstream
    version: v0
  template:
    metadata:
      labels:
        k8s-app: kube-registry-upstream
        version: v0
        kubernetes.io/cluster-service: "true"
    spec:
      containers:
      - name: registry
        image: registry:2
        resources:
          limits:
            cpu: 100m
            memory: 100Mi
        env:
        - name: REGISTRY_HTTP_ADDR
          value: :5000
        - name: REGISTRY_STORAGE_FILESYSTEM_ROOTDIRECTORY
          value: /var/lib/registry
        volumeMounts:
        - name: image-store
          mountPath: /var/lib/registry
        ports:
        - containerPort: 5000
          name: registry
          protocol: TCP
      volumes:
      - name: image-store
        persistentVolumeClaim:
          claimName: kube-registry-pvc
```

<!-- END MUNGE: EXAMPLE registry-rc.yaml -->
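Assuming you saved the manifest under the file name in the MUNGE comment above
(the name is illustrative), it can be created with `kubectl`:

```console
$ kubectl create -f registry-rc.yaml
```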
## Expose the registry in the cluster

Now that we have a registry `Pod` running, we can expose it as a Service:
<!-- BEGIN MUNGE: EXAMPLE registry-svc.yaml -->

```yaml
apiVersion: v1
kind: Service
metadata:
  name: kube-registry
  namespace: kube-system
  labels:
    k8s-app: kube-registry-upstream
    kubernetes.io/cluster-service: "true"
    kubernetes.io/name: "KubeRegistry"
spec:
  selector:
    k8s-app: kube-registry-upstream
  ports:
  - name: registry
    port: 5000
    protocol: TCP
```

<!-- END MUNGE: EXAMPLE registry-svc.yaml -->
## Expose the registry on each node

Now that we have a running `Service`, we need to expose it onto each Kubernetes
`Node` so that Docker will see it as `localhost`. We can load a `Pod` on every
node by creating the following `DaemonSet`:
<!-- BEGIN MUNGE: EXAMPLE ../../saltbase/salt/kube-registry-proxy/kube-registry-proxy.yaml -->

```yaml
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: kube-registry-proxy
  namespace: kube-system
  labels:
    k8s-app: kube-registry-proxy
    kubernetes.io/cluster-service: "true"
    version: v0.4
spec:
  template:
    metadata:
      labels:
        k8s-app: kube-registry-proxy
        kubernetes.io/name: "kube-registry-proxy"
        kubernetes.io/cluster-service: "true"
        version: v0.4
    spec:
      containers:
      - name: kube-registry-proxy
        image: gcr.io/google_containers/kube-registry-proxy:0.4
        resources:
          limits:
            cpu: 100m
            memory: 50Mi
        env:
        - name: REGISTRY_HOST
          value: kube-registry.kube-system.svc.cluster.local
        - name: REGISTRY_PORT
          value: "5000"
        ports:
        - name: registry
          containerPort: 80
          hostPort: 5000
```

<!-- END MUNGE: EXAMPLE ../../saltbase/salt/kube-registry-proxy/kube-registry-proxy.yaml -->
When modifying the replication-controller, service, and daemon-set definitions,
take care to ensure _unique_ identifiers for the rc-svc couple and the
daemon-set. Failing to do so will register the localhost proxy daemon-sets with
the upstream service. As a result they will then try to proxy themselves,
which will, for obvious reasons, not work.
This ensures that port 5000 on each node is directed to the registry `Service`.
You should be able to verify that it is running by hitting port 5000 with a web
browser and getting a 404 error:

```console
$ curl localhost:5000
404 page not found
```
## Using the registry

To use an image hosted by this registry, simply say this in your `Pod`'s
`spec.containers[].image` field:

```yaml
image: localhost:5000/user/container
```
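In context, a minimal `Pod` using such an image might look like this (the pod,
container, and image names are placeholders):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: my-app  # placeholder name
spec:
  containers:
  - name: my-app
    image: localhost:5000/user/container  # pulled via the node-local proxy
```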
Before you can use the registry, you have to be able to get images into it,
though. If you are building an image on your Kubernetes `Node`, you can spell
out `localhost:5000` when you build and push. More likely, though, you are
building locally and want to push to your cluster.

You can use `kubectl` to set up a port-forward from your local node to a
running Pod:
```console
$ POD=$(kubectl get pods --namespace kube-system -l k8s-app=kube-registry-upstream \
    -o template --template '{{range .items}}{{.metadata.name}} {{.status.phase}}{{"\n"}}{{end}}' \
    | grep Running | head -1 | cut -f1 -d' ')

$ kubectl port-forward --namespace kube-system $POD 5000:5000 &
```
Now you can build and push images on your local computer as
`localhost:5000/yourname/container` and those images will be available inside
your Kubernetes cluster with the same name.
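With the port-forward in place, a typical build-and-push flow looks like this
(`yourname/container` is a placeholder image name):

```console
$ docker build -t localhost:5000/yourname/container .
$ docker push localhost:5000/yourname/container
```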
## More Extensions

- [Use GCS as storage backend](gcs/README.md)
- [Enable TLS/SSL](tls/README.md)
- [Enable Authentication](auth/README.md)

## Future improvements

* Allow port-forwarding to a Service rather than a pod (#15180)