
I am using a third party library that creates sibling docker containers via:

docker run -d -v /var/run/docker.sock:/var/run/docker.sock ...

I am trying to create a Kubernetes deployment out of the above container, but currently getting:

Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?

This is expected because I am not declaring /var/run/docker.sock as a volume in the deployment yaml.

The problem is I don't know how to do this. Is it possible to mount /var/run/docker.sock as a volume in a deployment yaml?

If not, what is the best approach to run docker sibling-containers from within a Kubernetes deployment/pod?

rys

2 Answers


Unverified, as it sounds brittle to me to start a container outside of Kubernetes' supervision, but you should be able to mount /var/run/docker.sock with a hostPath volume.

Example variation from the documentation:

apiVersion: v1
kind: Pod
metadata:
  name: test-pd
spec:
  containers:
  - image: gcr.io/google_containers/test-webserver
    name: test-container
    volumeMounts:
    - mountPath: /var/run/docker.sock
      name: docker-sock-volume
  volumes:
  - name: docker-sock-volume
    hostPath:
      # location on host
      path: /var/run/docker.sock
      # this field is optional
      type: File

I think a simple mount should be enough to allow the Docker client within the container to communicate with the Docker daemon on the host. If you get a write permission error, it means you need to run your container as a privileged container using a securityContext object, like so (just an extract from the above to show the addition; values taken from the documentation):

spec:
  containers:
  - image: gcr.io/google_containers/test-webserver
    securityContext:
      privileged: true
    name: test-container
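Since the question asks specifically about a Deployment, the same hostPath volume and securityContext carry over unchanged into a Deployment's pod template. A minimal sketch, assuming the same placeholder image as above (the name and labels are illustrative, not from the original):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: docker-launcher            # placeholder name
spec:
  replicas: 1
  selector:
    matchLabels:
      app: docker-launcher
  template:
    metadata:
      labels:
        app: docker-launcher
    spec:
      containers:
      - image: gcr.io/google_containers/test-webserver   # replace with your image
        name: launcher
        securityContext:
          privileged: true         # only needed if the plain socket mount hits permission errors
        volumeMounts:
        - mountPath: /var/run/docker.sock
          name: docker-sock-volume
      volumes:
      - name: docker-sock-volume
        hostPath:
          path: /var/run/docker.sock
          type: File
```

Note that containers started through the mounted socket are siblings managed by the host's Docker daemon, not by Kubernetes, so they won't be cleaned up if the pod is rescheduled.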
Tensibai
  • This worked, thanks. Yeah it is a third party tool so it is not ideal. But I at least want the main container in Kubernetes to make it more reliable. The container ramps up temporary containers with browsers for automation UI testing, then the browser container is destroyed. – rys Nov 07 '17 at 17:23
  • @rys yes, that was a case I had thought of; you may still run into problems if the node load goes too high, as k8s may move your 'launcher' container. But I assume the failure of the test suite is something acceptable in this case – Tensibai Nov 08 '17 at 06:55
  • This is no longer supported https://cloud.google.com/kubernetes-engine/docs/deprecations/docker-containerd. You can use sidecar dind or k8s jobs. I have the exact same problem and honestly the community has been very hostile when I've asked about this lol here's me trying to get an answer: https://www.reddit.com/r/kubernetes/comments/yrf215/eks_ive_been_struggling_with_the_upcoming_removal/ – Shanteva Dec 14 '22 at 22:07
  • The question was specifically about a k8s cluster using Docker as the container runtime. Kubernetes has moved from Docker-only to the CRI interface, so obviously this won't apply to a managed k8s not using Docker as its runtime. – Tensibai Dec 15 '22 at 06:12

Although this is a working solution (I use it myself), there are some drawbacks to running Docker in a Kubernetes pod by mounting /var/run/docker.sock.

Mostly the fact you are working with Docker containers outside the control of Kubernetes.

Another suggested solution I found is using a side-car container in your pod. See A Case for Docker-in-Docker on Kubernetes; it has two parts, and the proposed solution is in part 2.
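A minimal sketch of that side-car idea, assuming the official docker:dind image and a plain TCP socket on localhost (safe only because both containers share the pod's network namespace; the pod name, container names, and sleep command are placeholders, not from the linked article):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: dind-sidecar             # placeholder name
spec:
  containers:
  - name: client
    image: docker:latest         # has the docker CLI; talks to the sidecar daemon
    env:
    - name: DOCKER_HOST
      value: tcp://localhost:2375
    command: ["sleep", "infinity"]
  - name: dind-daemon
    image: docker:dind           # Docker daemon running as a sidecar
    securityContext:
      privileged: true           # dind requires a privileged container
    env:
    - name: DOCKER_TLS_CERTDIR   # empty value disables TLS, exposing plain TCP on 2375
      value: ""
    volumeMounts:
    - name: docker-graph-storage
      mountPath: /var/lib/docker
  volumes:
  - name: docker-graph-storage
    emptyDir: {}                 # daemon storage lives and dies with the pod
```

With this layout the containers the client creates are managed inside the pod's dind daemon rather than on the host, so they are torn down together with the pod.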

I hope this helps.

Eldad Assis