
Here's my problem: I want to build a chroot environment inside a Docker container. The problem is that debootstrap cannot run, because it cannot mount proc in the chroot:

W: Failure trying to run: chroot /var/chroot mount -t proc proc /proc

(in the log the problem turns out to be: mount: permission denied)

If I run the container with --privileged, it (of course) works... but I'd really like to debootstrap the chroot in the Dockerfile (much cleaner). Is there a way I can get it to work?

Thanks a lot!

fbrusch

8 Answers


You could use the fakechroot variant of debootstrap, like this:

fakechroot fakeroot debootstrap --variant=fakechroot ...

Cheers!
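In a Dockerfile this could look something like the sketch below (the base image, suite, and target path are assumptions for illustration, not part of the original answer):

```dockerfile
FROM debian:stable-slim
# fakeroot/fakechroot fake the privileged calls (chroot, mknod, chown)
# via LD_PRELOAD, so no real privileges are needed during the build
RUN apt-get -qq update \
 && apt-get -q install --assume-yes debootstrap fakeroot fakechroot
RUN fakechroot fakeroot debootstrap --variant=fakechroot sid /var/chroot
```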

Luis Alejandro

No, this is not currently possible.

Issue #1916 (which concerns running privileged operations during docker build) is still open. There was discussion at one point of adding a command-line flag and a RUNP command, but neither of these has been implemented.

Nathan Osman

Adding --cap-add=SYS_ADMIN --security-opt apparmor:unconfined to the docker run command works for me.

See moby/moby issue 16429

  • in combination with `--privileged` flag or config this will work, but OP has specified they would like to avoid this. – MrMesees Feb 18 '19 at 08:38

This still doesn't work (2018-05-31).

Currently the only option is debootstrap followed by docker import (import from a local directory):

# mkdir /path/to/target
# debootstrap bionic /path/to/target
# tar -C /path/to/target -c . | docker import - ubuntu:bionic
Olaf Dietsche

debootstrap version 1.0.107, available since Debian 10 Buster (July 2019) or from Debian 9 Stretch backports, has native support for Docker and allows building a Debian root image without requiring privileges.

  • Dockerfile:
FROM debian:buster-slim AS builder
RUN apt-get -qq update \
&& apt-get -q install --assume-yes debootstrap
ARG MIRROR=http://deb.debian.org/debian
ARG SUITE=sid
RUN debootstrap --variant=minbase "$SUITE" /work "$MIRROR"
RUN chroot /work apt-get -q clean

FROM scratch
COPY --from=builder /work /
CMD ["bash"]
  • docker build -t my-debian .
  • docker build -t my-debian:bullseye --build-arg SUITE=bullseye .
pmhahn

There is a fun workaround, but it involves running Docker twice. The first time, using a standard Docker image like ubuntu:latest, run only the first stage of debootstrap by using the --foreign option.

debootstrap --foreign bionic /path/to/target

Then prevent it from doing anything that would require privileges (and isn't needed anyway) by modifying the functions that will be used in the second stage:

sed -i '/setup_devices ()/a return 0' /path/to/target/debootstrap/functions
sed -i '/setup_proc ()/a return 0' /path/to/target/debootstrap/functions
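As a self-contained illustration of what those sed commands do, here is the same injection applied to a mock functions file (the contents below are invented for illustration; the real file is created by the first debootstrap stage):

```shell
# Mock up a tiny functions file (illustrative contents only)
cat > /tmp/functions <<'EOF'
setup_devices () {
    mknod -m 666 "$TARGET/dev/null" c 1 3
}
setup_proc () {
    mount -t proc proc "$TARGET/proc"
}
EOF

# Inject "return 0" right after each function's opening line,
# turning both functions into no-ops
sed -i '/setup_devices ()/a return 0' /tmp/functions
sed -i '/setup_proc ()/a return 0' /tmp/functions

cat /tmp/functions
```

After the injection each function returns immediately, so the second stage never attempts the privileged mknod or mount calls.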

The last step for that docker run is to have the container tar itself up into a directory mounted as a volume.

tar --exclude='dev/*' -cvf /guestpath/to/volume/rootfs.tar -C /path/to/target .
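The exclusion of dev/* matters because the first stage may have created device nodes there. A quick self-contained sketch of the behaviour, using made-up paths under /tmp:

```shell
# Build a toy target tree with a populated dev/ directory
mkdir -p /tmp/target/dev /tmp/target/etc
touch /tmp/target/dev/null /tmp/target/etc/hostname

# Archive it the same way, excluding the contents of dev/
tar --exclude='dev/*' -cf /tmp/rootfs.tar -C /tmp/target .

# etc/hostname is in the archive; dev/null is not
tar -tf /tmp/rootfs.tar
```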

OK, now prep for the second run. First, load your tar file as a Docker image.

cat /hostpath/to/volume/rootfs.tar | docker import - my_image:latest

Then run Docker using FROM my_image:latest and run the second debootstrap stage:

/debootstrap/debootstrap --second-stage

That might be obtuse, but it does work without requiring --privileged. You are effectively replacing the chroot call with a second Docker container.

corbin

This does not address the OP's requirement of doing chroot in a container without --privileged, but it is an alternative method that may be of use.

See Docker Moby for heterogeneous rootfs builds. It creates a temp directory on the host and builds a rootfs in it using debootstrap, which needs sudo. Then it creates a Docker image using:

FROM scratch
ADD rootfs.tar.xz /
CMD ["/bin/bash"]

This is a common recipe for running a pre-made rootfs in a Docker image. Once the image is built, it does not need special permissions, and it is supported by the Docker development team.

dturvene

Short answer: without privileged mode, no, there isn't a way.

Docker is targeted at microservices and is not a drop-in replacement for virtual machines. Having multiple installations in one container is definitely not congruent with that. Why not use multiple Docker containers instead?

Usman Ismail
  • I built an application that runs code snippets (mostly students' solutions to exercises) in caged environments. I run some hundreds of Node.js scripts at the same time; the best solution I found was chroot+aufs (I initially tried Docker as the execution cage, but hit some limitations, like not being able to share file descriptors between the controller process and the spawned caged children, which I use for IPC). I used to do all that in a VirtualBox machine, but now I am trying to take advantage of the awesome Docker ecosystem (provisioning, deploying, etc.). – fbrusch Oct 16 '14 at 16:13
    I am not clear why you said 'file descriptors' instead of files... but you can share files across containers with shared volumes – Rondo Oct 18 '14 at 02:52
    Please answer the question, not complain why people need to do it. Sometimes you can't choose your tools and need to do something. Sometimes you simply want to learn about it. – hazydev Jan 07 '16 at 00:17