
I want to do something like this, where I can run multiple commands in order:

db:
  image: postgres
web:
  build: .
  command: python manage.py migrate
  command: python manage.py runserver 0.0.0.0:8000
  volumes:
    - .:/code
  ports:
    - "8000:8000"
  links:
    - db
– Muhammad Reda, RustyShackleford

19 Answers


Figured it out: use `bash -c`.

Example:

command: bash -c "python manage.py migrate && python manage.py runserver 0.0.0.0:8000"

The same example in multiple lines:

command: >
    bash -c "python manage.py migrate
    && python manage.py runserver 0.0.0.0:8000"

Or:

command: bash -c "
    python manage.py migrate
    && python manage.py runserver 0.0.0.0:8000
  "
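Note that chaining with `&&` stops at the first failure and propagates its exit code, while `;` runs every command regardless; a quick check in plain sh:

```shell
# '&&' short-circuits: the second command is skipped and the
# non-zero exit code propagates.
sh -c "false && echo ran"
echo "exit=$?"    # exit=1

# ';' keeps going and reports the status of the last command only.
sh -c "false; echo ran"
echo "exit=$?"    # exit=0
```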
– x-yuri, RustyShackleford
  • @Pedram Make sure you are using an image that actually has bash installed. Some images may also require a direct path to bash, e.g. `/bin/bash` – codemaven May 10 '16 at 20:57
  • If there is no bash installed you could try `sh -c "your command"` – Chaoste Oct 19 '16 at 08:12
  • Make sure you wrap your commands in quotes when passing to bash, and I had to slip in a "sleep 5" to ensure the db was up, but it worked for me. – traday Oct 26 '16 at 01:38
  • Alpine-based images actually seem to have no bash installed – do as @Chaoste recommends and use `sh` instead: `[sh, -c, "cd /usr/src/app && npm start"]` – Florian Loch Feb 20 '17 at 23:02
  • Can also use just `ash` on Alpine :) – Jonathan Apr 09 '17 at 11:24
  • It seems commands are separated by `&&`; in this case, if one of my commands includes a `&` sign, like nohup, this will not work. – danny Apr 20 '17 at 05:54
  • toke/mosquitto also has bash, but I'm failing to do the same thing: `command: [bash, -c, "echo something > something"]` just keeps restarting and presents no log. PS: Compose v3. – Rinaldi Segecin Jun 22 '17 at 22:41
  • For me `bash -c` did not work, but using `/bin/bash -c` worked! – g.lahlou Sep 17 '18 at 08:45
  • I use `sh -cx` so I can also see the commands running; useful for debugging. – Mike D Aug 28 '19 at 18:18
  • The problem with these is that the command doesn't fail if any of the constituent commands fail. It always returns 0. – shinvu Dec 17 '20 at 14:06
  • When trying the multiline approach, I got the error `yaml.scanner.ScannerError: while scanning a quoted scalar`. So I kept it all on one line, but it was ugly... – John Apr 30 '21 at 17:56
  • What does `-c` mean? – Divelix Aug 03 '21 at 07:19
  • Is this just in Python 3, or in general: I have to drop the suffix ".py" and thus run with `python3 -m main`, else I get `/usr/bin/python3: Error while finding module specification for 'main.py' (ModuleNotFoundError: __path__ attribute not found on 'main' while trying to find 'main.py'). Try using 'main' instead of 'main.py' as the module name.` – questionto42standswithUkraine Jan 14 '22 at 10:52
  • @Divelix If I'm not mistaken, `-c` takes the next argument as a command, executes it and then exits the shell. Regarding what Florian Loch mentioned about Alpine, I believe it's always better to use the generic `sh` than `bash` unless you're running some Bash-specific commands, because `sh` (or `/bin/sh` if $PATH is not configured) is present in almost all images, whereas `bash` may or may not be. – natiiix Feb 14 '22 at 20:52

I run pre-startup stuff like migrations in a separate ephemeral container, like so (note: the compose file has to be of the version '2' type):

db:
  image: postgres
web:
  image: app
  command: python manage.py runserver 0.0.0.0:8000
  volumes:
    - .:/code
  ports:
    - "8000:8000"
  links:
    - db
  depends_on:
    - migration
migration:
  build: .
  image: app
  command: python manage.py migrate
  volumes:
    - .:/code
  links:
    - db
  depends_on:
    - db

This helps keep things clean and separate. Two things to consider:

  1. You have to ensure the correct startup sequence (using depends_on).

  2. You want to avoid multiple builds. This is achieved by tagging the image the first time round using build together with image; other containers can then refer to it via image.

– Pang, Bjorn Stiel
  • This seems like the best option to me, and I would like to use it. Can you elaborate on your tagging setup to avoid multiple builds? I would prefer to avoid extra steps, so if this needs some, I might go with `bash -c` above. – Stavros Korokithakis Apr 28 '16 at 12:21
  • In the yaml above, the build and tagging happens in the migration section. It's not really obvious at first sight, but docker-compose tags it when you specify both the build AND image properties – whereby the image property specifies the tag for that build. That can then be used subsequently without triggering a new build (if you look at web, you see it has no build, only an image property). Here are some more details: https://docs.docker.com/compose/compose-file/ – Bjorn Stiel Apr 28 '16 at 12:35
  • While I like the idea of this, the problem is that depends_on only ensures they start in that order, not that they are ready in that order. wait-for-it.sh may be the solution some people need. – traday Oct 26 '16 at 01:40
  • That is absolutely correct, and a bit of a shame that docker-compose doesn't support any fine-grained control like waiting for a container to exit or start listening on a port. But yes, a custom script does solve this, good point! – Bjorn Stiel Oct 26 '16 at 04:22
  • This answer gives incorrect and potentially destructive information about how depends_on works. – antonagestam Aug 07 '19 at 19:12
  • If you are using Alpine, you can use `apk add wait4ports`, then `wait4ports tcp://servicename:port` in your startup. No extra script needed. See https://github.com/erikogan/wait4ports. – Mike D Aug 28 '19 at 18:21
  • Does this command set from the docker-compose file override the `CMD` defined in the Dockerfile? – Chang Zhao Oct 03 '21 at 12:29
  • Thank you very much. – akash maurya Oct 16 '21 at 15:33

I recommend using sh as opposed to bash because it is more readily available on most Unix-based images (Alpine, etc.).

Here is an example docker-compose.yml:

version: '3'

services:
  app:
    build:
      context: .
    command: >
      sh -c "python manage.py wait_for_db &&
             python manage.py migrate &&
             python manage.py runserver 0.0.0.0:8000"

This will call the following commands in order:

  • python manage.py wait_for_db - wait for the DB to be ready
  • python manage.py migrate - run any migrations
  • python manage.py runserver 0.0.0.0:8000 - start my development server
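Note that `wait_for_db` is a custom Django management command, not something Django ships; under the hood it just needs to retry until the database port accepts connections. A generic, framework-free sketch of that idea (the function name and parameters are mine, not from the answer):

```python
import socket
import time


def wait_for_port(host, port, timeout=30.0, interval=1.0):
    """Poll until a TCP port accepts connections.

    Returns True as soon as a connection succeeds, or False if the
    port is still unreachable after `timeout` seconds.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            # create_connection raises OSError (refused/timeout) on failure
            with socket.create_connection((host, port), timeout=interval):
                return True
        except OSError:
            time.sleep(interval)
    return False
```

A real `wait_for_db` command would wrap a loop like this (or a database ping) inside `BaseCommand.handle`.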
– Pang, LondonAppDev
  • Personally this is my favourite, and cleanest, solution. – BugHunterUK Oct 10 '18 at 18:06
  • Mine too. As @LondonAppDev points out, bash isn't available by default in all containers, to optimize on space (e.g., most containers built on top of Alpine Linux). – ewilan Nov 20 '18 at 14:32
  • I had to escape the multiline && with a \ – Andre Van Zuydam Nov 22 '18 at 15:52
  • @AndreVanZuydam Hmmm, that's strange, I didn't need to do that. Did you surround with quotes? What flavour of docker are you running? – LondonAppDev Nov 22 '18 at 15:57
  • What does the `>` do in the first line? – oligofren Mar 20 '19 at 10:44
  • @oligofren the `>` is used to start a multi-line input (see https://stackoverflow.com/a/3790497/2220370). – LondonAppDev Mar 22 '19 at 15:50
  • This should have been the accepted answer: cleanest, and something you can reason about. – Malik Bagwala Apr 20 '20 at 05:12
  • Any thoughts why I would be getting "Error: Unknown Command "sh"", even though if I remove the command from docker-compose and start up the service, I can connect to the container and execute sh? – dade May 09 '20 at 19:50
  • @dade Strange, which image are you using? – LondonAppDev May 10 '20 at 20:17
  • @LondonAppDev it's all resolved. It had to do with the way "exec" was being called in the entrypoint script, which made it impossible to tack more commands onto it. Once I updated that, I was able to use the command in docker-compose. – dade May 11 '20 at 06:22
  • How can we get the output of a command executed this way? – Jonath P Sep 14 '20 at 10:00
  • @JonathP How do you mean? In Docker the output is printed automatically. – LondonAppDev Sep 15 '20 at 16:22
  • @LondonAppDev: Yes. I had made the mistake of having several lines with "command" and was wondering why it was not printing anything... Thanks for the confirmation. – Jonath P Sep 16 '20 at 12:27
  • Thanks, worked fine. I use the following solution: `command: sh -c "apk add bash && /wait-for-it.sh db:5432 -- echo -----5432 IS UP-----"` – storenth May 26 '22 at 14:59

This works for me:

version: '3.1'
services:
  db:
    image: postgres
  web:
    build: .
    command:
      - /bin/bash
      - -c
      - |
        python manage.py migrate
        python manage.py runserver 0.0.0.0:8000

    volumes:
      - .:/code
    ports:
      - "8000:8000"
    links:
      - db

docker-compose tries to substitute variables before running the command, so if you want bash to handle variables you'll need to escape the dollar signs by doubling them...

    command:
      - /bin/bash
      - -c
      - |
        var=$$(echo 'foo')
        echo $$var # prints foo

...otherwise you'll get an error:

Invalid interpolation format for "command" option in service "web":

– Pang, MatrixManAtYrService
  • Hi, mate. I met a problem: `unrecognized arguments: /bin/bash -c python3 /usr/local/airflow/__init__.py -C Local -T Windows`. The command in my docker-compose.yml is: `command: - /bin/bash - -c - | python3 /usr/local/airflow/__init__.py -C ${Client} -T ${Types}`. Do you know how to fix that? I added Client and Types to my .env file. – Newt May 04 '20 at 03:18
  • Here's a doc for you: https://docs.docker.com/compose/compose-file/#variable-substitution. I think what's happening is that your .env file places those variables in the container environment, but docker-compose is looking in your shell environment. Try instead `$${Types}` and `$${Client}`. I think this will prevent docker-compose from interpreting those variables and looking for their values in whatever shell you invoke docker-compose from, which means they're still around for bash to dereference (_after_ docker has processed your `.env` file). – MatrixManAtYrService May 06 '20 at 16:34
  • Thanks for your comment. I did what you said, in fact; that's how I got the `$(Client)` in the error information. I changed to reading environment variables with os.getenv in Python, which is easier. Thanks anyway. – Newt May 07 '20 at 02:14

Cleanest?

---
version: "2"
services:
  test:
    image: alpine
    entrypoint: ["/bin/sh","-c"]
    command:
      - |
        echo a
        echo b
        echo c
– MUHAHA

You can use entrypoint here. In Docker, the entrypoint is executed before the command, while the command is the default that runs when the container starts. So most applications carry out their setup procedure in an entrypoint file and, at the end, allow the command to run.

Make a shell script file, say docker-entrypoint.sh (the name does not matter), with the following contents:

#!/bin/bash
python manage.py migrate
exec "$@"

In the docker-compose.yml file, use it with `entrypoint: /docker-entrypoint.sh` and register the command as `command: python manage.py runserver 0.0.0.0:8000`. P.S.: do not forget to copy docker-entrypoint.sh along with your code.
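Wiring this up in compose might look like the following sketch (the path and service name are assumptions; the script must be executable and copied into the image):

```yaml
# Hypothetical wiring for the entrypoint script described above.
services:
  web:
    build: .
    entrypoint: /docker-entrypoint.sh   # runs migrations, then exec "$@"
    command: python manage.py runserver 0.0.0.0:8000
```

Because the script ends with `exec "$@"`, the command becomes the container's main process after the setup steps finish.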

– Harshad Yeola

Another idea:

If, as in this case, you build the container anyway, just place a startup script in it and run that with command. Or mount the startup script as a volume.
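A minimal sketch of such a startup script (the echo lines are placeholders; swap them for real steps such as the Django migrate/runserver pair):

```shell
#!/bin/sh
# start.sh – hypothetical startup script baked into the image
# (COPY start.sh /start.sh in the Dockerfile, then
#  `command: /start.sh` in docker-compose.yml).
set -e                          # stop at the first failing step
echo "running migrations"       # e.g. python manage.py migrate
exec echo "starting server"     # e.g. exec python manage.py runserver 0.0.0.0:8000
```

Using `exec` for the last step replaces the shell, so the server becomes PID 1 and receives stop signals directly.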

– rweng
  • Yes, in the end I created a run.sh script: `#!/bin/bash \n python manage.py migrate \n python manage.py runserver 0.0.0.0:8000` (ugly one-liner). – fero Nov 20 '17 at 12:43

* UPDATE *

I figured the best way to run some commands is to write a custom Dockerfile that does everything I want before the official CMD is run from the image.

docker-compose.yaml:

version: '3'

# Can be used as an alternative to VBox/Vagrant
services:

  mongo:
    container_name: mongo
    image: mongo
    build:
      context: .
      dockerfile: deploy/local/Dockerfile.mongo
    ports:
      - "27017:27017"
    volumes:
      - ../.data/mongodb:/data/db

Dockerfile.mongo:

FROM mongo:3.2.12

RUN mkdir -p /fixtures

COPY ./fixtures /fixtures

RUN (mongod --fork --syslog && \
     mongoimport --db wcm-local --collection clients --file /fixtures/clients.json && \
     mongoimport --db wcm-local --collection configs --file /fixtures/configs.json && \
     mongoimport --db wcm-local --collection content --file /fixtures/content.json && \
     mongoimport --db wcm-local --collection licenses --file /fixtures/licenses.json && \
     mongoimport --db wcm-local --collection lists --file /fixtures/lists.json && \
     mongoimport --db wcm-local --collection properties --file /fixtures/properties.json && \
     mongoimport --db wcm-local --collection videos --file /fixtures/videos.json)

This is probably the cleanest way to do it.

* OLD WAY *

I created a shell script with my commands. In this case I wanted to start mongod and run mongoimport, but calling mongod blocks you from running the rest.

docker-compose.yaml:

version: '3'

services:
  mongo:
    container_name: mongo
    image: mongo:3.2.12
    ports:
      - "27017:27017"
    volumes:
      - ./fixtures:/fixtures
      - ./deploy:/deploy
      - ../.data/mongodb:/data/db
    command: sh /deploy/local/start_mongod.sh

start_mongod.sh:

mongod --fork --syslog && \
mongoimport --db wcm-local --collection clients --file /fixtures/clients.json && \
mongoimport --db wcm-local --collection configs --file /fixtures/configs.json && \
mongoimport --db wcm-local --collection content --file /fixtures/content.json && \
mongoimport --db wcm-local --collection licenses --file /fixtures/licenses.json && \
mongoimport --db wcm-local --collection lists --file /fixtures/lists.json && \
mongoimport --db wcm-local --collection properties --file /fixtures/properties.json && \
mongoimport --db wcm-local --collection videos --file /fixtures/videos.json && \
pkill -f mongod && \
sleep 2 && \
mongod

So this forks mongod, does the mongoimport, then kills the forked mongod (which is detached) and starts it up again without detaching. Not sure if there is a way to attach to a forked process, but this does work.

NOTE: If you strictly want to load some initial db data, this is the way to do it:

mongo_import.sh

#!/bin/bash
# Import from fixtures

# Used in build and docker-compose mongo (different dirs)
DIRECTORY=../deploy/local/mongo_fixtures
if [[ -d "/fixtures" ]]; then
    DIRECTORY=/fixtures
fi
echo ${DIRECTORY}

mongoimport --db wcm-local --collection clients --file ${DIRECTORY}/clients.json && \
mongoimport --db wcm-local --collection configs --file ${DIRECTORY}/configs.json && \
mongoimport --db wcm-local --collection content --file ${DIRECTORY}/content.json && \
mongoimport --db wcm-local --collection licenses --file ${DIRECTORY}/licenses.json && \
mongoimport --db wcm-local --collection lists --file ${DIRECTORY}/lists.json && \
mongoimport --db wcm-local --collection properties --file ${DIRECTORY}/properties.json && \
mongoimport --db wcm-local --collection videos --file ${DIRECTORY}/videos.json

The mongo_fixtures/*.json files were created via the mongoexport command.

docker-compose.yaml

version: '3'

services:
  mongo:
    container_name: mongo
    image: mongo:3.2.12
    ports:
      - "27017:27017"
    volumes:
      - mongo-data:/data/db:cached
      - ./deploy/local/mongo_fixtures:/fixtures
      - ./deploy/local/mongo_import.sh:/docker-entrypoint-initdb.d/mongo_import.sh


volumes:
  mongo-data:
    driver: local
– radtek

To run multiple commands in the docker-compose file, use `bash -c`:

command: >
    bash -c "python manage.py makemigrations
    && python manage.py migrate
    && python manage.py runserver 0.0.0.0:8000"

Source: https://intellipaat.com/community/19590/docker-run-multiple-commands-using-docker-compose-at-once?show=19597#a19597

– Amritpal Singh

There are many great answers in this thread already; however, I found that a combination of a few of them seemed to work best, especially for Debian-based users.

services:
  db:
    . . . 
  web:
    . . .
    depends_on:
       - "db"
    command: >      
      bash -c "./wait-for-it.sh db:5432 -- python manage.py makemigrations
      && python manage.py migrate
      && python manage.py runserver 0.0.0.0:8000"

Prerequisites: add wait-for-it.sh to your project directory.

Warning from the docs: "(When using wait-for-it.sh) in production, your database could become unavailable or move hosts at any time ... (This solution is for people that) don’t need this level of resilience."

Edit:

This is a cool short-term fix, but for a long-term solution you should try using entrypoints in the Dockerfiles for each image.

– User

Alpine-based images do not ship with bash installed, but you can use sh or ash, which link to /bin/busybox.

Example docker-compose.yml:

version: "3"
services:

  api:
    restart: unless-stopped
    command: ash -c "flask models init && flask run"
– Dinko Pehar

If you need to run more than one daemon process, there's a suggestion in the Docker documentation to use Supervisord in non-detached mode so all the sub-daemons output to stdout.

From another SO question, I discovered you can redirect the child processes' output to stdout. That way you can see all the output!
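A hedged sketch of such a Supervisord setup; the program names and commands are placeholders, but `nodaemon` and the `/dev/stdout` log settings are the standard way to keep everything in the foreground and visible via `docker logs`:

```ini
; supervisord.conf (sketch) – run two daemons and surface their output
[supervisord]
nodaemon=true                ; keep supervisord in the foreground (PID 1)

[program:web]
command=python manage.py runserver 0.0.0.0:8000
stdout_logfile=/dev/stdout   ; forward child stdout to the container's stdout
stdout_logfile_maxbytes=0    ; required when logging to a non-seekable file
stderr_logfile=/dev/stderr
stderr_logfile_maxbytes=0

[program:worker]
command=python manage.py run_worker   ; hypothetical second process
stdout_logfile=/dev/stdout
stdout_logfile_maxbytes=0
```

The container's command would then simply be `supervisord -c /etc/supervisord.conf`.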

– Tim Tisdall
  • Looking at this again, this answer seems more suited for running multiple commands in parallel instead of serially. – Tim Tisdall Aug 13 '18 at 12:48

That's my solution for this problem:

services:
  mongo1:
    container_name: mongo1
    image: mongo:4.4.4
    restart: always
#    OPTION 01:
#    command: >
#      bash -c "chmod +x /scripts/rs-init.sh
#      && sh /scripts/rs-init.sh"
#    OPTION 02:
    entrypoint: [ "bash", "-c", "chmod +x /scripts/rs-init.sh && sh /scripts/rs-init.sh"]
    ports:
      - "9042:9042"
    networks:
      - mongo-cluster
    volumes:
      - ~/mongors/data1:/data/db
      - ./rs-init.sh:/scripts/rs-init.sh
      - api_vol:/data/db
    environment:
      *env-vars
    depends_on:
      - mongo2
      - mongo3

– GtdDev

Use a tool such as wait-for-it or dockerize. These are small wrapper scripts which you can include in your application's image. Or write your own wrapper script to perform more application-specific commands. (According to https://docs.docker.com/compose/startup-order/.)
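For example, with dockerize (a sketch; the `-wait` and `-timeout` flags come from the dockerize README, and the binary must be installed in the image):

```yaml
# Sketch: block startup until the db port answers, then run the app.
web:
  build: .
  command: dockerize -wait tcp://db:5432 -timeout 30s python manage.py runserver 0.0.0.0:8000
  depends_on:
    - db
```

Everything after the dockerize flags is treated as the command to run once the wait succeeds.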

– Eran
  • Link to [wait-for-it.sh](https://github.com/vishnubob/wait-for-it). It should be noted (as it was in the docs you linked to): "The problem of waiting for a database (for example) to be ready is really just a subset of a much larger problem of distributed systems. In production, your database could become unavailable or move hosts at any time. Your application needs to be resilient to these types of failures." This solution is for people that "don't need this level of resilience". – User Jul 21 '20 at 22:22

Building on @Bjorn's answer, Docker Compose has since introduced special dependency conditions that let you wait until an "init container" has exited successfully, which gives:

db:
  image: postgres
web:
  image: app
  command: python manage.py runserver 0.0.0.0:8000
  depends_on:
    db:
      condition: service_started
    migration:
      condition: service_completed_successfully
migration:
  build: .
  image: app
  command: python manage.py migrate
  depends_on:
    - db

I'm not sure if you still need buildkit or not, but on my side it works with

DOCKER_BUILDKIT=1 COMPOSE_DOCKER_CLI_BUILD=1 docker-compose up
– Cyril Duchon-Doris

I ran into this while trying to get my Jenkins container set up to build Docker containers as the jenkins user.

I needed to touch the docker.sock file in the Dockerfile, as I bind-mount it later on in the docker-compose file. Unless I touched it first, it didn't exist yet. This worked for me.

Dockerfile:

USER root
RUN apt-get update && \
    apt-get -y install apt-transport-https \
      ca-certificates \
      curl \
      software-properties-common && \
    curl -fsSL https://download.docker.com/linux/$(. /etc/os-release; echo "$ID")/gpg > /tmp/dkey; apt-key add /tmp/dkey && \
    add-apt-repository \
      "deb [arch=amd64] https://download.docker.com/linux/$(. /etc/os-release; echo "$ID") \
      $(lsb_release -cs) \
      stable" && \
    apt-get update && \
    apt-get -y install docker-ce
RUN groupmod -g 492 docker && \
    usermod -aG docker jenkins && \
    touch /var/run/docker.sock && \
    chmod 777 /var/run/docker.sock

USER jenkins

docker-compose.yml:

version: '3.3'
services:
  jenkins_pipeline:
    build: .
    ports:
      - "8083:8083"
      - "50083:50080"
    volumes:
      - /root/pipeline/jenkins/mount_point_home:/var/jenkins_home
      - /var/run/docker.sock:/var/run/docker.sock

I was having the same problem, where I wanted to run my React app on port 3000 and Storybook on port 6006, both in the same container.

I tried to start both as entrypoint commands from the Dockerfile as well as using the docker-compose command option.

After spending time on this, I decided to separate these services into separate containers, and it worked like a charm.

– Mihir Bhende

In case anyone else is trying to figure out multiple commands with Windows-based containers, the following format works:
command: "cmd.exe /c call C:/Temp/script1.bat && dir && C:/Temp/script2.bat && ..."

Including the `call` directive was what fixed it for me.

Alternatively, if each command can execute without the previous commands succeeding, just separate each with semicolons:
command: "cmd.exe /c call C:/Temp/script1.bat; dir; C:/Temp/script2.bat; ... "

– Jeremy Beale

Try using ";" to separate the commands if you are on version two, e.g.:

command: "sleep 20; echo 'a'"

– chanllen