
I am using MinIO to create an S3-like object-store server, and I want to test some code against this server during my CI/CD process.
Using GitHub Actions, I tried to add MinIO as a service in the workflow file, but since MinIO requires a command and some arguments, I can't actually run it using this mechanism.
This is the relevant part of the configuration from my ci.yml:

minio-container:
  runs-on: ubuntu-latest
  container: python:3.8.2

  services:
    minio:
      image: minio/minio:latest
      ports:
        - 9000:9000
      env:
        MINIO_ACCESS_KEY: XXXX
        MINIO_SECRET_KEY: XXXXX

I read a little bit and figured out that behind the scenes GitHub runs `docker create [OPTIONS] IMAGE_NAME` for each service, but I would also need it to be able to run `docker create [OPTIONS] IMAGE_NAME COMMAND [ARGS]`.

In case this is not implemented yet, what other options can I try?

  • I [answered](https://stackoverflow.com/a/64188150/1423507) [a similar question](https://stackoverflow.com/q/64031598/1423507) if that could help. – masseyb Oct 03 '20 at 19:30

2 Answers


From a quick look at the GitHub Actions documentation, this is not yet supported. You can easily get around this by using the MinIO image from Bitnami, which starts the server without requiring a command.

I believe something like this should work:

services:
  minio:
    image: bitnami/minio:latest
    env:
      MINIO_ACCESS_KEY: minio
      MINIO_SECRET_KEY: minio123
    ports:
      - 9000:9000
    options: --name minio-server
  • It is better to use MinIO's official images published at quay.io/minio/minio or minio/minio — Bitnami images are not supported by the MinIO project. – Harshavardhana Nov 10 '21 at 19:13
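
A fuller job sketch built around that service block (a minimal sketch, assuming the Bitnami image; the step names and the `curl` readiness loop are illustrative additions, using MinIO's documented `/minio/health/live` endpoint — inside a job container the service is reachable by its service name, not localhost):

```yaml
jobs:
  test:
    runs-on: ubuntu-latest
    container: python:3.8.2
    services:
      minio:
        image: bitnami/minio:latest
        env:
          MINIO_ACCESS_KEY: minio
          MINIO_SECRET_KEY: minio123
        ports:
          - 9000:9000
        options: --name minio-server
    steps:
      - uses: actions/checkout@v2
      - name: Wait for MinIO
        # The hostname "minio" is the service name declared above.
        run: |
          until curl -sf http://minio:9000/minio/health/live; do
            sleep 1
          done
```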

On closer inspection, there is a way. But before I came to it, I tried a couple of ideas:

- First I thought I could mount the directory with the source code into the container and run one of the project files (a script) by specifying the `--entrypoint` option, but services are started before `git clone`.
- Then I thought that maybe I could pass a command to the container, but no, that is not possible.
- The third option I considered was passing a command via an environment variable to some executable that comes with the image, supposedly a shell. But shells take a path to a script, not a command in an environment variable.
- Then I thought, "let the service die" — I just need to restart the container after I clone the repository. But that brings nothing to the table compared to...

"just create the container by hand." Which is what I did:

.github/workflows/django.yml:

...
jobs:
    build:
        runs-on: ubuntu-latest
        container: python:3.5-alpine3.12
        steps:
            - uses: actions/checkout@v2
            - run: apk add expect && unbuffer ./create-cypress-container.sh
...

create-cypress-container.sh:

#!/bin/sh -eux
apk add docker jq
# The job container and the container we start must share a Docker
# network; look up the network the current (job) container is attached to.
network=$(docker inspect --format '{{json .NetworkSettings.Networks}}' `hostname` \
  | jq -r 'keys[0]')
docker pull -q cypress/base:12
# Mount the runner's work directory the same way the job container does,
# so $GITHUB_WORKSPACE resolves to the cloned repository.
docker run \
  -v /home/runner/work:/__w \
  -w "$GITHUB_WORKSPACE" \
  --name cypress \
  --network "$network" \
  -d \
  cypress/base:12 sh -xc 'ls && whoami && pwd'
sleep 10
docker ps
docker logs cypress
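
The network lookup in the script can be tried without a Docker daemon against a hard-coded sample of `docker inspect` output (the JSON below is an illustrative shape with a made-up network name, not captured from a real runner; requires `jq`):

```shell
#!/bin/sh -eu
# Illustrative sample of what `docker inspect --format
# '{{json .NetworkSettings.Networks}}' $(hostname)` returns: a JSON
# object keyed by network name.
sample='{"github_network_53269bd5":{"IPAddress":"172.18.0.3"}}'

# keys[0] picks the (single) network name out of that object.
network=$(printf '%s' "$sample" | jq -r 'keys[0]')
echo "$network"
```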

The job container is started with the following options (see Initialize containers > Starting job container in the workflow log):

...
--workdir /__w/PROJECT_NAME/PROJECT_NAME
-v "/home/runner/work":"/__w"
...

and environment variables:

...
GITHUB_WORKSPACE='/__w/PROJECT_NAME/PROJECT_NAME'
...

/__w/PROJECT_NAME/PROJECT_NAME is where your repository is cloned.

P.S. That said, I'm going to run the front-end and back-end tests in separate jobs, which should simplify matters and might eliminate the need to manually start containers.
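
That split could look something like the following sketch (job names and the back-end test command are assumptions for illustration, not from my actual workflow):

```yaml
jobs:
  backend:
    runs-on: ubuntu-latest
    container: python:3.5-alpine3.12
    steps:
      - uses: actions/checkout@v2
      - run: ./run-backend-tests.sh   # hypothetical test script
  frontend:
    runs-on: ubuntu-latest
    # Running directly in the Cypress image removes the need to
    # hand-create a second container.
    container: cypress/base:12
    steps:
      - uses: actions/checkout@v2
      - run: npx cypress run
```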
