
I'm creating an app that runs on Google App Engine with the custom flex environment. The app uses several relative symlinks that point to other directories in the project, but somehow those symlinks are ignored when I deploy the app.

It seems that the gcloud tool sends the source context (that is, all the files in my project) to the Google container builder before building and deploying the app:

$ gcloud --project=my-project --verbosity=info app deploy
(...)
Beginning deployment of service [default]...
Building and pushing image for service [default]
INFO: Uploading [/tmp/tmpZ4Jha_/src.tgz] to [eu.gcr.io/my-project/appengine/default.20171212t160803:latest]
Started cloud build [some-uid].

If I extract the contents of the .tgz file, I can see that all the files and directories in the project are there, except for symlinks pointing to directories (symlinks to files are included, though). So the source context is missing all the symlinks to directories.
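A quick way to confirm this is to list the uploaded archive, assuming you grab the src.tgz path from the INFO line above while the deploy is still running (my-linked-dir below is a made-up symlink name):

# A directory symlink never shows up in the listing, while file symlinks do.
tar tzf /tmp/tmpZ4Jha_/src.tgz | grep my-linked-dir    # prints nothing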

Not using symlinks is not an option, so does anybody know how to include symlinks to directories in the source context sent to Google?

Although I don't think it's relevant, here are the contents of the app.yaml:

env: flex
runtime: custom

runtime_config:
  document_root: docroot

manual_scaling:
  instances: 1

resources:
  cpu: 2
  memory_gb: 2
  disk_size_gb: 10
Wessel van der Linden

2 Answers


I've worked around this by deploying my Python Cloud Functions from a temp directory, using tar (on a Mac) to dereference the symlinks and copy the files inside symlinked directories:

  tar hc --exclude='__pycache__' {name} | tar x -C {tmpdirname}
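Expanded into a full deploy step, it looks roughly like the sketch below; the function name my_function, the src directory and the runtime are assumptions for illustration, not part of the original workaround:

# Stage the source in a temp dir, dereferencing symlinks via tar's h flag,
# then deploy from the staged copy instead of the repository itself.
TMP_DIR=$(mktemp -d)
tar hc --exclude='__pycache__' src | tar x -C "$TMP_DIR"
gcloud functions deploy my_function --source="$TMP_DIR/src" --runtime=python39
rm -rf "$TMP_DIR"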
Steve Alexander

I use a workaround similar to Steve Alexander's, but in a more elaborate way: I have a shell script that creates a temp dir, copies the dependencies into it, sets up the environment and runs the gcloud command. It is basically something like this:

#!/bin/bash
set -e  # abort on the first failing command

# Load build/deploy environment variables.
. .env.sh

SRC_FILE=$1           # Python file containing the function code
SRC_FUNC=$2           # name of the function to deploy
TRIGGER_RESOURCE=$3   # e.g. a bucket name
TRIGGER_EVENT=$4      # e.g. google.storage.object.finalize

TMP_DIR=./tmp/deploy

# Stage the dependencies and the entry point in the temp dir.
mkdir -p "$TMP_DIR"
cp -r modules/dep1 "$TMP_DIR"
cp -r modules/dep2 "$TMP_DIR"
cp requirements.txt "$TMP_DIR"
cp "$SRC_FILE" "$TMP_DIR/main.py"

# Deploy from the staged copy instead of the repository root.
gcloud functions deploy "$SRC_FUNC" \
    --source="$TMP_DIR" \
    --runtime=python39 \
    --trigger-resource "$TRIGGER_RESOURCE" \
    --trigger-event "$TRIGGER_EVENT" \
    --env-vars-file=./.env.yml \
    --timeout 540s

rm -rf "$TMP_DIR"

This script is tailored for a Google Storage event, i.e. to deploy a function that should be triggered when a new file is uploaded to a bucket:

./deploy.func.sh functions.py gs_new_file_event project-bucket1 google.storage.object.finalize

So in the example above, gs_new_file_event is a Python function defined in functions.py. The script copies the file with the Python code to the temp dir as main.py, which is what the function deployer expects. This works well for a project where multiple cloud functions are defined in the same repository that also contains their dependencies, and it is not possible to have all of the apps and functions defined in a top-level main.py. The script removes the temp dir when it is done, but it is a good idea to add the path to .gitignore.
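For example, with the TMP_DIR path used above:

echo 'tmp/' >> .gitignore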

Here are a few things you can do to adapt the script to your own needs:

  • Set up the env files with all the required variables: .env.sh for the build and deployment, .env.yml for the function/app runtime (see the sketch after this list).
  • Fix the paths and dependencies.
  • Improve the handling of the command line arguments to make it more flexible and work for all kinds of GCloud triggers.
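As an illustration, a minimal .env.sh could look like this; the values are made-up examples, though CLOUDSDK_CORE_PROJECT and CLOUDSDK_COMPUTE_REGION are real gcloud configuration variables:

# .env.sh -- sourced by the deploy script before running gcloud.
export CLOUDSDK_CORE_PROJECT=my-project        # example project ID
export CLOUDSDK_COMPUTE_REGION=europe-west1    # example region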
mac13k