
Let's suppose we have some long-running cronjob/worker on some production servers. We have a new application code release and are ready to deploy it to production.

What should we do with this cronjob/workers?

  • Wait for the cronjob/worker to complete its job and don't start the next one until the deployment is over, and also prevent all other cronjobs/workers from starting until the deployment is over?
  • Implement SIGTERM handlers in the cronjob/worker and send SIGTERM to all necessary processes before the deploy? (Sometimes it is rather difficult to implement this kind of handler; see the sketch after this list.)
  • Split the long-running cronjob/worker into parts, push them into a queue, and forget about this problem?
  • Any ideas?
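
For the SIGTERM option, here is a minimal Python sketch of what such a graceful-shutdown handler might look like. Everything in it (the loop structure, the process_next_item() placeholder) is my own illustration, not part of any actual worker:

```python
import signal
import sys
import time

# Flag flipped by the signal handler; the main loop checks it between work items.
shutdown_requested = False

def handle_sigterm(signum, frame):
    # A deploy script (or operator) asked us to stop; finish the current
    # unit of work and then exit instead of dying mid-task.
    global shutdown_requested
    shutdown_requested = True

signal.signal(signal.SIGTERM, handle_sigterm)

def process_next_item():
    # Placeholder for one unit of the worker's real job.
    time.sleep(1)

while not shutdown_requested:
    process_next_item()

# Exit cleanly so the deploy script knows the worker stopped at a safe point.
sys.exit(0)
```

The hard part, as the question notes, is when a single "unit of work" is itself long-running and can't easily be interrupted at a safe boundary.
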

1 Answer


I'm assuming you are worried about overwriting files that are being executed by cron at the time of deployment, or changing files that those jobs may use?

You could have job.lock files that each job touches when the job runs and deletes once it's done.

At the start of a deployment, you could

  • create a deploy.lock file that all jobs/workers watch and don't start if present
  • have your deployment process watch the job.lock files, and only proceed if there's none
  • delete deploy.lock once done

In essence, the same thing that you suggest in your first bullet point.
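
A rough Python sketch of that lock-file dance; the paths, the job names, and the work/do_deploy callables are all placeholders of my own, not from your setup:

```python
import glob
import os
import time

DEPLOY_LOCK = "/var/run/myapp/deploy.lock"   # hypothetical paths
JOB_LOCK_DIR = "/var/run/myapp/jobs"

os.makedirs(JOB_LOCK_DIR, exist_ok=True)

def run_job(name, work):
    # Refuse to start while a deployment is in progress.
    if os.path.exists(DEPLOY_LOCK):
        return
    lock_path = os.path.join(JOB_LOCK_DIR, f"{name}.lock")
    # Touch the per-job lock for the duration of the job.
    open(lock_path, "w").close()
    try:
        work()
    finally:
        os.remove(lock_path)

def deploy(do_deploy):
    # Stop new jobs from starting...
    open(DEPLOY_LOCK, "w").close()
    try:
        # ...then wait for already-running jobs to finish before deploying.
        while glob.glob(os.path.join(JOB_LOCK_DIR, "*.lock")):
            time.sleep(5)
        do_deploy()
    finally:
        os.remove(DEPLOY_LOCK)
```

Note there is a small window between checking deploy.lock and touching the job's own lock; if that matters in your setup, atomic creation (os.open with O_CREAT | O_EXCL) or fcntl.flock would close it.
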

I read this question not long ago, and it discusses the exact concern: Can a shell script delete or overwrite itself?

I'm not that confident about shell read and disk write buffers, so I don't want to talk nonsense. Maybe my approach is too simplistic or isn't effective at all; maybe overwriting a file that is being executed is harmless. Curious to find out too.
