I currently have the script below, which is run every second by a cron job:
#!/bin/bash
LOCK_FILE=/src/lock-file.lock
exec 99>"$LOCK_FILE"
flock -xn 99 -c "node /src/cron.js" || exit 1
exec 99<&- # close fd 99
The code is based on these tutorial sites I found:
https://dev.to/rrampage/ensuring-that-a-shell-script-runs-exactly-once-3d3f
https://www.baeldung.com/linux/file-locking
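For reference, this is the fd-based pattern I understood those pages to be describing, with the flock call and the job as separate statements (the /tmp path and the echo here are just stand-ins for my real lock path and the node command):

```shell
#!/bin/bash
# Pattern I understood from the tutorials: take the lock on fd 99, then
# run the job as a plain command, so the kernel releases the lock
# automatically when the script exits and fd 99 is closed.
LOCK_FILE=/tmp/lock-file.lock   # /tmp path just for this example
exec 99>"$LOCK_FILE"
flock -xn 99 || exit 1
echo "got lock"                 # my real script runs node /src/cron.js here
```

I may have merged this form and the `flock FILE -c COMMAND` form incorrectly in my script, but I'm not sure.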
After a few days or so, the script gets stuck on the lock. If I run lslocks I see the following, which proves it's stuck on the lock even after I stop the cron job. This is a problem because the script effectively stops running every second.
COMMAND PID TYPE SIZE MODE M START END PATH
(undefined) -1 OFDLCK READ 0 0 0
flock 261 FLOCK WRITE 0 0 0 /root/99
As you can see above, the file is still locked, and it never seems to unlock for whatever reason. I read that I can use the -u option to unlock, but I also read that this is discouraged and that closing the file descriptor with exec 99<&- is preferred. The way the Node.js script is written, it should be impossible for it to get into an infinite loop.
For that matter, file descriptors are very confusing to me as well. What exactly does 99>"$LOCK_FILE" really do to the file /src/lock-file.lock? I can't find any explanation online; I just hear that you're supposed to do it. I'm aware that 99 is supposed to be a file descriptor that doesn't interfere with the standard descriptors like 0 (stdin) and 1 (stdout), but what it all means is still very confusing, which might be where my problem is.
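To try to work it out, I ran a tiny experiment in a scratch directory (the file name here is just made up for the test), and if I understand right, the redirection only opens the file on a descriptor:

```shell
#!/bin/bash
# exec 99>FILE opens FILE for writing on descriptor 99 in this shell.
# It creates the file (or truncates it) but writes nothing, so the
# file stays empty; 99 is just this process's handle to it.
exec 99>scratch.lock
ls -l /proc/$$/fd/99   # on Linux, the open descriptor is visible here
exec 99<&-             # closing the fd does not delete the file
ls -l scratch.lock     # the empty file is still on disk
```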
What is really weird is that it creates a file in my directory literally named 99. I thought a file descriptor was supposed to be just an integer, like an open port on a socket, not an actual file that gets created. I'm not sure whether this is part of the problem.
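I can reproduce the stray file with just this in an empty scratch directory, which makes me wonder whether, once -c is given, flock treats the 99 as a file name to lock rather than as a descriptor (I'm not sure if that's the right reading of it):

```shell
#!/bin/bash
cd "$(mktemp -d)"        # start from an empty scratch directory
flock -xn 99 -c "true"   # same shape as the call in my cron script
ls                       # a file literally named 99 now exists here
```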
My question is: does anyone see anything wrong with the code above that could cause a 'forever' lock situation?