Flocking shell

Yesterday, I had an interesting problem. My cron task spawned hundreds of copies of itself because it was blocking on a database call. Spawn enough copies and you'll eventually run out of file descriptors and be unable to fork any more processes. To avoid further repeats, I needed to add a check to see if the script was already running and exit early.

The script in question also needs to be able to spawn a specific instance; an instance in this case could mean connecting to a different database. The important takeaway is that each instance must only be allowed to spawn a single copy of itself.

I could've gone down the route of creating a PID or lock file (storing the current process ID of the script), checking on startup whether the recorded PID was still running, and exiting if it was.
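
A minimal sketch of that approach, shown only for comparison (the lock path and the kill -0 liveness check are my assumptions, not what I actually shipped):

#!/bin/bash
# hypothetical PID-file guard, shown only for comparison
PIDFILE="/tmp/safe-$1.pid"

# if the file exists and the recorded process is still alive, bail out
if [ -f "$PIDFILE" ] && kill -0 "$(cat "$PIDFILE")" 2>/dev/null; then
    exit 1
fi

# record our own PID and clean up on exit
echo $$ > "$PIDFILE"
trap 'rm -f "$PIDFILE"' EXIT

# ... real work goes here ...

It works, but there's a window between the check and the write where two copies can both get through, and stale PID files need handling.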

Instead, I fancied trying something different, and according to Stack Overflow, flock was a popular choice.

Here's a snippet of how to enable file locking in your scripts.

# to let the script run once per instance, point LOCKFILE at an instance-specific path

# rather than wrapping everything in a command block, link an
# automatically allocated file descriptor to our lock file
exec {lock_fd}>"$LOCKFILE"

# exit early if this instance is already running
flock -n "$lock_fd" || exit 1

The funny {lock_fd} notation is bash's automatic file descriptor allocation: bash picks a free descriptor (10 or above) and stores its number in the variable. It didn't appear until bash 4.1, so you're out of luck, Mac users — the bundled bash is still 3.2. To add to the Mac woes, flock isn't bundled with the Mac either, but someone's created a cross-platform version with the same name.
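
If you're stuck on an older bash (like the one bundled with Macs), the same trick works with a hard-coded descriptor number instead of the automatic allocation; something along these lines should behave identically, assuming descriptor 200 isn't already in use:

# pre-bash-4.1 fallback: pick the descriptor number yourself
exec 200>"$LOCKFILE"
flock -n 200 || exit 1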

To prove my script no longer spawned multiple copies I wrote the following script (safe-driver.sh):



#!/bin/bash

# spawn three copies of the same instance, all in the background
for i in $(seq 3); do
    (
        echo "> BEGIN FOO $i"
        safe.sh FOO
        echo "> END FOO $i exit code: $?"
    ) &
done

if [ ! -z "$IN_DOCKER" ]; then
    sleep 1 # allow scripts to run (needed for docker)
fi

printf "\n\njobs running (should only see one process running)\n"
jobs -l

printf "\n\nlist file locks\n"
lsof /tmp/safe*.lock

if [ ! -z "$IN_DOCKER" ]; then
    printf "\n\npausing, press any key to return early\n"
    read -r
fi
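
For completeness, here's a rough sketch of what safe.sh itself could look like once the locking snippet is combined with a per-instance lock file. The argument handling and the /tmp/safe-<instance>.lock naming are my reconstruction from the lsof pattern above, and sleep stands in for the real work:

#!/bin/bash
# sketch of safe.sh: allow exactly one running copy per instance
instance="${1:?usage: safe.sh <instance>}"
LOCKFILE="/tmp/safe-${instance}.lock"

# link an automatically allocated file descriptor to the lock file (bash 4.1+)
exec {lock_fd}>"$LOCKFILE"

# exit early if this instance is already running
flock -n "$lock_fd" || exit 1

# ... the real work, e.g. connect to this instance's database ...
sleep 30

The nice part is that the lock is released automatically when the script exits and the descriptor is closed, so there's no cleanup step to forget.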



You may have seen an earlier post (on the 10th), which I withdrew because I didn't feel I had solved the problem sufficiently and had misunderstood how automatic file descriptor allocation works.