
Limiting the number of queued filebot run instances

Posted: 09 Dec 2022, 13:48
by vtmikel
Good day-

I read a thread on the forums about the expected queue behavior when running multiple instances of filebot with the --log-file parameter.

I'm not sure if I'm alone in this setup, but I run my torrent client in a Docker container and execute filebot on the host, so I have no event-driven mechanism to trigger filebot.

I have cron executing filebot every 5 minutes. This approach has served me well for a long time.
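For reference, the cron entry is the standard five-minute pattern (the filebot path is an assumption, and the actual arguments are elided):

Code: Select all

*/5 * * * * /usr/local/bin/filebot ...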

As movie file sizes have gone up, filebot is taking longer to finish its work, and the number of filebot instances in the queue is adding up. It's never been a problem per se, since I give my host machine enough memory, but I'm considering having my wrapper shell script check how many filebot instances are already running before starting another (sketched below). I wanted to ask whether there are other ideas before doing so.
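Something along these lines is what I have in mind (a sketch only; the pgrep pattern is an assumption about how filebot appears in the process list on my host):

Code: Select all

#!/bin/sh
# Skip this run if a previous filebot instance is still working.
# Keep the string "filebot" out of this wrapper's own filename so the
# check does not match the wrapper itself.
if pgrep -f filebot > /dev/null; then
  exit 0
fi
filebot ...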

Thank you.

Re: Limiting the number of queued filebot run instances

Posted: 09 Dec 2022, 14:00
by rednoah
If you're already running FileBot at intervals, then you can just run filebot, wait for it to finish, and then wait X minutes before repeating:

Code: Select all

while true; do
  filebot ...   # process files and wait for the call to finish
  sleep 600     # then pause 10 minutes before the next pass
done

If flock is available to you, you can (and should) use it to synchronize filebot calls: either queue new filebot calls until the running instance finishes, or skip the call entirely if an instance is already running:

Code: Select all

# --nonblock: exit immediately instead of waiting if the lock is already held
flock --nonblock /path/to/filebot.lock filebot ...
:idea: https://man7.org/linux/man-pages/man1/flock.1.html
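For the queueing behavior, drop --nonblock: by default flock waits for the lock, so overlapping calls line up and run one at a time (same example lock file path):

Code: Select all

# Without --nonblock, each call waits for the lock instead of bailing out,
# so concurrent invocations queue up and run one after another.
flock /path/to/filebot.lock filebot ...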



:idea: filebot calls shouldn't take multiple minutes, and they shouldn't take longer and longer over time; if they do, something is off in the setup. You'll want to process files within the same file system so that IO operations (e.g. move or hardlink) are instant, and you'll want to avoid reprocessing files that have already been processed. If you need to move files to another machine, then you'll still want to process files within the same file system instantly, and then rsync them elsewhere in a separate long-running process if and when needed.
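A minimal sketch of that split, assuming the amc script and example paths (the excludeList file is what prevents already-processed files from being picked up again):

Code: Select all

# 1) Instant, same-file-system step: hardlink renamed files into the local
#    media folder and record processed files in the excludeList.
flock --nonblock /path/to/filebot.lock \
  filebot -script fn:amc --action hardlink \
    --output /volume/media \
    --def excludeList=/volume/media/amc-input.txt \
    /volume/downloads

# 2) Separate long-running transfer step, if and when needed.
rsync -a /volume/media/ othermachine:/volume/media/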