but I presume this is happening a lot more often than it should.
I currently have the license.txt file at /filebot/license.txt; these are the relevant settings:
The /filebot directory is already persistent (mounted from the host system).
FILEBOT_OPTS is set, containing -Dapplication.cache=/filebot/cache -Dapplication.dir=/filebot.
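For reference, the container is started roughly like this (a simplified sketch; the image name and host path are placeholders, not my exact configuration):

    # Mount the persistent /filebot directory and pass the JVM options;
    # "my-filebot-image" and /opt/filebot are illustrative placeholders.
    docker run -d \
        -v /opt/filebot:/filebot \
        -e FILEBOT_OPTS="-Dapplication.cache=/filebot/cache -Dapplication.dir=/filebot" \
        my-filebot-image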
I'm seeing the log entry every two or three calls, which, as you confirmed, is far more often than it should be.
Any suggestions to debug?
I only work in black and sometimes very, very dark grey. (Batman)
Perhaps try playing with the reference docker container and see if you can find any obvious differences in the behaviour & logs produced, and then narrow down from there: https://hub.docker.com/r/rednoah/filebot/
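For example, something along these lines (the volume mapping is just an example; fn:sysinfo should print enough system detail to compare against your own container):

    # Run the reference image once and compare its console output
    # against the output produced inside your own container.
    docker run --rm -it \
        -v "$PWD:/volume1" \
        rednoah/filebot \
        -script fn:sysinfo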
The Docker filesystem abstraction might introduce filesystem strangeness, perhaps file locks not working across containers, I don't know. I'd ensure that only one FileBot instance is running at a time, so they don't end up corrupting each other's application files. There should be obvious warnings on the console, though, if the application state were inconsistent.
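A quick check on the host (assuming standard Linux tools):

    # List all running FileBot processes with their full command lines;
    # more than one match means multiple instances share application files.
    pgrep -fa filebot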
There's only one FileBot instance running; I can verify that in the process manager.
That single instance is consuming over 30 GB of memory, though, and I did try it with the reference container as well.
The system has 64 GB of memory, but that still seems like an impressive amount.
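If that memory is mostly JVM heap, I suppose it can be capped via FILEBOT_OPTS, since those options go straight to the JVM; a sketch (the 4g value is an arbitrary example, not a recommendation):

    # Cap the maximum JVM heap on top of the options I already pass;
    # -Xmx4g is an arbitrary example value.
    export FILEBOT_OPTS="-Xmx4g -Dapplication.cache=/filebot/cache -Dapplication.dir=/filebot"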
I ended up editing the configuration to force the same home directory for both applications and to force the use of system libraries (I'm running both the client and FileBot in the same container, since I still don't have a solid way for the post-processing script to call FileBot in another container).
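In effect, the startup now does something like this (a sketch of the idea; paths are from my setup, and the system-libraries switch depends on the launcher, so it's not shown):

    # Point both the client and FileBot at the same home directory so
    # they share one set of application files; /filebot is my mount.
    export HOME=/filebot
    export FILEBOT_OPTS="-Dapplication.cache=/filebot/cache -Dapplication.dir=/filebot"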
I only work in black and sometimes very, very dark grey. (Batman)