1.
What is your use case? What is your amc script command? What have you tried so far? What tests did you run, and what results did you get? Where is it slower than expected? Is it more of a CPU bottleneck or more of a network request bottleneck? What does filebot -script fn:sysinfo say?
2.
Pat489 wrote: ↑29 Apr 2022, 02:27
optimize the autodetect?
Depending on how your files are currently organized, and on where the bottleneck is, there may be ways of going about it that are better optimized for your specific use case.
For example, you can use --def ut_label=TV or --def ut_label=Movie to force one mode or the other, instead of relying on auto-detection:
rednoah wrote: ↑01 Aug 2012, 13:04
You can (and should) force Movie / Series / Anime mode, or force-ignore files, via labels: label as Movie to force TheMovieDB, Series to force TheTVDB, Anime to force AniDB, or other to ignore all files. Alternatively, standard folder names such as Movies / TV Shows / Anime may also be used to force a specific mode.
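Assuming your downloads are already split into per-type folders, the forced-mode amc calls might look like this (the input and output paths below are placeholders for illustration):

```shell
# Force TheMovieDB lookups for everything in the movies folder
filebot -script fn:amc --action move --output /media \
    --def ut_label=Movie /downloads/movies

# Force TheTVDB lookups for everything in the TV folder
filebot -script fn:amc --action move --output /media \
    --def ut_label=TV /downloads/tv
```

Skipping auto-detection this way avoids the extra lookups that FileBot would otherwise perform to decide between Movie / Series / Anime mode for each file.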
3.
Pat489 wrote: ↑29 Apr 2022, 02:27
Can you help me solve the cache problem please? Do you have a parallel process limit?
Looks like you're limited to 10 parallel processes, and the 11th instance will run out of 0..9 folders, and thus fail to launch.
The official limit is 1 process. However, technically filebot can only limit itself to 1 running instance per log file, and the error above indicates that you have probably already worked around that limit by using a different log file for each filebot call. Running many filebot instances in parallel is technically possible, but completely untested and not recommended, so you might run into unexpected problems, and it may or may not be faster.
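For reference, the per-log-file instance lock means a parallel setup would look something like this (an untested sketch; the input paths are placeholders):

```shell
# Each call gets its own log file, so the 1-instance-per-log-file
# lock does not block the other calls (untested, not recommended)
filebot -script fn:amc --log-file amc-movies.log \
    --def ut_label=Movie --action move --output /media /downloads/movies &
filebot -script fn:amc --log-file amc-tv.log \
    --def ut_label=TV --action move --output /media /downloads/tv &
wait
```

Again, this is exactly the kind of workaround that can fail in unexpected ways, as your error message shows.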
Running multiple filebot instances in parallel on the same machine is not generally recommended or necessary, because a single filebot instance can already use all your CPUs, and that is what the FileBot desktop application does by default. However, the amc script will do things step by step, print log messages in sequence, and generally run in the background unnoticed. If you want filebot to use all your CPUs and all your memory, then maybe we could add an option here, but the use case hasn't really come up.
(rather the opposite, the 64+ core / 128+ GB RAM machines are usually shared hosting monsters that auto-kill processes with excessive resource usage)
Running multiple instances can help with parallelizing file IO operations (i.e. file move / copy operations), but there are probably better ways to go about that (e.g. hardlink into a new structure on the same file system, then rsync to the final destination).
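The hardlink-then-rsync approach might look like this (a sketch; /staging and the destination path are placeholder examples, and --action hardlink requires source and destination to be on the same file system):

```shell
# 1) Hardlink into an organized staging structure on the same
#    file system: near-instant, since no file data is copied
filebot -script fn:amc --action hardlink --output /staging \
    --def ut_label=Movie /downloads/movies

# 2) Bulk-copy the organized structure to the final destination;
#    rsync handles the slow IO in one sequential pass
rsync -a /staging/ /mnt/nas/media/
```

This separates the metadata lookups (fast, CPU/network-bound) from the actual data transfer (slow, disk/network-bound), so neither step blocks the other.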
Running multiple instances can help with parallelizing network IO operations and getting around network request flood limits that FileBot imposes on itself, but you might get yourself IP banned by the database you're hitting.