Hi, I am having an issue when I have multiple files. If a couple of files fail to process because they already exist, then when the script gets to a file that does not exist in the output, it never renames and copies that file to the output directory, apparently because errors occurred for the previous files.
This is the command I run; let me know if there are adjustments I can make to improve it:
Code:
filebot -script fn:amc --output "/data/Videos" --action copy --conflict override -non-strict --log all --log-file /data/filebot/log/filebot_amc.log --def clean=y artwork=y excludeList=amc.txt "/data/Torrents/Completed"
Not sure if that makes sense, so here is an example:
TV Show 1 - already exists in output
TV Show 2 - already exists in output
TV Show 3 - already exists in output
TV Show 4 - does not exist in output
The script runs, identifies the files that already exist and does not copy them, so that is good, but when it gets to TV Show 4, which does not exist, it extracts the content but never renames and moves the file to the output.
For troubleshooting, I removed all the files from my input directory and kept just TV Show 4. When only that file exists in the input folder, everything works fine, which is why I suspect the script stops processing once it hits a certain number of failures.
Re: Continue to process rest of the files to output directory
What exactly does the log say? Since you're using --conflict skip (default) as opposed to --conflict fail, we would not typically expect the behaviour you seem to be describing.
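For instance, you could pull the lines for the file in question out of the log (using the log path from your own command above):

Code:
# show what the amc script logged for the file that was skipped
grep -i "TV Show 4" /data/filebot/log/filebot_amc.log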
Unless you have binary duplicates with different file paths (thus not excluded via --def excludeList), in which case you may need to eliminate binary duplicates first in a pre-processing step:
viewtopic.php?p=23171#p23171
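A minimal sketch of such a pre-processing step, assuming the fn:duplicates script from the linked post and assuming that deleting the lesser copy of each binary duplicate is acceptable for your setup:

Code:
# delete lesser binary duplicates before running the amc script
# (see the linked post for the details of the fn:duplicates script)
filebot -script fn:duplicates --action delete "/data/Torrents/Completed"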
Alternatively, you could adjust your process, e.g. only process recently modified input files. The snippet below assumes that Last-Modified is set when the files are moved to the input folder, and that the amc script is executed once every 24 hours, implying that files older than 1 day have already been processed in a previous run:
Code:
--file-filter "age < 1"
