For me this runs on a Debian-based NAS OS called OpenMediaVault (best NAS software ever!!!)
In theory, though, it's easy to reproduce on any other system.
I'll explain how to:
- install and configure pyLoad
- install FileBot (to rename the downloads according to IMDb/TheTVDB and then move them into a predefined folder)
- install FlexGet (to scan a self-made RSS feed (feed43) and add the links to pyLoad automatically)
- configure FileBot
- configure FlexGet
- miscellaneous
1. Install pyLoad
- For this I simply used the miller script. (Don't use it if you're not on OMV.)
Code:
cd /
wget https://raw.github.com/cptjhmiller/OMV_Installers/master/omv.sh
chmod +x /omv.sh
./omv.sh
- Link: http://forums.openmediavault.org/viewto ... =13&t=2038
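Quick sanity check before moving on - the installer should already have started the pyLoad core (just a rough check, adjust if your setup differs):
Code:
ps aux | grep -i pyload   # the pyLoadCore process should show up here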
2. Configure pyLoad
To avoid filling up the OS disk, we need to create a mount point -
basically a shortcut from the pyLoad download folder to the "big" disk.
Code:
sudo nano /etc/fstab
Add this line (the UUID path points to my data disk - adjust it to your own):
Code:
/media/5a24e136-09b9-48e1-95db-b44d5db3e28a/Medien/Downloads// /root/.pyload/Downloads/ none bind 0 0
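To activate the new bind mount without rebooting, you can just remount everything from fstab. A small sketch - I'm assuming the folder on the data disk already exists and that pyLoad's config lives under /root/.pyload like in my setup:
Code:
# make sure the mount target exists (harmless if it's already there)
mkdir -p /root/.pyload/Downloads
# apply all fstab entries, including the new bind mount
sudo mount -a
# check that the bind mount is active
mount | grep -i pyload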
- open a web browser and go to "NAS-IP:8888"
- login: user: admin ; password: admin
- Config - Accounts - Add [enter your one-click hoster provider]
- General - Menu - General - create folder for each package - on
- Plugins - ExternalScripts - on
- Plugins - ExtractArchive - set its options from top to bottom like this:
  - on
  - on
  - on
  - leave empty
  - on
  - off
  - 0
  - unrar_passwords.txt
  - on
  - on
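The last settings reference unrar_passwords.txt - that file has to exist if you want to use archive passwords. A minimal sketch; I'm assuming the plugin looks for it relative to the pyLoad config directory (/root/.pyload here), so adjust the path if yours is elsewhere:
Code:
touch /root/.pyload/unrar_passwords.txt
# one archive password per line, e.g. (just a placeholder entry):
echo "mypassword" >> /root/.pyload/unrar_passwords.txt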
3. Install and configure FileBot
Code:
cd /
wget http://downloads.sourceforge.net/project/filebot/filebot/FileBot_3.62/filebot_3.62_amd64.deb
sudo dpkg -i filebot_3.62_amd64.deb
sudo apt-get -f install
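Quick check that FileBot (and the Java it needs) actually works - it just prints the version and exits:
Code:
filebot -version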
Now the external script that pyLoad runs after extraction ("unrar_finished"); it calls FileBot to sort the files and clean up afterwards:
Code:
touch /root/.pyload/scripts/unrar_finished/run.sh
sudo nano /root/.pyload/scripts/unrar_finished/run.sh
Code:
#!/bin/sh
SAVEIFS=$IFS
IFS=$(echo -en "\n\b")

# global variables #
BaseDir=/media/5a24e136-09b9-48e1-95db-b44d5db3e28a
DownloadDir=${BaseDir}/Medien/Downloads/
MediaDir=${BaseDir}/Medien

# pyLoad: the package folder path is built from the first script argument
DownloadFolder=$MediaDir/$1

# functions #
sortiere(){
   # rename and move via FileBot's AMC script, see http://www.filebot.net/forums/viewtopic.php?f=4&t=215
   filebot -script fn:amc "$DownloadFolder" --output "$MediaDir" --conflict override -non-strict --action move --def "ignore=\b(?i:doku)\b" "ignore=\b(?i:xxx)\b" clean=y artwork=n xbmc=192.168.0.104,192.168.0.109 subtitles=de
}

cleaning(){
   # remove leftovers, see http://www.filebot.net/forums/viewtopic.php?f=4&t=5#p1341
   filebot -script fn:cleaner "$DownloadFolder" --def root=y "exts=jpg|nfo|rar|etc" "terms=sample|trailer|etc"
}

# run #
IFS=$SAVEIFS
echo '### FileBot AMC ###'
sortiere
echo '### cleanup ###'
cleaning
Make the script executable:
Code:
chmod +x /root/.pyload/scripts/unrar_finished/run.sh
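You can also run the script once by hand before pyLoad triggers it. Just a sketch - the script builds the path from its first argument, so "SomePackage" here stands for an existing folder under the Medien directory:
Code:
cd /root/.pyload/scripts/unrar_finished
./run.sh "SomePackage"   # "SomePackage" is only a placeholder folder name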
Restart pyLoad so it picks up the new settings and the script:
Code:
pyLoadCore -q
pyLoadCore &
disown
4. Install FlexGet
Code:
apt-get install python-setuptools python-pip
pip install flexget
pip install --upgrade flexget
Code:
cd /
mkdir flexget
touch /flexget/config.yml
nano /flexget/config.yml
Here is my config:
Code:
presets:
  global:
    pyload:
      api: http://localhost:8888/api
      queue: no
      username: admin
      password: xxx
      parse_url: no
      hoster:
        - UploadedTo
        - ShareOnlineBiz
      multiple_hoster: yes
  tv:
    series:
      720p:
        - New Girl
        - Circus halligalli
        - Misfits
        - Hannibal
        - The Slap
        - Tatort
        - The Following
        - Suits
  movies:
    regexp:
      reject:
        - (?i)\b(1080p)\b: {from: title}
        - (?i)\b(COMPLETE)\b: {from: title}
        - (?i)\b(Remux)\b: {from: title}
        - (?i)\b(Eroti(c|k))\b: {from: title}
        - \b(AC3MD)\b: {from: title}
        - (?i)\b(Web-DL)\b: {from: title}
        - (?i)\b(XXX)\b: {from: title}

tasks:
  xxxxxies.org:
    rss: http://xxxxxies.org/xml/feeds/episoden.xml
    accept_all: no
    preset: tv
  Top-Releases:
    rss: http://feed43.com/xxx83782.xml
    accept_all: yes
    preset: movies
  720p:
    rss: http://feed43.com/xxx6285.xml
    accept_all: yes
    preset: movies
  # xxxxxxxxxkies
  #   language: de
  #   hoster: ul
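Careful with the indentation - YAML is whitespace-sensitive. A quick sanity check before the first run (PyYAML comes in as a FlexGet dependency, so it should already be installed); no output means the file at least parses:
Code:
python -c "import yaml; yaml.safe_load(open('/flexget/config.yml'))"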
feed43 generates an RSS feed based on the "extraction rules" you define yourself, which you can then read with FlexGet.
You'll have to figure feed43 out yourselves though - especially since I can't state here where I get my links from.
But I'll post my "extraction code" here, which I use to filter out the titles and links.
Paste it on the feed43 site under "Item (repeatable) Search Pattern" ({%} marks the fields to extract, {*} matches arbitrary text):
Code:
<h1 id="{*}"><a href="{*}" rel="{*}" title="{*}">{%}</a></h1>{*}<strong>Download: </strong><a target="_blank" href="{%}" >Uploaded.net</a><br />
- Run the whole thing (without "--test") as an hourly cron job in the OMV web UI:
Code:
flexget -c /flexget/config.yml
To see what it would do without actually sending anything to pyLoad, add "--test":
Code:
flexget -c /flexget/config.yml --test
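If you'd rather not use the OMV web UI for the scheduling, a plain crontab entry does the same job. A sketch - I'm assuming pip put the flexget binary into /usr/local/bin (check with "which flexget"); the --cron switch just quiets the output for unattended runs:
Code:
# add via "crontab -e": run FlexGet every hour at minute 0
0 * * * * /usr/local/bin/flexget -c /flexget/config.yml --cron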
5. Then there's also a site where I download my American series and feed them into pyLoad via a plugin:
Code:
touch /root/.pyload/userplugins/hooks/DirectDownloadFetcher.py
nano /root/.pyload/userplugins/hooks/DirectDownloadFetcher.py
Code:
# -*- coding: utf-8 -*-
"""
    This program is free software; you can redistribute it and/or modify
    it under the terms of the GNU General Public License as published by
    the Free Software Foundation; either version 3 of the License,
    or (at your option) any later version.

    This program is distributed in the hope that it will be useful,
    but WITHOUT ANY WARRANTY; without even the implied warranty of
    MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
    See the GNU General Public License for more details.

    You should have received a copy of the GNU General Public License
    along with this program; if not, see <http://www.gnu.org/licenses/>.

    @author: wongdong, thanks to mkaay for the Ev0-fetcher
"""

from module.plugins.Hook import Hook
from module.lib import feedparser
from time import mktime, time


class DirectDownloadFetcher(Hook):
    __name__ = "DirectDownloadFetcher"
    __version__ = "1.0"
    __description__ = "Checks your personal newsfeed on Directxxxxxxxx.tv for new episodes."
    __config__ = [("activated", "bool", "Activated", "False"),
                  ("rssnumber", "str", "Your personal RSS identifier (subscribe on Directxxxxxxxx.tv and enter the feed number)", "0"),
                  ("interval", "int", "Check interval in minutes", "120"),
                  ("queue", "bool", "Move new shows directly to the queue", "False"),
                  ("hoster", "str", "Hosters to use (comma separated)", "FilesonicCom,UploadStationCom,FilejungleCom,DepositfilesCom,FileserveCom,FilefactoryCom")]
    __author_name__ = ("wongdong")
    __author_mail__ = ("[email protected]")

    def setup(self):
        self.interval = self.getConfig("interval") * 60

    def filterLinks(self, links):
        results = self.core.pluginManager.parseUrls(links)
        sortedLinks = {}
        for url, hoster in results:
            if hoster not in sortedLinks:
                sortedLinks[hoster] = []
            sortedLinks[hoster].append(url)
        # return the links of the first configured hoster that has any (a bit hardcore, but whatever...)
        for h in self.getConfig("hoster").split(","):
            try:
                if len(sortedLinks[h.strip()]) > 0:
                    return sortedLinks[h.strip()]
            except KeyError:
                continue
        return []

    def periodical(self):
        self.interval = self.getConfig("interval") * 60  # re-read the interval in case it changed (pyLoad would otherwise need a restart)
        feed = feedparser.parse("http://Directxxxxxxxx.tv/rss/" + self.getConfig("rssnumber"))  # your personal feed number from Directxxxxxxxx.tv
        lastupdate = 0  # timestamp of the last processed item
        try:
            lastupdate = int(self.getStorage("lastupdate", 0))  # try to load the last stored timestamp
        except:
            pass
        maxtime = lastupdate
        for item in feed['entries']:  # a single episode item in the feed
            currenttime = int(mktime(item['updated_parsed']))
            if currenttime > lastupdate:  # take only items that were not already parsed
                links = str(item['summary'].replace("\n", "").replace("<br /><br />", "<br />")).split("<br />")  # get all links (first element is the name)
                title = links.pop(0).strip()
                links = filter(lambda x: x.startswith("http://"), links)  # remove everything that is not a link (empty lines and whatnot)
                self.core.log.info("DDFetcher: New episode found: %s" % title)
                if len(self.filterLinks(links)) > 0:
                    self.core.api.addPackage(title.encode("utf-8"), self.filterLinks(links), 1 if self.getConfig("queue") else 0)
                    maxtime = max(maxtime, currenttime)
                else:  # no usable links found - do not advance the timestamp, so this item is tried again next time
                    self.core.log.info("DDFetcher: Couldn't parse any valid links from this episode. Check allowed hosters. Available links are: %s" % links)
        if maxtime == lastupdate:
            self.core.log.debug("DDFetcher: No new episodes found")
        else:
            self.setStorage("lastupdate", int(maxtime))

I had to adapt the code because of the URLs.
So if you know the site, just change them back - or ask me via PM.
Restart pyLoad:
Code:
pyLoadCore -q
pyLoadCore &
disown
- open NAS-IP:8888 again
- Config - Plugins - DirectDownloadFetcher
- and enter your settings there
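To see whether the hook was loaded and is finding episodes, keep an eye on the pyLoad log. Assuming the default log location inside the config directory:
Code:
tail -f /root/.pyload/Logs/log.txt   # watch for the "DDFetcher: ..." lines from the plugin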