Re: support for XEM?
Posted: 06 May 2019, 14:58
Also, the results seem wrong: for ID 13033, absolute episode 01 should return TheTVDB season 5, episodes 1 and 2 (absolute 62 and 63).
The ultimate TV and Movie Renamer
https://www.filebot.net/forums/
Code: Select all
"6327": [
"Monogatari",
{
"Bakemonogatari": 1
},
{
"Nisemonogatari": 2
},
{
"Monogatari Series Second Season": 3
},
{
"Owarimonogatari": 4
},
{
"Owarimonogatari S2": 5
},
{
"Owarimonogatari Second Season": 5
}
]
Code: Select all
--mapper "episode"
Code: Select all
--mapper "new net.filebot.web.Episode(/Series Name/, 1, 1, /Episode Title/)"
Code: Select all
--mapper /path/to/mapper.groovy
Code: Select all
filebot -list --q "Firefly" --mapper "[seriesName: ny, season: s + 1, episode: e, title: /Episode / + absolute]"
Code: Select all
Firefly (2002) - 2x01 - Episode 2
Firefly (2002) - 2x02 - Episode 3
Firefly (2002) - 2x03 - Episode 6
Firefly (2002) - 2x04 - Episode 7
Firefly (2002) - 2x05 - Episode 8
...
Code: Select all
Episode(String seriesName, Integer season, Integer episode, String title, Integer absolute, Integer special, SimpleDate airdate, Integer id, SeriesInfo seriesInfo)
Code: Select all
// test with filebot -list --q "Monogatari" --db AniDB --mapper xem.groovy
import groovy.json.JsonSlurper

String origin = anime ? "anidb" : "tvdb"
Integer seas
if (anime) {
    seas = 1
} else {
    seas = s
}
def spec = call{special}
def ep = call{e}

def baseURL = new URL("http://thexem.de")
def reqHeaders = [:]
def params = [
    "origin": origin
]
def query = params.collect { k, v -> "$k=$v" }.join('&')
def getResponse = new URL(baseURL, "/map/havemap?$query").get(reqHeaders)

String stringID = id.toString()
Map json = new JsonSlurper().parseText(getResponse.text)
def item = json.data.any{ it == stringID }

if (item) {
    def paramsName = [
        "origin": origin,
        "id": id,
        "defaultNames": 1,
    ]
    def queryName = paramsName.collect { k, v -> "$k=$v" }.join('&')
    def getResName = new URL(baseURL, "/map/names?$queryName").get(reqHeaders)
    Map jsonName = new JsonSlurper().parseText(getResName.text)

    def mat = jsonName.data.collect{
        if (it.value instanceof Map) {
            def name = it.value.entrySet().value
            [(it.key): name.flatten()]
        } else if (it.value instanceof String) {
            [(it.key): it.value]
        }
    }

    String newN = mat.findAll{ it.all }?.all.first()
    Integer foundS = mat.findAll{
        it.entrySet().value.any{ v -> v =~ /(?i)$n/ }
    }.first().entrySet().key.first().toInteger()
    Integer newS = (foundS < 0) ? seas : foundS

    def paramsMap = [
        "origin": origin,
        "id": id,
        "season": newS,
        "episode": ep,
    ]
    def queryMap = paramsMap.collect { k, v -> "$k=$v" }.join('&')
    def getResponseMap = new URL(baseURL, "/map/single?$queryMap").get(reqHeaders)
    Map jsonMap = new JsonSlurper().parseText(getResponseMap.text)

    // assuming tvdb destination, could be included in the query
    def result = jsonMap.data.entrySet().findAll{ it.key.matches(/tvdb.*/) }
    if (result.size() < 2) {
        return new net.filebot.web.Episode(newN, newS, result.first().value.episode, t, result.first().value.absolute, spec, d, id, series)
    } else {
        def multi = []
        for (i in 0..result.size()-1) {
            multi << new net.filebot.web.Episode(newN, newS, result[i].value.episode, t, result[i].value.absolute, spec, d, id, series)
        }
        return new net.filebot.web.MultiEpisode(*multi)
    }
}

// hopefully return the episode untouched if not matched
return new net.filebot.web.Episode(n, seas, ep, t, absolute, spec, d, id, series)
Code: Select all
def x = [1, 2, 3]
def f = { a, b, c -> a * b * c }
f(*x)
Code: Select all
def cache = Cache.getCache('xem', CacheType.Daily)
def url = 'https://www.filebot.net/update.xml'
def content = cache.text(url, String.&toURL).get()
println content
Code: Select all
# this returns only the first 2 episodes despite XEM having 3 mappings, but that's an issue with AniDB, which reports only the first 2 as regular episodes and the third as a special
# AniDB also joins the titles on its own, so that shouldn't be a byproduct of the MultiEpisode object
filebot -list --q "Owarimonogatari Second Season" --db AniDB --mapper xem.groovy
# this should return episodes unchanged
filebot -list --q "Firefly" --db TheTVDB --mapper xem.groovy
Code: Select all
filebot -list --q "Owarimonogatari Second Season" --db AniDB --format "{id} | {episode.id}"
Code: Select all
    if (result.size() < 2) {
        return new net.filebot.web.Episode(newN, newS, result.first().value.episode, t, result.first().value.absolute, spec, d, episode.id, series)
    } else {
        def multi = []
        for (i in 0..result.size()-1) {
            multi << new net.filebot.web.Episode(newN, newS, result[i].value.episode, t, result[i].value.absolute, spec, d, episode.id, series)
        }
        return new net.filebot.web.MultiEpisode(*multi)
    }
}

// hopefully return the episode untouched if not matched
return episode
Code: Select all
// test with filebot -list --q "Monogatari" --db AniDB --mapper xem.groovy
import groovy.json.JsonSlurper
import net.filebot.Cache
import net.filebot.CacheType

Closure<Object> request = { Map headers = [:], String base = "http://thexem.de", String path, Map params ->
    Cache cache = net.filebot.Cache.getCache('xem', CacheType.Daily)
    URL baseURL = new URL(base)
    String query = params.collect { k, v -> "$k=$v" }.join('&')
    Object response = new URL(baseURL, "$path?$query").get(headers)
    response
    // TODO: daily caching
    // def content = cache.text(url, String.&toURL).get()
}

String origin = anime ? "anidb" : "tvdb"
Integer seas = anime ? 1 : episode?.season

Object hasMap = request("/map/havemap", ["origin": origin])
Map jHasMap = new JsonSlurper().parseText(hasMap.text)
Boolean item = jHasMap.data.any{ it == id.toString() }

if (item) {
    Object names = request("/map/names", [
        "origin": origin,
        "id": id,
        "defaultNames": 1,
    ])
    Map jName = new JsonSlurper().parseText(names.text)

    ArrayList reflect = jName.data.collect{
        if (it.value instanceof Map) {
            def name = it.value.entrySet().value
            [(it.key): name.flatten()]
        } else if (it.value instanceof String) {
            [(it.key): it.value]
        }
    }

    // String newN = reflect.findAll{ it instanceof String }.first()
    String newN = reflect.findAll{ it.all }?.all.first()
    Integer foundS = reflect.findAll{
        it.entrySet().value.any{ v -> v =~ /(?i)$episode.seriesName/ }
    }.first().entrySet().key.first().toInteger()
    // Integer foundS = item.findAll{ it instanceof Map }*.find{ k, v -> k.match(/$n/) }.find{ it != null }?.value
    Integer newS = (foundS < 0) ? seas : foundS

    Map old = [
        ep: episode.episode ? episode.episode : episode.special,
        se: episode.special ? 0 : newS,
    ]

    // assuming TVDB destination, could be included in the query
    Object mapping = request("/map/single", [
        "origin": origin,
        "id": id,
        "season": old.se,
        "episode": old.ep,
    ])
    Map jMapping = new JsonSlurper().parseText(mapping.text)

    // also assuming TVDB destination
    if (jMapping.data.isEmpty()) {
        return episode
    }
    def result = jMapping.data.entrySet().findAll{ it.key.matches(/tvdb.*/) }
    if (result.size() < 2) {
        return new net.filebot.web.Episode(newN, newS, result.first().value.episode, episode?.title, result.first().value.absolute, episode?.special, episode?.airdate, episode.id, series)
    } else {
        def multi = []
        for (i in 0..result.size()-1) {
            // hopefully all multi-episodes are just multi-part because I couldn't find a way to merge titles
            multi << new net.filebot.web.Episode(newN, newS, result[i].value.episode, episode?.title, result[i].value.absolute, episode?.special, episode?.airdate, episode.id, series)
        }
        return new net.filebot.web.MultiEpisode(*multi)
    }
}

// hopefully return the episode untouched if not matched
return episode
Code: Select all
XEM.TheTVDB
Code: Select all
filebot -list --q "Monogatari" --db AniDB --mapper XEM.TheTVDB
rednoah wrote: ↑28 May 2019, 04:57 I'd add a bit of caching as well, like so:
Code: Select all
def cache = Cache.getCache('xem', CacheType.Daily)
def url = 'https://www.filebot.net/update.xml'
def content = cache.text(url, String.&toURL).get()
println content
Error: Expression yields empty value: No signature of method: java.lang.String.toURL() is applicable for argument types: (String) values: [https://www.filebot.net/update.xml]
Possible solutions: toURL(), toURL(), toURI(), toURI(), toSet(), toFile(java.lang.String)
Code: Select all
def cache = Cache.getCache('xem', CacheType.Daily)
def url = 'https://www.filebot.net/update.xml'
def content = cache.text(url, { new URL(it) }).get()
println content
Code: Select all
filebot -list --q 14444 --db AniDB --mapper "AnimeLists.TheTVDB"
Apply mapper [AnimeLists.TheTVDB] on [10] items
Map [Attack on Titan Season 3 (2019) - 01 - The Town Where Everything Began] to [Attack on Titan Season 3 (2019) - 3x13 - The Town Where Everything Began]
Map [Attack on Titan Season 3 (2019) - 02 - Thunder Spears] to [Attack on Titan Season 3 (2019) - 3x14 - Thunder Spears]
Map [Attack on Titan Season 3 (2019) - 03 - Descent] to [Attack on Titan Season 3 (2019) - 3x15 - Descent]
Map [Attack on Titan Season 3 (2019) - 04 - Perfect Game] to [Attack on Titan Season 3 (2019) - 3x16 - Perfect Game]
Map [Attack on Titan Season 3 (2019) - 05 - Hero] to [Attack on Titan Season 3 (2019) - 3x17 - Hero]
Map [Attack on Titan Season 3 (2019) - 06 - Midnight Sun] to [Attack on Titan Season 3 (2019) - 3x18 - Midnight Sun]
Map [Attack on Titan Season 3 (2019) - 07 - The Basement] to [Attack on Titan Season 3 (2019) - 3x19 - The Basement]
Map [Attack on Titan Season 3 (2019) - 08 - Episode 8] to [Attack on Titan Season 3 (2019) - 3x20 - Episode 8]
Map [Attack on Titan Season 3 (2019) - 09 - Episode 9] to [Attack on Titan Season 3 (2019) - 3x21 - Episode 9]
Map [Attack on Titan Season 3 (2019) - 10 - Episode 10] to [Attack on Titan Season 3 (2019) - 3x22 - Episode 10]
Attack on Titan Season 3 (2019) - 3x13 - The Town Where Everything Began
Attack on Titan Season 3 (2019) - 3x14 - Thunder Spears
Attack on Titan Season 3 (2019) - 3x15 - Descent
Attack on Titan Season 3 (2019) - 3x16 - Perfect Game
Attack on Titan Season 3 (2019) - 3x17 - Hero
Attack on Titan Season 3 (2019) - 3x18 - Midnight Sun
Attack on Titan Season 3 (2019) - 3x19 - The Basement
Attack on Titan Season 3 (2019) - 3x20 - Episode 8
Attack on Titan Season 3 (2019) - 3x21 - Episode 9
Attack on Titan Season 3 (2019) - 3x22 - Episode 10
thx
rednoah wrote: ↑03 Jun 2019, 03:24 This should work better:
Code: Select all
def cache = Cache.getCache('xem', CacheType.Daily)
def url = 'https://www.filebot.net/update.xml'
def content = cache.text(url, { new URL(it) }).get()
println content
The code above will either do a network request to get the data, or read the data from local disk cache. It's all handled implicitly if you use it like in the example above. Your code doesn't know or care if the data came from cache or via web request.
If you use the code above, then it'll get cached for about a day. If you tell me what expiration times or properties you're looking for, then I can give you examples.
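Putting the two pieces together, the `// TODO: daily caching` left in the mapper's request closure could be resolved by routing the fetch through the cache helper. This is only a rough sketch using the calls already shown in this thread, and it assumes `cache.text` keys entries by the URL string passed to it:

```groovy
import net.filebot.Cache
import net.filebot.CacheType

// request helper with daily disk caching: fetches over the network on a
// cache miss, otherwise reads the cached response from local disk
Closure<String> request = { String base = "http://thexem.de", String path, Map params ->
    def cache = Cache.getCache('xem', CacheType.Daily)
    def query = params.collect { k, v -> "$k=$v" }.join('&')
    def url = new URL(new URL(base), "$path?$query").toString()
    cache.text(url, { new URL(it) }).get()
}
```

Note this version returns the response text directly rather than a response object, so call sites like `hasMap.text` would become just `hasMap`.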