Merge pull request #2069 from Kometa-Team/requests-update

created requests module to handle all outgoing requests
meisnate12 7 months ago committed by GitHub
commit 67fc3cafc5

@ -2,6 +2,7 @@ AAC
accessModes
Addon
Adlib
AFI's
Amblin
analytics
AniDB
@ -19,6 +20,7 @@ Arrowverse
Atmos
Avenir
BAFTA
Bambara
BBFC
bearlikelion
Berlinale
@ -56,6 +58,7 @@ customizable
customizations
César
dbader
d'Or
de
deva
DIIIVOY
@ -177,6 +180,7 @@ microsoft
mikenobbs
minikube
mnt
Mojo's
monetization
Mossi
MPAA
@ -202,6 +206,7 @@ OMDb
oscar
OSX
ozzy
Palme
pathing
PCM
PersistentVolumeClaim

@ -10,12 +10,12 @@ Please include a summary of the changes.
Please delete options that are not relevant.
- [ ] Bug fix (non-breaking change which fixes an issue)
- [ ] New feature (non-breaking change which adds functionality)
- [ ] Breaking change (fix or feature that would cause existing functionality to not work as expected)
- [ ] Documentation change (non-code changes affecting only the wiki)
- [ ] Infrastructure change (changes related to the github repo, build process, or the like)
- [] Bug fix (non-breaking change which fixes an issue)
- [] New feature (non-breaking change which adds functionality)
- [] Breaking change (fix or feature that would cause existing functionality to not work as expected)
- [] Documentation change (non-code changes affecting only the wiki)
- [] Infrastructure change (changes related to the github repo, build process, or the like)
## Checklist
- [ ] My code was submitted to the nightly branch of the repository.
- [] My code was submitted to the nightly branch of the repository.

@ -0,0 +1,27 @@
name: Merge Nightly into Develop
on:
workflow_dispatch:
jobs:
merge-develop:
runs-on: ubuntu-latest
steps:
- name: Create App Token
uses: actions/create-github-app-token@v1
id: app-token
with:
app-id: ${{ vars.APP_ID }}
private-key: ${{ secrets.APP_TOKEN }}
- name: Check Out Repo
uses: actions/checkout@v4
with:
token: ${{ steps.app-token.outputs.token }}
ref: nightly
fetch-depth: 0
- name: Push Nightly into Develop
run: |
git push origin refs/heads/nightly:refs/heads/develop

@ -0,0 +1,27 @@
name: Merge Develop into Master
on:
workflow_dispatch:
jobs:
merge-master:
runs-on: ubuntu-latest
steps:
- name: Create App Token
uses: actions/create-github-app-token@v1
id: app-token
with:
app-id: ${{ vars.APP_ID }}
private-key: ${{ secrets.APP_TOKEN }}
- name: Check Out Repo
uses: actions/checkout@v4
with:
token: ${{ steps.app-token.outputs.token }}
ref: develop
fetch-depth: 0
- name: Push Develop into Master
run: |
git push origin refs/heads/develop:refs/heads/master
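Both workflows fast-forward one branch onto another without checking out the target, by pushing a fully qualified git refspec. A minimal sketch of how that refspec and command are formed (plain Python, helper names assumed):

```python
def refspec(src, dst):
    # The workflows above push one branch onto another without a
    # checkout of the target: refs/heads/<src> -> refs/heads/<dst>.
    return f"refs/heads/{src}:refs/heads/{dst}"

# Equivalent of the "Push Nightly into Develop" run step.
cmd = ["git", "push", "origin", refspec("nightly", "develop")]
```

The fully qualified `refs/heads/` form avoids ambiguity with tags and works even when the destination branch does not exist locally.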

@ -1,12 +0,0 @@
name: Spellcheck Action
on: pull_request
jobs:
spellcheck:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- uses: rojopolis/spellcheck-github-actions@0.36.0

@ -0,0 +1,27 @@
name: Validate Pull Request
on: pull_request
jobs:
validate-pull:
runs-on: ubuntu-latest
steps:
- name: Display Refs
run: |
echo "Base Repo: ${{ github.event.pull_request.base.repo.full_name }}"
echo "Base Ref: ${{ github.base_ref }}"
echo "Head Repo: ${{ github.event.pull_request.head.repo.full_name }}"
echo "Head Ref: ${{ github.head_ref }}"
- name: Check Base Branch
if: github.base_ref == 'master' || github.base_ref == 'develop'
run: |
echo "ERROR: Pull Requests cannot be submitted to master or develop. Please submit the Pull Request to the nightly branch"
exit 1
- name: Checkout Repo
uses: actions/checkout@v4
- name: Run Spellcheck
uses: rojopolis/spellcheck-github-actions@0.36.0

@ -1 +1 @@
2.0.1-develop24
2.0.1-develop26

@ -934,31 +934,6 @@ The available setting attributes which can be set at each level are outlined bel
- metadata
```
??? blank "`verify_ssl` - Turn SSL Verification on or off.<a class="headerlink" href="#verify-ssl" title="Permanent link"></a>"
<div id="verify-ssl" />Turn SSL Verification on or off.
???+ note
Set to `false` if your log file shows any errors similar to "SSL: CERTIFICATE_VERIFY_FAILED".
<hr style="margin: 0px;">
**Attribute:** `verify_ssl`
**Levels with this Attribute:** Global
**Accepted Values:** `true` or `false`
**Default Value:** `true`
???+ example "Example"
```yaml
settings:
verify_ssl: false
```
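The removed `verify_ssl` setting (replaced in this PR by the `--no-verify-ssl` run option) controls certificate checking on outgoing requests. A sketch of what the flag ultimately toggles, using the standard library's `ssl` module rather than Kometa's actual code:

```python
import ssl

def build_ssl_context(verify_ssl=True):
    # Hypothetical helper: requests' verify=False behaves like an
    # unverified SSL context, which is what this flag amounts to.
    context = ssl.create_default_context()
    if not verify_ssl:
        # check_hostname must be disabled before verify_mode,
        # or CPython raises ValueError.
        context.check_hostname = False
        context.verify_mode = ssl.CERT_NONE
    return context

ctx = build_ssl_context(verify_ssl=False)
```

Disabling verification silences `CERTIFICATE_VERIFY_FAILED` errors at the cost of accepting any certificate, so it is a workaround, not a fix.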
??? blank "`custom_repo` - Used to set up the custom `repo` [file block type](files.md#location-types-and-paths).<a class="headerlink" href="#custom-repo" title="Permanent link"></a>"
<div id="custom-repo" />Specify where the `repo` attribute's base is when defining `collection_files`, `metadata_files`, `playlist_file` and `overlay_files`.

@ -39,6 +39,7 @@ These collections are applied by calling the below paths into the `collection_fi
| [Trakt Charts](chart/trakt.md)<sup>2</sup> | `trakt` | Trakt Popular, Trakt Trending | :fontawesome-solid-circle-check:{ .green } | :fontawesome-solid-circle-check:{ .green } |
| [AniList Charts](chart/anilist.md) | `anilist` | AniList Popular, AniList Season | :fontawesome-solid-circle-check:{ .green } | :fontawesome-solid-circle-check:{ .green } |
| [MyAnimeList Charts](chart/myanimelist.md) | `myanimelist` | MyAnimeList Popular, MyAnimeList Top Rated | :fontawesome-solid-circle-check:{ .green } | :fontawesome-solid-circle-check:{ .green } |
| [Letterboxd Charts](chart/letterboxd.md) | `letterboxd` | Letterboxd Top 250, Top 250 Most Fans | :fontawesome-solid-circle-check:{ .green } | :fontawesome-solid-circle-xmark:{ .red } |
| [Other Charts](chart/other.md) | `other_chart` | AniDB Popular, Common Sense Selection | :fontawesome-solid-circle-check:{ .green } | :fontawesome-solid-circle-check:{ .green } |
<sup>1</sup> Requires [Tautulli Authentication](../config/tautulli.md)

@ -68,6 +68,7 @@ This is the default Kometa collection ordering:
| `basic` | `010` |
| `anilist` | `020` |
| `imdb` | `020` |
| `letterboxd` | `020` |
| `myanimelist` | `020` |
| `other_chart` | `020` |
| `tautulli` | `020` |
@ -211,4 +212,4 @@ libraries:
{%
include-markdown "./example.md"
%}
%}

@ -229,6 +229,32 @@ different ways to specify these things.
docker run -it -v "X:\Media\Kometa\config:/config:rw" kometateam/kometa --timeout 360
```
??? blank "No Verify SSL&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;`-nv`/`--no-verify-ssl`&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;`KOMETA_NO_VERIFY_SSL`<a class="headerlink" href="#no-verify-ssl" title="Permanent link"></a>"
<div id="no-verify-ssl" />Turn SSL Verification off.
???+ note
Use this option if your log file shows any errors similar to "SSL: CERTIFICATE_VERIFY_FAILED".
<hr style="margin: 0px;">
**Accepted Values:** `true` or `false`
**Shell Flags:** `-nv` or `--no-verify-ssl` (ex. `--no-verify-ssl`)
**Environment Variable:** `KOMETA_NO_VERIFY_SSL` (ex. `KOMETA_NO_VERIFY_SSL=true`)
!!! example
=== "Local Environment"
```
python kometa.py --no-verify-ssl
```
=== "Docker Environment"
```
docker run -it -v "X:\Media\Kometa\config:/config:rw" kometateam/kometa --no-verify-ssl
```
??? blank "Collections Only&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;`-co`/`--collections-only`&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;`KOMETA_COLLECTIONS_ONLY`<a class="headerlink" href="#collections-only" title="Permanent link"></a>"
<div id="collections-only" />Only run collection YAML files, skip library operations, metadata, overlays, and playlists.

@ -50,6 +50,7 @@ arguments = {
"trace": {"args": "tr", "type": "bool", "help": "Run with extra Trace Debug Logs"},
"log-requests": {"args": ["lr", "log-request"], "type": "bool", "help": "Run with all Requests printed"},
"timeout": {"args": "ti", "type": "int", "default": 180, "help": "Kometa Global Timeout (Default: 180)"},
"no-verify-ssl": {"args": "nv", "type": "bool", "help": "Turns off Global SSL Verification"},
"collections-only": {"args": ["co", "collection-only"], "type": "bool", "help": "Run only collection files"},
"metadata-only": {"args": ["mo", "metadatas-only"], "type": "bool", "help": "Run only metadata files"},
"playlists-only": {"args": ["po", "playlist-only"], "type": "bool", "help": "Run only playlist files"},
@ -204,6 +205,7 @@ from modules import util
util.logger = logger
from modules.builder import CollectionBuilder
from modules.config import ConfigFile
from modules.request import Requests, parse_version
from modules.util import Failed, FilterFailed, NonExisting, NotScheduled, Deleted
def my_except_hook(exctype, value, tb):
@ -223,15 +225,13 @@ def new_send(*send_args, **kwargs):
requests.Session.send = new_send
version = ("Unknown", "Unknown", 0)
file_version = ("Unknown", "Unknown", 0)
with open(os.path.join(os.path.dirname(os.path.abspath(__file__)), "VERSION")) as handle:
for line in handle.readlines():
line = line.strip()
if len(line) > 0:
version = util.parse_version(line)
file_version = parse_version(line)
break
branch = util.guess_branch(version, env_version, git_branch)
version = (version[0].replace("develop", branch), version[1].replace("develop", branch), version[2])
uuid_file = os.path.join(default_dir, "UUID")
uuid_num = None
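`parse_version` (now imported from `modules.request`) turns a line like `2.0.1-develop26` from the VERSION file into the tuple compared against the latest release. One plausible reading, consistent with how `version[1]` and `version[2]` are used in this diff — the real implementation lives in `modules/request.py` and may differ:

```python
import re

def parse_version(text, develop="develop"):
    # Hypothetical sketch: keep the full string, the part up to the
    # build counter, and the counter as an int, e.g.
    # "2.0.1-develop26" -> ("2.0.1-develop26", "2.0.1-develop", 26).
    match = re.match(rf"(.+{develop})(\d+)$", text)
    if match:
        return text, match.group(1), int(match.group(2))
    # Release versions carry no build counter.
    return text, text, 0
```

A build counter of `0` for release versions makes the later `version[2] and version[2] < latest_version[2]` check a no-op on master, which matches how the comparison is written above.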
@ -255,179 +255,181 @@ def process(attrs):
executor.submit(start, *[attrs])
def start(attrs):
logger.add_main_handler()
logger.separator()
logger.info("")
logger.info_center(" __ ___ ______ ___ ___ _______ __________ ___ ")
logger.info_center("| |/ / / __ \\ | \\/ | | ____|| | / \\ ")
logger.info_center("| ' / | | | | | \\ / | | |__ `---| |---` / ^ \\ ")
logger.info_center("| < | | | | | |\\/| | | __| | | / /_\\ \\ ")
logger.info_center("| . \\ | `--` | | | | | | |____ | | / _____ \\ ")
logger.info_center("|__|\\__\\ \\______/ |__| |__| |_______| |__| /__/ \\__\\ ")
logger.info("")
if is_lxml:
system_ver = "lxml Docker"
elif is_linuxserver:
system_ver = "Linuxserver"
elif is_docker:
system_ver = "Docker"
else:
system_ver = f"Python {platform.python_version()}"
logger.info(f" Version: {version[0]} ({system_ver}){f' (Git: {git_branch})' if git_branch else ''}")
latest_version = util.current_version(version, branch=branch)
new_version = latest_version[0] if latest_version and (version[1] != latest_version[1] or (version[2] and version[2] < latest_version[2])) else None
if new_version:
logger.info(f" Newest Version: {new_version}")
logger.info(f" Platform: {platform.platform()}")
logger.info(f" Memory: {round(psutil.virtual_memory().total / (1024.0 ** 3))} GB")
if not is_docker and not is_linuxserver:
try:
with open(os.path.abspath(os.path.join(os.path.dirname(__file__), "requirements.txt")), "r") as file:
required_versions = {ln.split("==")[0]: ln.split("==")[1].strip() for ln in file.readlines()}
for req_name, sys_ver in system_versions.items():
if sys_ver and sys_ver != required_versions[req_name]:
logger.info(f" {req_name} version: {sys_ver} requires an update to: {required_versions[req_name]}")
except FileNotFoundError:
logger.error(" File Error: requirements.txt not found")
if "time" in attrs and attrs["time"]: start_type = f"{attrs['time']} "
elif run_args["tests"]: start_type = "Test "
elif "collections" in attrs and attrs["collections"]: start_type = "Collections "
elif "libraries" in attrs and attrs["libraries"]: start_type = "Libraries "
else: start_type = ""
start_time = datetime.now()
if "time" not in attrs:
attrs["time"] = start_time.strftime("%H:%M")
attrs["time_obj"] = start_time
attrs["version"] = version
attrs["branch"] = branch
attrs["config_file"] = run_args["config"]
attrs["ignore_schedules"] = run_args["ignore-schedules"]
attrs["read_only"] = run_args["read-only-config"]
attrs["no_missing"] = run_args["no-missing"]
attrs["no_report"] = run_args["no-report"]
attrs["collection_only"] = run_args["collections-only"]
attrs["metadata_only"] = run_args["metadata-only"]
attrs["playlist_only"] = run_args["playlists-only"]
attrs["operations_only"] = run_args["operations-only"]
attrs["overlays_only"] = run_args["overlays-only"]
attrs["plex_url"] = plex_url
attrs["plex_token"] = plex_token
logger.separator(debug=True)
logger.debug(f"Run Command: {run_arg}")
for akey, adata in arguments.items():
if isinstance(adata["help"], str):
ext = '"' if adata["type"] == "str" and run_args[akey] not in [None, "None"] else ""
logger.debug(f"--{akey} (KOMETA_{akey.replace('-', '_').upper()}): {ext}{run_args[akey]}{ext}")
logger.debug("")
if secret_args:
logger.debug("Kometa Secrets Read:")
for sec in secret_args:
logger.debug(f"--kometa-{sec} (KOMETA_{sec.upper().replace('-', '_')}): (redacted)")
logger.debug("")
logger.separator(f"Starting {start_type}Run")
config = None
stats = {"created": 0, "modified": 0, "deleted": 0, "added": 0, "unchanged": 0, "removed": 0, "radarr": 0, "sonarr": 0, "names": []}
try:
config = ConfigFile(default_dir, attrs, secret_args)
except Exception as e:
logger.stacktrace()
logger.critical(e)
else:
logger.add_main_handler()
logger.separator()
logger.info("")
logger.info_center(" __ ___ ______ ___ ___ _______ __________ ___ ")
logger.info_center("| |/ / / __ \\ | \\/ | | ____|| | / \\ ")
logger.info_center("| ' / | | | | | \\ / | | |__ `---| |---` / ^ \\ ")
logger.info_center("| < | | | | | |\\/| | | __| | | / /_\\ \\ ")
logger.info_center("| . \\ | `--` | | | | | | |____ | | / _____ \\ ")
logger.info_center("|__|\\__\\ \\______/ |__| |__| |_______| |__| /__/ \\__\\ ")
logger.info("")
if is_lxml:
system_ver = "lxml Docker"
elif is_linuxserver:
system_ver = "Linuxserver"
elif is_docker:
system_ver = "Docker"
else:
system_ver = f"Python {platform.python_version()}"
my_requests = Requests(file_version, env_version, git_branch, verify_ssl=False if run_args["no-verify-ssl"] else True)
logger.info(f" Version: {my_requests.version[0]} ({system_ver}){f' (Git: {git_branch})' if git_branch else ''}")
if my_requests.new_version:
logger.info(f" Newest Version: {my_requests.new_version}")
logger.info(f" Platform: {platform.platform()}")
logger.info(f" Total Memory: {round(psutil.virtual_memory().total / (1024.0 ** 3))} GB")
logger.info(f" Available Memory: {round(psutil.virtual_memory().available / (1024.0 ** 3))} GB")
if not is_docker and not is_linuxserver:
try:
with open(os.path.abspath(os.path.join(os.path.dirname(__file__), "requirements.txt")), "r") as file:
required_versions = {ln.split("==")[0]: ln.split("==")[1].strip() for ln in file.readlines()}
for req_name, sys_ver in system_versions.items():
if sys_ver and sys_ver != required_versions[req_name]:
logger.info(f" {req_name} version: {sys_ver} requires an update to: {required_versions[req_name]}")
except FileNotFoundError:
logger.error(" File Error: requirements.txt not found")
if "time" in attrs and attrs["time"]: start_type = f"{attrs['time']} "
elif run_args["tests"]: start_type = "Test "
elif "collections" in attrs and attrs["collections"]: start_type = "Collections "
elif "libraries" in attrs and attrs["libraries"]: start_type = "Libraries "
else: start_type = ""
start_time = datetime.now()
if "time" not in attrs:
attrs["time"] = start_time.strftime("%H:%M")
attrs["time_obj"] = start_time
attrs["config_file"] = run_args["config"]
attrs["ignore_schedules"] = run_args["ignore-schedules"]
attrs["read_only"] = run_args["read-only-config"]
attrs["no_missing"] = run_args["no-missing"]
attrs["no_report"] = run_args["no-report"]
attrs["collection_only"] = run_args["collections-only"]
attrs["metadata_only"] = run_args["metadata-only"]
attrs["playlist_only"] = run_args["playlists-only"]
attrs["operations_only"] = run_args["operations-only"]
attrs["overlays_only"] = run_args["overlays-only"]
attrs["plex_url"] = plex_url
attrs["plex_token"] = plex_token
logger.separator(debug=True)
logger.debug(f"Run Command: {run_arg}")
for akey, adata in arguments.items():
if isinstance(adata["help"], str):
ext = '"' if adata["type"] == "str" and run_args[akey] not in [None, "None"] else ""
logger.debug(f"--{akey} (KOMETA_{akey.replace('-', '_').upper()}): {ext}{run_args[akey]}{ext}")
logger.debug("")
if secret_args:
logger.debug("Kometa Secrets Read:")
for sec in secret_args:
logger.debug(f"--kometa-{sec} (KOMETA_{sec.upper().replace('-', '_')}): (redacted)")
logger.debug("")
logger.separator(f"Starting {start_type}Run")
config = None
stats = {"created": 0, "modified": 0, "deleted": 0, "added": 0, "unchanged": 0, "removed": 0, "radarr": 0, "sonarr": 0, "names": []}
try:
stats = run_config(config, stats)
config = ConfigFile(my_requests, default_dir, attrs, secret_args)
except Exception as e:
config.notify(e)
logger.stacktrace()
logger.critical(e)
logger.info("")
end_time = datetime.now()
run_time = str(end_time - start_time).split(".")[0]
if config:
else:
try:
stats = run_config(config, stats)
except Exception as e:
config.notify(e)
logger.stacktrace()
logger.critical(e)
logger.info("")
end_time = datetime.now()
run_time = str(end_time - start_time).split(".")[0]
if config:
try:
config.Webhooks.end_time_hooks(start_time, end_time, run_time, stats)
except Failed as e:
logger.stacktrace()
logger.error(f"Webhooks Error: {e}")
version_line = f"Version: {my_requests.version[0]}"
if my_requests.new_version:
version_line = f"{version_line} Newest Version: {my_requests.new_version}"
try:
config.Webhooks.end_time_hooks(start_time, end_time, run_time, stats)
except Failed as e:
logger.stacktrace()
logger.error(f"Webhooks Error: {e}")
version_line = f"Version: {version[0]}"
if new_version:
version_line = f"{version_line} Newest Version: {new_version}"
try:
log_data = {}
no_overlays = []
no_overlays_count = 0
convert_errors = {}
other_log_groups = [
("No Items found for", r"No Items found for .* \(\d+\) (.*)"),
("Convert Warning: No TVDb ID or IMDb ID found for AniDB ID:", r"Convert Warning: No TVDb ID or IMDb ID found for AniDB ID: (.*)"),
("Convert Warning: No AniDB ID Found for AniList ID:", r"Convert Warning: No AniDB ID Found for AniList ID: (.*)"),
("Convert Warning: No AniDB ID Found for MyAnimeList ID:", r"Convert Warning: No AniDB ID Found for MyAnimeList ID: (.*)"),
("Convert Warning: No IMDb ID Found for TMDb ID:", r"Convert Warning: No IMDb ID Found for TMDb ID: (.*)"),
("Convert Warning: No TMDb ID Found for IMDb ID:", r"Convert Warning: No TMDb ID Found for IMDb ID: (.*)"),
("Convert Warning: No TVDb ID Found for TMDb ID:", r"Convert Warning: No TVDb ID Found for TMDb ID: (.*)"),
("Convert Warning: No TMDb ID Found for TVDb ID:", r"Convert Warning: No TMDb ID Found for TVDb ID: (.*)"),
("Convert Warning: No IMDb ID Found for TVDb ID:", r"Convert Warning: No IMDb ID Found for TVDb ID: (.*)"),
("Convert Warning: No TVDb ID Found for IMDb ID:", r"Convert Warning: No TVDb ID Found for IMDb ID: (.*)"),
("Convert Warning: No AniDB ID to Convert to MyAnimeList ID for Guid:", r"Convert Warning: No AniDB ID to Convert to MyAnimeList ID for Guid: (.*)"),
("Convert Warning: No MyAnimeList Found for AniDB ID:", r"Convert Warning: No MyAnimeList Found for AniDB ID: (.*) of Guid: .*"),
]
other_message = {}
with open(logger.main_log, encoding="utf-8") as f:
for log_line in f:
for err_type in ["WARNING", "ERROR", "CRITICAL"]:
if f"[{err_type}]" in log_line:
log_line = log_line.split("|")[1].strip()
other = False
for key, reg in other_log_groups:
if log_line.startswith(key):
other = True
_name = re.match(reg, log_line).group(1)
if key not in other_message:
other_message[key] = {"list": [], "count": 0}
other_message[key]["count"] += 1
if _name not in other_message[key]:
other_message[key]["list"].append(_name)
if other is False:
if err_type not in log_data:
log_data[err_type] = []
log_data[err_type].append(log_line)
if "No Items found for" in other_message:
logger.separator(f"Overlay Errors Summary", space=False, border=False)
logger.info("")
logger.info(f"No Items found for {other_message['No Items found for']['count']} Overlays: {other_message['No Items found for']['list']}")
logger.info("")
log_data = {}
no_overlays = []
no_overlays_count = 0
convert_errors = {}
other_log_groups = [
("No Items found for", r"No Items found for .* \(\d+\) (.*)"),
("Convert Warning: No TVDb ID or IMDb ID found for AniDB ID:", r"Convert Warning: No TVDb ID or IMDb ID found for AniDB ID: (.*)"),
("Convert Warning: No AniDB ID Found for AniList ID:", r"Convert Warning: No AniDB ID Found for AniList ID: (.*)"),
("Convert Warning: No AniDB ID Found for MyAnimeList ID:", r"Convert Warning: No AniDB ID Found for MyAnimeList ID: (.*)"),
("Convert Warning: No IMDb ID Found for TMDb ID:", r"Convert Warning: No IMDb ID Found for TMDb ID: (.*)"),
("Convert Warning: No TMDb ID Found for IMDb ID:", r"Convert Warning: No TMDb ID Found for IMDb ID: (.*)"),
("Convert Warning: No TVDb ID Found for TMDb ID:", r"Convert Warning: No TVDb ID Found for TMDb ID: (.*)"),
("Convert Warning: No TMDb ID Found for TVDb ID:", r"Convert Warning: No TMDb ID Found for TVDb ID: (.*)"),
("Convert Warning: No IMDb ID Found for TVDb ID:", r"Convert Warning: No IMDb ID Found for TVDb ID: (.*)"),
("Convert Warning: No TVDb ID Found for IMDb ID:", r"Convert Warning: No TVDb ID Found for IMDb ID: (.*)"),
("Convert Warning: No AniDB ID to Convert to MyAnimeList ID for Guid:", r"Convert Warning: No AniDB ID to Convert to MyAnimeList ID for Guid: (.*)"),
("Convert Warning: No MyAnimeList Found for AniDB ID:", r"Convert Warning: No MyAnimeList Found for AniDB ID: (.*) of Guid: .*"),
]
other_message = {}
with open(logger.main_log, encoding="utf-8") as f:
for log_line in f:
for err_type in ["WARNING", "ERROR", "CRITICAL"]:
if f"[{err_type}]" in log_line:
log_line = log_line.split("|")[1].strip()
other = False
for key, reg in other_log_groups:
if log_line.startswith(key):
other = True
_name = re.match(reg, log_line).group(1)
if key not in other_message:
other_message[key] = {"list": [], "count": 0}
other_message[key]["count"] += 1
if _name not in other_message[key]:
other_message[key]["list"].append(_name)
if other is False:
if err_type not in log_data:
log_data[err_type] = []
log_data[err_type].append(log_line)
if "No Items found for" in other_message:
logger.separator(f"Overlay Errors Summary", space=False, border=False)
logger.info("")
logger.info(f"No Items found for {other_message['No Items found for']['count']} Overlays: {other_message['No Items found for']['list']}")
logger.info("")
convert_title = False
for key, _ in other_log_groups:
if key.startswith("Convert Warning") and key in other_message:
if convert_title is False:
logger.separator("Convert Summary", space=False, border=False)
logger.info("")
convert_title = True
logger.info(f"{key[17:]}")
logger.info(", ".join(other_message[key]["list"]))
if convert_title:
logger.info("")
convert_title = False
for key, _ in other_log_groups:
if key.startswith("Convert Warning") and key in other_message:
if convert_title is False:
logger.separator("Convert Summary", space=False, border=False)
logger.info("")
convert_title = True
logger.info(f"{key[17:]}")
logger.info(", ".join(other_message[key]["list"]))
if convert_title:
logger.info("")
for err_type in ["WARNING", "ERROR", "CRITICAL"]:
if err_type not in log_data:
continue
logger.separator(f"{err_type.lower().capitalize()} Summary", space=False, border=False)
for err_type in ["WARNING", "ERROR", "CRITICAL"]:
if err_type not in log_data:
continue
logger.separator(f"{err_type.lower().capitalize()} Summary", space=False, border=False)
logger.info("")
logger.info("Count | Message")
logger.separator(f"{logger.separating_character * 5}|", space=False, border=False, side_space=False, left=True)
for k, v in Counter(log_data[err_type]).most_common():
logger.info(f"{v:>5} | {k}")
logger.info("")
except Failed as e:
logger.stacktrace()
logger.error(f"Report Error: {e}")
logger.info("")
logger.info("Count | Message")
logger.separator(f"{logger.separating_character * 5}|", space=False, border=False, side_space=False, left=True)
for k, v in Counter(log_data[err_type]).most_common():
logger.info(f"{v:>5} | {k}")
logger.info("")
except Failed as e:
logger.stacktrace()
logger.error(f"Report Error: {e}")
logger.separator(f"Finished {start_type}Run\n{version_line}\nFinished: {end_time.strftime('%H:%M:%S %Y-%m-%d')} Run Time: {run_time}")
logger.remove_main_handler()
logger.separator(f"Finished {start_type}Run\n{version_line}\nFinished: {end_time.strftime('%H:%M:%S %Y-%m-%d')} Run Time: {run_time}")
logger.remove_main_handler()
except Exception as e:
logger.stacktrace()
logger.critical(e)
def run_config(config, stats):
library_status = run_libraries(config)
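The diff above replaces inline version checking and SSL handling with a shared `Requests` object (`my_requests`) passed into `ConfigFile`. A hypothetical skeleton of that object's surface, with a plain callable standing in for the real `requests.Session` so the sketch stays self-contained:

```python
class Failed(Exception):
    # Stand-in for modules.util.Failed.
    pass

class Requests:
    # Hypothetical skeleton of the new modules/request.py: one object
    # owns the version info, the SSL flag, and helpers like get_image.
    def __init__(self, file_version, env_version=None, git_branch=None,
                 verify_ssl=True, getter=None):
        self.version = file_version      # tuple like ("2.0.1-develop26", ..., 26)
        self.env_version = env_version
        self.git_branch = git_branch
        self.verify_ssl = verify_ssl
        self.new_version = None          # filled in by an update check
        self._get = getter               # e.g. requests.Session().get

    def get_image(self, url):
        # The contract builder.py now relies on: raise Failed instead of
        # returning an error response or a non-image body.
        response = self._get(url)
        content_type = response.headers.get("Content-Type", "")
        if response.status_code >= 400 or not content_type.startswith("image/"):
            raise Failed(f"Image Error: {url}")
        return response
```

Centralizing the session means `--no-verify-ssl` only has to be applied once, instead of on every call site that previously went through `config.get`.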

@ -225,6 +225,7 @@ nav:
- Basic Charts: defaults/chart/basic.md
- AniList Charts: defaults/chart/anilist.md
- IMDb Charts: defaults/chart/imdb.md
- Letterboxd Charts: defaults/chart/letterboxd.md
- MyAnimeList Charts: defaults/chart/myanimelist.md
- Tautulli Charts: defaults/chart/tautulli.md
- TMDb Charts: defaults/chart/tmdb.md

@ -89,8 +89,9 @@ class AniDBObj:
class AniDB:
def __init__(self, config, data):
self.config = config
def __init__(self, requests, cache, data):
self.requests = requests
self.cache = cache
self.language = data["language"]
self.expiration = 60
self.client = None
@ -104,19 +105,19 @@ class AniDB:
self.version = version
self.expiration = expiration
logger.secret(self.client)
if self.config.Cache:
value1, value2, success = self.config.Cache.query_testing("anidb_login")
if self.cache:
value1, value2, success = self.cache.query_testing("anidb_login")
if str(value1) == str(client) and str(value2) == str(version) and success:
return
try:
self.get_anime(69, ignore_cache=True)
if self.config.Cache:
self.config.Cache.update_testing("anidb_login", self.client, self.version, "True")
if self.cache:
self.cache.update_testing("anidb_login", self.client, self.version, "True")
except Failed:
self.client = None
self.version = None
if self.config.Cache:
self.config.Cache.update_testing("anidb_login", self.client, self.version, "False")
if self.cache:
self.cache.update_testing("anidb_login", self.client, self.version, "False")
raise
@property
@ -137,9 +138,9 @@ class AniDB:
if params:
logger.trace(f"Params: {params}")
if data:
return self.config.post_html(url, data=data, headers=util.header(self.language))
return self.requests.post_html(url, data=data, language=self.language)
else:
return self.config.get_html(url, params=params, headers=util.header(self.language))
return self.requests.get_html(url, params=params, language=self.language)
def _popular(self):
response = self._request(urls["popular"])
@ -184,8 +185,8 @@ class AniDB:
def get_anime(self, anidb_id, ignore_cache=False):
expired = None
anidb_dict = None
if self.config.Cache and not ignore_cache:
anidb_dict, expired = self.config.Cache.query_anidb(anidb_id, self.expiration)
if self.cache and not ignore_cache:
anidb_dict, expired = self.cache.query_anidb(anidb_id, self.expiration)
if expired or not anidb_dict:
time_check = time.time()
if self._delay is not None:
@ -200,8 +201,8 @@ class AniDB:
})
self._delay = time.time()
obj = AniDBObj(self, anidb_id, anidb_dict)
if self.config.Cache and not ignore_cache:
self.config.Cache.update_anidb(expired, anidb_id, obj, self.expiration)
if self.cache and not ignore_cache:
self.cache.update_anidb(expired, anidb_id, obj, self.expiration)
return obj
def get_anidb_ids(self, method, data):
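`get_anime` follows a query-then-update cache pattern: ask the cache first, fetch only on a miss or expiry, then write the result back. A self-contained sketch of that pattern — the real Kometa `Cache` is database-backed; this in-memory stand-in is purely illustrative:

```python
import time

class SimpleCache:
    # Assumed stand-in for Kometa's Cache: entries expire after a
    # number of days, mirroring the query/update pair in get_anime.
    def __init__(self):
        self._data = {}

    def query(self, key, expiration_days):
        if key not in self._data:
            return None, True
        value, stored = self._data[key]
        expired = (time.time() - stored) > expiration_days * 86400
        return value, expired

    def update(self, key, value):
        self._data[key] = (value, time.time())

cache = SimpleCache()
value, expired = cache.query(69, 60)
if expired or value is None:
    value = {"title": "fetched"}   # pretend network fetch
    cache.update(69, value)
```

The `ignore_cache` flag in `get_anime` skips both halves of this pattern, which is why the login test uses it: a stale cached entry must not mask bad credentials.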

@ -57,8 +57,8 @@ country_codes = [
]
class AniList:
def __init__(self, config):
self.config = config
def __init__(self, requests):
self.requests = requests
self._options = None
@property
@ -79,7 +79,7 @@ class AniList:
def _request(self, query, variables, level=1):
logger.trace(f"Query: {query}")
logger.trace(f"Variables: {variables}")
response = self.config.post(base_url, json={"query": query, "variables": variables})
response = self.requests.post(base_url, json={"query": query, "variables": variables})
json_obj = response.json()
logger.trace(f"Response: {json_obj}")
if "errors" in json_obj:

@ -6,11 +6,10 @@ from modules import anidb, anilist, icheckmovies, imdb, letterboxd, mal, mojo, p
from modules.util import Failed, FilterFailed, NonExisting, NotScheduled, NotScheduledRange, Deleted
from modules.overlay import Overlay
from modules.poster import KometaImage
from modules.request import quote
from plexapi.audio import Artist, Album, Track
from plexapi.exceptions import NotFound
from plexapi.video import Movie, Show, Season, Episode
from requests.exceptions import ConnectionError
from urllib.parse import quote
logger = util.logger
@ -559,9 +558,7 @@ class CollectionBuilder:
self.obj = getter(self.name)
break
except Failed as e:
error = e
else:
logger.error(error)
logger.error(e)
raise Deleted(self.delete())
else:
self.libraries.append(self.library)
@ -1182,11 +1179,9 @@ class CollectionBuilder:
if method_name == "url_poster":
try:
if not method_data.startswith("https://theposterdb.com/api/assets/"):
image_response = self.config.get(method_data, headers=util.header())
if image_response.status_code >= 400 or image_response.headers["Content-Type"] not in util.image_content_types:
raise ConnectionError
self.config.Requests.get_image(method_data)
self.posters[method_name] = method_data
except ConnectionError:
except Failed:
logger.warning(f"{self.Type} Warning: No Poster Found at {method_data}")
elif method_name == "tmdb_list_poster":
self.posters[method_name] = self.config.TMDb.get_list(util.regex_first_int(method_data, "TMDb List ID")).poster_url
@ -1209,11 +1204,9 @@ class CollectionBuilder:
def _background(self, method_name, method_data):
if method_name == "url_background":
try:
image_response = self.config.get(method_data, headers=util.header())
if image_response.status_code >= 400 or image_response.headers["Content-Type"] not in util.image_content_types:
raise ConnectionError
self.config.Requests.get_image(method_data)
self.backgrounds[method_name] = method_data
except ConnectionError:
except Failed:
logger.warning(f"{self.Type} Warning: No Background Found at {method_data}")
elif method_name == "tmdb_background":
self.backgrounds[method_name] = self.config.TMDb.get_movie_show_or_collection(util.regex_first_int(method_data, 'TMDb ID'), self.library.is_movie).backdrop_url
@ -2875,7 +2868,7 @@ class CollectionBuilder:
if self.details["changes_webhooks"]:
self.notification_removals.append(util.item_set(item, self.library.get_id_from_maps(item.ratingKey)))
if self.playlist and items_removed:
self.library._reload(self.obj)
self.library.item_reload(self.obj)
self.obj.removeItems(items_removed)
elif items_removed:
self.library.alter_collection(items_removed, self.name, smart_label_collection=self.smart_label_collection, add=False)
@ -3328,7 +3321,7 @@ class CollectionBuilder:
logger.error("Metadata: Failed to Update Please delete the collection and run again")
logger.info("")
else:
self.library._reload(self.obj)
self.library.item_reload(self.obj)
#self.obj.batchEdits()
batch_display = "Collection Metadata Edits"
if summary[1] and str(summary[1]) != str(self.obj.summary):
@ -3449,8 +3442,8 @@ class CollectionBuilder:
elif style_data and "tpdb_background" in style_data and style_data["tpdb_background"]:
self.backgrounds["style_data"] = f"https://theposterdb.com/api/assets/{style_data['tpdb_background']}"
self.collection_poster = util.pick_image(self.obj.title, self.posters, self.library.prioritize_assets, self.library.download_url_assets, asset_location)
self.collection_background = util.pick_image(self.obj.title, self.backgrounds, self.library.prioritize_assets, self.library.download_url_assets, asset_location, is_poster=False)
self.collection_poster = self.library.pick_image(self.obj.title, self.posters, self.library.prioritize_assets, self.library.download_url_assets, asset_location)
self.collection_background = self.library.pick_image(self.obj.title, self.backgrounds, self.library.prioritize_assets, self.library.download_url_assets, asset_location, is_poster=False)
clean_temp = False
if isinstance(self.collection_poster, KometaImage):
@ -3520,7 +3513,7 @@ class CollectionBuilder:
logger.separator(f"Syncing {self.name} {self.Type} to Trakt List {self.sync_to_trakt_list}", space=False, border=False)
logger.info("")
if self.obj:
self.library._reload(self.obj)
self.library.item_reload(self.obj)
self.load_collection_items()
current_ids = []
for item in self.items:
@ -3597,7 +3590,7 @@ class CollectionBuilder:
def send_notifications(self, playlist=False):
if self.obj and self.details["changes_webhooks"] and \
(self.created or len(self.notification_additions) > 0 or len(self.notification_removals) > 0):
self.library._reload(self.obj)
self.library.item_reload(self.obj)
try:
self.library.Webhooks.collection_hooks(
self.details["changes_webhooks"],

@@ -1,6 +1,5 @@
import base64, os, re, requests
import os, re
from datetime import datetime
from lxml import html
from modules import util, radarr, sonarr, operations
from modules.anidb import AniDB
from modules.anilist import AniList
@@ -27,9 +26,8 @@ from modules.tautulli import Tautulli
from modules.tmdb import TMDb
from modules.trakt import Trakt
from modules.tvdb import TVDb
from modules.util import Failed, NotScheduled, NotScheduledRange, YAML
from modules.util import Failed, NotScheduled, NotScheduledRange
from modules.webhooks import Webhooks
from retrying import retry
logger = util.logger
@@ -142,7 +140,7 @@ library_operations = {
}
class ConfigFile:
def __init__(self, default_dir, attrs, secrets):
def __init__(self, in_request, default_dir, attrs, secrets):
logger.info("Locating config...")
config_file = attrs["config_file"]
if config_file and os.path.exists(config_file): self.config_path = os.path.abspath(config_file)
@@ -153,10 +151,9 @@ class ConfigFile:
logger.clear_errors()
self._mediastingers = None
self.Requests = in_request
self.default_dir = default_dir
self.secrets = secrets
self.version = attrs["version"] if "version" in attrs else None
self.branch = attrs["branch"] if "branch" in attrs else None
self.read_only = attrs["read_only"] if "read_only" in attrs else False
self.no_missing = attrs["no_missing"] if "no_missing" in attrs else None
self.no_report = attrs["no_report"] if "no_report" in attrs else None
@@ -196,7 +193,7 @@ class ConfigFile:
logger.debug(re.sub(r"(token|client.*|url|api_*key|secret|error|delete|run_start|run_end|version|changes|username|password): .+", r"\1: (redacted)", line.strip("\r\n")))
logger.debug("")
self.data = YAML(self.config_path).data
self.data = self.Requests.file_yaml(self.config_path).data
def replace_attr(all_data, in_attr, par):
if "settings" not in all_data:
@@ -364,7 +361,7 @@ class ConfigFile:
if data is None or attribute not in data:
message = f"{text} not found"
if parent and save is True:
yaml = YAML(self.config_path)
yaml = self.Requests.file_yaml(self.config_path)
endline = f"\n{parent} sub-attribute {attribute} added to config"
if parent not in yaml.data or not yaml.data[parent]: yaml.data[parent] = {attribute: default}
elif attribute not in yaml.data[parent]: yaml.data[parent][attribute] = default
@@ -480,7 +477,7 @@ class ConfigFile:
"playlist_sync_to_users": check_for_attribute(self.data, "playlist_sync_to_users", parent="settings", default="all", default_is_none=True),
"playlist_exclude_users": check_for_attribute(self.data, "playlist_exclude_users", parent="settings", default_is_none=True),
"playlist_report": check_for_attribute(self.data, "playlist_report", parent="settings", var_type="bool", default=True),
"verify_ssl": check_for_attribute(self.data, "verify_ssl", parent="settings", var_type="bool", default=True),
"verify_ssl": check_for_attribute(self.data, "verify_ssl", parent="settings", var_type="bool", default=True, save=False),
"custom_repo": check_for_attribute(self.data, "custom_repo", parent="settings", default_is_none=True),
"overlay_artwork_filetype": check_for_attribute(self.data, "overlay_artwork_filetype", parent="settings", test_list=filetype_list, translations={"webp": "webp_lossy"}, default="jpg"),
"overlay_artwork_quality": check_for_attribute(self.data, "overlay_artwork_quality", parent="settings", var_type="int", default_is_none=True, int_min=1, int_max=100),
@@ -492,7 +489,9 @@ class ConfigFile:
if "https://github.com/" in repo:
repo = repo.replace("https://github.com/", "https://raw.githubusercontent.com/").replace("/tree/", "/")
self.custom_repo = repo
self.latest_version = util.current_version(self.version, branch=self.branch)
if not self.general["verify_ssl"]:
self.Requests.no_verify_ssl()
add_operations = True if "operations" not in self.general["run_order"] else False
add_metadata = True if "metadata" not in self.general["run_order"] else False
@@ -516,25 +515,20 @@ class ConfigFile:
new_run_order.append("overlays")
self.general["run_order"] = new_run_order
yaml = YAML(self.config_path)
if "settings" not in yaml.data or not yaml.data["settings"]:
yaml.data["settings"] = {}
yaml.data["settings"]["run_order"] = new_run_order
yaml.save()
self.session = requests.Session()
if not self.general["verify_ssl"]:
self.session.verify = False
if self.session.verify is False:
import urllib3
urllib3.disable_warnings(urllib3.exceptions.InsecureRequestWarning)
config_yaml = self.Requests.file_yaml(self.config_path)
if "settings" not in config_yaml.data or not config_yaml.data["settings"]:
config_yaml.data["settings"] = {}
config_yaml.data["settings"]["run_order"] = new_run_order
config_yaml.save()
if self.general["cache"]:
logger.separator()
self.Cache = Cache(self.config_path, self.general["cache_expiration"])
else:
self.Cache = None
self.GitHub = GitHub(self, {"token": check_for_attribute(self.data, "token", parent="github", default_is_none=True)})
self.GitHub = GitHub(self.Requests, {
"token": check_for_attribute(self.data, "token", parent="github", default_is_none=True)
})
logger.separator()
@@ -542,7 +536,9 @@ class ConfigFile:
if "notifiarr" in self.data:
logger.info("Connecting to Notifiarr...")
try:
self.NotifiarrFactory = Notifiarr(self, {"apikey": check_for_attribute(self.data, "apikey", parent="notifiarr", throw=True)})
self.NotifiarrFactory = Notifiarr(self.Requests, {
"apikey": check_for_attribute(self.data, "apikey", parent="notifiarr", throw=True)
})
except Failed as e:
if str(e).endswith("is blank"):
logger.warning(e)
@@ -557,7 +553,7 @@ class ConfigFile:
if "gotify" in self.data:
logger.info("Connecting to Gotify...")
try:
self.GotifyFactory = Gotify(self, {
self.GotifyFactory = Gotify(self.Requests, {
"url": check_for_attribute(self.data, "url", parent="gotify", throw=True),
"token": check_for_attribute(self.data, "token", parent="gotify", throw=True)
})
@@ -582,8 +578,8 @@ class ConfigFile:
self.Webhooks = Webhooks(self, self.webhooks, notifiarr=self.NotifiarrFactory, gotify=self.GotifyFactory)
try:
self.Webhooks.start_time_hooks(self.start_time)
if self.version[0] != "Unknown" and self.latest_version[0] != "Unknown" and self.version[1] != self.latest_version[1] or (self.version[2] and self.version[2] < self.latest_version[2]):
self.Webhooks.version_hooks(self.version, self.latest_version)
if self.Requests.has_new_version():
self.Webhooks.version_hooks(self.Requests.version, self.Requests.latest_version)
except Failed as e:
logger.stacktrace()
logger.error(f"Webhooks Error: {e}")
@@ -613,7 +609,7 @@ class ConfigFile:
if "omdb" in self.data:
logger.info("Connecting to OMDb...")
try:
self.OMDb = OMDb(self, {
self.OMDb = OMDb(self.Requests, self.Cache, {
"apikey": check_for_attribute(self.data, "apikey", parent="omdb", throw=True),
"expiration": check_for_attribute(self.data, "cache_expiration", parent="omdb", var_type="int", default=60, int_min=1)
})
@@ -628,7 +624,7 @@ class ConfigFile:
logger.separator()
self.MDBList = MDBList(self)
self.MDBList = MDBList(self.Requests, self.Cache)
if "mdblist" in self.data:
logger.info("Connecting to MDBList...")
try:
@@ -652,7 +648,7 @@ class ConfigFile:
if "trakt" in self.data:
logger.info("Connecting to Trakt...")
try:
self.Trakt = Trakt(self, {
self.Trakt = Trakt(self.Requests, self.read_only, {
"client_id": check_for_attribute(self.data, "client_id", parent="trakt", throw=True),
"client_secret": check_for_attribute(self.data, "client_secret", parent="trakt", throw=True),
"pin": check_for_attribute(self.data, "pin", parent="trakt", default_is_none=True),
@@ -674,7 +670,7 @@ class ConfigFile:
if "mal" in self.data:
logger.info("Connecting to My Anime List...")
try:
self.MyAnimeList = MyAnimeList(self, {
self.MyAnimeList = MyAnimeList(self.Requests, self.Cache, self.read_only, {
"client_id": check_for_attribute(self.data, "client_id", parent="mal", throw=True),
"client_secret": check_for_attribute(self.data, "client_secret", parent="mal", throw=True),
"localhost_url": check_for_attribute(self.data, "localhost_url", parent="mal", default_is_none=True),
@@ -691,7 +687,9 @@ class ConfigFile:
else:
logger.info("mal attribute not found")
self.AniDB = AniDB(self, {"language": check_for_attribute(self.data, "language", parent="anidb", default="en")})
self.AniDB = AniDB(self.Requests, self.Cache, {
"language": check_for_attribute(self.data, "language", parent="anidb", default="en")
})
if "anidb" in self.data:
logger.separator()
logger.info("Connecting to AniDB...")
@@ -745,15 +743,15 @@ class ConfigFile:
logger.info("")
logger.separator(f"Skipping {e} Playlist File")
self.TVDb = TVDb(self, self.general["tvdb_language"], self.general["cache_expiration"])
self.IMDb = IMDb(self)
self.Convert = Convert(self)
self.AniList = AniList(self)
self.ICheckMovies = ICheckMovies(self)
self.Letterboxd = Letterboxd(self)
self.BoxOfficeMojo = BoxOfficeMojo(self)
self.Reciperr = Reciperr(self)
self.Ergast = Ergast(self)
self.TVDb = TVDb(self.Requests, self.Cache, self.general["tvdb_language"], self.general["cache_expiration"])
self.IMDb = IMDb(self.Requests, self.Cache, self.default_dir)
self.Convert = Convert(self.Requests, self.Cache, self.TMDb)
self.AniList = AniList(self.Requests)
self.ICheckMovies = ICheckMovies(self.Requests)
self.Letterboxd = Letterboxd(self.Requests, self.Cache)
self.BoxOfficeMojo = BoxOfficeMojo(self.Requests, self.Cache)
self.Reciperr = Reciperr(self.Requests)
self.Ergast = Ergast(self.Requests, self.Cache)
logger.separator()
@@ -1165,15 +1163,15 @@ class ConfigFile:
for attr in ["clean_bundles", "empty_trash", "optimize"]:
try:
params["plex"][attr] = check_for_attribute(lib, attr, parent="plex", var_type="bool", save=False, throw=True)
except Failed as er:
test = lib["plex"][attr] if "plex" in lib and attr in lib["plex"] and lib["plex"][attr] else self.general["plex"][attr]
except Failed:
test_attr = lib["plex"][attr] if "plex" in lib and attr in lib["plex"] and lib["plex"][attr] else self.general["plex"][attr]
params["plex"][attr] = False
if test is not True and test is not False:
if test_attr is not True and test_attr is not False:
try:
util.schedule_check(attr, test, current_time, self.run_hour)
util.schedule_check(attr, test_attr, current_time, self.run_hour)
params["plex"][attr] = True
except NotScheduled:
logger.info(f"Skipping Operation Not Scheduled for {test}")
logger.info(f"Skipping Operation Not Scheduled for {test_attr}")
if params["plex"]["url"].lower() == "env":
params["plex"]["url"] = self.env_plex_url
@@ -1201,7 +1199,7 @@ class ConfigFile:
logger.info(f"Connecting to {display_name} library's Radarr...")
logger.info("")
try:
library.Radarr = Radarr(self, library, {
library.Radarr = Radarr(self.Requests, self.Cache, library, {
"url": check_for_attribute(lib, "url", parent="radarr", var_type="url", default=self.general["radarr"]["url"], req_default=True, save=False),
"token": check_for_attribute(lib, "token", parent="radarr", default=self.general["radarr"]["token"], req_default=True, save=False),
"add_missing": check_for_attribute(lib, "add_missing", parent="radarr", var_type="bool", default=self.general["radarr"]["add_missing"], save=False),
@@ -1231,7 +1229,7 @@ class ConfigFile:
logger.info(f"Connecting to {display_name} library's Sonarr...")
logger.info("")
try:
library.Sonarr = Sonarr(self, library, {
library.Sonarr = Sonarr(self.Requests, self.Cache, library, {
"url": check_for_attribute(lib, "url", parent="sonarr", var_type="url", default=self.general["sonarr"]["url"], req_default=True, save=False),
"token": check_for_attribute(lib, "token", parent="sonarr", default=self.general["sonarr"]["token"], req_default=True, save=False),
"add_missing": check_for_attribute(lib, "add_missing", parent="sonarr", var_type="bool", default=self.general["sonarr"]["add_missing"], save=False),
@@ -1264,7 +1262,7 @@ class ConfigFile:
logger.info(f"Connecting to {display_name} library's Tautulli...")
logger.info("")
try:
library.Tautulli = Tautulli(self, library, {
library.Tautulli = Tautulli(self.Requests, library, {
"url": check_for_attribute(lib, "url", parent="tautulli", var_type="url", default=self.general["tautulli"]["url"], req_default=True, save=False),
"apikey": check_for_attribute(lib, "apikey", parent="tautulli", default=self.general["tautulli"]["apikey"], req_default=True, save=False)
})
@@ -1315,44 +1313,8 @@ class ConfigFile:
logger.stacktrace()
logger.error(f"Webhooks Error: {e}")
def get_html(self, url, headers=None, params=None):
return html.fromstring(self.get(url, headers=headers, params=params).content)
def get_json(self, url, json=None, headers=None, params=None):
response = self.get(url, json=json, headers=headers, params=params)
try:
return response.json()
except ValueError:
logger.error(str(response.content))
raise
@retry(stop_max_attempt_number=6, wait_fixed=10000)
def get(self, url, json=None, headers=None, params=None):
return self.session.get(url, json=json, headers=headers, params=params)
def get_image_encoded(self, url):
return base64.b64encode(self.get(url).content).decode('utf-8')
def post_html(self, url, data=None, json=None, headers=None):
return html.fromstring(self.post(url, data=data, json=json, headers=headers).content)
def post_json(self, url, data=None, json=None, headers=None):
response = self.post(url, data=data, json=json, headers=headers)
try:
return response.json()
except ValueError:
logger.error(str(response.content))
raise
@retry(stop_max_attempt_number=6, wait_fixed=10000)
def post(self, url, data=None, json=None, headers=None):
return self.session.post(url, data=data, json=json, headers=headers)
def load_yaml(self, url):
return YAML(input_data=self.get(url).content).data
@property
def mediastingers(self):
if self._mediastingers is None:
self._mediastingers = self.load_yaml(mediastingers_url)
self._mediastingers = self.Requests.get_yaml(mediastingers_url)
return self._mediastingers
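The helpers removed from ConfigFile above (the shared `requests.Session`, `get`, `get_json`, `load_yaml`, the `verify` flag) move into the new `modules/request.py`, which is not part of these hunks. The sketch below is therefore only a guess at the wrapper the call sites imply; the constructor shape and any method beyond `get`, `get_json`, `no_verify_ssl`, and `get_image_encoded` are assumptions:

```python
import base64

class Requests:
    """Hypothetical sketch of the centralized request helper (names assumed)."""

    def __init__(self, session, verify_ssl=True):
        self.session = session            # e.g. a requests.Session(), injected
        self.session.verify = verify_ssl

    def no_verify_ssl(self):
        # Called from ConfigFile when settings.verify_ssl is false,
        # replacing the old direct self.session.verify = False
        self.session.verify = False

    def get(self, url, json=None, headers=None, params=None):
        return self.session.get(url, json=json, headers=headers, params=params)

    def get_json(self, url, **kwargs):
        # Mirrors the old ConfigFile.get_json, minus the error logging
        return self.get(url, **kwargs).json()

    def get_image_encoded(self, url):
        # Mirrors the old ConfigFile.get_image_encoded
        return base64.b64encode(self.get(url).content).decode("utf-8")
```

Injecting the session keeps the wrapper testable without network access, which the old module-level `requests.Session()` in ConfigFile did not allow.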

@@ -1,15 +1,19 @@
import re, requests
import re
from modules import util
from modules.util import Failed, NonExisting
from modules.request import urlparse
from plexapi.exceptions import BadRequest
from requests.exceptions import ConnectionError
logger = util.logger
anime_lists_url = "https://raw.githubusercontent.com/Kometa-Team/Anime-IDs/master/anime_ids.json"
class Convert:
def __init__(self, config):
self.config = config
def __init__(self, requests, cache, tmdb):
self.requests = requests
self.cache = cache
self.tmdb = tmdb
self._anidb_ids = {}
self._mal_to_anidb = {}
self._anidb_to_mal = {}
@@ -22,7 +26,7 @@ class Convert:
self._tmdb_show_to_anidb = {}
self._imdb_to_anidb = {}
self._tvdb_to_anidb = {}
self._anidb_ids = self.config.get_json(anime_lists_url)
self._anidb_ids = self.requests.get_json(anime_lists_url)
for anidb_id, ids in self._anidb_ids.items():
anidb_id = int(anidb_id)
if "mal_id" in ids:
@@ -78,6 +82,11 @@ class Convert:
else:
return None
def anidb_to_mal(self, anidb_id):
if anidb_id not in self._anidb_to_mal:
raise Failed(f"Convert Warning: No MyAnimeList Found for AniDB ID: {anidb_id}")
return self._anidb_to_mal[anidb_id]
def anidb_to_ids(self, anidb_ids, library):
ids = []
anidb_list = anidb_ids if isinstance(anidb_ids, list) else [anidb_ids]
@@ -139,15 +148,15 @@ class Convert:
def tmdb_to_imdb(self, tmdb_id, is_movie=True, fail=False):
media_type = "movie" if is_movie else "show"
expired = False
if self.config.Cache and is_movie:
cache_id, expired = self.config.Cache.query_imdb_to_tmdb_map(tmdb_id, imdb=False, media_type=media_type)
if self.cache and is_movie:
cache_id, expired = self.cache.query_imdb_to_tmdb_map(tmdb_id, imdb=False, media_type=media_type)
if cache_id and not expired:
return cache_id
try:
imdb_id = self.config.TMDb.convert_from(tmdb_id, "imdb_id", is_movie)
imdb_id = self.tmdb.convert_from(tmdb_id, "imdb_id", is_movie)
if imdb_id:
if self.config.Cache:
self.config.Cache.update_imdb_to_tmdb_map(media_type, expired, imdb_id, tmdb_id)
if self.cache:
self.cache.update_imdb_to_tmdb_map(media_type, expired, imdb_id, tmdb_id)
return imdb_id
except Failed:
pass
@@ -158,15 +167,15 @@ class Convert:
def imdb_to_tmdb(self, imdb_id, fail=False):
expired = False
if self.config.Cache:
cache_id, cache_type, expired = self.config.Cache.query_imdb_to_tmdb_map(imdb_id, imdb=True, return_type=True)
if self.cache:
cache_id, cache_type, expired = self.cache.query_imdb_to_tmdb_map(imdb_id, imdb=True, return_type=True)
if cache_id and not expired:
return cache_id, cache_type
try:
tmdb_id, tmdb_type = self.config.TMDb.convert_imdb_to(imdb_id)
tmdb_id, tmdb_type = self.tmdb.convert_imdb_to(imdb_id)
if tmdb_id:
if self.config.Cache:
self.config.Cache.update_imdb_to_tmdb_map(tmdb_type, expired, imdb_id, tmdb_id)
if self.cache:
self.cache.update_imdb_to_tmdb_map(tmdb_type, expired, imdb_id, tmdb_id)
return tmdb_id, tmdb_type
except Failed:
pass
@@ -177,15 +186,15 @@ class Convert:
def tmdb_to_tvdb(self, tmdb_id, fail=False):
expired = False
if self.config.Cache:
cache_id, expired = self.config.Cache.query_tmdb_to_tvdb_map(tmdb_id, tmdb=True)
if self.cache:
cache_id, expired = self.cache.query_tmdb_to_tvdb_map(tmdb_id, tmdb=True)
if cache_id and not expired:
return cache_id
try:
tvdb_id = self.config.TMDb.convert_from(tmdb_id, "tvdb_id", False)
tvdb_id = self.tmdb.convert_from(tmdb_id, "tvdb_id", False)
if tvdb_id:
if self.config.Cache:
self.config.Cache.update_tmdb_to_tvdb_map(expired, tmdb_id, tvdb_id)
if self.cache:
self.cache.update_tmdb_to_tvdb_map(expired, tmdb_id, tvdb_id)
return tvdb_id
except Failed:
pass
@@ -196,15 +205,15 @@ class Convert:
def tvdb_to_tmdb(self, tvdb_id, fail=False):
expired = False
if self.config.Cache:
cache_id, expired = self.config.Cache.query_tmdb_to_tvdb_map(tvdb_id, tmdb=False)
if self.cache:
cache_id, expired = self.cache.query_tmdb_to_tvdb_map(tvdb_id, tmdb=False)
if cache_id and not expired:
return cache_id
try:
tmdb_id = self.config.TMDb.convert_tvdb_to(tvdb_id)
tmdb_id = self.tmdb.convert_tvdb_to(tvdb_id)
if tmdb_id:
if self.config.Cache:
self.config.Cache.update_tmdb_to_tvdb_map(expired, tmdb_id, tvdb_id)
if self.cache:
self.cache.update_tmdb_to_tvdb_map(expired, tmdb_id, tvdb_id)
return tmdb_id
except Failed:
pass
@@ -215,15 +224,15 @@ class Convert:
def tvdb_to_imdb(self, tvdb_id, fail=False):
expired = False
if self.config.Cache:
cache_id, expired = self.config.Cache.query_imdb_to_tvdb_map(tvdb_id, imdb=False)
if self.cache:
cache_id, expired = self.cache.query_imdb_to_tvdb_map(tvdb_id, imdb=False)
if cache_id and not expired:
return cache_id
try:
imdb_id = self.tmdb_to_imdb(self.tvdb_to_tmdb(tvdb_id, fail=True), is_movie=False, fail=True)
if imdb_id:
if self.config.Cache:
self.config.Cache.update_imdb_to_tvdb_map(expired, imdb_id, tvdb_id)
if self.cache:
self.cache.update_imdb_to_tvdb_map(expired, imdb_id, tvdb_id)
return imdb_id
except Failed:
pass
@@ -234,8 +243,8 @@ class Convert:
def imdb_to_tvdb(self, imdb_id, fail=False):
expired = False
if self.config.Cache:
cache_id, expired = self.config.Cache.query_imdb_to_tvdb_map(imdb_id, imdb=True)
if self.cache:
cache_id, expired = self.cache.query_imdb_to_tvdb_map(imdb_id, imdb=True)
if cache_id and not expired:
return cache_id
try:
@@ -243,8 +252,8 @@ class Convert:
if tmdb_type == "show":
tvdb_id = self.tmdb_to_tvdb(tmdb_id, fail=True)
if tvdb_id:
if self.config.Cache:
self.config.Cache.update_imdb_to_tvdb_map(expired, imdb_id, tvdb_id)
if self.cache:
self.cache.update_imdb_to_tvdb_map(expired, imdb_id, tvdb_id)
return tvdb_id
except Failed:
pass
@@ -258,8 +267,8 @@ class Convert:
cache_id = None
imdb_check = None
expired = None
if self.config.Cache:
cache_id, imdb_check, media_type, expired = self.config.Cache.query_guid_map(guid)
if self.cache:
cache_id, imdb_check, media_type, expired = self.cache.query_guid_map(guid)
if (cache_id or imdb_check) and not expired:
media_id_type = "movie" if "movie" in media_type else "show"
if item_type == "hama" and check_id.startswith("anidb"):
@@ -270,7 +279,7 @@ class Convert:
return media_id_type, cache_id, imdb_check, expired
def scan_guid(self, guid_str):
guid = requests.utils.urlparse(guid_str)
guid = urlparse(guid_str)
return guid.scheme.split(".")[-1], guid.netloc
def get_id(self, item, library):
@@ -288,13 +297,13 @@ class Convert:
try:
for guid_tag in item.guids:
try:
url_parsed = requests.utils.urlparse(guid_tag.id)
url_parsed = urlparse(guid_tag.id)
if url_parsed.scheme == "tvdb": tvdb_id.append(int(url_parsed.netloc))
elif url_parsed.scheme == "imdb": imdb_id.append(url_parsed.netloc)
elif url_parsed.scheme == "tmdb": tmdb_id.append(int(url_parsed.netloc))
except ValueError:
pass
except requests.exceptions.ConnectionError:
except ConnectionError:
library.query(item.refresh)
logger.stacktrace()
raise Failed("No External GUIDs found")
@@ -375,12 +384,12 @@ class Convert:
imdb_id.append(imdb)
def update_cache(cache_ids, id_type, imdb_in, guid_type):
if self.config.Cache:
if self.cache:
cache_ids = ",".join([str(c) for c in cache_ids])
imdb_in = ",".join([str(i) for i in imdb_in]) if imdb_in else None
ids = f"{item.guid:<46} | {id_type} ID: {cache_ids:<7} | IMDb ID: {str(imdb_in):<10}"
logger.info(f" Cache | {'^' if expired else '+'} | {ids} | {item.title}")
self.config.Cache.update_guid_map(item.guid, cache_ids, imdb_in, expired, guid_type)
self.cache.update_guid_map(item.guid, cache_ids, imdb_in, expired, guid_type)
if (tmdb_id or imdb_id) and library.is_movie:
update_cache(tmdb_id, "TMDb", imdb_id, "movie")
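The convert.py hunks swap `requests.utils.urlparse` for a `urlparse` re-exported from `modules.request`. Since that is the standard-library parser, `scan_guid` behaves the same either way, which a self-contained copy of its logic shows:

```python
from urllib.parse import urlparse

def scan_guid(guid_str):
    # Same logic as Convert.scan_guid: the agent name is the last
    # dotted component of the URL scheme, the ID is the netloc.
    guid = urlparse(guid_str)
    return guid.scheme.split(".")[-1], guid.netloc

print(scan_guid("com.plexapp.agents.imdb://tt0133093?lang=en"))  # ('imdb', 'tt0133093')
```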

@@ -156,20 +156,21 @@ class Race:
class Ergast:
def __init__(self, config):
self.config = config
def __init__(self, requests, cache):
self.requests = requests
self.cache = cache
def get_races(self, year, language, ignore_cache=False):
expired = None
if self.config.Cache and not ignore_cache:
race_list, expired = self.config.Cache.query_ergast(year, self.config.Cache.expiration)
if self.cache and not ignore_cache:
race_list, expired = self.cache.query_ergast(year, self.cache.expiration)
if race_list and expired is False:
return [Race(r, language) for r in race_list]
response = self.config.get(f"{base_url}{year}.json")
response = self.requests.get(f"{base_url}{year}.json")
if response.status_code < 400:
races = [Race(r, language) for r in response.json()["MRData"]["RaceTable"]["Races"]]
if self.config.Cache and not ignore_cache:
self.config.Cache.update_ergast(expired, year, races, self.config.Cache.expiration)
if self.cache and not ignore_cache:
self.cache.update_ergast(expired, year, races, self.cache.expiration)
return races
else:
raise Failed(f"Ergast Error: F1 Season: {year} Not found")
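The Ergast change keeps the cache-aside flow this PR uses throughout (Convert, IMDb, Letterboxd): consult the cache, fall back to the network on a miss or expired entry, then write the result back. A generic sketch of that flow, with a `fetch` callable standing in for `self.requests.get` (the cache method names follow `query_ergast`/`update_ergast` above):

```python
def cached_races(cache, fetch, year, ignore_cache=False):
    # 1) Try the cache first (unless explicitly bypassed).
    expired = None
    if cache is not None and not ignore_cache:
        races, expired = cache.query_ergast(year, cache.expiration)
        if races and expired is False:
            return races
    # 2) Cache miss or expired entry: hit the network.
    races = fetch(year)
    # 3) Write the fresh result back for next time.
    if cache is not None and not ignore_cache:
        cache.update_ergast(expired, year, races, cache.expiration)
    return races
```

Passing `expired` back into the update call lets the cache layer decide between an INSERT and an UPDATE of the existing row.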

@@ -10,8 +10,8 @@ kometa_base = f"{base_url}/repos/Kometa-Team/Kometa"
configs_raw_url = f"{raw_url}/Kometa-Team/Community-Configs"
class GitHub:
def __init__(self, config, params):
self.config = config
def __init__(self, requests, params):
self.requests = requests
self.token = params["token"]
logger.secret(self.token)
self.headers = {"Authorization": f"token {self.token}"} if self.token else None
@@ -22,19 +22,19 @@ class GitHub:
self._translation_keys = []
self._translations = {}
def _requests(self, url, err_msg=None, json=True, params=None):
response = self.config.get(url, headers=self.headers, params=params)
def _requests(self, url, err_msg=None, params=None, yaml=False):
if not err_msg:
err_msg = f"URL Not Found: {url}"
if yaml:
return self.requests.get_yaml(url, headers=self.headers, params=params)
response = self.requests.get(url, headers=self.headers, params=params)
if response.status_code >= 400:
raise Failed(f"Git Error: {err_msg}")
if json:
try:
return response.json()
except ValueError:
logger.error(str(response.content))
raise
return response
try:
return response.json()
except ValueError:
logger.error(str(response.content))
raise
def get_top_tree(self, repo):
if not str(repo).startswith("/"):
@@ -77,8 +77,8 @@ class GitHub:
def configs_url(self):
if self._configs_url is None:
self._configs_url = f"{configs_raw_url}/master/"
if self.config.version[1] in self.config_tags and (self.config.latest_version[1] != self.config.version[1] or self.config.branch == "master"):
self._configs_url = f"{configs_raw_url}/v{self.config.version[1]}/"
if self.requests.version[1] in self.config_tags and (self.requests.latest_version[1] != self.requests.version[1] or self.requests.branch == "master"):
self._configs_url = f"{configs_raw_url}/v{self.requests.version[1]}/"
return self._configs_url
@property
@@ -90,8 +90,7 @@ class GitHub:
def translation_yaml(self, translation_key):
if translation_key not in self._translations:
url = f"{self.translation_url}{translation_key}.yml"
yaml = util.YAML(input_data=self._requests(url, json=False).content).data
yaml = self._requests(f"{self.translation_url}{translation_key}.yml", yaml=True).data
output = {"collections": {}, "key_names": {}, "variables": {}}
for k in output:
if k in yaml:

@@ -5,8 +5,8 @@ from modules.util import Failed
logger = util.logger
class Gotify:
def __init__(self, config, params):
self.config = config
def __init__(self, requests, params):
self.requests = requests
self.token = params["token"]
self.url = params["url"].rstrip("/")
logger.secret(self.url)
@@ -19,9 +19,9 @@ class Gotify:
def _request(self, path="message", json=None, post=True):
if post:
response = self.config.post(f"{self.url}/{path}", headers={"X-Gotify-Key": self.token}, json=json)
response = self.requests.post(f"{self.url}/{path}", headers={"X-Gotify-Key": self.token}, json=json)
else:
response = self.config.get(f"{self.url}/{path}")
response = self.requests.get(f"{self.url}/{path}")
try:
response_json = response.json()
except JSONDecodeError as e:

@@ -7,12 +7,12 @@ builders = ["icheckmovies_list", "icheckmovies_list_details"]
base_url = "https://www.icheckmovies.com/lists/"
class ICheckMovies:
def __init__(self, config):
self.config = config
def __init__(self, requests):
self.requests = requests
def _request(self, url, language, xpath):
logger.trace(f"URL: {url}")
return self.config.get_html(url, headers=util.header(language)).xpath(xpath)
return self.requests.get_html(url, language=language).xpath(xpath)
def _parse_list(self, list_url, language):
imdb_urls = self._request(list_url, language, "//a[@class='optionIcon optionIMDB external']/@href")

@@ -1,7 +1,7 @@
import csv, gzip, json, math, os, re, requests, shutil, time
import csv, gzip, json, math, os, re, shutil, time
from modules import util
from modules.request import parse_qs, urlparse
from modules.util import Failed
from urllib.parse import urlparse, parse_qs
logger = util.logger
@@ -94,8 +94,10 @@ graphql_url = "https://api.graphql.imdb.com/"
list_url = f"{base_url}/list/ls"
class IMDb:
def __init__(self, config):
self.config = config
def __init__(self, requests, cache, default_dir):
self.requests = requests
self.cache = cache
self.default_dir = default_dir
self._ratings = None
self._genres = None
self._episode_ratings = None
@@ -108,28 +110,27 @@ class IMDb:
logger.trace(f"URL: {url}")
if params:
logger.trace(f"Params: {params}")
headers = util.header(language) if language else util.header()
response = self.config.get_html(url, headers=headers, params=params)
response = self.requests.get_html(url, params=params, header=True, language=language)
return response.xpath(xpath) if xpath else response
def _graph_request(self, json_data):
return self.config.post_json(graphql_url, headers={"content-type": "application/json"}, json=json_data)
return self.requests.post_json(graphql_url, headers={"content-type": "application/json"}, json=json_data)
@property
def hash(self):
if self._hash is None:
self._hash = self.config.get(hash_url).text.strip()
self._hash = self.requests.get(hash_url).text.strip()
return self._hash
@property
def events_validation(self):
if self._events_validation is None:
self._events_validation = self.config.load_yaml(f"{git_base}/event_validation.yml")
self._events_validation = self.requests.get_yaml(f"{git_base}/event_validation.yml").data
return self._events_validation
def get_event(self, event_id):
if event_id not in self._events:
self._events[event_id] = self.config.load_yaml(f"{git_base}/events/{event_id}.yml")
self._events[event_id] = self.requests.get_yaml(f"{git_base}/events/{event_id}.yml").data
return self._events[event_id]
def validate_imdb_lists(self, err_type, imdb_lists, language):
@@ -213,7 +214,7 @@ class IMDb:
def _watchlist(self, user, language):
imdb_url = f"{base_url}/user/{user}/watchlist"
for text in self._request(imdb_url, language=language , xpath="//div[@class='article']/script/text()")[0].split("\n"):
for text in self._request(imdb_url, language=language, xpath="//div[@class='article']/script/text()")[0].split("\n"):
if text.strip().startswith("IMDbReactInitialState.push"):
jsonline = text.strip()
return [f for f in json.loads(jsonline[jsonline.find('{'):-2])["starbars"]]
@@ -450,8 +451,8 @@ class IMDb:
def keywords(self, imdb_id, language, ignore_cache=False):
imdb_keywords = {}
expired = None
if self.config.Cache and not ignore_cache:
imdb_keywords, expired = self.config.Cache.query_imdb_keywords(imdb_id, self.config.Cache.expiration)
if self.cache and not ignore_cache:
imdb_keywords, expired = self.cache.query_imdb_keywords(imdb_id, self.cache.expiration)
if imdb_keywords and expired is False:
return imdb_keywords
keywords = self._request(f"{base_url}/title/{imdb_id}/keywords", language=language, xpath="//td[@class='soda sodavote']")
@@ -465,15 +466,15 @@ class IMDb:
imdb_keywords[name] = (int(result.group(1)), int(result.group(2)))
else:
imdb_keywords[name] = (0, 0)
if self.config.Cache and not ignore_cache:
self.config.Cache.update_imdb_keywords(expired, imdb_id, imdb_keywords, self.config.Cache.expiration)
if self.cache and not ignore_cache:
self.cache.update_imdb_keywords(expired, imdb_id, imdb_keywords, self.cache.expiration)
return imdb_keywords
def parental_guide(self, imdb_id, ignore_cache=False):
parental_dict = {}
expired = None
if self.config.Cache and not ignore_cache:
parental_dict, expired = self.config.Cache.query_imdb_parental(imdb_id, self.config.Cache.expiration)
if self.cache and not ignore_cache:
parental_dict, expired = self.cache.query_imdb_parental(imdb_id, self.cache.expiration)
if parental_dict and expired is False:
return parental_dict
response = self._request(f"{base_url}/title/{imdb_id}/parentalguide")
@@ -483,8 +484,8 @@ class IMDb:
parental_dict[ptype] = results[0].strip()
else:
raise Failed(f"IMDb Error: No Item Found for IMDb ID: {imdb_id}")
if self.config.Cache and not ignore_cache:
self.config.Cache.update_imdb_parental(expired, imdb_id, parental_dict, self.config.Cache.expiration)
if self.cache and not ignore_cache:
self.cache.update_imdb_parental(expired, imdb_id, parental_dict, self.cache.expiration)
return parental_dict
def _ids_from_chart(self, chart, language):
@@ -542,26 +543,15 @@ class IMDb:
raise Failed(f"IMDb Error: Method {method} not supported")
def _interface(self, interface):
gz = os.path.join(self.config.default_dir, f"title.{interface}.tsv.gz")
tsv = os.path.join(self.config.default_dir, f"title.{interface}.tsv")
gz = os.path.join(self.default_dir, f"title.{interface}.tsv.gz")
tsv = os.path.join(self.default_dir, f"title.{interface}.tsv")
if os.path.exists(gz):
os.remove(gz)
if os.path.exists(tsv):
os.remove(tsv)
with requests.get(f"https://datasets.imdbws.com/title.{interface}.tsv.gz", stream=True) as r:
r.raise_for_status()
total_length = r.headers.get('content-length')
if total_length is not None:
total_length = int(total_length)
dl = 0
with open(gz, "wb") as f:
for chunk in r.iter_content(chunk_size=8192):
dl += len(chunk)
f.write(chunk)
logger.ghost(f"Downloading IMDb Interface: {dl / total_length * 100:6.2f}%")
logger.exorcise()
self.requests.get_stream(f"https://datasets.imdbws.com/title.{interface}.tsv.gz", gz, "IMDb Interface")
with open(tsv, "wb") as f_out:
with gzip.open(gz, "rb") as f_in:
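The inline download loop removed above collapses into `self.requests.get_stream(f"...", gz, "IMDb Interface")`. Its core, a chunked copy with percentage progress, can be sketched independently of the HTTP layer; here `reader` is any object with `.read(n)` (for example the raw stream of a response opened with `stream=True`) and `total` plays the role of the Content-Length header. The real helper's signature is an assumption:

```python
def stream_to_file(reader, filepath, name, total=None, chunk_size=8192):
    # Copy `reader` to `filepath` in fixed-size chunks, logging progress
    # as a percentage when the expected byte count is known.
    done = 0
    with open(filepath, "wb") as f:
        while True:
            chunk = reader.read(chunk_size)
            if not chunk:
                break
            done += len(chunk)
            f.write(chunk)
            if total:
                print(f"Downloading {name}: {done / total * 100:6.2f}%")
    return done
```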

@@ -8,14 +8,15 @@ builders = ["letterboxd_list", "letterboxd_list_details"]
base_url = "https://letterboxd.com"
class Letterboxd:
def __init__(self, config):
self.config = config
def __init__(self, requests, cache):
self.requests = requests
self.cache = cache
def _parse_page(self, list_url, language):
if "ajax" not in list_url:
list_url = list_url.replace("https://letterboxd.com/films", "https://letterboxd.com/films/ajax")
logger.trace(f"URL: {list_url}")
response = self.config.get_html(list_url, headers=util.header(language))
response = self.requests.get_html(list_url, language=language)
letterboxd_ids = response.xpath("//li[contains(@class, 'poster-container') or contains(@class, 'film-detail')]/div/@data-film-id")
items = []
for letterboxd_id in letterboxd_ids:
@ -44,7 +45,7 @@ class Letterboxd:
def _tmdb(self, letterboxd_url, language):
logger.trace(f"URL: {letterboxd_url}")
response = self.config.get_html(letterboxd_url, headers=util.header(language))
response = self.requests.get_html(letterboxd_url, language=language)
ids = response.xpath("//a[@data-track-action='TMDb']/@href")
if len(ids) > 0 and ids[0]:
if "themoviedb.org/movie" in ids[0]:
@ -54,7 +55,7 @@ class Letterboxd:
def get_list_description(self, list_url, language):
logger.trace(f"URL: {list_url}")
response = self.config.get_html(list_url, headers=util.header(language))
response = self.requests.get_html(list_url, language=language)
descriptions = response.xpath("//meta[@property='og:description']/@content")
return descriptions[0] if len(descriptions) > 0 and len(descriptions[0]) > 0 else None
@ -106,16 +107,16 @@ class Letterboxd:
logger.ghost(f"Finding TMDb ID {i}/{total_items}")
tmdb_id = None
expired = None
if self.config.Cache:
tmdb_id, expired = self.config.Cache.query_letterboxd_map(letterboxd_id)
if self.cache:
tmdb_id, expired = self.cache.query_letterboxd_map(letterboxd_id)
if not tmdb_id or expired is not False:
try:
tmdb_id = self._tmdb(f"{base_url}{slug}", language)
except Failed as e:
logger.error(e)
continue
if self.config.Cache:
self.config.Cache.update_letterboxd_map(expired, letterboxd_id, tmdb_id)
if self.cache:
self.cache.update_letterboxd_map(expired, letterboxd_id, tmdb_id)
ids.append((tmdb_id, "tmdb"))
logger.info(f"Processed {total_items} TMDb IDs")
if filtered_ids:

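The Letterboxd changes keep the same cache-aside flow while swapping `self.config.Cache` for the injected `self.cache`. The pattern, reduced to a sketch (`MapCache` is a toy stand-in for Kometa's SQLite-backed cache, which returns an `(id, expired)` pair; error handling around the fetch is omitted):

```python
class MapCache:
    """Toy stand-in for Kometa's SQLite cache: query returns (value, expired)."""
    def __init__(self):
        self._store = {}

    def query(self, key):
        return self._store.get(key, (None, None))

    def update(self, expired, key, value):
        # The real cache also refreshes the expiration date here
        self._store[key] = (value, False)


def resolve_tmdb_id(cache, letterboxd_id, fetch):
    """Cache-aside resolution matching the diff: consult the cache first,
    fall back to the scrape (`fetch`), then write the result back."""
    tmdb_id, expired = None, None
    if cache:
        tmdb_id, expired = cache.query(letterboxd_id)
    if not tmdb_id or expired is not False:
        tmdb_id = fetch(letterboxd_id)  # self._tmdb(...) in the real code
        if cache:
            cache.update(expired, letterboxd_id, tmdb_id)
    return tmdb_id
```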
@ -1,9 +1,10 @@
import os, time
from abc import ABC, abstractmethod
from modules import util, operations
from modules import util
from modules.meta import MetadataFile, OverlayFile
from modules.operations import Operations
from modules.util import Failed, NotScheduled, YAML
from modules.poster import ImageData
from modules.util import Failed, NotScheduled
from PIL import Image
logger = util.logger
@ -274,6 +275,36 @@ class Library(ABC):
def image_update(self, item, image, tmdb=None, title=None, poster=True):
pass
def pick_image(self, title, images, prioritize_assets, download_url_assets, item_dir, is_poster=True, image_name=None):
image_type = "poster" if is_poster else "background"
if image_name is None:
image_name = image_type
if images:
logger.debug(f"{len(images)} {image_type}{'s' if len(images) > 1 else ''} found:")
for i in images:
logger.debug(f"Method: {i} {image_type.capitalize()}: {images[i]}")
if prioritize_assets and "asset_directory" in images:
return images["asset_directory"]
for attr in ["style_data", f"url_{image_type}", f"file_{image_type}", f"tmdb_{image_type}", "tmdb_profile",
"tmdb_list_poster", "tvdb_list_poster", f"tvdb_{image_type}", "asset_directory",
f"pmm_{image_type}",
"tmdb_person", "tmdb_collection_details", "tmdb_actor_details", "tmdb_crew_details",
"tmdb_director_details",
"tmdb_producer_details", "tmdb_writer_details", "tmdb_movie_details", "tmdb_list_details",
"tvdb_list_details", "tvdb_movie_details", "tvdb_show_details", "tmdb_show_details"]:
if attr in images:
if attr in ["style_data", f"url_{image_type}"] and download_url_assets and item_dir:
if "asset_directory" in images:
return images["asset_directory"]
else:
try:
return self.config.Requests.download_image(title, images[attr], item_dir, is_poster=is_poster, filename=image_name)
except Failed as e:
logger.error(e)
if attr in ["asset_directory", f"pmm_{image_type}"]:
return images[attr]
return ImageData(attr, images[attr], is_poster=is_poster, is_url=attr != f"file_{image_type}")
@abstractmethod
def reload(self, item, force=False):
pass
@ -291,7 +322,7 @@ class Library(ABC):
pass
def check_image_for_overlay(self, image_url, image_path, remove=False):
image_path = util.download_image("", image_url, image_path).location
image_path = self.config.Requests.download_image("", image_url, image_path).location
while util.is_locked(image_path):
time.sleep(1)
with Image.open(image_path) as image:
@ -350,7 +381,7 @@ class Library(ABC):
self.report_data[collection][other] = []
self.report_data[collection][other].append(title)
yaml = YAML(self.report_path, start_empty=True)
yaml = self.config.Requests.file_yaml(self.report_path, start_empty=True)
yaml.data = self.report_data
yaml.save()

@ -2,7 +2,7 @@ import re, secrets, time, webbrowser
from datetime import datetime
from json import JSONDecodeError
from modules import util
from modules.util import Failed, TimeoutExpired, YAML
from modules.util import Failed, TimeoutExpired
logger = util.logger
@ -79,8 +79,10 @@ class MyAnimeListObj:
class MyAnimeList:
def __init__(self, config, params):
self.config = config
def __init__(self, requests, cache, read_only, params):
self.requests = requests
self.cache = cache
self.read_only = read_only
self.client_id = params["client_id"]
self.client_secret = params["client_secret"]
self.localhost_url = params["localhost_url"]
@ -175,8 +177,8 @@ class MyAnimeList:
def _save(self, authorization):
if authorization is not None and "access_token" in authorization and authorization["access_token"] and self._check(authorization):
if self.authorization != authorization and not self.config.read_only:
yaml = YAML(self.config_path)
if self.authorization != authorization and not self.read_only:
yaml = self.requests.file_yaml(self.config_path)
yaml.data["mal"]["authorization"] = {
"access_token": authorization["access_token"],
"token_type": authorization["token_type"],
@ -191,13 +193,13 @@ class MyAnimeList:
return False
def _oauth(self, data):
return self.config.post_json(urls["oauth_token"], data=data)
return self.requests.post_json(urls["oauth_token"], data=data)
def _request(self, url, authorization=None):
token = authorization["access_token"] if authorization else self.authorization["access_token"]
logger.trace(f"URL: {url}")
try:
response = self.config.get_json(url, headers={"Authorization": f"Bearer {token}"})
response = self.requests.get_json(url, headers={"Authorization": f"Bearer {token}"})
logger.trace(f"Response: {response}")
if "error" in response: raise Failed(f"MyAnimeList Error: {response['error']}")
else: return response
@ -211,7 +213,7 @@ class MyAnimeList:
if self._delay is not None:
while time_check - self._delay < 1:
time_check = time.time()
data = self.config.get_json(f"{jikan_base_url}{url}", params=params)
data = self.requests.get_json(f"{jikan_base_url}{url}", params=params)
self._delay = time.time()
return data
@ -286,8 +288,8 @@ class MyAnimeList:
def get_anime(self, mal_id):
expired = None
if self.config.Cache:
mal_dict, expired = self.config.Cache.query_mal(mal_id, self.expiration)
if self.cache:
mal_dict, expired = self.cache.query_mal(mal_id, self.expiration)
if mal_dict and expired is False:
return MyAnimeListObj(self, mal_id, mal_dict, cache=True)
try:
@ -297,8 +299,8 @@ class MyAnimeList:
if "data" not in response:
raise Failed(f"MyAnimeList Error: No Anime found for MyAnimeList ID: {mal_id}")
mal = MyAnimeListObj(self, mal_id, response["data"])
if self.config.Cache:
self.config.Cache.update_mal(expired, mal_id, mal, self.expiration)
if self.cache:
self.cache.update_mal(expired, mal_id, mal, self.expiration)
return mal
def get_mal_ids(self, method, data):

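The Jikan helper above spins until a full second has elapsed since the previous request before calling `self.requests.get_json`. The same throttle as a reusable sketch (names are illustrative; unlike the diff's pure busy-wait, this version sleeps between checks):

```python
import time


class RateLimited:
    """Minimal version of the Jikan throttle in the diff: wait until at
    least `interval` seconds have passed since the previous request."""
    def __init__(self, interval=1.0):
        self.interval = interval
        self._last = None

    def call(self, func, *args, **kwargs):
        if self._last is not None:
            while time.time() - self._last < self.interval:
                time.sleep(0.01)  # sleep instead of the diff's busy loop
        try:
            return func(*args, **kwargs)
        finally:
            # Record completion time, mirroring `self._delay = time.time()`
            self._last = time.time()
```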
@ -2,8 +2,8 @@ import time
from datetime import datetime
from json import JSONDecodeError
from modules import util
from modules.request import urlparse
from modules.util import Failed, LimitReached
from urllib.parse import urlparse
logger = util.logger
@ -72,8 +72,9 @@ class MDbObj:
class MDBList:
def __init__(self, config):
self.config = config
def __init__(self, requests, cache):
self.requests = requests
self.cache = cache
self.apikey = None
self.expiration = 60
self.limit = False
@ -108,7 +109,7 @@ class MDBList:
final_params[k] = v
try:
time.sleep(0.2 if self.supporter else 1)
response = self.config.get_json(url, params=final_params)
response = self.requests.get_json(url, params=final_params)
except JSONDecodeError:
raise Failed("MDBList Error: JSON Decoding Failed")
if "response" in response and (response["response"] is False or response["response"] == "False"):
@ -134,14 +135,14 @@ class MDBList:
else:
raise Failed("MDBList Error: Either IMDb ID, TVDb ID, or TMDb ID and TMDb Type Required")
expired = None
if self.config.Cache and not ignore_cache:
mdb_dict, expired = self.config.Cache.query_mdb(key, self.expiration)
if self.cache and not ignore_cache:
mdb_dict, expired = self.cache.query_mdb(key, self.expiration)
if mdb_dict and expired is False:
return MDbObj(mdb_dict)
logger.trace(f"ID: {key}")
mdb = MDbObj(self._request(api_url, params=params))
if self.config.Cache and not ignore_cache:
self.config.Cache.update_mdb(expired, key, mdb, self.expiration)
if self.cache and not ignore_cache:
self.cache.update_mdb(expired, key, mdb, self.expiration)
return mdb
def get_imdb(self, imdb_id):
@ -212,7 +213,7 @@ class MDBList:
url_base = url_base if url_base.endswith("/") else f"{url_base}/"
url_base = url_base if url_base.endswith("json/") else f"{url_base}json/"
try:
response = self.config.get_json(url_base, headers=headers, params=params)
response = self.requests.get_json(url_base, headers=headers, params=params)
if (isinstance(response, dict) and "error" in response) or (isinstance(response, list) and response and "error" in response[0]):
err = response["error"] if isinstance(response, dict) else response[0]["error"]
if err in ["empty", "empty or private list"]:

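The hunk above keeps the two-step normalization that coerces any MDBList list URL into its `json/` endpoint before the request. As a standalone function:

```python
def normalize_mdblist_url(url_base):
    """Reproduce the diff's normalization: guarantee a trailing slash,
    then guarantee the URL ends in 'json/'."""
    url_base = url_base if url_base.endswith("/") else f"{url_base}/"
    return url_base if url_base.endswith("json/") else f"{url_base}json/"
```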
@ -1,7 +1,8 @@
import math, operator, os, re
from datetime import datetime
from modules import plex, ergast, util
from modules.util import Failed, NotScheduled, YAML
from modules.request import quote
from modules.util import Failed, NotScheduled
from plexapi.exceptions import NotFound, BadRequest
logger = util.logger
@ -128,10 +129,7 @@ class DataFile:
dir_path = content_path
if translation:
content_path = f"{content_path}/default.yml"
response = self.config.get(content_path)
if response.status_code >= 400:
raise Failed(f"URL Error: No file found at {content_path}")
yaml = YAML(input_data=response.content, check_empty=True)
yaml = self.config.Requests.get_yaml(content_path, check_empty=True)
else:
if file_type == "Default":
if not overlay and file_path.startswith(("movie/", "chart/", "award/")):
@ -157,7 +155,7 @@ class DataFile:
raise Failed(f"File Error: Default does not exist {file_path}")
else:
raise Failed(f"File Error: File does not exist {content_path}")
yaml = YAML(path=content_path, check_empty=True)
yaml = self.config.Requests.file_yaml(content_path, check_empty=True)
if not translation:
logger.debug(f"File Loaded From: {content_path}")
return yaml.data
@ -169,8 +167,11 @@ class DataFile:
key_names = {}
variables = {k: {"default": v[lib_type]} for k, v in yaml.data["variables"].items()}
def add_translation(yaml_path, yaml_key, data=None):
yaml_content = YAML(input_data=data, path=yaml_path if data is None else None, check_empty=True)
def add_translation(yaml_path, yaml_key, url=False):
if url:
yaml_content = self.config.Requests.get_yaml(yaml_path, check_empty=True)
else:
yaml_content = self.config.Requests.file_yaml(yaml_path, check_empty=True)
if "variables" in yaml_content.data and yaml_content.data["variables"]:
for var_key, var_value in yaml_content.data["variables"].items():
if lib_type in var_value:
@ -196,10 +197,9 @@ class DataFile:
if file_type in ["URL", "Git", "Repo"]:
if "languages" in yaml.data and isinstance(yaml.data["languages"], list):
for language in yaml.data["languages"]:
response = self.config.get(f"{dir_path}/{language}.yml")
if response.status_code < 400:
add_translation(f"{dir_path}/{language}.yml", language, data=response.content)
else:
try:
add_translation(f"{dir_path}/{language}.yml", language, url=True)
except Failed:
logger.error(f"URL Error: Language file not found at {dir_path}/{language}.yml")
else:
for file in os.listdir(dir_path):
@ -343,7 +343,7 @@ class DataFile:
if "<<" in str(d_value):
default[f"{final_key}_encoded"] = re.sub(r'<<(.+)>>', r'<<\1_encoded>>', d_value)
else:
default[f"{final_key}_encoded"] = util.quote(d_value)
default[f"{final_key}_encoded"] = quote(d_value)
if "optional" in template:
if template["optional"]:
@ -434,7 +434,7 @@ class DataFile:
condition_found = True
if condition["value"] is not None:
variables[final_key] = condition["value"]
variables[f"{final_key}_encoded"] = util.quote(condition["value"])
variables[f"{final_key}_encoded"] = quote(condition["value"])
else:
optional.append(final_key)
break
@ -442,7 +442,7 @@ class DataFile:
if "default" in con_value:
logger.trace(f'Conditional Variable: {final_key} defaults to "{con_value["default"]}"')
variables[final_key] = con_value["default"]
variables[f"{final_key}_encoded"] = util.quote(con_value["default"])
variables[f"{final_key}_encoded"] = quote(con_value["default"])
else:
logger.trace(f"Conditional Variable: {final_key} added as optional variable")
optional.append(str(final_key))
@ -465,7 +465,7 @@ class DataFile:
if not sort_mapping and variables["mapping_name"].startswith(f"{op} "):
sort_mapping = f"{variables['mapping_name'][len(op):].strip()}, {op}"
if sort_name and sort_mapping:
break
else:
raise Failed(f"{self.data_type} Error: template sub-attribute move_prefix is blank")
variables[f"{self.data_type.lower()}_sort"] = sort_name if sort_name else variables[name_var]
@ -482,7 +482,7 @@ class DataFile:
if key not in variables:
variables[key] = value
for key, value in variables.copy().items():
variables[f"{key}_encoded"] = util.quote(value)
variables[f"{key}_encoded"] = quote(value)
default = {k: v for k, v in default.items() if k not in variables}
og_optional = optional
@ -1374,7 +1374,7 @@ class MetadataFile(DataFile):
if sub:
sub_str = ""
for folder in sub.split("/"):
folder_encode = util.quote(folder)
folder_encode = quote(folder)
sub_str += f"{folder_encode}/"
if folder not in top_tree:
raise Failed(f"Image Set Error: Subfolder {folder} Not Found at https://github.com{repo}tree/master/{sub_str}")
@ -1385,21 +1385,21 @@ class MetadataFile(DataFile):
return f"https://raw.githubusercontent.com{repo}master/{sub}{u}"
def from_repo(u):
return self.config.get(repo_url(u)).content.decode().strip()
return self.config.Requests.get(repo_url(u)).content.decode().strip()
def check_for_definition(check_key, check_tree, is_poster=True, git_name=None):
attr_name = "poster" if is_poster and (git_name is None or "background" not in git_name) else "background"
if (git_name and git_name.lower().endswith(".tpdb")) or (not git_name and f"{attr_name}.tpdb" in check_tree):
return f"tpdb_{attr_name}", from_repo(f"{check_key}/{util.quote(git_name) if git_name else f'{attr_name}.tpdb'}")
return f"tpdb_{attr_name}", from_repo(f"{check_key}/{quote(git_name) if git_name else f'{attr_name}.tpdb'}")
elif (git_name and git_name.lower().endswith(".url")) or (not git_name and f"{attr_name}.url" in check_tree):
return f"url_{attr_name}", from_repo(f"{check_key}/{util.quote(git_name) if git_name else f'{attr_name}.url'}")
return f"url_{attr_name}", from_repo(f"{check_key}/{quote(git_name) if git_name else f'{attr_name}.url'}")
elif git_name:
if git_name in check_tree:
return f"url_{attr_name}", repo_url(f"{check_key}/{util.quote(git_name)}")
return f"url_{attr_name}", repo_url(f"{check_key}/{quote(git_name)}")
else:
for ct in check_tree:
if ct.lower().startswith(attr_name):
return f"url_{attr_name}", repo_url(f"{check_key}/{util.quote(ct)}")
return f"url_{attr_name}", repo_url(f"{check_key}/{quote(ct)}")
return None, None
def init_set(check_key, check_tree):
@ -1417,14 +1417,14 @@ class MetadataFile(DataFile):
if k not in top_tree:
logger.info(f"Image Set Warning: {k} not found at https://github.com{repo}tree/master/{sub}")
continue
k_encoded = util.quote(k)
k_encoded = quote(k)
item_folder = self.config.GitHub.get_tree(top_tree[k]["url"])
item_data = init_set(k_encoded, item_folder)
seasons = {}
for ik in item_folder:
match = re.search(r"(\d+)", ik)
if match:
season_path = f"{k_encoded}/{util.quote(ik)}"
season_path = f"{k_encoded}/{quote(ik)}"
season_num = int(match.group(1))
season_folder = self.config.GitHub.get_tree(item_folder[ik]["url"])
season_data = init_set(season_path, season_folder)
@ -1770,7 +1770,6 @@ class MetadataFile(DataFile):
nonlocal updated
if updated:
try:
#current_item.saveEdits()
logger.info(f"{description} Metadata Update Successful")
except BadRequest:
logger.error(f"{description} Metadata Update Failed")
@ -1816,7 +1815,6 @@ class MetadataFile(DataFile):
summary = tmdb_item.overview
genres = tmdb_item.genres
#item.batchEdits()
add_edit("title", item, meta, methods)
add_edit("sort_title", item, meta, methods, key="titleSort")
if self.library.is_movie:
@ -1926,7 +1924,6 @@ class MetadataFile(DataFile):
season_methods = {sm.lower(): sm for sm in season_dict}
season_style_data = None
if update_seasons:
#season.batchEdits()
add_edit("title", season, season_dict, season_methods)
add_edit("summary", season, season_dict, season_methods)
add_edit("user_rating", season, season_dict, season_methods, key="userRating", var_type="float")
@ -1993,7 +1990,6 @@ class MetadataFile(DataFile):
logger.error(f"{self.type_str} Error: Episode {episode_id} in Season {season_id} not found")
continue
episode_methods = {em.lower(): em for em in episode_dict}
#episode.batchEdits()
add_edit("title", episode, episode_dict, episode_methods)
add_edit("sort_title", episode, episode_dict, episode_methods, key="titleSort")
add_edit("content_rating", episode, episode_dict, episode_methods, key="contentRating")
@ -2040,7 +2036,6 @@ class MetadataFile(DataFile):
logger.error(f"{self.type_str} Error: episode {episode_id} of season {season_id} not found")
continue
episode_methods = {em.lower(): em for em in episode_dict}
#episode.batchEdits()
add_edit("title", episode, episode_dict, episode_methods)
add_edit("sort_title", episode, episode_dict, episode_methods, key="titleSort")
add_edit("content_rating", episode, episode_dict, episode_methods, key="contentRating")
@ -2081,7 +2076,6 @@ class MetadataFile(DataFile):
else:
logger.error(f"{self.type_str} Error: Album: {album_name} not found")
continue
#album.batchEdits()
add_edit("title", album, album_dict, album_methods, value=title)
add_edit("sort_title", album, album_dict, album_methods, key="titleSort")
add_edit("critic_rating", album, album_dict, album_methods, key="rating", var_type="float")
@ -2126,7 +2120,6 @@ class MetadataFile(DataFile):
logger.error(f"{self.type_str} Error: Track: {track_num} not found")
continue
#track.batchEdits()
add_edit("title", track, track_dict, track_methods, value=title)
add_edit("user_rating", track, track_dict, track_methods, key="userRating", var_type="float")
add_edit("track", track, track_dict, track_methods, key="index", var_type="int")
@ -2187,7 +2180,6 @@ class MetadataFile(DataFile):
race = race_lookup[season.seasonNumber]
title = race.format_name(round_prefix, shorten_gp)
updated = False
#season.batchEdits()
add_edit("title", season, value=title)
finish_edit(season, f"Season: {title}")
_, _, ups = self.library.item_images(season, {}, {}, asset_location=asset_location, title=title,
@ -2198,7 +2190,6 @@ class MetadataFile(DataFile):
for episode in season.episodes():
if len(episode.locations) > 0:
ep_title, session_date = race.session_info(episode.locations[0], sprint_weekend)
#episode.batchEdits()
add_edit("title", episode, value=ep_title)
add_edit("originally_available", episode, key="originallyAvailableAt", var_type="date", value=session_date)
finish_edit(episode, f"Season: {season.seasonNumber} Episode: {episode.episodeNumber}")

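Throughout meta.py the only change is importing `quote` from `modules.request` instead of `util`; each template variable still gets a URL-encoded `_encoded` twin. A sketch of that loop, assuming the re-exported `quote` behaves like `urllib.parse.quote`:

```python
from urllib.parse import quote  # assumed equivalent of modules.request's quote


def encoded_variables(variables):
    """Add an `_encoded` twin for every template variable, as the
    `for key, value in variables.copy().items()` loop in the diff does."""
    out = dict(variables)
    for key, value in variables.items():
        # str() guard is ours; the real quote wrapper may handle non-strings
        out[f"{key}_encoded"] = quote(str(value))
    return out
```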
@ -1,8 +1,8 @@
from datetime import datetime
from modules import util
from modules.request import parse_qs, urlparse
from modules.util import Failed
from num2words import num2words
from urllib.parse import urlparse, parse_qs
logger = util.logger
@ -125,8 +125,9 @@ base_url = "https://www.boxofficemojo.com"
class BoxOfficeMojo:
def __init__(self, config):
self.config = config
def __init__(self, requests, cache):
self.requests = requests
self.cache = cache
self._never_options = None
self._intl_options = None
self._year_options = None
@ -161,7 +162,7 @@ class BoxOfficeMojo:
logger.trace(f"URL: {base_url}{url}")
if params:
logger.trace(f"Params: {params}")
response = self.config.get_html(f"{base_url}{url}", headers=util.header(), params=params)
response = self.requests.get_html(f"{base_url}{url}", header=True, params=params)
return response.xpath(xpath) if xpath else response
def _parse_list(self, url, params, limit):
@ -258,16 +259,16 @@ class BoxOfficeMojo:
else:
imdb_id = None
expired = None
if self.config.Cache:
imdb_id, expired = self.config.Cache.query_letterboxd_map(item)
if self.cache:
imdb_id, expired = self.cache.query_letterboxd_map(item)
if not imdb_id or expired is not False:
try:
imdb_id = self._imdb(item)
except Failed as e:
logger.error(e)
continue
if self.config.Cache:
self.config.Cache.update_letterboxd_map(expired, item, imdb_id)
if self.cache:
self.cache.update_letterboxd_map(expired, item, imdb_id)
ids.append((imdb_id, "imdb"))
logger.info(f"Processed {total_items} IMDb IDs")
return ids

@ -9,8 +9,8 @@ base_url = "https://notifiarr.com/api/v1/"
class Notifiarr:
def __init__(self, config, params):
self.config = config
def __init__(self, requests, params):
self.requests = requests
self.apikey = params["apikey"]
self.header = {"X-API-Key": self.apikey}
logger.secret(self.apikey)
@ -24,7 +24,7 @@ class Notifiarr:
@retry(stop_max_attempt_number=6, wait_fixed=10000, retry_on_exception=util.retry_if_not_failed)
def request(self, json=None, path="notification", params=None):
response = self.config.get(f"{base_url}{path}/pmm/", json=json, headers=self.header, params=params)
response = self.requests.get(f"{base_url}{path}/pmm/", json=json, headers=self.header, params=params)
try:
response_json = response.json()
except JSONDecodeError as e:

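`Notifiarr.request` keeps its `@retry(stop_max_attempt_number=6, wait_fixed=10000, retry_on_exception=util.retry_if_not_failed)` decorator from the `retrying` package. A decorator-free sketch of the same policy, where `no_retry` plays the role of `retry_if_not_failed` (Kometa's predicate that lets its `Failed` exception escape immediately):

```python
import time


def retry_call(func, attempts=6, wait=10.0, retry_on=Exception, no_retry=()):
    """Retry `func` up to `attempts` times with a fixed `wait` between
    tries; exceptions in `no_retry` propagate without retrying."""
    for attempt in range(1, attempts + 1):
        try:
            return func()
        except no_retry:
            raise  # e.g. Kometa's Failed: not worth retrying
        except retry_on:
            if attempt == attempts:
                raise
            time.sleep(wait)
```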
@ -43,8 +43,9 @@ class OMDbObj:
class OMDb:
def __init__(self, config, params):
self.config = config
def __init__(self, requests, cache, params):
self.requests = requests
self.cache = cache
self.apikey = params["apikey"]
self.expiration = params["expiration"]
self.limit = False
@ -53,16 +54,16 @@ class OMDb:
def get_omdb(self, imdb_id, ignore_cache=False):
expired = None
if self.config.Cache and not ignore_cache:
omdb_dict, expired = self.config.Cache.query_omdb(imdb_id, self.expiration)
if self.cache and not ignore_cache:
omdb_dict, expired = self.cache.query_omdb(imdb_id, self.expiration)
if omdb_dict and expired is False:
return OMDbObj(imdb_id, omdb_dict)
logger.trace(f"IMDb ID: {imdb_id}")
response = self.config.get(base_url, params={"i": imdb_id, "apikey": self.apikey})
response = self.requests.get(base_url, params={"i": imdb_id, "apikey": self.apikey})
if response.status_code < 400:
omdb = OMDbObj(imdb_id, response.json())
if self.config.Cache and not ignore_cache:
self.config.Cache.update_omdb(expired, omdb, self.expiration)
if self.cache and not ignore_cache:
self.cache.update_omdb(expired, omdb, self.expiration)
return omdb
else:
try:

@ -1,7 +1,7 @@
import os, re
from datetime import datetime, timedelta, timezone
from modules import plex, util, anidb
from modules.util import Failed, LimitReached, YAML
from modules.util import Failed, LimitReached
from plexapi.exceptions import NotFound
from plexapi.video import Movie, Show
@ -296,10 +296,11 @@ class Operations:
mal_id = self.library.reverse_mal[item.ratingKey]
elif not anidb_id:
logger.warning(f"Convert Warning: No AniDB ID to Convert to MyAnimeList ID for Guid: {item.guid}")
elif anidb_id not in self.config.Convert._anidb_to_mal:
logger.warning(f"Convert Warning: No MyAnimeList Found for AniDB ID: {anidb_id} of Guid: {item.guid}")
else:
mal_id = self.config.Convert._anidb_to_mal[anidb_id]
try:
mal_id = self.config.Convert.anidb_to_mal(anidb_id)
except Failed as err:
logger.warning(f"{err} of Guid: {item.guid}")
if mal_id:
try:
_mal_obj = self.config.MyAnimeList.get_anime(mal_id)
@ -1134,7 +1135,7 @@ class Operations:
yaml = None
if os.path.exists(self.library.metadata_backup["path"]):
try:
yaml = YAML(path=self.library.metadata_backup["path"])
yaml = self.config.Requests.file_yaml(self.library.metadata_backup["path"])
except Failed as e:
logger.error(e)
filename, file_extension = os.path.splitext(self.library.metadata_backup["path"])
@ -1144,7 +1145,7 @@ class Operations:
os.rename(self.library.metadata_backup["path"], f"{filename}{i}{file_extension}")
logger.error(f"Backup failed to load saving copy to {filename}{i}{file_extension}")
if not yaml:
yaml = YAML(path=self.library.metadata_backup["path"], create=True)
yaml = self.config.Requests.file_yaml(self.library.metadata_backup["path"], create=True)
if "metadata" not in yaml.data or not isinstance(yaml.data["metadata"], dict):
yaml.data["metadata"] = {}
special_names = {}

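When the metadata backup fails to parse, the surrounding code (partly elided by the hunk boundary) renames the bad file aside to the first free `<name><i><ext>` path before recreating it. That fallback as a small helper:

```python
import os


def unique_backup_path(path):
    """Pick the first unused '<name><i><ext>' sibling of `path`, matching
    the loop that moves an unreadable metadata backup out of the way."""
    filename, file_extension = os.path.splitext(path)
    i = 1
    while os.path.exists(f"{filename}{i}{file_extension}"):
        i += 1
    return f"{filename}{i}{file_extension}"
```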
@ -71,6 +71,8 @@ def get_canvas_size(item):
class Overlay:
def __init__(self, config, library, overlay_file, original_mapping_name, overlay_data, suppress, level):
self.config = config
self.requests = self.config.Requests
self.cache = self.config.Cache
self.library = library
self.overlay_file = overlay_file
self.original_mapping_name = original_mapping_name
@ -159,7 +161,7 @@ class Overlay:
raise Failed(f"Overlay Error: horizontal_offset and vertical_offset are required when using a backdrop")
def get_and_save_image(image_url):
response = self.config.get(image_url)
response = self.requests.get(image_url)
if response.status_code == 404:
raise Failed(f"Overlay Error: Overlay Image not found at: {image_url}")
if response.status_code >= 400:
@ -224,14 +226,14 @@ class Overlay:
self.addon_offset = util.parse("Overlay", "addon_offset", self.data["addon_offset"], datatype="int", parent="overlay") if "addon_offset" in self.data else 0
self.addon_position = util.parse("Overlay", "addon_position", self.data["addon_position"], parent="overlay", options=["left", "right", "top", "bottom"]) if "addon_position" in self.data else "left"
image_compare = None
if self.config.Cache:
_, image_compare, _ = self.config.Cache.query_image_map(self.mapping_name, f"{self.library.image_table_name}_overlays")
if self.cache:
_, image_compare, _ = self.cache.query_image_map(self.mapping_name, f"{self.library.image_table_name}_overlays")
overlay_size = os.stat(self.path).st_size
self.updated = not image_compare or str(overlay_size) != str(image_compare)
try:
self.image = Image.open(self.path).convert("RGBA")
if self.config.Cache:
self.config.Cache.update_image_map(self.mapping_name, f"{self.library.image_table_name}_overlays", self.name, overlay_size)
if self.cache:
self.cache.update_image_map(self.mapping_name, f"{self.library.image_table_name}_overlays", self.name, overlay_size)
except OSError:
raise Failed(f"Overlay Error: overlay image {self.path} failed to load")
match = re.search("\\(([^)]+)\\)", self.name)
@ -308,16 +310,16 @@ class Overlay:
if not os.path.exists(self.path):
raise Failed(f"Overlay Error: Overlay Image not found at: {self.path}")
image_compare = None
if self.config.Cache:
_, image_compare, _ = self.config.Cache.query_image_map(self.mapping_name, f"{self.library.image_table_name}_overlays")
if self.cache:
_, image_compare, _ = self.cache.query_image_map(self.mapping_name, f"{self.library.image_table_name}_overlays")
overlay_size = os.stat(self.path).st_size
self.updated = not image_compare or str(overlay_size) != str(image_compare)
try:
self.image = Image.open(self.path).convert("RGBA")
if self.has_coordinates():
self.backdrop_box = self.image.size
if self.config.Cache:
self.config.Cache.update_image_map(self.mapping_name, f"{self.library.image_table_name}_overlays", self.mapping_name, overlay_size)
if self.cache:
self.cache.update_image_map(self.mapping_name, f"{self.library.image_table_name}_overlays", self.mapping_name, overlay_size)
except OSError:
raise Failed(f"Overlay Error: overlay image {self.path} failed to load")

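Both `Overlay` constructors above detect changed overlay art by comparing the image's byte size on disk with the size stored in the cache, then writing the new size back. Reduced to a sketch with a plain dict standing in for the cache's image map (the real cache keys on mapping name plus a per-library table name):

```python
import os


def overlay_changed(cache, key, path):
    """Return True when the overlay file's size differs from the cached
    size (or nothing is cached), updating the cache as the diff does."""
    cached = cache.get(key) if cache else None
    size = os.stat(path).st_size
    updated = not cached or str(size) != str(cached)
    if cache is not None:
        cache[key] = size
    return updated
```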
@ -13,6 +13,7 @@ logger = util.logger
class Overlays:
def __init__(self, config, library):
self.config = config
self.cache = self.config.Cache
self.library = library
self.overlays = []
@ -88,8 +89,8 @@ class Overlays:
image_compare = None
overlay_compare = None
poster = None
if self.config.Cache:
image, image_compare, overlay_compare = self.config.Cache.query_image_map(item.ratingKey, f"{self.library.image_table_name}_overlays")
if self.cache:
image, image_compare, overlay_compare = self.cache.query_image_map(item.ratingKey, f"{self.library.image_table_name}_overlays")
self.library.reload(item, force=True)
overlay_compare = [] if overlay_compare is None else util.get_list(overlay_compare, split="|")
@ -126,10 +127,10 @@ class Overlays:
if compare_name not in overlay_compare or properties[original_name].updated:
overlay_change = f"{compare_name} not in {overlay_compare} or {properties[original_name].updated}"
if self.config.Cache:
if self.cache:
for over_name in over_names:
if properties[over_name].name.startswith("text"):
for cache_key, cache_value in self.config.Cache.query_overlay_special_text(item.ratingKey).items():
for cache_key, cache_value in self.cache.query_overlay_special_text(item.ratingKey).items():
actual = plex.attribute_translation[cache_key] if cache_key in plex.attribute_translation else cache_key
if not hasattr(item, actual):
continue
@ -369,10 +370,11 @@ class Overlays:
mal_id = self.library.reverse_mal[item.ratingKey]
elif not anidb_id:
raise Failed(f"Convert Warning: No AniDB ID to Convert to MyAnimeList ID for Guid: {item.guid}")
elif anidb_id not in self.config.Convert._anidb_to_mal:
raise Failed(f"Convert Warning: No MyAnimeList Found for AniDB ID: {anidb_id} of Guid: {item.guid}")
else:
mal_id = self.config.Convert._anidb_to_mal[anidb_id]
try:
mal_id = self.config.Convert.anidb_to_mal(anidb_id)
except Failed as errr:
raise Failed(f"{errr} of Guid: {item.guid}")
if mal_id:
found_rating = self.config.MyAnimeList.get_anime(mal_id).score
except Failed as err:
@ -394,9 +396,9 @@ class Overlays:
actual_value = getattr(item, actual_attr)
if format_var == "versions":
actual_value = len(actual_value)
if self.config.Cache:
if self.cache:
cache_store = actual_value.strftime("%Y-%m-%d") if format_var in overlay.date_vars else actual_value
self.config.Cache.update_overlay_special_text(item.ratingKey, format_var, cache_store)
self.cache.update_overlay_special_text(item.ratingKey, format_var, cache_store)
sub_value = None
if format_var == "originally_available":
if mod:
@ -517,8 +519,8 @@ class Overlays:
else:
logger.info(" Overlay Update Not Needed")
if self.config.Cache and poster_compare:
self.config.Cache.update_image_map(item.ratingKey, f"{self.library.image_table_name}_overlays", item.thumb, poster_compare, overlay='|'.join(compare_names))
if self.cache and poster_compare:
self.cache.update_image_map(item.ratingKey, f"{self.library.image_table_name}_overlays", item.thumb, poster_compare, overlay='|'.join(compare_names))
except Failed as e:
logger.error(f" {e}\n Overlays Attempted on {item_title}: {', '.join(over_names)}")
except Exception as e:

@ -1,8 +1,10 @@
import os, plexapi, re, requests, time
import os, plexapi, re, time
from datetime import datetime, timedelta
from modules import builder, util
from modules.library import Library
from modules.util import Failed, ImageData
from modules.poster import ImageData
from modules.request import parse_qs, quote_plus, urlparse
from modules.util import Failed
from PIL import Image
from plexapi import utils
from plexapi.audio import Artist, Track, Album
@ -12,8 +14,8 @@ from plexapi.library import Role, FilterChoice
from plexapi.playlist import Playlist
from plexapi.server import PlexServer
from plexapi.video import Movie, Show, Season, Episode
from requests.exceptions import ConnectionError, ConnectTimeout
from retrying import retry
from urllib import parse
from xml.etree.ElementTree import ParseError
logger = util.logger
@ -445,16 +447,13 @@ class Plex(Library):
super().__init__(config, params)
self.plex = params["plex"]
self.url = self.plex["url"]
plex_session = self.config.session
if self.plex["verify_ssl"] is False and self.config.general["verify_ssl"] is True:
plex_session = self.config.Requests.session
if self.plex["verify_ssl"] is False and self.config.Requests.global_ssl is True:
logger.debug("Overriding verify_ssl to False for Plex connection")
plex_session = requests.Session()
plex_session.verify = False
import urllib3
urllib3.disable_warnings(urllib3.exceptions.InsecureRequestWarning)
if self.plex["verify_ssl"] is True and self.config.general["verify_ssl"] is False:
plex_session = self.config.Requests.create_session(verify_ssl=False)
if self.plex["verify_ssl"] is True and self.config.Requests.global_ssl is False:
logger.debug("Overriding verify_ssl to True for Plex connection")
plex_session = requests.Session()
plex_session = self.config.Requests.create_session()
self.token = self.plex["token"]
self.timeout = self.plex["timeout"]
logger.secret(self.url)
@ -493,13 +492,13 @@ class Plex(Library):
except Unauthorized:
logger.info(f"Plex Error: Plex connection attempt returned 'Unauthorized'")
raise Failed("Plex Error: Plex token is invalid")
except requests.exceptions.ConnectTimeout:
except ConnectTimeout:
raise Failed(f"Plex Error: Plex did not respond within the {self.timeout}-second timeout.")
except ValueError as e:
logger.info(f"Plex Error: Plex connection attempt returned 'ValueError'")
logger.stacktrace()
raise Failed(f"Plex Error: {e}")
except (requests.exceptions.ConnectionError, ParseError):
except (ConnectionError, ParseError):
logger.info(f"Plex Error: Plex connection attempt returned 'ConnectionError' or 'ParseError'")
logger.stacktrace()
raise Failed("Plex Error: Plex URL is probably invalid")
@ -630,7 +629,7 @@ class Plex(Library):
def upload_theme(self, collection, url=None, filepath=None):
key = f"/library/metadata/{collection.ratingKey}/themes"
if url:
self.PlexServer.query(f"{key}?url={parse.quote_plus(url)}", method=self.PlexServer._session.post)
self.PlexServer.query(f"{key}?url={quote_plus(url)}", method=self.PlexServer._session.post)
elif filepath:
self.PlexServer.query(key, method=self.PlexServer._session.post, data=open(filepath, 'rb').read())
@ -745,7 +744,7 @@ class Plex(Library):
raise Failed("Overlay Error: No Poster found to reset")
return image_url
def _reload(self, item):
def item_reload(self, item):
item.reload(checkFiles=False, includeAllConcerts=False, includeBandwidths=False, includeChapters=False,
includeChildren=False, includeConcerts=False, includeExternalMedia=False, includeExtras=False,
includeFields=False, includeGeolocation=False, includeLoudnessRamps=False, includeMarkers=False,
@ -774,7 +773,7 @@ class Plex(Library):
item, is_full = self.cached_items[item.ratingKey]
try:
if not is_full or force:
self._reload(item)
self.item_reload(item)
self.cached_items[item.ratingKey] = (item, True)
except (BadRequest, NotFound) as e:
logger.stacktrace()
@ -911,7 +910,7 @@ class Plex(Library):
if playlist.title not in playlists:
playlists[playlist.title] = []
playlists[playlist.title].append(username)
except requests.exceptions.ConnectionError:
except ConnectionError:
pass
scan_user(self.PlexServer, self.account.title)
for user in self.users:
@ -990,7 +989,7 @@ class Plex(Library):
self._query(f"/library/collections{utils.joinArgs(args)}", post=True)
def get_smart_filter_from_uri(self, uri):
smart_filter = parse.parse_qs(parse.urlparse(uri.replace("/#!/", "/")).query)["key"][0] # noqa
smart_filter = parse_qs(urlparse(uri.replace("/#!/", "/")).query)["key"][0] # noqa
args = smart_filter[smart_filter.index("?"):]
return self.build_smart_filter(args), int(args[args.index("type=") + 5:args.index("type=") + 6])
@ -1037,7 +1036,7 @@ class Plex(Library):
for playlist in self.PlexServer.switchUser(user).playlists():
if isinstance(playlist, Playlist) and playlist.title == playlist_title:
return playlist
except requests.exceptions.ConnectionError:
except ConnectionError:
pass
raise Failed(f"Plex Error: Playlist {playlist_title} not found")
@ -1090,7 +1089,7 @@ class Plex(Library):
try:
fin = False
for guid_tag in item.guids:
url_parsed = requests.utils.urlparse(guid_tag.id)
url_parsed = urlparse(guid_tag.id)
if url_parsed.scheme == "tvdb":
if isinstance(item, Show):
ids.append((int(url_parsed.netloc), "tvdb"))
@ -1106,7 +1105,7 @@ class Plex(Library):
break
if fin:
continue
except requests.exceptions.ConnectionError:
except ConnectionError:
continue
if imdb_id and not tmdb_id:
for imdb in imdb_id:
@ -1329,8 +1328,8 @@ class Plex(Library):
asset_location = item_dir
except Failed as e:
logger.warning(e)
poster = util.pick_image(title, posters, self.prioritize_assets, self.download_url_assets, asset_location, image_name=image_name)
background = util.pick_image(title, backgrounds, self.prioritize_assets, self.download_url_assets, asset_location,
poster = self.pick_image(title, posters, self.prioritize_assets, self.download_url_assets, asset_location, image_name=image_name)
background = self.pick_image(title, backgrounds, self.prioritize_assets, self.download_url_assets, asset_location,
is_poster=False, image_name=f"{image_name}_background" if image_name else image_name)
updated = False
if poster or background:

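The guid matching in the `plex.py` hunk above works because Plex agent guids like `tvdb://121361` parse cleanly with the standard library: the agent name lands in `scheme` and the ID in `netloc`, which is why the switch from `requests.utils.urlparse` to the `modules.request` re-export is behavior-preserving. A quick illustration (the guid value is a made-up example):

```python
from urllib.parse import urlparse

# Plex agent guids are scheme://id, so urlparse splits them directly
parsed = urlparse("tvdb://121361")
print(parsed.scheme, parsed.netloc)  # tvdb 121361
```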
@ -1,10 +1,24 @@
import os, time
from modules import util
from modules.util import Failed, ImageData
from modules.util import Failed
from PIL import Image, ImageFont, ImageDraw, ImageColor
logger = util.logger
class ImageData:
def __init__(self, attribute, location, prefix="", is_poster=True, is_url=True, compare=None):
self.attribute = attribute
self.location = location
self.prefix = prefix
self.is_poster = is_poster
self.is_url = is_url
self.compare = compare if compare else location if is_url else os.stat(location).st_size
self.message = f"{prefix}{'poster' if is_poster else 'background'} to [{'URL' if is_url else 'File'}] {location}"
def __str__(self):
return str(self.__dict__)
class ImageBase:
def __init__(self, config, data):
self.config = config
@ -48,10 +62,10 @@ class ImageBase:
else:
return None, None
response = self.config.get(url)
response = self.config.Requests.get(url)
if response.status_code >= 400:
raise Failed(f"Poster Error: {attr} not found at: {url}")
if "Content-Type" not in response.headers or response.headers["Content-Type"] not in util.image_content_types:
if "Content-Type" not in response.headers or response.headers["Content-Type"] not in self.config.Requests.image_content_types:
raise Failed(f"Poster Error: {attr} not a png, jpg, or webp: {url}")
if response.headers["Content-Type"] == "image/jpeg":
ext = "jpg"

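The `compare` fallback in the relocated `ImageData` class above picks an explicit value first, then the URL string for URL images, then the on-disk size for local files. A minimal sketch of just that field, with the other attributes omitted (the helper name and sample URL are illustrative, not from the PR):

```python
import os, tempfile

def compare_value(location, is_url=True, compare=None):
    # mirrors ImageData.compare: explicit value wins, then the URL string,
    # then the file size on disk for local images
    return compare if compare else location if is_url else os.stat(location).st_size

with tempfile.NamedTemporaryFile(delete=False) as tmp:
    tmp.write(b"abc")

print(compare_value("https://example.com/p.png"))  # the URL itself
print(compare_value(tmp.name, is_url=False))       # 3 (file size in bytes)
```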
@ -13,15 +13,16 @@ availability_descriptions = {"announced": "For Announced", "cinemas": "For In Ci
monitor_descriptions = {"movie": "Monitor Only the Movie", "collection": "Monitor the Movie and Collection", "none": "Do not Monitor"}
class Radarr:
def __init__(self, config, library, params):
self.config = config
def __init__(self, requests, cache, library, params):
self.requests = requests
self.cache = cache
self.library = library
self.url = params["url"]
self.token = params["token"]
logger.secret(self.url)
logger.secret(self.token)
try:
self.api = RadarrAPI(self.url, self.token, session=self.config.session)
self.api = RadarrAPI(self.url, self.token, session=self.requests.session)
self.api.respect_list_exclusions_when_adding()
self.api._validate_add_options(params["root_folder_path"], params["quality_profile"]) # noqa
self.profiles = self.api.quality_profile()
@ -102,8 +103,8 @@ class Radarr:
tmdb_id = item[0] if isinstance(item, tuple) else item
logger.ghost(f"Loading TMDb ID {i}/{len(tmdb_ids)} ({tmdb_id})")
try:
if self.config.Cache and not ignore_cache:
_id = self.config.Cache.query_radarr_adds(tmdb_id, self.library.original_mapping_name)
if self.cache and not ignore_cache:
_id = self.cache.query_radarr_adds(tmdb_id, self.library.original_mapping_name)
if _id:
skipped.append(item)
raise Continue
@ -152,8 +153,8 @@ class Radarr:
logger.info("")
for movie in added:
logger.info(f"Added to Radarr | {movie.tmdbId:<7} | {movie.title}")
if self.config.Cache:
self.config.Cache.update_radarr_adds(movie.tmdbId, self.library.original_mapping_name)
if self.cache:
self.cache.update_radarr_adds(movie.tmdbId, self.library.original_mapping_name)
logger.info(f"{len(added)} Movie{'s' if len(added) > 1 else ''} added to Radarr")
if len(exists) > 0 or len(skipped) > 0:
@ -169,8 +170,8 @@ class Radarr:
upgrade_qp.append(movie)
else:
logger.info(f"Already in Radarr | {movie.tmdbId:<7} | {movie.title}")
if self.config.Cache:
self.config.Cache.update_radarr_adds(movie.tmdbId, self.library.original_mapping_name)
if self.cache:
self.cache.update_radarr_adds(movie.tmdbId, self.library.original_mapping_name)
if upgrade_qp:
self.api.edit_multiple_movies(upgrade_qp, quality_profile=qp)
for movie in upgrade_qp:

@ -8,11 +8,11 @@ builders = ["reciperr_list", "stevenlu_popular"]
stevenlu_url = "https://s3.amazonaws.com/popular-movies/movies.json"
class Reciperr:
def __init__(self, config):
self.config = config
def __init__(self, requests):
self.requests = requests
def _request(self, url, name="Reciperr"):
response = self.config.get(url)
response = self.requests.get(url)
if response.status_code >= 400:
raise Failed(f"{name} Error: JSON not found at {url}")
return response.json()

@ -0,0 +1,242 @@
import base64, os, ruamel.yaml, requests
from lxml import html
from modules import util
from modules.poster import ImageData
from modules.util import Failed
from requests.exceptions import ConnectionError
from retrying import retry
from urllib import parse
logger = util.logger
image_content_types = ["image/png", "image/jpeg", "image/webp"]
def get_header(headers, header, language):
if headers:
return headers
else:
if header and not language:
language = "en-US,en;q=0.5"
if language:
return {
"Accept-Language": "eng" if language == "default" else language,
"User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:109.0) Gecko/20100101 Firefox/113.0"
}
def parse_version(version, text="develop"):
version = version.replace("develop", text)
split_version = version.split(f"-{text}")
return version, split_version[0], int(split_version[1]) if len(split_version) > 1 else 0
def quote(data):
return parse.quote(str(data))
def quote_plus(data):
return parse.quote_plus(str(data))
def parse_qs(data):
return parse.parse_qs(data)
def urlparse(data):
return parse.urlparse(str(data))
class Requests:
def __init__(self, file_version, env_version, git_branch, verify_ssl=True):
self.file_version = file_version
self.env_version = env_version
self.git_branch = git_branch
self.image_content_types = ["image/png", "image/jpeg", "image/webp"]
self.nightly_version = None
self.develop_version = None
self.master_version = None
self.session = self.create_session()
self.global_ssl = verify_ssl
if not self.global_ssl:
self.no_verify_ssl()
self.branch = self.guess_branch()
self.version = (self.file_version[0].replace("develop", self.branch), self.file_version[1].replace("develop", self.branch), self.file_version[2])
self.latest_version = self.current_version(self.version, branch=self.branch)
self.new_version = self.latest_version[0] if self.latest_version and (self.version[1] != self.latest_version[1] or (self.version[2] and self.version[2] < self.latest_version[2])) else None
def create_session(self, verify_ssl=True):
session = requests.Session()
if not verify_ssl:
self.no_verify_ssl(session)
return session
def no_verify_ssl(self, session=None):
if session is None:
session = self.session
session.verify = False
if session.verify is False:
import urllib3
urllib3.disable_warnings(urllib3.exceptions.InsecureRequestWarning)
def has_new_version(self):
return self.version[0] != "Unknown" and self.latest_version[0] != "Unknown" and (self.version[1] != self.latest_version[1] or (self.version[2] and self.version[2] < self.latest_version[2]))
def download_image(self, title, image_url, download_directory, is_poster=True, filename=None):
response = self.get_image(image_url)
new_image = os.path.join(download_directory, filename) if filename else download_directory
if response.headers["Content-Type"] == "image/jpeg":
new_image += ".jpg"
elif response.headers["Content-Type"] == "image/webp":
new_image += ".webp"
else:
new_image += ".png"
with open(new_image, "wb") as handler:
handler.write(response.content)
return ImageData("asset_directory", new_image, prefix=f"{title}'s ", is_poster=is_poster, is_url=False)
def file_yaml(self, path_to_file, check_empty=False, create=False, start_empty=False):
return YAML(path=path_to_file, check_empty=check_empty, create=create, start_empty=start_empty)
def get_yaml(self, url, check_empty=False):
response = self.get(url)
if response.status_code >= 400:
raise Failed(f"URL Error: No file found at {url}")
return YAML(input_data=response.content, check_empty=check_empty)
def get_image(self, url):
response = self.get(url, header=True)
if response.status_code == 404:
raise Failed(f"Image Error: Not Found on Image URL: {url}")
if response.status_code >= 400:
raise Failed(f"Image Error: {response.status_code} on Image URL: {url}")
if "Content-Type" not in response.headers or response.headers["Content-Type"] not in self.image_content_types:
raise Failed("Image Not PNG, JPG, or WEBP")
return response
def get_stream(self, url, location, info="Item"):
with self.session.get(url, stream=True) as r:
r.raise_for_status()
total_length = r.headers.get('content-length')
if total_length is not None:
total_length = int(total_length)
dl = 0
with open(location, "wb") as f:
for chunk in r.iter_content(chunk_size=8192):
dl += len(chunk)
f.write(chunk)
logger.ghost(f"Downloading {info}: {dl / total_length * 100:6.2f}%")
logger.exorcise()
def get_html(self, url, headers=None, params=None, header=None, language=None):
return html.fromstring(self.get(url, headers=headers, params=params, header=header, language=language).content)
def get_json(self, url, json=None, headers=None, params=None, header=None, language=None):
response = self.get(url, json=json, headers=headers, params=params, header=header, language=language)
try:
return response.json()
except ValueError:
logger.error(str(response.content))
raise
@retry(stop_max_attempt_number=6, wait_fixed=10000)
def get(self, url, json=None, headers=None, params=None, header=None, language=None):
return self.session.get(url, json=json, headers=get_header(headers, header, language), params=params)
def get_image_encoded(self, url):
return base64.b64encode(self.get(url).content).decode('utf-8')
def post_html(self, url, data=None, json=None, headers=None, header=None, language=None):
return html.fromstring(self.post(url, data=data, json=json, headers=headers, header=header, language=language).content)
def post_json(self, url, data=None, json=None, headers=None, header=None, language=None):
response = self.post(url, data=data, json=json, headers=headers, header=header, language=language)
try:
return response.json()
except ValueError:
logger.error(str(response.content))
raise
@retry(stop_max_attempt_number=6, wait_fixed=10000)
def post(self, url, data=None, json=None, headers=None, header=None, language=None):
return self.session.post(url, data=data, json=json, headers=get_header(headers, header, language))
def guess_branch(self):
if self.git_branch:
return self.git_branch
elif self.env_version in ["nightly", "develop"]:
return self.env_version
elif self.file_version[2] > 0:
dev_version = self.get_develop()
if self.file_version[1] != dev_version[1] or self.file_version[2] <= dev_version[2]:
return "develop"
else:
return "nightly"
else:
return "master"
def current_version(self, version, branch=None):
if branch == "nightly":
return self.get_nightly()
elif branch == "develop":
return self.get_develop()
elif version[2] > 0:
new_version = self.get_develop()
if version[1] != new_version[1] or new_version[2] >= version[2]:
return new_version
return self.get_nightly()
else:
return self.get_master()
def get_nightly(self):
if self.nightly_version is None:
self.nightly_version = self.get_version("nightly")
return self.nightly_version
def get_develop(self):
if self.develop_version is None:
self.develop_version = self.get_version("develop")
return self.develop_version
def get_master(self):
if self.master_version is None:
self.master_version = self.get_version("master")
return self.master_version
def get_version(self, level):
try:
url = f"https://raw.githubusercontent.com/Kometa-Team/Kometa/{level}/VERSION"
return parse_version(self.get(url).content.decode().strip(), text=level)
except ConnectionError:
return "Unknown", "Unknown", 0
class YAML:
def __init__(self, path=None, input_data=None, check_empty=False, create=False, start_empty=False):
self.path = path
self.input_data = input_data
self.yaml = ruamel.yaml.YAML()
self.yaml.width = 100000
self.yaml.indent(mapping=2, sequence=2)
try:
if input_data:
self.data = self.yaml.load(input_data)
else:
if start_empty or (create and not os.path.exists(self.path)):
with open(self.path, 'w'):
pass
self.data = {}
else:
with open(self.path, encoding="utf-8") as fp:
self.data = self.yaml.load(fp)
except ruamel.yaml.error.YAMLError as e:
e = str(e).replace("\n", "\n ")
raise Failed(f"YAML Error: {e}")
except Exception as e:
raise Failed(f"YAML Error: {e}")
if not self.data or not isinstance(self.data, dict):
if check_empty:
raise Failed("YAML Error: File is empty")
self.data = {}
def save(self):
if self.path:
with open(self.path, 'w', encoding="utf-8") as fp:
self.yaml.dump(self.data, fp)

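The `parse_version` helper in the new `modules/request.py` above is a pure function, so its behavior can be checked in isolation; reproduced here with sample inputs (the version strings are hypothetical, not values from the repository):

```python
def parse_version(version, text="develop"):
    # normalize the branch marker, then split off a trailing build number
    version = version.replace("develop", text)
    split_version = version.split(f"-{text}")
    # returns (full version, base version, build number or 0)
    return version, split_version[0], int(split_version[1]) if len(split_version) > 1 else 0

print(parse_version("2.0.1-develop15"))  # ('2.0.1-develop15', '2.0.1', 15)
print(parse_version("2.0.1"))            # ('2.0.1', '2.0.1', 0)
```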
@ -29,15 +29,16 @@ monitor_descriptions = {
apply_tags_translation = {"": "add", "sync": "replace", "remove": "remove"}
class Sonarr:
def __init__(self, config, library, params):
self.config = config
def __init__(self, requests, cache, library, params):
self.requests = requests
self.cache = cache
self.library = library
self.url = params["url"]
self.token = params["token"]
logger.secret(self.url)
logger.secret(self.token)
try:
self.api = SonarrAPI(self.url, self.token, session=self.config.session)
self.api = SonarrAPI(self.url, self.token, session=self.requests.session)
self.api.respect_list_exclusions_when_adding()
self.api._validate_add_options(params["root_folder_path"], params["quality_profile"], params["language_profile"]) # noqa
self.profiles = self.api.quality_profile()
@ -126,8 +127,8 @@ class Sonarr:
tvdb_id = item[0] if isinstance(item, tuple) else item
logger.ghost(f"Loading TVDb ID {i}/{len(tvdb_ids)} ({tvdb_id})")
try:
if self.config.Cache and not ignore_cache:
_id = self.config.Cache.query_sonarr_adds(tvdb_id, self.library.original_mapping_name)
if self.cache and not ignore_cache:
_id = self.cache.query_sonarr_adds(tvdb_id, self.library.original_mapping_name)
if _id:
skipped.append(item)
raise Continue
@ -176,8 +177,8 @@ class Sonarr:
logger.info("")
for series in added:
logger.info(f"Added to Sonarr | {series.tvdbId:<7} | {series.title}")
if self.config.Cache:
self.config.Cache.update_sonarr_adds(series.tvdbId, self.library.original_mapping_name)
if self.cache:
self.cache.update_sonarr_adds(series.tvdbId, self.library.original_mapping_name)
logger.info(f"{len(added)} Series added to Sonarr")
if len(exists) > 0 or len(skipped) > 0:
@ -193,8 +194,8 @@ class Sonarr:
upgrade_qp.append(series)
else:
logger.info(f"Already in Sonarr | {series.tvdbId:<7} | {series.title}")
if self.config.Cache:
self.config.Cache.update_sonarr_adds(series.tvdbId, self.library.original_mapping_name)
if self.cache:
self.cache.update_sonarr_adds(series.tvdbId, self.library.original_mapping_name)
if upgrade_qp:
self.api.edit_multiple_series(upgrade_qp, quality_profile=qp)
for series in upgrade_qp:

@ -8,8 +8,8 @@ logger = util.logger
builders = ["tautulli_popular", "tautulli_watched"]
class Tautulli:
def __init__(self, config, library, params):
self.config = config
def __init__(self, requests, library, params):
self.requests = requests
self.library = library
self.url = params["url"]
self.apikey = params["apikey"]
@ -69,4 +69,4 @@ class Tautulli:
if params:
for k, v in params.items():
final_params[k] = v
return self.config.get_json(self.api, params=final_params)
return self.requests.get_json(self.api, params=final_params)

@ -113,8 +113,8 @@ class TMDbMovie(TMDBObj):
super().__init__(tmdb, tmdb_id, ignore_cache=ignore_cache)
expired = None
data = None
if self._tmdb.config.Cache and not ignore_cache:
data, expired = self._tmdb.config.Cache.query_tmdb_movie(tmdb_id, self._tmdb.expiration)
if self._tmdb.cache and not ignore_cache:
data, expired = self._tmdb.cache.query_tmdb_movie(tmdb_id, self._tmdb.expiration)
if expired or not data:
data = self.load_movie()
super()._load(data)
@ -125,8 +125,8 @@ class TMDbMovie(TMDBObj):
self.collection_id = data["collection_id"] if isinstance(data, dict) else data.collection.id if data.collection else None
self.collection_name = data["collection_name"] if isinstance(data, dict) else data.collection.name if data.collection else None
if self._tmdb.config.Cache and not ignore_cache:
self._tmdb.config.Cache.update_tmdb_movie(expired, self, self._tmdb.expiration)
if self._tmdb.cache and not ignore_cache:
self._tmdb.cache.update_tmdb_movie(expired, self, self._tmdb.expiration)
@retry(stop_max_attempt_number=6, wait_fixed=10000, retry_on_exception=util.retry_if_not_failed)
def load_movie(self):
@ -144,8 +144,8 @@ class TMDbShow(TMDBObj):
super().__init__(tmdb, tmdb_id, ignore_cache=ignore_cache)
expired = None
data = None
if self._tmdb.config.Cache and not ignore_cache:
data, expired = self._tmdb.config.Cache.query_tmdb_show(tmdb_id, self._tmdb.expiration)
if self._tmdb.cache and not ignore_cache:
data, expired = self._tmdb.cache.query_tmdb_show(tmdb_id, self._tmdb.expiration)
if expired or not data:
data = self.load_show()
super()._load(data)
@ -162,8 +162,8 @@ class TMDbShow(TMDBObj):
loop = data.seasons if not isinstance(data, dict) else data["seasons"].split("%|%") if data["seasons"] else [] # noqa
self.seasons = [TMDbSeason(s) for s in loop]
if self._tmdb.config.Cache and not ignore_cache:
self._tmdb.config.Cache.update_tmdb_show(expired, self, self._tmdb.expiration)
if self._tmdb.cache and not ignore_cache:
self._tmdb.cache.update_tmdb_show(expired, self, self._tmdb.expiration)
@retry(stop_max_attempt_number=6, wait_fixed=10000, retry_on_exception=util.retry_if_not_failed)
def load_show(self):
@ -184,8 +184,8 @@ class TMDbEpisode:
self.ignore_cache = ignore_cache
expired = None
data = None
if self._tmdb.config.Cache and not ignore_cache:
data, expired = self._tmdb.config.Cache.query_tmdb_episode(self.tmdb_id, self.season_number, self.episode_number, self._tmdb.expiration)
if self._tmdb.cache and not ignore_cache:
data, expired = self._tmdb.cache.query_tmdb_episode(self.tmdb_id, self.season_number, self.episode_number, self._tmdb.expiration)
if expired or not data:
data = self.load_episode()
@ -198,8 +198,8 @@ class TMDbEpisode:
self.imdb_id = data["imdb_id"] if isinstance(data, dict) else data.imdb_id
self.tvdb_id = data["tvdb_id"] if isinstance(data, dict) else data.tvdb_id
if self._tmdb.config.Cache and not ignore_cache:
self._tmdb.config.Cache.update_tmdb_episode(expired, self, self._tmdb.expiration)
if self._tmdb.cache and not ignore_cache:
self._tmdb.cache.update_tmdb_episode(expired, self, self._tmdb.expiration)
@retry(stop_max_attempt_number=6, wait_fixed=10000, retry_on_exception=util.retry_if_not_failed)
def load_episode(self):
@ -215,13 +215,15 @@ class TMDbEpisode:
class TMDb:
def __init__(self, config, params):
self.config = config
self.requests = self.config.Requests
self.cache = self.config.Cache
self.apikey = params["apikey"]
self.language = params["language"]
self.region = None
self.expiration = params["expiration"]
logger.secret(self.apikey)
try:
self.TMDb = TMDbAPIs(self.apikey, language=self.language, session=self.config.session)
self.TMDb = TMDbAPIs(self.apikey, language=self.language, session=self.requests.session)
except TMDbException as e:
raise Failed(f"TMDb Error: {e}")
self.iso_3166_1 = {iso: i.name for iso, i in self.TMDb._iso_3166_1.items()} # noqa

@ -1,6 +1,7 @@
import requests, time, webbrowser
import time, webbrowser
from modules import util
from modules.util import Failed, TimeoutExpired, YAML
from modules.request import urlparse
from modules.util import Failed, TimeoutExpired
from retrying import retry
logger = util.logger
@ -36,8 +37,9 @@ id_types = {
}
class Trakt:
def __init__(self, config, params):
self.config = config
def __init__(self, requests, read_only, params):
self.requests = requests
self.read_only = read_only
self.client_id = params["client_id"]
self.client_secret = params["client_secret"]
self.pin = params["pin"]
@ -137,10 +139,9 @@ class Trakt:
"redirect_uri": redirect_uri,
"grant_type": "authorization_code"
}
response = self.config.post(f"{base_url}/oauth/token", json=json_data, headers={"Content-Type": "application/json"})
response = self.requests.post(f"{base_url}/oauth/token", json=json_data, headers={"Content-Type": "application/json"})
if response.status_code != 200:
raise Failed(f"Trakt Error: ({response.status_code}) {response.reason}")
#raise Failed("Trakt Error: Invalid trakt pin. If you're sure you typed it in correctly your client_id or client_secret may be invalid")
response_json = response.json()
logger.trace(response_json)
if not self._save(response_json):
@ -155,7 +156,7 @@ class Trakt:
"trakt-api-key": self.client_id
}
logger.secret(token)
response = self.config.get(f"{base_url}/users/settings", headers=headers)
response = self.requests.get(f"{base_url}/users/settings", headers=headers)
if response.status_code == 423:
raise Failed("Trakt Error: Account is Locked please Contact Trakt Support")
if response.status_code != 200:
@ -172,7 +173,7 @@ class Trakt:
"redirect_uri": redirect_uri,
"grant_type": "refresh_token"
}
response = self.config.post(f"{base_url}/oauth/token", json=json_data, headers={"Content-Type": "application/json"})
response = self.requests.post(f"{base_url}/oauth/token", json=json_data, headers={"Content-Type": "application/json"})
if response.status_code != 200:
return False
return self._save(response.json())
@ -180,8 +181,8 @@ class Trakt:
def _save(self, authorization):
if authorization and self._check(authorization):
if self.authorization != authorization and not self.config.read_only:
yaml = YAML(self.config_path)
if self.authorization != authorization and not self.read_only:
yaml = self.requests.file_yaml(self.config_path)
yaml.data["trakt"]["pin"] = None
yaml.data["trakt"]["authorization"] = {
"access_token": authorization["access_token"],
@ -219,9 +220,9 @@ class Trakt:
if pages > 1:
params["page"] = current
if json_data is not None:
response = self.config.post(f"{base_url}{url}", json=json_data, headers=headers)
response = self.requests.post(f"{base_url}{url}", json=json_data, headers=headers)
else:
response = self.config.get(f"{base_url}{url}", headers=headers, params=params)
response = self.requests.get(f"{base_url}{url}", headers=headers, params=params)
if pages == 1 and "X-Pagination-Page-Count" in response.headers and not params:
pages = int(response.headers["X-Pagination-Page-Count"])
if response.status_code >= 400:
@ -251,7 +252,7 @@ class Trakt:
def list_description(self, data):
try:
return self._request(requests.utils.urlparse(data).path)["description"]
return self._request(urlparse(data).path)["description"]
except Failed:
raise Failed(data)
@ -313,7 +314,7 @@ class Trakt:
return data
def sync_list(self, slug, ids):
current_ids = self._list(slug, urlparse=False, fail=False)
current_ids = self._list(slug, parse=False, fail=False)
def read_result(data, obj_type, result_type, result_str=None):
result_str = result_str if result_str else result_type.capitalize()
@ -351,7 +352,7 @@ class Trakt:
read_not_found(results, "Remove")
time.sleep(1)
trakt_ids = self._list(slug, urlparse=False, trakt_ids=True)
trakt_ids = self._list(slug, parse=False, trakt_ids=True)
trakt_lookup = {f"{ty}_{i_id}": t_id for t_id, i_id, ty in trakt_ids}
rank_ids = [trakt_lookup[f"{ty}_{i_id}"] for i_id, ty in ids if f"{ty}_{i_id}" in trakt_lookup]
self._request(f"/users/me/lists/{slug}/items/reorder", json_data={"rank": rank_ids})
@ -376,9 +377,9 @@ class Trakt:
def build_user_url(self, user, name):
return f"{base_url.replace('api.', '')}/users/{user}/lists/{name}"
def _list(self, data, urlparse=True, trakt_ids=False, fail=True, ignore_other=False):
def _list(self, data, parse=True, trakt_ids=False, fail=True, ignore_other=False):
try:
url = requests.utils.urlparse(data).path.replace("/official/", "/") if urlparse else f"/users/me/lists/{data}"
url = urlparse(data).path.replace("/official/", "/") if parse else f"/users/me/lists/{data}"
items = self._request(f"{url}/items")
except Failed:
raise Failed(f"Trakt Error: List {data} not found")
@ -417,7 +418,7 @@ class Trakt:
return self._parse(items, typeless=chart_type == "popular", item_type="movie" if is_movie else "show", ignore_other=ignore_other)
def get_people(self, data):
return {str(i[0][0]): i[0][1] for i in self._list(data) if i[1] == "tmdb_person"}
return {str(i[0][0]): i[0][1] for i in self._list(data) if i[1] == "tmdb_person"} # noqa
def validate_list(self, trakt_lists):
values = util.get_list(trakt_lists, split=False)

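Renaming the `urlparse` keyword argument of `_list` to `parse` in the hunk above avoids shadowing the `urlparse` function now imported from `modules.request`; the path manipulation it guards is plain string work on the parsed URL (the list URL below is a hypothetical example):

```python
from urllib.parse import urlparse

# strip the "/official/" segment from a Trakt list URL path, as _list does
url = "https://trakt.tv/users/me/lists/official/top-movies"
path = urlparse(url).path.replace("/official/", "/")
print(path)  # /users/me/lists/top-movies
```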
@ -1,9 +1,10 @@
import re, requests, time
import re, time
from datetime import datetime
from lxml import html
from lxml.etree import ParserError
from modules import util
from modules.util import Failed
from requests.exceptions import MissingSchema
from retrying import retry
logger = util.logger
@ -48,8 +49,8 @@ class TVDbObj:
self.ignore_cache = ignore_cache
expired = None
data = None
if self._tvdb.config.Cache and not ignore_cache:
data, expired = self._tvdb.config.Cache.query_tvdb(tvdb_id, is_movie, self._tvdb.expiration)
if self._tvdb.cache and not ignore_cache:
data, expired = self._tvdb.cache.query_tvdb(tvdb_id, is_movie, self._tvdb.expiration)
if expired or not data:
item_url = f"{urls['movie_id' if is_movie else 'series_id']}{tvdb_id}"
try:
@ -100,12 +101,13 @@ class TVDbObj:
self.genres = parse_page("//strong[text()='Genres']/parent::li/span/a/text()[normalize-space()]", is_list=True)
if self._tvdb.config.Cache and not ignore_cache:
self._tvdb.config.Cache.update_tvdb(expired, self, self._tvdb.expiration)
if self._tvdb.cache and not ignore_cache:
self._tvdb.cache.update_tvdb(expired, self, self._tvdb.expiration)
class TVDb:
def __init__(self, config, tvdb_language, expiration):
self.config = config
def __init__(self, requests, cache, tvdb_language, expiration):
self.requests = requests
self.cache = cache
self.language = tvdb_language
self.expiration = expiration
@ -115,7 +117,7 @@ class TVDb:
@retry(stop_max_attempt_number=6, wait_fixed=10000, retry_on_exception=util.retry_if_not_failed)
def get_request(self, tvdb_url):
response = self.config.get(tvdb_url, headers=util.header(self.language))
response = self.requests.get(tvdb_url, language=self.language)
if response.status_code >= 400:
raise Failed(f"({response.status_code}) {response.reason}")
return html.fromstring(response.content)
@ -136,8 +138,8 @@ class TVDb:
else:
raise Failed(f"TVDb Error: {tvdb_url} must begin with {urls['movies']} or {urls['series']}")
expired = None
if self.config.Cache and not ignore_cache and not is_movie:
tvdb_id, expired = self.config.Cache.query_tvdb_map(tvdb_url, self.expiration)
if self.cache and not ignore_cache and not is_movie:
tvdb_id, expired = self.cache.query_tvdb_map(tvdb_url, self.expiration)
if tvdb_id and not expired:
return tvdb_id, None, None
logger.trace(f"URL: {tvdb_url}")
@ -165,8 +167,8 @@ class TVDb:
pass
if tmdb_id is None and imdb_id is None:
raise Failed(f"TVDb Error: No TMDb ID or IMDb ID found")
if self.config.Cache and not ignore_cache and not is_movie:
self.config.Cache.update_tvdb_map(expired, tvdb_url, tvdb_id, self.expiration)
if self.cache and not ignore_cache and not is_movie:
self.cache.update_tvdb_map(expired, tvdb_url, tvdb_id, self.expiration)
return tvdb_id, tmdb_id, imdb_id
elif tvdb_url.startswith(urls["movie_id"]):
err_text = f"using TVDb Movie ID: {tvdb_url[len(urls['movie_id']):]}"
@@ -177,7 +179,7 @@
raise Failed(f"TVDb Error: Could not find a TVDb {media_type} {err_text}")
def get_list_description(self, tvdb_url):
response = self.config.get_html(tvdb_url, headers=util.header(self.language))
response = self.requests.get_html(tvdb_url, language=self.language)
description = response.xpath("//div[@class='block']/div[not(@style='display:none')]/p/text()")
description = description[0] if len(description) > 0 and len(description[0]) > 0 else None
poster = response.xpath("//div[@id='artwork']/div/div/a/@href")
@@ -190,7 +192,7 @@
logger.trace(f"URL: {tvdb_url}")
if tvdb_url.startswith((urls["list"], urls["alt_list"])):
try:
response = self.config.get_html(tvdb_url, headers=util.header(self.language))
response = self.requests.get_html(tvdb_url, language=self.language)
items = response.xpath("//div[@id='general']//div/div/h3/a")
for item in items:
title = item.xpath("text()")[0]
@@ -217,7 +219,7 @@
if len(ids) > 0:
return ids
raise Failed(f"TVDb Error: No TVDb IDs found at {tvdb_url}")
except requests.exceptions.MissingSchema:
except MissingSchema:
logger.stacktrace()
raise Failed(f"TVDb Error: URL Lookup Failed for {tvdb_url}")
else:

@@ -1,4 +1,4 @@
import glob, os, re, requests, ruamel.yaml, signal, sys, time
import glob, os, re, signal, sys, time
from datetime import datetime, timedelta
from modules.logs import MyLogger
from num2words import num2words
@@ -43,19 +43,6 @@ class NotScheduled(Exception):
class NotScheduledRange(NotScheduled):
pass
class ImageData:
def __init__(self, attribute, location, prefix="", is_poster=True, is_url=True, compare=None):
self.attribute = attribute
self.location = location
self.prefix = prefix
self.is_poster = is_poster
self.is_url = is_url
self.compare = compare if compare else location if is_url else os.stat(location).st_size
self.message = f"{prefix}{'poster' if is_poster else 'background'} to [{'URL' if is_url else 'File'}] {location}"
def __str__(self):
return str(self.__dict__)
def retry_if_not_failed(exception):
return not isinstance(exception, Failed)
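The `retry_if_not_failed` predicate above is passed to the `retrying` library's `retry_on_exception`, which retries when the predicate returns `True`. So a deliberate `Failed` aborts immediately while transient errors (network hiccups, parse errors) are retried. A self-contained stand-in for that decorator's behavior, to make the semantics concrete:

```python
class Failed(Exception):
    pass


def retry_if_not_failed(exception):
    # Retry on anything except a deliberate Failed (same predicate as above)
    return not isinstance(exception, Failed)


def call_with_retry(func, attempts=3):
    """Tiny stand-in for @retry(retry_on_exception=retry_if_not_failed):
    re-invokes func on retryable errors, re-raises otherwise."""
    for attempt in range(1, attempts + 1):
        try:
            return func()
        except Exception as e:
            if not retry_if_not_failed(e) or attempt == attempts:
                raise
```

With this shape, `get_request` above retries up to six times (waiting 10s between tries in the real decorator) on connection errors, but a `Failed` raised from a 4xx/5xx check propagates on the first attempt.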
@@ -108,88 +95,6 @@ parental_labels = [f"{t.capitalize()}:{v}" for t in parental_types for v in pare
previous_time = None
start_time = None
def guess_branch(version, env_version, git_branch):
if git_branch:
return git_branch
elif env_version in ["nightly", "develop"]:
return env_version
elif version[2] > 0:
dev_version = get_develop()
if version[1] != dev_version[1] or version[2] <= dev_version[2]:
return "develop"
else:
return "nightly"
else:
return "master"
def current_version(version, branch=None):
if branch == "nightly":
return get_nightly()
elif branch == "develop":
return get_develop()
elif version[2] > 0:
new_version = get_develop()
if version[1] != new_version[1] or new_version[2] >= version[2]:
return new_version
return get_nightly()
else:
return get_master()
nightly_version = None
def get_nightly():
global nightly_version
if nightly_version is None:
nightly_version = get_version("nightly")
return nightly_version
develop_version = None
def get_develop():
global develop_version
if develop_version is None:
develop_version = get_version("develop")
return develop_version
master_version = None
def get_master():
global master_version
if master_version is None:
master_version = get_version("master")
return master_version
def get_version(level):
try:
url = f"https://raw.githubusercontent.com/Kometa-Team/Kometa/{level}/VERSION"
return parse_version(requests.get(url).content.decode().strip(), text=level)
except requests.exceptions.ConnectionError:
return "Unknown", "Unknown", 0
def parse_version(version, text="develop"):
version = version.replace("develop", text)
split_version = version.split(f"-{text}")
return version, split_version[0], int(split_version[1]) if len(split_version) > 1 else 0
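The `VERSION` file fetched by `get_version` holds strings like `1.21.0` on master or `1.21.0-develop42` on the develop and nightly branches; `parse_version` splits these into the full string, the base version, and the numeric build suffix. Worked examples of the function above (reproduced verbatim so they run standalone):

```python
def parse_version(version, text="develop"):
    # e.g. "1.21.0-develop42" -> ("1.21.0-develop42", "1.21.0", 42)
    version = version.replace("develop", text)
    split_version = version.split(f"-{text}")
    return version, split_version[0], int(split_version[1]) if len(split_version) > 1 else 0
```

A master-branch string has no suffix, so the build number defaults to 0; with `text="nightly"`, the `develop` marker in the string is first rewritten, then split on `-nightly`, yielding the same base version and build number.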
def quote(data):
return requests.utils.quote(str(data))
def download_image(title, image_url, download_directory, is_poster=True, filename=None):
response = requests.get(image_url, headers=header())
if response.status_code == 404:
raise Failed(f"Image Error: Not Found on Image URL: {image_url}")
if response.status_code >= 400:
raise Failed(f"Image Error: {response.status_code} on Image URL: {image_url}")
if "Content-Type" not in response.headers or response.headers["Content-Type"] not in image_content_types:
raise Failed("Image Not PNG, JPG, or WEBP")
new_image = os.path.join(download_directory, f"{filename}") if filename else download_directory
if response.headers["Content-Type"] == "image/jpeg":
new_image += ".jpg"
elif response.headers["Content-Type"] == "image/webp":
new_image += ".webp"
else:
new_image += ".png"
with open(new_image, "wb") as handler:
handler.write(response.content)
return ImageData("asset_directory", new_image, prefix=f"{title}'s ", is_poster=is_poster, is_url=False)
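`download_image` above derives the saved file's extension from the response `Content-Type` header after validating it against `image_content_types`; anything that is not JPEG or WEBP falls through to `.png`. The branching reduces to:

```python
def image_extension(content_type):
    """Map a validated image Content-Type to a file extension,
    mirroring the branching in download_image above."""
    if content_type == "image/jpeg":
        return ".jpg"
    elif content_type == "image/webp":
        return ".webp"
    return ".png"  # the only remaining accepted type is image/png
```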
def get_image_dicts(group, alias):
posters = {}
backgrounds = {}
@@ -205,34 +110,6 @@ def get_image_dicts(group, alias):
logger.error(f"Metadata Error: {attr} attribute is blank")
return posters, backgrounds
def pick_image(title, images, prioritize_assets, download_url_assets, item_dir, is_poster=True, image_name=None):
image_type = "poster" if is_poster else "background"
if image_name is None:
image_name = image_type
if images:
logger.debug(f"{len(images)} {image_type}{'s' if len(images) > 1 else ''} found:")
for i in images:
logger.debug(f"Method: {i} {image_type.capitalize()}: {images[i]}")
if prioritize_assets and "asset_directory" in images:
return images["asset_directory"]
for attr in ["style_data", f"url_{image_type}", f"file_{image_type}", f"tmdb_{image_type}", "tmdb_profile",
"tmdb_list_poster", "tvdb_list_poster", f"tvdb_{image_type}", "asset_directory", f"pmm_{image_type}",
"tmdb_person", "tmdb_collection_details", "tmdb_actor_details", "tmdb_crew_details", "tmdb_director_details",
"tmdb_producer_details", "tmdb_writer_details", "tmdb_movie_details", "tmdb_list_details",
"tvdb_list_details", "tvdb_movie_details", "tvdb_show_details", "tmdb_show_details"]:
if attr in images:
if attr in ["style_data", f"url_{image_type}"] and download_url_assets and item_dir:
if "asset_directory" in images:
return images["asset_directory"]
else:
try:
return download_image(title, images[attr], item_dir, is_poster=is_poster, filename=image_name)
except Failed as e:
logger.error(e)
if attr in ["asset_directory", f"pmm_{image_type}"]:
return images[attr]
return ImageData(attr, images[attr], is_poster=is_poster, is_url=attr != f"file_{image_type}")
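`pick_image` above walks a fixed priority list and returns the first image source present, with `asset_directory` jumping the queue when `prioritize_assets` is set. A condensed sketch of just that selection order (abridged priority list, `ImageData` construction and URL-download handling omitted):

```python
def pick_first_source(images, image_type="poster", prioritize_assets=False):
    """Condensed sketch of pick_image's ordering: asset_directory wins
    outright when prioritized, otherwise the first attribute from the
    fixed priority list found in `images` is chosen."""
    if prioritize_assets and "asset_directory" in images:
        return "asset_directory"
    priority = ["style_data", f"url_{image_type}", f"file_{image_type}",
                f"tmdb_{image_type}", "asset_directory", f"pmm_{image_type}"]
    for attr in priority:
        if attr in images:
            return attr
    return None  # no usable source found
```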
def add_dict_list(keys, value, dict_map):
for key in keys:
if key in dict_map:
@@ -1012,36 +889,3 @@ def get_system_fonts():
return dirs
system_fonts = [n for d in dirs for _, _, ns in os.walk(d) for n in ns]
return system_fonts
class YAML:
def __init__(self, path=None, input_data=None, check_empty=False, create=False, start_empty=False):
self.path = path
self.input_data = input_data
self.yaml = ruamel.yaml.YAML()
self.yaml.width = 100000
self.yaml.indent(mapping=2, sequence=2)
try:
if input_data:
self.data = self.yaml.load(input_data)
else:
if start_empty or (create and not os.path.exists(self.path)):
with open(self.path, 'w'):
pass
self.data = {}
else:
with open(self.path, encoding="utf-8") as fp:
self.data = self.yaml.load(fp)
except ruamel.yaml.error.YAMLError as e:
e = str(e).replace("\n", "\n ")
raise Failed(f"YAML Error: {e}")
except Exception as e:
raise Failed(f"YAML Error: {e}")
if not self.data or not isinstance(self.data, dict):
if check_empty:
raise Failed("YAML Error: File is empty")
self.data = {}
def save(self):
if self.path:
with open(self.path, 'w', encoding="utf-8") as fp:
self.yaml.dump(self.data, fp)
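The `YAML` wrapper being removed here (and re-homed by this PR) follows one pattern throughout: convert low-level parser exceptions into the project's own `Failed` type so callers only handle one error class, and normalize empty or non-dict results. The same pattern with the stdlib `json` module standing in for `ruamel.yaml` (which is third-party), so the sketch runs without dependencies:

```python
import json


class Failed(Exception):
    pass


def load_config(text, check_empty=False):
    """Error-wrapping pattern from the YAML class above: parser errors
    become Failed, and non-dict results collapse to an empty dict
    (json.loads used here as a stand-in for the YAML loader)."""
    try:
        data = json.loads(text)
    except json.JSONDecodeError as e:
        raise Failed(f"Config Error: {e}")
    if not data or not isinstance(data, dict):
        if check_empty:
            raise Failed("Config Error: File is empty")
        data = {}
    return data
```

The `check_empty` flag matches the original's behavior: an empty file is an error only when the caller says it must have content; otherwise it silently becomes `{}`.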

@@ -1,12 +1,13 @@
from json import JSONDecodeError
from modules import util
from modules.util import Failed, YAML
from modules.util import Failed
logger = util.logger
class Webhooks:
def __init__(self, config, system_webhooks, library=None, notifiarr=None, gotify=None):
self.config = config
self.requests = self.config.Requests
self.error_webhooks = system_webhooks["error"] if "error" in system_webhooks else []
self.version_webhooks = system_webhooks["version"] if "version" in system_webhooks else []
self.run_start_webhooks = system_webhooks["run_start"] if "run_start" in system_webhooks else []
@@ -39,7 +40,7 @@ class Webhooks:
json = self.discord(json)
elif webhook.startswith("https://hooks.slack.com/services"):
json = self.slack(json)
response = self.config.post(webhook, json=json)
response = self.requests.post(webhook, json=json)
if response is not None:
try:
response_json = response.json()
@@ -47,7 +48,7 @@
if webhook == "notifiarr" and self.notifiarr and response.status_code == 400:
def remove_from_config(text, hook_cat):
if response_json["details"]["response"] == text:
yaml = YAML(self.config.config_path)
yaml = self.requests.file_yaml(self.config.config_path)
changed = False
if hook_cat in yaml.data and yaml.data["webhooks"][hook_cat]:
if isinstance(yaml.data["webhooks"][hook_cat], list) and "notifiarr" in yaml.data["webhooks"][hook_cat]:
@@ -83,7 +84,7 @@
if version[1] != latest_version[1]:
notes = self.config.GitHub.latest_release_notes()
elif version[2] and version[2] < latest_version[2]:
notes = self.config.GitHub.get_commits(version[2], nightly=self.config.branch == "nightly")
notes = self.config.GitHub.get_commits(version[2], nightly=self.requests.branch == "nightly")
self._request(self.version_webhooks, {"event": "version", "current": version[0], "latest": latest_version[0], "notes": notes})
def end_time_hooks(self, start_time, end_time, run_time, stats):
@@ -124,10 +125,10 @@
if self.library:
thumb = None
if not poster_url and collection.thumb and next((f for f in collection.fields if f.name == "thumb"), None):
thumb = self.config.get_image_encoded(f"{self.library.url}{collection.thumb}?X-Plex-Token={self.library.token}")
thumb = self.requests.get_image_encoded(f"{self.library.url}{collection.thumb}?X-Plex-Token={self.library.token}")
art = None
if not playlist and not background_url and collection.art and next((f for f in collection.fields if f.name == "art"), None):
art = self.config.get_image_encoded(f"{self.library.url}{collection.art}?X-Plex-Token={self.library.token}")
art = self.requests.get_image_encoded(f"{self.library.url}{collection.art}?X-Plex-Token={self.library.token}")
self._request(webhooks, {
"event": "changes",
"server_name": self.library.PlexServer.friendlyName,
@@ -330,4 +331,3 @@ class Webhooks:
fields.append(field)
new_json["embeds"][0]["fields"] = fields
return new_json
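`Webhooks._request` above reshapes the payload per service based on the webhook URL prefix before posting it. A condensed sketch of that routing — only the Slack prefix appears in this hunk, so the Discord prefix is an assumption inferred from the `self.discord(json)` call, and the transforms themselves are stubbed:

```python
def route_payload(webhook, payload):
    """Sketch of the per-service routing in Webhooks._request: known
    URL prefixes get service-specific envelopes (real transforms are
    self.discord(json) / self.slack(json)); everything else posts as-is."""
    if str(webhook).startswith("https://discord.com/api/webhooks"):  # assumed prefix
        return {"service": "discord", "json": payload}
    if str(webhook).startswith("https://hooks.slack.com/services"):
        return {"service": "slack", "json": payload}
    return {"service": "generic", "json": payload}
```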

@@ -8,9 +8,9 @@ PlexAPI==4.15.13
psutil==5.9.8
python-dotenv==1.0.1
python-dateutil==2.9.0.post0
requests==2.32.1
requests==2.32.2
retrying==1.3.4
ruamel.yaml==0.18.6
schedule==1.2.1
setuptools==69.5.1
schedule==1.2.2
setuptools==70.0.0
tmdbapis==1.2.16