Merge remote-tracking branch 'yozora/docfixes' into nightly

pull/1949/head
meisnate12 2 months ago
commit f5c5aa345d

@@ -8,14 +8,14 @@ Updated tmdbapis requirement to 1.2.7
Because FlixPatrol moved much of its data behind a paywall and reworked its pages to remove IMDb IDs and TMDb IDs, the flixpatrol builders and default files have been removed. There are currently no plans to re-add them.
# New Features
Added new [BoxOfficeMojo Builder](https://metamanager.wiki/en/latest/files/builders/mojo/) - credit to @nwithan8 for the suggestion and initial code submission
Added `monitor_existing` to sonarr and radarr to update the monitored status of items existing in Plex to match the declared `monitor` value.
Added [Gotify](https://gotify.net/) as a notification service. Thanks to @krstn420 for the initial code.
Added [Trakt and MyAnimeList Authentication Page](https://metamanager.wiki/en/latest/config/auth/) allowing users to authenticate against those services directly from the wiki - credit to @chazlarson for developing the script
# Updates
Reworked PMM Default Streaming [Collections](https://metamanager.wiki/en/latest/defaults/both/streaming) and [Overlays](https://metamanager.wiki/en/latest/defaults/overlays/streaming) to utilize TMDB Watch Provider data, allowing users to customize regions without relying on mdblist. This data will be more accurate and up-to-date now.
Added new [`trakt_chart` attributes](https://metamanager.wiki/en/latest/files/builders/trakt/#trakt-chart) `network_ids`, `studio_ids`, `votes`, `tmdb_ratings`, `tmdb_votes`, `imdb_ratings`, `imdb_votes`, `rt_meters`, `rt_user_meters`, `metascores` and removed the deprecated `network` attribute
Removed the Trakt Builder `trakt_userlist` value `recommendations` and added `favorites`.
Mass Update operations can now be given a list of sources to fall back on when one fails, including a manual source.
`mass_content_rating_update` has a new source `mdb_age_rating`
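The fallback behavior described for Mass Update operations can be sketched as follows. This is a hypothetical illustration, not the actual PMM implementation: `resolve_rating`, `fetch_from`, and the source names are placeholders showing how each source in the list is tried in order until one returns a value.

```python
# Hypothetical sketch of falling back through a list of sources.
# fetch_from() and the source names are placeholders, not PMM code.
def fetch_from(source, item):
    # Placeholder lookup: pretend only "mdb_age_rating" knows this item.
    return "PG-13" if source == "mdb_age_rating" else None

def resolve_rating(item, sources):
    """Try each source in order; return the first non-empty result."""
    for source in sources:
        try:
            value = fetch_from(source, item)
        except Exception:
            continue  # this source errored; fall back to the next one
        if value is not None:
            return value
    return None  # every source failed or had no data
```

A manual source would simply be one more entry in `sources` that returns a user-supplied value.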

@@ -507,7 +507,7 @@ The available attributes for each library are as follows:
 upgrade_existing: false
 monitor_existing: false
 root_folder_path: /movies
-monitor: movie
+monitor: false
 availability: released
 tag:
 search: false

@@ -29,7 +29,7 @@ radarr:
 upgrade_existing: false
 monitor_existing: false
 root_folder_path: S:/Movies
-monitor: movie
+monitor: false
 availability: announced
 quality_profile: HD-1080p
 tag: pmm
@@ -80,7 +80,7 @@ radarr:
 upgrade_existing: #
 monitor_existing: #
 root_folder_path: /movies
-monitor: movie
+monitor: false
 availability: announced
 quality_profile: HD-1080p
 tag:
@@ -146,7 +146,7 @@ radarr:
 upgrade_existing: false
 monitor_existing: false
 root_folder_path: /movies
-monitor: movie
+monitor: false
 availability: released
 tag:
 search: false

@@ -295,7 +295,6 @@ table.dualTable td, table.dualTable th {
/* Custom tooltips */
.md-tooltip {
background-color: var(--md-primary-fg-color);
border-radius: 6px;
}

@@ -417,31 +417,35 @@ class IMDb:
         imdb_ids = []
         logger.ghost("Parsing Page 1")
         response_json = self._graph_request(json_obj)
-        total = response_json["data"]["advancedTitleSearch"]["total"]
-        limit = data["limit"]
-        if limit < 1 or total < limit:
-            limit = total
-        remainder = limit % item_count
-        if remainder == 0:
-            remainder = item_count
-        num_of_pages = math.ceil(int(limit) / item_count)
-        end_cursor = response_json["data"]["advancedTitleSearch"]["pageInfo"]["endCursor"]
-        imdb_ids.extend([n["node"]["title"]["id"] for n in response_json["data"]["advancedTitleSearch"]["edges"]])
-        if num_of_pages > 1:
-            for i in range(2, num_of_pages + 1):
-                start_num = (i - 1) * item_count + 1
-                logger.ghost(f"Parsing Page {i}/{num_of_pages} {start_num}-{limit if i == num_of_pages else i * item_count}")
-                json_obj["variables"]["after"] = end_cursor
-                response_json = self._graph_request(json_obj)
-                end_cursor = response_json["data"]["advancedTitleSearch"]["pageInfo"]["endCursor"]
-                ids_found = [n["node"]["title"]["id"] for n in response_json["data"]["advancedTitleSearch"]["edges"]]
-                if i == num_of_pages:
-                    ids_found = ids_found[:remainder]
-                imdb_ids.extend(ids_found)
-        logger.exorcise()
-        if len(imdb_ids) > 0:
-            return imdb_ids
-        raise Failed("IMDb Error: No IMDb IDs Found")
+        try:
+            total = response_json["data"]["advancedTitleSearch"]["total"]
+            limit = data["limit"]
+            if limit < 1 or total < limit:
+                limit = total
+            remainder = limit % item_count
+            if remainder == 0:
+                remainder = item_count
+            num_of_pages = math.ceil(int(limit) / item_count)
+            end_cursor = response_json["data"]["advancedTitleSearch"]["pageInfo"]["endCursor"]
+            imdb_ids.extend([n["node"]["title"]["id"] for n in response_json["data"]["advancedTitleSearch"]["edges"]])
+            if num_of_pages > 1:
+                for i in range(2, num_of_pages + 1):
+                    start_num = (i - 1) * item_count + 1
+                    logger.ghost(f"Parsing Page {i}/{num_of_pages} {start_num}-{limit if i == num_of_pages else i * item_count}")
+                    json_obj["variables"]["after"] = end_cursor
+                    response_json = self._graph_request(json_obj)
+                    end_cursor = response_json["data"]["advancedTitleSearch"]["pageInfo"]["endCursor"]
+                    ids_found = [n["node"]["title"]["id"] for n in response_json["data"]["advancedTitleSearch"]["edges"]]
+                    if i == num_of_pages:
+                        ids_found = ids_found[:remainder]
+                    imdb_ids.extend(ids_found)
+            logger.exorcise()
+            if len(imdb_ids) > 0:
+                return imdb_ids
+            raise Failed("IMDb Error: No IMDb IDs Found")
+        except KeyError:
+            logger.error(f"Response: {response_json}")
+            raise
     def _award(self, data):
         final_list = []
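The paging arithmetic in the hunk above (clamping the limit, computing the page count, and trimming the final page) can be checked in isolation. This is a minimal sketch using the same math; `page_plan` is a made-up helper name, with `item_count` standing in for the page size used by the IMDb GraphQL request:

```python
import math

def page_plan(total, limit, item_count):
    """Mirror the paging math from the IMDb builder above: clamp the
    requested limit to the total available, compute how many pages
    are needed, and how many items to keep from the final page."""
    if limit < 1 or total < limit:
        limit = total
    remainder = limit % item_count
    if remainder == 0:
        remainder = item_count  # final page is completely full
    num_of_pages = math.ceil(limit / item_count)
    return num_of_pages, remainder
```

For example, requesting 120 items in pages of 50 yields three pages, with only the first 20 results kept from page three, which is exactly what the `ids_found[:remainder]` slice does on the last iteration.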
