diff --git a/CHANGELOG.md b/CHANGELOG.md index f20276cbd7..f767bc71d5 100644 --- a/CHANGELOG.md +++ b/CHANGELOG.md @@ -1,24 +1,59 @@ # Changelog -## [3.9.2-nightly.1](https://github.com/pypeclub/OpenPype/tree/HEAD) +## [3.9.2-nightly.3](https://github.com/pypeclub/OpenPype/tree/HEAD) [Full Changelog](https://github.com/pypeclub/OpenPype/compare/3.9.1...HEAD) +### 📖 Documentation + +- Docs: Added MongoDB requirements [\#2951](https://github.com/pypeclub/OpenPype/pull/2951) + +**🆕 New features** + +- Multiverse: First PR [\#2908](https://github.com/pypeclub/OpenPype/pull/2908) + **🚀 Enhancements** +- Slack: Added configurable maximum file size of review upload to Slack [\#2945](https://github.com/pypeclub/OpenPype/pull/2945) +- NewPublisher: Prepared implementation of optional pyblish plugin [\#2943](https://github.com/pypeclub/OpenPype/pull/2943) +- Workfiles: Open published workfiles [\#2925](https://github.com/pypeclub/OpenPype/pull/2925) +- General: Default modules loaded dynamically [\#2923](https://github.com/pypeclub/OpenPype/pull/2923) - CI: change the version bump logic [\#2919](https://github.com/pypeclub/OpenPype/pull/2919) - Deadline: Add headless argument [\#2916](https://github.com/pypeclub/OpenPype/pull/2916) +- Nuke: Add no-audio Tag [\#2911](https://github.com/pypeclub/OpenPype/pull/2911) - Ftrack: Fill workfile in custom attribute [\#2906](https://github.com/pypeclub/OpenPype/pull/2906) +- Nuke: improving readability [\#2903](https://github.com/pypeclub/OpenPype/pull/2903) - Settings UI: Add simple tooltips for settings entities [\#2901](https://github.com/pypeclub/OpenPype/pull/2901) **🐛 Bug fixes** +- Slack: Added default for review\_upload\_limit for Slack [\#2965](https://github.com/pypeclub/OpenPype/pull/2965) +- Settings: Conditional dictionary avoid invalid logs [\#2956](https://github.com/pypeclub/OpenPype/pull/2956) +- LogViewer: Don't refresh on initialization [\#2949](https://github.com/pypeclub/OpenPype/pull/2949) +- nuke: python3 compatibility issue with `iteritems` [\#2948](https://github.com/pypeclub/OpenPype/pull/2948) +- General: anatomy data with correct task short key [\#2947](https://github.com/pypeclub/OpenPype/pull/2947) +- SceneInventory: Fix imports in UI [\#2944](https://github.com/pypeclub/OpenPype/pull/2944) +- Slack: add generic exception [\#2941](https://github.com/pypeclub/OpenPype/pull/2941) +- General: Python specific vendor paths on env injection [\#2939](https://github.com/pypeclub/OpenPype/pull/2939) +- General: More fail safe delete old versions [\#2936](https://github.com/pypeclub/OpenPype/pull/2936) +- Settings UI: Collapsed of collapsible wrapper works as expected [\#2934](https://github.com/pypeclub/OpenPype/pull/2934) +- Maya: Do not pass `set` to maya commands \(fixes support for older maya versions\) [\#2932](https://github.com/pypeclub/OpenPype/pull/2932) +- General: Don't print log record on OSError [\#2926](https://github.com/pypeclub/OpenPype/pull/2926) +- Hiero: Fix import of 'register\_event\_callback' [\#2924](https://github.com/pypeclub/OpenPype/pull/2924) - Ftrack: Missing Ftrack id after editorial publish [\#2905](https://github.com/pypeclub/OpenPype/pull/2905) - AfterEffects: Fix rendering for single frame in DL [\#2875](https://github.com/pypeclub/OpenPype/pull/2875) **🔀 Refactored code** +- General: Move Attribute Definitions from pipeline [\#2931](https://github.com/pypeclub/OpenPype/pull/2931) +- General: Removed silo references and terminal splash [\#2927](https://github.com/pypeclub/OpenPype/pull/2927) +- General: 
Move pipeline constants to OpenPype [\#2918](https://github.com/pypeclub/OpenPype/pull/2918) - General: Move formatting and workfile functions [\#2914](https://github.com/pypeclub/OpenPype/pull/2914) +- General: Move remaining plugins from avalon [\#2912](https://github.com/pypeclub/OpenPype/pull/2912) + +**Merged pull requests:** + +- Maya - added transparency into review creator [\#2952](https://github.com/pypeclub/OpenPype/pull/2952) ## [3.9.1](https://github.com/pypeclub/OpenPype/tree/3.9.1) (2022-03-18) @@ -42,6 +77,7 @@ - General: Remove forgotten use of avalon Creator [\#2885](https://github.com/pypeclub/OpenPype/pull/2885) - General: Avoid circular import [\#2884](https://github.com/pypeclub/OpenPype/pull/2884) - Fixes for attaching loaded containers \(\#2837\) [\#2874](https://github.com/pypeclub/OpenPype/pull/2874) +- Maya: Deformer node ids validation plugin [\#2826](https://github.com/pypeclub/OpenPype/pull/2826) **🔀 Refactored code** @@ -70,10 +106,6 @@ - Maya: add loaded containers to published instance [\#2837](https://github.com/pypeclub/OpenPype/pull/2837) - Ftrack: Can sync fps as string [\#2836](https://github.com/pypeclub/OpenPype/pull/2836) - General: Custom function for find executable [\#2822](https://github.com/pypeclub/OpenPype/pull/2822) -- General: Color dialog UI fixes [\#2817](https://github.com/pypeclub/OpenPype/pull/2817) -- global: letter box calculated on output as last process [\#2812](https://github.com/pypeclub/OpenPype/pull/2812) -- Nuke: adding Reformat to baking mov plugin [\#2811](https://github.com/pypeclub/OpenPype/pull/2811) -- Manager: Update all to latest button [\#2805](https://github.com/pypeclub/OpenPype/pull/2805) **🐛 Bug fixes** @@ -94,14 +126,12 @@ - Maya: Stop creation of reviews for Cryptomattes [\#2832](https://github.com/pypeclub/OpenPype/pull/2832) - Deadline: Remove recreated event [\#2828](https://github.com/pypeclub/OpenPype/pull/2828) - Deadline: Added missing events folder [\#2827](https://github.com/pypeclub/OpenPype/pull/2827) -- Maya: Deformer node ids validation plugin [\#2826](https://github.com/pypeclub/OpenPype/pull/2826) - Settings: Missing document with OP versions may break start of OpenPype [\#2825](https://github.com/pypeclub/OpenPype/pull/2825) - Deadline: more detailed temp file name for environment json [\#2824](https://github.com/pypeclub/OpenPype/pull/2824) - General: Host name was formed from obsolete code [\#2821](https://github.com/pypeclub/OpenPype/pull/2821) - Settings UI: Fix "Apply from" action [\#2820](https://github.com/pypeclub/OpenPype/pull/2820) - Ftrack: Job killer with missing user [\#2819](https://github.com/pypeclub/OpenPype/pull/2819) - Nuke: Use AVALON\_APP to get value for "app" key [\#2818](https://github.com/pypeclub/OpenPype/pull/2818) -- StandalonePublisher: use dynamic groups in subset names [\#2816](https://github.com/pypeclub/OpenPype/pull/2816) **🔀 Refactored code** diff --git a/openpype/__init__.py b/openpype/__init__.py index 99629a4257..8b94b2dc3f 100644 --- a/openpype/__init__.py +++ b/openpype/__init__.py @@ -78,6 +78,7 @@ def install(): from openpype.pipeline import ( LegacyCreator, register_loader_plugin_path, + register_inventory_action, ) from avalon import pipeline @@ -124,7 +125,7 @@ def install(): pyblish.register_plugin_path(path) register_loader_plugin_path(path) avalon.register_plugin_path(LegacyCreator, path) - avalon.register_plugin_path(avalon.InventoryAction, path) + register_inventory_action(path) # apply monkey patched discover to original one log.info("Patching 
discovery") diff --git a/openpype/hooks/pre_python_2_prelaunch.py b/openpype/hooks/pre_python_2_prelaunch.py deleted file mode 100644 index 84272d2e5d..0000000000 --- a/openpype/hooks/pre_python_2_prelaunch.py +++ /dev/null @@ -1,35 +0,0 @@ -import os -from openpype.lib import PreLaunchHook - - -class PrePython2Vendor(PreLaunchHook): - """Prepend python 2 dependencies for py2 hosts.""" - order = 10 - - def execute(self): - if not self.application.use_python_2: - return - - # Prepare vendor dir path - self.log.info("adding global python 2 vendor") - pype_root = os.getenv("OPENPYPE_REPOS_ROOT") - python_2_vendor = os.path.join( - pype_root, - "openpype", - "vendor", - "python", - "python_2" - ) - - # Add Python 2 modules - python_paths = [ - python_2_vendor - ] - - # Load PYTHONPATH from current launch context - python_path = self.launch_context.env.get("PYTHONPATH") - if python_path: - python_paths.append(python_path) - - # Set new PYTHONPATH to launch context environments - self.launch_context.env["PYTHONPATH"] = os.pathsep.join(python_paths) diff --git a/openpype/hosts/aftereffects/api/pipeline.py b/openpype/hosts/aftereffects/api/pipeline.py index 681f1c51a7..bb9affc9b6 100644 --- a/openpype/hosts/aftereffects/api/pipeline.py +++ b/openpype/hosts/aftereffects/api/pipeline.py @@ -2,10 +2,11 @@ import os import sys from Qt import QtWidgets +from bson.objectid import ObjectId import pyblish.api import avalon.api -from avalon import io, pipeline +from avalon import io from openpype import lib from openpype.api import Logger @@ -13,6 +14,7 @@ from openpype.pipeline import ( LegacyCreator, register_loader_plugin_path, deregister_loader_plugin_path, + AVALON_CONTAINER_ID, ) import openpype.hosts.aftereffects from openpype.lib import register_event_callback @@ -29,7 +31,6 @@ PLUGINS_DIR = os.path.join(HOST_DIR, "plugins") PUBLISH_PATH = os.path.join(PLUGINS_DIR, "publish") LOAD_PATH = os.path.join(PLUGINS_DIR, "load") CREATE_PATH = os.path.join(PLUGINS_DIR, "create") -INVENTORY_PATH = os.path.join(PLUGINS_DIR, "inventory") def check_inventory(): @@ -42,7 +43,7 @@ def check_inventory(): representation = container['representation'] representation_doc = io.find_one( { - "_id": io.ObjectId(representation), + "_id": ObjectId(representation), "type": "representation" }, projection={"parent": True} @@ -149,7 +150,7 @@ def containerise(name, """ data = { "schema": "openpype:container-2.0", - "id": pipeline.AVALON_CONTAINER_ID, + "id": AVALON_CONTAINER_ID, "name": name, "namespace": namespace, "loader": str(loader), diff --git a/openpype/hosts/aftereffects/api/workio.py b/openpype/hosts/aftereffects/api/workio.py index 04c7834d8f..5a8f86ead5 100644 --- a/openpype/hosts/aftereffects/api/workio.py +++ b/openpype/hosts/aftereffects/api/workio.py @@ -1,8 +1,8 @@ """Host API required Work Files tool""" import os +from openpype.pipeline import HOST_WORKFILE_EXTENSIONS from .launch_logic import get_stub -from avalon import api def _active_document(): @@ -14,7 +14,7 @@ def _active_document(): def file_extensions(): - return api.HOST_WORKFILE_EXTENSIONS["aftereffects"] + return HOST_WORKFILE_EXTENSIONS["aftereffects"] def has_unsaved_changes(): diff --git a/openpype/hosts/blender/api/ops.py b/openpype/hosts/blender/api/ops.py index 3069c3e1c9..29d6d356c8 100644 --- a/openpype/hosts/blender/api/ops.py +++ b/openpype/hosts/blender/api/ops.py @@ -328,7 +328,6 @@ class LaunchWorkFiles(LaunchQtApp): result = super().execute(context) self._window.set_context({ "asset": avalon.api.Session["AVALON_ASSET"], - "silo": 
avalon.api.Session["AVALON_SILO"], "task": avalon.api.Session["AVALON_TASK"] }) return result diff --git a/openpype/hosts/blender/api/pipeline.py b/openpype/hosts/blender/api/pipeline.py index 07a7509dd7..8c580cf214 100644 --- a/openpype/hosts/blender/api/pipeline.py +++ b/openpype/hosts/blender/api/pipeline.py @@ -12,12 +12,12 @@ from . import ops import pyblish.api import avalon.api from avalon import io, schema -from avalon.pipeline import AVALON_CONTAINER_ID from openpype.pipeline import ( LegacyCreator, register_loader_plugin_path, deregister_loader_plugin_path, + AVALON_CONTAINER_ID, ) from openpype.api import Logger from openpype.lib import ( @@ -31,7 +31,6 @@ PLUGINS_DIR = os.path.join(HOST_DIR, "plugins") PUBLISH_PATH = os.path.join(PLUGINS_DIR, "publish") LOAD_PATH = os.path.join(PLUGINS_DIR, "load") CREATE_PATH = os.path.join(PLUGINS_DIR, "create") -INVENTORY_PATH = os.path.join(PLUGINS_DIR, "inventory") ORIGINAL_EXCEPTHOOK = sys.excepthook diff --git a/openpype/hosts/blender/api/workio.py b/openpype/hosts/blender/api/workio.py index fd68761982..5eb9f82999 100644 --- a/openpype/hosts/blender/api/workio.py +++ b/openpype/hosts/blender/api/workio.py @@ -4,7 +4,8 @@ from pathlib import Path from typing import List, Optional import bpy -from avalon import api + +from openpype.pipeline import HOST_WORKFILE_EXTENSIONS class OpenFileCacher: @@ -77,7 +78,7 @@ def has_unsaved_changes() -> bool: def file_extensions() -> List[str]: """Return the supported file extensions for Blender scene files.""" - return api.HOST_WORKFILE_EXTENSIONS["blender"] + return HOST_WORKFILE_EXTENSIONS["blender"] def work_root(session: dict) -> str: diff --git a/openpype/hosts/blender/plugins/load/load_abc.py b/openpype/hosts/blender/plugins/load/load_abc.py index 3daaeceffe..1b2e800769 100644 --- a/openpype/hosts/blender/plugins/load/load_abc.py +++ b/openpype/hosts/blender/plugins/load/load_abc.py @@ -6,11 +6,14 @@ from typing import Dict, List, Optional import bpy -from openpype.pipeline import get_representation_path +from openpype.pipeline import ( + get_representation_path, + AVALON_CONTAINER_ID, +) + from openpype.hosts.blender.api.pipeline import ( AVALON_CONTAINERS, AVALON_PROPERTY, - AVALON_CONTAINER_ID ) from openpype.hosts.blender.api import plugin, lib diff --git a/openpype/hosts/blender/plugins/load/load_audio.py b/openpype/hosts/blender/plugins/load/load_audio.py index b95c5db270..3f4fcc17de 100644 --- a/openpype/hosts/blender/plugins/load/load_audio.py +++ b/openpype/hosts/blender/plugins/load/load_audio.py @@ -6,12 +6,14 @@ from typing import Dict, List, Optional import bpy -from openpype.pipeline import get_representation_path +from openpype.pipeline import ( + get_representation_path, + AVALON_CONTAINER_ID, +) from openpype.hosts.blender.api import plugin from openpype.hosts.blender.api.pipeline import ( AVALON_CONTAINERS, AVALON_PROPERTY, - AVALON_CONTAINER_ID ) diff --git a/openpype/hosts/blender/plugins/load/load_camera_blend.py b/openpype/hosts/blender/plugins/load/load_camera_blend.py index 6ed2e8a575..f00027f0b4 100644 --- a/openpype/hosts/blender/plugins/load/load_camera_blend.py +++ b/openpype/hosts/blender/plugins/load/load_camera_blend.py @@ -7,12 +7,14 @@ from typing import Dict, List, Optional import bpy -from openpype.pipeline import get_representation_path +from openpype.pipeline import ( + get_representation_path, + AVALON_CONTAINER_ID, +) from openpype.hosts.blender.api import plugin from openpype.hosts.blender.api.pipeline import ( AVALON_CONTAINERS, AVALON_PROPERTY, - 
AVALON_CONTAINER_ID ) logger = logging.getLogger("openpype").getChild( diff --git a/openpype/hosts/blender/plugins/load/load_camera_fbx.py b/openpype/hosts/blender/plugins/load/load_camera_fbx.py index 626ed44f08..97f844e610 100644 --- a/openpype/hosts/blender/plugins/load/load_camera_fbx.py +++ b/openpype/hosts/blender/plugins/load/load_camera_fbx.py @@ -6,12 +6,14 @@ from typing import Dict, List, Optional import bpy -from openpype.pipeline import get_representation_path +from openpype.pipeline import ( + get_representation_path, + AVALON_CONTAINER_ID, +) from openpype.hosts.blender.api import plugin, lib from openpype.hosts.blender.api.pipeline import ( AVALON_CONTAINERS, AVALON_PROPERTY, - AVALON_CONTAINER_ID ) diff --git a/openpype/hosts/blender/plugins/load/load_fbx.py b/openpype/hosts/blender/plugins/load/load_fbx.py index 2d249ef647..ee2e7d175c 100644 --- a/openpype/hosts/blender/plugins/load/load_fbx.py +++ b/openpype/hosts/blender/plugins/load/load_fbx.py @@ -6,12 +6,14 @@ from typing import Dict, List, Optional import bpy -from openpype.pipeline import get_representation_path +from openpype.pipeline import ( + get_representation_path, + AVALON_CONTAINER_ID, +) from openpype.hosts.blender.api import plugin, lib from openpype.hosts.blender.api.pipeline import ( AVALON_CONTAINERS, AVALON_PROPERTY, - AVALON_CONTAINER_ID ) diff --git a/openpype/hosts/blender/plugins/load/load_layout_blend.py b/openpype/hosts/blender/plugins/load/load_layout_blend.py index d87df3c010..cf8e89ed1f 100644 --- a/openpype/hosts/blender/plugins/load/load_layout_blend.py +++ b/openpype/hosts/blender/plugins/load/load_layout_blend.py @@ -10,12 +10,12 @@ from openpype import lib from openpype.pipeline import ( legacy_create, get_representation_path, + AVALON_CONTAINER_ID, ) from openpype.hosts.blender.api import plugin from openpype.hosts.blender.api.pipeline import ( AVALON_CONTAINERS, AVALON_PROPERTY, - AVALON_CONTAINER_ID ) diff --git a/openpype/hosts/blender/plugins/load/load_layout_json.py b/openpype/hosts/blender/plugins/load/load_layout_json.py index 0693937fec..a0580af4a0 100644 --- a/openpype/hosts/blender/plugins/load/load_layout_json.py +++ b/openpype/hosts/blender/plugins/load/load_layout_json.py @@ -13,12 +13,12 @@ from openpype.pipeline import ( load_container, get_representation_path, loaders_from_representation, + AVALON_CONTAINER_ID, ) from openpype.hosts.blender.api.pipeline import ( AVALON_INSTANCES, AVALON_CONTAINERS, AVALON_PROPERTY, - AVALON_CONTAINER_ID ) from openpype.hosts.blender.api import plugin diff --git a/openpype/hosts/blender/plugins/load/load_model.py b/openpype/hosts/blender/plugins/load/load_model.py index 18d01dcb29..0a5d98ffa0 100644 --- a/openpype/hosts/blender/plugins/load/load_model.py +++ b/openpype/hosts/blender/plugins/load/load_model.py @@ -6,12 +6,14 @@ from typing import Dict, List, Optional import bpy -from openpype.pipeline import get_representation_path +from openpype.pipeline import ( + get_representation_path, + AVALON_CONTAINER_ID, +) from openpype.hosts.blender.api import plugin from openpype.hosts.blender.api.pipeline import ( AVALON_CONTAINERS, AVALON_PROPERTY, - AVALON_CONTAINER_ID ) diff --git a/openpype/hosts/blender/plugins/load/load_rig.py b/openpype/hosts/blender/plugins/load/load_rig.py index cec088076c..4dfa96167f 100644 --- a/openpype/hosts/blender/plugins/load/load_rig.py +++ b/openpype/hosts/blender/plugins/load/load_rig.py @@ -10,6 +10,7 @@ from openpype import lib from openpype.pipeline import ( legacy_create, get_representation_path, + 
AVALON_CONTAINER_ID, ) from openpype.hosts.blender.api import ( plugin, @@ -18,7 +19,6 @@ from openpype.hosts.blender.api import ( from openpype.hosts.blender.api.pipeline import ( AVALON_CONTAINERS, AVALON_PROPERTY, - AVALON_CONTAINER_ID ) diff --git a/openpype/hosts/blender/plugins/publish/extract_layout.py b/openpype/hosts/blender/plugins/publish/extract_layout.py index cc7c90f4c8..b78a193d81 100644 --- a/openpype/hosts/blender/plugins/publish/extract_layout.py +++ b/openpype/hosts/blender/plugins/publish/extract_layout.py @@ -1,6 +1,8 @@ import os import json +from bson.objectid import ObjectId + import bpy import bpy_extras import bpy_extras.anim_utils @@ -140,7 +142,7 @@ class ExtractLayout(openpype.api.Extractor): blend = io.find_one( { "type": "representation", - "parent": io.ObjectId(parent), + "parent": ObjectId(parent), "name": "blend" }, projection={"_id": True}) @@ -151,7 +153,7 @@ class ExtractLayout(openpype.api.Extractor): fbx = io.find_one( { "type": "representation", - "parent": io.ObjectId(parent), + "parent": ObjectId(parent), "name": "fbx" }, projection={"_id": True}) @@ -162,7 +164,7 @@ class ExtractLayout(openpype.api.Extractor): abc = io.find_one( { "type": "representation", - "parent": io.ObjectId(parent), + "parent": ObjectId(parent), "name": "abc" }, projection={"_id": True}) diff --git a/openpype/hosts/flame/api/lib.py b/openpype/hosts/flame/api/lib.py index 74d9e7607a..aa2cfcb96d 100644 --- a/openpype/hosts/flame/api/lib.py +++ b/openpype/hosts/flame/api/lib.py @@ -18,6 +18,7 @@ log = Logger.get_logger(__name__) FRAME_PATTERN = re.compile(r"[\._](\d+)[\.]") + class CTX: # singleton used for passing data between api modules app_framework = None @@ -538,9 +539,17 @@ def get_segment_attributes(segment): # head and tail with forward compatibility if segment.head: - clip_data["segment_head"] = int(segment.head) + # `infinite` can be also returned + if isinstance(segment.head, str): + clip_data["segment_head"] = 0 + else: + clip_data["segment_head"] = int(segment.head) if segment.tail: - clip_data["segment_tail"] = int(segment.tail) + # `infinite` can be also returned + if isinstance(segment.tail, str): + clip_data["segment_tail"] = 0 + else: + clip_data["segment_tail"] = int(segment.tail) # add all available shot tokens shot_tokens = _get_shot_tokens_values(segment, [ diff --git a/openpype/hosts/flame/api/pipeline.py b/openpype/hosts/flame/api/pipeline.py index 930c6abe29..ca3f38c1bc 100644 --- a/openpype/hosts/flame/api/pipeline.py +++ b/openpype/hosts/flame/api/pipeline.py @@ -4,13 +4,14 @@ Basic avalon integration import os import contextlib from avalon import api as avalon -from avalon.pipeline import AVALON_CONTAINER_ID from pyblish import api as pyblish + from openpype.api import Logger from openpype.pipeline import ( LegacyCreator, register_loader_plugin_path, deregister_loader_plugin_path, + AVALON_CONTAINER_ID, ) from .lib import ( set_segment_data_marker, @@ -26,7 +27,6 @@ PLUGINS_DIR = os.path.join(HOST_DIR, "plugins") PUBLISH_PATH = os.path.join(PLUGINS_DIR, "publish") LOAD_PATH = os.path.join(PLUGINS_DIR, "load") CREATE_PATH = os.path.join(PLUGINS_DIR, "create") -INVENTORY_PATH = os.path.join(PLUGINS_DIR, "inventory") AVALON_CONTAINERS = "AVALON_CONTAINERS" @@ -34,12 +34,10 @@ log = Logger.get_logger(__name__) def install(): - pyblish.register_host("flame") pyblish.register_plugin_path(PUBLISH_PATH) register_loader_plugin_path(LOAD_PATH) avalon.register_plugin_path(LegacyCreator, CREATE_PATH) - avalon.register_plugin_path(avalon.InventoryAction, 
INVENTORY_PATH) log.info("OpenPype Flame plug-ins registred ...") # register callback for switching publishable @@ -47,6 +45,7 @@ def install(): log.info("OpenPype Flame host installed ...") + def uninstall(): pyblish.deregister_host("flame") @@ -54,7 +53,6 @@ def uninstall(): pyblish.deregister_plugin_path(PUBLISH_PATH) deregister_loader_plugin_path(LOAD_PATH) avalon.deregister_plugin_path(LegacyCreator, CREATE_PATH) - avalon.deregister_plugin_path(avalon.InventoryAction, INVENTORY_PATH) # register callback for switching publishable pyblish.deregister_callback("instanceToggled", on_pyblish_instance_toggled) diff --git a/openpype/hosts/flame/api/scripts/wiretap_com.py b/openpype/hosts/flame/api/scripts/wiretap_com.py index ee906c2608..54993d34eb 100644 --- a/openpype/hosts/flame/api/scripts/wiretap_com.py +++ b/openpype/hosts/flame/api/scripts/wiretap_com.py @@ -422,7 +422,13 @@ class WireTapCom(object): color_policy = color_policy or "Legacy" # check if the colour policy in custom dir - if not os.path.exists(color_policy): + if "/" in color_policy: + # if unlikelly full path was used make it redundant + color_policy = color_policy.replace("/syncolor/policies/", "") + # expecting input is `Shared/NameOfPolicy` + color_policy = "/syncolor/policies/{}".format( + color_policy) + else: color_policy = "/syncolor/policies/Autodesk/{}".format( color_policy) diff --git a/openpype/hosts/flame/plugins/publish/collect_timeline_instances.py b/openpype/hosts/flame/plugins/publish/collect_timeline_instances.py index 70340ad7a2..2482abd9c7 100644 --- a/openpype/hosts/flame/plugins/publish/collect_timeline_instances.py +++ b/openpype/hosts/flame/plugins/publish/collect_timeline_instances.py @@ -34,119 +34,125 @@ class CollectTimelineInstances(pyblish.api.ContextPlugin): def process(self, context): project = context.data["flameProject"] sequence = context.data["flameSequence"] + selected_segments = context.data["flameSelectedSegments"] + self.log.debug("__ selected_segments: {}".format(selected_segments)) + self.otio_timeline = context.data["otioTimeline"] self.clips_in_reels = opfapi.get_clips_in_reels(project) self.fps = context.data["fps"] # process all sellected - with opfapi.maintained_segment_selection(sequence) as segments: - for segment in segments: - comment_attributes = self._get_comment_attributes(segment) - self.log.debug("_ comment_attributes: {}".format( - pformat(comment_attributes))) + for segment in selected_segments: + # get openpype tag data + marker_data = opfapi.get_segment_data_marker(segment) + self.log.debug("__ marker_data: {}".format( + pformat(marker_data))) - clip_data = opfapi.get_segment_attributes(segment) - clip_name = clip_data["segment_name"] - self.log.debug("clip_name: {}".format(clip_name)) + if not marker_data: + continue - # get openpype tag data - marker_data = opfapi.get_segment_data_marker(segment) - self.log.debug("__ marker_data: {}".format( - pformat(marker_data))) + if marker_data.get("id") != "pyblish.avalon.instance": + continue - if not marker_data: - continue + self.log.debug("__ segment.name: {}".format( + segment.name + )) - if marker_data.get("id") != "pyblish.avalon.instance": - continue + comment_attributes = self._get_comment_attributes(segment) - # get file path - file_path = clip_data["fpath"] + self.log.debug("_ comment_attributes: {}".format( + pformat(comment_attributes))) - # get source clip - source_clip = self._get_reel_clip(file_path) + clip_data = opfapi.get_segment_attributes(segment) + clip_name = clip_data["segment_name"] + 
self.log.debug("clip_name: {}".format(clip_name)) - first_frame = opfapi.get_frame_from_filename(file_path) or 0 + # get file path + file_path = clip_data["fpath"] - head, tail = self._get_head_tail(clip_data, first_frame) + # get source clip + source_clip = self._get_reel_clip(file_path) - # solve handles length - marker_data["handleStart"] = min( - marker_data["handleStart"], head) - marker_data["handleEnd"] = min( - marker_data["handleEnd"], tail) + first_frame = opfapi.get_frame_from_filename(file_path) or 0 - with_audio = bool(marker_data.pop("audio")) + head, tail = self._get_head_tail(clip_data, first_frame) - # add marker data to instance data - inst_data = dict(marker_data.items()) + # solve handles length + marker_data["handleStart"] = min( + marker_data["handleStart"], head) + marker_data["handleEnd"] = min( + marker_data["handleEnd"], tail) - asset = marker_data["asset"] - subset = marker_data["subset"] + with_audio = bool(marker_data.pop("audio")) - # insert family into families - family = marker_data["family"] - families = [str(f) for f in marker_data["families"]] - families.insert(0, str(family)) + # add marker data to instance data + inst_data = dict(marker_data.items()) - # form label - label = asset - if asset != clip_name: - label += " ({})".format(clip_name) - label += " {}".format(subset) - label += " {}".format("[" + ", ".join(families) + "]") + asset = marker_data["asset"] + subset = marker_data["subset"] - inst_data.update({ - "name": "{}_{}".format(asset, subset), - "label": label, - "asset": asset, - "item": segment, - "families": families, - "publish": marker_data["publish"], - "fps": self.fps, - "flameSourceClip": source_clip, - "sourceFirstFrame": int(first_frame), - "path": file_path - }) + # insert family into families + family = marker_data["family"] + families = [str(f) for f in marker_data["families"]] + families.insert(0, str(family)) - # get otio clip data - otio_data = self._get_otio_clip_instance_data(clip_data) or {} - self.log.debug("__ otio_data: {}".format(pformat(otio_data))) + # form label + label = asset + if asset != clip_name: + label += " ({})".format(clip_name) + label += " {} [{}]".format(subset, ", ".join(families)) - # add to instance data - inst_data.update(otio_data) - self.log.debug("__ inst_data: {}".format(pformat(inst_data))) + inst_data.update({ + "name": "{}_{}".format(asset, subset), + "label": label, + "asset": asset, + "item": segment, + "families": families, + "publish": marker_data["publish"], + "fps": self.fps, + "flameSourceClip": source_clip, + "sourceFirstFrame": int(first_frame), + "path": file_path + }) - # add resolution - self._get_resolution_to_data(inst_data, context) + # get otio clip data + otio_data = self._get_otio_clip_instance_data(clip_data) or {} + self.log.debug("__ otio_data: {}".format(pformat(otio_data))) - # add comment attributes if any - inst_data.update(comment_attributes) + # add to instance data + inst_data.update(otio_data) + self.log.debug("__ inst_data: {}".format(pformat(inst_data))) - # create instance - instance = context.create_instance(**inst_data) + # add resolution + self._get_resolution_to_data(inst_data, context) - # add colorspace data - instance.data.update({ - "versionData": { - "colorspace": clip_data["colour_space"], - } - }) + # add comment attributes if any + inst_data.update(comment_attributes) - # create shot instance for shot attributes create/update - self._create_shot_instance(context, clip_name, **inst_data) + # create instance + instance = 
context.create_instance(**inst_data) - self.log.info("Creating instance: {}".format(instance)) - self.log.info( - "_ instance.data: {}".format(pformat(instance.data))) + # add colorspace data + instance.data.update({ + "versionData": { + "colorspace": clip_data["colour_space"], + } + }) - if not with_audio: - continue + # create shot instance for shot attributes create/update + self._create_shot_instance(context, clip_name, **inst_data) - # add audioReview attribute to plate instance data - # if reviewTrack is on - if marker_data.get("reviewTrack") is not None: - instance.data["reviewAudio"] = True + self.log.info("Creating instance: {}".format(instance)) + self.log.info( + "_ instance.data: {}".format(pformat(instance.data))) + + if not with_audio: + continue + + # add audioReview attribute to plate instance data + # if reviewTrack is on + if marker_data.get("reviewTrack") is not None: + instance.data["reviewAudio"] = True def _get_comment_attributes(self, segment): comment = segment.comment.get_value() @@ -188,7 +194,7 @@ class CollectTimelineInstances(pyblish.api.ContextPlugin): # get pattern defined by type pattern = TXT_PATERN - if a_type in ("number" , "float"): + if a_type in ("number", "float"): pattern = NUM_PATERN res_goup = pattern.findall(value) diff --git a/openpype/hosts/flame/plugins/publish/collect_timeline_otio.py b/openpype/hosts/flame/plugins/publish/collect_timeline_otio.py index faa5be9d68..c6aeae7730 100644 --- a/openpype/hosts/flame/plugins/publish/collect_timeline_otio.py +++ b/openpype/hosts/flame/plugins/publish/collect_timeline_otio.py @@ -31,27 +31,28 @@ class CollecTimelineOTIO(pyblish.api.ContextPlugin): ) # adding otio timeline to context - with opfapi.maintained_segment_selection(sequence): + with opfapi.maintained_segment_selection(sequence) as selected_seg: otio_timeline = flame_export.create_otio_timeline(sequence) - instance_data = { - "name": subset_name, - "asset": asset_doc["name"], - "subset": subset_name, - "family": "workfile" - } + instance_data = { + "name": subset_name, + "asset": asset_doc["name"], + "subset": subset_name, + "family": "workfile" + } - # create instance with workfile - instance = context.create_instance(**instance_data) - self.log.info("Creating instance: {}".format(instance)) + # create instance with workfile + instance = context.create_instance(**instance_data) + self.log.info("Creating instance: {}".format(instance)) - # update context with main project attributes - context.data.update({ - "flameProject": project, - "flameSequence": sequence, - "otioTimeline": otio_timeline, - "currentFile": "Flame/{}/{}".format( - project.name, sequence.name - ), - "fps": float(str(sequence.frame_rate)[:-4]) - }) + # update context with main project attributes + context.data.update({ + "flameProject": project, + "flameSequence": sequence, + "otioTimeline": otio_timeline, + "currentFile": "Flame/{}/{}".format( + project.name, sequence.name + ), + "flameSelectedSegments": selected_seg, + "fps": float(str(sequence.frame_rate)[:-4]) + }) diff --git a/openpype/hosts/fusion/api/lib.py b/openpype/hosts/fusion/api/lib.py index 2bb5ea8aae..f7a2360bfa 100644 --- a/openpype/hosts/fusion/api/lib.py +++ b/openpype/hosts/fusion/api/lib.py @@ -3,6 +3,7 @@ import sys import re import contextlib +from bson.objectid import ObjectId from Qt import QtGui from avalon import io @@ -92,7 +93,7 @@ def switch_item(container, # Collect any of current asset, subset and representation if not provided # so we can use the original name from those. 
if any(not x for x in [asset_name, subset_name, representation_name]): - _id = io.ObjectId(container["representation"]) + _id = ObjectId(container["representation"]) representation = io.find_one({"type": "representation", "_id": _id}) version, subset, asset, project = io.parenthood(representation) diff --git a/openpype/hosts/fusion/api/pipeline.py b/openpype/hosts/fusion/api/pipeline.py index 92e54ad6f5..c9cd76770a 100644 --- a/openpype/hosts/fusion/api/pipeline.py +++ b/openpype/hosts/fusion/api/pipeline.py @@ -8,13 +8,15 @@ import contextlib import pyblish.api import avalon.api -from avalon.pipeline import AVALON_CONTAINER_ID from openpype.api import Logger from openpype.pipeline import ( LegacyCreator, register_loader_plugin_path, deregister_loader_plugin_path, + register_inventory_action_path, + deregister_inventory_action_path, + AVALON_CONTAINER_ID, ) import openpype.hosts.fusion @@ -69,7 +71,7 @@ def install(): register_loader_plugin_path(LOAD_PATH) avalon.api.register_plugin_path(LegacyCreator, CREATE_PATH) - avalon.api.register_plugin_path(avalon.api.InventoryAction, INVENTORY_PATH) + register_inventory_action_path(INVENTORY_PATH) pyblish.api.register_callback( "instanceToggled", on_pyblish_instance_toggled @@ -93,9 +95,7 @@ def uninstall(): deregister_loader_plugin_path(LOAD_PATH) avalon.api.deregister_plugin_path(LegacyCreator, CREATE_PATH) - avalon.api.deregister_plugin_path( - avalon.api.InventoryAction, INVENTORY_PATH - ) + deregister_inventory_action_path(INVENTORY_PATH) pyblish.api.deregister_callback( "instanceToggled", on_pyblish_instance_toggled diff --git a/openpype/hosts/fusion/api/workio.py b/openpype/hosts/fusion/api/workio.py index ec9ac7481a..a1710c6e3a 100644 --- a/openpype/hosts/fusion/api/workio.py +++ b/openpype/hosts/fusion/api/workio.py @@ -1,12 +1,14 @@ """Host API required Work Files tool""" import sys import os -from avalon import api + +from openpype.pipeline import HOST_WORKFILE_EXTENSIONS + from .pipeline import get_current_comp def file_extensions(): - return api.HOST_WORKFILE_EXTENSIONS["fusion"] + return HOST_WORKFILE_EXTENSIONS["fusion"] def has_unsaved_changes(): diff --git a/openpype/hosts/fusion/plugins/inventory/select_containers.py b/openpype/hosts/fusion/plugins/inventory/select_containers.py index 294c134505..d554b73a5b 100644 --- a/openpype/hosts/fusion/plugins/inventory/select_containers.py +++ b/openpype/hosts/fusion/plugins/inventory/select_containers.py @@ -1,7 +1,7 @@ -from avalon import api +from openpype.pipeline import InventoryAction -class FusionSelectContainers(api.InventoryAction): +class FusionSelectContainers(InventoryAction): label = "Select Containers" icon = "mouse-pointer" diff --git a/openpype/hosts/fusion/plugins/inventory/set_tool_color.py b/openpype/hosts/fusion/plugins/inventory/set_tool_color.py index 2f5ae4d241..c7530ce674 100644 --- a/openpype/hosts/fusion/plugins/inventory/set_tool_color.py +++ b/openpype/hosts/fusion/plugins/inventory/set_tool_color.py @@ -1,6 +1,6 @@ -from avalon import api from Qt import QtGui, QtWidgets +from openpype.pipeline import InventoryAction from openpype import style from openpype.hosts.fusion.api import ( get_current_comp, @@ -8,7 +8,7 @@ from openpype.hosts.fusion.api import ( ) -class FusionSetToolColor(api.InventoryAction): +class FusionSetToolColor(InventoryAction): """Update the color of the selected tools""" label = "Set Tool Color" diff --git a/openpype/hosts/harmony/api/pipeline.py b/openpype/hosts/harmony/api/pipeline.py index f967da15ca..420e9720db 100644 --- 
a/openpype/hosts/harmony/api/pipeline.py +++ b/openpype/hosts/harmony/api/pipeline.py @@ -2,11 +2,11 @@ import os from pathlib import Path import logging +from bson.objectid import ObjectId import pyblish.api from avalon import io import avalon.api -from avalon.pipeline import AVALON_CONTAINER_ID from openpype import lib from openpype.lib import register_event_callback @@ -14,6 +14,7 @@ from openpype.pipeline import ( LegacyCreator, register_loader_plugin_path, deregister_loader_plugin_path, + AVALON_CONTAINER_ID, ) import openpype.hosts.harmony import openpype.hosts.harmony.api as harmony @@ -113,7 +114,7 @@ def check_inventory(): representation = container['representation'] representation_doc = io.find_one( { - "_id": io.ObjectId(representation), + "_id": ObjectId(representation), "type": "representation" }, projection={"parent": True} diff --git a/openpype/hosts/harmony/api/workio.py b/openpype/hosts/harmony/api/workio.py index 38a00ae414..ab1cb9b1a9 100644 --- a/openpype/hosts/harmony/api/workio.py +++ b/openpype/hosts/harmony/api/workio.py @@ -2,20 +2,21 @@ import os import shutil +from openpype.pipeline import HOST_WORKFILE_EXTENSIONS + from .lib import ( ProcessContext, get_local_harmony_path, zip_and_move, launch_zip_file ) -from avalon import api # used to lock saving until previous save is done. save_disabled = False def file_extensions(): - return api.HOST_WORKFILE_EXTENSIONS["harmony"] + return HOST_WORKFILE_EXTENSIONS["harmony"] def has_unsaved_changes(): diff --git a/openpype/hosts/hiero/api/lib.py b/openpype/hosts/hiero/api/lib.py index a9467ae5a4..df3b24ff2c 100644 --- a/openpype/hosts/hiero/api/lib.py +++ b/openpype/hosts/hiero/api/lib.py @@ -8,7 +8,10 @@ import platform import ast import shutil import hiero + from Qt import QtWidgets +from bson.objectid import ObjectId + import avalon.api as avalon import avalon.io from openpype.api import (Logger, Anatomy, get_anatomy_settings) @@ -1006,7 +1009,7 @@ def check_inventory_versions(): # get representation from io representation = io.find_one({ "type": "representation", - "_id": io.ObjectId(container["representation"]) + "_id": ObjectId(container["representation"]) }) # Get start frame from version data diff --git a/openpype/hosts/hiero/api/pipeline.py b/openpype/hosts/hiero/api/pipeline.py index eff126c0b6..0d3c8914ce 100644 --- a/openpype/hosts/hiero/api/pipeline.py +++ b/openpype/hosts/hiero/api/pipeline.py @@ -4,7 +4,7 @@ Basic avalon integration import os import contextlib from collections import OrderedDict -from avalon.pipeline import AVALON_CONTAINER_ID + from avalon import api as avalon from avalon import schema from pyblish import api as pyblish @@ -13,6 +13,7 @@ from openpype.pipeline import ( LegacyCreator, register_loader_plugin_path, deregister_loader_plugin_path, + AVALON_CONTAINER_ID, ) from openpype.tools.utils import host_tools from . 
import lib, menu, events @@ -28,7 +29,6 @@ PLUGINS_DIR = os.path.join(HOST_DIR, "plugins") PUBLISH_PATH = os.path.join(PLUGINS_DIR, "publish").replace("\\", "/") LOAD_PATH = os.path.join(PLUGINS_DIR, "load").replace("\\", "/") CREATE_PATH = os.path.join(PLUGINS_DIR, "create").replace("\\", "/") -INVENTORY_PATH = os.path.join(PLUGINS_DIR, "inventory").replace("\\", "/") AVALON_CONTAINERS = ":AVALON_CONTAINERS" @@ -51,7 +51,6 @@ def install(): pyblish.register_plugin_path(PUBLISH_PATH) register_loader_plugin_path(LOAD_PATH) avalon.register_plugin_path(LegacyCreator, CREATE_PATH) - avalon.register_plugin_path(avalon.InventoryAction, INVENTORY_PATH) # register callback for switching publishable pyblish.register_callback("instanceToggled", on_pyblish_instance_toggled) diff --git a/openpype/hosts/hiero/api/workio.py b/openpype/hosts/hiero/api/workio.py index dacb11624f..394cb5e2ab 100644 --- a/openpype/hosts/hiero/api/workio.py +++ b/openpype/hosts/hiero/api/workio.py @@ -1,14 +1,14 @@ import os import hiero -from avalon import api + from openpype.api import Logger +from openpype.pipeline import HOST_WORKFILE_EXTENSIONS - -log = Logger().get_logger(__name__) +log = Logger.get_logger(__name__) def file_extensions(): - return api.HOST_WORKFILE_EXTENSIONS["hiero"] + return HOST_WORKFILE_EXTENSIONS["hiero"] def has_unsaved_changes(): diff --git a/openpype/hosts/houdini/api/pipeline.py b/openpype/hosts/houdini/api/pipeline.py index 7d4e58efb7..d079c9ea81 100644 --- a/openpype/hosts/houdini/api/pipeline.py +++ b/openpype/hosts/houdini/api/pipeline.py @@ -8,12 +8,12 @@ import hdefereval import pyblish.api import avalon.api -from avalon.pipeline import AVALON_CONTAINER_ID from avalon.lib import find_submodule from openpype.pipeline import ( LegacyCreator, register_loader_plugin_path, + AVALON_CONTAINER_ID, ) import openpype.hosts.houdini from openpype.hosts.houdini.api import lib diff --git a/openpype/hosts/houdini/api/workio.py b/openpype/hosts/houdini/api/workio.py index e7310163ea..e0213023fd 100644 --- a/openpype/hosts/houdini/api/workio.py +++ b/openpype/hosts/houdini/api/workio.py @@ -2,11 +2,11 @@ import os import hou -from avalon import api +from openpype.pipeline import HOST_WORKFILE_EXTENSIONS def file_extensions(): - return api.HOST_WORKFILE_EXTENSIONS["houdini"] + return HOST_WORKFILE_EXTENSIONS["houdini"] def has_unsaved_changes(): diff --git a/openpype/hosts/houdini/plugins/load/load_image.py b/openpype/hosts/houdini/plugins/load/load_image.py index bd9ea3eee3..671f08f18f 100644 --- a/openpype/hosts/houdini/plugins/load/load_image.py +++ b/openpype/hosts/houdini/plugins/load/load_image.py @@ -3,6 +3,7 @@ import os from openpype.pipeline import ( load, get_representation_path, + AVALON_CONTAINER_ID, ) from openpype.hosts.houdini.api import lib, pipeline @@ -73,7 +74,7 @@ class ImageLoader(load.LoaderPlugin): # Imprint it manually data = { "schema": "avalon-core:container-2.0", - "id": pipeline.AVALON_CONTAINER_ID, + "id": AVALON_CONTAINER_ID, "name": node_name, "namespace": namespace, "loader": str(self.__class__.__name__), diff --git a/openpype/hosts/houdini/plugins/load/load_usd_layer.py b/openpype/hosts/houdini/plugins/load/load_usd_layer.py index d803e6abfe..48580fc3aa 100644 --- a/openpype/hosts/houdini/plugins/load/load_usd_layer.py +++ b/openpype/hosts/houdini/plugins/load/load_usd_layer.py @@ -1,8 +1,9 @@ from openpype.pipeline import ( load, get_representation_path, + AVALON_CONTAINER_ID, ) -from openpype.hosts.houdini.api import lib, pipeline +from openpype.hosts.houdini.api 
import lib class USDSublayerLoader(load.LoaderPlugin): @@ -43,7 +44,7 @@ class USDSublayerLoader(load.LoaderPlugin): # Imprint it manually data = { "schema": "avalon-core:container-2.0", - "id": pipeline.AVALON_CONTAINER_ID, + "id": AVALON_CONTAINER_ID, "name": node_name, "namespace": namespace, "loader": str(self.__class__.__name__), diff --git a/openpype/hosts/houdini/plugins/load/load_usd_reference.py b/openpype/hosts/houdini/plugins/load/load_usd_reference.py index fdb443f4cf..6851c77e6d 100644 --- a/openpype/hosts/houdini/plugins/load/load_usd_reference.py +++ b/openpype/hosts/houdini/plugins/load/load_usd_reference.py @@ -1,8 +1,9 @@ from openpype.pipeline import ( load, get_representation_path, + AVALON_CONTAINER_ID, ) -from openpype.hosts.houdini.api import lib, pipeline +from openpype.hosts.houdini.api import lib class USDReferenceLoader(load.LoaderPlugin): @@ -43,7 +44,7 @@ class USDReferenceLoader(load.LoaderPlugin): # Imprint it manually data = { "schema": "avalon-core:container-2.0", - "id": pipeline.AVALON_CONTAINER_ID, + "id": AVALON_CONTAINER_ID, "name": node_name, "namespace": namespace, "loader": str(self.__class__.__name__), diff --git a/openpype/hosts/houdini/vendor/husdoutputprocessors/avalon_uri_processor.py b/openpype/hosts/houdini/vendor/husdoutputprocessors/avalon_uri_processor.py index 4071eb3e0c..499b733570 100644 --- a/openpype/hosts/houdini/vendor/husdoutputprocessors/avalon_uri_processor.py +++ b/openpype/hosts/houdini/vendor/husdoutputprocessors/avalon_uri_processor.py @@ -145,7 +145,6 @@ class AvalonURIOutputProcessor(base.OutputProcessorBase): path = self._template.format(**{ "root": root, "project": PROJECT, - "silo": asset_doc["silo"], "asset": asset_doc["name"], "subset": subset, "representation": ext, @@ -165,4 +164,3 @@ output_processor = AvalonURIOutputProcessor() def usdOutputProcessor(): return output_processor - diff --git a/openpype/hosts/maya/api/lib.py b/openpype/hosts/maya/api/lib.py index 210beb90e3..90688423e0 100644 --- a/openpype/hosts/maya/api/lib.py +++ b/openpype/hosts/maya/api/lib.py @@ -1511,7 +1511,7 @@ def get_container_members(container): members = cmds.sets(container, query=True) or [] members = cmds.ls(members, long=True, objectsOnly=True) or [] - members = set(members) + all_members = set(members) # Include any referenced nodes from any reference in the container # This is required since we've removed adding ALL nodes of a reference @@ -1530,9 +1530,9 @@ def get_container_members(container): reference_members = cmds.ls(reference_members, long=True, objectsOnly=True) - members.update(reference_members) + all_members.update(reference_members) - return members + return list(all_members) # region LOOKDEV diff --git a/openpype/hosts/maya/api/pipeline.py b/openpype/hosts/maya/api/pipeline.py index 5cdc3ff4fd..bb61128178 100644 --- a/openpype/hosts/maya/api/pipeline.py +++ b/openpype/hosts/maya/api/pipeline.py @@ -10,7 +10,6 @@ import pyblish.api import avalon.api from avalon.lib import find_submodule -from avalon.pipeline import AVALON_CONTAINER_ID import openpype.hosts.maya from openpype.tools.utils import host_tools @@ -23,7 +22,10 @@ from openpype.lib.path_tools import HostDirmap from openpype.pipeline import ( LegacyCreator, register_loader_plugin_path, + register_inventory_action_path, deregister_loader_plugin_path, + deregister_inventory_action_path, + AVALON_CONTAINER_ID, ) from openpype.hosts.maya.lib import copy_workspace_mel from . 
import menu, lib @@ -59,7 +61,7 @@ def install(): register_loader_plugin_path(LOAD_PATH) avalon.api.register_plugin_path(LegacyCreator, CREATE_PATH) - avalon.api.register_plugin_path(avalon.api.InventoryAction, INVENTORY_PATH) + register_inventory_action_path(INVENTORY_PATH) log.info(PUBLISH_PATH) log.info("Installing callbacks ... ") @@ -188,9 +190,7 @@ def uninstall(): deregister_loader_plugin_path(LOAD_PATH) avalon.api.deregister_plugin_path(LegacyCreator, CREATE_PATH) - avalon.api.deregister_plugin_path( - avalon.api.InventoryAction, INVENTORY_PATH - ) + deregister_inventory_action_path(INVENTORY_PATH) menu.uninstall() diff --git a/openpype/hosts/maya/api/plugin.py b/openpype/hosts/maya/api/plugin.py index 84379bc145..3721868823 100644 --- a/openpype/hosts/maya/api/plugin.py +++ b/openpype/hosts/maya/api/plugin.py @@ -4,11 +4,11 @@ from maya import cmds import qargparse -from avalon.pipeline import AVALON_CONTAINER_ID from openpype.pipeline import ( LegacyCreator, LoaderPlugin, get_representation_path, + AVALON_CONTAINER_ID, ) from .pipeline import containerise diff --git a/openpype/hosts/maya/api/setdress.py b/openpype/hosts/maya/api/setdress.py index 96a9700b88..0b60564e5e 100644 --- a/openpype/hosts/maya/api/setdress.py +++ b/openpype/hosts/maya/api/setdress.py @@ -6,6 +6,8 @@ import contextlib import copy import six +from bson.objectid import ObjectId + from maya import cmds from avalon import io @@ -282,7 +284,7 @@ def update_package_version(container, version): # Versioning (from `core.maya.pipeline`) current_representation = io.find_one({ - "_id": io.ObjectId(container["representation"]) + "_id": ObjectId(container["representation"]) }) assert current_representation is not None, "This is a bug" @@ -327,7 +329,7 @@ def update_package(set_container, representation): # Load the original package data current_representation = io.find_one({ - "_id": io.ObjectId(set_container['representation']), + "_id": ObjectId(set_container['representation']), "type": "representation" }) @@ -478,10 +480,10 @@ def update_scene(set_container, containers, current_data, new_data, new_file): # They *must* use the same asset, subset and Loader for # `update_container` to make sense. 
old = io.find_one({ - "_id": io.ObjectId(representation_current) + "_id": ObjectId(representation_current) }) new = io.find_one({ - "_id": io.ObjectId(representation_new) + "_id": ObjectId(representation_new) }) is_valid = compare_representations(old=old, new=new) if not is_valid: diff --git a/openpype/hosts/maya/api/workio.py b/openpype/hosts/maya/api/workio.py index 698c48e81e..fd4961c4bf 100644 --- a/openpype/hosts/maya/api/workio.py +++ b/openpype/hosts/maya/api/workio.py @@ -1,11 +1,12 @@ """Host API required Work Files tool""" import os from maya import cmds -from avalon import api + +from openpype.pipeline import HOST_WORKFILE_EXTENSIONS def file_extensions(): - return api.HOST_WORKFILE_EXTENSIONS["maya"] + return HOST_WORKFILE_EXTENSIONS["maya"] def has_unsaved_changes(): diff --git a/openpype/hosts/maya/plugins/create/create_multiverse_usd.py b/openpype/hosts/maya/plugins/create/create_multiverse_usd.py new file mode 100644 index 0000000000..b2266e5a57 --- /dev/null +++ b/openpype/hosts/maya/plugins/create/create_multiverse_usd.py @@ -0,0 +1,51 @@ +from openpype.hosts.maya.api import plugin, lib + + +class CreateMultiverseUsd(plugin.Creator): + """Multiverse USD data""" + + name = "usdMain" + label = "Multiverse USD" + family = "usd" + icon = "cubes" + + def __init__(self, *args, **kwargs): + super(CreateMultiverseUsd, self).__init__(*args, **kwargs) + + # Add animation data first, since it maintains order. + self.data.update(lib.collect_animation_data(True)) + + self.data["stripNamespaces"] = False + self.data["mergeTransformAndShape"] = False + self.data["writeAncestors"] = True + self.data["flattenParentXforms"] = False + self.data["writeSparseOverrides"] = False + self.data["useMetaPrimPath"] = False + self.data["customRootPath"] = '' + self.data["customAttributes"] = '' + self.data["nodeTypesToIgnore"] = '' + self.data["writeMeshes"] = True + self.data["writeCurves"] = True + self.data["writeParticles"] = True + self.data["writeCameras"] = False + self.data["writeLights"] = False + self.data["writeJoints"] = False + self.data["writeCollections"] = False + self.data["writePositions"] = True + self.data["writeNormals"] = True + self.data["writeUVs"] = True + self.data["writeColorSets"] = False + self.data["writeTangents"] = False + self.data["writeRefPositions"] = False + self.data["writeBlendShapes"] = False + self.data["writeDisplayColor"] = False + self.data["writeSkinWeights"] = False + self.data["writeMaterialAssignment"] = False + self.data["writeHardwareShader"] = False + self.data["writeShadingNetworks"] = False + self.data["writeTransformMatrix"] = True + self.data["writeUsdAttributes"] = False + self.data["timeVaryingTopology"] = False + self.data["customMaterialNamespace"] = '' + self.data["numTimeSamples"] = 1 + self.data["timeSamplesSpan"] = 0.0 diff --git a/openpype/hosts/maya/plugins/create/create_multiverse_usd_comp.py b/openpype/hosts/maya/plugins/create/create_multiverse_usd_comp.py new file mode 100644 index 0000000000..77b808c459 --- /dev/null +++ b/openpype/hosts/maya/plugins/create/create_multiverse_usd_comp.py @@ -0,0 +1,23 @@ +from openpype.hosts.maya.api import plugin, lib + + +class CreateMultiverseUsdComp(plugin.Creator): + """Create Multiverse USD Composition""" + + name = "usdCompositionMain" + label = "Multiverse USD Composition" + family = "usdComposition" + icon = "cubes" + + def __init__(self, *args, **kwargs): + super(CreateMultiverseUsdComp, self).__init__(*args, **kwargs) + + # Add animation data first, since it maintains order. 
+ self.data.update(lib.collect_animation_data(True)) + + self.data["stripNamespaces"] = False + self.data["mergeTransformAndShape"] = False + self.data["flattenContent"] = False + self.data["writePendingOverrides"] = False + self.data["numTimeSamples"] = 1 + self.data["timeSamplesSpan"] = 0.0 diff --git a/openpype/hosts/maya/plugins/create/create_multiverse_usd_over.py b/openpype/hosts/maya/plugins/create/create_multiverse_usd_over.py new file mode 100644 index 0000000000..bb82ab2039 --- /dev/null +++ b/openpype/hosts/maya/plugins/create/create_multiverse_usd_over.py @@ -0,0 +1,28 @@ +from openpype.hosts.maya.api import plugin, lib + + +class CreateMultiverseUsdOver(plugin.Creator): + """Multiverse USD data""" + + name = "usdOverrideMain" + label = "Multiverse USD Override" + family = "usdOverride" + icon = "cubes" + + def __init__(self, *args, **kwargs): + super(CreateMultiverseUsdOver, self).__init__(*args, **kwargs) + + # Add animation data first, since it maintains order. + self.data.update(lib.collect_animation_data(True)) + + self.data["writeAll"] = False + self.data["writeTransforms"] = True + self.data["writeVisibility"] = True + self.data["writeAttributes"] = True + self.data["writeMaterials"] = True + self.data["writeVariants"] = True + self.data["writeVariantsDefinition"] = True + self.data["writeActiveState"] = True + self.data["writeNamespaces"] = False + self.data["numTimeSamples"] = 1 + self.data["timeSamplesSpan"] = 0.0 diff --git a/openpype/hosts/maya/plugins/create/create_review.py b/openpype/hosts/maya/plugins/create/create_review.py index 14a21d28ca..fbf3399f61 100644 --- a/openpype/hosts/maya/plugins/create/create_review.py +++ b/openpype/hosts/maya/plugins/create/create_review.py @@ -15,6 +15,14 @@ class CreateReview(plugin.Creator): keepImages = False isolate = False imagePlane = True + transparency = [ + "preset", + "simple", + "object sorting", + "weighted average", + "depth peeling", + "alpha cut" + ] def __init__(self, *args, **kwargs): super(CreateReview, self).__init__(*args, **kwargs) @@ -28,5 +36,6 @@ class CreateReview(plugin.Creator): data["isolate"] = self.isolate data["keepImages"] = self.keepImages data["imagePlane"] = self.imagePlane + data["transparency"] = self.transparency self.data = data diff --git a/openpype/hosts/maya/plugins/inventory/import_modelrender.py b/openpype/hosts/maya/plugins/inventory/import_modelrender.py index c5d3d0c8f4..d9bb256fac 100644 --- a/openpype/hosts/maya/plugins/inventory/import_modelrender.py +++ b/openpype/hosts/maya/plugins/inventory/import_modelrender.py @@ -1,6 +1,8 @@ import json -from avalon import api, io +from avalon import io +from bson.objectid import ObjectId from openpype.pipeline import ( + InventoryAction, get_representation_context, get_representation_path_from_context, ) @@ -10,7 +12,7 @@ from openpype.hosts.maya.api.lib import ( ) -class ImportModelRender(api.InventoryAction): +class ImportModelRender(InventoryAction): label = "Import Model Render Sets" icon = "industry" @@ -39,7 +41,7 @@ class ImportModelRender(api.InventoryAction): nodes.append(n) repr_doc = io.find_one({ - "_id": io.ObjectId(container["representation"]), + "_id": ObjectId(container["representation"]), }) version_id = repr_doc["parent"] diff --git a/openpype/hosts/maya/plugins/inventory/import_reference.py b/openpype/hosts/maya/plugins/inventory/import_reference.py index 2fa132a867..afb1e0e17f 100644 --- a/openpype/hosts/maya/plugins/inventory/import_reference.py +++ b/openpype/hosts/maya/plugins/inventory/import_reference.py @@ -1,11 
+1,10 @@ from maya import cmds -from avalon import api - +from openpype.pipeline import InventoryAction from openpype.hosts.maya.api.plugin import get_reference_node -class ImportReference(api.InventoryAction): +class ImportReference(InventoryAction): """Imports selected reference to inside of the file.""" label = "Import Reference" diff --git a/openpype/hosts/maya/plugins/load/load_multiverse_usd.py b/openpype/hosts/maya/plugins/load/load_multiverse_usd.py new file mode 100644 index 0000000000..c03f2c5d92 --- /dev/null +++ b/openpype/hosts/maya/plugins/load/load_multiverse_usd.py @@ -0,0 +1,102 @@ +# -*- coding: utf-8 -*- +import maya.cmds as cmds + +from openpype.pipeline import ( + load, + get_representation_path +) +from openpype.hosts.maya.api.lib import ( + maintained_selection, + namespaced, + unique_namespace +) +from openpype.hosts.maya.api.pipeline import containerise + + +class MultiverseUsdLoader(load.LoaderPlugin): + """Load the USD by Multiverse""" + + families = ["model", "usd", "usdComposition", "usdOverride", + "pointcache", "animation"] + representations = ["usd", "usda", "usdc", "usdz", "abc"] + + label = "Read USD by Multiverse" + order = -10 + icon = "code-fork" + color = "orange" + + def load(self, context, name=None, namespace=None, options=None): + + asset = context['asset']['name'] + namespace = namespace or unique_namespace( + asset + "_", + prefix="_" if asset[0].isdigit() else "", + suffix="_", + ) + + # Create the shape + cmds.loadPlugin("MultiverseForMaya", quiet=True) + + shape = None + transform = None + with maintained_selection(): + cmds.namespace(addNamespace=namespace) + with namespaced(namespace, new=False): + import multiverse + shape = multiverse.CreateUsdCompound(self.fname) + transform = cmds.listRelatives( + shape, parent=True, fullPath=True)[0] + + # Lock the shape node so the user cannot delete it. 
+ cmds.lockNode(shape, lock=True) + + nodes = [transform, shape] + self[:] = nodes + + return containerise( + name=name, + namespace=namespace, + nodes=nodes, + context=context, + loader=self.__class__.__name__) + + def update(self, container, representation): + # type: (dict, dict) -> None + """Update container with specified representation.""" + node = container['objectName'] + assert cmds.objExists(node), "Missing container" + + members = cmds.sets(node, query=True) or [] + shapes = cmds.ls(members, type="mvUsdCompoundShape") + assert shapes, "Cannot find mvUsdCompoundShape in container" + + path = get_representation_path(representation) + + import multiverse + for shape in shapes: + multiverse.SetUsdCompoundAssetPaths(shape, [path]) + + cmds.setAttr("{}.representation".format(node), + str(representation["_id"]), + type="string") + + def switch(self, container, representation): + self.update(container, representation) + + def remove(self, container): + # type: (dict) -> None + """Remove loaded container.""" + # Delete container and its contents + if cmds.objExists(container['objectName']): + members = cmds.sets(container['objectName'], query=True) or [] + cmds.delete([container['objectName']] + members) + + # Remove the namespace, if empty + namespace = container['namespace'] + if cmds.namespace(exists=namespace): + members = cmds.namespaceInfo(namespace, listNamespace=True) + if not members: + cmds.namespace(removeNamespace=namespace) + else: + self.log.warning("Namespace not deleted because it " + "still has members: %s", namespace) diff --git a/openpype/hosts/maya/plugins/load/load_vrayproxy.py b/openpype/hosts/maya/plugins/load/load_vrayproxy.py index 5b79b1efb3..69d54df62b 100644 --- a/openpype/hosts/maya/plugins/load/load_vrayproxy.py +++ b/openpype/hosts/maya/plugins/load/load_vrayproxy.py @@ -7,6 +7,8 @@ loader will use them instead of native vray vrmesh format. 
""" import os +from bson.objectid import ObjectId + import maya.cmds as cmds from avalon import io @@ -186,7 +188,7 @@ class VRayProxyLoader(load.LoaderPlugin): abc_rep = io.find_one( { "type": "representation", - "parent": io.ObjectId(version_id), + "parent": ObjectId(version_id), "name": "abc" }) diff --git a/openpype/hosts/maya/plugins/publish/extract_maya_scene_raw.py b/openpype/hosts/maya/plugins/publish/extract_maya_scene_raw.py index 389995d30c..3a47cdadb5 100644 --- a/openpype/hosts/maya/plugins/publish/extract_maya_scene_raw.py +++ b/openpype/hosts/maya/plugins/publish/extract_maya_scene_raw.py @@ -6,7 +6,7 @@ from maya import cmds import openpype.api from openpype.hosts.maya.api.lib import maintained_selection -from avalon.pipeline import AVALON_CONTAINER_ID +from openpype.pipeline import AVALON_CONTAINER_ID class ExtractMayaSceneRaw(openpype.api.Extractor): diff --git a/openpype/hosts/maya/plugins/publish/extract_multiverse_usd.py b/openpype/hosts/maya/plugins/publish/extract_multiverse_usd.py new file mode 100644 index 0000000000..4e4efdc32c --- /dev/null +++ b/openpype/hosts/maya/plugins/publish/extract_multiverse_usd.py @@ -0,0 +1,210 @@ +import os +import six + +from maya import cmds + +import openpype.api +from openpype.hosts.maya.api.lib import maintained_selection + + +class ExtractMultiverseUsd(openpype.api.Extractor): + """Extractor for USD by Multiverse.""" + + label = "Extract Multiverse USD" + hosts = ["maya"] + families = ["usd"] + + @property + def options(self): + """Overridable options for Multiverse USD Export + + Given in the following format + - {NAME: EXPECTED TYPE} + + If the overridden option's type does not match, + the option is not included and a warning is logged. + + """ + + return { + "stripNamespaces": bool, + "mergeTransformAndShape": bool, + "writeAncestors": bool, + "flattenParentXforms": bool, + "writeSparseOverrides": bool, + "useMetaPrimPath": bool, + "customRootPath": str, + "customAttributes": str, + "nodeTypesToIgnore": str, + "writeMeshes": bool, + "writeCurves": bool, + "writeParticles": bool, + "writeCameras": bool, + "writeLights": bool, + "writeJoints": bool, + "writeCollections": bool, + "writePositions": bool, + "writeNormals": bool, + "writeUVs": bool, + "writeColorSets": bool, + "writeTangents": bool, + "writeRefPositions": bool, + "writeBlendShapes": bool, + "writeDisplayColor": bool, + "writeSkinWeights": bool, + "writeMaterialAssignment": bool, + "writeHardwareShader": bool, + "writeShadingNetworks": bool, + "writeTransformMatrix": bool, + "writeUsdAttributes": bool, + "timeVaryingTopology": bool, + "customMaterialNamespace": str, + "numTimeSamples": int, + "timeSamplesSpan": float + } + + @property + def default_options(self): + """The default options for Multiverse USD extraction.""" + + return { + "stripNamespaces": False, + "mergeTransformAndShape": False, + "writeAncestors": True, + "flattenParentXforms": False, + "writeSparseOverrides": False, + "useMetaPrimPath": False, + "customRootPath": str(), + "customAttributes": str(), + "nodeTypesToIgnore": str(), + "writeMeshes": True, + "writeCurves": True, + "writeParticles": True, + "writeCameras": False, + "writeLights": False, + "writeJoints": False, + "writeCollections": False, + "writePositions": True, + "writeNormals": True, + "writeUVs": True, + "writeColorSets": False, + "writeTangents": False, + "writeRefPositions": False, + "writeBlendShapes": False, + "writeDisplayColor": False, + "writeSkinWeights": False, + "writeMaterialAssignment": False, + "writeHardwareShader": 
False, + "writeShadingNetworks": False, + "writeTransformMatrix": True, + "writeUsdAttributes": False, + "timeVaryingTopology": False, + "customMaterialNamespace": str(), + "numTimeSamples": 1, + "timeSamplesSpan": 0.0 + } + + def parse_overrides(self, instance, options): + """Inspect data of instance to determine overridden options""" + + for key in instance.data: + if key not in self.options: + continue + + # Ensure the data is of correct type + value = instance.data[key] + if isinstance(value, six.text_type): + value = str(value) + if not isinstance(value, self.options[key]): + self.log.warning( + "Overridden attribute {key} was of " + "the wrong type: {invalid_type} " + "- should have been {valid_type}".format( + key=key, + invalid_type=type(value).__name__, + valid_type=self.options[key].__name__)) + continue + + options[key] = value + + return options + + def process(self, instance): + # Load plugin firstly + cmds.loadPlugin("MultiverseForMaya", quiet=True) + + # Define output file path + staging_dir = self.staging_dir(instance) + file_name = "{}.usd".format(instance.name) + file_path = os.path.join(staging_dir, file_name) + file_path = file_path.replace('\\', '/') + + # Parse export options + options = self.default_options + options = self.parse_overrides(instance, options) + self.log.info("Export options: {0}".format(options)) + + # Perform extraction + self.log.info("Performing extraction ...") + + with maintained_selection(): + members = instance.data("setMembers") + members = cmds.ls(members, + dag=True, + shapes=True, + type=("mesh"), + noIntermediate=True, + long=True) + self.log.info('Collected object {}'.format(members)) + + import multiverse + + time_opts = None + frame_start = instance.data['frameStart'] + frame_end = instance.data['frameEnd'] + handle_start = instance.data['handleStart'] + handle_end = instance.data['handleEnd'] + step = instance.data['step'] + fps = instance.data['fps'] + if frame_end != frame_start: + time_opts = multiverse.TimeOptions() + + time_opts.writeTimeRange = True + time_opts.frameRange = ( + frame_start - handle_start, frame_end + handle_end) + time_opts.frameIncrement = step + time_opts.numTimeSamples = instance.data["numTimeSamples"] + time_opts.timeSamplesSpan = instance.data["timeSamplesSpan"] + time_opts.framePerSecond = fps + + asset_write_opts = multiverse.AssetWriteOptions(time_opts) + options_discard_keys = { + 'numTimeSamples', + 'timeSamplesSpan', + 'frameStart', + 'frameEnd', + 'handleStart', + 'handleEnd', + 'step', + 'fps' + } + for key, value in options.items(): + if key in options_discard_keys: + continue + setattr(asset_write_opts, key, value) + + multiverse.WriteAsset(file_path, members, asset_write_opts) + + if "representations" not in instance.data: + instance.data["representations"] = [] + + representation = { + 'name': 'usd', + 'ext': 'usd', + 'files': file_name, + "stagingDir": staging_dir + } + instance.data["representations"].append(representation) + + self.log.info("Extracted instance {} to {}".format( + instance.name, file_path)) diff --git a/openpype/hosts/maya/plugins/publish/extract_multiverse_usd_comp.py b/openpype/hosts/maya/plugins/publish/extract_multiverse_usd_comp.py new file mode 100644 index 0000000000..8fccc412e6 --- /dev/null +++ b/openpype/hosts/maya/plugins/publish/extract_multiverse_usd_comp.py @@ -0,0 +1,151 @@ +import os + +from maya import cmds + +import openpype.api +from openpype.hosts.maya.api.lib import maintained_selection + + +class ExtractMultiverseUsdComposition(openpype.api.Extractor): + 
"""Extractor of Multiverse USD Composition.""" + + label = "Extract Multiverse USD Composition" + hosts = ["maya"] + families = ["usdComposition"] + + @property + def options(self): + """Overridable options for Multiverse USD Export + + Given in the following format + - {NAME: EXPECTED TYPE} + + If the overridden option's type does not match, + the option is not included and a warning is logged. + + """ + + return { + "stripNamespaces": bool, + "mergeTransformAndShape": bool, + "flattenContent": bool, + "writePendingOverrides": bool, + "numTimeSamples": int, + "timeSamplesSpan": float + } + + @property + def default_options(self): + """The default options for Multiverse USD extraction.""" + + return { + "stripNamespaces": True, + "mergeTransformAndShape": False, + "flattenContent": False, + "writePendingOverrides": False, + "numTimeSamples": 1, + "timeSamplesSpan": 0.0 + } + + def parse_overrides(self, instance, options): + """Inspect data of instance to determine overridden options""" + + for key in instance.data: + if key not in self.options: + continue + + # Ensure the data is of correct type + value = instance.data[key] + if not isinstance(value, self.options[key]): + self.log.warning( + "Overridden attribute {key} was of " + "the wrong type: {invalid_type} " + "- should have been {valid_type}".format( + key=key, + invalid_type=type(value).__name__, + valid_type=self.options[key].__name__)) + continue + + options[key] = value + + return options + + def process(self, instance): + # Load plugin firstly + cmds.loadPlugin("MultiverseForMaya", quiet=True) + + # Define output file path + staging_dir = self.staging_dir(instance) + file_name = "{}.usd".format(instance.name) + file_path = os.path.join(staging_dir, file_name) + file_path = file_path.replace('\\', '/') + + # Parse export options + options = self.default_options + options = self.parse_overrides(instance, options) + self.log.info("Export options: {0}".format(options)) + + # Perform extraction + self.log.info("Performing extraction ...") + + with maintained_selection(): + members = instance.data("setMembers") + members = cmds.ls(members, + dag=True, + shapes=True, + type="mvUsdCompoundShape", + noIntermediate=True, + long=True) + self.log.info('Collected object {}'.format(members)) + + import multiverse + + time_opts = None + frame_start = instance.data['frameStart'] + frame_end = instance.data['frameEnd'] + handle_start = instance.data['handleStart'] + handle_end = instance.data['handleEnd'] + step = instance.data['step'] + fps = instance.data['fps'] + if frame_end != frame_start: + time_opts = multiverse.TimeOptions() + + time_opts.writeTimeRange = True + time_opts.frameRange = ( + frame_start - handle_start, frame_end + handle_end) + time_opts.frameIncrement = step + time_opts.numTimeSamples = instance.data["numTimeSamples"] + time_opts.timeSamplesSpan = instance.data["timeSamplesSpan"] + time_opts.framePerSecond = fps + + comp_write_opts = multiverse.CompositionWriteOptions() + options_discard_keys = { + 'numTimeSamples', + 'timeSamplesSpan', + 'frameStart', + 'frameEnd', + 'handleStart', + 'handleEnd', + 'step', + 'fps' + } + for key, value in options.items(): + if key in options_discard_keys: + continue + setattr(comp_write_opts, key, value) + + multiverse.WriteComposition(file_path, members, comp_write_opts) + + if "representations" not in instance.data: + instance.data["representations"] = [] + + representation = { + 'name': 'usd', + 'ext': 'usd', + 'files': file_name, + "stagingDir": staging_dir + } + 
instance.data["representations"].append(representation) + + self.log.info("Extracted instance {} to {}".format( + instance.name, file_path)) diff --git a/openpype/hosts/maya/plugins/publish/extract_multiverse_usd_over.py b/openpype/hosts/maya/plugins/publish/extract_multiverse_usd_over.py new file mode 100644 index 0000000000..ce0e8a392a --- /dev/null +++ b/openpype/hosts/maya/plugins/publish/extract_multiverse_usd_over.py @@ -0,0 +1,139 @@ +import os + +import openpype.api +from openpype.hosts.maya.api.lib import maintained_selection + +from maya import cmds + + +class ExtractMultiverseUsdOverride(openpype.api.Extractor): + """Extractor for USD Override by Multiverse.""" + + label = "Extract Multiverse USD Override" + hosts = ["maya"] + families = ["usdOverride"] + + @property + def options(self): + """Overridable options for Multiverse USD Export + + Given in the following format + - {NAME: EXPECTED TYPE} + + If the overridden option's type does not match, + the option is not included and a warning is logged. + + """ + + return { + "writeAll": bool, + "writeTransforms": bool, + "writeVisibility": bool, + "writeAttributes": bool, + "writeMaterials": bool, + "writeVariants": bool, + "writeVariantsDefinition": bool, + "writeActiveState": bool, + "writeNamespaces": bool, + "numTimeSamples": int, + "timeSamplesSpan": float + } + + @property + def default_options(self): + """The default options for Multiverse USD extraction.""" + + return { + "writeAll": False, + "writeTransforms": True, + "writeVisibility": True, + "writeAttributes": True, + "writeMaterials": True, + "writeVariants": True, + "writeVariantsDefinition": True, + "writeActiveState": True, + "writeNamespaces": False, + "numTimeSamples": 1, + "timeSamplesSpan": 0.0 + } + + def process(self, instance): + # Load plugin firstly + cmds.loadPlugin("MultiverseForMaya", quiet=True) + + # Define output file path + staging_dir = self.staging_dir(instance) + file_name = "{}.usda".format(instance.name) + file_path = os.path.join(staging_dir, file_name) + file_path = file_path.replace("\\", "/") + + # Parse export options + options = self.default_options + self.log.info("Export options: {0}".format(options)) + + # Perform extraction + self.log.info("Performing extraction ...") + + with maintained_selection(): + members = instance.data("setMembers") + members = cmds.ls(members, + dag=True, + shapes=True, + type="mvUsdCompoundShape", + noIntermediate=True, + long=True) + self.log.info("Collected object {}".format(members)) + + # TODO: Deal with asset, composition, overide with options. 
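+            # NOTE: unlike the asset and composition extractors, instance
+            # data is not parsed for overrides here yet; only the values
+            # from default_options are applied.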
+ import multiverse + + time_opts = None + frame_start = instance.data["frameStart"] + frame_end = instance.data["frameEnd"] + handle_start = instance.data["handleStart"] + handle_end = instance.data["handleEnd"] + step = instance.data["step"] + fps = instance.data["fps"] + if frame_end != frame_start: + time_opts = multiverse.TimeOptions() + + time_opts.writeTimeRange = True + time_opts.frameRange = ( + frame_start - handle_start, frame_end + handle_end) + time_opts.frameIncrement = step + time_opts.numTimeSamples = instance.data["numTimeSamples"] + time_opts.timeSamplesSpan = instance.data["timeSamplesSpan"] + time_opts.framePerSecond = fps + + over_write_opts = multiverse.OverridesWriteOptions(time_opts) + options_discard_keys = { + "numTimeSamples", + "timeSamplesSpan", + "frameStart", + "frameEnd", + "handleStart", + "handleEnd", + "step", + "fps" + } + for key, value in options.items(): + if key in options_discard_keys: + continue + setattr(over_write_opts, key, value) + + for member in members: + multiverse.WriteOverrides(file_path, member, over_write_opts) + + if "representations" not in instance.data: + instance.data["representations"] = [] + + representation = { + "name": "usd", + "ext": "usd", + "files": file_name, + "stagingDir": staging_dir + } + instance.data["representations"].append(representation) + + self.log.info("Extracted instance {} to {}".format( + instance.name, file_path)) diff --git a/openpype/hosts/maya/plugins/publish/extract_playblast.py b/openpype/hosts/maya/plugins/publish/extract_playblast.py index b233a57453..bb1ecf279d 100644 --- a/openpype/hosts/maya/plugins/publish/extract_playblast.py +++ b/openpype/hosts/maya/plugins/publish/extract_playblast.py @@ -73,6 +73,11 @@ class ExtractPlayblast(openpype.api.Extractor): pm.currentTime(refreshFrameInt - 1, edit=True) pm.currentTime(refreshFrameInt, edit=True) + # Override transparency if requested. + transparency = instance.data.get("transparency", 0) + if transparency != 0: + preset["viewport2_options"]["transparencyAlgorithm"] = transparency + # Isolate view is requested by having objects in the set besides a # camera. 
if preset.pop("isolate_view", False) and instance.data.get("isolate"): diff --git a/openpype/hosts/nuke/api/command.py b/openpype/hosts/nuke/api/command.py index 212d4757c6..6f74c08e97 100644 --- a/openpype/hosts/nuke/api/command.py +++ b/openpype/hosts/nuke/api/command.py @@ -1,6 +1,7 @@ import logging import contextlib import nuke +from bson.objectid import ObjectId from avalon import api, io @@ -70,10 +71,10 @@ def get_handles(asset): if "visualParent" in data: vp = data["visualParent"] if vp is not None: - parent_asset = io.find_one({"_id": io.ObjectId(vp)}) + parent_asset = io.find_one({"_id": ObjectId(vp)}) if parent_asset is None: - parent_asset = io.find_one({"_id": io.ObjectId(asset["parent"])}) + parent_asset = io.find_one({"_id": ObjectId(asset["parent"])}) if parent_asset is not None: return get_handles(parent_asset) diff --git a/openpype/hosts/nuke/api/lib.py b/openpype/hosts/nuke/api/lib.py index dba7ec1b85..3c8ba3e77c 100644 --- a/openpype/hosts/nuke/api/lib.py +++ b/openpype/hosts/nuke/api/lib.py @@ -6,10 +6,11 @@ import contextlib from collections import OrderedDict import clique +from bson.objectid import ObjectId import nuke -from avalon import api, io, lib +from avalon import api, io from openpype.api import ( Logger, @@ -20,7 +21,6 @@ from openpype.api import ( get_workdir_data, get_asset, get_current_project_settings, - ApplicationManager ) from openpype.tools.utils import host_tools from openpype.lib.path_tools import HostDirmap @@ -570,7 +570,7 @@ def check_inventory_versions(): # get representation from io representation = io.find_one({ "type": "representation", - "_id": io.ObjectId(avalon_knob_data["representation"]) + "_id": ObjectId(avalon_knob_data["representation"]) }) # Failsafe for not finding the representation. diff --git a/openpype/hosts/nuke/api/pipeline.py b/openpype/hosts/nuke/api/pipeline.py index fd2e16b8d3..1d110cb94a 100644 --- a/openpype/hosts/nuke/api/pipeline.py +++ b/openpype/hosts/nuke/api/pipeline.py @@ -6,7 +6,6 @@ import nuke import pyblish.api import avalon.api -from avalon import pipeline import openpype from openpype.api import ( @@ -18,7 +17,10 @@ from openpype.lib import register_event_callback from openpype.pipeline import ( LegacyCreator, register_loader_plugin_path, + register_inventory_action_path, deregister_loader_plugin_path, + deregister_inventory_action_path, + AVALON_CONTAINER_ID, ) from openpype.tools.utils import host_tools @@ -105,7 +107,7 @@ def install(): pyblish.api.register_plugin_path(PUBLISH_PATH) register_loader_plugin_path(LOAD_PATH) avalon.api.register_plugin_path(LegacyCreator, CREATE_PATH) - avalon.api.register_plugin_path(avalon.api.InventoryAction, INVENTORY_PATH) + register_inventory_action_path(INVENTORY_PATH) # Register Avalon event for workfiles loading. 
register_event_callback("workio.open_file", check_inventory_versions) @@ -131,6 +133,7 @@ def uninstall(): pyblish.api.deregister_plugin_path(PUBLISH_PATH) deregister_loader_plugin_path(LOAD_PATH) avalon.api.deregister_plugin_path(LegacyCreator, CREATE_PATH) + deregister_inventory_action_path(INVENTORY_PATH) pyblish.api.deregister_callback( "instanceToggled", on_pyblish_instance_toggled) @@ -330,7 +333,7 @@ def containerise(node, data = OrderedDict( [ ("schema", "openpype:container-2.0"), - ("id", pipeline.AVALON_CONTAINER_ID), + ("id", AVALON_CONTAINER_ID), ("name", name), ("namespace", namespace), ("loader", str(loader)), diff --git a/openpype/hosts/nuke/api/workio.py b/openpype/hosts/nuke/api/workio.py index dbc24fdc9b..68fcb0927f 100644 --- a/openpype/hosts/nuke/api/workio.py +++ b/openpype/hosts/nuke/api/workio.py @@ -1,11 +1,12 @@ """Host API required Work Files tool""" import os import nuke -import avalon.api + +from openpype.pipeline import HOST_WORKFILE_EXTENSIONS def file_extensions(): - return avalon.api.HOST_WORKFILE_EXTENSIONS["nuke"] + return HOST_WORKFILE_EXTENSIONS["nuke"] def has_unsaved_changes(): diff --git a/openpype/hosts/nuke/plugins/inventory/repair_old_loaders.py b/openpype/hosts/nuke/plugins/inventory/repair_old_loaders.py index 5f834be557..c04c939a8d 100644 --- a/openpype/hosts/nuke/plugins/inventory/repair_old_loaders.py +++ b/openpype/hosts/nuke/plugins/inventory/repair_old_loaders.py @@ -1,9 +1,9 @@ -from avalon import api from openpype.api import Logger +from openpype.pipeline import InventoryAction from openpype.hosts.nuke.api.lib import set_avalon_knob_data -class RepairOldLoaders(api.InventoryAction): +class RepairOldLoaders(InventoryAction): label = "Repair Old Loaders" icon = "gears" diff --git a/openpype/hosts/nuke/plugins/inventory/select_containers.py b/openpype/hosts/nuke/plugins/inventory/select_containers.py index 3f174b3562..d7d5f00b87 100644 --- a/openpype/hosts/nuke/plugins/inventory/select_containers.py +++ b/openpype/hosts/nuke/plugins/inventory/select_containers.py @@ -1,8 +1,8 @@ -from avalon import api +from openpype.pipeline import InventoryAction from openpype.hosts.nuke.api.commands import viewer_update_and_undo_stop -class SelectContainers(api.InventoryAction): +class SelectContainers(InventoryAction): label = "Select Containers" icon = "mouse-pointer" diff --git a/openpype/hosts/nuke/plugins/load/load_effects.py b/openpype/hosts/nuke/plugins/load/load_effects.py index 68c3952942..1ed32996e1 100644 --- a/openpype/hosts/nuke/plugins/load/load_effects.py +++ b/openpype/hosts/nuke/plugins/load/load_effects.py @@ -72,7 +72,7 @@ class LoadEffects(load.LoaderPlugin): # getting data from json file with unicode conversion with open(file, "r") as f: json_f = {self.byteify(key): self.byteify(value) - for key, value in json.load(f).iteritems()} + for key, value in json.load(f).items()} # get correct order of nodes by positions on track and subtrack nodes_order = self.reorder_nodes(json_f) @@ -188,7 +188,7 @@ class LoadEffects(load.LoaderPlugin): # getting data from json file with unicode conversion with open(file, "r") as f: json_f = {self.byteify(key): self.byteify(value) - for key, value in json.load(f).iteritems()} + for key, value in json.load(f).items()} # get correct order of nodes by positions on track and subtrack nodes_order = self.reorder_nodes(json_f) @@ -330,11 +330,11 @@ class LoadEffects(load.LoaderPlugin): if isinstance(input, dict): return {self.byteify(key): self.byteify(value) - for key, value in input.iteritems()} + for key, 
value in input.items()} elif isinstance(input, list): return [self.byteify(element) for element in input] - elif isinstance(input, unicode): - return input.encode('utf-8') + elif isinstance(input, str): + return str(input) else: return input diff --git a/openpype/hosts/nuke/plugins/load/load_effects_ip.py b/openpype/hosts/nuke/plugins/load/load_effects_ip.py index 9c4fd4c2c6..383776111f 100644 --- a/openpype/hosts/nuke/plugins/load/load_effects_ip.py +++ b/openpype/hosts/nuke/plugins/load/load_effects_ip.py @@ -74,7 +74,7 @@ class LoadEffectsInputProcess(load.LoaderPlugin): # getting data from json file with unicode conversion with open(file, "r") as f: json_f = {self.byteify(key): self.byteify(value) - for key, value in json.load(f).iteritems()} + for key, value in json.load(f).items()} # get correct order of nodes by positions on track and subtrack nodes_order = self.reorder_nodes(json_f) @@ -194,7 +194,7 @@ class LoadEffectsInputProcess(load.LoaderPlugin): # getting data from json file with unicode conversion with open(file, "r") as f: json_f = {self.byteify(key): self.byteify(value) - for key, value in json.load(f).iteritems()} + for key, value in json.load(f).items()} # get correct order of nodes by positions on track and subtrack nodes_order = self.reorder_nodes(json_f) @@ -350,11 +350,11 @@ class LoadEffectsInputProcess(load.LoaderPlugin): if isinstance(input, dict): return {self.byteify(key): self.byteify(value) - for key, value in input.iteritems()} + for key, value in input.items()} elif isinstance(input, list): return [self.byteify(element) for element in input] - elif isinstance(input, unicode): - return input.encode('utf-8') + elif isinstance(input, str): + return str(input) else: return input diff --git a/openpype/hosts/nuke/plugins/load/load_gizmo_ip.py b/openpype/hosts/nuke/plugins/load/load_gizmo_ip.py index 87bebce15b..df52a22364 100644 --- a/openpype/hosts/nuke/plugins/load/load_gizmo_ip.py +++ b/openpype/hosts/nuke/plugins/load/load_gizmo_ip.py @@ -240,7 +240,7 @@ class LoadGizmoInputProcess(load.LoaderPlugin): if isinstance(input, dict): return {self.byteify(key): self.byteify(value) - for key, value in input.iteritems()} + for key, value in input.items()} elif isinstance(input, list): return [self.byteify(element) for element in input] elif isinstance(input, unicode): diff --git a/openpype/hosts/nuke/plugins/publish/extract_review_data_mov.py b/openpype/hosts/nuke/plugins/publish/extract_review_data_mov.py index 544b9e04da..31a8ff18ee 100644 --- a/openpype/hosts/nuke/plugins/publish/extract_review_data_mov.py +++ b/openpype/hosts/nuke/plugins/publish/extract_review_data_mov.py @@ -24,7 +24,11 @@ class ExtractReviewDataMov(openpype.api.Extractor): outputs = {} def process(self, instance): - families = instance.data["families"] + families = set(instance.data["families"]) + + # add main family to make sure all families are compared + families.add(instance.data["family"]) + task_type = instance.context.data["taskType"] subset = instance.data["subset"] self.log.info("Creating staging dir...") @@ -50,51 +54,31 @@ class ExtractReviewDataMov(openpype.api.Extractor): f_task_types = o_data["filter"]["task_types"] f_subsets = o_data["filter"]["sebsets"] + self.log.debug( + "f_families `{}` > families: {}".format( + f_families, families)) + + self.log.debug( + "f_task_types `{}` > task_type: {}".format( + f_task_types, task_type)) + + self.log.debug( + "f_subsets `{}` > subset: {}".format( + f_subsets, subset)) + # test if family found in context - test_families = any([ - # first 
if exact family set is matching - # make sure only interesetion of list is correct - bool(set(families).intersection(f_families)), - # and if famiies are set at all - # if not then return True because we want this preset - # to be active if nothig is set - bool(not f_families) - ]) + # using intersection to make sure all defined + # families are present in combination + if f_families and not families.intersection(f_families): + continue # test task types from filter - test_task_types = any([ - # check if actual task type is defined in task types - # set in preset's filter - bool(task_type in f_task_types), - # and if taskTypes are defined in preset filter - # if not then return True, because we want this filter - # to be active if no taskType is set - bool(not f_task_types) - ]) + if f_task_types and task_type not in f_task_types: + continue # test subsets from filter - test_subsets = any([ - # check if any of subset filter inputs - # converted to regex patern is not found in subset - # we keep strict case sensitivity - bool(next(( - s for s in f_subsets - if re.search(re.compile(s), subset) - ), None)), - # but if no subsets were set then make this acuntable too - bool(not f_subsets) - ]) - - # we need all filters to be positive for this - # preset to be activated - test_all = all([ - test_families, - test_task_types, - test_subsets - ]) - - # if it is not positive then skip this preset - if not test_all: + if f_subsets and not any( + re.search(s, subset) for s in f_subsets): continue self.log.info( diff --git a/openpype/hosts/nuke/plugins/publish/validate_write_deadline_tab.py b/openpype/hosts/nuke/plugins/publish/validate_write_deadline_tab.py index 5ee93403d0..907577a97d 100644 --- a/openpype/hosts/nuke/plugins/publish/validate_write_deadline_tab.py +++ b/openpype/hosts/nuke/plugins/publish/validate_write_deadline_tab.py @@ -25,7 +25,7 @@ class RepairNukeWriteDeadlineTab(pyblish.api.Action): # Remove existing knobs. 
knob_names = openpype.hosts.nuke.lib.get_deadline_knob_names() - for name, knob in group_node.knobs().iteritems(): + for name, knob in group_node.knobs().items(): if name in knob_names: group_node.removeKnob(knob) diff --git a/openpype/hosts/photoshop/api/pipeline.py b/openpype/hosts/photoshop/api/pipeline.py index e814e1ca4d..c2ad0ac7b0 100644 --- a/openpype/hosts/photoshop/api/pipeline.py +++ b/openpype/hosts/photoshop/api/pipeline.py @@ -1,9 +1,10 @@ import os from Qt import QtWidgets +from bson.objectid import ObjectId import pyblish.api import avalon.api -from avalon import pipeline, io +from avalon import io from openpype.api import Logger from openpype.lib import register_event_callback @@ -11,6 +12,7 @@ from openpype.pipeline import ( LegacyCreator, register_loader_plugin_path, deregister_loader_plugin_path, + AVALON_CONTAINER_ID, ) import openpype.hosts.photoshop @@ -36,7 +38,7 @@ def check_inventory(): representation = container['representation'] representation_doc = io.find_one( { - "_id": io.ObjectId(representation), + "_id": ObjectId(representation), "type": "representation" }, projection={"parent": True} @@ -221,7 +223,7 @@ def containerise( data = { "schema": "openpype:container-2.0", - "id": pipeline.AVALON_CONTAINER_ID, + "id": AVALON_CONTAINER_ID, "name": name, "namespace": namespace, "loader": str(loader), diff --git a/openpype/hosts/photoshop/api/workio.py b/openpype/hosts/photoshop/api/workio.py index 0bf3ed2bd9..951c5dbfff 100644 --- a/openpype/hosts/photoshop/api/workio.py +++ b/openpype/hosts/photoshop/api/workio.py @@ -1,8 +1,7 @@ """Host API required Work Files tool""" import os -import avalon.api - +from openpype.pipeline import HOST_WORKFILE_EXTENSIONS from . import lib @@ -15,7 +14,7 @@ def _active_document(): def file_extensions(): - return avalon.api.HOST_WORKFILE_EXTENSIONS["photoshop"] + return HOST_WORKFILE_EXTENSIONS["photoshop"] def has_unsaved_changes(): diff --git a/openpype/hosts/resolve/api/pipeline.py b/openpype/hosts/resolve/api/pipeline.py index fa309e3503..e8b017ead5 100644 --- a/openpype/hosts/resolve/api/pipeline.py +++ b/openpype/hosts/resolve/api/pipeline.py @@ -6,13 +6,13 @@ import contextlib from collections import OrderedDict from avalon import api as avalon from avalon import schema -from avalon.pipeline import AVALON_CONTAINER_ID from pyblish import api as pyblish from openpype.api import Logger from openpype.pipeline import ( LegacyCreator, register_loader_plugin_path, deregister_loader_plugin_path, + AVALON_CONTAINER_ID, ) from . import lib from . 
import PLUGINS_DIR @@ -22,7 +22,6 @@ log = Logger().get_logger(__name__) PUBLISH_PATH = os.path.join(PLUGINS_DIR, "publish") LOAD_PATH = os.path.join(PLUGINS_DIR, "load") CREATE_PATH = os.path.join(PLUGINS_DIR, "create") -INVENTORY_PATH = os.path.join(PLUGINS_DIR, "inventory") AVALON_CONTAINERS = ":AVALON_CONTAINERS" @@ -48,7 +47,6 @@ def install(): register_loader_plugin_path(LOAD_PATH) avalon.register_plugin_path(LegacyCreator, CREATE_PATH) - avalon.register_plugin_path(avalon.InventoryAction, INVENTORY_PATH) # register callback for switching publishable pyblish.register_callback("instanceToggled", on_pyblish_instance_toggled) @@ -73,7 +71,6 @@ def uninstall(): deregister_loader_plugin_path(LOAD_PATH) avalon.deregister_plugin_path(LegacyCreator, CREATE_PATH) - avalon.deregister_plugin_path(avalon.InventoryAction, INVENTORY_PATH) # register callback for switching publishable pyblish.deregister_callback("instanceToggled", on_pyblish_instance_toggled) diff --git a/openpype/hosts/testhost/plugins/create/auto_creator.py b/openpype/hosts/testhost/plugins/create/auto_creator.py index 45c573e487..d5935602a0 100644 --- a/openpype/hosts/testhost/plugins/create/auto_creator.py +++ b/openpype/hosts/testhost/plugins/create/auto_creator.py @@ -1,10 +1,10 @@ +from avalon import io +from openpype.lib import NumberDef from openpype.hosts.testhost.api import pipeline from openpype.pipeline import ( AutoCreator, CreatedInstance, - lib ) -from avalon import io class MyAutoCreator(AutoCreator): @@ -13,7 +13,7 @@ class MyAutoCreator(AutoCreator): def get_instance_attr_defs(self): output = [ - lib.NumberDef("number_key", label="Number") + NumberDef("number_key", label="Number") ] return output diff --git a/openpype/hosts/testhost/plugins/create/test_creator_1.py b/openpype/hosts/testhost/plugins/create/test_creator_1.py index 45c30e8a27..7664276fa2 100644 --- a/openpype/hosts/testhost/plugins/create/test_creator_1.py +++ b/openpype/hosts/testhost/plugins/create/test_creator_1.py @@ -1,10 +1,16 @@ import json from openpype import resources from openpype.hosts.testhost.api import pipeline +from openpype.lib import ( + UISeparatorDef, + UILabelDef, + BoolDef, + NumberDef, + FileDef, +) from openpype.pipeline import ( Creator, CreatedInstance, - lib ) @@ -54,17 +60,17 @@ class TestCreatorOne(Creator): def get_instance_attr_defs(self): output = [ - lib.NumberDef("number_key", label="Number"), + NumberDef("number_key", label="Number"), ] return output def get_pre_create_attr_defs(self): output = [ - lib.BoolDef("use_selection", label="Use selection"), - lib.UISeparatorDef(), - lib.UILabelDef("Testing label"), - lib.FileDef("filepath", folders=True, label="Filepath"), - lib.FileDef( + BoolDef("use_selection", label="Use selection"), + UISeparatorDef(), + UILabelDef("Testing label"), + FileDef("filepath", folders=True, label="Filepath"), + FileDef( "filepath_2", multipath=True, folders=True, label="Filepath 2" ) ] diff --git a/openpype/hosts/testhost/plugins/create/test_creator_2.py b/openpype/hosts/testhost/plugins/create/test_creator_2.py index e66304a038..f54adee8a2 100644 --- a/openpype/hosts/testhost/plugins/create/test_creator_2.py +++ b/openpype/hosts/testhost/plugins/create/test_creator_2.py @@ -1,8 +1,8 @@ +from openpype.lib import NumberDef, TextDef from openpype.hosts.testhost.api import pipeline from openpype.pipeline import ( Creator, CreatedInstance, - lib ) @@ -40,8 +40,8 @@ class TestCreatorTwo(Creator): def get_instance_attr_defs(self): output = [ - lib.NumberDef("number_key"), - 
lib.TextDef("text_key") + NumberDef("number_key"), + TextDef("text_key") ] return output diff --git a/openpype/hosts/testhost/plugins/publish/collect_instance_1.py b/openpype/hosts/testhost/plugins/publish/collect_instance_1.py index 3c035eccb6..c7241a15a8 100644 --- a/openpype/hosts/testhost/plugins/publish/collect_instance_1.py +++ b/openpype/hosts/testhost/plugins/publish/collect_instance_1.py @@ -1,10 +1,8 @@ import json import pyblish.api -from openpype.pipeline import ( - OpenPypePyblishPluginMixin, - attribute_definitions -) +from openpype.lib import attribute_definitions +from openpype.pipeline import OpenPypePyblishPluginMixin class CollectInstanceOneTestHost( diff --git a/openpype/hosts/tvpaint/api/pipeline.py b/openpype/hosts/tvpaint/api/pipeline.py index 46c9d3a1dd..ec880a1abc 100644 --- a/openpype/hosts/tvpaint/api/pipeline.py +++ b/openpype/hosts/tvpaint/api/pipeline.py @@ -10,7 +10,6 @@ import pyblish.api import avalon.api from avalon import io -from avalon.pipeline import AVALON_CONTAINER_ID from openpype.hosts import tvpaint from openpype.api import get_current_project_settings @@ -19,6 +18,7 @@ from openpype.pipeline import ( LegacyCreator, register_loader_plugin_path, deregister_loader_plugin_path, + AVALON_CONTAINER_ID, ) from .lib import ( diff --git a/openpype/hosts/tvpaint/api/workio.py b/openpype/hosts/tvpaint/api/workio.py index c513bec6cf..88bdd7117e 100644 --- a/openpype/hosts/tvpaint/api/workio.py +++ b/openpype/hosts/tvpaint/api/workio.py @@ -4,6 +4,7 @@ """ from avalon import api +from openpype.pipeline import HOST_WORKFILE_EXTENSIONS from .lib import ( execute_george, execute_george_through_file @@ -47,7 +48,7 @@ def has_unsaved_changes(): def file_extensions(): """Return the supported file extensions for Blender scene files.""" - return api.HOST_WORKFILE_EXTENSIONS["tvpaint"] + return HOST_WORKFILE_EXTENSIONS["tvpaint"] def work_root(session): diff --git a/openpype/hosts/unreal/api/pipeline.py b/openpype/hosts/unreal/api/pipeline.py index 9ec11b942d..713c588976 100644 --- a/openpype/hosts/unreal/api/pipeline.py +++ b/openpype/hosts/unreal/api/pipeline.py @@ -4,13 +4,13 @@ import logging from typing import List import pyblish.api -from avalon.pipeline import AVALON_CONTAINER_ID from avalon import api from openpype.pipeline import ( LegacyCreator, register_loader_plugin_path, deregister_loader_plugin_path, + AVALON_CONTAINER_ID, ) from openpype.tools.utils import host_tools import openpype.hosts.unreal diff --git a/openpype/hosts/unreal/plugins/load/load_alembic_geometrycache.py b/openpype/hosts/unreal/plugins/load/load_alembic_geometrycache.py index 3508fe5ed7..6ac3531b40 100644 --- a/openpype/hosts/unreal/plugins/load/load_alembic_geometrycache.py +++ b/openpype/hosts/unreal/plugins/load/load_alembic_geometrycache.py @@ -2,8 +2,10 @@ """Loader for published alembics.""" import os -from avalon import pipeline -from openpype.pipeline import get_representation_path +from openpype.pipeline import ( + get_representation_path, + AVALON_CONTAINER_ID +) from openpype.hosts.unreal.api import plugin from openpype.hosts.unreal.api import pipeline as unreal_pipeline @@ -117,7 +119,7 @@ class PointCacheAlembicLoader(plugin.Loader): data = { "schema": "openpype:container-2.0", - "id": pipeline.AVALON_CONTAINER_ID, + "id": AVALON_CONTAINER_ID, "asset": asset, "namespace": asset_dir, "container_name": container_name, diff --git a/openpype/hosts/unreal/plugins/load/load_alembic_skeletalmesh.py b/openpype/hosts/unreal/plugins/load/load_alembic_skeletalmesh.py index 
180942de51..b2c3889f68 100644 --- a/openpype/hosts/unreal/plugins/load/load_alembic_skeletalmesh.py +++ b/openpype/hosts/unreal/plugins/load/load_alembic_skeletalmesh.py @@ -2,8 +2,10 @@ """Load Skeletal Mesh alembics.""" import os -from avalon import pipeline -from openpype.pipeline import get_representation_path +from openpype.pipeline import ( + get_representation_path, + AVALON_CONTAINER_ID +) from openpype.hosts.unreal.api import plugin from openpype.hosts.unreal.api import pipeline as unreal_pipeline import unreal # noqa @@ -81,7 +83,7 @@ class SkeletalMeshAlembicLoader(plugin.Loader): data = { "schema": "openpype:container-2.0", - "id": pipeline.AVALON_CONTAINER_ID, + "id": AVALON_CONTAINER_ID, "asset": asset, "namespace": asset_dir, "container_name": container_name, diff --git a/openpype/hosts/unreal/plugins/load/load_alembic_staticmesh.py b/openpype/hosts/unreal/plugins/load/load_alembic_staticmesh.py index 4e00af1d97..5a73c72c64 100644 --- a/openpype/hosts/unreal/plugins/load/load_alembic_staticmesh.py +++ b/openpype/hosts/unreal/plugins/load/load_alembic_staticmesh.py @@ -2,8 +2,10 @@ """Loader for Static Mesh alembics.""" import os -from avalon import pipeline -from openpype.pipeline import get_representation_path +from openpype.pipeline import ( + get_representation_path, + AVALON_CONTAINER_ID +) from openpype.hosts.unreal.api import plugin from openpype.hosts.unreal.api import pipeline as unreal_pipeline import unreal # noqa @@ -100,7 +102,7 @@ class StaticMeshAlembicLoader(plugin.Loader): data = { "schema": "openpype:container-2.0", - "id": pipeline.AVALON_CONTAINER_ID, + "id": AVALON_CONTAINER_ID, "asset": asset, "namespace": asset_dir, "container_name": container_name, diff --git a/openpype/hosts/unreal/plugins/load/load_animation.py b/openpype/hosts/unreal/plugins/load/load_animation.py index 8ef81f7851..c9a1633031 100644 --- a/openpype/hosts/unreal/plugins/load/load_animation.py +++ b/openpype/hosts/unreal/plugins/load/load_animation.py @@ -3,8 +3,10 @@ import os import json -from avalon import pipeline -from openpype.pipeline import get_representation_path +from openpype.pipeline import ( + get_representation_path, + AVALON_CONTAINER_ID +) from openpype.hosts.unreal.api import plugin from openpype.hosts.unreal.api import pipeline as unreal_pipeline import unreal # noqa @@ -135,7 +137,7 @@ class AnimationFBXLoader(plugin.Loader): data = { "schema": "openpype:container-2.0", - "id": pipeline.AVALON_CONTAINER_ID, + "id": AVALON_CONTAINER_ID, "asset": asset, "namespace": asset_dir, "container_name": container_name, diff --git a/openpype/hosts/unreal/plugins/load/load_camera.py b/openpype/hosts/unreal/plugins/load/load_camera.py index 0de9470ef9..40bca0b0c7 100644 --- a/openpype/hosts/unreal/plugins/load/load_camera.py +++ b/openpype/hosts/unreal/plugins/load/load_camera.py @@ -2,7 +2,8 @@ """Load camera from FBX.""" import os -from avalon import io, pipeline +from avalon import io +from openpype.pipeline import AVALON_CONTAINER_ID from openpype.hosts.unreal.api import plugin from openpype.hosts.unreal.api import pipeline as unreal_pipeline import unreal # noqa @@ -116,7 +117,7 @@ class CameraLoader(plugin.Loader): data = { "schema": "openpype:container-2.0", - "id": pipeline.AVALON_CONTAINER_ID, + "id": AVALON_CONTAINER_ID, "asset": asset, "namespace": asset_dir, "container_name": container_name, diff --git a/openpype/hosts/unreal/plugins/load/load_layout.py b/openpype/hosts/unreal/plugins/load/load_layout.py index 19ee179d20..7f6ce7d822 100644 --- 
a/openpype/hosts/unreal/plugins/load/load_layout.py +++ b/openpype/hosts/unreal/plugins/load/load_layout.py @@ -11,12 +11,12 @@ from unreal import AssetToolsHelpers from unreal import FBXImportType from unreal import MathLibrary as umath -from avalon.pipeline import AVALON_CONTAINER_ID from openpype.pipeline import ( discover_loader_plugins, loaders_from_representation, load_container, get_representation_path, + AVALON_CONTAINER_ID, ) from openpype.hosts.unreal.api import plugin from openpype.hosts.unreal.api import pipeline as unreal_pipeline diff --git a/openpype/hosts/unreal/plugins/load/load_rig.py b/openpype/hosts/unreal/plugins/load/load_rig.py index 3d5616364c..ff844a5e94 100644 --- a/openpype/hosts/unreal/plugins/load/load_rig.py +++ b/openpype/hosts/unreal/plugins/load/load_rig.py @@ -2,8 +2,10 @@ """Load Skeletal Meshes form FBX.""" import os -from avalon import pipeline -from openpype.pipeline import get_representation_path +from openpype.pipeline import ( + get_representation_path, + AVALON_CONTAINER_ID +) from openpype.hosts.unreal.api import plugin from openpype.hosts.unreal.api import pipeline as unreal_pipeline import unreal # noqa @@ -101,7 +103,7 @@ class SkeletalMeshFBXLoader(plugin.Loader): data = { "schema": "openpype:container-2.0", - "id": pipeline.AVALON_CONTAINER_ID, + "id": AVALON_CONTAINER_ID, "asset": asset, "namespace": asset_dir, "container_name": container_name, diff --git a/openpype/hosts/unreal/plugins/load/load_staticmeshfbx.py b/openpype/hosts/unreal/plugins/load/load_staticmeshfbx.py index 587fc83a77..282d249947 100644 --- a/openpype/hosts/unreal/plugins/load/load_staticmeshfbx.py +++ b/openpype/hosts/unreal/plugins/load/load_staticmeshfbx.py @@ -2,8 +2,10 @@ """Load Static meshes form FBX.""" import os -from avalon import pipeline -from openpype.pipeline import get_representation_path +from openpype.pipeline import ( + get_representation_path, + AVALON_CONTAINER_ID +) from openpype.hosts.unreal.api import plugin from openpype.hosts.unreal.api import pipeline as unreal_pipeline import unreal # noqa @@ -95,7 +97,7 @@ class StaticMeshFBXLoader(plugin.Loader): data = { "schema": "openpype:container-2.0", - "id": pipeline.AVALON_CONTAINER_ID, + "id": AVALON_CONTAINER_ID, "asset": asset, "namespace": asset_dir, "container_name": container_name, diff --git a/openpype/hosts/unreal/plugins/publish/extract_layout.py b/openpype/hosts/unreal/plugins/publish/extract_layout.py index 2d09b0e7bd..f34a47b89f 100644 --- a/openpype/hosts/unreal/plugins/publish/extract_layout.py +++ b/openpype/hosts/unreal/plugins/publish/extract_layout.py @@ -3,6 +3,8 @@ import os import json import math +from bson.objectid import ObjectId + import unreal from unreal import EditorLevelLibrary as ell from unreal import EditorAssetLibrary as eal @@ -62,7 +64,7 @@ class ExtractLayout(openpype.api.Extractor): blend = io.find_one( { "type": "representation", - "parent": io.ObjectId(parent), + "parent": ObjectId(parent), "name": "blend" }, projection={"_id": True}) diff --git a/openpype/lib/__init__.py b/openpype/lib/__init__.py index 1ebafbb2d2..e8b6d18f4e 100644 --- a/openpype/lib/__init__.py +++ b/openpype/lib/__init__.py @@ -29,6 +29,21 @@ from .vendor_bin_utils import ( is_oiio_supported ) +from .attribute_definitions import ( + AbtractAttrDef, + + UIDef, + UISeparatorDef, + UILabelDef, + + UnknownDef, + NumberDef, + TextDef, + EnumDef, + BoolDef, + FileDef, +) + from .env_tools import ( env_value_to_bool, get_paths_from_environ, @@ -233,6 +248,19 @@ __all__ = [ "get_ffmpeg_tool_path", 
"is_oiio_supported", + "AbtractAttrDef", + + "UIDef", + "UISeparatorDef", + "UILabelDef", + + "UnknownDef", + "NumberDef", + "TextDef", + "EnumDef", + "BoolDef", + "FileDef", + "import_filepath", "modules_from_path", "recursive_bases_from_class", diff --git a/openpype/lib/applications.py b/openpype/lib/applications.py index 557c016d74..ad59ae0dbc 100644 --- a/openpype/lib/applications.py +++ b/openpype/lib/applications.py @@ -1319,6 +1319,41 @@ def _merge_env(env, current_env): return result +def _add_python_version_paths(app, env, logger): + """Add vendor packages specific for a Python version.""" + + # Skip adding if host name is not set + if not app.host_name: + return + + # Add Python 2/3 modules + openpype_root = os.getenv("OPENPYPE_REPOS_ROOT") + python_vendor_dir = os.path.join( + openpype_root, + "openpype", + "vendor", + "python" + ) + if app.use_python_2: + pythonpath = os.path.join(python_vendor_dir, "python_2") + else: + pythonpath = os.path.join(python_vendor_dir, "python_3") + + if not os.path.exists(pythonpath): + return + + logger.debug("Adding Python version specific paths to PYTHONPATH") + python_paths = [pythonpath] + + # Load PYTHONPATH from current launch context + python_path = env.get("PYTHONPATH") + if python_path: + python_paths.append(python_path) + + # Set new PYTHONPATH to launch context environments + env["PYTHONPATH"] = os.pathsep.join(python_paths) + + def prepare_app_environments(data, env_group=None, implementation_envs=True): """Modify launch environments based on launched app and context. @@ -1331,6 +1366,8 @@ def prepare_app_environments(data, env_group=None, implementation_envs=True): app = data["app"] log = data["log"] + _add_python_version_paths(app, data["env"], log) + # `added_env_keys` has debug purpose added_env_keys = {app.group.name, app.name} # Environments for application @@ -1545,6 +1582,7 @@ def _prepare_last_workfile(data, workdir): workdir (str): Path to folder where workfiles should be stored. 
""" import avalon.api + from openpype.pipeline import HOST_WORKFILE_EXTENSIONS log = data["log"] @@ -1593,7 +1631,7 @@ def _prepare_last_workfile(data, workdir): # Last workfile path last_workfile_path = data.get("last_workfile_path") or "" if not last_workfile_path: - extensions = avalon.api.HOST_WORKFILE_EXTENSIONS.get(app.host_name) + extensions = HOST_WORKFILE_EXTENSIONS.get(app.host_name) if extensions: anatomy = data["anatomy"] project_settings = data["project_settings"] diff --git a/openpype/pipeline/lib/attribute_definitions.py b/openpype/lib/attribute_definitions.py similarity index 100% rename from openpype/pipeline/lib/attribute_definitions.py rename to openpype/lib/attribute_definitions.py diff --git a/openpype/lib/avalon_context.py b/openpype/lib/avalon_context.py index 8e9fff5f67..b4e6abb72d 100644 --- a/openpype/lib/avalon_context.py +++ b/openpype/lib/avalon_context.py @@ -9,6 +9,8 @@ import collections import functools import getpass +from bson.objectid import ObjectId + from openpype.settings import ( get_project_settings, get_system_settings @@ -169,7 +171,7 @@ def any_outdated(): representation_doc = avalon.io.find_one( { - "_id": avalon.io.ObjectId(representation), + "_id": ObjectId(representation), "type": "representation" }, projection={"parent": True} @@ -1703,7 +1705,7 @@ def _get_task_context_data_for_anatomy( "task": { "name": task_name, "type": task_type, - "short_name": project_task_type_data["short_name"] + "short": project_task_type_data["short_name"] } } diff --git a/openpype/lib/log.py b/openpype/lib/log.py index 98a3bae8e6..f33385e0ba 100644 --- a/openpype/lib/log.py +++ b/openpype/lib/log.py @@ -98,6 +98,10 @@ class PypeStreamHandler(logging.StreamHandler): self.flush() except (KeyboardInterrupt, SystemExit): raise + + except OSError: + self.handleError(record) + except Exception: print(repr(record)) self.handleError(record) diff --git a/openpype/lib/splash.txt b/openpype/lib/splash.txt deleted file mode 100644 index 833bcd4b9c..0000000000 --- a/openpype/lib/splash.txt +++ /dev/null @@ -1,413 +0,0 @@ - - - - * - - - - - - - .* - - - - - - * - .* - * - - - - . - * - .* - * - . - - . - * - .* - .* - .* - * - . - . - * - .* - .* - .* - * - . - _. - /** - \ * - \* - * - * - . - __. - ---* - \ \* - \ * - \* - * - . - \___. - /* * - \ \ * - \ \* - \ * - \* - . - |____. - /* * - \|\ * - \ \ * - \ \ * - \ \* - \/. - _/_____. - /* * - / \ * - \ \ * - \ \ * - \ \__* - \/__. - __________. - --*-- ___* - \ \ \/_* - \ \ __* - \ \ \_* - \ \____\* - \/____/. - \____________ . - /* ___ \* - \ \ \/_\ * - \ \ _____* - \ \ \___/* - \ \____\ * - \/____/ . - |___________ . - /* ___ \ * - \|\ \/_\ \ * - \ \ _____/ * - \ \ \___/ * - \ \____\ / * - \/____/ \. - _/__________ . - /* ___ \ * - / \ \/_\ \ * - \ \ _____/ * - \ \ \___/ ---* - \ \____\ / \__* - \/____/ \/__. - ____________ . - --*-- ___ \ * - \ \ \/_\ \ * - \ \ _____/ * - \ \ \___/ ---- * - \ \____\ / \____\* - \/____/ \/____/. - ____________ - /\ ___ \ . - \ \ \/_\ \ * - \ \ _____/ * - \ \ \___/ ---- * - \ \____\ / \____\ . - \/____/ \/____/ - ____________ - /\ ___ \ - \ \ \/_\ \ . - \ \ _____/ * - \ \ \___/ ---- * - \ \____\ / \____\ . - \/____/ \/____/ - ____________ - /\ ___ \ - \ \ \/_\ \ - \ \ _____/ . - \ \ \___/ ---- * - \ \____\ / \____\ . - \/____/ \/____/ - ____________ - /\ ___ \ - \ \ \/_\ \ - \ \ _____/ - \ \ \___/ ---- * - \ \____\ / \____\ - \/____/ \/____/ - ____________ - /\ ___ \ - \ \ \/_\ \ - \ \ _____/ - \ \ \___/ ---- . 
- \ \____\ / \____\ - \/____/ \/____/ - ____________ - /\ ___ \ - \ \ \/_\ \ - \ \ _____/ _ - \ \ \___/ ---- - \ \____\ / \____\ - \/____/ \/____/ - ____________ - /\ ___ \ - \ \ \/_\ \ - \ \ _____/ ___ - \ \ \___/ ---- - \ \____\ / \____\ - \/____/ \/____/ - ____________ - /\ ___ \ - \ \ \/_\ \ - \ \ _____/ ___ - \ \ \___/ ---- \ - \ \____\ / \____\ - \/____/ \/____/ - ____________ - /\ ___ \ - \ \ \/_\ \ - \ \ _____/ ___ - \ \ \___/ ---- \ - \ \____\ / \____\ \ - \/____/ \/____/ - ____________ - /\ ___ \ - \ \ \/_\ \ - \ \ _____/ ___ - \ \ \___/ ---- \ - \ \____\ / \____\ __\ - \/____/ \/____/ - ____________ - /\ ___ \ - \ \ \/_\ \ - \ \ _____/ ___ - \ \ \___/ ---- \ - \ \____\ / \____\ \__\ - \/____/ \/____/ - ____________ - /\ ___ \ - \ \ \/_\ \ - \ \ _____/ ___ - \ \ \___/ ---- \ \ - \ \____\ / \____\ \__\ - \/____/ \/____/ - ____________ - /\ ___ \ - \ \ \/_\ \ - \ \ _____/ ___ - \ \ \___/ ---- \ \ - \ \____\ / \____\ \__\ - \/____/ \/____/ - ____________ - /\ ___ \ - \ \ \/_\ \ - \ \ _____/ ___. - \ \ \___/ ---- \ \\ - \ \____\ / \____\ \__\, - \/____/ \/____/ - ____________ - /\ ___ \ - \ \ \/_\ \ - \ \ _____/ ___ . - \ \ \___/ ---- \ \\ - \ \____\ / \____\ \__\\, - \/____/ \/____/ - ____________ - /\ ___ \ - \ \ \/_\ \ - \ \ _____/ ___ _. - \ \ \___/ ---- \ \\\ - \ \____\ / \____\ \__\\\ - \/____/ \/____/ - ____________ - /\ ___ \ - \ \ \/_\ \ - \ \ _____/ ___ __. - \ \ \___/ ---- \ \\ \ - \ \____\ / \____\ \__\\_/. - \/____/ \/____/ - ____________ - /\ ___ \ - \ \ \/_\ \ - \ \ _____/ ___ ___. - \ \ \___/ ---- \ \\ \\ - \ \____\ / \____\ \__\\__\. - \/____/ \/____/ - ____________ - /\ ___ \ - \ \ \/_\ \ - \ \ _____/ ___ ___ . - \ \ \___/ ---- \ \\ \\ - \ \____\ / \____\ \__\\__\\. - \/____/ \/____/ - ____________ - /\ ___ \ - \ \ \/_\ \ - \ \ _____/ ___ ___ _. - \ \ \___/ ---- \ \\ \\\ - \ \____\ / \____\ \__\\__\\. - \/____/ \/____/ - ____________ - /\ ___ \ - \ \ \/_\ \ - \ \ _____/ ___ ___ __. - \ \ \___/ ---- \ \\ \\ \ - \ \____\ / \____\ \__\\__\\_. - \/____/ \/____/ - ____________ - /\ ___ \ - \ \ \/_\ \ - \ \ _____/ ___ ___ __. - \ \ \___/ ---- \ \\ \\ \ - \ \____\ / \____\ \__\\__\\__. - \/____/ \/____/ - ____________ - /\ ___ \ - \ \ \/_\ \ - \ \ _____/ ___ ___ ___ - \ \ \___/ ---- \ \\ \\ \ - \ \____\ / \____\ \__\\__\\__\ - \/____/ \/____/ . - ____________ - /\ ___ \ - \ \ \/_\ \ - \ \ _____/ ___ ___ ___ - \ \ \___/ ---- \ \\ \\ \ - \ \____\ / \____\ \__\\__\\__\ - \/____/ \/____/ * - ____________ - /\ ___ \ - \ \ \/_\ \ - \ \ _____/ ___ ___ ___ - \ \ \___/ ---- \ \\ \\ \ - \ \____\ / \____\ \__\\__\\__\ - \/____/ \/____/ O* - ____________ - /\ ___ \ - \ \ \/_\ \ - \ \ _____/ ___ ___ ___ - \ \ \___/ ---- \ \\ \\ \ - \ \____\ / \____\ \__\\__\\__\ - \/____/ \/____/ oO* - ____________ - /\ ___ \ - \ \ \/_\ \ - \ \ _____/ ___ ___ ___ - \ \ \___/ ---- \ \\ \\ \ - \ \____\ / \____\ \__\\__\\__\ - \/____/ \/____/ .oO* - ____________ - /\ ___ \ - \ \ \/_\ \ - \ \ _____/ ___ ___ ___ - \ \ \___/ ---- \ \\ \\ \ - \ \____\ / \____\ \__\\__\\__\ - \/____/ \/____/ ..oO* - ____________ - /\ ___ \ - \ \ \/_\ \ - \ \ _____/ ___ ___ ___ - \ \ \___/ ---- \ \\ \\ \ - \ \____\ / \____\ \__\\__\\__\ - \/____/ \/____/ . .oO* - ____________ - /\ ___ \ - \ \ \/_\ \ - \ \ _____/ ___ ___ ___ - \ \ \___/ ---- \ \\ \\ \ - \ \____\ / \____\ \__\\__\\__\ - \/____/ \/____/ . p.oO* - ____________ - /\ ___ \ - \ \ \/_\ \ - \ \ _____/ ___ ___ ___ - \ \ \___/ ---- \ \\ \\ \ - \ \____\ / \____\ \__\\__\\__\ - \/____/ \/____/ . 
Py.oO* - ____________ - /\ ___ \ - \ \ \/_\ \ - \ \ _____/ ___ ___ ___ - \ \ \___/ ---- \ \\ \\ \ - \ \____\ / \____\ \__\\__\\__\ - \/____/ \/____/ . PYp.oO* - ____________ - /\ ___ \ - \ \ \/_\ \ - \ \ _____/ ___ ___ ___ - \ \ \___/ ---- \ \\ \\ \ - \ \____\ / \____\ \__\\__\\__\ - \/____/ \/____/ . PYPe.oO* - ____________ - /\ ___ \ - \ \ \/_\ \ - \ \ _____/ ___ ___ ___ - \ \ \___/ ---- \ \\ \\ \ - \ \____\ / \____\ \__\\__\\__\ - \/____/ \/____/ . PYPE .oO* - ____________ - /\ ___ \ - \ \ \/_\ \ - \ \ _____/ ___ ___ ___ - \ \ \___/ ---- \ \\ \\ \ - \ \____\ / \____\ \__\\__\\__\ - \/____/ \/____/ . PYPE c.oO* - ____________ - /\ ___ \ - \ \ \/_\ \ - \ \ _____/ ___ ___ ___ - \ \ \___/ ---- \ \\ \\ \ - \ \____\ / \____\ \__\\__\\__\ - \/____/ \/____/ . PYPE C1.oO* - ____________ - /\ ___ \ - \ \ \/_\ \ - \ \ _____/ ___ ___ ___ - \ \ \___/ ---- \ \\ \\ \ - \ \____\ / \____\ \__\\__\\__\ - \/____/ \/____/ . PYPE ClU.oO* - ____________ - /\ ___ \ - \ \ \/_\ \ - \ \ _____/ ___ ___ ___ - \ \ \___/ ---- \ \\ \\ \ - \ \____\ / \____\ \__\\__\\__\ - \/____/ \/____/ . PYPE CluB.oO* - ____________ - /\ ___ \ - \ \ \/_\ \ - \ \ _____/ ___ ___ ___ - \ \ \___/ ---- \ \\ \\ \ - \ \____\ / \____\ \__\\__\\__\ - \/____/ \/____/ . PYPE Club .oO* - ____________ - /\ ___ \ - \ \ \/_\ \ - \ \ _____/ ___ ___ ___ - \ \ \___/ ---- \ \\ \\ \ - \ \____\ / \____\ \__\\__\\__\ - \/____/ \/____/ . PYPE Club . .. - ____________ - /\ ___ \ - \ \ \/_\ \ - \ \ _____/ ___ ___ ___ - \ \ \___/ ---- \ \\ \\ \ - \ \____\ / \____\ \__\\__\\__\ - \/____/ \/____/ . PYPE Club . .. - ____________ - /\ ___ \ - \ \ \/_\ \ - \ \ _____/ ___ ___ ___ - \ \ \___/ ---- \ \\ \\ \ - \ \____\ / \____\ \__\\__\\__\ - \/____/ \/____/ . PYPE Club . . - ____________ - /\ ___ \ - \ \ \/_\ \ - \ \ _____/ ___ ___ ___ - \ \ \___/ ---- \ \\ \\ \ - \ \____\ / \____\ \__\\__\\__\ - \/____/ \/____/ . PYPE Club . diff --git a/openpype/lib/terminal_splash.py b/openpype/lib/terminal_splash.py deleted file mode 100644 index 0ba2706a27..0000000000 --- a/openpype/lib/terminal_splash.py +++ /dev/null @@ -1,43 +0,0 @@ -# -*- coding: utf-8 -*- -"""Pype terminal animation.""" -import blessed -from pathlib import Path -from time import sleep - -NO_TERMINAL = False - -try: - term = blessed.Terminal() -except AttributeError: - # this happens when blessed cannot find proper terminal. - # If so, skip printing ascii art animation. 
- NO_TERMINAL = True - - -def play_animation(): - """Play ASCII art Pype animation.""" - if NO_TERMINAL: - return - print(term.home + term.clear) - frame_size = 7 - splash_file = Path(__file__).parent / "splash.txt" - with splash_file.open("r") as sf: - animation = sf.readlines() - - animation_length = int(len(animation) / frame_size) - current_frame = 0 - for _ in range(animation_length): - frame = "".join( - scanline - for y, scanline in enumerate( - animation[current_frame: current_frame + frame_size] - ) - ) - - with term.location(0, 0): - # term.aquamarine3_bold(frame) - print(f"{term.bold}{term.aquamarine3}{frame}{term.normal}") - - sleep(0.02) - current_frame += frame_size - print(term.move_y(7)) diff --git a/openpype/lib/usdlib.py b/openpype/lib/usdlib.py index 3ae7430c7b..89021156b4 100644 --- a/openpype/lib/usdlib.py +++ b/openpype/lib/usdlib.py @@ -315,7 +315,7 @@ def get_usd_master_path(asset, subset, representation): ) template = project["config"]["template"]["publish"] - if isinstance(asset, dict) and "silo" in asset and "name" in asset: + if isinstance(asset, dict) and "name" in asset: # Allow explicitly passing asset document asset_doc = asset else: @@ -325,7 +325,6 @@ def get_usd_master_path(asset, subset, representation): **{ "root": api.registered_root(), "project": api.Session["AVALON_PROJECT"], - "silo": asset_doc["silo"], "asset": asset_doc["name"], "subset": subset, "representation": representation, diff --git a/openpype/modules/base.py b/openpype/modules/base.py index 175957ae39..5cdeb86087 100644 --- a/openpype/modules/base.py +++ b/openpype/modules/base.py @@ -28,26 +28,15 @@ from openpype.settings.lib import ( ) from openpype.lib import PypeLogger - -DEFAULT_OPENPYPE_MODULES = ( - "avalon_apps", - "clockify", - "log_viewer", - "deadline", - "muster", - "royalrender", - "python_console_interpreter", - "ftrack", - "slack", - "webserver", - "launcher_action", - "project_manager_action", - "settings_action", - "standalonepublish_action", - "traypublish_action", - "job_queue", - "timers_manager", - "sync_server", +# Files that will be always ignored on modules import +IGNORED_FILENAMES = ( + "__pycache__", +) +# Files ignored on modules import from "./openpype/modules" +IGNORED_DEFAULT_FILENAMES = ( + "__init__.py", + "base.py", + "interfaces.py", ) @@ -146,9 +135,16 @@ class _LoadCache: def get_default_modules_dir(): """Path to default OpenPype modules.""" + current_dir = os.path.abspath(os.path.dirname(__file__)) - return os.path.join(current_dir, "default_modules") + output = [] + for folder_name in ("default_modules", ): + path = os.path.join(current_dir, folder_name) + if os.path.exists(path) and os.path.isdir(path): + output.append(path) + + return output def get_dynamic_modules_dirs(): @@ -186,7 +182,7 @@ def get_dynamic_modules_dirs(): def get_module_dirs(): """List of paths where OpenPype modules can be found.""" _dirpaths = [] - _dirpaths.append(get_default_modules_dir()) + _dirpaths.extend(get_default_modules_dir()) _dirpaths.extend(get_dynamic_modules_dirs()) dirpaths = [] @@ -292,25 +288,45 @@ def _load_modules(): log = PypeLogger.get_logger("ModulesLoader") + current_dir = os.path.abspath(os.path.dirname(__file__)) + processed_paths = set() + processed_paths.add(current_dir) # Import default modules imported from 'openpype.modules' - for default_module_name in DEFAULT_OPENPYPE_MODULES: + for filename in os.listdir(current_dir): + # Ignore filenames + if ( + filename in IGNORED_FILENAMES + or filename in IGNORED_DEFAULT_FILENAMES + ): + continue + + 
fullpath = os.path.join(current_dir, filename) + basename, ext = os.path.splitext(filename) + + if not os.path.isdir(fullpath) and ext not in (".py", ): + continue + try: - import_str = "openpype.modules.{}".format(default_module_name) - new_import_str = "{}.{}".format(modules_key, default_module_name) + import_str = "openpype.modules.{}".format(basename) + new_import_str = "{}.{}".format(modules_key, basename) default_module = __import__(import_str, fromlist=("", )) sys.modules[new_import_str] = default_module - setattr(openpype_modules, default_module_name, default_module) + setattr(openpype_modules, basename, default_module) except Exception: msg = ( "Failed to import default module '{}'." - ).format(default_module_name) + ).format(basename) log.error(msg, exc_info=True) # Look for OpenPype modules in paths defined with `get_module_dirs` # - dynamically imported OpenPype modules and addons - dirpaths = get_module_dirs() - for dirpath in dirpaths: + for dirpath in get_module_dirs(): + # Skip already processed paths + if dirpath in processed_paths: + continue + processed_paths.add(dirpath) + if not os.path.exists(dirpath): log.warning(( "Could not find path when loading OpenPype modules \"{}\"" @@ -319,12 +335,15 @@ def _load_modules(): for filename in os.listdir(dirpath): # Ignore filenames - if filename in ("__pycache__", ): + if filename in IGNORED_FILENAMES: continue fullpath = os.path.join(dirpath, filename) basename, ext = os.path.splitext(filename) + if not os.path.isdir(fullpath) and ext not in (".py", ): + continue + # TODO add more logic how to define if folder is module or not # - check manifest and content of manifest try: diff --git a/openpype/modules/ftrack/event_handlers_user/action_delete_old_versions.py b/openpype/modules/ftrack/event_handlers_user/action_delete_old_versions.py index 1b694e25f1..5871646b20 100644 --- a/openpype/modules/ftrack/event_handlers_user/action_delete_old_versions.py +++ b/openpype/modules/ftrack/event_handlers_user/action_delete_old_versions.py @@ -492,7 +492,8 @@ class DeleteOldVersions(BaseAction): os.remove(file_path) self.log.debug("Removed file: {}".format(file_path)) - remainders.remove(file_path_base) + if file_path_base in remainders: + remainders.remove(file_path_base) continue seq_path_base = os.path.split(seq_path)[1] diff --git a/openpype/modules/log_viewer/tray/app.py b/openpype/modules/log_viewer/tray/app.py index 1e8d6483cd..71827fcac9 100644 --- a/openpype/modules/log_viewer/tray/app.py +++ b/openpype/modules/log_viewer/tray/app.py @@ -26,3 +26,12 @@ class LogsWindow(QtWidgets.QWidget): self.log_detail = log_detail self.setStyleSheet(style.load_stylesheet()) + + self._frist_show = True + + def showEvent(self, event): + super(LogsWindow, self).showEvent(event) + + if self._frist_show: + self._frist_show = False + self.logs_widget.refresh() diff --git a/openpype/modules/log_viewer/tray/widgets.py b/openpype/modules/log_viewer/tray/widgets.py index ff77405de5..ed08e62109 100644 --- a/openpype/modules/log_viewer/tray/widgets.py +++ b/openpype/modules/log_viewer/tray/widgets.py @@ -155,6 +155,11 @@ class LogsWidget(QtWidgets.QWidget): QtCore.Qt.DescendingOrder ) + refresh_triggered_timer = QtCore.QTimer() + refresh_triggered_timer.setSingleShot(True) + refresh_triggered_timer.setInterval(200) + + refresh_triggered_timer.timeout.connect(self._on_refresh_timeout) view.selectionModel().selectionChanged.connect(self._on_index_change) refresh_btn.clicked.connect(self._on_refresh_clicked) @@ -169,10 +174,12 @@ class 
LogsWidget(QtWidgets.QWidget): self.detail_widget = detail_widget self.refresh_btn = refresh_btn - # prepare - self.refresh() + self._refresh_triggered_timer = refresh_triggered_timer def refresh(self): + self._refresh_triggered_timer.start() + + def _on_refresh_timeout(self): self.model.refresh() self.detail_widget.refresh() diff --git a/openpype/modules/slack/plugins/publish/collect_slack_family.py b/openpype/modules/slack/plugins/publish/collect_slack_family.py index 6c965b04cd..7475bdc89e 100644 --- a/openpype/modules/slack/plugins/publish/collect_slack_family.py +++ b/openpype/modules/slack/plugins/publish/collect_slack_family.py @@ -35,20 +35,25 @@ class CollectSlackFamilies(pyblish.api.InstancePlugin): return # make slack publishable - if profile: - self.log.info("Found profile: {}".format(profile)) - if instance.data.get('families'): - instance.data['families'].append('slack') - else: - instance.data['families'] = ['slack'] + if not profile: + return - instance.data["slack_channel_message_profiles"] = \ - profile["channel_messages"] + self.log.info("Found profile: {}".format(profile)) + if instance.data.get('families'): + instance.data['families'].append('slack') + else: + instance.data['families'] = ['slack'] - slack_token = (instance.context.data["project_settings"] - ["slack"] - ["token"]) - instance.data["slack_token"] = slack_token + selected_profiles = profile["channel_messages"] + for prof in selected_profiles: + prof["review_upload_limit"] = profile.get("review_upload_limit", + 50) + instance.data["slack_channel_message_profiles"] = selected_profiles + + slack_token = (instance.context.data["project_settings"] + ["slack"] + ["token"]) + instance.data["slack_token"] = slack_token def main_family_from_instance(self, instance): # TODO yank from integrate """Returns main family of entered instance.""" diff --git a/openpype/modules/slack/plugins/publish/integrate_slack_api.py b/openpype/modules/slack/plugins/publish/integrate_slack_api.py index 018a7594bb..10bde7d4c0 100644 --- a/openpype/modules/slack/plugins/publish/integrate_slack_api.py +++ b/openpype/modules/slack/plugins/publish/integrate_slack_api.py @@ -35,7 +35,7 @@ class IntegrateSlackAPI(pyblish.api.InstancePlugin): message = self._get_filled_message(message_profile["message"], instance, review_path) - self.log.info("message:: {}".format(message)) + self.log.debug("message:: {}".format(message)) if not message: return @@ -43,7 +43,8 @@ class IntegrateSlackAPI(pyblish.api.InstancePlugin): publish_files.add(thumbnail_path) if message_profile["upload_review"] and review_path: - publish_files.add(review_path) + message, publish_files = self._handle_review_upload( + message, message_profile, publish_files, review_path) project = instance.context.data["anatomyData"]["project"]["code"] for channel in message_profile["channels"]: @@ -75,6 +76,19 @@ class IntegrateSlackAPI(pyblish.api.InstancePlugin): dbcon = mongo_client[database_name]["notification_messages"] dbcon.insert_one(msg) + def _handle_review_upload(self, message, message_profile, publish_files, + review_path): + """Check if uploaded file is not too large""" + review_file_size_MB = os.path.getsize(review_path) / 1024 / 1024 + file_limit = message_profile.get("review_upload_limit", 50) + if review_file_size_MB > file_limit: + message += "\nReview upload omitted because of file size." 
+ if review_path not in message: + message += "\nFile located at: {}".format(review_path) + else: + publish_files.add(review_path) + return message, publish_files + def _get_filled_message(self, message_templ, instance, review_path=None): """Use message_templ and data from instance to get message content. @@ -210,6 +224,9 @@ class IntegrateSlackAPI(pyblish.api.InstancePlugin): # You will get a SlackApiError if "ok" is False error_str = self._enrich_error(str(e.response["error"]), channel) self.log.warning("Error happened {}".format(error_str)) + except Exception as e: + error_str = self._enrich_error(str(e), channel) + self.log.warning("Not SlackAPI error", exc_info=True) return None, [] diff --git a/openpype/pipeline/__init__.py b/openpype/pipeline/__init__.py index 26970e4edc..511e4c7b94 100644 --- a/openpype/pipeline/__init__.py +++ b/openpype/pipeline/__init__.py @@ -1,4 +1,7 @@ -from .lib import attribute_definitions +from .constants import ( + AVALON_CONTAINER_ID, + HOST_WORKFILE_EXTENSIONS, +) from .create import ( BaseCreator, @@ -38,11 +41,31 @@ from .publish import ( PublishValidationError, PublishXmlValidationError, KnownPublishError, - OpenPypePyblishPluginMixin + OpenPypePyblishPluginMixin, + OptionalPyblishPluginMixin, +) + +from .actions import ( + LauncherAction, + + InventoryAction, + + discover_launcher_actions, + register_launcher_action, + register_launcher_action_path, + + discover_inventory_actions, + register_inventory_action, + register_inventory_action_path, + deregister_inventory_action, + deregister_inventory_action_path, ) __all__ = ( + "AVALON_CONTAINER_ID", + "HOST_WORKFILE_EXTENSIONS", + "attribute_definitions", # --- Create --- @@ -82,5 +105,20 @@ __all__ = ( "PublishValidationError", "PublishXmlValidationError", "KnownPublishError", - "OpenPypePyblishPluginMixin" + "OpenPypePyblishPluginMixin", + "OptionalPyblishPluginMixin", + + # --- Actions --- + "LauncherAction", + "InventoryAction", + + "discover_launcher_actions", + "register_launcher_action", + "register_launcher_action_path", + + "discover_inventory_actions", + "register_inventory_action", + "register_inventory_action_path", + "deregister_inventory_action", + "deregister_inventory_action_path", ) diff --git a/openpype/pipeline/actions.py b/openpype/pipeline/actions.py new file mode 100644 index 0000000000..141e277db3 --- /dev/null +++ b/openpype/pipeline/actions.py @@ -0,0 +1,144 @@ +import logging + + +class LauncherAction(object): + """A custom action available""" + name = None + label = None + icon = None + color = None + order = 0 + + log = logging.getLogger("LauncherAction") + log.propagate = True + + def is_compatible(self, session): + """Return whether the class is compatible with the Session.""" + return True + + def process(self, session, **kwargs): + pass + + +class InventoryAction(object): + """A custom action for the scene inventory tool + + If registered the action will be visible in the Right Mouse Button menu + under the submenu "Actions". + + """ + + label = None + icon = None + color = None + order = 0 + + log = logging.getLogger("InventoryAction") + log.propagate = True + + @staticmethod + def is_compatible(container): + """Override function in a custom class + + This method is specifically used to ensure the action can operate on + the container. 
+ + Args: + container(dict): the data of a loaded asset, see host.ls() + + Returns: + bool + """ + return bool(container.get("objectName")) + + def process(self, containers): + """Override function in a custom class + + This method will receive all containers even those which are + incompatible. It is advised to create a small filter along the lines + of this example: + + valid_containers = filter(self.is_compatible(c) for c in containers) + + The return value will need to be a True-ish value to trigger + the data_changed signal in order to refresh the view. + + You can return a list of container names to trigger GUI to select + treeview items. + + You can return a dict to carry extra GUI options. For example: + { + "objectNames": [container names...], + "options": {"mode": "toggle", + "clear": False} + } + Currently workable GUI options are: + - clear (bool): Clear current selection before selecting by action. + Default `True`. + - mode (str): selection mode, use one of these: + "select", "deselect", "toggle". Default is "select". + + Args: + containers (list): list of dictionaries + + Return: + bool, list or dict + + """ + return True + + +# Launcher action +def discover_launcher_actions(): + import avalon.api + + return avalon.api.discover(LauncherAction) + + +def register_launcher_action(plugin): + import avalon.api + + return avalon.api.register_plugin(LauncherAction, plugin) + + +def register_launcher_action_path(path): + import avalon.api + + return avalon.api.register_plugin_path(LauncherAction, path) + + +# Inventory action +def discover_inventory_actions(): + import avalon.api + + actions = avalon.api.discover(InventoryAction) + filtered_actions = [] + for action in actions: + if action is not InventoryAction: + filtered_actions.append(action) + + return filtered_actions + + +def register_inventory_action(plugin): + import avalon.api + + return avalon.api.register_plugin(InventoryAction, plugin) + + +def deregister_inventory_action(plugin): + import avalon.api + + avalon.api.deregister_plugin(InventoryAction, plugin) + + +def register_inventory_action_path(path): + import avalon.api + + return avalon.api.register_plugin_path(InventoryAction, path) + + +def deregister_inventory_action_path(path): + import avalon.api + + return avalon.api.deregister_plugin_path(InventoryAction, path) diff --git a/openpype/pipeline/constants.py b/openpype/pipeline/constants.py new file mode 100644 index 0000000000..e6496cbf95 --- /dev/null +++ b/openpype/pipeline/constants.py @@ -0,0 +1,19 @@ +# Metadata ID of loaded container into scene +AVALON_CONTAINER_ID = "pyblish.avalon.container" + +# TODO get extensions from host implementations +HOST_WORKFILE_EXTENSIONS = { + "blender": [".blend"], + "celaction": [".scn"], + "tvpaint": [".tvpp"], + "fusion": [".comp"], + "harmony": [".zip"], + "houdini": [".hip", ".hiplc", ".hipnc"], + "maya": [".ma", ".mb"], + "nuke": [".nk"], + "hiero": [".hrox"], + "photoshop": [".psd", ".psb"], + "premiere": [".prproj"], + "resolve": [".drp"], + "aftereffects": [".aep"] +} diff --git a/openpype/pipeline/create/context.py b/openpype/pipeline/create/context.py index c2757a4502..eeb08a6294 100644 --- a/openpype/pipeline/create/context.py +++ b/openpype/pipeline/create/context.py @@ -6,7 +6,6 @@ import inspect from uuid import uuid4 from contextlib import contextmanager -from ..lib import UnknownDef from .creator_plugins import ( BaseCreator, Creator, @@ -87,6 +86,8 @@ class AttributeValues: origin_data(dict): Values loaded from host before conversion. 
""" def __init__(self, attr_defs, values, origin_data=None): + from openpype.lib.attribute_definitions import UnknownDef + if origin_data is None: origin_data = copy.deepcopy(values) self._origin_data = origin_data diff --git a/openpype/pipeline/lib/__init__.py b/openpype/pipeline/lib/__init__.py deleted file mode 100644 index f762c4205d..0000000000 --- a/openpype/pipeline/lib/__init__.py +++ /dev/null @@ -1,30 +0,0 @@ -from .attribute_definitions import ( - AbtractAttrDef, - - UIDef, - UISeparatorDef, - UILabelDef, - - UnknownDef, - NumberDef, - TextDef, - EnumDef, - BoolDef, - FileDef, -) - - -__all__ = ( - "AbtractAttrDef", - - "UIDef", - "UISeparatorDef", - "UILabelDef", - - "UnknownDef", - "NumberDef", - "TextDef", - "EnumDef", - "BoolDef", - "FileDef", -) diff --git a/openpype/pipeline/load/plugins.py b/openpype/pipeline/load/plugins.py index 601ad3b258..9b2b6bb084 100644 --- a/openpype/pipeline/load/plugins.py +++ b/openpype/pipeline/load/plugins.py @@ -127,4 +127,5 @@ def register_loader_plugin_path(path): def deregister_loader_plugin(plugin): import avalon.api + avalon.api.deregister_plugin(LoaderPlugin, plugin) diff --git a/openpype/pipeline/load/utils.py b/openpype/pipeline/load/utils.py index 6d32c11cd7..53ac6b626d 100644 --- a/openpype/pipeline/load/utils.py +++ b/openpype/pipeline/load/utils.py @@ -7,6 +7,7 @@ import inspect import numbers import six +from bson.objectid import ObjectId from avalon import io, schema from avalon.api import Session, registered_root @@ -67,7 +68,7 @@ def get_repres_contexts(representation_ids, dbcon=None): _representation_ids = [] for repre_id in representation_ids: if isinstance(repre_id, six.string_types): - repre_id = io.ObjectId(repre_id) + repre_id = ObjectId(repre_id) _representation_ids.append(repre_id) repre_docs = dbcon.find({ @@ -174,7 +175,7 @@ def get_subset_contexts(subset_ids, dbcon=None): _subset_ids = set() for subset_id in subset_ids: if isinstance(subset_id, six.string_types): - subset_id = io.ObjectId(subset_id) + subset_id = ObjectId(subset_id) _subset_ids.add(subset_id) subset_docs = dbcon.find({ @@ -217,7 +218,7 @@ def get_representation_context(representation): """Return parenthood context for representation. Args: - representation (str or io.ObjectId or dict): The representation id + representation (str or ObjectId or dict): The representation id or full representation as returned by the database. Returns: @@ -227,9 +228,9 @@ def get_representation_context(representation): assert representation is not None, "This is a bug" - if isinstance(representation, (six.string_types, io.ObjectId)): + if isinstance(representation, (six.string_types, ObjectId)): representation = io.find_one( - {"_id": io.ObjectId(str(representation))}) + {"_id": ObjectId(str(representation))}) version, subset, asset, project = io.parenthood(representation) @@ -340,7 +341,7 @@ def load_container( Args: Loader (Loader): The loader class to trigger. - representation (str or io.ObjectId or dict): The representation id + representation (str or ObjectId or dict): The representation id or full representation as returned by the database. namespace (str, Optional): The namespace to assign. Defaults to None. name (str, Optional): The name to assign. Defaults to subset name. 
@@ -404,7 +405,7 @@ def update_container(container, version=-1): # Compute the different version from 'representation' current_representation = io.find_one({ - "_id": io.ObjectId(container["representation"]) + "_id": ObjectId(container["representation"]) }) assert current_representation is not None, "This is a bug" @@ -593,7 +594,6 @@ def get_representation_path(representation, root=None, dbcon=None): "code": project.get("data", {}).get("code") }, "asset": asset["name"], - "silo": asset.get("silo"), "hierarchy": hierarchy, "subset": subset["name"], "version": version_["name"], diff --git a/openpype/pipeline/publish/__init__.py b/openpype/pipeline/publish/__init__.py index c2729a46ce..af5d7c4a91 100644 --- a/openpype/pipeline/publish/__init__.py +++ b/openpype/pipeline/publish/__init__.py @@ -3,6 +3,7 @@ from .publish_plugins import ( PublishXmlValidationError, KnownPublishError, OpenPypePyblishPluginMixin, + OptionalPyblishPluginMixin, ) from .lib import ( @@ -18,6 +19,7 @@ __all__ = ( "PublishXmlValidationError", "KnownPublishError", "OpenPypePyblishPluginMixin", + "OptionalPyblishPluginMixin", "DiscoverResult", "publish_plugins_discover", diff --git a/openpype/pipeline/publish/publish_plugins.py b/openpype/pipeline/publish/publish_plugins.py index bce64ec709..2402a005c2 100644 --- a/openpype/pipeline/publish/publish_plugins.py +++ b/openpype/pipeline/publish/publish_plugins.py @@ -1,3 +1,4 @@ +from openpype.lib import BoolDef from .lib import load_help_content_from_plugin @@ -108,3 +109,64 @@ class OpenPypePyblishPluginMixin: plugin_values[key] ) return attribute_values + + def get_attr_values_from_data(self, data): + """Get attribute values for attribute definitions from data. + + Args: + data(dict): Data from instance or context. + """ + return ( + data + .get("publish_attributes", {}) + .get(self.__class__.__name__, {}) + ) + + +class OptionalPyblishPluginMixin(OpenPypePyblishPluginMixin): + """Prepare mixin for optional plugins. + + Defined active attribute definition prepared for published and + prepares method which will check if is active or not. + + ``` + class ValidateScene( + pyblish.api.InstancePlugin, OptionalPyblishPluginMixin + ): + def process(self, instance): + # Skip the instance if is not active by data on the instance + if not self.is_active(instance.data): + return + ``` + """ + + @classmethod + def get_attribute_defs(cls): + """Attribute definitions based on plugin's optional attribute.""" + + # Empty list if plugin is not optional + if not getattr(cls, "optional", None): + return [] + + # Get active value from class as default value + active = getattr(cls, "active", True) + # Return boolean stored under 'active' key with label of the class name + label = cls.label or cls.__name__ + return [ + BoolDef("active", default=active, label=label) + ] + + def is_active(self, data): + """Check if plugins is active for instance/context based on their data. + + Args: + data(dict): Data from instance or context. 
+ """ + # Skip if is not optional and return True + if not getattr(self, "optional", None): + return True + attr_values = self.get_attr_values_from_data(data) + active = attr_values.get("active") + if active is None: + active = getattr(self, "active", True) + return active diff --git a/openpype/pipeline/thumbnail.py b/openpype/pipeline/thumbnail.py new file mode 100644 index 0000000000..12bab83be6 --- /dev/null +++ b/openpype/pipeline/thumbnail.py @@ -0,0 +1,147 @@ +import os +import copy +import logging + +log = logging.getLogger(__name__) + + +def get_thumbnail_binary(thumbnail_entity, thumbnail_type, dbcon=None): + if not thumbnail_entity: + return + + resolvers = discover_thumbnail_resolvers() + resolvers = sorted(resolvers, key=lambda cls: cls.priority) + if dbcon is None: + from avalon import io + dbcon = io + + for Resolver in resolvers: + available_types = Resolver.thumbnail_types + if ( + thumbnail_type not in available_types + and "*" not in available_types + and ( + isinstance(available_types, (list, tuple)) + and len(available_types) == 0 + ) + ): + continue + try: + instance = Resolver(dbcon) + result = instance.process(thumbnail_entity, thumbnail_type) + if result: + return result + + except Exception: + log.warning("Resolver {0} failed durring process.".format( + Resolver.__class__.__name__, exc_info=True + )) + + +class ThumbnailResolver(object): + """Determine how to get data from thumbnail entity. + + "priority" - determines the order of processing in `get_thumbnail_binary`, + lower number is processed earlier. + "thumbnail_types" - it is expected that thumbnails will be used in more + more than one level, there is only ["thumbnail"] type at the moment + of creating this docstring but it is expected to add "ico" and "full" + in future. 
+ """ + + priority = 100 + thumbnail_types = ["*"] + + def __init__(self, dbcon): + self._log = None + self.dbcon = dbcon + + @property + def log(self): + if self._log is None: + self._log = logging.getLogger(self.__class__.__name__) + return self._log + + def process(self, thumbnail_entity, thumbnail_type): + pass + + +class TemplateResolver(ThumbnailResolver): + + priority = 90 + + def process(self, thumbnail_entity, thumbnail_type): + + if not os.environ.get("AVALON_THUMBNAIL_ROOT"): + return + + template = thumbnail_entity["data"].get("template") + if not template: + self.log.debug("Thumbnail entity does not have set template") + return + + project = self.dbcon.find_one( + {"type": "project"}, + { + "name": True, + "data.code": True + } + ) + + template_data = copy.deepcopy( + thumbnail_entity["data"].get("template_data") or {} + ) + template_data.update({ + "_id": str(thumbnail_entity["_id"]), + "thumbnail_type": thumbnail_type, + "thumbnail_root": os.environ.get("AVALON_THUMBNAIL_ROOT"), + "project": { + "name": project["name"], + "code": project["data"].get("code") + } + }) + + try: + filepath = os.path.normpath(template.format(**template_data)) + except KeyError: + self.log.warning(( + "Missing template data keys for template <{0}> || Data: {1}" + ).format(template, str(template_data))) + return + + if not os.path.exists(filepath): + self.log.warning("File does not exist \"{0}\"".format(filepath)) + return + + with open(filepath, "rb") as _file: + content = _file.read() + + return content + + +class BinaryThumbnail(ThumbnailResolver): + def process(self, thumbnail_entity, thumbnail_type): + return thumbnail_entity["data"].get("binary_data") + + +# Thumbnail resolvers +def discover_thumbnail_resolvers(): + import avalon.api + + return avalon.api.discover(ThumbnailResolver) + + +def register_thumbnail_resolver(plugin): + import avalon.api + + return avalon.api.register_plugin(ThumbnailResolver, plugin) + + +def register_thumbnail_resolver_path(path): + import avalon.api + + return avalon.api.register_plugin_path(ThumbnailResolver, path) + + +register_thumbnail_resolver(TemplateResolver) +register_thumbnail_resolver(BinaryThumbnail) diff --git a/openpype/plugins/load/delete_old_versions.py b/openpype/plugins/load/delete_old_versions.py index 692acdec02..2789f4ea23 100644 --- a/openpype/plugins/load/delete_old_versions.py +++ b/openpype/plugins/load/delete_old_versions.py @@ -126,7 +126,8 @@ class DeleteOldVersions(load.SubsetLoaderPlugin): os.remove(file_path) self.log.debug("Removed file: {}".format(file_path)) - remainders.remove(file_path_base) + if file_path_base in remainders: + remainders.remove(file_path_base) continue seq_path_base = os.path.split(seq_path)[1] @@ -333,6 +334,8 @@ class DeleteOldVersions(load.SubsetLoaderPlugin): def main(self, data, remove_publish_folder): # Size of files. 
size = 0 + if not data: + return size if remove_publish_folder: size = self.delete_whole_dir_paths(data["dir_paths"].values()) @@ -418,6 +421,8 @@ class DeleteOldVersions(load.SubsetLoaderPlugin): ) data = self.get_data(context, versions_to_keep) + if not data: + continue size += self.main(data, remove_publish_folder) print("Progressing {}/{}".format(count + 1, len(contexts))) diff --git a/openpype/plugins/publish/collect_scene_loaded_versions.py b/openpype/plugins/publish/collect_scene_loaded_versions.py index d8119846c6..6746757e5f 100644 --- a/openpype/plugins/publish/collect_scene_loaded_versions.py +++ b/openpype/plugins/publish/collect_scene_loaded_versions.py @@ -1,3 +1,4 @@ +from bson.objectid import ObjectId import pyblish.api from avalon import api, io @@ -35,7 +36,7 @@ class CollectSceneLoadedVersions(pyblish.api.ContextPlugin): loaded_versions = [] _containers = list(host.ls()) - _repr_ids = [io.ObjectId(c["representation"]) for c in _containers] + _repr_ids = [ObjectId(c["representation"]) for c in _containers] version_by_repr = { str(doc["_id"]): doc["parent"] for doc in io.find({"_id": {"$in": _repr_ids}}, projection={"parent": 1}) @@ -46,7 +47,7 @@ class CollectSceneLoadedVersions(pyblish.api.ContextPlugin): # may have more then one representation that are same version version = { "subsetName": con["name"], - "representation": io.ObjectId(con["representation"]), + "representation": ObjectId(con["representation"]), "version": version_by_repr[con["representation"]], # _id } loaded_versions.append(version) diff --git a/openpype/plugins/publish/extract_hierarchy_avalon.py b/openpype/plugins/publish/extract_hierarchy_avalon.py index e263edd931..b062a9c4b5 100644 --- a/openpype/plugins/publish/extract_hierarchy_avalon.py +++ b/openpype/plugins/publish/extract_hierarchy_avalon.py @@ -64,7 +64,7 @@ class ExtractHierarchyToAvalon(pyblish.api.ContextPlugin): data["tasks"] = tasks parents = [] visualParent = None - # do not store project"s id as visualParent (silo asset) + # do not store project"s id as visualParent if self.project is not None: if self.project["_id"] != parent["_id"]: visualParent = parent["_id"] diff --git a/openpype/plugins/publish/integrate_hero_version.py b/openpype/plugins/publish/integrate_hero_version.py index 60245314f4..466606d08b 100644 --- a/openpype/plugins/publish/integrate_hero_version.py +++ b/openpype/plugins/publish/integrate_hero_version.py @@ -4,6 +4,7 @@ import clique import errno import shutil +from bson.objectid import ObjectId from pymongo import InsertOne, ReplaceOne import pyblish.api from avalon import api, io, schema @@ -161,7 +162,7 @@ class IntegrateHeroVersion(pyblish.api.InstancePlugin): if old_version: new_version_id = old_version["_id"] else: - new_version_id = io.ObjectId() + new_version_id = ObjectId() new_hero_version = { "_id": new_version_id, @@ -384,7 +385,7 @@ class IntegrateHeroVersion(pyblish.api.InstancePlugin): # Create representation else: - repre["_id"] = io.ObjectId() + repre["_id"] = ObjectId() bulk_writes.append( InsertOne(repre) ) @@ -420,7 +421,7 @@ class IntegrateHeroVersion(pyblish.api.InstancePlugin): else: repre["old_id"] = repre["_id"] - repre["_id"] = io.ObjectId() + repre["_id"] = ObjectId() repre["type"] = "archived_representation" bulk_writes.append( InsertOne(repre) diff --git a/openpype/plugins/publish/integrate_inputlinks.py b/openpype/plugins/publish/integrate_inputlinks.py index f973dfc963..11cffc4638 100644 --- a/openpype/plugins/publish/integrate_inputlinks.py +++ 
b/openpype/plugins/publish/integrate_inputlinks.py @@ -1,8 +1,10 @@ - from collections import OrderedDict -from avalon import io + +from bson.objectid import ObjectId import pyblish.api +from avalon import io + class IntegrateInputLinks(pyblish.api.ContextPlugin): """Connecting version level dependency links""" @@ -104,7 +106,7 @@ class IntegrateInputLinks(pyblish.api.ContextPlugin): # future. link = OrderedDict() link["type"] = link_type - link["id"] = io.ObjectId(input_id) + link["id"] = ObjectId(input_id) link["linkedBy"] = "publish" if "inputLinks" not in version_doc["data"]: diff --git a/openpype/plugins/publish/integrate_new.py b/openpype/plugins/publish/integrate_new.py index 37bff9338f..afa4e0a9cf 100644 --- a/openpype/plugins/publish/integrate_new.py +++ b/openpype/plugins/publish/integrate_new.py @@ -9,6 +9,7 @@ import six import re import shutil +from bson.objectid import ObjectId from pymongo import DeleteOne, InsertOne import pyblish.api from avalon import io @@ -107,6 +108,8 @@ class IntegrateAssetNew(pyblish.api.InstancePlugin): "usd", "staticMesh", "skeletalMesh" + "usdComposition", + "usdOverride" ] exclude_families = ["clip"] db_representation_context_keys = [ @@ -296,7 +299,7 @@ class IntegrateAssetNew(pyblish.api.InstancePlugin): bulk_writes.append(DeleteOne({"_id": repre_id})) repre["orig_id"] = repre_id - repre["_id"] = io.ObjectId() + repre["_id"] = ObjectId() repre["type"] = "archived_representation" bulk_writes.append(InsertOne(repre)) @@ -575,7 +578,7 @@ class IntegrateAssetNew(pyblish.api.InstancePlugin): # Create new id if existing representations does not match if repre_id is None: - repre_id = io.ObjectId() + repre_id = ObjectId() data = repre.get("data") or {} data.update({'path': dst, 'template': template}) @@ -784,7 +787,7 @@ class IntegrateAssetNew(pyblish.api.InstancePlugin): families = [instance.data["family"]] families.extend(instance.data.get("families", [])) io.update_many( - {"type": "subset", "_id": io.ObjectId(subset["_id"])}, + {"type": "subset", "_id": ObjectId(subset["_id"])}, {"$set": {"data.families": families}} ) @@ -809,7 +812,7 @@ class IntegrateAssetNew(pyblish.api.InstancePlugin): if subset_group: io.update_many({ 'type': 'subset', - '_id': io.ObjectId(subset_id) + '_id': ObjectId(subset_id) }, {'$set': {'data.subsetGroup': subset_group}}) def _get_subset_group(self, instance): @@ -1056,7 +1059,7 @@ class IntegrateAssetNew(pyblish.api.InstancePlugin): sync_project_presets = None rec = { - "_id": io.ObjectId(), + "_id": ObjectId(), "path": path } if size: diff --git a/openpype/settings/defaults/project_settings/slack.json b/openpype/settings/defaults/project_settings/slack.json index d77b8c2208..c156fed08e 100644 --- a/openpype/settings/defaults/project_settings/slack.json +++ b/openpype/settings/defaults/project_settings/slack.json @@ -11,6 +11,7 @@ "task_types": [], "tasks": [], "subsets": [], + "review_upload_limit": 50.0, "channel_messages": [] } ] diff --git a/openpype/settings/entities/base_entity.py b/openpype/settings/entities/base_entity.py index 76700d605d..21ee44ae77 100644 --- a/openpype/settings/entities/base_entity.py +++ b/openpype/settings/entities/base_entity.py @@ -173,6 +173,10 @@ class BaseItemEntity(BaseEntity): # Entity has set `_project_override_value` (is not NOT_SET) self.had_project_override = False + self._default_log_invalid_types = True + self._studio_log_invalid_types = True + self._project_log_invalid_types = True + # Callbacks that are called on change. 
# - main current purspose is to register GUI callbacks self.on_change_callbacks = [] @@ -419,7 +423,7 @@ class BaseItemEntity(BaseEntity): raise InvalidValueType(self.valid_value_types, type(value), self.path) # TODO convert to private method - def _check_update_value(self, value, value_source): + def _check_update_value(self, value, value_source, log_invalid_types=True): """Validation of value on update methods. Update methods update data from currently saved settings so it is @@ -447,16 +451,17 @@ class BaseItemEntity(BaseEntity): if new_value is not NOT_SET: return new_value - # Warning log about invalid value type. - self.log.warning( - ( - "{} Got invalid value type for {} values." - " Expected types: {} | Got Type: {} | Value: \"{}\"" - ).format( - self.path, value_source, - self.valid_value_types, type(value), str(value) + if log_invalid_types: + # Warning log about invalid value type. + self.log.warning( + ( + "{} Got invalid value type for {} values." + " Expected types: {} | Got Type: {} | Value: \"{}\"" + ).format( + self.path, value_source, + self.valid_value_types, type(value), str(value) + ) ) - ) return NOT_SET def available_for_role(self, role_name=None): @@ -985,7 +990,7 @@ class ItemEntity(BaseItemEntity): return self.root_item.get_entity_from_path(path) @abstractmethod - def update_default_value(self, parent_values): + def update_default_value(self, parent_values, log_invalid_types=True): """Fill default values on startup or on refresh. Default values stored in `openpype` repository should update all items @@ -995,11 +1000,13 @@ class ItemEntity(BaseItemEntity): Args: parent_values (dict): Values of parent's item. But in case item is used as widget, `parent_values` contain value for item. + log_invalid_types (bool): Log invalid type of value. Used when + entity can have children with same keys and different types. """ pass @abstractmethod - def update_studio_value(self, parent_values): + def update_studio_value(self, parent_values, log_invalid_types=True): """Fill studio override values on startup or on refresh. Set studio value if is not set to NOT_SET, in that case studio @@ -1008,11 +1015,13 @@ class ItemEntity(BaseItemEntity): Args: parent_values (dict): Values of parent's item. But in case item is used as widget, `parent_values` contain value for item. + log_invalid_types (bool): Log invalid type of value. Used when + entity can have children with same keys and different types. """ pass @abstractmethod - def update_project_value(self, parent_values): + def update_project_value(self, parent_values, log_invalid_types=True): """Fill project override values on startup, refresh or project change. Set project value if is not set to NOT_SET, in that case project @@ -1021,5 +1030,7 @@ class ItemEntity(BaseItemEntity): Args: parent_values (dict): Values of parent's item. But in case item is used as widget, `parent_values` contain value for item. + log_invalid_types (bool): Log invalid type of value. Used when + entity can have children with same keys and different types. 
""" pass diff --git a/openpype/settings/entities/dict_conditional.py b/openpype/settings/entities/dict_conditional.py index 19f326aea7..88d2dc8296 100644 --- a/openpype/settings/entities/dict_conditional.py +++ b/openpype/settings/entities/dict_conditional.py @@ -518,12 +518,18 @@ class DictConditionalEntity(ItemEntity): output.update(self._current_metadata) return output - def _prepare_value(self, value): + def _prepare_value(self, value, log_invalid_types): if value is NOT_SET or self.enum_key not in value: return NOT_SET, NOT_SET enum_value = value.get(self.enum_key) if enum_value not in self.non_gui_children: + if log_invalid_types: + self.log.warning( + "{} Unknown enum key in default values: {}".format( + self.path, enum_value + ) + ) return NOT_SET, NOT_SET # Create copy of value before poping values @@ -551,22 +557,25 @@ class DictConditionalEntity(ItemEntity): return value, metadata - def update_default_value(self, value): + def update_default_value(self, value, log_invalid_types=True): """Update default values. Not an api method, should be called by parent. """ - value = self._check_update_value(value, "default") + self._default_log_invalid_types = log_invalid_types + value = self._check_update_value( + value, "default", log_invalid_types + ) self.has_default_value = value is not NOT_SET # TODO add value validation - value, metadata = self._prepare_value(value) + value, metadata = self._prepare_value(value, log_invalid_types) self._default_metadata = metadata if value is NOT_SET: - self.enum_entity.update_default_value(value) + self.enum_entity.update_default_value(value, log_invalid_types) for children_by_key in self.non_gui_children.values(): for child_obj in children_by_key.values(): - child_obj.update_default_value(value) + child_obj.update_default_value(value, log_invalid_types) return value_keys = set(value.keys()) @@ -574,7 +583,7 @@ class DictConditionalEntity(ItemEntity): expected_keys = set(self.non_gui_children[enum_value].keys()) expected_keys.add(self.enum_key) unknown_keys = value_keys - expected_keys - if unknown_keys: + if unknown_keys and log_invalid_types: self.log.warning( "{} Unknown keys in default values: {}".format( self.path, @@ -582,28 +591,37 @@ class DictConditionalEntity(ItemEntity): ) ) - self.enum_entity.update_default_value(enum_value) - for children_by_key in self.non_gui_children.values(): + self.enum_entity.update_default_value(enum_value, log_invalid_types) + + for enum_key, children_by_key in self.non_gui_children.items(): + _log_invalid_types = log_invalid_types + if _log_invalid_types: + _log_invalid_types = enum_key == enum_value + value_copy = copy.deepcopy(value) for key, child_obj in children_by_key.items(): child_value = value_copy.get(key, NOT_SET) - child_obj.update_default_value(child_value) + child_obj.update_default_value(child_value, _log_invalid_types) - def update_studio_value(self, value): + def update_studio_value(self, value, log_invalid_types=True): """Update studio override values. Not an api method, should be called by parent. 
""" - value = self._check_update_value(value, "studio override") - value, metadata = self._prepare_value(value) + + self._studio_log_invalid_types = log_invalid_types + value = self._check_update_value( + value, "studio override", log_invalid_types + ) + value, metadata = self._prepare_value(value, log_invalid_types) self._studio_override_metadata = metadata self.had_studio_override = metadata is not NOT_SET if value is NOT_SET: - self.enum_entity.update_studio_value(value) + self.enum_entity.update_studio_value(value, log_invalid_types) for children_by_key in self.non_gui_children.values(): for child_obj in children_by_key.values(): - child_obj.update_studio_value(value) + child_obj.update_studio_value(value, log_invalid_types) return value_keys = set(value.keys()) @@ -611,7 +629,7 @@ class DictConditionalEntity(ItemEntity): expected_keys = set(self.non_gui_children[enum_value]) expected_keys.add(self.enum_key) unknown_keys = value_keys - expected_keys - if unknown_keys: + if unknown_keys and log_invalid_types: self.log.warning( "{} Unknown keys in studio overrides: {}".format( self.path, @@ -619,28 +637,36 @@ class DictConditionalEntity(ItemEntity): ) ) - self.enum_entity.update_studio_value(enum_value) - for children_by_key in self.non_gui_children.values(): + self.enum_entity.update_studio_value(enum_value, log_invalid_types) + for enum_key, children_by_key in self.non_gui_children.items(): + _log_invalid_types = log_invalid_types + if _log_invalid_types: + _log_invalid_types = enum_key == enum_value + value_copy = copy.deepcopy(value) for key, child_obj in children_by_key.items(): child_value = value_copy.get(key, NOT_SET) - child_obj.update_studio_value(child_value) + child_obj.update_studio_value(child_value, _log_invalid_types) - def update_project_value(self, value): + def update_project_value(self, value, log_invalid_types=True): """Update project override values. Not an api method, should be called by parent. 
""" - value = self._check_update_value(value, "project override") - value, metadata = self._prepare_value(value) + + self._project_log_invalid_types = log_invalid_types + value = self._check_update_value( + value, "project override", log_invalid_types + ) + value, metadata = self._prepare_value(value, log_invalid_types) self._project_override_metadata = metadata self.had_project_override = metadata is not NOT_SET if value is NOT_SET: - self.enum_entity.update_project_value(value) + self.enum_entity.update_project_value(value, log_invalid_types) for children_by_key in self.non_gui_children.values(): for child_obj in children_by_key.values(): - child_obj.update_project_value(value) + child_obj.update_project_value(value, log_invalid_types) return value_keys = set(value.keys()) @@ -648,7 +674,7 @@ class DictConditionalEntity(ItemEntity): expected_keys = set(self.non_gui_children[enum_value]) expected_keys.add(self.enum_key) unknown_keys = value_keys - expected_keys - if unknown_keys: + if unknown_keys and log_invalid_types: self.log.warning( "{} Unknown keys in project overrides: {}".format( self.path, @@ -656,12 +682,16 @@ class DictConditionalEntity(ItemEntity): ) ) - self.enum_entity.update_project_value(enum_value) - for children_by_key in self.non_gui_children.values(): + self.enum_entity.update_project_value(enum_value, log_invalid_types) + for enum_key, children_by_key in self.non_gui_children.items(): + _log_invalid_types = log_invalid_types + if _log_invalid_types: + _log_invalid_types = enum_key == enum_value + value_copy = copy.deepcopy(value) for key, child_obj in children_by_key.items(): child_value = value_copy.get(key, NOT_SET) - child_obj.update_project_value(child_value) + child_obj.update_project_value(child_value, _log_invalid_types) def _discard_changes(self, on_change_trigger): self._ignore_child_changes = True diff --git a/openpype/settings/entities/dict_immutable_keys_entity.py b/openpype/settings/entities/dict_immutable_keys_entity.py index 060f8d522e..0209681e95 100644 --- a/openpype/settings/entities/dict_immutable_keys_entity.py +++ b/openpype/settings/entities/dict_immutable_keys_entity.py @@ -414,12 +414,16 @@ class DictImmutableKeysEntity(ItemEntity): return value, metadata - def update_default_value(self, value): + def update_default_value(self, value, log_invalid_types=True): """Update default values. Not an api method, should be called by parent. 
""" - value = self._check_update_value(value, "default") + + self._default_log_invalid_types = log_invalid_types + value = self._check_update_value( + value, "default", log_invalid_types + ) self.has_default_value = value is not NOT_SET # TODO add value validation value, metadata = self._prepare_value(value) @@ -427,13 +431,13 @@ class DictImmutableKeysEntity(ItemEntity): if value is NOT_SET: for child_obj in self.non_gui_children.values(): - child_obj.update_default_value(value) + child_obj.update_default_value(value, log_invalid_types) return value_keys = set(value.keys()) expected_keys = set(self.non_gui_children) unknown_keys = value_keys - expected_keys - if unknown_keys: + if unknown_keys and log_invalid_types: self.log.warning( "{} Unknown keys in default values: {}".format( self.path, @@ -443,27 +447,31 @@ class DictImmutableKeysEntity(ItemEntity): for key, child_obj in self.non_gui_children.items(): child_value = value.get(key, NOT_SET) - child_obj.update_default_value(child_value) + child_obj.update_default_value(child_value, log_invalid_types) - def update_studio_value(self, value): + def update_studio_value(self, value, log_invalid_types=True): """Update studio override values. Not an api method, should be called by parent. """ - value = self._check_update_value(value, "studio override") + + self._studio_log_invalid_types = log_invalid_types + value = self._check_update_value( + value, "studio override", log_invalid_types + ) value, metadata = self._prepare_value(value) self._studio_override_metadata = metadata self.had_studio_override = metadata is not NOT_SET if value is NOT_SET: for child_obj in self.non_gui_children.values(): - child_obj.update_studio_value(value) + child_obj.update_studio_value(value, log_invalid_types) return value_keys = set(value.keys()) expected_keys = set(self.non_gui_children) unknown_keys = value_keys - expected_keys - if unknown_keys: + if unknown_keys and log_invalid_types: self.log.warning( "{} Unknown keys in studio overrides: {}".format( self.path, @@ -472,27 +480,31 @@ class DictImmutableKeysEntity(ItemEntity): ) for key, child_obj in self.non_gui_children.items(): child_value = value.get(key, NOT_SET) - child_obj.update_studio_value(child_value) + child_obj.update_studio_value(child_value, log_invalid_types) - def update_project_value(self, value): + def update_project_value(self, value, log_invalid_types=True): """Update project override values. Not an api method, should be called by parent. 
""" - value = self._check_update_value(value, "project override") + + self._project_log_invalid_types = log_invalid_types + value = self._check_update_value( + value, "project override", log_invalid_types + ) value, metadata = self._prepare_value(value) self._project_override_metadata = metadata self.had_project_override = metadata is not NOT_SET if value is NOT_SET: for child_obj in self.non_gui_children.values(): - child_obj.update_project_value(value) + child_obj.update_project_value(value, log_invalid_types) return value_keys = set(value.keys()) expected_keys = set(self.non_gui_children) unknown_keys = value_keys - expected_keys - if unknown_keys: + if unknown_keys and log_invalid_types: self.log.warning( "{} Unknown keys in project overrides: {}".format( self.path, @@ -502,7 +514,7 @@ class DictImmutableKeysEntity(ItemEntity): for key, child_obj in self.non_gui_children.items(): child_value = value.get(key, NOT_SET) - child_obj.update_project_value(child_value) + child_obj.update_project_value(child_value, log_invalid_types) def _discard_changes(self, on_change_trigger): self._ignore_child_changes = True @@ -694,37 +706,48 @@ class RootsDictEntity(DictImmutableKeysEntity): self._metadata_are_modified = False self._current_metadata = {} - def update_default_value(self, value): + def update_default_value(self, value, log_invalid_types=True): """Update default values. Not an api method, should be called by parent. """ - value = self._check_update_value(value, "default") + + self._default_log_invalid_types = log_invalid_types + value = self._check_update_value( + value, "default", log_invalid_types + ) value, _ = self._prepare_value(value) self._default_value = value self._default_metadata = {} self.has_default_value = value is not NOT_SET - def update_studio_value(self, value): + def update_studio_value(self, value, log_invalid_types=True): """Update studio override values. Not an api method, should be called by parent. """ - value = self._check_update_value(value, "studio override") + + self._studio_log_invalid_types = log_invalid_types + value = self._check_update_value( + value, "studio override", log_invalid_types + ) value, _ = self._prepare_value(value) self._studio_value = value self._studio_override_metadata = {} self.had_studio_override = value is not NOT_SET - def update_project_value(self, value): + def update_project_value(self, value, log_invalid_types=True): """Update project override values. Not an api method, should be called by parent. """ - value = self._check_update_value(value, "project override") + self._project_log_invalid_types = log_invalid_types + value = self._check_update_value( + value, "project override", log_invalid_types + ) value, _metadata = self._prepare_value(value) self._project_value = value @@ -886,37 +909,48 @@ class SyncServerSites(DictImmutableKeysEntity): self._metadata_are_modified = False self._current_metadata = {} - def update_default_value(self, value): + def update_default_value(self, value, log_invalid_types=True): """Update default values. Not an api method, should be called by parent. 
""" - value = self._check_update_value(value, "default") + + self._default_log_invalid_types = log_invalid_types + value = self._check_update_value( + value, "default", log_invalid_types + ) value, _ = self._prepare_value(value) self._default_value = value self._default_metadata = {} self.has_default_value = value is not NOT_SET - def update_studio_value(self, value): + def update_studio_value(self, value, log_invalid_types=True): """Update studio override values. Not an api method, should be called by parent. """ - value = self._check_update_value(value, "studio override") + + self._studio_log_invalid_types = log_invalid_types + value = self._check_update_value( + value, "studio override", log_invalid_types + ) value, _ = self._prepare_value(value) self._studio_value = value self._studio_override_metadata = {} self.had_studio_override = value is not NOT_SET - def update_project_value(self, value): + def update_project_value(self, value, log_invalid_types=True): """Update project override values. Not an api method, should be called by parent. """ - value = self._check_update_value(value, "project override") + self._project_log_invalid_types = log_invalid_types + value = self._check_update_value( + value, "project override", log_invalid_types + ) value, _metadata = self._prepare_value(value) self._project_value = value diff --git a/openpype/settings/entities/dict_mutable_keys_entity.py b/openpype/settings/entities/dict_mutable_keys_entity.py index 6b9c0bc7ed..a0c93b97a7 100644 --- a/openpype/settings/entities/dict_mutable_keys_entity.py +++ b/openpype/settings/entities/dict_mutable_keys_entity.py @@ -393,11 +393,15 @@ class DictMutableKeysEntity(EndpointEntity): value = self.value_on_not_set using_values_from_state = False + log_invalid_types = True if state is OverrideState.PROJECT: + log_invalid_types = self._project_log_invalid_types using_values_from_state = using_project_overrides elif state is OverrideState.STUDIO: + log_invalid_types = self._studio_log_invalid_types using_values_from_state = using_studio_overrides elif state is OverrideState.DEFAULTS: + log_invalid_types = self._default_log_invalid_types using_values_from_state = using_default_values new_value = copy.deepcopy(value) @@ -437,11 +441,11 @@ class DictMutableKeysEntity(EndpointEntity): if not label: label = metadata_labels.get(new_key) - child_entity.update_default_value(_value) + child_entity.update_default_value(_value, log_invalid_types) if using_project_overrides: - child_entity.update_project_value(_value) + child_entity.update_project_value(_value, log_invalid_types) elif using_studio_overrides: - child_entity.update_studio_value(_value) + child_entity.update_studio_value(_value, log_invalid_types) if label: children_label_by_id[child_entity.id] = label @@ -598,8 +602,11 @@ class DictMutableKeysEntity(EndpointEntity): metadata[key] = value.pop(key) return value, metadata - def update_default_value(self, value): - value = self._check_update_value(value, "default") + def update_default_value(self, value, log_invalid_types=True): + self._default_log_invalid_types = log_invalid_types + value = self._check_update_value( + value, "default", log_invalid_types + ) has_default_value = value is not NOT_SET if has_default_value: for required_key in self.required_keys: @@ -611,15 +618,21 @@ class DictMutableKeysEntity(EndpointEntity): self._default_value = value self._default_metadata = metadata - def update_studio_value(self, value): - value = self._check_update_value(value, "studio override") + def update_studio_value(self, 
value, log_invalid_types=True): + self._studio_log_invalid_types = log_invalid_types + value = self._check_update_value( + value, "studio override", log_invalid_types + ) value, metadata = self._prepare_value(value) self._studio_override_value = value self._studio_override_metadata = metadata self.had_studio_override = value is not NOT_SET - def update_project_value(self, value): - value = self._check_update_value(value, "project override") + def update_project_value(self, value, log_invalid_types=True): + self._project_log_invalid_types = log_invalid_types + value = self._check_update_value( + value, "project override", log_invalid_types + ) value, metadata = self._prepare_value(value) self._project_override_value = value self._project_override_metadata = metadata @@ -686,9 +699,12 @@ class DictMutableKeysEntity(EndpointEntity): if not self._can_remove_from_project_override: return + log_invalid_types = True if self._has_studio_override: + log_invalid_types = self._studio_log_invalid_types value = self._studio_override_value elif self.has_default_value: + log_invalid_types = self._default_log_invalid_types value = self._default_value else: value = self.value_on_not_set @@ -709,9 +725,9 @@ class DictMutableKeysEntity(EndpointEntity): for _key, _value in new_value.items(): new_key = self._convert_to_regex_valid_key(_key) child_entity = self._add_key(new_key) - child_entity.update_default_value(_value) + child_entity.update_default_value(_value, log_invalid_types) if self._has_studio_override: - child_entity.update_studio_value(_value) + child_entity.update_studio_value(_value, log_invalid_types) label = metadata_labels.get(_key) if label: diff --git a/openpype/settings/entities/input_entities.py b/openpype/settings/entities/input_entities.py index 7512d7bfcc..3dcd238672 100644 --- a/openpype/settings/entities/input_entities.py +++ b/openpype/settings/entities/input_entities.py @@ -90,18 +90,27 @@ class EndpointEntity(ItemEntity): def require_restart(self): return self.has_unsaved_changes - def update_default_value(self, value): - value = self._check_update_value(value, "default") + def update_default_value(self, value, log_invalid_types=True): + self._default_log_invalid_types = log_invalid_types + value = self._check_update_value( + value, "default", log_invalid_types + ) self._default_value = value self.has_default_value = value is not NOT_SET - def update_studio_value(self, value): - value = self._check_update_value(value, "studio override") + def update_studio_value(self, value, log_invalid_types=True): + self._studio_log_invalid_types = log_invalid_types + value = self._check_update_value( + value, "studio override", log_invalid_types + ) self._studio_override_value = value self.had_studio_override = bool(value is not NOT_SET) - def update_project_value(self, value): - value = self._check_update_value(value, "project override") + def update_project_value(self, value, log_invalid_types=True): + self._project_log_invalid_types = log_invalid_types + value = self._check_update_value( + value, "project override", log_invalid_types + ) self._project_override_value = value self.had_project_override = bool(value is not NOT_SET) @@ -590,22 +599,26 @@ class RawJsonEntity(InputEntity): metadata[key] = value.pop(key) return value, metadata - def update_default_value(self, value): - value = self._check_update_value(value, "default") + def update_default_value(self, value, log_invalid_types=True): + value = self._check_update_value(value, "default", log_invalid_types) self.has_default_value = value 
is not NOT_SET value, metadata = self._prepare_value(value) self._default_value = value self.default_metadata = metadata - def update_studio_value(self, value): - value = self._check_update_value(value, "studio override") + def update_studio_value(self, value, log_invalid_types=True): + value = self._check_update_value( + value, "studio override", log_invalid_types + ) self.had_studio_override = value is not NOT_SET value, metadata = self._prepare_value(value) self._studio_override_value = value self.studio_override_metadata = metadata - def update_project_value(self, value): - value = self._check_update_value(value, "project override") + def update_project_value(self, value, log_invalid_types=True): + value = self._check_update_value( + value, "project override", log_invalid_types + ) self.had_project_override = value is not NOT_SET value, metadata = self._prepare_value(value) self._project_override_value = value diff --git a/openpype/settings/entities/item_entities.py b/openpype/settings/entities/item_entities.py index 9c6f428b97..3b756e4ede 100644 --- a/openpype/settings/entities/item_entities.py +++ b/openpype/settings/entities/item_entities.py @@ -173,14 +173,17 @@ class PathEntity(ItemEntity): self._ignore_missing_defaults = ignore_missing_defaults self.child_obj.set_override_state(state, ignore_missing_defaults) - def update_default_value(self, value): - self.child_obj.update_default_value(value) + def update_default_value(self, value, log_invalid_types=True): + self._default_log_invalid_types = log_invalid_types + self.child_obj.update_default_value(value, log_invalid_types) - def update_project_value(self, value): - self.child_obj.update_project_value(value) + def update_project_value(self, value, log_invalid_types=True): + self._studio_log_invalid_types = log_invalid_types + self.child_obj.update_project_value(value, log_invalid_types) - def update_studio_value(self, value): - self.child_obj.update_studio_value(value) + def update_studio_value(self, value, log_invalid_types=True): + self._project_log_invalid_types = log_invalid_types + self.child_obj.update_studio_value(value, log_invalid_types) def _discard_changes(self, *args, **kwargs): self.child_obj.discard_changes(*args, **kwargs) @@ -472,9 +475,9 @@ class ListStrictEntity(ItemEntity): self._has_project_override = False - def _check_update_value(self, value, value_type): + def _check_update_value(self, value, value_type, log_invalid_types=True): value = super(ListStrictEntity, self)._check_update_value( - value, value_type + value, value_type, log_invalid_types ) if value is NOT_SET: return value @@ -484,15 +487,16 @@ class ListStrictEntity(ItemEntity): if value_len == child_len: return value - self.log.warning( - ( - "{} Amount of strict list items in {} values is" - " not same as expected. Expected {} items. Got {} items. {}" - ).format( - self.path, value_type, - child_len, value_len, str(value) + if log_invalid_types: + self.log.warning( + ( + "{} Amount of strict list items in {} values is not same" + " as expected. Expected {} items. Got {} items. 
{}" + ).format( + self.path, value_type, + child_len, value_len, str(value) + ) ) - ) if value_len < child_len: # Fill missing values with NOT_SET @@ -504,36 +508,51 @@ class ListStrictEntity(ItemEntity): value.pop(child_len) return value - def update_default_value(self, value): - value = self._check_update_value(value, "default") + def update_default_value(self, value, log_invalid_types=True): + self._default_log_invalid_types = log_invalid_types + value = self._check_update_value( + value, "default", log_invalid_types + ) self.has_default_value = value is not NOT_SET if value is NOT_SET: for child_obj in self.children: - child_obj.update_default_value(value) + child_obj.update_default_value(value, log_invalid_types) else: for idx, item_value in enumerate(value): - self.children[idx].update_default_value(item_value) + self.children[idx].update_default_value( + item_value, log_invalid_types + ) - def update_studio_value(self, value): - value = self._check_update_value(value, "studio override") + def update_studio_value(self, value, log_invalid_types=True): + self._studio_log_invalid_types = log_invalid_types + value = self._check_update_value( + value, "studio override", log_invalid_types + ) if value is NOT_SET: for child_obj in self.children: - child_obj.update_studio_value(value) + child_obj.update_studio_value(value, log_invalid_types) else: for idx, item_value in enumerate(value): - self.children[idx].update_studio_value(item_value) + self.children[idx].update_studio_value( + item_value, log_invalid_types + ) - def update_project_value(self, value): - value = self._check_update_value(value, "project override") + def update_project_value(self, value, log_invalid_types=True): + self._project_log_invalid_types = log_invalid_types + value = self._check_update_value( + value, "project override", log_invalid_types + ) if value is NOT_SET: for child_obj in self.children: - child_obj.update_project_value(value) + child_obj.update_project_value(value, log_invalid_types) else: for idx, item_value in enumerate(value): - self.children[idx].update_project_value(item_value) + self.children[idx].update_project_value( + item_value, log_invalid_types + ) def reset_callbacks(self): super(ListStrictEntity, self).reset_callbacks() diff --git a/openpype/settings/entities/list_entity.py b/openpype/settings/entities/list_entity.py index 0268c208bb..5d6a64b3ea 100644 --- a/openpype/settings/entities/list_entity.py +++ b/openpype/settings/entities/list_entity.py @@ -325,16 +325,24 @@ class ListEntity(EndpointEntity): for item in value: child_obj = self._add_new_item() - child_obj.update_default_value(item) + child_obj.update_default_value( + item, self._default_log_invalid_types + ) if self._override_state is OverrideState.PROJECT: if self.had_project_override: - child_obj.update_project_value(item) + child_obj.update_project_value( + item, self._project_log_invalid_types + ) elif self.had_studio_override: - child_obj.update_studio_value(item) + child_obj.update_studio_value( + item, self._studio_log_invalid_types + ) elif self._override_state is OverrideState.STUDIO: if self.had_studio_override: - child_obj.update_studio_value(item) + child_obj.update_studio_value( + item, self._studio_log_invalid_types + ) for child_obj in self.children: child_obj.set_override_state( @@ -466,16 +474,24 @@ class ListEntity(EndpointEntity): for item in value: child_obj = self._add_new_item() - child_obj.update_default_value(item) + child_obj.update_default_value( + item, self._default_log_invalid_types + ) if 
self._override_state is OverrideState.PROJECT: if self.had_project_override: - child_obj.update_project_value(item) + child_obj.update_project_value( + item, self._project_log_invalid_types + ) elif self.had_studio_override: - child_obj.update_studio_value(item) + child_obj.update_studio_value( + item, self._studio_log_invalid_types + ) elif self._override_state is OverrideState.STUDIO: if self.had_studio_override: - child_obj.update_studio_value(item) + child_obj.update_studio_value( + item, self._studio_log_invalid_types + ) child_obj.set_override_state( self._override_state, self._ignore_missing_defaults diff --git a/openpype/settings/entities/schemas/projects_schema/schema_project_slack.json b/openpype/settings/entities/schemas/projects_schema/schema_project_slack.json index 14814d8b01..1a9804cd4f 100644 --- a/openpype/settings/entities/schemas/projects_schema/schema_project_slack.json +++ b/openpype/settings/entities/schemas/projects_schema/schema_project_slack.json @@ -75,6 +75,15 @@ "type": "list", "object_type": "text" }, + { + "type": "number", + "key": "review_upload_limit", + "label": "Upload review maximum file size (MB)", + "decimal": 2, + "default": 50, + "minimum": 0, + "maximum": 1000000 + }, { "type": "separator" }, diff --git a/openpype/settings/entities/schemas/projects_schema/schemas/schema_representation_tags.json b/openpype/settings/entities/schemas/projects_schema/schemas/schema_representation_tags.json index 7607e1a8c1..484fbf9d07 100644 --- a/openpype/settings/entities/schemas/projects_schema/schemas/schema_representation_tags.json +++ b/openpype/settings/entities/schemas/projects_schema/schemas/schema_representation_tags.json @@ -24,6 +24,9 @@ }, { "sequence": "Output as image sequence" + }, + { + "no-audio": "Do not add audio" } ] } diff --git a/openpype/tools/context_dialog/window.py b/openpype/tools/context_dialog/window.py index c8464faa3e..9e030853bf 100644 --- a/openpype/tools/context_dialog/window.py +++ b/openpype/tools/context_dialog/window.py @@ -308,7 +308,6 @@ class ContextDialog(QtWidgets.QDialog): self._validate_strict() def _set_asset_to_tasks_widget(self): - # filter None docs they are silo asset_id = self._assets_widget.get_selected_asset_id() self._tasks_widget.set_asset_id(asset_id) diff --git a/openpype/tools/launcher/actions.py b/openpype/tools/launcher/actions.py index fbaef05261..546bda1c34 100644 --- a/openpype/tools/launcher/actions.py +++ b/openpype/tools/launcher/actions.py @@ -1,6 +1,7 @@ import os -from avalon import api +from Qt import QtWidgets, QtGui + from openpype import PLUGINS_DIR from openpype import style from openpype.api import Logger, resources @@ -8,7 +9,10 @@ from openpype.lib import ( ApplictionExecutableNotFound, ApplicationLaunchFailed ) -from Qt import QtWidgets, QtGui +from openpype.pipeline import ( + LauncherAction, + register_launcher_action_path, +) def register_actions_from_paths(paths): @@ -29,14 +33,15 @@ def register_actions_from_paths(paths): print("Path was not found: {}".format(path)) continue - api.register_plugin_path(api.Action, path) + register_launcher_action_path(path) def register_config_actions(): """Register actions from the configuration for Launcher""" actions_dir = os.path.join(PLUGINS_DIR, "actions") - register_actions_from_paths([actions_dir]) + if os.path.exists(actions_dir): + register_actions_from_paths([actions_dir]) def register_environment_actions(): @@ -46,7 +51,9 @@ def register_environment_actions(): register_actions_from_paths(paths_str.split(os.pathsep)) -class 
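# --- Editor's illustrative sketch (not part of the patch) ---------------------
# The new "review_upload_limit" project setting (labelled in MB, default 50)
# gives the Slack integration an upload ceiling. Hedged sketch of how a plugin
# could enforce it; the exact settings lookup is an assumption for illustration.
import os

def review_fits_upload_limit(filepath, slack_project_settings):
    limit_mb = slack_project_settings.get("review_upload_limit", 50)
    size_mb = os.path.getsize(filepath) / (1024.0 * 1024.0)
    return size_mb <= limit_mb
# ------------------------------------------------------------------------------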
ApplicationAction(api.Action): +# TODO move to 'openpype.pipeline.actions' +# - remove Qt related stuff and implement exceptions to show error in launcher +class ApplicationAction(LauncherAction): """Pype's application launcher Application action based on pype's ApplicationManager system. @@ -74,7 +81,7 @@ class ApplicationAction(api.Action): @property def log(self): if self._log is None: - self._log = Logger().get_logger(self.__class__.__name__) + self._log = Logger.get_logger(self.__class__.__name__) return self._log def is_compatible(self, session): diff --git a/openpype/tools/launcher/lib.py b/openpype/tools/launcher/lib.py index 68c759f295..c1392b7b8f 100644 --- a/openpype/tools/launcher/lib.py +++ b/openpype/tools/launcher/lib.py @@ -1,19 +1,3 @@ -"""Utility script for updating database with configuration files - -Until assets are created entirely in the database, this script -provides a bridge between the file-based project inventory and configuration. - -- Migrating an old project: - $ python -m avalon.inventory --extract --silo-parent=f02_prod - $ python -m avalon.inventory --upload - -- Managing an existing project: - 1. Run `python -m avalon.inventory --load` - 2. Update the .inventory.toml or .config.toml - 3. Run `python -m avalon.inventory --save` - -""" - import os from Qt import QtGui import qtawesome diff --git a/openpype/tools/launcher/models.py b/openpype/tools/launcher/models.py index 85d553fca4..13567e7916 100644 --- a/openpype/tools/launcher/models.py +++ b/openpype/tools/launcher/models.py @@ -8,12 +8,13 @@ import time import appdirs from Qt import QtCore, QtGui import qtawesome -from avalon import api + from openpype.lib import JSONSettingRegistry from openpype.lib.applications import ( CUSTOM_LAUNCH_APP_GROUPS, ApplicationManager ) +from openpype.pipeline import discover_launcher_actions from openpype.tools.utils.lib import ( DynamicQThread, get_project_icon, @@ -68,7 +69,7 @@ class ActionModel(QtGui.QStandardItemModel): def discover(self): """Set up Actions cache. Run this for each new project.""" # Discover all registered actions - actions = api.discover(api.Action) + actions = discover_launcher_actions() # Get available project actions and the application actions app_actions = self.get_application_actions() diff --git a/openpype/tools/libraryloader/app.py b/openpype/tools/libraryloader/app.py index 9f8845f30f..b73b415128 100644 --- a/openpype/tools/libraryloader/app.py +++ b/openpype/tools/libraryloader/app.py @@ -9,14 +9,14 @@ from openpype.tools.loader.widgets import ( ThumbnailWidget, VersionWidget, FamilyListView, - RepresentationWidget + RepresentationWidget, + SubsetWidget ) from openpype.tools.utils.assets_widget import MultiSelectAssetsWidget from openpype.modules import ModulesManager from . import lib -from .widgets import LibrarySubsetWidget module = sys.modules[__name__] module.window = None @@ -92,7 +92,7 @@ class LibraryLoaderWindow(QtWidgets.QDialog): # --- Middle part --- # Subsets widget - subsets_widget = LibrarySubsetWidget( + subsets_widget = SubsetWidget( dbcon, self.groups_config, self.family_config_cache, @@ -448,10 +448,7 @@ class LibraryLoaderWindow(QtWidgets.QDialog): def _set_context(self, context, refresh=True): """Set the selection in the interface using a context. The context must contain `asset` data by name. - Note: Prior to setting context ensure `refresh` is triggered so that - the "silos" are listed correctly, aside from that setting the - context will force a refresh further down because it changes - the active silo and asset. 
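# --- Editor's illustrative sketch (not part of the patch) ---------------------
# Launcher actions now go through openpype.pipeline instead of avalon.api.
# A sketch of the registration/discovery flow using only the calls shown in
# this diff; ``plugin_dirs`` and ``session`` are hypothetical inputs.
import os
from openpype.pipeline import (
    register_launcher_action_path,
    discover_launcher_actions,
)

def collect_compatible_actions(plugin_dirs, session):
    for path in plugin_dirs:
        if os.path.exists(path):
            register_launcher_action_path(path)
    # Mirror ActionModel.discover(): keep only actions that report themselves
    # compatible with the current session.
    return [
        cls for cls in discover_launcher_actions()
        if cls().is_compatible(session)
    ]
# ------------------------------------------------------------------------------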
+ Args: context (dict): The context to apply. Returns: @@ -463,12 +460,6 @@ class LibraryLoaderWindow(QtWidgets.QDialog): return if refresh: - # Workaround: - # Force a direct (non-scheduled) refresh prior to setting the - # asset widget's silo and asset selection to ensure it's correctly - # displaying the silo tabs. Calling `window.refresh()` and directly - # `window.set_context()` the `set_context()` seems to override the - # scheduled refresh and the silo tabs are not shown. self._refresh_assets() self._assets_widget.select_asset_by_name(asset_name) diff --git a/openpype/tools/libraryloader/lib.py b/openpype/tools/libraryloader/lib.py index 6a497a6a16..182b48893a 100644 --- a/openpype/tools/libraryloader/lib.py +++ b/openpype/tools/libraryloader/lib.py @@ -1,7 +1,6 @@ import os import importlib import logging -from openpype.api import Anatomy log = logging.getLogger(__name__) @@ -20,14 +19,3 @@ def find_config(): log.info("Found %s, loading.." % config) return importlib.import_module(config) - - -class RegisteredRoots: - roots_per_project = {} - - @classmethod - def registered_root(cls, project_name): - if project_name not in cls.roots_per_project: - cls.roots_per_project[project_name] = Anatomy(project_name).roots - - return cls.roots_per_project[project_name] diff --git a/openpype/tools/libraryloader/widgets.py b/openpype/tools/libraryloader/widgets.py deleted file mode 100644 index 45f9ea2048..0000000000 --- a/openpype/tools/libraryloader/widgets.py +++ /dev/null @@ -1,18 +0,0 @@ -from Qt import QtWidgets - -from .lib import RegisteredRoots -from openpype.tools.loader.widgets import SubsetWidget - - -class LibrarySubsetWidget(SubsetWidget): - def on_copy_source(self): - """Copy formatted source path to clipboard""" - source = self.data.get("source", None) - if not source: - return - - project_name = self.dbcon.Session["AVALON_PROJECT"] - root = RegisteredRoots.registered_root(project_name) - path = source.format(root=root) - clipboard = QtWidgets.QApplication.clipboard() - clipboard.setText(path) diff --git a/openpype/tools/loader/app.py b/openpype/tools/loader/app.py index d73a977ac6..923a1fabdb 100644 --- a/openpype/tools/loader/app.py +++ b/openpype/tools/loader/app.py @@ -290,7 +290,6 @@ class LoaderWindow(QtWidgets.QDialog): subsets_model.clear() self.clear_assets_underlines() - # filter None docs they are silo asset_ids = self._assets_widget.get_selected_asset_ids() # Start loading subsets_widget.set_loading_state( @@ -381,17 +380,9 @@ class LoaderWindow(QtWidgets.QDialog): The context must contain `asset` data by name. - Note: Prior to setting context ensure `refresh` is triggered so that - the "silos" are listed correctly, aside from that setting the - context will force a refresh further down because it changes - the active silo and asset. - Args: context (dict): The context to apply. - - Returns: - None - + refrest (bool): Trigger refresh on context set. """ asset = context.get("asset", None) @@ -399,12 +390,6 @@ class LoaderWindow(QtWidgets.QDialog): return if refresh: - # Workaround: - # Force a direct (non-scheduled) refresh prior to setting the - # asset widget's silo and asset selection to ensure it's correctly - # displaying the silo tabs. Calling `window.refresh()` and directly - # `window.set_context()` the `set_context()` seems to override the - # scheduled refresh and the silo tabs are not shown. 
self._refresh() self._assets_widget.select_asset_by_name(asset) diff --git a/openpype/tools/loader/widgets.py b/openpype/tools/loader/widgets.py index b14bdd0e93..42fb62b632 100644 --- a/openpype/tools/loader/widgets.py +++ b/openpype/tools/loader/widgets.py @@ -7,9 +7,9 @@ import collections from Qt import QtWidgets, QtCore, QtGui -from avalon import api, pipeline - +from openpype.api import Anatomy from openpype.pipeline import HeroVersionType +from openpype.pipeline.thumbnail import get_thumbnail_binary from openpype.pipeline.load import ( discover_loader_plugins, SubsetLoaderPlugin, @@ -640,6 +640,7 @@ class VersionTextEdit(QtWidgets.QTextEdit): "source": None, "raw": None } + self._anatomy = None # Reset self.set_version(None) @@ -730,20 +731,20 @@ class VersionTextEdit(QtWidgets.QTextEdit): # Add additional actions when any text so we can assume # the version is set. if self.toPlainText().strip(): - menu.addSeparator() - action = QtWidgets.QAction("Copy source path to clipboard", - menu) + action = QtWidgets.QAction( + "Copy source path to clipboard", menu + ) action.triggered.connect(self.on_copy_source) menu.addAction(action) - action = QtWidgets.QAction("Copy raw data to clipboard", - menu) + action = QtWidgets.QAction( + "Copy raw data to clipboard", menu + ) action.triggered.connect(self.on_copy_raw) menu.addAction(action) menu.exec_(event.globalPos()) - del menu def on_copy_source(self): """Copy formatted source path to clipboard""" @@ -751,7 +752,11 @@ class VersionTextEdit(QtWidgets.QTextEdit): if not source: return - path = source.format(root=api.registered_root()) + project_name = self.dbcon.Session["AVALON_PROJECT"] + if self._anatomy is None or self._anatomy.project_name != project_name: + self._anatomy = Anatomy(project_name) + + path = source.format(root=self._anatomy.roots) clipboard = QtWidgets.QApplication.clipboard() clipboard.setText(path) @@ -771,7 +776,6 @@ class VersionTextEdit(QtWidgets.QTextEdit): class ThumbnailWidget(QtWidgets.QLabel): - aspect_ratio = (16, 9) max_width = 300 @@ -863,7 +867,7 @@ class ThumbnailWidget(QtWidgets.QLabel): if not thumbnail_ent: return - thumbnail_bin = pipeline.get_thumbnail_binary( + thumbnail_bin = get_thumbnail_binary( thumbnail_ent, "thumbnail", self.dbcon ) if not thumbnail_bin: diff --git a/openpype/tools/mayalookassigner/commands.py b/openpype/tools/mayalookassigner/commands.py index df72e41354..78fd51c7a3 100644 --- a/openpype/tools/mayalookassigner/commands.py +++ b/openpype/tools/mayalookassigner/commands.py @@ -2,6 +2,7 @@ from collections import defaultdict import logging import os +from bson.objectid import ObjectId import maya.cmds as cmds from avalon import io, api @@ -157,7 +158,7 @@ def create_items_from_nodes(nodes): return asset_view_items for _id, id_nodes in id_hashes.items(): - asset = io.find_one({"_id": io.ObjectId(_id)}, + asset = io.find_one({"_id": ObjectId(_id)}, projection={"name": True}) # Skip if asset id is not found diff --git a/openpype/tools/mayalookassigner/vray_proxies.py b/openpype/tools/mayalookassigner/vray_proxies.py index 6a9347449a..25621fc652 100644 --- a/openpype/tools/mayalookassigner/vray_proxies.py +++ b/openpype/tools/mayalookassigner/vray_proxies.py @@ -6,6 +6,7 @@ import logging import json import six +from bson.objectid import ObjectId import alembic.Abc from maya import cmds @@ -231,7 +232,7 @@ def get_latest_version(asset_id, subset): """ subset = io.find_one({"name": subset, - "parent": io.ObjectId(asset_id), + "parent": ObjectId(asset_id), "type": "subset"}) if not subset: 
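# --- Editor's illustrative sketch (not part of the patch) ---------------------
# "Copy source path" now resolves the stored template against the project's
# Anatomy roots instead of avalon's registered_root(). Minimal stand-alone
# sketch; the example source string is hypothetical.
from openpype.api import Anatomy

def format_source_path(source, project_name):
    # e.g. source == "{root[work]}/myproject/assets/hero/publish/scene.ma"
    anatomy = Anatomy(project_name)
    return source.format(root=anatomy.roots)
# ------------------------------------------------------------------------------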
raise RuntimeError("Subset does not exist: %s" % subset) diff --git a/openpype/tools/sceneinventory/model.py b/openpype/tools/sceneinventory/model.py index 7173ae751e..091d6ca925 100644 --- a/openpype/tools/sceneinventory/model.py +++ b/openpype/tools/sceneinventory/model.py @@ -5,6 +5,7 @@ from collections import defaultdict from Qt import QtCore, QtGui import qtawesome +from bson.objectid import ObjectId from avalon import api, io, schema from openpype.pipeline import HeroVersionType @@ -299,7 +300,7 @@ class InventoryModel(TreeModel): for repre_id, group_dict in sorted(grouped.items()): group_items = group_dict["items"] # Get parenthood per group - representation = io.find_one({"_id": io.ObjectId(repre_id)}) + representation = io.find_one({"_id": ObjectId(repre_id)}) if not representation: not_found["representation"].append(group_items) not_found_ids.append(repre_id) diff --git a/openpype/tools/sceneinventory/switch_dialog.py b/openpype/tools/sceneinventory/switch_dialog.py index 0e7b1b759a..bb3e2615ac 100644 --- a/openpype/tools/sceneinventory/switch_dialog.py +++ b/openpype/tools/sceneinventory/switch_dialog.py @@ -2,12 +2,14 @@ import collections import logging from Qt import QtWidgets, QtCore import qtawesome +from bson.objectid import ObjectId -from avalon import io, pipeline -from openpype.pipeline import ( +from avalon import io +from openpype.pipeline.load import ( discover_loader_plugins, switch_container, get_repres_contexts, + loaders_from_repre_context, ) from .widgets import ( @@ -146,7 +148,7 @@ class SwitchAssetDialog(QtWidgets.QDialog): repre_ids = set() content_loaders = set() for item in self._items: - repre_ids.add(io.ObjectId(item["representation"])) + repre_ids.add(ObjectId(item["representation"])) content_loaders.add(item["loader"]) repres = list(io.find({ @@ -369,7 +371,7 @@ class SwitchAssetDialog(QtWidgets.QDialog): loaders = None for repre_context in repre_contexts.values(): - _loaders = set(pipeline.loaders_from_repre_context( + _loaders = set(loaders_from_repre_context( available_loaders, repre_context )) if loaders is None: @@ -1306,7 +1308,7 @@ class SwitchAssetDialog(QtWidgets.QDialog): repre_docs_by_parent_id_by_name[parent_id][name] = repre_doc for container in self._items: - container_repre_id = io.ObjectId(container["representation"]) + container_repre_id = ObjectId(container["representation"]) container_repre = self.content_repres[container_repre_id] container_repre_name = container_repre["name"] diff --git a/openpype/tools/sceneinventory/view.py b/openpype/tools/sceneinventory/view.py index c38390c614..2df6d00406 100644 --- a/openpype/tools/sceneinventory/view.py +++ b/openpype/tools/sceneinventory/view.py @@ -4,14 +4,16 @@ from functools import partial from Qt import QtWidgets, QtCore import qtawesome +from bson.objectid import ObjectId -from avalon import io, api +from avalon import io from openpype import style from openpype.pipeline import ( HeroVersionType, update_container, remove_container, + discover_inventory_actions, ) from openpype.modules import ModulesManager from openpype.tools.utils.lib import ( @@ -78,7 +80,7 @@ class SceneInventoryView(QtWidgets.QTreeView): repre_ids = [] for item in items: - item_id = io.ObjectId(item["representation"]) + item_id = ObjectId(item["representation"]) if item_id not in repre_ids: repre_ids.append(item_id) @@ -145,7 +147,7 @@ class SceneInventoryView(QtWidgets.QTreeView): def _on_switch_to_versioned(items): repre_ids = [] for item in items: - item_id = io.ObjectId(item["representation"]) + item_id = 
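# --- Editor's illustrative sketch (not part of the patch) ---------------------
# Across these tools the ObjectId class is now imported from bson directly
# rather than through avalon.io; behaviour is unchanged. Tiny usage sketch:
from bson.objectid import ObjectId
from avalon import io

def find_representation(repre_id):
    # ``repre_id`` is the 24-character hex id stored on a loaded container.
    return io.find_one({"_id": ObjectId(repre_id)})
# ------------------------------------------------------------------------------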
ObjectId(item["representation"]) if item_id not in repre_ids: repre_ids.append(item_id) @@ -195,7 +197,7 @@ class SceneInventoryView(QtWidgets.QTreeView): version_doc["name"] for item in items: - repre_id = io.ObjectId(item["representation"]) + repre_id = ObjectId(item["representation"]) version_id = version_id_by_repre_id.get(repre_id) version_name = version_name_by_id.get(version_id) if version_name is not None: @@ -487,7 +489,7 @@ class SceneInventoryView(QtWidgets.QTreeView): containers = containers or [dict()] # Check which action will be available in the menu - Plugins = api.discover(api.InventoryAction) + Plugins = discover_inventory_actions() compatible = [p() for p in Plugins if any(p.is_compatible(c) for c in containers)] @@ -658,7 +660,7 @@ class SceneInventoryView(QtWidgets.QTreeView): active = items[-1] # Get available versions for active representation - representation_id = io.ObjectId(active["representation"]) + representation_id = ObjectId(active["representation"]) representation = io.find_one({"_id": representation_id}) version = io.find_one({ "_id": representation["parent"] diff --git a/openpype/tools/settings/settings/wrapper_widgets.py b/openpype/tools/settings/settings/wrapper_widgets.py index 7370fcf945..b14a226912 100644 --- a/openpype/tools/settings/settings/wrapper_widgets.py +++ b/openpype/tools/settings/settings/wrapper_widgets.py @@ -92,7 +92,8 @@ class CollapsibleWrapper(WrapperWidget): self.content_layout = content_layout if self.collapsible: - body_widget.toggle_content(self.collapsed) + if not self.collapsed: + body_widget.toggle_content() else: body_widget.hide_toolbox(hide_content=False) diff --git a/openpype/tools/standalonepublish/widgets/model_asset.py b/openpype/tools/standalonepublish/widgets/model_asset.py index a7316a2aa7..02e9073555 100644 --- a/openpype/tools/standalonepublish/widgets/model_asset.py +++ b/openpype/tools/standalonepublish/widgets/model_asset.py @@ -35,7 +35,7 @@ def _iter_model_rows(model, class AssetModel(TreeModel): - """A model listing assets in the silo in the active project. + """A model listing assets in the active project. The assets are displayed in a treeview, they are visually parented by a `visualParent` field in the database containing an `_id` to a parent @@ -64,7 +64,7 @@ class AssetModel(TreeModel): self.refresh() - def _add_hierarchy(self, assets, parent=None, silos=None): + def _add_hierarchy(self, assets, parent=None): """Add the assets that are related to the parent as children items. This method does *not* query the database. These instead are queried @@ -72,27 +72,8 @@ class AssetModel(TreeModel): queries. Resulting in up to 10x speed increase. Args: - assets (dict): All assets in the currently active silo stored - by key/value - - Returns: - None - + assets (dict): All assets from current project. 
""" - if silos: - # WARNING: Silo item "_id" is set to silo value - # mainly because GUI issue with preserve selection and expanded row - # and because of easier hierarchy parenting (in "assets") - for silo in silos: - node = Node({ - "_id": silo, - "name": silo, - "label": silo, - "type": "silo" - }) - self.add_child(node, parent=parent) - self._add_hierarchy(assets, parent=node) - parent_id = parent["_id"] if parent else None current_assets = assets.get(parent_id, list()) @@ -132,27 +113,19 @@ class AssetModel(TreeModel): self.beginResetModel() - # Get all assets in current silo sorted by name + # Get all assets in current project sorted by name db_assets = self.dbcon.find({"type": "asset"}).sort("name", 1) - silos = db_assets.distinct("silo") or None - # if any silo is set to None then it's expected it should not be used - if silos and None in silos: - silos = None # Group the assets by their visual parent's id assets_by_parent = collections.defaultdict(list) for asset in db_assets: - parent_id = ( - asset.get("data", {}).get("visualParent") or - asset.get("silo") - ) + parent_id = asset.get("data", {}).get("visualParent") assets_by_parent[parent_id].append(asset) # Build the hierarchical tree items recursively self._add_hierarchy( assets_by_parent, - parent=None, - silos=silos + parent=None ) self.endResetModel() @@ -174,8 +147,6 @@ class AssetModel(TreeModel): # Allow a custom icon and custom icon color to be defined data = node.get("_document", {}).get("data", {}) icon = data.get("icon", None) - if icon is None and node.get("type") == "silo": - icon = "database" color = data.get("color", self._default_asset_icon_color) if icon is None: diff --git a/openpype/tools/standalonepublish/widgets/widget_asset.py b/openpype/tools/standalonepublish/widgets/widget_asset.py index e6b74f8f82..8b43cd7cf8 100644 --- a/openpype/tools/standalonepublish/widgets/widget_asset.py +++ b/openpype/tools/standalonepublish/widgets/widget_asset.py @@ -229,7 +229,6 @@ class AssetWidget(QtWidgets.QWidget): data = { 'project': project['name'], 'asset': asset['name'], - 'silo': asset.get("silo"), 'parents': self.get_parents(asset), 'task': task } diff --git a/openpype/tools/texture_copy/app.py b/openpype/tools/texture_copy/app.py index ceca98a082..0c3c260e51 100644 --- a/openpype/tools/texture_copy/app.py +++ b/openpype/tools/texture_copy/app.py @@ -57,7 +57,6 @@ class TextureCopy: "name": project_name, "code": project['data']['code'] }, - "silo": asset.get('silo'), "asset": asset['name'], "family": 'texture', "subset": 'Main', @@ -155,7 +154,6 @@ def texture_copy(asset, project, path): t.echo(">>> Initializing avalon session ...") os.environ["AVALON_PROJECT"] = project os.environ["AVALON_ASSET"] = asset - os.environ["AVALON_SILO"] = "" TextureCopy().process(asset, project, path) diff --git a/openpype/tools/utils/delegates.py b/openpype/tools/utils/delegates.py index d3718b1734..71f817a1d7 100644 --- a/openpype/tools/utils/delegates.py +++ b/openpype/tools/utils/delegates.py @@ -287,9 +287,5 @@ class PrettyTimeDelegate(QtWidgets.QStyledItemDelegate): """ def displayText(self, value, locale): - - if value is None: - # Ignore None value - return - - return pretty_timestamp(value) + if value is not None: + return pretty_timestamp(value) diff --git a/openpype/tools/workfiles/__init__.py b/openpype/tools/workfiles/__init__.py index cde7293931..5fbc71797d 100644 --- a/openpype/tools/workfiles/__init__.py +++ b/openpype/tools/workfiles/__init__.py @@ -1,9 +1,12 @@ +from .window import Window from .app import ( show, - 
Window + validate_host_requirements, ) __all__ = [ + "Window", + "show", - "Window" + "validate_host_requirements", ] diff --git a/openpype/tools/workfiles/app.py b/openpype/tools/workfiles/app.py index 713992bc4b..f0e7900cf5 100644 --- a/openpype/tools/workfiles/app.py +++ b/openpype/tools/workfiles/app.py @@ -1,40 +1,10 @@ import sys -import os -import re -import copy -import shutil import logging -import datetime -import Qt -from Qt import QtWidgets, QtCore -from avalon import io, api +from avalon import api -from openpype import style -from openpype.tools.utils.lib import ( - qt_app_context -) -from openpype.tools.utils import PlaceholderLineEdit -from openpype.tools.utils.assets_widget import SingleSelectAssetsWidget -from openpype.tools.utils.tasks_widget import TasksWidget -from openpype.tools.utils.delegates import PrettyTimeDelegate -from openpype.lib import ( - emit_event, - Anatomy, - get_workfile_doc, - create_workfile_doc, - save_workfile_data_to_doc, - get_workfile_template_key, - create_workdir_extra_folders, - get_workdir_data, - get_last_workfile_with_version -) -from openpype.lib.avalon_context import ( - update_current_task, - compute_session_changes -) -from .model import FilesModel -from .view import FilesView +from openpype.tools.utils import qt_app_context +from .window import Window log = logging.getLogger(__name__) @@ -42,1181 +12,6 @@ module = sys.modules[__name__] module.window = None -def build_workfile_data(session): - """Get the data required for workfile formatting from avalon `session`""" - - # Set work file data for template formatting - asset_name = session["AVALON_ASSET"] - task_name = session["AVALON_TASK"] - host_name = session["AVALON_APP"] - project_doc = io.find_one( - {"type": "project"}, - { - "name": True, - "data.code": True, - "config.tasks": True, - } - ) - - asset_doc = io.find_one( - { - "type": "asset", - "name": asset_name - }, - { - "name": True, - "data.tasks": True, - "data.parents": True - } - ) - data = get_workdir_data(project_doc, asset_doc, task_name, host_name) - data.update({ - "version": 1, - "comment": "", - "ext": None - }) - - return data - - -class CommentMatcher(object): - """Use anatomy and work file data to parse comments from filenames""" - def __init__(self, anatomy, template_key, data): - - self.fname_regex = None - - template = anatomy.templates[template_key]["file"] - if "{comment}" not in template: - # Don't look for comment if template doesn't allow it - return - - # Create a regex group for extensions - extensions = api.registered_host().file_extensions() - any_extension = "(?:{})".format( - "|".join(re.escape(ext[1:]) for ext in extensions) - ) - - # Use placeholders that will never be in the filename - temp_data = copy.deepcopy(data) - temp_data["comment"] = "<>" - temp_data["version"] = "<>" - temp_data["ext"] = "<>" - - formatted = anatomy.format(temp_data) - fname_pattern = formatted[template_key]["file"] - fname_pattern = re.escape(fname_pattern) - - # Replace comment and version with something we can match with regex - replacements = { - "<>": "(.+)", - "<>": "[0-9]+", - "<>": any_extension, - } - for src, dest in replacements.items(): - fname_pattern = fname_pattern.replace(re.escape(src), dest) - - # Match from beginning to end of string to be safe - fname_pattern = "^{}$".format(fname_pattern) - - self.fname_regex = re.compile(fname_pattern) - - def parse_comment(self, filepath): - """Parse the {comment} part from a filename""" - if not self.fname_regex: - return - - fname = os.path.basename(filepath) - 
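# --- Editor's illustrative sketch (not part of the patch) ---------------------
# The workfiles package now re-exports the window class and the host validation
# helper next to ``show``. Hedged usage sketch based on the signatures visible
# in this diff:
from avalon import api
from openpype.tools.workfiles import show, validate_host_requirements

def open_workfiles_tool():
    validate_host_requirements(api.registered_host())
    show(use_context=True, save=True)
# ------------------------------------------------------------------------------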
match = self.fname_regex.match(fname) - if match: - return match.group(1) - - -class SubversionLineEdit(QtWidgets.QWidget): - """QLineEdit with QPushButton for drop down selection of list of strings""" - def __init__(self, parent=None): - super(SubversionLineEdit, self).__init__(parent=parent) - - layout = QtWidgets.QHBoxLayout(self) - layout.setContentsMargins(0, 0, 0, 0) - layout.setSpacing(3) - - self._input = PlaceholderLineEdit() - self._button = QtWidgets.QPushButton("") - self._button.setFixedWidth(18) - self._menu = QtWidgets.QMenu(self) - self._button.setMenu(self._menu) - - layout.addWidget(self._input) - layout.addWidget(self._button) - - @property - def input(self): - return self._input - - def set_values(self, values): - self._update(values) - - def _on_button_clicked(self): - self._menu.exec_() - - def _on_action_clicked(self, action): - self._input.setText(action.text()) - - def _update(self, values): - """Create optional predefined subset names - - Args: - default_names(list): all predefined names - - Returns: - None - """ - - menu = self._menu - button = self._button - - state = any(values) - button.setEnabled(state) - if state is False: - return - - # Include an empty string - values = [""] + sorted(values) - - # Get and destroy the action group - group = button.findChild(QtWidgets.QActionGroup) - if group: - group.deleteLater() - - # Build new action group - group = QtWidgets.QActionGroup(button) - for name in values: - action = group.addAction(name) - menu.addAction(action) - - group.triggered.connect(self._on_action_clicked) - - -class NameWindow(QtWidgets.QDialog): - """Name Window to define a unique filename inside a root folder - - The filename will be based on the "workfile" template defined in the - project["config"]["template"]. 
- - """ - - def __init__(self, parent, root, anatomy, template_key, session=None): - super(NameWindow, self).__init__(parent=parent) - self.setWindowFlags(self.windowFlags() | QtCore.Qt.FramelessWindowHint) - - self.result = None - self.host = api.registered_host() - self.root = root - self.work_file = None - - if not session: - # Fallback to active session - session = api.Session - - self.data = build_workfile_data(session) - - # Store project anatomy - self.anatomy = anatomy - self.template = anatomy.templates[template_key]["file"] - self.template_key = template_key - - # Btns widget - btns_widget = QtWidgets.QWidget(self) - - btn_ok = QtWidgets.QPushButton("Ok", btns_widget) - btn_cancel = QtWidgets.QPushButton("Cancel", btns_widget) - - btns_layout = QtWidgets.QHBoxLayout(btns_widget) - btns_layout.addWidget(btn_ok) - btns_layout.addWidget(btn_cancel) - - # Inputs widget - inputs_widget = QtWidgets.QWidget(self) - - # Version widget - version_widget = QtWidgets.QWidget(inputs_widget) - - # Version number input - version_input = QtWidgets.QSpinBox(version_widget) - version_input.setMinimum(1) - version_input.setMaximum(9999) - - # Last version checkbox - last_version_check = QtWidgets.QCheckBox( - "Next Available Version", version_widget - ) - last_version_check.setChecked(True) - - version_layout = QtWidgets.QHBoxLayout(version_widget) - version_layout.setContentsMargins(0, 0, 0, 0) - version_layout.addWidget(version_input) - version_layout.addWidget(last_version_check) - - # Preview widget - preview_label = QtWidgets.QLabel("Preview filename", inputs_widget) - - # Subversion input - subversion = SubversionLineEdit(inputs_widget) - subversion.input.setPlaceholderText("Will be part of filename.") - - # Extensions combobox - ext_combo = QtWidgets.QComboBox(inputs_widget) - # Add styled delegate to use stylesheets - ext_delegate = QtWidgets.QStyledItemDelegate() - ext_combo.setItemDelegate(ext_delegate) - ext_combo.addItems(self.host.file_extensions()) - - # Build inputs - inputs_layout = QtWidgets.QFormLayout(inputs_widget) - # Add version only if template contains version key - # - since the version can be padded with "{version:0>4}" we only search - # for "{version". - if "{version" in self.template: - inputs_layout.addRow("Version:", version_widget) - else: - version_widget.setVisible(False) - - # Add subversion only if template contains `{comment}` - if "{comment}" in self.template: - inputs_layout.addRow("Subversion:", subversion) - - # Detect whether a {comment} is in the current filename - if so, - # preserve it by default and set it in the comment/subversion field - current_filepath = self.host.current_file() - if current_filepath: - # We match the current filename against the current session - # instead of the session where the user is saving to. 
- current_data = build_workfile_data(api.Session) - matcher = CommentMatcher(anatomy, template_key, current_data) - comment = matcher.parse_comment(current_filepath) - if comment: - log.info("Detected subversion comment: {}".format(comment)) - self.data["comment"] = comment - subversion.input.setText(comment) - - existing_comments = self.get_existing_comments() - subversion.set_values(existing_comments) - - else: - subversion.setVisible(False) - inputs_layout.addRow("Extension:", ext_combo) - inputs_layout.addRow("Preview:", preview_label) - - # Build layout - main_layout = QtWidgets.QVBoxLayout(self) - main_layout.addWidget(inputs_widget) - main_layout.addWidget(btns_widget) - - # Signal callback registration - version_input.valueChanged.connect(self.on_version_spinbox_changed) - last_version_check.stateChanged.connect( - self.on_version_checkbox_changed - ) - - subversion.input.textChanged.connect(self.on_comment_changed) - ext_combo.currentIndexChanged.connect(self.on_extension_changed) - - btn_ok.pressed.connect(self.on_ok_pressed) - btn_cancel.pressed.connect(self.on_cancel_pressed) - - # Allow "Enter" key to accept the save. - btn_ok.setDefault(True) - - # Force default focus to comment, some hosts didn't automatically - # apply focus to this line edit (e.g. Houdini) - subversion.input.setFocus() - - # Store widgets - self.btn_ok = btn_ok - - self.version_widget = version_widget - - self.version_input = version_input - self.last_version_check = last_version_check - - self.preview_label = preview_label - self.subversion = subversion - self.ext_combo = ext_combo - self._ext_delegate = ext_delegate - - self.refresh() - - def get_existing_comments(self): - - matcher = CommentMatcher(self.anatomy, self.template_key, self.data) - host_extensions = set(self.host.file_extensions()) - comments = set() - if os.path.isdir(self.root): - for fname in os.listdir(self.root): - if not os.path.isfile(os.path.join(self.root, fname)): - continue - - ext = os.path.splitext(fname)[-1] - if ext not in host_extensions: - continue - - comment = matcher.parse_comment(fname) - if comment: - comments.add(comment) - - return list(comments) - - def on_version_spinbox_changed(self, value): - self.data["version"] = value - self.refresh() - - def on_version_checkbox_changed(self, _value): - self.refresh() - - def on_comment_changed(self, text): - self.data["comment"] = text - self.refresh() - - def on_extension_changed(self): - ext = self.ext_combo.currentText() - if ext == self.data["ext"]: - return - self.data["ext"] = ext - self.refresh() - - def on_ok_pressed(self): - self.result = self.work_file - self.close() - - def on_cancel_pressed(self): - self.close() - - def get_result(self): - return self.result - - def get_work_file(self): - data = copy.deepcopy(self.data) - if not data["comment"]: - data.pop("comment", None) - - data["ext"] = data["ext"][1:] - - anatomy_filled = self.anatomy.format(data) - return anatomy_filled[self.template_key]["file"] - - def refresh(self): - extensions = self.host.file_extensions() - extension = self.data["ext"] - if extension is None: - # Define saving file extension - current_file = self.host.current_file() - if current_file: - # Match the extension of current file - _, extension = os.path.splitext(current_file) - else: - extension = extensions[0] - - if extension != self.data["ext"]: - self.data["ext"] = extension - index = self.ext_combo.findText( - extension, QtCore.Qt.MatchFixedString - ) - if index >= 0: - self.ext_combo.setCurrentIndex(index) - - if not 
self.last_version_check.isChecked(): - self.version_input.setEnabled(True) - self.data["version"] = self.version_input.value() - - work_file = self.get_work_file() - - else: - self.version_input.setEnabled(False) - - data = copy.deepcopy(self.data) - template = str(self.template) - - if not data["comment"]: - data.pop("comment", None) - - data["ext"] = data["ext"][1:] - - version = get_last_workfile_with_version( - self.root, template, data, extensions - )[1] - - if version is None: - version = 1 - else: - version += 1 - - found_valid_version = False - # Check if next version is valid version and give a chance to try - # next 100 versions - for idx in range(100): - # Store version to data - self.data["version"] = version - - work_file = self.get_work_file() - # Safety check - path = os.path.join(self.root, work_file) - if not os.path.exists(path): - found_valid_version = True - break - - # Try next version - version += 1 - # Log warning - if idx == 0: - log.warning(( - "BUG: Function `get_last_workfile_with_version` " - "didn't return last version." - )) - # Raise exception if even 100 version fallback didn't help - if not found_valid_version: - raise AssertionError( - "This is a bug. Couldn't find valid version!" - ) - - self.work_file = work_file - - path_exists = os.path.exists(os.path.join(self.root, work_file)) - - self.btn_ok.setEnabled(not path_exists) - - if path_exists: - self.preview_label.setText( - "Cannot create \"{0}\" because file exists!" - "".format(work_file) - ) - else: - self.preview_label.setText( - "{0}".format(work_file) - ) - - -class FilesWidget(QtWidgets.QWidget): - """A widget displaying files that allows to save and open files.""" - file_selected = QtCore.Signal(str) - workfile_created = QtCore.Signal(str) - file_opened = QtCore.Signal() - - def __init__(self, parent=None): - super(FilesWidget, self).__init__(parent=parent) - - # Setup - self._asset_id = None - self._asset_doc = None - self._task_name = None - self._task_type = None - - # Pype's anatomy object for current project - self.anatomy = Anatomy(io.Session["AVALON_PROJECT"]) - # Template key used to get work template from anatomy templates - self.template_key = "work" - - # This is not root but workfile directory - self._workfiles_root = None - self._workdir_path = None - self.host = api.registered_host() - - # Whether to automatically select the latest modified - # file on a refresh of the files model. - self.auto_select_latest_modified = True - - # Avoid crash in Blender and store the message box - # (setting parent doesn't work as it hides the message box) - self._messagebox = None - - files_view = FilesView(self) - - # Create the Files model - extensions = set(self.host.file_extensions()) - files_model = FilesModel(file_extensions=extensions) - - # Create proxy model for files to be able sort and filter - proxy_model = QtCore.QSortFilterProxyModel() - proxy_model.setSourceModel(files_model) - proxy_model.setDynamicSortFilter(True) - proxy_model.setSortCaseSensitivity(QtCore.Qt.CaseInsensitive) - - # Set up the file list tree view - files_view.setModel(proxy_model) - files_view.setSortingEnabled(True) - files_view.setContextMenuPolicy(QtCore.Qt.CustomContextMenu) - - # Date modified delegate - time_delegate = PrettyTimeDelegate() - files_view.setItemDelegateForColumn(1, time_delegate) - files_view.setIndentation(3) # smaller indentation - - # Default to a wider first filename column it is what we mostly care - # about and the date modified is relatively small anyway. 
- files_view.setColumnWidth(0, 330) - - # Filtering input - filter_input = PlaceholderLineEdit(self) - filter_input.setPlaceholderText("Filter files..") - filter_input.textChanged.connect(proxy_model.setFilterFixedString) - - # Home Page - # Build buttons widget for files widget - btns_widget = QtWidgets.QWidget(self) - btn_save = QtWidgets.QPushButton("Save As", btns_widget) - btn_browse = QtWidgets.QPushButton("Browse", btns_widget) - btn_open = QtWidgets.QPushButton("Open", btns_widget) - - btns_layout = QtWidgets.QHBoxLayout(btns_widget) - btns_layout.setContentsMargins(0, 0, 0, 0) - btns_layout.addWidget(btn_open) - btns_layout.addWidget(btn_browse) - btns_layout.addWidget(btn_save) - - # Build files widgets for home page - main_layout = QtWidgets.QVBoxLayout(self) - main_layout.setContentsMargins(0, 0, 0, 0) - main_layout.addWidget(filter_input) - main_layout.addWidget(files_view) - main_layout.addWidget(btns_widget) - - # Register signal callbacks - files_view.doubleClickedLeft.connect(self.on_open_pressed) - files_view.customContextMenuRequested.connect(self.on_context_menu) - files_view.selectionModel().selectionChanged.connect( - self.on_file_select - ) - - btn_open.pressed.connect(self.on_open_pressed) - btn_browse.pressed.connect(self.on_browse_pressed) - btn_save.pressed.connect(self.on_save_as_pressed) - - # Store attributes - self.time_delegate = time_delegate - - self.filter_input = filter_input - - self.files_view = files_view - self.files_model = files_model - - self.btns_widget = btns_widget - self.btn_open = btn_open - self.btn_browse = btn_browse - self.btn_save = btn_save - - def set_asset_task(self, asset_id, task_name, task_type): - if asset_id != self._asset_id: - self._asset_doc = None - self._asset_id = asset_id - self._task_name = task_name - self._task_type = task_type - - # Define a custom session so we can query the work root - # for a "Work area" that is not our current Session. - # This way we can browse it even before we enter it. 
- if self._asset_id and self._task_name and self._task_type: - session = self._get_session() - self._workdir_path = session["AVALON_WORKDIR"] - self._workfiles_root = self.host.work_root(session) - self.files_model.set_root(self._workfiles_root) - - else: - self.files_model.set_root(None) - - # Disable/Enable buttons based on available files in model - has_filenames = self.files_model.has_filenames() - self.btn_browse.setEnabled(has_filenames) - self.btn_open.setEnabled(has_filenames) - if not has_filenames: - # Manually trigger file selection - self.on_file_select() - - def _get_asset_doc(self): - if self._asset_id is None: - return None - - if self._asset_doc is None: - self._asset_doc = io.find_one({"_id": self._asset_id}) - return self._asset_doc - - def _get_session(self): - """Return a modified session for the current asset and task""" - - session = api.Session.copy() - self.template_key = get_workfile_template_key( - self._task_type, - session["AVALON_APP"], - project_name=session["AVALON_PROJECT"] - ) - changes = compute_session_changes( - session, - asset=self._get_asset_doc(), - task=self._task_name, - template_key=self.template_key - ) - session.update(changes) - - return session - - def _enter_session(self): - """Enter the asset and task session currently selected""" - - session = api.Session.copy() - changes = compute_session_changes( - session, - asset=self._get_asset_doc(), - task=self._task_name, - template_key=self.template_key - ) - if not changes: - # Return early if we're already in the right Session context - # to avoid any unwanted Task Changed callbacks to be triggered. - return - - update_current_task( - asset=self._get_asset_doc(), - task=self._task_name, - template_key=self.template_key - ) - - def open_file(self, filepath): - host = self.host - if host.has_unsaved_changes(): - result = self.save_changes_prompt() - if result is None: - # Cancel operation - return False - - # Save first if has changes - if result: - current_file = host.current_file() - if not current_file: - # If the user requested to save the current scene - # we can't actually automatically do so if the current - # file has not been saved with a name yet. So we'll have - # to opt out. - log.error("Can't save scene with no filename. Please " - "first save your work file using 'Save As'.") - return - - # Save current scene, continue to open file - host.save_file(current_file) - - self._enter_session() - host.open_file(filepath) - self.file_opened.emit() - - def save_changes_prompt(self): - self._messagebox = messagebox = QtWidgets.QMessageBox(parent=self) - messagebox.setWindowFlags(messagebox.windowFlags() | - QtCore.Qt.FramelessWindowHint) - messagebox.setIcon(messagebox.Warning) - messagebox.setWindowTitle("Unsaved Changes!") - messagebox.setText( - "There are unsaved changes to the current file." - "\nDo you want to save the changes?" - ) - messagebox.setStandardButtons( - messagebox.Yes | messagebox.No | messagebox.Cancel - ) - - result = messagebox.exec_() - if result == messagebox.Yes: - return True - if result == messagebox.No: - return False - return None - - def get_filename(self): - """Show save dialog to define filename for save or duplicate - - Returns: - str: The filename to create. 
- - """ - session = self._get_session() - - window = NameWindow( - parent=self, - root=self._workfiles_root, - anatomy=self.anatomy, - template_key=self.template_key, - session=session - ) - window.exec_() - - return window.get_result() - - def on_duplicate_pressed(self): - work_file = self.get_filename() - if not work_file: - return - - src = self._get_selected_filepath() - dst = os.path.join(self._workfiles_root, work_file) - shutil.copy(src, dst) - - self.workfile_created.emit(dst) - - self.refresh() - - def _get_selected_filepath(self): - """Return current filepath selected in view""" - selection = self.files_view.selectionModel() - index = selection.currentIndex() - if not index.isValid(): - return - - return index.data(self.files_model.FilePathRole) - - def on_open_pressed(self): - path = self._get_selected_filepath() - if not path: - print("No file selected to open..") - return - - self.open_file(path) - - def on_browse_pressed(self): - ext_filter = "Work File (*{0})".format( - " *".join(self.host.file_extensions()) - ) - kwargs = { - "caption": "Work Files", - "filter": ext_filter - } - if Qt.__binding__ in ("PySide", "PySide2"): - kwargs["dir"] = self._workfiles_root - else: - kwargs["directory"] = self._workfiles_root - - work_file = QtWidgets.QFileDialog.getOpenFileName(**kwargs)[0] - if work_file: - self.open_file(work_file) - - def on_save_as_pressed(self): - work_filename = self.get_filename() - if not work_filename: - return - - # Trigger before save event - emit_event( - "workfile.save.before", - {"filename": work_filename, "workdir_path": self._workdir_path}, - source="workfiles.tool" - ) - - # Make sure workfiles root is updated - # - this triggers 'workio.work_root(...)' which may change value of - # '_workfiles_root' - self.set_asset_task( - self._asset_id, self._task_name, self._task_type - ) - - # Create workfiles root folder - if not os.path.exists(self._workfiles_root): - log.debug("Initializing Work Directory: %s", self._workfiles_root) - os.makedirs(self._workfiles_root) - - # Update session if context has changed - self._enter_session() - # Prepare full path to workfile and save it - filepath = os.path.join( - os.path.normpath(self._workfiles_root), work_filename - ) - self.host.save_file(filepath) - # Create extra folders - create_workdir_extra_folders( - self._workdir_path, - api.Session["AVALON_APP"], - self._task_type, - self._task_name, - api.Session["AVALON_PROJECT"] - ) - # Trigger after save events - emit_event( - "workfile.save.after", - {"filename": work_filename, "workdir_path": self._workdir_path}, - source="workfiles.tool" - ) - - self.workfile_created.emit(filepath) - # Refresh files model - self.refresh() - - def on_file_select(self): - self.file_selected.emit(self._get_selected_filepath()) - - def refresh(self): - """Refresh listed files for current selection in the interface""" - self.files_model.refresh() - - if self.auto_select_latest_modified: - self._select_last_modified_file() - - def on_context_menu(self, point): - index = self.files_view.indexAt(point) - if not index.isValid(): - return - - is_enabled = index.data(FilesModel.IsEnabled) - if not is_enabled: - return - - menu = QtWidgets.QMenu(self) - - # Duplicate - action = QtWidgets.QAction("Duplicate", menu) - tip = "Duplicate selected file." 
- action.setToolTip(tip) - action.setStatusTip(tip) - action.triggered.connect(self.on_duplicate_pressed) - menu.addAction(action) - - # Show the context action menu - global_point = self.files_view.mapToGlobal(point) - action = menu.exec_(global_point) - if not action: - return - - def _select_last_modified_file(self): - """Utility function to select the file with latest date modified""" - role = self.files_model.DateModifiedRole - model = self.files_view.model() - - highest_index = None - highest = 0 - for row in range(model.rowCount()): - index = model.index(row, 0, parent=QtCore.QModelIndex()) - if not index.isValid(): - continue - - modified = index.data(role) - if modified is not None and modified > highest: - highest_index = index - highest = modified - - if highest_index: - self.files_view.setCurrentIndex(highest_index) - - -class SidePanelWidget(QtWidgets.QWidget): - save_clicked = QtCore.Signal() - - def __init__(self, parent=None): - super(SidePanelWidget, self).__init__(parent) - - details_label = QtWidgets.QLabel("Details", self) - details_input = QtWidgets.QPlainTextEdit(self) - details_input.setReadOnly(True) - - note_label = QtWidgets.QLabel("Artist note", self) - note_input = QtWidgets.QPlainTextEdit(self) - btn_note_save = QtWidgets.QPushButton("Save note", self) - - main_layout = QtWidgets.QVBoxLayout(self) - main_layout.setContentsMargins(0, 0, 0, 0) - main_layout.addWidget(details_label, 0) - main_layout.addWidget(details_input, 0) - main_layout.addWidget(note_label, 0) - main_layout.addWidget(note_input, 1) - main_layout.addWidget(btn_note_save, alignment=QtCore.Qt.AlignRight) - - note_input.textChanged.connect(self.on_note_change) - btn_note_save.clicked.connect(self.on_save_click) - - self.details_input = details_input - self.note_input = note_input - self.btn_note_save = btn_note_save - - self._orig_note = "" - self._workfile_doc = None - - def on_note_change(self): - text = self.note_input.toPlainText() - self.btn_note_save.setEnabled(self._orig_note != text) - - def on_save_click(self): - self._orig_note = self.note_input.toPlainText() - self.on_note_change() - self.save_clicked.emit() - - def set_context(self, asset_id, task_name, filepath, workfile_doc): - # Check if asset, task and file are selected - # NOTE workfile document is not requirement - enabled = bool(asset_id) and bool(task_name) and bool(filepath) - - self.details_input.setEnabled(enabled) - self.note_input.setEnabled(enabled) - self.btn_note_save.setEnabled(enabled) - - # Make sure workfile doc is overridden - self._workfile_doc = workfile_doc - # Disable inputs and remove texts if any required arguments are missing - if not enabled: - self._orig_note = "" - self.details_input.setPlainText("") - self.note_input.setPlainText("") - return - - orig_note = "" - if workfile_doc: - orig_note = workfile_doc["data"].get("note") or orig_note - - self._orig_note = orig_note - self.note_input.setPlainText(orig_note) - # Set as empty string - self.details_input.setPlainText("") - - filestat = os.stat(filepath) - size_ending_mapping = { - "KB": 1024 ** 1, - "MB": 1024 ** 2, - "GB": 1024 ** 3 - } - size = filestat.st_size - ending = "B" - for _ending, _size in size_ending_mapping.items(): - if filestat.st_size < _size: - break - size = filestat.st_size / _size - ending = _ending - - # Append html string - datetime_format = "%b %d %Y %H:%M:%S" - creation_time = datetime.datetime.fromtimestamp(filestat.st_ctime) - modification_time = datetime.datetime.fromtimestamp(filestat.st_mtime) - lines = ( - "Size:", - 
"{:.2f} {}".format(size, ending), - "Created:", - creation_time.strftime(datetime_format), - "Modified:", - modification_time.strftime(datetime_format) - ) - self.details_input.appendHtml("
".join(lines)) - - def get_workfile_data(self): - data = { - "note": self.note_input.toPlainText() - } - return self._workfile_doc, data - - -class Window(QtWidgets.QMainWindow): - """Work Files Window""" - title = "Work Files" - - def __init__(self, parent=None): - super(Window, self).__init__(parent=parent) - self.setWindowTitle(self.title) - window_flags = QtCore.Qt.Window | QtCore.Qt.WindowCloseButtonHint - if not parent: - window_flags |= QtCore.Qt.WindowStaysOnTopHint - self.setWindowFlags(window_flags) - - # Create pages widget and set it as central widget - pages_widget = QtWidgets.QStackedWidget(self) - self.setCentralWidget(pages_widget) - - home_page_widget = QtWidgets.QWidget(pages_widget) - home_body_widget = QtWidgets.QWidget(home_page_widget) - - assets_widget = SingleSelectAssetsWidget(io, parent=home_body_widget) - assets_widget.set_current_asset_btn_visibility(True) - - tasks_widget = TasksWidget(io, home_body_widget) - files_widget = FilesWidget(home_body_widget) - side_panel = SidePanelWidget(home_body_widget) - - pages_widget.addWidget(home_page_widget) - - # Build home - home_page_layout = QtWidgets.QVBoxLayout(home_page_widget) - home_page_layout.addWidget(home_body_widget) - - # Build home - body - body_layout = QtWidgets.QVBoxLayout(home_body_widget) - split_widget = QtWidgets.QSplitter(home_body_widget) - split_widget.addWidget(assets_widget) - split_widget.addWidget(tasks_widget) - split_widget.addWidget(files_widget) - split_widget.addWidget(side_panel) - split_widget.setSizes([255, 160, 455, 175]) - - body_layout.addWidget(split_widget) - - # Add top margin for tasks to align it visually with files as - # the files widget has a filter field which tasks does not. - tasks_widget.setContentsMargins(0, 32, 0, 0) - - # Set context after asset widget is refreshed - # - to do so it is necessary to wait until refresh is done - set_context_timer = QtCore.QTimer() - set_context_timer.setInterval(100) - - # Connect signals - set_context_timer.timeout.connect(self._on_context_set_timeout) - assets_widget.selection_changed.connect(self._on_asset_changed) - tasks_widget.task_changed.connect(self._on_task_changed) - files_widget.file_selected.connect(self.on_file_select) - files_widget.workfile_created.connect(self.on_workfile_create) - files_widget.file_opened.connect(self._on_file_opened) - side_panel.save_clicked.connect(self.on_side_panel_save) - - self._set_context_timer = set_context_timer - self.home_page_widget = home_page_widget - self.pages_widget = pages_widget - self.home_body_widget = home_body_widget - self.split_widget = split_widget - - self.assets_widget = assets_widget - self.tasks_widget = tasks_widget - self.files_widget = files_widget - self.side_panel = side_panel - - # Force focus on the open button by default, required for Houdini. - files_widget.btn_open.setFocus() - - self.resize(1200, 600) - - self._first_show = True - self._context_to_set = None - - def showEvent(self, event): - super(Window, self).showEvent(event) - if self._first_show: - self._first_show = False - self.refresh() - self.setStyleSheet(style.load_stylesheet()) - - def keyPressEvent(self, event): - """Custom keyPressEvent. - - Override keyPressEvent to do nothing so that Maya's panels won't - take focus when pressing "SHIFT" whilst mouse is over viewport or - outliner. This way users don't accidentally perform Maya commands - whilst trying to name an instance. 
- - """ - - def set_save_enabled(self, enabled): - self.files_widget.btn_save.setEnabled(enabled) - - def on_file_select(self, filepath): - asset_id = self.assets_widget.get_selected_asset_id() - task_name = self.tasks_widget.get_selected_task_name() - - workfile_doc = None - if asset_id and task_name and filepath: - filename = os.path.split(filepath)[1] - workfile_doc = get_workfile_doc( - asset_id, task_name, filename, io - ) - self.side_panel.set_context( - asset_id, task_name, filepath, workfile_doc - ) - - def on_workfile_create(self, filepath): - self._create_workfile_doc(filepath) - - def _on_file_opened(self): - self.close() - - def on_side_panel_save(self): - workfile_doc, data = self.side_panel.get_workfile_data() - if not workfile_doc: - filepath = self.files_widget._get_selected_filepath() - self._create_workfile_doc(filepath, force=True) - workfile_doc = self._get_current_workfile_doc() - - save_workfile_data_to_doc(workfile_doc, data, io) - - def _get_current_workfile_doc(self, filepath=None): - if filepath is None: - filepath = self.files_widget._get_selected_filepath() - task_name = self.tasks_widget.get_selected_task_name() - asset_id = self.assets_widget.get_selected_asset_id() - if not task_name or not asset_id or not filepath: - return - - filename = os.path.split(filepath)[1] - return get_workfile_doc( - asset_id, task_name, filename, io - ) - - def _create_workfile_doc(self, filepath, force=False): - workfile_doc = None - if not force: - workfile_doc = self._get_current_workfile_doc(filepath) - - if not workfile_doc: - workdir, filename = os.path.split(filepath) - asset_id = self.assets_widget.get_selected_asset_id() - asset_doc = io.find_one({"_id": asset_id}) - task_name = self.tasks_widget.get_selected_task_name() - create_workfile_doc(asset_doc, task_name, filename, workdir, io) - - def refresh(self): - # Refresh asset widget - self.assets_widget.refresh() - - self._on_task_changed() - - def set_context(self, context): - self._context_to_set = context - self._set_context_timer.start() - - def _on_context_set_timeout(self): - if self._context_to_set is None: - self._set_context_timer.stop() - return - - if self.assets_widget.refreshing: - return - - self._context_to_set, context = None, self._context_to_set - if "asset" in context: - asset_doc = io.find_one( - { - "name": context["asset"], - "type": "asset" - }, - {"_id": 1} - ) or {} - asset_id = asset_doc.get("_id") - # Select the asset - self.assets_widget.select_asset(asset_id) - self.tasks_widget.set_asset_id(asset_id) - - if "task" in context: - self.tasks_widget.select_task_name(context["task"]) - self._on_task_changed() - - def _on_asset_changed(self): - asset_id = self.assets_widget.get_selected_asset_id() - if asset_id: - self.tasks_widget.setEnabled(True) - else: - # Force disable the other widgets if no - # active selection - self.tasks_widget.setEnabled(False) - self.files_widget.setEnabled(False) - - self.tasks_widget.set_asset_id(asset_id) - - def _on_task_changed(self): - asset_id = self.assets_widget.get_selected_asset_id() - task_name = self.tasks_widget.get_selected_task_name() - task_type = self.tasks_widget.get_selected_task_type() - - asset_is_valid = asset_id is not None - self.tasks_widget.setEnabled(asset_is_valid) - - self.files_widget.setEnabled(bool(task_name) and asset_is_valid) - self.files_widget.set_asset_task(asset_id, task_name, task_type) - self.files_widget.refresh() - - def validate_host_requirements(host): if host is None: raise RuntimeError("No registered host.") @@ -1266,7 
+61,6 @@ def show(root=None, debug=False, parent=None, use_context=True, save=True): if use_context: context = { "asset": api.Session["AVALON_ASSET"], - "silo": api.Session["AVALON_SILO"], "task": api.Session["AVALON_TASK"] } window.set_context(context) diff --git a/openpype/tools/workfiles/files_widget.py b/openpype/tools/workfiles/files_widget.py new file mode 100644 index 0000000000..d2b8a76952 --- /dev/null +++ b/openpype/tools/workfiles/files_widget.py @@ -0,0 +1,583 @@ +import os +import logging +import shutil + +import Qt +from Qt import QtWidgets, QtCore +from avalon import io, api + +from openpype.tools.utils import PlaceholderLineEdit +from openpype.tools.utils.delegates import PrettyTimeDelegate +from openpype.lib import ( + emit_event, + Anatomy, + get_workfile_template_key, + create_workdir_extra_folders, +) +from openpype.lib.avalon_context import ( + update_current_task, + compute_session_changes +) +from .model import ( + WorkAreaFilesModel, + PublishFilesModel, + + FILEPATH_ROLE, + DATE_MODIFIED_ROLE, +) +from .save_as_dialog import SaveAsDialog +from .lib import TempPublishFiles + +log = logging.getLogger(__name__) + + +class FilesView(QtWidgets.QTreeView): + doubleClickedLeft = QtCore.Signal() + doubleClickedRight = QtCore.Signal() + + def mouseDoubleClickEvent(self, event): + if event.button() == QtCore.Qt.LeftButton: + self.doubleClickedLeft.emit() + + elif event.button() == QtCore.Qt.RightButton: + self.doubleClickedRight.emit() + + return super(FilesView, self).mouseDoubleClickEvent(event) + + +class FilesWidget(QtWidgets.QWidget): + """A widget displaying files that allows to save and open files.""" + file_selected = QtCore.Signal(str) + file_opened = QtCore.Signal() + publish_file_viewed = QtCore.Signal() + workfile_created = QtCore.Signal(str) + published_visible_changed = QtCore.Signal(bool) + + def __init__(self, parent): + super(FilesWidget, self).__init__(parent) + + # Setup + self._asset_id = None + self._asset_doc = None + self._task_name = None + self._task_type = None + + # Pype's anatomy object for current project + self.anatomy = Anatomy(io.Session["AVALON_PROJECT"]) + # Template key used to get work template from anatomy templates + self.template_key = "work" + + # This is not root but workfile directory + self._workfiles_root = None + self._workdir_path = None + self.host = api.registered_host() + temp_publish_files = TempPublishFiles() + temp_publish_files.cleanup() + self._temp_publish_files = temp_publish_files + + # Whether to automatically select the latest modified + # file on a refresh of the files model. 
+ self.auto_select_latest_modified = True + + # Avoid crash in Blender and store the message box + # (setting parent doesn't work as it hides the message box) + self._messagebox = None + + # Filtering input + filter_widget = QtWidgets.QWidget(self) + + published_checkbox = QtWidgets.QCheckBox("Published", filter_widget) + + filter_input = PlaceholderLineEdit(filter_widget) + filter_input.setPlaceholderText("Filter files..") + + filter_layout = QtWidgets.QHBoxLayout(filter_widget) + filter_layout.setContentsMargins(0, 0, 0, 0) + filter_layout.addWidget(published_checkbox, 0) + filter_layout.addWidget(filter_input, 1) + + # Create the Files models + extensions = set(self.host.file_extensions()) + + views_widget = QtWidgets.QWidget(self) + # Workarea view + workarea_files_model = WorkAreaFilesModel(extensions) + + # Create proxy model for files to be able sort and filter + workarea_proxy_model = QtCore.QSortFilterProxyModel() + workarea_proxy_model.setSourceModel(workarea_files_model) + workarea_proxy_model.setDynamicSortFilter(True) + workarea_proxy_model.setSortCaseSensitivity(QtCore.Qt.CaseInsensitive) + + # Set up the file list tree view + workarea_files_view = FilesView(views_widget) + workarea_files_view.setModel(workarea_proxy_model) + workarea_files_view.setSortingEnabled(True) + workarea_files_view.setContextMenuPolicy(QtCore.Qt.CustomContextMenu) + + # Date modified delegate + workarea_time_delegate = PrettyTimeDelegate() + workarea_files_view.setItemDelegateForColumn(1, workarea_time_delegate) + workarea_files_view.setIndentation(3) # smaller indentation + + # Default to a wider first filename column it is what we mostly care + # about and the date modified is relatively small anyway. + workarea_files_view.setColumnWidth(0, 330) + + # Publish files view + publish_files_model = PublishFilesModel(extensions, io, self.anatomy) + + publish_proxy_model = QtCore.QSortFilterProxyModel() + publish_proxy_model.setSourceModel(publish_files_model) + publish_proxy_model.setDynamicSortFilter(True) + publish_proxy_model.setSortCaseSensitivity(QtCore.Qt.CaseInsensitive) + + publish_files_view = FilesView(views_widget) + publish_files_view.setModel(publish_proxy_model) + + publish_files_view.setSortingEnabled(True) + publish_files_view.setContextMenuPolicy(QtCore.Qt.CustomContextMenu) + + # Date modified delegate + publish_time_delegate = PrettyTimeDelegate() + publish_files_view.setItemDelegateForColumn(1, publish_time_delegate) + publish_files_view.setIndentation(3) # smaller indentation + + # Default to a wider first filename column it is what we mostly care + # about and the date modified is relatively small anyway. 
+ publish_files_view.setColumnWidth(0, 330) + + views_layout = QtWidgets.QHBoxLayout(views_widget) + views_layout.setContentsMargins(0, 0, 0, 0) + views_layout.addWidget(workarea_files_view, 1) + views_layout.addWidget(publish_files_view, 1) + + # Home Page + # Build buttons widget for files widget + btns_widget = QtWidgets.QWidget(self) + btn_save = QtWidgets.QPushButton("Save As", btns_widget) + btn_browse = QtWidgets.QPushButton("Browse", btns_widget) + btn_open = QtWidgets.QPushButton("Open", btns_widget) + + btn_view_published = QtWidgets.QPushButton("View", btns_widget) + + btns_layout = QtWidgets.QHBoxLayout(btns_widget) + btns_layout.setContentsMargins(0, 0, 0, 0) + btns_layout.addWidget(btn_open, 1) + btns_layout.addWidget(btn_browse, 1) + btns_layout.addWidget(btn_save, 1) + btns_layout.addWidget(btn_view_published, 1) + + # Build files widgets for home page + main_layout = QtWidgets.QVBoxLayout(self) + main_layout.setContentsMargins(0, 0, 0, 0) + main_layout.addWidget(filter_widget, 0) + main_layout.addWidget(views_widget, 1) + main_layout.addWidget(btns_widget, 0) + + # Register signal callbacks + published_checkbox.stateChanged.connect(self._on_published_change) + filter_input.textChanged.connect(self._on_filter_text_change) + + workarea_files_view.doubleClickedLeft.connect( + self._on_workarea_open_pressed + ) + workarea_files_view.customContextMenuRequested.connect( + self._on_workarea_context_menu + ) + workarea_files_view.selectionModel().selectionChanged.connect( + self.on_file_select + ) + publish_files_view.doubleClickedLeft.connect( + self._on_view_published_pressed + ) + + btn_open.pressed.connect(self._on_workarea_open_pressed) + btn_browse.pressed.connect(self.on_browse_pressed) + btn_save.pressed.connect(self.on_save_as_pressed) + btn_view_published.pressed.connect(self._on_view_published_pressed) + + # Store attributes + self._published_checkbox = published_checkbox + self._filter_input = filter_input + + self._workarea_time_delegate = workarea_time_delegate + self._workarea_files_view = workarea_files_view + self._workarea_files_model = workarea_files_model + self._workarea_proxy_model = workarea_proxy_model + + self._publish_time_delegate = publish_time_delegate + self._publish_files_view = publish_files_view + self._publish_files_model = publish_files_model + self._publish_proxy_model = publish_proxy_model + + self._btns_widget = btns_widget + self._btn_open = btn_open + self._btn_browse = btn_browse + self._btn_save = btn_save + self._btn_view_published = btn_view_published + + # Create a proxy widget for files widget + self.setFocusProxy(btn_open) + + # Hide publish files widgets + publish_files_view.setVisible(False) + btn_view_published.setVisible(False) + + @property + def published_enabled(self): + return self._published_checkbox.isChecked() + + def _on_published_change(self): + published_enabled = self.published_enabled + + self._workarea_files_view.setVisible(not published_enabled) + self._btn_open.setVisible(not published_enabled) + self._btn_browse.setVisible(not published_enabled) + self._btn_save.setVisible(not published_enabled) + + self._publish_files_view.setVisible(published_enabled) + self._btn_view_published.setVisible(published_enabled) + + self._update_filtering() + self._update_asset_task() + + self.published_visible_changed.emit(published_enabled) + + self._select_last_modified_file() + + def _on_filter_text_change(self): + self._update_filtering() + + def _update_filtering(self): + text = self._filter_input.text() + if 
self.published_enabled: + self._publish_proxy_model.setFilterFixedString(text) + else: + self._workarea_proxy_model.setFilterFixedString(text) + + def set_save_enabled(self, enabled): + self._btn_save.setEnabled(enabled) + + def set_asset_task(self, asset_id, task_name, task_type): + if asset_id != self._asset_id: + self._asset_doc = None + self._asset_id = asset_id + self._task_name = task_name + self._task_type = task_type + self._update_asset_task() + + def _update_asset_task(self): + if self.published_enabled: + self._publish_files_model.set_context( + self._asset_id, self._task_name + ) + has_valid_items = self._publish_files_model.has_valid_items() + self._btn_view_published.setEnabled(has_valid_items) + else: + # Define a custom session so we can query the work root + # for a "Work area" that is not our current Session. + # This way we can browse it even before we enter it. + if self._asset_id and self._task_name and self._task_type: + session = self._get_session() + self._workdir_path = session["AVALON_WORKDIR"] + self._workfiles_root = self.host.work_root(session) + self._workarea_files_model.set_root(self._workfiles_root) + + else: + self._workarea_files_model.set_root(None) + + # Disable/Enable buttons based on available files in model + has_valid_items = self._workarea_files_model.has_valid_items() + self._btn_browse.setEnabled(has_valid_items) + self._btn_open.setEnabled(has_valid_items) + # Manually trigger file selection + if not has_valid_items: + self.on_file_select() + + def _get_asset_doc(self): + if self._asset_id is None: + return None + + if self._asset_doc is None: + self._asset_doc = io.find_one({"_id": self._asset_id}) + return self._asset_doc + + def _get_session(self): + """Return a modified session for the current asset and task""" + + session = api.Session.copy() + self.template_key = get_workfile_template_key( + self._task_type, + session["AVALON_APP"], + project_name=session["AVALON_PROJECT"] + ) + changes = compute_session_changes( + session, + asset=self._get_asset_doc(), + task=self._task_name, + template_key=self.template_key + ) + session.update(changes) + + return session + + def _enter_session(self): + """Enter the asset and task session currently selected""" + + session = api.Session.copy() + changes = compute_session_changes( + session, + asset=self._get_asset_doc(), + task=self._task_name, + template_key=self.template_key + ) + if not changes: + # Return early if we're already in the right Session context + # to avoid any unwanted Task Changed callbacks to be triggered. + return + + update_current_task( + asset=self._get_asset_doc(), + task=self._task_name, + template_key=self.template_key + ) + + def open_file(self, filepath): + host = self.host + if host.has_unsaved_changes(): + result = self.save_changes_prompt() + if result is None: + # Cancel operation + return False + + # Save first if has changes + if result: + current_file = host.current_file() + if not current_file: + # If the user requested to save the current scene + # we can't actually automatically do so if the current + # file has not been saved with a name yet. So we'll have + # to opt out. + log.error("Can't save scene with no filename. 
Please " + "first save your work file using 'Save As'.") + return + + # Save current scene, continue to open file + host.save_file(current_file) + + self._enter_session() + host.open_file(filepath) + self.file_opened.emit() + + def save_changes_prompt(self): + self._messagebox = messagebox = QtWidgets.QMessageBox(parent=self) + messagebox.setWindowFlags(messagebox.windowFlags() | + QtCore.Qt.FramelessWindowHint) + messagebox.setIcon(messagebox.Warning) + messagebox.setWindowTitle("Unsaved Changes!") + messagebox.setText( + "There are unsaved changes to the current file." + "\nDo you want to save the changes?" + ) + messagebox.setStandardButtons( + messagebox.Yes | messagebox.No | messagebox.Cancel + ) + + result = messagebox.exec_() + if result == messagebox.Yes: + return True + if result == messagebox.No: + return False + return None + + def get_filename(self): + """Show save dialog to define filename for save or duplicate + + Returns: + str: The filename to create. + + """ + session = self._get_session() + + window = SaveAsDialog( + parent=self, + root=self._workfiles_root, + anatomy=self.anatomy, + template_key=self.template_key, + session=session + ) + window.exec_() + + return window.get_result() + + def on_duplicate_pressed(self): + work_file = self.get_filename() + if not work_file: + return + + src = self._get_selected_filepath() + dst = os.path.join(self._workfiles_root, work_file) + shutil.copy(src, dst) + + self.workfile_created.emit(dst) + + self.refresh() + + def _get_selected_filepath(self): + """Return current filepath selected in view""" + if self.published_enabled: + source_view = self._publish_files_view + else: + source_view = self._workarea_files_view + selection = source_view.selectionModel() + index = selection.currentIndex() + if not index.isValid(): + return + + return index.data(FILEPATH_ROLE) + + def _on_workarea_open_pressed(self): + path = self._get_selected_filepath() + if not path: + print("No file selected to open..") + return + + self.open_file(path) + + def on_browse_pressed(self): + ext_filter = "Work File (*{0})".format( + " *".join(self.host.file_extensions()) + ) + kwargs = { + "caption": "Work Files", + "filter": ext_filter + } + if Qt.__binding__ in ("PySide", "PySide2"): + kwargs["dir"] = self._workfiles_root + else: + kwargs["directory"] = self._workfiles_root + + work_file = QtWidgets.QFileDialog.getOpenFileName(**kwargs)[0] + if work_file: + self.open_file(work_file) + + def on_save_as_pressed(self): + work_filename = self.get_filename() + if not work_filename: + return + + # Trigger before save event + emit_event( + "workfile.save.before", + {"filename": work_filename, "workdir_path": self._workdir_path}, + source="workfiles.tool" + ) + + # Make sure workfiles root is updated + # - this triggers 'workio.work_root(...)' which may change value of + # '_workfiles_root' + self.set_asset_task( + self._asset_id, self._task_name, self._task_type + ) + + # Create workfiles root folder + if not os.path.exists(self._workfiles_root): + log.debug("Initializing Work Directory: %s", self._workfiles_root) + os.makedirs(self._workfiles_root) + + # Update session if context has changed + self._enter_session() + # Prepare full path to workfile and save it + filepath = os.path.join( + os.path.normpath(self._workfiles_root), work_filename + ) + self.host.save_file(filepath) + # Create extra folders + create_workdir_extra_folders( + self._workdir_path, + api.Session["AVALON_APP"], + self._task_type, + self._task_name, + api.Session["AVALON_PROJECT"] + ) + # Trigger 
after save events + emit_event( + "workfile.save.after", + {"filename": work_filename, "workdir_path": self._workdir_path}, + source="workfiles.tool" + ) + + self.workfile_created.emit(filepath) + # Refresh files model + self.refresh() + + def _on_view_published_pressed(self): + filepath = self._get_selected_filepath() + if not filepath or not os.path.exists(filepath): + return + item = self._temp_publish_files.add_file(filepath) + self.host.open_file(item.filepath) + self.publish_file_viewed.emit() + # Change state back to workarea + self._published_checkbox.setChecked(False) + + def on_file_select(self): + self.file_selected.emit(self._get_selected_filepath()) + + def refresh(self): + """Refresh listed files for current selection in the interface""" + if self.published_enabled: + self._publish_files_model.refresh() + else: + self._workarea_files_model.refresh() + + if self.auto_select_latest_modified: + self._select_last_modified_file() + + def _on_workarea_context_menu(self, point): + index = self._workarea_files_view.indexAt(point) + if not index.isValid(): + return + + if not index.flags() & QtCore.Qt.ItemIsEnabled: + return + + menu = QtWidgets.QMenu(self) + + # Duplicate + action = QtWidgets.QAction("Duplicate", menu) + tip = "Duplicate selected file." + action.setToolTip(tip) + action.setStatusTip(tip) + action.triggered.connect(self.on_duplicate_pressed) + menu.addAction(action) + + # Show the context action menu + global_point = self._workarea_files_view.mapToGlobal(point) + action = menu.exec_(global_point) + if not action: + return + + def _select_last_modified_file(self): + """Utility function to select the file with latest date modified""" + if self.published_enabled: + source_view = self._publish_files_view + else: + source_view = self._workarea_files_view + model = source_view.model() + + highest_index = None + highest = 0 + for row in range(model.rowCount()): + index = model.index(row, 0, parent=QtCore.QModelIndex()) + if not index.isValid(): + continue + + modified = index.data(DATE_MODIFIED_ROLE) + if modified is not None and modified > highest: + highest_index = index + highest = modified + + if highest_index: + source_view.setCurrentIndex(highest_index) diff --git a/openpype/tools/workfiles/lib.py b/openpype/tools/workfiles/lib.py new file mode 100644 index 0000000000..21a7485b7b --- /dev/null +++ b/openpype/tools/workfiles/lib.py @@ -0,0 +1,272 @@ +import os +import shutil +import uuid +import time +import json +import logging +import contextlib + +import appdirs + + +class TempPublishFilesItem(object): + """Object representing copied workfile in app temp folder. + + Args: + item_id (str): Id of item used as subfolder. + data (dict): Metadata about temp files. + directory (str): Path to directory where files are copied to. + """ + + def __init__(self, item_id, data, directory): + self._id = item_id + self._directory = directory + self._filepath = os.path.join(directory, data["filename"]) + + @property + def directory(self): + return self._directory + + @property + def filepath(self): + return self._filepath + + @property + def id(self): + return self._id + + @property + def size(self): + if os.path.exists(self.filepath): + s = os.stat(self.filepath) + return s.st_size + return 0 + + +class TempPublishFiles(object): + """Directory where published workfiles are copied when opened. + + Directory is located in appdirs on the machine. Folder contains file + with metadata about stored files. Each item in metadata has id, filename + and expiration time. 
When expiration time is higher then current time the + item is removed from metadata and it's files are deleted. Files of items + are stored in subfolder named by item's id. + + Metadata file can be in theory opened and modified by multiple processes, + threads at one time. For those cases is created simple lock file which + is created before modification begins and is removed when modification + ends. Existence of the file means that it should not be modified by + any other process at the same time. + + Metadata example: + ``` + { + "96050b4a-8974-4fca-8179-7c446c478d54": { + "created": 1647880725.555, + "expiration": 1647884325.555, + "filename": "cg_pigeon_workfileModeling_v025.ma" + }, + ... + } + ``` + + ## Why is this needed + Combination of more issues. Temp files are not automatically removed by + OS on windows so using tempfiles in TEMP would lead to kill disk space of + machine. There are also cases when someone wants to open multiple files + in short period of time and want to manually remove those files so keeping + track of temporary copied files in pre-defined structure is needed. + """ + minute_in_seconds = 60 + hour_in_seconds = 60 * minute_in_seconds + day_in_seconds = 24 * hour_in_seconds + + def __init__(self): + root_dir = appdirs.user_data_dir( + "published_workfiles_temp", "openpype" + ) + if not os.path.exists(root_dir): + os.makedirs(root_dir) + + metadata_path = os.path.join(root_dir, "metadata.json") + lock_path = os.path.join(root_dir, "lock.json") + + self._root_dir = root_dir + self._metadata_path = metadata_path + self._lock_path = lock_path + self._log = None + + @property + def log(self): + if self._log is None: + self._log = logging.getLogger(self.__class__.__name__) + return self._log + + @property + def life_time(self): + """How long will be new item kept in temp in seconds. + + Returns: + int: Lifetime of temp item. + """ + return int(self.hour_in_seconds) + + @property + def size(self): + """File size of existing items.""" + size = 0 + for item in self.get_items(): + size += item.size + return size + + def add_file(self, src_path): + """Add workfile to temp directory. + + This will create new item and source path is copied to it's directory. + """ + filename = os.path.basename(src_path) + + item_id = str(uuid.uuid4()) + dst_dirpath = os.path.join(self._root_dir, item_id) + if not os.path.exists(dst_dirpath): + os.makedirs(dst_dirpath) + + dst_path = os.path.join(dst_dirpath, filename) + shutil.copy(src_path, dst_path) + + now = time.time() + item_data = { + "filename": filename, + "expiration": now + self.life_time, + "created": now + } + with self._modify_data() as data: + data[item_id] = item_data + + return TempPublishFilesItem(item_id, item_data, dst_dirpath) + + @contextlib.contextmanager + def _modify_data(self): + """Create lock file when data in metadata file are modified.""" + start_time = time.time() + timeout = 3 + while os.path.exists(self._lock_path): + time.sleep(0.01) + if start_time > timeout: + self.log.warning(( + "Waited for {} seconds to free lock file. Overriding lock." 
+ ).format(timeout)) + + with open(self._lock_path, "w") as stream: + json.dump({"pid": os.getpid()}, stream) + + try: + data = self._get_data() + yield data + with open(self._metadata_path, "w") as stream: + json.dump(data, stream) + + finally: + os.remove(self._lock_path) + + def _get_data(self): + output = {} + if not os.path.exists(self._metadata_path): + return output + + try: + with open(self._metadata_path, "r") as stream: + output = json.load(stream) + except Exception: + self.log.warning("Failed to read metadata file.", exc_info=True) + return output + + def cleanup(self, check_expiration=True): + """Cleanup files based on metadata. + + Items that have passed their expiration time are removed when this is called, or all + files are removed when `check_expiration` is set to False. + + Args: + check_expiration (bool): Remove only expired items when True; remove + all items and files when set to False. + """ + data = self._get_data() + now = time.time() + remove_ids = set() + all_ids = set() + for item_id, item_data in data.items(): + all_ids.add(item_id) + if check_expiration and now < item_data["expiration"]: + continue + + remove_ids.add(item_id) + + for item_id in remove_ids: + try: + self.remove_id(item_id) + except Exception: + self.log.warning( + "Failed to remove temp publish item \"{}\"".format( + item_id + ), + exc_info=True + ) + + # Remove unknown folders/files + for filename in os.listdir(self._root_dir): + if filename in all_ids: + continue + + full_path = os.path.join(self._root_dir, filename) + if full_path in (self._metadata_path, self._lock_path): + continue + + try: + shutil.rmtree(full_path) + except Exception: + self.log.warning( + "Couldn't remove arbitrary path \"{}\"".format(full_path), + exc_info=True + ) + + def clear(self): + self.cleanup(False) + + def get_items(self): + """Return all items from metadata file. + + Returns: + list: Info about each item in metadata.
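+
+        Example (illustrative sketch only; it mirrors how the ``size``
+        property of this class consumes the returned items):
+        ```
+        temp_publish_files = TempPublishFiles()
+        total_size = 0
+        for item in temp_publish_files.get_items():
+            # Each item points to one copied workfile in its own id subfolder
+            total_size += item.size
+        print(file_size_to_string(total_size))  # e.g. "12.34 MB"
+        ```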
+ """ + output = [] + data = self._get_data() + for item_id, item_data in data.items(): + item_path = os.path.join(self._root_dir, item_id) + output.append(TempPublishFilesItem(item_id, item_data, item_path)) + return output + + def remove_id(self, item_id): + """Remove files of item and then remove the item from metadata.""" + filepath = os.path.join(self._root_dir, item_id) + if os.path.exists(filepath): + shutil.rmtree(filepath) + + with self._modify_data() as data: + data.pop(item_id, None) + + +def file_size_to_string(file_size): + size = 0 + size_ending_mapping = { + "KB": 1024 ** 1, + "MB": 1024 ** 2, + "GB": 1024 ** 3 + } + ending = "B" + for _ending, _size in size_ending_mapping.items(): + if file_size < _size: + break + size = file_size / _size + ending = _ending + return "{:.2f} {}".format(size, ending) diff --git a/openpype/tools/workfiles/model.py b/openpype/tools/workfiles/model.py index e9184842fc..8f9dd8c6ba 100644 --- a/openpype/tools/workfiles/model.py +++ b/openpype/tools/workfiles/model.py @@ -1,153 +1,179 @@ import os import logging -from Qt import QtCore +from Qt import QtCore, QtGui import qtawesome from openpype.style import ( get_default_entity_icon_color, get_disabled_entity_icon_color, ) - -from openpype.tools.utils.models import TreeModel, Item +from openpype.pipeline import get_representation_path log = logging.getLogger(__name__) -class FilesModel(TreeModel): - """Model listing files with specified extensions in a root folder""" - Columns = ["filename", "date"] +FILEPATH_ROLE = QtCore.Qt.UserRole + 2 +DATE_MODIFIED_ROLE = QtCore.Qt.UserRole + 3 +ITEM_ID_ROLE = QtCore.Qt.UserRole + 4 - FileNameRole = QtCore.Qt.UserRole + 2 - DateModifiedRole = QtCore.Qt.UserRole + 3 - FilePathRole = QtCore.Qt.UserRole + 4 - IsEnabled = QtCore.Qt.UserRole + 5 - def __init__(self, file_extensions, parent=None): - super(FilesModel, self).__init__(parent=parent) +class WorkAreaFilesModel(QtGui.QStandardItemModel): + """Model is looking into one folder for files with extension.""" + + def __init__(self, extensions, *args, **kwargs): + super(WorkAreaFilesModel, self).__init__(*args, **kwargs) + + self.setColumnCount(2) self._root = None - self._file_extensions = file_extensions - self._icons = { - "file": qtawesome.icon( - "fa.file-o", - color=get_default_entity_icon_color() + self._file_extensions = extensions + self._invalid_path_item = None + self._empty_root_item = None + self._file_icon = qtawesome.icon( + "fa.file-o", + color=get_default_entity_icon_color() + ) + self._invalid_item_visible = False + self._items_by_filename = {} + + def _get_invalid_path_item(self): + if self._invalid_path_item is None: + message = "Work Area does not exist. Use Save As to create it." + item = QtGui.QStandardItem(message) + icon = qtawesome.icon( + "fa.times", + color=get_disabled_entity_icon_color() ) - } + item.setData(icon, QtCore.Qt.DecorationRole) + item.setFlags(QtCore.Qt.NoItemFlags) + item.setColumnCount(self.columnCount()) + self._invalid_path_item = item + return self._invalid_path_item + + def _get_empty_root_item(self): + if self._empty_root_item is None: + message = "Work Area is empty." 
+ item = QtGui.QStandardItem(message) + icon = qtawesome.icon( + "fa.times", + color=get_disabled_entity_icon_color() + ) + item.setData(icon, QtCore.Qt.DecorationRole) + item.setFlags(QtCore.Qt.NoItemFlags) + item.setColumnCount(self.columnCount()) + self._empty_root_item = item + return self._empty_root_item def set_root(self, root): + """Change directory where to look for file.""" self._root = root + if root and not os.path.exists(root): + log.debug("Work Area does not exist: {}".format(root)) self.refresh() - def _add_empty(self): - item = Item() - item.update({ - # Put a display message in 'filename' - "filename": "No files found.", - # Not-selectable - "enabled": False, - "date": None, - "filepath": None - }) - - self.add_child(item) + def _clear(self): + root_item = self.invisibleRootItem() + rows = root_item.rowCount() + if rows > 0: + if self._invalid_item_visible: + for row in range(rows): + root_item.takeRow(row) + else: + root_item.removeRows(0, rows) + self._items_by_filename = {} def refresh(self): - self.clear() - self.beginResetModel() - - root = self._root - - if not root: - self.endResetModel() - return - - if not os.path.exists(root): + """Refresh and update model items.""" + root_item = self.invisibleRootItem() + # If path is not set or does not exist then add invalid path item + if not self._root or not os.path.exists(self._root): + self._clear() # Add Work Area does not exist placeholder - log.debug("Work Area does not exist: %s", root) - message = "Work Area does not exist. Use Save As to create it." - item = Item({ - "filename": message, - "date": None, - "filepath": None, - "enabled": False, - "icon": qtawesome.icon( - "fa.times", - color=get_disabled_entity_icon_color() - ) - }) - self.add_child(item) - self.endResetModel() + item = self._get_invalid_path_item() + root_item.appendRow(item) + self._invalid_item_visible = True return - extensions = self._file_extensions + # Clear items if previous refresh set '_invalid_item_visible' to True + # - Invalid items are not stored to '_items_by_filename' so they would + # not be removed + if self._invalid_item_visible: + self._clear() - for filename in os.listdir(root): - path = os.path.join(root, filename) - if os.path.isdir(path): + # Check for new items that should be added and items that should be + # removed + new_items = [] + items_to_remove = set(self._items_by_filename.keys()) + for filename in os.listdir(self._root): + filepath = os.path.join(self._root, filename) + if os.path.isdir(filepath): continue ext = os.path.splitext(filename)[1] - if extensions and ext not in extensions: + if ext not in self._file_extensions: continue - modified = os.path.getmtime(path) + modified = os.path.getmtime(filepath) - item = Item({ - "filename": filename, - "date": modified, - "filepath": path - }) + # Use existing item or create new one + if filename in items_to_remove: + items_to_remove.remove(filename) + item = self._items_by_filename[filename] + else: + item = QtGui.QStandardItem(filename) + item.setColumnCount(self.columnCount()) + item.setFlags( + QtCore.Qt.ItemIsEnabled | QtCore.Qt.ItemIsSelectable + ) + item.setData(self._file_icon, QtCore.Qt.DecorationRole) + new_items.append(item) + self._items_by_filename[filename] = item + # Update data that may be different + item.setData(filepath, FILEPATH_ROLE) + item.setData(modified, DATE_MODIFIED_ROLE) - self.add_child(item) + # Add new items if there are any + if new_items: + root_item.appendRows(new_items) - if self.rowCount() == 0: - self._add_empty() + # Remove items that 
are no longer available + for filename in items_to_remove: + item = self._items_by_filename.pop(filename) + root_item.removeRow(item.row()) - self.endResetModel() - - def has_filenames(self): - for item in self._root_item.children(): - if item.get("enabled", True): - return True - return False - - def rowCount(self, parent=None): - if parent is None or not parent.isValid(): - parent_item = self._root_item + # Add empty root item if there are not filenames that could be shown + if root_item.rowCount() > 0: + self._invalid_item_visible = False else: - parent_item = parent.internalPointer() - return parent_item.childCount() + self._invalid_item_visible = True + item = self._get_empty_root_item() + root_item.appendRow(item) - def data(self, index, role): - if not index.isValid(): - return + def has_valid_items(self): + """Directory has files that are listed in items.""" + return not self._invalid_item_visible - if role == QtCore.Qt.DecorationRole: - # Add icon to filename column - item = index.internalPointer() - if index.column() == 0: - if item["filepath"]: - return self._icons["file"] - return item.get("icon", None) + def flags(self, index): + # Use flags of first column for all columns + if index.column() != 0: + index = self.index(index.row(), 0, index.parent()) + return super(WorkAreaFilesModel, self).flags(index) - if role == self.FileNameRole: - item = index.internalPointer() - return item["filename"] + def data(self, index, role=None): + if role is None: + role = QtCore.Qt.DisplayRole - if role == self.DateModifiedRole: - item = index.internalPointer() - return item["date"] + # Handle roles for first column + if index.column() == 1: + if role == QtCore.Qt.DecorationRole: + return None - if role == self.FilePathRole: - item = index.internalPointer() - return item["filepath"] + if role in (QtCore.Qt.DisplayRole, QtCore.Qt.EditRole): + role = DATE_MODIFIED_ROLE + index = self.index(index.row(), 0, index.parent()) - if role == self.IsEnabled: - item = index.internalPointer() - return item.get("enabled", True) - - return super(FilesModel, self).data(index, role) + return super(WorkAreaFilesModel, self).data(index, role) def headerData(self, section, orientation, role): # Show nice labels in the header @@ -160,4 +186,274 @@ class FilesModel(TreeModel): elif section == 1: return "Date modified" - return super(FilesModel, self).headerData(section, orientation, role) + return super(WorkAreaFilesModel, self).headerData( + section, orientation, role + ) + + +class PublishFilesModel(QtGui.QStandardItemModel): + """Model filling files with published files calculated from representation. + + This model looks for workfile family representations based on selected + asset and task. + + Asset must set to be able look for representations that could be used. + Task is used to filter representations by task. + Model has few filter criteria for filling. + - First criteria is that version document must have "workfile" in + "data.families". + - Second cirteria is that representation must have extension same as + defined extensions + - If task is set then representation must have 'task["name"]' with same + name. 
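+
+    Illustrative example of a version/representation pair that would pass
+    all of the criteria above (ids and field values below are invented
+    placeholders, not taken from a real project):
+    ```
+    subset_id = "<subset-id-of-selected-asset>"
+    version_id = "<version-id>"
+
+    # Version document: carries the "workfile" family in "data.families".
+    version_doc = {
+        "_id": version_id,
+        "type": "version",
+        "parent": subset_id,
+        "data": {"families": ["workfile"]},
+    }
+
+    # Representation document: "context.ext" is stored without the leading
+    # dot and must match one of the host extensions; "context.task.name"
+    # must match the selected task when a task filter is set.
+    repre_doc = {
+        "_id": "<representation-id>",
+        "type": "representation",
+        "parent": version_id,
+        "context": {"ext": "ma", "task": {"name": "modeling"}},
+    }
+    ```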
+ """ + + def __init__(self, extensions, dbcon, anatomy, *args, **kwargs): + super(PublishFilesModel, self).__init__(*args, **kwargs) + + self.setColumnCount(2) + + self._dbcon = dbcon + self._anatomy = anatomy + self._file_extensions = extensions + + self._invalid_context_item = None + self._empty_root_item = None + self._file_icon = qtawesome.icon( + "fa.file-o", + color=get_default_entity_icon_color() + ) + self._invalid_icon = qtawesome.icon( + "fa.times", + color=get_disabled_entity_icon_color() + ) + self._invalid_item_visible = False + + self._items_by_id = {} + + self._asset_id = None + self._task_name = None + + def _set_item_invalid(self, item): + item.setFlags(QtCore.Qt.NoItemFlags) + item.setData(self._invalid_icon, QtCore.Qt.DecorationRole) + + def _set_item_valid(self, item): + item.setFlags( + QtCore.Qt.ItemIsEnabled | QtCore.Qt.ItemIsSelectable + ) + item.setData(self._file_icon, QtCore.Qt.DecorationRole) + + def _get_invalid_context_item(self): + if self._invalid_context_item is None: + item = QtGui.QStandardItem("Selected context is not valid.") + item.setColumnCount(self.columnCount()) + self._set_item_invalid(item) + self._invalid_context_item = item + return self._invalid_context_item + + def _get_empty_root_item(self): + if self._empty_root_item is None: + item = QtGui.QStandardItem("Didn't find any published workfiles.") + item.setColumnCount(self.columnCount()) + self._set_item_invalid(item) + self._empty_root_item = item + return self._empty_root_item + + def set_context(self, asset_id, task_name): + """Change context to asset and task. + + Args: + asset_id (ObjectId): Id of selected asset. + task_name (str): Name of selected task. + """ + self._asset_id = asset_id + self._task_name = task_name + self.refresh() + + def _clear(self): + root_item = self.invisibleRootItem() + rows = root_item.rowCount() + if rows > 0: + if self._invalid_item_visible: + for row in range(rows): + root_item.takeRow(row) + else: + root_item.removeRows(0, rows) + self._items_by_id = {} + + def _get_workfie_representations(self): + output = [] + # Get subset docs of asset + subset_docs = self._dbcon.find( + { + "type": "subset", + "parent": self._asset_id + }, + { + "_id": True, + "name": True + } + ) + + subset_ids = [subset_doc["_id"] for subset_doc in subset_docs] + if not subset_ids: + return output + + # Get version docs of subsets with their families + version_docs = self._dbcon.find( + { + "type": "version", + "parent": {"$in": subset_ids} + }, + { + "_id": True, + "data.families": True, + "parent": True + } + ) + # Filter versions if they contain 'workfile' family + filtered_versions = [] + for version_doc in version_docs: + data = version_doc.get("data") or {} + families = data.get("families") or [] + if "workfile" in families: + filtered_versions.append(version_doc) + + version_ids = [version_doc["_id"] for version_doc in filtered_versions] + if not version_ids: + return output + + # Query representations of filtered versions and add filter for + # extension + extensions = [ext.replace(".", "") for ext in self._file_extensions] + repre_docs = self._dbcon.find( + { + "type": "representation", + "parent": {"$in": version_ids}, + "context.ext": {"$in": extensions} + } + ) + # Filter queried representations by task name if task is set + filtered_repre_docs = [] + for repre_doc in repre_docs: + if self._task_name is None: + filtered_repre_docs.append(repre_doc) + continue + + task_info = repre_doc["context"].get("task") + if not task_info: + print("Not task info") + continue + + if 
isinstance(task_info, dict): + task_name = task_info.get("name") + else: + task_name = task_info + + if task_name == self._task_name: + filtered_repre_docs.append(repre_doc) + + # Collect paths of representations + for repre_doc in filtered_repre_docs: + path = get_representation_path( + repre_doc, root=self._anatomy.roots + ) + output.append((path, repre_doc["_id"])) + return output + + def refresh(self): + root_item = self.invisibleRootItem() + if not self._asset_id: + self._clear() + # Add Work Area does not exist placeholder + item = self._get_invalid_context_item() + root_item.appendRow(item) + self._invalid_item_visible = True + return + + if self._invalid_item_visible: + self._clear() + + new_items = [] + items_to_remove = set(self._items_by_id.keys()) + for item in self._get_workfie_representations(): + filepath, repre_id = item + # TODO handle empty filepaths + if not filepath: + continue + filename = os.path.basename(filepath) + + if repre_id in items_to_remove: + items_to_remove.remove(repre_id) + item = self._items_by_id[repre_id] + else: + item = QtGui.QStandardItem(filename) + item.setColumnCount(self.columnCount()) + new_items.append(item) + self._items_by_id[repre_id] = item + + if os.path.exists(filepath): + modified = os.path.getmtime(filepath) + tooltip = None + self._set_item_valid(item) + else: + modified = None + tooltip = "File is not available from this machine" + self._set_item_invalid(item) + + item.setData(tooltip, QtCore.Qt.ToolTipRole) + item.setData(filepath, FILEPATH_ROLE) + item.setData(modified, DATE_MODIFIED_ROLE) + item.setData(repre_id, ITEM_ID_ROLE) + + if new_items: + root_item.appendRows(new_items) + + for filename in items_to_remove: + item = self._items_by_id.pop(filename) + root_item.removeRow(item.row()) + + if root_item.rowCount() > 0: + self._invalid_item_visible = False + else: + self._invalid_item_visible = True + item = self._get_empty_root_item() + root_item.appendRow(item) + + def has_valid_items(self): + return not self._invalid_item_visible + + def flags(self, index): + if index.column() != 0: + index = self.index(index.row(), 0, index.parent()) + return super(PublishFilesModel, self).flags(index) + + def data(self, index, role=None): + if role is None: + role = QtCore.Qt.DisplayRole + + if index.column() == 1: + if role == QtCore.Qt.DecorationRole: + return None + + if role in (QtCore.Qt.DisplayRole, QtCore.Qt.EditRole): + role = DATE_MODIFIED_ROLE + index = self.index(index.row(), 0, index.parent()) + + return super(PublishFilesModel, self).data(index, role) + + def headerData(self, section, orientation, role): + # Show nice labels in the header + if ( + role == QtCore.Qt.DisplayRole + and orientation == QtCore.Qt.Horizontal + ): + if section == 0: + return "Name" + elif section == 1: + return "Date modified" + + return super(PublishFilesModel, self).headerData( + section, orientation, role + ) diff --git a/openpype/tools/workfiles/save_as_dialog.py b/openpype/tools/workfiles/save_as_dialog.py new file mode 100644 index 0000000000..e616a325cc --- /dev/null +++ b/openpype/tools/workfiles/save_as_dialog.py @@ -0,0 +1,482 @@ +import os +import re +import copy +import logging + +from Qt import QtWidgets, QtCore + +from avalon import api, io + +from openpype.lib import ( + get_last_workfile_with_version, + get_workdir_data, +) +from openpype.tools.utils import PlaceholderLineEdit + +log = logging.getLogger(__name__) + + +def build_workfile_data(session): + """Get the data required for workfile formatting from avalon `session`""" + + # Set 
work file data for template formatting + asset_name = session["AVALON_ASSET"] + task_name = session["AVALON_TASK"] + host_name = session["AVALON_APP"] + project_doc = io.find_one( + {"type": "project"}, + { + "name": True, + "data.code": True, + "config.tasks": True, + } + ) + + asset_doc = io.find_one( + { + "type": "asset", + "name": asset_name + }, + { + "name": True, + "data.tasks": True, + "data.parents": True + } + ) + data = get_workdir_data(project_doc, asset_doc, task_name, host_name) + data.update({ + "version": 1, + "comment": "", + "ext": None + }) + + return data + + +class CommentMatcher(object): + """Use anatomy and work file data to parse comments from filenames""" + def __init__(self, anatomy, template_key, data): + + self.fname_regex = None + + template = anatomy.templates[template_key]["file"] + if "{comment}" not in template: + # Don't look for comment if template doesn't allow it + return + + # Create a regex group for extensions + extensions = api.registered_host().file_extensions() + any_extension = "(?:{})".format( + "|".join(re.escape(ext[1:]) for ext in extensions) + ) + + # Use placeholders that will never be in the filename + temp_data = copy.deepcopy(data) + temp_data["comment"] = "<>" + temp_data["version"] = "<>" + temp_data["ext"] = "<>" + + formatted = anatomy.format(temp_data) + fname_pattern = formatted[template_key]["file"] + fname_pattern = re.escape(fname_pattern) + + # Replace comment and version with something we can match with regex + replacements = { + "<>": "(.+)", + "<>": "[0-9]+", + "<>": any_extension, + } + for src, dest in replacements.items(): + fname_pattern = fname_pattern.replace(re.escape(src), dest) + + # Match from beginning to end of string to be safe + fname_pattern = "^{}$".format(fname_pattern) + + self.fname_regex = re.compile(fname_pattern) + + def parse_comment(self, filepath): + """Parse the {comment} part from a filename""" + if not self.fname_regex: + return + + fname = os.path.basename(filepath) + match = self.fname_regex.match(fname) + if match: + return match.group(1) + + +class SubversionLineEdit(QtWidgets.QWidget): + """QLineEdit with QPushButton for drop down selection of list of strings""" + + text_changed = QtCore.Signal(str) + + def __init__(self, *args, **kwargs): + super(SubversionLineEdit, self).__init__(*args, **kwargs) + + input_field = PlaceholderLineEdit(self) + menu_btn = QtWidgets.QPushButton(self) + menu_btn.setFixedWidth(18) + + menu = QtWidgets.QMenu(self) + menu_btn.setMenu(menu) + + layout = QtWidgets.QHBoxLayout(self) + layout.setContentsMargins(0, 0, 0, 0) + layout.setSpacing(3) + + layout.addWidget(input_field, 1) + layout.addWidget(menu_btn, 0) + + input_field.textChanged.connect(self.text_changed) + + self.setFocusProxy(input_field) + + self._input_field = input_field + self._menu_btn = menu_btn + self._menu = menu + + def set_placeholder(self, placeholder): + self._input_field.setPlaceholderText(placeholder) + + def set_text(self, text): + self._input_field.setText(text) + + def set_values(self, values): + self._update(values) + + def _on_button_clicked(self): + self._menu.exec_() + + def _on_action_clicked(self, action): + self._input_field.setText(action.text()) + + def _update(self, values): + """Create optional predefined subset names + + Args: + default_names(list): all predefined names + + Returns: + None + """ + + menu = self._menu + button = self._menu_btn + + state = any(values) + button.setEnabled(state) + if state is False: + return + + # Include an empty string + values = [""] + 
sorted(values) + + # Get and destroy the action group + group = button.findChild(QtWidgets.QActionGroup) + if group: + group.deleteLater() + + # Build new action group + group = QtWidgets.QActionGroup(button) + for name in values: + action = group.addAction(name) + menu.addAction(action) + + group.triggered.connect(self._on_action_clicked) + + +class SaveAsDialog(QtWidgets.QDialog): + """Name Window to define a unique filename inside a root folder + + The filename will be based on the "workfile" template defined in the + project["config"]["template"]. + + """ + + def __init__(self, parent, root, anatomy, template_key, session=None): + super(SaveAsDialog, self).__init__(parent=parent) + self.setWindowFlags(self.windowFlags() | QtCore.Qt.FramelessWindowHint) + + self.result = None + self.host = api.registered_host() + self.root = root + self.work_file = None + + if not session: + # Fallback to active session + session = api.Session + + self.data = build_workfile_data(session) + + # Store project anatomy + self.anatomy = anatomy + self.template = anatomy.templates[template_key]["file"] + self.template_key = template_key + + # Btns widget + btns_widget = QtWidgets.QWidget(self) + + btn_ok = QtWidgets.QPushButton("Ok", btns_widget) + btn_cancel = QtWidgets.QPushButton("Cancel", btns_widget) + + btns_layout = QtWidgets.QHBoxLayout(btns_widget) + btns_layout.addWidget(btn_ok) + btns_layout.addWidget(btn_cancel) + + # Inputs widget + inputs_widget = QtWidgets.QWidget(self) + + # Version widget + version_widget = QtWidgets.QWidget(inputs_widget) + + # Version number input + version_input = QtWidgets.QSpinBox(version_widget) + version_input.setMinimum(1) + version_input.setMaximum(9999) + + # Last version checkbox + last_version_check = QtWidgets.QCheckBox( + "Next Available Version", version_widget + ) + last_version_check.setChecked(True) + + version_layout = QtWidgets.QHBoxLayout(version_widget) + version_layout.setContentsMargins(0, 0, 0, 0) + version_layout.addWidget(version_input) + version_layout.addWidget(last_version_check) + + # Preview widget + preview_label = QtWidgets.QLabel("Preview filename", inputs_widget) + + # Subversion input + subversion = SubversionLineEdit(inputs_widget) + subversion.set_placeholder("Will be part of filename.") + + # Extensions combobox + ext_combo = QtWidgets.QComboBox(inputs_widget) + # Add styled delegate to use stylesheets + ext_delegate = QtWidgets.QStyledItemDelegate() + ext_combo.setItemDelegate(ext_delegate) + ext_combo.addItems(self.host.file_extensions()) + + # Build inputs + inputs_layout = QtWidgets.QFormLayout(inputs_widget) + # Add version only if template contains version key + # - since the version can be padded with "{version:0>4}" we only search + # for "{version". + if "{version" in self.template: + inputs_layout.addRow("Version:", version_widget) + else: + version_widget.setVisible(False) + + # Add subversion only if template contains `{comment}` + if "{comment}" in self.template: + inputs_layout.addRow("Subversion:", subversion) + + # Detect whether a {comment} is in the current filename - if so, + # preserve it by default and set it in the comment/subversion field + current_filepath = self.host.current_file() + if current_filepath: + # We match the current filename against the current session + # instead of the session where the user is saving to. 
+ current_data = build_workfile_data(api.Session) + matcher = CommentMatcher(anatomy, template_key, current_data) + comment = matcher.parse_comment(current_filepath) + if comment: + log.info("Detected subversion comment: {}".format(comment)) + self.data["comment"] = comment + subversion.set_text(comment) + + existing_comments = self.get_existing_comments() + subversion.set_values(existing_comments) + + else: + subversion.setVisible(False) + inputs_layout.addRow("Extension:", ext_combo) + inputs_layout.addRow("Preview:", preview_label) + + # Build layout + main_layout = QtWidgets.QVBoxLayout(self) + main_layout.addWidget(inputs_widget) + main_layout.addWidget(btns_widget) + + # Signal callback registration + version_input.valueChanged.connect(self.on_version_spinbox_changed) + last_version_check.stateChanged.connect( + self.on_version_checkbox_changed + ) + + subversion.text_changed.connect(self.on_comment_changed) + ext_combo.currentIndexChanged.connect(self.on_extension_changed) + + btn_ok.pressed.connect(self.on_ok_pressed) + btn_cancel.pressed.connect(self.on_cancel_pressed) + + # Allow "Enter" key to accept the save. + btn_ok.setDefault(True) + + # Force default focus to comment, some hosts didn't automatically + # apply focus to this line edit (e.g. Houdini) + subversion.setFocus() + + # Store widgets + self.btn_ok = btn_ok + + self.version_widget = version_widget + + self.version_input = version_input + self.last_version_check = last_version_check + + self.preview_label = preview_label + self.subversion = subversion + self.ext_combo = ext_combo + self._ext_delegate = ext_delegate + + self.refresh() + + def get_existing_comments(self): + matcher = CommentMatcher(self.anatomy, self.template_key, self.data) + host_extensions = set(self.host.file_extensions()) + comments = set() + if os.path.isdir(self.root): + for fname in os.listdir(self.root): + if not os.path.isfile(os.path.join(self.root, fname)): + continue + + ext = os.path.splitext(fname)[-1] + if ext not in host_extensions: + continue + + comment = matcher.parse_comment(fname) + if comment: + comments.add(comment) + + return list(comments) + + def on_version_spinbox_changed(self, value): + self.data["version"] = value + self.refresh() + + def on_version_checkbox_changed(self, _value): + self.refresh() + + def on_comment_changed(self, text): + self.data["comment"] = text + self.refresh() + + def on_extension_changed(self): + ext = self.ext_combo.currentText() + if ext == self.data["ext"]: + return + self.data["ext"] = ext + self.refresh() + + def on_ok_pressed(self): + self.result = self.work_file + self.close() + + def on_cancel_pressed(self): + self.close() + + def get_result(self): + return self.result + + def get_work_file(self): + data = copy.deepcopy(self.data) + if not data["comment"]: + data.pop("comment", None) + + data["ext"] = data["ext"][1:] + + anatomy_filled = self.anatomy.format(data) + return anatomy_filled[self.template_key]["file"] + + def refresh(self): + extensions = self.host.file_extensions() + extension = self.data["ext"] + if extension is None: + # Define saving file extension + current_file = self.host.current_file() + if current_file: + # Match the extension of current file + _, extension = os.path.splitext(current_file) + else: + extension = extensions[0] + + if extension != self.data["ext"]: + self.data["ext"] = extension + index = self.ext_combo.findText( + extension, QtCore.Qt.MatchFixedString + ) + if index >= 0: + self.ext_combo.setCurrentIndex(index) + + if not self.last_version_check.isChecked(): 
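+            # "Next Available Version" is unchecked: use the version typed
+            # in the spinbox as-is instead of computing the next free one.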
+ self.version_input.setEnabled(True) + self.data["version"] = self.version_input.value() + + work_file = self.get_work_file() + + else: + self.version_input.setEnabled(False) + + data = copy.deepcopy(self.data) + template = str(self.template) + + if not data["comment"]: + data.pop("comment", None) + + data["ext"] = data["ext"][1:] + + version = get_last_workfile_with_version( + self.root, template, data, extensions + )[1] + + if version is None: + version = 1 + else: + version += 1 + + found_valid_version = False + # Check if next version is valid version and give a chance to try + # next 100 versions + for idx in range(100): + # Store version to data + self.data["version"] = version + + work_file = self.get_work_file() + # Safety check + path = os.path.join(self.root, work_file) + if not os.path.exists(path): + found_valid_version = True + break + + # Try next version + version += 1 + # Log warning + if idx == 0: + log.warning(( + "BUG: Function `get_last_workfile_with_version` " + "didn't return last version." + )) + # Raise exception if even 100 version fallback didn't help + if not found_valid_version: + raise AssertionError( + "This is a bug. Couldn't find valid version!" + ) + + self.work_file = work_file + + path_exists = os.path.exists(os.path.join(self.root, work_file)) + + self.btn_ok.setEnabled(not path_exists) + + if path_exists: + self.preview_label.setText( + "Cannot create \"{0}\" because file exists!" + "".format(work_file) + ) + else: + self.preview_label.setText( + "{0}".format(work_file) + ) diff --git a/openpype/tools/workfiles/view.py b/openpype/tools/workfiles/view.py deleted file mode 100644 index 8e3993e4c7..0000000000 --- a/openpype/tools/workfiles/view.py +++ /dev/null @@ -1,15 +0,0 @@ -from Qt import QtWidgets, QtCore - - -class FilesView(QtWidgets.QTreeView): - doubleClickedLeft = QtCore.Signal() - doubleClickedRight = QtCore.Signal() - - def mouseDoubleClickEvent(self, event): - if event.button() == QtCore.Qt.LeftButton: - self.doubleClickedLeft.emit() - - elif event.button() == QtCore.Qt.RightButton: - self.doubleClickedRight.emit() - - return super(FilesView, self).mouseDoubleClickEvent(event) diff --git a/openpype/tools/workfiles/window.py b/openpype/tools/workfiles/window.py new file mode 100644 index 0000000000..8654a18036 --- /dev/null +++ b/openpype/tools/workfiles/window.py @@ -0,0 +1,393 @@ +import os +import datetime +from Qt import QtCore, QtWidgets + +from avalon import io + +from openpype import style +from openpype.lib import ( + get_workfile_doc, + create_workfile_doc, + save_workfile_data_to_doc, +) +from openpype.tools.utils.assets_widget import SingleSelectAssetsWidget +from openpype.tools.utils.tasks_widget import TasksWidget + +from .files_widget import FilesWidget +from .lib import TempPublishFiles, file_size_to_string + + +class SidePanelWidget(QtWidgets.QWidget): + save_clicked = QtCore.Signal() + published_workfile_message = ( + "INFO: Opened published workfiles will be stored in" + " temp directory on your machine. Current temp size: {}." 
+ ) + + def __init__(self, parent=None): + super(SidePanelWidget, self).__init__(parent) + + details_label = QtWidgets.QLabel("Details", self) + details_input = QtWidgets.QPlainTextEdit(self) + details_input.setReadOnly(True) + + artist_note_widget = QtWidgets.QWidget(self) + note_label = QtWidgets.QLabel("Artist note", artist_note_widget) + note_input = QtWidgets.QPlainTextEdit(artist_note_widget) + btn_note_save = QtWidgets.QPushButton("Save note", artist_note_widget) + + artist_note_layout = QtWidgets.QVBoxLayout(artist_note_widget) + artist_note_layout.setContentsMargins(0, 0, 0, 0) + artist_note_layout.addWidget(note_label, 0) + artist_note_layout.addWidget(note_input, 1) + artist_note_layout.addWidget( + btn_note_save, 0, alignment=QtCore.Qt.AlignRight + ) + + publish_temp_widget = QtWidgets.QWidget(self) + publish_temp_info_label = QtWidgets.QLabel( + self.published_workfile_message.format( + file_size_to_string(0) + ), + publish_temp_widget + ) + publish_temp_info_label.setWordWrap(True) + + btn_clear_temp = QtWidgets.QPushButton( + "Clear temp", publish_temp_widget + ) + + publish_temp_layout = QtWidgets.QVBoxLayout(publish_temp_widget) + publish_temp_layout.setContentsMargins(0, 0, 0, 0) + publish_temp_layout.addWidget(publish_temp_info_label, 0) + publish_temp_layout.addWidget( + btn_clear_temp, 0, alignment=QtCore.Qt.AlignRight + ) + + main_layout = QtWidgets.QVBoxLayout(self) + main_layout.setContentsMargins(0, 0, 0, 0) + main_layout.addWidget(details_label, 0) + main_layout.addWidget(details_input, 1) + main_layout.addWidget(artist_note_widget, 1) + main_layout.addWidget(publish_temp_widget, 0) + + note_input.textChanged.connect(self._on_note_change) + btn_note_save.clicked.connect(self._on_save_click) + btn_clear_temp.clicked.connect(self._on_clear_temp_click) + + self._details_input = details_input + self._artist_note_widget = artist_note_widget + self._note_input = note_input + self._btn_note_save = btn_note_save + + self._publish_temp_info_label = publish_temp_info_label + self._publish_temp_widget = publish_temp_widget + + self._orig_note = "" + self._workfile_doc = None + + publish_temp_widget.setVisible(False) + + def set_published_visible(self, published_visible): + self._artist_note_widget.setVisible(not published_visible) + self._publish_temp_widget.setVisible(published_visible) + if published_visible: + self.refresh_publish_temp_sizes() + + def refresh_publish_temp_sizes(self): + temp_publish_files = TempPublishFiles() + text = self.published_workfile_message.format( + file_size_to_string(temp_publish_files.size) + ) + self._publish_temp_info_label.setText(text) + + def _on_clear_temp_click(self): + temp_publish_files = TempPublishFiles() + temp_publish_files.clear() + self.refresh_publish_temp_sizes() + + def _on_note_change(self): + text = self._note_input.toPlainText() + self._btn_note_save.setEnabled(self._orig_note != text) + + def _on_save_click(self): + self._orig_note = self._note_input.toPlainText() + self._on_note_change() + self.save_clicked.emit() + + def set_context(self, asset_id, task_name, filepath, workfile_doc): + # Check if asset, task and file are selected + # NOTE workfile document is not requirement + enabled = bool(asset_id) and bool(task_name) and bool(filepath) + + self._details_input.setEnabled(enabled) + self._note_input.setEnabled(enabled) + self._btn_note_save.setEnabled(enabled) + + # Make sure workfile doc is overridden + self._workfile_doc = workfile_doc + # Disable inputs and remove texts if any required arguments are missing + if 
not enabled: + self._orig_note = "" + self._details_input.setPlainText("") + self._note_input.setPlainText("") + return + + orig_note = "" + if workfile_doc: + orig_note = workfile_doc["data"].get("note") or orig_note + + self._orig_note = orig_note + self._note_input.setPlainText(orig_note) + # Set as empty string + self._details_input.setPlainText("") + + filestat = os.stat(filepath) + size_value = file_size_to_string(filestat.st_size) + + # Append html string + datetime_format = "%b %d %Y %H:%M:%S" + creation_time = datetime.datetime.fromtimestamp(filestat.st_ctime) + modification_time = datetime.datetime.fromtimestamp(filestat.st_mtime) + lines = ( + "Size:", + size_value, + "Created:", + creation_time.strftime(datetime_format), + "Modified:", + modification_time.strftime(datetime_format) + ) + self._details_input.appendHtml("
".join(lines)) + + def get_workfile_data(self): + data = { + "note": self._note_input.toPlainText() + } + return self._workfile_doc, data + + +class Window(QtWidgets.QMainWindow): + """Work Files Window""" + title = "Work Files" + + def __init__(self, parent=None): + super(Window, self).__init__(parent=parent) + self.setWindowTitle(self.title) + window_flags = QtCore.Qt.Window | QtCore.Qt.WindowCloseButtonHint + if not parent: + window_flags |= QtCore.Qt.WindowStaysOnTopHint + self.setWindowFlags(window_flags) + + # Create pages widget and set it as central widget + pages_widget = QtWidgets.QStackedWidget(self) + self.setCentralWidget(pages_widget) + + home_page_widget = QtWidgets.QWidget(pages_widget) + home_body_widget = QtWidgets.QWidget(home_page_widget) + + assets_widget = SingleSelectAssetsWidget(io, parent=home_body_widget) + assets_widget.set_current_asset_btn_visibility(True) + + tasks_widget = TasksWidget(io, home_body_widget) + files_widget = FilesWidget(home_body_widget) + side_panel = SidePanelWidget(home_body_widget) + + pages_widget.addWidget(home_page_widget) + + # Build home + home_page_layout = QtWidgets.QVBoxLayout(home_page_widget) + home_page_layout.addWidget(home_body_widget) + + # Build home - body + body_layout = QtWidgets.QVBoxLayout(home_body_widget) + split_widget = QtWidgets.QSplitter(home_body_widget) + split_widget.addWidget(assets_widget) + split_widget.addWidget(tasks_widget) + split_widget.addWidget(files_widget) + split_widget.addWidget(side_panel) + split_widget.setSizes([255, 160, 455, 175]) + + body_layout.addWidget(split_widget) + + # Add top margin for tasks to align it visually with files as + # the files widget has a filter field which tasks does not. + tasks_widget.setContentsMargins(0, 32, 0, 0) + + # Set context after asset widget is refreshed + # - to do so it is necessary to wait until refresh is done + set_context_timer = QtCore.QTimer() + set_context_timer.setInterval(100) + + # Connect signals + set_context_timer.timeout.connect(self._on_context_set_timeout) + assets_widget.selection_changed.connect(self._on_asset_changed) + tasks_widget.task_changed.connect(self._on_task_changed) + files_widget.file_selected.connect(self.on_file_select) + files_widget.workfile_created.connect(self.on_workfile_create) + files_widget.file_opened.connect(self._on_file_opened) + files_widget.publish_file_viewed.connect( + self._on_publish_file_viewed + ) + files_widget.published_visible_changed.connect( + self._on_published_change + ) + side_panel.save_clicked.connect(self.on_side_panel_save) + + self._set_context_timer = set_context_timer + self.home_page_widget = home_page_widget + self.pages_widget = pages_widget + self.home_body_widget = home_body_widget + self.split_widget = split_widget + + self.assets_widget = assets_widget + self.tasks_widget = tasks_widget + self.files_widget = files_widget + self.side_panel = side_panel + + # Force focus on the open button by default, required for Houdini. + files_widget.setFocus() + + self.resize(1200, 600) + + self._first_show = True + self._context_to_set = None + + def showEvent(self, event): + super(Window, self).showEvent(event) + if self._first_show: + self._first_show = False + self.refresh() + self.setStyleSheet(style.load_stylesheet()) + + def keyPressEvent(self, event): + """Custom keyPressEvent. + + Override keyPressEvent to do nothing so that Maya's panels won't + take focus when pressing "SHIFT" whilst mouse is over viewport or + outliner. 
This way users don't accidentally perform Maya commands + whilst trying to name an instance. + + """ + + def set_save_enabled(self, enabled): + self.files_widget.set_save_enabled(enabled) + + def on_file_select(self, filepath): + asset_id = self.assets_widget.get_selected_asset_id() + task_name = self.tasks_widget.get_selected_task_name() + + workfile_doc = None + if asset_id and task_name and filepath: + filename = os.path.split(filepath)[1] + workfile_doc = get_workfile_doc( + asset_id, task_name, filename, io + ) + self.side_panel.set_context( + asset_id, task_name, filepath, workfile_doc + ) + + def on_workfile_create(self, filepath): + self._create_workfile_doc(filepath) + + def _on_file_opened(self): + self.close() + + def _on_publish_file_viewed(self): + self.side_panel.refresh_publish_temp_sizes() + + def _on_published_change(self, visible): + self.side_panel.set_published_visible(visible) + + def on_side_panel_save(self): + workfile_doc, data = self.side_panel.get_workfile_data() + if not workfile_doc: + filepath = self.files_widget._get_selected_filepath() + self._create_workfile_doc(filepath, force=True) + workfile_doc = self._get_current_workfile_doc() + + save_workfile_data_to_doc(workfile_doc, data, io) + + def _get_current_workfile_doc(self, filepath=None): + if filepath is None: + filepath = self.files_widget._get_selected_filepath() + task_name = self.tasks_widget.get_selected_task_name() + asset_id = self.assets_widget.get_selected_asset_id() + if not task_name or not asset_id or not filepath: + return + + filename = os.path.split(filepath)[1] + return get_workfile_doc( + asset_id, task_name, filename, io + ) + + def _create_workfile_doc(self, filepath, force=False): + workfile_doc = None + if not force: + workfile_doc = self._get_current_workfile_doc(filepath) + + if not workfile_doc: + workdir, filename = os.path.split(filepath) + asset_id = self.assets_widget.get_selected_asset_id() + asset_doc = io.find_one({"_id": asset_id}) + task_name = self.tasks_widget.get_selected_task_name() + create_workfile_doc(asset_doc, task_name, filename, workdir, io) + + def refresh(self): + # Refresh asset widget + self.assets_widget.refresh() + + self._on_task_changed() + + def set_context(self, context): + self._context_to_set = context + self._set_context_timer.start() + + def _on_context_set_timeout(self): + if self._context_to_set is None: + self._set_context_timer.stop() + return + + if self.assets_widget.refreshing: + return + + self._context_to_set, context = None, self._context_to_set + if "asset" in context: + asset_doc = io.find_one( + { + "name": context["asset"], + "type": "asset" + }, + {"_id": 1} + ) or {} + asset_id = asset_doc.get("_id") + # Select the asset + self.assets_widget.select_asset(asset_id) + self.tasks_widget.set_asset_id(asset_id) + + if "task" in context: + self.tasks_widget.select_task_name(context["task"]) + self._on_task_changed() + + def _on_asset_changed(self): + asset_id = self.assets_widget.get_selected_asset_id() + if asset_id: + self.tasks_widget.setEnabled(True) + else: + # Force disable the other widgets if no + # active selection + self.tasks_widget.setEnabled(False) + self.files_widget.setEnabled(False) + + self.tasks_widget.set_asset_id(asset_id) + + def _on_task_changed(self): + asset_id = self.assets_widget.get_selected_asset_id() + task_name = self.tasks_widget.get_selected_task_name() + task_type = self.tasks_widget.get_selected_task_type() + + asset_is_valid = asset_id is not None + self.tasks_widget.setEnabled(asset_is_valid) + + 
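+ # Files can only be browsed when both an asset and a task are selected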
self.files_widget.setEnabled(bool(task_name) and asset_is_valid) + self.files_widget.set_asset_task(asset_id, task_name, task_type) + self.files_widget.refresh() diff --git a/openpype/version.py b/openpype/version.py index 2390309e76..6d55672aca 100644 --- a/openpype/version.py +++ b/openpype/version.py @@ -1,3 +1,3 @@ # -*- coding: utf-8 -*- """Package declaring Pype version.""" -__version__ = "3.9.2-nightly.1" +__version__ = "3.9.2-nightly.3" diff --git a/openpype/widgets/attribute_defs/widgets.py b/openpype/widgets/attribute_defs/widgets.py index a6f1b8d6c9..23f025967d 100644 --- a/openpype/widgets/attribute_defs/widgets.py +++ b/openpype/widgets/attribute_defs/widgets.py @@ -2,7 +2,7 @@ import uuid from Qt import QtWidgets, QtCore -from openpype.pipeline.lib import ( +from openpype.lib.attribute_definitions import ( AbtractAttrDef, UnknownDef, NumberDef, diff --git a/poetry.lock b/poetry.lock index ee7b839b8d..ed2b0dd3c2 100644 --- a/poetry.lock +++ b/poetry.lock @@ -11,7 +11,7 @@ develop = false type = "git" url = "https://github.com/pypeclub/acre.git" reference = "master" -resolved_reference = "55a7c331e6dc5f81639af50ca4a8cc9d73e9273d" +resolved_reference = "126f7a188cfe36718f707f42ebbc597e86aa86c3" [[package]] name = "aiohttp" diff --git a/pyproject.toml b/pyproject.toml index 90e264d456..479cd731fe 100644 --- a/pyproject.toml +++ b/pyproject.toml @@ -1,6 +1,6 @@ [tool.poetry] name = "OpenPype" -version = "3.9.2-nightly.1" # OpenPype +version = "3.9.2-nightly.3" # OpenPype description = "Open VFX and Animation pipeline with support." authors = ["OpenPype Team "] license = "MIT License" diff --git a/repos/avalon-core b/repos/avalon-core index 64491fbbcf..2fa14cea6f 160000 --- a/repos/avalon-core +++ b/repos/avalon-core @@ -1 +1 @@ -Subproject commit 64491fbbcf89ba2a0b3a20d67d7486c6142232b3 +Subproject commit 2fa14cea6f6a9d86eec70bbb96860cbe4c75c8eb diff --git a/website/docs/dev_requirements.md b/website/docs/dev_requirements.md index bbf3b1fb5b..6c87054ba0 100644 --- a/website/docs/dev_requirements.md +++ b/website/docs/dev_requirements.md @@ -33,6 +33,8 @@ It can be built and ran on all common platforms. We develop and test on the foll ## Database +Database version should be at least **MongoDB 4.4**. + Pype needs site-wide installation of **MongoDB**. It should be installed on reliable server, that all workstations (and possibly render nodes) can connect. This server holds **Avalon** database that is at the core of everything