diff --git a/CHANGELOG.md b/CHANGELOG.md
index f20276cbd7..d6bbef702a 100644
--- a/CHANGELOG.md
+++ b/CHANGELOG.md
@@ -1,24 +1,76 @@
 # Changelog

-## [3.9.2-nightly.1](https://github.com/pypeclub/OpenPype/tree/HEAD)
+## [3.9.2-nightly.4](https://github.com/pypeclub/OpenPype/tree/HEAD)

 [Full Changelog](https://github.com/pypeclub/OpenPype/compare/3.9.1...HEAD)

+### 📖 Documentation
+
+- Documentation: Added mention of adding My Drive as a root [\#2999](https://github.com/pypeclub/OpenPype/pull/2999)
+- Docs: Added MongoDB requirements [\#2951](https://github.com/pypeclub/OpenPype/pull/2951)
+- Documentation: New publisher develop docs [\#2896](https://github.com/pypeclub/OpenPype/pull/2896)
+
+**🆕 New features**
+
+- nuke: bypass baking [\#2992](https://github.com/pypeclub/OpenPype/pull/2992)
+- Multiverse: Initial Support [\#2908](https://github.com/pypeclub/OpenPype/pull/2908)
+
 **🚀 Enhancements**

-- CI: change the version bump logic [\#2919](https://github.com/pypeclub/OpenPype/pull/2919)
-- Deadline: Add headless argument [\#2916](https://github.com/pypeclub/OpenPype/pull/2916)
-- Ftrack: Fill workfile in custom attribute [\#2906](https://github.com/pypeclub/OpenPype/pull/2906)
-- Settings UI: Add simple tooltips for settings entities [\#2901](https://github.com/pypeclub/OpenPype/pull/2901)
+- Photoshop: create image without instance [\#3001](https://github.com/pypeclub/OpenPype/pull/3001)
+- TVPaint: Render scene family [\#3000](https://github.com/pypeclub/OpenPype/pull/3000)
+- Nuke: ReviewDataMov Read RAW attribute [\#2985](https://github.com/pypeclub/OpenPype/pull/2985)
+- General: `METADATA\_KEYS` constant as `frozenset` for optimal immutable lookup [\#2980](https://github.com/pypeclub/OpenPype/pull/2980)
+- General: Tools with host filters [\#2975](https://github.com/pypeclub/OpenPype/pull/2975)
+- Hero versions: Use custom templates [\#2967](https://github.com/pypeclub/OpenPype/pull/2967)
+- Slack: Added configurable maximum file size of review upload to Slack [\#2945](https://github.com/pypeclub/OpenPype/pull/2945)
+- NewPublisher: Prepared implementation of optional pyblish plugin [\#2943](https://github.com/pypeclub/OpenPype/pull/2943)
+- TVPaint: Extractor to convert PNG into EXR [\#2942](https://github.com/pypeclub/OpenPype/pull/2942)
+- Workfiles: Open published workfiles [\#2925](https://github.com/pypeclub/OpenPype/pull/2925)
+- General: Default modules loaded dynamically [\#2923](https://github.com/pypeclub/OpenPype/pull/2923)
+- Nuke: Add no-audio Tag [\#2911](https://github.com/pypeclub/OpenPype/pull/2911)
+- Nuke: improving readability [\#2903](https://github.com/pypeclub/OpenPype/pull/2903)

 **🐛 Bug fixes**

-- Ftrack: Missing Ftrack id after editorial publish [\#2905](https://github.com/pypeclub/OpenPype/pull/2905)
-- AfterEffects: Fix rendering for single frame in DL [\#2875](https://github.com/pypeclub/OpenPype/pull/2875)
+- Hosts: Remove path existence checks in 'add\_implementation\_envs' [\#3004](https://github.com/pypeclub/OpenPype/pull/3004)
+- Fix - remove doubled dot in workfile created from template [\#2998](https://github.com/pypeclub/OpenPype/pull/2998)
+- PS: fix renaming subset incorrectly in PS [\#2991](https://github.com/pypeclub/OpenPype/pull/2991)
+- Fix: Disable setuptools auto discovery [\#2990](https://github.com/pypeclub/OpenPype/pull/2990)
+- AEL: fix opening existing workfile if no scene opened [\#2989](https://github.com/pypeclub/OpenPype/pull/2989)
+- Maya: Don't do hardlinks on windows for look publishing [\#2986](https://github.com/pypeclub/OpenPype/pull/2986)
+- Settings UI: Fix version completer on linux [\#2981](https://github.com/pypeclub/OpenPype/pull/2981)
+- Photoshop: Fix creation of subset names in PS review and workfile [\#2969](https://github.com/pypeclub/OpenPype/pull/2969)
+- Slack: Added default for review\_upload\_limit for Slack [\#2965](https://github.com/pypeclub/OpenPype/pull/2965)
+- General: OIIO conversion for ffmeg can handle sequences [\#2958](https://github.com/pypeclub/OpenPype/pull/2958)
+- Settings: Conditional dictionary avoid invalid logs [\#2956](https://github.com/pypeclub/OpenPype/pull/2956)
+- General: Smaller fixes and typos [\#2950](https://github.com/pypeclub/OpenPype/pull/2950)
+- LogViewer: Don't refresh on initialization [\#2949](https://github.com/pypeclub/OpenPype/pull/2949)
+- nuke: python3 compatibility issue with `iteritems` [\#2948](https://github.com/pypeclub/OpenPype/pull/2948)
+- General: anatomy data with correct task short key [\#2947](https://github.com/pypeclub/OpenPype/pull/2947)
+- SceneInventory: Fix imports in UI [\#2944](https://github.com/pypeclub/OpenPype/pull/2944)
+- Slack: add generic exception [\#2941](https://github.com/pypeclub/OpenPype/pull/2941)
+- General: Python specific vendor paths on env injection [\#2939](https://github.com/pypeclub/OpenPype/pull/2939)
+- General: More fail safe delete old versions [\#2936](https://github.com/pypeclub/OpenPype/pull/2936)
+- Settings UI: Collapsed of collapsible wrapper works as expected [\#2934](https://github.com/pypeclub/OpenPype/pull/2934)
+- Maya: Do not pass `set` to maya commands \(fixes support for older maya versions\) [\#2932](https://github.com/pypeclub/OpenPype/pull/2932)
+- General: Don't print log record on OSError [\#2926](https://github.com/pypeclub/OpenPype/pull/2926)
+- Flame: centos related debugging [\#2922](https://github.com/pypeclub/OpenPype/pull/2922)

 **🔀 Refactored code**

-- General: Move formatting and workfile functions [\#2914](https://github.com/pypeclub/OpenPype/pull/2914)
+- General: Move plugins register and discover [\#2935](https://github.com/pypeclub/OpenPype/pull/2935)
+- General: Move Attribute Definitions from pipeline [\#2931](https://github.com/pypeclub/OpenPype/pull/2931)
+- General: Removed silo references and terminal splash [\#2927](https://github.com/pypeclub/OpenPype/pull/2927)
+- General: Move pipeline constants to OpenPype [\#2918](https://github.com/pypeclub/OpenPype/pull/2918)
+- General: Move remaining plugins from avalon [\#2912](https://github.com/pypeclub/OpenPype/pull/2912)
+
+**Merged pull requests:**
+
+- Bump paramiko from 2.9.2 to 2.10.1 [\#2973](https://github.com/pypeclub/OpenPype/pull/2973)
+- Bump minimist from 1.2.5 to 1.2.6 in /website [\#2954](https://github.com/pypeclub/OpenPype/pull/2954)
+- Bump node-forge from 1.2.1 to 1.3.0 in /website [\#2953](https://github.com/pypeclub/OpenPype/pull/2953)
+- Maya - added transparency into review creator [\#2952](https://github.com/pypeclub/OpenPype/pull/2952)

 ## [3.9.1](https://github.com/pypeclub/OpenPype/tree/3.9.1) (2022-03-18)
@@ -52,10 +104,6 @@

 [Full Changelog](https://github.com/pypeclub/OpenPype/compare/CI/3.9.0-nightly.9...3.9.0)

-**Deprecated:**
-
-- AssetCreator: Remove the tool [\#2845](https://github.com/pypeclub/OpenPype/pull/2845)
-
 ### 📖 Documentation

 - Documentation: Change Photoshop & AfterEffects plugin path [\#2878](https://github.com/pypeclub/OpenPype/pull/2878)
@@ -66,14 +114,6 @@

 - NewPublisher: Descriptions and Icons in creator dialog [\#2867](https://github.com/pypeclub/OpenPype/pull/2867)
 - NewPublisher: Changing task on publishing instance [\#2863](https://github.com/pypeclub/OpenPype/pull/2863)
 - TrayPublisher: Choose project widget is more clear [\#2859](https://github.com/pypeclub/OpenPype/pull/2859)
-- New: Validation exceptions [\#2841](https://github.com/pypeclub/OpenPype/pull/2841)
-- Maya: add loaded containers to published instance [\#2837](https://github.com/pypeclub/OpenPype/pull/2837)
-- Ftrack: Can sync fps as string [\#2836](https://github.com/pypeclub/OpenPype/pull/2836)
-- General: Custom function for find executable [\#2822](https://github.com/pypeclub/OpenPype/pull/2822)
-- General: Color dialog UI fixes [\#2817](https://github.com/pypeclub/OpenPype/pull/2817)
-- global: letter box calculated on output as last process [\#2812](https://github.com/pypeclub/OpenPype/pull/2812)
-- Nuke: adding Reformat to baking mov plugin [\#2811](https://github.com/pypeclub/OpenPype/pull/2811)
-- Manager: Update all to latest button [\#2805](https://github.com/pypeclub/OpenPype/pull/2805)

 **🐛 Bug fixes**
@@ -87,31 +127,11 @@

 - General: Fix getattr clalback on dynamic modules [\#2855](https://github.com/pypeclub/OpenPype/pull/2855)
 - Nuke: slate resolution to input video resolution [\#2853](https://github.com/pypeclub/OpenPype/pull/2853)
 - WebPublisher: Fix username stored in DB [\#2852](https://github.com/pypeclub/OpenPype/pull/2852)
-- WebPublisher: Fix wrong number of frames for video file [\#2851](https://github.com/pypeclub/OpenPype/pull/2851)
-- Nuke: Fix family test in validate\_write\_legacy to work with stillImage [\#2847](https://github.com/pypeclub/OpenPype/pull/2847)
-- Nuke: fix multiple baking profile farm publishing [\#2842](https://github.com/pypeclub/OpenPype/pull/2842)
-- Blender: Fixed parameters for FBX export of the camera [\#2840](https://github.com/pypeclub/OpenPype/pull/2840)
-- Maya: Stop creation of reviews for Cryptomattes [\#2832](https://github.com/pypeclub/OpenPype/pull/2832)
-- Deadline: Remove recreated event [\#2828](https://github.com/pypeclub/OpenPype/pull/2828)
-- Deadline: Added missing events folder [\#2827](https://github.com/pypeclub/OpenPype/pull/2827)
-- Maya: Deformer node ids validation plugin [\#2826](https://github.com/pypeclub/OpenPype/pull/2826)
-- Settings: Missing document with OP versions may break start of OpenPype [\#2825](https://github.com/pypeclub/OpenPype/pull/2825)
-- Deadline: more detailed temp file name for environment json [\#2824](https://github.com/pypeclub/OpenPype/pull/2824)
-- General: Host name was formed from obsolete code [\#2821](https://github.com/pypeclub/OpenPype/pull/2821)
-- Settings UI: Fix "Apply from" action [\#2820](https://github.com/pypeclub/OpenPype/pull/2820)
-- Ftrack: Job killer with missing user [\#2819](https://github.com/pypeclub/OpenPype/pull/2819)
-- Nuke: Use AVALON\_APP to get value for "app" key [\#2818](https://github.com/pypeclub/OpenPype/pull/2818)
-- StandalonePublisher: use dynamic groups in subset names [\#2816](https://github.com/pypeclub/OpenPype/pull/2816)

 **🔀 Refactored code**

 - Refactor: move webserver tool to openpype [\#2876](https://github.com/pypeclub/OpenPype/pull/2876)
 - General: Move create logic from avalon to OpenPype [\#2854](https://github.com/pypeclub/OpenPype/pull/2854)
-- General: Add vendors from avalon [\#2848](https://github.com/pypeclub/OpenPype/pull/2848)
-- General: Basic event system [\#2846](https://github.com/pypeclub/OpenPype/pull/2846)
-- General: Move change context functions [\#2839](https://github.com/pypeclub/OpenPype/pull/2839)
-- Tools: Don't use avalon tools code [\#2829](https://github.com/pypeclub/OpenPype/pull/2829)
-- Move Unreal Implementation to OpenPype [\#2823](https://github.com/pypeclub/OpenPype/pull/2823)

 ## [3.8.2](https://github.com/pypeclub/OpenPype/tree/3.8.2) (2022-02-07)
diff --git a/openpype/__init__.py b/openpype/__init__.py
index 8b94b2dc3f..7fc7e63e61 100644
--- a/openpype/__init__.py
+++ b/openpype/__init__.py
@@ -2,20 +2,16 @@
 """Pype module."""
 import os
 import platform
-import functools
 import logging

 from .settings import get_project_settings
 from .lib import (
     Anatomy,
     filter_pyblish_plugins,
-    set_plugin_attributes_from_settings,
     change_timer_to_current_context,
     register_event_callback,
 )

-pyblish = avalon = _original_discover = None
-
 log = logging.getLogger(__name__)
@@ -27,60 +23,17 @@
 PUBLISH_PATH = os.path.join(PLUGINS_DIR, "publish")
 LOAD_PATH = os.path.join(PLUGINS_DIR, "load")


-def import_wrapper(func):
-    """Wrap module imports to specific functions."""
-    @functools.wraps(func)
-    def decorated(*args, **kwargs):
-        global pyblish
-        global avalon
-        global _original_discover
-        if pyblish is None:
-            from pyblish import api as pyblish
-            from avalon import api as avalon
-
-            # we are monkey patching `avalon.api.discover()` to allow us to
-            # load plugin presets on plugins being discovered by avalon.
-            # Little bit of hacking, but it allows us to add out own features
-            # without need to modify upstream code.
-
-            _original_discover = avalon.discover
-
-        return func(*args, **kwargs)
-
-    return decorated
-
-
-@import_wrapper
-def patched_discover(superclass):
-    """Patch `avalon.api.discover()`.
-
-    Monkey patched version of :func:`avalon.api.discover()`. It allows
-    us to load presets on plugins being discovered.
-    """
-    # run original discover and get plugins
-    plugins = _original_discover(superclass)
-    filtered_plugins = [
-        plugin
-        for plugin in plugins
-        if issubclass(plugin, superclass)
-    ]
-
-    set_plugin_attributes_from_settings(filtered_plugins, superclass)
-
-    return filtered_plugins
-
-
-@import_wrapper
 def install():
-    """Install Pype to Avalon."""
+    """Install OpenPype to Avalon."""
+    import avalon.api
+    import pyblish.api
     from pyblish.lib import MessageHandler

     from openpype.modules import load_modules
     from openpype.pipeline import (
-        LegacyCreator,
         register_loader_plugin_path,
         register_inventory_action,
+        register_creator_plugin_path,
     )
-    from avalon import pipeline

     # Make sure modules are loaded
     load_modules()
@@ -93,8 +46,8 @@ def install():
     MessageHandler.emit = modified_emit

     log.info("Registering global plug-ins..")
-    pyblish.register_plugin_path(PUBLISH_PATH)
-    pyblish.register_discovery_filter(filter_pyblish_plugins)
+    pyblish.api.register_plugin_path(PUBLISH_PATH)
+    pyblish.api.register_discovery_filter(filter_pyblish_plugins)
     register_loader_plugin_path(LOAD_PATH)

     project_name = os.environ.get("AVALON_PROJECT")
@@ -103,7 +56,7 @@ def install():
     if project_name:
         anatomy = Anatomy(project_name)
         anatomy.set_root_environments()
-        avalon.register_root(anatomy.roots)
+        avalon.api.register_root(anatomy.roots)

         project_settings = get_project_settings(project_name)
         platform_name = platform.system().lower()
@@ -122,17 +75,14 @@ def install():
             if not path or not os.path.exists(path):
                 continue

-            pyblish.register_plugin_path(path)
+            pyblish.api.register_plugin_path(path)
             register_loader_plugin_path(path)
-            avalon.register_plugin_path(LegacyCreator, path)
+            register_creator_plugin_path(path)
             register_inventory_action(path)

     # apply monkey patched discover to original one
     log.info("Patching discovery")
-    avalon.discover = patched_discover
-    pipeline.discover = patched_discover
-
     register_event_callback("taskChanged", _on_task_change)
@@ -140,16 +90,13 @@ def _on_task_change():
     change_timer_to_current_context()


-@import_wrapper
 def uninstall():
     """Uninstall Pype from Avalon."""
+    import pyblish.api
     from openpype.pipeline import deregister_loader_plugin_path

     log.info("Deregistering global plug-ins..")
-    pyblish.deregister_plugin_path(PUBLISH_PATH)
-    pyblish.deregister_discovery_filter(filter_pyblish_plugins)
+    pyblish.api.deregister_plugin_path(PUBLISH_PATH)
+    pyblish.api.deregister_discovery_filter(filter_pyblish_plugins)
     deregister_loader_plugin_path(LOAD_PATH)
     log.info("Global plug-ins unregistred")
-
-    # restore original discover
-    avalon.discover = _original_discover
diff --git a/openpype/hooks/pre_python_2_prelaunch.py b/openpype/hooks/pre_python_2_prelaunch.py
deleted file mode 100644
index 84272d2e5d..0000000000
--- a/openpype/hooks/pre_python_2_prelaunch.py
+++ /dev/null
@@ -1,35 +0,0 @@
-import os
-from openpype.lib import PreLaunchHook
-
-
-class PrePython2Vendor(PreLaunchHook):
-    """Prepend python 2 dependencies for py2 hosts."""
-    order = 10
-
-    def execute(self):
-        if not self.application.use_python_2:
-            return
-
-        # Prepare vendor dir path
-        self.log.info("adding global python 2 vendor")
-        pype_root = os.getenv("OPENPYPE_REPOS_ROOT")
-        python_2_vendor = os.path.join(
-            pype_root,
-            "openpype",
-            "vendor",
-            "python",
-            "python_2"
-        )
-
-        # Add Python 2 modules
-        python_paths = [
-            python_2_vendor
-        ]
-
-        # Load PYTHONPATH from current launch context
-        python_path = self.launch_context.env.get("PYTHONPATH")
-        if python_path:
-            python_paths.append(python_path)
-
-        # Set new PYTHONPATH to launch context environments
-        self.launch_context.env["PYTHONPATH"] = os.pathsep.join(python_paths)
diff --git a/openpype/hosts/aftereffects/api/pipeline.py b/openpype/hosts/aftereffects/api/pipeline.py
index bb9affc9b6..94bc369856 100644
--- a/openpype/hosts/aftereffects/api/pipeline.py
+++ b/openpype/hosts/aftereffects/api/pipeline.py
@@ -5,15 +5,15 @@
 from Qt import QtWidgets
 from bson.objectid import ObjectId

 import pyblish.api
-import avalon.api
 from avalon import io

 from openpype import lib
 from openpype.api import Logger
 from openpype.pipeline import (
-    LegacyCreator,
     register_loader_plugin_path,
+    register_creator_plugin_path,
     deregister_loader_plugin_path,
+    deregister_creator_plugin_path,
     AVALON_CONTAINER_ID,
 )
 import openpype.hosts.aftereffects
@@ -73,7 +73,7 @@ def install():
     pyblish.api.register_plugin_path(PUBLISH_PATH)
     register_loader_plugin_path(LOAD_PATH)
-    avalon.api.register_plugin_path(LegacyCreator, CREATE_PATH)
+    register_creator_plugin_path(CREATE_PATH)
     log.info(PUBLISH_PATH)

     pyblish.api.register_callback(
@@ -86,7 +86,7 @@ def install():
 def uninstall():
     pyblish.api.deregister_plugin_path(PUBLISH_PATH)
     deregister_loader_plugin_path(LOAD_PATH)
-    avalon.api.deregister_plugin_path(LegacyCreator, CREATE_PATH)
+    deregister_creator_plugin_path(CREATE_PATH)


 def on_pyblish_instance_toggled(instance, old_value, new_value):
diff --git a/openpype/hosts/aftereffects/api/workio.py b/openpype/hosts/aftereffects/api/workio.py
index 5a8f86ead5..70815bda6b 100644
--- a/openpype/hosts/aftereffects/api/workio.py
+++ b/openpype/hosts/aftereffects/api/workio.py
@@ -5,14 +5,6 @@
 from openpype.pipeline import HOST_WORKFILE_EXTENSIONS
 from .launch_logic import get_stub

-def _active_document():
-    document_name = get_stub().get_active_document_name()
-    if not document_name:
-        return None
-
-    return document_name
-
-
 def file_extensions():
     return HOST_WORKFILE_EXTENSIONS["aftereffects"]
@@ -39,7 +31,8 @@ def current_file():
         full_name = get_stub().get_active_document_full_name()
         if full_name and full_name != "null":
             return os.path.normpath(full_name).replace("\\", "/")
-    except Exception:
+    except ValueError:
+        print("Nothing opened")
         pass

     return None
@@ -47,3 +40,15 @@ def current_file():
 def work_root(session):
     return os.path.normpath(session["AVALON_WORKDIR"]).replace("\\", "/")
+
+
+def _active_document():
+    # TODO merge with current_file - even in extension
+    document_name = None
+    try:
+        document_name = get_stub().get_active_document_name()
+    except ValueError:
+        print("Nothing opened")
+        pass
+
+    return document_name
\ No newline at end of file
diff --git a/openpype/hosts/aftereffects/plugins/create/create_render.py b/openpype/hosts/aftereffects/plugins/create/create_render.py
index 41efb4b0ba..831085a5f1 100644
--- a/openpype/hosts/aftereffects/plugins/create/create_render.py
+++ b/openpype/hosts/aftereffects/plugins/create/create_render.py
@@ -1,12 +1,14 @@
-from openpype.pipeline import create
-from openpype.pipeline import CreatorError
+from openpype.pipeline import (
+    CreatorError,
+    LegacyCreator
+)
 from openpype.hosts.aftereffects.api import (
     get_stub,
     list_instances
 )


-class CreateRender(create.LegacyCreator):
+class CreateRender(LegacyCreator):
     """Render folder for publish.

     Creates subsets in format 'familyTaskSubsetname',
diff --git a/openpype/hosts/blender/__init__.py b/openpype/hosts/blender/__init__.py
index 3081d3c9ba..0f27882c7e 100644
--- a/openpype/hosts/blender/__init__.py
+++ b/openpype/hosts/blender/__init__.py
@@ -29,12 +29,12 @@ def add_implementation_envs(env, _app):
         env.get("OPENPYPE_BLENDER_USER_SCRIPTS") or ""
     )
     for path in openpype_blender_user_scripts.split(os.pathsep):
-        if path and os.path.exists(path):
+        if path:
             previous_user_scripts.add(os.path.normpath(path))

     blender_user_scripts = env.get("BLENDER_USER_SCRIPTS") or ""
     for path in blender_user_scripts.split(os.pathsep):
-        if path and os.path.exists(path):
+        if path:
             previous_user_scripts.add(os.path.normpath(path))

     # Remove implementation path from user script paths as is set to
diff --git a/openpype/hosts/blender/api/pipeline.py b/openpype/hosts/blender/api/pipeline.py
index 8c580cf214..b9ec2cfea4 100644
--- a/openpype/hosts/blender/api/pipeline.py
+++ b/openpype/hosts/blender/api/pipeline.py
@@ -14,9 +14,10 @@
 import avalon.api
 from avalon import io, schema

 from openpype.pipeline import (
-    LegacyCreator,
     register_loader_plugin_path,
+    register_creator_plugin_path,
     deregister_loader_plugin_path,
+    deregister_creator_plugin_path,
     AVALON_CONTAINER_ID,
 )
 from openpype.api import Logger
@@ -54,7 +55,7 @@ def install():
     pyblish.api.register_plugin_path(str(PUBLISH_PATH))
     register_loader_plugin_path(str(LOAD_PATH))
-    avalon.api.register_plugin_path(LegacyCreator, str(CREATE_PATH))
+    register_creator_plugin_path(str(CREATE_PATH))

     lib.append_user_scripts()
@@ -76,7 +77,7 @@ def uninstall():
     pyblish.api.deregister_plugin_path(str(PUBLISH_PATH))
     deregister_loader_plugin_path(str(LOAD_PATH))
-    avalon.api.deregister_plugin_path(LegacyCreator, str(CREATE_PATH))
+    deregister_creator_plugin_path(str(CREATE_PATH))

     if not IS_HEADLESS:
         ops.unregister()
diff --git a/openpype/hosts/flame/api/lib.py b/openpype/hosts/flame/api/lib.py
index 74d9e7607a..aa2cfcb96d 100644
--- a/openpype/hosts/flame/api/lib.py
+++ b/openpype/hosts/flame/api/lib.py
@@ -18,6 +18,7 @@
 log = Logger.get_logger(__name__)

 FRAME_PATTERN = re.compile(r"[\._](\d+)[\.]")

+
 class CTX:
     # singleton used for passing data between api modules
     app_framework = None
@@ -538,9 +539,17 @@ def get_segment_attributes(segment):

     # head and tail with forward compatibility
     if segment.head:
-        clip_data["segment_head"] = int(segment.head)
+        # `infinite` can be also returned
+        if isinstance(segment.head, str):
+            clip_data["segment_head"] = 0
+        else:
+            clip_data["segment_head"] = int(segment.head)
     if segment.tail:
-        clip_data["segment_tail"] = int(segment.tail)
+        # `infinite` can be also returned
+        if isinstance(segment.tail, str):
+            clip_data["segment_tail"] = 0
+        else:
+            clip_data["segment_tail"] = int(segment.tail)

     # add all available shot tokens
     shot_tokens = _get_shot_tokens_values(segment, [
diff --git a/openpype/hosts/flame/api/pipeline.py b/openpype/hosts/flame/api/pipeline.py
index ca3f38c1bc..da44be1b15 100644
--- a/openpype/hosts/flame/api/pipeline.py
+++ b/openpype/hosts/flame/api/pipeline.py
@@ -3,14 +3,14 @@
 Basic avalon integration
 """
 import os
 import contextlib
-from avalon import api as avalon
 from pyblish import api as pyblish
 from openpype.api import Logger
 from openpype.pipeline import (
-    LegacyCreator,
     register_loader_plugin_path,
+    register_creator_plugin_path,
     deregister_loader_plugin_path,
+    deregister_creator_plugin_path,
     AVALON_CONTAINER_ID,
 )
 from .lib import (
@@ -37,7 +37,7 @@ def install():
     pyblish.register_host("flame")
     pyblish.register_plugin_path(PUBLISH_PATH)
     register_loader_plugin_path(LOAD_PATH)
-    avalon.register_plugin_path(LegacyCreator, CREATE_PATH)
+    register_creator_plugin_path(CREATE_PATH)
     log.info("OpenPype Flame plug-ins registred ...")

     # register callback for switching publishable
@@ -52,7 +52,7 @@ def uninstall():
     log.info("Deregistering Flame plug-ins..")
     pyblish.deregister_plugin_path(PUBLISH_PATH)
     deregister_loader_plugin_path(LOAD_PATH)
-    avalon.deregister_plugin_path(LegacyCreator, CREATE_PATH)
+    deregister_creator_plugin_path(CREATE_PATH)

     # register callback for switching publishable
     pyblish.deregister_callback("instanceToggled", on_pyblish_instance_toggled)
diff --git a/openpype/hosts/flame/api/scripts/wiretap_com.py b/openpype/hosts/flame/api/scripts/wiretap_com.py
index ee906c2608..54993d34eb 100644
--- a/openpype/hosts/flame/api/scripts/wiretap_com.py
+++ b/openpype/hosts/flame/api/scripts/wiretap_com.py
@@ -422,7 +422,13 @@ class WireTapCom(object):
         color_policy = color_policy or "Legacy"

         # check if the colour policy in custom dir
-        if not os.path.exists(color_policy):
+        if "/" in color_policy:
+            # if unlikelly full path was used make it redundant
+            color_policy = color_policy.replace("/syncolor/policies/", "")
+            # expecting input is `Shared/NameOfPolicy`
+            color_policy = "/syncolor/policies/{}".format(
+                color_policy)
+        else:
             color_policy = "/syncolor/policies/Autodesk/{}".format(
                 color_policy)
diff --git a/openpype/hosts/flame/plugins/publish/collect_timeline_instances.py b/openpype/hosts/flame/plugins/publish/collect_timeline_instances.py
index 70340ad7a2..2482abd9c7 100644
--- a/openpype/hosts/flame/plugins/publish/collect_timeline_instances.py
+++ b/openpype/hosts/flame/plugins/publish/collect_timeline_instances.py
@@ -34,119 +34,125 @@ class CollectTimelineInstances(pyblish.api.ContextPlugin):
     def process(self, context):
         project = context.data["flameProject"]
         sequence = context.data["flameSequence"]
+        selected_segments = context.data["flameSelectedSegments"]
+        self.log.debug("__ selected_segments: {}".format(selected_segments))
+
         self.otio_timeline = context.data["otioTimeline"]
         self.clips_in_reels = opfapi.get_clips_in_reels(project)
         self.fps = context.data["fps"]

         # process all sellected
-        with opfapi.maintained_segment_selection(sequence) as segments:
-            for segment in segments:
-                comment_attributes = self._get_comment_attributes(segment)
-                self.log.debug("_ comment_attributes: {}".format(
-                    pformat(comment_attributes)))
+        for segment in selected_segments:
+            # get openpype tag data
+            marker_data = opfapi.get_segment_data_marker(segment)
+            self.log.debug("__ marker_data: {}".format(
+                pformat(marker_data)))

-                clip_data = opfapi.get_segment_attributes(segment)
-                clip_name = clip_data["segment_name"]
-                self.log.debug("clip_name: {}".format(clip_name))
+            if not marker_data:
+                continue

-                # get openpype tag data
-                marker_data = opfapi.get_segment_data_marker(segment)
-                self.log.debug("__ marker_data: {}".format(
-                    pformat(marker_data)))
+            if marker_data.get("id") != "pyblish.avalon.instance":
+                continue

-                if not marker_data:
-                    continue
+            self.log.debug("__ segment.name: {}".format(
+                segment.name
+            ))

-                if marker_data.get("id") != "pyblish.avalon.instance":
-                    continue
+            comment_attributes = self._get_comment_attributes(segment)

-                # get file path
-                file_path = clip_data["fpath"]
+            self.log.debug("_ comment_attributes: {}".format(
+                pformat(comment_attributes)))

-                # get source clip
-                source_clip = self._get_reel_clip(file_path)
+            clip_data = opfapi.get_segment_attributes(segment)
+            clip_name = clip_data["segment_name"]
+            self.log.debug("clip_name: {}".format(clip_name))

-                first_frame = opfapi.get_frame_from_filename(file_path) or 0
+            # get file path
+            file_path = clip_data["fpath"]

-                head, tail = self._get_head_tail(clip_data, first_frame)
+            # get source clip
+            source_clip = self._get_reel_clip(file_path)

-                # solve handles length
-                marker_data["handleStart"] = min(
-                    marker_data["handleStart"], head)
-                marker_data["handleEnd"] = min(
-                    marker_data["handleEnd"], tail)
+            first_frame = opfapi.get_frame_from_filename(file_path) or 0

-                with_audio = bool(marker_data.pop("audio"))
+            head, tail = self._get_head_tail(clip_data, first_frame)

-                # add marker data to instance data
-                inst_data = dict(marker_data.items())
+            # solve handles length
+            marker_data["handleStart"] = min(
+                marker_data["handleStart"], head)
+            marker_data["handleEnd"] = min(
+                marker_data["handleEnd"], tail)

-                asset = marker_data["asset"]
-                subset = marker_data["subset"]
+            with_audio = bool(marker_data.pop("audio"))

-                # insert family into families
-                family = marker_data["family"]
-                families = [str(f) for f in marker_data["families"]]
-                families.insert(0, str(family))
+            # add marker data to instance data
+            inst_data = dict(marker_data.items())

-                # form label
-                label = asset
-                if asset != clip_name:
-                    label += " ({})".format(clip_name)
-                label += " {}".format(subset)
-                label += " {}".format("[" + ", ".join(families) + "]")
+            asset = marker_data["asset"]
+            subset = marker_data["subset"]

-                inst_data.update({
-                    "name": "{}_{}".format(asset, subset),
-                    "label": label,
-                    "asset": asset,
-                    "item": segment,
-                    "families": families,
-                    "publish": marker_data["publish"],
-                    "fps": self.fps,
-                    "flameSourceClip": source_clip,
-                    "sourceFirstFrame": int(first_frame),
-                    "path": file_path
-                })
+            # insert family into families
+            family = marker_data["family"]
+            families = [str(f) for f in marker_data["families"]]
+            families.insert(0, str(family))

-                # get otio clip data
-                otio_data = self._get_otio_clip_instance_data(clip_data) or {}
-                self.log.debug("__ otio_data: {}".format(pformat(otio_data)))
+            # form label
+            label = asset
+            if asset != clip_name:
+                label += " ({})".format(clip_name)
+            label += " {} [{}]".format(subset, ", ".join(families))

-                # add to instance data
-                inst_data.update(otio_data)
-                self.log.debug("__ inst_data: {}".format(pformat(inst_data)))
+            inst_data.update({
+                "name": "{}_{}".format(asset, subset),
+                "label": label,
+                "asset": asset,
+                "item": segment,
+                "families": families,
+                "publish": marker_data["publish"],
+                "fps": self.fps,
+                "flameSourceClip": source_clip,
+                "sourceFirstFrame": int(first_frame),
+                "path": file_path
+            })

-                # add resolution
-                self._get_resolution_to_data(inst_data, context)
+            # get otio clip data
+            otio_data = self._get_otio_clip_instance_data(clip_data) or {}
+            self.log.debug("__ otio_data: {}".format(pformat(otio_data)))

-                # add comment attributes if any
-                inst_data.update(comment_attributes)
+            # add to instance data
+            inst_data.update(otio_data)
+            self.log.debug("__ inst_data: {}".format(pformat(inst_data)))

-                # create instance
-                instance = context.create_instance(**inst_data)
+            # add resolution
+            self._get_resolution_to_data(inst_data, context)

-                # add colorspace data
-                instance.data.update({
-                    "versionData": {
-                        "colorspace": clip_data["colour_space"],
-                    }
-                })
+            # add comment attributes if any
+            inst_data.update(comment_attributes)

-                # create shot instance for shot attributes create/update
-                self._create_shot_instance(context, clip_name, **inst_data)
+            # create instance
+            instance = context.create_instance(**inst_data)

-                self.log.info("Creating instance: {}".format(instance))
-                self.log.info(
-                    "_ instance.data: {}".format(pformat(instance.data)))
+            # add colorspace data
+            instance.data.update({
+                "versionData": {
+                    "colorspace": clip_data["colour_space"],
+                }
+            })

-                if not with_audio:
-                    continue
+            # create shot instance for shot attributes create/update
+            self._create_shot_instance(context, clip_name, **inst_data)

-                # add audioReview attribute to plate instance data
-                # if reviewTrack is on
-                if marker_data.get("reviewTrack") is not None:
-                    instance.data["reviewAudio"] = True
+            self.log.info("Creating instance: {}".format(instance))
+            self.log.info(
+                "_ instance.data: {}".format(pformat(instance.data)))
+
+            if not with_audio:
+                continue
+
+            # add audioReview attribute to plate instance data
+            # if reviewTrack is on
+            if marker_data.get("reviewTrack") is not None:
+                instance.data["reviewAudio"] = True

     def _get_comment_attributes(self, segment):
         comment = segment.comment.get_value()
@@ -188,7 +194,7 @@ class CollectTimelineInstances(pyblish.api.ContextPlugin):
         # get pattern defined by type
         pattern = TXT_PATERN
-        if a_type in ("number" , "float"):
+        if a_type in ("number", "float"):
             pattern = NUM_PATERN

         res_goup = pattern.findall(value)
diff --git a/openpype/hosts/flame/plugins/publish/collect_timeline_otio.py b/openpype/hosts/flame/plugins/publish/collect_timeline_otio.py
index faa5be9d68..c6aeae7730 100644
--- a/openpype/hosts/flame/plugins/publish/collect_timeline_otio.py
+++ b/openpype/hosts/flame/plugins/publish/collect_timeline_otio.py
@@ -31,27 +31,28 @@ class CollecTimelineOTIO(pyblish.api.ContextPlugin):
         )

         # adding otio timeline to context
-        with opfapi.maintained_segment_selection(sequence):
+        with opfapi.maintained_segment_selection(sequence) as selected_seg:
             otio_timeline = flame_export.create_otio_timeline(sequence)

-            instance_data = {
-                "name": subset_name,
-                "asset": asset_doc["name"],
-                "subset": subset_name,
-                "family": "workfile"
-            }
+        instance_data = {
+            "name": subset_name,
+            "asset": asset_doc["name"],
+            "subset": subset_name,
+            "family": "workfile"
+        }

-            # create instance with workfile
-            instance = context.create_instance(**instance_data)
-            self.log.info("Creating instance: {}".format(instance))
+        # create instance with workfile
+        instance = context.create_instance(**instance_data)
+        self.log.info("Creating instance: {}".format(instance))

-            # update context with main project attributes
-            context.data.update({
-                "flameProject": project,
-                "flameSequence": sequence,
-                "otioTimeline": otio_timeline,
-                "currentFile": "Flame/{}/{}".format(
-                    project.name, sequence.name
-                ),
-                "fps": float(str(sequence.frame_rate)[:-4])
-            })
+        # update context with main project attributes
+        context.data.update({
+            "flameProject": project,
+            "flameSequence": sequence,
+            "otioTimeline": otio_timeline,
+            "currentFile": "Flame/{}/{}".format(
+                project.name, sequence.name
+            ),
+            "flameSelectedSegments": selected_seg,
+            "fps": float(str(sequence.frame_rate)[:-4])
+        })
diff --git a/openpype/hosts/fusion/api/pipeline.py b/openpype/hosts/fusion/api/pipeline.py
index c9cd76770a..0867b464d5 100644
--- a/openpype/hosts/fusion/api/pipeline.py
+++ b/openpype/hosts/fusion/api/pipeline.py
@@ -7,14 +7,14 @@
 import logging
 import contextlib

 import pyblish.api
-import avalon.api

 from openpype.api import Logger
 from openpype.pipeline import (
-    LegacyCreator,
     register_loader_plugin_path,
-    deregister_loader_plugin_path,
+    register_creator_plugin_path,
     register_inventory_action_path,
+    deregister_loader_plugin_path,
+    deregister_creator_plugin_path,
     deregister_inventory_action_path,
     AVALON_CONTAINER_ID,
 )
@@ -70,7 +70,7 @@ def install():
     log.info("Registering Fusion plug-ins..")

     register_loader_plugin_path(LOAD_PATH)
-    avalon.api.register_plugin_path(LegacyCreator, CREATE_PATH)
+    register_creator_plugin_path(CREATE_PATH)
     register_inventory_action_path(INVENTORY_PATH)

     pyblish.api.register_callback(
@@ -94,7 +94,7 @@ def uninstall():
     log.info("Deregistering Fusion plug-ins..")

     deregister_loader_plugin_path(LOAD_PATH)
-    avalon.api.deregister_plugin_path(LegacyCreator, CREATE_PATH)
+    deregister_creator_plugin_path(CREATE_PATH)
     deregister_inventory_action_path(INVENTORY_PATH)

     pyblish.api.deregister_callback(
diff --git a/openpype/hosts/fusion/plugins/create/create_exr_saver.py b/openpype/hosts/fusion/plugins/create/create_exr_saver.py
index ff8bdb21ef..8bab5ee9b1 100644
--- a/openpype/hosts/fusion/plugins/create/create_exr_saver.py
+++ b/openpype/hosts/fusion/plugins/create/create_exr_saver.py
@@ -1,13 +1,13 @@
 import os

-from openpype.pipeline import create
+from openpype.pipeline import LegacyCreator
 from openpype.hosts.fusion.api import (
     get_current_comp,
     comp_lock_and_undo_chunk
 )


-class CreateOpenEXRSaver(create.LegacyCreator):
+class CreateOpenEXRSaver(LegacyCreator):

     name = "openexrDefault"
     label = "Create OpenEXR Saver"
diff --git a/openpype/hosts/harmony/api/pipeline.py b/openpype/hosts/harmony/api/pipeline.py
index 420e9720db..88f11dd16f 100644
--- a/openpype/hosts/harmony/api/pipeline.py
+++ b/openpype/hosts/harmony/api/pipeline.py
@@ -6,14 +6,14 @@
 from bson.objectid import ObjectId

 import pyblish.api
 from avalon import io
-import avalon.api

 from openpype import lib
 from
openpype.lib import register_event_callback from openpype.pipeline import ( - LegacyCreator, register_loader_plugin_path, + register_creator_plugin_path, deregister_loader_plugin_path, + deregister_creator_plugin_path, AVALON_CONTAINER_ID, ) import openpype.hosts.harmony @@ -108,9 +108,8 @@ def check_inventory(): if not lib.any_outdated(): return - host = avalon.api.registered_host() outdated_containers = [] - for container in host.ls(): + for container in ls(): representation = container['representation'] representation_doc = io.find_one( { @@ -186,7 +185,7 @@ def install(): pyblish.api.register_host("harmony") pyblish.api.register_plugin_path(PUBLISH_PATH) register_loader_plugin_path(LOAD_PATH) - avalon.api.register_plugin_path(LegacyCreator, CREATE_PATH) + register_creator_plugin_path(CREATE_PATH) log.info(PUBLISH_PATH) # Register callbacks. @@ -200,7 +199,7 @@ def install(): def uninstall(): pyblish.api.deregister_plugin_path(PUBLISH_PATH) deregister_loader_plugin_path(LOAD_PATH) - avalon.api.deregister_plugin_path(LegacyCreator, CREATE_PATH) + deregister_creator_plugin_path(CREATE_PATH) def on_pyblish_instance_toggled(instance, old_value, new_value): diff --git a/openpype/hosts/hiero/__init__.py b/openpype/hosts/hiero/__init__.py index 2d674b3fa7..d2ac82391b 100644 --- a/openpype/hosts/hiero/__init__.py +++ b/openpype/hosts/hiero/__init__.py @@ -10,7 +10,7 @@ def add_implementation_envs(env, _app): ] old_hiero_path = env.get("HIERO_PLUGIN_PATH") or "" for path in old_hiero_path.split(os.pathsep): - if not path or not os.path.exists(path): + if not path: continue norm_path = os.path.normpath(path) diff --git a/openpype/hosts/hiero/api/pipeline.py b/openpype/hosts/hiero/api/pipeline.py index 0d3c8914ce..b334102129 100644 --- a/openpype/hosts/hiero/api/pipeline.py +++ b/openpype/hosts/hiero/api/pipeline.py @@ -5,13 +5,13 @@ import os import contextlib from collections import OrderedDict -from avalon import api as avalon from avalon import schema from pyblish 
import api as pyblish from openpype.api import Logger from openpype.pipeline import ( - LegacyCreator, + register_creator_plugin_path, register_loader_plugin_path, + deregister_creator_plugin_path, deregister_loader_plugin_path, AVALON_CONTAINER_ID, ) @@ -50,7 +50,7 @@ def install(): pyblish.register_host("hiero") pyblish.register_plugin_path(PUBLISH_PATH) register_loader_plugin_path(LOAD_PATH) - avalon.register_plugin_path(LegacyCreator, CREATE_PATH) + register_creator_plugin_path(CREATE_PATH) # register callback for switching publishable pyblish.register_callback("instanceToggled", on_pyblish_instance_toggled) @@ -71,7 +71,7 @@ def uninstall(): pyblish.deregister_host("hiero") pyblish.deregister_plugin_path(PUBLISH_PATH) deregister_loader_plugin_path(LOAD_PATH) - avalon.deregister_plugin_path(LegacyCreator, CREATE_PATH) + deregister_creator_plugin_path(CREATE_PATH) # register callback for switching publishable pyblish.deregister_callback("instanceToggled", on_pyblish_instance_toggled) diff --git a/openpype/hosts/houdini/__init__.py b/openpype/hosts/houdini/__init__.py index 8c12d13c81..a3ee38db8d 100644 --- a/openpype/hosts/houdini/__init__.py +++ b/openpype/hosts/houdini/__init__.py @@ -15,7 +15,7 @@ def add_implementation_envs(env, _app): old_houdini_menu_path = env.get("HOUDINI_MENU_PATH") or "" for path in old_houdini_path.split(os.pathsep): - if not path or not os.path.exists(path): + if not path: continue norm_path = os.path.normpath(path) @@ -23,7 +23,7 @@ def add_implementation_envs(env, _app): new_houdini_path.append(norm_path) for path in old_houdini_menu_path.split(os.pathsep): - if not path or not os.path.exists(path): + if not path: continue norm_path = os.path.normpath(path) diff --git a/openpype/hosts/houdini/api/pipeline.py b/openpype/hosts/houdini/api/pipeline.py index d079c9ea81..8e093a89bc 100644 --- a/openpype/hosts/houdini/api/pipeline.py +++ b/openpype/hosts/houdini/api/pipeline.py @@ -11,7 +11,7 @@ import avalon.api from avalon.lib import 
find_submodule from openpype.pipeline import ( - LegacyCreator, + register_creator_plugin_path, register_loader_plugin_path, AVALON_CONTAINER_ID, ) @@ -54,7 +54,7 @@ def install(): pyblish.api.register_plugin_path(PUBLISH_PATH) register_loader_plugin_path(LOAD_PATH) - avalon.api.register_plugin_path(LegacyCreator, CREATE_PATH) + register_creator_plugin_path(CREATE_PATH) log.info("Installing callbacks ... ") # register_event_callback("init", on_init) diff --git a/openpype/hosts/maya/__init__.py b/openpype/hosts/maya/__init__.py index b7d26a7818..c1c82c62e5 100644 --- a/openpype/hosts/maya/__init__.py +++ b/openpype/hosts/maya/__init__.py @@ -9,7 +9,7 @@ def add_implementation_envs(env, _app): ] old_python_path = env.get("PYTHONPATH") or "" for path in old_python_path.split(os.pathsep): - if not path or not os.path.exists(path): + if not path: continue norm_path = os.path.normpath(path) diff --git a/openpype/hosts/maya/api/lib.py b/openpype/hosts/maya/api/lib.py index 376c033d46..92fc5133a9 100644 --- a/openpype/hosts/maya/api/lib.py +++ b/openpype/hosts/maya/api/lib.py @@ -1511,7 +1511,7 @@ def get_container_members(container): members = cmds.sets(container, query=True) or [] members = cmds.ls(members, long=True, objectsOnly=True) or [] - members = set(members) + all_members = set(members) # Include any referenced nodes from any reference in the container # This is required since we've removed adding ALL nodes of a reference @@ -1530,9 +1530,9 @@ def get_container_members(container): reference_members = cmds.ls(reference_members, long=True, objectsOnly=True) - members.update(reference_members) + all_members.update(reference_members) - return members + return list(all_members) # region LOOKDEV diff --git a/openpype/hosts/maya/api/pipeline.py b/openpype/hosts/maya/api/pipeline.py index bb61128178..a8834d1ea3 100644 --- a/openpype/hosts/maya/api/pipeline.py +++ b/openpype/hosts/maya/api/pipeline.py @@ -23,8 +23,10 @@ from openpype.pipeline import ( LegacyCreator, 
register_loader_plugin_path, register_inventory_action_path, + register_creator_plugin_path, deregister_loader_plugin_path, deregister_inventory_action_path, + deregister_creator_plugin_path, AVALON_CONTAINER_ID, ) from openpype.hosts.maya.lib import copy_workspace_mel @@ -60,7 +62,7 @@ def install(): pyblish.api.register_host("maya") register_loader_plugin_path(LOAD_PATH) - avalon.api.register_plugin_path(LegacyCreator, CREATE_PATH) + register_creator_plugin_path(CREATE_PATH) register_inventory_action_path(INVENTORY_PATH) log.info(PUBLISH_PATH) @@ -189,7 +191,7 @@ def uninstall(): pyblish.api.deregister_host("maya") deregister_loader_plugin_path(LOAD_PATH) - avalon.api.deregister_plugin_path(LegacyCreator, CREATE_PATH) + deregister_creator_plugin_path(CREATE_PATH) deregister_inventory_action_path(INVENTORY_PATH) menu.uninstall() diff --git a/openpype/hosts/maya/plugins/create/create_look.py b/openpype/hosts/maya/plugins/create/create_look.py index 56e2640919..44e439fe1f 100644 --- a/openpype/hosts/maya/plugins/create/create_look.py +++ b/openpype/hosts/maya/plugins/create/create_look.py @@ -22,4 +22,6 @@ class CreateLook(plugin.Creator): self.data["maketx"] = self.make_tx # Enable users to force a copy. 
+ # - on Windows, "forceCopy" is always changed to `True` because of + # the Windows implementation of hardlinks self.data["forceCopy"] = False diff --git a/openpype/hosts/maya/plugins/create/create_multiverse_usd.py b/openpype/hosts/maya/plugins/create/create_multiverse_usd.py new file mode 100644 index 0000000000..b2266e5a57 --- /dev/null +++ b/openpype/hosts/maya/plugins/create/create_multiverse_usd.py @@ -0,0 +1,51 @@ +from openpype.hosts.maya.api import plugin, lib + + +class CreateMultiverseUsd(plugin.Creator): + """Multiverse USD data""" + + name = "usdMain" + label = "Multiverse USD" + family = "usd" + icon = "cubes" + + def __init__(self, *args, **kwargs): + super(CreateMultiverseUsd, self).__init__(*args, **kwargs) + + # Add animation data first, since it maintains order. + self.data.update(lib.collect_animation_data(True)) + + self.data["stripNamespaces"] = False + self.data["mergeTransformAndShape"] = False + self.data["writeAncestors"] = True + self.data["flattenParentXforms"] = False + self.data["writeSparseOverrides"] = False + self.data["useMetaPrimPath"] = False + self.data["customRootPath"] = '' + self.data["customAttributes"] = '' + self.data["nodeTypesToIgnore"] = '' + self.data["writeMeshes"] = True + self.data["writeCurves"] = True + self.data["writeParticles"] = True + self.data["writeCameras"] = False + self.data["writeLights"] = False + self.data["writeJoints"] = False + self.data["writeCollections"] = False + self.data["writePositions"] = True + self.data["writeNormals"] = True + self.data["writeUVs"] = True + self.data["writeColorSets"] = False + self.data["writeTangents"] = False + self.data["writeRefPositions"] = False + self.data["writeBlendShapes"] = False + self.data["writeDisplayColor"] = False + self.data["writeSkinWeights"] = False + self.data["writeMaterialAssignment"] = False + self.data["writeHardwareShader"] = False + self.data["writeShadingNetworks"] = False + self.data["writeTransformMatrix"] = True + 
self.data["writeUsdAttributes"] = False + self.data["timeVaryingTopology"] = False + self.data["customMaterialNamespace"] = '' + self.data["numTimeSamples"] = 1 + self.data["timeSamplesSpan"] = 0.0 diff --git a/openpype/hosts/maya/plugins/create/create_multiverse_usd_comp.py b/openpype/hosts/maya/plugins/create/create_multiverse_usd_comp.py new file mode 100644 index 0000000000..77b808c459 --- /dev/null +++ b/openpype/hosts/maya/plugins/create/create_multiverse_usd_comp.py @@ -0,0 +1,23 @@ +from openpype.hosts.maya.api import plugin, lib + + +class CreateMultiverseUsdComp(plugin.Creator): + """Create Multiverse USD Composition""" + + name = "usdCompositionMain" + label = "Multiverse USD Composition" + family = "usdComposition" + icon = "cubes" + + def __init__(self, *args, **kwargs): + super(CreateMultiverseUsdComp, self).__init__(*args, **kwargs) + + # Add animation data first, since it maintains order. + self.data.update(lib.collect_animation_data(True)) + + self.data["stripNamespaces"] = False + self.data["mergeTransformAndShape"] = False + self.data["flattenContent"] = False + self.data["writePendingOverrides"] = False + self.data["numTimeSamples"] = 1 + self.data["timeSamplesSpan"] = 0.0 diff --git a/openpype/hosts/maya/plugins/create/create_multiverse_usd_over.py b/openpype/hosts/maya/plugins/create/create_multiverse_usd_over.py new file mode 100644 index 0000000000..bb82ab2039 --- /dev/null +++ b/openpype/hosts/maya/plugins/create/create_multiverse_usd_over.py @@ -0,0 +1,28 @@ +from openpype.hosts.maya.api import plugin, lib + + +class CreateMultiverseUsdOver(plugin.Creator): + """Multiverse USD data""" + + name = "usdOverrideMain" + label = "Multiverse USD Override" + family = "usdOverride" + icon = "cubes" + + def __init__(self, *args, **kwargs): + super(CreateMultiverseUsdOver, self).__init__(*args, **kwargs) + + # Add animation data first, since it maintains order. 
+ self.data.update(lib.collect_animation_data(True)) + + self.data["writeAll"] = False + self.data["writeTransforms"] = True + self.data["writeVisibility"] = True + self.data["writeAttributes"] = True + self.data["writeMaterials"] = True + self.data["writeVariants"] = True + self.data["writeVariantsDefinition"] = True + self.data["writeActiveState"] = True + self.data["writeNamespaces"] = False + self.data["numTimeSamples"] = 1 + self.data["timeSamplesSpan"] = 0.0 diff --git a/openpype/hosts/maya/plugins/create/create_review.py b/openpype/hosts/maya/plugins/create/create_review.py index 14a21d28ca..fbf3399f61 100644 --- a/openpype/hosts/maya/plugins/create/create_review.py +++ b/openpype/hosts/maya/plugins/create/create_review.py @@ -15,6 +15,14 @@ class CreateReview(plugin.Creator): keepImages = False isolate = False imagePlane = True + transparency = [ + "preset", + "simple", + "object sorting", + "weighted average", + "depth peeling", + "alpha cut" + ] def __init__(self, *args, **kwargs): super(CreateReview, self).__init__(*args, **kwargs) @@ -28,5 +36,6 @@ class CreateReview(plugin.Creator): data["isolate"] = self.isolate data["keepImages"] = self.keepImages data["imagePlane"] = self.imagePlane + data["transparency"] = self.transparency self.data = data diff --git a/openpype/hosts/maya/plugins/load/load_multiverse_usd.py b/openpype/hosts/maya/plugins/load/load_multiverse_usd.py new file mode 100644 index 0000000000..c03f2c5d92 --- /dev/null +++ b/openpype/hosts/maya/plugins/load/load_multiverse_usd.py @@ -0,0 +1,102 @@ +# -*- coding: utf-8 -*- +import maya.cmds as cmds + +from openpype.pipeline import ( + load, + get_representation_path +) +from openpype.hosts.maya.api.lib import ( + maintained_selection, + namespaced, + unique_namespace +) +from openpype.hosts.maya.api.pipeline import containerise + + +class MultiverseUsdLoader(load.LoaderPlugin): + """Load the USD by Multiverse""" + + families = ["model", "usd", "usdComposition", "usdOverride", + 
"pointcache", "animation"] + representations = ["usd", "usda", "usdc", "usdz", "abc"] + + label = "Read USD by Multiverse" + order = -10 + icon = "code-fork" + color = "orange" + + def load(self, context, name=None, namespace=None, options=None): + + asset = context['asset']['name'] + namespace = namespace or unique_namespace( + asset + "_", + prefix="_" if asset[0].isdigit() else "", + suffix="_", + ) + + # Create the shape + cmds.loadPlugin("MultiverseForMaya", quiet=True) + + shape = None + transform = None + with maintained_selection(): + cmds.namespace(addNamespace=namespace) + with namespaced(namespace, new=False): + import multiverse + shape = multiverse.CreateUsdCompound(self.fname) + transform = cmds.listRelatives( + shape, parent=True, fullPath=True)[0] + + # Lock the shape node so the user cannot delete it. + cmds.lockNode(shape, lock=True) + + nodes = [transform, shape] + self[:] = nodes + + return containerise( + name=name, + namespace=namespace, + nodes=nodes, + context=context, + loader=self.__class__.__name__) + + def update(self, container, representation): + # type: (dict, dict) -> None + """Update container with specified representation.""" + node = container['objectName'] + assert cmds.objExists(node), "Missing container" + + members = cmds.sets(node, query=True) or [] + shapes = cmds.ls(members, type="mvUsdCompoundShape") + assert shapes, "Cannot find mvUsdCompoundShape in container" + + path = get_representation_path(representation) + + import multiverse + for shape in shapes: + multiverse.SetUsdCompoundAssetPaths(shape, [path]) + + cmds.setAttr("{}.representation".format(node), + str(representation["_id"]), + type="string") + + def switch(self, container, representation): + self.update(container, representation) + + def remove(self, container): + # type: (dict) -> None + """Remove loaded container.""" + # Delete container and its contents + if cmds.objExists(container['objectName']): + members = cmds.sets(container['objectName'], query=True) 
or [] + cmds.delete([container['objectName']] + members) + + # Remove the namespace, if empty + namespace = container['namespace'] + if cmds.namespace(exists=namespace): + members = cmds.namespaceInfo(namespace, listNamespace=True) + if not members: + cmds.namespace(removeNamespace=namespace) + else: + self.log.warning("Namespace not deleted because it " + "still has members: %s", namespace) diff --git a/openpype/hosts/maya/plugins/publish/extract_look.py b/openpype/hosts/maya/plugins/publish/extract_look.py index a8893072d0..6fcc308f78 100644 --- a/openpype/hosts/maya/plugins/publish/extract_look.py +++ b/openpype/hosts/maya/plugins/publish/extract_look.py @@ -4,6 +4,7 @@ import os import sys import json import tempfile +import platform import contextlib import subprocess from collections import OrderedDict @@ -334,7 +335,14 @@ class ExtractLook(openpype.api.Extractor): transfers = [] hardlinks = [] hashes = {} - force_copy = instance.data.get("forceCopy", False) + # Temporary fix to NOT create hardlinks on windows machines + if platform.system().lower() == "windows": + self.log.info( + "Forcing copy instead of hardlink due to issues on Windows..." 
+ ) + force_copy = True + else: + force_copy = instance.data.get("forceCopy", False) for filepath in files_metadata: diff --git a/openpype/hosts/maya/plugins/publish/extract_multiverse_usd.py b/openpype/hosts/maya/plugins/publish/extract_multiverse_usd.py new file mode 100644 index 0000000000..4e4efdc32c --- /dev/null +++ b/openpype/hosts/maya/plugins/publish/extract_multiverse_usd.py @@ -0,0 +1,210 @@ +import os +import six + +from maya import cmds + +import openpype.api +from openpype.hosts.maya.api.lib import maintained_selection + + +class ExtractMultiverseUsd(openpype.api.Extractor): + """Extractor for USD by Multiverse.""" + + label = "Extract Multiverse USD" + hosts = ["maya"] + families = ["usd"] + + @property + def options(self): + """Overridable options for Multiverse USD Export + + Given in the following format + - {NAME: EXPECTED TYPE} + + If the overridden option's type does not match, + the option is not included and a warning is logged. + + """ + + return { + "stripNamespaces": bool, + "mergeTransformAndShape": bool, + "writeAncestors": bool, + "flattenParentXforms": bool, + "writeSparseOverrides": bool, + "useMetaPrimPath": bool, + "customRootPath": str, + "customAttributes": str, + "nodeTypesToIgnore": str, + "writeMeshes": bool, + "writeCurves": bool, + "writeParticles": bool, + "writeCameras": bool, + "writeLights": bool, + "writeJoints": bool, + "writeCollections": bool, + "writePositions": bool, + "writeNormals": bool, + "writeUVs": bool, + "writeColorSets": bool, + "writeTangents": bool, + "writeRefPositions": bool, + "writeBlendShapes": bool, + "writeDisplayColor": bool, + "writeSkinWeights": bool, + "writeMaterialAssignment": bool, + "writeHardwareShader": bool, + "writeShadingNetworks": bool, + "writeTransformMatrix": bool, + "writeUsdAttributes": bool, + "timeVaryingTopology": bool, + "customMaterialNamespace": str, + "numTimeSamples": int, + "timeSamplesSpan": float + } + + @property + def default_options(self): + """The default options 
for Multiverse USD extraction.""" + + return { + "stripNamespaces": False, + "mergeTransformAndShape": False, + "writeAncestors": True, + "flattenParentXforms": False, + "writeSparseOverrides": False, + "useMetaPrimPath": False, + "customRootPath": str(), + "customAttributes": str(), + "nodeTypesToIgnore": str(), + "writeMeshes": True, + "writeCurves": True, + "writeParticles": True, + "writeCameras": False, + "writeLights": False, + "writeJoints": False, + "writeCollections": False, + "writePositions": True, + "writeNormals": True, + "writeUVs": True, + "writeColorSets": False, + "writeTangents": False, + "writeRefPositions": False, + "writeBlendShapes": False, + "writeDisplayColor": False, + "writeSkinWeights": False, + "writeMaterialAssignment": False, + "writeHardwareShader": False, + "writeShadingNetworks": False, + "writeTransformMatrix": True, + "writeUsdAttributes": False, + "timeVaryingTopology": False, + "customMaterialNamespace": str(), + "numTimeSamples": 1, + "timeSamplesSpan": 0.0 + } + + def parse_overrides(self, instance, options): + """Inspect data of instance to determine overridden options""" + + for key in instance.data: + if key not in self.options: + continue + + # Ensure the data is of correct type + value = instance.data[key] + if isinstance(value, six.text_type): + value = str(value) + if not isinstance(value, self.options[key]): + self.log.warning( + "Overridden attribute {key} was of " + "the wrong type: {invalid_type} " + "- should have been {valid_type}".format( + key=key, + invalid_type=type(value).__name__, + valid_type=self.options[key].__name__)) + continue + + options[key] = value + + return options + + def process(self, instance): + # Load plugin firstly + cmds.loadPlugin("MultiverseForMaya", quiet=True) + + # Define output file path + staging_dir = self.staging_dir(instance) + file_name = "{}.usd".format(instance.name) + file_path = os.path.join(staging_dir, file_name) + file_path = file_path.replace('\\', '/') + + # Parse export 
options + options = self.default_options + options = self.parse_overrides(instance, options) + self.log.info("Export options: {0}".format(options)) + + # Perform extraction + self.log.info("Performing extraction ...") + + with maintained_selection(): + members = instance.data("setMembers") + members = cmds.ls(members, + dag=True, + shapes=True, + type=("mesh"), + noIntermediate=True, + long=True) + self.log.info('Collected object {}'.format(members)) + + import multiverse + + time_opts = None + frame_start = instance.data['frameStart'] + frame_end = instance.data['frameEnd'] + handle_start = instance.data['handleStart'] + handle_end = instance.data['handleEnd'] + step = instance.data['step'] + fps = instance.data['fps'] + if frame_end != frame_start: + time_opts = multiverse.TimeOptions() + + time_opts.writeTimeRange = True + time_opts.frameRange = ( + frame_start - handle_start, frame_end + handle_end) + time_opts.frameIncrement = step + time_opts.numTimeSamples = instance.data["numTimeSamples"] + time_opts.timeSamplesSpan = instance.data["timeSamplesSpan"] + time_opts.framePerSecond = fps + + asset_write_opts = multiverse.AssetWriteOptions(time_opts) + options_discard_keys = { + 'numTimeSamples', + 'timeSamplesSpan', + 'frameStart', + 'frameEnd', + 'handleStart', + 'handleEnd', + 'step', + 'fps' + } + for key, value in options.items(): + if key in options_discard_keys: + continue + setattr(asset_write_opts, key, value) + + multiverse.WriteAsset(file_path, members, asset_write_opts) + + if "representations" not in instance.data: + instance.data["representations"] = [] + + representation = { + 'name': 'usd', + 'ext': 'usd', + 'files': file_name, + "stagingDir": staging_dir + } + instance.data["representations"].append(representation) + + self.log.info("Extracted instance {} to {}".format( + instance.name, file_path)) diff --git a/openpype/hosts/maya/plugins/publish/extract_multiverse_usd_comp.py b/openpype/hosts/maya/plugins/publish/extract_multiverse_usd_comp.py 
new file mode 100644 index 0000000000..8fccc412e6 --- /dev/null +++ b/openpype/hosts/maya/plugins/publish/extract_multiverse_usd_comp.py @@ -0,0 +1,151 @@ +import os + +from maya import cmds + +import openpype.api +from openpype.hosts.maya.api.lib import maintained_selection + + +class ExtractMultiverseUsdComposition(openpype.api.Extractor): + """Extractor of Multiverse USD Composition.""" + + label = "Extract Multiverse USD Composition" + hosts = ["maya"] + families = ["usdComposition"] + + @property + def options(self): + """Overridable options for Multiverse USD Export + + Given in the following format + - {NAME: EXPECTED TYPE} + + If the overridden option's type does not match, + the option is not included and a warning is logged. + + """ + + return { + "stripNamespaces": bool, + "mergeTransformAndShape": bool, + "flattenContent": bool, + "writePendingOverrides": bool, + "numTimeSamples": int, + "timeSamplesSpan": float + } + + @property + def default_options(self): + """The default options for Multiverse USD extraction.""" + + return { + "stripNamespaces": True, + "mergeTransformAndShape": False, + "flattenContent": False, + "writePendingOverrides": False, + "numTimeSamples": 1, + "timeSamplesSpan": 0.0 + } + + def parse_overrides(self, instance, options): + """Inspect data of instance to determine overridden options""" + + for key in instance.data: + if key not in self.options: + continue + + # Ensure the data is of correct type + value = instance.data[key] + if not isinstance(value, self.options[key]): + self.log.warning( + "Overridden attribute {key} was of " + "the wrong type: {invalid_type} " + "- should have been {valid_type}".format( + key=key, + invalid_type=type(value).__name__, + valid_type=self.options[key].__name__)) + continue + + options[key] = value + + return options + + def process(self, instance): + # Load plugin firstly + cmds.loadPlugin("MultiverseForMaya", quiet=True) + + # Define output file path + staging_dir = self.staging_dir(instance) 
+ file_name = "{}.usd".format(instance.name) + file_path = os.path.join(staging_dir, file_name) + file_path = file_path.replace('\\', '/') + + # Parse export options + options = self.default_options + options = self.parse_overrides(instance, options) + self.log.info("Export options: {0}".format(options)) + + # Perform extraction + self.log.info("Performing extraction ...") + + with maintained_selection(): + members = instance.data("setMembers") + members = cmds.ls(members, + dag=True, + shapes=True, + type="mvUsdCompoundShape", + noIntermediate=True, + long=True) + self.log.info('Collected object {}'.format(members)) + + import multiverse + + time_opts = None + frame_start = instance.data['frameStart'] + frame_end = instance.data['frameEnd'] + handle_start = instance.data['handleStart'] + handle_end = instance.data['handleEnd'] + step = instance.data['step'] + fps = instance.data['fps'] + if frame_end != frame_start: + time_opts = multiverse.TimeOptions() + + time_opts.writeTimeRange = True + time_opts.frameRange = ( + frame_start - handle_start, frame_end + handle_end) + time_opts.frameIncrement = step + time_opts.numTimeSamples = instance.data["numTimeSamples"] + time_opts.timeSamplesSpan = instance.data["timeSamplesSpan"] + time_opts.framePerSecond = fps + + comp_write_opts = multiverse.CompositionWriteOptions() + options_discard_keys = { + 'numTimeSamples', + 'timeSamplesSpan', + 'frameStart', + 'frameEnd', + 'handleStart', + 'handleEnd', + 'step', + 'fps' + } + for key, value in options.items(): + if key in options_discard_keys: + continue + setattr(comp_write_opts, key, value) + + multiverse.WriteComposition(file_path, members, comp_write_opts) + + if "representations" not in instance.data: + instance.data["representations"] = [] + + representation = { + 'name': 'usd', + 'ext': 'usd', + 'files': file_name, + "stagingDir": staging_dir + } + instance.data["representations"].append(representation) + + self.log.info("Extracted instance {} to {}".format( + 
instance.name, file_path)) diff --git a/openpype/hosts/maya/plugins/publish/extract_multiverse_usd_over.py b/openpype/hosts/maya/plugins/publish/extract_multiverse_usd_over.py new file mode 100644 index 0000000000..ce0e8a392a --- /dev/null +++ b/openpype/hosts/maya/plugins/publish/extract_multiverse_usd_over.py @@ -0,0 +1,139 @@ +import os + +import openpype.api +from openpype.hosts.maya.api.lib import maintained_selection + +from maya import cmds + + +class ExtractMultiverseUsdOverride(openpype.api.Extractor): + """Extractor for USD Override by Multiverse.""" + + label = "Extract Multiverse USD Override" + hosts = ["maya"] + families = ["usdOverride"] + + @property + def options(self): + """Overridable options for Multiverse USD Export + + Given in the following format + - {NAME: EXPECTED TYPE} + + If the overridden option's type does not match, + the option is not included and a warning is logged. + + """ + + return { + "writeAll": bool, + "writeTransforms": bool, + "writeVisibility": bool, + "writeAttributes": bool, + "writeMaterials": bool, + "writeVariants": bool, + "writeVariantsDefinition": bool, + "writeActiveState": bool, + "writeNamespaces": bool, + "numTimeSamples": int, + "timeSamplesSpan": float + } + + @property + def default_options(self): + """The default options for Multiverse USD extraction.""" + + return { + "writeAll": False, + "writeTransforms": True, + "writeVisibility": True, + "writeAttributes": True, + "writeMaterials": True, + "writeVariants": True, + "writeVariantsDefinition": True, + "writeActiveState": True, + "writeNamespaces": False, + "numTimeSamples": 1, + "timeSamplesSpan": 0.0 + } + + def process(self, instance): + # Load plugin firstly + cmds.loadPlugin("MultiverseForMaya", quiet=True) + + # Define output file path + staging_dir = self.staging_dir(instance) + file_name = "{}.usda".format(instance.name) + file_path = os.path.join(staging_dir, file_name) + file_path = file_path.replace("\\", "/") + + # Parse export options + 
options = self.default_options + self.log.info("Export options: {0}".format(options)) + + # Perform extraction + self.log.info("Performing extraction ...") + + with maintained_selection(): + members = instance.data("setMembers") + members = cmds.ls(members, + dag=True, + shapes=True, + type="mvUsdCompoundShape", + noIntermediate=True, + long=True) + self.log.info("Collected object {}".format(members)) + + # TODO: Deal with asset, composition, override with options. + import multiverse + + time_opts = None + frame_start = instance.data["frameStart"] + frame_end = instance.data["frameEnd"] + handle_start = instance.data["handleStart"] + handle_end = instance.data["handleEnd"] + step = instance.data["step"] + fps = instance.data["fps"] + if frame_end != frame_start: + time_opts = multiverse.TimeOptions() + + time_opts.writeTimeRange = True + time_opts.frameRange = ( + frame_start - handle_start, frame_end + handle_end) + time_opts.frameIncrement = step + time_opts.numTimeSamples = instance.data["numTimeSamples"] + time_opts.timeSamplesSpan = instance.data["timeSamplesSpan"] + time_opts.framePerSecond = fps + + over_write_opts = multiverse.OverridesWriteOptions(time_opts) + options_discard_keys = { + "numTimeSamples", + "timeSamplesSpan", + "frameStart", + "frameEnd", + "handleStart", + "handleEnd", + "step", + "fps" + } + for key, value in options.items(): + if key in options_discard_keys: + continue + setattr(over_write_opts, key, value) + + for member in members: + multiverse.WriteOverrides(file_path, member, over_write_opts) + + if "representations" not in instance.data: + instance.data["representations"] = [] + + representation = { + "name": "usd", + "ext": "usd", + "files": file_name, + "stagingDir": staging_dir + } + instance.data["representations"].append(representation) + + self.log.info("Extracted instance {} to {}".format( + instance.name, file_path)) diff --git a/openpype/hosts/maya/plugins/publish/extract_playblast.py 
b/openpype/hosts/maya/plugins/publish/extract_playblast.py index b233a57453..bb1ecf279d 100644 --- a/openpype/hosts/maya/plugins/publish/extract_playblast.py +++ b/openpype/hosts/maya/plugins/publish/extract_playblast.py @@ -73,6 +73,11 @@ class ExtractPlayblast(openpype.api.Extractor): pm.currentTime(refreshFrameInt - 1, edit=True) pm.currentTime(refreshFrameInt, edit=True) + # Override transparency if requested. + transparency = instance.data.get("transparency", 0) + if transparency != 0: + preset["viewport2_options"]["transparencyAlgorithm"] = transparency + # Isolate view is requested by having objects in the set besides a # camera. if preset.pop("isolate_view", False) and instance.data.get("isolate"): diff --git a/openpype/hosts/nuke/__init__.py b/openpype/hosts/nuke/__init__.py index 60b37ce1dd..134a6621c4 100644 --- a/openpype/hosts/nuke/__init__.py +++ b/openpype/hosts/nuke/__init__.py @@ -10,7 +10,7 @@ def add_implementation_envs(env, _app): ] old_nuke_path = env.get("NUKE_PATH") or "" for path in old_nuke_path.split(os.pathsep): - if not path or not os.path.exists(path): + if not path: continue norm_path = os.path.normpath(path) diff --git a/openpype/hosts/nuke/api/lib.py b/openpype/hosts/nuke/api/lib.py index 3c8ba3e77c..c22488f728 100644 --- a/openpype/hosts/nuke/api/lib.py +++ b/openpype/hosts/nuke/api/lib.py @@ -26,6 +26,7 @@ from openpype.tools.utils import host_tools from openpype.lib.path_tools import HostDirmap from openpype.settings import get_project_settings from openpype.modules import ModulesManager +from openpype.pipeline import discover_legacy_creator_plugins from .workio import ( save_file, @@ -1902,7 +1903,7 @@ def recreate_instance(origin_node, avalon_data=None): # create new node # get appropriate plugin class creator_plugin = None - for Creator in api.discover(api.Creator): + for Creator in discover_legacy_creator_plugins(): if Creator.__name__ == data["creator"]: creator_plugin = Creator break diff --git 
a/openpype/hosts/nuke/api/pipeline.py b/openpype/hosts/nuke/api/pipeline.py index 1d110cb94a..6ee3d2ce05 100644 --- a/openpype/hosts/nuke/api/pipeline.py +++ b/openpype/hosts/nuke/api/pipeline.py @@ -5,7 +5,6 @@ from collections import OrderedDict import nuke import pyblish.api -import avalon.api import openpype from openpype.api import ( @@ -15,10 +14,11 @@ from openpype.api import ( ) from openpype.lib import register_event_callback from openpype.pipeline import ( - LegacyCreator, register_loader_plugin_path, + register_creator_plugin_path, register_inventory_action_path, deregister_loader_plugin_path, + deregister_creator_plugin_path, deregister_inventory_action_path, AVALON_CONTAINER_ID, ) @@ -106,7 +106,7 @@ def install(): log.info("Registering Nuke plug-ins..") pyblish.api.register_plugin_path(PUBLISH_PATH) register_loader_plugin_path(LOAD_PATH) - avalon.api.register_plugin_path(LegacyCreator, CREATE_PATH) + register_creator_plugin_path(CREATE_PATH) register_inventory_action_path(INVENTORY_PATH) # Register Avalon event for workfiles loading. 
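The registration changes above swap the legacy `avalon.api` calls for `register_creator_plugin_path`, and `recreate_instance` now finds its plugin via `discover_legacy_creator_plugins()` matched by class name. A small sketch of that lookup pattern, using stand-in plugin classes rather than the real discovery machinery:

```python
# Stand-ins for discovered creator plugin classes; in OpenPype these would
# come from discover_legacy_creator_plugins().
class CreateRender:
    pass

class CreateWrite:
    pass

def discover_plugins():
    # Hypothetical replacement for the real discovery call.
    return [CreateRender, CreateWrite]

def find_creator(name):
    # Match a discovered plugin by its class name, as the diff does.
    for plugin in discover_plugins():
        if plugin.__name__ == name:
            return plugin
    return None
```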
@@ -132,7 +132,7 @@ def uninstall(): pyblish.deregister_host("nuke") pyblish.api.deregister_plugin_path(PUBLISH_PATH) deregister_loader_plugin_path(LOAD_PATH) - avalon.api.deregister_plugin_path(LegacyCreator, CREATE_PATH) + deregister_creator_plugin_path(CREATE_PATH) deregister_inventory_action_path(INVENTORY_PATH) pyblish.api.deregister_callback( diff --git a/openpype/hosts/nuke/api/plugin.py b/openpype/hosts/nuke/api/plugin.py index d0bb45a05d..3ac750a48f 100644 --- a/openpype/hosts/nuke/api/plugin.py +++ b/openpype/hosts/nuke/api/plugin.py @@ -450,6 +450,7 @@ class ExporterReviewMov(ExporterReview): def generate_mov(self, farm=False, **kwargs): self.publish_on_farm = farm + read_raw = kwargs["read_raw"] reformat_node_add = kwargs["reformat_node_add"] reformat_node_config = kwargs["reformat_node_config"] bake_viewer_process = kwargs["bake_viewer_process"] @@ -484,6 +485,9 @@ class ExporterReviewMov(ExporterReview): r_node["origlast"].setValue(self.last_frame) r_node["colorspace"].setValue(self.write_colorspace) + if read_raw: + r_node["raw"].setValue(1) + # connect self._temp_nodes[subset].append(r_node) self.previous_node = r_node diff --git a/openpype/hosts/nuke/plugins/load/load_effects.py b/openpype/hosts/nuke/plugins/load/load_effects.py index 68c3952942..1ed32996e1 100644 --- a/openpype/hosts/nuke/plugins/load/load_effects.py +++ b/openpype/hosts/nuke/plugins/load/load_effects.py @@ -72,7 +72,7 @@ class LoadEffects(load.LoaderPlugin): # getting data from json file with unicode conversion with open(file, "r") as f: json_f = {self.byteify(key): self.byteify(value) - for key, value in json.load(f).iteritems()} + for key, value in json.load(f).items()} # get correct order of nodes by positions on track and subtrack nodes_order = self.reorder_nodes(json_f) @@ -188,7 +188,7 @@ class LoadEffects(load.LoaderPlugin): # getting data from json file with unicode conversion with open(file, "r") as f: json_f = {self.byteify(key): self.byteify(value) - for key, value in 
json.load(f).iteritems()} + for key, value in json.load(f).items()} # get correct order of nodes by positions on track and subtrack nodes_order = self.reorder_nodes(json_f) @@ -330,11 +330,11 @@ class LoadEffects(load.LoaderPlugin): if isinstance(input, dict): return {self.byteify(key): self.byteify(value) - for key, value in input.iteritems()} + for key, value in input.items()} elif isinstance(input, list): return [self.byteify(element) for element in input] - elif isinstance(input, unicode): - return input.encode('utf-8') + elif isinstance(input, str): + return str(input) else: return input diff --git a/openpype/hosts/nuke/plugins/load/load_effects_ip.py b/openpype/hosts/nuke/plugins/load/load_effects_ip.py index 9c4fd4c2c6..383776111f 100644 --- a/openpype/hosts/nuke/plugins/load/load_effects_ip.py +++ b/openpype/hosts/nuke/plugins/load/load_effects_ip.py @@ -74,7 +74,7 @@ class LoadEffectsInputProcess(load.LoaderPlugin): # getting data from json file with unicode conversion with open(file, "r") as f: json_f = {self.byteify(key): self.byteify(value) - for key, value in json.load(f).iteritems()} + for key, value in json.load(f).items()} # get correct order of nodes by positions on track and subtrack nodes_order = self.reorder_nodes(json_f) @@ -194,7 +194,7 @@ class LoadEffectsInputProcess(load.LoaderPlugin): # getting data from json file with unicode conversion with open(file, "r") as f: json_f = {self.byteify(key): self.byteify(value) - for key, value in json.load(f).iteritems()} + for key, value in json.load(f).items()} # get correct order of nodes by positions on track and subtrack nodes_order = self.reorder_nodes(json_f) @@ -350,11 +350,11 @@ class LoadEffectsInputProcess(load.LoaderPlugin): if isinstance(input, dict): return {self.byteify(key): self.byteify(value) - for key, value in input.iteritems()} + for key, value in input.items()} elif isinstance(input, list): return [self.byteify(element) for element in input] - elif isinstance(input, unicode): - 
return input.encode('utf-8') + elif isinstance(input, str): + return str(input) else: return input diff --git a/openpype/hosts/nuke/plugins/load/load_gizmo_ip.py b/openpype/hosts/nuke/plugins/load/load_gizmo_ip.py index 87bebce15b..df52a22364 --- a/openpype/hosts/nuke/plugins/load/load_gizmo_ip.py +++ b/openpype/hosts/nuke/plugins/load/load_gizmo_ip.py @@ -240,7 +240,7 @@ class LoadGizmoInputProcess(load.LoaderPlugin): if isinstance(input, dict): return {self.byteify(key): self.byteify(value) - for key, value in input.iteritems()} + for key, value in input.items()} elif isinstance(input, list): return [self.byteify(element) for element in input] elif isinstance(input, unicode): diff --git a/openpype/hosts/nuke/plugins/publish/extract_review_data.py b/openpype/hosts/nuke/plugins/publish/extract_review_data.py new file mode 100644 index 0000000000..38a8140cff --- /dev/null +++ b/openpype/hosts/nuke/plugins/publish/extract_review_data.py @@ -0,0 +1,47 @@ +import os +import pyblish.api +import openpype +from pprint import pformat + + +class ExtractReviewData(openpype.api.Extractor): + """Extracts review tag into available representations + """ + + order = pyblish.api.ExtractorOrder + 0.01 + # order = pyblish.api.CollectorOrder + 0.499 + label = "Extract Review Data" + + families = ["review"] + hosts = ["nuke"] + + def process(self, instance): + fpath = instance.data["path"] + ext = os.path.splitext(fpath)[-1][1:] + + representations = instance.data.get("representations", []) + + # review can be removed since `ProcessSubmittedJobOnFarm` will create + # reviewable representation if needed + if ( + "render.farm" in instance.data["families"] + and "review" in instance.data["families"] + ): + instance.data["families"].remove("review") + + # iterate representations and add `review` tag + for repre in representations: + if ext != repre["ext"]: + continue + + if not repre.get("tags"): + repre["tags"] = [] + + if "review" not in repre["tags"]: 
repre["tags"].append("review") + + self.log.debug("Matching representation: {}".format( + pformat(repre) + )) + + instance.data["representations"] = representations diff --git a/openpype/hosts/nuke/plugins/publish/extract_review_data_mov.py b/openpype/hosts/nuke/plugins/publish/extract_review_data_mov.py index 544b9e04da..31a8ff18ee 100644 --- a/openpype/hosts/nuke/plugins/publish/extract_review_data_mov.py +++ b/openpype/hosts/nuke/plugins/publish/extract_review_data_mov.py @@ -24,7 +24,11 @@ class ExtractReviewDataMov(openpype.api.Extractor): outputs = {} def process(self, instance): - families = instance.data["families"] + families = set(instance.data["families"]) + + # add main family to make sure all families are compared + families.add(instance.data["family"]) + task_type = instance.context.data["taskType"] subset = instance.data["subset"] self.log.info("Creating staging dir...") @@ -50,51 +54,31 @@ class ExtractReviewDataMov(openpype.api.Extractor): f_task_types = o_data["filter"]["task_types"] f_subsets = o_data["filter"]["sebsets"] + self.log.debug( + "f_families `{}` > families: {}".format( + f_families, families)) + + self.log.debug( + "f_task_types `{}` > task_type: {}".format( + f_task_types, task_type)) + + self.log.debug( + "f_subsets `{}` > subset: {}".format( + f_subsets, subset)) + # test if family found in context - test_families = any([ - # first if exact family set is matching - # make sure only interesetion of list is correct - bool(set(families).intersection(f_families)), - # and if famiies are set at all - # if not then return True because we want this preset - # to be active if nothig is set - bool(not f_families) - ]) + # using intersection to make sure all defined + # families are present in combination + if f_families and not families.intersection(f_families): + continue # test task types from filter - test_task_types = any([ - # check if actual task type is defined in task types - # set in preset's filter - bool(task_type in 
f_task_types), - # and if taskTypes are defined in preset filter - # if not then return True, because we want this filter - # to be active if no taskType is set - bool(not f_task_types) - ]) + if f_task_types and task_type not in f_task_types: + continue # test subsets from filter - test_subsets = any([ - # check if any of subset filter inputs - # converted to regex patern is not found in subset - # we keep strict case sensitivity - bool(next(( - s for s in f_subsets - if re.search(re.compile(s), subset) - ), None)), - # but if no subsets were set then make this acuntable too - bool(not f_subsets) - ]) - - # we need all filters to be positive for this - # preset to be activated - test_all = all([ - test_families, - test_task_types, - test_subsets - ]) - - # if it is not positive then skip this preset - if not test_all: + if f_subsets and not any( + re.search(s, subset) for s in f_subsets): continue self.log.info( diff --git a/openpype/hosts/nuke/plugins/publish/validate_write_deadline_tab.py b/openpype/hosts/nuke/plugins/publish/validate_write_deadline_tab.py index 5ee93403d0..907577a97d 100644 --- a/openpype/hosts/nuke/plugins/publish/validate_write_deadline_tab.py +++ b/openpype/hosts/nuke/plugins/publish/validate_write_deadline_tab.py @@ -25,7 +25,7 @@ class RepairNukeWriteDeadlineTab(pyblish.api.Action): # Remove existing knobs. 
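The rewritten `ExtractReviewDataMov` filtering above collapses the nested `any([...])`/`all([...])` tests into early `continue` checks: an empty filter matches everything, and any non-empty filter that fails to match skips the preset. A hedged sketch of the same logic as a standalone predicate (`preset_matches` is an illustration, not the plugin's code):

```python
import re

def preset_matches(families, task_type, subset,
                   f_families, f_task_types, f_subsets):
    # Empty filters match everything; a non-empty filter must match.
    if f_families and not families.intersection(f_families):
        return False
    if f_task_types and task_type not in f_task_types:
        return False
    # Subset filters are treated as regex patterns, case sensitive.
    if f_subsets and not any(re.search(s, subset) for s in f_subsets):
        return False
    return True
```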
knob_names = openpype.hosts.nuke.lib.get_deadline_knob_names() - for name, knob in group_node.knobs().iteritems(): + for name, knob in group_node.knobs().items(): if name in knob_names: group_node.removeKnob(knob) diff --git a/openpype/hosts/nuke/plugins/publish/validate_write_legacy.py b/openpype/hosts/nuke/plugins/publish/validate_write_legacy.py index 08f09f8097..9fb57c1698 100644 --- a/openpype/hosts/nuke/plugins/publish/validate_write_legacy.py +++ b/openpype/hosts/nuke/plugins/publish/validate_write_legacy.py @@ -1,11 +1,10 @@ -import os import toml import nuke -from avalon import api import pyblish.api import openpype.api +from openpype.pipeline import discover_creator_plugins from openpype.hosts.nuke.api.lib import get_avalon_knob_data @@ -79,7 +78,7 @@ class ValidateWriteLegacy(pyblish.api.InstancePlugin): # get appropriate plugin class creator_plugin = None - for Creator in api.discover(api.Creator): + for Creator in discover_creator_plugins(): if Creator.__name__ != Create_name: continue diff --git a/openpype/hosts/photoshop/api/pipeline.py b/openpype/hosts/photoshop/api/pipeline.py index c2ad0ac7b0..7fdaa61b40 100644 --- a/openpype/hosts/photoshop/api/pipeline.py +++ b/openpype/hosts/photoshop/api/pipeline.py @@ -9,9 +9,10 @@ from avalon import io from openpype.api import Logger from openpype.lib import register_event_callback from openpype.pipeline import ( - LegacyCreator, register_loader_plugin_path, + register_creator_plugin_path, deregister_loader_plugin_path, + deregister_creator_plugin_path, AVALON_CONTAINER_ID, ) import openpype.hosts.photoshop @@ -75,7 +76,7 @@ def install(): pyblish.api.register_plugin_path(PUBLISH_PATH) register_loader_plugin_path(LOAD_PATH) - avalon.api.register_plugin_path(LegacyCreator, CREATE_PATH) + register_creator_plugin_path(CREATE_PATH) log.info(PUBLISH_PATH) pyblish.api.register_callback( @@ -88,7 +89,7 @@ def install(): def uninstall(): pyblish.api.deregister_plugin_path(PUBLISH_PATH) 
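Several hunks above port Python 2 `iteritems()`/`unicode` handling to Python 3. A sketch of the ported `byteify` pattern: under Python 3, `json.load` already returns `str`, so the recursive walk only needs `.items()` and `str`:

```python
# Python 3 version of the byteify walk seen in the load_effects diffs:
# recurse through dicts and lists, pass strings and other values through.
def byteify(value):
    if isinstance(value, dict):
        return {byteify(key): byteify(val) for key, val in value.items()}
    if isinstance(value, list):
        return [byteify(element) for element in value]
    if isinstance(value, str):
        return str(value)
    return value
```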
deregister_loader_plugin_path(LOAD_PATH) - avalon.api.deregister_plugin_path(LegacyCreator, CREATE_PATH) + deregister_creator_plugin_path(CREATE_PATH) def ls(): diff --git a/openpype/hosts/photoshop/plugins/create/create_image.py b/openpype/hosts/photoshop/plugins/create/create_image.py index a001b5f171..5078cbb587 --- a/openpype/hosts/photoshop/plugins/create/create_image.py +++ b/openpype/hosts/photoshop/plugins/create/create_image.py @@ -1,9 +1,9 @@ from Qt import QtWidgets -from openpype.pipeline import create +from openpype.pipeline import LegacyCreator from openpype.hosts.photoshop import api as photoshop -class CreateImage(create.LegacyCreator): +class CreateImage(LegacyCreator): """Image folder for publish.""" name = "imageDefault" diff --git a/openpype/hosts/photoshop/plugins/publish/collect_instances.py b/openpype/hosts/photoshop/plugins/publish/collect_instances.py index c3e27e9646..6198ed0156 --- a/openpype/hosts/photoshop/plugins/publish/collect_instances.py +++ b/openpype/hosts/photoshop/plugins/publish/collect_instances.py @@ -1,6 +1,9 @@ +from avalon import api import pyblish.api +from openpype.settings import get_project_settings from openpype.hosts.photoshop import api as photoshop +from openpype.lib import prepare_template_data class CollectInstances(pyblish.api.ContextPlugin): @@ -9,6 +12,10 @@ This collector takes into account assets that are associated with an LayerSet and marked with a unique identifier; + If no image instances are explicitly created and `flatten_subset_template` + (configurable in Settings) has a value, a flattened image with all + visible layers is produced. 
+ Identifier: id (str): "pyblish.avalon.instance" """ @@ -19,13 +26,17 @@ class CollectInstances(pyblish.api.ContextPlugin): families_mapping = { "image": [] } + # configurable in Settings + flatten_subset_template = "" def process(self, context): stub = photoshop.stub() layers = stub.get_layers() layers_meta = stub.get_layers_metadata() instance_names = [] + all_layer_ids = [] for layer in layers: + all_layer_ids.append(layer.id) layer_data = stub.read(layer, layers_meta) # Skip layers without metadata. @@ -59,3 +70,33 @@ class CollectInstances(pyblish.api.ContextPlugin): if len(instance_names) != len(set(instance_names)): self.log.warning("Duplicate instances found. " + "Remove unwanted via SubsetManager") + + if len(instance_names) == 0 and self.flatten_subset_template: + project_name = context.data["projectEntity"]["name"] + variants = get_project_settings(project_name).get( + "photoshop", {}).get( + "create", {}).get( + "CreateImage", {}).get( + "defaults", ['']) + family = "image" + task_name = api.Session["AVALON_TASK"] + asset_name = context.data["assetEntity"]["name"] + + fill_pairs = { + "variant": variants[0], + "family": family, + "task": task_name + } + + subset = self.flatten_subset_template.format( + **prepare_template_data(fill_pairs)) + + instance = context.create_instance(subset) + instance.data["family"] = family + instance.data["asset"] = asset_name + instance.data["subset"] = subset + instance.data["ids"] = all_layer_ids + instance.data["families"] = self.families_mapping[family] + instance.data["publish"] = True + + self.log.info("flatten instance: {} ".format(instance.data)) diff --git a/openpype/hosts/photoshop/plugins/publish/collect_review.py b/openpype/hosts/photoshop/plugins/publish/collect_review.py index 5ab48b76da..f3842b9ee5 100644 --- a/openpype/hosts/photoshop/plugins/publish/collect_review.py +++ b/openpype/hosts/photoshop/plugins/publish/collect_review.py @@ -2,18 +2,26 @@ import os import pyblish.api +from openpype.lib import 
get_subset_name_with_asset_doc + class CollectReview(pyblish.api.ContextPlugin): """Gather the active document as review instance.""" label = "Review" - order = pyblish.api.CollectorOrder + order = pyblish.api.CollectorOrder + 0.1 hosts = ["photoshop"] def process(self, context): family = "review" - task = os.getenv("AVALON_TASK", None) - subset = family + task.capitalize() + subset = get_subset_name_with_asset_doc( + family, + "", + context.data["anatomyData"]["task"]["name"], + context.data["assetEntity"], + context.data["anatomyData"]["project"]["name"], + host_name=context.data["hostName"] + ) file_path = context.data["currentFile"] base_name = os.path.basename(file_path) diff --git a/openpype/hosts/photoshop/plugins/publish/collect_workfile.py b/openpype/hosts/photoshop/plugins/publish/collect_workfile.py index db1ede14d5..0dbe2c6609 100644 --- a/openpype/hosts/photoshop/plugins/publish/collect_workfile.py +++ b/openpype/hosts/photoshop/plugins/publish/collect_workfile.py @@ -1,6 +1,8 @@ import os import pyblish.api +from openpype.lib import get_subset_name_with_asset_doc + class CollectWorkfile(pyblish.api.ContextPlugin): """Collect current script for publish.""" @@ -11,8 +13,14 @@ class CollectWorkfile(pyblish.api.ContextPlugin): def process(self, context): family = "workfile" - task = os.getenv("AVALON_TASK", None) - subset = family + task.capitalize() + subset = get_subset_name_with_asset_doc( + family, + "", + context.data["anatomyData"]["task"]["name"], + context.data["assetEntity"], + context.data["anatomyData"]["project"]["name"], + host_name=context.data["hostName"] + ) file_path = context.data["currentFile"] staging_dir = os.path.dirname(file_path) diff --git a/openpype/hosts/photoshop/plugins/publish/extract_image.py b/openpype/hosts/photoshop/plugins/publish/extract_image.py index 04ce77ee34..b07d0740c1 100644 --- a/openpype/hosts/photoshop/plugins/publish/extract_image.py +++ b/openpype/hosts/photoshop/plugins/publish/extract_image.py @@ -26,8 
+26,10 @@ class ExtractImage(openpype.api.Extractor): with photoshop.maintained_selection(): self.log.info("Extracting %s" % str(list(instance))) with photoshop.maintained_visibility(): + ids = set() layer = instance.data.get("layer") - ids = set([layer.id]) + if layer: + ids.add(layer.id) add_ids = instance.data.pop("ids", None) if add_ids: ids.update(set(add_ids)) diff --git a/openpype/hosts/photoshop/plugins/publish/extract_review.py b/openpype/hosts/photoshop/plugins/publish/extract_review.py index b8f4470c7b..d076610ead 100644 --- a/openpype/hosts/photoshop/plugins/publish/extract_review.py +++ b/openpype/hosts/photoshop/plugins/publish/extract_review.py @@ -155,6 +155,9 @@ class ExtractReview(openpype.api.Extractor): for image_instance in instance.context: if image_instance.data["family"] != "image": continue + if not image_instance.data.get("layer"): + # dummy instance for flatten image + continue layers.append(image_instance.data.get("layer")) return sorted(layers) diff --git a/openpype/hosts/photoshop/plugins/publish/validate_naming.py b/openpype/hosts/photoshop/plugins/publish/validate_naming.py index b40e44d016..583e9c7a4e 100644 --- a/openpype/hosts/photoshop/plugins/publish/validate_naming.py +++ b/openpype/hosts/photoshop/plugins/publish/validate_naming.py @@ -29,7 +29,8 @@ class ValidateNamingRepair(pyblish.api.Action): stub = photoshop.stub() for instance in instances: self.log.info("validate_naming instance {}".format(instance)) - metadata = stub.read(instance[0]) + layer_item = instance.data["layer"] + metadata = stub.read(layer_item) self.log.info("metadata instance {}".format(metadata)) layer_name = None if metadata.get("uuid"): @@ -43,11 +44,11 @@ class ValidateNamingRepair(pyblish.api.Action): stub.rename_layer(instance.data["uuid"], layer_name) subset_name = re.sub(invalid_chars, replace_char, - instance.data["name"]) + instance.data["subset"]) - instance[0].Name = layer_name or subset_name + layer_item.name = layer_name or subset_name 
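The `collect_review.py` and `collect_workfile.py` hunks above replace the hard-coded `family + task.capitalize()` subset names with `get_subset_name_with_asset_doc`, which fills a configurable template instead. As an illustration of why templating helps, here is a hypothetical `build_subset_name` helper — not the OpenPype implementation:

```python
# Hypothetical template-based subset name builder; the old behaviour
# (family + capitalized task) becomes just one possible template.
def build_subset_name(template, family, variant, task):
    return template.format(family=family,
                           variant=variant.capitalize(),
                           task=task.capitalize())

# Equivalent of the removed hard-coded naming:
old_style = build_subset_name("{family}{task}", "review", "", "compositing")
```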
metadata["subset"] = subset_name - stub.imprint(instance[0], metadata) + stub.imprint(layer_item, metadata) return True diff --git a/openpype/hosts/resolve/api/pipeline.py b/openpype/hosts/resolve/api/pipeline.py index e8b017ead5..636c826a11 100644 --- a/openpype/hosts/resolve/api/pipeline.py +++ b/openpype/hosts/resolve/api/pipeline.py @@ -4,14 +4,17 @@ Basic avalon integration import os import contextlib from collections import OrderedDict -from avalon import api as avalon -from avalon import schema + from pyblish import api as pyblish + +from avalon import schema + from openpype.api import Logger from openpype.pipeline import ( - LegacyCreator, register_loader_plugin_path, + register_creator_plugin_path, deregister_loader_plugin_path, + deregister_creator_plugin_path, AVALON_CONTAINER_ID, ) from . import lib @@ -46,7 +49,7 @@ def install(): log.info("Registering DaVinci Resovle plug-ins..") register_loader_plugin_path(LOAD_PATH) - avalon.register_plugin_path(LegacyCreator, CREATE_PATH) + register_creator_plugin_path(CREATE_PATH) # register callback for switching publishable pyblish.register_callback("instanceToggled", on_pyblish_instance_toggled) @@ -70,7 +73,7 @@ def uninstall(): log.info("Deregistering DaVinci Resovle plug-ins..") deregister_loader_plugin_path(LOAD_PATH) - avalon.deregister_plugin_path(LegacyCreator, CREATE_PATH) + deregister_creator_plugin_path(CREATE_PATH) # register callback for switching publishable pyblish.deregister_callback("instanceToggled", on_pyblish_instance_toggled) diff --git a/openpype/hosts/testhost/plugins/create/auto_creator.py b/openpype/hosts/testhost/plugins/create/auto_creator.py index 45c573e487..4c22eea9dd 100644 --- a/openpype/hosts/testhost/plugins/create/auto_creator.py +++ b/openpype/hosts/testhost/plugins/create/auto_creator.py @@ -1,10 +1,10 @@ +from avalon import io +from openpype.lib import NumberDef from openpype.hosts.testhost.api import pipeline from openpype.pipeline import ( AutoCreator, CreatedInstance, - 
lib ) -from avalon import io class MyAutoCreator(AutoCreator): @@ -13,7 +13,7 @@ class MyAutoCreator(AutoCreator): def get_instance_attr_defs(self): output = [ - lib.NumberDef("number_key", label="Number") + NumberDef("number_key", label="Number") ] return output @@ -30,7 +30,7 @@ class MyAutoCreator(AutoCreator): def update_instances(self, update_list): pipeline.update_instances(update_list) - def create(self, options=None): + def create(self): existing_instance = None for instance in self.create_context.instances: if instance.family == self.family: diff --git a/openpype/hosts/testhost/plugins/create/test_creator_1.py b/openpype/hosts/testhost/plugins/create/test_creator_1.py index 45c30e8a27..7664276fa2 100644 --- a/openpype/hosts/testhost/plugins/create/test_creator_1.py +++ b/openpype/hosts/testhost/plugins/create/test_creator_1.py @@ -1,10 +1,16 @@ import json from openpype import resources from openpype.hosts.testhost.api import pipeline +from openpype.lib import ( + UISeparatorDef, + UILabelDef, + BoolDef, + NumberDef, + FileDef, +) from openpype.pipeline import ( Creator, CreatedInstance, - lib ) @@ -54,17 +60,17 @@ class TestCreatorOne(Creator): def get_instance_attr_defs(self): output = [ - lib.NumberDef("number_key", label="Number"), + NumberDef("number_key", label="Number"), ] return output def get_pre_create_attr_defs(self): output = [ - lib.BoolDef("use_selection", label="Use selection"), - lib.UISeparatorDef(), - lib.UILabelDef("Testing label"), - lib.FileDef("filepath", folders=True, label="Filepath"), - lib.FileDef( + BoolDef("use_selection", label="Use selection"), + UISeparatorDef(), + UILabelDef("Testing label"), + FileDef("filepath", folders=True, label="Filepath"), + FileDef( "filepath_2", multipath=True, folders=True, label="Filepath 2" ) ] diff --git a/openpype/hosts/testhost/plugins/create/test_creator_2.py b/openpype/hosts/testhost/plugins/create/test_creator_2.py index e66304a038..f54adee8a2 100644 --- 
a/openpype/hosts/testhost/plugins/create/test_creator_2.py +++ b/openpype/hosts/testhost/plugins/create/test_creator_2.py @@ -1,8 +1,8 @@ +from openpype.lib import NumberDef, TextDef from openpype.hosts.testhost.api import pipeline from openpype.pipeline import ( Creator, CreatedInstance, - lib ) @@ -40,8 +40,8 @@ class TestCreatorTwo(Creator): def get_instance_attr_defs(self): output = [ - lib.NumberDef("number_key"), - lib.TextDef("text_key") + NumberDef("number_key"), + TextDef("text_key") ] return output diff --git a/openpype/hosts/testhost/plugins/publish/collect_instance_1.py b/openpype/hosts/testhost/plugins/publish/collect_instance_1.py index 3c035eccb6..c7241a15a8 100644 --- a/openpype/hosts/testhost/plugins/publish/collect_instance_1.py +++ b/openpype/hosts/testhost/plugins/publish/collect_instance_1.py @@ -1,10 +1,8 @@ import json import pyblish.api -from openpype.pipeline import ( - OpenPypePyblishPluginMixin, - attribute_definitions -) +from openpype.lib import attribute_definitions +from openpype.pipeline import OpenPypePyblishPluginMixin class CollectInstanceOneTestHost( diff --git a/openpype/hosts/traypublisher/plugins/create/create_workfile.py b/openpype/hosts/traypublisher/plugins/create/create_workfile.py index 2db4770bbc..5e0af350f0 100644 --- a/openpype/hosts/traypublisher/plugins/create/create_workfile.py +++ b/openpype/hosts/traypublisher/plugins/create/create_workfile.py @@ -1,8 +1,8 @@ from openpype.hosts.traypublisher.api import pipeline +from openpype.lib import FileDef from openpype.pipeline import ( Creator, - CreatedInstance, - lib + CreatedInstance ) @@ -80,7 +80,7 @@ class WorkfileCreator(Creator): def get_instance_attr_defs(self): output = [ - lib.FileDef( + FileDef( "filepath", folders=False, extensions=self.extensions, diff --git a/openpype/hosts/traypublisher/plugins/publish/validate_workfile.py b/openpype/hosts/traypublisher/plugins/publish/validate_workfile.py index 88339d2aac..e8eeb46065 100644 --- 
a/openpype/hosts/traypublisher/plugins/publish/validate_workfile.py +++ b/openpype/hosts/traypublisher/plugins/publish/validate_workfile.py @@ -6,7 +6,7 @@ from openpype.pipeline import PublishValidationError class ValidateWorkfilePath(pyblish.api.InstancePlugin): """Validate existence of workfile instance existence.""" - label = "Collect Workfile" + label = "Validate Workfile" order = pyblish.api.ValidatorOrder - 0.49 families = ["workfile"] hosts = ["traypublisher"] diff --git a/openpype/hosts/tvpaint/api/pipeline.py b/openpype/hosts/tvpaint/api/pipeline.py index ec880a1abc..cafdf0701d 100644 --- a/openpype/hosts/tvpaint/api/pipeline.py +++ b/openpype/hosts/tvpaint/api/pipeline.py @@ -15,9 +15,10 @@ from openpype.hosts import tvpaint from openpype.api import get_current_project_settings from openpype.lib import register_event_callback from openpype.pipeline import ( - LegacyCreator, register_loader_plugin_path, + register_creator_plugin_path, deregister_loader_plugin_path, + deregister_creator_plugin_path, AVALON_CONTAINER_ID, ) @@ -82,7 +83,7 @@ def install(): pyblish.api.register_host("tvpaint") pyblish.api.register_plugin_path(PUBLISH_PATH) register_loader_plugin_path(LOAD_PATH) - avalon.api.register_plugin_path(LegacyCreator, CREATE_PATH) + register_creator_plugin_path(CREATE_PATH) registered_callbacks = ( pyblish.api.registered_callbacks().get("instanceToggled") or [] @@ -104,7 +105,7 @@ def uninstall(): pyblish.api.deregister_host("tvpaint") pyblish.api.deregister_plugin_path(PUBLISH_PATH) deregister_loader_plugin_path(LOAD_PATH) - avalon.api.deregister_plugin_path(LegacyCreator, CREATE_PATH) + deregister_creator_plugin_path(CREATE_PATH) def containerise( diff --git a/openpype/hosts/tvpaint/plugins/publish/collect_instances.py b/openpype/hosts/tvpaint/plugins/publish/collect_instances.py index 9cbfb61550..5e8d13592c 100644 --- a/openpype/hosts/tvpaint/plugins/publish/collect_instances.py +++ b/openpype/hosts/tvpaint/plugins/publish/collect_instances.py @@ 
-20,21 +20,30 @@ class CollectInstances(pyblish.api.ContextPlugin): json.dumps(workfile_instances, indent=4) )) + filtered_instance_data = [] # Backwards compatibility for workfiles that already have review # instance in metadata. review_instance_exist = False for instance_data in workfile_instances: - if instance_data["family"] == "review": + family = instance_data["family"] + if family == "review": review_instance_exist = True - break + + elif family not in ("renderPass", "renderLayer"): + self.log.info("Unknown family \"{}\". Skipping {}".format( + family, json.dumps(instance_data, indent=4) + )) + continue + + filtered_instance_data.append(instance_data) # Fake review instance if review was not found in metadata families if not review_instance_exist: - workfile_instances.append( + filtered_instance_data.append( self._create_review_instance_data(context) ) - for instance_data in workfile_instances: + for instance_data in filtered_instance_data: instance_data["fps"] = context.data["sceneFps"] # Store workfile instance data to instance data @@ -42,8 +51,11 @@ class CollectInstances(pyblish.api.ContextPlugin): # Global instance data modifications # Fill families family = instance_data["family"] + families = [family] + if family != "review": + families.append("review") # Add `review` family for thumbnail integration - instance_data["families"] = [family, "review"] + instance_data["families"] = families # Instance name subset_name = instance_data["subset"] @@ -78,7 +90,7 @@ class CollectInstances(pyblish.api.ContextPlugin): # Project name from workfile context project_name = context.data["workfile_context"]["project"] # Host name from environment variable - host_name = os.environ["AVALON_APP"] + host_name = context.data["hostName"] # Use empty variant value variant = "" task_name = io.Session["AVALON_TASK"] @@ -106,12 +118,6 @@ class CollectInstances(pyblish.api.ContextPlugin): instance = self.create_render_pass_instance( context, instance_data ) - else: - raise 
AssertionError( - "Instance with unknown family \"{}\": {}".format( - family, instance_data - ) - ) if instance is None: continue diff --git a/openpype/hosts/tvpaint/plugins/publish/collect_scene_render.py b/openpype/hosts/tvpaint/plugins/publish/collect_scene_render.py new file mode 100644 index 0000000000..0af9a9a400 --- /dev/null +++ b/openpype/hosts/tvpaint/plugins/publish/collect_scene_render.py @@ -0,0 +1,110 @@ +import json +import copy +import pyblish.api +from avalon import io + +from openpype.lib import get_subset_name_with_asset_doc + + +class CollectRenderScene(pyblish.api.ContextPlugin): + """Collect an instance which renders the whole scene to PNG. + + Creates an instance with family 'renderScene' containing all layers + to render, composited into one result. The instance is not + collected from the scene. + + The scene is rendered with all visible layers, the same way a review is. + + The instance is disabled if any 'renderLayer' or 'renderPass' instances + were created, because it is expected to be used as a lazy publish of the + TVPaint file. + + The subset name is created the same way as for the 'renderLayer' family. + It can use the `renderPass` and `renderLayer` keys, which can be set in + Settings; `variant` is filled using the `renderPass` value. 
+ """ + label = "Collect Render Scene" + order = pyblish.api.CollectorOrder - 0.39 + hosts = ["tvpaint"] + + # Value of 'render_pass' in subset name template + render_pass = "beauty" + + # Settings attributes + enabled = False + # Value of 'render_layer' and 'variant' in subset name template + render_layer = "Main" + + def process(self, context): + # Check if there are created instances of renderPass and renderLayer + # - that will define if renderScene instance is enabled after + # collection + any_created_instance = False + for instance in context: + family = instance.data["family"] + if family in ("renderPass", "renderLayer"): + any_created_instance = True + break + + # Global instance data modifications + # Fill families + family = "renderScene" + # Add `review` family for thumbnail integration + families = [family, "review"] + + # Collect asset doc to get asset id + # - not sure if it's good idea to require asset id in + # get_subset_name? + workfile_context = context.data["workfile_context"] + asset_name = workfile_context["asset"] + asset_doc = io.find_one({ + "type": "asset", + "name": asset_name + }) + + # Project name from workfile context + project_name = context.data["workfile_context"]["project"] + # Host name from context data + host_name = context.data["hostName"] + # Variant is using render layer name + variant = self.render_layer + dynamic_data = { + "render_layer": self.render_layer, + "render_pass": self.render_pass + } + + task_name = workfile_context["task"] + subset_name = get_subset_name_with_asset_doc( + "render", + variant, + task_name, + asset_doc, + project_name, + host_name, + dynamic_data=dynamic_data + ) + + instance_data = { + "family": family, + "families": families, + "fps": context.data["sceneFps"], + "subset": subset_name, + "name": subset_name, + "label": "{} [{}-{}]".format( + subset_name, + context.data["sceneMarkIn"] + 1, + context.data["sceneMarkOut"] + 1 + ), + "active": not any_created_instance, + "publish": not
any_created_instance, + "representations": [], + "layers": copy.deepcopy(context.data["layersData"]), + "asset": asset_name, + "task": task_name + } + + instance = context.create_instance(**instance_data) + + self.log.debug("Created instance: {}\n{}".format( + instance, json.dumps(instance.data, indent=4) + )) diff --git a/openpype/hosts/tvpaint/plugins/publish/extract_convert_to_exr.py b/openpype/hosts/tvpaint/plugins/publish/extract_convert_to_exr.py new file mode 100644 index 0000000000..ab5bbc5e2c --- /dev/null +++ b/openpype/hosts/tvpaint/plugins/publish/extract_convert_to_exr.py @@ -0,0 +1,99 @@ +"""Plugin converting png files from ExtractSequence into exrs. + +Requires: + ExtractSequence - source of PNG + ExtractReview - review was already created so we can convert to any exr +"""
+import os +import json + +import pyblish.api +from openpype.lib import ( + get_oiio_tools_path, + run_subprocess, +) +from openpype.pipeline import KnownPublishError + + +class ExtractConvertToEXR(pyblish.api.InstancePlugin): + # Offset to get after ExtractSequence plugin. + order = pyblish.api.ExtractorOrder + 0.1 + label = "Extract Sequence EXR" + hosts = ["tvpaint"] + families = ["render"] + + enabled = False + + # Replace source PNG files or just add + replace_pngs = True + # EXR compression + exr_compression = "ZIP" + + def process(self, instance): + repres = instance.data.get("representations") + if not repres: + return + + oiio_path = get_oiio_tools_path() + # Raise an exception when oiiotool is not available + # - this can currently happen on MacOS machines + if not os.path.exists(oiio_path): + raise KnownPublishError( + "OpenImageIO tool is not available on this machine."
+ ) + + new_repres = [] + for repre in tuple(repres): + if repre["name"] != "png": + continue + + self.log.info( + "Processing representation: {}".format( + json.dumps(repre, sort_keys=True, indent=4) + ) + ) + + src_filepaths = set() + new_filenames = [] + for src_filename in repre["files"]: + dst_filename = os.path.splitext(src_filename)[0] + ".exr" + new_filenames.append(dst_filename) + + src_filepath = os.path.join(repre["stagingDir"], src_filename) + dst_filepath = os.path.join(repre["stagingDir"], dst_filename) + + src_filepaths.add(src_filepath) + + args = [ + oiio_path, src_filepath, + "--compression", self.exr_compression, + # TODO how to define color conversion? + "--colorconvert", "sRGB", "linear", + "-o", dst_filepath + ] + run_subprocess(args) + + new_repres.append( + { + "name": "exr", + "ext": "exr", + "files": new_filenames, + "stagingDir": repre["stagingDir"], + "tags": list(repre["tags"]) + } + ) + + if self.replace_pngs: + instance.data["representations"].remove(repre) + + for filepath in src_filepaths: + instance.context.data["cleanupFullPaths"].append(filepath) + + instance.data["representations"].extend(new_repres) + self.log.info( + "Representations: {}".format( + json.dumps( + instance.data["representations"], sort_keys=True, indent=4 + ) + ) + ) diff --git a/openpype/hosts/tvpaint/plugins/publish/extract_sequence.py b/openpype/hosts/tvpaint/plugins/publish/extract_sequence.py index 729c545545..d4fd1dff4b 100644 --- a/openpype/hosts/tvpaint/plugins/publish/extract_sequence.py +++ b/openpype/hosts/tvpaint/plugins/publish/extract_sequence.py @@ -12,14 +12,13 @@ from openpype.hosts.tvpaint.lib import ( fill_reference_frames, composite_rendered_layers, rename_filepaths_by_frame_start, - composite_images ) class ExtractSequence(pyblish.api.Extractor): label = "Extract Sequence" hosts = ["tvpaint"] - families = ["review", "renderPass", "renderLayer"] + families = ["review", "renderPass", "renderLayer", "renderScene"] # Modifiable with settings review_bg
= [255, 255, 255, 255] @@ -160,7 +159,7 @@ class ExtractSequence(pyblish.api.Extractor): # Fill tags and new families tags = [] - if family_lowered in ("review", "renderlayer"): + if family_lowered in ("review", "renderlayer", "renderscene"): tags.append("review") # Sequence of one frame @@ -186,7 +185,7 @@ class ExtractSequence(pyblish.api.Extractor): instance.data["representations"].append(new_repre) - if family_lowered in ("renderpass", "renderlayer"): + if family_lowered in ("renderpass", "renderlayer", "renderscene"): # Change family to render instance.data["family"] = "render" diff --git a/openpype/hosts/tvpaint/plugins/publish/validate_layers_visibility.py b/openpype/hosts/tvpaint/plugins/publish/validate_layers_visibility.py index 7ea0587b8f..d3a04cc69f 100644 --- a/openpype/hosts/tvpaint/plugins/publish/validate_layers_visibility.py +++ b/openpype/hosts/tvpaint/plugins/publish/validate_layers_visibility.py @@ -8,7 +8,7 @@ class ValidateLayersVisiblity(pyblish.api.InstancePlugin): label = "Validate Layers Visibility" order = pyblish.api.ValidatorOrder - families = ["review", "renderPass", "renderLayer"] + families = ["review", "renderPass", "renderLayer", "renderScene"] def process(self, instance): layer_names = set() diff --git a/openpype/hosts/unreal/api/pipeline.py b/openpype/hosts/unreal/api/pipeline.py index 713c588976..6d7a6ad1e2 100644 --- a/openpype/hosts/unreal/api/pipeline.py +++ b/openpype/hosts/unreal/api/pipeline.py @@ -7,9 +7,10 @@ import pyblish.api from avalon import api from openpype.pipeline import ( - LegacyCreator, register_loader_plugin_path, + register_creator_plugin_path, deregister_loader_plugin_path, + deregister_creator_plugin_path, AVALON_CONTAINER_ID, ) from openpype.tools.utils import host_tools @@ -49,7 +50,7 @@ def install(): logger.info("installing OpenPype for Unreal") pyblish.api.register_plugin_path(str(PUBLISH_PATH)) register_loader_plugin_path(str(LOAD_PATH)) - api.register_plugin_path(LegacyCreator, str(CREATE_PATH)) + 
register_creator_plugin_path(str(CREATE_PATH)) _register_callbacks() _register_events() @@ -58,7 +59,7 @@ def uninstall(): """Uninstall Unreal configuration for Avalon.""" pyblish.api.deregister_plugin_path(str(PUBLISH_PATH)) deregister_loader_plugin_path(str(LOAD_PATH)) - api.deregister_plugin_path(LegacyCreator, str(CREATE_PATH)) + deregister_creator_plugin_path(str(CREATE_PATH)) def _register_callbacks(): diff --git a/openpype/lib/__init__.py b/openpype/lib/__init__.py index 1ebafbb2d2..e8b6d18f4e 100644 --- a/openpype/lib/__init__.py +++ b/openpype/lib/__init__.py @@ -29,6 +29,21 @@ from .vendor_bin_utils import ( is_oiio_supported ) +from .attribute_definitions import ( + AbtractAttrDef, + + UIDef, + UISeparatorDef, + UILabelDef, + + UnknownDef, + NumberDef, + TextDef, + EnumDef, + BoolDef, + FileDef, +) + from .env_tools import ( env_value_to_bool, get_paths_from_environ, @@ -233,6 +248,19 @@ __all__ = [ "get_ffmpeg_tool_path", "is_oiio_supported", + "AbtractAttrDef", + + "UIDef", + "UISeparatorDef", + "UILabelDef", + + "UnknownDef", + "NumberDef", + "TextDef", + "EnumDef", + "BoolDef", + "FileDef", + "import_filepath", "modules_from_path", "recursive_bases_from_class", diff --git a/openpype/lib/applications.py b/openpype/lib/applications.py index e72585c75a..5821c863d7 100644 --- a/openpype/lib/applications.py +++ b/openpype/lib/applications.py @@ -211,6 +211,7 @@ class ApplicationGroup: data (dict): Group defying data loaded from settings. manager (ApplicationManager): Manager that created the group. """ + def __init__(self, name, data, manager): self.name = name self.manager = manager @@ -374,6 +375,7 @@ class ApplicationManager: will always use these values. Gives ability to create manager using different settings. 
""" + def __init__(self, system_settings=None): self.log = PypeLogger.get_logger(self.__class__.__name__) @@ -530,13 +532,13 @@ class EnvironmentToolGroup: variants = data.get("variants") or {} label_by_key = variants.pop(M_DYNAMIC_KEY_LABEL, {}) variants_by_name = {} - for variant_name, variant_env in variants.items(): + for variant_name, variant_data in variants.items(): if variant_name in METADATA_KEYS: continue variant_label = label_by_key.get(variant_name) or variant_name tool = EnvironmentTool( - variant_name, variant_label, variant_env, self + variant_name, variant_label, variant_data, self ) variants_by_name[variant_name] = tool self.variants = variants_by_name @@ -560,15 +562,30 @@ class EnvironmentTool: Args: name (str): Name of the tool. - environment (dict): Variant environments. + variant_data (dict): Variant data with environments and + host and app variant filters. group (str): Name of group which wraps tool. """ - def __init__(self, name, label, environment, group): + def __init__(self, name, label, variant_data, group): + # Backwards compatibility 3.9.1 - 3.9.2 + # - 'variant_data' used to contain only environments but now also + # contains host and application variant filters + host_names = variant_data.get("host_names", []) + app_variants = variant_data.get("app_variants", []) + + if "environment" in variant_data: + environment = variant_data["environment"] + else: + environment = variant_data + + self.host_names = host_names + self.app_variants = app_variants self.name = name self.variant_label = label self.label = " ".join((group.label, label)) self.group = group + self._environment = environment self.full_name = "/".join((group.name, name)) @@ -579,6 +596,19 @@ def environment(self): return copy.deepcopy(self._environment) + def is_valid_for_app(self, app): + """Is tool valid for application. + + Args: + app (Application): Application for which environments are prepared.
+ """ + if self.app_variants and app.full_name not in self.app_variants: + return False + + if self.host_names and app.host_name not in self.host_names: + return False + return True + class ApplicationExecutable: """Representation of executable loaded from settings.""" @@ -1319,6 +1349,41 @@ def _merge_env(env, current_env): return result +def _add_python_version_paths(app, env, logger): + """Add vendor packages specific for a Python version.""" + + # Skip adding if host name is not set + if not app.host_name: + return + + # Add Python 2/3 modules + openpype_root = os.getenv("OPENPYPE_REPOS_ROOT") + python_vendor_dir = os.path.join( + openpype_root, + "openpype", + "vendor", + "python" + ) + if app.use_python_2: + pythonpath = os.path.join(python_vendor_dir, "python_2") + else: + pythonpath = os.path.join(python_vendor_dir, "python_3") + + if not os.path.exists(pythonpath): + return + + logger.debug("Adding Python version specific paths to PYTHONPATH") + python_paths = [pythonpath] + + # Load PYTHONPATH from current launch context + python_path = env.get("PYTHONPATH") + if python_path: + python_paths.append(python_path) + + # Set new PYTHONPATH to launch context environments + env["PYTHONPATH"] = os.pathsep.join(python_paths) + + def prepare_app_environments(data, env_group=None, implementation_envs=True): """Modify launch environments based on launched app and context. 
@@ -1331,6 +1396,8 @@ def prepare_app_environments(data, env_group=None, implementation_envs=True): app = data["app"] log = data["log"] + _add_python_version_paths(app, data["env"], log) + # `added_env_keys` has debug purpose added_env_keys = {app.group.name, app.name} # Environments for application @@ -1347,7 +1414,7 @@ def prepare_app_environments(data, env_group=None, implementation_envs=True): # Make sure each tool group can be added only once for key in asset_doc["data"].get("tools_env") or []: tool = app.manager.tools.get(key) - if not tool: + if not tool or not tool.is_valid_for_app(app): continue groups_by_name[tool.group.name] = tool.group tool_by_group_name[tool.group.name][tool.name] = tool diff --git a/openpype/pipeline/lib/attribute_definitions.py b/openpype/lib/attribute_definitions.py similarity index 100% rename from openpype/pipeline/lib/attribute_definitions.py rename to openpype/lib/attribute_definitions.py diff --git a/openpype/lib/avalon_context.py b/openpype/lib/avalon_context.py index 05d2ffd821..0348d88be2 100644 --- a/openpype/lib/avalon_context.py +++ b/openpype/lib/avalon_context.py @@ -1604,13 +1604,13 @@ def get_creator_by_name(creator_name, case_sensitive=False): Returns: Creator: Return first matching plugin or `None`. 
""" - from openpype.pipeline import LegacyCreator + from openpype.pipeline import discover_legacy_creator_plugins # Lower input creator name if is not case sensitive if not case_sensitive: creator_name = creator_name.lower() - for creator_plugin in avalon.api.discover(LegacyCreator): + for creator_plugin in discover_legacy_creator_plugins(): _creator_name = creator_plugin.__name__ # Lower creator plugin name if is not case sensitive @@ -1705,7 +1705,7 @@ def _get_task_context_data_for_anatomy( "task": { "name": task_name, "type": task_type, - "short_name": project_task_type_data["short_name"] + "short": project_task_type_data["short_name"] } } @@ -1965,6 +1965,7 @@ def get_last_workfile( data.pop("comment", None) if not data.get("ext"): data["ext"] = extensions[0] + data["ext"] = data["ext"].replace('.', '') filename = StringTemplate.format_strict_template(file_template, data) if full_path: diff --git a/openpype/lib/python_module_tools.py b/openpype/lib/python_module_tools.py index f62c848e4a..6fad3b547f 100644 --- a/openpype/lib/python_module_tools.py +++ b/openpype/lib/python_module_tools.py @@ -5,8 +5,9 @@ import importlib import inspect import logging +import six + log = logging.getLogger(__name__) -PY3 = sys.version_info[0] == 3 def import_filepath(filepath, module_name=None): @@ -28,7 +29,7 @@ def import_filepath(filepath, module_name=None): # Prepare module object where content of file will be parsed module = types.ModuleType(module_name) - if PY3: + if six.PY3: # Use loader so module has full specs module_loader = importlib.machinery.SourceFileLoader( module_name, filepath @@ -38,7 +39,7 @@ def import_filepath(filepath, module_name=None): # Execute module code and store content to module with open(filepath) as _stream: # Execute content and store it to module object - exec(_stream.read(), module.__dict__) + six.exec_(_stream.read(), module.__dict__) module.__file__ = filepath return module @@ -129,20 +130,12 @@ def classes_from_module(superclass, module): 
for name in dir(module): # It could be anything at this point obj = getattr(module, name) - if not inspect.isclass(obj): + if not inspect.isclass(obj) or obj is superclass: continue - # These are subclassed from nothing, not even `object` - if not len(obj.__bases__) > 0: - continue + if issubclass(obj, superclass): + classes.append(obj) - # Use string comparison rather than `issubclass` - # in order to support reloading of this module. - bases = recursive_bases_from_class(obj) - if not any(base.__name__ == superclass.__name__ for base in bases): - continue - - classes.append(obj) return classes @@ -228,7 +221,7 @@ def import_module_from_dirpath(dirpath, folder_name, dst_module_name=None): dst_module_name(str): Parent module name under which can be loaded module added. """ - if PY3: + if six.PY3: module = _import_module_from_dirpath_py3( dirpath, folder_name, dst_module_name ) diff --git a/openpype/lib/transcoding.py b/openpype/lib/transcoding.py index 6bab6a8160..8e79aba0ae 100644 --- a/openpype/lib/transcoding.py +++ b/openpype/lib/transcoding.py @@ -478,8 +478,14 @@ def convert_for_ffmpeg( oiio_cmd.extend(["--eraseattrib", attr_name]) # Add last argument - path to output - base_file_name = os.path.basename(first_input_path) - output_path = os.path.join(output_dir, base_file_name) + if is_sequence: + ext = os.path.splitext(first_input_path)[1] + base_filename = "tmp.%{:0>2}d{}".format( + len(str(input_frame_end)), ext + ) + else: + base_filename = os.path.basename(first_input_path) + output_path = os.path.join(output_dir, base_filename) oiio_cmd.extend([ "-o", output_path ]) diff --git a/openpype/modules/base.py b/openpype/modules/base.py index 175957ae39..5cdeb86087 100644 --- a/openpype/modules/base.py +++ b/openpype/modules/base.py @@ -28,26 +28,15 @@ from openpype.settings.lib import ( ) from openpype.lib import PypeLogger - -DEFAULT_OPENPYPE_MODULES = ( - "avalon_apps", - "clockify", - "log_viewer", - "deadline", - "muster", - "royalrender", - 
"python_console_interpreter", - "ftrack", - "slack", - "webserver", - "launcher_action", - "project_manager_action", - "settings_action", - "standalonepublish_action", - "traypublish_action", - "job_queue", - "timers_manager", - "sync_server", +# Files that will be always ignored on modules import +IGNORED_FILENAMES = ( + "__pycache__", +) +# Files ignored on modules import from "./openpype/modules" +IGNORED_DEFAULT_FILENAMES = ( + "__init__.py", + "base.py", + "interfaces.py", ) @@ -146,9 +135,16 @@ class _LoadCache: def get_default_modules_dir(): """Path to default OpenPype modules.""" + current_dir = os.path.abspath(os.path.dirname(__file__)) - return os.path.join(current_dir, "default_modules") + output = [] + for folder_name in ("default_modules", ): + path = os.path.join(current_dir, folder_name) + if os.path.exists(path) and os.path.isdir(path): + output.append(path) + + return output def get_dynamic_modules_dirs(): @@ -186,7 +182,7 @@ def get_dynamic_modules_dirs(): def get_module_dirs(): """List of paths where OpenPype modules can be found.""" _dirpaths = [] - _dirpaths.append(get_default_modules_dir()) + _dirpaths.extend(get_default_modules_dir()) _dirpaths.extend(get_dynamic_modules_dirs()) dirpaths = [] @@ -292,25 +288,45 @@ def _load_modules(): log = PypeLogger.get_logger("ModulesLoader") + current_dir = os.path.abspath(os.path.dirname(__file__)) + processed_paths = set() + processed_paths.add(current_dir) # Import default modules imported from 'openpype.modules' - for default_module_name in DEFAULT_OPENPYPE_MODULES: + for filename in os.listdir(current_dir): + # Ignore filenames + if ( + filename in IGNORED_FILENAMES + or filename in IGNORED_DEFAULT_FILENAMES + ): + continue + + fullpath = os.path.join(current_dir, filename) + basename, ext = os.path.splitext(filename) + + if not os.path.isdir(fullpath) and ext not in (".py", ): + continue + try: - import_str = "openpype.modules.{}".format(default_module_name) - new_import_str = 
"{}.{}".format(modules_key, default_module_name) + import_str = "openpype.modules.{}".format(basename) + new_import_str = "{}.{}".format(modules_key, basename) default_module = __import__(import_str, fromlist=("", )) sys.modules[new_import_str] = default_module - setattr(openpype_modules, default_module_name, default_module) + setattr(openpype_modules, basename, default_module) except Exception: msg = ( "Failed to import default module '{}'." - ).format(default_module_name) + ).format(basename) log.error(msg, exc_info=True) # Look for OpenPype modules in paths defined with `get_module_dirs` # - dynamically imported OpenPype modules and addons - dirpaths = get_module_dirs() - for dirpath in dirpaths: + for dirpath in get_module_dirs(): + # Skip already processed paths + if dirpath in processed_paths: + continue + processed_paths.add(dirpath) + if not os.path.exists(dirpath): log.warning(( "Could not find path when loading OpenPype modules \"{}\"" @@ -319,12 +335,15 @@ def _load_modules(): for filename in os.listdir(dirpath): # Ignore filenames - if filename in ("__pycache__", ): + if filename in IGNORED_FILENAMES: continue fullpath = os.path.join(dirpath, filename) basename, ext = os.path.splitext(filename) + if not os.path.isdir(fullpath) and ext not in (".py", ): + continue + # TODO add more logic how to define if folder is module or not # - check manifest and content of manifest try: diff --git a/openpype/modules/ftrack/event_handlers_user/action_delete_old_versions.py b/openpype/modules/ftrack/event_handlers_user/action_delete_old_versions.py index 1b694e25f1..5871646b20 100644 --- a/openpype/modules/ftrack/event_handlers_user/action_delete_old_versions.py +++ b/openpype/modules/ftrack/event_handlers_user/action_delete_old_versions.py @@ -492,7 +492,8 @@ class DeleteOldVersions(BaseAction): os.remove(file_path) self.log.debug("Removed file: {}".format(file_path)) - remainders.remove(file_path_base) + if file_path_base in remainders: + 
remainders.remove(file_path_base) continue seq_path_base = os.path.split(seq_path)[1] diff --git a/openpype/modules/ftrack/lib/avalon_sync.py b/openpype/modules/ftrack/lib/avalon_sync.py index 5301ec568e..c5b58ca94d 100644 --- a/openpype/modules/ftrack/lib/avalon_sync.py +++ b/openpype/modules/ftrack/lib/avalon_sync.py @@ -286,21 +286,6 @@ def from_dict_to_set(data, is_project): return result -def get_avalon_project_template(project_name): - """Get avalon template - Args: - project_name: (string) - Returns: - dictionary with templates - """ - templates = Anatomy(project_name).templates - return { - "workfile": templates["avalon"]["workfile"], - "work": templates["avalon"]["work"], - "publish": templates["avalon"]["publish"] - } - - def get_project_apps(in_app_list): """ Application definitions for app name. diff --git a/openpype/modules/log_viewer/tray/app.py b/openpype/modules/log_viewer/tray/app.py index 1e8d6483cd..def319e0e3 100644 --- a/openpype/modules/log_viewer/tray/app.py +++ b/openpype/modules/log_viewer/tray/app.py @@ -26,3 +26,12 @@ class LogsWindow(QtWidgets.QWidget): self.log_detail = log_detail self.setStyleSheet(style.load_stylesheet()) + + self._first_show = True + + def showEvent(self, event): + super(LogsWindow, self).showEvent(event) + + if self._first_show: + self._first_show = False + self.logs_widget.refresh() diff --git a/openpype/modules/log_viewer/tray/widgets.py b/openpype/modules/log_viewer/tray/widgets.py index ff77405de5..ed08e62109 100644 --- a/openpype/modules/log_viewer/tray/widgets.py +++ b/openpype/modules/log_viewer/tray/widgets.py @@ -155,6 +155,11 @@ class LogsWidget(QtWidgets.QWidget): QtCore.Qt.DescendingOrder ) + refresh_triggered_timer = QtCore.QTimer() + refresh_triggered_timer.setSingleShot(True) + refresh_triggered_timer.setInterval(200) + + refresh_triggered_timer.timeout.connect(self._on_refresh_timeout) view.selectionModel().selectionChanged.connect(self._on_index_change) 
refresh_btn.clicked.connect(self._on_refresh_clicked) @@ -169,10 +174,12 @@ class LogsWidget(QtWidgets.QWidget): self.detail_widget = detail_widget self.refresh_btn = refresh_btn - # prepare - self.refresh() + self._refresh_triggered_timer = refresh_triggered_timer def refresh(self): + self._refresh_triggered_timer.start() + + def _on_refresh_timeout(self): self.model.refresh() self.detail_widget.refresh() diff --git a/openpype/modules/slack/plugins/publish/collect_slack_family.py b/openpype/modules/slack/plugins/publish/collect_slack_family.py index 6c965b04cd..7475bdc89e 100644 --- a/openpype/modules/slack/plugins/publish/collect_slack_family.py +++ b/openpype/modules/slack/plugins/publish/collect_slack_family.py @@ -35,20 +35,25 @@ class CollectSlackFamilies(pyblish.api.InstancePlugin): return # make slack publishable - if profile: - self.log.info("Found profile: {}".format(profile)) - if instance.data.get('families'): - instance.data['families'].append('slack') - else: - instance.data['families'] = ['slack'] + if not profile: + return - instance.data["slack_channel_message_profiles"] = \ - profile["channel_messages"] + self.log.info("Found profile: {}".format(profile)) + if instance.data.get('families'): + instance.data['families'].append('slack') + else: + instance.data['families'] = ['slack'] - slack_token = (instance.context.data["project_settings"] - ["slack"] - ["token"]) - instance.data["slack_token"] = slack_token + selected_profiles = profile["channel_messages"] + for prof in selected_profiles: + prof["review_upload_limit"] = profile.get("review_upload_limit", + 50) + instance.data["slack_channel_message_profiles"] = selected_profiles + + slack_token = (instance.context.data["project_settings"] + ["slack"] + ["token"]) + instance.data["slack_token"] = slack_token def main_family_from_instance(self, instance): # TODO yank from integrate """Returns main family of entered instance.""" diff --git 
a/openpype/modules/slack/plugins/publish/integrate_slack_api.py b/openpype/modules/slack/plugins/publish/integrate_slack_api.py index 018a7594bb..10bde7d4c0 100644 --- a/openpype/modules/slack/plugins/publish/integrate_slack_api.py +++ b/openpype/modules/slack/plugins/publish/integrate_slack_api.py @@ -35,7 +35,7 @@ class IntegrateSlackAPI(pyblish.api.InstancePlugin): message = self._get_filled_message(message_profile["message"], instance, review_path) - self.log.info("message:: {}".format(message)) + self.log.debug("message:: {}".format(message)) if not message: return @@ -43,7 +43,8 @@ class IntegrateSlackAPI(pyblish.api.InstancePlugin): publish_files.add(thumbnail_path) if message_profile["upload_review"] and review_path: - publish_files.add(review_path) + message, publish_files = self._handle_review_upload( + message, message_profile, publish_files, review_path) project = instance.context.data["anatomyData"]["project"]["code"] for channel in message_profile["channels"]: @@ -75,6 +76,19 @@ class IntegrateSlackAPI(pyblish.api.InstancePlugin): dbcon = mongo_client[database_name]["notification_messages"] dbcon.insert_one(msg) + def _handle_review_upload(self, message, message_profile, publish_files, + review_path): + """Check if uploaded file is not too large""" + review_file_size_MB = os.path.getsize(review_path) / 1024 / 1024 + file_limit = message_profile.get("review_upload_limit", 50) + if review_file_size_MB > file_limit: + message += "\nReview upload omitted because of file size." + if review_path not in message: + message += "\nFile located at: {}".format(review_path) + else: + publish_files.add(review_path) + return message, publish_files + def _get_filled_message(self, message_templ, instance, review_path=None): """Use message_templ and data from instance to get message content. 
@@ -210,6 +224,9 @@ class IntegrateSlackAPI(pyblish.api.InstancePlugin): # You will get a SlackApiError if "ok" is False error_str = self._enrich_error(str(e.response["error"]), channel) self.log.warning("Error happened {}".format(error_str)) + except Exception as e: + error_str = self._enrich_error(str(e), channel) + self.log.warning("Not SlackAPI error", exc_info=True) return None, [] diff --git a/openpype/pipeline/__init__.py b/openpype/pipeline/__init__.py index d44fbad33e..8460d20ef1 100644 --- a/openpype/pipeline/__init__.py +++ b/openpype/pipeline/__init__.py @@ -3,8 +3,6 @@ from .constants import ( HOST_WORKFILE_EXTENSIONS, ) -from .lib import attribute_definitions - from .create import ( BaseCreator, Creator, @@ -15,6 +13,13 @@ from .create import ( LegacyCreator, legacy_create, + + discover_creator_plugins, + discover_legacy_creator_plugins, + register_creator_plugin, + deregister_creator_plugin, + register_creator_plugin_path, + deregister_creator_plugin_path, ) from .load import ( @@ -43,7 +48,8 @@ from .publish import ( PublishValidationError, PublishXmlValidationError, KnownPublishError, - OpenPypePyblishPluginMixin + OpenPypePyblishPluginMixin, + OptionalPyblishPluginMixin, ) from .actions import ( @@ -81,6 +87,13 @@ __all__ = ( "LegacyCreator", "legacy_create", + "discover_creator_plugins", + "discover_legacy_creator_plugins", + "register_creator_plugin", + "deregister_creator_plugin", + "register_creator_plugin_path", + "deregister_creator_plugin_path", + # --- Load --- "HeroVersionType", "IncompatibleLoaderError", @@ -107,6 +120,7 @@ __all__ = ( "PublishXmlValidationError", "KnownPublishError", "OpenPypePyblishPluginMixin", + "OptionalPyblishPluginMixin", # --- Actions --- "LauncherAction", diff --git a/openpype/pipeline/actions.py b/openpype/pipeline/actions.py index a045c92aa7..b488fe3e1f 100644 --- a/openpype/pipeline/actions.py +++ b/openpype/pipeline/actions.py @@ -1,4 +1,11 @@ import logging +from openpype.pipeline.plugin_discover import ( + 
discover,
+    register_plugin,
+    register_plugin_path,
+    deregister_plugin,
+    deregister_plugin_path
+)


 class LauncherAction(object):
@@ -90,57 +97,39 @@ class InventoryAction(object):

 # Launcher action
 def discover_launcher_actions():
-    import avalon.api
-
-    return avalon.api.discover(LauncherAction)
+    return discover(LauncherAction)


 def register_launcher_action(plugin):
-    import avalon.api
-
-    return avalon.api.register_plugin(LauncherAction, plugin)
+    return register_plugin(LauncherAction, plugin)


 def register_launcher_action_path(path):
-    import avalon.api
-
-    return avalon.api.register_plugin_path(LauncherAction, path)
+    return register_plugin_path(LauncherAction, path)


 # Inventory action
 def discover_inventory_actions():
-    import avalon.api
-
-    actions = avalon.api.discover(InventoryAction)
+    actions = discover(InventoryAction)
     filtered_actions = []
     for action in actions:
         if action is not InventoryAction:
-            print("DISCOVERED", action)
             filtered_actions.append(action)
-        else:
-            print("GOT SOURCE")
+
     return filtered_actions


 def register_inventory_action(plugin):
-    import avalon.api
-
-    return avalon.api.register_plugin(InventoryAction, plugin)
+    return register_plugin(InventoryAction, plugin)


 def deregister_inventory_action(plugin):
-    import avalon.api
-
-    avalon.api.deregister_plugin(InventoryAction, plugin)
+    deregister_plugin(InventoryAction, plugin)


 def register_inventory_action_path(path):
-    import avalon.api
-
-    return avalon.api.register_plugin_path(InventoryAction, path)
+    return register_plugin_path(InventoryAction, path)


 def deregister_inventory_action_path(path):
-    import avalon.api
-
-    return avalon.api.deregister_plugin_path(InventoryAction, path)
+    return deregister_plugin_path(InventoryAction, path)
diff --git a/openpype/pipeline/create/__init__.py b/openpype/pipeline/create/__init__.py
index 9571f56b8f..1beeb4267b 100644
--- a/openpype/pipeline/create/__init__.py
+++ b/openpype/pipeline/create/__init__.py
@@ -6,7 +6,14 @@
 from .creator_plugins import (
     BaseCreator,
     Creator,
-    AutoCreator
+    AutoCreator,
+
+    discover_creator_plugins,
+    discover_legacy_creator_plugins,
+    register_creator_plugin,
+    deregister_creator_plugin,
+    register_creator_plugin_path,
+    deregister_creator_plugin_path,
 )

 from .context import (
@@ -29,6 +36,13 @@ __all__ = (
     "Creator",
     "AutoCreator",

+    "discover_creator_plugins",
+    "discover_legacy_creator_plugins",
+    "register_creator_plugin",
+    "deregister_creator_plugin",
+    "register_creator_plugin_path",
+    "deregister_creator_plugin_path",
+
     "CreatedInstance",
     "CreateContext",
diff --git a/openpype/pipeline/create/context.py b/openpype/pipeline/create/context.py
index c2757a4502..3efdb0e5c3 100644
--- a/openpype/pipeline/create/context.py
+++ b/openpype/pipeline/create/context.py
@@ -6,11 +6,11 @@ import inspect
 from uuid import uuid4
 from contextlib import contextmanager

-from ..lib import UnknownDef
 from .creator_plugins import (
     BaseCreator,
     Creator,
-    AutoCreator
+    AutoCreator,
+    discover_creator_plugins,
 )

 from openpype.api import (
@@ -18,6 +18,8 @@ from openpype.api import (
     get_project_settings
 )

+UpdateData = collections.namedtuple("UpdateData", ["instance", "changes"])
+

 class ImmutableKeyError(TypeError):
     """Accessed key is immutable so does not allow changes or removals."""
@@ -87,6 +89,8 @@ class AttributeValues:
         origin_data(dict): Values loaded from host before conversion.
     """

     def __init__(self, attr_defs, values, origin_data=None):
+        from openpype.lib.attribute_definitions import UnknownDef
+
         if origin_data is None:
             origin_data = copy.deepcopy(values)
         self._origin_data = origin_data
@@ -842,7 +846,7 @@ class CreateContext:
         creators = {}
         autocreators = {}
         manual_creators = {}
-        for creator_class in avalon.api.discover(BaseCreator):
+        for creator_class in discover_creator_plugins():
             if inspect.isabstract(creator_class):
                 self.log.info(
                     "Skipping abstract Creator {}".format(str(creator_class))
@@ -1080,7 +1084,7 @@ class CreateContext:
         for instance in cretor_instances:
             instance_changes = instance.changes()
             if instance_changes:
-                update_list.append((instance, instance_changes))
+                update_list.append(UpdateData(instance, instance_changes))

             creator = self.creators[identifier]
         if update_list:
diff --git a/openpype/pipeline/create/creator_plugins.py b/openpype/pipeline/create/creator_plugins.py
index 1ac2c420a2..36bccd427e 100644
--- a/openpype/pipeline/create/creator_plugins.py
+++ b/openpype/pipeline/create/creator_plugins.py
@@ -8,7 +8,19 @@
 )

 import six
-from openpype.lib import get_subset_name_with_asset_doc
+from openpype.lib import (
+    get_subset_name_with_asset_doc,
+    set_plugin_attributes_from_settings,
+)
+from openpype.pipeline.plugin_discover import (
+    discover,
+    register_plugin,
+    register_plugin_path,
+    deregister_plugin,
+    deregister_plugin_path
+)
+
+from .legacy_create import LegacyCreator


 class CreatorError(Exception):
@@ -46,6 +58,11 @@ class BaseCreator:
     # - may not be used if `get_icon` is reimplemented
     icon = None

+    # Instance attribute definitions that can be changed per instance
+    # - returns list of attribute definitions from
+    #   `openpype.pipeline.attribute_definitions`
+    instance_attr_defs = []
+
     def __init__(
         self, create_context, system_settings, project_settings,
         headless=False
     ):
@@ -56,10 +73,13 @@ class BaseCreator:
         # - we may use UI inside processing; this attribute should be checked
         self.headless = headless

-    @abstractproperty
+    @property
     def identifier(self):
-        """Identifier of creator (must be unique)."""
-        pass
+        """Identifier of creator (must be unique).
+
+        Default implementation returns plugin's family.
+        """
+        return self.family

     @abstractproperty
     def family(self):
@@ -90,11 +110,39 @@ class BaseCreator:
         pass

     @abstractmethod
-    def collect_instances(self, attr_plugins=None):
+    def collect_instances(self):
+        """Collect existing instances related to this creator plugin.
+
+        The implementation differs by host abilities. The creator has to
+        collect metadata about the instance and create a 'CreatedInstance'
+        object which should be added to 'CreateContext'.
+
+        Example:
+        ```python
+        def collect_instances(self):
+            # Getting existing instances is different per host implementation
+            for instance_data in pipeline.list_instances():
+                # Process only instances that were created by this creator
+                creator_id = instance_data.get("creator_identifier")
+                if creator_id == self.identifier:
+                    # Create instance object from existing data
+                    instance = CreatedInstance.from_existing(
+                        instance_data, self
+                    )
+                    # Add instance to create context
+                    self._add_instance_to_context(instance)
+        ```
+        """
         pass

     @abstractmethod
     def update_instances(self, update_list):
+        """Store changes of existing instances so they can be recollected.
+
+        Args:
+            update_list(list): List of tuples. Each item contains the
+                changed instance and its changes.
+        """
         pass

     @abstractmethod
@@ -178,7 +226,7 @@ class BaseCreator:
             list: Attribute definitions that can be tweaked for
                 created instance.
         """
-        return []
+        return self.instance_attr_defs


 class Creator(BaseCreator):
@@ -191,6 +239,9 @@ class Creator(BaseCreator):
     # - default_variants may not be used if `get_default_variants`
     #   is overridden
     default_variants = []

+    # Default variant used in 'get_default_variant'
+    default_variant = None
+
     # Short description of family
     # - may not be used if `get_description` is overridden
     description = None
@@ -204,6 +255,10 @@ class Creator(BaseCreator):
     #   e.g. for bulk creators
     create_allow_context_change = True

+    # Precreate attribute definitions shown before creation
+    # - similar to instance attribute definitions
+    pre_create_attr_defs = []
+
     @abstractmethod
     def create(self, subset_name, instance_data, pre_create_data):
         """Create new instance and store it.
@@ -263,7 +318,7 @@ class Creator(BaseCreator):
         `get_default_variants` should be used.
         """

-        return None
+        return self.default_variant

     def get_pre_create_attr_defs(self):
         """Plugin attribute definitions needed for creation.
@@ -276,7 +331,7 @@ class Creator(BaseCreator):
             list: Attribute definitions that can be tweaked for
                 created instance.
         """
-        return []
+        return self.pre_create_attr_defs


 class AutoCreator(BaseCreator):
@@ -284,6 +339,43 @@ class AutoCreator(BaseCreator):

     Can be used e.g. for `workfile`.
    """
+
    def remove_instances(self, instances):
        """Skip removal."""
        pass
+
+
+def discover_creator_plugins():
+    return discover(BaseCreator)
+
+
+def discover_legacy_creator_plugins():
+    plugins = discover(LegacyCreator)
+    set_plugin_attributes_from_settings(plugins, LegacyCreator)
+    return plugins
+
+
+def register_creator_plugin(plugin):
+    if issubclass(plugin, BaseCreator):
+        register_plugin(BaseCreator, plugin)
+
+    elif issubclass(plugin, LegacyCreator):
+        register_plugin(LegacyCreator, plugin)
+
+
+def deregister_creator_plugin(plugin):
+    if issubclass(plugin, BaseCreator):
+        deregister_plugin(BaseCreator, plugin)
+
+    elif issubclass(plugin, LegacyCreator):
+        deregister_plugin(LegacyCreator, plugin)
+
+
+def register_creator_plugin_path(path):
+    register_plugin_path(BaseCreator, path)
+    register_plugin_path(LegacyCreator, path)
+
+
+def deregister_creator_plugin_path(path):
+    deregister_plugin_path(BaseCreator, path)
+    deregister_plugin_path(LegacyCreator, path)
diff --git a/openpype/pipeline/lib/__init__.py b/openpype/pipeline/lib/__init__.py
deleted file mode 100644
index f762c4205d..0000000000
--- a/openpype/pipeline/lib/__init__.py
+++ /dev/null
@@ -1,30 +0,0 @@
-from .attribute_definitions import (
-    AbtractAttrDef,
-
-    UIDef,
-    UISeparatorDef,
-    UILabelDef,
-
-    UnknownDef,
-    NumberDef,
-    TextDef,
-    EnumDef,
-    BoolDef,
-    FileDef,
-)
-
-
-__all__ = (
-    "AbtractAttrDef",
-
-    "UIDef",
-    "UISeparatorDef",
-    "UILabelDef",
-
-    "UnknownDef",
-    "NumberDef",
-    "TextDef",
-    "EnumDef",
-    "BoolDef",
-    "FileDef",
-)
diff --git a/openpype/pipeline/load/plugins.py b/openpype/pipeline/load/plugins.py
index 9b2b6bb084..d60aed0083 100644
--- a/openpype/pipeline/load/plugins.py
+++ b/openpype/pipeline/load/plugins.py
@@ -1,5 +1,13 @@
 import logging

+from openpype.lib import set_plugin_attributes_from_settings
+from openpype.pipeline.plugin_discover import (
+    discover,
+    register_plugin,
+    register_plugin_path,
+    deregister_plugin,
+    deregister_plugin_path
+)
 from .utils import get_representation_path_from_context
@@ -102,30 +110,22 @@ class SubsetLoaderPlugin(LoaderPlugin):


 def discover_loader_plugins():
-    import avalon.api
-
-    return avalon.api.discover(LoaderPlugin)
+    plugins = discover(LoaderPlugin)
+    set_plugin_attributes_from_settings(plugins, LoaderPlugin)
+    return plugins


 def register_loader_plugin(plugin):
-    import avalon.api
-
-    return avalon.api.register_plugin(LoaderPlugin, plugin)
-
-
-def deregister_loader_plugin_path(path):
-    import avalon.api
-
-    avalon.api.deregister_plugin_path(LoaderPlugin, path)
-
-
-def register_loader_plugin_path(path):
-    import avalon.api
-
-    return avalon.api.register_plugin_path(LoaderPlugin, path)
+    return register_plugin(LoaderPlugin, plugin)


 def deregister_loader_plugin(plugin):
-    import avalon.api
+    deregister_plugin(LoaderPlugin, plugin)

-    avalon.api.deregister_plugin(LoaderPlugin, plugin)
+
+def deregister_loader_plugin_path(path):
+    deregister_plugin_path(LoaderPlugin, path)
+
+
+def register_loader_plugin_path(path):
+    return register_plugin_path(LoaderPlugin, path)
diff --git a/openpype/pipeline/plugin_discover.py b/openpype/pipeline/plugin_discover.py
new file mode 100644
index 0000000000..fb860fe5f2
--- /dev/null
+++ b/openpype/pipeline/plugin_discover.py
@@ -0,0 +1,298 @@
+import os
+import inspect
+import traceback
+
+from openpype.api import Logger
+from openpype.lib.python_module_tools import (
+    modules_from_path,
+    classes_from_module,
+)
+
+log = Logger.get_logger(__name__)
+
+
+class DiscoverResult:
+    """Result of plug-in discovery for a single superclass type.
+
+    Stores discovered, duplicated, ignored and abstract plugins and the
+    file paths which crashed on execution of file.
+    """
+
+    def __init__(self, superclass):
+        self.superclass = superclass
+        self.plugins = []
+        self.crashed_file_paths = {}
+        self.duplicated_plugins = []
+        self.abstract_plugins = []
+        self.ignored_plugins = set()
+        # Store loaded modules to keep them in memory
+        self._modules = set()
+
+    def __iter__(self):
+        for plugin in self.plugins:
+            yield plugin
+
+    def __getitem__(self, item):
+        return self.plugins[item]
+
+    def __setitem__(self, item, value):
+        self.plugins[item] = value
+
+    def add_module(self, module):
+        """Add dynamically loaded python module to keep it in memory."""
+        self._modules.add(module)
+
+    def get_report(self, only_errors=True, exc_info=True, full_report=False):
+        lines = []
+        if not only_errors:
+            # Successfully discovered plugins
+            if self.plugins or full_report:
+                lines.append(
+                    "*** Discovered {} plugins".format(len(self.plugins))
+                )
+                for cls in self.plugins:
+                    lines.append("- {}".format(cls.__name__))
+
+            # Plugins that were defined to be ignored
+            if self.ignored_plugins or full_report:
+                lines.append("*** Ignored plugins {}".format(len(
+                    self.ignored_plugins
+                )))
+                for cls in self.ignored_plugins:
+                    lines.append("- {}".format(cls.__name__))
+
+            # Abstract classes
+            if self.abstract_plugins or full_report:
+                lines.append("*** Discovered {} abstract plugins".format(len(
+                    self.abstract_plugins
+                )))
+                for cls in self.abstract_plugins:
+                    lines.append("- {}".format(cls.__name__))
+
+        # Duplicated plugins
+        if self.duplicated_plugins or full_report:
+            lines.append("*** There were {} duplicated plugins".format(len(
+                self.duplicated_plugins
+            )))
+            for cls in self.duplicated_plugins:
+                lines.append("- {}".format(cls.__name__))
+
+        if self.crashed_file_paths or full_report:
+            lines.append("*** Failed to load {} files".format(len(
+                self.crashed_file_paths
+            )))
+            for path, exc_info_args in self.crashed_file_paths.items():
+                lines.append("- {}".format(path))
+                if exc_info:
+                    lines.append(10 * "*")
+                    lines.extend(traceback.format_exception(*exc_info_args))
+                    lines.append(10 * "*")
+
+        return "\n".join(lines)
+
+    def log_report(self, only_errors=True, exc_info=True):
+        report = self.get_report(only_errors, exc_info)
+        if report:
+            log.info(report)
+
+
+class PluginDiscoverContext(object):
+    """Store and discover registered types and registered paths to types.
+
+    Keeps in memory all registered types and their paths. Paths are
+    dynamically loaded on discover so different discover calls won't return
+    the same class objects even if they were loaded from the same file.
+    """
+
+    def __init__(self):
+        self._registered_plugins = {}
+        self._registered_plugin_paths = {}
+        self._last_discovered_plugins = {}
+        # Store the last result to memory
+        self._last_discovered_results = {}
+
+    def get_last_discovered_plugins(self, superclass):
+        """Access last discovered plugins by a superclass.
+
+        Returns:
+            None: When superclass was not discovered yet.
+            list: Lastly discovered plugins of the superclass.
+        """
+
+        return self._last_discovered_plugins.get(superclass)
+
+    def discover(
+        self,
+        superclass,
+        allow_duplicates=True,
+        ignore_classes=None,
+        return_report=False
+    ):
+        """Find and return subclasses of `superclass`.
+
+        Args:
+            superclass (type): Class which determines discovered subclasses.
+            allow_duplicates (bool): Allow duplicated class names; when
+                False, duplicates are filtered out.
+            ignore_classes (list): List of classes that will be ignored
+                and not added to result.
+
+        Returns:
+            DiscoverResult: Object holding successfully discovered plugins,
+                ignored plugins, plugins with missing abstract implementation
+                and duplicated plugins.
+ """ + + if not ignore_classes: + ignore_classes = [] + + result = DiscoverResult(superclass) + plugin_names = set() + registered_classes = self._registered_plugins.get(superclass) or [] + registered_paths = self._registered_plugin_paths.get(superclass) or [] + for cls in registered_classes: + if cls is superclass or cls in ignore_classes: + result.ignored_plugins.add(cls) + continue + + if inspect.isabstract(cls): + result.abstract_plugins.append(cls) + continue + + class_name = cls.__name__ + if class_name in plugin_names: + result.duplicated_plugins.append(cls) + continue + plugin_names.add(class_name) + result.plugins.append(cls) + + # Include plug-ins from registered paths + for path in registered_paths: + modules, crashed = modules_from_path(path) + for item in crashed: + filepath, exc_info = item + result.crashed_file_paths[filepath] = exc_info + + for item in modules: + filepath, module = item + result.add_module(module) + for cls in classes_from_module(superclass, module): + if cls is superclass or cls in ignore_classes: + result.ignored_plugins.add(cls) + continue + + if inspect.isabstract(cls): + result.abstract_plugins.append(cls) + continue + + if not allow_duplicates: + class_name = cls.__name__ + if class_name in plugin_names: + result.duplicated_plugins.append(cls) + continue + plugin_names.add(class_name) + + result.plugins.append(cls) + + # Store in memory last result to keep in memory loaded modules + self._last_discovered_results[superclass] = result + self._last_discovered_plugins[superclass] = list( + result.plugins + ) + result.log_report() + if return_report: + return result + return result.plugins + + def register_plugin(self, superclass, cls): + """Register a directory containing plug-ins of type `superclass` + + Arguments: + superclass (type): Superclass of plug-in + cls (object): Subclass of `superclass` + """ + + if superclass not in self._registered_plugins: + self._registered_plugins[superclass] = list() + + if cls not in 
self._registered_plugins[superclass]: + self._registered_plugins[superclass].append(cls) + + def register_plugin_path(self, superclass, path): + """Register a directory of one or more plug-ins + + Arguments: + superclass (type): Superclass of plug-ins to look for during + discovery + path (str): Absolute path to directory in which to discover + plug-ins + """ + + if superclass not in self._registered_plugin_paths: + self._registered_plugin_paths[superclass] = list() + + path = os.path.normpath(path) + if path not in self._registered_plugin_paths[superclass]: + self._registered_plugin_paths[superclass].append(path) + + def registered_plugin_paths(self): + """Return all currently registered plug-in paths""" + # Return shallow copy so we the original data can't be changed + return { + superclass: paths[:] + for superclass, paths in self._registered_plugin_paths.items() + } + + def deregister_plugin(self, superclass, plugin): + """Opposite of `register_plugin()`""" + if superclass in self._registered_plugins: + self._registered_plugins[superclass].remove(plugin) + + def deregister_plugin_path(self, superclass, path): + """Opposite of `register_plugin_path()`""" + self._registered_plugin_paths[superclass].remove(path) + + +class _GlobalDiscover: + """Access to global object of PluginDiscoverContext. + + Using singleton object to register/deregister plugins and plugin paths + and then discover them by superclass. 
+    """
+
+    _context = None
+
+    @classmethod
+    def get_context(cls):
+        if cls._context is None:
+            cls._context = PluginDiscoverContext()
+        return cls._context
+
+
+def discover(superclass, allow_duplicates=True):
+    context = _GlobalDiscover.get_context()
+    return context.discover(superclass, allow_duplicates)
+
+
+def get_last_discovered_plugins(superclass):
+    context = _GlobalDiscover.get_context()
+    return context.get_last_discovered_plugins(superclass)
+
+
+def register_plugin(superclass, cls):
+    context = _GlobalDiscover.get_context()
+    context.register_plugin(superclass, cls)
+
+
+def register_plugin_path(superclass, path):
+    context = _GlobalDiscover.get_context()
+    context.register_plugin_path(superclass, path)
+
+
+def deregister_plugin(superclass, cls):
+    context = _GlobalDiscover.get_context()
+    context.deregister_plugin(superclass, cls)
+
+
+def deregister_plugin_path(superclass, path):
+    context = _GlobalDiscover.get_context()
+    context.deregister_plugin_path(superclass, path)
diff --git a/openpype/pipeline/publish/__init__.py b/openpype/pipeline/publish/__init__.py
index c2729a46ce..af5d7c4a91 100644
--- a/openpype/pipeline/publish/__init__.py
+++ b/openpype/pipeline/publish/__init__.py
@@ -3,6 +3,7 @@
 from .publish_plugins import (
     PublishXmlValidationError,
     KnownPublishError,
     OpenPypePyblishPluginMixin,
+    OptionalPyblishPluginMixin,
 )

 from .lib import (
@@ -18,6 +19,7 @@ __all__ = (
     "PublishXmlValidationError",
     "KnownPublishError",
     "OpenPypePyblishPluginMixin",
+    "OptionalPyblishPluginMixin",

     "DiscoverResult",
     "publish_plugins_discover",
diff --git a/openpype/pipeline/publish/publish_plugins.py b/openpype/pipeline/publish/publish_plugins.py
index bce64ec709..2402a005c2 100644
--- a/openpype/pipeline/publish/publish_plugins.py
+++ b/openpype/pipeline/publish/publish_plugins.py
@@ -1,3 +1,4 @@
+from openpype.lib import BoolDef

 from .lib import load_help_content_from_plugin

@@ -108,3 +109,64 @@ class OpenPypePyblishPluginMixin:
                 plugin_values[key]
             )
         return attribute_values
+
+    def get_attr_values_from_data(self, data):
+        """Get attribute values for attribute definitions from data.
+
+        Args:
+            data(dict): Data from instance or context.
+        """
+        return (
+            data
+            .get("publish_attributes", {})
+            .get(self.__class__.__name__, {})
+        )
+
+
+class OptionalPyblishPluginMixin(OpenPypePyblishPluginMixin):
+    """Prepare mixin for optional plugins.
+
+    Defines an 'active' attribute definition prepared for publishing and
+    provides a method which will check if the plugin is active or not.
+
+    ```
+    class ValidateScene(
+        pyblish.api.InstancePlugin, OptionalPyblishPluginMixin
+    ):
+        def process(self, instance):
+            # Skip the instance if it is not active by data on the instance
+            if not self.is_active(instance.data):
+                return
+    ```
+    """
+
+    @classmethod
+    def get_attribute_defs(cls):
+        """Attribute definitions based on plugin's optional attribute."""
+
+        # Empty list if plugin is not optional
+        if not getattr(cls, "optional", None):
+            return []
+
+        # Get active value from class as default value
+        active = getattr(cls, "active", True)
+        # Return boolean stored under 'active' key with label of the class
+        label = cls.label or cls.__name__
+        return [
+            BoolDef("active", default=active, label=label)
+        ]
+
+    def is_active(self, data):
+        """Check if plugin is active for instance/context based on its data.
+
+        Args:
+            data(dict): Data from instance or context.
+        """
+        # Skip if not optional and return True
+        if not getattr(self, "optional", None):
+            return True
+        attr_values = self.get_attr_values_from_data(data)
+        active = attr_values.get("active")
+        if active is None:
+            active = getattr(self, "active", True)
+        return active
diff --git a/openpype/pipeline/thumbnail.py b/openpype/pipeline/thumbnail.py
index 12bab83be6..c09dab70eb 100644
--- a/openpype/pipeline/thumbnail.py
+++ b/openpype/pipeline/thumbnail.py
@@ -2,6 +2,11 @@
 import os
 import copy
 import logging

+from .plugin_discover import (
+    discover,
+    register_plugin,
+    register_plugin_path,
+)

 log = logging.getLogger(__name__)
@@ -126,21 +131,15 @@


 # Thumbnail resolvers
 def discover_thumbnail_resolvers():
-    import avalon.api
-
-    return avalon.api.discover(ThumbnailResolver)
+    return discover(ThumbnailResolver)


 def register_thumbnail_resolver(plugin):
-    import avalon.api
-
-    return avalon.api.register_plugin(ThumbnailResolver, plugin)
+    register_plugin(ThumbnailResolver, plugin)


 def register_thumbnail_resolver_path(path):
-    import avalon.api
-
-    return avalon.api.register_plugin_path(ThumbnailResolver, path)
+    register_plugin_path(ThumbnailResolver, path)


 register_thumbnail_resolver(TemplateResolver)
diff --git a/openpype/plugins/load/delete_old_versions.py b/openpype/plugins/load/delete_old_versions.py
index 692acdec02..2789f4ea23 100644
--- a/openpype/plugins/load/delete_old_versions.py
+++ b/openpype/plugins/load/delete_old_versions.py
@@ -126,7 +126,8 @@ class DeleteOldVersions(load.SubsetLoaderPlugin):
                 os.remove(file_path)
                 self.log.debug("Removed file: {}".format(file_path))

-            remainders.remove(file_path_base)
+            if file_path_base in remainders:
+                remainders.remove(file_path_base)
             continue

         seq_path_base = os.path.split(seq_path)[1]
@@ -333,6 +334,8 @@ class DeleteOldVersions(load.SubsetLoaderPlugin):
     def main(self, data, remove_publish_folder):
         # Size of files.
         size = 0
+        if not data:
+            return size

         if remove_publish_folder:
             size = self.delete_whole_dir_paths(data["dir_paths"].values())
@@ -418,6 +421,8 @@ class DeleteOldVersions(load.SubsetLoaderPlugin):
             )

             data = self.get_data(context, versions_to_keep)
+            if not data:
+                continue

             size += self.main(data, remove_publish_folder)
             print("Progressing {}/{}".format(count + 1, len(contexts)))
diff --git a/openpype/plugins/publish/collect_cleanup_keys.py b/openpype/plugins/publish/collect_cleanup_keys.py
new file mode 100644
index 0000000000..635b038387
--- /dev/null
+++ b/openpype/plugins/publish/collect_cleanup_keys.py
@@ -0,0 +1,21 @@
+"""
+Requires:
+    None
+Provides:
+    context
+        - cleanupFullPaths (list)
+        - cleanupEmptyDirs (list)
+"""
+
+import pyblish.api
+
+
+class CollectCleanupKeys(pyblish.api.ContextPlugin):
+    """Prepare keys for 'ExplicitCleanUp' plugin."""
+
+    label = "Collect Cleanup Keys"
+    order = pyblish.api.CollectorOrder
+
+    def process(self, context):
+        context.data["cleanupFullPaths"] = []
+        context.data["cleanupEmptyDirs"] = []
diff --git a/openpype/plugins/publish/integrate_hero_version.py b/openpype/plugins/publish/integrate_hero_version.py
index 466606d08b..d6df6535d8 100644
--- a/openpype/plugins/publish/integrate_hero_version.py
+++ b/openpype/plugins/publish/integrate_hero_version.py
@@ -7,8 +7,12 @@
 import shutil
 from bson.objectid import ObjectId
 from pymongo import InsertOne, ReplaceOne
 import pyblish.api
+
 from avalon import api, io, schema
-from openpype.lib import create_hard_link
+from openpype.lib import (
+    create_hard_link,
+    filter_profiles
+)


 class IntegrateHeroVersion(pyblish.api.InstancePlugin):
@@ -17,7 +21,9 @@ class IntegrateHeroVersion(pyblish.api.InstancePlugin):
     order = pyblish.api.IntegratorOrder + 0.1

     optional = True
+    active = True

+    # Families are modified using settings
     families = [
         "model",
         "rig",
@@ -33,11 +39,13 @@ class IntegrateHeroVersion(pyblish.api.InstancePlugin):
         "project", "asset", "task", "subset", "representation",
         "family", "hierarchy", "task", "username"
     ]
-    # TODO add family filtering
     # QUESTION/TODO this process should happen on server if crashed due to
     # permissions error on files (files were used or user didn't have perms)
     # *but all other plugins must be successfully completed
+    template_name_profiles = []
+    _default_template_name = "hero"
+
     def process(self, instance):
         self.log.debug(
             "--- Integration of Hero version for subset `{}` begins.".format(
@@ -51,27 +59,35 @@ class IntegrateHeroVersion(pyblish.api.InstancePlugin):
             )
             return

-        project_name = api.Session["AVALON_PROJECT"]
+        template_key = self._get_template_key(instance)

-        # TODO raise error if Hero not set?
         anatomy = instance.context.data["anatomy"]
-        if "hero" not in anatomy.templates:
-            self.log.warning("!!! Anatomy does not have set `hero` key!")
-            return
-
-        if "path" not in anatomy.templates["hero"]:
+        project_name = api.Session["AVALON_PROJECT"]
+        if template_key not in anatomy.templates:
             self.log.warning((
-                "!!! There is not set `path` template in `hero` anatomy"
-                " for project \"{}\"."
-            ).format(project_name))
+                "!!! Anatomy of project \"{}\" does not have set"
+                " \"{}\" template key!"
+            ).format(project_name, template_key))
             return

-        hero_template = anatomy.templates["hero"]["path"]
+        if "path" not in anatomy.templates[template_key]:
+            self.log.warning((
+                "!!! There is not set \"path\" template in \"{}\" anatomy"
+                " for project \"{}\"."
+            ).format(template_key, project_name))
+            return
+
+        hero_template = anatomy.templates[template_key]["path"]
         self.log.debug("`hero` template check was successful. `{}`".format(
             hero_template
         ))

-        hero_publish_dir = self.get_publish_dir(instance)
+        self.integrate_instance(instance, template_key, hero_template)
+
+    def integrate_instance(self, instance, template_key, hero_template):
+        anatomy = instance.context.data["anatomy"]
+        published_repres = instance.data["published_representations"]
+        hero_publish_dir = self.get_publish_dir(instance, template_key)

         src_version_entity = instance.data.get("versionEntity")
         filtered_repre_ids = []
@@ -271,12 +287,12 @@ class IntegrateHeroVersion(pyblish.api.InstancePlugin):
                 continue

             # Prepare anatomy data
-            anatomy_data = repre_info["anatomy_data"]
+            anatomy_data = copy.deepcopy(repre_info["anatomy_data"])
             anatomy_data.pop("version", None)

             # Get filled path to repre context
             anatomy_filled = anatomy.format(anatomy_data)
-            template_filled = anatomy_filled["hero"]["path"]
+            template_filled = anatomy_filled[template_key]["path"]

             repre_data = {
                 "path": str(template_filled),
@@ -308,11 +324,11 @@ class IntegrateHeroVersion(pyblish.api.InstancePlugin):
                 collections, remainders = clique.assemble(published_files)
                 if remainders or not collections or len(collections) > 1:
                     raise Exception((
-                        "Integrity error. Files of published representation "
-                        "is combination of frame collections and single files."
-                        "Collections: `{}` Single files: `{}`"
-                    ).format(str(collections),
-                             str(remainders)))
+                        "Integrity error. Files of published"
+                        " representation is combination of frame"
+                        " collections and single files. Collections:"
+                        " `{}` Single files: `{}`"
+                    ).format(str(collections), str(remainders)))

                 src_col = collections[0]
@@ -320,13 +336,10 @@ class IntegrateHeroVersion(pyblish.api.InstancePlugin):
                 frame_splitter = "_-_FRAME_SPLIT_-_"
                 anatomy_data["frame"] = frame_splitter
                 _anatomy_filled = anatomy.format(anatomy_data)
-                _template_filled = _anatomy_filled["hero"]["path"]
+                _template_filled = _anatomy_filled[template_key]["path"]
                 head, tail = _template_filled.split(frame_splitter)
                 padding = int(
-                    anatomy.templates["render"].get(
-                        "frame_padding",
-                        anatomy.templates["render"].get("padding")
-                    )
+                    anatomy.templates[template_key]["frame_padding"]
                 )

                 dst_col = clique.Collection(
@@ -444,6 +457,8 @@ class IntegrateHeroVersion(pyblish.api.InstancePlugin):
                 backup_hero_publish_dir is not None
                 and os.path.exists(backup_hero_publish_dir)
             ):
+                if os.path.exists(hero_publish_dir):
+                    shutil.rmtree(hero_publish_dir)
                 os.rename(backup_hero_publish_dir, hero_publish_dir)
                 self.log.error((
                     "!!! Creating of hero version failed."
@@ -466,13 +481,13 @@ class IntegrateHeroVersion(pyblish.api.InstancePlugin):
                 files.append(_path)
         return files

-    def get_publish_dir(self, instance):
+    def get_publish_dir(self, instance, template_key):
         anatomy = instance.context.data["anatomy"]
         template_data = copy.deepcopy(instance.data["anatomyData"])

-        if "folder" in anatomy.templates["hero"]:
+        if "folder" in anatomy.templates[template_key]:
             anatomy_filled = anatomy.format(template_data)
-            publish_folder = anatomy_filled["hero"]["folder"]
+            publish_folder = anatomy_filled[template_key]["folder"]
         else:
             # This is for cases of Deprecated anatomy without `folder`
             # TODO remove when all clients have solved this issue
@@ -489,7 +504,7 @@ class IntegrateHeroVersion(pyblish.api.InstancePlugin):
                 " key underneath `publish` (in global or for project `{}`)."
             ).format(project_name))

-            file_path = anatomy_filled["hero"]["path"]
+            file_path = anatomy_filled[template_key]["path"]
             # Directory
             publish_folder = os.path.dirname(file_path)
@@ -499,6 +514,38 @@

         return publish_folder

+    def _get_template_key(self, instance):
+        anatomy_data = instance.data["anatomyData"]
+        task_data = anatomy_data.get("task") or {}
+        task_name = task_data.get("name")
+        task_type = task_data.get("type")
+        host_name = instance.context.data["hostName"]
+        # TODO raise error if Hero not set?
+        family = self.main_family_from_instance(instance)
+        key_values = {
+            "families": family,
+            "task_names": task_name,
+            "task_types": task_type,
+            "hosts": host_name
+        }
+        profile = filter_profiles(
+            self.template_name_profiles,
+            key_values,
+            logger=self.log
+        )
+        if profile:
+            template_name = profile["template_name"]
+        else:
+            template_name = self._default_template_name
+        return template_name
+
+    def main_family_from_instance(self, instance):
+        """Returns main family of entered instance."""
+        family = instance.data.get("family")
+        if not family:
+            family = instance.data["families"][0]
+        return family
+
     def copy_file(self, src_path, dst_path):
         # TODO check drives if are the same to check if can hardlink
         dirname = os.path.dirname(dst_path)
@@ -564,22 +611,16 @@ class IntegrateHeroVersion(pyblish.api.InstancePlugin):
             src_file (string) - original file path
             dst_file (string) - hero file path
         """
-        _, rootless = anatomy.find_root_template_from_path(
-            dst_file
-        )
-        _, rtls_src = anatomy.find_root_template_from_path(
-            src_file
-        )
+        _, rootless = anatomy.find_root_template_from_path(dst_file)
+        _, rtls_src = anatomy.find_root_template_from_path(src_file)

         return path.replace(rtls_src, rootless)

     def _update_hash(self, hash, src_file_name, dst_file):
         """ Updates hash value with proper hero name
         """
-        src_file_name = self._get_name_without_ext(
-            src_file_name)
-        hero_file_name = self._get_name_without_ext(
-            dst_file)
+        src_file_name = self._get_name_without_ext(src_file_name)
+        hero_file_name = self._get_name_without_ext(dst_file)

         return hash.replace(src_file_name, hero_file_name)

     def _get_name_without_ext(self, value):
diff --git a/openpype/plugins/publish/integrate_new.py b/openpype/plugins/publish/integrate_new.py
index 8e666f3400..3543786949 100644
--- a/openpype/plugins/publish/integrate_new.py
+++ b/openpype/plugins/publish/integrate_new.py
@@ -151,7 +151,9 @@ class IntegrateAssetNew(pyblish.api.InstancePlugin):
         "effect",
         "xgen",
         "hda",
-        "usd"
+        "usd",
+        "usdComposition",
+        "usdOverride"
     ]
     exclude_families = ["clip"]
     db_representation_context_keys = [
diff --git a/openpype/settings/constants.py b/openpype/settings/constants.py
index 8b8acf5714..19ff953eb4 100644
--- a/openpype/settings/constants.py
+++ b/openpype/settings/constants.py
@@ -8,11 +8,11 @@
 M_ENVIRONMENT_KEY = "__environment_keys__"

 # Metadata key for storing dynamic created labels
 M_DYNAMIC_KEY_LABEL = "__dynamic_keys_labels__"

-METADATA_KEYS = (
+METADATA_KEYS = frozenset([
     M_OVERRIDDEN_KEY,
     M_ENVIRONMENT_KEY,
     M_DYNAMIC_KEY_LABEL
-)
+])

 # Keys where studio's system overrides are stored
 GLOBAL_SETTINGS_KEY = "global_settings"
diff --git a/openpype/settings/defaults/project_settings/global.json b/openpype/settings/defaults/project_settings/global.json
index 528df111f0..0f5ef860d5 100644
--- a/openpype/settings/defaults/project_settings/global.json
+++ b/openpype/settings/defaults/project_settings/global.json
@@ -44,20 +44,6 @@
         "enabled": false,
         "profiles": []
     },
-    "IntegrateHeroVersion": {
-        "enabled": true,
-        "optional": true,
-        "families": [
-            "model",
-            "rig",
-            "look",
-            "pointcache",
-            "animation",
-            "setdress",
-            "layout",
-            "mayaScene"
-        ]
-    },
     "ExtractJpegEXR": {
         "enabled": true,
         "ffmpeg_args": {
@@ -206,6 +192,22 @@
             }
         ]
     },
+    "IntegrateHeroVersion": {
+        "enabled": true,
+        "optional": true,
+        "active": true,
+        "families": [
+            "model",
+            "rig",
+            "look",
+            "pointcache",
+            "animation",
+            "setdress",
+            "layout",
+            "mayaScene"
+        ],
+        "template_name_profiles": []
+    },
     "CleanUp": {
         "paterns": [],
         "remove_temp_renders": false
diff --git a/openpype/settings/defaults/project_settings/nuke.json b/openpype/settings/defaults/project_settings/nuke.json
index 6992fb6e3e..44d7f2d9d0 100644
--- a/openpype/settings/defaults/project_settings/nuke.json
+++ b/openpype/settings/defaults/project_settings/nuke.json
@@ -106,6 +106,9 @@
             ]
         }
     },
+    "ExtractReviewData": {
+        "enabled": false
+    },
     "ExtractReviewDataLut": {
         "enabled": false
     },
@@ -119,11 +122,10 @@
                 "families": [],
                 "sebsets": []
             },
-            "extension": "mov",
+            "read_raw": false,
             "viewer_process_override": "",
             "bake_viewer_process": true,
             "bake_viewer_input_process": true,
-            "add_tags": [],
             "reformat_node_add": false,
             "reformat_node_config": [
                 {
@@ -151,7 +153,9 @@
                     "name": "pbb",
                     "value": false
                 }
-            ]
+            ],
+            "extension": "mov",
+            "add_tags": []
         }
     }
 },
diff --git a/openpype/settings/defaults/project_settings/photoshop.json b/openpype/settings/defaults/project_settings/photoshop.json
index 118b9c721e..d9b7a8083f 100644
--- a/openpype/settings/defaults/project_settings/photoshop.json
+++ b/openpype/settings/defaults/project_settings/photoshop.json
@@ -12,13 +12,16 @@
         "flatten_subset_template": "",
         "color_code_mapping": []
     },
+    "CollectInstances": {
+        "flatten_subset_template": ""
+    },
     "ValidateContainers": {
         "enabled": true,
         "optional": true,
         "active": true
     },
     "ValidateNaming": {
-        "invalid_chars": "[ \\\\/+\\*\\?\\(\\)\\[\\]\\{\\}:,]",
+        "invalid_chars": "[ \\\\/+\\*\\?\\(\\)\\[\\]\\{\\}:,;]",
         "replace_char": "_"
     },
     "ExtractImage": {
@@ -44,4 +47,4 @@
     "create_first_version": false,
     "custom_templates": []
-}
+}
\ No newline at end of file
diff --git a/openpype/settings/defaults/project_settings/slack.json b/openpype/settings/defaults/project_settings/slack.json
index d77b8c2208..c156fed08e 100644
--- a/openpype/settings/defaults/project_settings/slack.json
+++ b/openpype/settings/defaults/project_settings/slack.json
@@ -11,6 +11,7 @@
             "task_types": [],
             "tasks": [],
             "subsets": [],
+            "review_upload_limit": 50.0,
             "channel_messages": []
         }
     ]
diff --git a/openpype/settings/defaults/project_settings/tvpaint.json b/openpype/settings/defaults/project_settings/tvpaint.json
index 528bf6de8e..88b5a598cd 100644
--- a/openpype/settings/defaults/project_settings/tvpaint.json
+++ b/openpype/settings/defaults/project_settings/tvpaint.json
@@ -1,6 +1,10 @@
 {
     "stop_timer_on_application_exit": false,
     "publish": {
+        "CollectRenderScene": {
+            "enabled": false,
+            "render_layer": "Main"
+        },
         "ExtractSequence": {
             "review_bg": [
                 255,
@@ -28,6 +32,11 @@
             "enabled": true,
             "optional": true,
             "active": true
+        },
+        "ExtractConvertToEXR": {
+            "enabled": false,
+            "replace_pngs": true,
+            "exr_compression": "ZIP"
         }
     },
     "load": {
diff --git a/openpype/settings/defaults/system_settings/tools.json b/openpype/settings/defaults/system_settings/tools.json
index 181236abe8..9e08465195 100644
--- a/openpype/settings/defaults/system_settings/tools.json
+++ b/openpype/settings/defaults/system_settings/tools.json
@@ -25,10 +25,18 @@
         },
         "variants": {
             "3-2": {
-                "MTOA_VERSION": "3.2"
+                "host_names": [],
+                "app_variants": [],
+                "environment": {
+                    "MTOA_VERSION": "3.2"
+                }
             },
             "3-1": {
-                "MTOA_VERSION": "3.1"
+                "host_names": [],
+                "app_variants": [],
+                "environment": {
+                    "MTOA_VERSION": "3.1"
+                }
             },
             "__dynamic_keys_labels__": {
                 "3-2": "3.2",
diff --git a/openpype/settings/entities/base_entity.py b/openpype/settings/entities/base_entity.py
index 76700d605d..21ee44ae77 100644
--- a/openpype/settings/entities/base_entity.py
+++ b/openpype/settings/entities/base_entity.py
@@ -173,6 +173,10 @@ class BaseItemEntity(BaseEntity):
         # Entity has set `_project_override_value` (is not NOT_SET)
         self.had_project_override = False

+        self._default_log_invalid_types = True
+        self._studio_log_invalid_types = True
+        self._project_log_invalid_types = True
+
         # Callbacks that are called on change.
# - main current purpose is to register GUI callbacks self.on_change_callbacks = [] @@ -419,7 +423,7 @@ class BaseItemEntity(BaseEntity): raise InvalidValueType(self.valid_value_types, type(value), self.path) # TODO convert to private method - def _check_update_value(self, value, value_source): + def _check_update_value(self, value, value_source, log_invalid_types=True): """Validation of value on update methods. Update methods update data from currently saved settings so it is @@ -447,16 +451,17 @@ class BaseItemEntity(BaseEntity): if new_value is not NOT_SET: return new_value - # Warning log about invalid value type. - self.log.warning( - ( - "{} Got invalid value type for {} values." - " Expected types: {} | Got Type: {} | Value: \"{}\"" - ).format( - self.path, value_source, - self.valid_value_types, type(value), str(value) + if log_invalid_types: + # Warning log about invalid value type. + self.log.warning( + ( + "{} Got invalid value type for {} values." + " Expected types: {} | Got Type: {} | Value: \"{}\"" + ).format( + self.path, value_source, + self.valid_value_types, type(value), str(value) + ) ) - ) return NOT_SET def available_for_role(self, role_name=None): @@ -985,7 +990,7 @@ class ItemEntity(BaseItemEntity): return self.root_item.get_entity_from_path(path) @abstractmethod - def update_default_value(self, parent_values): + def update_default_value(self, parent_values, log_invalid_types=True): """Fill default values on startup or on refresh. Default values stored in `openpype` repository should update all items @@ -995,11 +1000,13 @@ Args: parent_values (dict): Values of parent's item. But in case item is used as widget, `parent_values` contain value for item. + log_invalid_types (bool): Log invalid type of value. Used when + entity can have children with same keys and different types.
""" pass @abstractmethod - def update_studio_value(self, parent_values): + def update_studio_value(self, parent_values, log_invalid_types=True): """Fill studio override values on startup or on refresh. Set studio value if is not set to NOT_SET, in that case studio @@ -1008,11 +1015,13 @@ class ItemEntity(BaseItemEntity): Args: parent_values (dict): Values of parent's item. But in case item is used as widget, `parent_values` contain value for item. + log_invalid_types (bool): Log invalid type of value. Used when + entity can have children with same keys and different types. """ pass @abstractmethod - def update_project_value(self, parent_values): + def update_project_value(self, parent_values, log_invalid_types=True): """Fill project override values on startup, refresh or project change. Set project value if is not set to NOT_SET, in that case project @@ -1021,5 +1030,7 @@ class ItemEntity(BaseItemEntity): Args: parent_values (dict): Values of parent's item. But in case item is used as widget, `parent_values` contain value for item. + log_invalid_types (bool): Log invalid type of value. Used when + entity can have children with same keys and different types. 
""" pass diff --git a/openpype/settings/entities/dict_conditional.py b/openpype/settings/entities/dict_conditional.py index 19f326aea7..88d2dc8296 100644 --- a/openpype/settings/entities/dict_conditional.py +++ b/openpype/settings/entities/dict_conditional.py @@ -518,12 +518,18 @@ class DictConditionalEntity(ItemEntity): output.update(self._current_metadata) return output - def _prepare_value(self, value): + def _prepare_value(self, value, log_invalid_types): if value is NOT_SET or self.enum_key not in value: return NOT_SET, NOT_SET enum_value = value.get(self.enum_key) if enum_value not in self.non_gui_children: + if log_invalid_types: + self.log.warning( + "{} Unknown enum key in default values: {}".format( + self.path, enum_value + ) + ) return NOT_SET, NOT_SET # Create copy of value before poping values @@ -551,22 +557,25 @@ class DictConditionalEntity(ItemEntity): return value, metadata - def update_default_value(self, value): + def update_default_value(self, value, log_invalid_types=True): """Update default values. Not an api method, should be called by parent. 
""" - value = self._check_update_value(value, "default") + self._default_log_invalid_types = log_invalid_types + value = self._check_update_value( + value, "default", log_invalid_types + ) self.has_default_value = value is not NOT_SET # TODO add value validation - value, metadata = self._prepare_value(value) + value, metadata = self._prepare_value(value, log_invalid_types) self._default_metadata = metadata if value is NOT_SET: - self.enum_entity.update_default_value(value) + self.enum_entity.update_default_value(value, log_invalid_types) for children_by_key in self.non_gui_children.values(): for child_obj in children_by_key.values(): - child_obj.update_default_value(value) + child_obj.update_default_value(value, log_invalid_types) return value_keys = set(value.keys()) @@ -574,7 +583,7 @@ class DictConditionalEntity(ItemEntity): expected_keys = set(self.non_gui_children[enum_value].keys()) expected_keys.add(self.enum_key) unknown_keys = value_keys - expected_keys - if unknown_keys: + if unknown_keys and log_invalid_types: self.log.warning( "{} Unknown keys in default values: {}".format( self.path, @@ -582,28 +591,37 @@ class DictConditionalEntity(ItemEntity): ) ) - self.enum_entity.update_default_value(enum_value) - for children_by_key in self.non_gui_children.values(): + self.enum_entity.update_default_value(enum_value, log_invalid_types) + + for enum_key, children_by_key in self.non_gui_children.items(): + _log_invalid_types = log_invalid_types + if _log_invalid_types: + _log_invalid_types = enum_key == enum_value + value_copy = copy.deepcopy(value) for key, child_obj in children_by_key.items(): child_value = value_copy.get(key, NOT_SET) - child_obj.update_default_value(child_value) + child_obj.update_default_value(child_value, _log_invalid_types) - def update_studio_value(self, value): + def update_studio_value(self, value, log_invalid_types=True): """Update studio override values. Not an api method, should be called by parent. 
""" - value = self._check_update_value(value, "studio override") - value, metadata = self._prepare_value(value) + + self._studio_log_invalid_types = log_invalid_types + value = self._check_update_value( + value, "studio override", log_invalid_types + ) + value, metadata = self._prepare_value(value, log_invalid_types) self._studio_override_metadata = metadata self.had_studio_override = metadata is not NOT_SET if value is NOT_SET: - self.enum_entity.update_studio_value(value) + self.enum_entity.update_studio_value(value, log_invalid_types) for children_by_key in self.non_gui_children.values(): for child_obj in children_by_key.values(): - child_obj.update_studio_value(value) + child_obj.update_studio_value(value, log_invalid_types) return value_keys = set(value.keys()) @@ -611,7 +629,7 @@ class DictConditionalEntity(ItemEntity): expected_keys = set(self.non_gui_children[enum_value]) expected_keys.add(self.enum_key) unknown_keys = value_keys - expected_keys - if unknown_keys: + if unknown_keys and log_invalid_types: self.log.warning( "{} Unknown keys in studio overrides: {}".format( self.path, @@ -619,28 +637,36 @@ class DictConditionalEntity(ItemEntity): ) ) - self.enum_entity.update_studio_value(enum_value) - for children_by_key in self.non_gui_children.values(): + self.enum_entity.update_studio_value(enum_value, log_invalid_types) + for enum_key, children_by_key in self.non_gui_children.items(): + _log_invalid_types = log_invalid_types + if _log_invalid_types: + _log_invalid_types = enum_key == enum_value + value_copy = copy.deepcopy(value) for key, child_obj in children_by_key.items(): child_value = value_copy.get(key, NOT_SET) - child_obj.update_studio_value(child_value) + child_obj.update_studio_value(child_value, _log_invalid_types) - def update_project_value(self, value): + def update_project_value(self, value, log_invalid_types=True): """Update project override values. Not an api method, should be called by parent. 
""" - value = self._check_update_value(value, "project override") - value, metadata = self._prepare_value(value) + + self._project_log_invalid_types = log_invalid_types + value = self._check_update_value( + value, "project override", log_invalid_types + ) + value, metadata = self._prepare_value(value, log_invalid_types) self._project_override_metadata = metadata self.had_project_override = metadata is not NOT_SET if value is NOT_SET: - self.enum_entity.update_project_value(value) + self.enum_entity.update_project_value(value, log_invalid_types) for children_by_key in self.non_gui_children.values(): for child_obj in children_by_key.values(): - child_obj.update_project_value(value) + child_obj.update_project_value(value, log_invalid_types) return value_keys = set(value.keys()) @@ -648,7 +674,7 @@ class DictConditionalEntity(ItemEntity): expected_keys = set(self.non_gui_children[enum_value]) expected_keys.add(self.enum_key) unknown_keys = value_keys - expected_keys - if unknown_keys: + if unknown_keys and log_invalid_types: self.log.warning( "{} Unknown keys in project overrides: {}".format( self.path, @@ -656,12 +682,16 @@ class DictConditionalEntity(ItemEntity): ) ) - self.enum_entity.update_project_value(enum_value) - for children_by_key in self.non_gui_children.values(): + self.enum_entity.update_project_value(enum_value, log_invalid_types) + for enum_key, children_by_key in self.non_gui_children.items(): + _log_invalid_types = log_invalid_types + if _log_invalid_types: + _log_invalid_types = enum_key == enum_value + value_copy = copy.deepcopy(value) for key, child_obj in children_by_key.items(): child_value = value_copy.get(key, NOT_SET) - child_obj.update_project_value(child_value) + child_obj.update_project_value(child_value, _log_invalid_types) def _discard_changes(self, on_change_trigger): self._ignore_child_changes = True diff --git a/openpype/settings/entities/dict_immutable_keys_entity.py b/openpype/settings/entities/dict_immutable_keys_entity.py index 
060f8d522e..0209681e95 100644 --- a/openpype/settings/entities/dict_immutable_keys_entity.py +++ b/openpype/settings/entities/dict_immutable_keys_entity.py @@ -414,12 +414,16 @@ class DictImmutableKeysEntity(ItemEntity): return value, metadata - def update_default_value(self, value): + def update_default_value(self, value, log_invalid_types=True): """Update default values. Not an api method, should be called by parent. """ - value = self._check_update_value(value, "default") + + self._default_log_invalid_types = log_invalid_types + value = self._check_update_value( + value, "default", log_invalid_types + ) self.has_default_value = value is not NOT_SET # TODO add value validation value, metadata = self._prepare_value(value) @@ -427,13 +431,13 @@ class DictImmutableKeysEntity(ItemEntity): if value is NOT_SET: for child_obj in self.non_gui_children.values(): - child_obj.update_default_value(value) + child_obj.update_default_value(value, log_invalid_types) return value_keys = set(value.keys()) expected_keys = set(self.non_gui_children) unknown_keys = value_keys - expected_keys - if unknown_keys: + if unknown_keys and log_invalid_types: self.log.warning( "{} Unknown keys in default values: {}".format( self.path, @@ -443,27 +447,31 @@ class DictImmutableKeysEntity(ItemEntity): for key, child_obj in self.non_gui_children.items(): child_value = value.get(key, NOT_SET) - child_obj.update_default_value(child_value) + child_obj.update_default_value(child_value, log_invalid_types) - def update_studio_value(self, value): + def update_studio_value(self, value, log_invalid_types=True): """Update studio override values. Not an api method, should be called by parent. 
""" - value = self._check_update_value(value, "studio override") + + self._studio_log_invalid_types = log_invalid_types + value = self._check_update_value( + value, "studio override", log_invalid_types + ) value, metadata = self._prepare_value(value) self._studio_override_metadata = metadata self.had_studio_override = metadata is not NOT_SET if value is NOT_SET: for child_obj in self.non_gui_children.values(): - child_obj.update_studio_value(value) + child_obj.update_studio_value(value, log_invalid_types) return value_keys = set(value.keys()) expected_keys = set(self.non_gui_children) unknown_keys = value_keys - expected_keys - if unknown_keys: + if unknown_keys and log_invalid_types: self.log.warning( "{} Unknown keys in studio overrides: {}".format( self.path, @@ -472,27 +480,31 @@ class DictImmutableKeysEntity(ItemEntity): ) for key, child_obj in self.non_gui_children.items(): child_value = value.get(key, NOT_SET) - child_obj.update_studio_value(child_value) + child_obj.update_studio_value(child_value, log_invalid_types) - def update_project_value(self, value): + def update_project_value(self, value, log_invalid_types=True): """Update project override values. Not an api method, should be called by parent. 
""" - value = self._check_update_value(value, "project override") + + self._project_log_invalid_types = log_invalid_types + value = self._check_update_value( + value, "project override", log_invalid_types + ) value, metadata = self._prepare_value(value) self._project_override_metadata = metadata self.had_project_override = metadata is not NOT_SET if value is NOT_SET: for child_obj in self.non_gui_children.values(): - child_obj.update_project_value(value) + child_obj.update_project_value(value, log_invalid_types) return value_keys = set(value.keys()) expected_keys = set(self.non_gui_children) unknown_keys = value_keys - expected_keys - if unknown_keys: + if unknown_keys and log_invalid_types: self.log.warning( "{} Unknown keys in project overrides: {}".format( self.path, @@ -502,7 +514,7 @@ class DictImmutableKeysEntity(ItemEntity): for key, child_obj in self.non_gui_children.items(): child_value = value.get(key, NOT_SET) - child_obj.update_project_value(child_value) + child_obj.update_project_value(child_value, log_invalid_types) def _discard_changes(self, on_change_trigger): self._ignore_child_changes = True @@ -694,37 +706,48 @@ class RootsDictEntity(DictImmutableKeysEntity): self._metadata_are_modified = False self._current_metadata = {} - def update_default_value(self, value): + def update_default_value(self, value, log_invalid_types=True): """Update default values. Not an api method, should be called by parent. """ - value = self._check_update_value(value, "default") + + self._default_log_invalid_types = log_invalid_types + value = self._check_update_value( + value, "default", log_invalid_types + ) value, _ = self._prepare_value(value) self._default_value = value self._default_metadata = {} self.has_default_value = value is not NOT_SET - def update_studio_value(self, value): + def update_studio_value(self, value, log_invalid_types=True): """Update studio override values. Not an api method, should be called by parent. 
""" - value = self._check_update_value(value, "studio override") + + self._studio_log_invalid_types = log_invalid_types + value = self._check_update_value( + value, "studio override", log_invalid_types + ) value, _ = self._prepare_value(value) self._studio_value = value self._studio_override_metadata = {} self.had_studio_override = value is not NOT_SET - def update_project_value(self, value): + def update_project_value(self, value, log_invalid_types=True): """Update project override values. Not an api method, should be called by parent. """ - value = self._check_update_value(value, "project override") + self._project_log_invalid_types = log_invalid_types + value = self._check_update_value( + value, "project override", log_invalid_types + ) value, _metadata = self._prepare_value(value) self._project_value = value @@ -886,37 +909,48 @@ class SyncServerSites(DictImmutableKeysEntity): self._metadata_are_modified = False self._current_metadata = {} - def update_default_value(self, value): + def update_default_value(self, value, log_invalid_types=True): """Update default values. Not an api method, should be called by parent. """ - value = self._check_update_value(value, "default") + + self._default_log_invalid_types = log_invalid_types + value = self._check_update_value( + value, "default", log_invalid_types + ) value, _ = self._prepare_value(value) self._default_value = value self._default_metadata = {} self.has_default_value = value is not NOT_SET - def update_studio_value(self, value): + def update_studio_value(self, value, log_invalid_types=True): """Update studio override values. Not an api method, should be called by parent. 
""" - value = self._check_update_value(value, "studio override") + + self._studio_log_invalid_types = log_invalid_types + value = self._check_update_value( + value, "studio override", log_invalid_types + ) value, _ = self._prepare_value(value) self._studio_value = value self._studio_override_metadata = {} self.had_studio_override = value is not NOT_SET - def update_project_value(self, value): + def update_project_value(self, value, log_invalid_types=True): """Update project override values. Not an api method, should be called by parent. """ - value = self._check_update_value(value, "project override") + self._project_log_invalid_types = log_invalid_types + value = self._check_update_value( + value, "project override", log_invalid_types + ) value, _metadata = self._prepare_value(value) self._project_value = value diff --git a/openpype/settings/entities/dict_mutable_keys_entity.py b/openpype/settings/entities/dict_mutable_keys_entity.py index 6b9c0bc7ed..a0c93b97a7 100644 --- a/openpype/settings/entities/dict_mutable_keys_entity.py +++ b/openpype/settings/entities/dict_mutable_keys_entity.py @@ -393,11 +393,15 @@ class DictMutableKeysEntity(EndpointEntity): value = self.value_on_not_set using_values_from_state = False + log_invalid_types = True if state is OverrideState.PROJECT: + log_invalid_types = self._project_log_invalid_types using_values_from_state = using_project_overrides elif state is OverrideState.STUDIO: + log_invalid_types = self._studio_log_invalid_types using_values_from_state = using_studio_overrides elif state is OverrideState.DEFAULTS: + log_invalid_types = self._default_log_invalid_types using_values_from_state = using_default_values new_value = copy.deepcopy(value) @@ -437,11 +441,11 @@ class DictMutableKeysEntity(EndpointEntity): if not label: label = metadata_labels.get(new_key) - child_entity.update_default_value(_value) + child_entity.update_default_value(_value, log_invalid_types) if using_project_overrides: - 
child_entity.update_project_value(_value) + child_entity.update_project_value(_value, log_invalid_types) elif using_studio_overrides: - child_entity.update_studio_value(_value) + child_entity.update_studio_value(_value, log_invalid_types) if label: children_label_by_id[child_entity.id] = label @@ -598,8 +602,11 @@ class DictMutableKeysEntity(EndpointEntity): metadata[key] = value.pop(key) return value, metadata - def update_default_value(self, value): - value = self._check_update_value(value, "default") + def update_default_value(self, value, log_invalid_types=True): + self._default_log_invalid_types = log_invalid_types + value = self._check_update_value( + value, "default", log_invalid_types + ) has_default_value = value is not NOT_SET if has_default_value: for required_key in self.required_keys: @@ -611,15 +618,21 @@ class DictMutableKeysEntity(EndpointEntity): self._default_value = value self._default_metadata = metadata - def update_studio_value(self, value): - value = self._check_update_value(value, "studio override") + def update_studio_value(self, value, log_invalid_types=True): + self._studio_log_invalid_types = log_invalid_types + value = self._check_update_value( + value, "studio override", log_invalid_types + ) value, metadata = self._prepare_value(value) self._studio_override_value = value self._studio_override_metadata = metadata self.had_studio_override = value is not NOT_SET - def update_project_value(self, value): - value = self._check_update_value(value, "project override") + def update_project_value(self, value, log_invalid_types=True): + self._project_log_invalid_types = log_invalid_types + value = self._check_update_value( + value, "project override", log_invalid_types + ) value, metadata = self._prepare_value(value) self._project_override_value = value self._project_override_metadata = metadata @@ -686,9 +699,12 @@ class DictMutableKeysEntity(EndpointEntity): if not self._can_remove_from_project_override: return + log_invalid_types = True if 
self._has_studio_override: + log_invalid_types = self._studio_log_invalid_types value = self._studio_override_value elif self.has_default_value: + log_invalid_types = self._default_log_invalid_types value = self._default_value else: value = self.value_on_not_set @@ -709,9 +725,9 @@ class DictMutableKeysEntity(EndpointEntity): for _key, _value in new_value.items(): new_key = self._convert_to_regex_valid_key(_key) child_entity = self._add_key(new_key) - child_entity.update_default_value(_value) + child_entity.update_default_value(_value, log_invalid_types) if self._has_studio_override: - child_entity.update_studio_value(_value) + child_entity.update_studio_value(_value, log_invalid_types) label = metadata_labels.get(_key) if label: diff --git a/openpype/settings/entities/input_entities.py b/openpype/settings/entities/input_entities.py index 7512d7bfcc..3dcd238672 100644 --- a/openpype/settings/entities/input_entities.py +++ b/openpype/settings/entities/input_entities.py @@ -90,18 +90,27 @@ class EndpointEntity(ItemEntity): def require_restart(self): return self.has_unsaved_changes - def update_default_value(self, value): - value = self._check_update_value(value, "default") + def update_default_value(self, value, log_invalid_types=True): + self._default_log_invalid_types = log_invalid_types + value = self._check_update_value( + value, "default", log_invalid_types + ) self._default_value = value self.has_default_value = value is not NOT_SET - def update_studio_value(self, value): - value = self._check_update_value(value, "studio override") + def update_studio_value(self, value, log_invalid_types=True): + self._studio_log_invalid_types = log_invalid_types + value = self._check_update_value( + value, "studio override", log_invalid_types + ) self._studio_override_value = value self.had_studio_override = bool(value is not NOT_SET) - def update_project_value(self, value): - value = self._check_update_value(value, "project override") + def update_project_value(self, value, 
log_invalid_types=True): + self._project_log_invalid_types = log_invalid_types + value = self._check_update_value( + value, "project override", log_invalid_types + ) self._project_override_value = value self.had_project_override = bool(value is not NOT_SET) @@ -590,22 +599,26 @@ class RawJsonEntity(InputEntity): metadata[key] = value.pop(key) return value, metadata - def update_default_value(self, value): - value = self._check_update_value(value, "default") + def update_default_value(self, value, log_invalid_types=True): + value = self._check_update_value(value, "default", log_invalid_types) self.has_default_value = value is not NOT_SET value, metadata = self._prepare_value(value) self._default_value = value self.default_metadata = metadata - def update_studio_value(self, value): - value = self._check_update_value(value, "studio override") + def update_studio_value(self, value, log_invalid_types=True): + value = self._check_update_value( + value, "studio override", log_invalid_types + ) self.had_studio_override = value is not NOT_SET value, metadata = self._prepare_value(value) self._studio_override_value = value self.studio_override_metadata = metadata - def update_project_value(self, value): - value = self._check_update_value(value, "project override") + def update_project_value(self, value, log_invalid_types=True): + value = self._check_update_value( + value, "project override", log_invalid_types + ) self.had_project_override = value is not NOT_SET value, metadata = self._prepare_value(value) self._project_override_value = value diff --git a/openpype/settings/entities/item_entities.py b/openpype/settings/entities/item_entities.py index 9c6f428b97..3b756e4ede 100644 --- a/openpype/settings/entities/item_entities.py +++ b/openpype/settings/entities/item_entities.py @@ -173,14 +173,17 @@ class PathEntity(ItemEntity): self._ignore_missing_defaults = ignore_missing_defaults self.child_obj.set_override_state(state, ignore_missing_defaults) - def 
update_default_value(self, value): - self.child_obj.update_default_value(value) + def update_default_value(self, value, log_invalid_types=True): + self._default_log_invalid_types = log_invalid_types + self.child_obj.update_default_value(value, log_invalid_types) - def update_project_value(self, value): - self.child_obj.update_project_value(value) + def update_project_value(self, value, log_invalid_types=True): + self._project_log_invalid_types = log_invalid_types + self.child_obj.update_project_value(value, log_invalid_types) - def update_studio_value(self, value): - self.child_obj.update_studio_value(value) + def update_studio_value(self, value, log_invalid_types=True): + self._studio_log_invalid_types = log_invalid_types + self.child_obj.update_studio_value(value, log_invalid_types) def _discard_changes(self, *args, **kwargs): self.child_obj.discard_changes(*args, **kwargs) @@ -472,9 +475,9 @@ class ListStrictEntity(ItemEntity): self._has_project_override = False - def _check_update_value(self, value, value_type): + def _check_update_value(self, value, value_type, log_invalid_types=True): value = super(ListStrictEntity, self)._check_update_value( - value, value_type + value, value_type, log_invalid_types ) if value is NOT_SET: return value @@ -484,15 +487,16 @@ if value_len == child_len: return value - self.log.warning( - ( - "{} Amount of strict list items in {} values is" - " not same as expected. Expected {} items. Got {} items. {}" - ).format( - self.path, value_type, - child_len, value_len, str(value) + if log_invalid_types: + self.log.warning( + ( + "{} Amount of strict list items in {} values is not same" + " as expected. Expected {} items. Got {} items. 
{}" + ).format( + self.path, value_type, + child_len, value_len, str(value) + ) ) - ) if value_len < child_len: # Fill missing values with NOT_SET @@ -504,36 +508,51 @@ class ListStrictEntity(ItemEntity): value.pop(child_len) return value - def update_default_value(self, value): - value = self._check_update_value(value, "default") + def update_default_value(self, value, log_invalid_types=True): + self._default_log_invalid_types = log_invalid_types + value = self._check_update_value( + value, "default", log_invalid_types + ) self.has_default_value = value is not NOT_SET if value is NOT_SET: for child_obj in self.children: - child_obj.update_default_value(value) + child_obj.update_default_value(value, log_invalid_types) else: for idx, item_value in enumerate(value): - self.children[idx].update_default_value(item_value) + self.children[idx].update_default_value( + item_value, log_invalid_types + ) - def update_studio_value(self, value): - value = self._check_update_value(value, "studio override") + def update_studio_value(self, value, log_invalid_types=True): + self._studio_log_invalid_types = log_invalid_types + value = self._check_update_value( + value, "studio override", log_invalid_types + ) if value is NOT_SET: for child_obj in self.children: - child_obj.update_studio_value(value) + child_obj.update_studio_value(value, log_invalid_types) else: for idx, item_value in enumerate(value): - self.children[idx].update_studio_value(item_value) + self.children[idx].update_studio_value( + item_value, log_invalid_types + ) - def update_project_value(self, value): - value = self._check_update_value(value, "project override") + def update_project_value(self, value, log_invalid_types=True): + self._project_log_invalid_types = log_invalid_types + value = self._check_update_value( + value, "project override", log_invalid_types + ) if value is NOT_SET: for child_obj in self.children: - child_obj.update_project_value(value) + child_obj.update_project_value(value, 
log_invalid_types) else: for idx, item_value in enumerate(value): - self.children[idx].update_project_value(item_value) + self.children[idx].update_project_value( + item_value, log_invalid_types + ) def reset_callbacks(self): super(ListStrictEntity, self).reset_callbacks() diff --git a/openpype/settings/entities/list_entity.py b/openpype/settings/entities/list_entity.py index 0268c208bb..5d6a64b3ea 100644 --- a/openpype/settings/entities/list_entity.py +++ b/openpype/settings/entities/list_entity.py @@ -325,16 +325,24 @@ class ListEntity(EndpointEntity): for item in value: child_obj = self._add_new_item() - child_obj.update_default_value(item) + child_obj.update_default_value( + item, self._default_log_invalid_types + ) if self._override_state is OverrideState.PROJECT: if self.had_project_override: - child_obj.update_project_value(item) + child_obj.update_project_value( + item, self._project_log_invalid_types + ) elif self.had_studio_override: - child_obj.update_studio_value(item) + child_obj.update_studio_value( + item, self._studio_log_invalid_types + ) elif self._override_state is OverrideState.STUDIO: if self.had_studio_override: - child_obj.update_studio_value(item) + child_obj.update_studio_value( + item, self._studio_log_invalid_types + ) for child_obj in self.children: child_obj.set_override_state( @@ -466,16 +474,24 @@ class ListEntity(EndpointEntity): for item in value: child_obj = self._add_new_item() - child_obj.update_default_value(item) + child_obj.update_default_value( + item, self._default_log_invalid_types + ) if self._override_state is OverrideState.PROJECT: if self.had_project_override: - child_obj.update_project_value(item) + child_obj.update_project_value( + item, self._project_log_invalid_types + ) elif self.had_studio_override: - child_obj.update_studio_value(item) + child_obj.update_studio_value( + item, self._studio_log_invalid_types + ) elif self._override_state is OverrideState.STUDIO: if self.had_studio_override: - 
child_obj.update_studio_value(item) + child_obj.update_studio_value( + item, self._studio_log_invalid_types + ) child_obj.set_override_state( self._override_state, self._ignore_missing_defaults diff --git a/openpype/settings/entities/schemas/projects_schema/schema_project_photoshop.json b/openpype/settings/entities/schemas/projects_schema/schema_project_photoshop.json index b499ccc4be..badf94229b 100644 --- a/openpype/settings/entities/schemas/projects_schema/schema_project_photoshop.json +++ b/openpype/settings/entities/schemas/projects_schema/schema_project_photoshop.json @@ -42,7 +42,7 @@ "children": [ { "type": "label", - "label": "Set color for publishable layers, set its resulting family and template for subset name. Can create flatten image from published instances" + "label": "Set color for publishable layers, set its resulting family and template for subset name. \nCan create flatten image from published instances.(Applicable only for remote publishing!)" }, { "type": "boolean", @@ -108,6 +108,23 @@ } ] }, + { + "type": "dict", + "collapsible": true, + "key": "CollectInstances", + "label": "Collect Instances", + "children": [ + { + "type": "label", + "label": "Name for flatten image created if no image instance present" + }, + { + "type": "text", + "key": "flatten_subset_template", + "label": "Subset template for flatten image" + } + ] + }, { "type": "schema_template", "name": "template_publish_plugin", diff --git a/openpype/settings/entities/schemas/projects_schema/schema_project_slack.json b/openpype/settings/entities/schemas/projects_schema/schema_project_slack.json index 14814d8b01..1a9804cd4f 100644 --- a/openpype/settings/entities/schemas/projects_schema/schema_project_slack.json +++ b/openpype/settings/entities/schemas/projects_schema/schema_project_slack.json @@ -75,6 +75,15 @@ "type": "list", "object_type": "text" }, + { + "type": "number", + "key": "review_upload_limit", + "label": "Upload review maximum file size (MB)", + "decimal": 2, + 
"default": 50, + "minimum": 0, + "maximum": 1000000 + }, { "type": "separator" }, diff --git a/openpype/settings/entities/schemas/projects_schema/schema_project_tvpaint.json b/openpype/settings/entities/schemas/projects_schema/schema_project_tvpaint.json index 8286ed1193..20fe5b0855 100644 --- a/openpype/settings/entities/schemas/projects_schema/schema_project_tvpaint.json +++ b/openpype/settings/entities/schemas/projects_schema/schema_project_tvpaint.json @@ -16,6 +16,30 @@ "key": "publish", "label": "Publish plugins", "children": [ + { + "type": "dict", + "collapsible": true, + "key": "CollectRenderScene", + "label": "Collect Render Scene", + "is_group": true, + "checkbox_key": "enabled", + "children": [ + { + "type": "boolean", + "key": "enabled", + "label": "Enabled" + }, + { + "type": "label", + "label": "It is possible to fill 'render_layer' or 'variant' in subset name template with custom value.
- value of 'render_pass' is always \"beauty\"." + }, + { + "type": "text", + "key": "render_layer", + "label": "Render Layer" + } + ] + }, { "type": "dict", "collapsible": true, @@ -78,6 +102,47 @@ "docstring": "Validate if shot on instances metadata is same as workfiles shot" } ] + }, + { + "type": "dict", + "key": "ExtractConvertToEXR", + "label": "Extract Convert To EXR", + "is_group": true, + "checkbox_key": "enabled", + "children": [ + { + "type": "boolean", + "key": "enabled", + "label": "Enabled" + }, + { + "type": "label", + "label": "WARNING: This plugin does not work on MacOS (using OIIO tool)." + }, + { + "type": "boolean", + "key": "replace_pngs", + "label": "Replace source PNG" + }, + { + "type": "enum", + "key": "exr_compression", + "label": "EXR Compression", + "multiselection": false, + "enum_items": [ + {"ZIP": "ZIP"}, + {"ZIPS": "ZIPS"}, + {"DWAA": "DWAA"}, + {"DWAB": "DWAB"}, + {"PIZ": "PIZ"}, + {"RLE": "RLE"}, + {"PXR24": "PXR24"}, + {"B44": "B44"}, + {"B44A": "B44A"}, + {"none": "None"} + ] + } + ] } ] }, diff --git a/openpype/settings/entities/schemas/projects_schema/schemas/schema_global_publish.json b/openpype/settings/entities/schemas/projects_schema/schemas/schema_global_publish.json index ab968037f6..77a41417d9 100644 --- a/openpype/settings/entities/schemas/projects_schema/schemas/schema_global_publish.json +++ b/openpype/settings/entities/schemas/projects_schema/schemas/schema_global_publish.json @@ -177,32 +177,6 @@ } ] }, - { - "type": "dict", - "collapsible": true, - "checkbox_key": "enabled", - "key": "IntegrateHeroVersion", - "label": "IntegrateHeroVersion", - "is_group": true, - "children": [ - { - "type": "boolean", - "key": "enabled", - "label": "Enabled" - }, - { - "type": "boolean", - "key": "optional", - "label": "Optional" - }, - { - "key": "families", - "label": "Families", - "type": "list", - "object_type": "text" - } - ] - }, { "type": "dict", "collapsible": true, @@ -661,6 +635,80 @@ } ] }, + { + "type": "dict", + 
"collapsible": true, + "checkbox_key": "enabled", + "key": "IntegrateHeroVersion", + "label": "IntegrateHeroVersion", + "is_group": true, + "children": [ + { + "type": "boolean", + "key": "enabled", + "label": "Enabled" + }, + { + "type": "boolean", + "key": "optional", + "label": "Optional" + }, + { + "type": "boolean", + "key": "active", + "label": "Active" + }, + { + "key": "families", + "label": "Families", + "type": "list", + "object_type": "text" + }, + { + "type": "list", + "key": "template_name_profiles", + "label": "Template name profiles", + "use_label_wrap": true, + "object_type": { + "type": "dict", + "children": [ + { + "key": "families", + "label": "Families", + "type": "list", + "object_type": "text" + }, + { + "type": "hosts-enum", + "key": "hosts", + "label": "Hosts", + "multiselection": true + }, + { + "key": "task_types", + "label": "Task types", + "type": "task-types-enum" + }, + { + "key": "task_names", + "label": "Task names", + "type": "list", + "object_type": "text" + }, + { + "type": "separator" + }, + { + "type": "text", + "key": "template_name", + "label": "Template name", + "tooltip": "Name of template from Anatomy templates" + } + ] + } + } + ] + }, { "type": "dict", "collapsible": true, diff --git a/openpype/settings/entities/schemas/projects_schema/schemas/schema_nuke_publish.json b/openpype/settings/entities/schemas/projects_schema/schemas/schema_nuke_publish.json index 1636a8d700..27e8957786 100644 --- a/openpype/settings/entities/schemas/projects_schema/schemas/schema_nuke_publish.json +++ b/openpype/settings/entities/schemas/projects_schema/schemas/schema_nuke_publish.json @@ -138,6 +138,21 @@ } ] }, + { + "type": "dict", + "collapsible": true, + "checkbox_key": "enabled", + "key": "ExtractReviewData", + "label": "ExtractReviewData", + "is_group": true, + "children": [ + { + "type": "boolean", + "key": "enabled", + "label": "Enabled" + } + ] + }, { "type": "dict", "collapsible": true, @@ -208,9 +223,10 @@ "type": "separator" }, { 
- "type": "text", - "key": "extension", - "label": "File extension" + "type": "boolean", + "key": "read_raw", + "label": "Read colorspace RAW", + "default": false }, { "type": "text", @@ -227,12 +243,6 @@ "key": "bake_viewer_input_process", "label": "Bake Viewer Input Process (LUTs)" }, - { - "key": "add_tags", - "label": "Add additional tags to representations", - "type": "list", - "object_type": "text" - }, { "type": "separator" }, @@ -246,7 +256,7 @@ "type": "collapsible-wrap", "label": "Reformat Node Knobs", "collapsible": true, - "collapsed": false, + "collapsed": true, "children": [ { "type": "list", @@ -347,6 +357,20 @@ } } ] + }, + { + "type": "separator" + }, + { + "type": "text", + "key": "extension", + "label": "Write node file type" + }, + { + "key": "add_tags", + "label": "Add additional tags to representations", + "type": "list", + "object_type": "text" } ] } diff --git a/openpype/settings/entities/schemas/projects_schema/schemas/schema_representation_tags.json b/openpype/settings/entities/schemas/projects_schema/schemas/schema_representation_tags.json index 7607e1a8c1..484fbf9d07 100644 --- a/openpype/settings/entities/schemas/projects_schema/schemas/schema_representation_tags.json +++ b/openpype/settings/entities/schemas/projects_schema/schemas/schema_representation_tags.json @@ -24,6 +24,9 @@ }, { "sequence": "Output as image sequence" + }, + { + "no-audio": "Do not add audio" } ] } diff --git a/openpype/settings/entities/schemas/projects_schema/schemas/template_color.json b/openpype/settings/entities/schemas/projects_schema/schemas/template_color.json deleted file mode 100644 index af8fd9dae4..0000000000 --- a/openpype/settings/entities/schemas/projects_schema/schemas/template_color.json +++ /dev/null @@ -1,30 +0,0 @@ -[ - { - "type": "list-strict", - "key": "{name}", - "label": "{label}", - "object_types": [ - { - "label": "Red", - "type": "number", - "minimum": 0, - "maximum": 1, - "decimal": 3 - }, - { - "label": "Green", - "type": "number", - 
"minimum": 0, - "maximum": 1, - "decimal": 3 - }, - { - "label": "Blue", - "type": "number", - "minimum": 0, - "maximum": 1, - "decimal": 3 - } - ] - } -] diff --git a/openpype/settings/entities/schemas/system_schema/schema_tools.json b/openpype/settings/entities/schemas/system_schema/schema_tools.json index 2346bef36d..7962fdd465 100644 --- a/openpype/settings/entities/schemas/system_schema/schema_tools.json +++ b/openpype/settings/entities/schemas/system_schema/schema_tools.json @@ -25,7 +25,30 @@ "key": "variants", "collapsible_key": true, "object_type": { - "type": "raw-json" + "type": "dict", + "children": [ + { + "key": "host_names", + "label": "Hosts", + "type": "hosts-enum", + "multiselection": true + }, + { + "key": "app_variants", + "label": "Applications", + "type": "apps-enum", + "multiselection": true, + "tooltip": "Applications are not \"live\" and may require to Save and refresh settings UI to update values." + }, + { + "type": "separator" + }, + { + "key": "environment", + "label": "Environments", + "type": "raw-json" + } + ] } } ] diff --git a/openpype/settings/lib.py b/openpype/settings/lib.py index 1d303564d5..54502292dc 100644 --- a/openpype/settings/lib.py +++ b/openpype/settings/lib.py @@ -265,11 +265,43 @@ def save_project_anatomy(project_name, anatomy_data): raise SaveWarningExc(warnings) +def _system_settings_backwards_compatible_conversion(studio_overrides): + # Backwards compatibility of tools 3.9.1 - 3.9.2 to keep + # "tools" environments + if ( + "tools" in studio_overrides + and "tool_groups" in studio_overrides["tools"] + ): + tool_groups = studio_overrides["tools"]["tool_groups"] + for tool_group, group_value in tool_groups.items(): + if tool_group in METADATA_KEYS: + continue + + variants = group_value.get("variants") + if not variants: + continue + + for key in set(variants.keys()): + if key in METADATA_KEYS: + continue + + variant_value = variants[key] + if "environment" not in variant_value: + variants[key] = { + "environment": 
variant_value + } + + @require_handler def get_studio_system_settings_overrides(return_version=False): - return _SETTINGS_HANDLER.get_studio_system_settings_overrides( + output = _SETTINGS_HANDLER.get_studio_system_settings_overrides( return_version ) + value = output + if return_version: + value, version = output + _system_settings_backwards_compatible_conversion(value) + return output @require_handler diff --git a/openpype/tests/test_avalon_plugin_presets.py b/openpype/tests/test_avalon_plugin_presets.py index f1b1a94713..c491be1c05 100644 --- a/openpype/tests/test_avalon_plugin_presets.py +++ b/openpype/tests/test_avalon_plugin_presets.py @@ -1,6 +1,10 @@ import avalon.api as api import openpype -from openpype.pipeline import LegacyCreator +from openpype.pipeline import ( + LegacyCreator, + register_creator_plugin, + discover_creator_plugins, +) class MyTestCreator(LegacyCreator): @@ -27,8 +31,8 @@ def test_avalon_plugin_presets(monkeypatch, printer): openpype.install() api.register_host(Test()) - api.register_plugin(LegacyCreator, MyTestCreator) - plugins = api.discover(LegacyCreator) + register_creator_plugin(MyTestCreator) + plugins = discover_creator_plugins() printer("Test if we got our test plugin") assert MyTestCreator in plugins for p in plugins: diff --git a/openpype/tools/creator/model.py b/openpype/tools/creator/model.py index ef61c6e0f0..d3d60b96f2 100644 --- a/openpype/tools/creator/model.py +++ b/openpype/tools/creator/model.py @@ -1,8 +1,7 @@ import uuid from Qt import QtGui, QtCore -from avalon import api -from openpype.pipeline import LegacyCreator +from openpype.pipeline import discover_legacy_creator_plugins from . 
constants import ( FAMILY_ROLE, @@ -22,7 +21,7 @@ class CreatorsModel(QtGui.QStandardItemModel): self._creators_by_id = {} items = [] - creators = api.discover(LegacyCreator) + creators = discover_legacy_creator_plugins() for creator in creators: item_id = str(uuid.uuid4()) self._creators_by_id[item_id] = creator diff --git a/openpype/tools/publisher/widgets/create_dialog.py b/openpype/tools/publisher/widgets/create_dialog.py index 27ce97955a..7d98609c2c 100644 --- a/openpype/tools/publisher/widgets/create_dialog.py +++ b/openpype/tools/publisher/widgets/create_dialog.py @@ -271,7 +271,7 @@ class CreateDialog(QtWidgets.QDialog): create_btn.setEnabled(False) form_layout = QtWidgets.QFormLayout() - form_layout.addRow("Name:", variant_layout) + form_layout.addRow("Variant:", variant_layout) form_layout.addRow("Subset:", subset_name_input) mid_widget = QtWidgets.QWidget(self) diff --git a/openpype/tools/sceneinventory/switch_dialog.py b/openpype/tools/sceneinventory/switch_dialog.py index 252f5cde4c..bb3e2615ac 100644 --- a/openpype/tools/sceneinventory/switch_dialog.py +++ b/openpype/tools/sceneinventory/switch_dialog.py @@ -4,11 +4,12 @@ from Qt import QtWidgets, QtCore import qtawesome from bson.objectid import ObjectId -from avalon import io, pipeline -from openpype.pipeline import ( +from avalon import io +from openpype.pipeline.load import ( discover_loader_plugins, switch_container, get_repres_contexts, + loaders_from_repre_context, ) from .widgets import ( @@ -370,7 +371,7 @@ class SwitchAssetDialog(QtWidgets.QDialog): loaders = None for repre_context in repre_contexts.values(): - _loaders = set(pipeline.loaders_from_repre_context( + _loaders = set(loaders_from_repre_context( available_loaders, repre_context )) if loaders is None: diff --git a/openpype/tools/settings/settings/widgets.py b/openpype/tools/settings/settings/widgets.py index 577c2630ab..6db001f2f6 100644 --- a/openpype/tools/settings/settings/widgets.py +++ 
b/openpype/tools/settings/settings/widgets.py @@ -97,6 +97,9 @@ class CompleterView(QtWidgets.QListView): QtCore.Qt.FramelessWindowHint | QtCore.Qt.Tool ) + + # Open the widget unactivated + self.setAttribute(QtCore.Qt.WA_ShowWithoutActivating) delegate = QtWidgets.QStyledItemDelegate() self.setItemDelegate(delegate) @@ -225,10 +228,18 @@ class SettingsLineEdit(PlaceholderLineEdit): def __init__(self, *args, **kwargs): super(SettingsLineEdit, self).__init__(*args, **kwargs) - self._completer = None + # Timer which will get started on focus in and stopped on focus out + # - callback checks if line edit or completer have focus + # and hide completer if not + focus_timer = QtCore.QTimer() + focus_timer.setInterval(50) + focus_timer.timeout.connect(self._on_focus_timer) self.textChanged.connect(self._on_text_change) + self._completer = None + self._focus_timer = focus_timer + def _on_text_change(self, text): if self._completer is not None: self._completer.set_text_filter(text) @@ -240,19 +251,19 @@ class SettingsLineEdit(PlaceholderLineEdit): new_point = self.mapToGlobal(point) self._completer.move(new_point) + def _on_focus_timer(self): + if not self.hasFocus() and not self._completer.hasFocus(): + self._completer.hide() + self._focus_timer.stop() + def focusInEvent(self, event): super(SettingsLineEdit, self).focusInEvent(event) self.focused_in.emit() - if self._completer is None: - return - self._completer.show() - self._update_completer() - - def focusOutEvent(self, event): - super(SettingsLineEdit, self).focusOutEvent(event) if self._completer is not None: - self._completer.hide() + self._focus_timer.start() + self._completer.show() + self._update_completer() def paintEvent(self, event): super(SettingsLineEdit, self).paintEvent(event) diff --git a/openpype/tools/settings/settings/wrapper_widgets.py b/openpype/tools/settings/settings/wrapper_widgets.py index 7370fcf945..b14a226912 100644 --- a/openpype/tools/settings/settings/wrapper_widgets.py +++ 
b/openpype/tools/settings/settings/wrapper_widgets.py @@ -92,7 +92,8 @@ class CollapsibleWrapper(WrapperWidget): self.content_layout = content_layout if self.collapsible: - body_widget.toggle_content(self.collapsed) + if not self.collapsed: + body_widget.toggle_content() else: body_widget.hide_toolbox(hide_content=False) diff --git a/openpype/tools/utils/delegates.py b/openpype/tools/utils/delegates.py index d3718b1734..71f817a1d7 100644 --- a/openpype/tools/utils/delegates.py +++ b/openpype/tools/utils/delegates.py @@ -287,9 +287,5 @@ class PrettyTimeDelegate(QtWidgets.QStyledItemDelegate): """ def displayText(self, value, locale): - - if value is None: - # Ignore None value - return - - return pretty_timestamp(value) + if value is not None: + return pretty_timestamp(value) diff --git a/openpype/tools/utils/lib.py b/openpype/tools/utils/lib.py index 93b156bef8..422d0f5389 100644 --- a/openpype/tools/utils/lib.py +++ b/openpype/tools/utils/lib.py @@ -17,6 +17,8 @@ from openpype.lib import filter_profiles from openpype.style import get_objected_colors from openpype.resources import get_image_path +log = Logger.get_logger(__name__) + def center_window(window): """Move window to center of it's screen.""" @@ -111,13 +113,23 @@ def get_qta_icon_by_name_and_color(icon_name, icon_color): variants.append("{0}.{1}".format(key, icon_name)) icon = None + used_variant = None for variant in variants: try: icon = qtawesome.icon(variant, color=icon_color) + used_variant = variant break except Exception: pass + if used_variant is None: + log.info("Didn't find icon \"{}\"".format(icon_name)) + + elif used_variant != icon_name: + log.debug("Icon \"{}\" was not found \"{}\" is used instead".format( + icon_name, used_variant + )) + SharedObjects.icons[full_icon_name] = icon return icon @@ -140,8 +152,8 @@ def get_asset_icon_name(asset_doc, has_children=True): return icon_name if has_children: - return "folder" - return "folder-o" + return "fa.folder" + return "fa.folder-o" def 
get_asset_icon_color(asset_doc): diff --git a/openpype/tools/workfiles/__init__.py b/openpype/tools/workfiles/__init__.py index cde7293931..5fbc71797d 100644 --- a/openpype/tools/workfiles/__init__.py +++ b/openpype/tools/workfiles/__init__.py @@ -1,9 +1,12 @@ +from .window import Window from .app import ( show, - Window + validate_host_requirements, ) __all__ = [ + "Window", + "show", - "Window" + "validate_host_requirements", ] diff --git a/openpype/tools/workfiles/app.py b/openpype/tools/workfiles/app.py index b3d6003b28..f0e7900cf5 100644 --- a/openpype/tools/workfiles/app.py +++ b/openpype/tools/workfiles/app.py @@ -1,40 +1,10 @@ import sys -import os -import re -import copy -import shutil import logging -import datetime -import Qt -from Qt import QtWidgets, QtCore -from avalon import io, api +from avalon import api -from openpype import style -from openpype.tools.utils.lib import ( - qt_app_context -) -from openpype.tools.utils import PlaceholderLineEdit -from openpype.tools.utils.assets_widget import SingleSelectAssetsWidget -from openpype.tools.utils.tasks_widget import TasksWidget -from openpype.tools.utils.delegates import PrettyTimeDelegate -from openpype.lib import ( - emit_event, - Anatomy, - get_workfile_doc, - create_workfile_doc, - save_workfile_data_to_doc, - get_workfile_template_key, - create_workdir_extra_folders, - get_workdir_data, - get_last_workfile_with_version -) -from openpype.lib.avalon_context import ( - update_current_task, - compute_session_changes -) -from .model import FilesModel -from .view import FilesView +from openpype.tools.utils import qt_app_context +from .window import Window log = logging.getLogger(__name__) @@ -42,1181 +12,6 @@ module = sys.modules[__name__] module.window = None -def build_workfile_data(session): - """Get the data required for workfile formatting from avalon `session`""" - - # Set work file data for template formatting - asset_name = session["AVALON_ASSET"] - task_name = session["AVALON_TASK"] - host_name 
= session["AVALON_APP"] - project_doc = io.find_one( - {"type": "project"}, - { - "name": True, - "data.code": True, - "config.tasks": True, - } - ) - - asset_doc = io.find_one( - { - "type": "asset", - "name": asset_name - }, - { - "name": True, - "data.tasks": True, - "data.parents": True - } - ) - data = get_workdir_data(project_doc, asset_doc, task_name, host_name) - data.update({ - "version": 1, - "comment": "", - "ext": None - }) - - return data - - -class CommentMatcher(object): - """Use anatomy and work file data to parse comments from filenames""" - def __init__(self, anatomy, template_key, data): - - self.fname_regex = None - - template = anatomy.templates[template_key]["file"] - if "{comment}" not in template: - # Don't look for comment if template doesn't allow it - return - - # Create a regex group for extensions - extensions = api.registered_host().file_extensions() - any_extension = "(?:{})".format( - "|".join(re.escape(ext[1:]) for ext in extensions) - ) - - # Use placeholders that will never be in the filename - temp_data = copy.deepcopy(data) - temp_data["comment"] = "<>" - temp_data["version"] = "<>" - temp_data["ext"] = "<>" - - formatted = anatomy.format(temp_data) - fname_pattern = formatted[template_key]["file"] - fname_pattern = re.escape(fname_pattern) - - # Replace comment and version with something we can match with regex - replacements = { - "<>": "(.+)", - "<>": "[0-9]+", - "<>": any_extension, - } - for src, dest in replacements.items(): - fname_pattern = fname_pattern.replace(re.escape(src), dest) - - # Match from beginning to end of string to be safe - fname_pattern = "^{}$".format(fname_pattern) - - self.fname_regex = re.compile(fname_pattern) - - def parse_comment(self, filepath): - """Parse the {comment} part from a filename""" - if not self.fname_regex: - return - - fname = os.path.basename(filepath) - match = self.fname_regex.match(fname) - if match: - return match.group(1) - - -class SubversionLineEdit(QtWidgets.QWidget): - 
"""QLineEdit with QPushButton for drop down selection of list of strings""" - def __init__(self, parent=None): - super(SubversionLineEdit, self).__init__(parent=parent) - - layout = QtWidgets.QHBoxLayout(self) - layout.setContentsMargins(0, 0, 0, 0) - layout.setSpacing(3) - - self._input = PlaceholderLineEdit() - self._button = QtWidgets.QPushButton("") - self._button.setFixedWidth(18) - self._menu = QtWidgets.QMenu(self) - self._button.setMenu(self._menu) - - layout.addWidget(self._input) - layout.addWidget(self._button) - - @property - def input(self): - return self._input - - def set_values(self, values): - self._update(values) - - def _on_button_clicked(self): - self._menu.exec_() - - def _on_action_clicked(self, action): - self._input.setText(action.text()) - - def _update(self, values): - """Create optional predefined subset names - - Args: - default_names(list): all predefined names - - Returns: - None - """ - - menu = self._menu - button = self._button - - state = any(values) - button.setEnabled(state) - if state is False: - return - - # Include an empty string - values = [""] + sorted(values) - - # Get and destroy the action group - group = button.findChild(QtWidgets.QActionGroup) - if group: - group.deleteLater() - - # Build new action group - group = QtWidgets.QActionGroup(button) - for name in values: - action = group.addAction(name) - menu.addAction(action) - - group.triggered.connect(self._on_action_clicked) - - -class NameWindow(QtWidgets.QDialog): - """Name Window to define a unique filename inside a root folder - - The filename will be based on the "workfile" template defined in the - project["config"]["template"]. 
- - """ - - def __init__(self, parent, root, anatomy, template_key, session=None): - super(NameWindow, self).__init__(parent=parent) - self.setWindowFlags(self.windowFlags() | QtCore.Qt.FramelessWindowHint) - - self.result = None - self.host = api.registered_host() - self.root = root - self.work_file = None - - if not session: - # Fallback to active session - session = api.Session - - self.data = build_workfile_data(session) - - # Store project anatomy - self.anatomy = anatomy - self.template = anatomy.templates[template_key]["file"] - self.template_key = template_key - - # Btns widget - btns_widget = QtWidgets.QWidget(self) - - btn_ok = QtWidgets.QPushButton("Ok", btns_widget) - btn_cancel = QtWidgets.QPushButton("Cancel", btns_widget) - - btns_layout = QtWidgets.QHBoxLayout(btns_widget) - btns_layout.addWidget(btn_ok) - btns_layout.addWidget(btn_cancel) - - # Inputs widget - inputs_widget = QtWidgets.QWidget(self) - - # Version widget - version_widget = QtWidgets.QWidget(inputs_widget) - - # Version number input - version_input = QtWidgets.QSpinBox(version_widget) - version_input.setMinimum(1) - version_input.setMaximum(9999) - - # Last version checkbox - last_version_check = QtWidgets.QCheckBox( - "Next Available Version", version_widget - ) - last_version_check.setChecked(True) - - version_layout = QtWidgets.QHBoxLayout(version_widget) - version_layout.setContentsMargins(0, 0, 0, 0) - version_layout.addWidget(version_input) - version_layout.addWidget(last_version_check) - - # Preview widget - preview_label = QtWidgets.QLabel("Preview filename", inputs_widget) - - # Subversion input - subversion = SubversionLineEdit(inputs_widget) - subversion.input.setPlaceholderText("Will be part of filename.") - - # Extensions combobox - ext_combo = QtWidgets.QComboBox(inputs_widget) - # Add styled delegate to use stylesheets - ext_delegate = QtWidgets.QStyledItemDelegate() - ext_combo.setItemDelegate(ext_delegate) - ext_combo.addItems(self.host.file_extensions()) - - # Build 
inputs - inputs_layout = QtWidgets.QFormLayout(inputs_widget) - # Add version only if template contains version key - # - since the version can be padded with "{version:0>4}" we only search - # for "{version". - if "{version" in self.template: - inputs_layout.addRow("Version:", version_widget) - else: - version_widget.setVisible(False) - - # Add subversion only if template contains `{comment}` - if "{comment}" in self.template: - inputs_layout.addRow("Subversion:", subversion) - - # Detect whether a {comment} is in the current filename - if so, - # preserve it by default and set it in the comment/subversion field - current_filepath = self.host.current_file() - if current_filepath: - # We match the current filename against the current session - # instead of the session where the user is saving to. - current_data = build_workfile_data(api.Session) - matcher = CommentMatcher(anatomy, template_key, current_data) - comment = matcher.parse_comment(current_filepath) - if comment: - log.info("Detected subversion comment: {}".format(comment)) - self.data["comment"] = comment - subversion.input.setText(comment) - - existing_comments = self.get_existing_comments() - subversion.set_values(existing_comments) - - else: - subversion.setVisible(False) - inputs_layout.addRow("Extension:", ext_combo) - inputs_layout.addRow("Preview:", preview_label) - - # Build layout - main_layout = QtWidgets.QVBoxLayout(self) - main_layout.addWidget(inputs_widget) - main_layout.addWidget(btns_widget) - - # Signal callback registration - version_input.valueChanged.connect(self.on_version_spinbox_changed) - last_version_check.stateChanged.connect( - self.on_version_checkbox_changed - ) - - subversion.input.textChanged.connect(self.on_comment_changed) - ext_combo.currentIndexChanged.connect(self.on_extension_changed) - - btn_ok.pressed.connect(self.on_ok_pressed) - btn_cancel.pressed.connect(self.on_cancel_pressed) - - # Allow "Enter" key to accept the save. 
- btn_ok.setDefault(True) - - # Force default focus to comment, some hosts didn't automatically - # apply focus to this line edit (e.g. Houdini) - subversion.input.setFocus() - - # Store widgets - self.btn_ok = btn_ok - - self.version_widget = version_widget - - self.version_input = version_input - self.last_version_check = last_version_check - - self.preview_label = preview_label - self.subversion = subversion - self.ext_combo = ext_combo - self._ext_delegate = ext_delegate - - self.refresh() - - def get_existing_comments(self): - - matcher = CommentMatcher(self.anatomy, self.template_key, self.data) - host_extensions = set(self.host.file_extensions()) - comments = set() - if os.path.isdir(self.root): - for fname in os.listdir(self.root): - if not os.path.isfile(os.path.join(self.root, fname)): - continue - - ext = os.path.splitext(fname)[-1] - if ext not in host_extensions: - continue - - comment = matcher.parse_comment(fname) - if comment: - comments.add(comment) - - return list(comments) - - def on_version_spinbox_changed(self, value): - self.data["version"] = value - self.refresh() - - def on_version_checkbox_changed(self, _value): - self.refresh() - - def on_comment_changed(self, text): - self.data["comment"] = text - self.refresh() - - def on_extension_changed(self): - ext = self.ext_combo.currentText() - if ext == self.data["ext"]: - return - self.data["ext"] = ext - self.refresh() - - def on_ok_pressed(self): - self.result = self.work_file - self.close() - - def on_cancel_pressed(self): - self.close() - - def get_result(self): - return self.result - - def get_work_file(self): - data = copy.deepcopy(self.data) - if not data["comment"]: - data.pop("comment", None) - - data["ext"] = data["ext"][1:] - - anatomy_filled = self.anatomy.format(data) - return anatomy_filled[self.template_key]["file"] - - def refresh(self): - extensions = self.host.file_extensions() - extension = self.data["ext"] - if extension is None: - # Define saving file extension - 
current_file = self.host.current_file() - if current_file: - # Match the extension of current file - _, extension = os.path.splitext(current_file) - else: - extension = extensions[0] - - if extension != self.data["ext"]: - self.data["ext"] = extension - index = self.ext_combo.findText( - extension, QtCore.Qt.MatchFixedString - ) - if index >= 0: - self.ext_combo.setCurrentIndex(index) - - if not self.last_version_check.isChecked(): - self.version_input.setEnabled(True) - self.data["version"] = self.version_input.value() - - work_file = self.get_work_file() - - else: - self.version_input.setEnabled(False) - - data = copy.deepcopy(self.data) - template = str(self.template) - - if not data["comment"]: - data.pop("comment", None) - - data["ext"] = data["ext"][1:] - - version = get_last_workfile_with_version( - self.root, template, data, extensions - )[1] - - if version is None: - version = 1 - else: - version += 1 - - found_valid_version = False - # Check if next version is valid version and give a chance to try - # next 100 versions - for idx in range(100): - # Store version to data - self.data["version"] = version - - work_file = self.get_work_file() - # Safety check - path = os.path.join(self.root, work_file) - if not os.path.exists(path): - found_valid_version = True - break - - # Try next version - version += 1 - # Log warning - if idx == 0: - log.warning(( - "BUG: Function `get_last_workfile_with_version` " - "didn't return last version." - )) - # Raise exception if even 100 version fallback didn't help - if not found_valid_version: - raise AssertionError( - "This is a bug. Couldn't find valid version!" - ) - - self.work_file = work_file - - path_exists = os.path.exists(os.path.join(self.root, work_file)) - - self.btn_ok.setEnabled(not path_exists) - - if path_exists: - self.preview_label.setText( - "Cannot create \"{0}\" because file exists!" 
-                "".format(work_file)
-            )
-        else:
-            self.preview_label.setText(
-                "{0}".format(work_file)
-            )
-
-
-class FilesWidget(QtWidgets.QWidget):
-    """A widget displaying files that allows to save and open files."""
-    file_selected = QtCore.Signal(str)
-    workfile_created = QtCore.Signal(str)
-    file_opened = QtCore.Signal()
-
-    def __init__(self, parent=None):
-        super(FilesWidget, self).__init__(parent=parent)
-
-        # Setup
-        self._asset_id = None
-        self._asset_doc = None
-        self._task_name = None
-        self._task_type = None
-
-        # Pype's anatomy object for current project
-        self.anatomy = Anatomy(io.Session["AVALON_PROJECT"])
-        # Template key used to get work template from anatomy templates
-        self.template_key = "work"
-
-        # This is not root but workfile directory
-        self._workfiles_root = None
-        self._workdir_path = None
-        self.host = api.registered_host()
-
-        # Whether to automatically select the latest modified
-        # file on a refresh of the files model.
-        self.auto_select_latest_modified = True
-
-        # Avoid crash in Blender and store the message box
-        # (setting parent doesn't work as it hides the message box)
-        self._messagebox = None
-
-        files_view = FilesView(self)
-
-        # Create the Files model
-        extensions = set(self.host.file_extensions())
-        files_model = FilesModel(file_extensions=extensions)
-
-        # Create proxy model for files to be able sort and filter
-        proxy_model = QtCore.QSortFilterProxyModel()
-        proxy_model.setSourceModel(files_model)
-        proxy_model.setDynamicSortFilter(True)
-        proxy_model.setSortCaseSensitivity(QtCore.Qt.CaseInsensitive)
-
-        # Set up the file list tree view
-        files_view.setModel(proxy_model)
-        files_view.setSortingEnabled(True)
-        files_view.setContextMenuPolicy(QtCore.Qt.CustomContextMenu)
-
-        # Date modified delegate
-        time_delegate = PrettyTimeDelegate()
-        files_view.setItemDelegateForColumn(1, time_delegate)
-        files_view.setIndentation(3)  # smaller indentation
-
-        # Default to a wider first filename column it is what we mostly care
-        # about and the date modified is relatively small anyway.
-        files_view.setColumnWidth(0, 330)
-
-        # Filtering input
-        filter_input = PlaceholderLineEdit(self)
-        filter_input.setPlaceholderText("Filter files..")
-        filter_input.textChanged.connect(proxy_model.setFilterFixedString)
-
-        # Home Page
-        # Build buttons widget for files widget
-        btns_widget = QtWidgets.QWidget(self)
-        btn_save = QtWidgets.QPushButton("Save As", btns_widget)
-        btn_browse = QtWidgets.QPushButton("Browse", btns_widget)
-        btn_open = QtWidgets.QPushButton("Open", btns_widget)
-
-        btns_layout = QtWidgets.QHBoxLayout(btns_widget)
-        btns_layout.setContentsMargins(0, 0, 0, 0)
-        btns_layout.addWidget(btn_open)
-        btns_layout.addWidget(btn_browse)
-        btns_layout.addWidget(btn_save)
-
-        # Build files widgets for home page
-        main_layout = QtWidgets.QVBoxLayout(self)
-        main_layout.setContentsMargins(0, 0, 0, 0)
-        main_layout.addWidget(filter_input)
-        main_layout.addWidget(files_view)
-        main_layout.addWidget(btns_widget)
-
-        # Register signal callbacks
-        files_view.doubleClickedLeft.connect(self.on_open_pressed)
-        files_view.customContextMenuRequested.connect(self.on_context_menu)
-        files_view.selectionModel().selectionChanged.connect(
-            self.on_file_select
-        )
-
-        btn_open.pressed.connect(self.on_open_pressed)
-        btn_browse.pressed.connect(self.on_browse_pressed)
-        btn_save.pressed.connect(self.on_save_as_pressed)
-
-        # Store attributes
-        self.time_delegate = time_delegate
-
-        self.filter_input = filter_input
-
-        self.files_view = files_view
-        self.files_model = files_model
-
-        self.btns_widget = btns_widget
-        self.btn_open = btn_open
-        self.btn_browse = btn_browse
-        self.btn_save = btn_save
-
-    def set_asset_task(self, asset_id, task_name, task_type):
-        if asset_id != self._asset_id:
-            self._asset_doc = None
-        self._asset_id = asset_id
-        self._task_name = task_name
-        self._task_type = task_type
-
-        # Define a custom session so we can query the work root
-        # for a "Work area" that is not our current Session.
-        # This way we can browse it even before we enter it.
-        if self._asset_id and self._task_name and self._task_type:
-            session = self._get_session()
-            self._workdir_path = session["AVALON_WORKDIR"]
-            self._workfiles_root = self.host.work_root(session)
-            self.files_model.set_root(self._workfiles_root)
-
-        else:
-            self.files_model.set_root(None)
-
-        # Disable/Enable buttons based on available files in model
-        has_filenames = self.files_model.has_filenames()
-        self.btn_browse.setEnabled(has_filenames)
-        self.btn_open.setEnabled(has_filenames)
-        if not has_filenames:
-            # Manually trigger file selection
-            self.on_file_select()
-
-    def _get_asset_doc(self):
-        if self._asset_id is None:
-            return None
-
-        if self._asset_doc is None:
-            self._asset_doc = io.find_one({"_id": self._asset_id})
-        return self._asset_doc
-
-    def _get_session(self):
-        """Return a modified session for the current asset and task"""
-
-        session = api.Session.copy()
-        self.template_key = get_workfile_template_key(
-            self._task_type,
-            session["AVALON_APP"],
-            project_name=session["AVALON_PROJECT"]
-        )
-        changes = compute_session_changes(
-            session,
-            asset=self._get_asset_doc(),
-            task=self._task_name,
-            template_key=self.template_key
-        )
-        session.update(changes)
-
-        return session
-
-    def _enter_session(self):
-        """Enter the asset and task session currently selected"""
-
-        session = api.Session.copy()
-        changes = compute_session_changes(
-            session,
-            asset=self._get_asset_doc(),
-            task=self._task_name,
-            template_key=self.template_key
-        )
-        if not changes:
-            # Return early if we're already in the right Session context
-            # to avoid any unwanted Task Changed callbacks to be triggered.
-            return
-
-        update_current_task(
-            asset=self._get_asset_doc(),
-            task=self._task_name,
-            template_key=self.template_key
-        )
-
-    def open_file(self, filepath):
-        host = self.host
-        if host.has_unsaved_changes():
-            result = self.save_changes_prompt()
-            if result is None:
-                # Cancel operation
-                return False
-
-            # Save first if has changes
-            if result:
-                current_file = host.current_file()
-                if not current_file:
-                    # If the user requested to save the current scene
-                    # we can't actually automatically do so if the current
-                    # file has not been saved with a name yet. So we'll have
-                    # to opt out.
-                    log.error("Can't save scene with no filename. Please "
-                              "first save your work file using 'Save As'.")
-                    return
-
-                # Save current scene, continue to open file
-                host.save_file(current_file)
-
-        self._enter_session()
-        host.open_file(filepath)
-        self.file_opened.emit()
-
-    def save_changes_prompt(self):
-        self._messagebox = messagebox = QtWidgets.QMessageBox(parent=self)
-        messagebox.setWindowFlags(messagebox.windowFlags() |
-                                  QtCore.Qt.FramelessWindowHint)
-        messagebox.setIcon(messagebox.Warning)
-        messagebox.setWindowTitle("Unsaved Changes!")
-        messagebox.setText(
-            "There are unsaved changes to the current file."
-            "\nDo you want to save the changes?"
-        )
-        messagebox.setStandardButtons(
-            messagebox.Yes | messagebox.No | messagebox.Cancel
-        )
-
-        result = messagebox.exec_()
-        if result == messagebox.Yes:
-            return True
-        if result == messagebox.No:
-            return False
-        return None
-
-    def get_filename(self):
-        """Show save dialog to define filename for save or duplicate
-
-        Returns:
-            str: The filename to create.
-
-        """
-        session = self._get_session()
-
-        window = NameWindow(
-            parent=self,
-            root=self._workfiles_root,
-            anatomy=self.anatomy,
-            template_key=self.template_key,
-            session=session
-        )
-        window.exec_()
-
-        return window.get_result()
-
-    def on_duplicate_pressed(self):
-        work_file = self.get_filename()
-        if not work_file:
-            return
-
-        src = self._get_selected_filepath()
-        dst = os.path.join(self._workfiles_root, work_file)
-        shutil.copy(src, dst)
-
-        self.workfile_created.emit(dst)
-
-        self.refresh()
-
-    def _get_selected_filepath(self):
-        """Return current filepath selected in view"""
-        selection = self.files_view.selectionModel()
-        index = selection.currentIndex()
-        if not index.isValid():
-            return
-
-        return index.data(self.files_model.FilePathRole)
-
-    def on_open_pressed(self):
-        path = self._get_selected_filepath()
-        if not path:
-            print("No file selected to open..")
-            return
-
-        self.open_file(path)
-
-    def on_browse_pressed(self):
-        ext_filter = "Work File (*{0})".format(
-            " *".join(self.host.file_extensions())
-        )
-        kwargs = {
-            "caption": "Work Files",
-            "filter": ext_filter
-        }
-        if Qt.__binding__ in ("PySide", "PySide2"):
-            kwargs["dir"] = self._workfiles_root
-        else:
-            kwargs["directory"] = self._workfiles_root
-
-        work_file = QtWidgets.QFileDialog.getOpenFileName(**kwargs)[0]
-        if work_file:
-            self.open_file(work_file)
-
-    def on_save_as_pressed(self):
-        work_filename = self.get_filename()
-        if not work_filename:
-            return
-
-        # Trigger before save event
-        emit_event(
-            "workfile.save.before",
-            {"filename": work_filename, "workdir_path": self._workdir_path},
-            source="workfiles.tool"
-        )
-
-        # Make sure workfiles root is updated
-        # - this triggers 'workio.work_root(...)' which may change value of
-        #   '_workfiles_root'
-        self.set_asset_task(
-            self._asset_id, self._task_name, self._task_type
-        )
-
-        # Create workfiles root folder
-        if not os.path.exists(self._workfiles_root):
-            log.debug("Initializing Work Directory: %s", self._workfiles_root)
-            os.makedirs(self._workfiles_root)
-
-        # Update session if context has changed
-        self._enter_session()
-        # Prepare full path to workfile and save it
-        filepath = os.path.join(
-            os.path.normpath(self._workfiles_root), work_filename
-        )
-        self.host.save_file(filepath)
-        # Create extra folders
-        create_workdir_extra_folders(
-            self._workdir_path,
-            api.Session["AVALON_APP"],
-            self._task_type,
-            self._task_name,
-            api.Session["AVALON_PROJECT"]
-        )
-        # Trigger after save events
-        emit_event(
-            "workfile.save.after",
-            {"filename": work_filename, "workdir_path": self._workdir_path},
-            source="workfiles.tool"
-        )
-
-        self.workfile_created.emit(filepath)
-        # Refresh files model
-        self.refresh()
-
-    def on_file_select(self):
-        self.file_selected.emit(self._get_selected_filepath())
-
-    def refresh(self):
-        """Refresh listed files for current selection in the interface"""
-        self.files_model.refresh()
-
-        if self.auto_select_latest_modified:
-            self._select_last_modified_file()
-
-    def on_context_menu(self, point):
-        index = self.files_view.indexAt(point)
-        if not index.isValid():
-            return
-
-        is_enabled = index.data(FilesModel.IsEnabled)
-        if not is_enabled:
-            return
-
-        menu = QtWidgets.QMenu(self)
-
-        # Duplicate
-        action = QtWidgets.QAction("Duplicate", menu)
-        tip = "Duplicate selected file."
-        action.setToolTip(tip)
-        action.setStatusTip(tip)
-        action.triggered.connect(self.on_duplicate_pressed)
-        menu.addAction(action)
-
-        # Show the context action menu
-        global_point = self.files_view.mapToGlobal(point)
-        action = menu.exec_(global_point)
-        if not action:
-            return
-
-    def _select_last_modified_file(self):
-        """Utility function to select the file with latest date modified"""
-        role = self.files_model.DateModifiedRole
-        model = self.files_view.model()
-
-        highest_index = None
-        highest = 0
-        for row in range(model.rowCount()):
-            index = model.index(row, 0, parent=QtCore.QModelIndex())
-            if not index.isValid():
-                continue
-
-            modified = index.data(role)
-            if modified is not None and modified > highest:
-                highest_index = index
-                highest = modified
-
-        if highest_index:
-            self.files_view.setCurrentIndex(highest_index)
-
-
-class SidePanelWidget(QtWidgets.QWidget):
-    save_clicked = QtCore.Signal()
-
-    def __init__(self, parent=None):
-        super(SidePanelWidget, self).__init__(parent)
-
-        details_label = QtWidgets.QLabel("Details", self)
-        details_input = QtWidgets.QPlainTextEdit(self)
-        details_input.setReadOnly(True)
-
-        note_label = QtWidgets.QLabel("Artist note", self)
-        note_input = QtWidgets.QPlainTextEdit(self)
-        btn_note_save = QtWidgets.QPushButton("Save note", self)
-
-        main_layout = QtWidgets.QVBoxLayout(self)
-        main_layout.setContentsMargins(0, 0, 0, 0)
-        main_layout.addWidget(details_label, 0)
-        main_layout.addWidget(details_input, 0)
-        main_layout.addWidget(note_label, 0)
-        main_layout.addWidget(note_input, 1)
-        main_layout.addWidget(btn_note_save, alignment=QtCore.Qt.AlignRight)
-
-        note_input.textChanged.connect(self.on_note_change)
-        btn_note_save.clicked.connect(self.on_save_click)
-
-        self.details_input = details_input
-        self.note_input = note_input
-        self.btn_note_save = btn_note_save
-
-        self._orig_note = ""
-        self._workfile_doc = None
-
-    def on_note_change(self):
-        text = self.note_input.toPlainText()
-        self.btn_note_save.setEnabled(self._orig_note != text)
-
-    def on_save_click(self):
-        self._orig_note = self.note_input.toPlainText()
-        self.on_note_change()
-        self.save_clicked.emit()
-
-    def set_context(self, asset_id, task_name, filepath, workfile_doc):
-        # Check if asset, task and file are selected
-        # NOTE workfile document is not requirement
-        enabled = bool(asset_id) and bool(task_name) and bool(filepath)
-
-        self.details_input.setEnabled(enabled)
-        self.note_input.setEnabled(enabled)
-        self.btn_note_save.setEnabled(enabled)
-
-        # Make sure workfile doc is overridden
-        self._workfile_doc = workfile_doc
-        # Disable inputs and remove texts if any required arguments are missing
-        if not enabled:
-            self._orig_note = ""
-            self.details_input.setPlainText("")
-            self.note_input.setPlainText("")
-            return
-
-        orig_note = ""
-        if workfile_doc:
-            orig_note = workfile_doc["data"].get("note") or orig_note
-
-        self._orig_note = orig_note
-        self.note_input.setPlainText(orig_note)
-        # Set as empty string
-        self.details_input.setPlainText("")
-
-        filestat = os.stat(filepath)
-        size_ending_mapping = {
-            "KB": 1024 ** 1,
-            "MB": 1024 ** 2,
-            "GB": 1024 ** 3
-        }
-        size = filestat.st_size
-        ending = "B"
-        for _ending, _size in size_ending_mapping.items():
-            if filestat.st_size < _size:
-                break
-            size = filestat.st_size / _size
-            ending = _ending
-
-        # Append html string
-        datetime_format = "%b %d %Y %H:%M:%S"
-        creation_time = datetime.datetime.fromtimestamp(filestat.st_ctime)
-        modification_time = datetime.datetime.fromtimestamp(filestat.st_mtime)
-        lines = (
-            "Size:",
-            "{:.2f} {}".format(size, ending),
-            "Created:",
-            creation_time.strftime(datetime_format),
-            "Modified:",
-            modification_time.strftime(datetime_format)
-        )
-        self.details_input.appendHtml("<br>".join(lines))
-
-    def get_workfile_data(self):
-        data = {
-            "note": self.note_input.toPlainText()
-        }
-        return self._workfile_doc, data
-
-
-class Window(QtWidgets.QMainWindow):
-    """Work Files Window"""
-    title = "Work Files"
-
-    def __init__(self, parent=None):
-        super(Window, self).__init__(parent=parent)
-        self.setWindowTitle(self.title)
-        window_flags = QtCore.Qt.Window | QtCore.Qt.WindowCloseButtonHint
-        if not parent:
-            window_flags |= QtCore.Qt.WindowStaysOnTopHint
-        self.setWindowFlags(window_flags)
-
-        # Create pages widget and set it as central widget
-        pages_widget = QtWidgets.QStackedWidget(self)
-        self.setCentralWidget(pages_widget)
-
-        home_page_widget = QtWidgets.QWidget(pages_widget)
-        home_body_widget = QtWidgets.QWidget(home_page_widget)
-
-        assets_widget = SingleSelectAssetsWidget(io, parent=home_body_widget)
-        assets_widget.set_current_asset_btn_visibility(True)
-
-        tasks_widget = TasksWidget(io, home_body_widget)
-        files_widget = FilesWidget(home_body_widget)
-        side_panel = SidePanelWidget(home_body_widget)
-
-        pages_widget.addWidget(home_page_widget)
-
-        # Build home
-        home_page_layout = QtWidgets.QVBoxLayout(home_page_widget)
-        home_page_layout.addWidget(home_body_widget)
-
-        # Build home - body
-        body_layout = QtWidgets.QVBoxLayout(home_body_widget)
-        split_widget = QtWidgets.QSplitter(home_body_widget)
-        split_widget.addWidget(assets_widget)
-        split_widget.addWidget(tasks_widget)
-        split_widget.addWidget(files_widget)
-        split_widget.addWidget(side_panel)
-        split_widget.setSizes([255, 160, 455, 175])
-
-        body_layout.addWidget(split_widget)
-
-        # Add top margin for tasks to align it visually with files as
-        # the files widget has a filter field which tasks does not.
-        tasks_widget.setContentsMargins(0, 32, 0, 0)
-
-        # Set context after asset widget is refreshed
-        # - to do so it is necessary to wait until refresh is done
-        set_context_timer = QtCore.QTimer()
-        set_context_timer.setInterval(100)
-
-        # Connect signals
-        set_context_timer.timeout.connect(self._on_context_set_timeout)
-        assets_widget.selection_changed.connect(self._on_asset_changed)
-        tasks_widget.task_changed.connect(self._on_task_changed)
-        files_widget.file_selected.connect(self.on_file_select)
-        files_widget.workfile_created.connect(self.on_workfile_create)
-        files_widget.file_opened.connect(self._on_file_opened)
-        side_panel.save_clicked.connect(self.on_side_panel_save)
-
-        self._set_context_timer = set_context_timer
-        self.home_page_widget = home_page_widget
-        self.pages_widget = pages_widget
-        self.home_body_widget = home_body_widget
-        self.split_widget = split_widget
-
-        self.assets_widget = assets_widget
-        self.tasks_widget = tasks_widget
-        self.files_widget = files_widget
-        self.side_panel = side_panel
-
-        # Force focus on the open button by default, required for Houdini.
-        files_widget.btn_open.setFocus()
-
-        self.resize(1200, 600)
-
-        self._first_show = True
-        self._context_to_set = None
-
-    def showEvent(self, event):
-        super(Window, self).showEvent(event)
-        if self._first_show:
-            self._first_show = False
-            self.refresh()
-            self.setStyleSheet(style.load_stylesheet())
-
-    def keyPressEvent(self, event):
-        """Custom keyPressEvent.
-
-        Override keyPressEvent to do nothing so that Maya's panels won't
-        take focus when pressing "SHIFT" whilst mouse is over viewport or
-        outliner. This way users don't accidentally perform Maya commands
-        whilst trying to name an instance.
-
-        """
-
-    def set_save_enabled(self, enabled):
-        self.files_widget.btn_save.setEnabled(enabled)
-
-    def on_file_select(self, filepath):
-        asset_id = self.assets_widget.get_selected_asset_id()
-        task_name = self.tasks_widget.get_selected_task_name()
-
-        workfile_doc = None
-        if asset_id and task_name and filepath:
-            filename = os.path.split(filepath)[1]
-            workfile_doc = get_workfile_doc(
-                asset_id, task_name, filename, io
-            )
-        self.side_panel.set_context(
-            asset_id, task_name, filepath, workfile_doc
-        )
-
-    def on_workfile_create(self, filepath):
-        self._create_workfile_doc(filepath)
-
-    def _on_file_opened(self):
-        self.close()
-
-    def on_side_panel_save(self):
-        workfile_doc, data = self.side_panel.get_workfile_data()
-        if not workfile_doc:
-            filepath = self.files_widget._get_selected_filepath()
-            self._create_workfile_doc(filepath, force=True)
-            workfile_doc = self._get_current_workfile_doc()
-
-        save_workfile_data_to_doc(workfile_doc, data, io)
-
-    def _get_current_workfile_doc(self, filepath=None):
-        if filepath is None:
-            filepath = self.files_widget._get_selected_filepath()
-        task_name = self.tasks_widget.get_selected_task_name()
-        asset_id = self.assets_widget.get_selected_asset_id()
-        if not task_name or not asset_id or not filepath:
-            return
-
-        filename = os.path.split(filepath)[1]
-        return get_workfile_doc(
-            asset_id, task_name, filename, io
-        )
-
-    def _create_workfile_doc(self, filepath, force=False):
-        workfile_doc = None
-        if not force:
-            workfile_doc = self._get_current_workfile_doc(filepath)
-
-        if not workfile_doc:
-            workdir, filename = os.path.split(filepath)
-            asset_id = self.assets_widget.get_selected_asset_id()
-            asset_doc = io.find_one({"_id": asset_id})
-            task_name = self.tasks_widget.get_selected_task_name()
-            create_workfile_doc(asset_doc, task_name, filename, workdir, io)
-
-    def refresh(self):
-        # Refresh asset widget
-        self.assets_widget.refresh()
-
-        self._on_task_changed()
-
-    def set_context(self, context):
-        self._context_to_set = context
-        self._set_context_timer.start()
-
-    def _on_context_set_timeout(self):
-        if self._context_to_set is None:
-            self._set_context_timer.stop()
-            return
-
-        if self.assets_widget.refreshing:
-            return
-
-        self._context_to_set, context = None, self._context_to_set
-        if "asset" in context:
-            asset_doc = io.find_one(
-                {
-                    "name": context["asset"],
-                    "type": "asset"
-                },
-                {"_id": 1}
-            ) or {}
-            asset_id = asset_doc.get("_id")
-            # Select the asset
-            self.assets_widget.select_asset(asset_id)
-            self.tasks_widget.set_asset_id(asset_id)
-
-        if "task" in context:
-            self.tasks_widget.select_task_name(context["task"])
-        self._on_task_changed()
-
-    def _on_asset_changed(self):
-        asset_id = self.assets_widget.get_selected_asset_id()
-        if asset_id:
-            self.tasks_widget.setEnabled(True)
-        else:
-            # Force disable the other widgets if no
-            # active selection
-            self.tasks_widget.setEnabled(False)
-            self.files_widget.setEnabled(False)
-
-        self.tasks_widget.set_asset_id(asset_id)
-
-    def _on_task_changed(self):
-        asset_id = self.assets_widget.get_selected_asset_id()
-        task_name = self.tasks_widget.get_selected_task_name()
-        task_type = self.tasks_widget.get_selected_task_type()
-
-        asset_is_valid = asset_id is not None
-        self.tasks_widget.setEnabled(asset_is_valid)
-
-        self.files_widget.setEnabled(bool(task_name) and asset_is_valid)
-        self.files_widget.set_asset_task(asset_id, task_name, task_type)
-        self.files_widget.refresh()
-
-
 def validate_host_requirements(host):
     if host is None:
         raise RuntimeError("No registered host.")
diff --git a/openpype/tools/workfiles/files_widget.py b/openpype/tools/workfiles/files_widget.py
new file mode 100644
index 0000000000..d2b8a76952
--- /dev/null
+++ b/openpype/tools/workfiles/files_widget.py
@@ -0,0 +1,583 @@
+import os
+import logging
+import shutil
+
+import Qt
+from Qt import QtWidgets, QtCore
+from avalon import io, api
+
+from openpype.tools.utils import PlaceholderLineEdit
+from openpype.tools.utils.delegates import PrettyTimeDelegate
+from openpype.lib import (
+    emit_event,
+    Anatomy,
+    get_workfile_template_key,
+    create_workdir_extra_folders,
+)
+from openpype.lib.avalon_context import (
+    update_current_task,
+    compute_session_changes
+)
+from .model import (
+    WorkAreaFilesModel,
+    PublishFilesModel,
+
+    FILEPATH_ROLE,
+    DATE_MODIFIED_ROLE,
+)
+from .save_as_dialog import SaveAsDialog
+from .lib import TempPublishFiles
+
+log = logging.getLogger(__name__)
+
+
+class FilesView(QtWidgets.QTreeView):
+    doubleClickedLeft = QtCore.Signal()
+    doubleClickedRight = QtCore.Signal()
+
+    def mouseDoubleClickEvent(self, event):
+        if event.button() == QtCore.Qt.LeftButton:
+            self.doubleClickedLeft.emit()
+
+        elif event.button() == QtCore.Qt.RightButton:
+            self.doubleClickedRight.emit()
+
+        return super(FilesView, self).mouseDoubleClickEvent(event)
+
+
+class FilesWidget(QtWidgets.QWidget):
+    """A widget displaying files that allows to save and open files."""
+    file_selected = QtCore.Signal(str)
+    file_opened = QtCore.Signal()
+    publish_file_viewed = QtCore.Signal()
+    workfile_created = QtCore.Signal(str)
+    published_visible_changed = QtCore.Signal(bool)
+
+    def __init__(self, parent):
+        super(FilesWidget, self).__init__(parent)
+
+        # Setup
+        self._asset_id = None
+        self._asset_doc = None
+        self._task_name = None
+        self._task_type = None
+
+        # Pype's anatomy object for current project
+        self.anatomy = Anatomy(io.Session["AVALON_PROJECT"])
+        # Template key used to get work template from anatomy templates
+        self.template_key = "work"
+
+        # This is not root but workfile directory
+        self._workfiles_root = None
+        self._workdir_path = None
+        self.host = api.registered_host()
+        temp_publish_files = TempPublishFiles()
+        temp_publish_files.cleanup()
+        self._temp_publish_files = temp_publish_files
+
+        # Whether to automatically select the latest modified
+        # file on a refresh of the files model.
+        self.auto_select_latest_modified = True
+
+        # Avoid crash in Blender and store the message box
+        # (setting parent doesn't work as it hides the message box)
+        self._messagebox = None
+
+        # Filtering input
+        filter_widget = QtWidgets.QWidget(self)
+
+        published_checkbox = QtWidgets.QCheckBox("Published", filter_widget)
+
+        filter_input = PlaceholderLineEdit(filter_widget)
+        filter_input.setPlaceholderText("Filter files..")
+
+        filter_layout = QtWidgets.QHBoxLayout(filter_widget)
+        filter_layout.setContentsMargins(0, 0, 0, 0)
+        filter_layout.addWidget(published_checkbox, 0)
+        filter_layout.addWidget(filter_input, 1)
+
+        # Create the Files models
+        extensions = set(self.host.file_extensions())
+
+        views_widget = QtWidgets.QWidget(self)
+        # Workarea view
+        workarea_files_model = WorkAreaFilesModel(extensions)
+
+        # Create proxy model for files to be able sort and filter
+        workarea_proxy_model = QtCore.QSortFilterProxyModel()
+        workarea_proxy_model.setSourceModel(workarea_files_model)
+        workarea_proxy_model.setDynamicSortFilter(True)
+        workarea_proxy_model.setSortCaseSensitivity(QtCore.Qt.CaseInsensitive)
+
+        # Set up the file list tree view
+        workarea_files_view = FilesView(views_widget)
+        workarea_files_view.setModel(workarea_proxy_model)
+        workarea_files_view.setSortingEnabled(True)
+        workarea_files_view.setContextMenuPolicy(QtCore.Qt.CustomContextMenu)
+
+        # Date modified delegate
+        workarea_time_delegate = PrettyTimeDelegate()
+        workarea_files_view.setItemDelegateForColumn(1, workarea_time_delegate)
+        workarea_files_view.setIndentation(3)  # smaller indentation
+
+        # Default to a wider first filename column it is what we mostly care
+        # about and the date modified is relatively small anyway.
+        workarea_files_view.setColumnWidth(0, 330)
+
+        # Publish files view
+        publish_files_model = PublishFilesModel(extensions, io, self.anatomy)
+
+        publish_proxy_model = QtCore.QSortFilterProxyModel()
+        publish_proxy_model.setSourceModel(publish_files_model)
+        publish_proxy_model.setDynamicSortFilter(True)
+        publish_proxy_model.setSortCaseSensitivity(QtCore.Qt.CaseInsensitive)
+
+        publish_files_view = FilesView(views_widget)
+        publish_files_view.setModel(publish_proxy_model)
+
+        publish_files_view.setSortingEnabled(True)
+        publish_files_view.setContextMenuPolicy(QtCore.Qt.CustomContextMenu)
+
+        # Date modified delegate
+        publish_time_delegate = PrettyTimeDelegate()
+        publish_files_view.setItemDelegateForColumn(1, publish_time_delegate)
+        publish_files_view.setIndentation(3)  # smaller indentation
+
+        # Default to a wider first filename column it is what we mostly care
+        # about and the date modified is relatively small anyway.
+        publish_files_view.setColumnWidth(0, 330)
+
+        views_layout = QtWidgets.QHBoxLayout(views_widget)
+        views_layout.setContentsMargins(0, 0, 0, 0)
+        views_layout.addWidget(workarea_files_view, 1)
+        views_layout.addWidget(publish_files_view, 1)
+
+        # Home Page
+        # Build buttons widget for files widget
+        btns_widget = QtWidgets.QWidget(self)
+        btn_save = QtWidgets.QPushButton("Save As", btns_widget)
+        btn_browse = QtWidgets.QPushButton("Browse", btns_widget)
+        btn_open = QtWidgets.QPushButton("Open", btns_widget)
+
+        btn_view_published = QtWidgets.QPushButton("View", btns_widget)
+
+        btns_layout = QtWidgets.QHBoxLayout(btns_widget)
+        btns_layout.setContentsMargins(0, 0, 0, 0)
+        btns_layout.addWidget(btn_open, 1)
+        btns_layout.addWidget(btn_browse, 1)
+        btns_layout.addWidget(btn_save, 1)
+        btns_layout.addWidget(btn_view_published, 1)
+
+        # Build files widgets for home page
+        main_layout = QtWidgets.QVBoxLayout(self)
+        main_layout.setContentsMargins(0, 0, 0, 0)
+        main_layout.addWidget(filter_widget, 0)
+        main_layout.addWidget(views_widget, 1)
+        main_layout.addWidget(btns_widget, 0)
+
+        # Register signal callbacks
+        published_checkbox.stateChanged.connect(self._on_published_change)
+        filter_input.textChanged.connect(self._on_filter_text_change)
+
+        workarea_files_view.doubleClickedLeft.connect(
+            self._on_workarea_open_pressed
+        )
+        workarea_files_view.customContextMenuRequested.connect(
+            self._on_workarea_context_menu
+        )
+        workarea_files_view.selectionModel().selectionChanged.connect(
+            self.on_file_select
+        )
+        publish_files_view.doubleClickedLeft.connect(
+            self._on_view_published_pressed
+        )
+
+        btn_open.pressed.connect(self._on_workarea_open_pressed)
+        btn_browse.pressed.connect(self.on_browse_pressed)
+        btn_save.pressed.connect(self.on_save_as_pressed)
+        btn_view_published.pressed.connect(self._on_view_published_pressed)
+
+        # Store attributes
+        self._published_checkbox = published_checkbox
+        self._filter_input = filter_input
+
+        self._workarea_time_delegate = workarea_time_delegate
+        self._workarea_files_view = workarea_files_view
+        self._workarea_files_model = workarea_files_model
+        self._workarea_proxy_model = workarea_proxy_model
+
+        self._publish_time_delegate = publish_time_delegate
+        self._publish_files_view = publish_files_view
+        self._publish_files_model = publish_files_model
+        self._publish_proxy_model = publish_proxy_model
+
+        self._btns_widget = btns_widget
+        self._btn_open = btn_open
+        self._btn_browse = btn_browse
+        self._btn_save = btn_save
+        self._btn_view_published = btn_view_published
+
+        # Create a proxy widget for files widget
+        self.setFocusProxy(btn_open)
+
+        # Hide publish files widgets
+        publish_files_view.setVisible(False)
+        btn_view_published.setVisible(False)
+
+    @property
+    def published_enabled(self):
+        return self._published_checkbox.isChecked()
+
+    def _on_published_change(self):
+        published_enabled = self.published_enabled
+
+        self._workarea_files_view.setVisible(not published_enabled)
+        self._btn_open.setVisible(not published_enabled)
+        self._btn_browse.setVisible(not published_enabled)
+        self._btn_save.setVisible(not published_enabled)
+
+        self._publish_files_view.setVisible(published_enabled)
+        self._btn_view_published.setVisible(published_enabled)
+
+        self._update_filtering()
+        self._update_asset_task()
+
+        self.published_visible_changed.emit(published_enabled)
+
+        self._select_last_modified_file()
+
+    def _on_filter_text_change(self):
+        self._update_filtering()
+
+    def _update_filtering(self):
+        text = self._filter_input.text()
+        if self.published_enabled:
+            self._publish_proxy_model.setFilterFixedString(text)
+        else:
+            self._workarea_proxy_model.setFilterFixedString(text)
+
+    def set_save_enabled(self, enabled):
+        self._btn_save.setEnabled(enabled)
+
+    def set_asset_task(self, asset_id, task_name, task_type):
+        if asset_id != self._asset_id:
+            self._asset_doc = None
+        self._asset_id = asset_id
+        self._task_name = task_name
+        self._task_type = task_type
+        self._update_asset_task()
+
+    def _update_asset_task(self):
+        if self.published_enabled:
+            self._publish_files_model.set_context(
+                self._asset_id, self._task_name
+            )
+            has_valid_items = self._publish_files_model.has_valid_items()
+            self._btn_view_published.setEnabled(has_valid_items)
+        else:
+            # Define a custom session so we can query the work root
+            # for a "Work area" that is not our current Session.
+            # This way we can browse it even before we enter it.
+            if self._asset_id and self._task_name and self._task_type:
+                session = self._get_session()
+                self._workdir_path = session["AVALON_WORKDIR"]
+                self._workfiles_root = self.host.work_root(session)
+                self._workarea_files_model.set_root(self._workfiles_root)
+
+            else:
+                self._workarea_files_model.set_root(None)
+
+            # Disable/Enable buttons based on available files in model
+            has_valid_items = self._workarea_files_model.has_valid_items()
+            self._btn_browse.setEnabled(has_valid_items)
+            self._btn_open.setEnabled(has_valid_items)
+            # Manually trigger file selection
+            if not has_valid_items:
+                self.on_file_select()
+
+    def _get_asset_doc(self):
+        if self._asset_id is None:
+            return None
+
+        if self._asset_doc is None:
+            self._asset_doc = io.find_one({"_id": self._asset_id})
+        return self._asset_doc
+
+    def _get_session(self):
+        """Return a modified session for the current asset and task"""
+
+        session = api.Session.copy()
+        self.template_key = get_workfile_template_key(
+            self._task_type,
+            session["AVALON_APP"],
+            project_name=session["AVALON_PROJECT"]
+        )
+        changes = compute_session_changes(
+            session,
+            asset=self._get_asset_doc(),
+            task=self._task_name,
+            template_key=self.template_key
+        )
+        session.update(changes)
+
+        return session
+
+    def _enter_session(self):
+        """Enter the asset and task session currently selected"""
+
+        session = api.Session.copy()
+        changes = compute_session_changes(
+            session,
+            asset=self._get_asset_doc(),
+            task=self._task_name,
+            template_key=self.template_key
+        )
+        if not changes:
+            # Return early if we're already in the right Session context
+            # to avoid any unwanted Task Changed callbacks to be triggered.
+            return
+
+        update_current_task(
+            asset=self._get_asset_doc(),
+            task=self._task_name,
+            template_key=self.template_key
+        )
+
+    def open_file(self, filepath):
+        host = self.host
+        if host.has_unsaved_changes():
+            result = self.save_changes_prompt()
+            if result is None:
+                # Cancel operation
+                return False
+
+            # Save first if has changes
+            if result:
+                current_file = host.current_file()
+                if not current_file:
+                    # If the user requested to save the current scene
+                    # we can't actually automatically do so if the current
+                    # file has not been saved with a name yet. So we'll have
+                    # to opt out.
+                    log.error("Can't save scene with no filename. Please "
+                              "first save your work file using 'Save As'.")
+                    return
+
+                # Save current scene, continue to open file
+                host.save_file(current_file)
+
+        self._enter_session()
+        host.open_file(filepath)
+        self.file_opened.emit()
+
+    def save_changes_prompt(self):
+        self._messagebox = messagebox = QtWidgets.QMessageBox(parent=self)
+        messagebox.setWindowFlags(messagebox.windowFlags() |
+                                  QtCore.Qt.FramelessWindowHint)
+        messagebox.setIcon(messagebox.Warning)
+        messagebox.setWindowTitle("Unsaved Changes!")
+        messagebox.setText(
+            "There are unsaved changes to the current file."
+            "\nDo you want to save the changes?"
+        )
+        messagebox.setStandardButtons(
+            messagebox.Yes | messagebox.No | messagebox.Cancel
+        )
+
+        result = messagebox.exec_()
+        if result == messagebox.Yes:
+            return True
+        if result == messagebox.No:
+            return False
+        return None
+
+    def get_filename(self):
+        """Show save dialog to define filename for save or duplicate
+
+        Returns:
+            str: The filename to create.
+ + """ + session = self._get_session() + + window = SaveAsDialog( + parent=self, + root=self._workfiles_root, + anatomy=self.anatomy, + template_key=self.template_key, + session=session + ) + window.exec_() + + return window.get_result() + + def on_duplicate_pressed(self): + work_file = self.get_filename() + if not work_file: + return + + src = self._get_selected_filepath() + dst = os.path.join(self._workfiles_root, work_file) + shutil.copy(src, dst) + + self.workfile_created.emit(dst) + + self.refresh() + + def _get_selected_filepath(self): + """Return current filepath selected in view""" + if self.published_enabled: + source_view = self._publish_files_view + else: + source_view = self._workarea_files_view + selection = source_view.selectionModel() + index = selection.currentIndex() + if not index.isValid(): + return + + return index.data(FILEPATH_ROLE) + + def _on_workarea_open_pressed(self): + path = self._get_selected_filepath() + if not path: + print("No file selected to open..") + return + + self.open_file(path) + + def on_browse_pressed(self): + ext_filter = "Work File (*{0})".format( + " *".join(self.host.file_extensions()) + ) + kwargs = { + "caption": "Work Files", + "filter": ext_filter + } + if Qt.__binding__ in ("PySide", "PySide2"): + kwargs["dir"] = self._workfiles_root + else: + kwargs["directory"] = self._workfiles_root + + work_file = QtWidgets.QFileDialog.getOpenFileName(**kwargs)[0] + if work_file: + self.open_file(work_file) + + def on_save_as_pressed(self): + work_filename = self.get_filename() + if not work_filename: + return + + # Trigger before save event + emit_event( + "workfile.save.before", + {"filename": work_filename, "workdir_path": self._workdir_path}, + source="workfiles.tool" + ) + + # Make sure workfiles root is updated + # - this triggers 'workio.work_root(...)' which may change value of + # '_workfiles_root' + self.set_asset_task( + self._asset_id, self._task_name, self._task_type + ) + + # Create workfiles root folder + if 
not os.path.exists(self._workfiles_root): + log.debug("Initializing Work Directory: %s", self._workfiles_root) + os.makedirs(self._workfiles_root) + + # Update session if context has changed + self._enter_session() + # Prepare full path to workfile and save it + filepath = os.path.join( + os.path.normpath(self._workfiles_root), work_filename + ) + self.host.save_file(filepath) + # Create extra folders + create_workdir_extra_folders( + self._workdir_path, + api.Session["AVALON_APP"], + self._task_type, + self._task_name, + api.Session["AVALON_PROJECT"] + ) + # Trigger after save events + emit_event( + "workfile.save.after", + {"filename": work_filename, "workdir_path": self._workdir_path}, + source="workfiles.tool" + ) + + self.workfile_created.emit(filepath) + # Refresh files model + self.refresh() + + def _on_view_published_pressed(self): + filepath = self._get_selected_filepath() + if not filepath or not os.path.exists(filepath): + return + item = self._temp_publish_files.add_file(filepath) + self.host.open_file(item.filepath) + self.publish_file_viewed.emit() + # Change state back to workarea + self._published_checkbox.setChecked(False) + + def on_file_select(self): + self.file_selected.emit(self._get_selected_filepath()) + + def refresh(self): + """Refresh listed files for current selection in the interface""" + if self.published_enabled: + self._publish_files_model.refresh() + else: + self._workarea_files_model.refresh() + + if self.auto_select_latest_modified: + self._select_last_modified_file() + + def _on_workarea_context_menu(self, point): + index = self._workarea_files_view.indexAt(point) + if not index.isValid(): + return + + if not index.flags() & QtCore.Qt.ItemIsEnabled: + return + + menu = QtWidgets.QMenu(self) + + # Duplicate + action = QtWidgets.QAction("Duplicate", menu) + tip = "Duplicate selected file." 
+ action.setToolTip(tip) + action.setStatusTip(tip) + action.triggered.connect(self.on_duplicate_pressed) + menu.addAction(action) + + # Show the context action menu + global_point = self._workarea_files_view.mapToGlobal(point) + action = menu.exec_(global_point) + if not action: + return + + def _select_last_modified_file(self): + """Utility function to select the file with latest date modified""" + if self.published_enabled: + source_view = self._publish_files_view + else: + source_view = self._workarea_files_view + model = source_view.model() + + highest_index = None + highest = 0 + for row in range(model.rowCount()): + index = model.index(row, 0, parent=QtCore.QModelIndex()) + if not index.isValid(): + continue + + modified = index.data(DATE_MODIFIED_ROLE) + if modified is not None and modified > highest: + highest_index = index + highest = modified + + if highest_index: + source_view.setCurrentIndex(highest_index) diff --git a/openpype/tools/workfiles/lib.py b/openpype/tools/workfiles/lib.py new file mode 100644 index 0000000000..21a7485b7b --- /dev/null +++ b/openpype/tools/workfiles/lib.py @@ -0,0 +1,272 @@ +import os +import shutil +import uuid +import time +import json +import logging +import contextlib + +import appdirs + + +class TempPublishFilesItem(object): + """Object representing copied workfile in app temp folder. + + Args: + item_id (str): Id of item used as subfolder. + data (dict): Metadata about temp files. + directory (str): Path to directory where files are copied to. 
+    """
+
+    def __init__(self, item_id, data, directory):
+        self._id = item_id
+        self._directory = directory
+        self._filepath = os.path.join(directory, data["filename"])
+
+    @property
+    def directory(self):
+        return self._directory
+
+    @property
+    def filepath(self):
+        return self._filepath
+
+    @property
+    def id(self):
+        return self._id
+
+    @property
+    def size(self):
+        if os.path.exists(self.filepath):
+            s = os.stat(self.filepath)
+            return s.st_size
+        return 0
+
+
+class TempPublishFiles(object):
+    """Directory where published workfiles are copied when opened.
+
+    The directory is located in appdirs on the machine. The folder contains
+    a file with metadata about the stored files. Each item in metadata has
+    an id, filename and expiration time. When the expiration time is lower
+    than the current time, the item is removed from metadata and its files
+    are deleted. Files of an item are stored in a subfolder named by the
+    item's id.
+
+    The metadata file can in theory be opened and modified by multiple
+    processes or threads at the same time. For those cases a simple lock
+    file is created before a modification begins and is removed when the
+    modification ends. Existence of the lock file means that the metadata
+    should not be modified by any other process at the same time.
+
+    Metadata example:
+    ```
+    {
+        "96050b4a-8974-4fca-8179-7c446c478d54": {
+            "created": 1647880725.555,
+            "expiration": 1647884325.555,
+            "filename": "cg_pigeon_workfileModeling_v025.ma"
+        },
+        ...
+    }
+    ```
+
+    ## Why is this needed
+    A combination of issues. Temp files are not automatically removed by
+    the OS on Windows, so using tempfiles in TEMP would slowly eat the
+    machine's disk space. There are also cases when someone opens multiple
+    files in a short period of time and wants to remove those files
+    manually, so keeping track of the temporarily copied files in a
+    pre-defined structure is needed.
+    """
+    minute_in_seconds = 60
+    hour_in_seconds = 60 * minute_in_seconds
+    day_in_seconds = 24 * hour_in_seconds
+
+    def __init__(self):
+        root_dir = appdirs.user_data_dir(
+            "published_workfiles_temp", "openpype"
+        )
+        if not os.path.exists(root_dir):
+            os.makedirs(root_dir)
+
+        metadata_path = os.path.join(root_dir, "metadata.json")
+        lock_path = os.path.join(root_dir, "lock.json")
+
+        self._root_dir = root_dir
+        self._metadata_path = metadata_path
+        self._lock_path = lock_path
+        self._log = None
+
+    @property
+    def log(self):
+        if self._log is None:
+            self._log = logging.getLogger(self.__class__.__name__)
+        return self._log
+
+    @property
+    def life_time(self):
+        """How long a new item is kept in temp, in seconds.
+
+        Returns:
+            int: Lifetime of a temp item.
+        """
+        return int(self.hour_in_seconds)
+
+    @property
+    def size(self):
+        """File size of existing items."""
+        size = 0
+        for item in self.get_items():
+            size += item.size
+        return size
+
+    def add_file(self, src_path):
+        """Add workfile to temp directory.
+
+        This creates a new item and copies the source path into its
+        directory.
+        """
+        filename = os.path.basename(src_path)
+
+        item_id = str(uuid.uuid4())
+        dst_dirpath = os.path.join(self._root_dir, item_id)
+        if not os.path.exists(dst_dirpath):
+            os.makedirs(dst_dirpath)
+
+        dst_path = os.path.join(dst_dirpath, filename)
+        shutil.copy(src_path, dst_path)
+
+        now = time.time()
+        item_data = {
+            "filename": filename,
+            "expiration": now + self.life_time,
+            "created": now
+        }
+        with self._modify_data() as data:
+            data[item_id] = item_data
+
+        return TempPublishFilesItem(item_id, item_data, dst_dirpath)
+
+    @contextlib.contextmanager
+    def _modify_data(self):
+        """Create lock file while data in metadata file are modified."""
+        start_time = time.time()
+        timeout = 3
+        while os.path.exists(self._lock_path):
+            time.sleep(0.01)
+            # Compare elapsed time against the timeout and stop waiting
+            # once it is exceeded
+            if time.time() - start_time > timeout:
+                self.log.warning((
+                    "Waited for {} seconds to free lock file. Overriding lock."
+                ).format(timeout))
+                break
+
+        with open(self._lock_path, "w") as stream:
+            json.dump({"pid": os.getpid()}, stream)
+
+        try:
+            data = self._get_data()
+            yield data
+            with open(self._metadata_path, "w") as stream:
+                json.dump(data, stream)
+
+        finally:
+            os.remove(self._lock_path)
+
+    def _get_data(self):
+        output = {}
+        if not os.path.exists(self._metadata_path):
+            return output
+
+        try:
+            with open(self._metadata_path, "r") as stream:
+                output = json.load(stream)
+        except Exception:
+            self.log.warning("Failed to read metadata file.", exc_info=True)
+        return output
+
+    def cleanup(self, check_expiration=True):
+        """Cleanup files based on metadata.
+
+        Items that passed their expiration time are removed when this is
+        called, or all files are removed when `check_expiration` is set
+        to False.
+
+        Args:
+            check_expiration (bool): When set to False, all items and files
+                are removed regardless of their expiration time.
+        """
+        data = self._get_data()
+        now = time.time()
+        remove_ids = set()
+        all_ids = set()
+        for item_id, item_data in data.items():
+            all_ids.add(item_id)
+            if check_expiration and now < item_data["expiration"]:
+                continue
+
+            remove_ids.add(item_id)
+
+        for item_id in remove_ids:
+            try:
+                self.remove_id(item_id)
+            except Exception:
+                self.log.warning(
+                    "Failed to remove temp publish item \"{}\"".format(
+                        item_id
+                    ),
+                    exc_info=True
+                )
+
+        # Remove unknown folders/files
+        for filename in os.listdir(self._root_dir):
+            if filename in all_ids:
+                continue
+
+            full_path = os.path.join(self._root_dir, filename)
+            if full_path in (self._metadata_path, self._lock_path):
+                continue
+
+            try:
+                shutil.rmtree(full_path)
+            except Exception:
+                self.log.warning(
+                    "Couldn't remove arbitrary path \"{}\"".format(full_path),
+                    exc_info=True
+                )
+
+    def clear(self):
+        self.cleanup(False)
+
+    def get_items(self):
+        """Return all items from the metadata file.
+
+        Returns:
+            list: Info about each item in metadata.
+        """
+        output = []
+        data = self._get_data()
+        for item_id, item_data in data.items():
+            item_path = os.path.join(self._root_dir, item_id)
+            output.append(TempPublishFilesItem(item_id, item_data, item_path))
+        return output
+
+    def remove_id(self, item_id):
+        """Remove files of item and then remove the item from metadata."""
+        filepath = os.path.join(self._root_dir, item_id)
+        if os.path.exists(filepath):
+            shutil.rmtree(filepath)
+
+        with self._modify_data() as data:
+            data.pop(item_id, None)
+
+
+def file_size_to_string(file_size):
+    # Start from the raw byte count so sizes under 1 KB are reported
+    # correctly instead of "0.00 B"
+    size = file_size
+    size_ending_mapping = {
+        "KB": 1024 ** 1,
+        "MB": 1024 ** 2,
+        "GB": 1024 ** 3
+    }
+    ending = "B"
+    for _ending, _size in size_ending_mapping.items():
+        if file_size < _size:
+            break
+        size = file_size / _size
+        ending = _ending
+    return "{:.2f} {}".format(size, ending)
diff --git a/openpype/tools/workfiles/model.py b/openpype/tools/workfiles/model.py
index e9184842fc..8f9dd8c6ba 100644
--- a/openpype/tools/workfiles/model.py
+++ b/openpype/tools/workfiles/model.py
@@ -1,153 +1,179 @@
 import os
 import logging
 
-from Qt import QtCore
+from Qt import QtCore, QtGui
 
 import qtawesome
 
 from openpype.style import (
     get_default_entity_icon_color,
     get_disabled_entity_icon_color,
 )
-
-from openpype.tools.utils.models import TreeModel, Item
+from openpype.pipeline import get_representation_path
 
 log = logging.getLogger(__name__)
 
 
-class FilesModel(TreeModel):
-    """Model listing files with specified extensions in a root folder"""
-    Columns = ["filename", "date"]
+FILEPATH_ROLE = QtCore.Qt.UserRole + 2
+DATE_MODIFIED_ROLE = QtCore.Qt.UserRole + 3
+ITEM_ID_ROLE = QtCore.Qt.UserRole + 4
 
-    FileNameRole = QtCore.Qt.UserRole + 2
-    DateModifiedRole = QtCore.Qt.UserRole + 3
-    FilePathRole = QtCore.Qt.UserRole + 4
-    IsEnabled = QtCore.Qt.UserRole + 5
 
-    def __init__(self, file_extensions, parent=None):
-        super(FilesModel, self).__init__(parent=parent)
+class WorkAreaFilesModel(QtGui.QStandardItemModel):
+    """Model is looking into one folder for
files with extension.""" + + def __init__(self, extensions, *args, **kwargs): + super(WorkAreaFilesModel, self).__init__(*args, **kwargs) + + self.setColumnCount(2) self._root = None - self._file_extensions = file_extensions - self._icons = { - "file": qtawesome.icon( - "fa.file-o", - color=get_default_entity_icon_color() + self._file_extensions = extensions + self._invalid_path_item = None + self._empty_root_item = None + self._file_icon = qtawesome.icon( + "fa.file-o", + color=get_default_entity_icon_color() + ) + self._invalid_item_visible = False + self._items_by_filename = {} + + def _get_invalid_path_item(self): + if self._invalid_path_item is None: + message = "Work Area does not exist. Use Save As to create it." + item = QtGui.QStandardItem(message) + icon = qtawesome.icon( + "fa.times", + color=get_disabled_entity_icon_color() ) - } + item.setData(icon, QtCore.Qt.DecorationRole) + item.setFlags(QtCore.Qt.NoItemFlags) + item.setColumnCount(self.columnCount()) + self._invalid_path_item = item + return self._invalid_path_item + + def _get_empty_root_item(self): + if self._empty_root_item is None: + message = "Work Area is empty." 
+ item = QtGui.QStandardItem(message) + icon = qtawesome.icon( + "fa.times", + color=get_disabled_entity_icon_color() + ) + item.setData(icon, QtCore.Qt.DecorationRole) + item.setFlags(QtCore.Qt.NoItemFlags) + item.setColumnCount(self.columnCount()) + self._empty_root_item = item + return self._empty_root_item def set_root(self, root): + """Change directory where to look for file.""" self._root = root + if root and not os.path.exists(root): + log.debug("Work Area does not exist: {}".format(root)) self.refresh() - def _add_empty(self): - item = Item() - item.update({ - # Put a display message in 'filename' - "filename": "No files found.", - # Not-selectable - "enabled": False, - "date": None, - "filepath": None - }) - - self.add_child(item) + def _clear(self): + root_item = self.invisibleRootItem() + rows = root_item.rowCount() + if rows > 0: + if self._invalid_item_visible: + for row in range(rows): + root_item.takeRow(row) + else: + root_item.removeRows(0, rows) + self._items_by_filename = {} def refresh(self): - self.clear() - self.beginResetModel() - - root = self._root - - if not root: - self.endResetModel() - return - - if not os.path.exists(root): + """Refresh and update model items.""" + root_item = self.invisibleRootItem() + # If path is not set or does not exist then add invalid path item + if not self._root or not os.path.exists(self._root): + self._clear() # Add Work Area does not exist placeholder - log.debug("Work Area does not exist: %s", root) - message = "Work Area does not exist. Use Save As to create it." 
- item = Item({ - "filename": message, - "date": None, - "filepath": None, - "enabled": False, - "icon": qtawesome.icon( - "fa.times", - color=get_disabled_entity_icon_color() - ) - }) - self.add_child(item) - self.endResetModel() + item = self._get_invalid_path_item() + root_item.appendRow(item) + self._invalid_item_visible = True return - extensions = self._file_extensions + # Clear items if previous refresh set '_invalid_item_visible' to True + # - Invalid items are not stored to '_items_by_filename' so they would + # not be removed + if self._invalid_item_visible: + self._clear() - for filename in os.listdir(root): - path = os.path.join(root, filename) - if os.path.isdir(path): + # Check for new items that should be added and items that should be + # removed + new_items = [] + items_to_remove = set(self._items_by_filename.keys()) + for filename in os.listdir(self._root): + filepath = os.path.join(self._root, filename) + if os.path.isdir(filepath): continue ext = os.path.splitext(filename)[1] - if extensions and ext not in extensions: + if ext not in self._file_extensions: continue - modified = os.path.getmtime(path) + modified = os.path.getmtime(filepath) - item = Item({ - "filename": filename, - "date": modified, - "filepath": path - }) + # Use existing item or create new one + if filename in items_to_remove: + items_to_remove.remove(filename) + item = self._items_by_filename[filename] + else: + item = QtGui.QStandardItem(filename) + item.setColumnCount(self.columnCount()) + item.setFlags( + QtCore.Qt.ItemIsEnabled | QtCore.Qt.ItemIsSelectable + ) + item.setData(self._file_icon, QtCore.Qt.DecorationRole) + new_items.append(item) + self._items_by_filename[filename] = item + # Update data that may be different + item.setData(filepath, FILEPATH_ROLE) + item.setData(modified, DATE_MODIFIED_ROLE) - self.add_child(item) + # Add new items if there are any + if new_items: + root_item.appendRows(new_items) - if self.rowCount() == 0: - self._add_empty() + # Remove items 
that are no longer available + for filename in items_to_remove: + item = self._items_by_filename.pop(filename) + root_item.removeRow(item.row()) - self.endResetModel() - - def has_filenames(self): - for item in self._root_item.children(): - if item.get("enabled", True): - return True - return False - - def rowCount(self, parent=None): - if parent is None or not parent.isValid(): - parent_item = self._root_item + # Add empty root item if there are not filenames that could be shown + if root_item.rowCount() > 0: + self._invalid_item_visible = False else: - parent_item = parent.internalPointer() - return parent_item.childCount() + self._invalid_item_visible = True + item = self._get_empty_root_item() + root_item.appendRow(item) - def data(self, index, role): - if not index.isValid(): - return + def has_valid_items(self): + """Directory has files that are listed in items.""" + return not self._invalid_item_visible - if role == QtCore.Qt.DecorationRole: - # Add icon to filename column - item = index.internalPointer() - if index.column() == 0: - if item["filepath"]: - return self._icons["file"] - return item.get("icon", None) + def flags(self, index): + # Use flags of first column for all columns + if index.column() != 0: + index = self.index(index.row(), 0, index.parent()) + return super(WorkAreaFilesModel, self).flags(index) - if role == self.FileNameRole: - item = index.internalPointer() - return item["filename"] + def data(self, index, role=None): + if role is None: + role = QtCore.Qt.DisplayRole - if role == self.DateModifiedRole: - item = index.internalPointer() - return item["date"] + # Handle roles for first column + if index.column() == 1: + if role == QtCore.Qt.DecorationRole: + return None - if role == self.FilePathRole: - item = index.internalPointer() - return item["filepath"] + if role in (QtCore.Qt.DisplayRole, QtCore.Qt.EditRole): + role = DATE_MODIFIED_ROLE + index = self.index(index.row(), 0, index.parent()) - if role == self.IsEnabled: - item = 
index.internalPointer()
-            return item.get("enabled", True)
-
-        return super(FilesModel, self).data(index, role)
+        return super(WorkAreaFilesModel, self).data(index, role)
 
     def headerData(self, section, orientation, role):
         # Show nice labels in the header
@@ -160,4 +186,274 @@ class FilesModel(TreeModel):
             elif section == 1:
                 return "Date modified"
 
-        return super(FilesModel, self).headerData(section, orientation, role)
+        return super(WorkAreaFilesModel, self).headerData(
+            section, orientation, role
+        )
+
+
+class PublishFilesModel(QtGui.QStandardItemModel):
+    """Model listing published workfiles resolved from representations.
+
+    This model looks for workfile family representations based on the
+    selected asset and task.
+
+    An asset must be set to be able to look for representations that could
+    be used. A task is used to filter representations by task.
+    The model applies a few filter criteria:
+    - First criteria is that the version document must have "workfile" in
+        "data.families".
+    - Second criteria is that the representation must have an extension
+        matching the defined extensions.
+    - If a task is set then the representation must have a 'task["name"]'
+        with the same name.
+ """ + + def __init__(self, extensions, dbcon, anatomy, *args, **kwargs): + super(PublishFilesModel, self).__init__(*args, **kwargs) + + self.setColumnCount(2) + + self._dbcon = dbcon + self._anatomy = anatomy + self._file_extensions = extensions + + self._invalid_context_item = None + self._empty_root_item = None + self._file_icon = qtawesome.icon( + "fa.file-o", + color=get_default_entity_icon_color() + ) + self._invalid_icon = qtawesome.icon( + "fa.times", + color=get_disabled_entity_icon_color() + ) + self._invalid_item_visible = False + + self._items_by_id = {} + + self._asset_id = None + self._task_name = None + + def _set_item_invalid(self, item): + item.setFlags(QtCore.Qt.NoItemFlags) + item.setData(self._invalid_icon, QtCore.Qt.DecorationRole) + + def _set_item_valid(self, item): + item.setFlags( + QtCore.Qt.ItemIsEnabled | QtCore.Qt.ItemIsSelectable + ) + item.setData(self._file_icon, QtCore.Qt.DecorationRole) + + def _get_invalid_context_item(self): + if self._invalid_context_item is None: + item = QtGui.QStandardItem("Selected context is not valid.") + item.setColumnCount(self.columnCount()) + self._set_item_invalid(item) + self._invalid_context_item = item + return self._invalid_context_item + + def _get_empty_root_item(self): + if self._empty_root_item is None: + item = QtGui.QStandardItem("Didn't find any published workfiles.") + item.setColumnCount(self.columnCount()) + self._set_item_invalid(item) + self._empty_root_item = item + return self._empty_root_item + + def set_context(self, asset_id, task_name): + """Change context to asset and task. + + Args: + asset_id (ObjectId): Id of selected asset. + task_name (str): Name of selected task. 
+ """ + self._asset_id = asset_id + self._task_name = task_name + self.refresh() + + def _clear(self): + root_item = self.invisibleRootItem() + rows = root_item.rowCount() + if rows > 0: + if self._invalid_item_visible: + for row in range(rows): + root_item.takeRow(row) + else: + root_item.removeRows(0, rows) + self._items_by_id = {} + + def _get_workfie_representations(self): + output = [] + # Get subset docs of asset + subset_docs = self._dbcon.find( + { + "type": "subset", + "parent": self._asset_id + }, + { + "_id": True, + "name": True + } + ) + + subset_ids = [subset_doc["_id"] for subset_doc in subset_docs] + if not subset_ids: + return output + + # Get version docs of subsets with their families + version_docs = self._dbcon.find( + { + "type": "version", + "parent": {"$in": subset_ids} + }, + { + "_id": True, + "data.families": True, + "parent": True + } + ) + # Filter versions if they contain 'workfile' family + filtered_versions = [] + for version_doc in version_docs: + data = version_doc.get("data") or {} + families = data.get("families") or [] + if "workfile" in families: + filtered_versions.append(version_doc) + + version_ids = [version_doc["_id"] for version_doc in filtered_versions] + if not version_ids: + return output + + # Query representations of filtered versions and add filter for + # extension + extensions = [ext.replace(".", "") for ext in self._file_extensions] + repre_docs = self._dbcon.find( + { + "type": "representation", + "parent": {"$in": version_ids}, + "context.ext": {"$in": extensions} + } + ) + # Filter queried representations by task name if task is set + filtered_repre_docs = [] + for repre_doc in repre_docs: + if self._task_name is None: + filtered_repre_docs.append(repre_doc) + continue + + task_info = repre_doc["context"].get("task") + if not task_info: + print("Not task info") + continue + + if isinstance(task_info, dict): + task_name = task_info.get("name") + else: + task_name = task_info + + if task_name == self._task_name: 
+ filtered_repre_docs.append(repre_doc) + + # Collect paths of representations + for repre_doc in filtered_repre_docs: + path = get_representation_path( + repre_doc, root=self._anatomy.roots + ) + output.append((path, repre_doc["_id"])) + return output + + def refresh(self): + root_item = self.invisibleRootItem() + if not self._asset_id: + self._clear() + # Add Work Area does not exist placeholder + item = self._get_invalid_context_item() + root_item.appendRow(item) + self._invalid_item_visible = True + return + + if self._invalid_item_visible: + self._clear() + + new_items = [] + items_to_remove = set(self._items_by_id.keys()) + for item in self._get_workfie_representations(): + filepath, repre_id = item + # TODO handle empty filepaths + if not filepath: + continue + filename = os.path.basename(filepath) + + if repre_id in items_to_remove: + items_to_remove.remove(repre_id) + item = self._items_by_id[repre_id] + else: + item = QtGui.QStandardItem(filename) + item.setColumnCount(self.columnCount()) + new_items.append(item) + self._items_by_id[repre_id] = item + + if os.path.exists(filepath): + modified = os.path.getmtime(filepath) + tooltip = None + self._set_item_valid(item) + else: + modified = None + tooltip = "File is not available from this machine" + self._set_item_invalid(item) + + item.setData(tooltip, QtCore.Qt.ToolTipRole) + item.setData(filepath, FILEPATH_ROLE) + item.setData(modified, DATE_MODIFIED_ROLE) + item.setData(repre_id, ITEM_ID_ROLE) + + if new_items: + root_item.appendRows(new_items) + + for filename in items_to_remove: + item = self._items_by_id.pop(filename) + root_item.removeRow(item.row()) + + if root_item.rowCount() > 0: + self._invalid_item_visible = False + else: + self._invalid_item_visible = True + item = self._get_empty_root_item() + root_item.appendRow(item) + + def has_valid_items(self): + return not self._invalid_item_visible + + def flags(self, index): + if index.column() != 0: + index = self.index(index.row(), 0, index.parent()) 
+ return super(PublishFilesModel, self).flags(index) + + def data(self, index, role=None): + if role is None: + role = QtCore.Qt.DisplayRole + + if index.column() == 1: + if role == QtCore.Qt.DecorationRole: + return None + + if role in (QtCore.Qt.DisplayRole, QtCore.Qt.EditRole): + role = DATE_MODIFIED_ROLE + index = self.index(index.row(), 0, index.parent()) + + return super(PublishFilesModel, self).data(index, role) + + def headerData(self, section, orientation, role): + # Show nice labels in the header + if ( + role == QtCore.Qt.DisplayRole + and orientation == QtCore.Qt.Horizontal + ): + if section == 0: + return "Name" + elif section == 1: + return "Date modified" + + return super(PublishFilesModel, self).headerData( + section, orientation, role + ) diff --git a/openpype/tools/workfiles/save_as_dialog.py b/openpype/tools/workfiles/save_as_dialog.py new file mode 100644 index 0000000000..e616a325cc --- /dev/null +++ b/openpype/tools/workfiles/save_as_dialog.py @@ -0,0 +1,482 @@ +import os +import re +import copy +import logging + +from Qt import QtWidgets, QtCore + +from avalon import api, io + +from openpype.lib import ( + get_last_workfile_with_version, + get_workdir_data, +) +from openpype.tools.utils import PlaceholderLineEdit + +log = logging.getLogger(__name__) + + +def build_workfile_data(session): + """Get the data required for workfile formatting from avalon `session`""" + + # Set work file data for template formatting + asset_name = session["AVALON_ASSET"] + task_name = session["AVALON_TASK"] + host_name = session["AVALON_APP"] + project_doc = io.find_one( + {"type": "project"}, + { + "name": True, + "data.code": True, + "config.tasks": True, + } + ) + + asset_doc = io.find_one( + { + "type": "asset", + "name": asset_name + }, + { + "name": True, + "data.tasks": True, + "data.parents": True + } + ) + data = get_workdir_data(project_doc, asset_doc, task_name, host_name) + data.update({ + "version": 1, + "comment": "", + "ext": None + }) + + return 
data
+
+
+class CommentMatcher(object):
+    """Use anatomy and work file data to parse comments from filenames"""
+    def __init__(self, anatomy, template_key, data):
+
+        self.fname_regex = None
+
+        template = anatomy.templates[template_key]["file"]
+        if "{comment}" not in template:
+            # Don't look for comment if template doesn't allow it
+            return
+
+        # Create a regex group for extensions
+        extensions = api.registered_host().file_extensions()
+        any_extension = "(?:{})".format(
+            "|".join(re.escape(ext[1:]) for ext in extensions)
+        )
+
+        # Use placeholders that will never be in the filename
+        temp_data = copy.deepcopy(data)
+        temp_data["comment"] = "<<comment>>"
+        temp_data["version"] = "<<version>>"
+        temp_data["ext"] = "<<ext>>"
+
+        formatted = anatomy.format(temp_data)
+        fname_pattern = formatted[template_key]["file"]
+        fname_pattern = re.escape(fname_pattern)
+
+        # Replace comment and version with something we can match with regex
+        replacements = {
+            "<<comment>>": "(.+)",
+            "<<version>>": "[0-9]+",
+            "<<ext>>": any_extension,
+        }
+        for src, dest in replacements.items():
+            fname_pattern = fname_pattern.replace(re.escape(src), dest)
+
+        # Match from beginning to end of string to be safe
+        fname_pattern = "^{}$".format(fname_pattern)
+
+        self.fname_regex = re.compile(fname_pattern)
+
+    def parse_comment(self, filepath):
+        """Parse the {comment} part from a filename"""
+        if not self.fname_regex:
+            return
+
+        fname = os.path.basename(filepath)
+        match = self.fname_regex.match(fname)
+        if match:
+            return match.group(1)
+
+
+class SubversionLineEdit(QtWidgets.QWidget):
+    """QLineEdit with QPushButton for drop down selection of list of strings"""
+
+    text_changed = QtCore.Signal(str)
+
+    def __init__(self, *args, **kwargs):
+        super(SubversionLineEdit, self).__init__(*args, **kwargs)
+
+        input_field = PlaceholderLineEdit(self)
+        menu_btn = QtWidgets.QPushButton(self)
+        menu_btn.setFixedWidth(18)
+
+        menu = QtWidgets.QMenu(self)
+        menu_btn.setMenu(menu)
+
+        layout = QtWidgets.QHBoxLayout(self)
+
layout.setContentsMargins(0, 0, 0, 0)
+        layout.setSpacing(3)
+
+        layout.addWidget(input_field, 1)
+        layout.addWidget(menu_btn, 0)
+
+        input_field.textChanged.connect(self.text_changed)
+
+        self.setFocusProxy(input_field)
+
+        self._input_field = input_field
+        self._menu_btn = menu_btn
+        self._menu = menu
+
+    def set_placeholder(self, placeholder):
+        self._input_field.setPlaceholderText(placeholder)
+
+    def set_text(self, text):
+        self._input_field.setText(text)
+
+    def set_values(self, values):
+        self._update(values)
+
+    def _on_button_clicked(self):
+        self._menu.exec_()
+
+    def _on_action_clicked(self, action):
+        self._input_field.setText(action.text())
+
+    def _update(self, values):
+        """Fill the drop down menu with predefined subversion values.
+
+        Args:
+            values (list): All predefined subversion names.
+
+        Returns:
+            None
+        """
+
+        menu = self._menu
+        button = self._menu_btn
+
+        state = any(values)
+        button.setEnabled(state)
+        if state is False:
+            return
+
+        # Include an empty string
+        values = [""] + sorted(values)
+
+        # Get and destroy the action group
+        group = button.findChild(QtWidgets.QActionGroup)
+        if group:
+            group.deleteLater()
+
+        # Build new action group
+        group = QtWidgets.QActionGroup(button)
+        for name in values:
+            action = group.addAction(name)
+            menu.addAction(action)
+
+        group.triggered.connect(self._on_action_clicked)
+
+
+class SaveAsDialog(QtWidgets.QDialog):
+    """Dialog to define a unique filename inside a root folder.
+
+    The filename will be based on the "workfile" template defined in the
+    project["config"]["template"].
+
+    """
+
+    def __init__(self, parent, root, anatomy, template_key, session=None):
+        super(SaveAsDialog, self).__init__(parent=parent)
+        self.setWindowFlags(self.windowFlags() | QtCore.Qt.FramelessWindowHint)
+
+        self.result = None
+        self.host = api.registered_host()
+        self.root = root
+        self.work_file = None
+
+        if not session:
+            # Fallback to active session
+            session = api.Session
+
+        self.data = build_workfile_data(session)
+
+        # Store project anatomy
+        self.anatomy = anatomy
+        self.template = anatomy.templates[template_key]["file"]
+        self.template_key = template_key
+
+        # Btns widget
+        btns_widget = QtWidgets.QWidget(self)
+
+        btn_ok = QtWidgets.QPushButton("Ok", btns_widget)
+        btn_cancel = QtWidgets.QPushButton("Cancel", btns_widget)
+
+        btns_layout = QtWidgets.QHBoxLayout(btns_widget)
+        btns_layout.addWidget(btn_ok)
+        btns_layout.addWidget(btn_cancel)
+
+        # Inputs widget
+        inputs_widget = QtWidgets.QWidget(self)
+
+        # Version widget
+        version_widget = QtWidgets.QWidget(inputs_widget)
+
+        # Version number input
+        version_input = QtWidgets.QSpinBox(version_widget)
+        version_input.setMinimum(1)
+        version_input.setMaximum(9999)
+
+        # Last version checkbox
+        last_version_check = QtWidgets.QCheckBox(
+            "Next Available Version", version_widget
+        )
+        last_version_check.setChecked(True)
+
+        version_layout = QtWidgets.QHBoxLayout(version_widget)
+        version_layout.setContentsMargins(0, 0, 0, 0)
+        version_layout.addWidget(version_input)
+        version_layout.addWidget(last_version_check)
+
+        # Preview widget
+        preview_label = QtWidgets.QLabel("Preview filename", inputs_widget)
+
+        # Subversion input
+        subversion = SubversionLineEdit(inputs_widget)
+        subversion.set_placeholder("Will be part of filename.")
+
+        # Extensions combobox
+        ext_combo = QtWidgets.QComboBox(inputs_widget)
+        # Add styled delegate to use stylesheets
+        ext_delegate = QtWidgets.QStyledItemDelegate()
+        ext_combo.setItemDelegate(ext_delegate)
+        ext_combo.addItems(self.host.file_extensions())
+
+        # Build inputs
+        inputs_layout = QtWidgets.QFormLayout(inputs_widget)
+        # Add version only if template contains version key
+        # - since the version can be padded with "{version:0>4}" we only search
+        #   for "{version".
+        if "{version" in self.template:
+            inputs_layout.addRow("Version:", version_widget)
+        else:
+            version_widget.setVisible(False)
+
+        # Add subversion only if template contains `{comment}`
+        if "{comment}" in self.template:
+            inputs_layout.addRow("Subversion:", subversion)
+
+            # Detect whether a {comment} is in the current filename - if so,
+            # preserve it by default and set it in the comment/subversion field
+            current_filepath = self.host.current_file()
+            if current_filepath:
+                # We match the current filename against the current session
+                # instead of the session where the user is saving to.
+                current_data = build_workfile_data(api.Session)
+                matcher = CommentMatcher(anatomy, template_key, current_data)
+                comment = matcher.parse_comment(current_filepath)
+                if comment:
+                    log.info("Detected subversion comment: {}".format(comment))
+                    self.data["comment"] = comment
+                    subversion.set_text(comment)
+
+            existing_comments = self.get_existing_comments()
+            subversion.set_values(existing_comments)
+
+        else:
+            subversion.setVisible(False)
+        inputs_layout.addRow("Extension:", ext_combo)
+        inputs_layout.addRow("Preview:", preview_label)
+
+        # Build layout
+        main_layout = QtWidgets.QVBoxLayout(self)
+        main_layout.addWidget(inputs_widget)
+        main_layout.addWidget(btns_widget)
+
+        # Signal callback registration
+        version_input.valueChanged.connect(self.on_version_spinbox_changed)
+        last_version_check.stateChanged.connect(
+            self.on_version_checkbox_changed
+        )
+
+        subversion.text_changed.connect(self.on_comment_changed)
+        ext_combo.currentIndexChanged.connect(self.on_extension_changed)
+
+        btn_ok.pressed.connect(self.on_ok_pressed)
+        btn_cancel.pressed.connect(self.on_cancel_pressed)
+
+        # Allow "Enter" key to accept the save.
+        btn_ok.setDefault(True)
+
+        # Force default focus to comment, some hosts didn't automatically
+        # apply focus to this line edit (e.g. Houdini)
+        subversion.setFocus()
+
+        # Store widgets
+        self.btn_ok = btn_ok
+
+        self.version_widget = version_widget
+
+        self.version_input = version_input
+        self.last_version_check = last_version_check
+
+        self.preview_label = preview_label
+        self.subversion = subversion
+        self.ext_combo = ext_combo
+        self._ext_delegate = ext_delegate
+
+        self.refresh()
+
+    def get_existing_comments(self):
+        matcher = CommentMatcher(self.anatomy, self.template_key, self.data)
+        host_extensions = set(self.host.file_extensions())
+        comments = set()
+        if os.path.isdir(self.root):
+            for fname in os.listdir(self.root):
+                if not os.path.isfile(os.path.join(self.root, fname)):
+                    continue
+
+                ext = os.path.splitext(fname)[-1]
+                if ext not in host_extensions:
+                    continue
+
+                comment = matcher.parse_comment(fname)
+                if comment:
+                    comments.add(comment)
+
+        return list(comments)
+
+    def on_version_spinbox_changed(self, value):
+        self.data["version"] = value
+        self.refresh()
+
+    def on_version_checkbox_changed(self, _value):
+        self.refresh()
+
+    def on_comment_changed(self, text):
+        self.data["comment"] = text
+        self.refresh()
+
+    def on_extension_changed(self):
+        ext = self.ext_combo.currentText()
+        if ext == self.data["ext"]:
+            return
+        self.data["ext"] = ext
+        self.refresh()
+
+    def on_ok_pressed(self):
+        self.result = self.work_file
+        self.close()
+
+    def on_cancel_pressed(self):
+        self.close()
+
+    def get_result(self):
+        return self.result
+
+    def get_work_file(self):
+        data = copy.deepcopy(self.data)
+        if not data["comment"]:
+            data.pop("comment", None)
+
+        data["ext"] = data["ext"][1:]
+
+        anatomy_filled = self.anatomy.format(data)
+        return anatomy_filled[self.template_key]["file"]
+
+    def refresh(self):
+        extensions = self.host.file_extensions()
+        extension = self.data["ext"]
+        if extension is None:
+            # Define saving file extension
+            current_file = self.host.current_file()
+            if current_file:
+                # Match the extension of current file
+                _, extension = os.path.splitext(current_file)
+            else:
+                extension = extensions[0]
+
+        if extension != self.data["ext"]:
+            self.data["ext"] = extension
+            index = self.ext_combo.findText(
+                extension, QtCore.Qt.MatchFixedString
+            )
+            if index >= 0:
+                self.ext_combo.setCurrentIndex(index)
+
+        if not self.last_version_check.isChecked():
+            self.version_input.setEnabled(True)
+            self.data["version"] = self.version_input.value()
+
+            work_file = self.get_work_file()
+
+        else:
+            self.version_input.setEnabled(False)
+
+            data = copy.deepcopy(self.data)
+            template = str(self.template)
+
+            if not data["comment"]:
+                data.pop("comment", None)
+
+            data["ext"] = data["ext"][1:]
+
+            version = get_last_workfile_with_version(
+                self.root, template, data, extensions
+            )[1]
+
+            if version is None:
+                version = 1
+            else:
+                version += 1
+
+            found_valid_version = False
+            # Check if next version is valid version and give a chance to try
+            # next 100 versions
+            for idx in range(100):
+                # Store version to data
+                self.data["version"] = version
+
+                work_file = self.get_work_file()
+                # Safety check
+                path = os.path.join(self.root, work_file)
+                if not os.path.exists(path):
+                    found_valid_version = True
+                    break
+
+                # Try next version
+                version += 1
+                # Log warning
+                if idx == 0:
+                    log.warning((
+                        "BUG: Function `get_last_workfile_with_version` "
+                        "didn't return last version."
+                    ))
+            # Raise exception if even 100 version fallback didn't help
+            if not found_valid_version:
+                raise AssertionError(
+                    "This is a bug. Couldn't find valid version!"
+                )
+
+        self.work_file = work_file
+
+        path_exists = os.path.exists(os.path.join(self.root, work_file))
+
+        self.btn_ok.setEnabled(not path_exists)
+
+        if path_exists:
+            self.preview_label.setText(
+                "Cannot create \"{0}\" because file exists!"
+                "".format(work_file)
+            )
+        else:
+            self.preview_label.setText(
+                "{0}".format(work_file)
+            )
diff --git a/openpype/tools/workfiles/view.py b/openpype/tools/workfiles/view.py
deleted file mode 100644
index 8e3993e4c7..0000000000
--- a/openpype/tools/workfiles/view.py
+++ /dev/null
@@ -1,15 +0,0 @@
-from Qt import QtWidgets, QtCore
-
-
-class FilesView(QtWidgets.QTreeView):
-    doubleClickedLeft = QtCore.Signal()
-    doubleClickedRight = QtCore.Signal()
-
-    def mouseDoubleClickEvent(self, event):
-        if event.button() == QtCore.Qt.LeftButton:
-            self.doubleClickedLeft.emit()
-
-        elif event.button() == QtCore.Qt.RightButton:
-            self.doubleClickedRight.emit()
-
-        return super(FilesView, self).mouseDoubleClickEvent(event)
diff --git a/openpype/tools/workfiles/window.py b/openpype/tools/workfiles/window.py
new file mode 100644
index 0000000000..8654a18036
--- /dev/null
+++ b/openpype/tools/workfiles/window.py
@@ -0,0 +1,393 @@
+import os
+import datetime
+from Qt import QtCore, QtWidgets
+
+from avalon import io
+
+from openpype import style
+from openpype.lib import (
+    get_workfile_doc,
+    create_workfile_doc,
+    save_workfile_data_to_doc,
+)
+from openpype.tools.utils.assets_widget import SingleSelectAssetsWidget
+from openpype.tools.utils.tasks_widget import TasksWidget
+
+from .files_widget import FilesWidget
+from .lib import TempPublishFiles, file_size_to_string
+
+
+class SidePanelWidget(QtWidgets.QWidget):
+    save_clicked = QtCore.Signal()
+    published_workfile_message = (
+        "INFO: Opened published workfiles will be stored in"
+        " temp directory on your machine. Current temp size: {}."
+    )
+
+    def __init__(self, parent=None):
+        super(SidePanelWidget, self).__init__(parent)
+
+        details_label = QtWidgets.QLabel("Details", self)
+        details_input = QtWidgets.QPlainTextEdit(self)
+        details_input.setReadOnly(True)
+
+        artist_note_widget = QtWidgets.QWidget(self)
+        note_label = QtWidgets.QLabel("Artist note", artist_note_widget)
+        note_input = QtWidgets.QPlainTextEdit(artist_note_widget)
+        btn_note_save = QtWidgets.QPushButton("Save note", artist_note_widget)
+
+        artist_note_layout = QtWidgets.QVBoxLayout(artist_note_widget)
+        artist_note_layout.setContentsMargins(0, 0, 0, 0)
+        artist_note_layout.addWidget(note_label, 0)
+        artist_note_layout.addWidget(note_input, 1)
+        artist_note_layout.addWidget(
+            btn_note_save, 0, alignment=QtCore.Qt.AlignRight
+        )
+
+        publish_temp_widget = QtWidgets.QWidget(self)
+        publish_temp_info_label = QtWidgets.QLabel(
+            self.published_workfile_message.format(
+                file_size_to_string(0)
+            ),
+            publish_temp_widget
+        )
+        publish_temp_info_label.setWordWrap(True)
+
+        btn_clear_temp = QtWidgets.QPushButton(
+            "Clear temp", publish_temp_widget
+        )
+
+        publish_temp_layout = QtWidgets.QVBoxLayout(publish_temp_widget)
+        publish_temp_layout.setContentsMargins(0, 0, 0, 0)
+        publish_temp_layout.addWidget(publish_temp_info_label, 0)
+        publish_temp_layout.addWidget(
+            btn_clear_temp, 0, alignment=QtCore.Qt.AlignRight
+        )
+
+        main_layout = QtWidgets.QVBoxLayout(self)
+        main_layout.setContentsMargins(0, 0, 0, 0)
+        main_layout.addWidget(details_label, 0)
+        main_layout.addWidget(details_input, 1)
+        main_layout.addWidget(artist_note_widget, 1)
+        main_layout.addWidget(publish_temp_widget, 0)
+
+        note_input.textChanged.connect(self._on_note_change)
+        btn_note_save.clicked.connect(self._on_save_click)
+        btn_clear_temp.clicked.connect(self._on_clear_temp_click)
+
+        self._details_input = details_input
+        self._artist_note_widget = artist_note_widget
+        self._note_input = note_input
+        self._btn_note_save = btn_note_save
+
+        self._publish_temp_info_label = publish_temp_info_label
+        self._publish_temp_widget = publish_temp_widget
+
+        self._orig_note = ""
+        self._workfile_doc = None
+
+        publish_temp_widget.setVisible(False)
+
+    def set_published_visible(self, published_visible):
+        self._artist_note_widget.setVisible(not published_visible)
+        self._publish_temp_widget.setVisible(published_visible)
+        if published_visible:
+            self.refresh_publish_temp_sizes()
+
+    def refresh_publish_temp_sizes(self):
+        temp_publish_files = TempPublishFiles()
+        text = self.published_workfile_message.format(
+            file_size_to_string(temp_publish_files.size)
+        )
+        self._publish_temp_info_label.setText(text)
+
+    def _on_clear_temp_click(self):
+        temp_publish_files = TempPublishFiles()
+        temp_publish_files.clear()
+        self.refresh_publish_temp_sizes()
+
+    def _on_note_change(self):
+        text = self._note_input.toPlainText()
+        self._btn_note_save.setEnabled(self._orig_note != text)
+
+    def _on_save_click(self):
+        self._orig_note = self._note_input.toPlainText()
+        self._on_note_change()
+        self.save_clicked.emit()
+
+    def set_context(self, asset_id, task_name, filepath, workfile_doc):
+        # Check if asset, task and file are selected
+        # NOTE workfile document is not requirement
+        enabled = bool(asset_id) and bool(task_name) and bool(filepath)
+
+        self._details_input.setEnabled(enabled)
+        self._note_input.setEnabled(enabled)
+        self._btn_note_save.setEnabled(enabled)
+
+        # Make sure workfile doc is overridden
+        self._workfile_doc = workfile_doc
+        # Disable inputs and remove texts if any required arguments are missing
+        if not enabled:
+            self._orig_note = ""
+            self._details_input.setPlainText("")
+            self._note_input.setPlainText("")
+            return
+
+        orig_note = ""
+        if workfile_doc:
+            orig_note = workfile_doc["data"].get("note") or orig_note
+
+        self._orig_note = orig_note
+        self._note_input.setPlainText(orig_note)
+        # Set as empty string
+        self._details_input.setPlainText("")
+
+        filestat = os.stat(filepath)
+        size_value = file_size_to_string(filestat.st_size)
+
+        # Append html string
+        datetime_format = "%b %d %Y %H:%M:%S"
+        creation_time = datetime.datetime.fromtimestamp(filestat.st_ctime)
+        modification_time = datetime.datetime.fromtimestamp(filestat.st_mtime)
+        lines = (
+            "<b>Size:</b>",
+            size_value,
+            "<b>Created:</b>",
+            creation_time.strftime(datetime_format),
+            "<b>Modified:</b>",
+            modification_time.strftime(datetime_format)
+        )
+        self._details_input.appendHtml("<br>".join(lines))
+
+    def get_workfile_data(self):
+        data = {
+            "note": self._note_input.toPlainText()
+        }
+        return self._workfile_doc, data
+
+
+class Window(QtWidgets.QMainWindow):
+    """Work Files Window"""
+    title = "Work Files"
+
+    def __init__(self, parent=None):
+        super(Window, self).__init__(parent=parent)
+        self.setWindowTitle(self.title)
+        window_flags = QtCore.Qt.Window | QtCore.Qt.WindowCloseButtonHint
+        if not parent:
+            window_flags |= QtCore.Qt.WindowStaysOnTopHint
+        self.setWindowFlags(window_flags)
+
+        # Create pages widget and set it as central widget
+        pages_widget = QtWidgets.QStackedWidget(self)
+        self.setCentralWidget(pages_widget)
+
+        home_page_widget = QtWidgets.QWidget(pages_widget)
+        home_body_widget = QtWidgets.QWidget(home_page_widget)
+
+        assets_widget = SingleSelectAssetsWidget(io, parent=home_body_widget)
+        assets_widget.set_current_asset_btn_visibility(True)
+
+        tasks_widget = TasksWidget(io, home_body_widget)
+        files_widget = FilesWidget(home_body_widget)
+        side_panel = SidePanelWidget(home_body_widget)
+
+        pages_widget.addWidget(home_page_widget)
+
+        # Build home
+        home_page_layout = QtWidgets.QVBoxLayout(home_page_widget)
+        home_page_layout.addWidget(home_body_widget)
+
+        # Build home - body
+        body_layout = QtWidgets.QVBoxLayout(home_body_widget)
+        split_widget = QtWidgets.QSplitter(home_body_widget)
+        split_widget.addWidget(assets_widget)
+        split_widget.addWidget(tasks_widget)
+        split_widget.addWidget(files_widget)
+        split_widget.addWidget(side_panel)
+        split_widget.setSizes([255, 160, 455, 175])
+
+        body_layout.addWidget(split_widget)
+
+        # Add top margin for tasks to align it visually with files as
+        # the files widget has a filter field which tasks does not.
+        tasks_widget.setContentsMargins(0, 32, 0, 0)
+
+        # Set context after asset widget is refreshed
+        # - to do so it is necessary to wait until refresh is done
+        set_context_timer = QtCore.QTimer()
+        set_context_timer.setInterval(100)
+
+        # Connect signals
+        set_context_timer.timeout.connect(self._on_context_set_timeout)
+        assets_widget.selection_changed.connect(self._on_asset_changed)
+        tasks_widget.task_changed.connect(self._on_task_changed)
+        files_widget.file_selected.connect(self.on_file_select)
+        files_widget.workfile_created.connect(self.on_workfile_create)
+        files_widget.file_opened.connect(self._on_file_opened)
+        files_widget.publish_file_viewed.connect(
+            self._on_publish_file_viewed
+        )
+        files_widget.published_visible_changed.connect(
+            self._on_published_change
+        )
+        side_panel.save_clicked.connect(self.on_side_panel_save)
+
+        self._set_context_timer = set_context_timer
+        self.home_page_widget = home_page_widget
+        self.pages_widget = pages_widget
+        self.home_body_widget = home_body_widget
+        self.split_widget = split_widget
+
+        self.assets_widget = assets_widget
+        self.tasks_widget = tasks_widget
+        self.files_widget = files_widget
+        self.side_panel = side_panel
+
+        # Force focus on the open button by default, required for Houdini.
+        files_widget.setFocus()
+
+        self.resize(1200, 600)
+
+        self._first_show = True
+        self._context_to_set = None
+
+    def showEvent(self, event):
+        super(Window, self).showEvent(event)
+        if self._first_show:
+            self._first_show = False
+            self.refresh()
+            self.setStyleSheet(style.load_stylesheet())
+
+    def keyPressEvent(self, event):
+        """Custom keyPressEvent.
+
+        Override keyPressEvent to do nothing so that Maya's panels won't
+        take focus when pressing "SHIFT" whilst mouse is over viewport or
+        outliner. This way users don't accidentally perform Maya commands
+        whilst trying to name an instance.
+
+        """
+
+    def set_save_enabled(self, enabled):
+        self.files_widget.set_save_enabled(enabled)
+
+    def on_file_select(self, filepath):
+        asset_id = self.assets_widget.get_selected_asset_id()
+        task_name = self.tasks_widget.get_selected_task_name()
+
+        workfile_doc = None
+        if asset_id and task_name and filepath:
+            filename = os.path.split(filepath)[1]
+            workfile_doc = get_workfile_doc(
+                asset_id, task_name, filename, io
+            )
+        self.side_panel.set_context(
+            asset_id, task_name, filepath, workfile_doc
+        )
+
+    def on_workfile_create(self, filepath):
+        self._create_workfile_doc(filepath)
+
+    def _on_file_opened(self):
+        self.close()
+
+    def _on_publish_file_viewed(self):
+        self.side_panel.refresh_publish_temp_sizes()
+
+    def _on_published_change(self, visible):
+        self.side_panel.set_published_visible(visible)
+
+    def on_side_panel_save(self):
+        workfile_doc, data = self.side_panel.get_workfile_data()
+        if not workfile_doc:
+            filepath = self.files_widget._get_selected_filepath()
+            self._create_workfile_doc(filepath, force=True)
+            workfile_doc = self._get_current_workfile_doc()
+
+        save_workfile_data_to_doc(workfile_doc, data, io)
+
+    def _get_current_workfile_doc(self, filepath=None):
+        if filepath is None:
+            filepath = self.files_widget._get_selected_filepath()
+        task_name = self.tasks_widget.get_selected_task_name()
+        asset_id = self.assets_widget.get_selected_asset_id()
+        if not task_name or not asset_id or not filepath:
+            return
+
+        filename = os.path.split(filepath)[1]
+        return get_workfile_doc(
+            asset_id, task_name, filename, io
+        )
+
+    def _create_workfile_doc(self, filepath, force=False):
+        workfile_doc = None
+        if not force:
+            workfile_doc = self._get_current_workfile_doc(filepath)
+
+        if not workfile_doc:
+            workdir, filename = os.path.split(filepath)
+            asset_id = self.assets_widget.get_selected_asset_id()
+            asset_doc = io.find_one({"_id": asset_id})
+            task_name = self.tasks_widget.get_selected_task_name()
+            create_workfile_doc(asset_doc, task_name, filename, workdir, io)
+
+    def refresh(self):
+        # Refresh asset widget
+        self.assets_widget.refresh()
+
+        self._on_task_changed()
+
+    def set_context(self, context):
+        self._context_to_set = context
+        self._set_context_timer.start()
+
+    def _on_context_set_timeout(self):
+        if self._context_to_set is None:
+            self._set_context_timer.stop()
+            return
+
+        if self.assets_widget.refreshing:
+            return
+
+        self._context_to_set, context = None, self._context_to_set
+        if "asset" in context:
+            asset_doc = io.find_one(
+                {
+                    "name": context["asset"],
+                    "type": "asset"
+                },
+                {"_id": 1}
+            ) or {}
+            asset_id = asset_doc.get("_id")
+            # Select the asset
+            self.assets_widget.select_asset(asset_id)
+            self.tasks_widget.set_asset_id(asset_id)
+
+        if "task" in context:
+            self.tasks_widget.select_task_name(context["task"])
+        self._on_task_changed()
+
+    def _on_asset_changed(self):
+        asset_id = self.assets_widget.get_selected_asset_id()
+        if asset_id:
+            self.tasks_widget.setEnabled(True)
+        else:
+            # Force disable the other widgets if no
+            # active selection
+            self.tasks_widget.setEnabled(False)
+            self.files_widget.setEnabled(False)
+
+        self.tasks_widget.set_asset_id(asset_id)
+
+    def _on_task_changed(self):
+        asset_id = self.assets_widget.get_selected_asset_id()
+        task_name = self.tasks_widget.get_selected_task_name()
+        task_type = self.tasks_widget.get_selected_task_type()
+
+        asset_is_valid = asset_id is not None
+        self.tasks_widget.setEnabled(asset_is_valid)
+
+        self.files_widget.setEnabled(bool(task_name) and asset_is_valid)
+        self.files_widget.set_asset_task(asset_id, task_name, task_type)
+        self.files_widget.refresh()
diff --git a/openpype/version.py b/openpype/version.py
index 2390309e76..c7ee5f0415 100644
--- a/openpype/version.py
+++ b/openpype/version.py
@@ -1,3 +1,3 @@
 # -*- coding: utf-8 -*-
 """Package declaring Pype version."""
-__version__ = "3.9.2-nightly.1"
+__version__ = "3.9.2-nightly.4"
diff --git a/openpype/widgets/attribute_defs/files_widget.py b/openpype/widgets/attribute_defs/files_widget.py
index 87b98e2378..34f7d159ad 100644
--- a/openpype/widgets/attribute_defs/files_widget.py
+++ b/openpype/widgets/attribute_defs/files_widget.py
@@ -641,5 +641,6 @@ class SingleFileWidget(QtWidgets.QWidget):
             filepaths.append(filepath)
         # TODO filter check
         if len(filepaths) == 1:
-            self.set_value(filepaths[0], False)
+            self._filepath_input.setText(filepaths[0])
+            event.accept()
diff --git a/openpype/widgets/attribute_defs/widgets.py b/openpype/widgets/attribute_defs/widgets.py
index a6f1b8d6c9..23f025967d 100644
--- a/openpype/widgets/attribute_defs/widgets.py
+++ b/openpype/widgets/attribute_defs/widgets.py
@@ -2,7 +2,7 @@ import uuid
 
 from Qt import QtWidgets, QtCore
 
-from openpype.pipeline.lib import (
+from openpype.lib.attribute_definitions import (
     AbtractAttrDef,
     UnknownDef,
     NumberDef,
diff --git a/poetry.lock b/poetry.lock
index ee7b839b8d..7998ede693 100644
--- a/poetry.lock
+++ b/poetry.lock
@@ -11,7 +11,7 @@ develop = false
 type = "git"
 url = "https://github.com/pypeclub/acre.git"
 reference = "master"
-resolved_reference = "55a7c331e6dc5f81639af50ca4a8cc9d73e9273d"
+resolved_reference = "126f7a188cfe36718f707f42ebbc597e86aa86c3"
 
 [[package]]
 name = "aiohttp"
@@ -680,15 +680,8 @@ category = "main"
 optional = false
 python-versions = "*"
 
-[package.dependencies]
-attrs = ">=17.4.0"
-importlib-metadata = {version = "*", markers = "python_version < \"3.8\""}
-pyrsistent = ">=0.14.0"
-six = ">=1.11.0"
-
 [package.extras]
-format = ["idna", "jsonpointer (>1.13)", "rfc3987", "strict-rfc3339", "webcolors"]
-format_nongpl = ["idna", "jsonpointer (>1.13)", "webcolors", "rfc3986-validator (>0.1.0)", "rfc3339-validator"]
+format = ["rfc3987", "strict-rfc3339", "webcolors"]
 
 [[package]]
 name = "keyring"
@@ -784,7 +777,7 @@ pyparsing = ">=2.0.2,<3.0.5 || >3.0.5"
 
 [[package]]
 name = "paramiko"
-version = "2.9.2"
+version = "2.10.1"
 description = "SSH2 protocol library"
 category = "main"
 optional = false
@@ -794,6 +787,7 @@
python-versions = "*" bcrypt = ">=3.1.3" cryptography = ">=2.5" pynacl = ">=1.0.1" +six = "*" [package.extras] all = ["pyasn1 (>=0.1.7)", "pynacl (>=1.0.1)", "bcrypt (>=3.1.3)", "invoke (>=1.3)", "gssapi (>=1.4.1)", "pywin32 (>=2.1.8)"] @@ -1087,14 +1081,6 @@ category = "main" optional = false python-versions = ">=2.6, !=3.0.*, !=3.1.*, !=3.2.*" -[[package]] -name = "pyrsistent" -version = "0.18.1" -description = "Persistent/Functional/Immutable data structures" -category = "main" -optional = false -python-versions = ">=3.7" - [[package]] name = "pysftp" version = "0.2.9" @@ -1633,7 +1619,7 @@ testing = ["pytest (>=6)", "pytest-checkdocs (>=2.4)", "pytest-flake8", "pytest- [metadata] lock-version = "1.1" python-versions = "3.7.*" -content-hash = "2f78d48a6aad2d8a88b7dd7f31a76d907bec9fb65f0086fba6b6d2e1605f0f88" +content-hash = "b02313c8255a1897b0f0617ad4884a5943696c363512921aab1cb2dd8f4fdbe0" [metadata.files] acre = [] @@ -2171,12 +2157,28 @@ log4mongo = [ {file = "log4mongo-1.7.0.tar.gz", hash = "sha256:dc374617206162a0b14167fbb5feac01dbef587539a235dadba6200362984a68"}, ] markupsafe = [ + {file = "MarkupSafe-2.0.1-cp310-cp310-macosx_10_9_universal2.whl", hash = "sha256:d8446c54dc28c01e5a2dbac5a25f071f6653e6e40f3a8818e8b45d790fe6ef53"}, + {file = "MarkupSafe-2.0.1-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:36bc903cbb393720fad60fc28c10de6acf10dc6cc883f3e24ee4012371399a38"}, + {file = "MarkupSafe-2.0.1-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:2d7d807855b419fc2ed3e631034685db6079889a1f01d5d9dac950f764da3dad"}, + {file = "MarkupSafe-2.0.1-cp310-cp310-manylinux_2_5_i686.manylinux1_i686.manylinux_2_12_i686.manylinux2010_i686.whl", hash = "sha256:add36cb2dbb8b736611303cd3bfcee00afd96471b09cda130da3581cbdc56a6d"}, + {file = "MarkupSafe-2.0.1-cp310-cp310-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:168cd0a3642de83558a5153c8bd34f175a9a6e7f6dc6384b9655d2697312a646"}, + 
{file = "MarkupSafe-2.0.1-cp310-cp310-musllinux_1_1_aarch64.whl", hash = "sha256:4dc8f9fb58f7364b63fd9f85013b780ef83c11857ae79f2feda41e270468dd9b"}, + {file = "MarkupSafe-2.0.1-cp310-cp310-musllinux_1_1_i686.whl", hash = "sha256:20dca64a3ef2d6e4d5d615a3fd418ad3bde77a47ec8a23d984a12b5b4c74491a"}, + {file = "MarkupSafe-2.0.1-cp310-cp310-musllinux_1_1_x86_64.whl", hash = "sha256:cdfba22ea2f0029c9261a4bd07e830a8da012291fbe44dc794e488b6c9bb353a"}, + {file = "MarkupSafe-2.0.1-cp310-cp310-win32.whl", hash = "sha256:99df47edb6bda1249d3e80fdabb1dab8c08ef3975f69aed437cb69d0a5de1e28"}, + {file = "MarkupSafe-2.0.1-cp310-cp310-win_amd64.whl", hash = "sha256:e0f138900af21926a02425cf736db95be9f4af72ba1bb21453432a07f6082134"}, {file = "MarkupSafe-2.0.1-cp36-cp36m-macosx_10_9_x86_64.whl", hash = "sha256:f9081981fe268bd86831e5c75f7de206ef275defcb82bc70740ae6dc507aee51"}, {file = "MarkupSafe-2.0.1-cp36-cp36m-manylinux1_i686.whl", hash = "sha256:0955295dd5eec6cb6cc2fe1698f4c6d84af2e92de33fbcac4111913cd100a6ff"}, {file = "MarkupSafe-2.0.1-cp36-cp36m-manylinux1_x86_64.whl", hash = "sha256:0446679737af14f45767963a1a9ef7620189912317d095f2d9ffa183a4d25d2b"}, {file = "MarkupSafe-2.0.1-cp36-cp36m-manylinux2010_i686.whl", hash = "sha256:f826e31d18b516f653fe296d967d700fddad5901ae07c622bb3705955e1faa94"}, {file = "MarkupSafe-2.0.1-cp36-cp36m-manylinux2010_x86_64.whl", hash = "sha256:fa130dd50c57d53368c9d59395cb5526eda596d3ffe36666cd81a44d56e48872"}, {file = "MarkupSafe-2.0.1-cp36-cp36m-manylinux2014_aarch64.whl", hash = "sha256:905fec760bd2fa1388bb5b489ee8ee5f7291d692638ea5f67982d968366bef9f"}, + {file = "MarkupSafe-2.0.1-cp36-cp36m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:bf5d821ffabf0ef3533c39c518f3357b171a1651c1ff6827325e4489b0e46c3c"}, + {file = "MarkupSafe-2.0.1-cp36-cp36m-manylinux_2_5_i686.manylinux1_i686.manylinux_2_12_i686.manylinux2010_i686.whl", hash = "sha256:0d4b31cc67ab36e3392bbf3862cfbadac3db12bdd8b02a2731f509ed5b829724"}, + {file = 
"MarkupSafe-2.0.1-cp36-cp36m-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:baa1a4e8f868845af802979fcdbf0bb11f94f1cb7ced4c4b8a351bb60d108145"}, + {file = "MarkupSafe-2.0.1-cp36-cp36m-musllinux_1_1_aarch64.whl", hash = "sha256:deb993cacb280823246a026e3b2d81c493c53de6acfd5e6bfe31ab3402bb37dd"}, + {file = "MarkupSafe-2.0.1-cp36-cp36m-musllinux_1_1_i686.whl", hash = "sha256:63f3268ba69ace99cab4e3e3b5840b03340efed0948ab8f78d2fd87ee5442a4f"}, + {file = "MarkupSafe-2.0.1-cp36-cp36m-musllinux_1_1_x86_64.whl", hash = "sha256:8d206346619592c6200148b01a2142798c989edcb9c896f9ac9722a99d4e77e6"}, {file = "MarkupSafe-2.0.1-cp36-cp36m-win32.whl", hash = "sha256:6c4ca60fa24e85fe25b912b01e62cb969d69a23a5d5867682dd3e80b5b02581d"}, {file = "MarkupSafe-2.0.1-cp36-cp36m-win_amd64.whl", hash = "sha256:b2f4bf27480f5e5e8ce285a8c8fd176c0b03e93dcc6646477d4630e83440c6a9"}, {file = "MarkupSafe-2.0.1-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:0717a7390a68be14b8c793ba258e075c6f4ca819f15edfc2a3a027c823718567"}, @@ -2185,14 +2187,27 @@ markupsafe = [ {file = "MarkupSafe-2.0.1-cp37-cp37m-manylinux2010_i686.whl", hash = "sha256:d7f9850398e85aba693bb640262d3611788b1f29a79f0c93c565694658f4071f"}, {file = "MarkupSafe-2.0.1-cp37-cp37m-manylinux2010_x86_64.whl", hash = "sha256:6a7fae0dd14cf60ad5ff42baa2e95727c3d81ded453457771d02b7d2b3f9c0c2"}, {file = "MarkupSafe-2.0.1-cp37-cp37m-manylinux2014_aarch64.whl", hash = "sha256:b7f2d075102dc8c794cbde1947378051c4e5180d52d276987b8d28a3bd58c17d"}, + {file = "MarkupSafe-2.0.1-cp37-cp37m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:e9936f0b261d4df76ad22f8fee3ae83b60d7c3e871292cd42f40b81b70afae85"}, + {file = "MarkupSafe-2.0.1-cp37-cp37m-manylinux_2_5_i686.manylinux1_i686.manylinux_2_12_i686.manylinux2010_i686.whl", hash = "sha256:2a7d351cbd8cfeb19ca00de495e224dea7e7d919659c2841bbb7f420ad03e2d6"}, + {file = 
"MarkupSafe-2.0.1-cp37-cp37m-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:60bf42e36abfaf9aff1f50f52644b336d4f0a3fd6d8a60ca0d054ac9f713a864"}, + {file = "MarkupSafe-2.0.1-cp37-cp37m-musllinux_1_1_aarch64.whl", hash = "sha256:d6c7ebd4e944c85e2c3421e612a7057a2f48d478d79e61800d81468a8d842207"}, + {file = "MarkupSafe-2.0.1-cp37-cp37m-musllinux_1_1_i686.whl", hash = "sha256:f0567c4dc99f264f49fe27da5f735f414c4e7e7dd850cfd8e69f0862d7c74ea9"}, + {file = "MarkupSafe-2.0.1-cp37-cp37m-musllinux_1_1_x86_64.whl", hash = "sha256:89c687013cb1cd489a0f0ac24febe8c7a666e6e221b783e53ac50ebf68e45d86"}, {file = "MarkupSafe-2.0.1-cp37-cp37m-win32.whl", hash = "sha256:a30e67a65b53ea0a5e62fe23682cfe22712e01f453b95233b25502f7c61cb415"}, {file = "MarkupSafe-2.0.1-cp37-cp37m-win_amd64.whl", hash = "sha256:611d1ad9a4288cf3e3c16014564df047fe08410e628f89805e475368bd304914"}, + {file = "MarkupSafe-2.0.1-cp38-cp38-macosx_10_9_universal2.whl", hash = "sha256:5bb28c636d87e840583ee3adeb78172efc47c8b26127267f54a9c0ec251d41a9"}, {file = "MarkupSafe-2.0.1-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:be98f628055368795d818ebf93da628541e10b75b41c559fdf36d104c5787066"}, {file = "MarkupSafe-2.0.1-cp38-cp38-manylinux1_i686.whl", hash = "sha256:1d609f577dc6e1aa17d746f8bd3c31aa4d258f4070d61b2aa5c4166c1539de35"}, {file = "MarkupSafe-2.0.1-cp38-cp38-manylinux1_x86_64.whl", hash = "sha256:7d91275b0245b1da4d4cfa07e0faedd5b0812efc15b702576d103293e252af1b"}, {file = "MarkupSafe-2.0.1-cp38-cp38-manylinux2010_i686.whl", hash = "sha256:01a9b8ea66f1658938f65b93a85ebe8bc016e6769611be228d797c9d998dd298"}, {file = "MarkupSafe-2.0.1-cp38-cp38-manylinux2010_x86_64.whl", hash = "sha256:47ab1e7b91c098ab893b828deafa1203de86d0bc6ab587b160f78fe6c4011f75"}, {file = "MarkupSafe-2.0.1-cp38-cp38-manylinux2014_aarch64.whl", hash = "sha256:97383d78eb34da7e1fa37dd273c20ad4320929af65d156e35a5e2d89566d9dfb"}, + {file = 
"MarkupSafe-2.0.1-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:6fcf051089389abe060c9cd7caa212c707e58153afa2c649f00346ce6d260f1b"}, + {file = "MarkupSafe-2.0.1-cp38-cp38-manylinux_2_5_i686.manylinux1_i686.manylinux_2_12_i686.manylinux2010_i686.whl", hash = "sha256:5855f8438a7d1d458206a2466bf82b0f104a3724bf96a1c781ab731e4201731a"}, + {file = "MarkupSafe-2.0.1-cp38-cp38-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:3dd007d54ee88b46be476e293f48c85048603f5f516008bee124ddd891398ed6"}, + {file = "MarkupSafe-2.0.1-cp38-cp38-musllinux_1_1_aarch64.whl", hash = "sha256:aca6377c0cb8a8253e493c6b451565ac77e98c2951c45f913e0b52facdcff83f"}, + {file = "MarkupSafe-2.0.1-cp38-cp38-musllinux_1_1_i686.whl", hash = "sha256:04635854b943835a6ea959e948d19dcd311762c5c0c6e1f0e16ee57022669194"}, + {file = "MarkupSafe-2.0.1-cp38-cp38-musllinux_1_1_x86_64.whl", hash = "sha256:6300b8454aa6930a24b9618fbb54b5a68135092bc666f7b06901f897fa5c2fee"}, {file = "MarkupSafe-2.0.1-cp38-cp38-win32.whl", hash = "sha256:023cb26ec21ece8dc3907c0e8320058b2e0cb3c55cf9564da612bc325bed5e64"}, {file = "MarkupSafe-2.0.1-cp38-cp38-win_amd64.whl", hash = "sha256:984d76483eb32f1bcb536dc27e4ad56bba4baa70be32fa87152832cdd9db0833"}, {file = "MarkupSafe-2.0.1-cp39-cp39-macosx_10_9_universal2.whl", hash = "sha256:2ef54abee730b502252bcdf31b10dacb0a416229b72c18b19e24a4509f273d26"}, @@ -2202,6 +2217,12 @@ markupsafe = [ {file = "MarkupSafe-2.0.1-cp39-cp39-manylinux2010_i686.whl", hash = "sha256:4efca8f86c54b22348a5467704e3fec767b2db12fc39c6d963168ab1d3fc9135"}, {file = "MarkupSafe-2.0.1-cp39-cp39-manylinux2010_x86_64.whl", hash = "sha256:ab3ef638ace319fa26553db0624c4699e31a28bb2a835c5faca8f8acf6a5a902"}, {file = "MarkupSafe-2.0.1-cp39-cp39-manylinux2014_aarch64.whl", hash = "sha256:f8ba0e8349a38d3001fae7eadded3f6606f0da5d748ee53cc1dab1d6527b9509"}, + {file = "MarkupSafe-2.0.1-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = 
"sha256:c47adbc92fc1bb2b3274c4b3a43ae0e4573d9fbff4f54cd484555edbf030baf1"}, + {file = "MarkupSafe-2.0.1-cp39-cp39-manylinux_2_5_i686.manylinux1_i686.manylinux_2_12_i686.manylinux2010_i686.whl", hash = "sha256:37205cac2a79194e3750b0af2a5720d95f786a55ce7df90c3af697bfa100eaac"}, + {file = "MarkupSafe-2.0.1-cp39-cp39-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:1f2ade76b9903f39aa442b4aadd2177decb66525062db244b35d71d0ee8599b6"}, + {file = "MarkupSafe-2.0.1-cp39-cp39-musllinux_1_1_aarch64.whl", hash = "sha256:4296f2b1ce8c86a6aea78613c34bb1a672ea0e3de9c6ba08a960efe0b0a09047"}, + {file = "MarkupSafe-2.0.1-cp39-cp39-musllinux_1_1_i686.whl", hash = "sha256:9f02365d4e99430a12647f09b6cc8bab61a6564363f313126f775eb4f6ef798e"}, + {file = "MarkupSafe-2.0.1-cp39-cp39-musllinux_1_1_x86_64.whl", hash = "sha256:5b6d930f030f8ed98e3e6c98ffa0652bdb82601e7a016ec2ab5d7ff23baa78d1"}, {file = "MarkupSafe-2.0.1-cp39-cp39-win32.whl", hash = "sha256:10f82115e21dc0dfec9ab5c0223652f7197feb168c940f3ef61563fc2d6beb74"}, {file = "MarkupSafe-2.0.1-cp39-cp39-win_amd64.whl", hash = "sha256:693ce3f9e70a6cf7d2fb9e6c9d8b204b6b39897a2c4a1aa65728d5ac97dcc1d8"}, {file = "MarkupSafe-2.0.1.tar.gz", hash = "sha256:594c67807fb16238b30c44bdf74f36c02cdf22d1c8cda91ef8a0ed8dabf5620a"}, @@ -2277,8 +2298,8 @@ packaging = [ {file = "packaging-21.3.tar.gz", hash = "sha256:dd47c42927d89ab911e606518907cc2d3a1f38bbd026385970643f9c5b8ecfeb"}, ] paramiko = [ - {file = "paramiko-2.9.2-py2.py3-none-any.whl", hash = "sha256:04097dbd96871691cdb34c13db1883066b8a13a0df2afd4cb0a92221f51c2603"}, - {file = "paramiko-2.9.2.tar.gz", hash = "sha256:944a9e5dbdd413ab6c7951ea46b0ab40713235a9c4c5ca81cfe45c6f14fa677b"}, + {file = "paramiko-2.10.1-py2.py3-none-any.whl", hash = "sha256:f6cbd3e1204abfdbcd40b3ecbc9d32f04027cd3080fe666245e21e7540ccfc1b"}, + {file = "paramiko-2.10.1.tar.gz", hash = "sha256:443f4da23ec24e9a9c0ea54017829c282abdda1d57110bf229360775ccd27a31"}, ] parso = [ {file 
= "parso-0.8.3-py2.py3-none-any.whl", hash = "sha256:c001d4636cd3aecdaf33cbb40aebb59b094be2a74c556778ef5576c175e19e75"}, @@ -2598,29 +2619,6 @@ pyparsing = [ {file = "pyparsing-2.4.7-py2.py3-none-any.whl", hash = "sha256:ef9d7589ef3c200abe66653d3f1ab1033c3c419ae9b9bdb1240a85b024efc88b"}, {file = "pyparsing-2.4.7.tar.gz", hash = "sha256:c203ec8783bf771a155b207279b9bccb8dea02d8f0c9e5f8ead507bc3246ecc1"}, ] -pyrsistent = [ - {file = "pyrsistent-0.18.1-cp310-cp310-macosx_10_9_universal2.whl", hash = "sha256:df46c854f490f81210870e509818b729db4488e1f30f2a1ce1698b2295a878d1"}, - {file = "pyrsistent-0.18.1-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:5d45866ececf4a5fff8742c25722da6d4c9e180daa7b405dc0a2a2790d668c26"}, - {file = "pyrsistent-0.18.1-cp310-cp310-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:4ed6784ceac462a7d6fcb7e9b663e93b9a6fb373b7f43594f9ff68875788e01e"}, - {file = "pyrsistent-0.18.1-cp310-cp310-win32.whl", hash = "sha256:e4f3149fd5eb9b285d6bfb54d2e5173f6a116fe19172686797c056672689daf6"}, - {file = "pyrsistent-0.18.1-cp310-cp310-win_amd64.whl", hash = "sha256:636ce2dc235046ccd3d8c56a7ad54e99d5c1cd0ef07d9ae847306c91d11b5fec"}, - {file = "pyrsistent-0.18.1-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:e92a52c166426efbe0d1ec1332ee9119b6d32fc1f0bbfd55d5c1088070e7fc1b"}, - {file = "pyrsistent-0.18.1-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:d7a096646eab884bf8bed965bad63ea327e0d0c38989fc83c5ea7b8a87037bfc"}, - {file = "pyrsistent-0.18.1-cp37-cp37m-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:cdfd2c361b8a8e5d9499b9082b501c452ade8bbf42aef97ea04854f4a3f43b22"}, - {file = "pyrsistent-0.18.1-cp37-cp37m-win32.whl", hash = "sha256:7ec335fc998faa4febe75cc5268a9eac0478b3f681602c1f27befaf2a1abe1d8"}, - {file = "pyrsistent-0.18.1-cp37-cp37m-win_amd64.whl", hash = 
"sha256:6455fc599df93d1f60e1c5c4fe471499f08d190d57eca040c0ea182301321286"}, - {file = "pyrsistent-0.18.1-cp38-cp38-macosx_10_9_universal2.whl", hash = "sha256:fd8da6d0124efa2f67d86fa70c851022f87c98e205f0594e1fae044e7119a5a6"}, - {file = "pyrsistent-0.18.1-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:7bfe2388663fd18bd8ce7db2c91c7400bf3e1a9e8bd7d63bf7e77d39051b85ec"}, - {file = "pyrsistent-0.18.1-cp38-cp38-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:0e3e1fcc45199df76053026a51cc59ab2ea3fc7c094c6627e93b7b44cdae2c8c"}, - {file = "pyrsistent-0.18.1-cp38-cp38-win32.whl", hash = "sha256:b568f35ad53a7b07ed9b1b2bae09eb15cdd671a5ba5d2c66caee40dbf91c68ca"}, - {file = "pyrsistent-0.18.1-cp38-cp38-win_amd64.whl", hash = "sha256:d1b96547410f76078eaf66d282ddca2e4baae8964364abb4f4dcdde855cd123a"}, - {file = "pyrsistent-0.18.1-cp39-cp39-macosx_10_9_universal2.whl", hash = "sha256:f87cc2863ef33c709e237d4b5f4502a62a00fab450c9e020892e8e2ede5847f5"}, - {file = "pyrsistent-0.18.1-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:6bc66318fb7ee012071b2792024564973ecc80e9522842eb4e17743604b5e045"}, - {file = "pyrsistent-0.18.1-cp39-cp39-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:914474c9f1d93080338ace89cb2acee74f4f666fb0424896fcfb8d86058bf17c"}, - {file = "pyrsistent-0.18.1-cp39-cp39-win32.whl", hash = "sha256:1b34eedd6812bf4d33814fca1b66005805d3640ce53140ab8bbb1e2651b0d9bc"}, - {file = "pyrsistent-0.18.1-cp39-cp39-win_amd64.whl", hash = "sha256:e24a828f57e0c337c8d8bb9f6b12f09dfdf0273da25fda9e314f0b684b415a07"}, - {file = "pyrsistent-0.18.1.tar.gz", hash = "sha256:d4d61f8b993a7255ba714df3aca52700f8125289f84f704cf80916517c46eb96"}, -] pysftp = [ {file = "pysftp-0.2.9.tar.gz", hash = "sha256:fbf55a802e74d663673400acd92d5373c1c7ee94d765b428d9f977567ac4854a"}, ] diff --git a/pyproject.toml b/pyproject.toml index 90e264d456..01bbaf48ae 100644 
--- a/pyproject.toml +++ b/pyproject.toml @@ -1,6 +1,6 @@ [tool.poetry] name = "OpenPype" -version = "3.9.2-nightly.1" # OpenPype +version = "3.9.2-nightly.4" # OpenPype description = "Open VFX and Animation pipeline with support." authors = ["OpenPype Team "] license = "MIT License" diff --git a/setup.py b/setup.py index 3ee6ad43ea..bf42602b52 100644 --- a/setup.py +++ b/setup.py @@ -187,5 +187,6 @@ setup( "build_dir": (openpype_root / "docs" / "build").as_posix() } }, - executables=executables + executables=executables, + packages=[] ) diff --git a/tools/build.ps1 b/tools/build.ps1 index 10da3d0b83..ff28544954 100644 --- a/tools/build.ps1 +++ b/tools/build.ps1 @@ -180,6 +180,9 @@ $out = & "$($env:POETRY_HOME)\bin\poetry" run python setup.py build 2>&1 Set-Content -Path "$($openpype_root)\build\build.log" -Value $out if ($LASTEXITCODE -ne 0) { + Write-Host "------------------------------------------" -ForegroundColor Red + Get-Content "$($openpype_root)\build\build.log" + Write-Host "------------------------------------------" -ForegroundColor Red Write-Host "!!! " -NoNewLine -ForegroundColor Red Write-Host "Build failed. Check the log: " -NoNewline Write-Host ".\build\build.log" -ForegroundColor Yellow diff --git a/tools/build.sh b/tools/build.sh index 301f26023a..79fb748cd5 100755 --- a/tools/build.sh +++ b/tools/build.sh @@ -185,9 +185,9 @@ if [ "$disable_submodule_update" == 1 ]; then fi echo -e "${BIGreen}>>>${RST} Building ..." 
if [[ "$OSTYPE" == "linux-gnu"* ]]; then - "$POETRY_HOME/bin/poetry" run python "$openpype_root/setup.py" build &> "$openpype_root/build/build.log" || { echo -e "${BIRed}!!!${RST} Build failed, see the build log."; return 1; } + "$POETRY_HOME/bin/poetry" run python "$openpype_root/setup.py" build &> "$openpype_root/build/build.log" || { echo -e "${BIRed}------------------------------------------${RST}"; cat "$openpype_root/build/build.log"; echo -e "${BIRed}------------------------------------------${RST}"; echo -e "${BIRed}!!!${RST} Build failed, see the build log."; return 1; } elif [[ "$OSTYPE" == "darwin"* ]]; then - "$POETRY_HOME/bin/poetry" run python "$openpype_root/setup.py" bdist_mac &> "$openpype_root/build/build.log" || { echo -e "${BIRed}!!!${RST} Build failed, see the build log."; return 1; } + "$POETRY_HOME/bin/poetry" run python "$openpype_root/setup.py" bdist_mac &> "$openpype_root/build/build.log" || { echo -e "${BIRed}------------------------------------------${RST}"; cat "$openpype_root/build/build.log"; echo -e "${BIRed}------------------------------------------${RST}"; echo -e "${BIRed}!!!${RST} Build failed, see the build log."; return 1; } fi "$POETRY_HOME/bin/poetry" run python "$openpype_root/tools/build_dependencies.py" diff --git a/website/docs/artist_hosts_photoshop.md b/website/docs/artist_hosts_photoshop.md index b2b5fd58da..a140170c49 100644 --- a/website/docs/artist_hosts_photoshop.md +++ b/website/docs/artist_hosts_photoshop.md @@ -49,6 +49,12 @@ With the `Creator` you have a variety of options to create: - Uncheck `Use selection`. - This will create a single group named after the `Subset` in the `Creator`. +#### Simplified publish + +There is a simplified workflow for simple use case where only single image should be created containing all visible layers. +No image instances must be present in a workfile and `project_settings/photoshop/publish/CollectInstances/flatten_subset_template` must be filled in Settings. 
+Then artists just need to hit the 'Publish' button in the menu. + + ### Publish  When you are ready to share some work, you will need to publish. This is done by opening `Pyblish` through the extension's `Publish` button. diff --git a/website/docs/assets/publisher_card_view.png b/website/docs/assets/publisher_card_view.png new file mode 100644 index 0000000000..57b012cb6d Binary files /dev/null and b/website/docs/assets/publisher_card_view.png differ diff --git a/website/docs/assets/publisher_create_dialog.png b/website/docs/assets/publisher_create_dialog.png new file mode 100644 index 0000000000..6e9275062d Binary files /dev/null and b/website/docs/assets/publisher_create_dialog.png differ diff --git a/website/docs/assets/publisher_list_view.png b/website/docs/assets/publisher_list_view.png new file mode 100644 index 0000000000..e9dc8a607a Binary files /dev/null and b/website/docs/assets/publisher_list_view.png differ diff --git a/website/docs/dev_publishing.md b/website/docs/dev_publishing.md new file mode 100644 index 0000000000..8ee3b7e85f --- /dev/null +++ b/website/docs/dev_publishing.md @@ -0,0 +1,548 @@ +--- +id: dev_publishing +title: Publishing +sidebar_label: Publishing +toc_max_heading_level: 4 +--- + +The publishing workflow consists of 2 parts: +- Creating - Mark what will be published and how. +- Publishing - Use data from Creating to go through the pyblish process. + +OpenPype is using [pyblish](https://pyblish.com/) for the publishing process. OpenPype extends and slightly modifies a few of its functions, mainly for reports and UI purposes. The main differences are that OpenPype's publish UI allows enabling/disabling instances or plugins during the Creating part instead of the publishing part, and limits plugin actions to failed validation plugins only. + +## **Creating** + +The concept of Creating does not have to "create" anything yet, but rather prepares and stores metadata about an "instance" (which becomes a subset after the publish process).
The created instance always has a `family` which defines what kind of data will be published; the best example is the `workfile` family. Storing of metadata is host specific and may even be Creator plugin specific. Most hosts store the metadata in a workfile (Maya scene, Nuke script, etc.) on an item or a node the same way as regular Pyblish instances, so consistency of host implementations is kept, but some features may require a different approach, which is the reason why it is the creator plugin's responsibility. Storing the metadata to the workfile persists values, so the artist does not have to create and set what should be published and how over and over. + +### Created instance + +Object representation of created instance metadata, defined by the class **CreatedInstance**. It has access to the **CreateContext** and **BaseCreator** that initialized the object. It is a dictionary-like object with a few immutable keys (marked with a star `*` in the table). The immutable keys are set by the creator plugin or create context on initialization and their values can't change. An instance can hold more arbitrary data, for example ids of nodes in the scene, but keep in mind that some keys are reserved. + +| Key | Type | Description | +|---|---|---| +| *id | str | Identifier of metadata type. ATM constant **"pyblish.avalon.instance"** | +| *instance_id | str | Unique ID of instance. Set automatically on instance creation using `str(uuid.uuid4())` | +| *family | str | Instance's family representing type defined by creator plugin. | +| *creator_identifier | str | Identifier of creator that collected/created the instance. | +| *creator_attributes | dict | Dictionary of attributes that are defined by the creator plugin (`get_instance_attr_defs`). | +| *publish_attributes | dict | Dictionary of attributes that are defined by publish plugins. | +| variant | str | Variant is entered by the artist on creation and may affect **subset**. | +| subset | str | Name of instance.
This name will be used as a subset name during publishing. Can be changed on context change or variant change. | +| active | bool | Is the instance active and will be published or not. | +| asset | str | Name of asset in which context was created. | +| task | str | Name of task in which context was created. Can be set to `None`. | + +:::note +Task should not be required unless the subset name template expects it. +::: + +An object of **CreatedInstance** has the method **data_to_store** which returns a dictionary that can be parsed to a json string. This method returns all data related to the instance so it can be re-created using `CreatedInstance.from_existing(data)`. + +#### *Create context* {#category-doc-link} + +The controller and wrapper around Creating is `CreateContext`, which takes care of loading the plugins needed for Creating and validates the required functions in the host implementation. + +The context discovers creator and publish plugins, triggers collection of existing instances on creators and triggers Creating itself. It also keeps track of instance objects by their ids. + +Creator plugins can call **creator_adds_instance** or **creator_removed_instance** to add/remove instances, but these methods are not meant to be called directly from outside the creator. The reason is that it is the creator's responsibility to remove metadata or decide if it should remove the instance. + +#### Required functions in host implementation +Host implementation **must** implement **get_context_data** and **update_context_data**. These two functions are needed to store metadata that are not related to any instance but are needed for the Creating and publishing process. Right now only data about enabled/disabled optional publish plugins is stored there. When the data is not stored and loaded properly, a reset of publishing will set it back to default values. Context data is also parsed to a json string, similarly to instance data. + +There are also a few optional functions.
For UI purposes it is possible to implement **get_context_title** which can return a string shown in the UI as a title. The output string may contain html tags. It is recommended to return a context path (a function for this purpose will be created) in this order `"{project name}/{asset hierarchy}/{asset name}/{task name}"`. + +Another optional function is **get_current_context**. This function is handy in hosts where it is possible to open multiple workfiles in one process, so using global context variables is not relevant because artists can switch between opened workfiles unnoticed. When the function is not implemented or does not return the right keys, the global context is used. Expected keys in the output: +```json +{ + "project_name": "MyProject", + "asset_name": "sq01_sh0010", + "task_name": "Modeling" +} +``` + +### Create plugin +The main responsibility of a create plugin is to create, update, collect and remove instance metadata and propagate changes to the create context. It has access to the **CreateContext** (`self.create_context`) that discovered the plugin, so it also has access to other creators and instances. Create plugins have a lot of responsibility so it is recommended to implement common code per host. + +#### *BaseCreator* +Base implementation of a creator plugin. It is not recommended to use this class as a base for production plugins but rather one of the **AutoCreator** or **Creator** variants. + +**Abstractions** +- **`family`** (class attr) - Tells what kind of instance will be created. +```python +class WorkfileCreator(Creator): + family = "workfile" +``` + +- **`collect_instances`** (method) - Collect already existing instances from the workfile and add them to the create context. This method is called on initialization or reset of **CreateContext**. Each creator is responsible for finding its instance metadata, converting it to a **CreatedInstance** object and adding it to the create context (`self._add_instance_to_context(instance_obj)`).
+```python +def collect_instances(self): + # Using 'pipeline.list_instances' is just an example of how to get existing instances from the scene + # - getting existing instances differs per host implementation + for instance_data in pipeline.list_instances(): + # Process only instances that were created by this creator + creator_id = instance_data.get("creator_identifier") + if creator_id == self.identifier: + # Create instance object from existing data + instance = CreatedInstance.from_existing( + instance_data, self + ) + # Add instance to create context + self._add_instance_to_context(instance) +``` + +- **`create`** (method) - Create a new **CreatedInstance** object, store its metadata to the workfile and add the instance into the create context. Failed Creating should raise a **CreatorError** when an error happens that artists can fix, or to give them some useful information. Triggers and implementation differ for **Creator** and **AutoCreator**. + +- **`update_instances`** (method) - Update data of instances. Receives a list of tuples of **instance** and **changes**.
+```python +def update_instances(self, update_list): + # Loop over changed instances + for instance, changes in update_list: + # Example possible usage of 'changes' to use a different node on change + # of node id in instance data (MADE UP) + node = None + if "node_id" in changes: + old_value, new_value = changes["node_id"] + if new_value is not None: + node = pipeline.get_node_by_id(new_value) + + # Get node in scene that represents the instance + if node is None: + node = pipeline.get_node_by_instance_id(instance.id) + # Imprint data to the node + pipeline.imprint(node, instance.data_to_store()) + + +# Most implementations will probably ignore 'changes' completely +def update_instances(self, update_list): + for instance, _ in update_list: + # Get node from scene + node = pipeline.get_node_by_instance_id(instance.id) + # Imprint data to node + pipeline.imprint(node, instance.data_to_store()) +``` + +- **`remove_instances`** (method) - Remove instance metadata from the workfile and from the create context. +```python +# Possible way how to remove an instance +def remove_instances(self, instances): + for instance in instances: + # Remove instance metadata from workfile + pipeline.remove_instance(instance.id) + # Remove instance from create context + self._remove_instance_from_context(instance) + + +# Default implementation of `AutoCreator` +def remove_instances(self, instances): + pass +``` + +:::note +When a host implementation uses a universal way to store and load instances, you should implement a host specific creator plugin base class that implements **collect_instances**, **update_instances** and **remove_instances**. +::: + +**Optional implementations** + +- **`enabled`** (attr) - Boolean stating whether the creator plugin is enabled and used. +- **`identifier`** (class attr) - Consistent unique string identifier of the creator plugin. It is used to identify the source plugin of existing instances. There can't be 2 creator plugins with the same identifier.
The default implementation returns the `family` attribute. +```python +class RenderLayerCreator(Creator): + family = "render" + identifier = "render_layer" + + +class RenderPassCreator(Creator): + family = "render" + identifier = "render_pass" +``` + +- **`label`** (attr) - String label of the creator plugin which will show up in the UI; `identifier` is used when not set. It should be possible to use html tags. +```python +class RenderLayerCreator(Creator): + label = "Render Layer" +``` + +- **`get_icon`** (method) - Icon of the creator and its instances. The value can be a path to an image file, a full name of a qtawesome icon, `QPixmap` or `QIcon`. For complex cases, or cases when `Qt` objects are returned, it is recommended to override the `get_icon` method and handle the logic or import `Qt` inside the method to not break headless usage of the creator plugin. For a list of qtawesome icons check the qtawesome github repository (look for the used version in pyproject.toml). The default implementation returns the **icon** attribute. +- **`icon`** (attr) - Attribute for the default implementation of **get_icon**. +```python +class RenderLayerCreator(Creator): + # Use font awesome 5 icon + icon = "fa5.building" +``` + +- **`get_instance_attr_defs`** (method) - Attribute definitions of the instance. A creator can define attributes with default values for each instance. These attributes may affect how instances will be processed during publishing. Attribute definitions can be used from `openpype.pipeline.lib.attribute_definitions` (NOTE: Will be moved to `openpype.lib.attribute_definitions` soon). Attribute definitions define basic types of values for different cases e.g. boolean, number, string, enumerator, etc. The default implementation returns **instance_attr_defs**. +- **`instance_attr_defs`** (attr) - Attribute for the default implementation of **get_instance_attr_defs**.
+ +```python +from openpype.pipeline import attribute_definitions + + +class RenderLayerCreator(Creator): + def get_instance_attr_defs(self): + # Return empty list if '_allow_farm_render' is not enabled (can be set during initialization) + if not self._allow_farm_render: + return [] + # Give the artist an option to choose between rendering on the farm or locally + return [ + attribute_definitions.BoolDef( + "render_farm", + default=False, + label="Render on Farm" + ) + ] +``` + +- **`get_subset_name`** (method) - Calculate the subset name based on the passed data. The data can be extended using the `get_dynamic_data` method. The default implementation uses `get_subset_name` from `openpype.lib`, which is recommended. + +- **`get_dynamic_data`** (method) - Can be used to extend data for subset templates, which may be required in some cases. + + +#### *AutoCreator* +A creator that is triggered on reset of the create context. Can be used for families that are expected to be created automatically without artist interaction (e.g. **workfile**). The `create` method is triggered after collecting all creators. + +:::important +**AutoCreator** implements **remove_instances** to do nothing, as removing auto-created instances would lead to creating a new instance immediately or on refresh.
+::: + +```python +def __init__( + self, create_context, system_settings, project_settings, *args, **kwargs +): + super(MyCreator, self).__init__( + create_context, system_settings, project_settings, *args, **kwargs + ) + # Get variant value from settings + variant_name = ( + project_settings["my_host"][self.identifier]["variant"] + ).strip() + if not variant_name: + variant_name = "Main" + self._variant_name = variant_name + +# Create does not expect any arguments +def create(self): + # Look for existing instance in create context + existing_instance = None + for instance in self.create_context.instances: + if instance.creator_identifier == self.identifier: + existing_instance = instance + break + + # Collect current context information + # - variant can be filled from settings + variant = self._variant_name + # Only place where we can look for current context + project_name = io.Session["AVALON_PROJECT"] + asset_name = io.Session["AVALON_ASSET"] + task_name = io.Session["AVALON_TASK"] + host_name = io.Session["AVALON_APP"] + + # Create new instance if does not exist yet + if existing_instance is None: + asset_doc = io.find_one({"type": "asset", "name": asset_name}) + subset_name = self.get_subset_name( + variant, task_name, asset_doc, project_name, host_name + ) + data = { + "asset": asset_name, + "task": task_name, + "variant": variant + } + data.update(self.get_dynamic_data( + variant, task_name, asset_doc, project_name, host_name + )) + + new_instance = CreatedInstance( + self.family, subset_name, data, self + ) + self._add_instance_to_context(new_instance) + + # Update instance context if is not the same + elif ( + existing_instance["asset"] != asset_name + or existing_instance["task"] != task_name + ): + asset_doc = io.find_one({"type": "asset", "name": asset_name}) + subset_name = self.get_subset_name( + variant, task_name, asset_doc, project_name, host_name + ) + existing_instance["asset"] = asset_name + existing_instance["task"] = task_name +``` + +#### 
*Creator* +Implementation of a creator plugin that is triggered manually by the artist in the UI (or by code). It has more UI related options than **AutoCreator** and its **create** method expects more arguments. + +**Optional implementations** +- **`create_allow_context_change`** (class attr) - Allows setting the context in the UI before Creating. Some creators may not allow it, or their logic would not use the context selection (e.g. bulk creators). Is set to `True` by default. +```python +class BulkRenderCreator(Creator): + create_allow_context_change = False +``` +- **`get_default_variants`** (method) - Returns a list of default variants that are listed in the create dialog for the user. Returns the **default_variants** attribute by default. +- **`default_variants`** (attr) - Attribute for the default implementation of **get_default_variants**. + +- **`get_default_variant`** (method) - Returns the default variant that is prefilled in the UI (the value does not have to be in the default variants). By default returns the **default_variant** attribute. If it returns `None`, the UI logic takes the first item from **get_default_variants** if there is any, otherwise **"Main"** is used. +- **`default_variant`** (attr) - Attribute for the default implementation of **get_default_variant**. + +- **`get_description`** (method) - Returns a short string description of the creator. Returns the **description** attribute by default. +- **`description`** (attr) - Attribute for the default implementation of **get_description**. + +- **`get_detailed_description`** (method) - Returns a detailed string description of the creator. Can contain markdown. Returns the **detailed_description** attribute by default. +- **`detailed_description`** (attr) - Attribute for the default implementation of **get_detailed_description**. + +- **`get_pre_create_attr_defs`** (method) - Similar to **get_instance_attr_defs**, returns attribute definitions, but they are filled before creation. When creation is called from the UI, the values are passed to the **create** method.
Returns **pre_create_attr_defs** attribute by default. +- **`pre_create_attr_defs`** (attr) - Attribute for default implementation of **get_pre_create_attr_defs**. + +```python +from openpype.pipeline import Creator, attribute_definitions + + +class CreateRender(Creator): + family = "render" + label = "Render" + icon = "fa.eye" + description = "Render scene viewport" + + def __init__( + self, context, system_settings, project_settings, *args, **kwargs + ): + super(CreateRender, self).__init__( + context, system_settings, project_settings, *args, **kwargs + ) + plugin_settings = ( + project_settings["my_host"]["create"][self.__class__.__name__] + ) + # Get information if studio has enabled farm publishing + self._allow_farm_render = plugin_settings["allow_farm_render"] + # Get default variants from settings + self.default_variants = plugin_settings["variants"] + + def get_instance_attr_defs(self): + # Return empty list if '_allow_farm_render' is not enabled (can be set during initialization) + if not self._allow_farm_render: + return [] + # Give artist option to change if should be rendered on farm or locally + return [ + attribute_definitions.BoolDef( + "render_farm", + default=False, + label="Render on Farm" + ) + ] + + def get_pre_create_attr_defs(self): + # Give user option to use selection or not + attrs = [ + attribute_definitions.BoolDef( + "use_selection", + default=False, + label="Use selection" + ) + ] + if self._allow_farm_render: + # Set to render on farm in creator dialog + # - this value is not automatically passed to instance attributes + # creator must do that during creation + attrs.append( + attribute_definitions.BoolDef( + "render_farm", + default=False, + label="Render on Farm" + ) + ) + return attrs + + def create(self, subset_name, instance_data, pre_create_data): + # ARGS: + # - 'subset_name' - precalculated subset name + # - 'instance_data' - context data + # - 'asset' - asset name + # - 'task' - task name + # - 'variant' - variant + # - 
'family' - instance family + + # Check if should use selection or not + if pre_create_data.get("use_selection"): + items = pipeline.get_selection() + else: + items = [pipeline.create_write()] + + # Validations related to selection + if len(items) > 1: + raise CreatorError("Please select only a single item at a time.") + + elif not items: + raise CreatorError("Nothing to create. Select at least one item.") + + # Create instance object + new_instance = CreatedInstance(self.family, subset_name, instance_data, self) + # Pass value from pre create attribute to instance + # - use it only when pre create data contains it + if "render_farm" in pre_create_data: + use_farm = pre_create_data["render_farm"] + new_instance.creator_attributes["render_farm"] = use_farm + + # Store metadata to workfile + pipeline.imprint(new_instance.id, new_instance.data_to_store()) + + # Add instance to context + self._add_instance_to_context(new_instance) +``` + +## **Publish** +### Exceptions +OpenPype defines a few specific exceptions that should be used in publish plugins. + +#### *Validation exception* +Validation plugins should raise `PublishValidationError` to show the artist what's wrong and give them actions to fix it. The exception says that the errors in the plugin can be fixed by the artists themselves (with or without a plugin action). Any other errors will stop publishing immediately. A `PublishValidationError` raised after the validation order has the same effect as any other exception. + +The `PublishValidationError` exception expects 4 arguments: +- **message** Not used in the UI, but used for headless publishing. +- **title** Short description of the error (2-5 words). The title is used for grouping of exceptions per plugin. +- **description** Detailed description of the issue where markdown and html can be used. +- **detail** Optional, to give even more detailed information for advanced users.
At this moment the detail is shown directly under the description, but the plan is to move the detail into a collapsible widget.

An extended version is `PublishXmlValidationError`, which uses xml files with stored descriptions. This helps to avoid having huge markdown texts inside code. The exception has 4 arguments:
- **plugin** The plugin object which raises the exception, used to find its related xml file.
- **message** Exception message for publishing without UI or with a different pyblish UI.
- **key** Optional argument saying which error from the xml is used, as a validation plugin may raise errors with different messages based on the current issue. Default is **"main"**.
- **formatting_data** Optional dictionary used to format texts in the error. This is used to fill the detailed description with data from the publishing, so the artist gets more precise information.

**Where and how to create xml file**

Xml files for `PublishXmlValidationError` must be located in the **./help** subfolder next to the plugin, and the filename must match the filename of the plugin.
```
# File location related to plugin file
└ publish
  ├ help
  │ ├ validate_scene.xml
  │ └ ...
  ├ validate_scene.py
  └ ...
```

The xml file content has a **<root>** node which may contain any number of **<error>** nodes, but each of them must have an **id** attribute with a unique value. That value is then used as the **key**. Each error must have **<title>**, **<description>** and **<detail>**. Text content may contain python formatting keys that are filled when the exception is raised.
```xml
<root>
<error id="main">
<title>Subset context</title>
<description>
## Invalid subset context

Context of the given subset doesn't match your current scene.

### How to repair?

You can fix this with the "Repair" button on the right. This will use the '{expected_asset}' asset name and overwrite the '{found_asset}' asset name in the scene metadata.

After that, restart publishing with the Reload button.
</description>
<detail>
### How could this happen?

The subset was created in a different scene with a different context,
or the scene file was copy-pasted from a different context.
</detail>
</error>
</root>
```

#### *Known errors*
When there is a known error that can't be fixed by the user (e.g. can't connect to a deadline service, etc.), `KnownPublishError` should be raised. The only difference from other exceptions is that its message is shown in the UI to the artist; for any other exception a neutral message without context is shown.

### Plugin extension
Publish plugins can be extended with additional logic by inheriting from `OpenPypePyblishPluginMixin`, which can be used as a mixin (additional inheritance of a class). Publish plugins that inherit from this mixin can define attributes that will be shown in **CreatedInstance**. One of the most important usages is the ability to turn optional plugins on/off.

Attributes are defined by the return value of the `get_attribute_defs` method. Attribute definitions apply to the families defined in the plugin's `families` attribute if it's an instance plugin, or to the whole context if it's a context plugin. To convert existing values (or to remove legacy values), `convert_attribute_values` can be re-implemented. The default implementation just converts the values to the right types.

:::important
Values of publish attributes from a created instance are never removed automatically, so implementing this method is the best way to remove legacy data or convert them to a new data structure.
:::

Possible attribute definitions can be found in `openpype/pipeline/lib/attribute_definitions.py`.
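Since the default `convert_attribute_values` only coerces types, removing legacy data requires a custom override. A dependency-free sketch of the kind of conversion logic such an override might apply — the key names (`legacy_key`, `use_farm`, `render_farm`) are hypothetical, and no OpenPype import is used:

```python
# Sketch of conversion logic a 'convert_attribute_values' override might
# apply to stored publish attribute values. All key names are hypothetical.

def convert_attribute_values(values):
    """Remove legacy keys and convert remaining values to expected types."""

    # Drop a key that is no longer used by the plugin
    values.pop("legacy_key", None)

    # Rename an old key to its new name, coercing the stored value to bool
    if "use_farm" in values and "render_farm" not in values:
        values["render_farm"] = bool(values.pop("use_farm"))
    return values


print(convert_attribute_values({"legacy_key": 1, "use_farm": 1}))
# {'render_farm': True}
```

The same idea scales to any one-way migration: mutate the stored dictionary in place, and leave values that already match the current structure untouched.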
**Example plugin**

```python
import pyblish.api
from openpype.pipeline import (
    OpenPypePyblishPluginMixin,
    attribute_definitions,
)


# Example context plugin
class MyExtendedPlugin(
    pyblish.api.ContextPlugin, OpenPypePyblishPluginMixin
):
    optional = True
    active = True

    @classmethod
    def get_attribute_defs(cls):
        return [
            attribute_definitions.BoolDef(
                # Key under which the value will be stored
                "process",
                # Use 'active' as the default value
                default=cls.active,
                # Use plugin label as the label of the attribute
                label=cls.label
            )
        ]

    def process_plugin(self, context):
        # First check if the plugin is optional
        if not self.optional:
            return True

        # Attribute values are stored by class names
        # - the helper 'get_attr_values_from_data' was implemented
        #   to give easy access to them
        attribute_values = self.get_attr_values_from_data(context.data)
        # Get 'process' key
        process_value = attribute_values.get("process")
        if process_value is None or process_value:
            return True
        return False

    def process(self, context):
        if not self.process_plugin(context):
            return
        # Do plugin logic
        ...
```
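As the comment in the example notes, attribute values are stored per plugin class name. To illustrate that layout only, here is a simplified, self-contained stand-in for `get_attr_values_from_data` — the storage key `publish_attributes` and the extra `plugin_name` parameter are assumptions for this sketch, not OpenPype's exact API:

```python
# Simplified stand-in showing how publish attribute values are grouped by
# plugin class name inside context/instance data. The "publish_attributes"
# key is an assumption for illustration.

def get_attr_values_from_data(data, plugin_name):
    """Return attribute values stored for one plugin class."""
    return data.get("publish_attributes", {}).get(plugin_name, {})


context_data = {
    "publish_attributes": {
        "MyExtendedPlugin": {"process": False},
    }
}

values = get_attr_values_from_data(context_data, "MyExtendedPlugin")
print(values)
# {'process': False}
```

With this layout, two plugins can both define a `process` attribute without their stored values colliding.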

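To close the publish section, here is a dependency-free stand-in that mirrors the four-argument contract of `PublishValidationError` described earlier (message, title, description, detail). OpenPype's real class lives in its pipeline package; this sketch only illustrates how the arguments relate and which ones the UI consumes:

```python
# Simplified stand-in for the validation exception described above.
# It mirrors the documented four-argument contract only; it is NOT
# OpenPype's implementation.

class PublishValidationError(Exception):
    def __init__(self, message, title=None, description=None, detail=None):
        super().__init__(message)
        # Short text used for grouping errors per plugin in the UI
        self.title = title or "Unknown error"
        # Markdown/html description shown to the artist; falls back to message
        self.description = description or message
        # Optional extra information for advanced users
        self.detail = detail


try:
    raise PublishValidationError(
        "Subset context is invalid",
        title="Subset context",
        description="## Invalid subset context",
    )
except PublishValidationError as exc:
    print(exc.title)
# Subset context
```

Note how **message** stays usable for headless publishing (it is the `Exception` text), while **title** and **description** exist purely for the UI.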
## **UI examples**
### Main publish window
The main window of the publisher shows instances and their values collected by creators.

**Card view**
![Publisher UI - Card view](assets/publisher_card_view.png)
**List view**
![Publisher UI - List view](assets/publisher_list_view.png)

#### *Instances views*
The list of instances always contains an `Options` item, which is used to show the attributes of context plugins. Values from the item are saved and loaded using the [host implementation](#required-functions-in-host-implementation) functions **get_context_data** and **update_context_data**. Instances are grouped by family and can be shown in a card view (single selection) or a list view (multi selection).

The instance view has 3 buttons at the bottom: the plus sign opens the [create dialog](#create-dialog), the bin removes selected instances, and the stripes swap between card and list view.

#### *Context options*
It is possible to change the variant or the asset and task context of instances in the top part, but all changes there must be confirmed. Confirmation triggers a recalculation of subset names, and all new data are stored to the instances.

#### *Create attributes*
Create attributes display all creator-defined attributes of all selected instances. All attributes that have the same definition are grouped into one input, and it is visually indicated when values differ between the selected instances; in most cases the input then shows a **< Multiselection >** placeholder.

#### *Publish attributes*
Publish attributes work the same way as create attributes, but the source of the attribute definitions are pyblish plugins. Attributes are filtered based on the families of the selected instances and the families defined in the pyblish plugin.

### Create dialog
![Publisher UI - Create dialog](assets/publisher_create_dialog.png)
The create dialog is used by the artist to create new instances in a context. The context selection can be enabled/disabled by changing `create_allow_context_change` on the [creator plugin](#creator).
In the middle part the artist selects what will be created and which variant it is. On the right side is information about the selected creator and its pre-create attributes. There is also a question mark button which extends the window and displays more detailed information about the creator.
\ No newline at end of file
diff --git a/website/docs/dev_requirements.md b/website/docs/dev_requirements.md
index bbf3b1fb5b..6c87054ba0 100644
--- a/website/docs/dev_requirements.md
+++ b/website/docs/dev_requirements.md
@@ -33,6 +33,8 @@ It can be built and ran on all common platforms. We develop and test on the foll
 
 ## Database
 
+Database version should be at least **MongoDB 4.4**.
+
 Pype needs site-wide installation of **MongoDB**. It should be installed on
 reliable server, that all workstations (and possibly render nodes) can connect.
 This server holds **Avalon** database that is at the core of everything
diff --git a/website/docs/module_site_sync.md b/website/docs/module_site_sync.md
index 78f482352e..2e9cf01102 100644
--- a/website/docs/module_site_sync.md
+++ b/website/docs/module_site_sync.md
@@ -123,6 +123,10 @@ To get working connection to Google Drive there are some necessary steps:
 - add new site back in OpenPype Settings, name as you want, provider needs to be 'gdrive'
 - distribute credentials file via shared mounted disk location
 
+:::note
+If you are using a regular personal GDrive for testing, don't forget to add `/My Drive` as the prefix in the root configuration. Business accounts and shared drives don't need this.
+:::
+
 ### SFTP
 
 SFTP provider is used to connect to SFTP server. Currently authentication with `user:password` or `user:ssh key` is implemented.
diff --git a/website/sidebars.js b/website/sidebars.js
index 16af1e1151..105afc30eb 100644
--- a/website/sidebars.js
+++ b/website/sidebars.js
@@ -136,6 +136,13 @@ module.exports = {
     "dev_requirements",
     "dev_build",
     "dev_testing",
-    "dev_contribute"
+    "dev_contribute",
+    {
+      type: "category",
+      label: "Hosts integrations",
+      items: [
+        "dev_publishing"
+      ]
+    }
   ]
 };
diff --git a/website/yarn.lock b/website/yarn.lock
index 7f677aaed7..e01f0c4ef2 100644
--- a/website/yarn.lock
+++ b/website/yarn.lock
@@ -5125,9 +5125,9 @@ minimatch@^3.0.4:
     brace-expansion "^1.1.7"
 
 minimist@^1.2.0, minimist@^1.2.5:
-  version "1.2.5"
-  resolved "https://registry.yarnpkg.com/minimist/-/minimist-1.2.5.tgz#67d66014b66a6a8aaa0c083c5fd58df4e4e97602"
-  integrity sha512-FM9nNUYrRBAELZQT3xeZQ7fmMOBg6nWNmJKTcgsJeaLstP/UODVpGsr5OhXhhXg6f+qtJ8uiZ+PUxkDWcgIXLw==
+  version "1.2.6"
+  resolved "https://registry.yarnpkg.com/minimist/-/minimist-1.2.6.tgz#8637a5b759ea0d6e98702cfb3a9283323c93af44"
+  integrity sha512-Jsjnk4bw3YJqYzbdyBiNsPWHPfO++UGG749Cxs6peCu5Xg4nrena6OVxOYxrQTqww0Jmwt+Ref8rggumkTLz9Q==
 
 mkdirp@^0.5.5:
   version "0.5.5"
@@ -5207,9 +5207,9 @@ node-fetch@2.6.7:
   whatwg-url "^5.0.0"
 
 node-forge@^1.2.0:
-  version "1.2.1"
-  resolved "https://registry.yarnpkg.com/node-forge/-/node-forge-1.2.1.tgz#82794919071ef2eb5c509293325cec8afd0fd53c"
-  integrity sha512-Fcvtbb+zBcZXbTTVwqGA5W+MKBj56UjVRevvchv5XrcyXbmNdesfZL37nlcWOfpgHhgmxApw3tQbTr4CqNmX4w==
+  version "1.3.0"
+  resolved "https://registry.yarnpkg.com/node-forge/-/node-forge-1.3.0.tgz#37a874ea723855f37db091e6c186e5b67a01d4b2"
+  integrity sha512-08ARB91bUi6zNKzVmaj3QO7cr397uiDT2nJ63cHjyNtCTWIgvS47j3eT0WfzUwS9+6Z5YshRaoasFkXCKrIYbA==
 
 node-releases@^2.0.1:
   version "2.0.2"