diff --git a/CHANGELOG.md b/CHANGELOG.md index 0c5f2cf8b5..f9820dec45 100644 --- a/CHANGELOG.md +++ b/CHANGELOG.md @@ -1,5 +1,94 @@ # Changelog +## [3.14.9](https://github.com/pypeclub/OpenPype/tree/3.14.9) + +[Full Changelog](https://github.com/pypeclub/OpenPype/compare/3.14.8...3.14.9) + +### 📖 Documentation + +- Documentation: Testing on Deadline [\#4185](https://github.com/pypeclub/OpenPype/pull/4185) +- Consistent Python version [\#4160](https://github.com/pypeclub/OpenPype/pull/4160) + +**🆕 New features** + +- Feature/op 4397 gl tf extractor for maya [\#4192](https://github.com/pypeclub/OpenPype/pull/4192) +- Maya: Extractor for Unreal SkeletalMesh [\#4174](https://github.com/pypeclub/OpenPype/pull/4174) +- 3dsmax: integration [\#4168](https://github.com/pypeclub/OpenPype/pull/4168) +- Blender: Extract Alembic Animations [\#4128](https://github.com/pypeclub/OpenPype/pull/4128) +- Unreal: Load Alembic Animations [\#4127](https://github.com/pypeclub/OpenPype/pull/4127) + +**🚀 Enhancements** + +- Houdini: Use new interface class name for publish host [\#4220](https://github.com/pypeclub/OpenPype/pull/4220) +- General: Default command for headless mode is interactive [\#4203](https://github.com/pypeclub/OpenPype/pull/4203) +- Maya: Enhanced ASS publishing [\#4196](https://github.com/pypeclub/OpenPype/pull/4196) +- Feature/op 3924 implement ass extractor [\#4188](https://github.com/pypeclub/OpenPype/pull/4188) +- File transactions: Source path is destination path [\#4184](https://github.com/pypeclub/OpenPype/pull/4184) +- Deadline: improve environment processing [\#4182](https://github.com/pypeclub/OpenPype/pull/4182) +- General: Comment per instance in Publisher [\#4178](https://github.com/pypeclub/OpenPype/pull/4178) +- Ensure Mongo database directory exists in Windows. [\#4166](https://github.com/pypeclub/OpenPype/pull/4166) +- Note about unrestricted execution on Windows. 
[\#4161](https://github.com/pypeclub/OpenPype/pull/4161) +- Maya: Enable thumbnail transparency on extraction. [\#4147](https://github.com/pypeclub/OpenPype/pull/4147) +- Maya: Disable viewport Pan/Zoom on playblast extraction. [\#4146](https://github.com/pypeclub/OpenPype/pull/4146) +- Maya: Optional viewport refresh on pointcache extraction [\#4144](https://github.com/pypeclub/OpenPype/pull/4144) +- CelAction: refactory integration to current openpype [\#4140](https://github.com/pypeclub/OpenPype/pull/4140) +- Maya: create and publish bounding box geometry [\#4131](https://github.com/pypeclub/OpenPype/pull/4131) +- Changed the UOpenPypePublishInstance to use the UDataAsset class [\#4124](https://github.com/pypeclub/OpenPype/pull/4124) +- General: Collection Audio speed up [\#4110](https://github.com/pypeclub/OpenPype/pull/4110) +- Maya: keep existing AOVs when creating render instance [\#4087](https://github.com/pypeclub/OpenPype/pull/4087) +- General: Oiio conversion multipart fix [\#4060](https://github.com/pypeclub/OpenPype/pull/4060) + +**🐛 Bug fixes** + +- Publisher: Signal type issues in Python 2 DCCs [\#4230](https://github.com/pypeclub/OpenPype/pull/4230) +- Blender: Fix Layout Family Versioning [\#4228](https://github.com/pypeclub/OpenPype/pull/4228) +- Blender: Fix Create Camera "Use selection" [\#4226](https://github.com/pypeclub/OpenPype/pull/4226) +- TrayPublisher - join needs list [\#4224](https://github.com/pypeclub/OpenPype/pull/4224) +- General: Event callbacks pass event to callbacks as expected [\#4210](https://github.com/pypeclub/OpenPype/pull/4210) +- Build:Revert .toml update of Gazu [\#4207](https://github.com/pypeclub/OpenPype/pull/4207) +- Nuke: fixed imageio node overrides subset filter [\#4202](https://github.com/pypeclub/OpenPype/pull/4202) +- Maya: pointcache [\#4201](https://github.com/pypeclub/OpenPype/pull/4201) +- Unreal: Support for Unreal Engine 5.1 [\#4199](https://github.com/pypeclub/OpenPype/pull/4199) +- General: Integrate 
thumbnail looks for thumbnail to multiple places [\#4181](https://github.com/pypeclub/OpenPype/pull/4181) +- Various minor bugfixes [\#4172](https://github.com/pypeclub/OpenPype/pull/4172) +- Nuke/Hiero: Remove tkinter library paths before launch [\#4171](https://github.com/pypeclub/OpenPype/pull/4171) +- Flame: vertical alignment of layers [\#4169](https://github.com/pypeclub/OpenPype/pull/4169) +- Nuke: correct detection of viewer and display [\#4165](https://github.com/pypeclub/OpenPype/pull/4165) +- Settings UI: Don't create QApplication if already exists [\#4156](https://github.com/pypeclub/OpenPype/pull/4156) +- General: Extract review handle start offset of sequences [\#4152](https://github.com/pypeclub/OpenPype/pull/4152) +- Maya: Maintain time connections on Alembic update. [\#4143](https://github.com/pypeclub/OpenPype/pull/4143) + +**🔀 Refactored code** + +- General: Use qtpy in modules and hosts UIs which are running in OpenPype process [\#4225](https://github.com/pypeclub/OpenPype/pull/4225) +- Tools: Use qtpy instead of Qt in standalone tools [\#4223](https://github.com/pypeclub/OpenPype/pull/4223) +- General: Use qtpy in settings UI [\#4215](https://github.com/pypeclub/OpenPype/pull/4215) + +**Merged pull requests:** + +- layout publish more than one container issue [\#4098](https://github.com/pypeclub/OpenPype/pull/4098) + +## [3.14.8](https://github.com/pypeclub/OpenPype/tree/3.14.8) + +[Full Changelog](https://github.com/pypeclub/OpenPype/compare/3.14.7...3.14.8) + +**🚀 Enhancements** + +- General: Refactored extract hierarchy plugin [\#4139](https://github.com/pypeclub/OpenPype/pull/4139) +- General: Find executable enhancement [\#4137](https://github.com/pypeclub/OpenPype/pull/4137) +- Ftrack: Reset session before instance processing [\#4129](https://github.com/pypeclub/OpenPype/pull/4129) +- Ftrack: Editorial asset sync issue [\#4126](https://github.com/pypeclub/OpenPype/pull/4126) +- Deadline: Build version resolving 
[\#4115](https://github.com/pypeclub/OpenPype/pull/4115) +- Houdini: New Publisher [\#3046](https://github.com/pypeclub/OpenPype/pull/3046) +- Fix: Standalone Publish Directories [\#4148](https://github.com/pypeclub/OpenPype/pull/4148) + +**🐛 Bug fixes** + +- Ftrack: Fix occational double parents issue [\#4153](https://github.com/pypeclub/OpenPype/pull/4153) +- General: Maketx executable issue [\#4136](https://github.com/pypeclub/OpenPype/pull/4136) +- Maya: Looks - add all connections [\#4135](https://github.com/pypeclub/OpenPype/pull/4135) +- General: Fix variable check in collect anatomy instance data [\#4117](https://github.com/pypeclub/OpenPype/pull/4117) + ## [3.14.7](https://github.com/pypeclub/OpenPype/tree/3.14.7) [Full Changelog](https://github.com/pypeclub/OpenPype/compare/3.14.6...3.14.7) diff --git a/HISTORY.md b/HISTORY.md index 04a1073c07..f24e95b2e1 100644 --- a/HISTORY.md +++ b/HISTORY.md @@ -1,6 +1,95 @@ # Changelog +## [3.14.9](https://github.com/pypeclub/OpenPype/tree/3.14.9) + +[Full Changelog](https://github.com/pypeclub/OpenPype/compare/3.14.8...3.14.9) + +### 📖 Documentation + +- Documentation: Testing on Deadline [\#4185](https://github.com/pypeclub/OpenPype/pull/4185) +- Consistent Python version [\#4160](https://github.com/pypeclub/OpenPype/pull/4160) + +**🆕 New features** + +- Feature/op 4397 gl tf extractor for maya [\#4192](https://github.com/pypeclub/OpenPype/pull/4192) +- Maya: Extractor for Unreal SkeletalMesh [\#4174](https://github.com/pypeclub/OpenPype/pull/4174) +- 3dsmax: integration [\#4168](https://github.com/pypeclub/OpenPype/pull/4168) +- Blender: Extract Alembic Animations [\#4128](https://github.com/pypeclub/OpenPype/pull/4128) +- Unreal: Load Alembic Animations [\#4127](https://github.com/pypeclub/OpenPype/pull/4127) + +**🚀 Enhancements** + +- Houdini: Use new interface class name for publish host [\#4220](https://github.com/pypeclub/OpenPype/pull/4220) +- General: Default command for headless mode is interactive 
[\#4203](https://github.com/pypeclub/OpenPype/pull/4203) +- Maya: Enhanced ASS publishing [\#4196](https://github.com/pypeclub/OpenPype/pull/4196) +- Feature/op 3924 implement ass extractor [\#4188](https://github.com/pypeclub/OpenPype/pull/4188) +- File transactions: Source path is destination path [\#4184](https://github.com/pypeclub/OpenPype/pull/4184) +- Deadline: improve environment processing [\#4182](https://github.com/pypeclub/OpenPype/pull/4182) +- General: Comment per instance in Publisher [\#4178](https://github.com/pypeclub/OpenPype/pull/4178) +- Ensure Mongo database directory exists in Windows. [\#4166](https://github.com/pypeclub/OpenPype/pull/4166) +- Note about unrestricted execution on Windows. [\#4161](https://github.com/pypeclub/OpenPype/pull/4161) +- Maya: Enable thumbnail transparency on extraction. [\#4147](https://github.com/pypeclub/OpenPype/pull/4147) +- Maya: Disable viewport Pan/Zoom on playblast extraction. [\#4146](https://github.com/pypeclub/OpenPype/pull/4146) +- Maya: Optional viewport refresh on pointcache extraction [\#4144](https://github.com/pypeclub/OpenPype/pull/4144) +- CelAction: refactory integration to current openpype [\#4140](https://github.com/pypeclub/OpenPype/pull/4140) +- Maya: create and publish bounding box geometry [\#4131](https://github.com/pypeclub/OpenPype/pull/4131) +- Changed the UOpenPypePublishInstance to use the UDataAsset class [\#4124](https://github.com/pypeclub/OpenPype/pull/4124) +- General: Collection Audio speed up [\#4110](https://github.com/pypeclub/OpenPype/pull/4110) +- Maya: keep existing AOVs when creating render instance [\#4087](https://github.com/pypeclub/OpenPype/pull/4087) +- General: Oiio conversion multipart fix [\#4060](https://github.com/pypeclub/OpenPype/pull/4060) + +**🐛 Bug fixes** + +- Publisher: Signal type issues in Python 2 DCCs [\#4230](https://github.com/pypeclub/OpenPype/pull/4230) +- Blender: Fix Layout Family Versioning 
[\#4228](https://github.com/pypeclub/OpenPype/pull/4228) +- Blender: Fix Create Camera "Use selection" [\#4226](https://github.com/pypeclub/OpenPype/pull/4226) +- TrayPublisher - join needs list [\#4224](https://github.com/pypeclub/OpenPype/pull/4224) +- General: Event callbacks pass event to callbacks as expected [\#4210](https://github.com/pypeclub/OpenPype/pull/4210) +- Build:Revert .toml update of Gazu [\#4207](https://github.com/pypeclub/OpenPype/pull/4207) +- Nuke: fixed imageio node overrides subset filter [\#4202](https://github.com/pypeclub/OpenPype/pull/4202) +- Maya: pointcache [\#4201](https://github.com/pypeclub/OpenPype/pull/4201) +- Unreal: Support for Unreal Engine 5.1 [\#4199](https://github.com/pypeclub/OpenPype/pull/4199) +- General: Integrate thumbnail looks for thumbnail to multiple places [\#4181](https://github.com/pypeclub/OpenPype/pull/4181) +- Various minor bugfixes [\#4172](https://github.com/pypeclub/OpenPype/pull/4172) +- Nuke/Hiero: Remove tkinter library paths before launch [\#4171](https://github.com/pypeclub/OpenPype/pull/4171) +- Flame: vertical alignment of layers [\#4169](https://github.com/pypeclub/OpenPype/pull/4169) +- Nuke: correct detection of viewer and display [\#4165](https://github.com/pypeclub/OpenPype/pull/4165) +- Settings UI: Don't create QApplication if already exists [\#4156](https://github.com/pypeclub/OpenPype/pull/4156) +- General: Extract review handle start offset of sequences [\#4152](https://github.com/pypeclub/OpenPype/pull/4152) +- Maya: Maintain time connections on Alembic update. 
[\#4143](https://github.com/pypeclub/OpenPype/pull/4143) + +**🔀 Refactored code** + +- General: Use qtpy in modules and hosts UIs which are running in OpenPype process [\#4225](https://github.com/pypeclub/OpenPype/pull/4225) +- Tools: Use qtpy instead of Qt in standalone tools [\#4223](https://github.com/pypeclub/OpenPype/pull/4223) +- General: Use qtpy in settings UI [\#4215](https://github.com/pypeclub/OpenPype/pull/4215) + +**Merged pull requests:** + +- layout publish more than one container issue [\#4098](https://github.com/pypeclub/OpenPype/pull/4098) + +## [3.14.8](https://github.com/pypeclub/OpenPype/tree/3.14.8) + +[Full Changelog](https://github.com/pypeclub/OpenPype/compare/3.14.7...3.14.8) + +**🚀 Enhancements** + +- General: Refactored extract hierarchy plugin [\#4139](https://github.com/pypeclub/OpenPype/pull/4139) +- General: Find executable enhancement [\#4137](https://github.com/pypeclub/OpenPype/pull/4137) +- Ftrack: Reset session before instance processing [\#4129](https://github.com/pypeclub/OpenPype/pull/4129) +- Ftrack: Editorial asset sync issue [\#4126](https://github.com/pypeclub/OpenPype/pull/4126) +- Deadline: Build version resolving [\#4115](https://github.com/pypeclub/OpenPype/pull/4115) +- Houdini: New Publisher [\#3046](https://github.com/pypeclub/OpenPype/pull/3046) +- Fix: Standalone Publish Directories [\#4148](https://github.com/pypeclub/OpenPype/pull/4148) + +**🐛 Bug fixes** + +- Ftrack: Fix occational double parents issue [\#4153](https://github.com/pypeclub/OpenPype/pull/4153) +- General: Maketx executable issue [\#4136](https://github.com/pypeclub/OpenPype/pull/4136) +- Maya: Looks - add all connections [\#4135](https://github.com/pypeclub/OpenPype/pull/4135) +- General: Fix variable check in collect anatomy instance data [\#4117](https://github.com/pypeclub/OpenPype/pull/4117) + ## [3.14.7](https://github.com/pypeclub/OpenPype/tree/3.14.7) [Full Changelog](https://github.com/pypeclub/OpenPype/compare/3.14.6...3.14.7) diff 
--git a/openpype/action.py b/openpype/action.py index de9cdee010..15c96404b6 100644 --- a/openpype/action.py +++ b/openpype/action.py @@ -72,17 +72,19 @@ def get_errored_plugins_from_data(context): return get_errored_plugins_from_context(context) -# 'RepairAction' and 'RepairContextAction' were moved to -# 'openpype.pipeline.publish' please change you imports. -# There is no "reasonable" way hot mark these classes as deprecated to show -# warning of wrong import. -# Deprecated since 3.14.* will be removed in 3.16.* class RepairAction(pyblish.api.Action): """Repairs the action To process the repairing this requires a static `repair(instance)` method is available on the plugin. + Deprecated: + 'RepairAction' and 'RepairContextAction' were moved to + 'openpype.pipeline.publish', please change your imports. + There is no "reasonable" way to mark these classes as deprecated + to show a warning of wrong import. Deprecated since 3.14.*, will be + removed in 3.16.* + """ label = "Repair" on = "failed" # This action is only available on a failed plug-in @@ -103,13 +105,19 @@ class RepairAction(pyblish.api.Action): plugin.repair(instance) -# Deprecated since 3.14.* will be removed in 3.16.* class RepairContextAction(pyblish.api.Action): """Repairs the action To process the repairing this requires a static `repair(instance)` method is available on the plugin. + Deprecated: + 'RepairAction' and 'RepairContextAction' were moved to + 'openpype.pipeline.publish', please change your imports. + There is no "reasonable" way to mark these classes as deprecated + to show a warning of wrong import. Deprecated since 3.14.*, will be + removed in 3.16.* + """ label = "Repair" on = "failed" # This action is only available on a failed plug-in diff --git a/openpype/cli.py b/openpype/cli.py index d24cd4a872..897106c35f 100644 --- a/openpype/cli.py +++ b/openpype/cli.py @@ -29,8 +29,14 @@ def main(ctx): It wraps different commands together. 
""" + if ctx.invoked_subcommand is None: - ctx.invoke(tray) + # Print help if headless mode is used + if os.environ.get("OPENPYPE_HEADLESS_MODE") == "1": + print(ctx.get_help()) + sys.exit(0) + else: + ctx.invoke(tray) @main.command() diff --git a/openpype/host/interfaces.py b/openpype/host/interfaces.py index 3b2df745d1..999aefd254 100644 --- a/openpype/host/interfaces.py +++ b/openpype/host/interfaces.py @@ -252,7 +252,7 @@ class IWorkfileHost: Remove when all usages are replaced. """ - self.save_workfile() + self.save_workfile(dst_path) def open_file(self, filepath): """Deprecated variant of 'open_workfile'. diff --git a/openpype/hosts/aftereffects/api/launch_logic.py b/openpype/hosts/aftereffects/api/launch_logic.py index 9c8513fe8c..50675c8482 100644 --- a/openpype/hosts/aftereffects/api/launch_logic.py +++ b/openpype/hosts/aftereffects/api/launch_logic.py @@ -10,7 +10,7 @@ from wsrpc_aiohttp import ( WebSocketAsync ) -from Qt import QtCore +from qtpy import QtCore from openpype.lib import Logger from openpype.pipeline import legacy_io diff --git a/openpype/hosts/aftereffects/api/lib.py b/openpype/hosts/aftereffects/api/lib.py index 8cdf9c407e..c738bcba2d 100644 --- a/openpype/hosts/aftereffects/api/lib.py +++ b/openpype/hosts/aftereffects/api/lib.py @@ -7,7 +7,7 @@ import traceback import logging from functools import partial -from Qt import QtWidgets +from qtpy import QtWidgets from openpype.pipeline import install_host from openpype.modules import ModulesManager diff --git a/openpype/hosts/aftereffects/api/pipeline.py b/openpype/hosts/aftereffects/api/pipeline.py index 7026fe3f05..68a00e30b7 100644 --- a/openpype/hosts/aftereffects/api/pipeline.py +++ b/openpype/hosts/aftereffects/api/pipeline.py @@ -1,6 +1,6 @@ import os -from Qt import QtWidgets +from qtpy import QtWidgets import pyblish.api diff --git a/openpype/hosts/blender/api/ops.py b/openpype/hosts/blender/api/ops.py index e0e09277df..481c199db2 100644 --- a/openpype/hosts/blender/api/ops.py +++ 
b/openpype/hosts/blender/api/ops.py @@ -10,7 +10,7 @@ from pathlib import Path from types import ModuleType from typing import Dict, List, Optional, Union -from Qt import QtWidgets, QtCore +from qtpy import QtWidgets, QtCore import bpy import bpy.utils.previews diff --git a/openpype/hosts/blender/plugins/create/create_camera.py b/openpype/hosts/blender/plugins/create/create_camera.py index 1a3c008069..ada512d7ac 100644 --- a/openpype/hosts/blender/plugins/create/create_camera.py +++ b/openpype/hosts/blender/plugins/create/create_camera.py @@ -32,11 +32,6 @@ class CreateCamera(plugin.Creator): subset = self.data["subset"] name = plugin.asset_name(asset, subset) - camera = bpy.data.cameras.new(subset) - camera_obj = bpy.data.objects.new(subset, camera) - - instances.objects.link(camera_obj) - asset_group = bpy.data.objects.new(name=name, object_data=None) asset_group.empty_display_type = 'SINGLE_ARROW' instances.objects.link(asset_group) @@ -53,6 +48,11 @@ class CreateCamera(plugin.Creator): bpy.ops.object.parent_set(keep_transform=True) else: plugin.deselect_all() + camera = bpy.data.cameras.new(subset) + camera_obj = bpy.data.objects.new(subset, camera) + + instances.objects.link(camera_obj) + camera_obj.select_set(True) asset_group.select_set(True) bpy.context.view_layer.objects.active = asset_group diff --git a/openpype/hosts/blender/plugins/load/load_layout_blend.py b/openpype/hosts/blender/plugins/load/load_layout_blend.py index e0124053bf..7d2fd23444 100644 --- a/openpype/hosts/blender/plugins/load/load_layout_blend.py +++ b/openpype/hosts/blender/plugins/load/load_layout_blend.py @@ -48,8 +48,14 @@ class BlendLayoutLoader(plugin.AssetLoader): bpy.data.objects.remove(obj) def _remove_asset_and_library(self, asset_group): + if not asset_group.get(AVALON_PROPERTY): + return + libpath = asset_group.get(AVALON_PROPERTY).get('libpath') + if not libpath: + return + # Check how many assets use the same library count = 0 for obj in 
bpy.data.collections.get(AVALON_CONTAINERS).all_objects: @@ -63,10 +69,12 @@ class BlendLayoutLoader(plugin.AssetLoader): # If it is the last object to use that library, remove it if count == 1: library = bpy.data.libraries.get(bpy.path.basename(libpath)) - bpy.data.libraries.remove(library) + if library: + bpy.data.libraries.remove(library) def _process( - self, libpath, asset_group, group_name, asset, representation, actions + self, libpath, asset_group, group_name, asset, representation, + actions, anim_instances ): with bpy.data.libraries.load( libpath, link=True, relative=False @@ -140,12 +148,12 @@ class BlendLayoutLoader(plugin.AssetLoader): elif local_obj.type == 'ARMATURE': plugin.prepare_data(local_obj.data) - if action is not None: + if action: if local_obj.animation_data is None: local_obj.animation_data_create() local_obj.animation_data.action = action elif (local_obj.animation_data and - local_obj.animation_data.action is not None): + local_obj.animation_data.action): plugin.prepare_data( local_obj.animation_data.action) @@ -157,19 +165,26 @@ class BlendLayoutLoader(plugin.AssetLoader): t.id = local_obj elif local_obj.type == 'EMPTY': - creator_plugin = get_legacy_creator_by_name("CreateAnimation") - if not creator_plugin: - raise ValueError("Creator plugin \"CreateAnimation\" was " - "not found.") + if (not anim_instances or + (anim_instances and + local_obj.name not in anim_instances.keys())): + avalon = local_obj.get(AVALON_PROPERTY) + if avalon and avalon.get('family') == 'rig': + creator_plugin = get_legacy_creator_by_name( + "CreateAnimation") + if not creator_plugin: + raise ValueError( + "Creator plugin \"CreateAnimation\" was " + "not found.") - legacy_create( - creator_plugin, - name=local_obj.name.split(':')[-1] + "_animation", - asset=asset, - options={"useSelection": False, - "asset_group": local_obj}, - data={"dependencies": representation} - ) + legacy_create( + creator_plugin, + name=local_obj.name.split(':')[-1] + "_animation", + 
asset=asset, + options={"useSelection": False, + "asset_group": local_obj}, + data={"dependencies": representation} + ) if not local_obj.get(AVALON_PROPERTY): local_obj[AVALON_PROPERTY] = dict() @@ -272,7 +287,8 @@ class BlendLayoutLoader(plugin.AssetLoader): avalon_container.objects.link(asset_group) objects = self._process( - libpath, asset_group, group_name, asset, representation, None) + libpath, asset_group, group_name, asset, representation, + None, None) for child in asset_group.children: if child.get(AVALON_PROPERTY): @@ -352,10 +368,20 @@ class BlendLayoutLoader(plugin.AssetLoader): return actions = {} + anim_instances = {} for obj in asset_group.children: obj_meta = obj.get(AVALON_PROPERTY) if obj_meta.get('family') == 'rig': + # Get animation instance + collections = list(obj.users_collection) + for c in collections: + avalon = c.get(AVALON_PROPERTY) + if avalon and avalon.get('family') == 'animation': + anim_instances[obj.name] = c.name + break + + # Get armature's action rig = None for child in obj.children: if child.type == 'ARMATURE': @@ -384,9 +410,26 @@ class BlendLayoutLoader(plugin.AssetLoader): # If it is the last object to use that library, remove it if count == 1: library = bpy.data.libraries.get(bpy.path.basename(group_libpath)) - bpy.data.libraries.remove(library) + if library: + bpy.data.libraries.remove(library) - self._process(str(libpath), asset_group, object_name, actions) + asset = container.get("asset_name").split("_")[0] + + self._process( + str(libpath), asset_group, object_name, asset, + str(representation.get("_id")), actions, anim_instances + ) + + # Link the new objects to the animation collection + for inst in anim_instances.keys(): + try: + obj = bpy.data.objects[inst] + bpy.data.collections[anim_instances[inst]].objects.link(obj) + except KeyError: + self.log.info(f"Object {inst} does not exist anymore.") + coll = bpy.data.collections.get(anim_instances[inst]) + if (coll): + bpy.data.collections.remove(coll) avalon_container 
= bpy.data.collections.get(AVALON_CONTAINERS) for child in asset_group.children: diff --git a/openpype/hosts/blender/plugins/publish/extract_abc_animation.py b/openpype/hosts/blender/plugins/publish/extract_abc_animation.py new file mode 100644 index 0000000000..e141ccaa44 --- /dev/null +++ b/openpype/hosts/blender/plugins/publish/extract_abc_animation.py @@ -0,0 +1,72 @@ +import os + +import bpy + +from openpype.pipeline import publish +from openpype.hosts.blender.api import plugin + + +class ExtractAnimationABC(publish.Extractor): + """Extract as ABC.""" + + label = "Extract Animation ABC" + hosts = ["blender"] + families = ["animation"] + optional = True + + def process(self, instance): + # Define extract output file path + stagingdir = self.staging_dir(instance) + filename = f"{instance.name}.abc" + filepath = os.path.join(stagingdir, filename) + + context = bpy.context + + # Perform extraction + self.log.info("Performing extraction..") + + plugin.deselect_all() + + selected = [] + asset_group = None + + objects = [] + for obj in instance: + if isinstance(obj, bpy.types.Collection): + for child in obj.all_objects: + objects.append(child) + for obj in objects: + children = [o for o in bpy.data.objects if o.parent == obj] + for child in children: + objects.append(child) + + for obj in objects: + obj.select_set(True) + selected.append(obj) + + context = plugin.create_blender_context( + active=asset_group, selected=selected) + + # We export the abc + bpy.ops.wm.alembic_export( + context, + filepath=filepath, + selected=True, + flatten=False + ) + + plugin.deselect_all() + + if "representations" not in instance.data: + instance.data["representations"] = [] + + representation = { + 'name': 'abc', + 'ext': 'abc', + 'files': filename, + "stagingDir": stagingdir, + } + instance.data["representations"].append(representation) + + self.log.info("Extracted instance '%s' to: %s", + instance.name, representation) diff --git a/openpype/hosts/celaction/__init__.py 
b/openpype/hosts/celaction/__init__.py index e69de29bb2..8983d48d7d 100644 --- a/openpype/hosts/celaction/__init__.py +++ b/openpype/hosts/celaction/__init__.py @@ -0,0 +1,10 @@ +from .addon import ( + CELACTION_ROOT_DIR, + CelactionAddon, +) + + +__all__ = ( + "CELACTION_ROOT_DIR", + "CelactionAddon", +) diff --git a/openpype/hosts/celaction/addon.py b/openpype/hosts/celaction/addon.py new file mode 100644 index 0000000000..9158010011 --- /dev/null +++ b/openpype/hosts/celaction/addon.py @@ -0,0 +1,31 @@ +import os +from openpype.modules import OpenPypeModule, IHostAddon + +CELACTION_ROOT_DIR = os.path.dirname(os.path.abspath(__file__)) + + +class CelactionAddon(OpenPypeModule, IHostAddon): + name = "celaction" + host_name = "celaction" + + def initialize(self, module_settings): + self.enabled = True + + def get_launch_hook_paths(self, app): + if app.host_name != self.host_name: + return [] + return [ + os.path.join(CELACTION_ROOT_DIR, "hooks") + ] + + def add_implementation_envs(self, env, _app): + # Set default values if are not already set via settings + defaults = { + "LOGLEVEL": "DEBUG" + } + for key, value in defaults.items(): + if not env.get(key): + env[key] = value + + def get_workfile_extensions(self): + return [".scn"] diff --git a/openpype/hosts/celaction/api/__init__.py b/openpype/hosts/celaction/api/__init__.py deleted file mode 100644 index 8c93d93738..0000000000 --- a/openpype/hosts/celaction/api/__init__.py +++ /dev/null @@ -1 +0,0 @@ -kwargs = None diff --git a/openpype/hosts/celaction/api/cli.py b/openpype/hosts/celaction/api/cli.py deleted file mode 100644 index 88fc11cafb..0000000000 --- a/openpype/hosts/celaction/api/cli.py +++ /dev/null @@ -1,87 +0,0 @@ -import os -import sys -import copy -import argparse - -import pyblish.api -import pyblish.util - -import openpype.hosts.celaction -from openpype.lib import Logger -from openpype.hosts.celaction import api as celaction -from openpype.tools.utils import host_tools -from openpype.pipeline 
import install_openpype_plugins - - -log = Logger.get_logger("Celaction_cli_publisher") - -publish_host = "celaction" - -HOST_DIR = os.path.dirname(os.path.abspath(openpype.hosts.celaction.__file__)) -PLUGINS_DIR = os.path.join(HOST_DIR, "plugins") -PUBLISH_PATH = os.path.join(PLUGINS_DIR, "publish") - - -def cli(): - parser = argparse.ArgumentParser(prog="celaction_publish") - - parser.add_argument("--currentFile", - help="Pass file to Context as `currentFile`") - - parser.add_argument("--chunk", - help=("Render chanks on farm")) - - parser.add_argument("--frameStart", - help=("Start of frame range")) - - parser.add_argument("--frameEnd", - help=("End of frame range")) - - parser.add_argument("--resolutionWidth", - help=("Width of resolution")) - - parser.add_argument("--resolutionHeight", - help=("Height of resolution")) - - celaction.kwargs = parser.parse_args(sys.argv[1:]).__dict__ - - -def _prepare_publish_environments(): - """Prepares environments based on request data.""" - env = copy.deepcopy(os.environ) - - project_name = os.getenv("AVALON_PROJECT") - asset_name = os.getenv("AVALON_ASSET") - - env["AVALON_PROJECT"] = project_name - env["AVALON_ASSET"] = asset_name - env["AVALON_TASK"] = os.getenv("AVALON_TASK") - env["AVALON_WORKDIR"] = os.getenv("AVALON_WORKDIR") - env["AVALON_APP"] = f"hosts.{publish_host}" - env["AVALON_APP_NAME"] = "celaction/local" - - env["PYBLISH_HOSTS"] = publish_host - - os.environ.update(env) - - -def main(): - # prepare all environments - _prepare_publish_environments() - - # Registers pype's Global pyblish plugins - install_openpype_plugins() - - if os.path.exists(PUBLISH_PATH): - log.info(f"Registering path: {PUBLISH_PATH}") - pyblish.api.register_plugin_path(PUBLISH_PATH) - - pyblish.api.register_host(publish_host) - - return host_tools.show_publish() - - -if __name__ == "__main__": - cli() - result = main() - sys.exit(not bool(result)) diff --git a/openpype/hosts/celaction/hooks/pre_celaction_registers.py 
b/openpype/hosts/celaction/hooks/pre_celaction_registers.py deleted file mode 100644 index e49e66f163..0000000000 --- a/openpype/hosts/celaction/hooks/pre_celaction_registers.py +++ /dev/null @@ -1,122 +0,0 @@ -import os -import shutil -import winreg -from openpype.lib import PreLaunchHook -from openpype.hosts.celaction import api as celaction - - -class CelactionPrelaunchHook(PreLaunchHook): - """ - Bootstrap celacion with pype - """ - workfile_ext = "scn" - app_groups = ["celaction"] - platforms = ["windows"] - - def execute(self): - # Add workfile path to launch arguments - workfile_path = self.workfile_path() - if workfile_path: - self.launch_context.launch_args.append(workfile_path) - - project_name = self.data["project_name"] - asset_name = self.data["asset_name"] - task_name = self.data["task_name"] - - # get publish version of celaction - app = "celaction_publish" - - # setting output parameters - path = r"Software\CelAction\CelAction2D\User Settings" - winreg.CreateKey(winreg.HKEY_CURRENT_USER, path) - hKey = winreg.OpenKey( - winreg.HKEY_CURRENT_USER, - "Software\\CelAction\\CelAction2D\\User Settings", 0, - winreg.KEY_ALL_ACCESS) - - # TODO: this will need to be checked more thoroughly - pype_exe = os.getenv("OPENPYPE_EXECUTABLE") - - winreg.SetValueEx(hKey, "SubmitAppTitle", 0, winreg.REG_SZ, pype_exe) - - parameters = [ - "launch", - f"--app {app}", - f"--project {project_name}", - f"--asset {asset_name}", - f"--task {task_name}", - "--currentFile \\\"\"*SCENE*\"\\\"", - "--chunk 10", - "--frameStart *START*", - "--frameEnd *END*", - "--resolutionWidth *X*", - "--resolutionHeight *Y*", - # "--programDir \"'*PROGPATH*'\"" - ] - winreg.SetValueEx(hKey, "SubmitParametersTitle", 0, winreg.REG_SZ, - " ".join(parameters)) - - # setting resolution parameters - path = r"Software\CelAction\CelAction2D\User Settings\Dialogs" - path += r"\SubmitOutput" - winreg.CreateKey(winreg.HKEY_CURRENT_USER, path) - hKey = winreg.OpenKey(winreg.HKEY_CURRENT_USER, path, 0, - 
winreg.KEY_ALL_ACCESS) - winreg.SetValueEx(hKey, "SaveScene", 0, winreg.REG_DWORD, 1) - winreg.SetValueEx(hKey, "CustomX", 0, winreg.REG_DWORD, 1920) - winreg.SetValueEx(hKey, "CustomY", 0, winreg.REG_DWORD, 1080) - - # making sure message dialogs don't appear when overwriting - path = r"Software\CelAction\CelAction2D\User Settings\Messages" - path += r"\OverwriteScene" - winreg.CreateKey(winreg.HKEY_CURRENT_USER, path) - hKey = winreg.OpenKey(winreg.HKEY_CURRENT_USER, path, 0, - winreg.KEY_ALL_ACCESS) - winreg.SetValueEx(hKey, "Result", 0, winreg.REG_DWORD, 6) - winreg.SetValueEx(hKey, "Valid", 0, winreg.REG_DWORD, 1) - - path = r"Software\CelAction\CelAction2D\User Settings\Messages" - path += r"\SceneSaved" - winreg.CreateKey(winreg.HKEY_CURRENT_USER, path) - hKey = winreg.OpenKey(winreg.HKEY_CURRENT_USER, path, 0, - winreg.KEY_ALL_ACCESS) - winreg.SetValueEx(hKey, "Result", 0, winreg.REG_DWORD, 1) - winreg.SetValueEx(hKey, "Valid", 0, winreg.REG_DWORD, 1) - - def workfile_path(self): - workfile_path = self.data["last_workfile_path"] - - # copy workfile from template if doesnt exist any on path - if not os.path.exists(workfile_path): - # TODO add ability to set different template workfile path via - # settings - pype_celaction_dir = os.path.dirname(os.path.dirname( - os.path.abspath(celaction.__file__) - )) - template_path = os.path.join( - pype_celaction_dir, - "resources", - "celaction_template_scene.scn" - ) - - if not os.path.exists(template_path): - self.log.warning( - "Couldn't find workfile template file in {}".format( - template_path - ) - ) - return - - self.log.info( - f"Creating workfile from template: \"{template_path}\"" - ) - - # Copy template workfile to new destinantion - shutil.copy2( - os.path.normpath(template_path), - os.path.normpath(workfile_path) - ) - - self.log.info(f"Workfile to open: \"{workfile_path}\"") - - return workfile_path diff --git a/openpype/hosts/celaction/hooks/pre_celaction_setup.py 
b/openpype/hosts/celaction/hooks/pre_celaction_setup.py new file mode 100644 index 0000000000..62cebf99ed --- /dev/null +++ b/openpype/hosts/celaction/hooks/pre_celaction_setup.py @@ -0,0 +1,137 @@ +import os +import shutil +import winreg +import subprocess +from openpype.lib import PreLaunchHook, get_openpype_execute_args +from openpype.hosts.celaction import scripts + +CELACTION_SCRIPTS_DIR = os.path.dirname( + os.path.abspath(scripts.__file__) +) + + +class CelactionPrelaunchHook(PreLaunchHook): + """ + Bootstrap CelAction with OpenPype + """ + app_groups = ["celaction"] + platforms = ["windows"] + + def execute(self): + asset_doc = self.data["asset_doc"] + width = asset_doc["data"]["resolutionWidth"] + height = asset_doc["data"]["resolutionHeight"] + + # Add workfile path to launch arguments + workfile_path = self.workfile_path() + if workfile_path: + self.launch_context.launch_args.append(workfile_path) + + # setting output parameters + path_user_settings = "\\".join([ + "Software", "CelAction", "CelAction2D", "User Settings" + ]) + winreg.CreateKey(winreg.HKEY_CURRENT_USER, path_user_settings) + hKey = winreg.OpenKey( + winreg.HKEY_CURRENT_USER, path_user_settings, 0, + winreg.KEY_ALL_ACCESS + ) + + path_to_cli = os.path.join(CELACTION_SCRIPTS_DIR, "publish_cli.py") + subprocess_args = get_openpype_execute_args("run", path_to_cli) + openpype_executable = subprocess_args.pop(0) + + winreg.SetValueEx( + hKey, + "SubmitAppTitle", + 0, + winreg.REG_SZ, + openpype_executable + ) + + parameters = subprocess_args + [ + "--currentFile", "*SCENE*", + "--chunk", "*CHUNK*", + "--frameStart", "*START*", + "--frameEnd", "*END*", + "--resolutionWidth", "*X*", + "--resolutionHeight", "*Y*" + ] + + winreg.SetValueEx( + hKey, "SubmitParametersTitle", 0, winreg.REG_SZ, + subprocess.list2cmdline(parameters) + ) + + # setting resolution parameters + path_submit = "\\".join([ + path_user_settings, "Dialogs", "SubmitOutput" + ]) + winreg.CreateKey(winreg.HKEY_CURRENT_USER, path_submit) +
hKey = winreg.OpenKey( + winreg.HKEY_CURRENT_USER, path_submit, 0, + winreg.KEY_ALL_ACCESS + ) + winreg.SetValueEx(hKey, "SaveScene", 0, winreg.REG_DWORD, 1) + winreg.SetValueEx(hKey, "CustomX", 0, winreg.REG_DWORD, width) + winreg.SetValueEx(hKey, "CustomY", 0, winreg.REG_DWORD, height) + + # making sure message dialogs don't appear when overwriting + path_overwrite_scene = "\\".join([ + path_user_settings, "Messages", "OverwriteScene" + ]) + winreg.CreateKey(winreg.HKEY_CURRENT_USER, path_overwrite_scene) + hKey = winreg.OpenKey( + winreg.HKEY_CURRENT_USER, path_overwrite_scene, 0, + winreg.KEY_ALL_ACCESS + ) + winreg.SetValueEx(hKey, "Result", 0, winreg.REG_DWORD, 6) + winreg.SetValueEx(hKey, "Valid", 0, winreg.REG_DWORD, 1) + + # set scene as not saved + path_scene_saved = "\\".join([ + path_user_settings, "Messages", "SceneSaved" + ]) + winreg.CreateKey(winreg.HKEY_CURRENT_USER, path_scene_saved) + hKey = winreg.OpenKey( + winreg.HKEY_CURRENT_USER, path_scene_saved, 0, + winreg.KEY_ALL_ACCESS + ) + winreg.SetValueEx(hKey, "Result", 0, winreg.REG_DWORD, 1) + winreg.SetValueEx(hKey, "Valid", 0, winreg.REG_DWORD, 1) + + def workfile_path(self): + workfile_path = self.data["last_workfile_path"] + + # copy workfile from template if none exists on the path yet + if not os.path.exists(workfile_path): + # TODO add ability to set different template workfile path via + # settings + openpype_celaction_dir = os.path.dirname(CELACTION_SCRIPTS_DIR) + template_path = os.path.join( + openpype_celaction_dir, + "resources", + "celaction_template_scene.scn" + ) + + if not os.path.exists(template_path): + self.log.warning( + "Couldn't find workfile template file in {}".format( + template_path + ) + ) + return + + self.log.info( + f"Creating workfile from template: \"{template_path}\"" + ) + + # Copy template workfile to new destination + shutil.copy2( + os.path.normpath(template_path), + os.path.normpath(workfile_path) + ) + + self.log.info(f"Workfile to open: \"{workfile_path}\"")
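The template fallback in `workfile_path` above can be sketched as a standalone helper; `copy_workfile_from_template` and the paths are illustrative names, not part of the hook's API:

```python
import os
import shutil


def copy_workfile_from_template(workfile_path, template_path):
    """Copy a template scene to the workfile path if no workfile exists.

    Returns the workfile path when it exists or was created from the
    template, or None when neither the workfile nor the template is
    available (mirroring the hook's early ``return``).
    """
    if os.path.exists(workfile_path):
        return workfile_path
    if not os.path.exists(template_path):
        return None
    shutil.copy2(os.path.normpath(template_path),
                 os.path.normpath(workfile_path))
    return workfile_path
```

The hook does the same, only with logging and a template path derived from the celaction host directory.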
+ + return workfile_path diff --git a/openpype/hosts/celaction/plugins/publish/collect_celaction_cli_kwargs.py b/openpype/hosts/celaction/plugins/publish/collect_celaction_cli_kwargs.py index 15c5ddaf1c..bf97dd744b 100644 --- a/openpype/hosts/celaction/plugins/publish/collect_celaction_cli_kwargs.py +++ b/openpype/hosts/celaction/plugins/publish/collect_celaction_cli_kwargs.py @@ -1,5 +1,7 @@ import pyblish.api -from openpype.hosts.celaction import api as celaction +import argparse +import sys +from pprint import pformat class CollectCelactionCliKwargs(pyblish.api.Collector): @@ -9,15 +11,31 @@ class CollectCelactionCliKwargs(pyblish.api.Collector): order = pyblish.api.Collector.order - 0.1 def process(self, context): - kwargs = celaction.kwargs.copy() + parser = argparse.ArgumentParser(prog="celaction") + parser.add_argument("--currentFile", + help="Pass file to Context as `currentFile`") + parser.add_argument("--chunk", + help=("Render chunks on farm")) + parser.add_argument("--frameStart", + help=("Start of frame range")) + parser.add_argument("--frameEnd", + help=("End of frame range")) + parser.add_argument("--resolutionWidth", + help=("Width of resolution")) + parser.add_argument("--resolutionHeight", + help=("Height of resolution")) + passing_kwargs = parser.parse_args(sys.argv[1:]).__dict__ - self.log.info("Storing kwargs: %s" % kwargs) - context.set_data("kwargs", kwargs) + self.log.info("Storing kwargs ...") + self.log.debug("_ passing_kwargs: {}".format(pformat(passing_kwargs))) + + # set kwargs to context data + context.set_data("passingKwargs", passing_kwargs) # get kwargs onto context data as keys with values - for k, v in kwargs.items(): + for k, v in passing_kwargs.items(): self.log.info(f"Setting `{k}` to instance.data with value: `{v}`") if k in ["frameStart", "frameEnd"]: - context.data[k] = kwargs[k] = int(v) + context.data[k] = passing_kwargs[k] = int(v) else: context.data[k] = v diff --git 
a/openpype/hosts/celaction/plugins/publish/collect_celaction_instances.py b/openpype/hosts/celaction/plugins/publish/collect_celaction_instances.py index 1d2d9da1af..35ac7fc264 100644 --- a/openpype/hosts/celaction/plugins/publish/collect_celaction_instances.py +++ b/openpype/hosts/celaction/plugins/publish/collect_celaction_instances.py @@ -36,7 +36,8 @@ class CollectCelactionInstances(pyblish.api.ContextPlugin): "version": version } - celaction_kwargs = context.data.get("kwargs", {}) + celaction_kwargs = context.data.get( + "passingKwargs", {}) if celaction_kwargs: shared_instance_data.update(celaction_kwargs) @@ -52,8 +53,8 @@ class CollectCelactionInstances(pyblish.api.ContextPlugin): "subset": subset, "label": scene_file, "family": family, - "families": [family, "ftrack"], - "representations": list() + "families": [], + "representations": [] }) # adding basic script data @@ -72,7 +73,6 @@ class CollectCelactionInstances(pyblish.api.ContextPlugin): self.log.info('Publishing Celaction workfile') # render instance - family = "render.farm" subset = f"render{task}Main" instance = context.create_instance(name=subset) # getting instance state @@ -81,8 +81,8 @@ class CollectCelactionInstances(pyblish.api.ContextPlugin): # add assetEntity data into instance instance.data.update({ "label": "{} - farm".format(subset), - "family": family, - "families": [family], + "family": "render.farm", + "families": [], "subset": subset }) diff --git a/openpype/hosts/celaction/plugins/publish/collect_render_path.py b/openpype/hosts/celaction/plugins/publish/collect_render_path.py index 9cbb0e4880..f6db6c000d 100644 --- a/openpype/hosts/celaction/plugins/publish/collect_render_path.py +++ b/openpype/hosts/celaction/plugins/publish/collect_render_path.py @@ -11,28 +11,31 @@ class CollectRenderPath(pyblish.api.InstancePlugin): families = ["render.farm"] # Presets - anatomy_render_key = None - publish_render_metadata = None + output_extension = "png" + anatomy_template_key_render_files = 
None + anatomy_template_key_metadata = None def process(self, instance): anatomy = instance.context.data["anatomy"] anatomy_data = copy.deepcopy(instance.data["anatomyData"]) - anatomy_data["family"] = "render" padding = anatomy.templates.get("frame_padding", 4) anatomy_data.update({ "frame": f"%0{padding}d", - "representation": "png" + "family": "render", + "representation": self.output_extension, + "ext": self.output_extension }) anatomy_filled = anatomy.format(anatomy_data) # get anatomy rendering keys - anatomy_render_key = self.anatomy_render_key or "render" - publish_render_metadata = self.publish_render_metadata or "render" + r_anatomy_key = self.anatomy_template_key_render_files + m_anatomy_key = self.anatomy_template_key_metadata # get folder and path for rendering images from celaction - render_dir = anatomy_filled[anatomy_render_key]["folder"] - render_path = anatomy_filled[anatomy_render_key]["path"] + render_dir = anatomy_filled[r_anatomy_key]["folder"] + render_path = anatomy_filled[r_anatomy_key]["path"] + self.log.debug("__ render_path: `{}`".format(render_path)) # create dir if it doesnt exists try: @@ -46,9 +49,9 @@ class CollectRenderPath(pyblish.api.InstancePlugin): instance.data["path"] = render_path # get anatomy for published renders folder path - if anatomy_filled.get(publish_render_metadata): + if anatomy_filled.get(m_anatomy_key): instance.data["publishRenderMetadataFolder"] = anatomy_filled[ - publish_render_metadata]["folder"] + m_anatomy_key]["folder"] self.log.info("Metadata render path: `{}`".format( instance.data["publishRenderMetadataFolder"] )) diff --git a/openpype/hosts/celaction/scripts/__init__.py b/openpype/hosts/celaction/scripts/__init__.py new file mode 100644 index 0000000000..e69de29bb2 diff --git a/openpype/hosts/celaction/scripts/publish_cli.py b/openpype/hosts/celaction/scripts/publish_cli.py new file mode 100644 index 0000000000..39d3f1a94d --- /dev/null +++ b/openpype/hosts/celaction/scripts/publish_cli.py @@ -0,0 
+1,37 @@ +import os +import sys + +import pyblish.api +import pyblish.util + +import openpype.hosts.celaction +from openpype.lib import Logger +from openpype.tools.utils import host_tools +from openpype.pipeline import install_openpype_plugins + + +log = Logger.get_logger("celaction") + +PUBLISH_HOST = "celaction" +HOST_DIR = os.path.dirname(os.path.abspath(openpype.hosts.celaction.__file__)) +PLUGINS_DIR = os.path.join(HOST_DIR, "plugins") +PUBLISH_PATH = os.path.join(PLUGINS_DIR, "publish") + + +def main(): + # Register OpenPype's global pyblish plugins + install_openpype_plugins() + + if os.path.exists(PUBLISH_PATH): + log.info(f"Registering path: {PUBLISH_PATH}") + pyblish.api.register_plugin_path(PUBLISH_PATH) + + pyblish.api.register_host(PUBLISH_HOST) + pyblish.api.register_target("local") + + return host_tools.show_publish() + + +if __name__ == "__main__": + result = main() + sys.exit(not bool(result)) diff --git a/openpype/hosts/flame/api/plugin.py b/openpype/hosts/flame/api/plugin.py index 26129ebaa6..ca113fd98a 100644 --- a/openpype/hosts/flame/api/plugin.py +++ b/openpype/hosts/flame/api/plugin.py @@ -596,18 +596,28 @@ class PublishableClip: if not hero_track and self.vertical_sync: # driving layer is set as negative match for (_in, _out), hero_data in self.vertical_clip_match.items(): - hero_data.update({"heroTrack": False}) - if _in == self.clip_in and _out == self.clip_out: + """ + Only one hero clip is expected in + `self.vertical_clip_match`, so this loop effectively + runs once: the first hero range covering this clip is + used and the loop breaks. + + `tag_hierarchy_data` is built from a copy for every + clip which is not a hero clip. + """
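The `_hero_data = deepcopy(hero_data)` line in this hunk matters because every non-hero clip is matched against the same shared `hero_data` dict; mutating it in place would leak `"heroTrack": False` and a renamed subset into later lookups. A minimal illustration with plain dicts (not the Flame objects):

```python
from copy import deepcopy

# Shared dict, like hero_data in vertical_clip_match.
hero_data = {"subset": "plateMain", "heroTrack": True}

# Mutating the reference directly corrupts the shared dict.
leaked = hero_data
leaked["subset"] = "plateMain0"

# With deepcopy each clip derives its own data and the source survives.
hero_data = {"subset": "plateMain", "heroTrack": True}
clip_data = deepcopy(hero_data)
clip_data.update({"heroTrack": False, "subset": "plateMain1"})
```

After the copy, `hero_data` still holds its original values while `clip_data` carries the per-clip overrides.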
+ """ + _hero_data = deepcopy(hero_data) + _hero_data.update({"heroTrack": False}) + if _in <= self.clip_in and _out >= self.clip_out: data_subset = hero_data["subset"] # add track index in case duplicity of names in hero data if self.subset in data_subset: - hero_data["subset"] = self.subset + str( + _hero_data["subset"] = self.subset + str( self.track_index) # in case track name and subset name is the same then add if self.subset_name == self.track_name: - hero_data["subset"] = self.subset + _hero_data["subset"] = self.subset # assing data to return hierarchy data to tag - tag_hierarchy_data = hero_data + tag_hierarchy_data = _hero_data + break # add data to return data dict self.marker_data.update(tag_hierarchy_data) diff --git a/openpype/hosts/fusion/api/menu.py b/openpype/hosts/fusion/api/menu.py index 39126935e6..42fbab70a6 100644 --- a/openpype/hosts/fusion/api/menu.py +++ b/openpype/hosts/fusion/api/menu.py @@ -1,6 +1,6 @@ import sys -from Qt import QtWidgets, QtCore, QtGui +from qtpy import QtWidgets, QtCore, QtGui from openpype.tools.utils import host_tools from openpype.style import load_stylesheet diff --git a/openpype/hosts/fusion/api/pipeline.py b/openpype/hosts/fusion/api/pipeline.py index b6092f7c1b..6315fe443d 100644 --- a/openpype/hosts/fusion/api/pipeline.py +++ b/openpype/hosts/fusion/api/pipeline.py @@ -6,7 +6,7 @@ import sys import logging import pyblish.api -from Qt import QtCore +from qtpy import QtCore from openpype.lib import ( Logger, diff --git a/openpype/hosts/fusion/api/pulse.py b/openpype/hosts/fusion/api/pulse.py index eb7ef3785d..762f05ba7e 100644 --- a/openpype/hosts/fusion/api/pulse.py +++ b/openpype/hosts/fusion/api/pulse.py @@ -1,7 +1,7 @@ import os import sys -from Qt import QtCore +from qtpy import QtCore class PulseThread(QtCore.QThread): diff --git a/openpype/hosts/fusion/deploy/MenuScripts/install_pyside2.py b/openpype/hosts/fusion/deploy/MenuScripts/install_pyside2.py index ab9f13ce05..e1240fd677 100644 --- 
a/openpype/hosts/fusion/deploy/MenuScripts/install_pyside2.py +++ b/openpype/hosts/fusion/deploy/MenuScripts/install_pyside2.py @@ -6,10 +6,10 @@ import importlib try: - from Qt import QtWidgets # noqa: F401 - from Qt import __binding__ - print(f"Qt binding: {__binding__}") - mod = importlib.import_module(__binding__) + from qtpy import API_NAME + + print(f"Qt binding: {API_NAME}") + mod = importlib.import_module(API_NAME) print(f"Qt path: {mod.__file__}") print("Qt library found, nothing to do..") diff --git a/openpype/hosts/fusion/deploy/Scripts/Comp/OpenPype/switch_ui.py b/openpype/hosts/fusion/deploy/Scripts/Comp/OpenPype/switch_ui.py index 93f775b24b..f08dc0bf2c 100644 --- a/openpype/hosts/fusion/deploy/Scripts/Comp/OpenPype/switch_ui.py +++ b/openpype/hosts/fusion/deploy/Scripts/Comp/OpenPype/switch_ui.py @@ -3,7 +3,7 @@ import sys import glob import logging -from Qt import QtWidgets, QtCore +from qtpy import QtWidgets, QtCore import qtawesome as qta diff --git a/openpype/hosts/fusion/plugins/inventory/set_tool_color.py b/openpype/hosts/fusion/plugins/inventory/set_tool_color.py index c7530ce674..a057ad1e89 100644 --- a/openpype/hosts/fusion/plugins/inventory/set_tool_color.py +++ b/openpype/hosts/fusion/plugins/inventory/set_tool_color.py @@ -1,4 +1,4 @@ -from Qt import QtGui, QtWidgets +from qtpy import QtGui, QtWidgets from openpype.pipeline import InventoryAction from openpype import style diff --git a/openpype/hosts/fusion/scripts/set_rendermode.py b/openpype/hosts/fusion/scripts/set_rendermode.py index f0638e4fe3..9d2bfef310 100644 --- a/openpype/hosts/fusion/scripts/set_rendermode.py +++ b/openpype/hosts/fusion/scripts/set_rendermode.py @@ -1,4 +1,4 @@ -from Qt import QtWidgets +from qtpy import QtWidgets import qtawesome from openpype.hosts.fusion.api import get_current_comp diff --git a/openpype/hosts/harmony/api/lib.py b/openpype/hosts/harmony/api/lib.py index e5e7ad1b7e..e1e77bfbee 100644 --- a/openpype/hosts/harmony/api/lib.py +++ 
b/openpype/hosts/harmony/api/lib.py @@ -14,7 +14,7 @@ import json import signal import time from uuid import uuid4 -from Qt import QtWidgets, QtCore, QtGui +from qtpy import QtWidgets, QtCore, QtGui import collections from .server import Server diff --git a/openpype/hosts/hiero/addon.py b/openpype/hosts/hiero/addon.py index f5bb94dbaa..1cc7a8637e 100644 --- a/openpype/hosts/hiero/addon.py +++ b/openpype/hosts/hiero/addon.py @@ -27,7 +27,12 @@ class HieroAddon(OpenPypeModule, IHostAddon): new_hiero_paths.append(norm_path) env["HIERO_PLUGIN_PATH"] = os.pathsep.join(new_hiero_paths) + # Remove auto screen scale factor for Qt + # - let Hiero decide its value env.pop("QT_AUTO_SCREEN_SCALE_FACTOR", None) + # Remove tkinter library paths if they are set + env.pop("TK_LIBRARY", None) + env.pop("TCL_LIBRARY", None) # Add vendor to PYTHONPATH python_path = env["PYTHONPATH"] diff --git a/openpype/hosts/houdini/api/__init__.py b/openpype/hosts/houdini/api/__init__.py index fddf7ab98d..2663a55f6f 100644 --- a/openpype/hosts/houdini/api/__init__.py +++ b/openpype/hosts/houdini/api/__init__.py @@ -1,24 +1,13 @@ from .pipeline import ( - install, - uninstall, - + HoudiniHost, ls, - containerise, + containerise ) from .plugin import ( Creator, ) -from .workio import ( - open_file, - save_file, - current_file, - has_unsaved_changes, - file_extensions, - work_root -) - from .lib import ( lsattr, lsattrs, @@ -29,22 +18,13 @@ from .lib import ( __all__ = [ - "install", - "uninstall", + "HoudiniHost", "ls", "containerise", "Creator", - # Workfiles API - "open_file", - "save_file", - "current_file", - "has_unsaved_changes", - "file_extensions", - "work_root", - # Utility functions "lsattr", "lsattrs", @@ -52,7 +32,3 @@ __all__ = [ "maintained_selection" ] - -# Backwards API compatibility -open = open_file -save = save_file diff --git a/openpype/hosts/houdini/api/lib.py b/openpype/hosts/houdini/api/lib.py index c8a7f92bb9..13f5a62ec3 100644 --- a/openpype/hosts/houdini/api/lib.py +++ 
b/openpype/hosts/houdini/api/lib.py @@ -1,6 +1,10 @@ +# -*- coding: utf-8 -*- +import sys +import os import uuid import logging from contextlib import contextmanager +import json import six @@ -8,10 +12,13 @@ from openpype.client import get_asset_by_name from openpype.pipeline import legacy_io from openpype.pipeline.context_tools import get_current_project_asset - import hou + +self = sys.modules[__name__] +self._parent = None log = logging.getLogger(__name__) +JSON_PREFIX = "JSON:::" def get_asset_fps(): @@ -29,23 +36,18 @@ def set_id(node, unique_id, overwrite=False): def get_id(node): - """ - Get the `cbId` attribute of the given node + """Get the `cbId` attribute of the given node. + Args: node (hou.Node): the name of the node to retrieve the attribute from Returns: - str + str: cbId attribute of the node. """ - if node is None: - return - - id = node.parm("id") - if node is None: - return - return id + if node is not None: + return node.parm("id") def generate_ids(nodes, asset_id=None): @@ -281,7 +283,7 @@ def render_rop(ropnode): raise RuntimeError("Render failed: {0}".format(exc)) -def imprint(node, data): +def imprint(node, data, update=False): """Store attributes with value on a node Depending on the type of attribute it creates the correct parameter @@ -290,49 +292,76 @@ http://www.sidefx.com/docs/houdini/hom/hou/ParmTemplate.html + Because of an update glitch where existing ParmTemplates on a node + cannot be overwritten using `setParmTemplates()` and + `parmTuplesInFolder()`, the update is done in a second pass. + Args: node(hou.Node): node object from Houdini data(dict): collection of attributes and their value + update (bool, optional): flag whether imprint should update + already existing data or leave it untouched and only + add new keys. 
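Stripped of the Houdini parameter handling, the per-key skip-or-update decision the new `imprint` makes can be modelled on plain dicts; `merge_imprint` is an illustrative sketch under that assumption, not the real API:

```python
def merge_imprint(existing, data, update=False):
    """Return (stored, updated) mimicking imprint's per-key logic.

    None values are skipped; keys equal to the current value are
    ignored; existing keys with a new value are replaced only when
    ``update`` is True; unknown keys are always added.
    """
    stored = dict(existing)
    updated = []
    for key, value in data.items():
        if value is None:
            continue
        if key in existing:
            if existing[key] == value:
                continue
            if update:
                stored[key] = value
                updated.append(key)
            continue
        stored[key] = value
    return stored, updated
```

In the real function the "stored" side is spare parm templates on the node and the "updated" list feeds the second `replaceSpareParmTuple` pass.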
Returns: None """ + if not data: + return + if not node: + self.log.error("Node is not set, calling imprint on invalid data.") + return - parm_group = node.parmTemplateGroup() + current_parms = {p.name(): p for p in node.spareParms()} + update_parms = [] + templates = [] - parm_folder = hou.FolderParmTemplate("folder", "Extra") for key, value in data.items(): if value is None: continue - if isinstance(value, float): - parm = hou.FloatParmTemplate(name=key, - label=key, - num_components=1, - default_value=(value,)) - elif isinstance(value, bool): - parm = hou.ToggleParmTemplate(name=key, - label=key, - default_value=value) - elif isinstance(value, int): - parm = hou.IntParmTemplate(name=key, - label=key, - num_components=1, - default_value=(value,)) - elif isinstance(value, six.string_types): - parm = hou.StringParmTemplate(name=key, - label=key, - num_components=1, - default_value=(value,)) - else: - raise TypeError("Unsupported type: %r" % type(value)) + parm = get_template_from_value(key, value) - parm_folder.addParmTemplate(parm) + if key in current_parms: + if node.evalParm(key) == data[key]: + continue + if not update: + log.debug(f"{key} already exists on {node}") + else: + log.debug(f"replacing {key}") + update_parms.append(parm) + continue + + templates.append(parm) + + parm_group = node.parmTemplateGroup() + parm_folder = parm_group.findFolder("Extra") + + # if folder doesn't exist yet, create one and append to it, + # else append to existing one + if not parm_folder: + parm_folder = hou.FolderParmTemplate("folder", "Extra") + parm_folder.setParmTemplates(templates) + parm_group.append(parm_folder) + else: + for template in templates: + parm_group.appendToFolder(parm_folder, template) + # this is needed because the pointer to folder + # is for some reason lost every call to `appendToFolder()` + parm_folder = parm_group.findFolder("Extra") - parm_group.append(parm_folder) node.setParmTemplateGroup(parm_group) + # TODO: Updating is done here, by calling 
probably deprecated functions. + # This needs to be addressed in the future. + if not update_parms: + return + + for parm in update_parms: + node.replaceSpareParmTuple(parm.name(), parm) + def lsattr(attr, value=None, root="/"): """Return nodes that have `attr` @@ -397,8 +426,22 @@ def read(node): """ # `spareParms` returns a tuple of hou.Parm objects - return {parameter.name(): parameter.eval() for - parameter in node.spareParms()} + data = {} + if not node: + return data + for parameter in node.spareParms(): + value = parameter.eval() + # test if value is json encoded dict + if isinstance(value, six.string_types) and \ + value.startswith(JSON_PREFIX): + try: + value = json.loads(value[len(JSON_PREFIX):]) + except json.JSONDecodeError: + # not a json + pass + data[parameter.name()] = value + + return data @contextmanager @@ -460,3 +503,89 @@ def reset_framerange(): hou.playbar.setFrameRange(frame_start, frame_end) hou.playbar.setPlaybackRange(frame_start, frame_end) hou.setFrame(frame_start) + + +def get_main_window(): + """Acquire Houdini's main window""" + if self._parent is None: + self._parent = hou.ui.mainQtWindow() + return self._parent + + +def get_template_from_value(key, value): + if isinstance(value, float): + parm = hou.FloatParmTemplate(name=key, + label=key, + num_components=1, + default_value=(value,)) + elif isinstance(value, bool): + parm = hou.ToggleParmTemplate(name=key, + label=key, + default_value=value) + elif isinstance(value, int): + parm = hou.IntParmTemplate(name=key, + label=key, + num_components=1, + default_value=(value,)) + elif isinstance(value, six.string_types): + parm = hou.StringParmTemplate(name=key, + label=key, + num_components=1, + default_value=(value,)) + elif isinstance(value, (dict, list, tuple)): + parm = hou.StringParmTemplate(name=key, + label=key, + num_components=1, + default_value=( + JSON_PREFIX + json.dumps(value),)) + else: + raise TypeError("Unsupported type: %r" % type(value)) + + return parm + + +def 
get_frame_data(node): + """Get the frame data: start frame, end frame and steps. + + Args: + node(hou.Node) + + Returns: + dict: frame data for start, end and steps. + + """ + data = {} + + if node.parm("trange") is None: + + return data + + if node.evalParm("trange") == 0: + self.log.debug("trange is 0") + return data + + data["frameStart"] = node.evalParm("f1") + data["frameEnd"] = node.evalParm("f2") + data["steps"] = node.evalParm("f3") + + return data + + +def splitext(name, allowed_multidot_extensions): + # type: (str, list) -> tuple + """Split file name to name and extension. + + Args: + name (str): File name to split. + allowed_multidot_extensions (list of str): List of allowed multidot + extensions. + + Returns: + tuple: Name and extension. + """ + + for ext in allowed_multidot_extensions: + if name.endswith(ext): + return name[:-len(ext)], ext + + return os.path.splitext(name) diff --git a/openpype/hosts/houdini/api/pipeline.py b/openpype/hosts/houdini/api/pipeline.py index e4af1913ef..f8e2c16d21 100644 --- a/openpype/hosts/houdini/api/pipeline.py +++ b/openpype/hosts/houdini/api/pipeline.py @@ -1,9 +1,13 @@ +# -*- coding: utf-8 -*- +"""Pipeline tools for OpenPype Houdini integration.""" import os import sys import logging import contextlib -import hou +import hou # noqa + +from openpype.host import HostBase, IWorkfileHost, ILoadHost, IPublishHost import pyblish.api @@ -26,6 +30,7 @@ from .lib import get_asset_fps log = logging.getLogger("openpype.hosts.houdini") AVALON_CONTAINERS = "/obj/AVALON_CONTAINERS" +CONTEXT_CONTAINER = "/obj/OpenPypeContext" IS_HEADLESS = not hasattr(hou, "ui") PLUGINS_DIR = os.path.join(HOUDINI_HOST_DIR, "plugins") @@ -35,71 +40,139 @@ CREATE_PATH = os.path.join(PLUGINS_DIR, "create") INVENTORY_PATH = os.path.join(PLUGINS_DIR, "inventory") -self = sys.modules[__name__] -self._has_been_setup = False -self._parent = False -self._events = dict() +class HoudiniHost(HostBase, IWorkfileHost, ILoadHost, IPublishHost): + name = "houdini" 
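The new `splitext` helper above is pure Python, so it can be exercised outside Houdini; this copy restates it verbatim for a quick check of the multi-dot behaviour:

```python
import os


def splitext(name, allowed_multidot_extensions):
    """Split a file name, honouring known multi-dot extensions first."""
    for ext in allowed_multidot_extensions:
        if name.endswith(ext):
            return name[:-len(ext)], ext
    # fall back to the standard single-dot split
    return os.path.splitext(name)
```

Plain `os.path.splitext` would split `"sim.bgeo.sc"` at the last dot only, which is exactly the case this helper exists for.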
+ def __init__(self): + super(HoudiniHost, self).__init__() + self._op_events = {} + self._has_been_setup = False -def install(): - _register_callbacks() + def install(self): + pyblish.api.register_host("houdini") + pyblish.api.register_host("hython") + pyblish.api.register_host("hpython") - pyblish.api.register_host("houdini") - pyblish.api.register_host("hython") - pyblish.api.register_host("hpython") + pyblish.api.register_plugin_path(PUBLISH_PATH) + register_loader_plugin_path(LOAD_PATH) + register_creator_plugin_path(CREATE_PATH) - pyblish.api.register_plugin_path(PUBLISH_PATH) - register_loader_plugin_path(LOAD_PATH) - register_creator_plugin_path(CREATE_PATH) + log.info("Installing callbacks ... ") + # register_event_callback("init", on_init) + self._register_callbacks() + register_event_callback("before.save", before_save) + register_event_callback("save", on_save) + register_event_callback("open", on_open) + register_event_callback("new", on_new) - log.info("Installing callbacks ... ") - # register_event_callback("init", on_init) - register_event_callback("before.save", before_save) - register_event_callback("save", on_save) - register_event_callback("open", on_open) - register_event_callback("new", on_new) + pyblish.api.register_callback( + "instanceToggled", on_pyblish_instance_toggled + ) - pyblish.api.register_callback( - "instanceToggled", on_pyblish_instance_toggled - ) + self._has_been_setup = True + # add houdini vendor packages + hou_pythonpath = os.path.join(HOUDINI_HOST_DIR, "vendor") - self._has_been_setup = True - # add houdini vendor packages - hou_pythonpath = os.path.join(HOUDINI_HOST_DIR, "vendor") + sys.path.append(hou_pythonpath) - sys.path.append(hou_pythonpath) + # Set asset settings for the empty scene directly after launch of + # Houdini so it initializes into the correct scene FPS, + # Frame Range, etc. + # TODO: make sure this doesn't trigger when + # opening with last workfile. 
+ _set_context_settings() + shelves.generate_shelves() - # Set asset settings for the empty scene directly after launch of Houdini - # so it initializes into the correct scene FPS, Frame Range, etc. - # todo: make sure this doesn't trigger when opening with last workfile - _set_context_settings() - shelves.generate_shelves() + def has_unsaved_changes(self): + return hou.hipFile.hasUnsavedChanges() + def get_workfile_extensions(self): + return [".hip", ".hiplc", ".hipnc"] -def uninstall(): - """Uninstall Houdini-specific functionality of avalon-core. + def save_workfile(self, dst_path=None): + # Force forwards slashes to avoid segfault + if dst_path: + dst_path = dst_path.replace("\\", "/") + hou.hipFile.save(file_name=dst_path, + save_to_recent_files=True) + return dst_path - This function is called automatically on calling `api.uninstall()`. - """ + def open_workfile(self, filepath): + # Force forwards slashes to avoid segfault + filepath = filepath.replace("\\", "/") - pyblish.api.deregister_host("hython") - pyblish.api.deregister_host("hpython") - pyblish.api.deregister_host("houdini") + hou.hipFile.load(filepath, + suppress_save_prompt=True, + ignore_load_warnings=False) + return filepath -def _register_callbacks(): - for event in self._events.copy().values(): - if event is None: - continue + def get_current_workfile(self): + current_filepath = hou.hipFile.path() + if (os.path.basename(current_filepath) == "untitled.hip" and + not os.path.exists(current_filepath)): + # By default a new scene in houdini is saved in the current + # working directory as "untitled.hip" so we need to capture + # that and consider it 'not saved' when it's in that state. 
+ return None - try: - hou.hipFile.removeEventCallback(event) - except RuntimeError as e: - log.info(e) + return current_filepath - self._events[on_file_event_callback] = hou.hipFile.addEventCallback( - on_file_event_callback - ) + def get_containers(self): + return ls() + + def _register_callbacks(self): + for event in self._op_events.copy().values(): + if event is None: + continue + + try: + hou.hipFile.removeEventCallback(event) + except RuntimeError as e: + log.info(e) + + self._op_events[on_file_event_callback] = hou.hipFile.addEventCallback( + on_file_event_callback + ) + + @staticmethod + def create_context_node(): + """Helper for creating context holding node. + + Returns: + hou.Node: context node + + """ + obj_network = hou.node("/obj") + op_ctx = obj_network.createNode( + "null", node_name="OpenPypeContext") + op_ctx.moveToGoodPosition() + op_ctx.setBuiltExplicitly(False) + op_ctx.setCreatorState("OpenPype") + op_ctx.setComment("OpenPype node to hold context metadata") + op_ctx.setColor(hou.Color((0.081, 0.798, 0.810))) + op_ctx.hide(True) + return op_ctx + + def update_context_data(self, data, changes): + op_ctx = hou.node(CONTEXT_CONTAINER) + if not op_ctx: + op_ctx = self.create_context_node() + + lib.imprint(op_ctx, data) + + def get_context_data(self): + op_ctx = hou.node(CONTEXT_CONTAINER) + if not op_ctx: + op_ctx = self.create_context_node() + return lib.read(op_ctx) + + def save_file(self, dst_path=None): + # Force forwards slashes to avoid segfault + dst_path = dst_path.replace("\\", "/") + + hou.hipFile.save(file_name=dst_path, + save_to_recent_files=True) def on_file_event_callback(event): @@ -113,22 +186,6 @@ def on_file_event_callback(event): emit_event("new") -def get_main_window(): - """Acquire Houdini's main window""" - if self._parent is None: - self._parent = hou.ui.mainQtWindow() - return self._parent - - -def teardown(): - """Remove integration""" - if not self._has_been_setup: - return - - self._has_been_setup = False - 
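`update_context_data`/`get_context_data` above round-trip dicts through string parms using the `JSON:::` prefix defined in `lib.py`. The encode/decode pair, isolated from `hou` (helper names here are illustrative, not the module's API):

```python
import json

JSON_PREFIX = "JSON:::"


def encode_parm_value(value):
    """Serialize containers to a prefixed JSON string; pass through rest."""
    if isinstance(value, (dict, list, tuple)):
        return JSON_PREFIX + json.dumps(value)
    return value


def decode_parm_value(value):
    """Reverse of encode_parm_value, tolerating plain strings."""
    if isinstance(value, str) and value.startswith(JSON_PREFIX):
        try:
            return json.loads(value[len(JSON_PREFIX):])
        except json.JSONDecodeError:
            # not valid json, keep the raw string
            pass
    return value
```

This mirrors what `imprint`'s `get_template_from_value` and `read` do for dict, list and tuple values on spare parms (tuples come back as lists, as with any JSON round-trip).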
print("pyblish: Integration torn down successfully") - - def containerise(name, namespace, nodes, @@ -251,7 +308,7 @@ def on_open(): log.warning("Scene has outdated content.") # Get main window - parent = get_main_window() + parent = lib.get_main_window() if parent is None: log.info("Skipping outdated content pop-up " "because Houdini window can't be found.") diff --git a/openpype/hosts/houdini/api/plugin.py b/openpype/hosts/houdini/api/plugin.py index 2bbb65aa05..e15e27c83f 100644 --- a/openpype/hosts/houdini/api/plugin.py +++ b/openpype/hosts/houdini/api/plugin.py @@ -1,14 +1,19 @@ # -*- coding: utf-8 -*- """Houdini specific Avalon/Pyblish plugin definitions.""" import sys +from abc import ( + ABCMeta +) import six - import hou from openpype.pipeline import ( CreatorError, - LegacyCreator + LegacyCreator, + Creator as NewCreator, + CreatedInstance ) -from .lib import imprint +from openpype.lib import BoolDef +from .lib import imprint, read, lsattr class OpenPypeCreatorError(CreatorError): @@ -30,12 +35,15 @@ class Creator(LegacyCreator): when hovering over a node. The information is visible under the name of the node. + Deprecated: + This creator is deprecated and will be removed in a future version. + """ defaults = ['Main'] def __init__(self, *args, **kwargs): super(Creator, self).__init__(*args, **kwargs) - self.nodes = list() + self.nodes = [] def process(self): """This is the base functionality to create instances in Houdini @@ -84,3 +92,187 @@ class Creator(LegacyCreator): OpenPypeCreatorError, OpenPypeCreatorError("Creator error: {}".format(er)), sys.exc_info()[2]) + + +class HoudiniCreatorBase(object): + @staticmethod + def cache_subsets(shared_data): + """Cache instances for Creators to shared data. + + Create `houdini_cached_subsets` key when needed in shared data and + fill it with all collected instances from the scene under its + respective creator identifiers. 
+
+ If legacy instances are detected in the scene, create
+ `houdini_cached_legacy_subsets` there and fill it with
+ all legacy subsets under family as a key.
+
+ Args:
+ shared_data (Dict[str, Any]): Shared data.
+
+ Returns:
+ Dict[str, Any]: Shared data dictionary.
+
+ """
+ if shared_data.get("houdini_cached_subsets") is None:
+ shared_data["houdini_cached_subsets"] = {}
+ if shared_data.get("houdini_cached_legacy_subsets") is None:
+ shared_data["houdini_cached_legacy_subsets"] = {}
+ cached_instances = lsattr("id", "pyblish.avalon.instance")
+ for i in cached_instances:
+ if not i.parm("creator_identifier"):
+ # we have legacy instance
+ family = i.parm("family").eval()
+ if family not in shared_data[
+ "houdini_cached_legacy_subsets"]:
+ shared_data["houdini_cached_legacy_subsets"][
+ family] = [i]
+ else:
+ shared_data[
+ "houdini_cached_legacy_subsets"][family].append(i)
+ continue
+
+ creator_id = i.parm("creator_identifier").eval()
+ if creator_id not in shared_data["houdini_cached_subsets"]:
+ shared_data["houdini_cached_subsets"][creator_id] = [i]
+ else:
+ shared_data[
+ "houdini_cached_subsets"][creator_id].append(i) # noqa
+ return shared_data
+
+ @staticmethod
+ def create_instance_node(
+ node_name, parent,
+ node_type="geometry"):
+ # type: (str, str, str) -> hou.Node
+ """Create node representing instance.
+
+ Arguments:
+ node_name (str): Name of the new node.
+ parent (str): Path of the parent node.
+ node_type (str, optional): Type of the node.
+
+ Returns:
+ hou.Node: Newly created instance node.
+ + """ + parent_node = hou.node(parent) + instance_node = parent_node.createNode( + node_type, node_name=node_name) + instance_node.moveToGoodPosition() + return instance_node + + +@six.add_metaclass(ABCMeta) +class HoudiniCreator(NewCreator, HoudiniCreatorBase): + """Base class for most of the Houdini creator plugins.""" + selected_nodes = [] + + def create(self, subset_name, instance_data, pre_create_data): + try: + if pre_create_data.get("use_selection"): + self.selected_nodes = hou.selectedNodes() + + # Get the node type and remove it from the data, not needed + node_type = instance_data.pop("node_type", None) + if node_type is None: + node_type = "geometry" + + instance_node = self.create_instance_node( + subset_name, "/out", node_type) + + self.customize_node_look(instance_node) + + instance_data["instance_node"] = instance_node.path() + instance = CreatedInstance( + self.family, + subset_name, + instance_data, + self) + self._add_instance_to_context(instance) + imprint(instance_node, instance.data_to_store()) + return instance + + except hou.Error as er: + six.reraise( + OpenPypeCreatorError, + OpenPypeCreatorError("Creator error: {}".format(er)), + sys.exc_info()[2]) + + def lock_parameters(self, node, parameters): + """Lock list of specified parameters on the node. + + Args: + node (hou.Node): Houdini node to lock parameters on. + parameters (list of str): List of parameter names. 
+
+ """
+ for name in parameters:
+ try:
+ parm = node.parm(name)
+ parm.lock(True)
+ except AttributeError:
+ self.log.debug("missing parameter to lock: {}".format(name))
+
+ def collect_instances(self):
+ # cache instances if missing
+ self.cache_subsets(self.collection_shared_data)
+ for instance in self.collection_shared_data[
+ "houdini_cached_subsets"].get(self.identifier, []):
+ created_instance = CreatedInstance.from_existing(
+ read(instance), self
+ )
+ self._add_instance_to_context(created_instance)
+
+ def update_instances(self, update_list):
+ for created_inst, _changes in update_list:
+ instance_node = hou.node(created_inst.get("instance_node"))
+
+ new_values = {
+ key: new_value
+ for key, (_old_value, new_value) in _changes.items()
+ }
+ imprint(
+ instance_node,
+ new_values,
+ update=True
+ )
+
+ def remove_instances(self, instances):
+ """Remove specified instances from the scene.
+
+ This destroys the instance node in the scene and removes
+ the instance from the create context.
+
+ """
+ for instance in instances:
+ instance_node = hou.node(instance.data.get("instance_node"))
+ if instance_node:
+ instance_node.destroy()
+
+ self._remove_instance_from_context(instance)
+
+ def get_pre_create_attr_defs(self):
+ return [
+ BoolDef("use_selection", label="Use selection")
+ ]
+
+ @staticmethod
+ def customize_node_look(
+ node, color=None,
+ shape="chevron_down"):
+ """Set custom look for instance nodes.
+
+ Args:
+ node (hou.Node): Node to set the look for.
+ color (hou.Color, Optional): Color of the node.
+ shape (str, Optional): Shape name of the node.
+ + Returns: + None + + """ + if not color: + color = hou.Color((0.616, 0.871, 0.769)) + node.setUserData('nodeshape', shape) + node.setColor(color) diff --git a/openpype/hosts/houdini/api/workio.py b/openpype/hosts/houdini/api/workio.py deleted file mode 100644 index 5f7efff333..0000000000 --- a/openpype/hosts/houdini/api/workio.py +++ /dev/null @@ -1,57 +0,0 @@ -"""Host API required Work Files tool""" -import os - -import hou - - -def file_extensions(): - return [".hip", ".hiplc", ".hipnc"] - - -def has_unsaved_changes(): - return hou.hipFile.hasUnsavedChanges() - - -def save_file(filepath): - - # Force forwards slashes to avoid segfault - filepath = filepath.replace("\\", "/") - - hou.hipFile.save(file_name=filepath, - save_to_recent_files=True) - - return filepath - - -def open_file(filepath): - - # Force forwards slashes to avoid segfault - filepath = filepath.replace("\\", "/") - - hou.hipFile.load(filepath, - suppress_save_prompt=True, - ignore_load_warnings=False) - - return filepath - - -def current_file(): - - current_filepath = hou.hipFile.path() - if (os.path.basename(current_filepath) == "untitled.hip" and - not os.path.exists(current_filepath)): - # By default a new scene in houdini is saved in the current - # working directory as "untitled.hip" so we need to capture - # that and consider it 'not saved' when it's in that state. 
- return None
-
- return current_filepath
-
-
-def work_root(session):
- work_dir = session["AVALON_WORKDIR"]
- scene_dir = session.get("AVALON_SCENEDIR")
- if scene_dir:
- return os.path.join(work_dir, scene_dir)
- else:
- return work_dir
diff --git a/openpype/hosts/houdini/plugins/create/convert_legacy.py b/openpype/hosts/houdini/plugins/create/convert_legacy.py
new file mode 100644
index 0000000000..4b8041b4f5
--- /dev/null
+++ b/openpype/hosts/houdini/plugins/create/convert_legacy.py
@@ -0,0 +1,74 @@
+# -*- coding: utf-8 -*-
+"""Convertor for legacy Houdini subsets."""
+from openpype.pipeline.create.creator_plugins import SubsetConvertorPlugin
+from openpype.hosts.houdini.api.lib import imprint
+
+
+class HoudiniLegacyConvertor(SubsetConvertorPlugin):
+ """Find and convert any legacy subsets in the scene.
+
+ This convertor finds all legacy subsets in the scene and
+ transforms them to the current system. Since the old subsets don't
+ retain any information about their original creators, the only mapping
+ we can do is based on their families.
+
+ Its limitation is that multiple creators can create subsets of the
+ same family and there is no way to tell them apart. This code should
+ nevertheless cover all creators that ship with OpenPype.
+
+ """
+ identifier = "io.openpype.creators.houdini.legacy"
+ family_to_id = {
+ "camera": "io.openpype.creators.houdini.camera",
+ "ass": "io.openpype.creators.houdini.ass",
+ "imagesequence": "io.openpype.creators.houdini.imagesequence",
+ "hda": "io.openpype.creators.houdini.hda",
+ "pointcache": "io.openpype.creators.houdini.pointcache",
+ "redshiftproxy": "io.openpype.creators.houdini.redshiftproxy",
+ "redshift_rop": "io.openpype.creators.houdini.redshift_rop",
+ "usd": "io.openpype.creators.houdini.usd",
+ "usdrender": "io.openpype.creators.houdini.usdrender",
+ "vdbcache": "io.openpype.creators.houdini.vdbcache"
+ }
+
+ def __init__(self, *args, **kwargs):
+ super(HoudiniLegacyConvertor, self).__init__(*args, **kwargs)
+ self.legacy_subsets = {}
+
+ def find_instances(self):
+ """Find legacy subsets in the scene.
+
+ Legacy subsets are the ones that don't have a `creator_identifier`
+ parameter on them.
+
+ This uses the entries cached by
+ :py:meth:`~HoudiniCreatorBase.cache_subsets()`.
+
+ """
+ self.legacy_subsets = self.collection_shared_data.get(
+ "houdini_cached_legacy_subsets")
+ if not self.legacy_subsets:
+ return
+ self.add_convertor_item("Found {} incompatible subset{}.".format(
+ len(self.legacy_subsets), "s" if len(self.legacy_subsets) > 1 else "")
+ )
+
+ def convert(self):
+ """Convert all legacy subsets to the current system.
+
+ It is enough to add `creator_identifier` and `instance_node`.
+ + """ + if not self.legacy_subsets: + return + + for family, subsets in self.legacy_subsets.items(): + if family in self.family_to_id: + for subset in subsets: + data = { + "creator_identifier": self.family_to_id[family], + "instance_node": subset.path() + } + self.log.info("Converting {} to {}".format( + subset.path(), self.family_to_id[family])) + imprint(subset, data) diff --git a/openpype/hosts/houdini/plugins/create/create_alembic_camera.py b/openpype/hosts/houdini/plugins/create/create_alembic_camera.py index eef86005f5..fec64eb4a1 100644 --- a/openpype/hosts/houdini/plugins/create/create_alembic_camera.py +++ b/openpype/hosts/houdini/plugins/create/create_alembic_camera.py @@ -1,46 +1,49 @@ +# -*- coding: utf-8 -*- +"""Creator plugin for creating alembic camera subsets.""" from openpype.hosts.houdini.api import plugin +from openpype.pipeline import CreatedInstance, CreatorError -class CreateAlembicCamera(plugin.Creator): - """Single baked camera from Alembic ROP""" +class CreateAlembicCamera(plugin.HoudiniCreator): + """Single baked camera from Alembic ROP.""" - name = "camera" + identifier = "io.openpype.creators.houdini.camera" label = "Camera (Abc)" family = "camera" icon = "camera" - def __init__(self, *args, **kwargs): - super(CreateAlembicCamera, self).__init__(*args, **kwargs) + def create(self, subset_name, instance_data, pre_create_data): + import hou - # Remove the active, we are checking the bypass flag of the nodes - self.data.pop("active", None) + instance_data.pop("active", None) + instance_data.update({"node_type": "alembic"}) - # Set node type to create for output - self.data.update({"node_type": "alembic"}) + instance = super(CreateAlembicCamera, self).create( + subset_name, + instance_data, + pre_create_data) # type: CreatedInstance - def _process(self, instance): - """Creator main entry point. - - Args: - instance (hou.Node): Created Houdini instance. 
- - """ + instance_node = hou.node(instance.get("instance_node")) parms = { - "filename": "$HIP/pyblish/%s.abc" % self.name, + "filename": hou.text.expandString( + "$HIP/pyblish/{}.abc".format(subset_name)), "use_sop_path": False, } - if self.nodes: - node = self.nodes[0] - path = node.path() + if self.selected_nodes: + if len(self.selected_nodes) > 1: + raise CreatorError("More than one item selected.") + path = self.selected_nodes[0].path() # Split the node path into the first root and the remainder # So we can set the root and objects parameters correctly _, root, remainder = path.split("/", 2) parms.update({"root": "/" + root, "objects": remainder}) - instance.setParms(parms) + instance_node.setParms(parms) # Lock the Use Sop Path setting so the # user doesn't accidentally enable it. - instance.parm("use_sop_path").lock(True) - instance.parm("trange").set(1) + to_lock = ["use_sop_path"] + self.lock_parameters(instance_node, to_lock) + + instance_node.parm("trange").set(1) diff --git a/openpype/hosts/houdini/plugins/create/create_arnold_ass.py b/openpype/hosts/houdini/plugins/create/create_arnold_ass.py index 72088e43b0..8b310753d0 100644 --- a/openpype/hosts/houdini/plugins/create/create_arnold_ass.py +++ b/openpype/hosts/houdini/plugins/create/create_arnold_ass.py @@ -1,9 +1,12 @@ +# -*- coding: utf-8 -*- +"""Creator plugin for creating Arnold ASS files.""" from openpype.hosts.houdini.api import plugin -class CreateArnoldAss(plugin.Creator): +class CreateArnoldAss(plugin.HoudiniCreator): """Arnold .ass Archive""" + identifier = "io.openpype.creators.houdini.ass" label = "Arnold ASS" family = "ass" icon = "magic" @@ -12,42 +15,39 @@ class CreateArnoldAss(plugin.Creator): # Default extension: `.ass` or `.ass.gz` ext = ".ass" - def __init__(self, *args, **kwargs): - super(CreateArnoldAss, self).__init__(*args, **kwargs) + def create(self, subset_name, instance_data, pre_create_data): + import hou - # Remove the active, we are checking the bypass flag of the nodes 
- self.data.pop("active", None) + instance_data.pop("active", None) + instance_data.update({"node_type": "arnold"}) - self.data.update({"node_type": "arnold"}) + instance = super(CreateArnoldAss, self).create( + subset_name, + instance_data, + pre_create_data) # type: plugin.CreatedInstance - def process(self): - node = super(CreateArnoldAss, self).process() - - basename = node.name() - node.setName(basename + "_ASS", unique_name=True) + instance_node = hou.node(instance.get("instance_node")) # Hide Properties Tab on Arnold ROP since that's used # for rendering instead of .ass Archive Export - parm_template_group = node.parmTemplateGroup() + parm_template_group = instance_node.parmTemplateGroup() parm_template_group.hideFolder("Properties", True) - node.setParmTemplateGroup(parm_template_group) + instance_node.setParmTemplateGroup(parm_template_group) - filepath = '$HIP/pyblish/`chs("subset")`.$F4{}'.format(self.ext) + filepath = "{}{}".format( + hou.text.expandString("$HIP/pyblish/"), + "{}.$F4{}".format(subset_name, self.ext) + ) parms = { # Render frame range "trange": 1, - # Arnold ROP settings "ar_ass_file": filepath, "ar_ass_export_enable": 1 } - node.setParms(parms) - # Lock the ASS export attribute - node.parm("ar_ass_export_enable").lock(True) + instance_node.setParms(parms) - # Lock some Avalon attributes - to_lock = ["family", "id"] - for name in to_lock: - parm = node.parm(name) - parm.lock(True) + # Lock any parameters in this list + to_lock = ["ar_ass_export_enable", "family", "id"] + self.lock_parameters(instance_node, to_lock) diff --git a/openpype/hosts/houdini/plugins/create/create_composite.py b/openpype/hosts/houdini/plugins/create/create_composite.py index e278708076..45af2b0630 100644 --- a/openpype/hosts/houdini/plugins/create/create_composite.py +++ b/openpype/hosts/houdini/plugins/create/create_composite.py @@ -1,44 +1,42 @@ +# -*- coding: utf-8 -*- +"""Creator plugin for creating composite sequences.""" from openpype.hosts.houdini.api 
import plugin +from openpype.pipeline import CreatedInstance -class CreateCompositeSequence(plugin.Creator): +class CreateCompositeSequence(plugin.HoudiniCreator): """Composite ROP to Image Sequence""" + identifier = "io.openpype.creators.houdini.imagesequence" label = "Composite (Image Sequence)" family = "imagesequence" icon = "gears" - def __init__(self, *args, **kwargs): - super(CreateCompositeSequence, self).__init__(*args, **kwargs) + ext = ".exr" - # Remove the active, we are checking the bypass flag of the nodes - self.data.pop("active", None) + def create(self, subset_name, instance_data, pre_create_data): + import hou # noqa - # Type of ROP node to create - self.data.update({"node_type": "comp"}) + instance_data.pop("active", None) + instance_data.update({"node_type": "comp"}) - def _process(self, instance): - """Creator main entry point. + instance = super(CreateCompositeSequence, self).create( + subset_name, + instance_data, + pre_create_data) # type: CreatedInstance - Args: - instance (hou.Node): Created Houdini instance. 
+ instance_node = hou.node(instance.get("instance_node")) + filepath = "{}{}".format( + hou.text.expandString("$HIP/pyblish/"), + "{}.$F4{}".format(subset_name, self.ext) + ) + parms = { + "trange": 1, + "copoutput": filepath + } - """ - parms = {"copoutput": "$HIP/pyblish/%s.$F4.exr" % self.name} - - if self.nodes: - node = self.nodes[0] - parms.update({"coppath": node.path()}) - - instance.setParms(parms) + instance_node.setParms(parms) # Lock any parameters in this list to_lock = ["prim_to_detail_pattern"] - for name in to_lock: - try: - parm = instance.parm(name) - parm.lock(True) - except AttributeError: - # missing lock pattern - self.log.debug( - "missing lock pattern {}".format(name)) + self.lock_parameters(instance_node, to_lock) diff --git a/openpype/hosts/houdini/plugins/create/create_hda.py b/openpype/hosts/houdini/plugins/create/create_hda.py index b98da8b8bb..4bed83c2e9 100644 --- a/openpype/hosts/houdini/plugins/create/create_hda.py +++ b/openpype/hosts/houdini/plugins/create/create_hda.py @@ -1,28 +1,22 @@ # -*- coding: utf-8 -*- -import hou - +"""Creator plugin for creating publishable Houdini Digital Assets.""" from openpype.client import ( get_asset_by_name, get_subsets, ) from openpype.pipeline import legacy_io -from openpype.hosts.houdini.api import lib from openpype.hosts.houdini.api import plugin -class CreateHDA(plugin.Creator): +class CreateHDA(plugin.HoudiniCreator): """Publish Houdini Digital Asset file.""" - name = "hda" + identifier = "io.openpype.creators.houdini.hda" label = "Houdini Digital Asset (Hda)" family = "hda" icon = "gears" maintain_selection = False - def __init__(self, *args, **kwargs): - super(CreateHDA, self).__init__(*args, **kwargs) - self.data.pop("active", None) - def _check_existing(self, subset_name): # type: (str) -> bool """Check if existing subset name versions already exists.""" @@ -40,55 +34,51 @@ class CreateHDA(plugin.Creator): } return subset_name.lower() in existing_subset_names_low - def _process(self, 
instance):
- subset_name = self.data["subset"]
- # get selected nodes
- out = hou.node("/obj")
- self.nodes = hou.selectedNodes()
+ def _create_instance_node(
+ self, node_name, parent, node_type="geometry"):
+ import hou
- if (self.options or {}).get("useSelection") and self.nodes:
- # if we have `use selection` enabled and we have some
+ parent_node = hou.node("/obj")
+ if self.selected_nodes:
+ # if we have `use selection` enabled, and we have some
 # selected nodes ...
- subnet = out.collapseIntoSubnet(
- self.nodes,
- subnet_name="{}_subnet".format(self.name))
+ subnet = parent_node.collapseIntoSubnet(
+ self.selected_nodes,
+ subnet_name="{}_subnet".format(node_name))
 subnet.moveToGoodPosition()
 to_hda = subnet
 else:
- to_hda = out.createNode(
- "subnet", node_name="{}_subnet".format(self.name))
+ to_hda = parent_node.createNode(
+ "subnet", node_name="{}_subnet".format(node_name))
 if not to_hda.type().definition():
 # if node type has not its definition, it is not user
 # created hda. We test if hda can be created from the node.
 if not to_hda.canCreateDigitalAsset():
- raise Exception(
+ raise plugin.OpenPypeCreatorError(
 "cannot create hda from node {}".format(to_hda))
 hda_node = to_hda.createDigitalAsset(
- name=subset_name,
- hda_file_name="$HIP/{}.hda".format(subset_name)
+ name=node_name,
+ hda_file_name="$HIP/{}.hda".format(node_name)
 )
 hda_node.layoutChildren()
- elif self._check_existing(subset_name):
+ elif self._check_existing(node_name):
 raise plugin.OpenPypeCreatorError(
 ("subset {} is already published with different HDA"
- "definition.").format(subset_name))
+ " definition.").format(node_name))
 else:
 hda_node = to_hda
- hda_node.setName(subset_name)
-
- # delete node created by Avalon in /out
- # this needs to be addressed in future Houdini workflow refactor.
- - hou.node("/out/{}".format(subset_name)).destroy() - - try: - lib.imprint(hda_node, self.data) - except hou.OperationFailed: - raise plugin.OpenPypeCreatorError( - ("Cannot set metadata on asset. Might be that it already is " - "OpenPype asset.") - ) - + hda_node.setName(node_name) + self.customize_node_look(hda_node) return hda_node + + def create(self, subset_name, instance_data, pre_create_data): + instance_data.pop("active", None) + + instance = super(CreateHDA, self).create( + subset_name, + instance_data, + pre_create_data) # type: plugin.CreatedInstance + + return instance diff --git a/openpype/hosts/houdini/plugins/create/create_pointcache.py b/openpype/hosts/houdini/plugins/create/create_pointcache.py index feb683edf6..6b6b277422 100644 --- a/openpype/hosts/houdini/plugins/create/create_pointcache.py +++ b/openpype/hosts/houdini/plugins/create/create_pointcache.py @@ -1,48 +1,51 @@ +# -*- coding: utf-8 -*- +"""Creator plugin for creating pointcache alembics.""" from openpype.hosts.houdini.api import plugin +from openpype.pipeline import CreatedInstance -class CreatePointCache(plugin.Creator): +class CreatePointCache(plugin.HoudiniCreator): """Alembic ROP to pointcache""" - - name = "pointcache" + identifier = "io.openpype.creators.houdini.pointcache" label = "Point Cache" family = "pointcache" icon = "gears" - def __init__(self, *args, **kwargs): - super(CreatePointCache, self).__init__(*args, **kwargs) + def create(self, subset_name, instance_data, pre_create_data): + import hou - # Remove the active, we are checking the bypass flag of the nodes - self.data.pop("active", None) + instance_data.pop("active", None) + instance_data.update({"node_type": "alembic"}) - self.data.update({"node_type": "alembic"}) + instance = super(CreatePointCache, self).create( + subset_name, + instance_data, + pre_create_data) # type: CreatedInstance - def _process(self, instance): - """Creator main entry point. - - Args: - instance (hou.Node): Created Houdini instance. 
- - """ + instance_node = hou.node(instance.get("instance_node")) parms = { - "use_sop_path": True, # Export single node from SOP Path - "build_from_path": True, # Direct path of primitive in output - "path_attrib": "path", # Pass path attribute for output + "use_sop_path": True, + "build_from_path": True, + "path_attrib": "path", "prim_to_detail_pattern": "cbId", - "format": 2, # Set format to Ogawa - "facesets": 0, # No face sets (by default exclude them) - "filename": "$HIP/pyblish/%s.abc" % self.name, + "format": 2, + "facesets": 0, + "filename": hou.text.expandString( + "$HIP/pyblish/{}.abc".format(subset_name)) } - if self.nodes: - node = self.nodes[0] - parms.update({"sop_path": node.path()}) + if self.selected_nodes: + parms["sop_path"] = self.selected_nodes[0].path() - instance.setParms(parms) - instance.parm("trange").set(1) + # try to find output node + for child in self.selected_nodes[0].children(): + if child.type().name() == "output": + parms["sop_path"] = child.path() + break + + instance_node.setParms(parms) + instance_node.parm("trange").set(1) # Lock any parameters in this list to_lock = ["prim_to_detail_pattern"] - for name in to_lock: - parm = instance.parm(name) - parm.lock(True) + self.lock_parameters(instance_node, to_lock) diff --git a/openpype/hosts/houdini/plugins/create/create_redshift_proxy.py b/openpype/hosts/houdini/plugins/create/create_redshift_proxy.py index da4d80bf2b..8b6a68437b 100644 --- a/openpype/hosts/houdini/plugins/create/create_redshift_proxy.py +++ b/openpype/hosts/houdini/plugins/create/create_redshift_proxy.py @@ -1,18 +1,20 @@ +# -*- coding: utf-8 -*- +"""Creator plugin for creating Redshift proxies.""" from openpype.hosts.houdini.api import plugin +from openpype.pipeline import CreatedInstance -class CreateRedshiftProxy(plugin.Creator): +class CreateRedshiftProxy(plugin.HoudiniCreator): """Redshift Proxy""" - + identifier = "io.openpype.creators.houdini.redshiftproxy" label = "Redshift Proxy" family = "redshiftproxy" 
icon = "magic" - def __init__(self, *args, **kwargs): - super(CreateRedshiftProxy, self).__init__(*args, **kwargs) - + def create(self, subset_name, instance_data, pre_create_data): + import hou # noqa # Remove the active, we are checking the bypass flag of the nodes - self.data.pop("active", None) + instance_data.pop("active", None) # Redshift provides a `Redshift_Proxy_Output` node type which shows # a limited set of parameters by default and is set to extract a @@ -21,28 +23,24 @@ class CreateRedshiftProxy(plugin.Creator): # why this happens. # TODO: Somehow enforce so that it only shows the original limited # attributes of the Redshift_Proxy_Output node type - self.data.update({"node_type": "Redshift_Proxy_Output"}) + instance_data.update({"node_type": "Redshift_Proxy_Output"}) - def _process(self, instance): - """Creator main entry point. + instance = super(CreateRedshiftProxy, self).create( + subset_name, + instance_data, + pre_create_data) # type: CreatedInstance - Args: - instance (hou.Node): Created Houdini instance. 
+ instance_node = hou.node(instance.get("instance_node"))
- """
 parms = {
- "RS_archive_file": '$HIP/pyblish/`chs("subset")`.$F4.rs',
+ "RS_archive_file": '$HIP/pyblish/{}.$F4.rs'.format(subset_name),
 }
- if self.nodes:
- node = self.nodes[0]
- path = node.path()
- parms["RS_archive_sopPath"] = path
+ if self.selected_nodes:
+ parms["RS_archive_sopPath"] = self.selected_nodes[0].path()
- instance.setParms(parms)
+ instance_node.setParms(parms)
 # Lock some Avalon attributes
- to_lock = ["family", "id"]
- for name in to_lock:
- parm = instance.parm(name)
- parm.lock(True)
+ to_lock = ["family", "id", "prim_to_detail_pattern"]
+ self.lock_parameters(instance_node, to_lock)
diff --git a/openpype/hosts/houdini/plugins/create/create_redshift_rop.py b/openpype/hosts/houdini/plugins/create/create_redshift_rop.py
index 6949ca169b..2cbe9bfda1 100644
--- a/openpype/hosts/houdini/plugins/create/create_redshift_rop.py
+++ b/openpype/hosts/houdini/plugins/create/create_redshift_rop.py
@@ -1,41 +1,40 @@
-import hou
+# -*- coding: utf-8 -*-
+"""Creator plugin to create Redshift ROP."""
 from openpype.hosts.houdini.api import plugin
+from openpype.pipeline import CreatedInstance
-class CreateRedshiftROP(plugin.Creator):
+class CreateRedshiftROP(plugin.HoudiniCreator):
 """Redshift ROP"""
-
+ identifier = "io.openpype.creators.houdini.redshift_rop"
 label = "Redshift ROP"
 family = "redshift_rop"
 icon = "magic"
 defaults = ["master"]
- def __init__(self, *args, **kwargs):
- super(CreateRedshiftROP, self).__init__(*args, **kwargs)
+ def create(self, subset_name, instance_data, pre_create_data):
+ import hou # noqa
+
+ instance_data.pop("active", None)
+ instance_data.update({"node_type": "Redshift_ROP"})
+ # Add chunk size attribute
+ instance_data["chunkSize"] = 10
 # Clear the family prefix from the subset
- subset = self.data["subset"]
+ subset = subset_name
 subset_no_prefix = subset[len(self.family):]
 subset_no_prefix = subset_no_prefix[0].lower() + subset_no_prefix[1:]
-
self.data["subset"] = subset_no_prefix + subset_name = subset_no_prefix - # Add chunk size attribute - self.data["chunkSize"] = 10 + instance = super(CreateRedshiftROP, self).create( + subset_name, + instance_data, + pre_create_data) # type: CreatedInstance - # Remove the active, we are checking the bypass flag of the nodes - self.data.pop("active", None) + instance_node = hou.node(instance.get("instance_node")) - self.data.update({"node_type": "Redshift_ROP"}) - - def _process(self, instance): - """Creator main entry point. - - Args: - instance (hou.Node): Created Houdini instance. - - """ - basename = instance.name() - instance.setName(basename + "_ROP", unique_name=True) + basename = instance_node.name() + instance_node.setName(basename + "_ROP", unique_name=True) # Also create the linked Redshift IPR Rop try: @@ -43,11 +42,12 @@ class CreateRedshiftROP(plugin.Creator): "Redshift_IPR", node_name=basename + "_IPR" ) except hou.OperationFailed: - raise Exception(("Cannot create Redshift node. Is Redshift " - "installed and enabled?")) + raise plugin.OpenPypeCreatorError( + ("Cannot create Redshift node. 
Is Redshift "
+ "installed and enabled?"))
 # Move it to directly under the Redshift ROP
- ipr_rop.setPosition(instance.position() + hou.Vector2(0, -1))
+ ipr_rop.setPosition(instance_node.position() + hou.Vector2(0, -1))
 # Set the linked rop to the Redshift ROP
- ipr_rop.parm("linked_rop").set(ipr_rop.relativePathTo(instance))
+ ipr_rop.parm("linked_rop").set(ipr_rop.relativePathTo(instance_node))
@@ -61,10 +61,8 @@ class CreateRedshiftROP(plugin.Creator):
 "RS_outputMultilayerMode": 0, # no multi-layered exr
 "RS_outputBeautyAOVSuffix": "beauty",
 }
- instance.setParms(parms)
+ instance_node.setParms(parms)
 # Lock some Avalon attributes
 to_lock = ["family", "id"]
- for name in to_lock:
- parm = instance.parm(name)
- parm.lock(True)
+ self.lock_parameters(instance_node, to_lock)
diff --git a/openpype/hosts/houdini/plugins/create/create_usd.py b/openpype/hosts/houdini/plugins/create/create_usd.py
index 5bcb7840c0..51ed8237c5 100644
--- a/openpype/hosts/houdini/plugins/create/create_usd.py
+++ b/openpype/hosts/houdini/plugins/create/create_usd.py
@@ -1,39 +1,39 @@
+# -*- coding: utf-8 -*-
+"""Creator plugin for creating USDs."""
 from openpype.hosts.houdini.api import plugin
+from openpype.pipeline import CreatedInstance
-class CreateUSD(plugin.Creator):
+class CreateUSD(plugin.HoudiniCreator):
 """Universal Scene Description"""
-
+ identifier = "io.openpype.creators.houdini.usd"
 label = "USD (experimental)"
 family = "usd"
 icon = "gears"
 enabled = False
- def __init__(self, *args, **kwargs):
- super(CreateUSD, self).__init__(*args, **kwargs)
+ def create(self, subset_name, instance_data, pre_create_data):
+ import hou # noqa
- # Remove the active, we are checking the bypass flag of the nodes
- self.data.pop("active", None)
+ instance_data.pop("active", None)
+ instance_data.update({"node_type": "usd"})
- self.data.update({"node_type": "usd"})
+ instance = super(CreateUSD, self).create(
+ subset_name,
+ instance_data,
+ pre_create_data) # type: CreatedInstance
- def _process(self, instance):
- """Creator main entry point.
+        instance_node = hou.node(instance.get("instance_node"))

-        Args:
-            instance (hou.Node): Created Houdini instance.
-
-        """
         parms = {
-            "lopoutput": "$HIP/pyblish/%s.usd" % self.name,
+            "lopoutput": "$HIP/pyblish/{}.usd".format(subset_name),
             "enableoutputprocessor_simplerelativepaths": False,
         }

-        if self.nodes:
-            node = self.nodes[0]
-            parms.update({"loppath": node.path()})
+        if self.selected_nodes:
+            parms["loppath"] = self.selected_nodes[0].path()

-        instance.setParms(parms)
+        instance_node.setParms(parms)

         # Lock any parameters in this list
         to_lock = [
@@ -42,6 +42,4 @@ class CreateUSD(plugin.Creator):
             "family",
             "id",
         ]
-        for name in to_lock:
-            parm = instance.parm(name)
-            parm.lock(True)
+        self.lock_parameters(instance_node, to_lock)
diff --git a/openpype/hosts/houdini/plugins/create/create_usdrender.py b/openpype/hosts/houdini/plugins/create/create_usdrender.py
index cb3fe3f02b..f78f0bed50 100644
--- a/openpype/hosts/houdini/plugins/create/create_usdrender.py
+++ b/openpype/hosts/houdini/plugins/create/create_usdrender.py
@@ -1,42 +1,41 @@
-import hou
+# -*- coding: utf-8 -*-
+"""Creator plugin for creating USD renders."""
 from openpype.hosts.houdini.api import plugin
+from openpype.pipeline import CreatedInstance


-class CreateUSDRender(plugin.Creator):
+class CreateUSDRender(plugin.HoudiniCreator):
     """USD Render ROP in /stage"""
-
+    identifier = "io.openpype.creators.houdini.usdrender"
     label = "USD Render (experimental)"
     family = "usdrender"
     icon = "magic"

-    def __init__(self, *args, **kwargs):
-        super(CreateUSDRender, self).__init__(*args, **kwargs)
+    def create(self, subset_name, instance_data, pre_create_data):
+        import hou  # noqa

-        self.parent = hou.node("/stage")
+        instance_data["parent"] = hou.node("/stage")

         # Remove the active, we are checking the bypass flag of the nodes
-        self.data.pop("active", None)
+        instance_data.pop("active", None)
+        instance_data.update({"node_type": "usdrender"})

-        self.data.update({"node_type": "usdrender"})
+        instance = super(CreateUSDRender, self).create(
+            subset_name,
+            instance_data,
+            pre_create_data)  # type: CreatedInstance

-    def _process(self, instance):
-        """Creator main entry point.
+        instance_node = hou.node(instance.get("instance_node"))

-        Args:
-            instance (hou.Node): Created Houdini instance.
-        """
         parms = {
             # Render frame range
             "trange": 1
         }
-        if self.nodes:
-            node = self.nodes[0]
-            parms.update({"loppath": node.path()})
-        instance.setParms(parms)
+        if self.selected_nodes:
+            parms["loppath"] = self.selected_nodes[0].path()
+        instance_node.setParms(parms)

         # Lock some Avalon attributes
         to_lock = ["family", "id"]
-        for name in to_lock:
-            parm = instance.parm(name)
-            parm.lock(True)
+        self.lock_parameters(instance_node, to_lock)
diff --git a/openpype/hosts/houdini/plugins/create/create_vbd_cache.py b/openpype/hosts/houdini/plugins/create/create_vbd_cache.py
index 242c21fc72..1a5011745f 100644
--- a/openpype/hosts/houdini/plugins/create/create_vbd_cache.py
+++ b/openpype/hosts/houdini/plugins/create/create_vbd_cache.py
@@ -1,38 +1,36 @@
+# -*- coding: utf-8 -*-
+"""Creator plugin for creating VDB Caches."""
 from openpype.hosts.houdini.api import plugin
+from openpype.pipeline import CreatedInstance


-class CreateVDBCache(plugin.Creator):
+class CreateVDBCache(plugin.HoudiniCreator):
     """OpenVDB from Geometry ROP"""
-
+    identifier = "io.openpype.creators.houdini.vdbcache"
     name = "vbdcache"
     label = "VDB Cache"
     family = "vdbcache"
     icon = "cloud"

-    def __init__(self, *args, **kwargs):
-        super(CreateVDBCache, self).__init__(*args, **kwargs)
+    def create(self, subset_name, instance_data, pre_create_data):
+        import hou

-        # Remove the active, we are checking the bypass flag of the nodes
-        self.data.pop("active", None)
+        instance_data.pop("active", None)
+        instance_data.update({"node_type": "geometry"})

-        # Set node type to create for output
-        self.data["node_type"] = "geometry"
+        instance = super(CreateVDBCache, self).create(
+            subset_name,
+            instance_data,
+            pre_create_data)  # type: CreatedInstance

-    def _process(self, instance):
-        """Creator main entry point.
-
-        Args:
-            instance (hou.Node): Created Houdini instance.
-
-        """
+        instance_node = hou.node(instance.get("instance_node"))
         parms = {
-            "sopoutput": "$HIP/pyblish/%s.$F4.vdb" % self.name,
+            "sopoutput": "$HIP/pyblish/{}.$F4.vdb".format(subset_name),
             "initsim": True,
             "trange": 1
         }

-        if self.nodes:
-            node = self.nodes[0]
-            parms.update({"soppath": node.path()})
+        if self.selected_nodes:
+            parms["soppath"] = self.selected_nodes[0].path()

-        instance.setParms(parms)
+        instance_node.setParms(parms)
diff --git a/openpype/hosts/houdini/plugins/create/create_workfile.py b/openpype/hosts/houdini/plugins/create/create_workfile.py
new file mode 100644
index 0000000000..0c6d840810
--- /dev/null
+++ b/openpype/hosts/houdini/plugins/create/create_workfile.py
@@ -0,0 +1,93 @@
+# -*- coding: utf-8 -*-
+"""Creator plugin for creating workfiles."""
+from openpype.hosts.houdini.api import plugin
+from openpype.hosts.houdini.api.lib import read, imprint
+from openpype.hosts.houdini.api.pipeline import CONTEXT_CONTAINER
+from openpype.pipeline import CreatedInstance, AutoCreator
+from openpype.pipeline import legacy_io
+from openpype.client import get_asset_by_name
+import hou
+
+
+class CreateWorkfile(plugin.HoudiniCreatorBase, AutoCreator):
+    """Workfile auto-creator."""
+    identifier = "io.openpype.creators.houdini.workfile"
+    label = "Workfile"
+    family = "workfile"
+    icon = "document"
+
+    default_variant = "Main"
+
+    def create(self):
+        variant = self.default_variant
+        current_instance = next(
+            (
+                instance for instance in self.create_context.instances
+                if instance.creator_identifier == self.identifier
+            ), None)
+
+        project_name = self.project_name
+        asset_name = legacy_io.Session["AVALON_ASSET"]
+        task_name = legacy_io.Session["AVALON_TASK"]
+        host_name = legacy_io.Session["AVALON_APP"]
+
+        if current_instance is None:
+            asset_doc = get_asset_by_name(project_name, asset_name)
+            subset_name = self.get_subset_name(
+                variant, task_name, asset_doc, project_name, host_name
+            )
+            data = {
+                "asset": asset_name,
+                "task": task_name,
+                "variant": variant
+            }
+            data.update(
+                self.get_dynamic_data(
+                    variant, task_name, asset_doc,
+                    project_name, host_name, current_instance)
+            )
+            self.log.info("Auto-creating workfile instance...")
+            current_instance = CreatedInstance(
+                self.family, subset_name, data, self
+            )
+            self._add_instance_to_context(current_instance)
+        elif (
+            current_instance["asset"] != asset_name
+            or current_instance["task"] != task_name
+        ):
+            # Update instance context if is not the same
+            asset_doc = get_asset_by_name(project_name, asset_name)
+            subset_name = self.get_subset_name(
+                variant, task_name, asset_doc, project_name, host_name
+            )
+            current_instance["asset"] = asset_name
+            current_instance["task"] = task_name
+            current_instance["subset"] = subset_name
+
+        # write workfile information to context container.
+        op_ctx = hou.node(CONTEXT_CONTAINER)
+        if not op_ctx:
+            op_ctx = self.create_context_node()
+
+        workfile_data = {"workfile": current_instance.data_to_store()}
+        imprint(op_ctx, workfile_data)
+
+    def collect_instances(self):
+        op_ctx = hou.node(CONTEXT_CONTAINER)
+        instance = read(op_ctx)
+        if not instance:
+            return
+        workfile = instance.get("workfile")
+        if not workfile:
+            return
+        created_instance = CreatedInstance.from_existing(
+            workfile, self
+        )
+        self._add_instance_to_context(created_instance)
+
+    def update_instances(self, update_list):
+        op_ctx = hou.node(CONTEXT_CONTAINER)
+        for created_inst, _changes in update_list:
+            if created_inst["creator_identifier"] == self.identifier:
+                workfile_data = {"workfile": created_inst.data_to_store()}
+                imprint(op_ctx, workfile_data, update=True)
diff --git a/openpype/hosts/houdini/plugins/publish/collect_active_state.py b/openpype/hosts/houdini/plugins/publish/collect_active_state.py
index 862d5720e1..cc3f2e7fae 100644
--- a/openpype/hosts/houdini/plugins/publish/collect_active_state.py
+++ b/openpype/hosts/houdini/plugins/publish/collect_active_state.py
@@ -1,4 +1,5 @@
 import pyblish.api
+import hou


 class CollectInstanceActiveState(pyblish.api.InstancePlugin):
@@ -24,7 +25,7 @@ class CollectInstanceActiveState(pyblish.api.InstancePlugin):

         # Check bypass state and reverse
         active = True
-        node = instance[0]
+        node = hou.node(instance.get("instance_node"))
         if hasattr(node, "isBypassed"):
             active = not node.isBypassed()
diff --git a/openpype/hosts/houdini/plugins/publish/collect_current_file.py b/openpype/hosts/houdini/plugins/publish/collect_current_file.py
index 1383c274a2..9cca07fdc7 100644
--- a/openpype/hosts/houdini/plugins/publish/collect_current_file.py
+++ b/openpype/hosts/houdini/plugins/publish/collect_current_file.py
@@ -5,19 +5,20 @@ from openpype.pipeline import legacy_io
 import pyblish.api


-class CollectHoudiniCurrentFile(pyblish.api.ContextPlugin):
+class CollectHoudiniCurrentFile(pyblish.api.InstancePlugin):
     """Inject the current working file into context"""

     order = pyblish.api.CollectorOrder - 0.01
     label = "Houdini Current File"
     hosts = ["houdini"]
+    family = ["workfile"]

-    def process(self, context):
+    def process(self, instance):
         """Inject the current working file"""
         current_file = hou.hipFile.path()
         if not os.path.exists(current_file):
-            # By default Houdini will even point a new scene to a path.
+            # By default, Houdini will even point a new scene to a path.
             # However if the file is not saved at all and does not exist,
             # we assume the user never set it.
             filepath = ""
@@ -34,43 +35,26 @@ class CollectHoudiniCurrentFile(pyblish.api.ContextPlugin):
                 "saved correctly."
             )

-        context.data["currentFile"] = current_file
+        instance.context.data["currentFile"] = current_file

         folder, file = os.path.split(current_file)
         filename, ext = os.path.splitext(file)

-        task = legacy_io.Session["AVALON_TASK"]
-
-        data = {}
-
-        # create instance
-        instance = context.create_instance(name=filename)
-        subset = 'workfile' + task.capitalize()
-
-        data.update({
-            "subset": subset,
-            "asset": os.getenv("AVALON_ASSET", None),
-            "label": subset,
-            "publish": True,
-            "family": 'workfile',
-            "families": ['workfile'],
+        instance.data.update({
             "setMembers": [current_file],
-            "frameStart": context.data['frameStart'],
-            "frameEnd": context.data['frameEnd'],
-            "handleStart": context.data['handleStart'],
-            "handleEnd": context.data['handleEnd']
+            "frameStart": instance.context.data['frameStart'],
+            "frameEnd": instance.context.data['frameEnd'],
+            "handleStart": instance.context.data['handleStart'],
+            "handleEnd": instance.context.data['handleEnd']
         })

-        data['representations'] = [{
+        instance.data['representations'] = [{
             'name': ext.lstrip("."),
             'ext': ext.lstrip("."),
             'files': file,
             "stagingDir": folder,
         }]

-        instance.data.update(data)
-
         self.log.info('Collected instance: {}'.format(file))
         self.log.info('Scene path: {}'.format(current_file))
         self.log.info('staging Dir: {}'.format(folder))
-        self.log.info('subset: {}'.format(subset))
diff --git a/openpype/hosts/houdini/plugins/publish/collect_frames.py b/openpype/hosts/houdini/plugins/publish/collect_frames.py
index 9bd43d8a09..531cdf1249 100644
--- a/openpype/hosts/houdini/plugins/publish/collect_frames.py
+++ b/openpype/hosts/houdini/plugins/publish/collect_frames.py
@@ -1,19 +1,13 @@
+# -*- coding: utf-8 -*-
+"""Collector plugin for frames data on ROP instances."""
 import os
 import re

-import hou
+import hou  # noqa
 import pyblish.api

 from openpype.hosts.houdini.api import lib


-def splitext(name, allowed_multidot_extensions):
-
-    for ext in allowed_multidot_extensions:
-        if name.endswith(ext):
-            return name[:-len(ext)], ext
-
-    return os.path.splitext(name)
-
-
 class CollectFrames(pyblish.api.InstancePlugin):
     """Collect all frames which would be saved from the ROP nodes"""
@@ -24,7 +18,9 @@ class CollectFrames(pyblish.api.InstancePlugin):

     def process(self, instance):

-        ropnode = instance[0]
+        ropnode = hou.node(instance.data["instance_node"])
+        frame_data = lib.get_frame_data(ropnode)
+        instance.data.update(frame_data)

         start_frame = instance.data.get("frameStart", None)
         end_frame = instance.data.get("frameEnd", None)
@@ -38,13 +34,13 @@ class CollectFrames(pyblish.api.InstancePlugin):
             self.log.warning("Using current frame: {}".format(hou.frame()))

         output = output_parm.eval()
-        _, ext = splitext(output,
+        _, ext = lib.splitext(output,
                           allowed_multidot_extensions=[".ass.gz"])
         file_name = os.path.basename(output)
         result = file_name

         # Get the filename pattern match from the output
-        # path so we can compute all frames that would
+        # path, so we can compute all frames that would
         # come out from rendering the ROP node if there
         # is a frame pattern in the name
         pattern = r"\w+\.(\d+)" + re.escape(ext)
@@ -63,8 +59,9 @@ class CollectFrames(pyblish.api.InstancePlugin):
         # for a custom frame list. So this should be refactored.
         instance.data.update({"frames": result})

-    def create_file_list(self, match, start_frame, end_frame):
-        """Collect files based on frame range and regex.match
+    @staticmethod
+    def create_file_list(match, start_frame, end_frame):
+        """Collect files based on frame range and `regex.match`

         Args:
             match(re.match): match object
diff --git a/openpype/hosts/houdini/plugins/publish/collect_instances.py b/openpype/hosts/houdini/plugins/publish/collect_instances.py
index d38927984a..bb85630552 100644
--- a/openpype/hosts/houdini/plugins/publish/collect_instances.py
+++ b/openpype/hosts/houdini/plugins/publish/collect_instances.py
@@ -47,6 +47,11 @@ class CollectInstances(pyblish.api.ContextPlugin):
             if node.evalParm("id") != "pyblish.avalon.instance":
                 continue

+            # instance was created by new creator code, skip it as
+            # it is already collected.
+            if node.parm("creator_identifier"):
+                continue
+
             has_family = node.evalParm("family")
             assert has_family, "'%s' is missing 'family'" % node.name()

@@ -58,7 +63,8 @@ class CollectInstances(pyblish.api.ContextPlugin):
             data.update({"active": not node.isBypassed()})

             # temporarily translation of `active` to `publish` till issue has
-            # been resolved, https://github.com/pyblish/pyblish-base/issues/307
+            # been resolved.
+            # https://github.com/pyblish/pyblish-base/issues/307
             if "active" in data:
                 data["publish"] = data["active"]

@@ -78,6 +84,7 @@ class CollectInstances(pyblish.api.ContextPlugin):
             instance.data["families"] = [instance.data["family"]]

             instance[:] = [node]
+            instance.data["instance_node"] = node.path()
             instance.data.update(data)

         def sort_by_family(instance):
diff --git a/openpype/hosts/houdini/plugins/publish/collect_output_node.py b/openpype/hosts/houdini/plugins/publish/collect_output_node.py
index 0130c0a8da..601ed17b39 100644
--- a/openpype/hosts/houdini/plugins/publish/collect_output_node.py
+++ b/openpype/hosts/houdini/plugins/publish/collect_output_node.py
@@ -22,7 +22,7 @@ class CollectOutputSOPPath(pyblish.api.InstancePlugin):

         import hou

-        node = instance[0]
+        node = hou.node(instance.data["instance_node"])

         # Get sop path
         node_type = node.type().name()
diff --git a/openpype/hosts/houdini/plugins/publish/collect_redshift_rop.py b/openpype/hosts/houdini/plugins/publish/collect_redshift_rop.py
index 72b554b567..346bdf3421 100644
--- a/openpype/hosts/houdini/plugins/publish/collect_redshift_rop.py
+++ b/openpype/hosts/houdini/plugins/publish/collect_redshift_rop.py
@@ -69,7 +69,7 @@ class CollectRedshiftROPRenderProducts(pyblish.api.InstancePlugin):

     def process(self, instance):

-        rop = instance[0]
+        rop = hou.node(instance.get("instance_node"))

         # Collect chunkSize
         chunk_size_parm = rop.parm("chunkSize")
diff --git a/openpype/hosts/houdini/plugins/publish/collect_render_products.py b/openpype/hosts/houdini/plugins/publish/collect_render_products.py
index d7163b43c0..fcd80e0082 100644
--- a/openpype/hosts/houdini/plugins/publish/collect_render_products.py
+++ b/openpype/hosts/houdini/plugins/publish/collect_render_products.py
@@ -53,7 +53,7 @@ class CollectRenderProducts(pyblish.api.InstancePlugin):

         node = instance.data.get("output_node")
         if not node:
-            rop_path = instance[0].path()
+            rop_path = instance.data["instance_node"].path()
             raise RuntimeError(
                 "No output node found. Make sure to connect an "
                 "input to the USD ROP: %s" % rop_path
diff --git a/openpype/hosts/houdini/plugins/publish/collect_usd_bootstrap.py b/openpype/hosts/houdini/plugins/publish/collect_usd_bootstrap.py
index cf8d61cda3..81274c670e 100644
--- a/openpype/hosts/houdini/plugins/publish/collect_usd_bootstrap.py
+++ b/openpype/hosts/houdini/plugins/publish/collect_usd_bootstrap.py
@@ -1,6 +1,6 @@
 import pyblish.api

-from openyppe.client import get_subset_by_name, get_asset_by_name
+from openpype.client import get_subset_by_name, get_asset_by_name
 from openpype.pipeline import legacy_io
 import openpype.lib.usdlib as usdlib
diff --git a/openpype/hosts/houdini/plugins/publish/collect_usd_layers.py b/openpype/hosts/houdini/plugins/publish/collect_usd_layers.py
index e3985e3c97..833add854b 100644
--- a/openpype/hosts/houdini/plugins/publish/collect_usd_layers.py
+++ b/openpype/hosts/houdini/plugins/publish/collect_usd_layers.py
@@ -3,6 +3,8 @@ import os
 import pyblish.api

 import openpype.hosts.houdini.api.usd as usdlib
+import hou
+

 class CollectUsdLayers(pyblish.api.InstancePlugin):
     """Collect the USD Layers that have configured save paths."""
@@ -19,7 +21,7 @@ class CollectUsdLayers(pyblish.api.InstancePlugin):
             self.log.debug("No output node found..")
             return

-        rop_node = instance[0]
+        rop_node = hou.node(instance.get("instance_node"))

         save_layers = []
         for layer in usdlib.get_configured_save_layers(rop_node):
@@ -54,8 +56,10 @@ class CollectUsdLayers(pyblish.api.InstancePlugin):
             layer_inst.data["subset"] = "__stub__"
             layer_inst.data["label"] = label
             layer_inst.data["asset"] = instance.data["asset"]
-            layer_inst.append(instance[0])  # include same USD ROP
-            layer_inst.append((layer, save_path))  # include layer data
+            # include same USD ROP
+            layer_inst.append(rop_node)
+            # include layer data
+            layer_inst.append((layer, save_path))

             # Allow this subset to be grouped into a USD Layer on creation
             layer_inst.data["subsetGroup"] = "USD Layer"
diff --git a/openpype/hosts/houdini/plugins/publish/extract_alembic.py b/openpype/hosts/houdini/plugins/publish/extract_alembic.py
index 758d4c560b..cb2d4ef424 100644
--- a/openpype/hosts/houdini/plugins/publish/extract_alembic.py
+++ b/openpype/hosts/houdini/plugins/publish/extract_alembic.py
@@ -5,6 +5,8 @@ import pyblish.api

 from openpype.pipeline import publish
 from openpype.hosts.houdini.api.lib import render_rop
+import hou
+

 class ExtractAlembic(publish.Extractor):
@@ -15,7 +17,7 @@ class ExtractAlembic(publish.Extractor):

     def process(self, instance):

-        ropnode = instance[0]
+        ropnode = hou.node(instance.data["instance_node"])

         # Get the filename from the filename parameter
         output = ropnode.evalParm("filename")
diff --git a/openpype/hosts/houdini/plugins/publish/extract_ass.py b/openpype/hosts/houdini/plugins/publish/extract_ass.py
index a302b451cb..0d246625ba 100644
--- a/openpype/hosts/houdini/plugins/publish/extract_ass.py
+++ b/openpype/hosts/houdini/plugins/publish/extract_ass.py
@@ -5,6 +5,8 @@ import pyblish.api

 from openpype.pipeline import publish
 from openpype.hosts.houdini.api.lib import render_rop
+import hou
+

 class ExtractAss(publish.Extractor):
@@ -15,7 +17,7 @@ class ExtractAss(publish.Extractor):

     def process(self, instance):

-        ropnode = instance[0]
+        ropnode = hou.node(instance.data["instance_node"])

         # Get the filename from the filename parameter
         # `.evalParm(parameter)` will make sure all tokens are resolved
@@ -33,8 +35,12 @@ class ExtractAss(publish.Extractor):
         # error and thus still continues to the integrator. To capture that
         # we make sure all files exist
         files = instance.data["frames"]
-        missing = [fname for fname in files
-                   if not os.path.exists(os.path.join(staging_dir, fname))]
+        missing = []
+        for file_name in files:
+            full_path = os.path.normpath(os.path.join(staging_dir, file_name))
+            if not os.path.exists(full_path):
+                missing.append(full_path)
+
         if missing:
             raise RuntimeError("Failed to complete Arnold ass extraction. "
                                "Missing output files: {}".format(missing))
diff --git a/openpype/hosts/houdini/plugins/publish/extract_composite.py b/openpype/hosts/houdini/plugins/publish/extract_composite.py
index 23e875f107..7a1ab36b93 100644
--- a/openpype/hosts/houdini/plugins/publish/extract_composite.py
+++ b/openpype/hosts/houdini/plugins/publish/extract_composite.py
@@ -1,9 +1,10 @@
 import os
-
 import pyblish.api

 from openpype.pipeline import publish
-from openpype.hosts.houdini.api.lib import render_rop
+from openpype.hosts.houdini.api.lib import render_rop, splitext
+
+import hou


 class ExtractComposite(publish.Extractor):
@@ -15,7 +16,7 @@ class ExtractComposite(publish.Extractor):

     def process(self, instance):

-        ropnode = instance[0]
+        ropnode = hou.node(instance.data["instance_node"])

         # Get the filename from the copoutput parameter
         # `.evalParm(parameter)` will make sure all tokens are resolved
@@ -28,8 +29,24 @@ class ExtractComposite(publish.Extractor):

         render_rop(ropnode)

-        if "files" not in instance.data:
-            instance.data["files"] = []
+        output = instance.data["frames"]
+        _, ext = splitext(output[0], [])
+        ext = ext.lstrip(".")

-        frames = instance.data["frames"]
-        instance.data["files"].append(frames)
+        if "representations" not in instance.data:
+            instance.data["representations"] = []
+
+        representation = {
+            "name": ext,
+            "ext": ext,
+            "files": output,
+            "stagingDir": staging_dir,
+            "frameStart": instance.data["frameStart"],
+            "frameEnd": instance.data["frameEnd"],
+        }
+
+        from pprint import pformat
+
+        self.log.info(pformat(representation))
+
+        instance.data["representations"].append(representation)
diff --git a/openpype/hosts/houdini/plugins/publish/extract_hda.py b/openpype/hosts/houdini/plugins/publish/extract_hda.py
index 7dd03a92b7..8b97bf364f 100644
--- a/openpype/hosts/houdini/plugins/publish/extract_hda.py
+++ b/openpype/hosts/houdini/plugins/publish/extract_hda.py
@@ -1,11 +1,9 @@
 # -*- coding: utf-8 -*-
 import os
-
 from pprint import pformat
-
 import pyblish.api
-
 from openpype.pipeline import publish
+import hou


 class ExtractHDA(publish.Extractor):
@@ -17,7 +15,7 @@ class ExtractHDA(publish.Extractor):

     def process(self, instance):
         self.log.info(pformat(instance.data))
-        hda_node = instance[0]
+        hda_node = hou.node(instance.data.get("instance_node"))
         hda_def = hda_node.type().definition()
         hda_options = hda_def.options()
         hda_options.setSaveInitialParmsAndContents(True)
diff --git a/openpype/hosts/houdini/plugins/publish/extract_redshift_proxy.py b/openpype/hosts/houdini/plugins/publish/extract_redshift_proxy.py
index ca9be64a47..29ede98a52 100644
--- a/openpype/hosts/houdini/plugins/publish/extract_redshift_proxy.py
+++ b/openpype/hosts/houdini/plugins/publish/extract_redshift_proxy.py
@@ -5,6 +5,8 @@ import pyblish.api

 from openpype.pipeline import publish
 from openpype.hosts.houdini.api.lib import render_rop
+import hou
+

 class ExtractRedshiftProxy(publish.Extractor):
@@ -15,7 +17,7 @@ class ExtractRedshiftProxy(publish.Extractor):

     def process(self, instance):

-        ropnode = instance[0]
+        ropnode = hou.node(instance.get("instance_node"))

         # Get the filename from the filename parameter
         # `.evalParm(parameter)` will make sure all tokens are resolved
diff --git a/openpype/hosts/houdini/plugins/publish/extract_usd.py b/openpype/hosts/houdini/plugins/publish/extract_usd.py
index 78c32affb4..cbeb5add71 100644
--- a/openpype/hosts/houdini/plugins/publish/extract_usd.py
+++ b/openpype/hosts/houdini/plugins/publish/extract_usd.py
@@ -5,6 +5,7 @@ import pyblish.api

 from openpype.pipeline import publish
 from openpype.hosts.houdini.api.lib import render_rop
+import hou


 class ExtractUSD(publish.Extractor):
@@ -17,7 +18,7 @@ class ExtractUSD(publish.Extractor):

     def process(self, instance):

-        ropnode = instance[0]
+        ropnode = hou.node(instance.get("instance_node"))

         # Get the filename from the filename parameter
         output = ropnode.evalParm("lopoutput")
diff --git a/openpype/hosts/houdini/plugins/publish/extract_usd_layered.py b/openpype/hosts/houdini/plugins/publish/extract_usd_layered.py
index f686f712bb..0288b7363a 100644
--- a/openpype/hosts/houdini/plugins/publish/extract_usd_layered.py
+++ b/openpype/hosts/houdini/plugins/publish/extract_usd_layered.py
@@ -187,7 +187,7 @@ class ExtractUSDLayered(publish.Extractor):

         # Main ROP node, either a USD Rop or ROP network with
         # multiple USD ROPs
-        node = instance[0]
+        node = hou.node(instance.get("instance_node"))

         # Collect any output dependencies that have not been processed yet
         # during extraction of other instances
diff --git a/openpype/hosts/houdini/plugins/publish/extract_vdb_cache.py b/openpype/hosts/houdini/plugins/publish/extract_vdb_cache.py
index 26ec423048..434d6a2160 100644
--- a/openpype/hosts/houdini/plugins/publish/extract_vdb_cache.py
+++ b/openpype/hosts/houdini/plugins/publish/extract_vdb_cache.py
@@ -5,6 +5,8 @@ import pyblish.api

 from openpype.pipeline import publish
 from openpype.hosts.houdini.api.lib import render_rop
+import hou
+

 class ExtractVDBCache(publish.Extractor):
@@ -15,7 +17,7 @@ class ExtractVDBCache(publish.Extractor):

     def process(self, instance):

-        ropnode = instance[0]
+        ropnode = hou.node(instance.get("instance_node"))

         # Get the filename from the filename parameter
         # `.evalParm(parameter)` will make sure all tokens are resolved
diff --git a/openpype/hosts/houdini/plugins/publish/help/validate_vdb_input_node.xml b/openpype/hosts/houdini/plugins/publish/help/validate_vdb_input_node.xml
new file mode 100644
index 0000000000..0f92560bf7
--- /dev/null
+++ b/openpype/hosts/houdini/plugins/publish/help/validate_vdb_input_node.xml
@@ -0,0 +1,21 @@
+
+
+
+Scene setting
+
+## Invalid input node
+
+VDB input must have the same number of VDBs, points, primitives and vertices as output.
+
+
+
+### __Detailed Info__ (optional)
+
+A VDB is an inherited type of Prim, holds the following data:
+ - Primitives: 1
+ - Points: 1
+ - Vertices: 1
+ - VDBs: 1
+
+
+
\ No newline at end of file
diff --git a/openpype/hosts/houdini/plugins/publish/increment_current_file.py b/openpype/hosts/houdini/plugins/publish/increment_current_file.py
index c990f481d3..16d9ef9aec 100644
--- a/openpype/hosts/houdini/plugins/publish/increment_current_file.py
+++ b/openpype/hosts/houdini/plugins/publish/increment_current_file.py
@@ -2,7 +2,7 @@ import pyblish.api

 from openpype.lib import version_up
 from openpype.pipeline import registered_host
-
+from openpype.hosts.houdini.api import HoudiniHost

 class IncrementCurrentFile(pyblish.api.ContextPlugin):
     """Increment the current file.
@@ -20,11 +20,11 @@ class IncrementCurrentFile(pyblish.api.ContextPlugin):
     def process(self, context):

         # Filename must not have changed since collecting
-        host = registered_host()
+        host = registered_host()  # type: HoudiniHost
         current_file = host.current_file()
         assert (
             context.data["currentFile"] == current_file
         ), "Collected filename from current scene name."

         new_filepath = version_up(current_file)
-        host.save(new_filepath)
+        host.save_workfile(new_filepath)
diff --git a/openpype/hosts/houdini/plugins/publish/save_scene.py b/openpype/hosts/houdini/plugins/publish/save_scene.py
index 6128c7af77..d6e07ccab0 100644
--- a/openpype/hosts/houdini/plugins/publish/save_scene.py
+++ b/openpype/hosts/houdini/plugins/publish/save_scene.py
@@ -14,13 +14,13 @@ class SaveCurrentScene(pyblish.api.ContextPlugin):

         # Filename must not have changed since collecting
         host = registered_host()
-        current_file = host.current_file()
+        current_file = host.get_current_workfile()
         assert context.data['currentFile'] == current_file, (
             "Collected filename from current scene name."
         )

         if host.has_unsaved_changes():
-            self.log.info("Saving current file..")
-            host.save_file(current_file)
+            self.log.info("Saving current file {}...".format(current_file))
+            host.save_workfile(current_file)
         else:
             self.log.debug("No unsaved changes, skipping file save..")
diff --git a/openpype/hosts/houdini/plugins/publish/valiate_vdb_input_node.py b/openpype/hosts/houdini/plugins/publish/valiate_vdb_input_node.py
deleted file mode 100644
index ac408bc842..0000000000
--- a/openpype/hosts/houdini/plugins/publish/valiate_vdb_input_node.py
+++ /dev/null
@@ -1,47 +0,0 @@
-import pyblish.api
-from openpype.pipeline.publish import ValidateContentsOrder
-
-
-class ValidateVDBInputNode(pyblish.api.InstancePlugin):
-    """Validate that the node connected to the output node is of type VDB.
-
-    Regardless of the amount of VDBs create the output will need to have an
-    equal amount of VDBs, points, primitives and vertices
-
-    A VDB is an inherited type of Prim, holds the following data:
-        - Primitives: 1
-        - Points: 1
-        - Vertices: 1
-        - VDBs: 1
-
-    """
-
-    order = ValidateContentsOrder + 0.1
-    families = ["vdbcache"]
-    hosts = ["houdini"]
-    label = "Validate Input Node (VDB)"
-
-    def process(self, instance):
-        invalid = self.get_invalid(instance)
-        if invalid:
-            raise RuntimeError(
-                "Node connected to the output node is not" "of type VDB!"
-            )
-
-    @classmethod
-    def get_invalid(cls, instance):
-
-        node = instance.data["output_node"]
-
-        prims = node.geometry().prims()
-        nr_of_prims = len(prims)
-
-        nr_of_points = len(node.geometry().points())
-        if nr_of_points != nr_of_prims:
-            cls.log.error("The number of primitives and points do not match")
-            return [instance]
-
-        for prim in prims:
-            if prim.numVertices() != 1:
-                cls.log.error("Found primitive with more than 1 vertex!")
-                return [instance]
diff --git a/openpype/hosts/houdini/plugins/publish/validate_abc_primitive_to_detail.py b/openpype/hosts/houdini/plugins/publish/validate_abc_primitive_to_detail.py
index ea800707fb..86e92a052f 100644
--- a/openpype/hosts/houdini/plugins/publish/validate_abc_primitive_to_detail.py
+++ b/openpype/hosts/houdini/plugins/publish/validate_abc_primitive_to_detail.py
@@ -1,8 +1,8 @@
+# -*- coding: utf-8 -*-
 import pyblish.api

 from collections import defaultdict
-
-from openpype.pipeline.publish import ValidateContentsOrder
+from openpype.pipeline import PublishValidationError


 class ValidateAbcPrimitiveToDetail(pyblish.api.InstancePlugin):
@@ -16,7 +16,7 @@ class ValidateAbcPrimitiveToDetail(pyblish.api.InstancePlugin):

     """

-    order = ValidateContentsOrder + 0.1
+    order = pyblish.api.ValidatorOrder + 0.1
     families = ["pointcache"]
     hosts = ["houdini"]
     label = "Validate Primitive to Detail (Abc)"
@@ -24,18 +24,26 @@ class ValidateAbcPrimitiveToDetail(pyblish.api.InstancePlugin):
     def process(self, instance):
         invalid = self.get_invalid(instance)
         if invalid:
-            raise RuntimeError(
-                "Primitives found with inconsistent primitive "
-                "to detail attributes. See log."
+            raise PublishValidationError(
+                ("Primitives found with inconsistent primitive "
+                 "to detail attributes. See log."),
+                title=self.label
             )

     @classmethod
     def get_invalid(cls, instance):
+        import hou  # noqa
+        output_node = instance.data.get("output_node")
+        rop_node = hou.node(instance.data["instance_node"])
+        if output_node is None:
+            cls.log.error(
+                "SOP Output node in '%s' does not exist. "
+                "Ensure a valid SOP output path is set." % rop_node.path()
+            )

-        output = instance.data["output_node"]
+            return [rop_node.path()]

-        rop = instance[0]
-        pattern = rop.parm("prim_to_detail_pattern").eval().strip()
+        pattern = rop_node.parm("prim_to_detail_pattern").eval().strip()
         if not pattern:
             cls.log.debug(
                 "Alembic ROP has no 'Primitive to Detail' pattern. "
@@ -43,7 +51,7 @@ class ValidateAbcPrimitiveToDetail(pyblish.api.InstancePlugin):
             )
             return

-        build_from_path = rop.parm("build_from_path").eval()
+        build_from_path = rop_node.parm("build_from_path").eval()
         if not build_from_path:
             cls.log.debug(
                 "Alembic ROP has 'Build from Path' disabled. "
@@ -51,14 +59,14 @@ class ValidateAbcPrimitiveToDetail(pyblish.api.InstancePlugin):
             )
             return

-        path_attr = rop.parm("path_attrib").eval()
+        path_attr = rop_node.parm("path_attrib").eval()
         if not path_attr:
             cls.log.error(
                 "The Alembic ROP node has no Path Attribute"
                 "value set, but 'Build Hierarchy from Attribute'"
                 "is enabled."
             )
-            return [rop.path()]
+            return [rop_node.path()]

         # Let's assume each attribute is explicitly named for now and has no
         # wildcards for Primitive to Detail. This simplifies the check.
@@ -67,7 +75,7 @@ class ValidateAbcPrimitiveToDetail(pyblish.api.InstancePlugin):

         # Check if the primitive attribute exists
         frame = instance.data.get("frameStart", 0)
-        geo = output.geometryAtFrame(frame)
+        geo = output_node.geometryAtFrame(frame)

         # If there are no primitives on the start frame then it might be
         # something that is emitted over time. As such we can't actually
@@ -86,7 +94,7 @@ class ValidateAbcPrimitiveToDetail(pyblish.api.InstancePlugin):
                 "Geometry Primitives are missing "
                 "path attribute: `%s`" % path_attr
             )
-            return [output.path()]
+            return [output_node.path()]

         # Ensure at least a single string value is present
         if not attrib.strings():
@@ -94,7 +102,7 @@ class ValidateAbcPrimitiveToDetail(pyblish.api.InstancePlugin):
                 "Primitive path attribute has no "
                 "string values: %s" % path_attr
             )
-            return [output.path()]
+            return [output_node.path()]

         paths = None
         for attr in pattern.split(" "):
@@ -130,4 +138,4 @@ class ValidateAbcPrimitiveToDetail(pyblish.api.InstancePlugin):
                 "Path has multiple values: %s (path: %s)"
                 % (list(values), path)
             )
-            return [output.path()]
+            return [output_node.path()]
diff --git a/openpype/hosts/houdini/plugins/publish/validate_alembic_face_sets.py b/openpype/hosts/houdini/plugins/publish/validate_alembic_face_sets.py
index cbed3ea235..44d58cfa36 100644
--- a/openpype/hosts/houdini/plugins/publish/validate_alembic_face_sets.py
+++ b/openpype/hosts/houdini/plugins/publish/validate_alembic_face_sets.py
@@ -1,7 +1,6 @@
+# -*- coding: utf-8 -*-
 import pyblish.api
-
-from openpype.pipeline.publish import ValidateContentsOrder
-
+import hou

 class ValidateAlembicROPFaceSets(pyblish.api.InstancePlugin):
     """Validate Face Sets are disabled for extraction to pointcache.
@@ -18,14 +17,14 @@ class ValidateAlembicROPFaceSets(pyblish.api.InstancePlugin):

     """

-    order = ValidateContentsOrder + 0.1
+    order = pyblish.api.ValidatorOrder + 0.1
     families = ["pointcache"]
     hosts = ["houdini"]
     label = "Validate Alembic ROP Face Sets"

     def process(self, instance):
-        rop = instance[0]
+        rop = hou.node(instance.data["instance_node"])
         facesets = rop.parm("facesets").eval()

         # 0 = No Face Sets
diff --git a/openpype/hosts/houdini/plugins/publish/validate_alembic_input_node.py b/openpype/hosts/houdini/plugins/publish/validate_alembic_input_node.py
index 2625ae5f83..bafb206bd3 100644
--- a/openpype/hosts/houdini/plugins/publish/validate_alembic_input_node.py
+++ b/openpype/hosts/houdini/plugins/publish/validate_alembic_input_node.py
@@ -1,6 +1,7 @@
+# -*- coding: utf-8 -*-
 import pyblish.api

-from openpype.pipeline.publish import ValidateContentsOrder
+from openpype.pipeline import PublishValidationError
+import hou


 class ValidateAlembicInputNode(pyblish.api.InstancePlugin):
@@ -12,7 +13,7 @@ class ValidateAlembicInputNode(pyblish.api.InstancePlugin):

     """

-    order = ValidateContentsOrder + 0.1
+    order = pyblish.api.ValidatorOrder + 0.1
     families = ["pointcache"]
     hosts = ["houdini"]
     label = "Validate Input Node (Abc)"
@@ -20,18 +21,28 @@ class ValidateAlembicInputNode(pyblish.api.InstancePlugin):
     def process(self, instance):
         invalid = self.get_invalid(instance)
         if invalid:
-            raise RuntimeError(
-                "Primitive types found that are not supported"
-                "for Alembic output."
+            raise PublishValidationError(
+                ("Primitive types found that are not supported"
+                 "for Alembic output."),
+                title=self.label
             )

     @classmethod
     def get_invalid(cls, instance):
         invalid_prim_types = ["VDB", "Volume"]
-        node = instance.data["output_node"]
+        output_node = instance.data.get("output_node")

-        if not hasattr(node, "geometry"):
+        if output_node is None:
+            node = hou.node(instance.data["instance_node"])
+            cls.log.error(
+                "SOP Output node in '%s' does not exist. "
+                "Ensure a valid SOP output path is set." % node.path()
+            )
+
+            return [node.path()]
+
+        if not hasattr(output_node, "geometry"):
             # In the case someone has explicitly set an Object
             # node instead of a SOP node in Geometry context
             # then for now we ignore - this allows us to also
@@ -40,7 +51,7 @@ class ValidateAlembicInputNode(pyblish.api.InstancePlugin):
             return

         frame = instance.data.get("frameStart", 0)
-        geo = node.geometryAtFrame(frame)
+        geo = output_node.geometryAtFrame(frame)

         invalid = False
         for prim_type in invalid_prim_types:
diff --git a/openpype/hosts/houdini/plugins/publish/validate_animation_settings.py b/openpype/hosts/houdini/plugins/publish/validate_animation_settings.py
index 5eb8f93d03..f11f9c0c62 100644
--- a/openpype/hosts/houdini/plugins/publish/validate_animation_settings.py
+++ b/openpype/hosts/houdini/plugins/publish/validate_animation_settings.py
@@ -1,6 +1,7 @@
 import pyblish.api

 from openpype.hosts.houdini.api import lib
+import hou


 class ValidateAnimationSettings(pyblish.api.InstancePlugin):
@@ -36,7 +37,7 @@ class ValidateAnimationSettings(pyblish.api.InstancePlugin):

     @classmethod
     def get_invalid(cls, instance):
-        node = instance[0]
+        node = hou.node(instance.get("instance_node"))

         # Check trange parm, 0 means Render Current Frame
         frame_range = node.evalParm("trange")
diff --git a/openpype/hosts/houdini/plugins/publish/validate_bypass.py b/openpype/hosts/houdini/plugins/publish/validate_bypass.py
index 7cf8da69d6..1bf51a986c 100644
--- a/openpype/hosts/houdini/plugins/publish/validate_bypass.py
+++ b/openpype/hosts/houdini/plugins/publish/validate_bypass.py
@@ -1,6 +1,8 @@
+# -*- coding: utf-8 -*-
 import pyblish.api

-from openpype.pipeline.publish import ValidateContentsOrder
+from openpype.pipeline import PublishValidationError
+import hou

 class ValidateBypassed(pyblish.api.InstancePlugin):
     """Validate all primitives build hierarchy from attribute when enabled.
@@ -11,7 +13,7 @@ class ValidateBypassed(pyblish.api.InstancePlugin):

     """

-    order = ValidateContentsOrder - 0.1
+    order = pyblish.api.ValidatorOrder - 0.1
     families = ["*"]
     hosts = ["houdini"]
     label = "Validate ROP Bypass"
@@ -26,14 +28,15 @@ class ValidateBypassed(pyblish.api.InstancePlugin):
         invalid = self.get_invalid(instance)
         if invalid:
             rop = invalid[0]
-            raise RuntimeError(
-                "ROP node %s is set to bypass, publishing cannot continue.."
-                % rop.path()
+            raise PublishValidationError(
+                ("ROP node {} is set to bypass, publishing cannot "
+                 "continue.".format(rop.path())),
+                title=self.label
             )

     @classmethod
     def get_invalid(cls, instance):
-        rop = instance[0]
+        rop = hou.node(instance.get("instance_node"))
         if hasattr(rop, "isBypassed") and rop.isBypassed():
             return [rop]
diff --git a/openpype/hosts/houdini/plugins/publish/validate_camera_rop.py b/openpype/hosts/houdini/plugins/publish/validate_camera_rop.py
index d414920f8b..41b5273e6a 100644
--- a/openpype/hosts/houdini/plugins/publish/validate_camera_rop.py
+++ b/openpype/hosts/houdini/plugins/publish/validate_camera_rop.py
@@ -1,11 +1,13 @@
+# -*- coding: utf-8 -*-
+"""Validator plugin for Houdini Camera ROP settings."""
 import pyblish.api

-from openpype.pipeline.publish import ValidateContentsOrder
+from openpype.pipeline import PublishValidationError


 class ValidateCameraROP(pyblish.api.InstancePlugin):
     """Validate Camera ROP settings."""

-    order = ValidateContentsOrder
+    order = pyblish.api.ValidatorOrder
     families = ["camera"]
     hosts = ["houdini"]
     label = "Camera ROP"
@@ -14,30 +16,45 @@ class ValidateCameraROP(pyblish.api.InstancePlugin):

         import hou

-        node = instance[0]
+        node = hou.node(instance.data.get("instance_node"))
         if node.parm("use_sop_path").eval():
-            raise RuntimeError(
-                "Alembic ROP for Camera export should not be "
-                "set to 'Use Sop Path'. Please disable."
+            raise PublishValidationError(
+                ("Alembic ROP for Camera export should not be "
+                 "set to 'Use Sop Path'. Please disable."),
+                title=self.label
             )

         # Get the root and objects parameter of the Alembic ROP node
         root = node.parm("root").eval()
         objects = node.parm("objects").eval()
-        assert root, "Root parameter must be set on Alembic ROP"
-        assert root.startswith("/"), "Root parameter must start with slash /"
-        assert objects, "Objects parameter must be set on Alembic ROP"
-        assert len(objects.split(" ")) == 1, "Must have only a single object."
+        errors = []
+        if not root:
+            errors.append("Root parameter must be set on Alembic ROP")
+        if not root.startswith("/"):
+            errors.append("Root parameter must start with slash /")
+        if not objects:
+            errors.append("Objects parameter must be set on Alembic ROP")
+        if len(objects.split(" ")) != 1:
+            errors.append("Must have only a single object.")
+
+        if errors:
+            for error in errors:
+                self.log.error(error)
+            raise PublishValidationError(
+                "Some checks failed, see validator log.",
+                title=self.label)

         # Check if the object exists and is a camera
         path = root + "/" + objects
         camera = hou.node(path)

         if not camera:
-            raise ValueError("Camera path does not exist: %s" % path)
+            raise PublishValidationError(
+                "Camera path does not exist: %s" % path,
+                title=self.label)

         if camera.type().name() != "cam":
-            raise ValueError(
-                "Object set in Alembic ROP is not a camera: "
-                "%s (type: %s)" % (camera, camera.type().name())
-            )
+            raise PublishValidationError(
+                ("Object set in Alembic ROP is not a camera: "
+                 "{} (type: {})").format(camera, camera.type().name()),
+                title=self.label)
diff --git a/openpype/hosts/houdini/plugins/publish/validate_cop_output_node.py b/openpype/hosts/houdini/plugins/publish/validate_cop_output_node.py
index 543539ffe3..1d0377c818 100644
--- a/openpype/hosts/houdini/plugins/publish/validate_cop_output_node.py
+++ b/openpype/hosts/houdini/plugins/publish/validate_cop_output_node.py
@@ -1,4 +1,9 @@
+# -*- coding: utf-8 -*-
+import sys
 import pyblish.api
+import six
+
+from openpype.pipeline import PublishValidationError
class ValidateCopOutputNode(pyblish.api.InstancePlugin): @@ -20,9 +25,10 @@ class ValidateCopOutputNode(pyblish.api.InstancePlugin): invalid = self.get_invalid(instance) if invalid: - raise RuntimeError( - "Output node(s) `%s` are incorrect. " - "See plug-in log for details." % invalid + raise PublishValidationError( + ("Output node(s) `{}` are incorrect. " + "See plug-in log for details.").format(invalid), + title=self.label ) @classmethod @@ -30,10 +36,19 @@ class ValidateCopOutputNode(pyblish.api.InstancePlugin): import hou - output_node = instance.data["output_node"] + try: + output_node = instance.data["output_node"] + except KeyError: + six.reraise( + PublishValidationError, + PublishValidationError( + "Can't determine COP output node.", + title=cls.__name__), + sys.exc_info()[2] + ) if output_node is None: - node = instance[0] + node = hou.node(instance.data.get("instance_node")) cls.log.error( "COP Output node in '%s' does not exist. " "Ensure a valid COP output path is set." % node.path() @@ -54,7 +69,8 @@ class ValidateCopOutputNode(pyblish.api.InstancePlugin): # For the sake of completeness also assert the category type # is Cop2 to avoid potential edge case scenarios even though # the isinstance check above should be stricter than this category - assert output_node.type().category().name() == "Cop2", ( - "Output node %s is not of category Cop2. This is a bug.." - % output_node.path() - ) + if output_node.type().category().name() != "Cop2": + raise PublishValidationError( + ("Output node {} is not of category Cop2. 
" + "This is a bug...").format(output_node.path()), + title=cls.label) diff --git a/openpype/hosts/houdini/plugins/publish/validate_file_extension.py b/openpype/hosts/houdini/plugins/publish/validate_file_extension.py index b26d28a1e7..4584e78f4f 100644 --- a/openpype/hosts/houdini/plugins/publish/validate_file_extension.py +++ b/openpype/hosts/houdini/plugins/publish/validate_file_extension.py @@ -1,7 +1,11 @@ +# -*- coding: utf-8 -*- import os import pyblish.api from openpype.hosts.houdini.api import lib +from openpype.pipeline import PublishValidationError + +import hou class ValidateFileExtension(pyblish.api.InstancePlugin): @@ -29,15 +33,16 @@ class ValidateFileExtension(pyblish.api.InstancePlugin): invalid = self.get_invalid(instance) if invalid: - raise RuntimeError( - "ROP node has incorrect " "file extension: %s" % invalid + raise PublishValidationError( + "ROP node has incorrect file extension: {}".format(invalid), + title=self.label ) @classmethod def get_invalid(cls, instance): # Get ROP node from instance - node = instance[0] + node = hou.node(instance.data["instance_node"]) # Create lookup for current family in instance families = [] @@ -53,7 +58,9 @@ class ValidateFileExtension(pyblish.api.InstancePlugin): for family in families: extension = cls.family_extensions.get(family, None) if extension is None: - raise RuntimeError("Unsupported family: %s" % family) + raise PublishValidationError( + "Unsupported family: {}".format(family), + title=cls.label) if output_extension != extension: return [node.path()] diff --git a/openpype/hosts/houdini/plugins/publish/validate_frame_token.py b/openpype/hosts/houdini/plugins/publish/validate_frame_token.py index 76b5910576..b5f6ba71e1 100644 --- a/openpype/hosts/houdini/plugins/publish/validate_frame_token.py +++ b/openpype/hosts/houdini/plugins/publish/validate_frame_token.py @@ -1,6 +1,7 @@ import pyblish.api from openpype.hosts.houdini.api import lib +import hou class 
ValidateFrameToken(pyblish.api.InstancePlugin): @@ -36,7 +37,7 @@ class ValidateFrameToken(pyblish.api.InstancePlugin): @classmethod def get_invalid(cls, instance): - node = instance[0] + node = hou.node(instance.data.get("instance_node")) # Check trange parm, 0 means Render Current Frame frame_range = node.evalParm("trange") diff --git a/openpype/hosts/houdini/plugins/publish/validate_houdini_license_category.py b/openpype/hosts/houdini/plugins/publish/validate_houdini_license_category.py index f5f03aa844..f1c52f22c1 100644 --- a/openpype/hosts/houdini/plugins/publish/validate_houdini_license_category.py +++ b/openpype/hosts/houdini/plugins/publish/validate_houdini_license_category.py @@ -1,4 +1,6 @@ +# -*- coding: utf-8 -*- import pyblish.api +from openpype.pipeline import PublishValidationError class ValidateHoudiniCommercialLicense(pyblish.api.InstancePlugin): @@ -24,7 +26,7 @@ class ValidateHoudiniCommercialLicense(pyblish.api.InstancePlugin): license = hou.licenseCategory() if license != hou.licenseCategoryType.Commercial: - raise RuntimeError( - "USD Publishing requires a full Commercial " - "license. You are on: %s" % license - ) + raise PublishValidationError( + ("USD Publishing requires a full Commercial " + "license. 
You are on: {}").format(license), + title=self.label) diff --git a/openpype/hosts/houdini/plugins/publish/validate_mkpaths_toggled.py b/openpype/hosts/houdini/plugins/publish/validate_mkpaths_toggled.py index be6a798a95..9d1f92a101 100644 --- a/openpype/hosts/houdini/plugins/publish/validate_mkpaths_toggled.py +++ b/openpype/hosts/houdini/plugins/publish/validate_mkpaths_toggled.py @@ -1,11 +1,12 @@ +# -*- coding: utf-8 -*- import pyblish.api -from openpype.pipeline.publish import ValidateContentsOrder +from openpype.pipeline import PublishValidationError class ValidateIntermediateDirectoriesChecked(pyblish.api.InstancePlugin): """Validate Create Intermediate Directories is enabled on ROP node.""" - order = ValidateContentsOrder + order = pyblish.api.ValidatorOrder families = ["pointcache", "camera", "vdbcache"] hosts = ["houdini"] label = "Create Intermediate Directories Checked" @@ -14,10 +15,10 @@ class ValidateIntermediateDirectoriesChecked(pyblish.api.InstancePlugin): invalid = self.get_invalid(instance) if invalid: - raise RuntimeError( - "Found ROP node with Create Intermediate " - "Directories turned off: %s" % invalid - ) + raise PublishValidationError( + ("Found ROP node with Create Intermediate " + "Directories turned off: {}".format(invalid)), + title=self.label) @classmethod def get_invalid(cls, instance): diff --git a/openpype/hosts/houdini/plugins/publish/validate_no_errors.py b/openpype/hosts/houdini/plugins/publish/validate_no_errors.py index 76635d4ed5..f7c95aaf4e 100644 --- a/openpype/hosts/houdini/plugins/publish/validate_no_errors.py +++ b/openpype/hosts/houdini/plugins/publish/validate_no_errors.py @@ -1,6 +1,7 @@ +# -*- coding: utf-8 -*- import pyblish.api import hou -from openpype.pipeline.publish import ValidateContentsOrder +from openpype.pipeline import PublishValidationError def cook_in_range(node, start, end): @@ -28,7 +29,7 @@ def get_errors(node): class ValidateNoErrors(pyblish.api.InstancePlugin): """Validate the Instance has no 
current cooking errors.""" - order = ValidateContentsOrder + order = pyblish.api.ValidatorOrder hosts = ["houdini"] label = "Validate no errors" @@ -37,7 +38,7 @@ def process(self, instance): validate_nodes = [] if len(instance) > 0: - validate_nodes.append(instance[0]) + validate_nodes.append(hou.node(instance.data.get("instance_node"))) output_node = instance.data.get("output_node") if output_node: validate_nodes.append(output_node) @@ -62,4 +63,6 @@ class ValidateNoErrors(pyblish.api.InstancePlugin): errors = get_errors(node) if errors: self.log.error(errors) - raise RuntimeError("Node has errors: %s" % node.path()) + raise PublishValidationError( + "Node has errors: {}".format(node.path()), + title=self.label) diff --git a/openpype/hosts/houdini/plugins/publish/validate_primitive_hierarchy_paths.py b/openpype/hosts/houdini/plugins/publish/validate_primitive_hierarchy_paths.py index 7a8cd04f15..d3a4c0cfbf 100644 --- a/openpype/hosts/houdini/plugins/publish/validate_primitive_hierarchy_paths.py +++ b/openpype/hosts/houdini/plugins/publish/validate_primitive_hierarchy_paths.py @@ -1,5 +1,8 @@ +# -*- coding: utf-8 -*- import pyblish.api from openpype.pipeline.publish import ValidateContentsOrder +from openpype.pipeline import PublishValidationError +import hou class ValidatePrimitiveHierarchyPaths(pyblish.api.InstancePlugin): @@ -19,19 +22,26 @@ class ValidatePrimitiveHierarchyPaths(pyblish.api.InstancePlugin): def process(self, instance): invalid = self.get_invalid(instance) if invalid: - raise RuntimeError( - "See log for details. " "Invalid nodes: {0}".format(invalid) + raise PublishValidationError( + "See log for details. 
" "Invalid nodes: {0}".format(invalid), + title=self.label ) @classmethod def get_invalid(cls, instance): - import hou + output_node = instance.data.get("output_node") + rop_node = hou.node(instance.data["instance_node"]) - output = instance.data["output_node"] + if output_node is None: + cls.log.error( + "SOP Output node in '%s' does not exist. " + "Ensure a valid SOP output path is set." % rop_node.path() + ) - rop = instance[0] - build_from_path = rop.parm("build_from_path").eval() + return [rop_node.path()] + + build_from_path = rop_node.parm("build_from_path").eval() if not build_from_path: cls.log.debug( "Alembic ROP has 'Build from Path' disabled. " @@ -39,20 +49,20 @@ class ValidatePrimitiveHierarchyPaths(pyblish.api.InstancePlugin): ) return - path_attr = rop.parm("path_attrib").eval() + path_attr = rop_node.parm("path_attrib").eval() if not path_attr: cls.log.error( "The Alembic ROP node has no Path Attribute" "value set, but 'Build Hierarchy from Attribute'" "is enabled." ) - return [rop.path()] + return [rop_node.path()] cls.log.debug("Checking for attribute: %s" % path_attr) # Check if the primitive attribute exists frame = instance.data.get("frameStart", 0) - geo = output.geometryAtFrame(frame) + geo = output_node.geometryAtFrame(frame) # If there are no primitives on the current frame then we can't # check whether the path names are correct. 
So we'll just issue a @@ -73,7 +83,7 @@ class ValidatePrimitiveHierarchyPaths(pyblish.api.InstancePlugin): "Geometry Primitives are missing " "path attribute: `%s`" % path_attr ) - return [output.path()] + return [output_node.path()] # Ensure at least a single string value is present if not attrib.strings(): @@ -81,7 +91,7 @@ class ValidatePrimitiveHierarchyPaths(pyblish.api.InstancePlugin): "Primitive path attribute has no " "string values: %s" % path_attr ) - return [output.path()] + return [output_node.path()] paths = geo.primStringAttribValues(path_attr) # Ensure all primitives are set to a valid path @@ -93,4 +103,4 @@ class ValidatePrimitiveHierarchyPaths(pyblish.api.InstancePlugin): "Prims have no value for attribute `%s` " "(%s of %s prims)" % (path_attr, len(invalid_prims), num_prims) ) - return [output.path()] + return [output_node.path()] diff --git a/openpype/hosts/houdini/plugins/publish/validate_remote_publish.py b/openpype/hosts/houdini/plugins/publish/validate_remote_publish.py index 0ab182c584..4e8e5fc0e8 100644 --- a/openpype/hosts/houdini/plugins/publish/validate_remote_publish.py +++ b/openpype/hosts/houdini/plugins/publish/validate_remote_publish.py @@ -1,7 +1,9 @@ +# -*-coding: utf-8 -*- import pyblish.api from openpype.hosts.houdini.api import lib from openpype.pipeline.publish import RepairContextAction +from openpype.pipeline import PublishValidationError import hou @@ -27,17 +29,24 @@ class ValidateRemotePublishOutNode(pyblish.api.ContextPlugin): # We ensure it's a shell node and that it has the pre-render script # set correctly. Plus the shell script it will trigger should be # completely empty (doing nothing) - assert node.type().name() == "shell", "Must be shell ROP node" - assert node.parm("command").eval() == "", "Must have no command" - assert not node.parm("shellexec").eval(), "Must not execute in shell" - assert ( - node.parm("prerender").eval() == cmd - ), "REMOTE_PUBLISH node does not have correct prerender script." 
- assert ( - node.parm("lprerender").eval() == "python" - ), "REMOTE_PUBLISH node prerender script type not set to 'python'" + if node.type().name() != "shell": + self.raise_error("Must be shell ROP node") + if node.parm("command").eval() != "": + self.raise_error("Must have no command") + if node.parm("shellexec").eval(): + self.raise_error("Must not execute in shell") + if node.parm("prerender").eval() != cmd: + self.raise_error(("REMOTE_PUBLISH node does not have " + "correct prerender script.")) + if node.parm("lprerender").eval() != "python": + self.raise_error(("REMOTE_PUBLISH node prerender script " + "type not set to 'python'")) @classmethod def repair(cls, context): """(Re)create the node if it fails to pass validation.""" lib.create_remote_publish_node(force=True) + + def raise_error(self, message): + self.log.error(message) + raise PublishValidationError(message, title=self.label) diff --git a/openpype/hosts/houdini/plugins/publish/validate_remote_publish_enabled.py b/openpype/hosts/houdini/plugins/publish/validate_remote_publish_enabled.py index afc8df7528..8ec62f4e85 100644 --- a/openpype/hosts/houdini/plugins/publish/validate_remote_publish_enabled.py +++ b/openpype/hosts/houdini/plugins/publish/validate_remote_publish_enabled.py @@ -1,7 +1,9 @@ +# -*- coding: utf-8 -*- import pyblish.api import hou from openpype.pipeline.publish import RepairContextAction +from openpype.pipeline import PublishValidationError class ValidateRemotePublishEnabled(pyblish.api.ContextPlugin): @@ -18,10 +20,12 @@ class ValidateRemotePublishEnabled(pyblish.api.ContextPlugin): node = hou.node("/out/REMOTE_PUBLISH") if not node: - raise RuntimeError("Missing REMOTE_PUBLISH node.") + raise PublishValidationError( + "Missing REMOTE_PUBLISH node.", title=self.label) if node.isBypassed(): - raise RuntimeError("REMOTE_PUBLISH must not be bypassed.") + raise PublishValidationError( + "REMOTE_PUBLISH must not be bypassed.", title=self.label) @classmethod def repair(cls, context): @@ 
-29,7 +33,8 @@ class ValidateRemotePublishEnabled(pyblish.api.ContextPlugin): node = hou.node("/out/REMOTE_PUBLISH") if not node: - raise RuntimeError("Missing REMOTE_PUBLISH node.") + raise PublishValidationError( + "Missing REMOTE_PUBLISH node.", title=cls.label) cls.log.info("Disabling bypass on /out/REMOTE_PUBLISH") node.bypass(False) diff --git a/openpype/hosts/houdini/plugins/publish/validate_sop_output_node.py b/openpype/hosts/houdini/plugins/publish/validate_sop_output_node.py index a5a07b1b1a..ed7f438729 100644 --- a/openpype/hosts/houdini/plugins/publish/validate_sop_output_node.py +++ b/openpype/hosts/houdini/plugins/publish/validate_sop_output_node.py @@ -1,4 +1,6 @@ +# -*- coding: utf-8 -*- import pyblish.api +from openpype.pipeline import PublishValidationError class ValidateSopOutputNode(pyblish.api.InstancePlugin): @@ -22,9 +24,9 @@ class ValidateSopOutputNode(pyblish.api.InstancePlugin): invalid = self.get_invalid(instance) if invalid: - raise RuntimeError( - "Output node(s) `%s` are incorrect. " - "See plug-in log for details." % invalid + raise PublishValidationError( + "Output node(s) are incorrect", + title="Invalid output node(s)" ) @classmethod @@ -32,10 +34,10 @@ class ValidateSopOutputNode(pyblish.api.InstancePlugin): import hou - output_node = instance.data["output_node"] + output_node = instance.data.get("output_node") if output_node is None: - node = instance[0] + node = hou.node(instance.data["instance_node"]) cls.log.error( "SOP Output node in '%s' does not exist. " "Ensure a valid SOP output path is set." % node.path() @@ -56,10 +58,11 @@ class ValidateSopOutputNode(pyblish.api.InstancePlugin): # For the sake of completeness also assert the category type # is Sop to avoid potential edge case scenarios even though # the isinstance check above should be stricter than this category - assert output_node.type().category().name() == "Sop", ( - "Output node %s is not of category Sop. This is a bug.." 
- % output_node.path() - ) + if output_node.type().category().name() != "Sop": + raise PublishValidationError( + ("Output node {} is not of category Sop. " + "This is a bug.").format(output_node.path()), + title=cls.label) # Ensure the node is cooked and succeeds to cook so we can correctly # check for its geometry data. diff --git a/openpype/hosts/houdini/plugins/publish/validate_usd_layer_path_backslashes.py b/openpype/hosts/houdini/plugins/publish/validate_usd_layer_path_backslashes.py index ac0181aed2..a0e2302495 100644 --- a/openpype/hosts/houdini/plugins/publish/validate_usd_layer_path_backslashes.py +++ b/openpype/hosts/houdini/plugins/publish/validate_usd_layer_path_backslashes.py @@ -1,6 +1,10 @@ +# -*- coding: utf-8 -*- import pyblish.api import openpype.hosts.houdini.api.usd as hou_usdlib +from openpype.pipeline import PublishValidationError + +import hou class ValidateUSDLayerPathBackslashes(pyblish.api.InstancePlugin): @@ -24,7 +28,7 @@ class ValidateUSDLayerPathBackslashes(pyblish.api.InstancePlugin): def process(self, instance): - rop = instance[0] + rop = hou.node(instance.data.get("instance_node")) lop_path = hou_usdlib.get_usd_rop_loppath(rop) stage = lop_path.stage(apply_viewport_overrides=False) @@ -44,7 +48,7 @@ class ValidateUSDLayerPathBackslashes(pyblish.api.InstancePlugin): invalid.append(layer) if invalid: - raise RuntimeError( + raise PublishValidationError(( "Loaded layers have backslashes. " - "This is invalid for HUSK USD rendering." 
- ) + "This is invalid for HUSK USD rendering."), + title=self.label) diff --git a/openpype/hosts/houdini/plugins/publish/validate_usd_model_and_shade.py b/openpype/hosts/houdini/plugins/publish/validate_usd_model_and_shade.py index 2fd2f5eb9f..a55eb70cb2 100644 --- a/openpype/hosts/houdini/plugins/publish/validate_usd_model_and_shade.py +++ b/openpype/hosts/houdini/plugins/publish/validate_usd_model_and_shade.py @@ -1,10 +1,13 @@ +# -*- coding: utf-8 -*- import pyblish.api import openpype.hosts.houdini.api.usd as hou_usdlib - +from openpype.pipeline import PublishValidationError from pxr import UsdShade, UsdRender, UsdLux +import hou + def fullname(o): """Get fully qualified class name""" @@ -37,7 +40,7 @@ class ValidateUsdModel(pyblish.api.InstancePlugin): def process(self, instance): - rop = instance[0] + rop = hou.node(instance.data.get("instance_node")) lop_path = hou_usdlib.get_usd_rop_loppath(rop) stage = lop_path.stage(apply_viewport_overrides=False) @@ -55,7 +58,8 @@ class ValidateUsdModel(pyblish.api.InstancePlugin): if invalid: prim_paths = sorted([str(prim.GetPath()) for prim in invalid]) - raise RuntimeError("Found invalid primitives: %s" % prim_paths) + raise PublishValidationError( + "Found invalid primitives: {}".format(prim_paths)) class ValidateUsdShade(ValidateUsdModel): diff --git a/openpype/hosts/houdini/plugins/publish/validate_usd_output_node.py b/openpype/hosts/houdini/plugins/publish/validate_usd_output_node.py index 1f10fafdf4..af21efcafc 100644 --- a/openpype/hosts/houdini/plugins/publish/validate_usd_output_node.py +++ b/openpype/hosts/houdini/plugins/publish/validate_usd_output_node.py @@ -1,4 +1,6 @@ +# -*- coding: utf-8 -*- import pyblish.api +from openpype.pipeline import PublishValidationError class ValidateUSDOutputNode(pyblish.api.InstancePlugin): @@ -20,9 +22,10 @@ class ValidateUSDOutputNode(pyblish.api.InstancePlugin): invalid = self.get_invalid(instance) if invalid: - raise RuntimeError( - "Output node(s) `%s` are incorrect. 
" - "See plug-in log for details." % invalid + raise PublishValidationError( + ("Output node(s) `{}` are incorrect. " + "See plug-in log for details.").format(invalid), + title=self.label ) @classmethod @@ -33,7 +36,7 @@ class ValidateUSDOutputNode(pyblish.api.InstancePlugin): output_node = instance.data["output_node"] if output_node is None: - node = instance[0] + node = hou.node(instance.data.get("instance_node")) cls.log.error( "USD node '%s' LOP path does not exist. " "Ensure a valid LOP path is set." % node.path() diff --git a/openpype/hosts/houdini/plugins/publish/validate_usd_render_product_names.py b/openpype/hosts/houdini/plugins/publish/validate_usd_render_product_names.py index 36336a03ae..02c44ab94e 100644 --- a/openpype/hosts/houdini/plugins/publish/validate_usd_render_product_names.py +++ b/openpype/hosts/houdini/plugins/publish/validate_usd_render_product_names.py @@ -1,6 +1,8 @@ +# -*- coding: utf-8 -*- +import os import pyblish.api -import os +from openpype.pipeline import PublishValidationError class ValidateUSDRenderProductNames(pyblish.api.InstancePlugin): @@ -28,4 +30,5 @@ class ValidateUSDRenderProductNames(pyblish.api.InstancePlugin): if invalid: for message in invalid: self.log.error(message) - raise RuntimeError("USD Render Paths are invalid.") + raise PublishValidationError( + "USD Render Paths are invalid.", title=self.label) diff --git a/openpype/hosts/houdini/plugins/publish/validate_usd_setdress.py b/openpype/hosts/houdini/plugins/publish/validate_usd_setdress.py index fb1094e6b5..01ebc0e828 100644 --- a/openpype/hosts/houdini/plugins/publish/validate_usd_setdress.py +++ b/openpype/hosts/houdini/plugins/publish/validate_usd_setdress.py @@ -1,6 +1,8 @@ +# -*- coding: utf-8 -*- import pyblish.api import openpype.hosts.houdini.api.usd as hou_usdlib +from openpype.pipeline import PublishValidationError class ValidateUsdSetDress(pyblish.api.InstancePlugin): @@ -20,8 +22,9 @@ class ValidateUsdSetDress(pyblish.api.InstancePlugin): def 
process(self, instance): from pxr import UsdGeom + import hou - rop = instance[0] + rop = hou.node(instance.data.get("instance_node")) lop_path = hou_usdlib.get_usd_rop_loppath(rop) stage = lop_path.stage(apply_viewport_overrides=False) @@ -47,8 +50,9 @@ class ValidateUsdSetDress(pyblish.api.InstancePlugin): invalid.append(node) if invalid: - raise RuntimeError( + raise PublishValidationError(( "SetDress contains local geometry. " "This is not allowed, it must be an assembly " - "of referenced assets." + "of referenced assets."), + title=self.label ) diff --git a/openpype/hosts/houdini/plugins/publish/validate_usd_shade_model_exists.py b/openpype/hosts/houdini/plugins/publish/validate_usd_shade_model_exists.py index f08c7c72c5..c4f118ac3b 100644 --- a/openpype/hosts/houdini/plugins/publish/validate_usd_shade_model_exists.py +++ b/openpype/hosts/houdini/plugins/publish/validate_usd_shade_model_exists.py @@ -1,3 +1,4 @@ +# -*- coding: utf-8 -*- import re import pyblish.api @@ -5,6 +6,7 @@ import pyblish.api from openpype.client import get_subset_by_name from openpype.pipeline import legacy_io from openpype.pipeline.publish import ValidateContentsOrder +from openpype.pipeline import PublishValidationError class ValidateUSDShadeModelExists(pyblish.api.InstancePlugin): @@ -32,7 +34,8 @@ class ValidateUSDShadeModelExists(pyblish.api.InstancePlugin): project_name, model_subset, asset_doc["_id"], fields=["_id"] ) if not subset_doc: - raise RuntimeError( - "USD Model subset not found: " - "%s (%s)" % (model_subset, asset_name) + raise PublishValidationError( + ("USD Model subset not found: " + "{} ({})").format(model_subset, asset_name), + title=self.label ) diff --git a/openpype/hosts/houdini/plugins/publish/validate_usd_shade_workspace.py b/openpype/hosts/houdini/plugins/publish/validate_usd_shade_workspace.py index a4902b48a9..bd3366a424 100644 --- a/openpype/hosts/houdini/plugins/publish/validate_usd_shade_workspace.py +++ 
b/openpype/hosts/houdini/plugins/publish/validate_usd_shade_workspace.py @@ -1,5 +1,6 @@ +# -*- coding: utf-8 -*- import pyblish.api -from openpype.pipeline.publish import ValidateContentsOrder +from openpype.pipeline import PublishValidationError import hou @@ -12,14 +13,14 @@ class ValidateUsdShadeWorkspace(pyblish.api.InstancePlugin): """ - order = ValidateContentsOrder + order = pyblish.api.ValidatorOrder hosts = ["houdini"] families = ["usdShade"] label = "USD Shade Workspace" def process(self, instance): - rop = instance[0] + rop = hou.node(instance.data.get("instance_node")) workspace = rop.parent() definition = workspace.type().definition() @@ -39,13 +40,14 @@ class ValidateUsdShadeWorkspace(pyblish.api.InstancePlugin): if node_type != other_node_type: continue - # Get highest version + # Get the highest version highest = max(highest, other_version) if version != highest: - raise RuntimeError( - "Shading Workspace is not the latest version." - " Found %s. Latest is %s." % (version, highest) + raise PublishValidationError( + ("Shading Workspace is not the latest version." + " Found {}. Latest is {}.").format(version, highest), + title=self.label ) # There were some issues with the editable node not having the right @@ -56,8 +58,9 @@ class ValidateUsdShadeWorkspace(pyblish.api.InstancePlugin): ) rop_value = rop.parm("lopoutput").rawValue() if rop_value != value: - raise RuntimeError( - "Shading Workspace has invalid 'lopoutput'" - " parameter value. The Shading Workspace" - " needs to be reset to its default values." + raise PublishValidationError( + ("Shading Workspace has invalid 'lopoutput'" + " parameter value. 
The Shading Workspace" + " needs to be reset to its default values."), + title=self.label ) diff --git a/openpype/hosts/houdini/plugins/publish/validate_vdb_input_node.py b/openpype/hosts/houdini/plugins/publish/validate_vdb_input_node.py index ac408bc842..1f9ccc9c42 100644 --- a/openpype/hosts/houdini/plugins/publish/validate_vdb_input_node.py +++ b/openpype/hosts/houdini/plugins/publish/validate_vdb_input_node.py @@ -1,5 +1,8 @@ +# -*- coding: utf-8 -*- import pyblish.api -from openpype.pipeline.publish import ValidateContentsOrder +from openpype.pipeline import ( + PublishValidationError +) class ValidateVDBInputNode(pyblish.api.InstancePlugin): @@ -16,7 +19,7 @@ class ValidateVDBInputNode(pyblish.api.InstancePlugin): """ - order = ValidateContentsOrder + 0.1 + order = pyblish.api.ValidatorOrder + 0.1 families = ["vdbcache"] hosts = ["houdini"] label = "Validate Input Node (VDB)" @@ -24,8 +27,10 @@ class ValidateVDBInputNode(pyblish.api.InstancePlugin): def process(self, instance): invalid = self.get_invalid(instance) if invalid: - raise RuntimeError( - "Node connected to the output node is not" "of type VDB!" 
+ raise PublishValidationError( + "Node connected to the output node is not of type VDB", + title=self.label ) @classmethod diff --git a/openpype/hosts/houdini/plugins/publish/validate_vdb_output_node.py b/openpype/hosts/houdini/plugins/publish/validate_vdb_output_node.py index 55ed581d4c..61c1209fc9 100644 --- a/openpype/hosts/houdini/plugins/publish/validate_vdb_output_node.py +++ b/openpype/hosts/houdini/plugins/publish/validate_vdb_output_node.py @@ -1,6 +1,7 @@ +# -*- coding: utf-8 -*- import pyblish.api import hou -from openpype.pipeline.publish import ValidateContentsOrder +from openpype.pipeline import PublishValidationError class ValidateVDBOutputNode(pyblish.api.InstancePlugin): @@ -17,7 +18,7 @@ class ValidateVDBOutputNode(pyblish.api.InstancePlugin): """ - order = ValidateContentsOrder + 0.1 + order = pyblish.api.ValidatorOrder + 0.1 families = ["vdbcache"] hosts = ["houdini"] label = "Validate Output Node (VDB)" @@ -25,8 +26,9 @@ class ValidateVDBOutputNode(pyblish.api.InstancePlugin): def process(self, instance): invalid = self.get_invalid(instance) if invalid: - raise RuntimeError( - "Node connected to the output node is not" " of type VDB!" + raise PublishValidationError( + "Node connected to the output node is not of type VDB!", + title=self.label ) @classmethod @@ -36,7 +38,7 @@ class ValidateVDBOutputNode(pyblish.api.InstancePlugin): if node is None: cls.log.error( "SOP path is not correctly set on " - "ROP node '%s'." 
% instance.data.get("instance_node") ) return [instance] diff --git a/openpype/hosts/houdini/plugins/publish/validate_workfile_paths.py b/openpype/hosts/houdini/plugins/publish/validate_workfile_paths.py index 560b355e21..7707cc2dba 100644 --- a/openpype/hosts/houdini/plugins/publish/validate_workfile_paths.py +++ b/openpype/hosts/houdini/plugins/publish/validate_workfile_paths.py @@ -1,11 +1,16 @@ # -*- coding: utf-8 -*- import pyblish.api import hou +from openpype.pipeline import ( + PublishValidationError, + OptionalPyblishPluginMixin +) from openpype.pipeline.publish import RepairAction -class ValidateWorkfilePaths(pyblish.api.InstancePlugin): +class ValidateWorkfilePaths( + pyblish.api.InstancePlugin, OptionalPyblishPluginMixin): """Validate workfile paths so they are absolute.""" order = pyblish.api.ValidatorOrder @@ -19,6 +24,8 @@ class ValidateWorkfilePaths(pyblish.api.InstancePlugin): prohibited_vars = ["$HIP", "$JOB"] def process(self, instance): + if not self.is_active(instance.data): + return invalid = self.get_invalid() self.log.info( "node types to check: {}".format(", ".join(self.node_types))) @@ -30,15 +37,16 @@ class ValidateWorkfilePaths(pyblish.api.InstancePlugin): self.log.error( "{}: {}".format(param.path(), param.unexpandedString())) - raise RuntimeError("Invalid paths found") + raise PublishValidationError( + "Invalid paths found", title=self.label) @classmethod def get_invalid(cls): invalid = [] for param, _ in hou.fileReferences(): - if param is None: + # it might return None for some reason + if not param: continue - # skip nodes we are not interested in if param.node().type().name() not in cls.node_types: continue diff --git a/openpype/hosts/houdini/startup/MainMenuCommon.xml b/openpype/hosts/houdini/startup/MainMenuCommon.xml index abfa3f136e..c08114b71b 100644 --- a/openpype/hosts/houdini/startup/MainMenuCommon.xml +++ b/openpype/hosts/houdini/startup/MainMenuCommon.xml @@ -1,10 +1,10 @@ - 
+ - + - + - + bool: + node = rt.getNodeByName(node_name) + if not node: + return False + + for k, v in data.items(): + if isinstance(v, (dict, list)): + rt.setUserProp(node, k, f'{JSON_PREFIX}{json.dumps(v)}') + else: + rt.setUserProp(node, k, v) + + return True + + +def lsattr( + attr: str, + value: Union[str, None] = None, + root: Union[str, None] = None) -> list: + """List nodes having attribute with specified value. + + Args: + attr (str): Attribute name to match. + value (str, Optional): Value to match; if omitted, all nodes + with the specified attribute are returned regardless of value. + root (str, Optional): Root node name. If omitted, scene root is used. + + Returns: + list of nodes. + """ + root = rt.rootnode if root is None else rt.getNodeByName(root) + + def output_node(node, nodes): + nodes.append(node) + for child in node.Children: + output_node(child, nodes) + + nodes = [] + output_node(root, nodes) + return [ + n for n in nodes + if rt.getUserProp(n, attr) == value + ] if value else [ + n for n in nodes + if rt.getUserProp(n, attr) + ] + + +def read(container) -> dict: + data = {} + props = rt.getUserPropBuffer(container) + # this shouldn't happen but let's guard against it anyway + if not props: + return data + + for line in props.split("\r\n"): + try: + key, value = line.split("=") + except ValueError: + # if the line cannot be split we can't really parse it + continue + + value = value.strip() + if isinstance(value, six.string_types) and \ + value.startswith(JSON_PREFIX): + try: + value = json.loads(value[len(JSON_PREFIX):]) + except json.JSONDecodeError: + # not valid JSON + pass + + data[key.strip()] = value + + data["instance_node"] = container.name + + return data + + +@contextlib.contextmanager +def maintained_selection(): + previous_selection = rt.getCurrentSelection() + try: + yield + finally: + if previous_selection: + rt.select(previous_selection) + else: + rt.select() + + +def get_all_children(parent, node_type=None): + """Handy 
function to get all the children of a given node + + Args: + parent (3dsmax Node): Node to get all children of. + node_type (runtime.class, Optional): class to filter children by, + e.g. rt.FFDBox/rt.GeometryClass etc. + + Returns: + list: list of all children of the parent node + """ + def list_children(node): + children = [] + for c in node.Children: + children.append(c) + children = children + list_children(c) + return children + child_list = list_children(parent) + + return ([x for x in child_list if rt.superClassOf(x) == node_type] + if node_type else child_list) diff --git a/openpype/hosts/max/api/menu.py b/openpype/hosts/max/api/menu.py new file mode 100644 index 0000000000..d1913c51e0 --- /dev/null +++ b/openpype/hosts/max/api/menu.py @@ -0,0 +1,130 @@ +# -*- coding: utf-8 -*- +"""3dsmax menu definition of OpenPype.""" +from Qt import QtWidgets, QtCore +from pymxs import runtime as rt + +from openpype.tools.utils import host_tools + + +class OpenPypeMenu(object): + """Object representing OpenPype menu. + + This is using a "hack" to inject itself before the "Help" menu of 3dsmax. + For some reason the `postLoadingMenus` event doesn't fire, and the main menu + is probably re-initialized by menu templates, so we wait for at least + one Qt event loop cycle before trying to insert. + + """ + + def __init__(self): + super().__init__() + self.main_widget = self.get_main_widget() + self.menu = None + + timer = QtCore.QTimer() + # set number of event loops to wait. 
+ timer.setInterval(1) + timer.timeout.connect(self._on_timer) + timer.start() + + self._timer = timer + self._counter = 0 + + def _on_timer(self): + if self._counter < 1: + self._counter += 1 + return + + self._counter = 0 + self._timer.stop() + self.build_openpype_menu() + + @staticmethod + def get_main_widget(): + """Get 3dsmax main window.""" + return QtWidgets.QWidget.find(rt.windows.getMAXHWND()) + + def get_main_menubar(self) -> QtWidgets.QMenuBar: + """Get the menu bar of the 3dsmax main window.""" + return list(self.main_widget.findChildren(QtWidgets.QMenuBar))[0] + + def get_or_create_openpype_menu( + self, name: str = "&OpenPype", + before: str = "&Help") -> QtWidgets.QAction: + """Get the OpenPype menu, creating it if it does not exist yet. + + Args: + name (str, Optional): OpenPype menu name. + before (str, Optional): Name of the 3dsmax main menu item to + add OpenPype menu before. + + Returns: + QtWidgets.QAction: OpenPype menu action. + + """ + if self.menu is not None: + return self.menu + + menu_bar = self.get_main_menubar() + menu_items = menu_bar.findChildren( + QtWidgets.QMenu, options=QtCore.Qt.FindDirectChildrenOnly) + help_action = None + for item in menu_items: + if name in item.title(): + # we already have OpenPype menu + return item + + if before in item.title(): + help_action = item.menuAction() + + op_menu = QtWidgets.QMenu("&OpenPype") + menu_bar.insertMenu(help_action, op_menu) + + self.menu = op_menu + return op_menu + + def build_openpype_menu(self) -> QtWidgets.QAction: + """Build items in OpenPype menu.""" + openpype_menu = self.get_or_create_openpype_menu() + load_action = QtWidgets.QAction("Load...", openpype_menu) + load_action.triggered.connect(self.load_callback) + openpype_menu.addAction(load_action) + + publish_action = QtWidgets.QAction("Publish...", openpype_menu) + publish_action.triggered.connect(self.publish_callback) + openpype_menu.addAction(publish_action) + + manage_action = QtWidgets.QAction("Manage...", openpype_menu) + 
manage_action.triggered.connect(self.manage_callback) + openpype_menu.addAction(manage_action) + + library_action = QtWidgets.QAction("Library...", openpype_menu) + library_action.triggered.connect(self.library_callback) + openpype_menu.addAction(library_action) + + openpype_menu.addSeparator() + + workfiles_action = QtWidgets.QAction("Work Files...", openpype_menu) + workfiles_action.triggered.connect(self.workfiles_callback) + openpype_menu.addAction(workfiles_action) + return openpype_menu + + def load_callback(self): + """Callback to show Loader tool.""" + host_tools.show_loader(parent=self.main_widget) + + def publish_callback(self): + """Callback to show Publisher tool.""" + host_tools.show_publisher(parent=self.main_widget) + + def manage_callback(self): + """Callback to show Scene Manager/Inventory tool.""" + host_tools.show_subset_manager(parent=self.main_widget) + + def library_callback(self): + """Callback to show Library Loader tool.""" + host_tools.show_library_loader(parent=self.main_widget) + + def workfiles_callback(self): + """Callback to show Workfiles tool.""" + host_tools.show_workfiles(parent=self.main_widget) diff --git a/openpype/hosts/max/api/pipeline.py b/openpype/hosts/max/api/pipeline.py new file mode 100644 index 0000000000..f3cdf245fb --- /dev/null +++ b/openpype/hosts/max/api/pipeline.py @@ -0,0 +1,145 @@ +# -*- coding: utf-8 -*- +"""Pipeline tools for OpenPype 3dsmax integration.""" +import os +import logging + +import json + +from openpype.host import HostBase, IWorkfileHost, ILoadHost, INewPublisher +import pyblish.api +from openpype.pipeline import ( + register_creator_plugin_path, + register_loader_plugin_path, + AVALON_CONTAINER_ID, +) +from openpype.hosts.max.api.menu import OpenPypeMenu +from openpype.hosts.max.api import lib +from openpype.hosts.max import MAX_HOST_DIR + +from pymxs import runtime as rt # noqa + +log = logging.getLogger("openpype.hosts.max") + +PLUGINS_DIR = os.path.join(MAX_HOST_DIR, "plugins") +PUBLISH_PATH 
= os.path.join(PLUGINS_DIR, "publish") +LOAD_PATH = os.path.join(PLUGINS_DIR, "load") +CREATE_PATH = os.path.join(PLUGINS_DIR, "create") +INVENTORY_PATH = os.path.join(PLUGINS_DIR, "inventory") + + +class MaxHost(HostBase, IWorkfileHost, ILoadHost, INewPublisher): + + name = "max" + menu = None + + def __init__(self): + super(MaxHost, self).__init__() + self._op_events = {} + self._has_been_setup = False + + def install(self): + pyblish.api.register_host("max") + + pyblish.api.register_plugin_path(PUBLISH_PATH) + register_loader_plugin_path(LOAD_PATH) + register_creator_plugin_path(CREATE_PATH) + + # self._register_callbacks() + self.menu = OpenPypeMenu() + + self._has_been_setup = True + + def has_unsaved_changes(self): + # TODO: how to get it from 3dsmax? + return True + + def get_workfile_extensions(self): + return [".max"] + + def save_workfile(self, dst_path=None): + rt.saveMaxFile(dst_path) + return dst_path + + def open_workfile(self, filepath): + rt.checkForSave() + rt.loadMaxFile(filepath) + return filepath + + def get_current_workfile(self): + return os.path.join(rt.maxFilePath, rt.maxFileName) + + def get_containers(self): + return ls() + + def _register_callbacks(self): + rt.callbacks.removeScripts(id=rt.name("OpenPypeCallbacks")) + + rt.callbacks.addScript( + rt.Name("postLoadingMenus"), + self._deferred_menu_creation, id=rt.Name('OpenPypeCallbacks')) + + def _deferred_menu_creation(self): + self.log.info("Building menu ...") + self.menu = OpenPypeMenu() + + @staticmethod + def create_context_node(): + """Helper for creating context holding node.""" + + root_scene = rt.rootScene + + create_attr_script = (""" +attributes "OpenPypeContext" +( + parameters main rollout:params + ( + context type: #string + ) + + rollout params "OpenPype Parameters" + ( + editText editTextContext "Context" type: #string + ) +) + """) + + attr = rt.execute(create_attr_script) + rt.custAttributes.add(root_scene, attr) + + return root_scene.OpenPypeContext.context + + def 
update_context_data(self, data, changes): + try: + _ = rt.rootScene.OpenPypeContext.context + except AttributeError: + # context node doesn't exist + self.create_context_node() + + rt.rootScene.OpenPypeContext.context = json.dumps(data) + + def get_context_data(self): + try: + context = rt.rootScene.OpenPypeContext.context + except AttributeError: + # context node doesn't exist + context = self.create_context_node() + if not context: + context = "{}" + return json.loads(context) + + def save_file(self, dst_path=None): + # Force forward slashes to avoid segfault + dst_path = dst_path.replace("\\", "/") + rt.saveMaxFile(dst_path) + + +def ls() -> list: + """Get all OpenPype containers.""" + objs = rt.objects + containers = [ + obj for obj in objs + if rt.getUserProp(obj, "id") == AVALON_CONTAINER_ID + ] + + for container in sorted(containers, key=lambda container: container.name): + yield lib.read(container) diff --git a/openpype/hosts/max/api/plugin.py b/openpype/hosts/max/api/plugin.py new file mode 100644 index 0000000000..4788bfd383 --- /dev/null +++ b/openpype/hosts/max/api/plugin.py @@ -0,0 +1,111 @@ +# -*- coding: utf-8 -*- +"""3dsmax specific Avalon/Pyblish plugin definitions.""" +from pymxs import runtime as rt +import six +from abc import ABCMeta +from openpype.pipeline import ( + CreatorError, + Creator, + CreatedInstance +) +from openpype.lib import BoolDef +from .lib import imprint, read, lsattr + + +class OpenPypeCreatorError(CreatorError): + pass + + +class MaxCreatorBase(object): + + @staticmethod + def cache_subsets(shared_data): + if shared_data.get("max_cached_subsets") is None: + shared_data["max_cached_subsets"] = {} + cached_instances = lsattr("id", "pyblish.avalon.instance") + for i in cached_instances: + creator_id = rt.getUserProp(i, "creator_identifier") + if creator_id not in shared_data["max_cached_subsets"]: + shared_data["max_cached_subsets"][creator_id] = [i.name] + else: + shared_data[ + "max_cached_subsets"][creator_id].append(i.name) # 
noqa + return shared_data + + @staticmethod + def create_instance_node(node_name: str, parent: str = ""): + parent_node = rt.getNodeByName(parent) if parent else rt.rootScene + if not parent_node: + raise OpenPypeCreatorError(f"Specified parent {parent} not found") + + container = rt.container(name=node_name) + container.Parent = parent_node + + return container + + +@six.add_metaclass(ABCMeta) +class MaxCreator(Creator, MaxCreatorBase): + selected_nodes = [] + + def create(self, subset_name, instance_data, pre_create_data): + if pre_create_data.get("use_selection"): + self.selected_nodes = rt.getCurrentSelection() + + instance_node = self.create_instance_node(subset_name) + instance_data["instance_node"] = instance_node.name + instance = CreatedInstance( + self.family, + subset_name, + instance_data, + self + ) + for node in self.selected_nodes: + node.Parent = instance_node + + self._add_instance_to_context(instance) + imprint(instance_node.name, instance.data_to_store()) + + return instance + + def collect_instances(self): + self.cache_subsets(self.collection_shared_data) + for instance in self.collection_shared_data[ + "max_cached_subsets"].get(self.identifier, []): + created_instance = CreatedInstance.from_existing( + read(rt.getNodeByName(instance)), self + ) + self._add_instance_to_context(created_instance) + + def update_instances(self, update_list): + for created_inst, _changes in update_list: + instance_node = created_inst.get("instance_node") + + new_values = { + key: new_value + for key, (_old_value, new_value) in _changes.items() + } + imprint( + instance_node, + new_values, + ) + + def remove_instances(self, instances): + """Remove specified instances from the scene. + + The instance node is deleted from the scene together with the + data imprinted on it. 
+ + """ + for instance in instances: + instance_node = rt.getNodeByName( + instance.data.get("instance_node")) + if instance_node: + rt.delete(instance_node) + + self._remove_instance_from_context(instance) + + def get_pre_create_attr_defs(self): + return [ + BoolDef("use_selection", label="Use selection") + ] diff --git a/openpype/hosts/max/hooks/set_paths.py b/openpype/hosts/max/hooks/set_paths.py new file mode 100644 index 0000000000..3db5306344 --- /dev/null +++ b/openpype/hosts/max/hooks/set_paths.py @@ -0,0 +1,17 @@ +from openpype.lib import PreLaunchHook + + +class SetPath(PreLaunchHook): + """Set current dir to workdir. + + Hook `GlobalHostDataHook` must be executed before this hook. + """ + app_groups = ["max"] + + def execute(self): + workdir = self.launch_context.env.get("AVALON_WORKDIR", "") + if not workdir: + self.log.warning("BUG: Workdir is not filled.") + return + + self.launch_context.kwargs["cwd"] = workdir diff --git a/openpype/hosts/max/plugins/__init__.py b/openpype/hosts/max/plugins/__init__.py new file mode 100644 index 0000000000..e69de29bb2 diff --git a/openpype/hosts/max/plugins/create/create_pointcache.py b/openpype/hosts/max/plugins/create/create_pointcache.py new file mode 100644 index 0000000000..32f0838471 --- /dev/null +++ b/openpype/hosts/max/plugins/create/create_pointcache.py @@ -0,0 +1,22 @@ +# -*- coding: utf-8 -*- +"""Creator plugin for creating pointcache alembics.""" +from openpype.hosts.max.api import plugin +from openpype.pipeline import CreatedInstance + + +class CreatePointCache(plugin.MaxCreator): + identifier = "io.openpype.creators.max.pointcache" + label = "Point Cache" + family = "pointcache" + icon = "gear" + + def create(self, subset_name, instance_data, pre_create_data): + # from pymxs import runtime as rt + + _ = super(CreatePointCache, self).create( + subset_name, + instance_data, + pre_create_data) # type: CreatedInstance + + # for additional work on the node: + # instance_node = 
rt.getNodeByName(instance.get("instance_node")) diff --git a/openpype/hosts/max/plugins/load/load_pointcache.py b/openpype/hosts/max/plugins/load/load_pointcache.py new file mode 100644 index 0000000000..285d84b7b6 --- /dev/null +++ b/openpype/hosts/max/plugins/load/load_pointcache.py @@ -0,0 +1,65 @@ +# -*- coding: utf-8 -*- +"""Simple alembic loader for 3dsmax. + +Because of limited api, alembics can be only loaded, but not easily updated. + +""" +import os +from openpype.pipeline import ( + load +) + + +class AbcLoader(load.LoaderPlugin): + """Alembic loader.""" + + families = ["model", "animation", "pointcache"] + label = "Load Alembic" + representations = ["abc"] + order = -10 + icon = "code-fork" + color = "orange" + + def load(self, context, name=None, namespace=None, data=None): + from pymxs import runtime as rt + + file_path = os.path.normpath(self.fname) + + abc_before = { + c for c in rt.rootNode.Children + if rt.classOf(c) == rt.AlembicContainer + } + + abc_export_cmd = (f""" +AlembicImport.ImportToRoot = false + +importFile @"{file_path}" #noPrompt + """) + + self.log.debug(f"Executing command: {abc_export_cmd}") + rt.execute(abc_export_cmd) + + abc_after = { + c for c in rt.rootNode.Children + if rt.classOf(c) == rt.AlembicContainer + } + + # This should yield new AlembicContainer node + abc_containers = abc_after.difference(abc_before) + + if len(abc_containers) != 1: + self.log.error("Something failed when loading.") + + abc_container = abc_containers.pop() + + container_name = f"{name}_CON" + container = rt.container(name=container_name) + abc_container.Parent = container + + return container + + def remove(self, container): + from pymxs import runtime as rt + + node = container["node"] + rt.delete(node) diff --git a/openpype/hosts/max/plugins/publish/collect_workfile.py b/openpype/hosts/max/plugins/publish/collect_workfile.py new file mode 100644 index 0000000000..3500b2735c --- /dev/null +++ 
b/openpype/hosts/max/plugins/publish/collect_workfile.py @@ -0,0 +1,63 @@ +# -*- coding: utf-8 -*- +"""Collect current work file.""" +import os +import pyblish.api + +from pymxs import runtime as rt +from openpype.pipeline import legacy_io + + +class CollectWorkfile(pyblish.api.ContextPlugin): + """Inject the current working file into context""" + + order = pyblish.api.CollectorOrder - 0.01 + label = "Collect 3dsmax Workfile" + hosts = ['max'] + + def process(self, context): + """Inject the current working file.""" + folder = rt.maxFilePath + file = rt.maxFileName + if not folder or not file: + self.log.error("Scene is not saved.") + current_file = os.path.join(folder, file) + + context.data['currentFile'] = current_file + + filename, ext = os.path.splitext(file) + + task = legacy_io.Session["AVALON_TASK"] + + data = {} + + # create instance + instance = context.create_instance(name=filename) + subset = 'workfile' + task.capitalize() + + data.update({ + "subset": subset, + "asset": os.getenv("AVALON_ASSET", None), + "label": subset, + "publish": True, + "family": 'workfile', + "families": ['workfile'], + "setMembers": [current_file], + "frameStart": context.data['frameStart'], + "frameEnd": context.data['frameEnd'], + "handleStart": context.data['handleStart'], + "handleEnd": context.data['handleEnd'] + }) + + data['representations'] = [{ + 'name': ext.lstrip("."), + 'ext': ext.lstrip("."), + 'files': file, + "stagingDir": folder, + }] + + instance.data.update(data) + + self.log.info('Collected instance: {}'.format(file)) + self.log.info('Scene path: {}'.format(current_file)) + self.log.info('staging Dir: {}'.format(folder)) + self.log.info('subset: {}'.format(subset)) diff --git a/openpype/hosts/max/plugins/publish/extract_pointcache.py b/openpype/hosts/max/plugins/publish/extract_pointcache.py new file mode 100644 index 0000000000..904c1656da --- /dev/null +++ b/openpype/hosts/max/plugins/publish/extract_pointcache.py @@ -0,0 +1,100 @@ +# -*- coding: utf-8 -*- 
+""" +Export alembic file. + +Note: + Parameters on AlembicExport (AlembicExport.Parameter): + + ParticleAsMesh (bool): Sets whether particle shapes are exported + as meshes. + AnimTimeRange (enum): How animation is saved: + #CurrentFrame: saves current frame + #TimeSlider: saves the active time segments on time slider (default) + #StartEnd: saves a range specified by the Step + StartFrame (int) + EnFrame (int) + ShapeSuffix (bool): When set to true, appends the string "Shape" to the + name of each exported mesh. This property is set to false by default. + SamplesPerFrame (int): Sets the number of animation samples per frame. + Hidden (bool): When true, export hidden geometry. + UVs (bool): When true, export the mesh UV map channel. + Normals (bool): When true, export the mesh normals. + VertexColors (bool): When true, export the mesh vertex color map 0 and the + current vertex color display data when it differs + ExtraChannels (bool): When true, export the mesh extra map channels + (map channels greater than channel 1) + Velocity (bool): When true, export the meh vertex and particle velocity + data. + MaterialIDs (bool): When true, export the mesh material ID as + Alembic face sets. + Visibility (bool): When true, export the node visibility data. + LayerName (bool): When true, export the node layer name as an Alembic + object property. + MaterialName (bool): When true, export the geometry node material name as + an Alembic object property + ObjectID (bool): When true, export the geometry node g-buffer object ID as + an Alembic object property. + CustomAttributes (bool): When true, export the node and its modifiers + custom attributes into an Alembic object compound property. 
+""" +import os +import pyblish.api +from openpype.pipeline import publish +from pymxs import runtime as rt +from openpype.hosts.max.api import ( + maintained_selection, + get_all_children +) + + +class ExtractAlembic(publish.Extractor): + order = pyblish.api.ExtractorOrder + label = "Extract Pointcache" + hosts = ["max"] + families = ["pointcache", "camera"] + + def process(self, instance): + start = float(instance.data.get("frameStartHandle", 1)) + end = float(instance.data.get("frameEndHandle", 1)) + + container = instance.data["instance_node"] + + self.log.info("Extracting pointcache ...") + + parent_dir = self.staging_dir(instance) + file_name = "{name}.abc".format(**instance.data) + path = os.path.join(parent_dir, file_name) + + # We run the render + self.log.info("Writing alembic '%s' to '%s'" % (file_name, + parent_dir)) + + abc_export_cmd = ( + f""" +AlembicExport.ArchiveType = #ogawa +AlembicExport.CoordinateSystem = #maya +AlembicExport.StartFrame = {start} +AlembicExport.EndFrame = {end} + +exportFile @"{path}" #noPrompt selectedOnly:on using:AlembicExport + + """) + + self.log.debug(f"Executing command: {abc_export_cmd}") + + with maintained_selection(): + # select and export + + rt.select(get_all_children(rt.getNodeByName(container))) + rt.execute(abc_export_cmd) + + if "representations" not in instance.data: + instance.data["representations"] = [] + + representation = { + 'name': 'abc', + 'ext': 'abc', + 'files': file_name, + "stagingDir": parent_dir, + } + instance.data["representations"].append(representation) diff --git a/openpype/hosts/max/plugins/publish/validate_scene_saved.py b/openpype/hosts/max/plugins/publish/validate_scene_saved.py new file mode 100644 index 0000000000..8506b17315 --- /dev/null +++ b/openpype/hosts/max/plugins/publish/validate_scene_saved.py @@ -0,0 +1,18 @@ +# -*- coding: utf-8 -*- +import pyblish.api +from openpype.pipeline import PublishValidationError +from pymxs import runtime as rt + + +class 
ValidateSceneSaved(pyblish.api.InstancePlugin): + """Validate that workfile was saved.""" + + order = pyblish.api.ValidatorOrder + families = ["workfile"] + hosts = ["max"] + label = "Validate Workfile is saved" + + def process(self, instance): + if not rt.maxFilePath or not rt.maxFileName: + raise PublishValidationError( + "Workfile is not saved", title=self.label) diff --git a/openpype/hosts/max/startup/startup.ms b/openpype/hosts/max/startup/startup.ms new file mode 100644 index 0000000000..aee40eb6bc --- /dev/null +++ b/openpype/hosts/max/startup/startup.ms @@ -0,0 +1,9 @@ +-- OpenPype Init Script +( + local sysPath = dotNetClass "System.IO.Path" + local sysDir = dotNetClass "System.IO.Directory" + local localScript = getThisScriptFilename() + local startup = sysPath.Combine (sysPath.GetDirectoryName localScript) "startup.py" + + python.ExecuteFile startup +) \ No newline at end of file diff --git a/openpype/hosts/max/startup/startup.py b/openpype/hosts/max/startup/startup.py new file mode 100644 index 0000000000..37bcef5db1 --- /dev/null +++ b/openpype/hosts/max/startup/startup.py @@ -0,0 +1,6 @@ +# -*- coding: utf-8 -*- +from openpype.hosts.max.api import MaxHost +from openpype.pipeline import install_host + +host = MaxHost() +install_host(host) diff --git a/openpype/hosts/maya/api/gltf.py b/openpype/hosts/maya/api/gltf.py new file mode 100644 index 0000000000..2a983f1573 --- /dev/null +++ b/openpype/hosts/maya/api/gltf.py @@ -0,0 +1,88 @@ +# -*- coding: utf-8 -*- +"""Tools to work with GLTF.""" +import logging + +from maya import cmds, mel # noqa + +log = logging.getLogger(__name__) + +_gltf_options = { + "of": str, # outputFolder + "cpr": str, # copyright + "sno": bool, # selectedNodeOnly + "sn": str, # sceneName + "glb": bool, # binary + "nbu": bool, # niceBufferURIs + "hbu": bool, # hashBufferURI + "ext": bool, # externalTextures + "ivt": int, # initialValuesTime + "acn": str, # animationClipName + "ast": int, # animationClipStartTime + "aet": int, # 
animationClipEndTime + "afr": float, # animationClipFrameRate + "dsa": int, # detectStepAnimations + "mpa": str, # meshPrimitiveAttributes + "bpa": str, # blendPrimitiveAttributes + "i32": bool, # force32bitIndices + "ssm": bool, # skipStandardMaterials + "eut": bool, # excludeUnusedTexcoord + "dm": bool, # defaultMaterial + "cm": bool, # colorizeMaterials + "dmy": str, # dumpMaya + "dgl": str, # dumpGLTF + "imd": str, # ignoreMeshDeformers + "ssc": bool, # skipSkinClusters + "sbs": bool, # skipBlendShapes + "rvp": bool, # redrawViewport + "vno": bool # visibleNodesOnly +} + + +def extract_gltf(parent_dir, + filename, + **kwargs): + + """Sets GLTF export options from data in the instance. + + """ + + cmds.loadPlugin('maya2glTF', quiet=True) + # load the UI to run mel command + mel.eval("maya2glTF_UI()") + + parent_dir = parent_dir.replace('\\', '/') + options = { + "dsa": 1, + "glb": True + } + options.update(kwargs) + + for key, value in options.copy().items(): + if key not in _gltf_options: + log.warning("extract_gltf() does not support option '%s'. 
" + "Flag will be ignored..", key) + options.pop(key) + options.pop(value) + continue + + job_args = list() + default_opt = "maya2glTF -of \"{0}\" -sn \"{1}\"".format(parent_dir, filename) # noqa + job_args.append(default_opt) + + for key, value in options.items(): + if isinstance(value, str): + job_args.append("-{0} \"{1}\"".format(key, value)) + elif isinstance(value, bool): + if value: + job_args.append("-{0}".format(key)) + else: + job_args.append("-{0} {1}".format(key, value)) + + job_str = " ".join(job_args) + log.info("{}".format(job_str)) + mel.eval(job_str) + + # close the gltf export after finish the export + gltf_UI = "maya2glTF_exporter_window" + if cmds.window(gltf_UI, q=True, exists=True): + cmds.deleteUI(gltf_UI) diff --git a/openpype/hosts/maya/api/lib.py b/openpype/hosts/maya/api/lib.py index 2530021eba..ca826946f8 100644 --- a/openpype/hosts/maya/api/lib.py +++ b/openpype/hosts/maya/api/lib.py @@ -127,14 +127,19 @@ def get_main_window(): @contextlib.contextmanager -def suspended_refresh(): - """Suspend viewport refreshes""" +def suspended_refresh(suspend=True): + """Suspend viewport refreshes + cmds.ogs(pause=True) is a toggle so we cant pass False. + """ + original_state = cmds.ogs(query=True, pause=True) try: - cmds.refresh(suspend=True) + if suspend and not original_state: + cmds.ogs(pause=True) yield finally: - cmds.refresh(suspend=False) + if suspend and not original_state: + cmds.ogs(pause=True) @contextlib.contextmanager @@ -3436,3 +3441,8 @@ def iter_visible_nodes_in_range(nodes, start, end): # If no more nodes to process break the frame iterations.. 
if not node_dependencies: break + + +def get_attribute_input(attr): + connections = cmds.listConnections(attr, plugs=True, destination=False) + return connections[0] if connections else None diff --git a/openpype/hosts/maya/api/lib_rendersettings.py b/openpype/hosts/maya/api/lib_rendersettings.py index 2b996702c3..5161141ef9 100644 --- a/openpype/hosts/maya/api/lib_rendersettings.py +++ b/openpype/hosts/maya/api/lib_rendersettings.py @@ -95,21 +95,25 @@ class RenderSettings(object): if renderer == "redshift": self._set_redshift_settings(width, height) + mel.eval("redshiftUpdateActiveAovList") def _set_arnold_settings(self, width, height): """Sets settings for Arnold.""" from mtoa.core import createOptions # noqa from mtoa.aovs import AOVInterface # noqa createOptions() - arnold_render_presets = self._project_settings["maya"]["RenderSettings"]["arnold_renderer"] # noqa + render_settings = self._project_settings["maya"]["RenderSettings"] + arnold_render_presets = render_settings["arnold_renderer"] # noqa # Force resetting settings and AOV list to avoid having to deal with # AOV checking logic, for now. # This is a work around because the standard # function to revert render settings does not reset AOVs list in MtoA # Fetch current aovs in case there's any. 
         current_aovs = AOVInterface().getAOVs()
+        remove_aovs = render_settings["remove_aovs"]
+        if remove_aovs:
             # Remove fetched AOVs
-        AOVInterface().removeAOVs(current_aovs)
+            AOVInterface().removeAOVs(current_aovs)
         mel.eval("unifiedRenderGlobalsRevertToDefault")
         img_ext = arnold_render_presets["image_format"]
         img_prefix = arnold_render_presets["image_prefix"]
@@ -118,6 +122,8 @@ class RenderSettings(object):
         multi_exr = arnold_render_presets["multilayer_exr"]
         additional_options = arnold_render_presets["additional_options"]
         for aov in aovs:
+            if aov in current_aovs and not remove_aovs:
+                continue
             AOVInterface('defaultArnoldRenderOptions').addAOV(aov)

         cmds.setAttr("defaultResolution.width", width)
@@ -141,12 +147,50 @@ class RenderSettings(object):
     def _set_redshift_settings(self, width, height):
         """Sets settings for Redshift."""
-        redshift_render_presets = (
-            self._project_settings
-            ["maya"]
-            ["RenderSettings"]
-            ["redshift_renderer"]
-        )
+        render_settings = self._project_settings["maya"]["RenderSettings"]
+        redshift_render_presets = render_settings["redshift_renderer"]
+
+        remove_aovs = render_settings["remove_aovs"]
+        all_rs_aovs = cmds.ls(type='RedshiftAOV')
+        if remove_aovs:
+            for aov in all_rs_aovs:
+                enabled = cmds.getAttr("{}.enabled".format(aov))
+                if enabled:
+                    cmds.delete(aov)
+
+        redshift_aovs = redshift_render_presets["aov_list"]
+        # list all the aovs
+        all_rs_aovs = cmds.ls(type='RedshiftAOV')
+        for rs_aov in redshift_aovs:
+            rs_layername = rs_aov
+            if " " in rs_aov:
+                rs_renderlayer = rs_aov.replace(" ", "")
+                rs_layername = "rsAov_{}".format(rs_renderlayer)
+            else:
+                rs_layername = "rsAov_{}".format(rs_aov)
+            if rs_layername in all_rs_aovs:
+                continue
+            cmds.rsCreateAov(type=rs_aov)
+        # update the AOV list
+        mel.eval("redshiftUpdateActiveAovList")
+
+        rs_p_engine = redshift_render_presets["primary_gi_engine"]
+        rs_s_engine = redshift_render_presets["secondary_gi_engine"]
+
+        if int(rs_p_engine) != 0 or int(rs_s_engine) != 0:
+            cmds.setAttr("redshiftOptions.GIEnabled", 1)
+            if int(rs_p_engine) == 0:
+                # reset the primary GI Engine as default
+                cmds.setAttr("redshiftOptions.primaryGIEngine", 4)
+            if int(rs_s_engine) == 0:
+                # reset the secondary GI Engine as default
+                cmds.setAttr("redshiftOptions.secondaryGIEngine", 2)
+        else:
+            cmds.setAttr("redshiftOptions.GIEnabled", 0)
+
+        # Only override the engines that are explicitly set, so the
+        # defaults applied above are not immediately overwritten with 0.
+        if int(rs_p_engine) != 0:
+            cmds.setAttr("redshiftOptions.primaryGIEngine", int(rs_p_engine))
+        if int(rs_s_engine) != 0:
+            cmds.setAttr("redshiftOptions.secondaryGIEngine", int(rs_s_engine))
+
         additional_options = redshift_render_presets["additional_options"]
         ext = redshift_render_presets["image_format"]
         img_exts = ["iff", "exr", "tif", "png", "tga", "jpg"]
@@ -163,12 +207,31 @@ class RenderSettings(object):
         """Sets important settings for Vray."""
         settings = cmds.ls(type="VRaySettingsNode")
         node = settings[0] if settings else cmds.createNode("VRaySettingsNode")
-        vray_render_presets = (
-            self._project_settings
-            ["maya"]
-            ["RenderSettings"]
-            ["vray_renderer"]
-        )
+        render_settings = self._project_settings["maya"]["RenderSettings"]
+        vray_render_presets = render_settings["vray_renderer"]
+        # vrayRenderElement
+        remove_aovs = render_settings["remove_aovs"]
+        all_vray_aovs = cmds.ls(type='VRayRenderElement')
+        lightSelect_aovs = cmds.ls(type='VRayRenderElementSet')
+        if remove_aovs:
+            for aov in all_vray_aovs:
+                # remove all aovs except LightSelect
+                enabled = cmds.getAttr("{}.enabled".format(aov))
+                if enabled:
+                    cmds.delete(aov)
+            # remove LightSelect
+            for light_aovs in lightSelect_aovs:
+                light_enabled = cmds.getAttr("{}.enabled".format(light_aovs))
+                if light_enabled:
+                    cmds.delete(light_aovs)
+
+        vray_aovs = vray_render_presets["aov_list"]
+        for renderlayer in vray_aovs:
+            renderElement = "vrayAddRenderElement {}".format(renderlayer)
+            RE_name = mel.eval(renderElement)
+            # if there is more than one same render element
+            if RE_name.endswith("1"):
+                cmds.delete(RE_name)

         # Set aov separator
         # First we need to explicitly set the UI items in Render Settings
         # because that is also what V-Ray updates to when that Render Settings
diff --git a/openpype/hosts/maya/api/plugin.py b/openpype/hosts/maya/api/plugin.py
index 39d821f620..82df85a8be 100644
--- a/openpype/hosts/maya/api/plugin.py
+++ b/openpype/hosts/maya/api/plugin.py
@@ -217,7 +217,7 @@ class ReferenceLoader(Loader):

         # Need to save alembic settings and reapply, cause referencing resets
         # them to incoming data.
-        alembic_attrs = ["speed", "offset", "cycleType"]
+        alembic_attrs = ["speed", "offset", "cycleType", "time"]
         alembic_data = {}
         if representation["name"] == "abc":
             alembic_nodes = cmds.ls(
@@ -226,7 +226,12 @@ class ReferenceLoader(Loader):
             if alembic_nodes:
                 for attr in alembic_attrs:
                     node_attr = "{}.{}".format(alembic_nodes[0], attr)
-                    alembic_data[attr] = cmds.getAttr(node_attr)
+                    data = {
+                        "input": lib.get_attribute_input(node_attr),
+                        "value": cmds.getAttr(node_attr)
+                    }
+
+                    alembic_data[attr] = data
             else:
                 self.log.debug("No alembic nodes found in {}".format(members))

@@ -263,8 +268,19 @@ class ReferenceLoader(Loader):
                 "{}:*".format(namespace), type="AlembicNode"
             )
             if alembic_nodes:
-                for attr, value in alembic_data.items():
-                    cmds.setAttr("{}.{}".format(alembic_nodes[0], attr), value)
+                alembic_node = alembic_nodes[0]  # assume single AlembicNode
+                for attr, data in alembic_data.items():
+                    node_attr = "{}.{}".format(alembic_node, attr)
+                    input = lib.get_attribute_input(node_attr)
+                    if data["input"]:
+                        if data["input"] != input:
+                            cmds.connectAttr(
+                                data["input"], node_attr, force=True
+                            )
+                    else:
+                        if input:
+                            cmds.disconnectAttr(input, node_attr)
+                        cmds.setAttr(node_attr, data["value"])

         # Fix PLN-40 for older containers created with Avalon that had the
         # `.verticesOnlySet` set to True.
diff --git a/openpype/hosts/maya/plugins/create/create_ass.py b/openpype/hosts/maya/plugins/create/create_ass.py
index 39f226900a..935a068ca5 100644
--- a/openpype/hosts/maya/plugins/create/create_ass.py
+++ b/openpype/hosts/maya/plugins/create/create_ass.py
@@ -1,5 +1,3 @@
-from collections import OrderedDict
-
 from openpype.hosts.maya.api import (
     lib,
     plugin
@@ -9,12 +7,26 @@ from maya import cmds


 class CreateAss(plugin.Creator):
-    """Arnold Archive"""
+    """Arnold Scene Source"""

     name = "ass"
-    label = "Ass StandIn"
+    label = "Arnold Scene Source"
     family = "ass"
     icon = "cube"
+    expandProcedurals = False
+    motionBlur = True
+    motionBlurKeys = 2
+    motionBlurLength = 0.5
+    maskOptions = False
+    maskCamera = False
+    maskLight = False
+    maskShape = False
+    maskShader = False
+    maskOverride = False
+    maskDriver = False
+    maskFilter = False
+    maskColor_manager = False
+    maskOperator = False

     def __init__(self, *args, **kwargs):
         super(CreateAss, self).__init__(*args, **kwargs)
@@ -22,17 +34,27 @@ class CreateAss(plugin.Creator):
         # Add animation data
         self.data.update(lib.collect_animation_data())

-        # Vertex colors with the geometry
-        self.data["exportSequence"] = False
+        self.data["expandProcedurals"] = self.expandProcedurals
+        self.data["motionBlur"] = self.motionBlur
+        self.data["motionBlurKeys"] = self.motionBlurKeys
+        self.data["motionBlurLength"] = self.motionBlurLength
+
+        # Masks
+        self.data["maskOptions"] = self.maskOptions
+        self.data["maskCamera"] = self.maskCamera
+        self.data["maskLight"] = self.maskLight
+        self.data["maskShape"] = self.maskShape
+        self.data["maskShader"] = self.maskShader
+        self.data["maskOverride"] = self.maskOverride
+        self.data["maskDriver"] = self.maskDriver
+        self.data["maskFilter"] = self.maskFilter
+        self.data["maskColor_manager"] = self.maskColor_manager
+        self.data["maskOperator"] = self.maskOperator

     def process(self):
         instance = super(CreateAss, self).process()

-        # data = OrderedDict(**self.data)
-
-
-
-        nodes = list()
+        nodes = []

         if (self.options or {}).get("useSelection"):
             nodes = cmds.ls(selection=True)
@@ -42,7 +64,3 @@ class CreateAss(plugin.Creator):
         assContent = cmds.sets(name="content_SET")
         assProxy = cmds.sets(name="proxy_SET", empty=True)
         cmds.sets([assContent, assProxy], forceElement=instance)
-
-        # self.log.info(data)
-        #
-        # self.data = data
diff --git a/openpype/hosts/maya/plugins/create/create_layout.py b/openpype/hosts/maya/plugins/create/create_layout.py
index 6dc87430aa..1768a3d49e 100644
--- a/openpype/hosts/maya/plugins/create/create_layout.py
+++ b/openpype/hosts/maya/plugins/create/create_layout.py
@@ -8,3 +8,9 @@ class CreateLayout(plugin.Creator):
     label = "Layout"
     family = "layout"
     icon = "cubes"
+
+    def __init__(self, *args, **kwargs):
+        super(CreateLayout, self).__init__(*args, **kwargs)
+        # enable this when you want to
+        # publish the group of loaded assets
+        self.data["groupLoadedAssets"] = False
diff --git a/openpype/hosts/maya/plugins/create/create_pointcache.py b/openpype/hosts/maya/plugins/create/create_pointcache.py
index ab8fe12079..cdec140ea8 100644
--- a/openpype/hosts/maya/plugins/create/create_pointcache.py
+++ b/openpype/hosts/maya/plugins/create/create_pointcache.py
@@ -28,6 +28,7 @@ class CreatePointCache(plugin.Creator):
         self.data["visibleOnly"] = False  # only nodes that are visible
         self.data["includeParentHierarchy"] = False  # Include parent groups
         self.data["worldSpace"] = True  # Default to exporting world-space
+        self.data["refresh"] = False  # Default to suspend refresh.
         # Add options for custom attributes
         self.data["attr"] = ""
diff --git a/openpype/hosts/maya/plugins/create/create_proxy_abc.py b/openpype/hosts/maya/plugins/create/create_proxy_abc.py
new file mode 100644
index 0000000000..2946f7b530
--- /dev/null
+++ b/openpype/hosts/maya/plugins/create/create_proxy_abc.py
@@ -0,0 +1,35 @@
+from openpype.hosts.maya.api import (
+    lib,
+    plugin
+)
+
+
+class CreateProxyAlembic(plugin.Creator):
+    """Proxy Alembic for animated data"""
+
+    name = "proxyAbcMain"
+    label = "Proxy Alembic"
+    family = "proxyAbc"
+    icon = "gears"
+    write_color_sets = False
+    write_face_sets = False
+
+    def __init__(self, *args, **kwargs):
+        super(CreateProxyAlembic, self).__init__(*args, **kwargs)
+
+        # Add animation data
+        self.data.update(lib.collect_animation_data())
+
+        # Vertex colors with the geometry.
+        self.data["writeColorSets"] = self.write_color_sets
+        # Vertex colors with the geometry.
+        self.data["writeFaceSets"] = self.write_face_sets
+        # Default to exporting world-space
+        self.data["worldSpace"] = True
+
+        # name suffix for the bounding box
+        self.data["nameSuffix"] = "_BBox"
+
+        # Add options for custom attributes
+        self.data["attr"] = ""
+        self.data["attrPrefix"] = ""
diff --git a/openpype/hosts/maya/plugins/create/create_unreal_skeletalmesh.py b/openpype/hosts/maya/plugins/create/create_unreal_skeletalmesh.py
index 1a8e84c80d..6e72bf5324 100644
--- a/openpype/hosts/maya/plugins/create/create_unreal_skeletalmesh.py
+++ b/openpype/hosts/maya/plugins/create/create_unreal_skeletalmesh.py
@@ -48,3 +48,21 @@ class CreateUnrealSkeletalMesh(plugin.Creator):
                 cmds.sets(node, forceElement=joints_set)
             else:
                 cmds.sets(node, forceElement=geometry_set)
+
+        # Add animation data
+        self.data.update(lib.collect_animation_data())
+
+        # Only renderable visible shapes
+        self.data["renderableOnly"] = False
+        # only nodes that are visible
+        self.data["visibleOnly"] = False
+        # Include parent groups
+        self.data["includeParentHierarchy"] = False
+        # Default to exporting world-space
+        self.data["worldSpace"] = True
+        # Default to suspend refresh.
+        self.data["refresh"] = False
+
+        # Add options for custom attributes
+        self.data["attr"] = ""
+        self.data["attrPrefix"] = ""
diff --git a/openpype/hosts/maya/plugins/load/actions.py b/openpype/hosts/maya/plugins/load/actions.py
index eca1b27f34..9cc9180d6e 100644
--- a/openpype/hosts/maya/plugins/load/actions.py
+++ b/openpype/hosts/maya/plugins/load/actions.py
@@ -14,6 +14,7 @@ class SetFrameRangeLoader(load.LoaderPlugin):

     families = ["animation",
                 "camera",
+                "proxyAbc",
                 "pointcache"]
     representations = ["abc"]

@@ -48,6 +49,7 @@ class SetFrameRangeWithHandlesLoader(load.LoaderPlugin):

     families = ["animation",
                 "camera",
+                "proxyAbc",
                 "pointcache"]
     representations = ["abc"]

diff --git a/openpype/hosts/maya/plugins/load/load_abc_to_standin.py b/openpype/hosts/maya/plugins/load/load_abc_to_standin.py
index 605a492e4d..70866a3ba6 100644
--- a/openpype/hosts/maya/plugins/load/load_abc_to_standin.py
+++ b/openpype/hosts/maya/plugins/load/load_abc_to_standin.py
@@ -11,7 +11,7 @@ from openpype.settings import get_project_settings
 class AlembicStandinLoader(load.LoaderPlugin):
     """Load Alembic as Arnold Standin"""

-    families = ["animation", "model", "pointcache"]
+    families = ["animation", "model", "proxyAbc", "pointcache"]
     representations = ["abc"]

     label = "Import Alembic as Arnold Standin"
diff --git a/openpype/hosts/maya/plugins/load/load_gpucache.py b/openpype/hosts/maya/plugins/load/load_gpucache.py
index a09f924c7b..07e5734f43 100644
--- a/openpype/hosts/maya/plugins/load/load_gpucache.py
+++ b/openpype/hosts/maya/plugins/load/load_gpucache.py
@@ -10,7 +10,7 @@ from openpype.settings import get_project_settings
 class GpuCacheLoader(load.LoaderPlugin):
     """Load Alembic as gpuCache"""

-    families = ["model", "animation", "pointcache"]
+    families = ["model", "animation", "proxyAbc", "pointcache"]
     representations = ["abc"]

     label = "Import Gpu Cache"
diff --git a/openpype/hosts/maya/plugins/load/load_reference.py b/openpype/hosts/maya/plugins/load/load_reference.py
index c762a29326..c6b07b036d 100644
--- a/openpype/hosts/maya/plugins/load/load_reference.py
+++ b/openpype/hosts/maya/plugins/load/load_reference.py
@@ -16,6 +16,7 @@ class ReferenceLoader(openpype.hosts.maya.api.plugin.ReferenceLoader):

     families = ["model",
                 "pointcache",
+                "proxyAbc",
                 "animation",
                 "mayaAscii",
                 "mayaScene",
diff --git a/openpype/hosts/maya/plugins/publish/collect_ass.py b/openpype/hosts/maya/plugins/publish/collect_ass.py
index 7c9a1b76fb..b5e05d6665 100644
--- a/openpype/hosts/maya/plugins/publish/collect_ass.py
+++ b/openpype/hosts/maya/plugins/publish/collect_ass.py
@@ -1,4 +1,5 @@
 from maya import cmds
+from openpype.pipeline.publish import KnownPublishError

 import pyblish.api

@@ -6,6 +7,7 @@ import pyblish.api
 class CollectAssData(pyblish.api.InstancePlugin):
     """Collect Ass data."""

+    # Offset to be after renderable camera collection.
     order = pyblish.api.CollectorOrder + 0.2
     label = 'Collect Ass'
     families = ["ass"]
@@ -23,8 +25,23 @@ class CollectAssData(pyblish.api.InstancePlugin):
                 instance.data['setMembers'] = members
                 self.log.debug('content members: {}'.format(members))
             elif objset.startswith("proxy_SET"):
-                assert len(members) == 1, "You have multiple proxy meshes, please only use one"
+                if len(members) != 1:
+                    msg = "You have multiple proxy meshes, please only use one"
+                    raise KnownPublishError(msg)
                 instance.data['proxy'] = members
                 self.log.debug('proxy members: {}'.format(members))

+        # Use camera in object set if present else default to render globals
+        # camera.
+        cameras = cmds.ls(type="camera", long=True)
+        renderable = [c for c in cameras if cmds.getAttr("%s.renderable" % c)]
+        camera = renderable[0] if renderable else None
+        for node in instance.data["setMembers"]:
+            camera_shapes = cmds.listRelatives(
+                node, shapes=True, type="camera"
+            )
+            if camera_shapes:
+                camera = node
+        instance.data["camera"] = camera
+
         self.log.debug("data: {}".format(instance.data))
diff --git a/openpype/hosts/maya/plugins/publish/collect_gltf.py b/openpype/hosts/maya/plugins/publish/collect_gltf.py
new file mode 100644
index 0000000000..bb37fe3a7e
--- /dev/null
+++ b/openpype/hosts/maya/plugins/publish/collect_gltf.py
@@ -0,0 +1,17 @@
+# -*- coding: utf-8 -*-
+import pyblish.api
+
+
+class CollectGLTF(pyblish.api.InstancePlugin):
+    """Collect Assets for GLTF/GLB export."""
+
+    order = pyblish.api.CollectorOrder + 0.2
+    label = "Collect Asset for GLTF/GLB export"
+    families = ["model", "animation", "pointcache"]
+
+    def process(self, instance):
+        if not instance.data.get("families"):
+            instance.data["families"] = []
+
+        if "gltf" not in instance.data["families"]:
+            instance.data["families"].append("gltf")
diff --git a/openpype/hosts/maya/plugins/publish/collect_look.py b/openpype/hosts/maya/plugins/publish/collect_look.py
index 157be5717b..e1adffaaaf 100644
--- a/openpype/hosts/maya/plugins/publish/collect_look.py
+++ b/openpype/hosts/maya/plugins/publish/collect_look.py
@@ -403,13 +403,13 @@ class CollectLook(pyblish.api.InstancePlugin):
         # history = cmds.listHistory(look_sets)
         history = []
         for material in materials:
-            history.extend(cmds.listHistory(material))
+            history.extend(cmds.listHistory(material, ac=True))

         # handle VrayPluginNodeMtl node - see #1397
         vray_plugin_nodes = cmds.ls(
             history, type="VRayPluginNodeMtl", long=True)
         for vray_node in vray_plugin_nodes:
-            history.extend(cmds.listHistory(vray_node))
+            history.extend(cmds.listHistory(vray_node, ac=True))

         # handling render attribute sets
         render_set_types = [
diff --git a/openpype/hosts/maya/plugins/publish/extract_ass.py b/openpype/hosts/maya/plugins/publish/extract_ass.py
index 5c21a4ff08..049f256a7a 100644
--- a/openpype/hosts/maya/plugins/publish/extract_ass.py
+++ b/openpype/hosts/maya/plugins/publish/extract_ass.py
@@ -1,77 +1,93 @@
 import os

 from maya import cmds
+import arnold

 from openpype.pipeline import publish
-from openpype.hosts.maya.api.lib import maintained_selection
+from openpype.hosts.maya.api.lib import maintained_selection, attribute_values


 class ExtractAssStandin(publish.Extractor):
-    """Extract the content of the instance to a ass file
+    """Extract the content of the instance to an .ass file"""

-    Things to pay attention to:
-        - If animation is toggled, are the frames correct
-        -
-    """
-
-    label = "Ass Standin (.ass)"
+    label = "Arnold Scene Source (.ass)"
     hosts = ["maya"]
     families = ["ass"]
     asciiAss = False

     def process(self, instance):
-
-        sequence = instance.data.get("exportSequence", False)
-
         staging_dir = self.staging_dir(instance)
         filename = "{}.ass".format(instance.name)
-        filenames = list()
+        filenames = []
         file_path = os.path.join(staging_dir, filename)

+        # Mask
+        mask = arnold.AI_NODE_ALL
+
+        node_types = {
+            "options": arnold.AI_NODE_OPTIONS,
+            "camera": arnold.AI_NODE_CAMERA,
+            "light": arnold.AI_NODE_LIGHT,
+            "shape": arnold.AI_NODE_SHAPE,
+            "shader": arnold.AI_NODE_SHADER,
+            "override": arnold.AI_NODE_OVERRIDE,
+            "driver": arnold.AI_NODE_DRIVER,
+            "filter": arnold.AI_NODE_FILTER,
+            "color_manager": arnold.AI_NODE_COLOR_MANAGER,
+            "operator": arnold.AI_NODE_OPERATOR
+        }
+
+        for key in node_types.keys():
+            # Capitalize only the first letter (not str.title) so that
+            # "color_manager" matches the "maskColor_manager" data key.
+            if instance.data.get("mask" + key[0].upper() + key[1:]):
+                mask = mask ^ node_types[key]
+
+        # Motion blur
+        values = {
+            "defaultArnoldRenderOptions.motion_blur_enable": instance.data.get(
+                "motionBlur", True
+            ),
+            "defaultArnoldRenderOptions.motion_steps": instance.data.get(
+                "motionBlurKeys", 2
+            ),
+            "defaultArnoldRenderOptions.motion_frames": instance.data.get(
+                "motionBlurLength", 0.5
+            )
+        }
+        # Write out .ass file
+        kwargs = {
+            "filename": file_path,
+            "startFrame": instance.data.get("frameStartHandle", 1),
+            "endFrame": instance.data.get("frameEndHandle", 1),
+            "frameStep": instance.data.get("step", 1),
+            "selected": True,
+            "asciiAss": self.asciiAss,
+            "shadowLinks": True,
+            "lightLinks": True,
+            "boundingBox": True,
+            "expandProcedurals": instance.data.get("expandProcedurals", False),
+            "camera": instance.data["camera"],
+            "mask": mask
+        }
+
         self.log.info("Writing: '%s'" % file_path)
-        with maintained_selection():
-            self.log.info("Writing: {}".format(instance.data["setMembers"]))
-            cmds.select(instance.data["setMembers"], noExpand=True)
+        with attribute_values(values):
+            with maintained_selection():
+                self.log.info(
+                    "Writing: {}".format(instance.data["setMembers"])
+                )
+                cmds.select(instance.data["setMembers"], noExpand=True)

-            if sequence:
-                self.log.info("Extracting ass sequence")
+                self.log.info(
+                    "Extracting ass sequence with: {}".format(kwargs)
+                )

-                # Collect the start and end including handles
-                start = instance.data.get("frameStartHandle", 1)
-                end = instance.data.get("frameEndHandle", 1)
-                step = instance.data.get("step", 0)
+                exported_files = cmds.arnoldExportAss(**kwargs)

-                exported_files = cmds.arnoldExportAss(filename=file_path,
-                                                      selected=True,
-                                                      asciiAss=self.asciiAss,
-                                                      shadowLinks=True,
-                                                      lightLinks=True,
-                                                      boundingBox=True,
-                                                      startFrame=start,
-                                                      endFrame=end,
-                                                      frameStep=step
-                                                      )
                 for file in exported_files:
                     filenames.append(os.path.split(file)[1])
+
                 self.log.info("Exported: {}".format(filenames))
-            else:
-                self.log.info("Extracting ass")
-                cmds.arnoldExportAss(filename=file_path,
-                                     selected=True,
-                                     asciiAss=False,
-                                     shadowLinks=True,
-                                     lightLinks=True,
-                                     boundingBox=True
-                                     )
-                self.log.info("Extracted {}".format(filename))
-                filenames = filename

-        optionals = [
-            "frameStart", "frameEnd", "step", "handles",
-            "handleEnd", "handleStart"
-        ]
-        for key in optionals:
-            instance.data.pop(key, None)
-
         if "representations" not in instance.data:
             instance.data["representations"] = []

@@ -79,13 +95,11 @@ class ExtractAssStandin(publish.Extractor):
         representation = {
             'name': 'ass',
             'ext': 'ass',
-            'files': filenames,
-            "stagingDir": staging_dir
+            'files': filenames if len(filenames) > 1 else filenames[0],
+            "stagingDir": staging_dir,
+            'frameStart': kwargs["startFrame"]
         }

-        if sequence:
-            representation['frameStart'] = start
-
         instance.data["representations"].append(representation)

         self.log.info("Extracted instance '%s' to: %s"
diff --git a/openpype/hosts/maya/plugins/publish/extract_gltf.py b/openpype/hosts/maya/plugins/publish/extract_gltf.py
new file mode 100644
index 0000000000..f5ceed5f33
--- /dev/null
+++ b/openpype/hosts/maya/plugins/publish/extract_gltf.py
@@ -0,0 +1,65 @@
+import os
+
+from maya import cmds, mel
+import pyblish.api
+
+from openpype.pipeline import publish
+from openpype.hosts.maya.api import lib
+from openpype.hosts.maya.api.gltf import extract_gltf
+
+
+class ExtractGLB(publish.Extractor):
+
+    order = pyblish.api.ExtractorOrder
+    hosts = ["maya"]
+    label = "Extract GLB"
+    families = ["gltf"]
+
+    def process(self, instance):
+        staging_dir = self.staging_dir(instance)
+        filename = "{0}.glb".format(instance.name)
+        path = os.path.join(staging_dir, filename)
+
+        self.log.info("Extracting GLB to: {}".format(path))
+
+        nodes = instance[:]
+
+        self.log.info("Instance: {0}".format(nodes))
+
+        start_frame = instance.data('frameStart') or \
+            int(cmds.playbackOptions(query=True,
+                                     animationStartTime=True))  # noqa
+        end_frame = instance.data('frameEnd') or \
+            int(cmds.playbackOptions(query=True,
+                                     animationEndTime=True))  # noqa
+        fps = mel.eval('currentTimeUnitToFPS()')
+
+        options = {
+            "sno": True,  # selectedNodeOnly
+            "nbu": True,  # .bin instead of .bin0
+            "ast": start_frame,
+            "aet": end_frame,
+            "afr": fps,
+            "dsa": 1,
+            "acn": instance.name,
+            "glb": True,
+            "vno": True  # visibleNodeOnly
+        }
+        with lib.maintained_selection():
+            cmds.select(nodes, hi=True, noExpand=True)
+            extract_gltf(staging_dir,
+                         instance.name,
+                         **options)
+
+        if "representations" not in instance.data:
+            instance.data["representations"] = []
+
+        representation = {
+            'name': 'glb',
+            'ext': 'glb',
+            'files': filename,
+            "stagingDir": staging_dir,
+        }
+        instance.data["representations"].append(representation)
+
+        self.log.info("Extract GLB successful to: {0}".format(path))
diff --git a/openpype/hosts/maya/plugins/publish/extract_layout.py b/openpype/hosts/maya/plugins/publish/extract_layout.py
index a801d99f42..7921fca069 100644
--- a/openpype/hosts/maya/plugins/publish/extract_layout.py
+++ b/openpype/hosts/maya/plugins/publish/extract_layout.py
@@ -15,6 +15,7 @@ class ExtractLayout(publish.Extractor):
     label = "Extract Layout"
     hosts = ["maya"]
     families = ["layout"]
+    project_container = "AVALON_CONTAINERS"
     optional = True

     def process(self, instance):
@@ -33,12 +34,25 @@ class ExtractLayout(publish.Extractor):

         for asset in cmds.sets(str(instance), query=True):
             # Find the container
-            grp_name = asset.split(':')[0]
+            project_container = self.project_container
+            container_list = cmds.ls(project_container)
+            if len(container_list) == 0:
+                self.log.warning("Project container is not found!")
+                self.log.warning("The asset(s) may not be properly loaded after publishing")  # noqa
+                continue
+
+            grp_loaded_ass = instance.data.get("groupLoadedAssets", False)
+            if grp_loaded_ass:
+                asset_list = cmds.listRelatives(asset, children=True)
+                for asset in asset_list:
+                    grp_name = asset.split(':')[0]
+            else:
+                grp_name = asset.split(':')[0]
             containers = cmds.ls("{}*_CON".format(grp_name))
-
-            assert len(containers) == 1, \
-                "More than one container found for {}".format(asset)
-
+            if len(containers) == 0:
+                self.log.warning("{} isn't from the loader".format(asset))
+                self.log.warning("It may not be properly loaded after publishing")  # noqa
+                continue
             container = containers[0]

             representation_id = cmds.getAttr(
diff --git a/openpype/hosts/maya/plugins/publish/extract_look.py b/openpype/hosts/maya/plugins/publish/extract_look.py
index 403b4ee6bc..df07a674dc 100644
--- a/openpype/hosts/maya/plugins/publish/extract_look.py
+++ b/openpype/hosts/maya/plugins/publish/extract_look.py
@@ -90,7 +90,7 @@ def maketx(source, destination, args, logger):

     maketx_path = get_oiio_tools_path("maketx")

-    if not os.path.exists(maketx_path):
+    if not maketx_path:
         print(
             "OIIO tool not found in {}".format(maketx_path))
         raise AssertionError("OIIO tool not found")
diff --git a/openpype/hosts/maya/plugins/publish/extract_playblast.py b/openpype/hosts/maya/plugins/publish/extract_playblast.py
index b19d24fad7..1f9f9db99a 100644
--- a/openpype/hosts/maya/plugins/publish/extract_playblast.py
+++ b/openpype/hosts/maya/plugins/publish/extract_playblast.py
@@ -115,6 +115,10 @@ class ExtractPlayblast(publish.Extractor):
         else:
             preset["viewport_options"] = {"imagePlane": image_plane}

+        # Disable Pan/Zoom.
+        pan_zoom = cmds.getAttr("{}.panZoomEnabled".format(preset["camera"]))
+        cmds.setAttr("{}.panZoomEnabled".format(preset["camera"]), False)
+
         with lib.maintained_time():
             filename = preset.get("filename", "%TEMP%")

@@ -135,6 +139,8 @@ class ExtractPlayblast(publish.Extractor):

         path = capture.capture(log=self.log, **preset)

+        cmds.setAttr("{}.panZoomEnabled".format(preset["camera"]), pan_zoom)
+
         self.log.debug("playblast path {}".format(path))

         collected_files = os.listdir(stagingdir)
diff --git a/openpype/hosts/maya/plugins/publish/extract_pointcache.py b/openpype/hosts/maya/plugins/publish/extract_pointcache.py
index 7c1c6d5c12..7ed73fd5b0 100644
--- a/openpype/hosts/maya/plugins/publish/extract_pointcache.py
+++ b/openpype/hosts/maya/plugins/publish/extract_pointcache.py
@@ -86,13 +86,16 @@ class ExtractAlembic(publish.Extractor):
                                                  start=start,
                                                  end=end))

-        with suspended_refresh():
+        suspend = not instance.data.get("refresh", False)
+        with suspended_refresh(suspend=suspend):
             with maintained_selection():
                 cmds.select(nodes, noExpand=True)
-                extract_alembic(file=path,
-                                startFrame=start,
-                                endFrame=end,
-                                **options)
+                extract_alembic(
+                    file=path,
+                    startFrame=start,
+                    endFrame=end,
+                    **options
+                )

         if "representations" not in instance.data:
             instance.data["representations"] = []
diff --git a/openpype/hosts/maya/plugins/publish/extract_proxy_abc.py b/openpype/hosts/maya/plugins/publish/extract_proxy_abc.py
new file mode 100644
index 0000000000..cf6351fdca
--- /dev/null
+++ b/openpype/hosts/maya/plugins/publish/extract_proxy_abc.py
@@ -0,0 +1,109 @@
+import os
+
+from maya import cmds
+
+from openpype.pipeline import publish
+from openpype.hosts.maya.api.lib import (
+    extract_alembic,
+    suspended_refresh,
+    maintained_selection,
+    iter_visible_nodes_in_range
+)
+
+
+class ExtractProxyAlembic(publish.Extractor):
+    """Produce an alembic for bounding box geometry"""
+
+    label = "Extract Proxy (Alembic)"
+    hosts = ["maya"]
+    families = ["proxyAbc"]
+
+    def process(self, instance):
+        name_suffix = instance.data.get("nameSuffix")
+        # Collect the start and end including handles
+        start = float(instance.data.get("frameStartHandle", 1))
+        end = float(instance.data.get("frameEndHandle", 1))
+
+        attrs = instance.data.get("attr", "").split(";")
+        attrs = [value for value in attrs if value.strip()]
+        attrs += ["cbId"]
+
+        attr_prefixes = instance.data.get("attrPrefix", "").split(";")
+        attr_prefixes = [value for value in attr_prefixes if value.strip()]
+
+        self.log.info("Extracting Proxy Alembic..")
+        dirname = self.staging_dir(instance)
+
+        filename = "{name}.abc".format(**instance.data)
+        path = os.path.join(dirname, filename)
+
+        proxy_root = self.create_proxy_geometry(instance,
+                                                name_suffix,
+                                                start,
+                                                end)
+
+        options = {
+            "step": instance.data.get("step", 1.0),
+            "attr": attrs,
+            "attrPrefix": attr_prefixes,
+            "writeVisibility": True,
+            "writeCreases": True,
+            "writeColorSets": instance.data.get("writeColorSets", False),
+            "writeFaceSets": instance.data.get("writeFaceSets", False),
+            "uvWrite": True,
+            "selection": True,
+            "worldSpace": instance.data.get("worldSpace", True),
+            "root": proxy_root
+        }
+
+        if int(cmds.about(version=True)) >= 2017:
+            # Since Maya 2017 alembic supports multiple uv sets - write them.
+            options["writeUVSets"] = True
+
+        with suspended_refresh():
+            with maintained_selection():
+                cmds.select(proxy_root, hi=True, noExpand=True)
+                extract_alembic(file=path,
+                                startFrame=start,
+                                endFrame=end,
+                                **options)
+
+        if "representations" not in instance.data:
+            instance.data["representations"] = []
+
+        representation = {
+            'name': 'abc',
+            'ext': 'abc',
+            'files': filename,
+            "stagingDir": dirname
+        }
+        instance.data["representations"].append(representation)
+
+        instance.context.data["cleanupFullPaths"].append(path)
+
+        self.log.info("Extracted {} to {}".format(instance, dirname))
+        # remove the bounding box
+        bbox_master = cmds.ls("bbox_grp")
+        cmds.delete(bbox_master)
+
+    def create_proxy_geometry(self, instance, name_suffix, start, end):
+        nodes = instance[:]
+        nodes = list(iter_visible_nodes_in_range(nodes,
+                                                 start=start,
+                                                 end=end))
+
+        inst_selection = cmds.ls(nodes, long=True)
+        cmds.geomToBBox(inst_selection,
+                        nameSuffix=name_suffix,
+                        keepOriginal=True,
+                        single=False,
+                        bakeAnimation=True,
+                        startTime=start,
+                        endTime=end)
+        # create master group for bounding
+        # boxes as the main root
+        master_group = cmds.group(name="bbox_grp")
+        bbox_sel = cmds.ls(master_group, long=True)
+        self.log.debug("proxy_root: {}".format(bbox_sel))
+        return bbox_sel
diff --git a/openpype/hosts/maya/plugins/publish/extract_thumbnail.py b/openpype/hosts/maya/plugins/publish/extract_thumbnail.py
index 712159c2be..1edafeb926 100644
--- a/openpype/hosts/maya/plugins/publish/extract_thumbnail.py
+++ b/openpype/hosts/maya/plugins/publish/extract_thumbnail.py
@@ -105,6 +105,11 @@ class ExtractThumbnail(publish.Extractor):
             pm.currentTime(refreshFrameInt - 1, edit=True)
             pm.currentTime(refreshFrameInt, edit=True)

+        # Override transparency if requested.
+        transparency = instance.data.get("transparency", 0)
+        if transparency != 0:
+            preset["viewport2_options"]["transparencyAlgorithm"] = transparency
+
         # Isolate view is requested by having objects in the set besides a
         # camera.
         if preset.pop("isolate_view", False) and instance.data.get("isolate"):
@@ -117,6 +122,10 @@ class ExtractThumbnail(publish.Extractor):
         else:
             preset["viewport_options"] = {"imagePlane": image_plane}

+        # Disable Pan/Zoom.
+        pan_zoom = cmds.getAttr("{}.panZoomEnabled".format(preset["camera"]))
+        cmds.setAttr("{}.panZoomEnabled".format(preset["camera"]), False)
+
         with lib.maintained_time():
             # Force viewer to False in call to capture because we have our own
             # viewer opening call to allow a signal to trigger between
@@ -136,6 +145,7 @@ class ExtractThumbnail(publish.Extractor):

         _, thumbnail = os.path.split(playblast)

+        cmds.setAttr("{}.panZoomEnabled".format(preset["camera"]), pan_zoom)

         self.log.info("file list {}".format(thumbnail))
diff --git a/openpype/hosts/maya/plugins/publish/extract_unreal_skeletalmesh_abc.py b/openpype/hosts/maya/plugins/publish/extract_unreal_skeletalmesh_abc.py
new file mode 100644
index 0000000000..e1f847f31a
--- /dev/null
+++ b/openpype/hosts/maya/plugins/publish/extract_unreal_skeletalmesh_abc.py
@@ -0,0 +1,108 @@
+# -*- coding: utf-8 -*-
+"""Create Unreal Skeletal Mesh data to be extracted as Alembic."""
+import os
+from contextlib import contextmanager
+
+from maya import cmds  # noqa
+
+from openpype.pipeline import publish
+from openpype.hosts.maya.api.lib import (
+    extract_alembic,
+    suspended_refresh,
+    maintained_selection
+)
+
+
+@contextmanager
+def renamed(original_name, renamed_name):
+    # type: (str, str) -> None
+    try:
+        cmds.rename(original_name, renamed_name)
+        yield
+    finally:
+        cmds.rename(renamed_name, original_name)
+
+
+class ExtractUnrealSkeletalMeshAbc(publish.Extractor):
+    """Extract Unreal Skeletal Mesh as Alembic from Maya."""
+
+    label = "Extract Unreal Skeletal Mesh - Alembic"
+    hosts = ["maya"]
+    families = ["skeletalMesh"]
+    optional = True
+
+    def process(self, instance):
+        self.log.info("Extracting pointcache..")
+
+        geo = cmds.listRelatives(
+            instance.data.get("geometry"), allDescendents=True, fullPath=True)
+        joints = cmds.listRelatives(
+            instance.data.get("joints"), allDescendents=True, fullPath=True)
+
+        nodes = geo + joints
+
+        attrs = instance.data.get("attr", "").split(";")
+        attrs = [value for value in attrs if value.strip()]
+        attrs += ["cbId"]
+
+        attr_prefixes = instance.data.get("attrPrefix", "").split(";")
+        attr_prefixes = [value for value in attr_prefixes if value.strip()]
+
+        # Define output path
+        staging_dir = self.staging_dir(instance)
+        filename = "{0}.abc".format(instance.name)
+        path = os.path.join(staging_dir, filename)
+
+        # The export requires forward slashes because we need
+        # to format it into a string in a mel expression
+        path = path.replace('\\', '/')
+
+        self.log.info("Extracting ABC to: {0}".format(path))
+        self.log.info("Members: {0}".format(nodes))
+        self.log.info("Instance: {0}".format(instance[:]))
+
+        options = {
+            "step": instance.data.get("step", 1.0),
+            "attr": attrs,
+            "attrPrefix": attr_prefixes,
+            "writeVisibility": True,
+            "writeCreases": True,
+            "writeColorSets": instance.data.get("writeColorSets", False),
+            "writeFaceSets": instance.data.get("writeFaceSets", False),
+            "uvWrite": True,
+            "selection": True,
+            "worldSpace": instance.data.get("worldSpace", True)
+        }
+
+        self.log.info("Options: {}".format(options))
+
+        if int(cmds.about(version=True)) >= 2017:
+            # Since Maya 2017 alembic supports multiple uv sets - write them.
+            options["writeUVSets"] = True
+
+        if not instance.data.get("includeParentHierarchy", True):
+            # Set the root nodes if we don't want to include parents
+            # The roots are to be considered the ones that are the actual
+            # direct members of the set
+            options["root"] = instance.data.get("setMembers")
+
+        with suspended_refresh(suspend=instance.data.get("refresh", False)):
+            with maintained_selection():
+                cmds.select(nodes, noExpand=True)
+                extract_alembic(file=path,
+                                # startFrame=start,
+                                # endFrame=end,
+                                **options)
+
+        if "representations" not in instance.data:
+            instance.data["representations"] = []
+
+        representation = {
+            'name': 'abc',
+            'ext': 'abc',
+            'files': filename,
+            "stagingDir": staging_dir,
+        }
+        instance.data["representations"].append(representation)
+
+        self.log.info("Extract ABC successful to: {0}".format(path))
diff --git a/openpype/hosts/maya/plugins/publish/extract_unreal_skeletalmesh.py b/openpype/hosts/maya/plugins/publish/extract_unreal_skeletalmesh_fbx.py
similarity index 95%
rename from openpype/hosts/maya/plugins/publish/extract_unreal_skeletalmesh.py
rename to openpype/hosts/maya/plugins/publish/extract_unreal_skeletalmesh_fbx.py
index 258120db2f..b162ce47f7 100644
--- a/openpype/hosts/maya/plugins/publish/extract_unreal_skeletalmesh.py
+++ b/openpype/hosts/maya/plugins/publish/extract_unreal_skeletalmesh_fbx.py
@@ -21,12 +21,13 @@ def renamed(original_name, renamed_name):
         cmds.rename(renamed_name, original_name)


-class ExtractUnrealSkeletalMesh(publish.Extractor):
+class ExtractUnrealSkeletalMeshFbx(publish.Extractor):
     """Extract Unreal Skeletal Mesh as FBX from Maya."""

     order = pyblish.api.ExtractorOrder - 0.1
-    label = "Extract Unreal Skeletal Mesh"
+    label = "Extract Unreal Skeletal Mesh - FBX"
     families = ["skeletalMesh"]
+    optional = True

     def process(self, instance):
         fbx_exporter = fbx.FBXExtractor(log=self.log)
diff --git a/openpype/hosts/maya/plugins/publish/validate_animation_out_set_related_node_ids.py b/openpype/hosts/maya/plugins/publish/validate_animation_out_set_related_node_ids.py
index 649913fff6..5a527031be 100644
--- a/openpype/hosts/maya/plugins/publish/validate_animation_out_set_related_node_ids.py
+++ b/openpype/hosts/maya/plugins/publish/validate_animation_out_set_related_node_ids.py
@@ -20,7 +20,7 @@ class ValidateOutRelatedNodeIds(pyblish.api.InstancePlugin):
     """

     order = ValidateContentsOrder
-    families = ['animation', "pointcache"]
+    families = ['animation', "pointcache", "proxyAbc"]
     hosts = ['maya']
     label = 'Animation Out Set Related Node Ids'
     actions = [
diff --git a/openpype/hosts/maya/plugins/publish/validate_frame_range.py b/openpype/hosts/maya/plugins/publish/validate_frame_range.py
index b467a7c232..5e50ae72cd 100644
--- a/openpype/hosts/maya/plugins/publish/validate_frame_range.py
+++ b/openpype/hosts/maya/plugins/publish/validate_frame_range.py
@@ -25,6 +25,7 @@ class ValidateFrameRange(pyblish.api.InstancePlugin):
     families = ["animation",
                 "pointcache",
                 "camera",
+                "proxyAbc",
                 "renderlayer",
                 "review",
                 "yeticache"]
diff --git a/openpype/hosts/maya/plugins/publish/validate_skeletalmesh_hierarchy.py b/openpype/hosts/maya/plugins/publish/validate_skeletalmesh_hierarchy.py
index 8221c18b17..398b6fb7bf 100644
--- a/openpype/hosts/maya/plugins/publish/validate_skeletalmesh_hierarchy.py
+++ b/openpype/hosts/maya/plugins/publish/validate_skeletalmesh_hierarchy.py
@@ -28,7 +28,9 @@ class ValidateSkeletalMeshHierarchy(pyblish.api.InstancePlugin):
             parent.split("|")[1] for parent in (joints_parents + geo_parents)
         }

-        if len(set(parents_set)) != 1:
+        self.log.info(parents_set)
+
+        if len(set(parents_set)) > 2:
raise PublishXmlValidationError( self, "Multiple roots on geometry or joints." diff --git a/openpype/hosts/maya/plugins/publish/validate_skeletalmesh_triangulated.py b/openpype/hosts/maya/plugins/publish/validate_skeletalmesh_triangulated.py new file mode 100644 index 0000000000..c0a9ddcf69 --- /dev/null +++ b/openpype/hosts/maya/plugins/publish/validate_skeletalmesh_triangulated.py @@ -0,0 +1,60 @@ +# -*- coding: utf-8 -*- +import pyblish.api + +from openpype.hosts.maya.api.action import ( + SelectInvalidAction, +) +from openpype.pipeline.publish import ( + RepairAction, + ValidateContentsOrder, +) + +from maya import cmds + + +class ValidateSkeletalMeshTriangulated(pyblish.api.InstancePlugin): + """Validates that the geometry has been triangulated.""" + + order = ValidateContentsOrder + hosts = ["maya"] + families = ["skeletalMesh"] + label = "Skeletal Mesh Triangulated" + optional = True + actions = [ + SelectInvalidAction, + RepairAction + ] + + def process(self, instance): + invalid = self.get_invalid(instance) + if invalid: + raise RuntimeError( + "The following objects needs to be triangulated: " + "{}".format(invalid)) + + @classmethod + def get_invalid(cls, instance): + geo = instance.data.get("geometry") + + invalid = [] + + for obj in cmds.listRelatives( + cmds.ls(geo), allDescendents=True, fullPath=True): + n_triangles = cmds.polyEvaluate(obj, triangle=True) + n_faces = cmds.polyEvaluate(obj, face=True) + + if not (isinstance(n_triangles, int) and isinstance(n_faces, int)): + continue + + # We check if the number of triangles is equal to the number of + # faces for each transform node. + # If it is, the object is triangulated. 
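The triangulated-mesh validator above rests on a counting identity: a convex face with n vertices fans into n − 2 triangles, so the total triangle count equals the face count exactly when every face already has 3 vertices. A minimal stdlib-only sketch of that reasoning, independent of Maya's `polyEvaluate` (the function name and the list-of-counts representation are illustrative, not part of the plugin):

```python
def is_triangulated(face_vertex_counts):
    """A face with n vertices fans into n - 2 triangles, so the
    triangle count equals the face count precisely when every face
    is already a triangle -- the equality the validator relies on."""
    triangles = sum(n - 2 for n in face_vertex_counts)
    return triangles == len(face_vertex_counts)

print(is_triangulated([3, 3, 3]))  # True: all faces are triangles
print(is_triangulated([4, 3]))     # False: the quad counts as two triangles
```

In the plugin itself the two counts come from `cmds.polyEvaluate(obj, triangle=True)` and `cmds.polyEvaluate(obj, face=True)`, and non-integer returns (non-mesh transforms) are skipped.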
+ if cmds.objectType(obj, i="transform") and n_triangles != n_faces: + invalid.append(obj) + + return invalid + + @classmethod + def repair(cls, instance): + for node in cls.get_invalid(instance): + cmds.polyTriangulate(node) diff --git a/openpype/hosts/nuke/addon.py b/openpype/hosts/nuke/addon.py index 1c5d5c4005..9d25afe2b6 100644 --- a/openpype/hosts/nuke/addon.py +++ b/openpype/hosts/nuke/addon.py @@ -27,7 +27,12 @@ class NukeAddon(OpenPypeModule, IHostAddon): new_nuke_paths.append(norm_path) env["NUKE_PATH"] = os.pathsep.join(new_nuke_paths) + # Remove auto screen scale factor for Qt + # - let Nuke decide it's value env.pop("QT_AUTO_SCREEN_SCALE_FACTOR", None) + # Remove tkinter library paths if are set + env.pop("TK_LIBRARY", None) + env.pop("TCL_LIBRARY", None) # Add vendor to PYTHONPATH python_path = env["PYTHONPATH"] diff --git a/openpype/hosts/nuke/api/lib.py b/openpype/hosts/nuke/api/lib.py index 2691b7447a..7ee30bf273 100644 --- a/openpype/hosts/nuke/api/lib.py +++ b/openpype/hosts/nuke/api/lib.py @@ -611,7 +611,10 @@ def get_created_node_imageio_setting_legacy(nodeclass, creator, subset): if ( onode["subsets"] - and not any(re.search(s, subset) for s in onode["subsets"]) + and not any( + re.search(s.lower(), subset.lower()) + for s in onode["subsets"] + ) ): continue @@ -694,7 +697,8 @@ def get_imageio_node_override_setting( # find matching override node override_imageio_node = None for onode in override_nodes: - log.info(onode) + log.debug("__ onode: {}".format(onode)) + log.debug("__ subset: {}".format(subset)) if node_class not in onode["nukeNodeClass"]: continue @@ -703,7 +707,10 @@ def get_imageio_node_override_setting( if ( onode["subsets"] - and not any(re.search(s, subset) for s in onode["subsets"]) + and not any( + re.search(s.lower(), subset.lower()) + for s in onode["subsets"] + ) ): continue @@ -2961,7 +2968,7 @@ def get_viewer_config_from_string(input_string): viewer = split[1] display = split[0] elif "(" in viewer: - pattern = 
r"([\w\d\s]+).*[(](.*)[)]" + pattern = r"([\w\d\s\.\-]+).*[(](.*)[)]" result = re.findall(pattern, viewer) try: result = result.pop() diff --git a/openpype/hosts/nuke/plugins/publish/extract_slate_frame.py b/openpype/hosts/nuke/plugins/publish/extract_slate_frame.py index e7197b4fa8..06c086b10d 100644 --- a/openpype/hosts/nuke/plugins/publish/extract_slate_frame.py +++ b/openpype/hosts/nuke/plugins/publish/extract_slate_frame.py @@ -298,7 +298,7 @@ class ExtractSlateFrame(publish.Extractor): def add_comment_slate_node(self, instance, node): - comment = instance.context.data.get("comment") + comment = instance.data["comment"] intent = instance.context.data.get("intent") if not isinstance(intent, dict): intent = { diff --git a/openpype/hosts/photoshop/api/launch_logic.py b/openpype/hosts/photoshop/api/launch_logic.py index 1f0203dca6..1403b6cfa1 100644 --- a/openpype/hosts/photoshop/api/launch_logic.py +++ b/openpype/hosts/photoshop/api/launch_logic.py @@ -8,7 +8,7 @@ from wsrpc_aiohttp import ( WebSocketAsync ) -from Qt import QtCore +from qtpy import QtCore from openpype.lib import Logger from openpype.pipeline import legacy_io diff --git a/openpype/hosts/photoshop/api/lib.py b/openpype/hosts/photoshop/api/lib.py index 221b4314e6..e3b601d011 100644 --- a/openpype/hosts/photoshop/api/lib.py +++ b/openpype/hosts/photoshop/api/lib.py @@ -3,7 +3,7 @@ import sys import contextlib import traceback -from Qt import QtWidgets +from qtpy import QtWidgets from openpype.lib import env_value_to_bool, Logger from openpype.modules import ModulesManager diff --git a/openpype/hosts/photoshop/api/pipeline.py b/openpype/hosts/photoshop/api/pipeline.py index 9f6fc0983c..d2da8c5cb4 100644 --- a/openpype/hosts/photoshop/api/pipeline.py +++ b/openpype/hosts/photoshop/api/pipeline.py @@ -1,5 +1,5 @@ import os -from Qt import QtWidgets +from qtpy import QtWidgets import pyblish.api diff --git a/openpype/hosts/photoshop/plugins/create/create_legacy_image.py 
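The two `lib.py` hunks above make the imageio subset filter case-insensitive by lowercasing both the configured pattern and the subset name before `re.search`. A self-contained sketch of that matching rule (the helper name is illustrative, not OpenPype's API):

```python
import re

def subset_matches(patterns, subset):
    """True when no patterns are configured, or when any pattern
    matches the subset name case-insensitively -- mirroring the hunk,
    which lowercases both sides before re.search."""
    if not patterns:
        return True
    return any(re.search(p.lower(), subset.lower()) for p in patterns)

print(subset_matches(["renderMain"], "rendermain_beauty"))  # True
print(subset_matches([], "anything"))                       # True
print(subset_matches(["plateMain"], "renderMain"))          # False
```

Lowercasing the pattern is safe for the literal subset names used in settings; for patterns containing character classes, compiling with `re.IGNORECASE` would be the more general route.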
b/openpype/hosts/photoshop/plugins/create/create_legacy_image.py index 7672458165..2d655cae32 100644 --- a/openpype/hosts/photoshop/plugins/create/create_legacy_image.py +++ b/openpype/hosts/photoshop/plugins/create/create_legacy_image.py @@ -1,6 +1,6 @@ import re -from Qt import QtWidgets +from qtpy import QtWidgets from openpype.pipeline import create from openpype.hosts.photoshop import api as photoshop diff --git a/openpype/hosts/standalonepublisher/plugins/publish/validate_texture_workfiles.py b/openpype/hosts/standalonepublisher/plugins/publish/validate_texture_workfiles.py index 56ea82f6b6..a7ae02a2eb 100644 --- a/openpype/hosts/standalonepublisher/plugins/publish/validate_texture_workfiles.py +++ b/openpype/hosts/standalonepublisher/plugins/publish/validate_texture_workfiles.py @@ -1,5 +1,7 @@ +import os import pyblish.api +from openpype.settings import get_project_settings from openpype.pipeline.publish import ( ValidateContentsOrder, PublishXmlValidationError, @@ -18,23 +20,38 @@ class ValidateTextureBatchWorkfiles(pyblish.api.InstancePlugin): families = ["texture_batch_workfile"] optional = True - # from presets - main_workfile_extensions = ['mra'] - def process(self, instance): if instance.data["family"] == "workfile": ext = instance.data["representations"][0]["ext"] - if ext not in self.main_workfile_extensions: + main_workfile_extensions = self.get_main_workfile_extensions() + if ext not in main_workfile_extensions: self.log.warning("Only secondary workfile present!") return if not instance.data.get("resources"): msg = "No secondary workfile present for workfile '{}'". 
\ format(instance.data["name"]) - ext = self.main_workfile_extensions[0] + ext = main_workfile_extensions[0] formatting_data = {"file_name": instance.data["name"], "extension": ext} raise PublishXmlValidationError(self, msg, formatting_data=formatting_data ) + + @staticmethod + def get_main_workfile_extensions(): + project_settings = get_project_settings(os.environ["AVALON_PROJECT"]) + + try: + extensions = (project_settings["standalonepublisher"] + ["publish"] + ["CollectTextures"] + ["main_workfile_extensions"]) + except KeyError: + raise Exception("Setting 'Main workfile extensions' not found." + " The setting must be set for the" + " 'Collect Texture' publish plugin of the" + " 'Standalone Publish' tool.") + + return extensions diff --git a/openpype/hosts/tvpaint/api/launch_script.py b/openpype/hosts/tvpaint/api/launch_script.py index c474a10529..614dbe8a6e 100644 --- a/openpype/hosts/tvpaint/api/launch_script.py +++ b/openpype/hosts/tvpaint/api/launch_script.py @@ -6,7 +6,7 @@ import ctypes import platform import logging -from Qt import QtWidgets, QtCore, QtGui +from qtpy import QtWidgets, QtCore, QtGui from openpype import style from openpype.pipeline import install_host diff --git a/openpype/hosts/tvpaint/plugins/create/create_render_layer.py b/openpype/hosts/tvpaint/plugins/create/create_render_layer.py index a085830e96..009b69c4f1 100644 --- a/openpype/hosts/tvpaint/plugins/create/create_render_layer.py +++ b/openpype/hosts/tvpaint/plugins/create/create_render_layer.py @@ -207,8 +207,8 @@ class CreateRenderlayer(plugin.Creator): ) def _ask_user_subset_override(self, instance): - from Qt import QtCore - from Qt.QtWidgets import QMessageBox + from qtpy import QtCore + from qtpy.QtWidgets import QMessageBox title = "Subset \"{}\" already exist".format(instance["subset"]) text = ( diff --git a/openpype/hosts/unreal/api/pipeline.py b/openpype/hosts/unreal/api/pipeline.py index d396b64072..ca5a42cd82 100644 --- a/openpype/hosts/unreal/api/pipeline.py +++ 
b/openpype/hosts/unreal/api/pipeline.py @@ -2,6 +2,7 @@ import os import logging from typing import List +import semver import pyblish.api @@ -21,6 +22,9 @@ import unreal # noqa logger = logging.getLogger("openpype.hosts.unreal") OPENPYPE_CONTAINERS = "OpenPypeContainers" +UNREAL_VERSION = semver.VersionInfo( + *os.getenv("OPENPYPE_UNREAL_VERSION").split(".") +) HOST_DIR = os.path.dirname(os.path.abspath(openpype.hosts.unreal.__file__)) PLUGINS_DIR = os.path.join(HOST_DIR, "plugins") @@ -111,7 +115,9 @@ def ls(): """ ar = unreal.AssetRegistryHelpers.get_asset_registry() - openpype_containers = ar.get_assets_by_class("AssetContainer", True) + # UE 5.1 changed how class name is specified + class_name = ["/Script/OpenPype", "AssetContainer"] if UNREAL_VERSION.major == 5 and UNREAL_VERSION.minor > 0 else "AssetContainer" # noqa + openpype_containers = ar.get_assets_by_class(class_name, True) # get_asset_by_class returns AssetData. To get all metadata we need to # load asset. get_tag_values() work only on metadata registered in diff --git a/openpype/hosts/unreal/api/tools_ui.py b/openpype/hosts/unreal/api/tools_ui.py index 2500f8495f..708e167a65 100644 --- a/openpype/hosts/unreal/api/tools_ui.py +++ b/openpype/hosts/unreal/api/tools_ui.py @@ -1,5 +1,5 @@ import sys -from Qt import QtWidgets, QtCore, QtGui +from qtpy import QtWidgets, QtCore, QtGui from openpype import ( resources, diff --git a/openpype/hosts/unreal/hooks/pre_workfile_preparation.py b/openpype/hosts/unreal/hooks/pre_workfile_preparation.py index 4ae72593e9..2dc6fb9f42 100644 --- a/openpype/hosts/unreal/hooks/pre_workfile_preparation.py +++ b/openpype/hosts/unreal/hooks/pre_workfile_preparation.py @@ -150,6 +150,7 @@ class UnrealPrelaunchHook(PreLaunchHook): engine_path=Path(engine_path) ) + self.launch_context.env["OPENPYPE_UNREAL_VERSION"] = engine_version # Append project file to launch arguments self.launch_context.launch_args.append( f"\"{project_file.as_posix()}\"") diff --git 
a/openpype/hosts/unreal/integration/UE_4.7/Source/OpenPype/Private/OpenPypePublishInstance.cpp b/openpype/hosts/unreal/integration/UE_4.7/Source/OpenPype/Private/OpenPypePublishInstance.cpp index 4f1e846c0b..ed81104c05 100644 --- a/openpype/hosts/unreal/integration/UE_4.7/Source/OpenPype/Private/OpenPypePublishInstance.cpp +++ b/openpype/hosts/unreal/integration/UE_4.7/Source/OpenPype/Private/OpenPypePublishInstance.cpp @@ -2,107 +2,150 @@ #include "OpenPypePublishInstance.h" #include "AssetRegistryModule.h" +#include "NotificationManager.h" +#include "SNotificationList.h" +//Moves all the invalid pointers to the end to prepare them for the shrinking +#define REMOVE_INVALID_ENTRIES(VAR) VAR.CompactStable(); \ + VAR.Shrink(); UOpenPypePublishInstance::UOpenPypePublishInstance(const FObjectInitializer& ObjectInitializer) - : UObject(ObjectInitializer) + : UPrimaryDataAsset(ObjectInitializer) { - FAssetRegistryModule& AssetRegistryModule = FModuleManager::LoadModuleChecked("AssetRegistry"); - FString path = UOpenPypePublishInstance::GetPathName(); + const FAssetRegistryModule& AssetRegistryModule = FModuleManager::LoadModuleChecked< + FAssetRegistryModule>("AssetRegistry"); + + const FPropertyEditorModule& PropertyEditorModule = FModuleManager::LoadModuleChecked( + "PropertyEditor"); + + FString Left, Right; + GetPathName().Split("/" + GetName(), &Left, &Right); + FARFilter Filter; - Filter.PackagePaths.Add(FName(*path)); + Filter.PackagePaths.Emplace(FName(Left)); - AssetRegistryModule.Get().OnAssetAdded().AddUObject(this, &UOpenPypePublishInstance::OnAssetAdded); + TArray FoundAssets; + AssetRegistryModule.GetRegistry().GetAssets(Filter, FoundAssets); + + for (const FAssetData& AssetData : FoundAssets) + OnAssetCreated(AssetData); + + REMOVE_INVALID_ENTRIES(AssetDataInternal) + REMOVE_INVALID_ENTRIES(AssetDataExternal) + + AssetRegistryModule.Get().OnAssetAdded().AddUObject(this, &UOpenPypePublishInstance::OnAssetCreated); 
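The rewritten `UOpenPypePublishInstance` constructor above scopes asset tracking to its own package directory: it splits its path name on its own asset name and keeps everything before it as a prefix, which `IsUnderSameDir` then tests with `StartsWith`. The same prefix logic in a stdlib-only Python sketch (path shapes and function name are illustrative assumptions):

```python
def is_under_same_dir(instance_path, instance_name, asset_path):
    """Mirror of IsUnderSameDir: everything in the instance's path
    before its own name is the directory it monitors; an asset belongs
    to the instance when its path starts with that prefix."""
    prefix = instance_path.split(instance_name, 1)[0]
    return asset_path.startswith(prefix)

# Asset in the same package directory as the publish instance:
print(is_under_same_dir("/Game/Publish/inst.inst", "inst",
                        "/Game/Publish/mesh.mesh"))   # True
# Asset elsewhere in the project:
print(is_under_same_dir("/Game/Publish/inst.inst", "inst",
                        "/Game/Other/mesh.mesh"))     # False
```

Note the C++ additionally excludes other `UOpenPypePublishInstance` assets via a `Cast` check, so containers never track each other.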
AssetRegistryModule.Get().OnAssetRemoved().AddUObject(this, &UOpenPypePublishInstance::OnAssetRemoved); - AssetRegistryModule.Get().OnAssetRenamed().AddUObject(this, &UOpenPypePublishInstance::OnAssetRenamed); + AssetRegistryModule.Get().OnAssetUpdated().AddUObject(this, &UOpenPypePublishInstance::OnAssetUpdated); } -void UOpenPypePublishInstance::OnAssetAdded(const FAssetData& AssetData) +void UOpenPypePublishInstance::OnAssetCreated(const FAssetData& InAssetData) { TArray split; - // get directory of current container - FString selfFullPath = UOpenPypePublishInstance::GetPathName(); - FString selfDir = FPackageName::GetLongPackagePath(*selfFullPath); + UObject* Asset = InAssetData.GetAsset(); - // get asset path and class - FString assetPath = AssetData.GetFullName(); - FString assetFName = AssetData.AssetClass.ToString(); - - // split path - assetPath.ParseIntoArray(split, TEXT(" "), true); - - FString assetDir = FPackageName::GetLongPackagePath(*split[1]); - - // take interest only in paths starting with path of current container - if (assetDir.StartsWith(*selfDir)) + if (!IsValid(Asset)) { - // exclude self - if (assetFName != "OpenPypePublishInstance") + UE_LOG(LogAssetData, Warning, TEXT("Asset \"%s\" is not valid! 
Skipping the addition."), + *InAssetData.ObjectPath.ToString()); + return; + } + + const bool result = IsUnderSameDir(Asset) && Cast(Asset) == nullptr; + + if (result) + { + if (AssetDataInternal.Emplace(Asset).IsValidId()) { - assets.Add(assetPath); - UE_LOG(LogTemp, Log, TEXT("%s: asset added to %s"), *selfFullPath, *selfDir); + UE_LOG(LogTemp, Log, TEXT("Added an Asset to PublishInstance - Publish Instance: %s, Asset %s"), + *this->GetName(), *Asset->GetName()); } } } -void UOpenPypePublishInstance::OnAssetRemoved(const FAssetData& AssetData) +void UOpenPypePublishInstance::OnAssetRemoved(const FAssetData& InAssetData) { - TArray split; - - // get directory of current container - FString selfFullPath = UOpenPypePublishInstance::GetPathName(); - FString selfDir = FPackageName::GetLongPackagePath(*selfFullPath); - - // get asset path and class - FString assetPath = AssetData.GetFullName(); - FString assetFName = AssetData.AssetClass.ToString(); - - // split path - assetPath.ParseIntoArray(split, TEXT(" "), true); - - FString assetDir = FPackageName::GetLongPackagePath(*split[1]); - - // take interest only in paths starting with path of current container - FString path = UOpenPypePublishInstance::GetPathName(); - FString lpp = FPackageName::GetLongPackagePath(*path); - - if (assetDir.StartsWith(*selfDir)) + if (Cast(InAssetData.GetAsset()) == nullptr) { - // exclude self - if (assetFName != "OpenPypePublishInstance") + if (AssetDataInternal.Contains(nullptr)) { - // UE_LOG(LogTemp, Warning, TEXT("%s: asset removed"), *lpp); - assets.Remove(assetPath); + AssetDataInternal.Remove(nullptr); + REMOVE_INVALID_ENTRIES(AssetDataInternal) + } + else + { + AssetDataExternal.Remove(nullptr); + REMOVE_INVALID_ENTRIES(AssetDataExternal) } } } -void UOpenPypePublishInstance::OnAssetRenamed(const FAssetData& AssetData, const FString& str) +void UOpenPypePublishInstance::OnAssetUpdated(const FAssetData& InAssetData) { - TArray split; + REMOVE_INVALID_ENTRIES(AssetDataInternal); + 
REMOVE_INVALID_ENTRIES(AssetDataExternal); +} - // get directory of current container - FString selfFullPath = UOpenPypePublishInstance::GetPathName(); - FString selfDir = FPackageName::GetLongPackagePath(*selfFullPath); +bool UOpenPypePublishInstance::IsUnderSameDir(const UObject* InAsset) const +{ + FString ThisLeft, ThisRight; + this->GetPathName().Split(this->GetName(), &ThisLeft, &ThisRight); - // get asset path and class - FString assetPath = AssetData.GetFullName(); - FString assetFName = AssetData.AssetClass.ToString(); + return InAsset->GetPathName().StartsWith(ThisLeft); +} - // split path - assetPath.ParseIntoArray(split, TEXT(" "), true); +#ifdef WITH_EDITOR - FString assetDir = FPackageName::GetLongPackagePath(*split[1]); - if (assetDir.StartsWith(*selfDir)) +void UOpenPypePublishInstance::SendNotification(const FString& Text) const +{ + FNotificationInfo Info{FText::FromString(Text)}; + + Info.bFireAndForget = true; + Info.bUseLargeFont = false; + Info.bUseThrobber = false; + Info.bUseSuccessFailIcons = false; + Info.ExpireDuration = 4.f; + Info.FadeOutDuration = 2.f; + + FSlateNotificationManager::Get().AddNotification(Info); + + UE_LOG(LogAssetData, Warning, + TEXT( + "Removed duplicated asset from the AssetsDataExternal in Container \"%s\", Asset is already included in the AssetDataInternal!" 
+ ), *GetName() + ) +} + + +void UOpenPypePublishInstance::PostEditChangeProperty(FPropertyChangedEvent& PropertyChangedEvent) +{ + Super::PostEditChangeProperty(PropertyChangedEvent); + + if (PropertyChangedEvent.ChangeType == EPropertyChangeType::ValueSet && + PropertyChangedEvent.Property->GetFName() == GET_MEMBER_NAME_CHECKED( + UOpenPypePublishInstance, AssetDataExternal)) { - // exclude self - if (assetFName != "AssetContainer") + // Check for duplicated assets + for (const auto& Asset : AssetDataInternal) { + if (AssetDataExternal.Contains(Asset)) + { + AssetDataExternal.Remove(Asset); + return SendNotification( + "You are not allowed to add assets into AssetDataExternal which are already included in AssetDataInternal!"); + } + } - assets.Remove(str); - assets.Add(assetPath); - // UE_LOG(LogTemp, Warning, TEXT("%s: asset renamed %s"), *lpp, *str); + // Check if no UOpenPypePublishInstance type assets are included + for (const auto& Asset : AssetDataExternal) + { + if (Cast(Asset.Get()) != nullptr) + { + AssetDataExternal.Remove(Asset); + return SendNotification("You are not allowed to add publish instances!"); + } } } } + +#endif diff --git a/openpype/hosts/unreal/integration/UE_4.7/Source/OpenPype/Private/OpenPypePublishInstanceFactory.cpp b/openpype/hosts/unreal/integration/UE_4.7/Source/OpenPype/Private/OpenPypePublishInstanceFactory.cpp index e61964c689..9b26da7fa4 100644 --- a/openpype/hosts/unreal/integration/UE_4.7/Source/OpenPype/Private/OpenPypePublishInstanceFactory.cpp +++ b/openpype/hosts/unreal/integration/UE_4.7/Source/OpenPype/Private/OpenPypePublishInstanceFactory.cpp @@ -9,10 +9,10 @@ UOpenPypePublishInstanceFactory::UOpenPypePublishInstanceFactory(const FObjectIn bEditorImport = true; } -UObject* UOpenPypePublishInstanceFactory::FactoryCreateNew(UClass* Class, UObject* InParent, FName Name, EObjectFlags Flags, UObject* Context, FFeedbackContext* Warn) +UObject* UOpenPypePublishInstanceFactory::FactoryCreateNew(UClass* InClass, UObject* 
InParent, FName InName, EObjectFlags Flags, UObject* Context, FFeedbackContext* Warn) { - UOpenPypePublishInstance* OpenPypePublishInstance = NewObject(InParent, Class, Name, Flags); - return OpenPypePublishInstance; + check(InClass->IsChildOf(UOpenPypePublishInstance::StaticClass())); + return NewObject(InParent, InClass, InName, Flags); } bool UOpenPypePublishInstanceFactory::ShouldShowInNewMenu() const { diff --git a/openpype/hosts/unreal/integration/UE_4.7/Source/OpenPype/Public/OpenPypePublishInstance.h b/openpype/hosts/unreal/integration/UE_4.7/Source/OpenPype/Public/OpenPypePublishInstance.h index 0a27a078d7..0e946fb039 100644 --- a/openpype/hosts/unreal/integration/UE_4.7/Source/OpenPype/Public/OpenPypePublishInstance.h +++ b/openpype/hosts/unreal/integration/UE_4.7/Source/OpenPype/Public/OpenPypePublishInstance.h @@ -5,17 +5,99 @@ UCLASS(Blueprintable) -class OPENPYPE_API UOpenPypePublishInstance : public UObject +class OPENPYPE_API UOpenPypePublishInstance : public UPrimaryDataAsset { - GENERATED_BODY() - + GENERATED_UCLASS_BODY() + public: - UOpenPypePublishInstance(const FObjectInitializer& ObjectInitalizer); + + /** + /** + * Retrieves all the assets which are monitored by the Publish Instance (Monitors assets in the directory which is + * placed in) + * + * @return - Set of UObjects. Careful! They are returning raw pointers. Seems like an issue in UE5 + */ + UFUNCTION(BlueprintCallable, BlueprintPure) + TSet GetInternalAssets() const + { + //For some reason it can only return Raw Pointers? Seems like an issue which they haven't fixed. + TSet ResultSet; + + for (const auto& Asset : AssetDataInternal) + ResultSet.Add(Asset.LoadSynchronous()); + + return ResultSet; + } + + /** + * Retrieves all the assets which have been added manually by the Publish Instance + * + * @return - TSet of assets (UObjects). Careful! They are returning raw pointers. 
Seems like an issue in UE5 + */ + UFUNCTION(BlueprintCallable, BlueprintPure) + TSet GetExternalAssets() const + { + //For some reason it can only return Raw Pointers? Seems like an issue which they haven't fixed. + TSet ResultSet; + + for (const auto& Asset : AssetDataExternal) + ResultSet.Add(Asset.LoadSynchronous()); + + return ResultSet; + } + + /** + * Function for returning all the assets in the container combined. + * + * @return Returns all the internal and externally added assets into one set (TSet of UObjects). Careful! They are + * returning raw pointers. Seems like an issue in UE5 + * + * @attention If the bAddExternalAssets variable is false, external assets won't be included! + */ + UFUNCTION(BlueprintCallable, BlueprintPure) + TSet GetAllAssets() const + { + const TSet>& IteratedSet = bAddExternalAssets ? AssetDataInternal.Union(AssetDataExternal) : AssetDataInternal; + + //Create a new TSet only with raw pointers. + TSet ResultSet; + + for (auto& Asset : IteratedSet) + ResultSet.Add(Asset.LoadSynchronous()); + + return ResultSet; + } + - UPROPERTY(EditAnywhere, BlueprintReadOnly) - TArray assets; private: - void OnAssetAdded(const FAssetData& AssetData); - void OnAssetRemoved(const FAssetData& AssetData); - void OnAssetRenamed(const FAssetData& AssetData, const FString& str); -}; \ No newline at end of file + + UPROPERTY(VisibleAnywhere, Category="Assets") + TSet> AssetDataInternal; + + /** + * This property allows exposing the array to include other assets from any other directory than what it's currently + * monitoring. NOTE: that these assets have to be added manually! They are not automatically registered or added! 
+ */ + UPROPERTY(EditAnywhere, Category = "Assets") + bool bAddExternalAssets = false; + + UPROPERTY(EditAnywhere, meta=(EditCondition="bAddExternalAssets"), Category="Assets") + TSet> AssetDataExternal; + + + void OnAssetCreated(const FAssetData& InAssetData); + void OnAssetRemoved(const FAssetData& InAssetData); + void OnAssetUpdated(const FAssetData& InAssetData); + + bool IsUnderSameDir(const UObject* InAsset) const; + +#ifdef WITH_EDITOR + + void SendNotification(const FString& Text) const; + virtual void PostEditChangeProperty(FPropertyChangedEvent& PropertyChangedEvent) override; + +#endif + +}; + diff --git a/openpype/hosts/unreal/integration/UE_4.7/Source/OpenPype/Public/OpenPypePublishInstanceFactory.h b/openpype/hosts/unreal/integration/UE_4.7/Source/OpenPype/Public/OpenPypePublishInstanceFactory.h index a2b3abe13e..7d2c77fe6e 100644 --- a/openpype/hosts/unreal/integration/UE_4.7/Source/OpenPype/Public/OpenPypePublishInstanceFactory.h +++ b/openpype/hosts/unreal/integration/UE_4.7/Source/OpenPype/Public/OpenPypePublishInstanceFactory.h @@ -14,6 +14,6 @@ class OPENPYPE_API UOpenPypePublishInstanceFactory : public UFactory public: UOpenPypePublishInstanceFactory(const FObjectInitializer& ObjectInitializer); - virtual UObject* FactoryCreateNew(UClass* Class, UObject* InParent, FName Name, EObjectFlags Flags, UObject* Context, FFeedbackContext* Warn) override; + virtual UObject* FactoryCreateNew(UClass* InClass, UObject* InParent, FName InName, EObjectFlags Flags, UObject* Context, FFeedbackContext* Warn) override; virtual bool ShouldShowInNewMenu() const override; -}; \ No newline at end of file +}; diff --git a/openpype/hosts/unreal/integration/UE_5.0/Source/OpenPype/OpenPype.Build.cs b/openpype/hosts/unreal/integration/UE_5.0/Source/OpenPype/OpenPype.Build.cs index fcfd268234..67db648b2a 100644 --- a/openpype/hosts/unreal/integration/UE_5.0/Source/OpenPype/OpenPype.Build.cs +++ b/openpype/hosts/unreal/integration/UE_5.0/Source/OpenPype/OpenPype.Build.cs 
@@ -6,7 +6,11 @@ public class OpenPype : ModuleRules { public OpenPype(ReadOnlyTargetRules Target) : base(Target) { + DefaultBuildSettings = BuildSettingsVersion.V2; + bLegacyPublicIncludePaths = false; + ShadowVariableWarningLevel = WarningLevel.Error; PCHUsage = ModuleRules.PCHUsageMode.UseExplicitOrSharedPCHs; + IncludeOrderVersion = EngineIncludeOrderVersion.Unreal5_0; PublicIncludePaths.AddRange( new string[] { diff --git a/openpype/hosts/unreal/integration/UE_5.0/Source/OpenPype/Private/AssetContainer.cpp b/openpype/hosts/unreal/integration/UE_5.0/Source/OpenPype/Private/AssetContainer.cpp index c766f87a8e..61e563f729 100644 --- a/openpype/hosts/unreal/integration/UE_5.0/Source/OpenPype/Private/AssetContainer.cpp +++ b/openpype/hosts/unreal/integration/UE_5.0/Source/OpenPype/Private/AssetContainer.cpp @@ -1,7 +1,7 @@ // Fill out your copyright notice in the Description page of Project Settings. #include "AssetContainer.h" -#include "AssetRegistryModule.h" +#include "AssetRegistry/AssetRegistryModule.h" #include "Misc/PackageName.h" #include "Engine.h" #include "Containers/UnrealString.h" @@ -30,8 +30,8 @@ void UAssetContainer::OnAssetAdded(const FAssetData& AssetData) // get asset path and class FString assetPath = AssetData.GetFullName(); - FString assetFName = AssetData.AssetClass.ToString(); - + FString assetFName = AssetData.AssetClassPath.ToString(); + UE_LOG(LogTemp, Log, TEXT("asset name %s"), *assetFName); // split path assetPath.ParseIntoArray(split, TEXT(" "), true); @@ -60,7 +60,7 @@ void UAssetContainer::OnAssetRemoved(const FAssetData& AssetData) // get asset path and class FString assetPath = AssetData.GetFullName(); - FString assetFName = AssetData.AssetClass.ToString(); + FString assetFName = AssetData.AssetClassPath.ToString(); // split path assetPath.ParseIntoArray(split, TEXT(" "), true); @@ -93,7 +93,7 @@ void UAssetContainer::OnAssetRenamed(const FAssetData& AssetData, const FString& // get asset path and class FString assetPath = 
AssetData.GetFullName(); - FString assetFName = AssetData.AssetClass.ToString(); + FString assetFName = AssetData.AssetClassPath.ToString(); // split path assetPath.ParseIntoArray(split, TEXT(" "), true); diff --git a/openpype/hosts/unreal/integration/UE_5.0/Source/OpenPype/Private/OpenPypePublishInstance.cpp b/openpype/hosts/unreal/integration/UE_5.0/Source/OpenPype/Private/OpenPypePublishInstance.cpp index 4f1e846c0b..f5eb6f9e70 100644 --- a/openpype/hosts/unreal/integration/UE_5.0/Source/OpenPype/Private/OpenPypePublishInstance.cpp +++ b/openpype/hosts/unreal/integration/UE_5.0/Source/OpenPype/Private/OpenPypePublishInstance.cpp @@ -1,108 +1,152 @@ #pragma once #include "OpenPypePublishInstance.h" -#include "AssetRegistryModule.h" +#include "AssetRegistry/AssetRegistryModule.h" +#include "AssetToolsModule.h" +#include "Framework/Notifications/NotificationManager.h" +#include "Widgets/Notifications/SNotificationList.h" +//Moves all the invalid pointers to the end to prepare them for the shrinking +#define REMOVE_INVALID_ENTRIES(VAR) VAR.CompactStable(); \ + VAR.Shrink(); UOpenPypePublishInstance::UOpenPypePublishInstance(const FObjectInitializer& ObjectInitializer) - : UObject(ObjectInitializer) + : UPrimaryDataAsset(ObjectInitializer) { - FAssetRegistryModule& AssetRegistryModule = FModuleManager::LoadModuleChecked("AssetRegistry"); - FString path = UOpenPypePublishInstance::GetPathName(); + const FAssetRegistryModule& AssetRegistryModule = FModuleManager::LoadModuleChecked< + FAssetRegistryModule>("AssetRegistry"); + + FString Left, Right; + GetPathName().Split(GetName(), &Left, &Right); + FARFilter Filter; - Filter.PackagePaths.Add(FName(*path)); + Filter.PackagePaths.Emplace(FName(Left)); - AssetRegistryModule.Get().OnAssetAdded().AddUObject(this, &UOpenPypePublishInstance::OnAssetAdded); + TArray FoundAssets; + AssetRegistryModule.GetRegistry().GetAssets(Filter, FoundAssets); + + for (const FAssetData& AssetData : FoundAssets) + OnAssetCreated(AssetData); + 
+ REMOVE_INVALID_ENTRIES(AssetDataInternal) + REMOVE_INVALID_ENTRIES(AssetDataExternal) + + AssetRegistryModule.Get().OnAssetAdded().AddUObject(this, &UOpenPypePublishInstance::OnAssetCreated); AssetRegistryModule.Get().OnAssetRemoved().AddUObject(this, &UOpenPypePublishInstance::OnAssetRemoved); - AssetRegistryModule.Get().OnAssetRenamed().AddUObject(this, &UOpenPypePublishInstance::OnAssetRenamed); + AssetRegistryModule.Get().OnAssetUpdated().AddUObject(this, &UOpenPypePublishInstance::OnAssetUpdated); + + } -void UOpenPypePublishInstance::OnAssetAdded(const FAssetData& AssetData) +void UOpenPypePublishInstance::OnAssetCreated(const FAssetData& InAssetData) { TArray split; - // get directory of current container - FString selfFullPath = UOpenPypePublishInstance::GetPathName(); - FString selfDir = FPackageName::GetLongPackagePath(*selfFullPath); + const TObjectPtr Asset = InAssetData.GetAsset(); - // get asset path and class - FString assetPath = AssetData.GetFullName(); - FString assetFName = AssetData.AssetClass.ToString(); - - // split path - assetPath.ParseIntoArray(split, TEXT(" "), true); - - FString assetDir = FPackageName::GetLongPackagePath(*split[1]); - - // take interest only in paths starting with path of current container - if (assetDir.StartsWith(*selfDir)) + if (!IsValid(Asset)) { - // exclude self - if (assetFName != "OpenPypePublishInstance") + UE_LOG(LogAssetData, Warning, TEXT("Asset \"%s\" is not valid! 
Skipping the addition."), + *InAssetData.GetObjectPathString()); + return; + } + + const bool result = IsUnderSameDir(Asset) && Cast(Asset) == nullptr; + + if (result) + { + if (AssetDataInternal.Emplace(Asset).IsValidId()) { - assets.Add(assetPath); - UE_LOG(LogTemp, Log, TEXT("%s: asset added to %s"), *selfFullPath, *selfDir); + UE_LOG(LogTemp, Log, TEXT("Added an Asset to PublishInstance - Publish Instance: %s, Asset %s"), + *this->GetName(), *Asset->GetName()); } } } -void UOpenPypePublishInstance::OnAssetRemoved(const FAssetData& AssetData) +void UOpenPypePublishInstance::OnAssetRemoved(const FAssetData& InAssetData) { - TArray split; - - // get directory of current container - FString selfFullPath = UOpenPypePublishInstance::GetPathName(); - FString selfDir = FPackageName::GetLongPackagePath(*selfFullPath); - - // get asset path and class - FString assetPath = AssetData.GetFullName(); - FString assetFName = AssetData.AssetClass.ToString(); - - // split path - assetPath.ParseIntoArray(split, TEXT(" "), true); - - FString assetDir = FPackageName::GetLongPackagePath(*split[1]); - - // take interest only in paths starting with path of current container - FString path = UOpenPypePublishInstance::GetPathName(); - FString lpp = FPackageName::GetLongPackagePath(*path); - - if (assetDir.StartsWith(*selfDir)) + if (Cast(InAssetData.GetAsset()) == nullptr) { - // exclude self - if (assetFName != "OpenPypePublishInstance") + if (AssetDataInternal.Contains(nullptr)) { - // UE_LOG(LogTemp, Warning, TEXT("%s: asset removed"), *lpp); - assets.Remove(assetPath); + AssetDataInternal.Remove(nullptr); + REMOVE_INVALID_ENTRIES(AssetDataInternal) + } + else + { + AssetDataExternal.Remove(nullptr); + REMOVE_INVALID_ENTRIES(AssetDataExternal) } } } -void UOpenPypePublishInstance::OnAssetRenamed(const FAssetData& AssetData, const FString& str) +void UOpenPypePublishInstance::OnAssetUpdated(const FAssetData& InAssetData) { - TArray split; + REMOVE_INVALID_ENTRIES(AssetDataInternal); + 
REMOVE_INVALID_ENTRIES(AssetDataExternal); +} - // get directory of current container - FString selfFullPath = UOpenPypePublishInstance::GetPathName(); - FString selfDir = FPackageName::GetLongPackagePath(*selfFullPath); +bool UOpenPypePublishInstance::IsUnderSameDir(const TObjectPtr<UObject>& InAsset) const +{ + FString ThisLeft, ThisRight; + this->GetPathName().Split(this->GetName(), &ThisLeft, &ThisRight); - // get asset path and class - FString assetPath = AssetData.GetFullName(); - FString assetFName = AssetData.AssetClass.ToString(); + return InAsset->GetPathName().StartsWith(ThisLeft); +} - // split path - assetPath.ParseIntoArray(split, TEXT(" "), true); +#if WITH_EDITOR - FString assetDir = FPackageName::GetLongPackagePath(*split[1]); - if (assetDir.StartsWith(*selfDir)) +void UOpenPypePublishInstance::SendNotification(const FString& Text) const +{ + FNotificationInfo Info{FText::FromString(Text)}; + + Info.bFireAndForget = true; + Info.bUseLargeFont = false; + Info.bUseThrobber = false; + Info.bUseSuccessFailIcons = false; + Info.ExpireDuration = 4.f; + Info.FadeOutDuration = 2.f; + + FSlateNotificationManager::Get().AddNotification(Info); + + UE_LOG(LogAssetData, Warning, + TEXT( + "Removed duplicated asset from the AssetDataExternal in Container \"%s\", Asset is already included in the AssetDataInternal!"
+ ), *GetName() + ); +} + + +void UOpenPypePublishInstance::PostEditChangeProperty(FPropertyChangedEvent& PropertyChangedEvent) +{ + Super::PostEditChangeProperty(PropertyChangedEvent); + + if (PropertyChangedEvent.ChangeType == EPropertyChangeType::ValueSet && + PropertyChangedEvent.Property->GetFName() == GET_MEMBER_NAME_CHECKED( + UOpenPypePublishInstance, AssetDataExternal)) { - // exclude self - if (assetFName != "AssetContainer") - { - assets.Remove(str); - assets.Add(assetPath); - // UE_LOG(LogTemp, Warning, TEXT("%s: asset renamed %s"), *lpp, *str); + // Check for duplicated assets + for (const auto& Asset : AssetDataInternal) + { + if (AssetDataExternal.Contains(Asset)) + { + AssetDataExternal.Remove(Asset); + return SendNotification("You are not allowed to add assets into AssetDataExternal which are already included in AssetDataInternal!"); + } + + } + + // Check if no UOpenPypePublishInstance type assets are included + for (const auto& Asset : AssetDataExternal) + { + if (Cast<UOpenPypePublishInstance>(Asset.Get()) != nullptr) + { + AssetDataExternal.Remove(Asset); + return SendNotification("You are not allowed to add publish instances!"); + } } } } + +#endif diff --git a/openpype/hosts/unreal/integration/UE_5.0/Source/OpenPype/Private/OpenPypePublishInstanceFactory.cpp b/openpype/hosts/unreal/integration/UE_5.0/Source/OpenPype/Private/OpenPypePublishInstanceFactory.cpp index e61964c689..9b26da7fa4 100644 --- a/openpype/hosts/unreal/integration/UE_5.0/Source/OpenPype/Private/OpenPypePublishInstanceFactory.cpp +++ b/openpype/hosts/unreal/integration/UE_5.0/Source/OpenPype/Private/OpenPypePublishInstanceFactory.cpp @@ -9,10 +9,10 @@ UOpenPypePublishInstanceFactory::UOpenPypePublishInstanceFactory(const FObjectIn bEditorImport = true; } -UObject* UOpenPypePublishInstanceFactory::FactoryCreateNew(UClass* Class, UObject* InParent, FName Name, EObjectFlags Flags, UObject* Context, FFeedbackContext* Warn) +UObject* UOpenPypePublishInstanceFactory::FactoryCreateNew(UClass* InClass,
UObject* InParent, FName InName, EObjectFlags Flags, UObject* Context, FFeedbackContext* Warn) { - UOpenPypePublishInstance* OpenPypePublishInstance = NewObject<UOpenPypePublishInstance>(InParent, Class, Name, Flags); - return OpenPypePublishInstance; + check(InClass->IsChildOf(UOpenPypePublishInstance::StaticClass())); + return NewObject<UOpenPypePublishInstance>(InParent, InClass, InName, Flags); } bool UOpenPypePublishInstanceFactory::ShouldShowInNewMenu() const { diff --git a/openpype/hosts/unreal/integration/UE_5.0/Source/OpenPype/Public/AssetContainer.h b/openpype/hosts/unreal/integration/UE_5.0/Source/OpenPype/Public/AssetContainer.h index 3c2a360c78..2c06e59d6f 100644 --- a/openpype/hosts/unreal/integration/UE_5.0/Source/OpenPype/Public/AssetContainer.h +++ b/openpype/hosts/unreal/integration/UE_5.0/Source/OpenPype/Public/AssetContainer.h @@ -5,7 +5,7 @@ #include "CoreMinimal.h" #include "UObject/NoExportTypes.h" #include "Engine/AssetUserData.h" -#include "AssetData.h" +#include "AssetRegistry/AssetData.h" #include "AssetContainer.generated.h" /** diff --git a/openpype/hosts/unreal/integration/UE_5.0/Source/OpenPype/Public/OpenPypePublishInstance.h b/openpype/hosts/unreal/integration/UE_5.0/Source/OpenPype/Public/OpenPypePublishInstance.h index 0a27a078d7..e9d94aecfc 100644 --- a/openpype/hosts/unreal/integration/UE_5.0/Source/OpenPype/Public/OpenPypePublishInstance.h +++ b/openpype/hosts/unreal/integration/UE_5.0/Source/OpenPype/Public/OpenPypePublishInstance.h @@ -5,17 +5,92 @@ UCLASS(Blueprintable) -class OPENPYPE_API UOpenPypePublishInstance : public UObject +class OPENPYPE_API UOpenPypePublishInstance : public UPrimaryDataAsset { - GENERATED_BODY() - + GENERATED_UCLASS_BODY() public: - UOpenPypePublishInstance(const FObjectInitializer& ObjectInitalizer); + /** + * Retrieves all the assets which are monitored by the Publish Instance (Monitors assets in the directory which is + * placed in) + * + * @return - Set of UObjects. Careful! They are returning raw pointers.
Seems like an issue in UE5 + */ + UFUNCTION(BlueprintCallable, BlueprintPure) + TSet<UObject*> GetInternalAssets() const + { + //For some reason it can only return Raw Pointers? Seems like an issue which they haven't fixed. + TSet<UObject*> ResultSet; + + for (const auto& Asset : AssetDataInternal) + ResultSet.Add(Asset.LoadSynchronous()); + + return ResultSet; + } + + /** + * Retrieves all the assets which have been added manually by the Publish Instance + * + * @return - TSet of assets (UObjects). Careful! They are returning raw pointers. Seems like an issue in UE5 + */ + UFUNCTION(BlueprintCallable, BlueprintPure) + TSet<UObject*> GetExternalAssets() const + { + //For some reason it can only return Raw Pointers? Seems like an issue which they haven't fixed. + TSet<UObject*> ResultSet; + + for (const auto& Asset : AssetDataExternal) + ResultSet.Add(Asset.LoadSynchronous()); + + return ResultSet; + } + + /** + * Function for returning all the assets in the container combined. + * + * @return Returns all the internal and externally added assets into one set (TSet of UObjects). Careful! They are + * returning raw pointers. Seems like an issue in UE5 + * + * @attention If the bAddExternalAssets variable is false, external assets won't be included! + */ + UFUNCTION(BlueprintCallable, BlueprintPure) + TSet<UObject*> GetAllAssets() const + { + const TSet<TSoftObjectPtr<UObject>>& IteratedSet = bAddExternalAssets ? AssetDataInternal.Union(AssetDataExternal) : AssetDataInternal; + + //Create a new TSet only with raw pointers.
+ TSet<UObject*> ResultSet; + + for (auto& Asset : IteratedSet) + ResultSet.Add(Asset.LoadSynchronous()); + + return ResultSet; + } - UPROPERTY(EditAnywhere, BlueprintReadOnly) - TArray<FString> assets; private: - void OnAssetAdded(const FAssetData& AssetData); - void OnAssetRemoved(const FAssetData& AssetData); - void OnAssetRenamed(const FAssetData& AssetData, const FString& str); -}; \ No newline at end of file + UPROPERTY(VisibleAnywhere, Category="Assets") + TSet<TSoftObjectPtr<UObject>> AssetDataInternal; + + /** + * This property allows the instance to include other assets from any other directory than what it's currently + * monitoring. + * @attention assets have to be added manually! They are not automatically registered or added! + */ + UPROPERTY(EditAnywhere, Category="Assets") + bool bAddExternalAssets = false; + + UPROPERTY(EditAnywhere, Category="Assets", meta=(EditCondition="bAddExternalAssets")) + TSet<TSoftObjectPtr<UObject>> AssetDataExternal; + + void OnAssetCreated(const FAssetData& InAssetData); + void OnAssetRemoved(const FAssetData& InAssetData); + void OnAssetUpdated(const FAssetData& InAssetData); + + bool IsUnderSameDir(const TObjectPtr<UObject>& InAsset) const; + +#if WITH_EDITOR + + void SendNotification(const FString& Text) const; + virtual void PostEditChangeProperty(FPropertyChangedEvent& PropertyChangedEvent) override; + +#endif +}; diff --git a/openpype/hosts/unreal/integration/UE_5.0/Source/OpenPype/Public/OpenPypePublishInstanceFactory.h b/openpype/hosts/unreal/integration/UE_5.0/Source/OpenPype/Public/OpenPypePublishInstanceFactory.h index a2b3abe13e..7d2c77fe6e 100644 --- a/openpype/hosts/unreal/integration/UE_5.0/Source/OpenPype/Public/OpenPypePublishInstanceFactory.h +++ b/openpype/hosts/unreal/integration/UE_5.0/Source/OpenPype/Public/OpenPypePublishInstanceFactory.h @@ -14,6 +14,6 @@ class OPENPYPE_API UOpenPypePublishInstanceFactory : public UFactory public: UOpenPypePublishInstanceFactory(const FObjectInitializer& ObjectInitializer); - virtual UObject* FactoryCreateNew(UClass* Class, UObject*
InParent, FName Name, EObjectFlags Flags, UObject* Context, FFeedbackContext* Warn) override; + virtual UObject* FactoryCreateNew(UClass* InClass, UObject* InParent, FName InName, EObjectFlags Flags, UObject* Context, FFeedbackContext* Warn) override; virtual bool ShouldShowInNewMenu() const override; -}; \ No newline at end of file +}; diff --git a/openpype/hosts/unreal/lib.py b/openpype/hosts/unreal/lib.py index d02c6de357..095f5e414b 100644 --- a/openpype/hosts/unreal/lib.py +++ b/openpype/hosts/unreal/lib.py @@ -50,7 +50,10 @@ def get_engine_versions(env=None): # environment variable not set pass except OSError: - # specified directory doesn't exists + # specified directory doesn't exist + pass + except StopIteration: + # specified directory doesn't exist pass # if we've got something, terminate auto-detection process diff --git a/openpype/hosts/unreal/plugins/load/load_alembic_animation.py b/openpype/hosts/unreal/plugins/load/load_alembic_animation.py new file mode 100644 index 0000000000..496b6056ea --- /dev/null +++ b/openpype/hosts/unreal/plugins/load/load_alembic_animation.py @@ -0,0 +1,162 @@ +# -*- coding: utf-8 -*- +"""Load Alembic Animation.""" +import os + +from openpype.pipeline import ( + get_representation_path, + AVALON_CONTAINER_ID +) +from openpype.hosts.unreal.api import plugin +from openpype.hosts.unreal.api import pipeline as unreal_pipeline +import unreal # noqa + + +class AnimationAlembicLoader(plugin.Loader): + """Load Unreal SkeletalMesh from Alembic""" + + families = ["animation"] + label = "Import Alembic Animation" + representations = ["abc"] + icon = "cube" + color = "orange" + + def get_task(self, filename, asset_dir, asset_name, replace): + task = unreal.AssetImportTask() + options = unreal.AbcImportSettings() + sm_settings = unreal.AbcStaticMeshSettings() + conversion_settings = unreal.AbcConversionSettings( + preset=unreal.AbcConversionPreset.CUSTOM, + flip_u=False, flip_v=False, + rotation=[0.0, 0.0, 0.0], + scale=[1.0, 1.0, 
-1.0]) + + task.set_editor_property('filename', filename) + task.set_editor_property('destination_path', asset_dir) + task.set_editor_property('destination_name', asset_name) + task.set_editor_property('replace_existing', replace) + task.set_editor_property('automated', True) + task.set_editor_property('save', True) + + options.set_editor_property( + 'import_type', unreal.AlembicImportType.SKELETAL) + + options.static_mesh_settings = sm_settings + options.conversion_settings = conversion_settings + task.options = options + + return task + + def load(self, context, name, namespace, data): + """Load and containerise representation into Content Browser. + + This is a two-step process. First, the Alembic file is imported to a + temporary path, then `containerise()` is called on it - this moves + all content to a new directory, creates an AssetContainer there and + imprints it with metadata. This will mark this path as container. + + Args: + context (dict): application context + name (str): subset name + namespace (str): in Unreal this is basically path to container. + This is not passed here, so namespace is set + by `containerise()` because only then we know + real path. + data (dict): Those would be data to be imprinted. This is not used + now, data are imprinted by `containerise()`.
+ + Returns: + list(str): list of container content + """ + + # Create directory for asset and openpype container + root = "/Game/OpenPype/Assets" + asset = context.get('asset').get('name') + suffix = "_CON" + if asset: + asset_name = "{}_{}".format(asset, name) + else: + asset_name = "{}".format(name) + version = context.get('version').get('name') + + tools = unreal.AssetToolsHelpers().get_asset_tools() + asset_dir, container_name = tools.create_unique_asset_name( + f"{root}/{asset}/{name}_v{version:03d}", suffix="") + + container_name += suffix + + if not unreal.EditorAssetLibrary.does_directory_exist(asset_dir): + unreal.EditorAssetLibrary.make_directory(asset_dir) + + task = self.get_task(self.fname, asset_dir, asset_name, False) + + asset_tools = unreal.AssetToolsHelpers.get_asset_tools() + asset_tools.import_asset_tasks([task]) + + # Create Asset Container + unreal_pipeline.create_container( + container=container_name, path=asset_dir) + + data = { + "schema": "openpype:container-2.0", + "id": AVALON_CONTAINER_ID, + "asset": asset, + "namespace": asset_dir, + "container_name": container_name, + "asset_name": asset_name, + "loader": str(self.__class__.__name__), + "representation": context["representation"]["_id"], + "parent": context["representation"]["parent"], + "family": context["representation"]["context"]["family"] + } + unreal_pipeline.imprint( + "{}/{}".format(asset_dir, container_name), data) + + asset_content = unreal.EditorAssetLibrary.list_assets( + asset_dir, recursive=True, include_folder=True + ) + + for a in asset_content: + unreal.EditorAssetLibrary.save_asset(a) + + return asset_content + + def update(self, container, representation): + name = container["asset_name"] + source_path = get_representation_path(representation) + destination_path = container["namespace"] + + task = self.get_task(source_path, destination_path, name, True) + + # do import fbx and replace existing data + asset_tools = unreal.AssetToolsHelpers.get_asset_tools() + 
asset_tools.import_asset_tasks([task]) + + container_path = f"{container['namespace']}/{container['objectName']}" + + # update metadata + unreal_pipeline.imprint( + container_path, + { + "representation": str(representation["_id"]), + "parent": str(representation["parent"]) + }) + + asset_content = unreal.EditorAssetLibrary.list_assets( + destination_path, recursive=True, include_folder=True + ) + + for a in asset_content: + unreal.EditorAssetLibrary.save_asset(a) + + def remove(self, container): + path = container["namespace"] + parent_path = os.path.dirname(path) + + unreal.EditorAssetLibrary.delete_directory(path) + + asset_content = unreal.EditorAssetLibrary.list_assets( + parent_path, recursive=False + ) + + if len(asset_content) == 0: + unreal.EditorAssetLibrary.delete_directory(parent_path) diff --git a/openpype/hosts/unreal/plugins/load/load_alembic_geometrycache.py b/openpype/hosts/unreal/plugins/load/load_geometrycache_abc.py similarity index 100% rename from openpype/hosts/unreal/plugins/load/load_alembic_geometrycache.py rename to openpype/hosts/unreal/plugins/load/load_geometrycache_abc.py diff --git a/openpype/hosts/unreal/plugins/load/load_alembic_skeletalmesh.py b/openpype/hosts/unreal/plugins/load/load_skeletalmesh_abc.py similarity index 99% rename from openpype/hosts/unreal/plugins/load/load_alembic_skeletalmesh.py rename to openpype/hosts/unreal/plugins/load/load_skeletalmesh_abc.py index 9fe5f3ab4b..e316d255e9 100644 --- a/openpype/hosts/unreal/plugins/load/load_alembic_skeletalmesh.py +++ b/openpype/hosts/unreal/plugins/load/load_skeletalmesh_abc.py @@ -14,7 +14,7 @@ import unreal # noqa class SkeletalMeshAlembicLoader(plugin.Loader): """Load Unreal SkeletalMesh from Alembic""" - families = ["pointcache"] + families = ["pointcache", "skeletalMesh"] label = "Import Alembic Skeletal Mesh" representations = ["abc"] icon = "cube" diff --git a/openpype/hosts/unreal/plugins/load/load_rig.py 
b/openpype/hosts/unreal/plugins/load/load_skeletalmesh_fbx.py similarity index 100% rename from openpype/hosts/unreal/plugins/load/load_rig.py rename to openpype/hosts/unreal/plugins/load/load_skeletalmesh_fbx.py diff --git a/openpype/hosts/unreal/plugins/load/load_alembic_staticmesh.py b/openpype/hosts/unreal/plugins/load/load_staticmesh_abc.py similarity index 99% rename from openpype/hosts/unreal/plugins/load/load_alembic_staticmesh.py rename to openpype/hosts/unreal/plugins/load/load_staticmesh_abc.py index a5b9cbd1fc..c7841cef53 100644 --- a/openpype/hosts/unreal/plugins/load/load_alembic_staticmesh.py +++ b/openpype/hosts/unreal/plugins/load/load_staticmesh_abc.py @@ -14,7 +14,7 @@ import unreal # noqa class StaticMeshAlembicLoader(plugin.Loader): """Load Unreal StaticMesh from Alembic""" - families = ["model"] + families = ["model", "staticMesh"] label = "Import Alembic Static Mesh" representations = ["abc"] icon = "cube" diff --git a/openpype/hosts/unreal/plugins/load/load_staticmeshfbx.py b/openpype/hosts/unreal/plugins/load/load_staticmesh_fbx.py similarity index 100% rename from openpype/hosts/unreal/plugins/load/load_staticmeshfbx.py rename to openpype/hosts/unreal/plugins/load/load_staticmesh_fbx.py diff --git a/openpype/hosts/unreal/plugins/publish/collect_instances.py b/openpype/hosts/unreal/plugins/publish/collect_instances.py index 2f604cb322..1f25cbde7d 100644 --- a/openpype/hosts/unreal/plugins/publish/collect_instances.py +++ b/openpype/hosts/unreal/plugins/publish/collect_instances.py @@ -3,6 +3,8 @@ import ast import unreal # noqa import pyblish.api +from openpype.hosts.unreal.api.pipeline import UNREAL_VERSION +from openpype.pipeline.publish import KnownPublishError class CollectInstances(pyblish.api.ContextPlugin): @@ -23,8 +25,10 @@ class CollectInstances(pyblish.api.ContextPlugin): def process(self, context): ar = unreal.AssetRegistryHelpers.get_asset_registry() - instance_containers = ar.get_assets_by_class( - "OpenPypePublishInstance", 
True) + class_name = ["/Script/OpenPype", + "AssetContainer"] if UNREAL_VERSION.major == 5 and \ + UNREAL_VERSION.minor > 0 else "OpenPypePublishInstance" # noqa + instance_containers = ar.get_assets_by_class(class_name, True) for container_data in instance_containers: asset = container_data.get_asset() @@ -32,9 +36,8 @@ class CollectInstances(pyblish.api.ContextPlugin): data["objectName"] = container_data.asset_name # convert to strings data = {str(key): str(value) for (key, value) in data.items()} - assert data.get("family"), ( - "instance has no family" - ) + if not data.get("family"): + raise KnownPublishError("instance has no family") # content of container members = ast.literal_eval(data.get("members")) diff --git a/openpype/lib/events.py b/openpype/lib/events.py index 747761fb3e..096201312f 100644 --- a/openpype/lib/events.py +++ b/openpype/lib/events.py @@ -74,22 +74,52 @@ class EventCallback(object): "Registered callback is not callable. \"{}\"" ).format(str(func))) - # Collect additional data about function - # - name - # - path - # - if expect argument or not + # Collect function name and path to file for logging func_name = func.__name__ func_path = os.path.abspath(inspect.getfile(func)) + + # Get expected arguments from function spec + # - positional arguments are always preferred + expect_args = False + expect_kwargs = False + fake_event = "fake" if hasattr(inspect, "signature"): + # Python 3 using 'Signature' object where we try to bind arg + # or kwarg. Using signature is recommended approach based on + # documentation. 
sig = inspect.signature(func) - expect_args = len(sig.parameters) > 0 + try: + sig.bind(fake_event) + expect_args = True + except TypeError: + pass + + try: + sig.bind(event=fake_event) + expect_kwargs = True + except TypeError: + pass + else: - expect_args = len(inspect.getargspec(func)[0]) > 0 + # In Python 2 'signature' is not available so 'getcallargs' is used + # - 'getcallargs' is marked as deprecated since Python 3.0 + try: + inspect.getcallargs(func, fake_event) + expect_args = True + except TypeError: + pass + + try: + inspect.getcallargs(func, event=fake_event) + expect_kwargs = True + except TypeError: + pass self._func_ref = func_ref self._func_name = func_name self._func_path = func_path self._expect_args = expect_args + self._expect_kwargs = expect_kwargs self._ref_valid = func_ref is not None self._enabled = True @@ -157,6 +187,10 @@ class EventCallback(object): try: if self._expect_args: callback(event) + + elif self._expect_kwargs: + callback(event=event) + else: callback() diff --git a/openpype/lib/file_transaction.py b/openpype/lib/file_transaction.py index 1626bec6b6..f265b8815c 100644 --- a/openpype/lib/file_transaction.py +++ b/openpype/lib/file_transaction.py @@ -14,9 +14,9 @@ else: class FileTransaction(object): - """ + """File transaction with rollback options. - The file transaction is a three step process. + The file transaction is a three-step process. 1) Rename any existing files to a "temporary backup" during `process()` 2) Copy the files to final destination during `process()` @@ -39,14 +39,12 @@ class FileTransaction(object): Warning: Any folders created during the transfer will not be removed. - """ MODE_COPY = 0 MODE_HARDLINK = 1 def __init__(self, log=None): - if log is None: log = logging.getLogger("FileTransaction") @@ -63,49 +61,64 @@ class FileTransaction(object): self._backup_to_original = {} def add(self, src, dst, mode=MODE_COPY): - """Add a new file to transfer queue""" + """Add a new file to transfer queue. 
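The callback-introspection change in `openpype/lib/events.py` above probes whether a registered callback wants the event positionally or as an `event=` keyword. The same logic can be sketched as a standalone helper; `expected_event_usage` is a hypothetical name used only for illustration, not part of OpenPype:

```python
import inspect


def expected_event_usage(func):
    # Try to bind a fake positional argument, then a fake 'event'
    # keyword argument, and remember which binding the signature
    # accepts - mirroring the Python 3 branch of EventCallback.
    fake_event = "fake"
    sig = inspect.signature(func)

    expect_args = False
    expect_kwargs = False
    try:
        sig.bind(fake_event)
        expect_args = True
    except TypeError:
        pass

    try:
        sig.bind(event=fake_event)
        expect_kwargs = True
    except TypeError:
        pass

    return expect_args, expect_kwargs
```

A callback defined as `def on_event(event)` reports `(True, True)`, while `def on_event(**kwargs)` reports `(False, True)` and would therefore be invoked as `callback(event=event)`.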
+ + Args: + src (str): Source path. + dst (str): Destination path. + mode (MODE_COPY, MODE_HARDLINK): Transfer mode. + """ + opts = {"mode": mode} - src = os.path.abspath(src) - dst = os.path.abspath(dst) + src = os.path.normpath(os.path.abspath(src)) + dst = os.path.normpath(os.path.abspath(dst)) if dst in self._transfers: queued_src = self._transfers[dst][0] if src == queued_src: - self.log.debug("File transfer was already " - "in queue: {} -> {}".format(src, dst)) + self.log.debug( + "File transfer was already in queue: {} -> {}".format( + src, dst)) return else: self.log.warning("File transfer in queue replaced..") - self.log.debug("Removed from queue: " - "{} -> {}".format(queued_src, dst)) - self.log.debug("Added to queue: {} -> {}".format(src, dst)) + self.log.debug( + "Removed from queue: {} -> {} replaced by {} -> {}".format( + queued_src, dst, src, dst)) self._transfers[dst] = (src, opts) def process(self): - # Backup any existing files - for dst in self._transfers.keys(): - if os.path.exists(dst): - # Backup original file - # todo: add timestamp or uuid to ensure unique - backup = dst + ".bak" - self._backup_to_original[backup] = dst - self.log.debug("Backup existing file: " - "{} -> {}".format(dst, backup)) - os.rename(dst, backup) + for dst, (src, _) in self._transfers.items(): + if dst == src or not os.path.exists(dst): + continue + + # Backup original file + # todo: add timestamp or uuid to ensure unique + backup = dst + ".bak" + self._backup_to_original[backup] = dst + self.log.debug( + "Backup existing file: {} -> {}".format(dst, backup)) + os.rename(dst, backup) # Copy the files to transfer for dst, (src, opts) in self._transfers.items(): + if dst == src: + self.log.debug( + "Source and destination are the same file {} -> {}".format( + src, dst)) + continue + + self._create_folder_for_file(dst) if opts["mode"] == self.MODE_COPY: self.log.debug("Copying file ...
{} -> {}".format(src, dst)) copyfile(src, dst) elif opts["mode"] == self.MODE_HARDLINK: - self.log.debug("Hardlinking file ... {} -> {}".format(src, - dst)) + self.log.debug("Hardlinking file ... {} -> {}".format( + src, dst)) create_hard_link(src, dst) self._transferred.append(dst) @@ -116,23 +129,21 @@ class FileTransaction(object): try: os.remove(backup) except OSError: - self.log.error("Failed to remove backup file: " - "{}".format(backup), - exc_info=True) + self.log.error( + "Failed to remove backup file: {}".format(backup), + exc_info=True) def rollback(self): - errors = 0 - # Rollback any transferred files for path in self._transferred: try: os.remove(path) except OSError: errors += 1 - self.log.error("Failed to rollback created file: " - "{}".format(path), - exc_info=True) + self.log.error( + "Failed to rollback created file: {}".format(path), + exc_info=True) # Rollback the backups for backup, original in self._backup_to_original.items(): @@ -140,13 +151,15 @@ class FileTransaction(object): os.rename(backup, original) except OSError: errors += 1 - self.log.error("Failed to restore original file: " - "{} -> {}".format(backup, original), - exc_info=True) + self.log.error( + "Failed to restore original file: {} -> {}".format( + backup, original), + exc_info=True) if errors: - self.log.error("{} errors occurred during " - "rollback.".format(errors), exc_info=True) + self.log.error( + "{} errors occurred during rollback.".format(errors), + exc_info=True) six.reraise(*sys.exc_info()) @property diff --git a/openpype/lib/path_templates.py b/openpype/lib/path_templates.py index b160054e38..0f99efb430 100644 --- a/openpype/lib/path_templates.py +++ b/openpype/lib/path_templates.py @@ -422,7 +422,7 @@ class TemplateResult(str): cls = self.__class__ return cls( - os.path.normpath(self), + os.path.normpath(self.replace("\\", "/")), self.template, self.solved, self.used_values, diff --git a/openpype/lib/transcoding.py b/openpype/lib/transcoding.py index 
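The `path_templates.py` hunk above replaces backslashes before calling `os.path.normpath`. A minimal sketch shows why that order matters; `normalized_template_result` is a hypothetical helper name, and `posixpath` is used here only so the behavior is the same on every platform (the real code uses `os.path.normpath`):

```python
import posixpath


def normalized_template_result(path):
    # On POSIX, normpath leaves backslashes untouched, so a template
    # result mixing Windows and POSIX separators would not collapse
    # correctly without converting backslashes first.
    return posixpath.normpath(path.replace("\\", "/"))
```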
0bfccd3443..57279d0380 100644 --- a/openpype/lib/transcoding.py +++ b/openpype/lib/transcoding.py @@ -77,26 +77,38 @@ def get_transcode_temp_directory(): ) -def get_oiio_info_for_input(filepath, logger=None): +def get_oiio_info_for_input(filepath, logger=None, subimages=False): """Call oiiotool to get information about input and return stdout. Stdout should contain xml format string. """ args = [ - get_oiio_tools_path(), "--info", "-v", "-i:infoformat=xml", filepath + get_oiio_tools_path(), + "--info", + "-v" ] + if subimages: + args.append("-a") + + args.extend(["-i:infoformat=xml", filepath]) + output = run_subprocess(args, logger=logger) output = output.replace("\r\n", "\n") xml_started = False + subimages_lines = [] lines = [] for line in output.split("\n"): if not xml_started: if not line.startswith("<"): continue xml_started = True + if xml_started: lines.append(line) + if line == "</ImageSpec>": + subimages_lines.append(lines) + lines = [] if not xml_started: raise ValueError( @@ -105,12 +117,19 @@ ) ) - xml_text = "\n".join(lines) - return parse_oiio_xml_output(xml_text, logger=logger) + output = [] + for subimage_lines in subimages_lines: + xml_text = "\n".join(subimage_lines) + output.append(parse_oiio_xml_output(xml_text, logger=logger)) + + if subimages: + return output + return output[0] class RationalToInt: """Rational value stored as division of 2 integers using string.""" + def __init__(self, string_value): parts = string_value.split("/") top = float(parts[0]) @@ -157,16 +176,16 @@ def convert_value_by_type_name(value_type, value, logger=None): if value_type == "int": return int(value) - if value_type == "float": + if value_type in ("float", "double"): return float(value) # Vectors will probably have more types - if value_type in ("vec2f", "float2"): + if value_type in ("vec2f", "float2", "float2d"): return [float(item) for item in value.split(",")] # Matrix should be always have square size of element 3x3,
4x4 # - are returned as list of lists - if value_type == "matrix": + if value_type in ("matrix", "matrixd"): output = [] current_index = -1 parts = value.split(",") @@ -198,7 +217,7 @@ def convert_value_by_type_name(value_type, value, logger=None): if value_type == "rational2i": return RationalToInt(value) - if value_type == "vector": + if value_type in ("vector", "vectord"): parts = [part.strip() for part in value.split(",")] output = [] for part in parts: @@ -380,6 +399,10 @@ def should_convert_for_ffmpeg(src_filepath): if not input_info: return None + subimages = input_info.get("subimages") + if subimages is not None and subimages > 1: + return True + # Check compression compression = input_info["attribs"].get("compression") if compression in ("dwaa", "dwab"): @@ -453,7 +476,7 @@ def convert_for_ffmpeg( if input_frame_start is not None and input_frame_end is not None: is_sequence = int(input_frame_end) != int(input_frame_start) - input_info = get_oiio_info_for_input(first_input_path) + input_info = get_oiio_info_for_input(first_input_path, logger=logger) # Change compression only if source compression is "dwaa" or "dwab" # - they're not supported in ffmpeg @@ -488,13 +511,21 @@ def convert_for_ffmpeg( input_channels.append(alpha) input_channels_str = ",".join(input_channels) - oiio_cmd.extend([ + subimages = input_info.get("subimages") + input_arg = "-i" + if subimages is None or subimages == 1: # Tell oiiotool which channels should be loaded # - other channels are not loaded to memory so helps to avoid memory # leak issues - "-i:ch={}".format(input_channels_str), first_input_path, + # - this option is crashing if used on multipart/subimages exrs + input_arg += ":ch={}".format(input_channels_str) + + oiio_cmd.extend([ + input_arg, first_input_path, # Tell oiiotool which channels should be put to top stack (and output) - "--ch", channels_arg + "--ch", channels_arg, + # Use first subimage + "--subimage", "0" ]) # Add frame definitions to arguments @@ -588,7 +619,7 
@@ def convert_input_paths_for_ffmpeg( " \".exr\" extension. Got \"{}\"." ).format(ext)) - input_info = get_oiio_info_for_input(first_input_path) + input_info = get_oiio_info_for_input(first_input_path, logger=logger) # Change compression only if source compression is "dwaa" or "dwab" # - they're not supported in ffmpeg @@ -606,12 +637,22 @@ def convert_input_paths_for_ffmpeg( red, green, blue, alpha = review_channels input_channels = [red, green, blue] + # TODO find subimage index where rgba is available for multipart exrs channels_arg = "R={},G={},B={}".format(red, green, blue) if alpha is not None: channels_arg += ",A={}".format(alpha) input_channels.append(alpha) input_channels_str = ",".join(input_channels) + subimages = input_info.get("subimages") + input_arg = "-i" + if subimages is None or subimages == 1: + # Tell oiiotool which channels should be loaded + # - other channels are not loaded to memory so helps to avoid memory + # leak issues + # - this option is crashing if used on multipart exrs + input_arg += ":ch={}".format(input_channels_str) + for input_path in input_paths: # Prepare subprocess arguments oiio_cmd = [ @@ -625,13 +666,12 @@ def convert_input_paths_for_ffmpeg( oiio_cmd.extend(["--compression", compression]) oiio_cmd.extend([ - # Tell oiiotool which channels should be loaded - # - other channels are not loaded to memory so helps to - # avoid memory leak issues - "-i:ch={}".format(input_channels_str), input_path, + input_arg, input_path, # Tell oiiotool which channels should be put to top stack # (and output) - "--ch", channels_arg + "--ch", channels_arg, + # Use first subimage + "--subimage", "0" ]) for attr_name, attr_value in input_info["attribs"].items(): diff --git a/openpype/lib/vendor_bin_utils.py b/openpype/lib/vendor_bin_utils.py index 099f9a34ba..b6797dbba0 100644 --- a/openpype/lib/vendor_bin_utils.py +++ b/openpype/lib/vendor_bin_utils.py @@ -60,9 +60,10 @@ def find_executable(executable): path to file.
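The subimage-aware parsing added to `get_oiio_info_for_input` above collects oiiotool's verbose `--info -a` output into one XML chunk per subimage, closing a chunk at each `</ImageSpec>` tag. A simplified, self-contained sketch of that loop (`split_subimage_chunks` is a hypothetical name; the validation and error handling of the real function are omitted):

```python
def split_subimage_chunks(output):
    # Skip leading non-XML banner lines, then accumulate lines;
    # every closing </ImageSpec> tag ends one subimage chunk.
    xml_started = False
    chunks = []
    lines = []
    for line in output.split("\n"):
        if not xml_started:
            if not line.startswith("<"):
                continue
            xml_started = True
        lines.append(line)
        if line == "</ImageSpec>":
            chunks.append("\n".join(lines))
            lines = []
    return chunks
```

A two-part EXR would thus yield two chunks, each parsed separately, which is what allows the function to return a list when `subimages=True`.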
Returns: - str: Full path to executable with extension (is file). - None: When the executable was not found. + Union[str, None]: Full path to executable with extension which was + found otherwise None. """ + # Skip if passed path is file if is_file_executable(executable): return executable @@ -70,24 +71,36 @@ def find_executable(executable): low_platform = platform.system().lower() _, ext = os.path.splitext(executable) - # Prepare variants for which it will be looked - variants = [executable] - # Add other extension variants only if passed executable does not have one - if not ext: - if low_platform == "windows": - exts = [".exe", ".ps1", ".bat"] - for ext in os.getenv("PATHEXT", "").split(os.pathsep): - ext = ext.lower() - if ext and ext not in exts: - exts.append(ext) - else: - exts = [".sh"] + # Prepare extensions to check + exts = set() + if ext: + exts.add(ext.lower()) - for ext in exts: - variant = executable + ext - if is_file_executable(variant): - return variant - variants.append(variant) + else: + # Add other possible extension variants only if passed executable + # does not have any + if low_platform == "windows": + exts |= {".exe", ".ps1", ".bat"} + for ext in os.getenv("PATHEXT", "").split(os.pathsep): + exts.add(ext.lower()) + + else: + exts |= {".sh"} + + # Executable is a path but there may be missing extension + # - this can happen primarily on windows where + # e.g. 
"ffmpeg" should be "ffmpeg.exe" + exe_dir, exe_filename = os.path.split(executable) + if exe_dir and os.path.isdir(exe_dir): + for filename in os.listdir(exe_dir): + filepath = os.path.join(exe_dir, filename) + basename, ext = os.path.splitext(filename) + if ( + basename == exe_filename + and ext.lower() in exts + and is_file_executable(filepath) + ): + return filepath # Get paths where to look for executable path_str = os.environ.get("PATH", None) @@ -97,13 +110,27 @@ def find_executable(executable): elif hasattr(os, "defpath"): path_str = os.defpath - if path_str: - paths = path_str.split(os.pathsep) - for path in paths: - for variant in variants: - filepath = os.path.abspath(os.path.join(path, variant)) - if is_file_executable(filepath): - return filepath + if not path_str: + return None + + paths = path_str.split(os.pathsep) + for path in paths: + if not os.path.isdir(path): + continue + for filename in os.listdir(path): + filepath = os.path.abspath(os.path.join(path, filename)) + # Filename matches executable exactly + if filename == executable and is_file_executable(filepath): + return filepath + + basename, ext = os.path.splitext(filename) + if ( + basename == executable + and ext.lower() in exts + and is_file_executable(filepath) + ): + return filepath + return None @@ -272,8 +299,8 @@ def get_oiio_tools_path(tool="oiiotool"): oiio_dir = get_vendor_bin_path("oiio") if platform.system().lower() == "linux": oiio_dir = os.path.join(oiio_dir, "bin") - default_path = os.path.join(oiio_dir, tool) - if _oiio_executable_validation(default_path): + default_path = find_executable(os.path.join(oiio_dir, tool)) + if default_path and _oiio_executable_validation(default_path): tool_executable_path = default_path # Look to PATH for the tool diff --git a/openpype/modules/avalon_apps/avalon_app.py b/openpype/modules/avalon_apps/avalon_app.py index f9085522b0..a0226ecc5c 100644 --- a/openpype/modules/avalon_apps/avalon_app.py +++ b/openpype/modules/avalon_apps/avalon_app.py 
@@ -57,7 +57,7 @@ class AvalonModule(OpenPypeModule, ITrayModule): if not self._library_loader_imported: return - from Qt import QtWidgets + from qtpy import QtWidgets # Actions action_library_loader = QtWidgets.QAction( "Loader", tray_menu @@ -75,7 +75,7 @@ class AvalonModule(OpenPypeModule, ITrayModule): def show_library_loader(self): if self._library_loader_window is None: - from Qt import QtCore + from qtpy import QtCore from openpype.tools.libraryloader import LibraryLoaderWindow from openpype.pipeline import install_openpype_plugins diff --git a/openpype/modules/clockify/clockify_module.py b/openpype/modules/clockify/clockify_module.py index 14fcb01f67..300d5576e2 100644 --- a/openpype/modules/clockify/clockify_module.py +++ b/openpype/modules/clockify/clockify_module.py @@ -183,7 +183,7 @@ class ClockifyModule( # Definition of Tray menu def tray_menu(self, parent_menu): # Menu for Tray App - from Qt import QtWidgets + from qtpy import QtWidgets menu = QtWidgets.QMenu("Clockify", parent_menu) menu.setProperty("submenu", "on") diff --git a/openpype/modules/clockify/widgets.py b/openpype/modules/clockify/widgets.py index d58df3c067..122b6212c0 100644 --- a/openpype/modules/clockify/widgets.py +++ b/openpype/modules/clockify/widgets.py @@ -1,4 +1,4 @@ -from Qt import QtCore, QtGui, QtWidgets +from qtpy import QtCore, QtGui, QtWidgets from openpype import resources, style diff --git a/openpype/hosts/celaction/plugins/publish/submit_celaction_deadline.py b/openpype/modules/deadline/plugins/publish/submit_celaction_deadline.py similarity index 73% rename from openpype/hosts/celaction/plugins/publish/submit_celaction_deadline.py rename to openpype/modules/deadline/plugins/publish/submit_celaction_deadline.py index ea109e9445..038ee4fc03 100644 --- a/openpype/hosts/celaction/plugins/publish/submit_celaction_deadline.py +++ b/openpype/modules/deadline/plugins/publish/submit_celaction_deadline.py @@ -2,16 +2,14 @@ import os import re import json import getpass - import 
requests import pyblish.api -class ExtractCelactionDeadline(pyblish.api.InstancePlugin): +class CelactionSubmitDeadline(pyblish.api.InstancePlugin): """Submit CelAction2D scene to Deadline - Renders are submitted to a Deadline Web Service as - supplied via settings key "DEADLINE_REST_URL". + Renders are submitted to a Deadline Web Service. """ @@ -26,27 +24,21 @@ class ExtractCelactionDeadline(pyblish.api.InstancePlugin): deadline_pool_secondary = "" deadline_group = "" deadline_chunk_size = 1 - - enviro_filter = [ - "FTRACK_API_USER", - "FTRACK_API_KEY", - "FTRACK_SERVER" - ] + deadline_job_delay = "00:00:08:00" def process(self, instance): instance.data["toBeRenderedOn"] = "deadline" context = instance.context - deadline_url = ( - context.data["system_settings"] - ["modules"] - ["deadline"] - ["DEADLINE_REST_URL"] - ) - assert deadline_url, "Requires DEADLINE_REST_URL" + # get default deadline webservice url from deadline module + deadline_url = instance.context.data["defaultDeadline"] + # if custom one is set in instance, use that + if instance.data.get("deadlineUrl"): + deadline_url = instance.data.get("deadlineUrl") + assert deadline_url, "Requires Deadline Webservice URL" self.deadline_url = "{}/api/jobs".format(deadline_url) - self._comment = context.data.get("comment", "") + self._comment = instance.data["comment"] self._deadline_user = context.data.get( "deadlineUser", getpass.getuser()) self._frame_start = int(instance.data["frameStart"]) @@ -82,6 +74,26 @@ class ExtractCelactionDeadline(pyblish.api.InstancePlugin): render_dir = os.path.normpath(os.path.dirname(render_path)) render_path = os.path.normpath(render_path) script_name = os.path.basename(script_path) + + for item in instance.context: + if "workfile" in item.data["family"]: + msg = "Workfile (scene) must be published along" + assert item.data["publish"] is True, msg + + template_data = item.data.get("anatomyData") + rep = item.data.get("representations")[0].get("name") + 
template_data["representation"] = rep + template_data["ext"] = rep + template_data["comment"] = None + anatomy_filled = instance.context.data["anatomy"].format( + template_data) + template_filled = anatomy_filled["publish"]["path"] + script_path = os.path.normpath(template_filled) + + self.log.info( + "Using published scene for render {}".format(script_path) + ) + jobname = "%s - %s" % (script_name, instance.name) output_filename_0 = self.preview_fname(render_path) @@ -98,7 +110,7 @@ class ExtractCelactionDeadline(pyblish.api.InstancePlugin): chunk_size = self.deadline_chunk_size # search for %02d pattern in name, and padding number - search_results = re.search(r"(.%0)(\d)(d)[._]", render_path).groups() + search_results = re.search(r"(%0)(\d)(d)[._]", render_path).groups() split_patern = "".join(search_results) padding_number = int(search_results[1]) @@ -145,10 +157,11 @@ class ExtractCelactionDeadline(pyblish.api.InstancePlugin): # frames from Deadline Monitor "OutputFilename0": output_filename_0.replace("\\", "/"), - # # Asset dependency to wait for at least the scene file to sync. + # # Asset dependency to wait for at least + # the scene file to sync. 
# "AssetDependency0": script_path "ScheduledType": "Once", - "JobDelay": "00:00:08:00" + "JobDelay": self.deadline_job_delay }, "PluginInfo": { # Input @@ -173,19 +186,6 @@ class ExtractCelactionDeadline(pyblish.api.InstancePlugin): plugin = payload["JobInfo"]["Plugin"] self.log.info("using render plugin : {}".format(plugin)) - i = 0 - for key, values in dict(os.environ).items(): - if key.upper() in self.enviro_filter: - payload["JobInfo"].update( - { - "EnvironmentKeyValue%d" - % i: "{key}={value}".format( - key=key, value=values - ) - } - ) - i += 1 - self.log.info("Submitting..") self.log.info(json.dumps(payload, indent=4, sort_keys=True)) @@ -193,10 +193,15 @@ class ExtractCelactionDeadline(pyblish.api.InstancePlugin): self.expected_files(instance, render_path) self.log.debug("__ expectedFiles: `{}`".format( instance.data["expectedFiles"])) + response = requests.post(self.deadline_url, json=payload) if not response.ok: - raise Exception(response.text) + self.log.error( + "Submission failed! 
[{}] {}".format( + response.status_code, response.content)) + self.log.debug(payload) + raise SystemExit(response.text) return response @@ -234,32 +239,29 @@ class ExtractCelactionDeadline(pyblish.api.InstancePlugin): split_path = path.split(split_patern) hashes = "#" * int(search_results[1]) return "".join([split_path[0], hashes, split_path[-1]]) - if "#" in path: - self.log.debug("_ path: `{}`".format(path)) - return path - else: - return path - def expected_files(self, - instance, - path): + self.log.debug("_ path: `{}`".format(path)) + return path + + def expected_files(self, instance, filepath): """ Create expected files in instance data """ if not instance.data.get("expectedFiles"): - instance.data["expectedFiles"] = list() + instance.data["expectedFiles"] = [] - dir = os.path.dirname(path) - file = os.path.basename(path) + dirpath = os.path.dirname(filepath) + filename = os.path.basename(filepath) - if "#" in file: - pparts = file.split("#") + if "#" in filename: + pparts = filename.split("#") padding = "%0{}d".format(len(pparts) - 1) - file = pparts[0] + padding + pparts[-1] + filename = pparts[0] + padding + pparts[-1] - if "%" not in file: - instance.data["expectedFiles"].append(path) + if "%" not in filename: + instance.data["expectedFiles"].append(filepath) return for i in range(self._frame_start, (self._frame_end + 1)): instance.data["expectedFiles"].append( - os.path.join(dir, (file % i)).replace("\\", "/")) + os.path.join(dirpath, (filename % i)).replace("\\", "/") + ) diff --git a/openpype/modules/deadline/plugins/publish/submit_publish_job.py b/openpype/modules/deadline/plugins/publish/submit_publish_job.py index 6362b4ca65..5c5c54febb 100644 --- a/openpype/modules/deadline/plugins/publish/submit_publish_job.py +++ b/openpype/modules/deadline/plugins/publish/submit_publish_job.py @@ -126,22 +126,16 @@ class ProcessSubmittedJobOnFarm(pyblish.api.InstancePlugin): "harmony": [r".*"], # for everything from AE "celaction": [r".*"]} - enviro_filter = [ + 
environ_job_filter = [ + "OPENPYPE_METADATA_FILE" + ] + + environ_keys = [ "FTRACK_API_USER", "FTRACK_API_KEY", "FTRACK_SERVER", - "OPENPYPE_METADATA_FILE", - "AVALON_PROJECT", - "AVALON_ASSET", - "AVALON_TASK", "AVALON_APP_NAME", - "OPENPYPE_PUBLISH_JOB" - - "OPENPYPE_LOG_NO_COLORS", "OPENPYPE_USERNAME", - "OPENPYPE_RENDER_JOB", - "OPENPYPE_PUBLISH_JOB", - "OPENPYPE_MONGO", "OPENPYPE_VERSION" ] @@ -223,28 +217,42 @@ class ProcessSubmittedJobOnFarm(pyblish.api.InstancePlugin): instance_version = instance.data.get("version") # take this if exists if instance_version != 1: override_version = instance_version - output_dir = self._get_publish_folder(instance.context.data['anatomy'], - deepcopy( - instance.data["anatomyData"]), - instance.data.get("asset"), - instances[0]["subset"], - 'render', - override_version) + output_dir = self._get_publish_folder( + instance.context.data['anatomy'], + deepcopy(instance.data["anatomyData"]), + instance.data.get("asset"), + instances[0]["subset"], + 'render', + override_version + ) # Transfer the environment from the original job to this dependent # job so they use the same environment metadata_path, roothless_metadata_path = \ self._create_metadata_path(instance) - environment = job["Props"].get("Env", {}) - environment["AVALON_PROJECT"] = legacy_io.Session["AVALON_PROJECT"] - environment["AVALON_ASSET"] = legacy_io.Session["AVALON_ASSET"] - environment["AVALON_TASK"] = legacy_io.Session["AVALON_TASK"] - environment["AVALON_APP_NAME"] = os.environ.get("AVALON_APP_NAME") - environment["OPENPYPE_LOG_NO_COLORS"] = "1" - environment["OPENPYPE_USERNAME"] = instance.context.data["user"] - environment["OPENPYPE_PUBLISH_JOB"] = "1" - environment["OPENPYPE_RENDER_JOB"] = "0" + environment = { + "AVALON_PROJECT": legacy_io.Session["AVALON_PROJECT"], + "AVALON_ASSET": legacy_io.Session["AVALON_ASSET"], + "AVALON_TASK": legacy_io.Session["AVALON_TASK"], + "OPENPYPE_USERNAME": instance.context.data["user"], + "OPENPYPE_PUBLISH_JOB": "1", + 
"OPENPYPE_RENDER_JOB": "0", + "OPENPYPE_REMOTE_JOB": "0", + "OPENPYPE_LOG_NO_COLORS": "1" + } + + # add environments from self.environ_keys + for env_key in self.environ_keys: + if os.getenv(env_key): + environment[env_key] = os.environ[env_key] + + # pass environment keys from self.environ_job_filter + job_environ = job["Props"].get("Env", {}) + for env_j_key in self.environ_job_filter: + if job_environ.get(env_j_key): + environment[env_j_key] = job_environ[env_j_key] + # Add mongo url if it's enabled if instance.context.data.get("deadlinePassMongoUrl"): mongo_url = os.environ.get("OPENPYPE_MONGO") @@ -308,19 +316,15 @@ class ProcessSubmittedJobOnFarm(pyblish.api.InstancePlugin): if instance.data.get("suspend_publish"): payload["JobInfo"]["InitialStatus"] = "Suspended" - index = 0 - for key in environment: - if key.upper() in self.enviro_filter: - payload["JobInfo"].update( - { - "EnvironmentKeyValue%d" - % index: "{key}={value}".format( - key=key, value=environment[key] - ) - } - ) - index += 1 - + for index, (key_, value_) in enumerate(environment.items()): + payload["JobInfo"].update( + { + "EnvironmentKeyValue%d" + % index: "{key}={value}".format( + key=key_, value=value_ + ) + } + ) # remove secondary pool payload["JobInfo"].pop("SecondaryPool", None) @@ -776,6 +780,7 @@ class ProcessSubmittedJobOnFarm(pyblish.api.InstancePlugin): "handleEnd": handle_end, "frameStartHandle": start - handle_start, "frameEndHandle": end + handle_end, + "comment": instance.data["comment"], "fps": fps, "source": source, "extendFrames": data.get("extendFrames"), diff --git a/openpype/modules/deadline/repository/custom/plugins/CelAction/CelAction.ico b/openpype/modules/deadline/repository/custom/plugins/CelAction/CelAction.ico new file mode 100644 index 0000000000..39d61592fe Binary files /dev/null and b/openpype/modules/deadline/repository/custom/plugins/CelAction/CelAction.ico differ diff --git a/openpype/modules/deadline/repository/custom/plugins/CelAction/CelAction.param 
b/openpype/modules/deadline/repository/custom/plugins/CelAction/CelAction.param new file mode 100644 index 0000000000..24c59d2005 --- /dev/null +++ b/openpype/modules/deadline/repository/custom/plugins/CelAction/CelAction.param @@ -0,0 +1,38 @@ +[About] +Type=label +Label=About +Category=About Plugin +CategoryOrder=-1 +Index=0 +Default=Celaction Plugin for Deadline +Description=Not configurable + +[ConcurrentTasks] +Type=label +Label=ConcurrentTasks +Category=About Plugin +CategoryOrder=-1 +Index=0 +Default=True +Description=Not configurable + +[Executable] +Type=filename +Label=Executable +Category=Config +CategoryOrder=0 +CategoryIndex=0 +Description=The command executable to run +Required=false +DisableIfBlank=true + +[RenderNameSeparator] +Type=string +Label=RenderNameSeparator +Category=Config +CategoryOrder=0 +CategoryIndex=1 +Description=The separator to use for naming +Required=false +DisableIfBlank=true +Default=. diff --git a/openpype/modules/deadline/repository/custom/plugins/CelAction/CelAction.py b/openpype/modules/deadline/repository/custom/plugins/CelAction/CelAction.py new file mode 100644 index 0000000000..2d0edd3dca --- /dev/null +++ b/openpype/modules/deadline/repository/custom/plugins/CelAction/CelAction.py @@ -0,0 +1,122 @@ +from System.Text.RegularExpressions import * + +from Deadline.Plugins import * +from Deadline.Scripting import * + +import _winreg + +###################################################################### +# This is the function that Deadline calls to get an instance of the +# main DeadlinePlugin class. +###################################################################### + + +def GetDeadlinePlugin(): + return CelActionPlugin() + + +def CleanupDeadlinePlugin(deadlinePlugin): + deadlinePlugin.Cleanup() + +###################################################################### +# This is the main DeadlinePlugin class for the CelAction plugin. 
+###################################################################### + + +class CelActionPlugin(DeadlinePlugin): + + def __init__(self): + self.InitializeProcessCallback += self.InitializeProcess + self.RenderExecutableCallback += self.RenderExecutable + self.RenderArgumentCallback += self.RenderArgument + self.StartupDirectoryCallback += self.StartupDirectory + + def Cleanup(self): + for stdoutHandler in self.StdoutHandlers: + del stdoutHandler.HandleCallback + + del self.InitializeProcessCallback + del self.RenderExecutableCallback + del self.RenderArgumentCallback + del self.StartupDirectoryCallback + + def GetCelActionRegistryKey(self): + # Modify registry for frame separation + path = r'Software\CelAction\CelAction2D\User Settings' + _winreg.CreateKey(_winreg.HKEY_CURRENT_USER, path) + regKey = _winreg.OpenKey(_winreg.HKEY_CURRENT_USER, path, 0, + _winreg.KEY_ALL_ACCESS) + return regKey + + def GetSeparatorValue(self, regKey): + useSeparator, _ = _winreg.QueryValueEx( + regKey, 'RenderNameUseSeparator') + separator, _ = _winreg.QueryValueEx(regKey, 'RenderNameSeparator') + + return useSeparator, separator + + def SetSeparatorValue(self, regKey, useSeparator, separator): + _winreg.SetValueEx(regKey, 'RenderNameUseSeparator', + 0, _winreg.REG_DWORD, useSeparator) + _winreg.SetValueEx(regKey, 'RenderNameSeparator', + 0, _winreg.REG_SZ, separator) + + def InitializeProcess(self): + # Set the plugin specific settings. + self.SingleFramesOnly = False + + # Set the process specific settings. 
+ self.StdoutHandling = True + self.PopupHandling = True + + # Ignore 'celaction' Pop-up dialog + self.AddPopupIgnorer(".*Rendering.*") + self.AddPopupIgnorer(".*AutoRender.*") + + # Ignore 'celaction' Pop-up dialog + self.AddPopupIgnorer(".*Wait.*") + + # Ignore 'celaction' Pop-up dialog + self.AddPopupIgnorer(".*Timeline Scrub.*") + + celActionRegKey = self.GetCelActionRegistryKey() + + self.SetSeparatorValue(celActionRegKey, 1, self.GetConfigEntryWithDefault( + "RenderNameSeparator", ".").strip()) + + def RenderExecutable(self): + return RepositoryUtils.CheckPathMapping(self.GetConfigEntry("Executable").strip()) + + def RenderArgument(self): + arguments = RepositoryUtils.CheckPathMapping( + self.GetPluginInfoEntry("Arguments").strip()) + arguments = arguments.replace( + "<STARTFRAME>", str(self.GetStartFrame())) + arguments = arguments.replace("<ENDFRAME>", str(self.GetEndFrame())) + arguments = self.ReplacePaddedFrame( + arguments, "<STARTFRAME%([0-9]+)>", self.GetStartFrame()) + arguments = self.ReplacePaddedFrame( + arguments, "<ENDFRAME%([0-9]+)>", self.GetEndFrame()) + arguments = arguments.replace("<QUOTE>", "\"") + return arguments + + def StartupDirectory(self): + return self.GetPluginInfoEntryWithDefault("StartupDirectory", "").strip() + + def ReplacePaddedFrame(self, arguments, pattern, frame): + frameRegex = Regex(pattern) + while True: + frameMatch = frameRegex.Match(arguments) + if frameMatch.Success: + paddingSize = int(frameMatch.Groups[1].Value) + if paddingSize > 0: + padding = StringUtils.ToZeroPaddedString( + frame, paddingSize, False) + else: + padding = str(frame) + arguments = arguments.replace( + frameMatch.Groups[0].Value, padding) + else: + break + + return arguments diff --git a/openpype/modules/example_addons/example_addon/widgets.py b/openpype/modules/example_addons/example_addon/widgets.py index c0a0a7e510..cd0da3ae43 100644 --- a/openpype/modules/example_addons/example_addon/widgets.py +++ b/openpype/modules/example_addons/example_addon/widgets.py @@ -1,4 +1,4 @@ -from Qt import QtWidgets +from qtpy 
import QtWidgets from openpype.style import load_stylesheet diff --git a/openpype/modules/ftrack/event_handlers_server/event_first_version_status.py b/openpype/modules/ftrack/event_handlers_server/event_first_version_status.py index ecc6c95d90..8ef333effd 100644 --- a/openpype/modules/ftrack/event_handlers_server/event_first_version_status.py +++ b/openpype/modules/ftrack/event_handlers_server/event_first_version_status.py @@ -135,9 +135,9 @@ class FirstVersionStatus(BaseEvent): new_status = asset_version_statuses.get(found_item["status"]) if not new_status: - self.log.warning( + self.log.warning(( "AssetVersion doesn't have status `{}`." - ).format(found_item["status"]) + ).format(found_item["status"])) continue try: diff --git a/openpype/modules/ftrack/lib/avalon_sync.py b/openpype/modules/ftrack/lib/avalon_sync.py index 935d1e85c9..0341c25717 100644 --- a/openpype/modules/ftrack/lib/avalon_sync.py +++ b/openpype/modules/ftrack/lib/avalon_sync.py @@ -1556,7 +1556,7 @@ class SyncEntitiesFactory: deleted_entities.append(mongo_id) av_ent = self.avalon_ents_by_id[mongo_id] - av_ent_path_items = [p for p in av_ent["data"]["parents"]] + av_ent_path_items = list(av_ent["data"]["parents"]) av_ent_path_items.append(av_ent["name"]) self.log.debug("Deleted <{}>".format("/".join(av_ent_path_items))) @@ -1855,7 +1855,7 @@ class SyncEntitiesFactory: _vis_par = _avalon_ent["data"]["visualParent"] _name = _avalon_ent["name"] if _name in self.all_ftrack_names: - av_ent_path_items = _avalon_ent["data"]["parents"] + av_ent_path_items = list(_avalon_ent["data"]["parents"]) av_ent_path_items.append(_name) av_ent_path = "/".join(av_ent_path_items) # TODO report @@ -1997,7 +1997,7 @@ class SyncEntitiesFactory: {"_id": mongo_id}, item )) - av_ent_path_items = item["data"]["parents"] + av_ent_path_items = list(item["data"]["parents"]) av_ent_path_items.append(item["name"]) av_ent_path = "/".join(av_ent_path_items) self.log.debug( @@ -2110,6 +2110,7 @@ class SyncEntitiesFactory: 
entity_dict = self.entities_dict[ftrack_id] + final_parents = entity_dict["final_entity"]["data"]["parents"] if archived_by_id: # if is changeable then unarchive (nothing to check here) if self.changeability_by_mongo_id[mongo_id]: @@ -2123,10 +2124,8 @@ class SyncEntitiesFactory: archived_name = archived_by_id["name"] if ( - archived_name != entity_dict["name"] or - archived_parents != entity_dict["final_entity"]["data"][ - "parents" - ] + archived_name != entity_dict["name"] + or archived_parents != final_parents ): return None @@ -2136,11 +2135,7 @@ class SyncEntitiesFactory: for archived in archived_by_name: mongo_id = str(archived["_id"]) archived_parents = archived.get("data", {}).get("parents") - if ( - archived_parents == entity_dict["final_entity"]["data"][ - "parents" - ] - ): + if archived_parents == final_parents: return mongo_id # Secondly try to find more close to current ftrack entity @@ -2350,8 +2345,7 @@ class SyncEntitiesFactory: continue changed = True - parents = [par for par in _parents] - hierarchy = "/".join(parents) + parents = list(_parents) self.entities_dict[ftrack_id][ "final_entity"]["data"]["parents"] = parents diff --git a/openpype/modules/ftrack/plugins/publish/integrate_ftrack_api.py b/openpype/modules/ftrack/plugins/publish/integrate_ftrack_api.py index 159e60024d..0e8209866f 100644 --- a/openpype/modules/ftrack/plugins/publish/integrate_ftrack_api.py +++ b/openpype/modules/ftrack/plugins/publish/integrate_ftrack_api.py @@ -36,10 +36,35 @@ class IntegrateFtrackApi(pyblish.api.InstancePlugin): return context = instance.context - session = context.data["ftrackSession"] + task_entity, parent_entity = self.get_instance_entities( + instance, context) + if parent_entity is None: + self.log.info(( + "Skipping ftrack integration. Instance \"{}\" does not" + " have specified ftrack entities." 
+ ).format(str(instance))) + return + session = context.data["ftrackSession"] + # Reset session operations and reconfigure locations + session.recorded_operations.clear() + session._configure_locations() + + try: + self.integrate_to_ftrack( + session, + instance, + task_entity, + parent_entity, + component_list + ) + + except Exception: + session.reset() + raise + + def get_instance_entities(self, instance, context): parent_entity = None - default_asset_name = None # If instance has set "ftrackEntity" or "ftrackTask" then use them from # instance. Even if they are set to None. If they are set to None it # has a reason. (like has different context) @@ -52,15 +77,21 @@ class IntegrateFtrackApi(pyblish.api.InstancePlugin): parent_entity = context.data.get("ftrackEntity") if task_entity: - default_asset_name = task_entity["name"] parent_entity = task_entity["parent"] - if parent_entity is None: - self.log.info(( - "Skipping ftrack integration. Instance \"{}\" does not" - " have specified ftrack entities." 
- ).format(str(instance))) - return + return task_entity, parent_entity + + def integrate_to_ftrack( + self, + session, + instance, + task_entity, + parent_entity, + component_list + ): + default_asset_name = None + if task_entity: + default_asset_name = task_entity["name"] if not default_asset_name: default_asset_name = parent_entity["name"] @@ -186,13 +217,7 @@ class IntegrateFtrackApi(pyblish.api.InstancePlugin): self.log.info("Setting task status to \"{}\"".format(status_name)) task_entity["status"] = status - try: - session.commit() - except Exception: - tp, value, tb = sys.exc_info() - session.rollback() - session._configure_locations() - six.reraise(tp, value, tb) + session.commit() def _fill_component_locations(self, session, component_list): components_by_location_name = collections.defaultdict(list) @@ -495,13 +520,7 @@ class IntegrateFtrackApi(pyblish.api.InstancePlugin): session.delete(member) del(member) - try: - session.commit() - except Exception: - tp, value, tb = sys.exc_info() - session.rollback() - session._configure_locations() - six.reraise(tp, value, tb) + session.commit() # Reset members in memory if "members" in component_entity.keys(): @@ -617,13 +636,7 @@ class IntegrateFtrackApi(pyblish.api.InstancePlugin): ) else: # Commit changes. 
- try: - session.commit() - except Exception: - tp, value, tb = sys.exc_info() - session.rollback() - session._configure_locations() - six.reraise(tp, value, tb) + session.commit() def _create_components(self, session, asset_versions_data_by_id): for item in asset_versions_data_by_id.values(): diff --git a/openpype/modules/ftrack/plugins/publish/integrate_ftrack_description.py b/openpype/modules/ftrack/plugins/publish/integrate_ftrack_description.py index e7c265988e..6ed02bc8b6 100644 --- a/openpype/modules/ftrack/plugins/publish/integrate_ftrack_description.py +++ b/openpype/modules/ftrack/plugins/publish/integrate_ftrack_description.py @@ -38,7 +38,7 @@ class IntegrateFtrackDescription(pyblish.api.InstancePlugin): self.log.info("There are any integrated AssetVersions") return - comment = (instance.context.data.get("comment") or "").strip() + comment = instance.data["comment"] if not comment: self.log.info("Comment is not set.") else: diff --git a/openpype/modules/ftrack/plugins/publish/integrate_ftrack_note.py b/openpype/modules/ftrack/plugins/publish/integrate_ftrack_note.py index ac3fa874e0..6776509dda 100644 --- a/openpype/modules/ftrack/plugins/publish/integrate_ftrack_note.py +++ b/openpype/modules/ftrack/plugins/publish/integrate_ftrack_note.py @@ -45,7 +45,7 @@ class IntegrateFtrackNote(pyblish.api.InstancePlugin): host_name = context.data["hostName"] app_name = context.data["appName"] app_label = context.data["appLabel"] - comment = (context.data.get("comment") or "").strip() + comment = instance.data["comment"] if not comment: self.log.info("Comment is not set.") else: diff --git a/openpype/modules/ftrack/tray/ftrack_tray.py b/openpype/modules/ftrack/tray/ftrack_tray.py index e3c6e30ead..8718dff434 100644 --- a/openpype/modules/ftrack/tray/ftrack_tray.py +++ b/openpype/modules/ftrack/tray/ftrack_tray.py @@ -3,9 +3,9 @@ import time import datetime import threading -from Qt import QtCore, QtWidgets, QtGui - import ftrack_api +from qtpy import QtCore, 
QtWidgets, QtGui + from openpype import resources from openpype.lib import Logger from openpype_modules.ftrack import resolve_ftrack_url, FTRACK_MODULE_DIR diff --git a/openpype/modules/ftrack/tray/login_dialog.py b/openpype/modules/ftrack/tray/login_dialog.py index 05d9226ca4..fbb3455775 100644 --- a/openpype/modules/ftrack/tray/login_dialog.py +++ b/openpype/modules/ftrack/tray/login_dialog.py @@ -1,10 +1,13 @@ import os + import requests +from qtpy import QtCore, QtGui, QtWidgets + from openpype import style from openpype_modules.ftrack.lib import credentials -from . import login_tools from openpype import resources -from Qt import QtCore, QtGui, QtWidgets + +from . import login_tools class CredentialsDialog(QtWidgets.QDialog): diff --git a/openpype/modules/interfaces.py b/openpype/modules/interfaces.py index f92ec6bf2d..7cd299df67 100644 --- a/openpype/modules/interfaces.py +++ b/openpype/modules/interfaces.py @@ -222,7 +222,7 @@ class ITrayAction(ITrayModule): pass def tray_menu(self, tray_menu): - from Qt import QtWidgets + from qtpy import QtWidgets if self.admin_action: menu = self.admin_submenu(tray_menu) @@ -247,7 +247,7 @@ class ITrayAction(ITrayModule): @staticmethod def admin_submenu(tray_menu): if ITrayAction._admin_submenu is None: - from Qt import QtWidgets + from qtpy import QtWidgets admin_submenu = QtWidgets.QMenu("Admin", tray_menu) admin_submenu.menuAction().setVisible(False) @@ -279,7 +279,7 @@ class ITrayService(ITrayModule): @staticmethod def services_submenu(tray_menu): if ITrayService._services_submenu is None: - from Qt import QtWidgets + from qtpy import QtWidgets services_submenu = QtWidgets.QMenu("Services", tray_menu) services_submenu.menuAction().setVisible(False) @@ -294,7 +294,7 @@ class ITrayService(ITrayModule): @staticmethod def _load_service_icons(): - from Qt import QtGui + from qtpy import QtGui ITrayService._failed_icon = QtGui.QIcon( resources.get_resource("icons", "circle_red.png") @@ -325,7 +325,7 @@ class 
ITrayService(ITrayModule): return ITrayService._failed_icon def tray_menu(self, tray_menu): - from Qt import QtWidgets + from qtpy import QtWidgets action = QtWidgets.QAction( self.label, diff --git a/openpype/modules/kitsu/kitsu_widgets.py b/openpype/modules/kitsu/kitsu_widgets.py index 65baed9665..5ff3613583 100644 --- a/openpype/modules/kitsu/kitsu_widgets.py +++ b/openpype/modules/kitsu/kitsu_widgets.py @@ -1,4 +1,4 @@ -from Qt import QtWidgets, QtCore, QtGui +from qtpy import QtWidgets, QtCore, QtGui from openpype import style from openpype.modules.kitsu.utils.credentials import ( diff --git a/openpype/modules/kitsu/utils/sync_service.py b/openpype/modules/kitsu/utils/sync_service.py index 441b95a7ec..237746bea0 100644 --- a/openpype/modules/kitsu/utils/sync_service.py +++ b/openpype/modules/kitsu/utils/sync_service.py @@ -1,12 +1,9 @@ import os +import threading import gazu -from openpype.client import ( - get_project, - get_assets, - get_asset_by_name -) +from openpype.client import get_project, get_assets, get_asset_by_name from openpype.pipeline import AvalonMongoDB from .credentials import validate_credentials from .update_op_with_zou import ( @@ -397,6 +394,13 @@ def start_listeners(login: str, password: str): login (str): Kitsu user login password (str): Kitsu user password """ + # Refresh token every week + def refresh_token_every_week(): + print("Refreshing token...") + gazu.refresh_token() + threading.Timer(7 * 3600 * 24, refresh_token_every_week).start() + + refresh_token_every_week() # Connect to server listener = Listener(login, password) diff --git a/openpype/modules/log_viewer/log_view_module.py b/openpype/modules/log_viewer/log_view_module.py index 31e954fadd..e9dba2041c 100644 --- a/openpype/modules/log_viewer/log_view_module.py +++ b/openpype/modules/log_viewer/log_view_module.py @@ -22,7 +22,7 @@ class LogViewModule(OpenPypeModule, ITrayModule): # Definition of Tray menu def tray_menu(self, tray_menu): - from Qt import QtWidgets + from qtpy 
import QtWidgets # Menu for Tray App menu = QtWidgets.QMenu('Logging', tray_menu) diff --git a/openpype/modules/log_viewer/tray/app.py b/openpype/modules/log_viewer/tray/app.py index def319e0e3..3c49f337d4 100644 --- a/openpype/modules/log_viewer/tray/app.py +++ b/openpype/modules/log_viewer/tray/app.py @@ -1,4 +1,4 @@ -from Qt import QtWidgets, QtCore +from qtpy import QtWidgets, QtCore from .widgets import LogsWidget, OutputWidget from openpype import style diff --git a/openpype/modules/log_viewer/tray/models.py b/openpype/modules/log_viewer/tray/models.py index d369ffeb64..bc1f54c06c 100644 --- a/openpype/modules/log_viewer/tray/models.py +++ b/openpype/modules/log_viewer/tray/models.py @@ -1,5 +1,5 @@ import collections -from Qt import QtCore, QtGui +from qtpy import QtCore, QtGui from openpype.lib import Logger diff --git a/openpype/modules/log_viewer/tray/widgets.py b/openpype/modules/log_viewer/tray/widgets.py index c7ac64ab70..981152e6e2 100644 --- a/openpype/modules/log_viewer/tray/widgets.py +++ b/openpype/modules/log_viewer/tray/widgets.py @@ -1,5 +1,5 @@ import html -from Qt import QtCore, QtWidgets +from qtpy import QtCore, QtWidgets import qtawesome from .models import LogModel, LogsFilterProxy diff --git a/openpype/modules/muster/muster.py b/openpype/modules/muster/muster.py index 8d395d16e8..77b9214a5a 100644 --- a/openpype/modules/muster/muster.py +++ b/openpype/modules/muster/muster.py @@ -53,7 +53,7 @@ class MusterModule(OpenPypeModule, ITrayModule): # Definition of Tray menu def tray_menu(self, parent): """Add **change credentials** option to tray menu.""" - from Qt import QtWidgets + from qtpy import QtWidgets # Menu for Tray App menu = QtWidgets.QMenu('Muster', parent) diff --git a/openpype/modules/muster/widget_login.py b/openpype/modules/muster/widget_login.py index ae838c6cea..f38f43fb7f 100644 --- a/openpype/modules/muster/widget_login.py +++ b/openpype/modules/muster/widget_login.py @@ -1,5 +1,4 @@ -import os -from Qt import QtCore, 
QtGui, QtWidgets +from qtpy import QtCore, QtGui, QtWidgets from openpype import resources, style diff --git a/openpype/modules/python_console_interpreter/window/widgets.py b/openpype/modules/python_console_interpreter/window/widgets.py index 36ce1b61a2..b670352f44 100644 --- a/openpype/modules/python_console_interpreter/window/widgets.py +++ b/openpype/modules/python_console_interpreter/window/widgets.py @@ -5,7 +5,7 @@ import collections from code import InteractiveInterpreter import appdirs -from Qt import QtCore, QtWidgets, QtGui +from qtpy import QtCore, QtWidgets, QtGui from openpype import resources from openpype.style import load_stylesheet diff --git a/openpype/modules/shotgrid/tray/credential_dialog.py b/openpype/modules/shotgrid/tray/credential_dialog.py index 9d841d98be..7b839b63c0 100644 --- a/openpype/modules/shotgrid/tray/credential_dialog.py +++ b/openpype/modules/shotgrid/tray/credential_dialog.py @@ -1,5 +1,5 @@ import os -from Qt import QtCore, QtWidgets, QtGui +from qtpy import QtCore, QtWidgets, QtGui from openpype import style from openpype import resources diff --git a/openpype/modules/shotgrid/tray/shotgrid_tray.py b/openpype/modules/shotgrid/tray/shotgrid_tray.py index 4038d77b03..8e363bd318 100644 --- a/openpype/modules/shotgrid/tray/shotgrid_tray.py +++ b/openpype/modules/shotgrid/tray/shotgrid_tray.py @@ -1,7 +1,7 @@ import os import webbrowser -from Qt import QtWidgets +from qtpy import QtWidgets from openpype.modules.shotgrid.lib import credentials from openpype.modules.shotgrid.tray.credential_dialog import ( diff --git a/openpype/modules/sync_server/sync_server_module.py b/openpype/modules/sync_server/sync_server_module.py index 653ee50541..f9b99da02b 100644 --- a/openpype/modules/sync_server/sync_server_module.py +++ b/openpype/modules/sync_server/sync_server_module.py @@ -1244,7 +1244,7 @@ class SyncServerModule(OpenPypeModule, ITrayModule): if not self.enabled: return - from Qt import QtWidgets + from qtpy import QtWidgets """Add 
menu or action to Tray(or parent)'s menu""" action = QtWidgets.QAction(self.label, parent_menu) action.triggered.connect(self.show_widget) diff --git a/openpype/modules/sync_server/tray/app.py b/openpype/modules/sync_server/tray/app.py index 9b9768327e..c093835128 100644 --- a/openpype/modules/sync_server/tray/app.py +++ b/openpype/modules/sync_server/tray/app.py @@ -1,4 +1,4 @@ -from Qt import QtWidgets, QtCore, QtGui +from qtpy import QtWidgets, QtCore, QtGui from openpype.tools.settings import style diff --git a/openpype/modules/sync_server/tray/delegates.py b/openpype/modules/sync_server/tray/delegates.py index 988eb40d28..e14b2e2f60 100644 --- a/openpype/modules/sync_server/tray/delegates.py +++ b/openpype/modules/sync_server/tray/delegates.py @@ -1,5 +1,5 @@ import os -from Qt import QtCore, QtWidgets, QtGui +from qtpy import QtCore, QtWidgets, QtGui from openpype.lib import Logger diff --git a/openpype/modules/sync_server/tray/models.py b/openpype/modules/sync_server/tray/models.py index d63d046508..b52f350907 100644 --- a/openpype/modules/sync_server/tray/models.py +++ b/openpype/modules/sync_server/tray/models.py @@ -3,8 +3,7 @@ import attr from bson.objectid import ObjectId import datetime -from Qt import QtCore -from Qt.QtCore import Qt +from qtpy import QtCore import qtawesome from openpype.tools.utils.delegates import pretty_timestamp @@ -79,16 +78,16 @@ class _SyncRepresentationModel(QtCore.QAbstractTableModel): def columnCount(self, _index=None): return len(self._header) - def headerData(self, section, orientation, role=Qt.DisplayRole): + def headerData(self, section, orientation, role=QtCore.Qt.DisplayRole): if section >= len(self.COLUMN_LABELS): return - if role == Qt.DisplayRole: - if orientation == Qt.Horizontal: + if role == QtCore.Qt.DisplayRole: + if orientation == QtCore.Qt.Horizontal: return self.COLUMN_LABELS[section][1] if role == HEADER_NAME_ROLE: - if orientation == Qt.Horizontal: + if orientation == QtCore.Qt.Horizontal: return 
self.COLUMN_LABELS[section][0] # return name def data(self, index, role): @@ -123,7 +122,7 @@ class _SyncRepresentationModel(QtCore.QAbstractTableModel): return item.status == lib.STATUS[2] and \ item.remote_progress < 1 - if role in (Qt.DisplayRole, Qt.EditRole): + if role in (QtCore.Qt.DisplayRole, QtCore.Qt.EditRole): # because of ImageDelegate if header_value in ['remote_site', 'local_site']: return "" @@ -146,7 +145,7 @@ class _SyncRepresentationModel(QtCore.QAbstractTableModel): if role == STATUS_ROLE: return item.status - if role == Qt.UserRole: + if role == QtCore.Qt.UserRole: return item._id @property @@ -409,7 +408,7 @@ class _SyncRepresentationModel(QtCore.QAbstractTableModel): """ for i in range(self.rowCount(None)): index = self.index(i, 0) - value = self.data(index, Qt.UserRole) + value = self.data(index, QtCore.Qt.UserRole) if value == id: return index return None @@ -917,7 +916,7 @@ class SyncRepresentationSummaryModel(_SyncRepresentationModel): if not self.can_edit: return - repre_id = self.data(index, Qt.UserRole) + repre_id = self.data(index, QtCore.Qt.UserRole) representation = get_representation_by_id(self.project, repre_id) if representation: @@ -1353,7 +1352,7 @@ class SyncRepresentationDetailModel(_SyncRepresentationModel): if not self.can_edit: return - file_id = self.data(index, Qt.UserRole) + file_id = self.data(index, QtCore.Qt.UserRole) updated_file = None representation = get_representation_by_id(self.project, self._id) diff --git a/openpype/modules/sync_server/tray/widgets.py b/openpype/modules/sync_server/tray/widgets.py index c40aa98f24..b9ef45727a 100644 --- a/openpype/modules/sync_server/tray/widgets.py +++ b/openpype/modules/sync_server/tray/widgets.py @@ -3,8 +3,7 @@ import subprocess import sys from functools import partial -from Qt import QtWidgets, QtCore, QtGui -from Qt.QtCore import Qt +from qtpy import QtWidgets, QtCore, QtGui import qtawesome from openpype.tools.settings import style @@ -260,7 +259,7 @@ class 
_SyncRepresentationWidget(QtWidgets.QWidget): self._selected_ids = set() for index in idxs: - self._selected_ids.add(self.model.data(index, Qt.UserRole)) + self._selected_ids.add(self.model.data(index, QtCore.Qt.UserRole)) def _set_selection(self): """ @@ -291,7 +290,7 @@ class _SyncRepresentationWidget(QtWidgets.QWidget): self.table_view.openPersistentEditor(index) return - _id = self.model.data(index, Qt.UserRole) + _id = self.model.data(index, QtCore.Qt.UserRole) detail_window = SyncServerDetailWindow( self.sync_server, _id, self.model.project, parent=self) detail_window.exec() @@ -615,7 +614,7 @@ class SyncRepresentationSummaryWidget(_SyncRepresentationWidget): table_view.setSelectionBehavior( QtWidgets.QAbstractItemView.SelectRows) table_view.horizontalHeader().setSortIndicator( - -1, Qt.AscendingOrder) + -1, QtCore.Qt.AscendingOrder) table_view.setAlternatingRowColors(True) table_view.verticalHeader().hide() table_view.viewport().setAttribute(QtCore.Qt.WA_Hover, True) @@ -773,7 +772,8 @@ class SyncRepresentationDetailWidget(_SyncRepresentationWidget): QtWidgets.QAbstractItemView.ExtendedSelection) table_view.setSelectionBehavior( QtWidgets.QTableView.SelectRows) - table_view.horizontalHeader().setSortIndicator(-1, Qt.AscendingOrder) + table_view.horizontalHeader().setSortIndicator( + -1, QtCore.Qt.AscendingOrder) table_view.horizontalHeader().setSortIndicatorShown(True) table_view.setAlternatingRowColors(True) table_view.verticalHeader().hide() diff --git a/openpype/modules/timers_manager/idle_threads.py b/openpype/modules/timers_manager/idle_threads.py index 7242761143..eb11bbf117 100644 --- a/openpype/modules/timers_manager/idle_threads.py +++ b/openpype/modules/timers_manager/idle_threads.py @@ -1,5 +1,5 @@ import time -from Qt import QtCore +from qtpy import QtCore from pynput import mouse, keyboard from openpype.lib import Logger diff --git a/openpype/modules/timers_manager/widget_user_idle.py b/openpype/modules/timers_manager/widget_user_idle.py index 
1ecea74440..9df328e6b2 100644 --- a/openpype/modules/timers_manager/widget_user_idle.py +++ b/openpype/modules/timers_manager/widget_user_idle.py @@ -1,4 +1,4 @@ -from Qt import QtCore, QtGui, QtWidgets +from qtpy import QtCore, QtGui, QtWidgets from openpype import resources, style diff --git a/openpype/modules/webserver/host_console_listener.py b/openpype/modules/webserver/host_console_listener.py index fdfe1ba688..e5c11af9c2 100644 --- a/openpype/modules/webserver/host_console_listener.py +++ b/openpype/modules/webserver/host_console_listener.py @@ -3,7 +3,7 @@ from aiohttp import web import json import logging from concurrent.futures import CancelledError -from Qt import QtWidgets +from qtpy import QtWidgets from openpype.modules import ITrayService diff --git a/openpype/pipeline/create/context.py b/openpype/pipeline/create/context.py index 4fd460ffea..9c468ae8fc 100644 --- a/openpype/pipeline/create/context.py +++ b/openpype/pipeline/create/context.py @@ -199,7 +199,7 @@ class InstanceMember: }) -class AttributeValues: +class AttributeValues(object): """Container which keep values of Attribute definitions. Goal is to have one object which hold values of attribute definitions for @@ -584,6 +584,7 @@ class CreatedInstance: if key in data: data.pop(key) + self._data["variant"] = self._data.get("variant") or "" # Stored creator specific attribute values # {key: value} creator_values = copy.deepcopy(orig_creator_attributes) diff --git a/openpype/pipeline/publish/publish_plugins.py b/openpype/pipeline/publish/publish_plugins.py index 6e2be1ce2c..47dfaf6b98 100644 --- a/openpype/pipeline/publish/publish_plugins.py +++ b/openpype/pipeline/publish/publish_plugins.py @@ -1,3 +1,4 @@ +import inspect from abc import ABCMeta import pyblish.api @@ -132,6 +133,25 @@ class OpenPypePyblishPluginMixin: ) return attribute_values + @staticmethod + def get_attr_values_from_data_for_plugin(plugin, data): + """Get attribute values for attribute definitions from data. 
+ + Args: + plugin (Union[publish.api.Plugin, Type[publish.api.Plugin]]): The + plugin for which attributes are extracted. + data(dict): Data from instance or context. + """ + + if not inspect.isclass(plugin): + plugin = plugin.__class__ + + return ( + data + .get("publish_attributes", {}) + .get(plugin.__name__, {}) + ) + def get_attr_values_from_data(self, data): """Get attribute values for attribute definitions from data. @@ -139,11 +159,7 @@ class OpenPypePyblishPluginMixin: data(dict): Data from instance or context. """ - return ( - data - .get("publish_attributes", {}) - .get(self.__class__.__name__, {}) - ) + return self.get_attr_values_from_data_for_plugin(self.__class__, data) class OptionalPyblishPluginMixin(OpenPypePyblishPluginMixin): diff --git a/openpype/plugins/publish/collect_audio.py b/openpype/plugins/publish/collect_audio.py index 7d53b24e54..3a0ddb3281 100644 --- a/openpype/plugins/publish/collect_audio.py +++ b/openpype/plugins/publish/collect_audio.py @@ -1,21 +1,27 @@ +import collections import pyblish.api from openpype.client import ( - get_last_version_by_subset_name, + get_assets, + get_subsets, + get_last_versions, get_representations, ) -from openpype.pipeline import ( - legacy_io, - get_representation_path, -) +from openpype.pipeline.load import get_representation_path_with_anatomy -class CollectAudio(pyblish.api.InstancePlugin): +class CollectAudio(pyblish.api.ContextPlugin): """Collect asset's last published audio. The audio subset name searched for is defined in: project settings > Collect Audio + + Note: + The plugin was instance plugin but because of so much queries the + plugin was slowing down whole collection phase a lot thus was + converted to context plugin which requires only 4 queries top. 
""" + label = "Collect Asset Audio" order = pyblish.api.CollectorOrder + 0.1 families = ["review"] @@ -39,67 +45,134 @@ class CollectAudio(pyblish.api.InstancePlugin): audio_subset_name = "audioMain" - def process(self, instance): - if instance.data.get("audio"): - self.log.info( - "Skipping Audio collecion. It is already collected" - ) + def process(self, context): + # Fake filtering by family inside context plugin + filtered_instances = [] + for instance in pyblish.api.instances_by_plugin( + context, self.__class__ + ): + # Skip instances that already have audio filled + if instance.data.get("audio"): + self.log.info( + "Skipping Audio collecion. It is already collected" + ) + continue + filtered_instances.append(instance) + + # Skip if none of instances remained + if not filtered_instances: return # Add audio to instance if exists. + instances_by_asset_name = collections.defaultdict(list) + for instance in filtered_instances: + asset_name = instance.data["asset"] + instances_by_asset_name[asset_name].append(instance) + + asset_names = set(instances_by_asset_name.keys()) self.log.info(( - "Searching for audio subset '{subset}'" - " in asset '{asset}'" + "Searching for audio subset '{subset}' in assets {assets}" ).format( subset=self.audio_subset_name, - asset=instance.data["asset"] + assets=", ".join([ + '"{}"'.format(asset_name) + for asset_name in asset_names + ]) )) - repre_doc = self._get_repre_doc(instance) + # Query all required documents + project_name = context.data["projectName"] + anatomy = context.data["anatomy"] + repre_docs_by_asset_names = self.query_representations( + project_name, asset_names) - # Add audio to instance if representation was found - if repre_doc: - instance.data["audio"] = [{ - "offset": 0, - "filename": get_representation_path(repre_doc) - }] - self.log.info("Audio Data added to instance ...") + for asset_name, instances in instances_by_asset_name.items(): + repre_docs = repre_docs_by_asset_names[asset_name] + if not repre_docs: + 
continue - def _get_repre_doc(self, instance): - cache = instance.context.data.get("__cache_asset_audio") - if cache is None: - cache = {} - instance.context.data["__cache_asset_audio"] = cache - asset_name = instance.data["asset"] + repre_doc = repre_docs[0] + repre_path = get_representation_path_with_anatomy( + repre_doc, anatomy + ) + for instance in instances: + instance.data["audio"] = [{ + "offset": 0, + "filename": repre_path + }] + self.log.info("Audio Data added to instance ...") - # first try to get it from cache - if asset_name in cache: - return cache[asset_name] + def query_representations(self, project_name, asset_names): + """Query representations related to audio subsets for passed assets. - project_name = legacy_io.active_project() + Args: + project_name (str): Project in which we're looking for all + entities. + asset_names (Iterable[str]): Asset names where to look for audio + subsets and their representations. - # Find latest versions document - last_version_doc = get_last_version_by_subset_name( + Returns: + collections.defaultdict[str, List[Dict[Str, Any]]]: Representations + related to audio subsets by asset name. 
+ """ + + output = collections.defaultdict(list) + # Query asset documents + asset_docs = get_assets( project_name, - self.audio_subset_name, - asset_name=asset_name, - fields=["_id"] + asset_names=asset_names, + fields=["_id", "name"] ) - repre_doc = None - if last_version_doc: - # Try to find it's representation (Expected there is only one) - repre_docs = list(get_representations( - project_name, version_ids=[last_version_doc["_id"]] - )) - if not repre_docs: - self.log.warning( - "Version document does not contain any representations" - ) - else: - repre_doc = repre_docs[0] + asset_id_by_name = {} + for asset_doc in asset_docs: + asset_id_by_name[asset_doc["name"]] = asset_doc["_id"] + asset_ids = set(asset_id_by_name.values()) - # update cache - cache[asset_name] = repre_doc + # Query subsets with name define by 'audio_subset_name' attr + # - one or none subsets with the name should be available on an asset + subset_docs = get_subsets( + project_name, + subset_names=[self.audio_subset_name], + asset_ids=asset_ids, + fields=["_id", "parent"] + ) + subset_id_by_asset_id = {} + for subset_doc in subset_docs: + asset_id = subset_doc["parent"] + subset_id_by_asset_id[asset_id] = subset_doc["_id"] - return repre_doc + subset_ids = set(subset_id_by_asset_id.values()) + if not subset_ids: + return output + + # Find all latest versions for the subsets + version_docs_by_subset_id = get_last_versions( + project_name, subset_ids=subset_ids, fields=["_id", "parent"] + ) + version_id_by_subset_id = { + subset_id: version_doc["_id"] + for subset_id, version_doc in version_docs_by_subset_id.items() + } + version_ids = set(version_id_by_subset_id.values()) + if not version_ids: + return output + + # Find representations under latest versions of audio subsets + repre_docs = get_representations( + project_name, version_ids=version_ids + ) + repre_docs_by_version_id = collections.defaultdict(list) + for repre_doc in repre_docs: + version_id = repre_doc["parent"] + 
repre_docs_by_version_id[version_id].append(repre_doc) + + if not repre_docs_by_version_id: + return output + + for asset_name in asset_names: + asset_id = asset_id_by_name.get(asset_name) + subset_id = subset_id_by_asset_id.get(asset_id) + version_id = version_id_by_subset_id.get(subset_id) + output[asset_name] = repre_docs_by_version_id[version_id] + return output diff --git a/openpype/plugins/publish/collect_comment.py b/openpype/plugins/publish/collect_comment.py index 062142ace9..12579cd957 100644 --- a/openpype/plugins/publish/collect_comment.py +++ b/openpype/plugins/publish/collect_comment.py @@ -1,19 +1,123 @@ -""" -Requires: - None -Provides: - context -> comment (str) +"""Collect comment and add option to enter comment per instance. + +Combination of plugins. One define optional input for instances in Publisher +UI (CollectInstanceCommentDef) and second cares that each instance during +collection has available "comment" key in data (CollectComment). + +Plugin 'CollectInstanceCommentDef' define "comment" attribute which won't be +filled with any value if instance does not match families filter or when +plugin is disabled. + +Plugin 'CollectComment' makes sure that each instance in context has +available "comment" key in data which can be set to 'str' or 'None' if is not +set. +- In case instance already has filled comment the plugin's logic is skipped +- The comment is always set and value should be always 'str' even if is empty + +Why are separated: +- 'CollectInstanceCommentDef' can have specific settings to show comment + attribute only to defined families in publisher UI +- 'CollectComment' will run all the time + +Todos: + The comment per instance is not sent via farm. 
""" import pyblish.api +from openpype.lib.attribute_definitions import TextDef +from openpype.pipeline.publish import OpenPypePyblishPluginMixin -class CollectComment(pyblish.api.ContextPlugin): - """This plug-ins displays the comment dialog box per default""" +class CollectInstanceCommentDef( + pyblish.api.ContextPlugin, + OpenPypePyblishPluginMixin +): + label = "Comment per instance" + targets = ["local"] + # Disable plugin by default + families = [] + enabled = False - label = "Collect Comment" - order = pyblish.api.CollectorOrder + def process(self, instance): + pass + + @classmethod + def apply_settings(cls, project_setting, _): + plugin_settings = project_setting["global"]["publish"].get( + "collect_comment_per_instance" + ) + if not plugin_settings: + return + + if plugin_settings.get("enabled") is not None: + cls.enabled = plugin_settings["enabled"] + + if plugin_settings.get("families") is not None: + cls.families = plugin_settings["families"] + + @classmethod + def get_attribute_defs(cls): + return [ + TextDef("comment", label="Comment") + ] + + +class CollectComment( + pyblish.api.ContextPlugin, + OpenPypePyblishPluginMixin +): + """Collect comment per each instance. + + Plugin makes sure each instance to publish has set "comment" in data so any + further plugin can use it directly. 
+ """ + + label = "Collect Instance Comment" + order = pyblish.api.CollectorOrder + 0.49 def process(self, context): - comment = (context.data.get("comment") or "").strip() - context.data["comment"] = comment + context_comment = self.cleanup_comment(context.data.get("comment")) + # Set it back + context.data["comment"] = context_comment + for instance in context: + instance_label = str(instance) + # Check if comment is already set + instance_comment = self.cleanup_comment( + instance.data.get("comment")) + + # If comment on instance is not set then look for attributes + if not instance_comment: + attr_values = self.get_attr_values_from_data_for_plugin( + CollectInstanceCommentDef, instance.data + ) + instance_comment = self.cleanup_comment( + attr_values.get("comment") + ) + + # Use context comment if instance has all options of comment + # empty + if not instance_comment: + instance_comment = context_comment + + instance.data["comment"] = instance_comment + if instance_comment: + msg_end = " has comment set to: \"{}\"".format( + instance_comment) + else: + msg_end = " does not have set comment" + self.log.debug("Instance {} {}".format(instance_label, msg_end)) + + def cleanup_comment(self, comment): + """Cleanup comment value. + + Args: + comment (Union[str, None]): Comment value from data. + + Returns: + str: Cleaned comment which is stripped or empty string if input + was 'None'. 
+ """ + + if comment: + return comment.strip() + return "" diff --git a/openpype/plugins/publish/collect_resources_path.py b/openpype/plugins/publish/collect_resources_path.py index 00f65b8b67..a2d5b95ab2 100644 --- a/openpype/plugins/publish/collect_resources_path.py +++ b/openpype/plugins/publish/collect_resources_path.py @@ -21,6 +21,7 @@ class CollectResourcesPath(pyblish.api.InstancePlugin): order = pyblish.api.CollectorOrder + 0.495 families = ["workfile", "pointcache", + "proxyAbc", "camera", "animation", "model", @@ -50,6 +51,7 @@ class CollectResourcesPath(pyblish.api.InstancePlugin): "source", "assembly", "fbx", + "gltf", "textures", "action", "background", diff --git a/openpype/plugins/publish/extract_burnin.py b/openpype/plugins/publish/extract_burnin.py index 4179199317..f113e61bb0 100644 --- a/openpype/plugins/publish/extract_burnin.py +++ b/openpype/plugins/publish/extract_burnin.py @@ -1,5 +1,4 @@ import os -import re import json import copy import tempfile @@ -21,6 +20,7 @@ from openpype.lib import ( CREATE_NO_WINDOW ) +from openpype.lib.profiles_filtering import filter_profiles class ExtractBurnin(publish.Extractor): @@ -34,6 +34,7 @@ class ExtractBurnin(publish.Extractor): label = "Extract burnins" order = pyblish.api.ExtractorOrder + 0.03 + families = ["review", "burnin"] hosts = [ "nuke", @@ -53,6 +54,7 @@ class ExtractBurnin(publish.Extractor): "flame" # "resolve" ] + optional = True positions = [ @@ -69,11 +71,15 @@ class ExtractBurnin(publish.Extractor): "y_offset": 5 } - # Preset attributes + # Configurable by Settings profiles = None options = None def process(self, instance): + if not self.profiles: + self.log.warning("No profiles present for create burnin") + return + # QUESTION what is this for and should we raise an exception? 
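As an aside, the comment fallback described in the `CollectComment` docstring above — the instance's own comment first, then the per-instance `comment` publish attribute, then the context-wide comment, with `None` normalized to an empty string — can be sketched without pyblish as follows. The function names here are illustrative only, not part of the OpenPype API:

```python
def cleanup_comment(comment):
    # Strip surrounding whitespace; treat None (or empty) as "".
    if comment:
        return comment.strip()
    return ""


def resolve_comment(instance_comment, attr_comment, context_comment):
    # First non-empty value wins, mirroring the plugin's fallback order:
    # instance data -> per-instance publish attribute -> context comment.
    for value in (instance_comment, attr_comment, context_comment):
        cleaned = cleanup_comment(value)
        if cleaned:
            return cleaned
    return ""
```

For example, `resolve_comment(None, "  per-instance note  ", "global note")` returns `"per-instance note"`: the attribute value is used because the instance comment is empty, and it is stripped before being stored.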
if "representations" not in instance.data: raise RuntimeError("Burnin needs already created mov to work on.") @@ -137,18 +143,29 @@ class ExtractBurnin(publish.Extractor): return filtered_repres def main_process(self, instance): - # TODO get these data from context host_name = instance.context.data["hostName"] - task_name = os.environ["AVALON_TASK"] - family = self.main_family_from_instance(instance) + family = instance.data["family"] + task_data = instance.data["anatomyData"].get("task", {}) + task_name = task_data.get("name") + task_type = task_data.get("type") + subset = instance.data["subset"] + + filtering_criteria = { + "hosts": host_name, + "families": family, + "task_names": task_name, + "task_types": task_type, + "subset": subset + } + profile = filter_profiles(self.profiles, filtering_criteria, + logger=self.log) - # Find profile most matching current host, task and instance family - profile = self.find_matching_profile(host_name, task_name, family) if not profile: self.log.info(( "Skipped instance. None of profiles in presets are for" - " Host: \"{}\" | Family: \"{}\" | Task \"{}\"" - ).format(host_name, family, task_name)) + " Host: \"{}\" | Families: \"{}\" | Task \"{}\"" + " | Task type \"{}\" | Subset \"{}\" " + ).format(host_name, family, task_name, task_type, subset)) return self.log.debug("profile: {}".format(profile)) @@ -158,7 +175,8 @@ class ExtractBurnin(publish.Extractor): if not burnin_defs: self.log.info(( "Skipped instance. 
Burnin definitions are not set for profile" - " Host: \"{}\" | Family: \"{}\" | Task \"{}\" | Profile \"{}\"" + " Host: \"{}\" | Families: \"{}\" | Task \"{}\"" + " | Profile \"{}\"" ).format(host_name, family, task_name, profile)) return @@ -468,7 +486,7 @@ class ExtractBurnin(publish.Extractor): burnin_data.update({ "version": int(version), - "comment": context.data.get("comment") or "" + "comment": instance.data["comment"] }) intent_label = context.data.get("intent") or "" @@ -693,130 +711,6 @@ class ExtractBurnin(publish.Extractor): ) }) - def find_matching_profile(self, host_name, task_name, family): - """ Filter profiles by Host name, Task name and main Family. - - Filtering keys are "hosts" (list), "tasks" (list), "families" (list). - If key is not find or is empty than it's expected to match. - - Args: - profiles (list): Profiles definition from presets. - host_name (str): Current running host name. - task_name (str): Current context task name. - family (str): Main family of current Instance. - - Returns: - dict/None: Return most matching profile or None if none of profiles - match at least one criteria. 
- """ - - matching_profiles = None - highest_points = -1 - for profile in self.profiles or tuple(): - profile_points = 0 - profile_value = [] - - # Host filtering - host_names = profile.get("hosts") - match = self.validate_value_by_regexes(host_name, host_names) - if match == -1: - continue - profile_points += match - profile_value.append(bool(match)) - - # Task filtering - task_names = profile.get("tasks") - match = self.validate_value_by_regexes(task_name, task_names) - if match == -1: - continue - profile_points += match - profile_value.append(bool(match)) - - # Family filtering - families = profile.get("families") - match = self.validate_value_by_regexes(family, families) - if match == -1: - continue - profile_points += match - profile_value.append(bool(match)) - - if profile_points > highest_points: - matching_profiles = [] - highest_points = profile_points - - if profile_points == highest_points: - profile["__value__"] = profile_value - matching_profiles.append(profile) - - if not matching_profiles: - return - - if len(matching_profiles) == 1: - return matching_profiles[0] - - return self.profile_exclusion(matching_profiles) - - def profile_exclusion(self, matching_profiles): - """Find out most matching profile by host, task and family match. - - Profiles are selectivelly filtered. Each profile should have - "__value__" key with list of booleans. Each boolean represents - existence of filter for specific key (host, taks, family). - Profiles are looped in sequence. In each sequence are split into - true_list and false_list. For next sequence loop are used profiles in - true_list if there are any profiles else false_list is used. - - Filtering ends when only one profile left in true_list. Or when all - existence booleans loops passed, in that case first profile from left - profiles is returned. - - Args: - matching_profiles (list): Profiles with same values. - - Returns: - dict: Most matching profile. 
- """ - self.log.info( - "Search for first most matching profile in match order:" - " Host name -> Task name -> Family." - ) - # Filter all profiles with highest points value. First filter profiles - # with matching host if there are any then filter profiles by task - # name if there are any and lastly filter by family. Else use first in - # list. - idx = 0 - final_profile = None - while True: - profiles_true = [] - profiles_false = [] - for profile in matching_profiles: - value = profile["__value__"] - # Just use first profile when idx is greater than values. - if not idx < len(value): - final_profile = profile - break - - if value[idx]: - profiles_true.append(profile) - else: - profiles_false.append(profile) - - if final_profile is not None: - break - - if profiles_true: - matching_profiles = profiles_true - else: - matching_profiles = profiles_false - - if len(matching_profiles) == 1: - final_profile = matching_profiles[0] - break - idx += 1 - - final_profile.pop("__value__") - return final_profile - def filter_burnins_defs(self, profile, instance): """Filter outputs by their values from settings. @@ -909,56 +803,6 @@ class ExtractBurnin(publish.Extractor): return True return False - def compile_list_of_regexes(self, in_list): - """Convert strings in entered list to compiled regex objects.""" - regexes = [] - if not in_list: - return regexes - - for item in in_list: - if not item: - continue - - try: - regexes.append(re.compile(item)) - except TypeError: - self.log.warning(( - "Invalid type \"{}\" value \"{}\"." - " Expected string based object. Skipping." - ).format(str(type(item)), str(item))) - - return regexes - - def validate_value_by_regexes(self, value, in_list): - """Validate in any regexe from list match entered value. - - Args: - in_list (list): List with regexes. - value (str): String where regexes is checked. - - Returns: - int: Returns `0` when list is not set or is empty. 
Returns `1` when - any regex match value and returns `-1` when none of regexes - match value entered. - """ - if not in_list: - return 0 - - output = -1 - regexes = self.compile_list_of_regexes(in_list) - for regex in regexes: - if re.match(regex, value): - output = 1 - break - return output - - def main_family_from_instance(self, instance): - """Return main family of entered instance.""" - family = instance.data.get("family") - if not family: - family = instance.data["families"][0] - return family - def families_from_instance(self, instance): """Return all families of entered instance.""" families = [] diff --git a/openpype/plugins/publish/extract_hierarchy_avalon.py b/openpype/plugins/publish/extract_hierarchy_avalon.py index 6b4e5f48c5..b2a6adc210 100644 --- a/openpype/plugins/publish/extract_hierarchy_avalon.py +++ b/openpype/plugins/publish/extract_hierarchy_avalon.py @@ -1,9 +1,8 @@ +import collections from copy import deepcopy import pyblish.api from openpype.client import ( - get_project, - get_asset_by_id, - get_asset_by_name, + get_assets, get_archived_assets ) from openpype.pipeline import legacy_io @@ -17,7 +16,6 @@ class ExtractHierarchyToAvalon(pyblish.api.ContextPlugin): families = ["clip", "shot"] def process(self, context): - # processing starts here if "hierarchyContext" not in context.data: self.log.info("skipping IntegrateHierarchyToAvalon") return @@ -25,161 +23,236 @@ class ExtractHierarchyToAvalon(pyblish.api.ContextPlugin): if not legacy_io.Session: legacy_io.install() - project_name = legacy_io.active_project() hierarchy_context = self._get_active_assets(context) self.log.debug("__ hierarchy_context: {}".format(hierarchy_context)) - self.project = None - self.import_to_avalon(context, project_name, hierarchy_context) + project_name = context.data["projectName"] + asset_names = self.extract_asset_names(hierarchy_context) + + asset_docs_by_name = {} + for asset_doc in get_assets(project_name, asset_names=asset_names): + name = 
asset_doc["name"] + asset_docs_by_name[name] = asset_doc + + archived_asset_docs_by_name = collections.defaultdict(list) + for asset_doc in get_archived_assets( + project_name, asset_names=asset_names + ): + name = asset_doc["name"] + archived_asset_docs_by_name[name].append(asset_doc) + + project_doc = None + hierarchy_queue = collections.deque() + for name, data in hierarchy_context.items(): + hierarchy_queue.append((name, data, None)) + + while hierarchy_queue: + item = hierarchy_queue.popleft() + name, entity_data, parent = item - def import_to_avalon( - self, - context, - project_name, - input_data, - parent=None, - ): - for name in input_data: - self.log.info("input_data[name]: {}".format(input_data[name])) - entity_data = input_data[name] entity_type = entity_data["entity_type"] - - data = {} - data["entityType"] = entity_type - - # Custom attributes. - for k, val in entity_data.get("custom_attributes", {}).items(): - data[k] = val - - if entity_type.lower() != "project": - data["inputs"] = entity_data.get("inputs", []) - - # Tasks. - tasks = entity_data.get("tasks", {}) - if tasks is not None or len(tasks) > 0: - data["tasks"] = tasks - parents = [] - visualParent = None - # do not store project"s id as visualParent - if self.project is not None: - if self.project["_id"] != parent["_id"]: - visualParent = parent["_id"] - parents.extend( - parent.get("data", {}).get("parents", []) - ) - parents.append(parent["name"]) - data["visualParent"] = visualParent - data["parents"] = parents - - update_data = True - # Process project if entity_type.lower() == "project": - entity = get_project(project_name) - # TODO: should be in validator? 
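For context, the `collections.deque` loop introduced above replaces the recursive `import_to_avalon` with an explicit breadth-first walk of the nested hierarchy context, so each parent is synced before its children. A minimal standalone sketch of that traversal — assuming children live under a `"childs"` key, and with otherwise illustrative names:

```python
import collections


def iter_hierarchy(hierarchy_context):
    """Yield (name, entity_data, parent_name) in breadth-first order."""
    queue = collections.deque(
        (name, data, None) for name, data in hierarchy_context.items()
    )
    while queue:
        name, data, parent_name = queue.popleft()
        yield name, data, parent_name
        # Enqueue children after yielding the parent, so a parent is
        # always processed before any of its descendants.
        for child_name, child_data in (data.get("childs") or {}).items():
            queue.append((child_name, child_data, name))


hierarchy = {
    "ep01": {
        "entity_type": "episode",
        "childs": {
            "sh010": {"entity_type": "shot"},
            "sh020": {"entity_type": "shot"},
        },
    },
}
order = [(name, parent) for name, _, parent in iter_hierarchy(hierarchy)]
```

Processing parents before children is the property the plugin relies on: when a child asset is synced, its parent's document already exists and can supply `visualParent` and the `parents` list.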
- assert (entity is not None), "Did not find project in DB" - - # get data from already existing project - cur_entity_data = entity.get("data") or {} - cur_entity_data.update(data) - data = cur_entity_data - - self.project = entity - # Raise error if project or parent are not set - elif self.project is None or parent is None: - raise AssertionError( - "Collected items are not in right order!" + new_parent = project_doc = self.sync_project( + context, + entity_data ) - # Else process assset + else: - entity = get_asset_by_name(project_name, name) - if entity: - # Do not override data, only update - cur_entity_data = entity.get("data") or {} - entity_tasks = cur_entity_data["tasks"] or {} - - # create tasks as dict by default - if not entity_tasks: - cur_entity_data["tasks"] = entity_tasks - - new_tasks = data.pop("tasks", {}) - if "tasks" not in cur_entity_data and not new_tasks: - continue - for task_name in new_tasks: - if task_name in entity_tasks.keys(): - continue - cur_entity_data["tasks"][task_name] = new_tasks[ - task_name] - cur_entity_data.update(data) - data = cur_entity_data - else: - # Skip updating data - update_data = False - - archived_entities = get_archived_assets( - project_name, - asset_names=[name] - ) - unarchive_entity = None - for archived_entity in archived_entities: - archived_parents = ( - archived_entity - .get("data", {}) - .get("parents") - ) - if data["parents"] == archived_parents: - unarchive_entity = archived_entity - break - - if unarchive_entity is None: - # Create entity if doesn"t exist - entity = self.create_avalon_asset( - name, data - ) - else: - # Unarchive if entity was archived - entity = self.unarchive_entity(unarchive_entity, data) - + new_parent = self.sync_asset( + name, + entity_data, + parent, + project_doc, + asset_docs_by_name, + archived_asset_docs_by_name + ) # make sure all relative instances have correct avalon data self._set_avalon_data_to_relative_instances( context, project_name, - entity + new_parent ) - if 
update_data: - # Update entity data with input data - legacy_io.update_many( - {"_id": entity["_id"]}, - {"$set": {"data": data}} + children = entity_data.get("childs") + if not children: + continue + + for child_name, child_data in children.items(): + hierarchy_queue.append((child_name, child_data, new_parent)) + + def extract_asset_names(self, hierarchy_context): + """Extract all possible asset names from hierarchy context. + + Args: + hierarchy_context (Dict[str, Any]): Nested hierarchy structure. + + Returns: + Set[str]: All asset names from the hierarchy structure. + """ + + hierarchy_queue = collections.deque() + for name, data in hierarchy_context.items(): + hierarchy_queue.append((name, data)) + + asset_names = set() + while hierarchy_queue: + item = hierarchy_queue.popleft() + name, data = item + if data["entity_type"].lower() != "project": + asset_names.add(name) + + children = data.get("childs") + if children: + for child_name, child_data in children.items(): + hierarchy_queue.append((child_name, child_data)) + return asset_names + + def sync_project(self, context, entity_data): + project_doc = context.data["projectEntity"] + + if "data" not in project_doc: + project_doc["data"] = {} + current_data = project_doc["data"] + + changes = {} + entity_type = entity_data["entity_type"] + if current_data.get("entityType") != entity_type: + changes["entityType"] = entity_type + + # Custom attributes. 
+ attributes = entity_data.get("custom_attributes") or {} + for key, value in attributes.items(): + if key not in current_data or current_data[key] != value: + update_key = "data.{}".format(key) + changes[update_key] = value + current_data[key] = value + + if changes: + # Update entity data with input data + legacy_io.update_one( + {"_id": project_doc["_id"]}, + {"$set": changes} + ) + return project_doc + + def sync_asset( + self, + asset_name, + entity_data, + parent, + project, + asset_docs_by_name, + archived_asset_docs_by_name + ): + # Prepare data for new asset or for update comparison + data = { + "entityType": entity_data["entity_type"] + } + + # Custom attributes. + attributes = entity_data.get("custom_attributes") or {} + for key, value in attributes.items(): + data[key] = value + + data["inputs"] = entity_data.get("inputs") or [] + + # Parents and visual parent are empty if parent is project + parents = [] + parent_id = None + if project["_id"] != parent["_id"]: + parent_id = parent["_id"] + # Use parent's parents as source value + parents.extend(parent["data"]["parents"]) + # Add parent's name to parents + parents.append(parent["name"]) + + data["visualParent"] = parent_id + data["parents"] = parents + + asset_doc = asset_docs_by_name.get(asset_name) + # --- Create/Unarchive asset and end --- + if not asset_doc: + # Just use tasks from entity data as they are + # - this is different from the case when tasks are updated + data["tasks"] = entity_data.get("tasks") or {} + archived_asset_doc = None + for archived_entity in archived_asset_docs_by_name[asset_name]: + archived_parents = ( + archived_entity + .get("data", {}) + .get("parents") + ) + if data["parents"] == archived_parents: + archived_asset_doc = archived_entity + break + + # Create entity if doesn't exist + if archived_asset_doc is None: + return self.create_avalon_asset( + asset_name, data, project ) - if "childs" in entity_data: - self.import_to_avalon( - context, project_name, 
entity_data["childs"], entity - ) + return self.unarchive_entity( + archived_asset_doc, data, project + ) - def unarchive_entity(self, entity, data): + # --- Update existing asset --- + # Make sure current entity has "data" key + if "data" not in asset_doc: + asset_doc["data"] = {} + cur_entity_data = asset_doc["data"] + cur_entity_tasks = cur_entity_data.get("tasks") or {} + + # Tasks + data["tasks"] = {} + new_tasks = entity_data.get("tasks") or {} + for task_name, task_info in new_tasks.items(): + task_info = deepcopy(task_info) + if task_name in cur_entity_tasks: + src_task_info = deepcopy(cur_entity_tasks[task_name]) + src_task_info.update(task_info) + task_info = src_task_info + + data["tasks"][task_name] = task_info + + changes = {} + for key, value in data.items(): + if key not in cur_entity_data or value != cur_entity_data[key]: + update_key = "data.{}".format(key) + changes[update_key] = value + cur_entity_data[key] = value + + # Update asset in database if necessary + if changes: + # Update entity data with input data + legacy_io.update_one( + {"_id": asset_doc["_id"]}, + {"$set": changes} + ) + return asset_doc + + def unarchive_entity(self, archived_doc, data, project): # Unarchived asset should not use same data - new_entity = { - "_id": entity["_id"], + asset_doc = { + "_id": archived_doc["_id"], "schema": "openpype:asset-3.0", - "name": entity["name"], - "parent": self.project["_id"], + "name": archived_doc["name"], + "parent": project["_id"], "type": "asset", "data": data } legacy_io.replace_one( - {"_id": entity["_id"]}, - new_entity + {"_id": archived_doc["_id"]}, + asset_doc ) - return new_entity + return asset_doc - def create_avalon_asset(self, name, data): + def create_avalon_asset(self, name, data, project): asset_doc = { "schema": "openpype:asset-3.0", "name": name, - "parent": self.project["_id"], + "parent": project["_id"], "type": "asset", "data": data } @@ -194,27 +267,27 @@ class ExtractHierarchyToAvalon(pyblish.api.ContextPlugin): 
project_name, asset_doc ): + asset_name = asset_doc["name"] + new_parents = asset_doc["data"]["parents"] + hierarchy = "/".join(new_parents) + parent_name = project_name + if new_parents: + parent_name = new_parents[-1] + for instance in context: - # Skip instance if has filled asset entity - if instance.data.get("assetEntity"): + # Skip if instance asset does not match + instance_asset_name = instance.data.get("asset") + if asset_name != instance_asset_name: continue - asset_name = asset_doc["name"] - inst_asset_name = instance.data["asset"] - if asset_name == inst_asset_name: - instance.data["assetEntity"] = asset_doc + instance_asset_doc = instance.data.get("assetEntity") + # Update asset entity with new possible changes of asset document + instance.data["assetEntity"] = asset_doc - # get parenting data - parents = asset_doc["data"].get("parents") or list() - - # equire only relative parent - parent_name = project_name - if parents: - parent_name = parents[-1] - - # update avalon data on instance + # Update anatomy data if asset was not set on instance + if not instance_asset_doc: instance.data["anatomyData"].update({ - "hierarchy": "/".join(parents), + "hierarchy": hierarchy, "task": {}, "parent": parent_name }) @@ -241,7 +314,7 @@ class ExtractHierarchyToAvalon(pyblish.api.ContextPlugin): hierarchy_context = context.data["hierarchyContext"] active_assets = [] - # filter only the active publishing insatnces + # filter only the active publishing instances for instance in context: if instance.data.get("publish") is False: continue diff --git a/openpype/plugins/publish/extract_review.py b/openpype/plugins/publish/extract_review.py index f299d1c6e9..9310923a9f 100644 --- a/openpype/plugins/publish/extract_review.py +++ b/openpype/plugins/publish/extract_review.py @@ -598,8 +598,12 @@ class ExtractReview(pyblish.api.InstancePlugin): if temp_data["input_is_sequence"]: # Set start frame of input sequence (just frame in filename) # - definition of input filepath + # - 
add handle start if output should be without handles + start_number = temp_data["first_sequence_frame"] + if temp_data["without_handles"] and temp_data["handles_are_set"]: + start_number += temp_data["handle_start"] ffmpeg_input_args.extend([ - "-start_number", str(temp_data["first_sequence_frame"]) + "-start_number", str(start_number) ]) # TODO add fps mapping `{fps: fraction}` ? @@ -609,49 +613,50 @@ class ExtractReview(pyblish.api.InstancePlugin): # "23.976": "24000/1001" # } # Add framerate to input when input is sequence - ffmpeg_input_args.append( - "-framerate {}".format(temp_data["fps"]) - ) + ffmpeg_input_args.extend([ + "-framerate", str(temp_data["fps"]) + ]) + # Add duration of an input sequence if output is video + if not temp_data["output_is_sequence"]: + ffmpeg_input_args.extend([ + "-to", "{:0.10f}".format(duration_seconds) + ]) if temp_data["output_is_sequence"]: # Set start frame of output sequence (just frame in filename) # - this is definition of an output - ffmpeg_output_args.append( - "-start_number {}".format(temp_data["output_frame_start"]) - ) + ffmpeg_output_args.extend([ + "-start_number", str(temp_data["output_frame_start"]) + ]) # Change output's duration and start point if should not contain # handles - start_sec = 0 if temp_data["without_handles"] and temp_data["handles_are_set"]: - # Set start time without handles - # - check if handle_start is bigger than 0 to avoid zero division - if temp_data["handle_start"] > 0: - start_sec = float(temp_data["handle_start"]) / temp_data["fps"] - ffmpeg_input_args.append("-ss {:0.10f}".format(start_sec)) + # Set output duration in seconds + ffmpeg_output_args.extend([ + "-t", "{:0.10}".format(duration_seconds) + ]) - # Set output duration inn seconds - ffmpeg_output_args.append("-t {:0.10}".format(duration_seconds)) + # Add -ss (start offset in seconds) if input is not sequence + if not temp_data["input_is_sequence"]: + start_sec = float(temp_data["handle_start"]) / temp_data["fps"] + # Set start 
time without handles + # - Skip if start sec is 0.0 + if start_sec > 0.0: + ffmpeg_input_args.extend([ + "-ss", "{:0.10f}".format(start_sec) + ]) # Set frame range of output when input or output is sequence elif temp_data["output_is_sequence"]: - ffmpeg_output_args.append("-frames:v {}".format(output_frames_len)) - - # Add duration of an input sequence if output is video - if ( - temp_data["input_is_sequence"] - and not temp_data["output_is_sequence"] - ): - ffmpeg_input_args.append("-to {:0.10f}".format( - duration_seconds + start_sec - )) + ffmpeg_output_args.extend([ + "-frames:v", str(output_frames_len) + ]) # Add video/image input path - ffmpeg_input_args.append( - "-i {}".format( - path_to_subprocess_arg(temp_data["full_input_path"]) - ) - ) + ffmpeg_input_args.extend([ + "-i", path_to_subprocess_arg(temp_data["full_input_path"]) + ]) # Add audio arguments if there are any. Skipped when output are images. if not temp_data["output_ext_is_image"] and temp_data["with_audio"]: diff --git a/openpype/plugins/publish/extract_thumbnail_from_source.py b/openpype/plugins/publish/extract_thumbnail_from_source.py index 8da1213807..03df1455e2 100644 --- a/openpype/plugins/publish/extract_thumbnail_from_source.py +++ b/openpype/plugins/publish/extract_thumbnail_from_source.py @@ -73,6 +73,7 @@ class ExtractThumbnailFromSource(pyblish.api.InstancePlugin): "Adding thumbnail representation: {}".format(new_repre) ) instance.data["representations"].append(new_repre) + instance.data["thumbnailPath"] = dst_filepath def _create_thumbnail(self, context, thumbnail_source): if not thumbnail_source: diff --git a/openpype/plugins/publish/integrate.py b/openpype/plugins/publish/integrate.py index e19c3eee7c..5da4c76539 100644 --- a/openpype/plugins/publish/integrate.py +++ b/openpype/plugins/publish/integrate.py @@ -80,6 +80,7 @@ class IntegrateAsset(pyblish.api.InstancePlugin): order = pyblish.api.IntegratorOrder families = ["workfile", "pointcache", + "proxyAbc", "camera", 
"animation", "model", @@ -110,6 +111,7 @@ class IntegrateAsset(pyblish.api.InstancePlugin): "image", "assembly", "fbx", + "gltf", "textures", "action", "harmony.template", @@ -768,7 +770,7 @@ class IntegrateAsset(pyblish.api.InstancePlugin): "time": context.data["time"], "author": context.data["user"], "source": source, - "comment": context.data.get("comment"), + "comment": instance.data["comment"], "machine": context.data.get("machine"), "fps": instance.data.get("fps", context.data.get("fps")) } diff --git a/openpype/plugins/publish/integrate_legacy.py b/openpype/plugins/publish/integrate_legacy.py index 536ab83f2c..8f3b0d4220 100644 --- a/openpype/plugins/publish/integrate_legacy.py +++ b/openpype/plugins/publish/integrate_legacy.py @@ -76,6 +76,7 @@ class IntegrateAssetNew(pyblish.api.InstancePlugin): order = pyblish.api.IntegratorOrder + 0.00001 families = ["workfile", "pointcache", + "proxyAbc", "camera", "animation", "model", @@ -106,6 +107,7 @@ class IntegrateAssetNew(pyblish.api.InstancePlugin): "image", "assembly", "fbx", + "gltf", "textures", "action", "harmony.template", @@ -968,7 +970,7 @@ class IntegrateAssetNew(pyblish.api.InstancePlugin): "time": context.data["time"], "author": context.data["user"], "source": source, - "comment": context.data.get("comment"), + "comment": instance.data["comment"], "machine": context.data.get("machine"), "fps": context.data.get( "fps", instance.data.get("fps") diff --git a/openpype/plugins/publish/integrate_thumbnail.py b/openpype/plugins/publish/integrate_thumbnail.py index f74c3d9609..809a1782e0 100644 --- a/openpype/plugins/publish/integrate_thumbnail.py +++ b/openpype/plugins/publish/integrate_thumbnail.py @@ -102,8 +102,31 @@ class IntegrateThumbnails(pyblish.api.ContextPlugin): thumbnail_root ) + def _get_thumbnail_from_instance(self, instance): + # 1. 
Look for thumbnail in published representations + published_repres = instance.data.get("published_representations") + path = self._get_thumbnail_path_from_published(published_repres) + if path and os.path.exists(path): + return path + + if path: + self.log.warning( + "Could not find published thumbnail path {}".format(path) + ) + + # 2. Look for thumbnail in "not published" representations + thumbnail_path = self._get_thumbnail_path_from_unpublished(instance) + if thumbnail_path and os.path.exists(thumbnail_path): + return thumbnail_path + + # 3. Look for thumbnail path on instance in 'thumbnailPath' + thumbnail_path = instance.data.get("thumbnailPath") + if thumbnail_path and os.path.exists(thumbnail_path): + return thumbnail_path + return None + def _prepare_instances(self, context): - context_thumbnail_path = context.get("thumbnailPath") + context_thumbnail_path = context.data.get("thumbnailPath") valid_context_thumbnail = False if context_thumbnail_path and os.path.exists(context_thumbnail_path): valid_context_thumbnail = True @@ -122,8 +145,7 @@ class IntegrateThumbnails(pyblish.api.ContextPlugin): continue # Find thumbnail path on instance - thumbnail_path = self._get_instance_thumbnail_path( - published_repres) + thumbnail_path = self._get_thumbnail_from_instance(instance) if thumbnail_path: self.log.debug(( "Found thumbnail path for instance \"{}\"." 
@@ -157,7 +179,10 @@ class IntegrateThumbnails(pyblish.api.ContextPlugin):
         for repre_info in published_representations.values():
             return repre_info["representation"]["parent"]
 
-    def _get_instance_thumbnail_path(self, published_representations):
+    def _get_thumbnail_path_from_published(self, published_representations):
+        if not published_representations:
+            return None
+
         thumb_repre_doc = None
         for repre_info in published_representations.values():
             repre_doc = repre_info["representation"]
@@ -179,6 +204,38 @@ class IntegrateThumbnails(pyblish.api.ContextPlugin):
             return None
         return os.path.normpath(path)
 
+    def _get_thumbnail_path_from_unpublished(self, instance):
+        repres = instance.data.get("representations")
+        if not repres:
+            return None
+
+        thumbnail_repre = next(
+            (
+                repre
+                for repre in repres
+                if repre["name"] == "thumbnail"
+            ),
+            None
+        )
+        if not thumbnail_repre:
+            return None
+
+        staging_dir = thumbnail_repre.get("stagingDir")
+        if not staging_dir:
+            staging_dir = instance.data.get("stagingDir")
+
+        filename = thumbnail_repre.get("files")
+        if not staging_dir or not filename:
+            return None
+
+        if isinstance(filename, (list, tuple, set)):
+            filename = list(filename)[0]
+
+        thumbnail_path = os.path.join(staging_dir, filename)
+        if os.path.exists(thumbnail_path):
+            return thumbnail_path
+        return None
+
     def _integrate_thumbnails(
         self,
         filtered_instance_items,
diff --git a/openpype/resources/app_icons/3dsmax.png b/openpype/resources/app_icons/3dsmax.png
new file mode 100644
index 0000000000..9ebdf6099f
Binary files /dev/null and b/openpype/resources/app_icons/3dsmax.png differ
diff --git a/openpype/resources/app_icons/celaction.png b/openpype/resources/app_icons/celaction.png
new file mode 100644
index 0000000000..86ac092365
Binary files /dev/null and b/openpype/resources/app_icons/celaction.png differ
diff --git a/openpype/resources/app_icons/celaction_local.png b/openpype/resources/app_icons/celaction_local.png
deleted file mode 100644
index 3a8abe6dbc..0000000000
Binary files a/openpype/resources/app_icons/celaction_local.png and /dev/null differ diff --git a/openpype/resources/app_icons/celaction_remotel.png b/openpype/resources/app_icons/celaction_remotel.png deleted file mode 100644 index 320e8173eb..0000000000 Binary files a/openpype/resources/app_icons/celaction_remotel.png and /dev/null differ diff --git a/openpype/settings/defaults/project_settings/celaction.json b/openpype/settings/defaults/project_settings/celaction.json index a4a321fb27..dbe5625f06 100644 --- a/openpype/settings/defaults/project_settings/celaction.json +++ b/openpype/settings/defaults/project_settings/celaction.json @@ -1,13 +1,9 @@ { "publish": { - "ExtractCelactionDeadline": { - "enabled": true, - "deadline_department": "", - "deadline_priority": 50, - "deadline_pool": "", - "deadline_pool_secondary": "", - "deadline_group": "", - "deadline_chunk_size": 10 + "CollectRenderPath": { + "output_extension": "png", + "anatomy_template_key_render_files": "render", + "anatomy_template_key_metadata": "render" } } } \ No newline at end of file diff --git a/openpype/settings/defaults/project_settings/deadline.json b/openpype/settings/defaults/project_settings/deadline.json index a6e7b4a94a..6e1c0f3540 100644 --- a/openpype/settings/defaults/project_settings/deadline.json +++ b/openpype/settings/defaults/project_settings/deadline.json @@ -70,6 +70,16 @@ "department": "", "multiprocess": true }, + "CelactionSubmitDeadline": { + "enabled": true, + "deadline_department": "", + "deadline_priority": 50, + "deadline_pool": "", + "deadline_pool_secondary": "", + "deadline_group": "", + "deadline_chunk_size": 10, + "deadline_job_delay": "00:00:00:00" + }, "ProcessSubmittedJobOnFarm": { "enabled": true, "deadline_department": "", diff --git a/openpype/settings/defaults/project_settings/global.json b/openpype/settings/defaults/project_settings/global.json index 46b8b1b0c8..8c56500646 100644 --- a/openpype/settings/defaults/project_settings/global.json +++ 
b/openpype/settings/defaults/project_settings/global.json @@ -24,6 +24,10 @@ ], "skip_hosts_headless_publish": [] }, + "collect_comment_per_instance": { + "enabled": false, + "families": [] + }, "ValidateEditorialAssetName": { "enabled": true, "optional": false @@ -205,6 +209,9 @@ { "families": [], "hosts": [], + "task_types": [], + "task_names": [], + "subsets": [], "burnins": { "burnin": { "TOP_LEFT": "{yy}-{mm}-{dd}", diff --git a/openpype/settings/defaults/project_settings/maya.json b/openpype/settings/defaults/project_settings/maya.json index 988c0e777a..5f40c2a10c 100644 --- a/openpype/settings/defaults/project_settings/maya.json +++ b/openpype/settings/defaults/project_settings/maya.json @@ -59,6 +59,7 @@ "default_render_image_folder": "renders/maya", "enable_all_lights": true, "aov_separator": "underscore", + "remove_aovs": false, "reset_current_frame": false, "arnold_renderer": { "image_prefix": "//_", @@ -149,6 +150,14 @@ "Main" ] }, + "CreateProxyAlembic": { + "enabled": true, + "write_color_sets": false, + "write_face_sets": false, + "defaults": [ + "Main" + ] + }, "CreateMultiverseUsd": { "enabled": true, "defaults": [ @@ -171,7 +180,21 @@ "enabled": true, "defaults": [ "Main" - ] + ], + "expandProcedurals": false, + "motionBlur": true, + "motionBlurKeys": 2, + "motionBlurLength": 0.5, + "maskOptions": false, + "maskCamera": false, + "maskLight": false, + "maskShape": false, + "maskShader": false, + "maskOverride": false, + "maskDriver": false, + "maskFilter": false, + "maskColor_manager": false, + "maskOperator": false }, "CreateAssembly": { "enabled": true, @@ -250,6 +273,9 @@ "CollectFbxCamera": { "enabled": false }, + "CollectGLTF": { + "enabled": false + }, "ValidateInstanceInContext": { "enabled": true, "optional": true, @@ -569,6 +595,12 @@ "optional": false, "active": true }, + "ExtractProxyAlembic": { + "enabled": true, + "families": [ + "proxyAbc" + ] + }, "ExtractAlembic": { "enabled": true, "families": [ @@ -915,7 +947,7 @@ 
"current_context": [ { "subset_name_filters": [ - "\".+[Mm]ain\"" + ".+[Mm]ain" ], "families": [ "model" @@ -932,7 +964,8 @@ "subset_name_filters": [], "families": [ "animation", - "pointcache" + "pointcache", + "proxyAbc" ], "repre_names": [ "abc" @@ -1007,4 +1040,4 @@ "ValidateNoAnimation": false } } -} \ No newline at end of file +} diff --git a/openpype/settings/defaults/system_settings/applications.json b/openpype/settings/defaults/system_settings/applications.json index 03499a8567..936407a49b 100644 --- a/openpype/settings/defaults/system_settings/applications.json +++ b/openpype/settings/defaults/system_settings/applications.json @@ -114,6 +114,35 @@ } } }, + "3dsmax": { + "enabled": true, + "label": "3ds max", + "icon": "{}/app_icons/3dsmax.png", + "host_name": "max", + "environment": { + "ADSK_3DSMAX_STARTUPSCRIPTS_ADDON_DIR": "{OPENPYPE_ROOT}\\openpype\\hosts\\max\\startup" + }, + "variants": { + "2023": { + "use_python_2": false, + "executables": { + "windows": [ + "C:\\Program Files\\Autodesk\\3ds Max 2023\\3dsmax.exe" + ], + "darwin": [], + "linux": [] + }, + "arguments": { + "windows": [], + "darwin": [], + "linux": [] + }, + "environment": { + "3DSMAX_VERSION": "2023" + } + } + } + }, "flame": { "enabled": true, "label": "Flame", @@ -1268,12 +1297,12 @@ "CELACTION_TEMPLATE": "{OPENPYPE_REPOS_ROOT}/openpype/hosts/celaction/celaction_template_scene.scn" }, "variants": { - "local": { + "current": { "enabled": true, - "variant_label": "Local", + "variant_label": "Current", "use_python_2": false, "executables": { - "windows": [], + "windows": ["C:/Program Files/CelAction/CelAction2D Studio/CelAction2D.exe"], "darwin": [], "linux": [] }, diff --git a/openpype/settings/entities/enum_entity.py b/openpype/settings/entities/enum_entity.py index defe4aa1f0..c0c103ea10 100644 --- a/openpype/settings/entities/enum_entity.py +++ b/openpype/settings/entities/enum_entity.py @@ -152,6 +152,7 @@ class HostsEnumEntity(BaseEnumEntity): schema_types = ["hosts-enum"] 
all_host_names = [ + "max", "aftereffects", "blender", "celaction", diff --git a/openpype/settings/entities/schemas/projects_schema/schema_project_celaction.json b/openpype/settings/entities/schemas/projects_schema/schema_project_celaction.json index 500e5b2298..15d9350c84 100644 --- a/openpype/settings/entities/schemas/projects_schema/schema_project_celaction.json +++ b/openpype/settings/entities/schemas/projects_schema/schema_project_celaction.json @@ -14,45 +14,24 @@ { "type": "dict", "collapsible": true, - "checkbox_key": "enabled", - "key": "ExtractCelactionDeadline", - "label": "ExtractCelactionDeadline", + "key": "CollectRenderPath", + "label": "CollectRenderPath", "is_group": true, "children": [ { - "type": "boolean", - "key": "enabled", - "label": "Enabled" + "type": "text", + "key": "output_extension", + "label": "Output render file extension" }, { "type": "text", - "key": "deadline_department", - "label": "Deadline apartment" - }, - { - "type": "number", - "key": "deadline_priority", - "label": "Deadline priority" + "key": "anatomy_template_key_render_files", + "label": "Anatomy template key: render files" }, { "type": "text", - "key": "deadline_pool", - "label": "Deadline pool" - }, - { - "type": "text", - "key": "deadline_pool_secondary", - "label": "Deadline pool (secondary)" - }, - { - "type": "text", - "key": "deadline_group", - "label": "Deadline Group" - }, - { - "type": "number", - "key": "deadline_chunk_size", - "label": "Deadline Chunk size" + "key": "anatomy_template_key_metadata", + "label": "Anatomy template key: metadata job file" } ] } diff --git a/openpype/settings/entities/schemas/projects_schema/schema_project_deadline.json b/openpype/settings/entities/schemas/projects_schema/schema_project_deadline.json index cd1741ba8b..69f81ed682 100644 --- a/openpype/settings/entities/schemas/projects_schema/schema_project_deadline.json +++ b/openpype/settings/entities/schemas/projects_schema/schema_project_deadline.json @@ -387,6 +387,56 @@ } ] }, 
+        {
+            "type": "dict",
+            "collapsible": true,
+            "checkbox_key": "enabled",
+            "key": "CelactionSubmitDeadline",
+            "label": "Celaction Submit Deadline",
+            "is_group": true,
+            "children": [
+                {
+                    "type": "boolean",
+                    "key": "enabled",
+                    "label": "Enabled"
+                },
+                {
+                    "type": "text",
+                    "key": "deadline_department",
+                    "label": "Deadline department"
+                },
+                {
+                    "type": "number",
+                    "key": "deadline_priority",
+                    "label": "Deadline priority"
+                },
+                {
+                    "type": "text",
+                    "key": "deadline_pool",
+                    "label": "Deadline pool"
+                },
+                {
+                    "type": "text",
+                    "key": "deadline_pool_secondary",
+                    "label": "Deadline pool (secondary)"
+                },
+                {
+                    "type": "text",
+                    "key": "deadline_group",
+                    "label": "Deadline Group"
+                },
+                {
+                    "type": "number",
+                    "key": "deadline_chunk_size",
+                    "label": "Deadline Chunk size"
+                },
+                {
+                    "type": "text",
+                    "key": "deadline_job_delay",
+                    "label": "Delay job (timecode dd:hh:mm:ss)"
+                }
+            ]
+        },
         {
             "type": "dict",
             "collapsible": true,
diff --git a/openpype/settings/entities/schemas/projects_schema/schemas/schema_global_publish.json b/openpype/settings/entities/schemas/projects_schema/schemas/schema_global_publish.json
index 742437fbde..5388d04bc9 100644
--- a/openpype/settings/entities/schemas/projects_schema/schemas/schema_global_publish.json
+++ b/openpype/settings/entities/schemas/projects_schema/schemas/schema_global_publish.json
@@ -60,6 +60,27 @@
             }
         ]
     },
+    {
+        "type": "dict",
+        "collapsible": true,
+        "key": "collect_comment_per_instance",
+        "label": "Collect comment per instance",
+        "checkbox_key": "enabled",
+        "is_group": true,
+        "children": [
+            {
+                "type": "boolean",
+                "key": "enabled",
+                "label": "Enabled"
+            },
+            {
+                "key": "families",
+                "label": "Families",
+                "type": "list",
+                "object_type": "text"
+            }
+        ]
+    },
     {
         "type": "dict",
         "collapsible": true,
@@ -505,11 +526,28 @@
                 "object_type": "text"
             },
             {
-                "type": "hosts-enum",
                 "key": "hosts",
-                "label": "Hosts",
+                "label": "Host names",
+                "type": "hosts-enum",
                 "multiselection": true
             },
+            {
+                "key": "task_types",
+
"label": "Task types", + "type": "task-types-enum" + }, + { + "key": "task_names", + "label": "Task names", + "type": "list", + "object_type": "text" + }, + { + "key": "subsets", + "label": "Subset names", + "type": "list", + "object_type": "text" + }, { "type": "splitter" }, diff --git a/openpype/settings/entities/schemas/projects_schema/schemas/schema_maya_create.json b/openpype/settings/entities/schemas/projects_schema/schemas/schema_maya_create.json index bc6520474d..e1a3082616 100644 --- a/openpype/settings/entities/schemas/projects_schema/schemas/schema_maya_create.json +++ b/openpype/settings/entities/schemas/projects_schema/schemas/schema_maya_create.json @@ -200,7 +200,128 @@ } ] }, - + { + "type": "dict", + "collapsible": true, + "key": "CreateProxyAlembic", + "label": "Create Proxy Alembic", + "checkbox_key": "enabled", + "children": [ + { + "type": "boolean", + "key": "enabled", + "label": "Enabled" + }, + { + "type": "boolean", + "key": "write_color_sets", + "label": "Write Color Sets" + }, + { + "type": "boolean", + "key": "write_face_sets", + "label": "Write Face Sets" + }, + { + "type": "list", + "key": "defaults", + "label": "Default Subsets", + "object_type": "text" + } + ] + }, + { + "type": "dict", + "collapsible": true, + "key": "CreateAss", + "label": "Create Ass", + "checkbox_key": "enabled", + "children": [ + { + "type": "boolean", + "key": "enabled", + "label": "Enabled" + }, + { + "type": "list", + "key": "defaults", + "label": "Default Subsets", + "object_type": "text" + }, + { + "type": "boolean", + "key": "expandProcedurals", + "label": "Expand Procedurals" + }, + { + "type": "boolean", + "key": "motionBlur", + "label": "Motion Blur" + }, + { + "type": "number", + "key": "motionBlurKeys", + "label": "Motion Blur Keys", + "minimum": 0 + }, + { + "type": "number", + "key": "motionBlurLength", + "label": "Motion Blur Length", + "decimal": 3 + }, + { + "type": "boolean", + "key": "maskOptions", + "label": "Mask Options" + }, + { + "type": 
"boolean", + "key": "maskCamera", + "label": "Mask Camera" + }, + { + "type": "boolean", + "key": "maskLight", + "label": "Mask Light" + }, + { + "type": "boolean", + "key": "maskShape", + "label": "Mask Shape" + }, + { + "type": "boolean", + "key": "maskShader", + "label": "Mask Shader" + }, + { + "type": "boolean", + "key": "maskOverride", + "label": "Mask Override" + }, + { + "type": "boolean", + "key": "maskDriver", + "label": "Mask Driver" + }, + { + "type": "boolean", + "key": "maskFilter", + "label": "Mask Filter" + }, + { + "type": "boolean", + "key": "maskColor_manager", + "label": "Mask Color Manager" + }, + { + "type": "boolean", + "key": "maskOperator", + "label": "Mask Operator" + } + ] + }, { "type": "schema_template", "name": "template_create_plugin", @@ -217,10 +338,6 @@ "key": "CreateMultiverseUsdOver", "label": "Create Multiverse USD Override" }, - { - "key": "CreateAss", - "label": "Create Ass" - }, { "key": "CreateAssembly", "label": "Create Assembly" diff --git a/openpype/settings/entities/schemas/projects_schema/schemas/schema_maya_publish.json b/openpype/settings/entities/schemas/projects_schema/schemas/schema_maya_publish.json index ab8c6b885e..9aaff248ab 100644 --- a/openpype/settings/entities/schemas/projects_schema/schemas/schema_maya_publish.json +++ b/openpype/settings/entities/schemas/projects_schema/schemas/schema_maya_publish.json @@ -35,6 +35,20 @@ } ] }, + { + "type": "dict", + "collapsible": true, + "key": "CollectGLTF", + "label": "Collect Assets for GLTF/GLB export", + "checkbox_key": "enabled", + "children": [ + { + "type": "boolean", + "key": "enabled", + "label": "Enabled" + } + ] + }, { "type": "splitter" }, @@ -62,7 +76,7 @@ } ] }, - { + { "type": "dict", "collapsible": true, "key": "ValidateFrameRange", @@ -638,6 +652,26 @@ "type": "label", "label": "Extractors" }, + { + "type": "dict", + "collapsible": true, + "key": "ExtractProxyAlembic", + "label": "Extract Proxy Alembic", + "checkbox_key": "enabled", + "children": [ + 
{ + "type": "boolean", + "key": "enabled", + "label": "Enabled" + }, + { + "key": "families", + "label": "Families", + "type": "list", + "object_type": "text" + } + ] + }, { "type": "dict", "collapsible": true, diff --git a/openpype/settings/entities/schemas/projects_schema/schemas/schema_maya_render_settings.json b/openpype/settings/entities/schemas/projects_schema/schemas/schema_maya_render_settings.json index 0cbb684fc6..e90495891b 100644 --- a/openpype/settings/entities/schemas/projects_schema/schemas/schema_maya_render_settings.json +++ b/openpype/settings/entities/schemas/projects_schema/schemas/schema_maya_render_settings.json @@ -31,6 +31,11 @@ {"dot": ". (dot)"} ] }, + { + "key": "remove_aovs", + "label": "Remove existing AOVs", + "type": "boolean" + }, { "key": "reset_current_frame", "label": "Reset Current Frame", @@ -204,74 +209,76 @@ "defaults": "empty", "enum_items": [ {"empty": "< empty >"}, - {"atmosphereChannel": "atmosphere"}, - {"backgroundChannel": "background"}, - {"bumpNormalsChannel": "bumpnormals"}, - {"causticsChannel": "caustics"}, - {"coatFilterChannel": "coat_filter"}, - {"coatGlossinessChannel": "coatGloss"}, - {"coatReflectionChannel": "coat_reflection"}, - {"vrayCoatChannel": "coat_specular"}, - {"CoverageChannel": "coverage"}, - {"cryptomatteChannel": "cryptomatte"}, - {"customColor": "custom_color"}, - {"drBucketChannel": "DR"}, - {"denoiserChannel": "denoiser"}, - {"diffuseChannel": "diffuse"}, - {"ExtraTexElement": "extraTex"}, - {"giChannel": "GI"}, - {"LightMixElement": "None"}, - {"lightingChannel": "lighting"}, - {"LightingAnalysisChannel": "LightingAnalysis"}, - {"materialIDChannel": "materialID"}, - {"MaterialSelectElement": "materialSelect"}, - {"matteShadowChannel": "matteShadow"}, - {"MultiMatteElement": "multimatte"}, - {"multimatteIDChannel": "multimatteID"}, - {"normalsChannel": "normals"}, - {"nodeIDChannel": "objectId"}, - {"objectSelectChannel": "objectSelect"}, - {"rawCoatFilterChannel": "raw_coat_filter"}, - 
{"rawCoatReflectionChannel": "raw_coat_reflection"}, - {"rawDiffuseFilterChannel": "rawDiffuseFilter"}, - {"rawGiChannel": "rawGI"}, - {"rawLightChannel": "rawLight"}, - {"rawReflectionChannel": "rawReflection"}, - {"rawReflectionFilterChannel": "rawReflectionFilter"}, - {"rawRefractionChannel": "rawRefraction"}, - {"rawRefractionFilterChannel": "rawRefractionFilter"}, - {"rawShadowChannel": "rawShadow"}, - {"rawSheenFilterChannel": "raw_sheen_filter"}, - {"rawSheenReflectionChannel": "raw_sheen_reflection"}, - {"rawTotalLightChannel": "rawTotalLight"}, - {"reflectIORChannel": "reflIOR"}, - {"reflectChannel": "reflect"}, - {"reflectionFilterChannel": "reflectionFilter"}, - {"reflectGlossinessChannel": "reflGloss"}, - {"refractChannel": "refract"}, - {"refractionFilterChannel": "refractionFilter"}, - {"refractGlossinessChannel": "refrGloss"}, - {"renderIDChannel": "renderId"}, - {"FastSSS2Channel": "SSS"}, - {"sampleRateChannel": "sampleRate"}, + {"atmosphereChannel": "atmosphereChannel"}, + {"backgroundChannel": "backgroundChannel"}, + {"bumpNormalsChannel": "bumpNormalsChannel"}, + {"causticsChannel": "causticsChannel"}, + {"coatFilterChannel": "coatFilterChannel"}, + {"coatGlossinessChannel": "coatGlossinessChannel"}, + {"coatReflectionChannel": "coatReflectionChannel"}, + {"vrayCoatChannel": "vrayCoatChannel"}, + {"CoverageChannel": "CoverageChannel"}, + {"cryptomatteChannel": "cryptomatteChannel"}, + {"customColor": "customColor"}, + {"drBucketChannel": "drBucketChannel"}, + {"denoiserChannel": "denoiserChannel"}, + {"diffuseChannel": "diffuseChannel"}, + {"ExtraTexElement": "ExtraTexElement"}, + {"giChannel": "giChannel"}, + {"LightMixElement": "LightMixElement"}, + {"LightSelectElement": "LightSelectElement"}, + {"lightingChannel": "lightingChannel"}, + {"LightingAnalysisChannel": "LightingAnalysisChannel"}, + {"materialIDChannel": "materialIDChannel"}, + {"MaterialSelectElement": "MaterialSelectElement"}, + {"matteShadowChannel": "matteShadowChannel"}, + 
{"metalnessChannel": "metalnessChannel"}, + {"MultiMatteElement": "MultiMatteElement"}, + {"multimatteIDChannel": "multimatteIDChannel"}, + {"noiseLevelChannel": "noiseLevelChannel"}, + {"normalsChannel": "normalsChannel"}, + {"nodeIDChannel": "nodeIDChannel"}, + {"objectSelectChannel": "objectSelectChannel"}, + {"rawCoatFilterChannel": "rawCoatFilterChannel"}, + {"rawCoatReflectionChannel": "rawCoatReflectionChannel"}, + {"rawDiffuseFilterChannel": "rawDiffuseFilterChannel"}, + {"rawGiChannel": "rawGiChannel"}, + {"rawLightChannel": "rawLightChannel"}, + {"rawReflectionChannel": "rawReflectionChannel"}, + {"rawReflectionFilterChannel": "rawReflectionFilterChannel"}, + {"rawRefractionChannel": "rawRefractionChannel"}, + {"rawRefractionFilterChannel": "rawRefractionFilterChannel"}, + {"rawShadowChannel": "rawShadowChannel"}, + {"rawSheenFilterChannel": "rawSheenFilterChannel"}, + {"rawSheenReflectionChannel": "rawSheenReflectionChannel"}, + {"rawTotalLightChannel": "rawTotalLightChannel"}, + {"reflectIORChannel": "reflectIORChannel"}, + {"reflectChannel": "reflectChannel"}, + {"reflectionFilterChannel": "reflectionFilterChannel"}, + {"reflectGlossinessChannel": "reflectGlossinessChannel"}, + {"refractChannel": "refractChannel"}, + {"refractionFilterChannel": "refractionFilterChannel"}, + {"refractGlossinessChannel": "refractGlossinessChannel"}, + {"renderIDChannel": "renderIDChannel"}, + {"FastSSS2Channel": "FastSSS2Channel"}, + {"sampleRateChannel": "sampleRateChannel"}, {"samplerInfo": "samplerInfo"}, - {"selfIllumChannel": "selfIllum"}, - {"shadowChannel": "shadow"}, - {"sheenFilterChannel": "sheen_filter"}, - {"sheenGlossinessChannel": "sheenGloss"}, - {"sheenReflectionChannel": "sheen_reflection"}, - {"vraySheenChannel": "sheen_specular"}, - {"specularChannel": "specular"}, + {"selfIllumChannel": "selfIllumChannel"}, + {"shadowChannel": "shadowChannel"}, + {"sheenFilterChannel": "sheenFilterChannel"}, + {"sheenGlossinessChannel": "sheenGlossinessChannel"}, + 
{"sheenReflectionChannel": "sheenReflectionChannel"}, + {"vraySheenChannel": "vraySheenChannel"}, + {"specularChannel": "specularChannel"}, {"Toon": "Toon"}, - {"toonLightingChannel": "toonLighting"}, - {"toonSpecularChannel": "toonSpecular"}, - {"totalLightChannel": "totalLight"}, - {"unclampedColorChannel": "unclampedColor"}, - {"VRScansPaintMaskChannel": "VRScansPaintMask"}, - {"VRScansZoneMaskChannel": "VRScansZoneMask"}, - {"velocityChannel": "velocity"}, - {"zdepthChannel": "zDepth"}, - {"LightSelectElement": "lightselect"} + {"toonLightingChannel": "toonLightingChannel"}, + {"toonSpecularChannel": "toonSpecularChannel"}, + {"totalLightChannel": "totalLightChannel"}, + {"unclampedColorChannel": "unclampedColorChannel"}, + {"VRScansPaintMaskChannel": "VRScansPaintMaskChannel"}, + {"VRScansZoneMaskChannel": "VRScansZoneMaskChannel"}, + {"velocityChannel": "velocityChannel"}, + {"zdepthChannel": "zdepthChannel"} ] }, { @@ -310,9 +317,8 @@ "defaults": "0", "enum_items": [ {"0": "None"}, - {"1": "Photon Map"}, - {"2": "Irradiance Cache"}, - {"3": "Brute Force"} + {"3": "Irradiance Cache"}, + {"4": "Brute Force"} ] }, { @@ -323,9 +329,8 @@ "defaults": "0", "enum_items": [ {"0": "None"}, - {"1": "Photon Map"}, - {"2": "Irradiance Cache"}, - {"3": "Brute Force"} + {"2": "Irradiance Point Cloud"}, + {"4": "Brute Force"} ] }, { @@ -361,46 +366,46 @@ "defaults": "empty", "enum_items": [ {"empty": "< none >"}, - {"AO": "Ambient Occlusion"}, + {"Ambient Occlusion": "Ambient Occlusion"}, {"Background": "Background"}, {"Beauty": "Beauty"}, - {"BumpNormals": "Bump Normals"}, + {"Bump Normals": "Bump Normals"}, {"Caustics": "Caustics"}, - {"CausticsRaw": "Caustics Raw"}, + {"Caustics Raw": "Caustics Raw"}, {"Cryptomatte": "Cryptomatte"}, {"Custom": "Custom"}, - {"Z": "Depth"}, - {"DiffuseFilter": "Diffuse Filter"}, - {"DiffuseLighting": "Diffuse Lighting"}, - {"DiffuseLightingRaw": "Diffuse Lighting Raw"}, + {"Depth": "Depth"}, + {"Diffuse Filter": "Diffuse Filter"}, + 
{"Diffuse Lighting": "Diffuse Lighting"}, + {"Diffuse Lighting Raw": "Diffuse Lighting Raw"}, {"Emission": "Emission"}, - {"GI": "Global Illumination"}, - {"GIRaw": "Global Illumination Raw"}, + {"Global Illumination": "Global Illumination"}, + {"Global Illumination Raw": "Global Illumination Raw"}, {"Matte": "Matte"}, - {"MotionVectors": "Ambient Occlusion"}, - {"N": "Normals"}, - {"ID": "ObjectID"}, - {"ObjectBumpNormal": "Object-Space Bump Normals"}, - {"ObjectPosition": "Object-Space Positions"}, - {"PuzzleMatte": "Puzzle Matte"}, + {"Motion Vectors": "Motion Vectors"}, + {"Normals": "Normals"}, + {"ObjectID": "ObjectID"}, + {"Object-Space Bump Normals": "Object-Space Bump Normals"}, + {"Object-Space Positions": "Object-Space Positions"}, + {"Puzzle Matte": "Puzzle Matte"}, {"Reflections": "Reflections"}, - {"ReflectionsFilter": "Reflections Filter"}, - {"ReflectionsRaw": "Reflections Raw"}, + {"Reflections Filter": "Reflections Filter"}, + {"Reflections Raw": "Reflections Raw"}, {"Refractions": "Refractions"}, - {"RefractionsFilter": "Refractions Filter"}, - {"RefractionsRaw": "Refractions Filter"}, + {"Refractions Filter": "Refractions Filter"}, + {"Refractions Raw": "Refractions Raw"}, {"Shadows": "Shadows"}, {"SpecularLighting": "Specular Lighting"}, - {"SSS": "Sub Surface Scatter"}, - {"SSSRaw": "Sub Surface Scatter Raw"}, - {"TotalDiffuseLightingRaw": "Total Diffuse Lighting Raw"}, - {"TotalTransLightingRaw": "Total Translucency Filter"}, - {"TransTint": "Translucency Filter"}, - {"TransGIRaw": "Translucency Lighting Raw"}, - {"VolumeFogEmission": "Volume Fog Emission"}, - {"VolumeFogTint": "Volume Fog Tint"}, - {"VolumeLighting": "Volume Lighting"}, - {"P": "World Position"} + {"Sub Surface Scatter": "Sub Surface Scatter"}, + {"Sub Surface Scatter Raw": "Sub Surface Scatter Raw"}, + {"Total Diffuse Lighting Raw": "Total Diffuse Lighting Raw"}, + {"Total Translucency Filter": "Total Translucency Filter"}, + {"Translucency Filter": "Translucency 
Filter"}, + {"Translucency Lighting Raw": "Translucency Lighting Raw"}, + {"Volume Fog Emission": "Volume Fog Emission"}, + {"Volume Fog Tint": "Volume Fog Tint"}, + {"Volume Lighting": "Volume Lighting"}, + {"World Position": "World Position"} ] }, { diff --git a/openpype/settings/entities/schemas/projects_schema/schemas/template_publish_families.json b/openpype/settings/entities/schemas/projects_schema/schemas/template_publish_families.json index f39ad31fbb..43dd74cdf9 100644 --- a/openpype/settings/entities/schemas/projects_schema/schemas/template_publish_families.json +++ b/openpype/settings/entities/schemas/projects_schema/schemas/template_publish_families.json @@ -28,6 +28,7 @@ {"nukenodes": "nukenodes"}, {"plate": "plate"}, {"pointcache": "pointcache"}, + {"proxyAbc": "proxyAbc"}, {"prerender": "prerender"}, {"redshiftproxy": "redshiftproxy"}, {"reference": "reference"}, diff --git a/openpype/settings/entities/schemas/system_schema/host_settings/schema_3dsmax.json b/openpype/settings/entities/schemas/system_schema/host_settings/schema_3dsmax.json new file mode 100644 index 0000000000..f7c57298af --- /dev/null +++ b/openpype/settings/entities/schemas/system_schema/host_settings/schema_3dsmax.json @@ -0,0 +1,39 @@ +{ + "type": "dict", + "key": "3dsmax", + "label": "Autodesk 3ds Max", + "collapsible": true, + "checkbox_key": "enabled", + "children": [ + { + "type": "boolean", + "key": "enabled", + "label": "Enabled" + }, + { + "type": "schema_template", + "name": "template_host_unchangables" + }, + { + "key": "environment", + "label": "Environment", + "type": "raw-json" + }, + { + "type": "dict-modifiable", + "key": "variants", + "collapsible_key": true, + "use_label_wrap": false, + "object_type": { + "type": "dict", + "collapsible": true, + "children": [ + { + "type": "schema_template", + "name": "template_host_variant_items" + } + ] + } + } + ] +} diff --git a/openpype/settings/entities/schemas/system_schema/host_settings/schema_celaction.json 
b/openpype/settings/entities/schemas/system_schema/host_settings/schema_celaction.json index 82be15c3b0..b104e3bb82 100644 --- a/openpype/settings/entities/schemas/system_schema/host_settings/schema_celaction.json +++ b/openpype/settings/entities/schemas/system_schema/host_settings/schema_celaction.json @@ -28,8 +28,8 @@ "name": "template_host_variant", "template_data": [ { - "app_variant_label": "Local", - "app_variant": "local" + "app_variant_label": "Current", + "app_variant": "current" } ] } diff --git a/openpype/settings/entities/schemas/system_schema/schema_applications.json b/openpype/settings/entities/schemas/system_schema/schema_applications.json index 20be33320d..36c5811496 100644 --- a/openpype/settings/entities/schemas/system_schema/schema_applications.json +++ b/openpype/settings/entities/schemas/system_schema/schema_applications.json @@ -9,6 +9,10 @@ "type": "schema", "name": "schema_maya" }, + { + "type": "schema", + "name": "schema_3dsmax" + }, { "type": "schema", "name": "schema_flame" diff --git a/openpype/tools/attribute_defs/files_widget.py b/openpype/tools/attribute_defs/files_widget.py index 738e50ba07..2c8ed729c2 100644 --- a/openpype/tools/attribute_defs/files_widget.py +++ b/openpype/tools/attribute_defs/files_widget.py @@ -155,7 +155,7 @@ class DropEmpty(QtWidgets.QWidget): extensions_label = " or ".join(allowed_items) else: last_item = allowed_items.pop(-1) - new_last_item = " or ".join(last_item, allowed_items.pop(-1)) + new_last_item = " or ".join([last_item, allowed_items.pop(-1)]) allowed_items.append(new_last_item) extensions_label = ", ".join(allowed_items) diff --git a/openpype/tools/context_dialog/window.py b/openpype/tools/context_dialog/window.py index 3b544bd375..86c53b55c5 100644 --- a/openpype/tools/context_dialog/window.py +++ b/openpype/tools/context_dialog/window.py @@ -1,7 +1,7 @@ import os import json -from Qt import QtWidgets, QtCore, QtGui +from qtpy import QtWidgets, QtCore, QtGui from openpype import style from 
openpype.pipeline import AvalonMongoDB diff --git a/openpype/tools/creator/model.py b/openpype/tools/creator/model.py index d3d60b96f2..307993103b 100644 --- a/openpype/tools/creator/model.py +++ b/openpype/tools/creator/model.py @@ -36,7 +36,7 @@ class CreatorsModel(QtGui.QStandardItemModel): if not items: item = QtGui.QStandardItem("No registered families") item.setEnabled(False) - item.setData(QtCore.Qt.ItemIsEnabled, False) + item.setData(False, QtCore.Qt.ItemIsEnabled) items.append(item) self.invisibleRootItem().appendRows(items) diff --git a/openpype/tools/launcher/actions.py b/openpype/tools/launcher/actions.py index 34d06f72cc..61660ee9b7 100644 --- a/openpype/tools/launcher/actions.py +++ b/openpype/tools/launcher/actions.py @@ -1,6 +1,6 @@ import os -from Qt import QtWidgets, QtGui +from qtpy import QtWidgets, QtGui from openpype import PLUGINS_DIR from openpype import style diff --git a/openpype/tools/launcher/constants.py b/openpype/tools/launcher/constants.py index 61f631759b..cb0049055c 100644 --- a/openpype/tools/launcher/constants.py +++ b/openpype/tools/launcher/constants.py @@ -1,4 +1,4 @@ -from Qt import QtCore +from qtpy import QtCore ACTION_ROLE = QtCore.Qt.UserRole diff --git a/openpype/tools/launcher/delegates.py b/openpype/tools/launcher/delegates.py index 7b53658727..02a40861d2 100644 --- a/openpype/tools/launcher/delegates.py +++ b/openpype/tools/launcher/delegates.py @@ -1,5 +1,5 @@ import time -from Qt import QtCore, QtWidgets, QtGui +from qtpy import QtCore, QtWidgets, QtGui from .constants import ( ANIMATION_START_ROLE, ANIMATION_STATE_ROLE, diff --git a/openpype/tools/launcher/lib.py b/openpype/tools/launcher/lib.py index 68e57c6b92..2507b6eddc 100644 --- a/openpype/tools/launcher/lib.py +++ b/openpype/tools/launcher/lib.py @@ -1,5 +1,5 @@ import os -from Qt import QtGui +from qtpy import QtGui import qtawesome from openpype import resources diff --git a/openpype/tools/launcher/models.py b/openpype/tools/launcher/models.py index 
6e3b531018..6c763544a9 100644 --- a/openpype/tools/launcher/models.py +++ b/openpype/tools/launcher/models.py @@ -6,7 +6,7 @@ import collections import time import appdirs -from Qt import QtCore, QtGui +from qtpy import QtCore, QtGui import qtawesome from openpype.client import ( diff --git a/openpype/tools/launcher/widgets.py b/openpype/tools/launcher/widgets.py index 774ceb659d..3eb641bdb3 100644 --- a/openpype/tools/launcher/widgets.py +++ b/openpype/tools/launcher/widgets.py @@ -1,7 +1,7 @@ import copy import time import collections -from Qt import QtWidgets, QtCore, QtGui +from qtpy import QtWidgets, QtCore, QtGui import qtawesome from openpype.tools.flickcharm import FlickCharm diff --git a/openpype/tools/launcher/window.py b/openpype/tools/launcher/window.py index a9eaa932bb..f68fc4befc 100644 --- a/openpype/tools/launcher/window.py +++ b/openpype/tools/launcher/window.py @@ -1,7 +1,7 @@ import copy import logging -from Qt import QtWidgets, QtCore, QtGui +from qtpy import QtWidgets, QtCore, QtGui from openpype import style from openpype import resources diff --git a/openpype/tools/project_manager/project_manager/__init__.py b/openpype/tools/project_manager/project_manager/__init__.py index 6e44afd841..ac4e3d5f39 100644 --- a/openpype/tools/project_manager/project_manager/__init__.py +++ b/openpype/tools/project_manager/project_manager/__init__.py @@ -44,7 +44,7 @@ from .window import ProjectManagerWindow def main(): import sys - from Qt import QtWidgets + from qtpy import QtWidgets app = QtWidgets.QApplication([]) diff --git a/openpype/tools/project_manager/project_manager/constants.py b/openpype/tools/project_manager/project_manager/constants.py index 7ca4aa9492..72512d797b 100644 --- a/openpype/tools/project_manager/project_manager/constants.py +++ b/openpype/tools/project_manager/project_manager/constants.py @@ -1,5 +1,5 @@ import re -from Qt import QtCore +from qtpy import QtCore # Item identifier (unique ID - uuid4 is used) diff --git 
a/openpype/tools/project_manager/project_manager/delegates.py b/openpype/tools/project_manager/project_manager/delegates.py index b066bbb159..79e9554b0f 100644 --- a/openpype/tools/project_manager/project_manager/delegates.py +++ b/openpype/tools/project_manager/project_manager/delegates.py @@ -1,4 +1,4 @@ -from Qt import QtWidgets, QtCore +from qtpy import QtWidgets, QtCore from .widgets import ( NameTextEdit, diff --git a/openpype/tools/project_manager/project_manager/model.py b/openpype/tools/project_manager/project_manager/model.py index 6f40140e5e..29a26f700f 100644 --- a/openpype/tools/project_manager/project_manager/model.py +++ b/openpype/tools/project_manager/project_manager/model.py @@ -5,7 +5,7 @@ from uuid import uuid4 from pymongo import UpdateOne, DeleteOne -from Qt import QtCore, QtGui +from qtpy import QtCore, QtGui from openpype.client import ( get_projects, diff --git a/openpype/tools/project_manager/project_manager/multiselection_combobox.py b/openpype/tools/project_manager/project_manager/multiselection_combobox.py index f776831298..f12f402d1a 100644 --- a/openpype/tools/project_manager/project_manager/multiselection_combobox.py +++ b/openpype/tools/project_manager/project_manager/multiselection_combobox.py @@ -1,4 +1,4 @@ -from Qt import QtCore, QtWidgets +from qtpy import QtCore, QtWidgets class ComboItemDelegate(QtWidgets.QStyledItemDelegate): diff --git a/openpype/tools/project_manager/project_manager/style.py b/openpype/tools/project_manager/project_manager/style.py index 4405d05960..6445bc341d 100644 --- a/openpype/tools/project_manager/project_manager/style.py +++ b/openpype/tools/project_manager/project_manager/style.py @@ -1,5 +1,5 @@ import os -from Qt import QtGui +from qtpy import QtGui import qtawesome from openpype.tools.utils import paint_image_with_color diff --git a/openpype/tools/project_manager/project_manager/view.py b/openpype/tools/project_manager/project_manager/view.py index 8d1fe54e83..609db30a81 100644 --- 
a/openpype/tools/project_manager/project_manager/view.py +++ b/openpype/tools/project_manager/project_manager/view.py @@ -1,7 +1,7 @@ import collections from queue import Queue -from Qt import QtWidgets, QtCore, QtGui +from qtpy import QtWidgets, QtCore, QtGui from openpype.client import get_project from .delegates import ( diff --git a/openpype/tools/project_manager/project_manager/widgets.py b/openpype/tools/project_manager/project_manager/widgets.py index 4bc968347a..06ae06e4d2 100644 --- a/openpype/tools/project_manager/project_manager/widgets.py +++ b/openpype/tools/project_manager/project_manager/widgets.py @@ -16,7 +16,7 @@ from openpype.tools.utils import ( get_warning_pixmap ) -from Qt import QtWidgets, QtCore, QtGui +from qtpy import QtWidgets, QtCore, QtGui class NameTextEdit(QtWidgets.QLineEdit): diff --git a/openpype/tools/project_manager/project_manager/window.py b/openpype/tools/project_manager/project_manager/window.py index 3b2dea8ca3..e35922cf36 100644 --- a/openpype/tools/project_manager/project_manager/window.py +++ b/openpype/tools/project_manager/project_manager/window.py @@ -1,4 +1,4 @@ -from Qt import QtWidgets, QtCore, QtGui +from qtpy import QtWidgets, QtCore, QtGui from openpype import resources from openpype.style import load_stylesheet diff --git a/openpype/tools/publisher/constants.py b/openpype/tools/publisher/constants.py index 74337ea1ab..96f74a5a5c 100644 --- a/openpype/tools/publisher/constants.py +++ b/openpype/tools/publisher/constants.py @@ -35,5 +35,8 @@ __all__ = ( "SORT_VALUE_ROLE", "IS_GROUP_ROLE", "CREATOR_IDENTIFIER_ROLE", - "FAMILY_ROLE" + "CREATOR_THUMBNAIL_ENABLED_ROLE", + "FAMILY_ROLE", + "GROUP_ROLE", + "CONVERTER_IDENTIFIER_ROLE", ) diff --git a/openpype/tools/publisher/widgets/card_view_widgets.py b/openpype/tools/publisher/widgets/card_view_widgets.py index 09635d1a15..57336f9304 100644 --- a/openpype/tools/publisher/widgets/card_view_widgets.py +++ b/openpype/tools/publisher/widgets/card_view_widgets.py @@ -43,24 
+43,14 @@ from ..constants import ( ) -class SelectionType: - def __init__(self, name): - self.name = name - - def __eq__(self, other): - if isinstance(other, SelectionType): - other = other.name - return self.name == other - - class SelectionTypes: - clear = SelectionType("clear") - extend = SelectionType("extend") - extend_to = SelectionType("extend_to") + clear = "clear" + extend = "extend" + extend_to = "extend_to" class BaseGroupWidget(QtWidgets.QWidget): - selected = QtCore.Signal(str, str, SelectionType) + selected = QtCore.Signal(str, str, str) removed_selected = QtCore.Signal() def __init__(self, group_name, parent): @@ -269,7 +259,7 @@ class InstanceGroupWidget(BaseGroupWidget): class CardWidget(BaseClickableFrame): """Clickable card used as bigger button.""" - selected = QtCore.Signal(str, str, SelectionType) + selected = QtCore.Signal(str, str, str) # Group identifier of card # - this must be set because if send when mouse is released with card id _group_identifier = None @@ -755,11 +745,11 @@ class InstanceCardView(AbstractInstanceView): group_widget = self._widgets_by_group[group_name] new_widget = group_widget.get_widget_by_item_id(instance_id) - if selection_type is SelectionTypes.clear: + if selection_type == SelectionTypes.clear: self._select_item_clear(instance_id, group_name, new_widget) - elif selection_type is SelectionTypes.extend: + elif selection_type == SelectionTypes.extend: self._select_item_extend(instance_id, group_name, new_widget) - elif selection_type is SelectionTypes.extend_to: + elif selection_type == SelectionTypes.extend_to: self._select_item_extend_to(instance_id, group_name, new_widget) self.selection_changed.emit() diff --git a/openpype/tools/settings/__init__.py b/openpype/tools/settings/__init__.py index 3e77a8348a..a5b1ea51a5 100644 --- a/openpype/tools/settings/__init__.py +++ b/openpype/tools/settings/__init__.py @@ -1,5 +1,5 @@ import sys -from Qt import QtWidgets, QtGui +from qtpy import QtWidgets, QtGui from openpype 
import style from .lib import ( @@ -24,7 +24,9 @@ def main(user_role=None): user_role, ", ".join(allowed_roles) )) - app = QtWidgets.QApplication(sys.argv) + app = QtWidgets.QApplication.instance() + if not app: + app = QtWidgets.QApplication(sys.argv) app.setWindowIcon(QtGui.QIcon(style.app_icon_path())) widget = MainWidget(user_role) diff --git a/openpype/tools/settings/local_settings/apps_widget.py b/openpype/tools/settings/local_settings/apps_widget.py index c1f350fcbc..ce1fc86c32 100644 --- a/openpype/tools/settings/local_settings/apps_widget.py +++ b/openpype/tools/settings/local_settings/apps_widget.py @@ -1,5 +1,5 @@ import platform -from Qt import QtWidgets +from qtpy import QtWidgets from .widgets import ( Separator, ExpandingWidget diff --git a/openpype/tools/settings/local_settings/environments_widget.py b/openpype/tools/settings/local_settings/environments_widget.py index 14ca517851..5008f086d2 100644 --- a/openpype/tools/settings/local_settings/environments_widget.py +++ b/openpype/tools/settings/local_settings/environments_widget.py @@ -1,4 +1,4 @@ -from Qt import QtWidgets +from qtpy import QtWidgets from openpype.tools.utils import PlaceholderLineEdit diff --git a/openpype/tools/settings/local_settings/experimental_widget.py b/openpype/tools/settings/local_settings/experimental_widget.py index 22ef952356..b0d9381663 100644 --- a/openpype/tools/settings/local_settings/experimental_widget.py +++ b/openpype/tools/settings/local_settings/experimental_widget.py @@ -1,4 +1,4 @@ -from Qt import QtWidgets +from qtpy import QtWidgets from openpype.tools.experimental_tools import ( ExperimentalTools, LOCAL_EXPERIMENTAL_KEY diff --git a/openpype/tools/settings/local_settings/general_widget.py b/openpype/tools/settings/local_settings/general_widget.py index 35add7573e..5a75c219dc 100644 --- a/openpype/tools/settings/local_settings/general_widget.py +++ b/openpype/tools/settings/local_settings/general_widget.py @@ -1,6 +1,6 @@ import getpass -from Qt import 
QtWidgets, QtCore +from qtpy import QtWidgets, QtCore from openpype.lib import is_admin_password_required from openpype.widgets import PasswordDialog from openpype.tools.utils import PlaceholderLineEdit diff --git a/openpype/tools/settings/local_settings/mongo_widget.py b/openpype/tools/settings/local_settings/mongo_widget.py index 600ab79242..9549d6eb17 100644 --- a/openpype/tools/settings/local_settings/mongo_widget.py +++ b/openpype/tools/settings/local_settings/mongo_widget.py @@ -2,7 +2,7 @@ import os import sys import traceback -from Qt import QtWidgets +from qtpy import QtWidgets from pymongo.errors import ServerSelectionTimeoutError from openpype.lib import change_openpype_mongo_url diff --git a/openpype/tools/settings/local_settings/projects_widget.py b/openpype/tools/settings/local_settings/projects_widget.py index 30a0d212f0..b330d54dec 100644 --- a/openpype/tools/settings/local_settings/projects_widget.py +++ b/openpype/tools/settings/local_settings/projects_widget.py @@ -1,6 +1,6 @@ import platform import copy -from Qt import QtWidgets, QtCore, QtGui +from qtpy import QtWidgets, QtCore, QtGui from openpype.tools.settings.settings import ProjectListWidget from openpype.tools.utils import PlaceholderLineEdit from openpype.settings.constants import ( diff --git a/openpype/tools/settings/local_settings/widgets.py b/openpype/tools/settings/local_settings/widgets.py index 2733aef187..f40978a66f 100644 --- a/openpype/tools/settings/local_settings/widgets.py +++ b/openpype/tools/settings/local_settings/widgets.py @@ -1,4 +1,4 @@ -from Qt import QtWidgets, QtCore +from qtpy import QtWidgets, QtCore from openpype.tools.settings.settings.widgets import ( ExpandingWidget ) diff --git a/openpype/tools/settings/local_settings/window.py b/openpype/tools/settings/local_settings/window.py index 76c2d851e9..fdb05e219f 100644 --- a/openpype/tools/settings/local_settings/window.py +++ b/openpype/tools/settings/local_settings/window.py @@ -1,4 +1,4 @@ -from Qt import 
QtWidgets, QtGui +from qtpy import QtWidgets, QtGui from openpype import style diff --git a/openpype/tools/settings/settings/base.py b/openpype/tools/settings/settings/base.py index 6def284a83..074ecdae90 100644 --- a/openpype/tools/settings/settings/base.py +++ b/openpype/tools/settings/settings/base.py @@ -5,7 +5,7 @@ import traceback import functools import datetime -from Qt import QtWidgets, QtGui, QtCore +from qtpy import QtWidgets, QtGui, QtCore from openpype.settings.entities import ProjectSettings from openpype.tools.settings import CHILD_OFFSET diff --git a/openpype/tools/settings/settings/breadcrumbs_widget.py b/openpype/tools/settings/settings/breadcrumbs_widget.py index 7524bc61f0..2676d2f52d 100644 --- a/openpype/tools/settings/settings/breadcrumbs_widget.py +++ b/openpype/tools/settings/settings/breadcrumbs_widget.py @@ -1,4 +1,4 @@ -from Qt import QtWidgets, QtGui, QtCore +from qtpy import QtWidgets, QtGui, QtCore PREFIX_ROLE = QtCore.Qt.UserRole + 1 LAST_SEGMENT_ROLE = QtCore.Qt.UserRole + 2 diff --git a/openpype/tools/settings/settings/categories.py b/openpype/tools/settings/settings/categories.py index e1b3943317..2e5ce496ed 100644 --- a/openpype/tools/settings/settings/categories.py +++ b/openpype/tools/settings/settings/categories.py @@ -2,7 +2,7 @@ import sys import traceback import contextlib from enum import Enum -from Qt import QtWidgets, QtCore +from qtpy import QtWidgets, QtCore import qtawesome from openpype.lib import get_openpype_version diff --git a/openpype/tools/settings/settings/color_widget.py b/openpype/tools/settings/settings/color_widget.py index b38b46f3cb..819bfb3581 100644 --- a/openpype/tools/settings/settings/color_widget.py +++ b/openpype/tools/settings/settings/color_widget.py @@ -1,4 +1,4 @@ -from Qt import QtWidgets, QtCore, QtGui +from qtpy import QtWidgets, QtCore, QtGui from .item_widgets import InputWidget diff --git a/openpype/tools/settings/settings/constants.py b/openpype/tools/settings/settings/constants.py 
index 23526e4de9..b2792d885b 100644 --- a/openpype/tools/settings/settings/constants.py +++ b/openpype/tools/settings/settings/constants.py @@ -1,4 +1,4 @@ -from Qt import QtCore +from qtpy import QtCore DEFAULT_PROJECT_LABEL = "< Default >" diff --git a/openpype/tools/settings/settings/dialogs.py b/openpype/tools/settings/settings/dialogs.py index b1b4daa1a0..38da3cf881 100644 --- a/openpype/tools/settings/settings/dialogs.py +++ b/openpype/tools/settings/settings/dialogs.py @@ -1,4 +1,4 @@ -from Qt import QtWidgets, QtCore +from qtpy import QtWidgets, QtCore from openpype.tools.utils.delegates import pretty_date diff --git a/openpype/tools/settings/settings/dict_conditional.py b/openpype/tools/settings/settings/dict_conditional.py index b2a7bb52a2..564603b258 100644 --- a/openpype/tools/settings/settings/dict_conditional.py +++ b/openpype/tools/settings/settings/dict_conditional.py @@ -1,4 +1,4 @@ -from Qt import QtWidgets +from qtpy import QtWidgets from .widgets import ( ExpandingWidget, diff --git a/openpype/tools/settings/settings/dict_mutable_widget.py b/openpype/tools/settings/settings/dict_mutable_widget.py index 1c704b3cd5..27c9392320 100644 --- a/openpype/tools/settings/settings/dict_mutable_widget.py +++ b/openpype/tools/settings/settings/dict_mutable_widget.py @@ -1,6 +1,6 @@ from uuid import uuid4 -from Qt import QtWidgets, QtCore, QtGui +from qtpy import QtWidgets, QtCore from .base import BaseWidget from .lib import ( diff --git a/openpype/tools/settings/settings/images/__init__.py b/openpype/tools/settings/settings/images/__init__.py index 3ad65e114a..0b246349a8 100644 --- a/openpype/tools/settings/settings/images/__init__.py +++ b/openpype/tools/settings/settings/images/__init__.py @@ -1,5 +1,5 @@ import os -from Qt import QtGui +from qtpy import QtGui def get_image_path(image_filename): diff --git a/openpype/tools/settings/settings/item_widgets.py b/openpype/tools/settings/settings/item_widgets.py index 1ddee7efbe..d51f9b9684 100644 --- 
a/openpype/tools/settings/settings/item_widgets.py +++ b/openpype/tools/settings/settings/item_widgets.py @@ -1,6 +1,6 @@ import json -from Qt import QtWidgets, QtCore, QtGui +from qtpy import QtWidgets, QtCore, QtGui from openpype.widgets.sliders import NiceSlider from openpype.tools.settings import CHILD_OFFSET diff --git a/openpype/tools/settings/settings/lib.py b/openpype/tools/settings/settings/lib.py index eef157812f..0bcf8a4e94 100644 --- a/openpype/tools/settings/settings/lib.py +++ b/openpype/tools/settings/settings/lib.py @@ -1,4 +1,4 @@ -from Qt import QtCore +from qtpy import QtCore from .widgets import SettingsToolBtn diff --git a/openpype/tools/settings/settings/list_item_widget.py b/openpype/tools/settings/settings/list_item_widget.py index cd1fd912ae..67ce4b9dc9 100644 --- a/openpype/tools/settings/settings/list_item_widget.py +++ b/openpype/tools/settings/settings/list_item_widget.py @@ -1,4 +1,4 @@ -from Qt import QtWidgets, QtCore +from qtpy import QtWidgets, QtCore from openpype.tools.settings import ( CHILD_OFFSET diff --git a/openpype/tools/settings/settings/list_strict_widget.py b/openpype/tools/settings/settings/list_strict_widget.py index f0a3022a50..b0b78e5732 100644 --- a/openpype/tools/settings/settings/list_strict_widget.py +++ b/openpype/tools/settings/settings/list_strict_widget.py @@ -1,4 +1,4 @@ -from Qt import QtWidgets, QtCore +from qtpy import QtWidgets, QtCore from .widgets import ( GridLabelWidget, diff --git a/openpype/tools/settings/settings/multiselection_combobox.py b/openpype/tools/settings/settings/multiselection_combobox.py index c2cc2a8fee..4cc81ff56e 100644 --- a/openpype/tools/settings/settings/multiselection_combobox.py +++ b/openpype/tools/settings/settings/multiselection_combobox.py @@ -1,4 +1,4 @@ -from Qt import QtCore, QtGui, QtWidgets +from qtpy import QtCore, QtGui, QtWidgets class ComboItemDelegate(QtWidgets.QStyledItemDelegate): diff --git a/openpype/tools/settings/settings/search_dialog.py 
b/openpype/tools/settings/settings/search_dialog.py index e6538cfe67..2860e7c943 100644 --- a/openpype/tools/settings/settings/search_dialog.py +++ b/openpype/tools/settings/settings/search_dialog.py @@ -1,7 +1,7 @@ import re import collections -from Qt import QtCore, QtWidgets, QtGui +from qtpy import QtCore, QtWidgets, QtGui ENTITY_LABEL_ROLE = QtCore.Qt.UserRole + 1 ENTITY_PATH_ROLE = QtCore.Qt.UserRole + 2 diff --git a/openpype/tools/settings/settings/tests.py b/openpype/tools/settings/settings/tests.py index fc53e38ad5..772d4618f7 100644 --- a/openpype/tools/settings/settings/tests.py +++ b/openpype/tools/settings/settings/tests.py @@ -1,4 +1,4 @@ -from Qt import QtWidgets, QtCore +from qtpy import QtWidgets, QtCore def indented_print(data, indent=0): diff --git a/openpype/tools/settings/settings/widgets.py b/openpype/tools/settings/settings/widgets.py index b8ad21e7e4..fd04cb0a23 100644 --- a/openpype/tools/settings/settings/widgets.py +++ b/openpype/tools/settings/settings/widgets.py @@ -1,6 +1,6 @@ import copy import uuid -from Qt import QtWidgets, QtCore, QtGui +from qtpy import QtWidgets, QtCore, QtGui import qtawesome from openpype.client import get_projects diff --git a/openpype/tools/settings/settings/window.py b/openpype/tools/settings/settings/window.py index 77a2f64dac..f479908f7b 100644 --- a/openpype/tools/settings/settings/window.py +++ b/openpype/tools/settings/settings/window.py @@ -1,4 +1,4 @@ -from Qt import QtWidgets, QtGui, QtCore +from qtpy import QtWidgets, QtGui, QtCore from openpype import style diff --git a/openpype/tools/settings/settings/wrapper_widgets.py b/openpype/tools/settings/settings/wrapper_widgets.py index b14a226912..0b45a9a01b 100644 --- a/openpype/tools/settings/settings/wrapper_widgets.py +++ b/openpype/tools/settings/settings/wrapper_widgets.py @@ -1,5 +1,5 @@ from uuid import uuid4 -from Qt import QtWidgets +from qtpy import QtWidgets from .widgets import ( ExpandingWidget, diff --git 
a/openpype/tools/standalonepublish/app.py b/openpype/tools/standalonepublish/app.py index c93c33b2a5..d71c205c3b 100644 --- a/openpype/tools/standalonepublish/app.py +++ b/openpype/tools/standalonepublish/app.py @@ -4,7 +4,7 @@ import ctypes import signal from bson.objectid import ObjectId -from Qt import QtWidgets, QtCore, QtGui +from qtpy import QtWidgets, QtCore, QtGui from openpype.client import get_asset_by_id diff --git a/openpype/tools/standalonepublish/widgets/__init__.py b/openpype/tools/standalonepublish/widgets/__init__.py index e61897f807..d79654498d 100644 --- a/openpype/tools/standalonepublish/widgets/__init__.py +++ b/openpype/tools/standalonepublish/widgets/__init__.py @@ -1,4 +1,4 @@ -from Qt import QtCore +from qtpy import QtCore HelpRole = QtCore.Qt.UserRole + 2 FamilyRole = QtCore.Qt.UserRole + 3 diff --git a/openpype/tools/standalonepublish/widgets/model_asset.py b/openpype/tools/standalonepublish/widgets/model_asset.py index 9fed46b3fe..2f67036e78 100644 --- a/openpype/tools/standalonepublish/widgets/model_asset.py +++ b/openpype/tools/standalonepublish/widgets/model_asset.py @@ -1,7 +1,7 @@ import logging import collections -from Qt import QtCore, QtGui +from qtpy import QtCore, QtGui import qtawesome from openpype.client import get_assets diff --git a/openpype/tools/standalonepublish/widgets/model_filter_proxy_exact_match.py b/openpype/tools/standalonepublish/widgets/model_filter_proxy_exact_match.py index 604ae30934..df9c6fb35f 100644 --- a/openpype/tools/standalonepublish/widgets/model_filter_proxy_exact_match.py +++ b/openpype/tools/standalonepublish/widgets/model_filter_proxy_exact_match.py @@ -1,4 +1,4 @@ -from Qt import QtCore +from qtpy import QtCore class ExactMatchesFilterProxyModel(QtCore.QSortFilterProxyModel): diff --git a/openpype/tools/standalonepublish/widgets/model_filter_proxy_recursive_sort.py b/openpype/tools/standalonepublish/widgets/model_filter_proxy_recursive_sort.py index 71ecdf41dc..727d3a97d7 100644 --- 
a/openpype/tools/standalonepublish/widgets/model_filter_proxy_recursive_sort.py +++ b/openpype/tools/standalonepublish/widgets/model_filter_proxy_recursive_sort.py @@ -1,5 +1,5 @@ -from Qt import QtCore import re +from qtpy import QtCore class RecursiveSortFilterProxyModel(QtCore.QSortFilterProxyModel): diff --git a/openpype/tools/standalonepublish/widgets/model_tasks_template.py b/openpype/tools/standalonepublish/widgets/model_tasks_template.py index 648f7ed479..e22a4e3bf8 100644 --- a/openpype/tools/standalonepublish/widgets/model_tasks_template.py +++ b/openpype/tools/standalonepublish/widgets/model_tasks_template.py @@ -1,4 +1,4 @@ -from Qt import QtCore +from qtpy import QtCore import qtawesome from openpype.style import get_default_entity_icon_color diff --git a/openpype/tools/standalonepublish/widgets/model_tree.py b/openpype/tools/standalonepublish/widgets/model_tree.py index efac0d6b78..040e95d944 100644 --- a/openpype/tools/standalonepublish/widgets/model_tree.py +++ b/openpype/tools/standalonepublish/widgets/model_tree.py @@ -1,4 +1,4 @@ -from Qt import QtCore +from qtpy import QtCore from . 
import Node diff --git a/openpype/tools/standalonepublish/widgets/model_tree_view_deselectable.py b/openpype/tools/standalonepublish/widgets/model_tree_view_deselectable.py index 6a15916981..3c8c760eca 100644 --- a/openpype/tools/standalonepublish/widgets/model_tree_view_deselectable.py +++ b/openpype/tools/standalonepublish/widgets/model_tree_view_deselectable.py @@ -1,4 +1,4 @@ -from Qt import QtWidgets, QtCore +from qtpy import QtWidgets, QtCore class DeselectableTreeView(QtWidgets.QTreeView): diff --git a/openpype/tools/standalonepublish/widgets/widget_asset.py b/openpype/tools/standalonepublish/widgets/widget_asset.py index 77d756a606..01f49b79ec 100644 --- a/openpype/tools/standalonepublish/widgets/widget_asset.py +++ b/openpype/tools/standalonepublish/widgets/widget_asset.py @@ -1,5 +1,5 @@ import contextlib -from Qt import QtWidgets, QtCore +from qtpy import QtWidgets, QtCore import qtawesome from openpype.client import ( diff --git a/openpype/tools/standalonepublish/widgets/widget_component_item.py b/openpype/tools/standalonepublish/widgets/widget_component_item.py index de3cde50cd..523c3977e3 100644 --- a/openpype/tools/standalonepublish/widgets/widget_component_item.py +++ b/openpype/tools/standalonepublish/widgets/widget_component_item.py @@ -1,5 +1,5 @@ import os -from Qt import QtCore, QtGui, QtWidgets +from qtpy import QtCore, QtGui, QtWidgets from .resources import get_resource diff --git a/openpype/tools/standalonepublish/widgets/widget_components.py b/openpype/tools/standalonepublish/widgets/widget_components.py index 237e1da583..a86ac845f2 100644 --- a/openpype/tools/standalonepublish/widgets/widget_components.py +++ b/openpype/tools/standalonepublish/widgets/widget_components.py @@ -4,7 +4,7 @@ import tempfile import random import string -from Qt import QtWidgets, QtCore +from qtpy import QtWidgets, QtCore from openpype.pipeline import legacy_io from openpype.lib import ( diff --git 
a/openpype/tools/standalonepublish/widgets/widget_components_list.py b/openpype/tools/standalonepublish/widgets/widget_components_list.py index 0ee90ae4de..e29ab3c127 100644 --- a/openpype/tools/standalonepublish/widgets/widget_components_list.py +++ b/openpype/tools/standalonepublish/widgets/widget_components_list.py @@ -1,4 +1,4 @@ -from Qt import QtWidgets +from qtpy import QtWidgets class ComponentsList(QtWidgets.QTableWidget): diff --git a/openpype/tools/standalonepublish/widgets/widget_drop_empty.py b/openpype/tools/standalonepublish/widgets/widget_drop_empty.py index a890f38426..110e4d6353 100644 --- a/openpype/tools/standalonepublish/widgets/widget_drop_empty.py +++ b/openpype/tools/standalonepublish/widgets/widget_drop_empty.py @@ -1,4 +1,4 @@ -from Qt import QtWidgets, QtCore, QtGui +from qtpy import QtWidgets, QtCore, QtGui class DropEmpty(QtWidgets.QWidget): diff --git a/openpype/tools/standalonepublish/widgets/widget_drop_frame.py b/openpype/tools/standalonepublish/widgets/widget_drop_frame.py index f8a8273b26..f46e31786c 100644 --- a/openpype/tools/standalonepublish/widgets/widget_drop_frame.py +++ b/openpype/tools/standalonepublish/widgets/widget_drop_frame.py @@ -4,7 +4,7 @@ import json import clique import subprocess import openpype.lib -from Qt import QtWidgets, QtCore +from qtpy import QtWidgets, QtCore from . 
import DropEmpty, ComponentsList, ComponentItem @@ -178,7 +178,7 @@ class DropDataFrame(QtWidgets.QFrame): paths = self._get_all_paths(in_paths) collectionable_paths = [] non_collectionable_paths = [] - for path in in_paths: + for path in paths: ext = os.path.splitext(path)[1] if ext in self.image_extensions or ext in self.sequence_types: collectionable_paths.append(path) diff --git a/openpype/tools/standalonepublish/widgets/widget_family.py b/openpype/tools/standalonepublish/widgets/widget_family.py index eab66d75b3..11c5ec33b7 100644 --- a/openpype/tools/standalonepublish/widgets/widget_family.py +++ b/openpype/tools/standalonepublish/widgets/widget_family.py @@ -1,6 +1,6 @@ import re -from Qt import QtWidgets, QtCore +from qtpy import QtWidgets, QtCore from openpype.client import ( get_asset_by_name, @@ -186,19 +186,11 @@ class FamilyWidget(QtWidgets.QWidget): if item is None: return - asset_doc = None - if asset_name != self.NOT_SELECTED: - # Get the assets from the database which match with the name - project_name = self.dbcon.active_project() - asset_doc = get_asset_by_name( - project_name, asset_name, fields=["_id"] - ) - - # Get plugin and family - plugin = item.data(PluginRole) - # Early exit if no asset name - if not asset_name.strip(): + if ( + asset_name == self.NOT_SELECTED + or not asset_name.strip() + ): self._build_menu([]) item.setData(ExistsRole, False) print("Asset name is required ..") @@ -210,8 +202,10 @@ class FamilyWidget(QtWidgets.QWidget): asset_doc = get_asset_by_name( project_name, asset_name, fields=["_id"] ) + # Get plugin plugin = item.data(PluginRole) + if asset_doc and plugin: asset_id = asset_doc["_id"] task_name = self.dbcon.Session["AVALON_TASK"] diff --git a/openpype/tools/standalonepublish/widgets/widget_family_desc.py b/openpype/tools/standalonepublish/widgets/widget_family_desc.py index 2095b332bd..33174a852b 100644 --- a/openpype/tools/standalonepublish/widgets/widget_family_desc.py +++ 
b/openpype/tools/standalonepublish/widgets/widget_family_desc.py @@ -1,5 +1,5 @@ import six -from Qt import QtWidgets, QtCore, QtGui +from qtpy import QtWidgets, QtCore, QtGui import qtawesome from . import FamilyRole, PluginRole diff --git a/openpype/tools/standalonepublish/widgets/widget_shadow.py b/openpype/tools/standalonepublish/widgets/widget_shadow.py index de5fdf6be0..64cb9544fa 100644 --- a/openpype/tools/standalonepublish/widgets/widget_shadow.py +++ b/openpype/tools/standalonepublish/widgets/widget_shadow.py @@ -1,4 +1,4 @@ -from Qt import QtWidgets, QtCore, QtGui +from qtpy import QtWidgets, QtCore, QtGui class ShadowWidget(QtWidgets.QWidget): diff --git a/openpype/tools/stdout_broker/window.py b/openpype/tools/stdout_broker/window.py index f5720ca05b..5825da73e2 100644 --- a/openpype/tools/stdout_broker/window.py +++ b/openpype/tools/stdout_broker/window.py @@ -1,7 +1,7 @@ import re import collections -from Qt import QtWidgets +from qtpy import QtWidgets from openpype import style diff --git a/openpype/tools/tray/pype_info_widget.py b/openpype/tools/tray/pype_info_widget.py index 232d2024ac..c616ad4dba 100644 --- a/openpype/tools/tray/pype_info_widget.py +++ b/openpype/tools/tray/pype_info_widget.py @@ -2,7 +2,7 @@ import os import json import collections -from Qt import QtCore, QtGui, QtWidgets +from qtpy import QtCore, QtGui, QtWidgets from openpype import style from openpype import resources diff --git a/openpype/tools/tray/pype_tray.py b/openpype/tools/tray/pype_tray.py index d4189af4d8..df18325bec 100644 --- a/openpype/tools/tray/pype_tray.py +++ b/openpype/tools/tray/pype_tray.py @@ -6,7 +6,7 @@ import subprocess import platform -from Qt import QtCore, QtGui, QtWidgets +from qtpy import QtCore, QtGui, QtWidgets import openpype.version from openpype import resources, style diff --git a/openpype/tools/traypublisher/window.py b/openpype/tools/traypublisher/window.py index dfe06d149d..3007fa66a5 100644 --- a/openpype/tools/traypublisher/window.py +++ 
b/openpype/tools/traypublisher/window.py @@ -8,7 +8,7 @@ publishing plugins. import platform -from Qt import QtWidgets, QtCore +from qtpy import QtWidgets, QtCore import qtawesome import appdirs diff --git a/openpype/version.py b/openpype/version.py index ffabcf8025..5b5b1475c0 100644 --- a/openpype/version.py +++ b/openpype/version.py @@ -1,3 +1,3 @@ # -*- coding: utf-8 -*- """Package declaring Pype version.""" -__version__ = "3.14.7" +__version__ = "3.14.9" diff --git a/setup.cfg b/setup.cfg index 0a9664033d..10cca3eb3f 100644 --- a/setup.cfg +++ b/setup.cfg @@ -8,7 +8,8 @@ exclude = docs, */vendor, website, - openpype/vendor + openpype/vendor, + *deadline/repository/custom/plugins max-complexity = 30 diff --git a/tests/README.md b/tests/README.md index 69828cdbc2..d36b6534f8 100644 --- a/tests/README.md +++ b/tests/README.md @@ -1,5 +1,15 @@ Automatic tests for OpenPype ============================ + +Requirements: +============ +Tests recreate a fresh DB for each run, so the `mongorestore`, `mongodump` and `mongoimport` command-line tools must be installed and on PATH. 
+ +You can find installers here: https://www.mongodb.com/docs/database-tools/installation/installation/ + +You can test that `mongorestore` is available by running this in a console or cmd: ```mongorestore --version``` + Structure: - integration - end to end tests, slow (see README.md in the integration folder for more info) - openpype/modules/MODULE_NAME - structure follow directory structure in code base diff --git a/tests/conftest.py b/tests/conftest.py index aa850be1a6..7b58b0314d 100644 --- a/tests/conftest.py +++ b/tests/conftest.py @@ -43,3 +43,15 @@ def app_variant(request): @pytest.fixture(scope="module") def timeout(request): return request.config.getoption("--timeout") + + +@pytest.hookimpl(tryfirst=True, hookwrapper=True) +def pytest_runtest_makereport(item, call): + # execute all other hooks to obtain the report object + outcome = yield + rep = outcome.get_result() + + # set a report attribute for each phase of a call, which can + # be "setup", "call", "teardown" + + setattr(item, "rep_" + rep.when, rep) diff --git a/tests/integration/hosts/aftereffects/lib.py b/tests/integration/hosts/aftereffects/lib.py index 9fffc6073d..ffad33d13c 100644 --- a/tests/integration/hosts/aftereffects/lib.py +++ b/tests/integration/hosts/aftereffects/lib.py @@ -2,10 +2,13 @@ import os import pytest import shutil -from tests.lib.testing_classes import HostFixtures +from tests.lib.testing_classes import ( + HostFixtures, + PublishTest, +) -class AfterEffectsTestClass(HostFixtures): +class AEHostFixtures(HostFixtures): @pytest.fixture(scope="module") def last_workfile_path(self, download_test_data, output_folder_url): """Get last_workfile_path from source data. 
@@ -15,15 +18,15 @@ class AfterEffectsTestClass(HostFixtures): src_path = os.path.join(download_test_data, "input", "workfile", - "test_project_test_asset_TestTask_v001.aep") - dest_folder = os.path.join(download_test_data, + "test_project_test_asset_test_task_v001.aep") + dest_folder = os.path.join(output_folder_url, self.PROJECT, self.ASSET, "work", self.TASK) os.makedirs(dest_folder) dest_path = os.path.join(dest_folder, - "test_project_test_asset_TestTask_v001.aep") + "test_project_test_asset_test_task_v001.aep") shutil.copy(src_path, dest_path) yield dest_path @@ -32,3 +35,12 @@ class AfterEffectsTestClass(HostFixtures): def startup_scripts(self, monkeypatch_session, download_test_data): """Points Maya to userSetup file from input data""" pass + + @pytest.fixture(scope="module") + def skip_compare_folders(self): + # skip folder that contain "Logs", these come only from Deadline + return ["Logs", "Auto-Save"] + + +class AELocalPublishTestClass(AEHostFixtures, PublishTest): + """Testing class for local publishes.""" diff --git a/tests/integration/hosts/aftereffects/test_publish_in_aftereffects.py b/tests/integration/hosts/aftereffects/test_publish_in_aftereffects_legacy.py similarity index 58% rename from tests/integration/hosts/aftereffects/test_publish_in_aftereffects.py rename to tests/integration/hosts/aftereffects/test_publish_in_aftereffects_legacy.py index 4925cbd2d7..5d0c15d63a 100644 --- a/tests/integration/hosts/aftereffects/test_publish_in_aftereffects.py +++ b/tests/integration/hosts/aftereffects/test_publish_in_aftereffects_legacy.py @@ -1,14 +1,16 @@ import logging from tests.lib.assert_classes import DBAssert -from tests.integration.hosts.aftereffects.lib import AfterEffectsTestClass +from tests.integration.hosts.aftereffects.lib import AELocalPublishTestClass log = logging.getLogger("test_publish_in_aftereffects") -class TestPublishInAfterEffects(AfterEffectsTestClass): +class TestPublishInAfterEffects(AELocalPublishTestClass): """Basic test case 
for publishing in AfterEffects + Uses old Pyblish schema of created instances. + Uses generic TestCase to prepare fixtures for test data, testing DBs, env vars. @@ -27,15 +29,15 @@ class TestPublishInAfterEffects(AfterEffectsTestClass): PERSIST = False TEST_FILES = [ - ("1c8261CmHwyMgS-g7S4xL5epAp0jCBmhf", - "test_aftereffects_publish.zip", + ("1jqI_uG2NusKFvZZF7C0ScHjxFJrlc9F-", + "test_aftereffects_publish_legacy.zip", "") ] - APP = "aftereffects" + APP_GROUP = "aftereffects" APP_VARIANT = "" - APP_NAME = "{}/{}".format(APP, APP_VARIANT) + APP_NAME = "{}/{}".format(APP_GROUP, APP_VARIANT) TIMEOUT = 120 # publish timeout @@ -49,23 +51,37 @@ class TestPublishInAfterEffects(AfterEffectsTestClass): failures.append( DBAssert.count_of_types(dbcon, "version", 0, name={"$ne": 1})) - failures.append( - DBAssert.count_of_types(dbcon, "subset", 1, - name="imageMainBackgroundcopy")) - failures.append( DBAssert.count_of_types(dbcon, "subset", 1, name="workfileTest_task")) failures.append( DBAssert.count_of_types(dbcon, "subset", 1, - name="reviewTesttask")) + name="renderTest_taskMain")) failures.append( DBAssert.count_of_types(dbcon, "representation", 4)) - additional_args = {"context.subset": "renderTestTaskDefault", + additional_args = {"context.subset": "workfileTest_task", + "context.ext": "aep"} + failures.append( + DBAssert.count_of_types(dbcon, "representation", 1, + additional_args=additional_args)) + + additional_args = {"context.subset": "renderTest_taskMain", "context.ext": "png"} + failures.append( + DBAssert.count_of_types(dbcon, "representation", 2, + additional_args=additional_args)) + + additional_args = {"context.subset": "renderTest_taskMain", + "name": "thumbnail"} + failures.append( + DBAssert.count_of_types(dbcon, "representation", 1, + additional_args=additional_args)) + + additional_args = {"context.subset": "renderTest_taskMain", + "name": "png_png"} failures.append( DBAssert.count_of_types(dbcon, "representation", 1, additional_args=additional_args)) 
diff --git a/tests/integration/hosts/aftereffects/test_publish_in_aftereffects_multiframe.py b/tests/integration/hosts/aftereffects/test_publish_in_aftereffects_multiframe.py deleted file mode 100644 index c882e0f9b2..0000000000 --- a/tests/integration/hosts/aftereffects/test_publish_in_aftereffects_multiframe.py +++ /dev/null @@ -1,64 +0,0 @@ -import logging - -from tests.lib.assert_classes import DBAssert -from tests.integration.hosts.aftereffects.lib import AfterEffectsTestClass - -log = logging.getLogger("test_publish_in_aftereffects") - - -class TestPublishInAfterEffects(AfterEffectsTestClass): - """Basic test case for publishing in AfterEffects - - Should publish 5 frames - """ - PERSIST = True - - TEST_FILES = [ - ("12aSDRjthn4X3yw83gz_0FZJcRRiVDEYT", - "test_aftereffects_publish_multiframe.zip", - "") - ] - - APP = "aftereffects" - APP_VARIANT = "" - - APP_NAME = "{}/{}".format(APP, APP_VARIANT) - - TIMEOUT = 120 # publish timeout - - def test_db_asserts(self, dbcon, publish_finished): - """Host and input data dependent expected results in DB.""" - print("test_db_asserts") - failures = [] - - failures.append(DBAssert.count_of_types(dbcon, "version", 2)) - - failures.append( - DBAssert.count_of_types(dbcon, "version", 0, name={"$ne": 1})) - - failures.append( - DBAssert.count_of_types(dbcon, "subset", 1, - name="imageMainBackgroundcopy")) - - failures.append( - DBAssert.count_of_types(dbcon, "subset", 1, - name="workfileTest_task")) - - failures.append( - DBAssert.count_of_types(dbcon, "subset", 1, - name="reviewTesttask")) - - failures.append( - DBAssert.count_of_types(dbcon, "representation", 4)) - - additional_args = {"context.subset": "renderTestTaskDefault", - "context.ext": "png"} - failures.append( - DBAssert.count_of_types(dbcon, "representation", 1, - additional_args=additional_args)) - - assert not any(failures) - - -if __name__ == "__main__": - test_case = TestPublishInAfterEffects() diff --git a/tests/integration/hosts/maya/lib.py 
b/tests/integration/hosts/maya/lib.py index f3a438c065..ab402f36e0 100644 --- a/tests/integration/hosts/maya/lib.py +++ b/tests/integration/hosts/maya/lib.py @@ -2,10 +2,13 @@ import os import pytest import shutil -from tests.lib.testing_classes import HostFixtures +from tests.lib.testing_classes import ( + HostFixtures, + PublishTest, +) -class MayaTestClass(HostFixtures): +class MayaHostFixtures(HostFixtures): @pytest.fixture(scope="module") def last_workfile_path(self, download_test_data, output_folder_url): """Get last_workfile_path from source data. @@ -15,7 +18,7 @@ class MayaTestClass(HostFixtures): src_path = os.path.join(download_test_data, "input", "workfile", - "test_project_test_asset_TestTask_v001.mb") + "test_project_test_asset_test_task_v001.mb") dest_folder = os.path.join(output_folder_url, self.PROJECT, self.ASSET, @@ -23,7 +26,7 @@ class MayaTestClass(HostFixtures): self.TASK) os.makedirs(dest_folder) dest_path = os.path.join(dest_folder, - "test_project_test_asset_TestTask_v001.mb") + "test_project_test_asset_test_task_v001.mb") shutil.copy(src_path, dest_path) yield dest_path @@ -39,3 +42,11 @@ class MayaTestClass(HostFixtures): "{}{}{}".format(startup_path, os.pathsep, original_pythonpath)) + + @pytest.fixture(scope="module") + def skip_compare_folders(self): + yield [] + + +class MayaLocalPublishTestClass(MayaHostFixtures, PublishTest): + """Testing class for local publishes.""" diff --git a/tests/integration/hosts/maya/test_publish_in_maya.py b/tests/integration/hosts/maya/test_publish_in_maya.py index 68b0564428..b7ee228aae 100644 --- a/tests/integration/hosts/maya/test_publish_in_maya.py +++ b/tests/integration/hosts/maya/test_publish_in_maya.py @@ -1,7 +1,8 @@ -from tests.integration.hosts.maya.lib import MayaTestClass +from tests.lib.assert_classes import DBAssert +from tests.integration.hosts.maya.lib import MayaLocalPublishTestClass -class TestPublishInMaya(MayaTestClass): +class TestPublishInMaya(MayaLocalPublishTestClass): """Basic 
test case for publishing in Maya Shouldnt be running standalone only via 'runtests' pype command! (??) @@ -28,7 +29,7 @@ class TestPublishInMaya(MayaTestClass): ("1BTSIIULJTuDc8VvXseuiJV_fL6-Bu7FP", "test_maya_publish.zip", "") ] - APP = "maya" + APP_GROUP = "maya" # keep empty to locate latest installed variant or explicit APP_VARIANT = "" @@ -37,33 +38,41 @@ class TestPublishInMaya(MayaTestClass): def test_db_asserts(self, dbcon, publish_finished): """Host and input data dependent expected results in DB.""" print("test_db_asserts") - assert 5 == dbcon.count_documents({"type": "version"}), \ - "Not expected no of versions" + failures = [] + failures.append(DBAssert.count_of_types(dbcon, "version", 2)) - assert 0 == dbcon.count_documents({"type": "version", - "name": {"$ne": 1}}), \ - "Only versions with 1 expected" + failures.append( + DBAssert.count_of_types(dbcon, "version", 0, name={"$ne": 1})) - assert 1 == dbcon.count_documents({"type": "subset", - "name": "modelMain"}), \ - "modelMain subset must be present" + failures.append( + DBAssert.count_of_types(dbcon, "subset", 1, + name="modelMain")) - assert 1 == dbcon.count_documents({"type": "subset", - "name": "workfileTest_task"}), \ - "workfileTest_task subset must be present" + failures.append( + DBAssert.count_of_types(dbcon, "subset", 1, + name="workfileTest_task")) - assert 11 == dbcon.count_documents({"type": "representation"}), \ - "Not expected no of representations" + failures.append(DBAssert.count_of_types(dbcon, "representation", 5)) - assert 2 == dbcon.count_documents({"type": "representation", - "context.subset": "modelMain", - "context.ext": "abc"}), \ - "Not expected no of representations with ext 'abc'" + additional_args = {"context.subset": "modelMain", + "context.ext": "abc"} + failures.append( + DBAssert.count_of_types(dbcon, "representation", 2, + additional_args=additional_args)) - assert 2 == dbcon.count_documents({"type": "representation", - "context.subset": "modelMain", - "context.ext": 
"ma"}), \ - "Not expected no of representations with ext 'abc'" + additional_args = {"context.subset": "modelMain", + "context.ext": "ma"} + failures.append( + DBAssert.count_of_types(dbcon, "representation", 2, + additional_args=additional_args)) + + additional_args = {"context.subset": "workfileTest_task", + "context.ext": "mb"} + failures.append( + DBAssert.count_of_types(dbcon, "representation", 1, + additional_args=additional_args)) + + assert not any(failures) if __name__ == "__main__": diff --git a/tests/integration/hosts/nuke/lib.py b/tests/integration/hosts/nuke/lib.py index d3c3d7ba81..96daec7427 100644 --- a/tests/integration/hosts/nuke/lib.py +++ b/tests/integration/hosts/nuke/lib.py @@ -1,17 +1,20 @@ import os import pytest -import shutil +import re -from tests.lib.testing_classes import HostFixtures +from tests.lib.testing_classes import ( + HostFixtures, + PublishTest, +) -class NukeTestClass(HostFixtures): +class NukeHostFixtures(HostFixtures): @pytest.fixture(scope="module") def last_workfile_path(self, download_test_data, output_folder_url): """Get last_workfile_path from source data. """ - source_file_name = "test_project_test_asset_CompositingInNuke_v001.nk" + source_file_name = "test_project_test_asset_test_task_v001.nk" src_path = os.path.join(download_test_data, "input", "workfile", @@ -27,7 +30,16 @@ class NukeTestClass(HostFixtures): dest_path = os.path.join(dest_folder, source_file_name) - shutil.copy(src_path, dest_path) + # rewrite old root with temporary file + # TODO - using only C:/projects seems wrong - but where to get root ? 
+ replace_pattern = re.compile(re.escape("C:/projects"), re.IGNORECASE) + with open(src_path, "r") as fp: + updated = fp.read() + updated = replace_pattern.sub(output_folder_url.replace("\\", '/'), + updated) + + with open(dest_path, "w") as fp: + fp.write(updated) yield dest_path @@ -41,4 +53,12 @@ class NukeTestClass(HostFixtures): monkeypatch_session.setenv("NUKE_PATH", "{}{}{}".format(startup_path, os.pathsep, - original_nuke_path)) \ No newline at end of file + original_nuke_path)) + + @pytest.fixture(scope="module") + def skip_compare_folders(self): + yield ["renders"] + + +class NukeLocalPublishTestClass(NukeHostFixtures, PublishTest): + """Testing class for local publishes.""" diff --git a/tests/integration/hosts/nuke/test_publish_in_nuke.py b/tests/integration/hosts/nuke/test_publish_in_nuke.py index 884160e0b5..f84f13fa20 100644 --- a/tests/integration/hosts/nuke/test_publish_in_nuke.py +++ b/tests/integration/hosts/nuke/test_publish_in_nuke.py @@ -1,17 +1,25 @@ import logging from tests.lib.assert_classes import DBAssert -from tests.integration.hosts.nuke.lib import NukeTestClass +from tests.integration.hosts.nuke.lib import NukeLocalPublishTestClass log = logging.getLogger("test_publish_in_nuke") -class TestPublishInNuke(NukeTestClass): +class TestPublishInNuke(NukeLocalPublishTestClass): """Basic test case for publishing in Nuke Uses generic TestCase to prepare fixtures for test data, testing DBs, env vars. + !!! + It expects a modified path in the WriteNode, + use '[python {nuke.script_directory()}]' instead of the regular root + dir (e.g. instead of `c:/projects/test_project/test_asset/test_task`). + Access the file path by selecting the WriteNode group, CTRL+Enter, update the file + input + !!! + Opens Nuke, runs publish on the prepared workfile. 
Then checks content of DB (if subset, version, representations were @@ -20,7 +28,8 @@ class TestPublishInNuke(NukeTestClass): How to run: (in cmd with activated {OPENPYPE_ROOT}/.venv) - {OPENPYPE_ROOT}/.venv/Scripts/python.exe {OPENPYPE_ROOT}/start.py runtests ../tests/integration/hosts/nuke # noqa: E501 + {OPENPYPE_ROOT}/.venv/Scripts/python.exe {OPENPYPE_ROOT}/start.py + runtests ../tests/integration/hosts/nuke # noqa: E501 To check log/errors from launched app's publish process keep PERSIST to True and check `test_openpype.logs` collection. @@ -30,14 +39,14 @@ class TestPublishInNuke(NukeTestClass): ("1SUurHj2aiQ21ZIMJfGVBI2KjR8kIjBGI", "test_Nuke_publish.zip", "") ] - APP = "nuke" + APP_GROUP = "nuke" - TIMEOUT = 120 # publish timeout + TIMEOUT = 50 # publish timeout # could be overwritten by command line arguments # keep empty to locate latest installed variant or explicit APP_VARIANT = "" - PERSIST = True # True - keep test_db, test_openpype, outputted test files + PERSIST = False # True - keep test_db, test_openpype, outputted test files TEST_DATA_FOLDER = None def test_db_asserts(self, dbcon, publish_finished): @@ -52,7 +61,7 @@ class TestPublishInNuke(NukeTestClass): failures.append( DBAssert.count_of_types(dbcon, "subset", 1, - name="renderCompositingInNukeMain")) + name="renderTest_taskMain")) failures.append( DBAssert.count_of_types(dbcon, "subset", 1, @@ -61,7 +70,7 @@ class TestPublishInNuke(NukeTestClass): failures.append( DBAssert.count_of_types(dbcon, "representation", 4)) - additional_args = {"context.subset": "renderCompositingInNukeMain", + additional_args = {"context.subset": "renderTest_taskMain", "context.ext": "exr"} failures.append( DBAssert.count_of_types(dbcon, "representation", 1, diff --git a/tests/integration/hosts/photoshop/lib.py b/tests/integration/hosts/photoshop/lib.py index 16ef2d3ae6..9d51a11c06 100644 --- a/tests/integration/hosts/photoshop/lib.py +++ b/tests/integration/hosts/photoshop/lib.py @@ -2,10 +2,13 @@ import os import 
pytest import shutil -from tests.lib.testing_classes import HostFixtures +from tests.lib.testing_classes import ( + HostFixtures, + PublishTest +) -class PhotoshopTestClass(HostFixtures): +class PhotoshopTestClass(HostFixtures, PublishTest): @pytest.fixture(scope="module") def last_workfile_path(self, download_test_data, output_folder_url): """Get last_workfile_path from source data. @@ -32,3 +35,7 @@ class PhotoshopTestClass(HostFixtures): def startup_scripts(self, monkeypatch_session, download_test_data): """Points Maya to userSetup file from input data""" pass + + @pytest.fixture(scope="module") + def skip_compare_folders(self): + yield [] diff --git a/tests/integration/hosts/photoshop/test_publish_in_photoshop.py b/tests/integration/hosts/photoshop/test_publish_in_photoshop.py index 5387bbe51e..4aaf43234d 100644 --- a/tests/integration/hosts/photoshop/test_publish_in_photoshop.py +++ b/tests/integration/hosts/photoshop/test_publish_in_photoshop.py @@ -41,11 +41,11 @@ class TestPublishInPhotoshop(PhotoshopTestClass): ("1zD2v5cBgkyOm_xIgKz3WKn8aFB_j8qC-", "test_photoshop_publish.zip", "") ] - APP = "photoshop" + APP_GROUP = "photoshop" # keep empty to locate latest installed variant or explicit APP_VARIANT = "" - APP_NAME = "{}/{}".format(APP, APP_VARIANT) + APP_NAME = "{}/{}".format(APP_GROUP, APP_VARIANT) TIMEOUT = 120 # publish timeout @@ -72,7 +72,7 @@ class TestPublishInPhotoshop(PhotoshopTestClass): name="workfileTest_task")) failures.append( - DBAssert.count_of_types(dbcon, "representation", 8)) + DBAssert.count_of_types(dbcon, "representation", 6)) additional_args = {"context.subset": "imageMainForeground", "context.ext": "png"} diff --git a/tests/lib/db_handler.py b/tests/lib/db_handler.py index b181055012..82e741cc3b 100644 --- a/tests/lib/db_handler.py +++ b/tests/lib/db_handler.py @@ -118,9 +118,8 @@ class DBHandler: "Run with overwrite=True") else: if collection: - coll = self.client[db_name_out].get(collection) - if coll: - coll.drop() + if 
collection in self.client[db_name_out].list_collection_names(): # noqa + self.client[db_name_out][collection].drop() else: self.teardown(db_name_out) @@ -133,7 +132,11 @@ class DBHandler: db_name=db_name, db_name_out=db_name_out, collection=collection) print("mongorestore query:: {}".format(query)) - subprocess.run(query) + try: + subprocess.run(query) + except FileNotFoundError: + raise RuntimeError("'mongorestore' utility must be on PATH. " + "Please install it.") def teardown(self, db_name): """Drops 'db_name' if exists.""" @@ -231,13 +234,15 @@ class DBHandler: # Examples # handler = DBHandler(uri="mongodb://localhost:27017") # # -# backup_dir = "c:\\projects\\test_nuke_publish\\input\\dumps" +# backup_dir = "c:\\projects\\test_zips\\test_nuke_deadline_publish\\input\\dumps" # noqa # # # -# handler.backup_to_dump("avalon", backup_dir, True, collection="test_project") -# handler.setup_from_dump("test_db", backup_dir, True, db_name_out="avalon", collection="test_project") -# handler.setup_from_sql_file("test_db", "c:\\projects\\sql\\item.sql", +# handler.backup_to_dump("avalon_tests", backup_dir, True, collection="test_project") # noqa +# handler.backup_to_dump("openpype_tests", backup_dir, True, collection="settings") # noqa + +# handler.setup_from_dump("avalon_tests", backup_dir, True, db_name_out="avalon_tests", collection="test_project") # noqa +# handler.setup_from_sql_file("avalon_tests", "c:\\projects\\sql\\item.sql", # collection="test_project", # drop=False, mode="upsert") -# handler.setup_from_sql("test_db", "c:\\projects\\sql", +# handler.setup_from_sql("avalon_tests", "c:\\projects\\sql", # collection="test_project", # drop=False, mode="upsert") diff --git a/tests/lib/testing_classes.py b/tests/lib/testing_classes.py index 78a9f81095..82cc321ae8 100644 --- a/tests/lib/testing_classes.py +++ b/tests/lib/testing_classes.py @@ -8,6 +8,7 @@ import tempfile import shutil import glob import platform +import re from tests.lib.db_handler import DBHandler from
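The `try/except FileNotFoundError` above surfaces a missing `mongorestore` binary with a readable error only after `subprocess.run` fails. An equivalent up-front guard (a sketch; `run_mongorestore` is a hypothetical helper, not part of this diff) can check the PATH first with `shutil.which`:

```python
import shutil
import subprocess


def run_mongorestore(query):
    """Run a mongorestore command line, failing early with a readable
    error when the utility is not installed.

    `query` is the argument list that would be passed to subprocess.run.
    """
    if shutil.which("mongorestore") is None:
        raise RuntimeError(
            "'mongorestore' utility must be on PATH. Please install it.")
    return subprocess.run(query)
```

`shutil.which` honours the current `PATH`, so the check matches exactly what `subprocess.run` would resolve.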
common.openpype_common.distribution.file_handler import RemoteFileHandler @@ -36,9 +37,9 @@ class ModuleUnitTest(BaseTest): PERSIST = False # True to not purge temporary folder nor test DB TEST_OPENPYPE_MONGO = "mongodb://localhost:27017" - TEST_DB_NAME = "test_db" + TEST_DB_NAME = "avalon_tests" TEST_PROJECT_NAME = "test_project" - TEST_OPENPYPE_NAME = "test_openpype" + TEST_OPENPYPE_NAME = "openpype_tests" TEST_FILES = [] @@ -57,7 +58,7 @@ class ModuleUnitTest(BaseTest): m.undo() @pytest.fixture(scope="module") - def download_test_data(self, test_data_folder, persist=False): + def download_test_data(self, test_data_folder, persist, request): test_data_folder = test_data_folder or self.TEST_DATA_FOLDER if test_data_folder: print("Using existing folder {}".format(test_data_folder)) @@ -78,7 +79,8 @@ class ModuleUnitTest(BaseTest): print("Temporary folder created:: {}".format(tmpdir)) yield tmpdir - persist = persist or self.PERSIST + persist = (persist or self.PERSIST or + self.is_test_failed(request)) if not persist: print("Removing {}".format(tmpdir)) shutil.rmtree(tmpdir) @@ -125,7 +127,8 @@ class ModuleUnitTest(BaseTest): monkeypatch_session.setenv("TEST_SOURCE_FOLDER", download_test_data) @pytest.fixture(scope="module") - def db_setup(self, download_test_data, env_var, monkeypatch_session): + def db_setup(self, download_test_data, env_var, monkeypatch_session, + request): """Restore prepared MongoDB dumps into selected DB.""" backup_dir = os.path.join(download_test_data, "input", "dumps") @@ -135,13 +138,14 @@ class ModuleUnitTest(BaseTest): overwrite=True, db_name_out=self.TEST_DB_NAME) - db_handler.setup_from_dump("openpype", backup_dir, + db_handler.setup_from_dump(self.TEST_OPENPYPE_NAME, backup_dir, overwrite=True, db_name_out=self.TEST_OPENPYPE_NAME) yield db_handler - if not self.PERSIST: + persist = self.PERSIST or self.is_test_failed(request) + if not persist: db_handler.teardown(self.TEST_DB_NAME) db_handler.teardown(self.TEST_OPENPYPE_NAME) @@ 
-166,6 +170,13 @@ class ModuleUnitTest(BaseTest): mongo_client = OpenPypeMongoConnection.get_mongo_client() yield mongo_client[self.TEST_OPENPYPE_NAME]["settings"] + def is_test_failed(self, request): + # if request.node doesn't have rep_call, something failed + try: + return request.node.rep_call.failed + except AttributeError: + return True + class PublishTest(ModuleUnitTest): """Test class for publishing in hosts. @@ -188,7 +199,7 @@ class PublishTest(ModuleUnitTest): TODO: implement test on file size, file content """ - APP = "" + APP_GROUP = "" TIMEOUT = 120 # publish timeout @@ -210,10 +221,10 @@ class PublishTest(ModuleUnitTest): if not app_variant: variant = ( application_manager.find_latest_available_variant_for_group( - self.APP)) + self.APP_GROUP)) app_variant = variant.name - yield "{}/{}".format(self.APP, app_variant) + yield "{}/{}".format(self.APP_GROUP, app_variant) @pytest.fixture(scope="module") def output_folder_url(self, download_test_data): @@ -310,7 +321,8 @@ class PublishTest(ModuleUnitTest): yield True def test_folder_structure_same(self, dbcon, publish_finished, - download_test_data, output_folder_url): + download_test_data, output_folder_url, + skip_compare_folders): """Check if expected and published subfolders contain same files. Compares only presence, not size nor content! 
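The new `is_test_failed` helper reads `request.node.rep_call`, which only exists if a `conftest.py` hook stores each phase's report on the test item. The standard pytest pattern for that (a sketch following the pytest documentation; the repository's actual `conftest.py` may differ) looks like:

```python
import pytest


@pytest.hookimpl(tryfirst=True, hookwrapper=True)
def pytest_runtest_makereport(item, call):
    # Let pytest build the report for each phase, then attach it to the
    # test item as rep_setup / rep_call / rep_teardown so module-scoped
    # fixtures can later ask e.g. `item.rep_call.failed`.
    outcome = yield
    rep = outcome.get_result()
    setattr(item, "rep_" + rep.when, rep)
```

Without such a hook, `rep_call` is absent, which is why `is_test_failed` treats `AttributeError` as a failure.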
@@ -328,12 +340,33 @@ class PublishTest(ModuleUnitTest): glob.glob(expected_dir_base + "\\**", recursive=True) if f != expected_dir_base and os.path.exists(f)) - not_matched = expected.symmetric_difference(published) - assert not not_matched, "Missing {} files".format( - "\n".join(sorted(not_matched))) + filtered_published = self._filter_files(published, + skip_compare_folders) + + # filter out temp files from 'expected' too; it could be accidentally + # polluted by copying 'output' into the zip file + filtered_expected = self._filter_files(expected, skip_compare_folders) + + not_matched = filtered_expected.symmetric_difference(filtered_published) + if not_matched: + raise AssertionError("Missing {} files".format( + "\n".join(sorted(not_matched)))) + + def _filter_files(self, source_files, skip_compare_folders): + """Filter list of files according to regex patterns.""" + filtered = set() + for file_path in source_files: + if skip_compare_folders: + if not any(re.search(val, file_path) + for val in skip_compare_folders): + filtered.add(file_path) + else: + filtered.add(file_path) + + return filtered -class HostFixtures(PublishTest): +class HostFixtures(): """Host specific fixtures.
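The `_filter_files` helper above keeps only paths that match none of the `skip_compare_folders` regexes. As a standalone function (with hypothetical example paths) the behaviour is:

```python
import re


def filter_files(source_files, skip_compare_folders):
    """Return the subset of paths matching none of the given regex patterns.

    An empty pattern list means nothing is filtered out.
    """
    if not skip_compare_folders:
        return set(source_files)
    return {
        file_path for file_path in source_files
        if not any(re.search(pattern, file_path)
                   for pattern in skip_compare_folders)
    }


published = {"output/render/beauty.exr", "output/tmp/scratch.tmp"}
# Drop anything under a 'tmp' folder before comparing folder structures.
assert filter_files(published, [r"tmp"]) == {"output/render/beauty.exr"}
```

Because `re.search` matches anywhere in the path, a pattern like `r"tmp"` skips whole subtrees without needing anchors.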
Should be implemented once per host.""" @pytest.fixture(scope="module") def last_workfile_path(self, download_test_data, output_folder_url): @@ -344,3 +377,8 @@ class HostFixtures(PublishTest): def startup_scripts(self, monkeypatch_session, download_test_data): """Adds init scripts (like userSetup) to expected location""" raise NotImplementedError + + @pytest.fixture(scope="module") + def skip_compare_folders(self): + """Use list of regexes to filter out published folders from comparison""" + raise NotImplementedError diff --git a/tests/resources/test_data.zip b/tests/resources/test_data.zip index 0faab86b37..e22b9acdbd 100644 Binary files a/tests/resources/test_data.zip and b/tests/resources/test_data.zip differ diff --git a/tests/unit/igniter/test_bootstrap_repos.py b/tests/unit/igniter/test_bootstrap_repos.py index 10278c4928..8b32bbe03c 100644 --- a/tests/unit/igniter/test_bootstrap_repos.py +++ b/tests/unit/igniter/test_bootstrap_repos.py @@ -33,11 +33,11 @@ def test_openpype_version(printer): assert str(v2) == "1.2.3-x" assert v1 > v2 - v3 = OpenPypeVersion(1, 2, 3, staging=True) - assert str(v3) == "1.2.3+staging" + v3 = OpenPypeVersion(1, 2, 3) + assert str(v3) == "1.2.3" - v4 = OpenPypeVersion(1, 2, 3, staging="True", prerelease="rc.1") - assert str(v4) == "1.2.3-rc.1+staging" + v4 = OpenPypeVersion(1, 2, 3, prerelease="rc.1") + assert str(v4) == "1.2.3-rc.1" assert v3 > v4 assert v1 > v4 assert v4 < OpenPypeVersion(1, 2, 3, prerelease="rc.1") @@ -73,7 +73,7 @@ def test_openpype_version(printer): OpenPypeVersion(4, 8, 10), OpenPypeVersion(4, 8, 20), OpenPypeVersion(4, 8, 9), - OpenPypeVersion(1, 2, 3, staging=True), + OpenPypeVersion(1, 2, 3), OpenPypeVersion(1, 2, 3, build="foo") ] res = sorted(sort_versions) @@ -104,27 +104,26 @@ def test_openpype_version(printer): with pytest.raises(ValueError): _ = OpenPypeVersion(version="booobaa") - v11 = OpenPypeVersion(version="4.6.7-foo+staging") + v11 = OpenPypeVersion(version="4.6.7-foo") assert v11.major == 4
assert v11.minor == 6 assert v11.patch == 7 - assert v11.staging is True assert v11.prerelease == "foo" def test_get_main_version(): - ver = OpenPypeVersion(1, 2, 3, staging=True, prerelease="foo") + ver = OpenPypeVersion(1, 2, 3, prerelease="foo") assert ver.get_main_version() == "1.2.3" def test_get_version_path_from_list(): versions = [ OpenPypeVersion(1, 2, 3, path=Path('/foo/bar')), - OpenPypeVersion(3, 4, 5, staging=True, path=Path("/bar/baz")), + OpenPypeVersion(3, 4, 5, path=Path("/bar/baz")), OpenPypeVersion(6, 7, 8, prerelease="x", path=Path("boo/goo")) ] path = BootstrapRepos.get_version_path_from_list( - "3.4.5+staging", versions) + "3.4.5", versions) assert path == Path("/bar/baz") @@ -362,12 +361,15 @@ def test_find_openpype(fix_bootstrap, tmp_path_factory, monkeypatch, printer): result = fix_bootstrap.find_openpype(include_zips=True) # we should have results as files were created assert result is not None, "no OpenPype version found" - # latest item in `result` should be latest version found. + # latest item in `result` should be the latest version found. + # this will be `7.2.10-foo+staging` even with *staging* in it, since we've + # dropped the logic to handle staging separately, and in alphabetical + # sorting it comes after `strange`.
expected_path = Path( d_path / "{}{}{}".format( - test_versions_2[3].prefix, - test_versions_2[3].version, - test_versions_2[3].suffix + test_versions_2[4].prefix, + test_versions_2[4].version, + test_versions_2[4].suffix ) ) assert result, "nothing found" diff --git a/tools/run_mongo.ps1 b/tools/run_mongo.ps1 index c64ff75969..85b94b0971 100644 --- a/tools/run_mongo.ps1 +++ b/tools/run_mongo.ps1 @@ -112,4 +112,6 @@ $mongoPath = Find-Mongo $preferred_version Write-Color -Text ">>> ", "Using DB path: ", "[ ", "$($dbpath)", " ]" -Color Green, Gray, Cyan, White, Cyan Write-Color -Text ">>> ", "Port: ", "[ ", "$($port)", " ]" -Color Green, Gray, Cyan, White, Cyan +New-Item -ItemType Directory -Force -Path $($dbpath) + Start-Process -FilePath $mongopath "--dbpath $($dbpath) --port $($port)" -PassThru | Out-Null diff --git a/website/docs/assets/deadline_job_version.png b/website/docs/assets/deadline_job_version.png new file mode 100644 index 0000000000..0b78d6a35c Binary files /dev/null and b/website/docs/assets/deadline_job_version.png differ diff --git a/website/docs/dev_build.md b/website/docs/dev_build.md index 4e80f6e19d..9c99b26f1e 100644 --- a/website/docs/dev_build.md +++ b/website/docs/dev_build.md @@ -51,7 +51,9 @@ development tools like [CMake](https://cmake.org/) and [Visual Studio](https://v #### Run from source -For development purposes it is possible to run OpenPype directly from the source. We provide a simple launcher script for this. +For development purposes it is possible to run OpenPype directly from the source. We provide a simple launcher script for this. 
To run the PowerShell scripts you may have to enable unrestricted execution as administrator: + +`Set-ExecutionPolicy -ExecutionPolicy Unrestricted` To start OpenPype from source you need to diff --git a/website/docs/dev_deadline.md b/website/docs/dev_deadline.md new file mode 100644 index 0000000000..310b2e0983 --- /dev/null +++ b/website/docs/dev_deadline.md @@ -0,0 +1,38 @@ +--- +id: dev_deadline +title: Deadline integration +sidebar_label: Deadline integration +toc_max_heading_level: 4 +--- + +Deadline is not a host in the usual sense; it lacks most of the host features, but it does have +its own set of publishing plugins. + +## How to test OpenPype on Deadline + +### Versions + +Since 3.14, a job submitted from OpenPype is bound to the OpenPype version used to submit it. So +if you submit a job with 3.14.8, Deadline will try to find that particular version and use it +for rendering. This is handled by the `OPENPYPE_VERSION` variable on the job - you can delete it from +there, and the version set in the studio Settings will be used instead. + +![Deadline Job Version](assets/deadline_job_version.png) + +Deadline needs to bootstrap this version, so it will look for the closest compatible +build. To use version 3.14.8 on Deadline it is enough to have build 3.14.0 or similar - what matters +are the first two version numbers, major and minor. If they match, the version +is considered compatible. + +### Testing + +To test your changes you don't need to build OpenPype again and again and put the build +into the directory where Deadline looks for versions - that is needed only on a +minor version change. The build will then be used to bootstrap whatever version is set on the +job or in the studio Settings. + +You can either use a zip version if it suits you, or better, expose your sources directory +so it is found as a version - for example with a symlink. + +That way you only need to modify the `OPENPYPE_VERSION` variable on the job to point it to the version +you would like to test.
\ No newline at end of file diff --git a/website/docs/dev_requirements.md b/website/docs/dev_requirements.md index 1c8958d1c0..fa2d996e20 100644 --- a/website/docs/dev_requirements.md +++ b/website/docs/dev_requirements.md @@ -55,7 +55,7 @@ To run mongoDB on server, use your server distribution tools to set it up (on Li ## Python -**Python 3.7.8** is the recommended version to use (as per [VFX platform CY2021](https://vfxplatform.com/)). +**Python 3.7.9** is the recommended version to use (as per [VFX platform CY2021](https://vfxplatform.com/)). If you're planning to run OpenPype on workstations from built executables (highly recommended), you will only need python for building and development, however, if you'd like to run from source centrally, every user will need python installed. diff --git a/website/docs/module_kitsu.md b/website/docs/module_kitsu.md index ec38cce5e1..73e31a280b 100644 --- a/website/docs/module_kitsu.md +++ b/website/docs/module_kitsu.md @@ -26,6 +26,8 @@ openpype_console module kitsu sync-service -l me@domain.ext -p my_password ### Events listening Listening to Kitsu events is the key to automating many tasks like _project/episode/sequence/shot/asset/task create/update/delete_ and more. Event listening should run at all times to perform the required processing, as it is not possible to reliably catch some events retrospectively. If such a timeout has been encountered, you must relaunch the `sync-service` command to run the synchronization step again. +The connection token is refreshed every week. + ### Push to Kitsu A utility function is provided to help update Kitsu data (a.k.a. the Zou database) with OpenPype data if publishing to the production tracker hasn't been possible for some time. Running `push-to-zou` will create the data on behalf of the user.
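The Deadline version-compatibility rule described in `dev_deadline.md` above (build and requested version must share major and minor numbers) can be sketched as a small predicate; `is_compatible` is an illustrative helper, not the actual OpenPype implementation:

```python
def is_compatible(build_version, requested_version):
    """Versions are compatible when their major.minor parts match.

    E.g. a 3.14.0 build can bootstrap 3.14.8, but not 3.15.1.
    """
    def major_minor(version):
        # Strip build metadata (+...) and prerelease (-...) first.
        core = version.split("+")[0].split("-")[0]
        return tuple(int(part) for part in core.split(".")[:2])

    return major_minor(build_version) == major_minor(requested_version)
```

This is why a single 3.14.x build in Deadline's version directory is enough to bootstrap any 3.14.y version set on a job.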
:::caution diff --git a/website/sidebars.js b/website/sidebars.js index af64282d61..f2d9ffee06 100644 --- a/website/sidebars.js +++ b/website/sidebars.js @@ -157,6 +157,7 @@ module.exports = { "dev_host_implementation", "dev_publishing" ] - } + }, + "dev_deadline" ] };