diff --git a/.github/ISSUE_TEMPLATE/bug_report.yml b/.github/ISSUE_TEMPLATE/bug_report.yml
index 669bf391cd..a35dbf1a17 100644
--- a/.github/ISSUE_TEMPLATE/bug_report.yml
+++ b/.github/ISSUE_TEMPLATE/bug_report.yml
@@ -35,6 +35,9 @@ body:
label: Version
description: What version are you running? Look to OpenPype Tray
options:
+ - 3.16.5
+ - 3.16.5-nightly.5
+ - 3.16.5-nightly.4
- 3.16.5-nightly.3
- 3.16.5-nightly.2
- 3.16.5-nightly.1
@@ -132,9 +135,6 @@ body:
- 3.14.9-nightly.3
- 3.14.9-nightly.2
- 3.14.9-nightly.1
- - 3.14.8
- - 3.14.8-nightly.4
- - 3.14.8-nightly.3
validations:
required: true
- type: dropdown
diff --git a/CHANGELOG.md b/CHANGELOG.md
index f1948b1a3f..c4f9ff57ea 100644
--- a/CHANGELOG.md
+++ b/CHANGELOG.md
@@ -1,6 +1,679 @@
# Changelog
+## [3.16.5](https://github.com/ynput/OpenPype/tree/3.16.5)
+
+
+[Full Changelog](https://github.com/ynput/OpenPype/compare/3.16.4...3.16.5)
+
+### **🆕 New features**
+
+
+
+Attribute Definitions: Multiselection enum def #5547
+
+Added `multiselection` option to `EnumDef`.
+
+
+___
+
+
+
+### **🚀 Enhancements**
+
+
+
+Farm: adding target collector #5494
+
+Enhancing farm publishing workflow.
+
+
+___
+
+
+
+
+
+Maya: Optimize validate plug-in path attributes #5522
+
+- Optimize query (use `cmds.ls` once)
+- Add Select Invalid action
+- Improve validation report
+- Avoid "Unknown object type" errors
+
+
+___
+
+
+
+
+
+Maya: Remove Validate Instance Attributes plug-in #5525
+
+Remove Validate Instance Attributes plug-in.
+
+
+___
+
+
+
+
+
+Enhancement: Tweak logging for artist facing reports #5537
+
+Tweak the logging of publishing for global, Deadline, Maya and a Fusion plugin to produce a cleaner artist-facing report.
+- Fix context reporting from CollectContext
+- Fix ValidateMeshArnoldAttributes: fix when Arnold is not loaded, fix applying settings, fix for when ai attributes do not exist
+
+
+___
+
+
+
+
+
+AYON: Update settings #5544
+
+Updated settings in AYON addons and conversion of AYON settings in OpenPype.
+
+
+___
+
+
+
+
+
+Chore: Removed Ass export script #5560
+
+Removed Arnold render script, which was obsolete and unused.
+
+
+___
+
+
+
+
+
+Nuke: Allow for knob values to be validated against multiple values. #5042
+
+Knob values can now be validated against multiple values, so you can allow write nodes to be `exr` and `png`, or `16-bit` and `32-bit`.
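A minimal sketch of what validating a knob against multiple accepted values can look like (the knob names and accepted values here are illustrative, not the plug-in's actual settings schema):

```python
# Map each knob to the values it is allowed to have.
# These names/values are illustrative only.
accepted = {
    "file_type": ["exr", "png"],
    "datatype": ["16 bit half", "32 bit float"],
}


def invalid_knobs(knob_values, accepted):
    """Return {knob: current_value} for knobs outside the accepted values."""
    return {
        knob: knob_values.get(knob)
        for knob, values in accepted.items()
        if knob_values.get(knob) not in values
    }


print(invalid_knobs({"file_type": "tiff", "datatype": "16 bit half"}, accepted))
# {'file_type': 'tiff'}
```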
+
+
+___
+
+
+
+
+
+Enhancement: Cosmetics for Higher version of publish already exists validation error #5190
+
+Fix double spaces in the validation error message.
+
+
+___
+
+
+
+
+
+Nuke: publish existing frames on farm #5409
+
+This PR adds a fourth option to Nuke render publishing called "Use Existing Frames - Farm". This is useful when the farm is busy or when the artist lacks enough farm licenses; additionally, some artists prefer rendering on the farm but still want to check frames before publishing. The "Use Existing Frames - Farm" option gives artists more flexibility and control over their render publishing process, streamlining the workflow for Nuke users.
+
+
+___
+
+
+
+
+
+Unreal: Create project in temp location and move to final when done #5476
+
+Create Unreal project in local temporary folder and when done, move it to final destination.
+
+
+___
+
+
+
+
+
+TrayPublisher: adding audio product type into default presets #5489
+
+Adding Audio product type into default presets so anybody can publish audio to their shots.
+
+
+___
+
+
+
+
+
+Global: avoiding cleanup of flagged representation #5502
+
+Publishing folder can be flagged as persistent at representation level.
+
+
+___
+
+
+
+
+
+General: missing tag could raise error #5511
+
+- Avoid a potential situation where a missing Tag key could raise an error
+
+
+___
+
+
+
+
+
+Chore: Queued event system #5514
+
+Implemented an event system with more predictable behavior. If an event is triggered during another event's callback, it is not processed immediately but waits until all callbacks of the previous event are done. The event system also allows events not to be triggered directly when `emit_event` is called, which gives the option to process events in custom loops.
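The queuing behavior can be sketched with a plain callback registry (a simplified stand-in, not OpenPype's actual implementation):

```python
from collections import deque


class QueuedEventSystem:
    """Events emitted during a callback are queued, not processed immediately."""

    def __init__(self):
        self._callbacks = {}
        self._queue = deque()
        self._processing = False

    def add_callback(self, topic, callback):
        self._callbacks.setdefault(topic, []).append(callback)

    def emit_event(self, topic):
        self._queue.append(topic)
        if self._processing:
            # Already inside a callback: leave the event in the queue.
            return
        self._processing = True
        while self._queue:
            queued = self._queue.popleft()
            for callback in self._callbacks.get(queued, []):
                callback()
        self._processing = False


order = []
events = QueuedEventSystem()
events.add_callback("save", lambda: (order.append("save"), events.emit_event("saved")))
events.add_callback("save", lambda: order.append("save.second"))
events.add_callback("saved", lambda: order.append("saved"))
events.emit_event("save")
# "saved" runs only after *all* "save" callbacks finished:
print(order)  # ['save', 'save.second', 'saved']
```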
+
+
+___
+
+
+
+
+
+Publisher: Tweak log message to provide plugin name after "Plugin" #5521
+
+Fix the logged message for settings automatically applied to plugin attributes.
+
+
+___
+
+
+
+
+
+Houdini: Improve VDB Selection #5523
+
+Improves VDB selection:
+- if the selection is a `SopNode`: return the selected SOP node
+- if the selection is an `ObjNode`: get the output node with the minimum `outputidx`, or the node with the display flag
+
+
+___
+
+
+
+
+
+Maya: Refactor/tweak Validate Instance In same Context plug-in #5526
+
+- Chore/Refactor: Re-use existing select invalid and repair actions
+- Enhancement: provide more elaborate PublishValidationError report
+- Bugfix: fix "optional" support by using `OptionalPyblishPluginMixin` base class.
+
+
+___
+
+
+
+
+
+Enhancement: Update houdini main menu #5527
+
+This PR adds two updates:
+- dynamic main menu
+- dynamic asset name and task
+
+
+___
+
+
+
+
+
+Houdini: Reset FPS when clicking Set Frame Range #5528
+
+_Similar to Maya,_ make `Set Frame Range` also reset FPS. Resolves issue https://github.com/ynput/OpenPype/issues/5516
+
+
+___
+
+
+
+
+
+Enhancement: Deadline plugins optimize, cleanup and fix optional support for validate deadline pools #5531
+
+- Fix optional support of validate deadline pools
+- Query deadline webservice only once per URL for verification, and once for available deadline pools instead of for every instance
+- Use `deadlineUrl` in `instance.data` when validating pools if it is set.
+- Code cleanup: Re-use existing `requests_get` implementation
+
+
+___
+
+
+
+
+
+Chore: PowerShell script for docker build #5535
+
+Added PowerShell script to run docker build.
+
+
+___
+
+
+
+
+
+AYON: Deadline expand userpaths in executables list #5540
+
+Expand `~` paths in the executables list.
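Assuming standard `os.path.expanduser` semantics, the expansion amounts to:

```python
import os

executables = ["~/Nuke14.0/Nuke14.0", "/usr/local/Nuke14.0/Nuke14.0"]
# "~" is replaced with the user's home directory; other paths pass through.
expanded = [os.path.expanduser(path) for path in executables]
print(expanded)
```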
+
+
+___
+
+
+
+
+
+Chore: Use correct git url #5542
+
+Fixed GitHub URL in README.md.
+
+
+___
+
+
+
+
+
+Chore: Create plugin does not expect system settings #5553
+
+System settings are no longer passed to create plugin initialization (and `apply_settings`).
+
+
+___
+
+
+
+
+
+Chore: Allow custom Qt scale factor rounding policy #5555
+
+Do not force the `PassThrough` rounding policy if a different policy is defined via an environment variable.
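In effect, the default is only applied when the user has not set Qt's own environment variable (a sketch using the standard `QT_SCALE_FACTOR_ROUNDING_POLICY` variable):

```python
def pick_rounding_policy(env):
    """Keep a user-defined policy; fall back to PassThrough otherwise."""
    return env.get("QT_SCALE_FACTOR_ROUNDING_POLICY") or "PassThrough"


print(pick_rounding_policy({}))  # PassThrough
print(pick_rounding_policy({"QT_SCALE_FACTOR_ROUNDING_POLICY": "Round"}))  # Round
```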
+
+
+___
+
+
+
+
+
+Houdini: Fix outdated containers pop-up on opening last workfile on launch #5567
+
+Fix Houdini not showing the outdated containers pop-up on scene open when launching with the last workfile argument.
+
+
+___
+
+
+
+
+
+Houdini: Improve errors e.g. raise PublishValidationError or cosmetics #5568
+
+Improve errors, e.g. raise PublishValidationError, and apply cosmetic fixes. This also fixes the Increment Current File plug-in, which was previously broken due to an invalid import.
+
+
+___
+
+
+
+
+
+Fusion: Code updates #5569
+
+Update obsolete Fusion code. Removed the `switch_ui.py` script from Fusion along with its related script.
+
+
+___
+
+
+
+### **🐛 Bug fixes**
+
+
+
+Maya: Validate Shape Zero fix repair action + provide informational artist-facing report #5524
+
+Refactor to PublishValidationError to allow the RepairAction to work + provide informational report message
+
+
+___
+
+
+
+
+
+Maya: Fix attribute definitions for `CreateYetiCache` #5574
+
+Fix attribute definitions for `CreateYetiCache`
+
+
+___
+
+
+
+
+
+Max: Optional Renderable Camera Validator for Render Instance #5286
+
+Optional validation to check that renderable cameras are set up correctly for Deadline submission. If not set up correctly, the instance won't pass validation and the user can perform repair actions.
+
+
+___
+
+
+
+
+
+Max: Adding custom modifiers back to the loaded objects #5378
+
+The custom parameters `OpenpypeData` did not show in the loaded container when it was loaded through the loader.
+
+
+___
+
+
+
+
+
+Houdini: Use default_variant to Houdini Node TAB Creator #5421
+
+Use the default variant of the creator plugins on the interactive creator from the TAB node search instead of hard-coding it to `Main`.
+
+
+___
+
+
+
+
+
+Nuke: adding inherited colorspace from instance #5454
+
+Thumbnails are extracted with the inherited colorspace collected from the rendering write node.
+
+
+___
+
+
+
+
+
+Add kitsu credentials to deadline publish job #5455
+
+This PR should fix issue #5440.
+
+
+___
+
+
+
+
+
+AYON: Fill entities during editorial #5475
+
+Fill entities and update template data on instances during extract AYON hierarchy.
+
+
+___
+
+
+
+
+
+Ftrack: Fix version 0 when integrating to Ftrack - OP-6595 #5477
+
+Fix publishing version 0 to Ftrack.
+
+
+___
+
+
+
+
+
+OCIO: windows unc path support in Nuke and Hiero #5479
+
+Hiero and Nuke do not support Windows UNC path formatting in the OCIO environment variable.
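The fix normalizes backslashes to forward slashes before the path reaches the `OCIO` environment variable, roughly:

```python
host_name = "nuke"  # example host
ocio_path = "\\\\server\\share\\ocio\\config.ocio"  # a Windows UNC path
if host_name in {"nuke", "hiero"}:
    # Nuke/Hiero only understand forward slashes, even for UNC paths.
    ocio_path = ocio_path.replace("\\", "/")
print(ocio_path)  # //server/share/ocio/config.ocio
```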
+
+
+___
+
+
+
+
+
+Deadline: Added super call to init #5480
+
+DL 10.3 requires plugins inheriting from DeadlinePlugin to call super's `__init__` explicitly.
+
+
+___
+
+
+
+
+
+Nuke: fixing thumbnail and monitor out root attributes #5483
+
+The Nuke Root colorspace settings schema for Thumbnail and Monitor Out changed gradually between versions 12, 13 and 14, so those changes needed to be addressed individually per version.
+
+
+___
+
+
+
+
+
+Nuke: fixing missing `instance_id` error #5484
+
+Workfiles with instances created in the old publisher workflow were raising errors during the conversion method, since they were missing the `instance_id` key introduced in the new publisher workflow.
+
+
+___
+
+
+
+
+
+Nuke: existing frames validator is repairing render target #5486
+
+Nuke now correctly repairs the render target after the existing frames validator finds missing frames and the repair action is used.
+
+
+___
+
+
+
+
+
+added UE to extract burnins families #5487
+
+This PR fixes missing burnins in reviewables when rendering from UE.
+___
+
+
+
+
+
+Harmony: refresh code for current Deadline #5493
+
+- Added support in the Deadline plug-in for new versions of Harmony, in particular versions 21 and 22.
+- Remove review=False flag on render instance
+- Add farm=True flag on render instance
+- Fix is_in_tests function call in Harmony Deadline submission plugin
+- Force HarmonyOpenPype.py Deadline Python plug-in to py3
+- Fix cosmetics/hound in HarmonyOpenPype.py Deadline Python plug-in
+
+
+___
+
+
+
+
+
+Publisher: Fix multiselection value #5505
+
+Selecting multiple instances in the Publisher no longer causes all instances to change all publish attributes to the same value.
+
+
+___
+
+
+
+
+
+Publisher: Avoid warnings on thumbnails if source image also has alpha channel #5510
+
+Avoids the following warning from `ExtractThumbnailFromSource`:
+```
+// pyblish.ExtractThumbnailFromSource : oiiotool WARNING: -o : Can't save 4 channels to jpeg... saving only R,G,B
+```
+
+
+
+___
+
+
+
+
+
+Update ayon-python-api #5512
+
+Update ayon python api and related callbacks.
+
+
+___
+
+
+
+
+
+Max: Fixing the bug of falling back to use workfile for Arnold or any renderers except Redshift #5520
+
+Fix the bug of falling back to using the workfile for Arnold.
+
+
+___
+
+
+
+
+
+General: Fix Validate Publish Dir Validator #5534
+
+A nonsensical "family" key was used instead of the real value (e.g. 'render'), which resulted in wrong translation of intermediate family names. Updated docstring.
+
+
+___
+
+
+
+
+
+have the addons loading respect a custom AYON_ADDONS_DIR #5539
+
+When using a custom AYON_ADDONS_DIR environment variable, the launcher correctly downloads and extracts addons there; however, AYON did not respect this environment variable when running.
+
+
+___
+
+
+
+
+
+Deadline: files on representation cannot be single item list #5545
+
+Further logic expects that single-item files will only be a string, not a list (e.g. `repre["files"] = "abc.exr"`, not `repre["files"] = ["abc.exr"]`). This would cause an issue in ExtractReview later. It could happen when Deadline rendered a single frame file with a different frame value.
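The normalization amounts to collapsing a one-item list to a plain string (a sketch; `repre` mirrors the representation data mentioned above):

```python
def normalize_files(files):
    """Later logic expects a plain string for a single file."""
    if isinstance(files, (list, tuple)) and len(files) == 1:
        return files[0]
    return files


repre = {"files": ["abc.exr"]}
repre["files"] = normalize_files(repre["files"])
print(repre["files"])  # abc.exr
```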
+
+
+___
+
+
+
+
+
+Webpublisher: better encode list values for click #5546
+
+Targets could be a list; the original implementation pushed it as separate items, but it must be added as `--targets webpublish --targets filepublish`. `webpublish_routes` handles triggering from the UI; changes in `publish_functions` handle triggering from the command line (for tests and API access).
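Building repeatable click options from a list can be sketched as:

```python
targets = ["webpublish", "filepublish"]
args = []
for target in targets:
    # click repeatable options take the flag once per value
    args.extend(["--targets", target])
print(args)  # ['--targets', 'webpublish', '--targets', 'filepublish']
```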
+
+
+___
+
+
+
+
+
+Houdini: Introduce imprint function for correct version in hda loader #5548
+
+Resolve #5478
+
+
+___
+
+
+
+
+
+AYON: Fill entities during editorial (2) #5549
+
+Fix changes made in https://github.com/ynput/OpenPype/pull/5475.
+
+
+___
+
+
+
+
+
+Max: OP Data updates in Loaders #5563
+
+Fix the bug where loaders were unable to load objects when iterating keys and values of the dict; Max prefers a list over a list in a dict.
+
+
+___
+
+
+
+
+
+Create Plugins: Better check of overridden '__init__' method #5571
+
+Create plugins no longer log warning messages about each create plugin caused by a wrong `__init__` method check.
+
+
+___
+
+
+
+### **Merged pull requests**
+
+
+
+Tests: fix unit tests #5533
+
+Fixed failing tests. Updated Unreal's validator to match the removed general one, which had a couple of issues fixed.
+
+
+___
+
+
+
+
+
+
## [3.16.4](https://github.com/ynput/OpenPype/tree/3.16.4)
diff --git a/openpype/client/server/conversion_utils.py b/openpype/client/server/conversion_utils.py
index a6c190a0fc..f67a1ef9c4 100644
--- a/openpype/client/server/conversion_utils.py
+++ b/openpype/client/server/conversion_utils.py
@@ -663,10 +663,13 @@ def convert_v4_representation_to_v3(representation):
if isinstance(context, six.string_types):
context = json.loads(context)
- if "folder" in context:
- _c_folder = context.pop("folder")
+ if "asset" not in context and "folder" in context:
+ _c_folder = context["folder"]
context["asset"] = _c_folder["name"]
+ elif "asset" in context and "folder" not in context:
+ context["folder"] = {"name": context["asset"]}
+
if "product" in context:
_c_product = context.pop("product")
context["family"] = _c_product["type"]
@@ -959,9 +962,11 @@ def convert_create_representation_to_v4(representation, con):
converted_representation["files"] = new_files
context = representation["context"]
- context["folder"] = {
- "name": context.pop("asset", None)
- }
+ if "folder" not in context:
+ context["folder"] = {
+ "name": context.get("asset")
+ }
+
context["product"] = {
"type": context.pop("family", None),
"name": context.pop("subset", None),
@@ -1285,7 +1290,7 @@ def convert_update_representation_to_v4(
if "context" in update_data:
context = update_data["context"]
- if "asset" in context:
+ if "folder" not in context and "asset" in context:
context["folder"] = {"name": context.pop("asset")}
if "family" in context or "subset" in context:
diff --git a/openpype/hooks/pre_ocio_hook.py b/openpype/hooks/pre_ocio_hook.py
index 1307ed9f76..add3a0adaf 100644
--- a/openpype/hooks/pre_ocio_hook.py
+++ b/openpype/hooks/pre_ocio_hook.py
@@ -45,6 +45,9 @@ class OCIOEnvHook(PreLaunchHook):
if config_data:
ocio_path = config_data["path"]
+ if self.host_name in ["nuke", "hiero"]:
+ ocio_path = ocio_path.replace("\\", "/")
+
self.log.info(
f"Setting OCIO environment to config path: {ocio_path}")
diff --git a/openpype/hosts/aftereffects/plugins/create/create_render.py b/openpype/hosts/aftereffects/plugins/create/create_render.py
index dcf424b44f..fbe600ae68 100644
--- a/openpype/hosts/aftereffects/plugins/create/create_render.py
+++ b/openpype/hosts/aftereffects/plugins/create/create_render.py
@@ -164,7 +164,7 @@ class RenderCreator(Creator):
api.get_stub().rename_item(comp_id,
new_comp_name)
- def apply_settings(self, project_settings, system_settings):
+ def apply_settings(self, project_settings):
plugin_settings = (
project_settings["aftereffects"]["create"]["RenderCreator"]
)
diff --git a/openpype/hosts/aftereffects/plugins/publish/collect_render.py b/openpype/hosts/aftereffects/plugins/publish/collect_render.py
index aa46461915..49874d6cff 100644
--- a/openpype/hosts/aftereffects/plugins/publish/collect_render.py
+++ b/openpype/hosts/aftereffects/plugins/publish/collect_render.py
@@ -138,7 +138,6 @@ class CollectAERender(publish.AbstractCollectRender):
fam = "render.farm"
if fam not in instance.families:
instance.families.append(fam)
- instance.toBeRenderedOn = "deadline"
instance.renderer = "aerender"
instance.farm = True # to skip integrate
if "review" in instance.families:
diff --git a/openpype/hosts/blender/plugins/load/load_blend.py b/openpype/hosts/blender/plugins/load/load_blend.py
index 99f291a5a7..fa41f4374b 100644
--- a/openpype/hosts/blender/plugins/load/load_blend.py
+++ b/openpype/hosts/blender/plugins/load/load_blend.py
@@ -119,7 +119,7 @@ class BlendLoader(plugin.AssetLoader):
context: Full parenthood of representation to load
options: Additional settings dictionary
"""
- libpath = self.fname
+ libpath = self.filepath_from_context(context)
asset = context["asset"]["name"]
subset = context["subset"]["name"]
diff --git a/openpype/hosts/blender/plugins/load/load_camera_abc.py b/openpype/hosts/blender/plugins/load/load_camera_abc.py
index e5afecff66..05d3fb764d 100644
--- a/openpype/hosts/blender/plugins/load/load_camera_abc.py
+++ b/openpype/hosts/blender/plugins/load/load_camera_abc.py
@@ -100,7 +100,7 @@ class AbcCameraLoader(plugin.AssetLoader):
asset_group = bpy.data.objects.new(group_name, object_data=None)
avalon_container.objects.link(asset_group)
- objects = self._process(libpath, asset_group, group_name)
+ self._process(libpath, asset_group, group_name)
objects = []
nodes = list(asset_group.children)
diff --git a/openpype/hosts/blender/plugins/load/load_camera_fbx.py b/openpype/hosts/blender/plugins/load/load_camera_fbx.py
index b9d05dda0a..3cca6e7fd3 100644
--- a/openpype/hosts/blender/plugins/load/load_camera_fbx.py
+++ b/openpype/hosts/blender/plugins/load/load_camera_fbx.py
@@ -103,7 +103,7 @@ class FbxCameraLoader(plugin.AssetLoader):
asset_group = bpy.data.objects.new(group_name, object_data=None)
avalon_container.objects.link(asset_group)
- objects = self._process(libpath, asset_group, group_name)
+ self._process(libpath, asset_group, group_name)
objects = []
nodes = list(asset_group.children)
diff --git a/openpype/hosts/blender/plugins/publish/extract_abc.py b/openpype/hosts/blender/plugins/publish/extract_abc.py
index f4babc94d3..87159e53f0 100644
--- a/openpype/hosts/blender/plugins/publish/extract_abc.py
+++ b/openpype/hosts/blender/plugins/publish/extract_abc.py
@@ -21,8 +21,6 @@ class ExtractABC(publish.Extractor):
filename = f"{instance.name}.abc"
filepath = os.path.join(stagingdir, filename)
- context = bpy.context
-
# Perform extraction
self.log.info("Performing extraction..")
diff --git a/openpype/hosts/blender/plugins/publish/extract_abc_animation.py b/openpype/hosts/blender/plugins/publish/extract_abc_animation.py
index e141ccaa44..44b2ba3761 100644
--- a/openpype/hosts/blender/plugins/publish/extract_abc_animation.py
+++ b/openpype/hosts/blender/plugins/publish/extract_abc_animation.py
@@ -20,8 +20,6 @@ class ExtractAnimationABC(publish.Extractor):
filename = f"{instance.name}.abc"
filepath = os.path.join(stagingdir, filename)
- context = bpy.context
-
# Perform extraction
self.log.info("Performing extraction..")
diff --git a/openpype/hosts/blender/plugins/publish/extract_camera_abc.py b/openpype/hosts/blender/plugins/publish/extract_camera_abc.py
index a21a59b151..036be7bf3c 100644
--- a/openpype/hosts/blender/plugins/publish/extract_camera_abc.py
+++ b/openpype/hosts/blender/plugins/publish/extract_camera_abc.py
@@ -21,16 +21,11 @@ class ExtractCameraABC(publish.Extractor):
filename = f"{instance.name}.abc"
filepath = os.path.join(stagingdir, filename)
- context = bpy.context
-
# Perform extraction
self.log.info("Performing extraction..")
plugin.deselect_all()
- selected = []
- active = None
-
asset_group = None
for obj in instance:
if obj.get(AVALON_PROPERTY):
diff --git a/openpype/hosts/flame/plugins/load/load_clip.py b/openpype/hosts/flame/plugins/load/load_clip.py
index 338833b449..ca4eab0f63 100644
--- a/openpype/hosts/flame/plugins/load/load_clip.py
+++ b/openpype/hosts/flame/plugins/load/load_clip.py
@@ -48,7 +48,6 @@ class LoadClip(opfapi.ClipLoader):
self.fpd = fproject.current_workspace.desktop
# load clip to timeline and get main variables
- namespace = namespace
version = context['version']
version_data = version.get("data", {})
version_name = version.get("name", None)
diff --git a/openpype/hosts/flame/plugins/load/load_clip_batch.py b/openpype/hosts/flame/plugins/load/load_clip_batch.py
index ca43b94ee9..1f3a017d72 100644
--- a/openpype/hosts/flame/plugins/load/load_clip_batch.py
+++ b/openpype/hosts/flame/plugins/load/load_clip_batch.py
@@ -45,7 +45,6 @@ class LoadClipBatch(opfapi.ClipLoader):
self.batch = options.get("batch") or flame.batch
# load clip to timeline and get main variables
- namespace = namespace
version = context['version']
version_data = version.get("data", {})
version_name = version.get("name", None)
diff --git a/openpype/hosts/flame/plugins/publish/collect_timeline_instances.py b/openpype/hosts/flame/plugins/publish/collect_timeline_instances.py
index 23fdf5e785..e14f960a2b 100644
--- a/openpype/hosts/flame/plugins/publish/collect_timeline_instances.py
+++ b/openpype/hosts/flame/plugins/publish/collect_timeline_instances.py
@@ -325,7 +325,6 @@ class CollectTimelineInstances(pyblish.api.ContextPlugin):
def _create_shot_instance(self, context, clip_name, **data):
master_layer = data.get("heroTrack")
hierarchy_data = data.get("hierarchyData")
- asset = data.get("asset")
if not master_layer:
return
diff --git a/openpype/hosts/fusion/deploy/Scripts/Comp/OpenPype/32bit/backgrounds_selected_to32bit.py b/openpype/hosts/fusion/deploy/Scripts/Comp/OpenPype/32bit/backgrounds_selected_to32bit.py
deleted file mode 100644
index 1a0a9911ea..0000000000
--- a/openpype/hosts/fusion/deploy/Scripts/Comp/OpenPype/32bit/backgrounds_selected_to32bit.py
+++ /dev/null
@@ -1,16 +0,0 @@
-from openpype.hosts.fusion.api import (
- comp_lock_and_undo_chunk,
- get_current_comp
-)
-
-
-def main():
- comp = get_current_comp()
- """Set all selected backgrounds to 32 bit"""
- with comp_lock_and_undo_chunk(comp, 'Selected Backgrounds to 32bit'):
- tools = comp.GetToolList(True, "Background").values()
- for tool in tools:
- tool.Depth = 5
-
-
-main()
diff --git a/openpype/hosts/fusion/deploy/Scripts/Comp/OpenPype/32bit/backgrounds_to32bit.py b/openpype/hosts/fusion/deploy/Scripts/Comp/OpenPype/32bit/backgrounds_to32bit.py
deleted file mode 100644
index c2eea505e5..0000000000
--- a/openpype/hosts/fusion/deploy/Scripts/Comp/OpenPype/32bit/backgrounds_to32bit.py
+++ /dev/null
@@ -1,16 +0,0 @@
-from openpype.hosts.fusion.api import (
- comp_lock_and_undo_chunk,
- get_current_comp
-)
-
-
-def main():
- comp = get_current_comp()
- """Set all backgrounds to 32 bit"""
- with comp_lock_and_undo_chunk(comp, 'Backgrounds to 32bit'):
- tools = comp.GetToolList(False, "Background").values()
- for tool in tools:
- tool.Depth = 5
-
-
-main()
diff --git a/openpype/hosts/fusion/deploy/Scripts/Comp/OpenPype/32bit/loaders_selected_to32bit.py b/openpype/hosts/fusion/deploy/Scripts/Comp/OpenPype/32bit/loaders_selected_to32bit.py
deleted file mode 100644
index 2118767f4d..0000000000
--- a/openpype/hosts/fusion/deploy/Scripts/Comp/OpenPype/32bit/loaders_selected_to32bit.py
+++ /dev/null
@@ -1,16 +0,0 @@
-from openpype.hosts.fusion.api import (
- comp_lock_and_undo_chunk,
- get_current_comp
-)
-
-
-def main():
- comp = get_current_comp()
- """Set all selected loaders to 32 bit"""
- with comp_lock_and_undo_chunk(comp, 'Selected Loaders to 32bit'):
- tools = comp.GetToolList(True, "Loader").values()
- for tool in tools:
- tool.Depth = 5
-
-
-main()
diff --git a/openpype/hosts/fusion/deploy/Scripts/Comp/OpenPype/32bit/loaders_to32bit.py b/openpype/hosts/fusion/deploy/Scripts/Comp/OpenPype/32bit/loaders_to32bit.py
deleted file mode 100644
index 7dd1f66a5e..0000000000
--- a/openpype/hosts/fusion/deploy/Scripts/Comp/OpenPype/32bit/loaders_to32bit.py
+++ /dev/null
@@ -1,16 +0,0 @@
-from openpype.hosts.fusion.api import (
- comp_lock_and_undo_chunk,
- get_current_comp
-)
-
-
-def main():
- comp = get_current_comp()
- """Set all loaders to 32 bit"""
- with comp_lock_and_undo_chunk(comp, 'Loaders to 32bit'):
- tools = comp.GetToolList(False, "Loader").values()
- for tool in tools:
- tool.Depth = 5
-
-
-main()
diff --git a/openpype/hosts/fusion/deploy/Scripts/Comp/OpenPype/switch_ui.py b/openpype/hosts/fusion/deploy/Scripts/Comp/OpenPype/switch_ui.py
deleted file mode 100644
index 87322235f5..0000000000
--- a/openpype/hosts/fusion/deploy/Scripts/Comp/OpenPype/switch_ui.py
+++ /dev/null
@@ -1,200 +0,0 @@
-import os
-import sys
-import glob
-import logging
-
-from qtpy import QtWidgets, QtCore
-
-import qtawesome as qta
-
-from openpype.client import get_assets
-from openpype import style
-from openpype.pipeline import (
- install_host,
- get_current_project_name,
-)
-from openpype.hosts.fusion import api
-from openpype.pipeline.context_tools import get_workdir_from_session
-
-log = logging.getLogger("Fusion Switch Shot")
-
-
-class App(QtWidgets.QWidget):
-
- def __init__(self, parent=None):
-
- ################################################
- # |---------------------| |------------------| #
- # |Comp | |Asset | #
- # |[..][ v]| |[ v]| #
- # |---------------------| |------------------| #
- # | Update existing comp [ ] | #
- # |------------------------------------------| #
- # | Switch | #
- # |------------------------------------------| #
- ################################################
-
- QtWidgets.QWidget.__init__(self, parent)
-
- layout = QtWidgets.QVBoxLayout()
-
- # Comp related input
- comp_hlayout = QtWidgets.QHBoxLayout()
- comp_label = QtWidgets.QLabel("Comp file")
- comp_label.setFixedWidth(50)
- comp_box = QtWidgets.QComboBox()
-
- button_icon = qta.icon("fa.folder", color="white")
- open_from_dir = QtWidgets.QPushButton()
- open_from_dir.setIcon(button_icon)
-
- comp_box.setFixedHeight(25)
- open_from_dir.setFixedWidth(25)
- open_from_dir.setFixedHeight(25)
-
- comp_hlayout.addWidget(comp_label)
- comp_hlayout.addWidget(comp_box)
- comp_hlayout.addWidget(open_from_dir)
-
- # Asset related input
- asset_hlayout = QtWidgets.QHBoxLayout()
- asset_label = QtWidgets.QLabel("Shot")
- asset_label.setFixedWidth(50)
-
- asset_box = QtWidgets.QComboBox()
- asset_box.setLineEdit(QtWidgets.QLineEdit())
- asset_box.setFixedHeight(25)
-
- refresh_icon = qta.icon("fa.refresh", color="white")
- refresh_btn = QtWidgets.QPushButton()
- refresh_btn.setIcon(refresh_icon)
-
- asset_box.setFixedHeight(25)
- refresh_btn.setFixedWidth(25)
- refresh_btn.setFixedHeight(25)
-
- asset_hlayout.addWidget(asset_label)
- asset_hlayout.addWidget(asset_box)
- asset_hlayout.addWidget(refresh_btn)
-
- # Options
- options = QtWidgets.QHBoxLayout()
- options.setAlignment(QtCore.Qt.AlignLeft)
-
- current_comp_check = QtWidgets.QCheckBox()
- current_comp_check.setChecked(True)
- current_comp_label = QtWidgets.QLabel("Use current comp")
-
- options.addWidget(current_comp_label)
- options.addWidget(current_comp_check)
-
- accept_btn = QtWidgets.QPushButton("Switch")
-
- layout.addLayout(options)
- layout.addLayout(comp_hlayout)
- layout.addLayout(asset_hlayout)
- layout.addWidget(accept_btn)
-
- self._open_from_dir = open_from_dir
- self._comps = comp_box
- self._assets = asset_box
- self._use_current = current_comp_check
- self._accept_btn = accept_btn
- self._refresh_btn = refresh_btn
-
- self.setWindowTitle("Fusion Switch Shot")
- self.setLayout(layout)
-
- self.resize(260, 140)
- self.setMinimumWidth(260)
- self.setFixedHeight(140)
-
- self.connections()
-
- # Update ui to correct state
- self._on_use_current_comp()
- self._refresh()
-
- def connections(self):
- self._use_current.clicked.connect(self._on_use_current_comp)
- self._open_from_dir.clicked.connect(self._on_open_from_dir)
- self._refresh_btn.clicked.connect(self._refresh)
- self._accept_btn.clicked.connect(self._on_switch)
-
- def _on_use_current_comp(self):
- state = self._use_current.isChecked()
- self._open_from_dir.setEnabled(not state)
- self._comps.setEnabled(not state)
-
- def _on_open_from_dir(self):
-
- start_dir = get_workdir_from_session()
- comp_file, _ = QtWidgets.QFileDialog.getOpenFileName(
- self, "Choose comp", start_dir)
-
- if not comp_file:
- return
-
- # Create completer
- self.populate_comp_box([comp_file])
- self._refresh()
-
- def _refresh(self):
- # Clear any existing items
- self._assets.clear()
-
- asset_names = self.collect_asset_names()
- completer = QtWidgets.QCompleter(asset_names)
-
- self._assets.setCompleter(completer)
- self._assets.addItems(asset_names)
-
- def _on_switch(self):
-
- if not self._use_current.isChecked():
- file_name = self._comps.itemData(self._comps.currentIndex())
- else:
- comp = api.get_current_comp()
- file_name = comp.GetAttrs("COMPS_FileName")
-
- asset = self._assets.currentText()
-
- import colorbleed.scripts.fusion_switch_shot as switch_shot
- switch_shot.switch(asset_name=asset, filepath=file_name, new=True)
-
- def collect_slap_comps(self, directory):
- items = glob.glob("{}/*.comp".format(directory))
- return items
-
- def collect_asset_names(self):
- project_name = get_current_project_name()
- asset_docs = get_assets(project_name, fields=["name"])
- asset_names = {
- asset_doc["name"]
- for asset_doc in asset_docs
- }
- return list(asset_names)
-
- def populate_comp_box(self, files):
- """Ensure we display the filename only but the path is stored as well
-
- Args:
- files (list): list of full file path [path/to/item/item.ext,]
-
- Returns:
- None
- """
-
- for f in files:
- filename = os.path.basename(f)
- self._comps.addItem(filename, userData=f)
-
-
-if __name__ == '__main__':
- install_host(api)
-
- app = QtWidgets.QApplication(sys.argv)
- window = App()
- window.setStyleSheet(style.load_stylesheet())
- window.show()
- sys.exit(app.exec_())
diff --git a/openpype/hosts/fusion/deploy/Scripts/Comp/OpenPype/update_loader_ranges.py b/openpype/hosts/fusion/deploy/Scripts/Comp/OpenPype/update_loader_ranges.py
deleted file mode 100644
index 3d2d1ecfa6..0000000000
--- a/openpype/hosts/fusion/deploy/Scripts/Comp/OpenPype/update_loader_ranges.py
+++ /dev/null
@@ -1,40 +0,0 @@
-"""Forces Fusion to 'retrigger' the Loader to update.
-
-Warning:
- This might change settings like 'Reverse', 'Loop', trims and other
- settings of the Loader. So use this at your own risk.
-
-"""
-from openpype.hosts.fusion.api.pipeline import (
- get_current_comp,
- comp_lock_and_undo_chunk
-)
-
-
-def update_loader_ranges():
- comp = get_current_comp()
- with comp_lock_and_undo_chunk(comp, "Reload clip time ranges"):
- tools = comp.GetToolList(True, "Loader").values()
- for tool in tools:
-
- # Get tool attributes
- tool_a = tool.GetAttrs()
- clipTable = tool_a['TOOLST_Clip_Name']
- altclipTable = tool_a['TOOLST_AltClip_Name']
- startTime = tool_a['TOOLNT_Clip_Start']
- old_global_in = tool.GlobalIn[comp.CurrentTime]
-
- # Reapply
- for index, _ in clipTable.items():
- time = startTime[index]
- tool.Clip[time] = tool.Clip[time]
-
- for index, _ in altclipTable.items():
- time = startTime[index]
- tool.ProxyFilename[time] = tool.ProxyFilename[time]
-
- tool.GlobalIn[comp.CurrentTime] = old_global_in
-
-
-if __name__ == '__main__':
- update_loader_ranges()
diff --git a/openpype/hosts/fusion/deploy/fusion_shared.prefs b/openpype/hosts/fusion/deploy/fusion_shared.prefs
index b379ea7c66..93b08aa886 100644
--- a/openpype/hosts/fusion/deploy/fusion_shared.prefs
+++ b/openpype/hosts/fusion/deploy/fusion_shared.prefs
@@ -5,7 +5,7 @@ Global = {
Map = {
["OpenPype:"] = "$(OPENPYPE_FUSION)/deploy",
["Config:"] = "UserPaths:Config;OpenPype:Config",
- ["Scripts:"] = "UserPaths:Scripts;Reactor:System/Scripts;OpenPype:Scripts",
+ ["Scripts:"] = "UserPaths:Scripts;Reactor:System/Scripts",
},
},
Script = {
diff --git a/openpype/hosts/fusion/plugins/create/create_saver.py b/openpype/hosts/fusion/plugins/create/create_saver.py
index 04898d0a45..39edca4de3 100644
--- a/openpype/hosts/fusion/plugins/create/create_saver.py
+++ b/openpype/hosts/fusion/plugins/create/create_saver.py
@@ -30,10 +30,6 @@ class CreateSaver(NewCreator):
instance_attributes = [
"reviewable"
]
- default_variants = [
- "Main",
- "Mask"
- ]
# TODO: This should be renamed together with Nuke so it is aligned
temp_rendering_path_template = (
@@ -250,11 +246,7 @@ class CreateSaver(NewCreator):
label="Review",
)
- def apply_settings(
- self,
- project_settings,
- system_settings
- ):
+ def apply_settings(self, project_settings):
"""Method called on initialization of plugin to apply settings."""
# plugin settings
diff --git a/openpype/hosts/fusion/plugins/publish/collect_instances.py b/openpype/hosts/fusion/plugins/publish/collect_instances.py
index 6016baa2a9..4d6da79b77 100644
--- a/openpype/hosts/fusion/plugins/publish/collect_instances.py
+++ b/openpype/hosts/fusion/plugins/publish/collect_instances.py
@@ -85,5 +85,5 @@ class CollectInstanceData(pyblish.api.InstancePlugin):
# Add review family if the instance is marked as 'review'
# This could be done through a 'review' Creator attribute.
if instance.data.get("review", False):
- self.log.info("Adding review family..")
+ self.log.debug("Adding review family..")
instance.data["families"].append("review")
diff --git a/openpype/hosts/fusion/plugins/publish/collect_render.py b/openpype/hosts/fusion/plugins/publish/collect_render.py
index a20a142701..341f3f191a 100644
--- a/openpype/hosts/fusion/plugins/publish/collect_render.py
+++ b/openpype/hosts/fusion/plugins/publish/collect_render.py
@@ -108,7 +108,6 @@ class CollectFusionRender(
fam = "render.farm"
if fam not in instance.families:
instance.families.append(fam)
- instance.toBeRenderedOn = "deadline"
instance.farm = True # to skip integrate
if "review" in instance.families:
# to skip ExtractReview locally
diff --git a/openpype/hosts/harmony/plugins/load/load_template.py b/openpype/hosts/harmony/plugins/load/load_template.py
index f3c69a9104..a78a1bf1ec 100644
--- a/openpype/hosts/harmony/plugins/load/load_template.py
+++ b/openpype/hosts/harmony/plugins/load/load_template.py
@@ -82,7 +82,6 @@ class TemplateLoader(load.LoaderPlugin):
node = harmony.find_node_by_name(node_name, "GROUP")
self_name = self.__class__.__name__
- update_and_replace = False
if is_representation_from_latest(representation):
self._set_green(node)
else:
diff --git a/openpype/hosts/harmony/plugins/publish/collect_farm_render.py b/openpype/hosts/harmony/plugins/publish/collect_farm_render.py
index 5e9b9094a7..af825c052a 100644
--- a/openpype/hosts/harmony/plugins/publish/collect_farm_render.py
+++ b/openpype/hosts/harmony/plugins/publish/collect_farm_render.py
@@ -147,13 +147,13 @@ class CollectFarmRender(publish.AbstractCollectRender):
attachTo=False,
setMembers=[node],
publish=info[4],
- review=False,
renderer=None,
priority=50,
name=node.split("/")[1],
family="render.farm",
families=["render.farm"],
+ farm=True,
resolutionWidth=context.data["resolutionWidth"],
resolutionHeight=context.data["resolutionHeight"],
@@ -174,7 +174,6 @@ class CollectFarmRender(publish.AbstractCollectRender):
outputFormat=info[1],
outputStartFrame=info[3],
leadingZeros=info[2],
- toBeRenderedOn='deadline',
ignoreFrameHandleCheck=True
)
diff --git a/openpype/hosts/hiero/api/plugin.py b/openpype/hosts/hiero/api/plugin.py
index 65a4009756..52f96261b2 100644
--- a/openpype/hosts/hiero/api/plugin.py
+++ b/openpype/hosts/hiero/api/plugin.py
@@ -317,20 +317,6 @@ class Spacer(QtWidgets.QWidget):
self.setLayout(layout)
-def get_reference_node_parents(ref):
- """Return all parent reference nodes of reference node
-
- Args:
- ref (str): reference node.
-
- Returns:
- list: The upstream parent reference nodes.
-
- """
- parents = []
- return parents
-
-
class SequenceLoader(LoaderPlugin):
"""A basic SequenceLoader for Resolve
diff --git a/openpype/hosts/hiero/plugins/publish/collect_clip_effects.py b/openpype/hosts/hiero/plugins/publish/collect_clip_effects.py
index d455ad4a4e..fcb1ab27a0 100644
--- a/openpype/hosts/hiero/plugins/publish/collect_clip_effects.py
+++ b/openpype/hosts/hiero/plugins/publish/collect_clip_effects.py
@@ -43,7 +43,6 @@ class CollectClipEffects(pyblish.api.InstancePlugin):
if review and review_track_index == _track_index:
continue
for sitem in sub_track_items:
- effect = None
# make sure this subtrack item is relative of track item
if ((track_item not in sitem.linkedItems())
and (len(sitem.linkedItems()) > 0)):
@@ -53,7 +52,6 @@ class CollectClipEffects(pyblish.api.InstancePlugin):
continue
effect = self.add_effect(_track_index, sitem)
-
if effect:
effects.update(effect)
diff --git a/openpype/hosts/houdini/api/colorspace.py b/openpype/hosts/houdini/api/colorspace.py
index 7047644225..cc40b9df1c 100644
--- a/openpype/hosts/houdini/api/colorspace.py
+++ b/openpype/hosts/houdini/api/colorspace.py
@@ -1,7 +1,7 @@
import attr
import hou
from openpype.hosts.houdini.api.lib import get_color_management_preferences
-
+from openpype.pipeline.colorspace import get_display_view_colorspace_name
@attr.s
class LayerMetadata(object):
@@ -54,3 +54,16 @@ class ARenderProduct(object):
)
]
return colorspace_data
+
+
+def get_default_display_view_colorspace():
+ """Returns the colorspace attribute of the default (display, view) pair.
+
+ It's used for 'ociocolorspace' parm in OpenGL Node."""
+
+ prefs = get_color_management_preferences()
+ return get_display_view_colorspace_name(
+ config_path=prefs["config"],
+ display=prefs["display"],
+ view=prefs["view"]
+ )
diff --git a/openpype/hosts/houdini/api/creator_node_shelves.py b/openpype/hosts/houdini/api/creator_node_shelves.py
index 7c6122cffe..1f9fef7417 100644
--- a/openpype/hosts/houdini/api/creator_node_shelves.py
+++ b/openpype/hosts/houdini/api/creator_node_shelves.py
@@ -57,28 +57,31 @@ def create_interactive(creator_identifier, **kwargs):
list: The created instances.
"""
-
- # TODO Use Qt instead
- result, variant = hou.ui.readInput('Define variant name',
- buttons=("Ok", "Cancel"),
- initial_contents='Main',
- title="Define variant",
- help="Set the variant for the "
- "publish instance",
- close_choice=1)
- if result == 1:
- # User interrupted
- return
- variant = variant.strip()
- if not variant:
- raise RuntimeError("Empty variant value entered.")
-
host = registered_host()
context = CreateContext(host)
creator = context.manual_creators.get(creator_identifier)
if not creator:
- raise RuntimeError("Invalid creator identifier: "
- "{}".format(creator_identifier))
+ raise RuntimeError("Invalid creator identifier: {}".format(
+ creator_identifier)
+ )
+
+ # TODO Use Qt instead
+ result, variant = hou.ui.readInput(
+ "Define variant name",
+ buttons=("Ok", "Cancel"),
+ initial_contents=creator.get_default_variant(),
+ title="Define variant",
+ help="Set the variant for the publish instance",
+ close_choice=1
+ )
+
+ if result == 1:
+ # User interrupted
+ return
+
+ variant = variant.strip()
+ if not variant:
+ raise RuntimeError("Empty variant value entered.")
# TODO: Once more elaborate unique create behavior should exist per Creator
# instead of per network editor area then we should move this from here
diff --git a/openpype/hosts/houdini/api/pipeline.py b/openpype/hosts/houdini/api/pipeline.py
index 3c325edfa7..c9ae801af5 100644
--- a/openpype/hosts/houdini/api/pipeline.py
+++ b/openpype/hosts/houdini/api/pipeline.py
@@ -303,6 +303,28 @@ def on_save():
lib.set_id(node, new_id, overwrite=False)
+def _show_outdated_content_popup():
+ # Get main window
+ parent = lib.get_main_window()
+ if parent is None:
+ log.info("Skipping outdated content pop-up "
+ "because Houdini window can't be found.")
+ else:
+ from openpype.widgets import popup
+
+ # Show outdated pop-up
+ def _on_show_inventory():
+ from openpype.tools.utils import host_tools
+ host_tools.show_scene_inventory(parent=parent)
+
+ dialog = popup.Popup(parent=parent)
+ dialog.setWindowTitle("Houdini scene has outdated content")
+ dialog.setMessage("There are outdated containers in "
+ "your Houdini scene.")
+ dialog.on_clicked.connect(_on_show_inventory)
+ dialog.show()
+
+
def on_open():
if not hou.isUIAvailable():
@@ -316,28 +338,18 @@ def on_open():
lib.validate_fps()
if any_outdated_containers():
- from openpype.widgets import popup
-
- log.warning("Scene has outdated content.")
-
- # Get main window
parent = lib.get_main_window()
if parent is None:
- log.info("Skipping outdated content pop-up "
- "because Houdini window can't be found.")
+ # When opening Houdini with last workfile on launch the UI hasn't
+ # initialized yet completely when the `on_open` callback triggers.
+ # We defer the dialog popup to wait for the UI to become available.
+ # We assume it will open because `hou.isUIAvailable()` returns True
+ import hdefereval
+ hdefereval.executeDeferred(_show_outdated_content_popup)
else:
+ _show_outdated_content_popup()
- # Show outdated pop-up
- def _on_show_inventory():
- from openpype.tools.utils import host_tools
- host_tools.show_scene_inventory(parent=parent)
-
- dialog = popup.Popup(parent=parent)
- dialog.setWindowTitle("Houdini scene has outdated content")
- dialog.setMessage("There are outdated containers in "
- "your Houdini scene.")
- dialog.on_clicked.connect(_on_show_inventory)
- dialog.show()
+ log.warning("Scene has outdated content.")
def on_new():
diff --git a/openpype/hosts/houdini/api/plugin.py b/openpype/hosts/houdini/api/plugin.py
index 70c837205e..730a627dc3 100644
--- a/openpype/hosts/houdini/api/plugin.py
+++ b/openpype/hosts/houdini/api/plugin.py
@@ -296,7 +296,7 @@ class HoudiniCreator(NewCreator, HoudiniCreatorBase):
"""
return [hou.ropNodeTypeCategory()]
- def apply_settings(self, project_settings, system_settings):
+ def apply_settings(self, project_settings):
"""Method called on initialization of plugin to apply settings."""
settings_name = self.settings_name
diff --git a/openpype/hosts/houdini/plugins/create/create_review.py b/openpype/hosts/houdini/plugins/create/create_review.py
index ab06b30c35..60c34a358b 100644
--- a/openpype/hosts/houdini/plugins/create/create_review.py
+++ b/openpype/hosts/houdini/plugins/create/create_review.py
@@ -3,6 +3,9 @@
from openpype.hosts.houdini.api import plugin
from openpype.lib import EnumDef, BoolDef, NumberDef
+import os
+import hou
+
class CreateReview(plugin.HoudiniCreator):
"""Review with OpenGL ROP"""
@@ -13,7 +16,6 @@ class CreateReview(plugin.HoudiniCreator):
icon = "video-camera"
def create(self, subset_name, instance_data, pre_create_data):
- import hou
instance_data.pop("active", None)
instance_data.update({"node_type": "opengl"})
@@ -82,6 +84,11 @@ class CreateReview(plugin.HoudiniCreator):
instance_node.setParms(parms)
+ # Set OCIO Colorspace to the default output colorspace
+ # if there's OCIO
+ if os.getenv("OCIO"):
+ self.set_colorcorrect_to_default_view_space(instance_node)
+
to_lock = ["id", "family"]
self.lock_parameters(instance_node, to_lock)
@@ -123,3 +130,23 @@ class CreateReview(plugin.HoudiniCreator):
minimum=0.0001,
decimals=3)
]
+
+ def set_colorcorrect_to_default_view_space(self,
+ instance_node):
+ """Set ociocolorspace to the default output space."""
+ from openpype.hosts.houdini.api.colorspace import get_default_display_view_colorspace # noqa
+
+ # set Color Correction parameter to OpenColorIO
+ instance_node.setParms({"colorcorrect": 2})
+
+ # Get default view space for ociocolorspace parm.
+ default_view_space = get_default_display_view_colorspace()
+ instance_node.setParms(
+ {"ociocolorspace": default_view_space}
+ )
+
+ self.log.debug(
+ "'OCIO Colorspace' parm on '{}' has been set to "
+ "the default view color space '{}'"
+ .format(instance_node.path(), default_view_space)
+ )
diff --git a/openpype/hosts/houdini/plugins/load/load_bgeo.py b/openpype/hosts/houdini/plugins/load/load_bgeo.py
index 22680178c0..489bf944ed 100644
--- a/openpype/hosts/houdini/plugins/load/load_bgeo.py
+++ b/openpype/hosts/houdini/plugins/load/load_bgeo.py
@@ -34,7 +34,6 @@ class BgeoLoader(load.LoaderPlugin):
# Create a new geo node
container = obj.createNode("geo", node_name=node_name)
- is_sequence = bool(context["representation"]["context"].get("frame"))
# Remove the file node, it only loads static meshes
# Houdini 17 has removed the file node from the geo node
diff --git a/openpype/hosts/houdini/plugins/load/load_hda.py b/openpype/hosts/houdini/plugins/load/load_hda.py
index 57edc341a3..9630716253 100644
--- a/openpype/hosts/houdini/plugins/load/load_hda.py
+++ b/openpype/hosts/houdini/plugins/load/load_hda.py
@@ -59,6 +59,9 @@ class HdaLoader(load.LoaderPlugin):
def_paths = [d.libraryFilePath() for d in defs]
new = def_paths.index(file_path)
defs[new].setIsPreferred(True)
+ hda_node.setParms({
+ "representation": str(representation["_id"])
+ })
def remove(self, container):
node = container["node"]
diff --git a/openpype/hosts/houdini/plugins/publish/collect_output_node.py b/openpype/hosts/houdini/plugins/publish/collect_output_node.py
index 91bd5fdb15..bca3d9fdc1 100644
--- a/openpype/hosts/houdini/plugins/publish/collect_output_node.py
+++ b/openpype/hosts/houdini/plugins/publish/collect_output_node.py
@@ -1,5 +1,7 @@
import pyblish.api
+from openpype.pipeline.publish import KnownPublishError
+
class CollectOutputSOPPath(pyblish.api.InstancePlugin):
"""Collect the out node's SOP/COP Path value."""
@@ -63,8 +65,8 @@ class CollectOutputSOPPath(pyblish.api.InstancePlugin):
out_node = node.parm("startnode").evalAsNode()
else:
- raise ValueError(
- "ROP node type '%s' is" " not supported." % node_type
+ raise KnownPublishError(
+ "ROP node type '{}' is not supported.".format(node_type)
)
if not out_node:
diff --git a/openpype/hosts/houdini/plugins/publish/collect_vray_rop.py b/openpype/hosts/houdini/plugins/publish/collect_vray_rop.py
index d4fe37f993..277f922ba4 100644
--- a/openpype/hosts/houdini/plugins/publish/collect_vray_rop.py
+++ b/openpype/hosts/houdini/plugins/publish/collect_vray_rop.py
@@ -80,14 +80,9 @@ class CollectVrayROPRenderProducts(pyblish.api.InstancePlugin):
def get_beauty_render_product(self, prefix, suffix=""):
"""Return the beauty output filename if render element enabled
"""
+ # Remove aov suffix from the product: `prefix.aov_suffix` -> `prefix`
aov_parm = ".{}".format(suffix)
- beauty_product = None
- if aov_parm in prefix:
- beauty_product = prefix.replace(aov_parm, "")
- else:
- beauty_product = prefix
-
- return beauty_product
+ return prefix.replace(aov_parm, "")
def get_render_element_name(self, node, prefix, suffix=""):
"""Return the output filename using the AOV prefix and suffix
diff --git a/openpype/hosts/houdini/plugins/publish/increment_current_file.py b/openpype/hosts/houdini/plugins/publish/increment_current_file.py
index 2493b28bc1..3569de7693 100644
--- a/openpype/hosts/houdini/plugins/publish/increment_current_file.py
+++ b/openpype/hosts/houdini/plugins/publish/increment_current_file.py
@@ -2,7 +2,7 @@ import pyblish.api
from openpype.lib import version_up
from openpype.pipeline import registered_host
-from openpype.action import get_errored_plugins_from_data
+from openpype.pipeline.publish import get_errored_plugins_from_context
from openpype.hosts.houdini.api import HoudiniHost
from openpype.pipeline.publish import KnownPublishError
@@ -27,7 +27,7 @@ class IncrementCurrentFile(pyblish.api.ContextPlugin):
def process(self, context):
- errored_plugins = get_errored_plugins_from_data(context)
+ errored_plugins = get_errored_plugins_from_context(context)
if any(
plugin.__name__ == "HoudiniSubmitPublishDeadline"
for plugin in errored_plugins
@@ -40,9 +40,10 @@ class IncrementCurrentFile(pyblish.api.ContextPlugin):
# Filename must not have changed since collecting
host = registered_host() # type: HoudiniHost
current_file = host.current_file()
- assert (
- context.data["currentFile"] == current_file
- ), "Collected filename mismatches from current scene name."
+ if context.data["currentFile"] != current_file:
+ raise KnownPublishError(
+ "Collected filename mismatches from current scene name."
+ )
new_filepath = version_up(current_file)
host.save_workfile(new_filepath)
diff --git a/openpype/hosts/houdini/plugins/publish/validate_animation_settings.py b/openpype/hosts/houdini/plugins/publish/validate_animation_settings.py
index 4878738ed3..79387fbef5 100644
--- a/openpype/hosts/houdini/plugins/publish/validate_animation_settings.py
+++ b/openpype/hosts/houdini/plugins/publish/validate_animation_settings.py
@@ -1,5 +1,6 @@
import pyblish.api
+from openpype.pipeline.publish import PublishValidationError
from openpype.hosts.houdini.api import lib
import hou
@@ -30,7 +31,7 @@ class ValidateAnimationSettings(pyblish.api.InstancePlugin):
invalid = self.get_invalid(instance)
if invalid:
- raise RuntimeError(
+ raise PublishValidationError(
"Output settings do no match for '%s'" % instance
)
diff --git a/openpype/hosts/houdini/plugins/publish/validate_remote_publish.py b/openpype/hosts/houdini/plugins/publish/validate_remote_publish.py
index 4e8e5fc0e8..4f71d79382 100644
--- a/openpype/hosts/houdini/plugins/publish/validate_remote_publish.py
+++ b/openpype/hosts/houdini/plugins/publish/validate_remote_publish.py
@@ -36,11 +36,11 @@ class ValidateRemotePublishOutNode(pyblish.api.ContextPlugin):
if node.parm("shellexec").eval():
self.raise_error("Must not execute in shell")
if node.parm("prerender").eval() != cmd:
- self.raise_error(("REMOTE_PUBLISH node does not have "
- "correct prerender script."))
+ self.raise_error("REMOTE_PUBLISH node does not have "
+ "correct prerender script.")
if node.parm("lprerender").eval() != "python":
- self.raise_error(("REMOTE_PUBLISH node prerender script "
- "type not set to 'python'"))
+ self.raise_error("REMOTE_PUBLISH node prerender script "
+ "type not set to 'python'")
@classmethod
def repair(cls, context):
@@ -48,5 +48,4 @@ class ValidateRemotePublishOutNode(pyblish.api.ContextPlugin):
lib.create_remote_publish_node(force=True)
def raise_error(self, message):
- self.log.error(message)
- raise PublishValidationError(message, title=self.label)
+ raise PublishValidationError(message)
diff --git a/openpype/hosts/houdini/plugins/publish/validate_review_colorspace.py b/openpype/hosts/houdini/plugins/publish/validate_review_colorspace.py
new file mode 100644
index 0000000000..03ecd1b052
--- /dev/null
+++ b/openpype/hosts/houdini/plugins/publish/validate_review_colorspace.py
@@ -0,0 +1,90 @@
+# -*- coding: utf-8 -*-
+import pyblish.api
+from openpype.pipeline import (
+ PublishValidationError,
+ OptionalPyblishPluginMixin
+)
+from openpype.pipeline.publish import RepairAction
+from openpype.hosts.houdini.api.action import SelectROPAction
+
+import os
+import hou
+
+
+class SetDefaultViewSpaceAction(RepairAction):
+ label = "Set default view colorspace"
+ icon = "mdi.monitor"
+
+
+class ValidateReviewColorspace(pyblish.api.InstancePlugin,
+ OptionalPyblishPluginMixin):
+ """Validate Review Colorspace parameters.
+
+ It checks if the 'OCIO Colorspace' parameter is set to a valid value.
+ """
+
+ order = pyblish.api.ValidatorOrder + 0.1
+ families = ["review"]
+ hosts = ["houdini"]
+ label = "Validate Review Colorspace"
+ actions = [SetDefaultViewSpaceAction, SelectROPAction]
+
+ optional = True
+
+ def process(self, instance):
+
+ if not self.is_active(instance.data):
+ return
+
+ if os.getenv("OCIO") is None:
+ self.log.debug(
+ "Using Houdini's Default Color Management, "
+ " skipping check.."
+ )
+ return
+
+ rop_node = hou.node(instance.data["instance_node"])
+ if rop_node.evalParm("colorcorrect") != 2:
+ # any colorspace setting other than the default requires the
+ # 'Color Correction' parm to be set to 'OpenColorIO'
+ raise PublishValidationError(
+ "'Color Correction' parm on '{}' ROP must be set to"
+ " 'OpenColorIO'".format(rop_node.path())
+ )
+
+ if rop_node.evalParm("ociocolorspace") not in \
+ hou.Color.ocio_spaces():
+
+ raise PublishValidationError(
+ "Invalid value: Colorspace name doesn't exist.\n"
+ "Check 'OCIO Colorspace' parameter on '{}' ROP"
+ .format(rop_node.path())
+ )
+
+ @classmethod
+ def repair(cls, instance):
+ """Set Default View Space Action.
+
+ It is more of a helper action than a repair action,
+ used to set the colorspace on the OpenGL node to the default view space.
+ """
+ from openpype.hosts.houdini.api.colorspace import get_default_display_view_colorspace # noqa
+
+ rop_node = hou.node(instance.data["instance_node"])
+
+ if rop_node.evalParm("colorcorrect") != 2:
+ rop_node.setParms({"colorcorrect": 2})
+ cls.log.debug(
+ "'Color Correction' parm on '{}' has been set to"
+ " 'OpenColorIO'".format(rop_node.path())
+ )
+
+ # Get default view colorspace name
+ default_view_space = get_default_display_view_colorspace()
+
+ rop_node.setParms({"ociocolorspace": default_view_space})
+ cls.log.info(
+ "'OCIO Colorspace' parm on '{}' has been set to "
+ "the default view color space '{}'"
+ .format(rop_node.path(), default_view_space)
+ )
diff --git a/openpype/hosts/houdini/plugins/publish/validate_usd_render_product_names.py b/openpype/hosts/houdini/plugins/publish/validate_usd_render_product_names.py
index 02c44ab94e..1daa96f2b9 100644
--- a/openpype/hosts/houdini/plugins/publish/validate_usd_render_product_names.py
+++ b/openpype/hosts/houdini/plugins/publish/validate_usd_render_product_names.py
@@ -24,7 +24,7 @@ class ValidateUSDRenderProductNames(pyblish.api.InstancePlugin):
if not os.path.isabs(filepath):
invalid.append(
- "Output file path is not " "absolute path: %s" % filepath
+ "Output file path is not absolute path: %s" % filepath
)
if invalid:
diff --git a/openpype/hosts/max/api/lib.py b/openpype/hosts/max/api/lib.py
index ccd4cd67e1..8287341456 100644
--- a/openpype/hosts/max/api/lib.py
+++ b/openpype/hosts/max/api/lib.py
@@ -6,7 +6,7 @@ from typing import Any, Dict, Union
import six
from openpype.pipeline.context_tools import (
- get_current_project, get_current_project_asset,)
+ get_current_project, get_current_project_asset)
from pymxs import runtime as rt
JSON_PREFIX = "JSON::"
@@ -312,3 +312,98 @@ def set_timeline(frameStart, frameEnd):
"""
rt.animationRange = rt.interval(frameStart, frameEnd)
return rt.animationRange
+
+
+def unique_namespace(namespace, format="%02d",
+ prefix="", suffix="", con_suffix="CON"):
+ """Return unique namespace
+
+ Arguments:
+ namespace (str): Name of namespace to consider
+ format (str, optional): Formatting of the given iteration number
+ prefix (str, optional): Prefix prepended to the unique namespace.
+ suffix (str, optional): Suffix appended to the unique namespace.
+ con_suffix (str, optional): Max only; suffix used to find the
+ master container's node name.
+
+ >>> unique_namespace("bar")
+ # bar01
+ >>> unique_namespace(":hello")
+ # :hello01
+ >>> unique_namespace("bar:", suffix="_NS")
+ # bar01_NS:
+
+ """
+
+ def current_namespace():
+ current = namespace
+ # When inside a namespace Max adds no trailing :
+ if not current.endswith(":"):
+ current += ":"
+ return current
+
+ # Always check against the absolute namespace root
+ # There's no clash with :x if we're defining namespace :a:x
+ ROOT = ":" if namespace.startswith(":") else current_namespace()
+
+ # Strip trailing `:` tokens since we might want to add a suffix
+ start = ":" if namespace.startswith(":") else ""
+ end = ":" if namespace.endswith(":") else ""
+ namespace = namespace.strip(":")
+ if ":" in namespace:
+ # Split off any nesting that we don't uniqify anyway.
+ parents, namespace = namespace.rsplit(":", 1)
+ start += parents + ":"
+ ROOT += start
+
+ iteration = 1
+ while True:
+ nr_namespace = namespace + format % iteration
+ unique = prefix + nr_namespace + suffix
+ container_name = f"{unique}:{namespace}{con_suffix}"
+ if not rt.getNodeByName(container_name):
+ return start + unique + end
+ iteration += 1
+
+
+def get_namespace(container_name):
+ """Get the namespace and name of the sub-container
+
+ Args:
+ container_name (str): the name of master container
+
+ Raises:
+ RuntimeError: when there is no master container found
+
+ Returns:
+ namespace (str): namespace of the sub-container
+ name (str): name of the sub-container
+ """
+ node = rt.getNodeByName(container_name)
+ if not node:
+ raise RuntimeError("Master Container Not Found..")
+ name = rt.getUserProp(node, "name")
+ namespace = rt.getUserProp(node, "namespace")
+ return namespace, name
+
+
+def object_transform_set(container_children):
+ """A function which allows to store the transform of
+ previous loaded object(s)
+ Args:
+ container_children(list): A list of nodes
+
+ Returns:
+ transform_set (dict): A dict with all transform data of
+ the previous loaded object(s)
+ """
+ transform_set = {}
+ for node in container_children:
+ name = f"{node.name}.transform"
+ transform_set[name] = node.pos
+ name = f"{node.name}.scale"
+ transform_set[name] = node.scale
+ return transform_set
diff --git a/openpype/hosts/max/api/lib_rendersettings.py b/openpype/hosts/max/api/lib_rendersettings.py
index 1b62edabee..26e176aa8d 100644
--- a/openpype/hosts/max/api/lib_rendersettings.py
+++ b/openpype/hosts/max/api/lib_rendersettings.py
@@ -37,13 +37,10 @@ class RenderSettings(object):
def set_render_camera(self, selection):
for sel in selection:
# to avoid Attribute Error from pymxs wrapper
- found = False
if rt.classOf(sel) in rt.Camera.classes:
- found = True
rt.viewport.setCamera(sel)
- break
- if not found:
- raise RuntimeError("Camera not found")
+ return
+ raise RuntimeError("Active Camera not found")
def render_output(self, container):
folder = rt.maxFilePath
@@ -113,7 +110,8 @@ class RenderSettings(object):
# for setting up renderable camera
arv = rt.MAXToAOps.ArnoldRenderView()
render_camera = rt.viewport.GetCamera()
- arv.setOption("Camera", str(render_camera))
+ if render_camera:
+ arv.setOption("Camera", str(render_camera))
# TODO: add AOVs and extension
img_fmt = self._project_settings["max"]["RenderSettings"]["image_format"] # noqa
diff --git a/openpype/hosts/max/api/pipeline.py b/openpype/hosts/max/api/pipeline.py
index 03b85a4066..72163f5ecf 100644
--- a/openpype/hosts/max/api/pipeline.py
+++ b/openpype/hosts/max/api/pipeline.py
@@ -15,8 +15,10 @@ from openpype.pipeline import (
)
from openpype.hosts.max.api.menu import OpenPypeMenu
from openpype.hosts.max.api import lib
+from openpype.hosts.max.api.plugin import MS_CUSTOM_ATTRIB
from openpype.hosts.max import MAX_HOST_DIR
+
from pymxs import runtime as rt # noqa
log = logging.getLogger("openpype.hosts.max")
@@ -152,17 +154,18 @@ def ls() -> list:
yield lib.read(container)
-def containerise(name: str, nodes: list, context, loader=None, suffix="_CON"):
+def containerise(name: str, nodes: list, context,
+ namespace=None, loader=None, suffix="_CON"):
data = {
"schema": "openpype:container-2.0",
"id": AVALON_CONTAINER_ID,
"name": name,
- "namespace": "",
+ "namespace": namespace or "",
"loader": loader,
"representation": context["representation"]["_id"],
}
- container_name = f"{name}{suffix}"
+ container_name = f"{namespace}:{name}{suffix}"
container = rt.container(name=container_name)
for node in nodes:
node.Parent = container
@@ -170,3 +173,53 @@ def containerise(name: str, nodes: list, context, loader=None, suffix="_CON"):
if not lib.imprint(container_name, data):
print(f"imprinting of {container_name} failed.")
return container
+
+
+def load_custom_attribute_data():
+ """Re-loading the Openpype/AYON custom parameter built by the creator
+
+ Returns:
+ attribute: re-loading the custom OP attributes set in Maxscript
+ """
+ return rt.Execute(MS_CUSTOM_ATTRIB)
+
+
+def import_custom_attribute_data(container: str, selections: list):
+ """Importing the Openpype/AYON custom parameter built by the creator
+
+ Args:
+ container (str): target container which adds custom attributes
+ selections (list): nodes to be added into
+ group in custom attributes
+ """
+ attrs = load_custom_attribute_data()
+ modifier = rt.EmptyModifier()
+ rt.addModifier(container, modifier)
+ container.modifiers[0].name = "OP Data"
+ rt.custAttributes.add(container.modifiers[0], attrs)
+ node_list = []
+ sel_list = []
+ for i in selections:
+ node_ref = rt.NodeTransformMonitor(node=i)
+ node_list.append(node_ref)
+ sel_list.append(str(i))
+
+ # Setting the property
+ rt.setProperty(
+ container.modifiers[0].openPypeData,
+ "all_handles", node_list)
+ rt.setProperty(
+ container.modifiers[0].openPypeData,
+ "sel_list", sel_list)
+
+
+def update_custom_attribute_data(container: str, selections: list):
+ """Updating the Openpype/AYON custom parameter built by the creator
+
+ Args:
+ container (str): target container which adds custom attributes
+ selections (list): nodes to be added into
+ group in custom attributes
+ """
+ if container.modifiers[0].name == "OP Data":
+ rt.deleteModifier(container, container.modifiers[0])
+ import_custom_attribute_data(container, selections)
diff --git a/openpype/hosts/max/plugins/create/create_render.py b/openpype/hosts/max/plugins/create/create_render.py
index 235046684e..9cc3c8da8a 100644
--- a/openpype/hosts/max/plugins/create/create_render.py
+++ b/openpype/hosts/max/plugins/create/create_render.py
@@ -14,7 +14,6 @@ class CreateRender(plugin.MaxCreator):
def create(self, subset_name, instance_data, pre_create_data):
from pymxs import runtime as rt
- sel_obj = list(rt.selection)
file = rt.maxFileName
filename, _ = os.path.splitext(file)
instance_data["AssetName"] = filename
diff --git a/openpype/hosts/max/plugins/load/load_camera_fbx.py b/openpype/hosts/max/plugins/load/load_camera_fbx.py
index 62284b23d9..f040115417 100644
--- a/openpype/hosts/max/plugins/load/load_camera_fbx.py
+++ b/openpype/hosts/max/plugins/load/load_camera_fbx.py
@@ -1,7 +1,16 @@
import os
from openpype.hosts.max.api import lib, maintained_selection
-from openpype.hosts.max.api.pipeline import containerise
+from openpype.hosts.max.api.lib import (
+ unique_namespace,
+ get_namespace,
+ object_transform_set
+)
+from openpype.hosts.max.api.pipeline import (
+ containerise,
+ import_custom_attribute_data,
+ update_custom_attribute_data
+)
from openpype.pipeline import get_representation_path, load
@@ -13,50 +22,76 @@ class FbxLoader(load.LoaderPlugin):
order = -9
icon = "code-fork"
color = "white"
+ postfix = "param"
def load(self, context, name=None, namespace=None, data=None):
from pymxs import runtime as rt
-
filepath = self.filepath_from_context(context)
filepath = os.path.normpath(filepath)
rt.FBXImporterSetParam("Animation", True)
rt.FBXImporterSetParam("Camera", True)
rt.FBXImporterSetParam("AxisConversionMethod", True)
+ rt.FBXImporterSetParam("Mode", rt.Name("create"))
rt.FBXImporterSetParam("Preserveinstances", True)
rt.ImportFile(
filepath,
rt.name("noPrompt"),
using=rt.FBXIMP)
- container = rt.GetNodeByName(f"{name}")
- if not container:
- container = rt.Container()
- container.name = f"{name}"
+ namespace = unique_namespace(
+ name + "_",
+ suffix="_",
+ )
+ container = rt.container(
+ name=f"{namespace}:{name}_{self.postfix}")
+ selections = rt.GetCurrentSelection()
+ import_custom_attribute_data(container, selections)
- for selection in rt.GetCurrentSelection():
+ for selection in selections:
selection.Parent = container
+ selection.name = f"{namespace}:{selection.name}"
return containerise(
- name, [container], context, loader=self.__class__.__name__)
+ name, [container], context,
+ namespace, loader=self.__class__.__name__)
def update(self, container, representation):
from pymxs import runtime as rt
path = get_representation_path(representation)
- node = rt.GetNodeByName(container["instance_node"])
- rt.Select(node.Children)
- fbx_reimport_cmd = (
- f"""
+ node_name = container["instance_node"]
+ node = rt.getNodeByName(node_name)
+ namespace, name = get_namespace(node_name)
+ sub_node_name = f"{namespace}:{name}_{self.postfix}"
+ inst_container = rt.getNodeByName(sub_node_name)
+ rt.Select(inst_container.Children)
+ transform_data = object_transform_set(inst_container.Children)
+ for prev_fbx_obj in rt.selection:
+ if rt.isValidNode(prev_fbx_obj):
+ rt.Delete(prev_fbx_obj)
-FBXImporterSetParam "Animation" true
-FBXImporterSetParam "Cameras" true
-FBXImporterSetParam "AxisConversionMethod" true
-FbxExporterSetParam "UpAxis" "Y"
-FbxExporterSetParam "Preserveinstances" true
+ rt.FBXImporterSetParam("Animation", True)
+ rt.FBXImporterSetParam("Camera", True)
+ rt.FBXImporterSetParam("Mode", rt.Name("merge"))
+ rt.FBXImporterSetParam("AxisConversionMethod", True)
+ rt.FBXImporterSetParam("Preserveinstances", True)
+ rt.ImportFile(
+ path, rt.name("noPrompt"), using=rt.FBXIMP)
+ current_fbx_objects = rt.GetCurrentSelection()
+ for fbx_object in current_fbx_objects:
+ if fbx_object.Parent != inst_container:
+ fbx_object.Parent = inst_container
+ fbx_object.name = f"{namespace}:{fbx_object.name}"
+ fbx_object.pos = transform_data[
+ f"{fbx_object.name}.transform"]
+ fbx_object.scale = transform_data[
+ f"{fbx_object.name}.scale"]
-importFile @"{path}" #noPrompt using:FBXIMP
- """)
- rt.Execute(fbx_reimport_cmd)
+ for children in node.Children:
+ if rt.classOf(children) == rt.Container:
+ if children.name == sub_node_name:
+ update_custom_attribute_data(
+ children, current_fbx_objects)
with maintained_selection():
rt.Select(node)
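Across these loaders the pattern is the same: build a unique namespace, parent the imported nodes under a `<namespace>:<name>_param` container, and prefix every node name with the namespace. A minimal pure-Python sketch of that naming scheme (the counter format is an assumption; the real `unique_namespace` lives in `openpype.hosts.max.api.lib`):

```python
import itertools

def unique_namespace(prefix, existing, suffix="_"):
    # Try prefix + counter + suffix until the name is unused,
    # mimicking the "<name>_NN_" namespaces used by the loaders.
    for i in itertools.count(1):
        candidate = f"{prefix}{i:02d}{suffix}"
        if candidate not in existing:
            existing.add(candidate)
            return candidate

existing = set()
ns = unique_namespace("model_", existing)
nodes = ["mesh1", "mesh2"]
# Prefix every imported node name with the namespace, as in load().
namespaced = [f"{ns}:{n}" for n in nodes]
print(namespaced)  # ['model_01_:mesh1', 'model_01_:mesh2']
```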
diff --git a/openpype/hosts/max/plugins/load/load_max_scene.py b/openpype/hosts/max/plugins/load/load_max_scene.py
index 76cd3bf367..98e9be96e1 100644
--- a/openpype/hosts/max/plugins/load/load_max_scene.py
+++ b/openpype/hosts/max/plugins/load/load_max_scene.py
@@ -1,7 +1,15 @@
import os
from openpype.hosts.max.api import lib
-from openpype.hosts.max.api.pipeline import containerise
+from openpype.hosts.max.api.lib import (
+ unique_namespace,
+ get_namespace,
+ object_transform_set
+)
+from openpype.hosts.max.api.pipeline import (
+ containerise, import_custom_attribute_data,
+ update_custom_attribute_data
+)
from openpype.pipeline import get_representation_path, load
@@ -16,22 +24,34 @@ class MaxSceneLoader(load.LoaderPlugin):
order = -8
icon = "code-fork"
color = "green"
+ postfix = "param"
def load(self, context, name=None, namespace=None, data=None):
from pymxs import runtime as rt
-
path = self.filepath_from_context(context)
path = os.path.normpath(path)
# import the max scene by using "merge file"
path = path.replace('\\', '/')
- rt.MergeMaxFile(path)
+ rt.MergeMaxFile(path, quiet=True, includeFullGroup=True)
max_objects = rt.getLastMergedNodes()
- max_container = rt.Container(name=f"{name}")
- for max_object in max_objects:
- max_object.Parent = max_container
+ max_object_names = [obj.name for obj in max_objects]
+ # import the OP/AYON custom attribute data before load
+ max_container = []
+ namespace = unique_namespace(
+ name + "_",
+ suffix="_",
+ )
+ container_name = f"{namespace}:{name}_{self.postfix}"
+ container = rt.Container(name=container_name)
+ import_custom_attribute_data(container, max_objects)
+ max_container.append(container)
+ max_container.extend(max_objects)
+ for max_obj, obj_name in zip(max_objects, max_object_names):
+ max_obj.name = f"{namespace}:{obj_name}"
return containerise(
- name, [max_container], context, loader=self.__class__.__name__)
+ name, max_container, context,
+ namespace, loader=self.__class__.__name__)
def update(self, container, representation):
from pymxs import runtime as rt
@@ -39,15 +59,32 @@ class MaxSceneLoader(load.LoaderPlugin):
path = get_representation_path(representation)
node_name = container["instance_node"]
- rt.MergeMaxFile(path,
- rt.Name("noRedraw"),
- rt.Name("deleteOldDups"),
- rt.Name("useSceneMtlDups"))
+ node = rt.getNodeByName(node_name)
+ namespace, name = get_namespace(node_name)
+ sub_container_name = f"{namespace}:{name}_{self.postfix}"
+ # delete the old duplicate nodes, keeping the
+ # custom-attribute container itself
+ rt.Select(node.Children)
+ transform_data = object_transform_set(node.Children)
+ for prev_max_obj in rt.GetCurrentSelection():
+ if rt.isValidNode(prev_max_obj) and prev_max_obj.name != sub_container_name: # noqa
+ rt.Delete(prev_max_obj)
+ rt.MergeMaxFile(path, rt.Name("deleteOldDups"))
- max_objects = rt.getLastMergedNodes()
- container_node = rt.GetNodeByName(node_name)
- for max_object in max_objects:
- max_object.Parent = container_node
+ current_max_objects = rt.getLastMergedNodes()
+ current_max_object_names = [obj.name for obj
+ in current_max_objects]
+ sub_container = rt.getNodeByName(sub_container_name)
+ update_custom_attribute_data(sub_container, current_max_objects)
+ for max_object in current_max_objects:
+ max_object.Parent = node
+ for max_obj, obj_name in zip(current_max_objects,
+ current_max_object_names):
+ max_obj.name = f"{namespace}:{obj_name}"
+ max_obj.pos = transform_data[
+ f"{max_obj.name}.transform"]
+ max_obj.scale = transform_data[
+ f"{max_obj.name}.scale"]
lib.imprint(container["instance_node"], {
"representation": str(representation["_id"])
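The update paths above capture each child's position and scale before deleting it, then restore them onto the re-imported nodes via `"<name>.transform"` / `"<name>.scale"` keys. A sketch of what such a capture helper could look like (plain dicts stand in for Max nodes; the key format comes from the lookups in the diff, the rest is an assumption):

```python
def object_transform_set(children):
    # Capture position and scale keyed by node name, mirroring the
    # "<name>.transform" / "<name>.scale" keys read back in update().
    data = {}
    for node in children:
        data[f"{node['name']}.transform"] = node["pos"]
        data[f"{node['name']}.scale"] = node["scale"]
    return data

old_nodes = [{"name": "ns:box", "pos": (1, 2, 3), "scale": (1, 1, 1)}]
transform_data = object_transform_set(old_nodes)

# After re-import, restore the captured values onto the new node.
new_node = {"name": "ns:box", "pos": (0, 0, 0), "scale": (2, 2, 2)}
new_node["pos"] = transform_data[f"{new_node['name']}.transform"]
new_node["scale"] = transform_data[f"{new_node['name']}.scale"]
print(new_node["pos"])  # (1, 2, 3)
```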
diff --git a/openpype/hosts/max/plugins/load/load_model.py b/openpype/hosts/max/plugins/load/load_model.py
index cff82a593c..c5a73b4327 100644
--- a/openpype/hosts/max/plugins/load/load_model.py
+++ b/openpype/hosts/max/plugins/load/load_model.py
@@ -1,8 +1,14 @@
import os
from openpype.pipeline import load, get_representation_path
-from openpype.hosts.max.api.pipeline import containerise
+from openpype.hosts.max.api.pipeline import (
+ containerise,
+ import_custom_attribute_data,
+ update_custom_attribute_data
+)
from openpype.hosts.max.api import lib
-from openpype.hosts.max.api.lib import maintained_selection
+from openpype.hosts.max.api.lib import (
+ maintained_selection, unique_namespace
+)
class ModelAbcLoader(load.LoaderPlugin):
@@ -14,6 +20,7 @@ class ModelAbcLoader(load.LoaderPlugin):
order = -10
icon = "code-fork"
color = "orange"
+ postfix = "param"
def load(self, context, name=None, namespace=None, data=None):
from pymxs import runtime as rt
@@ -30,7 +37,7 @@ class ModelAbcLoader(load.LoaderPlugin):
rt.AlembicImport.CustomAttributes = True
rt.AlembicImport.UVs = True
rt.AlembicImport.VertexColors = True
- rt.importFile(file_path, rt.name("noPrompt"))
+ rt.importFile(file_path, rt.name("noPrompt"), using=rt.AlembicImport)
abc_after = {
c
@@ -45,9 +52,22 @@ class ModelAbcLoader(load.LoaderPlugin):
self.log.error("Something failed when loading.")
abc_container = abc_containers.pop()
+ import_custom_attribute_data(
+ abc_container, abc_container.Children)
+
+ namespace = unique_namespace(
+ name + "_",
+ suffix="_",
+ )
+ for abc_object in abc_container.Children:
+ abc_object.name = f"{namespace}:{abc_object.name}"
+ # rename the abc container with the namespace prefix
+ abc_container_name = f"{namespace}:{name}_{self.postfix}"
+ abc_container.name = abc_container_name
return containerise(
- name, [abc_container], context, loader=self.__class__.__name__
+ name, [abc_container], context,
+ namespace, loader=self.__class__.__name__
)
def update(self, container, representation):
@@ -55,21 +75,19 @@ class ModelAbcLoader(load.LoaderPlugin):
path = get_representation_path(representation)
node = rt.GetNodeByName(container["instance_node"])
- rt.Select(node.Children)
-
- for alembic in rt.Selection:
- abc = rt.GetNodeByName(alembic.name)
- rt.Select(abc.Children)
- for abc_con in rt.Selection:
- container = rt.GetNodeByName(abc_con.name)
- container.source = path
- rt.Select(container.Children)
- for abc_obj in rt.Selection:
- alembic_obj = rt.GetNodeByName(abc_obj.name)
- alembic_obj.source = path
with maintained_selection():
- rt.Select(node)
+ rt.Select(node.Children)
+
+ for alembic in rt.Selection:
+ abc = rt.GetNodeByName(alembic.name)
+ update_custom_attribute_data(abc, abc.Children)
+ rt.Select(abc.Children)
+ for abc_con in abc.Children:
+ abc_con.source = path
+ rt.Select(abc_con.Children)
+ for abc_obj in abc_con.Children:
+ abc_obj.source = path
lib.imprint(
container["instance_node"],
diff --git a/openpype/hosts/max/plugins/load/load_model_fbx.py b/openpype/hosts/max/plugins/load/load_model_fbx.py
index 12f526ab95..56c8768675 100644
--- a/openpype/hosts/max/plugins/load/load_model_fbx.py
+++ b/openpype/hosts/max/plugins/load/load_model_fbx.py
@@ -1,7 +1,15 @@
import os
from openpype.pipeline import load, get_representation_path
-from openpype.hosts.max.api.pipeline import containerise
+from openpype.hosts.max.api.pipeline import (
+ containerise, import_custom_attribute_data,
+ update_custom_attribute_data
+)
from openpype.hosts.max.api import lib
+from openpype.hosts.max.api.lib import (
+ unique_namespace,
+ get_namespace,
+ object_transform_set
+)
from openpype.hosts.max.api.lib import maintained_selection
@@ -13,6 +21,7 @@ class FbxModelLoader(load.LoaderPlugin):
order = -9
icon = "code-fork"
color = "white"
+ postfix = "param"
def load(self, context, name=None, namespace=None, data=None):
from pymxs import runtime as rt
@@ -20,39 +29,69 @@ class FbxModelLoader(load.LoaderPlugin):
filepath = os.path.normpath(self.filepath_from_context(context))
rt.FBXImporterSetParam("Animation", False)
rt.FBXImporterSetParam("Cameras", False)
+ rt.FBXImporterSetParam("Mode", rt.Name("create"))
rt.FBXImporterSetParam("Preserveinstances", True)
rt.importFile(filepath, rt.name("noPrompt"), using=rt.FBXIMP)
- container = rt.GetNodeByName(name)
- if not container:
- container = rt.Container()
- container.name = name
+ namespace = unique_namespace(
+ name + "_",
+ suffix="_",
+ )
+ container = rt.container(
+ name=f"{namespace}:{name}_{self.postfix}")
+ selections = rt.GetCurrentSelection()
+ import_custom_attribute_data(container, selections)
- for selection in rt.GetCurrentSelection():
+ for selection in selections:
selection.Parent = container
+ selection.name = f"{namespace}:{selection.name}"
return containerise(
- name, [container], context, loader=self.__class__.__name__
+ name, [container], context,
+ namespace, loader=self.__class__.__name__
)
def update(self, container, representation):
from pymxs import runtime as rt
path = get_representation_path(representation)
- node = rt.getNodeByName(container["instance_node"])
- rt.select(node.Children)
+ node_name = container["instance_node"]
+ node = rt.getNodeByName(node_name)
+ namespace, name = get_namespace(node_name)
+ sub_node_name = f"{namespace}:{name}_{self.postfix}"
+ inst_container = rt.getNodeByName(sub_node_name)
+ rt.Select(inst_container.Children)
+ transform_data = object_transform_set(inst_container.Children)
+ for prev_fbx_obj in rt.selection:
+ if rt.isValidNode(prev_fbx_obj):
+ rt.Delete(prev_fbx_obj)
rt.FBXImporterSetParam("Animation", False)
rt.FBXImporterSetParam("Cameras", False)
+ rt.FBXImporterSetParam("Mode", rt.Name("merge"))
rt.FBXImporterSetParam("AxisConversionMethod", True)
- rt.FBXImporterSetParam("UpAxis", "Y")
rt.FBXImporterSetParam("Preserveinstances", True)
rt.importFile(path, rt.name("noPrompt"), using=rt.FBXIMP)
+ current_fbx_objects = rt.GetCurrentSelection()
+ for fbx_object in current_fbx_objects:
+ if fbx_object.Parent != inst_container:
+ fbx_object.Parent = inst_container
+ fbx_object.name = f"{namespace}:{fbx_object.name}"
+ fbx_object.pos = transform_data[
+ f"{fbx_object.name}.transform"]
+ fbx_object.scale = transform_data[
+ f"{fbx_object.name}.scale"]
+
+ for children in node.Children:
+ if rt.classOf(children) == rt.Container:
+ if children.name == sub_node_name:
+ update_custom_attribute_data(
+ children, current_fbx_objects)
with maintained_selection():
rt.Select(node)
lib.imprint(
- container["instance_node"],
+ node_name,
{"representation": str(representation["_id"])},
)
diff --git a/openpype/hosts/max/plugins/load/load_model_obj.py b/openpype/hosts/max/plugins/load/load_model_obj.py
index 18a19414fa..314889e6ec 100644
--- a/openpype/hosts/max/plugins/load/load_model_obj.py
+++ b/openpype/hosts/max/plugins/load/load_model_obj.py
@@ -1,8 +1,18 @@
import os
from openpype.hosts.max.api import lib
+from openpype.hosts.max.api.lib import (
+ unique_namespace,
+ get_namespace,
+ object_transform_set
+)
from openpype.hosts.max.api.lib import maintained_selection
-from openpype.hosts.max.api.pipeline import containerise
+from openpype.hosts.max.api.pipeline import (
+ containerise,
+ import_custom_attribute_data,
+ update_custom_attribute_data
+)
from openpype.pipeline import get_representation_path, load
@@ -14,6 +24,7 @@ class ObjLoader(load.LoaderPlugin):
order = -9
icon = "code-fork"
color = "white"
+ postfix = "param"
def load(self, context, name=None, namespace=None, data=None):
from pymxs import runtime as rt
@@ -22,36 +33,49 @@ class ObjLoader(load.LoaderPlugin):
self.log.debug("Executing command to import..")
rt.Execute(f'importFile @"{filepath}" #noPrompt using:ObjImp')
+
+ namespace = unique_namespace(
+ name + "_",
+ suffix="_",
+ )
# create "missing" container for obj import
- container = rt.Container()
- container.name = name
-
+ container = rt.Container(name=f"{namespace}:{name}_{self.postfix}")
+ selections = rt.GetCurrentSelection()
+ import_custom_attribute_data(container, selections)
# get current selection
- for selection in rt.GetCurrentSelection():
+ for selection in selections:
selection.Parent = container
-
- asset = rt.GetNodeByName(name)
-
+ selection.name = f"{namespace}:{selection.name}"
return containerise(
- name, [asset], context, loader=self.__class__.__name__)
+ name, [container], context,
+ namespace, loader=self.__class__.__name__)
def update(self, container, representation):
from pymxs import runtime as rt
path = get_representation_path(representation)
node_name = container["instance_node"]
- node = rt.GetNodeByName(node_name)
-
- instance_name, _ = node_name.split("_")
- container = rt.GetNodeByName(instance_name)
- for child in container.Children:
- rt.Delete(child)
+ node = rt.getNodeByName(node_name)
+ namespace, name = get_namespace(node_name)
+ sub_node_name = f"{namespace}:{name}_{self.postfix}"
+ inst_container = rt.getNodeByName(sub_node_name)
+ rt.Select(inst_container.Children)
+ transform_data = object_transform_set(inst_container.Children)
+ for prev_obj in rt.selection:
+ if rt.isValidNode(prev_obj):
+ rt.Delete(prev_obj)
rt.Execute(f'importFile @"{path}" #noPrompt using:ObjImp')
# get current selection
- for selection in rt.GetCurrentSelection():
- selection.Parent = container
-
+ selections = rt.GetCurrentSelection()
+ update_custom_attribute_data(inst_container, selections)
+ for selection in selections:
+ selection.Parent = inst_container
+ selection.name = f"{namespace}:{selection.name}"
+ selection.pos = transform_data[
+ f"{selection.name}.transform"]
+ selection.scale = transform_data[
+ f"{selection.name}.scale"]
with maintained_selection():
rt.Select(node)
diff --git a/openpype/hosts/max/plugins/load/load_model_usd.py b/openpype/hosts/max/plugins/load/load_model_usd.py
index 48b50b9b18..f35d8e6327 100644
--- a/openpype/hosts/max/plugins/load/load_model_usd.py
+++ b/openpype/hosts/max/plugins/load/load_model_usd.py
@@ -1,8 +1,16 @@
import os
from openpype.hosts.max.api import lib
+from openpype.hosts.max.api.lib import (
+ unique_namespace,
+ get_namespace,
+ object_transform_set
+)
from openpype.hosts.max.api.lib import maintained_selection
-from openpype.hosts.max.api.pipeline import containerise
+from openpype.hosts.max.api.pipeline import (
+ containerise,
+ import_custom_attribute_data
+)
from openpype.pipeline import get_representation_path, load
@@ -15,6 +23,7 @@ class ModelUSDLoader(load.LoaderPlugin):
order = -10
icon = "code-fork"
color = "orange"
+ postfix = "param"
def load(self, context, name=None, namespace=None, data=None):
from pymxs import runtime as rt
@@ -30,11 +39,24 @@ class ModelUSDLoader(load.LoaderPlugin):
rt.LogLevel = rt.Name("info")
rt.USDImporter.importFile(filepath,
importOptions=import_options)
-
+ namespace = unique_namespace(
+ name + "_",
+ suffix="_",
+ )
asset = rt.GetNodeByName(name)
+ import_custom_attribute_data(asset, asset.Children)
+ for usd_asset in asset.Children:
+ usd_asset.name = f"{namespace}:{usd_asset.name}"
+
+ asset_name = f"{namespace}:{name}_{self.postfix}"
+ asset.name = asset_name
+ # re-fetch the container node after renaming
+ asset = rt.GetNodeByName(asset_name)
+
return containerise(
- name, [asset], context, loader=self.__class__.__name__)
+ name, [asset], context,
+ namespace, loader=self.__class__.__name__)
def update(self, container, representation):
from pymxs import runtime as rt
@@ -42,11 +64,16 @@ class ModelUSDLoader(load.LoaderPlugin):
path = get_representation_path(representation)
node_name = container["instance_node"]
node = rt.GetNodeByName(node_name)
+ namespace, name = get_namespace(node_name)
+ sub_node_name = f"{namespace}:{name}_{self.postfix}"
+ transform_data = None
for n in node.Children:
- for r in n.Children:
- rt.Delete(r)
+ rt.Select(n.Children)
+ transform_data = object_transform_set(n.Children)
+ for prev_usd_asset in rt.selection:
+ if rt.isValidNode(prev_usd_asset):
+ rt.Delete(prev_usd_asset)
rt.Delete(n)
- instance_name, _ = node_name.split("_")
import_options = rt.USDImporter.CreateOptions()
base_filename = os.path.basename(path)
@@ -55,11 +82,20 @@ class ModelUSDLoader(load.LoaderPlugin):
rt.LogPath = log_filepath
rt.LogLevel = rt.Name("info")
- rt.USDImporter.importFile(path,
- importOptions=import_options)
+ rt.USDImporter.importFile(
+ path, importOptions=import_options)
- asset = rt.GetNodeByName(instance_name)
+ asset = rt.GetNodeByName(name)
asset.Parent = node
+ import_custom_attribute_data(asset, asset.Children)
+ for children in asset.Children:
+ children.name = f"{namespace}:{children.name}"
+ children.pos = transform_data[
+ f"{children.name}.transform"]
+ children.scale = transform_data[
+ f"{children.name}.scale"]
+
+ asset.name = sub_node_name
with maintained_selection():
rt.Select(node)
diff --git a/openpype/hosts/max/plugins/load/load_pointcache.py b/openpype/hosts/max/plugins/load/load_pointcache.py
index 290503e053..070dea88d4 100644
--- a/openpype/hosts/max/plugins/load/load_pointcache.py
+++ b/openpype/hosts/max/plugins/load/load_pointcache.py
@@ -7,7 +7,12 @@ Because of limited api, alembics can be only loaded, but not easily updated.
import os
from openpype.pipeline import load, get_representation_path
from openpype.hosts.max.api import lib, maintained_selection
-from openpype.hosts.max.api.pipeline import containerise
+from openpype.hosts.max.api.lib import unique_namespace
+from openpype.hosts.max.api.pipeline import (
+ containerise,
+ import_custom_attribute_data,
+ update_custom_attribute_data
+)
class AbcLoader(load.LoaderPlugin):
@@ -19,6 +24,7 @@ class AbcLoader(load.LoaderPlugin):
order = -10
icon = "code-fork"
color = "orange"
+ postfix = "param"
def load(self, context, name=None, namespace=None, data=None):
from pymxs import runtime as rt
@@ -33,7 +39,7 @@ class AbcLoader(load.LoaderPlugin):
}
rt.AlembicImport.ImportToRoot = False
- rt.importFile(file_path, rt.name("noPrompt"))
+ rt.importFile(file_path, rt.name("noPrompt"), using=rt.AlembicImport)
abc_after = {
c
@@ -48,13 +54,27 @@ class AbcLoader(load.LoaderPlugin):
self.log.error("Something failed when loading.")
abc_container = abc_containers.pop()
-
- for abc in rt.GetCurrentSelection():
+ selections = rt.GetCurrentSelection()
+ import_custom_attribute_data(
+ abc_container, abc_container.Children)
+ for abc in selections:
for cam_shape in abc.Children:
cam_shape.playbackType = 2
+ namespace = unique_namespace(
+ name + "_",
+ suffix="_",
+ )
+
+ for abc_object in abc_container.Children:
+ abc_object.name = f"{namespace}:{abc_object.name}"
+ # rename the abc container with the namespace prefix
+ abc_container_name = f"{namespace}:{name}_{self.postfix}"
+ abc_container.name = abc_container_name
+
return containerise(
- name, [abc_container], context, loader=self.__class__.__name__
+ name, [abc_container], context,
+ namespace, loader=self.__class__.__name__
)
def update(self, container, representation):
@@ -63,28 +83,23 @@ class AbcLoader(load.LoaderPlugin):
path = get_representation_path(representation)
node = rt.GetNodeByName(container["instance_node"])
- alembic_objects = self.get_container_children(node, "AlembicObject")
- for alembic_object in alembic_objects:
- alembic_object.source = path
-
- lib.imprint(
- container["instance_node"],
- {"representation": str(representation["_id"])},
- )
-
with maintained_selection():
rt.Select(node.Children)
for alembic in rt.Selection:
abc = rt.GetNodeByName(alembic.name)
+ update_custom_attribute_data(abc, abc.Children)
rt.Select(abc.Children)
- for abc_con in rt.Selection:
- container = rt.GetNodeByName(abc_con.name)
- container.source = path
- rt.Select(container.Children)
- for abc_obj in rt.Selection:
- alembic_obj = rt.GetNodeByName(abc_obj.name)
- alembic_obj.source = path
+ for abc_con in abc.Children:
+ abc_con.source = path
+ rt.Select(abc_con.Children)
+ for abc_obj in abc_con.Children:
+ abc_obj.source = path
+
+ lib.imprint(
+ container["instance_node"],
+ {"representation": str(representation["_id"])},
+ )
def switch(self, container, representation):
self.update(container, representation)
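Both Alembic loaders update nested hierarchies by walking container children and re-pointing every object's `source` at the new representation path. The recursion behind those nested loops can be sketched with plain dicts standing in for Max nodes:

```python
def set_source_recursive(node, path):
    # Point every descendant's "source" at the new path, mirroring the
    # nested child loops in AbcLoader.update() / ModelAbcLoader.update().
    for child in node.get("children", []):
        child["source"] = path
        set_source_recursive(child, path)

abc_root = {
    "children": [
        {"source": "old.abc", "children": [
            {"source": "old.abc", "children": []},
        ]},
    ],
}
set_source_recursive(abc_root, "v002.abc")
print(abc_root["children"][0]["children"][0]["source"])  # v002.abc
```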
diff --git a/openpype/hosts/max/plugins/load/load_pointcloud.py b/openpype/hosts/max/plugins/load/load_pointcloud.py
index 2a1175167a..c4c4cfbc6c 100644
--- a/openpype/hosts/max/plugins/load/load_pointcloud.py
+++ b/openpype/hosts/max/plugins/load/load_pointcloud.py
@@ -1,7 +1,14 @@
import os
from openpype.hosts.max.api import lib, maintained_selection
-from openpype.hosts.max.api.pipeline import containerise
+from openpype.hosts.max.api.lib import (
+ unique_namespace, get_namespace
+)
+from openpype.hosts.max.api.pipeline import (
+ containerise,
+ import_custom_attribute_data,
+ update_custom_attribute_data
+)
from openpype.pipeline import get_representation_path, load
@@ -13,6 +20,7 @@ class PointCloudLoader(load.LoaderPlugin):
order = -8
icon = "code-fork"
color = "green"
+ postfix = "param"
def load(self, context, name=None, namespace=None, data=None):
"""load point cloud by tyCache"""
@@ -22,10 +30,19 @@ class PointCloudLoader(load.LoaderPlugin):
obj = rt.tyCache()
obj.filename = filepath
- prt_container = rt.GetNodeByName(obj.name)
+ namespace = unique_namespace(
+ name + "_",
+ suffix="_",
+ )
+ prt_container = rt.Container(
+ name=f"{namespace}:{name}_{self.postfix}")
+ import_custom_attribute_data(prt_container, [obj])
+ obj.Parent = prt_container
+ obj.name = f"{namespace}:{obj.name}"
return containerise(
- name, [prt_container], context, loader=self.__class__.__name__)
+ name, [prt_container], context,
+ namespace, loader=self.__class__.__name__)
def update(self, container, representation):
"""update the container"""
@@ -33,15 +50,18 @@ class PointCloudLoader(load.LoaderPlugin):
path = get_representation_path(representation)
node = rt.GetNodeByName(container["instance_node"])
+ namespace, name = get_namespace(container["instance_node"])
+ sub_node_name = f"{namespace}:{name}_{self.postfix}"
+ inst_container = rt.getNodeByName(sub_node_name)
+ update_custom_attribute_data(
+ inst_container, inst_container.Children)
with maintained_selection():
rt.Select(node.Children)
- for prt in rt.Selection:
- prt_object = rt.GetNodeByName(prt.name)
- prt_object.filename = path
-
- lib.imprint(container["instance_node"], {
- "representation": str(representation["_id"])
- })
+ for prt in inst_container.Children:
+ prt.filename = path
+ lib.imprint(container["instance_node"], {
+ "representation": str(representation["_id"])
+ })
def switch(self, container, representation):
self.update(container, representation)
diff --git a/openpype/hosts/max/plugins/load/load_redshift_proxy.py b/openpype/hosts/max/plugins/load/load_redshift_proxy.py
index 31692f6367..f7dd95962b 100644
--- a/openpype/hosts/max/plugins/load/load_redshift_proxy.py
+++ b/openpype/hosts/max/plugins/load/load_redshift_proxy.py
@@ -5,8 +5,15 @@ from openpype.pipeline import (
load,
get_representation_path
)
-from openpype.hosts.max.api.pipeline import containerise
+from openpype.hosts.max.api.pipeline import (
+ containerise,
+ import_custom_attribute_data,
+ update_custom_attribute_data
+)
from openpype.hosts.max.api import lib
+from openpype.hosts.max.api.lib import (
+ unique_namespace, get_namespace
+)
class RedshiftProxyLoader(load.LoaderPlugin):
@@ -18,6 +25,7 @@ class RedshiftProxyLoader(load.LoaderPlugin):
order = -9
icon = "code-fork"
color = "white"
+ postfix = "param"
def load(self, context, name=None, namespace=None, data=None):
from pymxs import runtime as rt
@@ -30,24 +38,32 @@ class RedshiftProxyLoader(load.LoaderPlugin):
if collections:
rs_proxy.is_sequence = True
- container = rt.container()
- container.name = name
+ namespace = unique_namespace(
+ name + "_",
+ suffix="_",
+ )
+ container = rt.Container(
+ name=f"{namespace}:{name}_{self.postfix}")
rs_proxy.Parent = container
-
- asset = rt.getNodeByName(name)
+ rs_proxy.name = f"{namespace}:{rs_proxy.name}"
+ import_custom_attribute_data(container, [rs_proxy])
return containerise(
- name, [asset], context, loader=self.__class__.__name__)
+ name, [container], context,
+ namespace, loader=self.__class__.__name__)
def update(self, container, representation):
from pymxs import runtime as rt
path = get_representation_path(representation)
- node = rt.getNodeByName(container["instance_node"])
- for children in node.Children:
- children_node = rt.getNodeByName(children.name)
- for proxy in children_node.Children:
- proxy.file = path
+ namespace, name = get_namespace(container["instance_node"])
+ sub_node_name = f"{namespace}:{name}_{self.postfix}"
+ inst_container = rt.getNodeByName(sub_node_name)
+
+ update_custom_attribute_data(
+ inst_container, inst_container.Children)
+ for proxy in inst_container.Children:
+ proxy.file = path
lib.imprint(container["instance_node"], {
"representation": str(representation["_id"])
diff --git a/openpype/hosts/max/plugins/publish/collect_render.py b/openpype/hosts/max/plugins/publish/collect_render.py
index db5c84fad9..2dfa1520a9 100644
--- a/openpype/hosts/max/plugins/publish/collect_render.py
+++ b/openpype/hosts/max/plugins/publish/collect_render.py
@@ -30,10 +30,12 @@ class CollectRender(pyblish.api.InstancePlugin):
asset = get_current_asset_name()
files_by_aov = RenderProducts().get_beauty(instance.name)
- folder = folder.replace("\\", "/")
aovs = RenderProducts().get_aovs(instance.name)
files_by_aov.update(aovs)
+ camera = rt.viewport.GetCamera()
+ instance.data["cameras"] = [camera.name] if camera else None # noqa
+
if "expectedFiles" not in instance.data:
instance.data["expectedFiles"] = list()
instance.data["files"] = list()
diff --git a/openpype/hosts/max/plugins/publish/extract_camera_abc.py b/openpype/hosts/max/plugins/publish/extract_camera_abc.py
index b42732e70d..b1918c53e0 100644
--- a/openpype/hosts/max/plugins/publish/extract_camera_abc.py
+++ b/openpype/hosts/max/plugins/publish/extract_camera_abc.py
@@ -22,8 +22,6 @@ class ExtractCameraAlembic(publish.Extractor, OptionalPyblishPluginMixin):
start = float(instance.data.get("frameStartHandle", 1))
end = float(instance.data.get("frameEndHandle", 1))
- container = instance.data["instance_node"]
-
self.log.info("Extracting Camera ...")
stagingdir = self.staging_dir(instance)
diff --git a/openpype/hosts/max/plugins/publish/extract_camera_fbx.py b/openpype/hosts/max/plugins/publish/extract_camera_fbx.py
index 06ac3da093..537c88eb4d 100644
--- a/openpype/hosts/max/plugins/publish/extract_camera_fbx.py
+++ b/openpype/hosts/max/plugins/publish/extract_camera_fbx.py
@@ -19,9 +19,8 @@ class ExtractCameraFbx(publish.Extractor, OptionalPyblishPluginMixin):
def process(self, instance):
if not self.is_active(instance.data):
return
- container = instance.data["instance_node"]
- self.log.info("Extracting Camera ...")
+ self.log.debug("Extracting Camera ...")
stagingdir = self.staging_dir(instance)
filename = "{name}.fbx".format(**instance.data)
diff --git a/openpype/hosts/max/plugins/publish/extract_max_scene_raw.py b/openpype/hosts/max/plugins/publish/extract_max_scene_raw.py
index de5db9ab56..a7a889c587 100644
--- a/openpype/hosts/max/plugins/publish/extract_max_scene_raw.py
+++ b/openpype/hosts/max/plugins/publish/extract_max_scene_raw.py
@@ -18,10 +18,9 @@ class ExtractMaxSceneRaw(publish.Extractor, OptionalPyblishPluginMixin):
def process(self, instance):
if not self.is_active(instance.data):
return
- container = instance.data["instance_node"]
# publish the raw scene for camera
- self.log.info("Extracting Raw Max Scene ...")
+ self.log.debug("Extracting Raw Max Scene ...")
stagingdir = self.staging_dir(instance)
filename = "{name}.max".format(**instance.data)
diff --git a/openpype/hosts/max/plugins/publish/extract_model.py b/openpype/hosts/max/plugins/publish/extract_model.py
index c7ecf7efc9..38f4848c5e 100644
--- a/openpype/hosts/max/plugins/publish/extract_model.py
+++ b/openpype/hosts/max/plugins/publish/extract_model.py
@@ -20,9 +20,7 @@ class ExtractModel(publish.Extractor, OptionalPyblishPluginMixin):
if not self.is_active(instance.data):
return
- container = instance.data["instance_node"]
-
- self.log.info("Extracting Geometry ...")
+ self.log.debug("Extracting Geometry ...")
stagingdir = self.staging_dir(instance)
filename = "{name}.abc".format(**instance.data)
diff --git a/openpype/hosts/max/plugins/publish/extract_model_fbx.py b/openpype/hosts/max/plugins/publish/extract_model_fbx.py
index 56c2cadd94..fd48ed5007 100644
--- a/openpype/hosts/max/plugins/publish/extract_model_fbx.py
+++ b/openpype/hosts/max/plugins/publish/extract_model_fbx.py
@@ -20,10 +20,7 @@ class ExtractModelFbx(publish.Extractor, OptionalPyblishPluginMixin):
if not self.is_active(instance.data):
return
- container = instance.data["instance_node"]
-
-
- self.log.info("Extracting Geometry ...")
+ self.log.debug("Extracting Geometry ...")
stagingdir = self.staging_dir(instance)
filename = "{name}.fbx".format(**instance.data)
diff --git a/openpype/hosts/max/plugins/publish/extract_model_obj.py b/openpype/hosts/max/plugins/publish/extract_model_obj.py
index 4fde65cf22..e522b1e7a1 100644
--- a/openpype/hosts/max/plugins/publish/extract_model_obj.py
+++ b/openpype/hosts/max/plugins/publish/extract_model_obj.py
@@ -20,9 +20,7 @@ class ExtractModelObj(publish.Extractor, OptionalPyblishPluginMixin):
if not self.is_active(instance.data):
return
- container = instance.data["instance_node"]
-
- self.log.info("Extracting Geometry ...")
+ self.log.debug("Extracting Geometry ...")
stagingdir = self.staging_dir(instance)
filename = "{name}.obj".format(**instance.data)
diff --git a/openpype/hosts/max/plugins/publish/extract_pointcache.py b/openpype/hosts/max/plugins/publish/extract_pointcache.py
index 5a99a8b845..c3de623bc0 100644
--- a/openpype/hosts/max/plugins/publish/extract_pointcache.py
+++ b/openpype/hosts/max/plugins/publish/extract_pointcache.py
@@ -54,8 +54,6 @@ class ExtractAlembic(publish.Extractor):
start = float(instance.data.get("frameStartHandle", 1))
end = float(instance.data.get("frameEndHandle", 1))
- container = instance.data["instance_node"]
-
self.log.debug("Extracting pointcache ...")
parent_dir = self.staging_dir(instance)
diff --git a/openpype/hosts/max/plugins/publish/extract_redshift_proxy.py b/openpype/hosts/max/plugins/publish/extract_redshift_proxy.py
index ab569ecbcb..f67ed30c6b 100644
--- a/openpype/hosts/max/plugins/publish/extract_redshift_proxy.py
+++ b/openpype/hosts/max/plugins/publish/extract_redshift_proxy.py
@@ -16,11 +16,10 @@ class ExtractRedshiftProxy(publish.Extractor):
families = ["redshiftproxy"]
def process(self, instance):
- container = instance.data["instance_node"]
start = int(instance.context.data.get("frameStart"))
end = int(instance.context.data.get("frameEnd"))
- self.log.info("Extracting Redshift Proxy...")
+ self.log.debug("Extracting Redshift Proxy...")
stagingdir = self.staging_dir(instance)
rs_filename = "{name}.rs".format(**instance.data)
rs_filepath = os.path.join(stagingdir, rs_filename)
diff --git a/openpype/hosts/max/plugins/publish/validate_no_max_content.py b/openpype/hosts/max/plugins/publish/validate_no_max_content.py
index c6a27dace3..73e12e75c9 100644
--- a/openpype/hosts/max/plugins/publish/validate_no_max_content.py
+++ b/openpype/hosts/max/plugins/publish/validate_no_max_content.py
@@ -13,7 +13,6 @@ class ValidateMaxContents(pyblish.api.InstancePlugin):
order = pyblish.api.ValidatorOrder
families = ["camera",
"maxScene",
- "maxrender",
"review"]
hosts = ["max"]
label = "Max Scene Contents"
diff --git a/openpype/hosts/max/plugins/publish/validate_renderable_camera.py b/openpype/hosts/max/plugins/publish/validate_renderable_camera.py
new file mode 100644
index 0000000000..61321661b5
--- /dev/null
+++ b/openpype/hosts/max/plugins/publish/validate_renderable_camera.py
@@ -0,0 +1,46 @@
+# -*- coding: utf-8 -*-
+import pyblish.api
+from openpype.pipeline import (
+ PublishValidationError,
+ OptionalPyblishPluginMixin)
+from openpype.pipeline.publish import RepairAction
+from openpype.hosts.max.api.lib import get_current_renderer
+
+from pymxs import runtime as rt
+
+
+class ValidateRenderableCamera(pyblish.api.InstancePlugin,
+ OptionalPyblishPluginMixin):
+ """Validates Renderable Camera
+
+ Check if the renderable camera used for rendering
+ """
+
+ order = pyblish.api.ValidatorOrder
+ families = ["maxrender"]
+ hosts = ["max"]
+ label = "Renderable Camera"
+ optional = True
+ actions = [RepairAction]
+
+ def process(self, instance):
+ if not self.is_active(instance.data):
+ return
+ if not instance.data["cameras"]:
+ raise PublishValidationError(
+ "No renderable Camera found in scene."
+ )
+
+ @classmethod
+ def repair(cls, instance):
+
+ rt.viewport.setType(rt.Name("view_camera"))
+ camera = rt.viewport.GetCamera()
+ cls.log.info(f"Camera {camera} set as renderable camera")
+ renderer_class = get_current_renderer()
+ renderer = str(renderer_class).split(":")[0]
+ if renderer == "Arnold":
+ arv = rt.MAXToAOps.ArnoldRenderView()
+ arv.setOption("Camera", str(camera))
+ arv.close()
+ instance.data["cameras"] = [camera.name]
diff --git a/openpype/hosts/max/plugins/publish/validate_resolution_setting.py b/openpype/hosts/max/plugins/publish/validate_resolution_setting.py
index 5fcb843b20..5ac41b10a0 100644
--- a/openpype/hosts/max/plugins/publish/validate_resolution_setting.py
+++ b/openpype/hosts/max/plugins/publish/validate_resolution_setting.py
@@ -6,11 +6,6 @@ from openpype.pipeline import (
from pymxs import runtime as rt
from openpype.hosts.max.api.lib import reset_scene_resolution
-from openpype.pipeline.context_tools import (
- get_current_project_asset,
- get_current_project
-)
-
class ValidateResolutionSetting(pyblish.api.InstancePlugin,
OptionalPyblishPluginMixin):
@@ -43,22 +38,16 @@ class ValidateResolutionSetting(pyblish.api.InstancePlugin,
"on asset or shot.")
def get_db_resolution(self, instance):
- data = ["data.resolutionWidth", "data.resolutionHeight"]
- project_resolution = get_current_project(fields=data)
- project_resolution_data = project_resolution["data"]
- asset_resolution = get_current_project_asset(fields=data)
- asset_resolution_data = asset_resolution["data"]
- # Set project resolution
- project_width = int(
- project_resolution_data.get("resolutionWidth", 1920))
- project_height = int(
- project_resolution_data.get("resolutionHeight", 1080))
- width = int(
- asset_resolution_data.get("resolutionWidth", project_width))
- height = int(
- asset_resolution_data.get("resolutionHeight", project_height))
+ asset_doc = instance.data["assetEntity"]
+ project_doc = instance.context.data["projectEntity"]
+ for data in [asset_doc["data"], project_doc["data"]]:
+ if "resolutionWidth" in data and "resolutionHeight" in data:
+ width = data["resolutionWidth"]
+ height = data["resolutionHeight"]
+ return int(width), int(height)
- return width, height
+ # Defaults if not found in asset document or project document
+ return 1920, 1080
@classmethod
def repair(cls, instance):
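The rewritten `get_db_resolution` above prefers the asset document's resolution, falls back to the project document, and only then uses hard-coded defaults. The lookup order can be sketched in isolation; `get_resolution` and its arguments are illustrative helpers, not the plug-in's actual API:

```python
def get_resolution(asset_data, project_data, default=(1920, 1080)):
    """Return (width, height), preferring asset data over project data."""
    for data in (asset_data, project_data):
        # Only trust a source that defines both dimensions.
        if "resolutionWidth" in data and "resolutionHeight" in data:
            return int(data["resolutionWidth"]), int(data["resolutionHeight"])
    return default
```

Because the loop returns on the first source defining both keys, a partially filled asset document (width only, for example) falls through to the project document rather than mixing values from two sources.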
diff --git a/openpype/hosts/maya/api/lib_rendersettings.py b/openpype/hosts/maya/api/lib_rendersettings.py
index f54633c04d..42cf29d0a7 100644
--- a/openpype/hosts/maya/api/lib_rendersettings.py
+++ b/openpype/hosts/maya/api/lib_rendersettings.py
@@ -177,12 +177,7 @@ class RenderSettings(object):
# list all the aovs
all_rs_aovs = cmds.ls(type='RedshiftAOV')
for rs_aov in redshift_aovs:
- rs_layername = rs_aov
- if " " in rs_aov:
- rs_renderlayer = rs_aov.replace(" ", "")
- rs_layername = "rsAov_{}".format(rs_renderlayer)
- else:
- rs_layername = "rsAov_{}".format(rs_aov)
+ rs_layername = "rsAov_{}".format(rs_aov.replace(" ", ""))
if rs_layername in all_rs_aovs:
continue
cmds.rsCreateAov(type=rs_aov)
@@ -317,7 +312,7 @@ class RenderSettings(object):
separators = [cmds.menuItem(i, query=True, label=True) for i in items] # noqa: E501
try:
sep_idx = separators.index(aov_separator)
- except ValueError as e:
+ except ValueError:
six.reraise(
CreatorError,
CreatorError(
diff --git a/openpype/hosts/maya/api/plugin.py b/openpype/hosts/maya/api/plugin.py
index 00d6602ef9..4032618afb 100644
--- a/openpype/hosts/maya/api/plugin.py
+++ b/openpype/hosts/maya/api/plugin.py
@@ -260,7 +260,7 @@ class MayaCreator(NewCreator, MayaCreatorBase):
default=True)
]
- def apply_settings(self, project_settings, system_settings):
+ def apply_settings(self, project_settings):
"""Method called on initialization of plugin to apply settings."""
settings_name = self.settings_name
@@ -683,7 +683,6 @@ class ReferenceLoader(Loader):
loaded_containers.append(container)
self._organize_containers(nodes, container)
c += 1
- namespace = None
return loaded_containers
diff --git a/openpype/hosts/maya/plugins/create/create_animation.py b/openpype/hosts/maya/plugins/create/create_animation.py
index 214ac18aef..115c73c0d3 100644
--- a/openpype/hosts/maya/plugins/create/create_animation.py
+++ b/openpype/hosts/maya/plugins/create/create_animation.py
@@ -81,10 +81,8 @@ class CreateAnimation(plugin.MayaHiddenCreator):
return defs
- def apply_settings(self, project_settings, system_settings):
- super(CreateAnimation, self).apply_settings(
- project_settings, system_settings
- )
+ def apply_settings(self, project_settings):
+ super(CreateAnimation, self).apply_settings(project_settings)
# Hardcoding creator to be enabled due to existing settings would
# disable the creator causing the creator plugin to not be
# discoverable.
diff --git a/openpype/hosts/maya/plugins/create/create_render.py b/openpype/hosts/maya/plugins/create/create_render.py
index cc5c1eb205..6266689af4 100644
--- a/openpype/hosts/maya/plugins/create/create_render.py
+++ b/openpype/hosts/maya/plugins/create/create_render.py
@@ -34,7 +34,7 @@ class CreateRenderlayer(plugin.RenderlayerCreator):
render_settings = {}
@classmethod
- def apply_settings(cls, project_settings, system_settings):
+ def apply_settings(cls, project_settings):
cls.render_settings = project_settings["maya"]["RenderSettings"]
def create(self, subset_name, instance_data, pre_create_data):
diff --git a/openpype/hosts/maya/plugins/create/create_unreal_skeletalmesh.py b/openpype/hosts/maya/plugins/create/create_unreal_skeletalmesh.py
index 4e2a99eced..3c9a79156a 100644
--- a/openpype/hosts/maya/plugins/create/create_unreal_skeletalmesh.py
+++ b/openpype/hosts/maya/plugins/create/create_unreal_skeletalmesh.py
@@ -21,7 +21,7 @@ class CreateUnrealSkeletalMesh(plugin.MayaCreator):
# Defined in settings
joint_hints = set()
- def apply_settings(self, project_settings, system_settings):
+ def apply_settings(self, project_settings):
"""Apply project settings to creator"""
settings = (
project_settings["maya"]["create"]["CreateUnrealSkeletalMesh"]
diff --git a/openpype/hosts/maya/plugins/create/create_unreal_staticmesh.py b/openpype/hosts/maya/plugins/create/create_unreal_staticmesh.py
index 3f96d91a54..025b39fa55 100644
--- a/openpype/hosts/maya/plugins/create/create_unreal_staticmesh.py
+++ b/openpype/hosts/maya/plugins/create/create_unreal_staticmesh.py
@@ -16,7 +16,7 @@ class CreateUnrealStaticMesh(plugin.MayaCreator):
# Defined in settings
collision_prefixes = []
- def apply_settings(self, project_settings, system_settings):
+ def apply_settings(self, project_settings):
"""Apply project settings to creator"""
settings = project_settings["maya"]["create"]["CreateUnrealStaticMesh"]
self.collision_prefixes = settings["collision_prefixes"]
diff --git a/openpype/hosts/maya/plugins/create/create_vrayscene.py b/openpype/hosts/maya/plugins/create/create_vrayscene.py
index d601dceb54..2726979d30 100644
--- a/openpype/hosts/maya/plugins/create/create_vrayscene.py
+++ b/openpype/hosts/maya/plugins/create/create_vrayscene.py
@@ -22,7 +22,7 @@ class CreateVRayScene(plugin.RenderlayerCreator):
singleton_node_name = "vraysceneMain"
@classmethod
- def apply_settings(cls, project_settings, system_settings):
+ def apply_settings(cls, project_settings):
cls.render_settings = project_settings["maya"]["RenderSettings"]
def create(self, subset_name, instance_data, pre_create_data):
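Several hunks above migrate creator plug-ins from `apply_settings(project_settings, system_settings)` to `apply_settings(project_settings)`. During such a migration, a caller can tolerate both conventions by inspecting the method's signature; `call_apply_settings` below is a hedged sketch of that shim, not an OpenPype API:

```python
import inspect


def call_apply_settings(plugin, project_settings, system_settings=None):
    """Call apply_settings() supporting both old and new signatures.

    Old-style plug-ins accept (project_settings, system_settings);
    new-style plug-ins accept only (project_settings).
    """
    params = inspect.signature(plugin.apply_settings).parameters
    if len(params) >= 2:
        # Old-style signature still expects system_settings.
        return plugin.apply_settings(project_settings, system_settings)
    return plugin.apply_settings(project_settings)
```

`inspect.signature` on a bound method already excludes `self`/`cls`, so the same check works for the `@classmethod` variants seen in `CreateRenderlayer` and `CreateVRayScene`.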
diff --git a/openpype/hosts/maya/plugins/create/create_yeti_cache.py b/openpype/hosts/maya/plugins/create/create_yeti_cache.py
index 395aa62325..ca002392d4 100644
--- a/openpype/hosts/maya/plugins/create/create_yeti_cache.py
+++ b/openpype/hosts/maya/plugins/create/create_yeti_cache.py
@@ -13,8 +13,7 @@ class CreateYetiCache(plugin.MayaCreator):
family = "yeticache"
icon = "pagelines"
- def __init__(self, *args, **kwargs):
- super(CreateYetiCache, self).__init__(*args, **kwargs)
+ def get_instance_attr_defs(self):
defs = [
NumberDef("preroll",
@@ -36,3 +35,5 @@ class CreateYetiCache(plugin.MayaCreator):
default=3,
decimals=0)
)
+
+ return defs
diff --git a/openpype/hosts/maya/plugins/inventory/import_reference.py b/openpype/hosts/maya/plugins/inventory/import_reference.py
index ecc424209d..3f3b85ba6c 100644
--- a/openpype/hosts/maya/plugins/inventory/import_reference.py
+++ b/openpype/hosts/maya/plugins/inventory/import_reference.py
@@ -12,7 +12,6 @@ class ImportReference(InventoryAction):
color = "#d8d8d8"
def process(self, containers):
- references = cmds.ls(type="reference")
for container in containers:
if container["loader"] != "ReferenceLoader":
print("Not a reference, skipping")
diff --git a/openpype/hosts/maya/plugins/load/load_multiverse_usd.py b/openpype/hosts/maya/plugins/load/load_multiverse_usd.py
index d08fcd904e..cad42b55f9 100644
--- a/openpype/hosts/maya/plugins/load/load_multiverse_usd.py
+++ b/openpype/hosts/maya/plugins/load/load_multiverse_usd.py
@@ -43,8 +43,6 @@ class MultiverseUsdLoader(load.LoaderPlugin):
import multiverse
# Create the shape
- shape = None
- transform = None
with maintained_selection():
cmds.namespace(addNamespace=namespace)
with namespaced(namespace, new=False):
diff --git a/openpype/hosts/maya/plugins/load/load_reference.py b/openpype/hosts/maya/plugins/load/load_reference.py
index 91767249e0..61f337f501 100644
--- a/openpype/hosts/maya/plugins/load/load_reference.py
+++ b/openpype/hosts/maya/plugins/load/load_reference.py
@@ -205,7 +205,7 @@ class ReferenceLoader(openpype.hosts.maya.api.plugin.ReferenceLoader):
cmds.setAttr("{}.selectHandleZ".format(group_name), cz)
if family == "rig":
- self._post_process_rig(name, namespace, context, options)
+ self._post_process_rig(namespace, context, options)
else:
if "translate" in options:
if not attach_to_root and new_nodes:
@@ -229,7 +229,7 @@ class ReferenceLoader(openpype.hosts.maya.api.plugin.ReferenceLoader):
members = get_container_members(container)
self._lock_camera_transforms(members)
- def _post_process_rig(self, name, namespace, context, options):
+ def _post_process_rig(self, namespace, context, options):
nodes = self[:]
create_rig_animation_instance(
diff --git a/openpype/hosts/maya/plugins/load/load_xgen.py b/openpype/hosts/maya/plugins/load/load_xgen.py
index 323f8d7eda..2ad6ad55bc 100644
--- a/openpype/hosts/maya/plugins/load/load_xgen.py
+++ b/openpype/hosts/maya/plugins/load/load_xgen.py
@@ -53,8 +53,6 @@ class XgenLoader(openpype.hosts.maya.api.plugin.ReferenceLoader):
)
# Reference xgen. Xgen does not like being referenced in under a group.
- new_nodes = []
-
with maintained_selection():
nodes = cmds.file(
maya_filepath,
diff --git a/openpype/hosts/maya/plugins/load/load_yeti_cache.py b/openpype/hosts/maya/plugins/load/load_yeti_cache.py
index 5cded13d4e..4a11ea9a2c 100644
--- a/openpype/hosts/maya/plugins/load/load_yeti_cache.py
+++ b/openpype/hosts/maya/plugins/load/load_yeti_cache.py
@@ -15,6 +15,16 @@ from openpype.hosts.maya.api import lib
from openpype.hosts.maya.api.pipeline import containerise
+# Do not reset these values on update but only apply on first load
+# to preserve any potential local overrides
+SKIP_UPDATE_ATTRS = {
+ "displayOutput",
+ "viewportDensity",
+ "viewportWidth",
+ "viewportLength",
+}
+
+
def set_attribute(node, attr, value):
"""Wrapper of set attribute which ignores None values"""
if value is None:
@@ -205,6 +215,8 @@ class YetiCacheLoader(load.LoaderPlugin):
yeti_node = yeti_nodes[0]
for attr, value in node_settings["attrs"].items():
+ if attr in SKIP_UPDATE_ATTRS:
+ continue
set_attribute(attr, value, yeti_node)
cmds.setAttr("{}.representation".format(container_node),
@@ -311,7 +323,6 @@ class YetiCacheLoader(load.LoaderPlugin):
# Update attributes with defaults
attributes = node_settings["attrs"]
attributes.update({
- "viewportDensity": 0.1,
"verbosity": 2,
"fileMode": 1,
@@ -321,6 +332,9 @@ class YetiCacheLoader(load.LoaderPlugin):
"visibleInRefractions": True
})
+ if "viewportDensity" not in attributes:
+ attributes["viewportDensity"] = 0.1
+
# Apply attributes to pgYetiMaya node
for attr, value in attributes.items():
set_attribute(attr, value, yeti_node)
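The `SKIP_UPDATE_ATTRS` addition above encodes a policy: viewport/preview attributes receive defaults on first load but are never overwritten on update, so an artist's local overrides survive version switches. A minimal standalone sketch of that merge policy follows; `apply_attrs` and its arguments are illustrative, not the loader's actual interface:

```python
# Attributes to apply only on first load, never reset on update.
SKIP_UPDATE_ATTRS = {
    "displayOutput",
    "viewportDensity",
    "viewportWidth",
    "viewportLength",
}


def apply_attrs(current, incoming, is_update):
    """Merge incoming attrs; on update, keep current values for skip attrs."""
    merged = dict(current)
    for attr, value in incoming.items():
        if is_update and attr in SKIP_UPDATE_ATTRS:
            continue  # preserve the artist's local override
        merged[attr] = value
    return merged
```

On first load (`is_update=False`) every incoming value is applied, which matches the loader still seeding `viewportDensity` with 0.1 when the cache settings do not provide one.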
diff --git a/openpype/hosts/maya/plugins/publish/collect_assembly.py b/openpype/hosts/maya/plugins/publish/collect_assembly.py
index 2aef9ab908..f64d6bee44 100644
--- a/openpype/hosts/maya/plugins/publish/collect_assembly.py
+++ b/openpype/hosts/maya/plugins/publish/collect_assembly.py
@@ -35,14 +35,11 @@ class CollectAssembly(pyblish.api.InstancePlugin):
# Get all content from the instance
instance_lookup = set(cmds.ls(instance, type="transform", long=True))
data = defaultdict(list)
- self.log.info(instance_lookup)
hierarchy_nodes = []
for container in containers:
- self.log.info(container)
root = lib.get_container_transforms(container, root=True)
- self.log.info(root)
if not root or root not in instance_lookup:
continue
diff --git a/openpype/hosts/maya/plugins/publish/collect_history.py b/openpype/hosts/maya/plugins/publish/collect_history.py
index 71f0169971..d4e8c6298b 100644
--- a/openpype/hosts/maya/plugins/publish/collect_history.py
+++ b/openpype/hosts/maya/plugins/publish/collect_history.py
@@ -18,7 +18,6 @@ class CollectMayaHistory(pyblish.api.InstancePlugin):
hosts = ["maya"]
label = "Maya History"
families = ["rig"]
- verbose = False
def process(self, instance):
diff --git a/openpype/hosts/maya/plugins/publish/collect_instances.py b/openpype/hosts/maya/plugins/publish/collect_instances.py
index 5f914b40d7..5058da3d01 100644
--- a/openpype/hosts/maya/plugins/publish/collect_instances.py
+++ b/openpype/hosts/maya/plugins/publish/collect_instances.py
@@ -28,6 +28,8 @@ class CollectNewInstances(pyblish.api.InstancePlugin):
order = pyblish.api.CollectorOrder
hosts = ["maya"]
+ valid_empty_families = {"workfile", "renderlayer"}
+
def process(self, instance):
objset = instance.data.get("instance_node")
@@ -58,7 +60,7 @@ class CollectNewInstances(pyblish.api.InstancePlugin):
instance[:] = members_hierarchy
- elif instance.data["family"] != "workfile":
+ elif instance.data["family"] not in self.valid_empty_families:
self.log.warning("Empty instance: \"%s\" " % objset)
# Store the exact members of the object set
instance.data["setMembers"] = members
diff --git a/openpype/hosts/maya/plugins/publish/collect_look.py b/openpype/hosts/maya/plugins/publish/collect_look.py
index b3da920566..a2c3d6acbf 100644
--- a/openpype/hosts/maya/plugins/publish/collect_look.py
+++ b/openpype/hosts/maya/plugins/publish/collect_look.py
@@ -356,8 +356,9 @@ class CollectLook(pyblish.api.InstancePlugin):
# Thus the data will be limited to only what we need.
self.log.debug("obj_set {}".format(sets[obj_set]))
if not sets[obj_set]["members"]:
- self.log.info(
- "Removing redundant set information: {}".format(obj_set))
+ self.log.debug(
+ "Removing redundant set information: {}".format(obj_set)
+ )
sets.pop(obj_set, None)
self.log.debug("Gathering attribute changes to instance members..")
@@ -396,9 +397,9 @@ class CollectLook(pyblish.api.InstancePlugin):
if con:
materials.extend(con)
- self.log.info("Found materials:\n{}".format(materials))
+ self.log.debug("Found materials:\n{}".format(materials))
- self.log.info("Found the following sets:\n{}".format(look_sets))
+ self.log.debug("Found the following sets:\n{}".format(look_sets))
# Get the entire node chain of the look sets
# history = cmds.listHistory(look_sets)
history = []
@@ -456,7 +457,7 @@ class CollectLook(pyblish.api.InstancePlugin):
instance.extend(shader for shader in look_sets if shader
not in instance_lookup)
- self.log.info("Collected look for %s" % instance)
+ self.log.debug("Collected look for %s" % instance)
def collect_sets(self, instance):
"""Collect all objectSets which are of importance for publishing
@@ -593,7 +594,7 @@ class CollectLook(pyblish.api.InstancePlugin):
if attribute == "fileTextureName":
computed_attribute = node + ".computedFileTextureNamePattern"
- self.log.info(" - file source: {}".format(source))
+ self.log.debug(" - file source: {}".format(source))
color_space_attr = "{}.colorSpace".format(node)
try:
color_space = cmds.getAttr(color_space_attr)
@@ -621,7 +622,7 @@ class CollectLook(pyblish.api.InstancePlugin):
dependNode=True)
)
if not source and cmds.nodeType(node) in pxr_nodes:
- self.log.info("Renderman: source is empty, skipping...")
+ self.log.debug("Renderman: source is empty, skipping...")
continue
# We replace backslashes with forward slashes because V-Ray
# can't handle the UDIM files with the backslashes in the
@@ -630,14 +631,14 @@ class CollectLook(pyblish.api.InstancePlugin):
files = get_file_node_files(node)
if len(files) == 0:
- self.log.error("No valid files found from node `%s`" % node)
+ self.log.debug("No valid files found from node `%s`" % node)
- self.log.info("collection of resource done:")
- self.log.info(" - node: {}".format(node))
- self.log.info(" - attribute: {}".format(attribute))
- self.log.info(" - source: {}".format(source))
- self.log.info(" - file: {}".format(files))
- self.log.info(" - color space: {}".format(color_space))
+ self.log.debug("collection of resource done:")
+ self.log.debug(" - node: {}".format(node))
+ self.log.debug(" - attribute: {}".format(attribute))
+ self.log.debug(" - source: {}".format(source))
+ self.log.debug(" - file: {}".format(files))
+ self.log.debug(" - color space: {}".format(color_space))
# Define the resource
yield {
diff --git a/openpype/hosts/maya/plugins/publish/collect_multiverse_look.py b/openpype/hosts/maya/plugins/publish/collect_multiverse_look.py
index 33fc7a025f..bcb979edfc 100644
--- a/openpype/hosts/maya/plugins/publish/collect_multiverse_look.py
+++ b/openpype/hosts/maya/plugins/publish/collect_multiverse_look.py
@@ -268,7 +268,7 @@ class CollectMultiverseLookData(pyblish.api.InstancePlugin):
cmds.loadPlugin("MultiverseForMaya", quiet=True)
import multiverse
- self.log.info("Processing mvLook for '{}'".format(instance))
+ self.log.debug("Processing mvLook for '{}'".format(instance))
nodes = set()
for node in instance:
@@ -281,13 +281,12 @@ class CollectMultiverseLookData(pyblish.api.InstancePlugin):
long=True)
nodes.update(nodes_of_interest)
- files = []
sets = {}
instance.data["resources"] = []
publishMipMap = instance.data["publishMipMap"]
for node in nodes:
- self.log.info("Getting resources for '{}'".format(node))
+ self.log.debug("Getting resources for '{}'".format(node))
# We know what nodes need to be collected, now we need to
# extract the materials overrides.
@@ -380,12 +379,12 @@ class CollectMultiverseLookData(pyblish.api.InstancePlugin):
if len(files) == 0:
self.log.error("No valid files found from node `%s`" % node)
- self.log.info("collection of resource done:")
- self.log.info(" - node: {}".format(node))
- self.log.info(" - attribute: {}".format(fname_attrib))
- self.log.info(" - source: {}".format(source))
- self.log.info(" - file: {}".format(files))
- self.log.info(" - color space: {}".format(color_space))
+ self.log.debug("collection of resource done:")
+ self.log.debug(" - node: {}".format(node))
+ self.log.debug(" - attribute: {}".format(fname_attrib))
+ self.log.debug(" - source: {}".format(source))
+ self.log.debug(" - file: {}".format(files))
+ self.log.debug(" - color space: {}".format(color_space))
# Define the resource
resource = {"node": node,
@@ -406,14 +405,14 @@ class CollectMultiverseLookData(pyblish.api.InstancePlugin):
extra_files = []
self.log.debug("Expecting MipMaps, going to look for them.")
for fname in files:
- self.log.info("Checking '{}' for mipmaps".format(fname))
+ self.log.debug("Checking '{}' for mipmaps".format(fname))
if is_mipmap(fname):
self.log.debug(" - file is already MipMap, skipping.")
continue
mipmap = get_mipmap(fname)
if mipmap:
- self.log.info(" mipmap found for '{}'".format(fname))
+ self.log.debug(" mipmap found for '{}'".format(fname))
extra_files.append(mipmap)
else:
self.log.warning(" no mipmap found for '{}'".format(fname))
diff --git a/openpype/hosts/maya/plugins/publish/collect_render.py b/openpype/hosts/maya/plugins/publish/collect_render.py
index c17a8789e4..82392f67bd 100644
--- a/openpype/hosts/maya/plugins/publish/collect_render.py
+++ b/openpype/hosts/maya/plugins/publish/collect_render.py
@@ -105,7 +105,7 @@ class CollectMayaRender(pyblish.api.InstancePlugin):
"family": cmds.getAttr("{}.family".format(s)),
}
)
- self.log.info(" -> attach render to: {}".format(s))
+ self.log.debug(" -> attach render to: {}".format(s))
layer_name = layer.name()
@@ -137,10 +137,10 @@ class CollectMayaRender(pyblish.api.InstancePlugin):
has_cameras = any(product.camera for product in render_products)
assert has_cameras, "No render cameras found."
- self.log.info("multipart: {}".format(
+ self.log.debug("multipart: {}".format(
multipart))
assert expected_files, "no file names were generated, this is a bug"
- self.log.info(
+ self.log.debug(
"expected files: {}".format(
json.dumps(expected_files, indent=4, sort_keys=True)
)
@@ -175,7 +175,7 @@ class CollectMayaRender(pyblish.api.InstancePlugin):
publish_meta_path = os.path.dirname(full_path)
aov_dict[aov_first_key] = full_paths
full_exp_files = [aov_dict]
- self.log.info(full_exp_files)
+ self.log.debug(full_exp_files)
if publish_meta_path is None:
raise KnownPublishError("Unable to detect any expected output "
@@ -227,7 +227,7 @@ class CollectMayaRender(pyblish.api.InstancePlugin):
if platform.system().lower() in ["linux", "darwin"]:
common_publish_meta_path = "/" + common_publish_meta_path
- self.log.info(
+ self.log.debug(
"Publish meta path: {}".format(common_publish_meta_path))
# Get layer specific settings, might be overrides
@@ -300,7 +300,7 @@ class CollectMayaRender(pyblish.api.InstancePlugin):
)
if rr_settings["enabled"]:
data["rrPathName"] = instance.data.get("rrPathName")
- self.log.info(data["rrPathName"])
+ self.log.debug(data["rrPathName"])
if self.sync_workfile_version:
data["version"] = context.data["version"]
diff --git a/openpype/hosts/maya/plugins/publish/collect_render_layer_aovs.py b/openpype/hosts/maya/plugins/publish/collect_render_layer_aovs.py
index c3dc31ead9..035c531a9b 100644
--- a/openpype/hosts/maya/plugins/publish/collect_render_layer_aovs.py
+++ b/openpype/hosts/maya/plugins/publish/collect_render_layer_aovs.py
@@ -37,7 +37,7 @@ class CollectRenderLayerAOVS(pyblish.api.InstancePlugin):
# Get renderer
renderer = instance.data["renderer"]
- self.log.info("Renderer found: {}".format(renderer))
+ self.log.debug("Renderer found: {}".format(renderer))
rp_node_types = {"vray": ["VRayRenderElement", "VRayRenderElementSet"],
"arnold": ["aiAOV"],
@@ -66,8 +66,8 @@ class CollectRenderLayerAOVS(pyblish.api.InstancePlugin):
result.append(render_pass)
- self.log.info("Found {} render elements / AOVs for "
- "'{}'".format(len(result), instance.data["subset"]))
+ self.log.debug("Found {} render elements / AOVs for "
+ "'{}'".format(len(result), instance.data["subset"]))
instance.data["renderPasses"] = result
diff --git a/openpype/hosts/maya/plugins/publish/collect_renderable_camera.py b/openpype/hosts/maya/plugins/publish/collect_renderable_camera.py
index d1c3cf3b2c..4443e2e0db 100644
--- a/openpype/hosts/maya/plugins/publish/collect_renderable_camera.py
+++ b/openpype/hosts/maya/plugins/publish/collect_renderable_camera.py
@@ -21,11 +21,12 @@ class CollectRenderableCamera(pyblish.api.InstancePlugin):
else:
layer = instance.data["renderlayer"]
- self.log.info("layer: {}".format(layer))
cameras = cmds.ls(type="camera", long=True)
- renderable = [c for c in cameras if
- get_attr_in_layer("%s.renderable" % c, layer)]
+ renderable = [cam for cam in cameras if
+ get_attr_in_layer("{}.renderable".format(cam), layer)]
- self.log.info("Found cameras %s: %s" % (len(renderable), renderable))
+ self.log.debug(
+ "Found renderable cameras %s: %s", len(renderable), renderable
+ )
instance.data["cameras"] = renderable
diff --git a/openpype/hosts/maya/plugins/publish/collect_rig_sets.py b/openpype/hosts/maya/plugins/publish/collect_rig_sets.py
new file mode 100644
index 0000000000..36a4211af1
--- /dev/null
+++ b/openpype/hosts/maya/plugins/publish/collect_rig_sets.py
@@ -0,0 +1,39 @@
+import pyblish.api
+from maya import cmds
+
+
+class CollectRigSets(pyblish.api.InstancePlugin):
+ """Ensure rig contains pipeline-critical content
+
+ Every rig must contain at least two object sets:
+ "controls_SET" - Set of all animatable controls
+ "out_SET" - Set of all cacheable meshes
+
+ """
+
+ order = pyblish.api.CollectorOrder + 0.05
+ label = "Collect Rig Sets"
+ hosts = ["maya"]
+ families = ["rig"]
+
+ accepted_output = ["mesh", "transform"]
+ accepted_controllers = ["transform"]
+
+ def process(self, instance):
+
+ # Find required sets by suffix
+ searching = {"controls_SET", "out_SET"}
+ found = {}
+ for node in cmds.ls(instance, exactType="objectSet"):
+ for suffix in searching:
+ if node.endswith(suffix):
+ found[suffix] = node
+ searching.remove(suffix)
+ break
+ if not searching:
+ break
+
+ self.log.debug("Found sets: {}".format(found))
+ rig_sets = instance.data.setdefault("rig_sets", {})
+ for name, objset in found.items():
+ rig_sets[name] = objset
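The suffix search in the new collector above removes each suffix from the search set once matched and stops early when everything is found. The same logic as a standalone helper (`find_sets_by_suffix` is illustrative; the copy via `list(searching)` avoids mutating the set mid-iteration, which the original sidesteps with an immediate `break`):

```python
def find_sets_by_suffix(nodes, suffixes):
    """Map each suffix to the first node name ending with it."""
    searching = set(suffixes)
    found = {}
    for node in nodes:
        for suffix in list(searching):  # copy: we mutate while matching
            if node.endswith(suffix):
                found[suffix] = node
                searching.remove(suffix)
                break
        if not searching:
            break  # all required sets located, stop scanning
    return found
```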
diff --git a/openpype/hosts/maya/plugins/publish/collect_unreal_staticmesh.py b/openpype/hosts/maya/plugins/publish/collect_unreal_staticmesh.py
index 79d0856fa0..03b6c4a188 100644
--- a/openpype/hosts/maya/plugins/publish/collect_unreal_staticmesh.py
+++ b/openpype/hosts/maya/plugins/publish/collect_unreal_staticmesh.py
@@ -19,7 +19,7 @@ class CollectUnrealStaticMesh(pyblish.api.InstancePlugin):
instance.data["geometryMembers"] = cmds.sets(
geometry_set, query=True)
- self.log.info("geometry: {}".format(
+ self.log.debug("geometry: {}".format(
pformat(instance.data.get("geometryMembers"))))
collision_set = [
@@ -29,7 +29,7 @@ class CollectUnrealStaticMesh(pyblish.api.InstancePlugin):
instance.data["collisionMembers"] = cmds.sets(
collision_set, query=True)
- self.log.info("collisions: {}".format(
+ self.log.debug("collisions: {}".format(
pformat(instance.data.get("collisionMembers"))))
frame = cmds.currentTime(query=True)
diff --git a/openpype/hosts/maya/plugins/publish/collect_xgen.py b/openpype/hosts/maya/plugins/publish/collect_xgen.py
index 46968f7d1a..45648e1776 100644
--- a/openpype/hosts/maya/plugins/publish/collect_xgen.py
+++ b/openpype/hosts/maya/plugins/publish/collect_xgen.py
@@ -67,5 +67,5 @@ class CollectXgen(pyblish.api.InstancePlugin):
data["transfers"] = transfers
- self.log.info(data)
+ self.log.debug(data)
instance.data.update(data)
diff --git a/openpype/hosts/maya/plugins/publish/collect_yeti_cache.py b/openpype/hosts/maya/plugins/publish/collect_yeti_cache.py
index e6b5ca4260..4dcda29050 100644
--- a/openpype/hosts/maya/plugins/publish/collect_yeti_cache.py
+++ b/openpype/hosts/maya/plugins/publish/collect_yeti_cache.py
@@ -4,12 +4,23 @@ import pyblish.api
from openpype.hosts.maya.api import lib
-SETTINGS = {"renderDensity",
- "renderWidth",
- "renderLength",
- "increaseRenderBounds",
- "imageSearchPath",
- "cbId"}
+
+SETTINGS = {
+ # Preview
+ "displayOutput",
+ "colorR", "colorG", "colorB",
+ "viewportDensity",
+ "viewportWidth",
+ "viewportLength",
+ # Render attributes
+ "renderDensity",
+ "renderWidth",
+ "renderLength",
+ "increaseRenderBounds",
+ "imageSearchPath",
+ # Pipeline specific
+ "cbId"
+}
class CollectYetiCache(pyblish.api.InstancePlugin):
@@ -39,10 +50,6 @@ class CollectYetiCache(pyblish.api.InstancePlugin):
# Get yeti nodes and their transforms
yeti_shapes = cmds.ls(instance, type="pgYetiMaya")
for shape in yeti_shapes:
- shape_data = {"transform": None,
- "name": shape,
- "cbId": lib.get_id(shape),
- "attrs": None}
# Get specific node attributes
attr_data = {}
@@ -58,9 +65,12 @@ class CollectYetiCache(pyblish.api.InstancePlugin):
parent = cmds.listRelatives(shape, parent=True)[0]
transform_data = {"name": parent, "cbId": lib.get_id(parent)}
- # Store collected data
- shape_data["attrs"] = attr_data
- shape_data["transform"] = transform_data
+ shape_data = {
+ "transform": transform_data,
+ "name": shape,
+ "cbId": lib.get_id(shape),
+ "attrs": attr_data,
+ }
settings["nodes"].append(shape_data)
diff --git a/openpype/hosts/maya/plugins/publish/collect_yeti_rig.py b/openpype/hosts/maya/plugins/publish/collect_yeti_rig.py
index bc15edd9e0..df761cde13 100644
--- a/openpype/hosts/maya/plugins/publish/collect_yeti_rig.py
+++ b/openpype/hosts/maya/plugins/publish/collect_yeti_rig.py
@@ -119,7 +119,6 @@ class CollectYetiRig(pyblish.api.InstancePlugin):
texture_filenames = []
if image_search_paths:
-
# TODO: Somehow this uses OS environment path separator, `:` vs `;`
# Later on check whether this is pipeline OS cross-compatible.
image_search_paths = [p for p in
@@ -130,13 +129,13 @@ class CollectYetiRig(pyblish.api.InstancePlugin):
# List all related textures
texture_filenames = cmds.pgYetiCommand(node, listTextures=True)
- self.log.info("Found %i texture(s)" % len(texture_filenames))
+ self.log.debug("Found %i texture(s)" % len(texture_filenames))
# Get all reference nodes
reference_nodes = cmds.pgYetiGraph(node,
listNodes=True,
type="reference")
- self.log.info("Found %i reference node(s)" % len(reference_nodes))
+ self.log.debug("Found %i reference node(s)" % len(reference_nodes))
if texture_filenames and not image_search_paths:
raise ValueError("pgYetiMaya node '%s' is missing the path to the "
diff --git a/openpype/hosts/maya/plugins/publish/extract_arnold_scene_source.py b/openpype/hosts/maya/plugins/publish/extract_arnold_scene_source.py
index 102f0e46a2..46cc9090bb 100644
--- a/openpype/hosts/maya/plugins/publish/extract_arnold_scene_source.py
+++ b/openpype/hosts/maya/plugins/publish/extract_arnold_scene_source.py
@@ -100,7 +100,7 @@ class ExtractArnoldSceneSource(publish.Extractor):
instance.data["representations"].append(representation)
- self.log.info(
+ self.log.debug(
"Extracted instance {} to: {}".format(instance.name, staging_dir)
)
@@ -126,7 +126,7 @@ class ExtractArnoldSceneSource(publish.Extractor):
instance.data["representations"].append(representation)
def _extract(self, nodes, attribute_data, kwargs):
- self.log.info(
+ self.log.debug(
"Writing {} with:\n{}".format(kwargs["filename"], kwargs)
)
filenames = []
@@ -180,12 +180,12 @@ class ExtractArnoldSceneSource(publish.Extractor):
with lib.attribute_values(attribute_data):
with lib.maintained_selection():
- self.log.info(
+ self.log.debug(
"Writing: {}".format(duplicate_nodes)
)
cmds.select(duplicate_nodes, noExpand=True)
- self.log.info(
+ self.log.debug(
"Extracting ass sequence with: {}".format(kwargs)
)
@@ -194,6 +194,6 @@ class ExtractArnoldSceneSource(publish.Extractor):
for file in exported_files:
filenames.append(os.path.split(file)[1])
- self.log.info("Exported: {}".format(filenames))
+ self.log.debug("Exported: {}".format(filenames))
return filenames, nodes_by_id
diff --git a/openpype/hosts/maya/plugins/publish/extract_assembly.py b/openpype/hosts/maya/plugins/publish/extract_assembly.py
index 9b2978d192..86ffdcef24 100644
--- a/openpype/hosts/maya/plugins/publish/extract_assembly.py
+++ b/openpype/hosts/maya/plugins/publish/extract_assembly.py
@@ -27,7 +27,7 @@ class ExtractAssembly(publish.Extractor):
json_filename = "{}.json".format(instance.name)
json_path = os.path.join(staging_dir, json_filename)
- self.log.info("Dumping scene data for debugging ..")
+ self.log.debug("Dumping scene data for debugging ..")
with open(json_path, "w") as filepath:
json.dump(instance.data["scenedata"], filepath, ensure_ascii=False)
diff --git a/openpype/hosts/maya/plugins/publish/extract_camera_alembic.py b/openpype/hosts/maya/plugins/publish/extract_camera_alembic.py
index aa445a0387..4ec1399df4 100644
--- a/openpype/hosts/maya/plugins/publish/extract_camera_alembic.py
+++ b/openpype/hosts/maya/plugins/publish/extract_camera_alembic.py
@@ -94,7 +94,7 @@ class ExtractCameraAlembic(publish.Extractor):
"Attributes to bake must be specified as a list"
)
for attr in self.bake_attributes:
- self.log.info("Adding {} attribute".format(attr))
+ self.log.debug("Adding {} attribute".format(attr))
job_str += " -attr {0}".format(attr)
with lib.evaluation("off"):
@@ -112,5 +112,5 @@ class ExtractCameraAlembic(publish.Extractor):
}
instance.data["representations"].append(representation)
- self.log.info("Extracted instance '{0}' to: {1}".format(
+ self.log.debug("Extracted instance '{0}' to: {1}".format(
instance.name, path))
diff --git a/openpype/hosts/maya/plugins/publish/extract_camera_mayaScene.py b/openpype/hosts/maya/plugins/publish/extract_camera_mayaScene.py
index 30e6b89f2f..a50a8f0dfa 100644
--- a/openpype/hosts/maya/plugins/publish/extract_camera_mayaScene.py
+++ b/openpype/hosts/maya/plugins/publish/extract_camera_mayaScene.py
@@ -111,7 +111,7 @@ class ExtractCameraMayaScene(publish.Extractor):
for family in self.families:
try:
self.scene_type = ext_mapping[family]
- self.log.info(
+ self.log.debug(
"Using {} as scene type".format(self.scene_type))
break
except KeyError:
@@ -151,7 +151,7 @@ class ExtractCameraMayaScene(publish.Extractor):
with lib.evaluation("off"):
with lib.suspended_refresh():
if bake_to_worldspace:
- self.log.info(
+ self.log.debug(
"Performing camera bakes: {}".format(transform))
baked = lib.bake_to_world_space(
transform,
@@ -186,7 +186,7 @@ class ExtractCameraMayaScene(publish.Extractor):
unlock(plug)
cmds.setAttr(plug, value)
- self.log.info("Performing extraction..")
+ self.log.debug("Performing extraction..")
cmds.select(cmds.ls(members, dag=True,
shapes=True, long=True), noExpand=True)
cmds.file(path,
@@ -217,5 +217,5 @@ class ExtractCameraMayaScene(publish.Extractor):
}
instance.data["representations"].append(representation)
- self.log.info("Extracted instance '{0}' to: {1}".format(
+ self.log.debug("Extracted instance '{0}' to: {1}".format(
instance.name, path))
diff --git a/openpype/hosts/maya/plugins/publish/extract_fbx.py b/openpype/hosts/maya/plugins/publish/extract_fbx.py
index 9af3acef65..4f7eaf57bf 100644
--- a/openpype/hosts/maya/plugins/publish/extract_fbx.py
+++ b/openpype/hosts/maya/plugins/publish/extract_fbx.py
@@ -33,11 +33,11 @@ class ExtractFBX(publish.Extractor):
# to format it into a string in a mel expression
path = path.replace('\\', '/')
- self.log.info("Extracting FBX to: {0}".format(path))
+ self.log.debug("Extracting FBX to: {0}".format(path))
members = instance.data["setMembers"]
- self.log.info("Members: {0}".format(members))
- self.log.info("Instance: {0}".format(instance[:]))
+ self.log.debug("Members: {0}".format(members))
+ self.log.debug("Instance: {0}".format(instance[:]))
fbx_exporter.set_options_from_instance(instance)
@@ -58,4 +58,4 @@ class ExtractFBX(publish.Extractor):
}
instance.data["representations"].append(representation)
- self.log.info("Extract FBX successful to: {0}".format(path))
+ self.log.debug("Extract FBX successful to: {0}".format(path))
diff --git a/openpype/hosts/maya/plugins/publish/extract_gltf.py b/openpype/hosts/maya/plugins/publish/extract_gltf.py
index ac258ffb3d..6d72d28525 100644
--- a/openpype/hosts/maya/plugins/publish/extract_gltf.py
+++ b/openpype/hosts/maya/plugins/publish/extract_gltf.py
@@ -20,14 +20,10 @@ class ExtractGLB(publish.Extractor):
filename = "{0}.glb".format(instance.name)
path = os.path.join(staging_dir, filename)
- self.log.info("Extracting GLB to: {}".format(path))
-
cmds.loadPlugin("maya2glTF", quiet=True)
nodes = instance[:]
- self.log.info("Instance: {0}".format(nodes))
-
start_frame = instance.data('frameStart') or \
int(cmds.playbackOptions(query=True,
animationStartTime=True))# noqa
@@ -48,6 +44,7 @@ class ExtractGLB(publish.Extractor):
"vno": True # visibleNodeOnly
}
+ self.log.debug("Extracting GLB to: {}".format(path))
with lib.maintained_selection():
cmds.select(nodes, hi=True, noExpand=True)
extract_gltf(staging_dir,
@@ -65,4 +62,4 @@ class ExtractGLB(publish.Extractor):
}
instance.data["representations"].append(representation)
- self.log.info("Extract GLB successful to: {0}".format(path))
+ self.log.debug("Extract GLB successful to: {0}".format(path))
diff --git a/openpype/hosts/maya/plugins/publish/extract_gpu_cache.py b/openpype/hosts/maya/plugins/publish/extract_gpu_cache.py
index 422f5ad019..16436c6fe4 100644
--- a/openpype/hosts/maya/plugins/publish/extract_gpu_cache.py
+++ b/openpype/hosts/maya/plugins/publish/extract_gpu_cache.py
@@ -60,6 +60,6 @@ class ExtractGPUCache(publish.Extractor):
instance.data["representations"].append(representation)
- self.log.info(
+ self.log.debug(
"Extracted instance {} to: {}".format(instance.name, staging_dir)
)
diff --git a/openpype/hosts/maya/plugins/publish/extract_import_reference.py b/openpype/hosts/maya/plugins/publish/extract_import_reference.py
index 8bb82be9b6..1fdee28d0c 100644
--- a/openpype/hosts/maya/plugins/publish/extract_import_reference.py
+++ b/openpype/hosts/maya/plugins/publish/extract_import_reference.py
@@ -30,8 +30,8 @@ class ExtractImportReference(publish.Extractor,
tmp_format = "_tmp"
@classmethod
- def apply_settings(cls, project_setting, system_settings):
- cls.active = project_setting["deadline"]["publish"]["MayaSubmitDeadline"]["import_reference"] # noqa
+ def apply_settings(cls, project_settings):
+ cls.active = project_settings["deadline"]["publish"]["MayaSubmitDeadline"]["import_reference"] # noqa
def process(self, instance):
if not self.is_active(instance.data):
@@ -46,7 +46,7 @@ class ExtractImportReference(publish.Extractor,
for family in self.families:
try:
self.scene_type = ext_mapping[family]
- self.log.info(
+ self.log.debug(
"Using {} as scene type".format(self.scene_type))
break
@@ -69,7 +69,7 @@ class ExtractImportReference(publish.Extractor,
reference_path = os.path.join(dir_path, ref_scene_name)
tmp_path = os.path.dirname(current_name) + "/" + ref_scene_name
- self.log.info("Performing extraction..")
+ self.log.debug("Performing extraction..")
# This generates script for mayapy to take care of reference
# importing outside current session. It is passing current scene
@@ -111,7 +111,7 @@ print("*** Done")
# process until handles are closed by context manager.
with tempfile.TemporaryDirectory() as tmp_dir_name:
tmp_script_path = os.path.join(tmp_dir_name, "import_ref.py")
- self.log.info("Using script file: {}".format(tmp_script_path))
+ self.log.debug("Using script file: {}".format(tmp_script_path))
with open(tmp_script_path, "wt") as tmp:
tmp.write(script)
@@ -149,9 +149,9 @@ print("*** Done")
"stagingDir": os.path.dirname(current_name),
"outputName": "imported"
}
- self.log.info("%s" % ref_representation)
+ self.log.debug(ref_representation)
instance.data["representations"].append(ref_representation)
- self.log.info("Extracted instance '%s' to : '%s'" % (ref_scene_name,
- reference_path))
+ self.log.debug("Extracted instance '%s' to : '%s'" % (ref_scene_name,
+ reference_path))
diff --git a/openpype/hosts/maya/plugins/publish/extract_layout.py b/openpype/hosts/maya/plugins/publish/extract_layout.py
index bf5b4fc0e7..75920b44a2 100644
--- a/openpype/hosts/maya/plugins/publish/extract_layout.py
+++ b/openpype/hosts/maya/plugins/publish/extract_layout.py
@@ -23,7 +23,7 @@ class ExtractLayout(publish.Extractor):
stagingdir = self.staging_dir(instance)
# Perform extraction
- self.log.info("Performing extraction..")
+ self.log.debug("Performing extraction..")
if "representations" not in instance.data:
instance.data["representations"] = []
@@ -64,7 +64,7 @@ class ExtractLayout(publish.Extractor):
fields=["parent", "context.family"]
)
- self.log.info(representation)
+ self.log.debug(representation)
version_id = representation.get("parent")
family = representation.get("context").get("family")
@@ -159,5 +159,5 @@ class ExtractLayout(publish.Extractor):
}
instance.data["representations"].append(json_representation)
- self.log.info("Extracted instance '%s' to: %s",
- instance.name, json_representation)
+ self.log.debug("Extracted instance '%s' to: %s",
+ instance.name, json_representation)
diff --git a/openpype/hosts/maya/plugins/publish/extract_look.py b/openpype/hosts/maya/plugins/publish/extract_look.py
index b13568c781..3506027a1f 100644
--- a/openpype/hosts/maya/plugins/publish/extract_look.py
+++ b/openpype/hosts/maya/plugins/publish/extract_look.py
@@ -307,7 +307,7 @@ class MakeTX(TextureProcessor):
render_colorspace = color_management["rendering_space"]
- self.log.info("tx: converting colorspace {0} "
+ self.log.debug("tx: converting colorspace {0} "
"-> {1}".format(colorspace,
render_colorspace))
args.extend(["--colorconvert", colorspace, render_colorspace])
@@ -331,7 +331,7 @@ class MakeTX(TextureProcessor):
if not os.path.exists(resources_dir):
os.makedirs(resources_dir)
- self.log.info("Generating .tx file for %s .." % source)
+ self.log.debug("Generating .tx file for %s .." % source)
subprocess_args = maketx_args + [
"-v", # verbose
@@ -421,7 +421,7 @@ class ExtractLook(publish.Extractor):
for family in self.families:
try:
self.scene_type = ext_mapping[family]
- self.log.info(
+ self.log.debug(
"Using {} as scene type".format(self.scene_type))
break
except KeyError:
@@ -453,7 +453,7 @@ class ExtractLook(publish.Extractor):
relationships = lookdata["relationships"]
sets = list(relationships.keys())
if not sets:
- self.log.info("No sets found for the look")
+ self.log.debug("No sets found for the look")
return
# Specify texture processing executables to activate
@@ -485,7 +485,7 @@ class ExtractLook(publish.Extractor):
remap = results["attrRemap"]
# Extract in correct render layer
- self.log.info("Extracting look maya scene file: {}".format(maya_path))
+ self.log.debug("Extracting look maya scene file: {}".format(maya_path))
layer = instance.data.get("renderlayer", "defaultRenderLayer")
with lib.renderlayer(layer):
# TODO: Ensure membership edits don't become renderlayer overrides
@@ -511,12 +511,12 @@ class ExtractLook(publish.Extractor):
)
# Write the JSON data
- self.log.info("Extract json..")
data = {
"attributes": lookdata["attributes"],
"relationships": relationships
}
+ self.log.debug("Extracting json file: {}".format(json_path))
with open(json_path, "w") as f:
json.dump(data, f)
@@ -557,8 +557,8 @@ class ExtractLook(publish.Extractor):
# Source hash for the textures
instance.data["sourceHashes"] = hashes
- self.log.info("Extracted instance '%s' to: %s" % (instance.name,
- maya_path))
+ self.log.debug("Extracted instance '%s' to: %s" % (instance.name,
+ maya_path))
def _set_resource_result_colorspace(self, resource, colorspace):
"""Update resource resulting colorspace after texture processing"""
@@ -589,14 +589,13 @@ class ExtractLook(publish.Extractor):
resources = instance.data["resources"]
color_management = lib.get_color_management_preferences()
- # Temporary fix to NOT create hardlinks on windows machines
- if platform.system().lower() == "windows":
- self.log.info(
+ force_copy = instance.data.get("forceCopy", False)
+ if not force_copy and platform.system().lower() == "windows":
+ # Temporary fix to NOT create hardlinks on windows machines
+ self.log.warning(
"Forcing copy instead of hardlink due to issues on Windows..."
)
force_copy = True
- else:
- force_copy = instance.data.get("forceCopy", False)
destinations_cache = {}
@@ -671,11 +670,11 @@ class ExtractLook(publish.Extractor):
destination = get_resource_destination_cached(source)
if force_copy or texture_result.transfer_mode == COPY:
transfers.append((source, destination))
- self.log.info('file will be copied {} -> {}'.format(
+ self.log.debug('file will be copied {} -> {}'.format(
source, destination))
elif texture_result.transfer_mode == HARDLINK:
hardlinks.append((source, destination))
- self.log.info('file will be hardlinked {} -> {}'.format(
+ self.log.debug('file will be hardlinked {} -> {}'.format(
source, destination))
# Store the hashes from hash to destination to include in the
@@ -707,7 +706,7 @@ class ExtractLook(publish.Extractor):
color_space_attr = "{}.colorSpace".format(node)
remap[color_space_attr] = resource["result_color_space"]
- self.log.info("Finished remapping destinations ...")
+ self.log.debug("Finished remapping destinations ...")
return {
"fileTransfers": transfers,
@@ -815,8 +814,8 @@ class ExtractLook(publish.Extractor):
if not processed_result:
raise RuntimeError("Texture Processor {} returned "
"no result.".format(processor))
- self.log.info("Generated processed "
- "texture: {}".format(processed_result.path))
+ self.log.debug("Generated processed "
+ "texture: {}".format(processed_result.path))
# TODO: Currently all processors force copy instead of allowing
# hardlinks using source hashes. This should be refactored
@@ -827,7 +826,7 @@ class ExtractLook(publish.Extractor):
if not force_copy:
existing = self._get_existing_hashed_texture(filepath)
if existing:
- self.log.info("Found hash in database, preparing hardlink..")
+ self.log.debug("Found hash in database, preparing hardlink..")
return TextureResult(
path=filepath,
file_hash=texture_hash,
diff --git a/openpype/hosts/maya/plugins/publish/extract_maya_scene_raw.py b/openpype/hosts/maya/plugins/publish/extract_maya_scene_raw.py
index d87d6c208a..ab170fe48c 100644
--- a/openpype/hosts/maya/plugins/publish/extract_maya_scene_raw.py
+++ b/openpype/hosts/maya/plugins/publish/extract_maya_scene_raw.py
@@ -34,7 +34,7 @@ class ExtractMayaSceneRaw(publish.Extractor):
for family in self.families:
try:
self.scene_type = ext_mapping[family]
- self.log.info(
+ self.log.debug(
"Using {} as scene type".format(self.scene_type))
break
except KeyError:
@@ -63,7 +63,7 @@ class ExtractMayaSceneRaw(publish.Extractor):
selection += self._get_loaded_containers(members)
# Perform extraction
- self.log.info("Performing extraction ...")
+ self.log.debug("Performing extraction ...")
with maintained_selection():
cmds.select(selection, noExpand=True)
cmds.file(path,
@@ -87,7 +87,8 @@ class ExtractMayaSceneRaw(publish.Extractor):
}
instance.data["representations"].append(representation)
- self.log.info("Extracted instance '%s' to: %s" % (instance.name, path))
+ self.log.debug("Extracted instance '%s' to: %s" % (instance.name,
+ path))
@staticmethod
def _get_loaded_containers(members):
diff --git a/openpype/hosts/maya/plugins/publish/extract_model.py b/openpype/hosts/maya/plugins/publish/extract_model.py
index 5137dffd94..29c952ebbc 100644
--- a/openpype/hosts/maya/plugins/publish/extract_model.py
+++ b/openpype/hosts/maya/plugins/publish/extract_model.py
@@ -44,7 +44,7 @@ class ExtractModel(publish.Extractor,
for family in self.families:
try:
self.scene_type = ext_mapping[family]
- self.log.info(
+ self.log.debug(
"Using {} as scene type".format(self.scene_type))
break
except KeyError:
@@ -56,7 +56,7 @@ class ExtractModel(publish.Extractor,
path = os.path.join(stagingdir, filename)
# Perform extraction
- self.log.info("Performing extraction ...")
+ self.log.debug("Performing extraction ...")
# Get only the shape contents we need in such a way that we avoid
# taking along intermediateObjects
@@ -102,4 +102,5 @@ class ExtractModel(publish.Extractor,
}
instance.data["representations"].append(representation)
- self.log.info("Extracted instance '%s' to: %s" % (instance.name, path))
+ self.log.debug("Extracted instance '%s' to: %s" % (instance.name,
+ path))
diff --git a/openpype/hosts/maya/plugins/publish/extract_multiverse_look.py b/openpype/hosts/maya/plugins/publish/extract_multiverse_look.py
index 6fe7cf0d55..c2bebeaee6 100644
--- a/openpype/hosts/maya/plugins/publish/extract_multiverse_look.py
+++ b/openpype/hosts/maya/plugins/publish/extract_multiverse_look.py
@@ -101,10 +101,10 @@ class ExtractMultiverseLook(publish.Extractor):
# Parse export options
options = self.default_options
- self.log.info("Export options: {0}".format(options))
+ self.log.debug("Export options: {0}".format(options))
# Perform extraction
- self.log.info("Performing extraction ...")
+ self.log.debug("Performing extraction ...")
with maintained_selection():
members = instance.data("setMembers")
@@ -114,7 +114,7 @@ class ExtractMultiverseLook(publish.Extractor):
type="mvUsdCompoundShape",
noIntermediate=True,
long=True)
- self.log.info('Collected object {}'.format(members))
+ self.log.debug('Collected object {}'.format(members))
if len(members) > 1:
self.log.error('More than one member: {}'.format(members))
@@ -153,5 +153,5 @@ class ExtractMultiverseLook(publish.Extractor):
}
instance.data["representations"].append(representation)
- self.log.info("Extracted instance {} to {}".format(
+ self.log.debug("Extracted instance {} to {}".format(
instance.name, file_path))
diff --git a/openpype/hosts/maya/plugins/publish/extract_multiverse_usd.py b/openpype/hosts/maya/plugins/publish/extract_multiverse_usd.py
index 4399eacda1..17d5891e59 100644
--- a/openpype/hosts/maya/plugins/publish/extract_multiverse_usd.py
+++ b/openpype/hosts/maya/plugins/publish/extract_multiverse_usd.py
@@ -150,7 +150,6 @@ class ExtractMultiverseUsd(publish.Extractor):
return options
def get_default_options(self):
- self.log.info("ExtractMultiverseUsd get_default_options")
return self.default_options
def filter_members(self, members):
@@ -173,19 +172,19 @@ class ExtractMultiverseUsd(publish.Extractor):
# Parse export options
options = self.get_default_options()
options = self.parse_overrides(instance, options)
- self.log.info("Export options: {0}".format(options))
+ self.log.debug("Export options: {0}".format(options))
# Perform extraction
- self.log.info("Performing extraction ...")
+ self.log.debug("Performing extraction ...")
with maintained_selection():
members = instance.data("setMembers")
- self.log.info('Collected objects: {}'.format(members))
+ self.log.debug('Collected objects: {}'.format(members))
members = self.filter_members(members)
if not members:
self.log.error('No members!')
return
- self.log.info(' - filtered: {}'.format(members))
+ self.log.debug(' - filtered: {}'.format(members))
import multiverse
@@ -229,7 +228,7 @@ class ExtractMultiverseUsd(publish.Extractor):
self.log.debug(" - {}={}".format(key, value))
setattr(asset_write_opts, key, value)
- self.log.info('WriteAsset: {} / {}'.format(file_path, members))
+ self.log.debug('WriteAsset: {} / {}'.format(file_path, members))
multiverse.WriteAsset(file_path, members, asset_write_opts)
if "representations" not in instance.data:
@@ -243,7 +242,7 @@ class ExtractMultiverseUsd(publish.Extractor):
}
instance.data["representations"].append(representation)
- self.log.info("Extracted instance {} to {}".format(
+ self.log.debug("Extracted instance {} to {}".format(
instance.name, file_path))
diff --git a/openpype/hosts/maya/plugins/publish/extract_multiverse_usd_comp.py b/openpype/hosts/maya/plugins/publish/extract_multiverse_usd_comp.py
index a62729c198..7966c4fa93 100644
--- a/openpype/hosts/maya/plugins/publish/extract_multiverse_usd_comp.py
+++ b/openpype/hosts/maya/plugins/publish/extract_multiverse_usd_comp.py
@@ -105,14 +105,14 @@ class ExtractMultiverseUsdComposition(publish.Extractor):
# Parse export options
options = self.default_options
options = self.parse_overrides(instance, options)
- self.log.info("Export options: {0}".format(options))
+ self.log.debug("Export options: {0}".format(options))
# Perform extraction
- self.log.info("Performing extraction ...")
+ self.log.debug("Performing extraction ...")
with maintained_selection():
members = instance.data("setMembers")
- self.log.info('Collected object {}'.format(members))
+ self.log.debug('Collected object {}'.format(members))
import multiverse
@@ -175,5 +175,5 @@ class ExtractMultiverseUsdComposition(publish.Extractor):
}
instance.data["representations"].append(representation)
- self.log.info("Extracted instance {} to {}".format(
- instance.name, file_path))
+ self.log.debug("Extracted instance {} to {}".format(instance.name,
+ file_path))
diff --git a/openpype/hosts/maya/plugins/publish/extract_multiverse_usd_over.py b/openpype/hosts/maya/plugins/publish/extract_multiverse_usd_over.py
index cf610ac6b4..e4a97db6e4 100644
--- a/openpype/hosts/maya/plugins/publish/extract_multiverse_usd_over.py
+++ b/openpype/hosts/maya/plugins/publish/extract_multiverse_usd_over.py
@@ -87,10 +87,10 @@ class ExtractMultiverseUsdOverride(publish.Extractor):
# Parse export options
options = self.default_options
- self.log.info("Export options: {0}".format(options))
+ self.log.debug("Export options: {0}".format(options))
# Perform extraction
- self.log.info("Performing extraction ...")
+ self.log.debug("Performing extraction ...")
with maintained_selection():
members = instance.data("setMembers")
@@ -100,7 +100,7 @@ class ExtractMultiverseUsdOverride(publish.Extractor):
type="mvUsdCompoundShape",
noIntermediate=True,
long=True)
- self.log.info("Collected object {}".format(members))
+ self.log.debug("Collected object {}".format(members))
# TODO: Deal with asset, composition, override with options.
import multiverse
@@ -153,5 +153,5 @@ class ExtractMultiverseUsdOverride(publish.Extractor):
}
instance.data["representations"].append(representation)
- self.log.info("Extracted instance {} to {}".format(
+ self.log.debug("Extracted instance {} to {}".format(
instance.name, file_path))
diff --git a/openpype/hosts/maya/plugins/publish/extract_obj.py b/openpype/hosts/maya/plugins/publish/extract_obj.py
index 518b0f0ff8..ca94130d09 100644
--- a/openpype/hosts/maya/plugins/publish/extract_obj.py
+++ b/openpype/hosts/maya/plugins/publish/extract_obj.py
@@ -30,7 +30,7 @@ class ExtractObj(publish.Extractor):
# The export requires forward slashes because we need to
# format it into a string in a mel expression
- self.log.info("Extracting OBJ to: {0}".format(path))
+ self.log.debug("Extracting OBJ to: {0}".format(path))
members = instance.data("setMembers")
members = cmds.ls(members,
@@ -39,8 +39,8 @@ class ExtractObj(publish.Extractor):
type=("mesh", "nurbsCurve"),
noIntermediate=True,
long=True)
- self.log.info("Members: {0}".format(members))
- self.log.info("Instance: {0}".format(instance[:]))
+ self.log.debug("Members: {0}".format(members))
+ self.log.debug("Instance: {0}".format(instance[:]))
if not cmds.pluginInfo('objExport', query=True, loaded=True):
cmds.loadPlugin('objExport')
@@ -74,4 +74,4 @@ class ExtractObj(publish.Extractor):
}
instance.data["representations"].append(representation)
- self.log.info("Extract OBJ successful to: {0}".format(path))
+ self.log.debug("Extract OBJ successful to: {0}".format(path))
diff --git a/openpype/hosts/maya/plugins/publish/extract_playblast.py b/openpype/hosts/maya/plugins/publish/extract_playblast.py
index 9580c13841..cfab239da3 100644
--- a/openpype/hosts/maya/plugins/publish/extract_playblast.py
+++ b/openpype/hosts/maya/plugins/publish/extract_playblast.py
@@ -48,7 +48,7 @@ class ExtractPlayblast(publish.Extractor):
self.log.debug("playblast path {}".format(path))
def process(self, instance):
- self.log.info("Extracting capture..")
+ self.log.debug("Extracting capture..")
# get scene fps
fps = instance.data.get("fps") or instance.context.data.get("fps")
@@ -62,7 +62,7 @@ class ExtractPlayblast(publish.Extractor):
if end is None:
end = cmds.playbackOptions(query=True, animationEndTime=True)
- self.log.info("start: {}, end: {}".format(start, end))
+ self.log.debug("start: {}, end: {}".format(start, end))
# get cameras
camera = instance.data["review_camera"]
@@ -119,7 +119,7 @@ class ExtractPlayblast(publish.Extractor):
filename = "{0}".format(instance.name)
path = os.path.join(stagingdir, filename)
- self.log.info("Outputting images to %s" % path)
+ self.log.debug("Outputting images to %s" % path)
preset["filename"] = path
preset["overwrite"] = True
@@ -237,7 +237,7 @@ class ExtractPlayblast(publish.Extractor):
self.log.debug("collection head {}".format(filebase))
if filebase in filename:
frame_collection = collection
- self.log.info(
+ self.log.debug(
"we found collection of interest {}".format(
str(frame_collection)))
diff --git a/openpype/hosts/maya/plugins/publish/extract_pointcache.py b/openpype/hosts/maya/plugins/publish/extract_pointcache.py
index 9537a11ee4..5530446e3d 100644
--- a/openpype/hosts/maya/plugins/publish/extract_pointcache.py
+++ b/openpype/hosts/maya/plugins/publish/extract_pointcache.py
@@ -109,11 +109,11 @@ class ExtractAlembic(publish.Extractor):
instance.context.data["cleanupFullPaths"].append(path)
- self.log.info("Extracted {} to {}".format(instance, dirname))
+ self.log.debug("Extracted {} to {}".format(instance, dirname))
# Extract proxy.
if not instance.data.get("proxy"):
- self.log.info("No proxy nodes found. Skipping proxy extraction.")
+ self.log.debug("No proxy nodes found. Skipping proxy extraction.")
return
path = path.replace(".abc", "_proxy.abc")
diff --git a/openpype/hosts/maya/plugins/publish/extract_proxy_abc.py b/openpype/hosts/maya/plugins/publish/extract_proxy_abc.py
index cf6351fdca..921ee44a24 100644
--- a/openpype/hosts/maya/plugins/publish/extract_proxy_abc.py
+++ b/openpype/hosts/maya/plugins/publish/extract_proxy_abc.py
@@ -32,7 +32,7 @@ class ExtractProxyAlembic(publish.Extractor):
attr_prefixes = instance.data.get("attrPrefix", "").split(";")
attr_prefixes = [value for value in attr_prefixes if value.strip()]
- self.log.info("Extracting Proxy Alembic..")
+ self.log.debug("Extracting Proxy Alembic..")
dirname = self.staging_dir(instance)
filename = "{name}.abc".format(**instance.data)
@@ -82,7 +82,7 @@ class ExtractProxyAlembic(publish.Extractor):
instance.context.data["cleanupFullPaths"].append(path)
- self.log.info("Extracted {} to {}".format(instance, dirname))
+ self.log.debug("Extracted {} to {}".format(instance, dirname))
# remove the bounding box
bbox_master = cmds.ls("bbox_grp")
cmds.delete(bbox_master)
diff --git a/openpype/hosts/maya/plugins/publish/extract_redshift_proxy.py b/openpype/hosts/maya/plugins/publish/extract_redshift_proxy.py
index 834b335fc5..3868270b79 100644
--- a/openpype/hosts/maya/plugins/publish/extract_redshift_proxy.py
+++ b/openpype/hosts/maya/plugins/publish/extract_redshift_proxy.py
@@ -59,7 +59,7 @@ class ExtractRedshiftProxy(publish.Extractor):
# vertex_colors = instance.data.get("vertexColors", False)
# Write out rs file
- self.log.info("Writing: '%s'" % file_path)
+ self.log.debug("Writing: '%s'" % file_path)
with maintained_selection():
cmds.select(instance.data["setMembers"], noExpand=True)
cmds.file(file_path,
@@ -82,5 +82,5 @@ class ExtractRedshiftProxy(publish.Extractor):
}
instance.data["representations"].append(representation)
- self.log.info("Extracted instance '%s' to: %s"
- % (instance.name, staging_dir))
+ self.log.debug("Extracted instance '%s' to: %s"
+ % (instance.name, staging_dir))
diff --git a/openpype/hosts/maya/plugins/publish/extract_rendersetup.py b/openpype/hosts/maya/plugins/publish/extract_rendersetup.py
index 5970c038a4..7e21f5282e 100644
--- a/openpype/hosts/maya/plugins/publish/extract_rendersetup.py
+++ b/openpype/hosts/maya/plugins/publish/extract_rendersetup.py
@@ -37,5 +37,5 @@ class ExtractRenderSetup(publish.Extractor):
}
instance.data["representations"].append(representation)
- self.log.info(
+ self.log.debug(
"Extracted instance '%s' to: %s" % (instance.name, json_path))
diff --git a/openpype/hosts/maya/plugins/publish/extract_rig.py b/openpype/hosts/maya/plugins/publish/extract_rig.py
index be57b9de07..1ffc9a7dae 100644
--- a/openpype/hosts/maya/plugins/publish/extract_rig.py
+++ b/openpype/hosts/maya/plugins/publish/extract_rig.py
@@ -27,7 +27,7 @@ class ExtractRig(publish.Extractor):
for family in self.families:
try:
self.scene_type = ext_mapping[family]
- self.log.info(
+ self.log.debug(
"Using '.{}' as scene type".format(self.scene_type))
break
except AttributeError:
@@ -39,7 +39,7 @@ class ExtractRig(publish.Extractor):
path = os.path.join(dir_path, filename)
# Perform extraction
- self.log.info("Performing extraction ...")
+ self.log.debug("Performing extraction ...")
with maintained_selection():
cmds.select(instance, noExpand=True)
cmds.file(path,
@@ -63,4 +63,4 @@ class ExtractRig(publish.Extractor):
}
instance.data["representations"].append(representation)
- self.log.info("Extracted instance '%s' to: %s" % (instance.name, path))
+ self.log.debug("Extracted instance '%s' to: %s", instance.name, path)
diff --git a/openpype/hosts/maya/plugins/publish/extract_thumbnail.py b/openpype/hosts/maya/plugins/publish/extract_thumbnail.py
index 4160ac4cb2..e44204cae0 100644
--- a/openpype/hosts/maya/plugins/publish/extract_thumbnail.py
+++ b/openpype/hosts/maya/plugins/publish/extract_thumbnail.py
@@ -24,7 +24,7 @@ class ExtractThumbnail(publish.Extractor):
families = ["review"]
def process(self, instance):
- self.log.info("Extracting capture..")
+ self.log.debug("Extracting capture..")
camera = instance.data["review_camera"]
@@ -96,7 +96,7 @@ class ExtractThumbnail(publish.Extractor):
filename = "{0}".format(instance.name)
path = os.path.join(dst_staging, filename)
- self.log.info("Outputting images to %s" % path)
+ self.log.debug("Outputting images to %s" % path)
preset["filename"] = path
preset["overwrite"] = True
@@ -159,7 +159,7 @@ class ExtractThumbnail(publish.Extractor):
_, thumbnail = os.path.split(playblast)
- self.log.info("file list {}".format(thumbnail))
+ self.log.debug("file list {}".format(thumbnail))
if "representations" not in instance.data:
instance.data["representations"] = []
diff --git a/openpype/hosts/maya/plugins/publish/extract_unreal_skeletalmesh_abc.py b/openpype/hosts/maya/plugins/publish/extract_unreal_skeletalmesh_abc.py
index 4a797eb462..9c2f55a1ef 100644
--- a/openpype/hosts/maya/plugins/publish/extract_unreal_skeletalmesh_abc.py
+++ b/openpype/hosts/maya/plugins/publish/extract_unreal_skeletalmesh_abc.py
@@ -57,9 +57,9 @@ class ExtractUnrealSkeletalMeshAbc(publish.Extractor):
# to format it into a string in a mel expression
path = path.replace('\\', '/')
- self.log.info("Extracting ABC to: {0}".format(path))
- self.log.info("Members: {0}".format(nodes))
- self.log.info("Instance: {0}".format(instance[:]))
+ self.log.debug("Extracting ABC to: {0}".format(path))
+ self.log.debug("Members: {0}".format(nodes))
+ self.log.debug("Instance: {0}".format(instance[:]))
options = {
"step": instance.data.get("step", 1.0),
@@ -74,7 +74,7 @@ class ExtractUnrealSkeletalMeshAbc(publish.Extractor):
"worldSpace": instance.data.get("worldSpace", True)
}
- self.log.info("Options: {}".format(options))
+ self.log.debug("Options: {}".format(options))
if int(cmds.about(version=True)) >= 2017:
# Since Maya 2017 alembic supports multiple uv sets - write them.
@@ -105,4 +105,4 @@ class ExtractUnrealSkeletalMeshAbc(publish.Extractor):
}
instance.data["representations"].append(representation)
- self.log.info("Extract ABC successful to: {0}".format(path))
+ self.log.debug("Extract ABC successful to: {0}".format(path))
diff --git a/openpype/hosts/maya/plugins/publish/extract_unreal_skeletalmesh_fbx.py b/openpype/hosts/maya/plugins/publish/extract_unreal_skeletalmesh_fbx.py
index b162ce47f7..96175a07d7 100644
--- a/openpype/hosts/maya/plugins/publish/extract_unreal_skeletalmesh_fbx.py
+++ b/openpype/hosts/maya/plugins/publish/extract_unreal_skeletalmesh_fbx.py
@@ -46,9 +46,9 @@ class ExtractUnrealSkeletalMeshFbx(publish.Extractor):
# to format it into a string in a mel expression
path = path.replace('\\', '/')
- self.log.info("Extracting FBX to: {0}".format(path))
- self.log.info("Members: {0}".format(to_extract))
- self.log.info("Instance: {0}".format(instance[:]))
+ self.log.debug("Extracting FBX to: {0}".format(path))
+ self.log.debug("Members: {0}".format(to_extract))
+ self.log.debug("Instance: {0}".format(instance[:]))
fbx_exporter.set_options_from_instance(instance)
@@ -70,7 +70,7 @@ class ExtractUnrealSkeletalMeshFbx(publish.Extractor):
renamed_to_extract.append("|".join(node_path))
with renamed(original_parent, parent_node):
- self.log.info("Extracting: {}".format(renamed_to_extract, path))
+ self.log.debug("Extracting: {} to {}".format(renamed_to_extract, path))
fbx_exporter.export(renamed_to_extract, path)
if "representations" not in instance.data:
@@ -84,4 +84,4 @@ class ExtractUnrealSkeletalMeshFbx(publish.Extractor):
}
instance.data["representations"].append(representation)
- self.log.info("Extract FBX successful to: {0}".format(path))
+ self.log.debug("Extract FBX successful to: {0}".format(path))
diff --git a/openpype/hosts/maya/plugins/publish/extract_unreal_staticmesh.py b/openpype/hosts/maya/plugins/publish/extract_unreal_staticmesh.py
index 44f0615a27..26ab0827e4 100644
--- a/openpype/hosts/maya/plugins/publish/extract_unreal_staticmesh.py
+++ b/openpype/hosts/maya/plugins/publish/extract_unreal_staticmesh.py
@@ -37,15 +37,15 @@ class ExtractUnrealStaticMesh(publish.Extractor):
# to format it into a string in a mel expression
path = path.replace('\\', '/')
- self.log.info("Extracting FBX to: {0}".format(path))
- self.log.info("Members: {0}".format(members))
- self.log.info("Instance: {0}".format(instance[:]))
+ self.log.debug("Extracting FBX to: {0}".format(path))
+ self.log.debug("Members: {0}".format(members))
+ self.log.debug("Instance: {0}".format(instance[:]))
fbx_exporter.set_options_from_instance(instance)
with maintained_selection():
with parent_nodes(members):
- self.log.info("Un-parenting: {}".format(members))
+ self.log.debug("Un-parenting: {}".format(members))
fbx_exporter.export(members, path)
if "representations" not in instance.data:
@@ -59,4 +59,4 @@ class ExtractUnrealStaticMesh(publish.Extractor):
}
instance.data["representations"].append(representation)
- self.log.info("Extract FBX successful to: {0}".format(path))
+ self.log.debug("Extract FBX successful to: {0}".format(path))
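Each extractor above ends by appending a representation dict to the instance. A sketch of that shared pattern, with a plain dict standing in for a pyblish instance's `.data` and illustrative values:

```python
# `instance_data` is a stand-in for `instance.data` on a pyblish instance.
instance_data = {}

representation = {
    "name": "fbx",
    "ext": "fbx",
    "files": "staticmesh.fbx",
    "stagingDir": "/tmp/staging",
}

# Guard against the key missing before appending, as each plug-in does.
if "representations" not in instance_data:
    instance_data["representations"] = []
instance_data["representations"].append(representation)
```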
diff --git a/openpype/hosts/maya/plugins/publish/extract_vrayproxy.py b/openpype/hosts/maya/plugins/publish/extract_vrayproxy.py
index df16c6c357..21dfcfffc5 100644
--- a/openpype/hosts/maya/plugins/publish/extract_vrayproxy.py
+++ b/openpype/hosts/maya/plugins/publish/extract_vrayproxy.py
@@ -43,7 +43,7 @@ class ExtractVRayProxy(publish.Extractor):
vertex_colors = instance.data.get("vertexColors", False)
# Write out vrmesh file
- self.log.info("Writing: '%s'" % file_path)
+ self.log.debug("Writing: '%s'" % file_path)
with maintained_selection():
cmds.select(instance.data["setMembers"], noExpand=True)
cmds.vrayCreateProxy(exportType=1,
@@ -68,5 +68,5 @@ class ExtractVRayProxy(publish.Extractor):
}
instance.data["representations"].append(representation)
- self.log.info("Extracted instance '%s' to: %s"
- % (instance.name, staging_dir))
+ self.log.debug("Extracted instance '%s' to: %s"
+ % (instance.name, staging_dir))
diff --git a/openpype/hosts/maya/plugins/publish/extract_vrayscene.py b/openpype/hosts/maya/plugins/publish/extract_vrayscene.py
index 8442df1611..b0615149a9 100644
--- a/openpype/hosts/maya/plugins/publish/extract_vrayscene.py
+++ b/openpype/hosts/maya/plugins/publish/extract_vrayscene.py
@@ -20,13 +20,13 @@ class ExtractVrayscene(publish.Extractor):
def process(self, instance):
"""Plugin entry point."""
if instance.data.get("exportOnFarm"):
- self.log.info("vrayscenes will be exported on farm.")
+ self.log.debug("vrayscenes will be exported on farm.")
raise NotImplementedError(
"exporting vrayscenes is not implemented")
# handle sequence
if instance.data.get("vraySceneMultipleFiles"):
- self.log.info("vrayscenes will be exported on farm.")
+ self.log.debug("vrayscenes will be exported on farm.")
raise NotImplementedError(
"exporting vrayscene sequences not implemented yet")
@@ -40,7 +40,6 @@ class ExtractVrayscene(publish.Extractor):
layer_name = instance.data.get("layer")
staging_dir = self.staging_dir(instance)
- self.log.info("staging: {}".format(staging_dir))
template = cmds.getAttr("{}.vrscene_filename".format(node))
start_frame = instance.data.get(
"frameStartHandle") if instance.data.get(
@@ -56,21 +55,21 @@ class ExtractVrayscene(publish.Extractor):
staging_dir, "vrayscene", *formatted_name.split("/"))
# Write out vrscene file
- self.log.info("Writing: '%s'" % file_path)
+ self.log.debug("Writing: '%s'" % file_path)
with maintained_selection():
if "*" not in instance.data["setMembers"]:
- self.log.info(
+ self.log.debug(
"Exporting: {}".format(instance.data["setMembers"]))
set_members = instance.data["setMembers"]
cmds.select(set_members, noExpand=True)
else:
- self.log.info("Exporting all ...")
+ self.log.debug("Exporting all ...")
set_members = cmds.ls(
long=True, objectsOnly=True,
geometry=True, lights=True, cameras=True)
cmds.select(set_members, noExpand=True)
- self.log.info("Appending layer name {}".format(layer_name))
+ self.log.debug("Appending layer name {}".format(layer_name))
set_members.append(layer_name)
export_in_rs_layer(
@@ -93,8 +92,8 @@ class ExtractVrayscene(publish.Extractor):
}
instance.data["representations"].append(representation)
- self.log.info("Extracted instance '%s' to: %s"
- % (instance.name, staging_dir))
+ self.log.debug("Extracted instance '%s' to: %s"
+ % (instance.name, staging_dir))
@staticmethod
def format_vray_output_filename(
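The vrayscene extractor builds its staged output path by splitting the formatted template result and joining it under the staging directory. A sketch with illustrative values (the real template comes from the node's `vrscene_filename` attribute):

```python
import os

staging_dir = "/tmp/staging"
formatted_name = "shotA/beautyLayer/beautyLayer"  # hypothetical template result

file_path = os.path.join(
    staging_dir, "vrayscene", *formatted_name.split("/"))
```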
diff --git a/openpype/hosts/maya/plugins/publish/extract_workfile_xgen.py b/openpype/hosts/maya/plugins/publish/extract_workfile_xgen.py
index 0d2a97bc4b..4bd01c2df2 100644
--- a/openpype/hosts/maya/plugins/publish/extract_workfile_xgen.py
+++ b/openpype/hosts/maya/plugins/publish/extract_workfile_xgen.py
@@ -241,7 +241,7 @@ class ExtractWorkfileXgen(publish.Extractor):
data[palette] = {attr: old_value}
cmds.setAttr(node_attr, value, type="string")
- self.log.info(
+ self.log.debug(
"Setting \"{}\" on \"{}\"".format(value, node_attr)
)
diff --git a/openpype/hosts/maya/plugins/publish/extract_xgen.py b/openpype/hosts/maya/plugins/publish/extract_xgen.py
index 3c9d0bd344..8409330e49 100644
--- a/openpype/hosts/maya/plugins/publish/extract_xgen.py
+++ b/openpype/hosts/maya/plugins/publish/extract_xgen.py
@@ -77,7 +77,7 @@ class ExtractXgen(publish.Extractor):
xgenm.exportPalette(
instance.data["xgmPalette"].replace("|", ""), temp_xgen_path
)
- self.log.info("Extracted to {}".format(temp_xgen_path))
+ self.log.debug("Extracted to {}".format(temp_xgen_path))
# Import xgen onto the duplicate.
with maintained_selection():
@@ -118,7 +118,7 @@ class ExtractXgen(publish.Extractor):
expressions=True
)
- self.log.info("Extracted to {}".format(maya_filepath))
+ self.log.debug("Extracted to {}".format(maya_filepath))
if os.path.exists(temp_xgen_path):
os.remove(temp_xgen_path)
diff --git a/openpype/hosts/maya/plugins/publish/extract_yeti_cache.py b/openpype/hosts/maya/plugins/publish/extract_yeti_cache.py
index b61f599cab..b113e02219 100644
--- a/openpype/hosts/maya/plugins/publish/extract_yeti_cache.py
+++ b/openpype/hosts/maya/plugins/publish/extract_yeti_cache.py
@@ -39,7 +39,7 @@ class ExtractYetiCache(publish.Extractor):
else:
kwargs.update({"samples": samples})
- self.log.info(
+ self.log.debug(
"Writing out cache {} - {}".format(start_frame, end_frame))
# Start writing the files for snap shot
# will be replace by the Yeti node name
@@ -53,7 +53,7 @@ class ExtractYetiCache(publish.Extractor):
cache_files = [x for x in os.listdir(dirname) if x.endswith(".fur")]
- self.log.info("Writing metadata file")
+ self.log.debug("Writing metadata file")
settings = instance.data["fursettings"]
fursettings_path = os.path.join(dirname, "yeti.fursettings")
with open(fursettings_path, "w") as fp:
@@ -63,7 +63,7 @@ class ExtractYetiCache(publish.Extractor):
if "representations" not in instance.data:
instance.data["representations"] = []
- self.log.info("cache files: {}".format(cache_files[0]))
+ self.log.debug("cache files: {}".format(cache_files[0]))
# Workaround: We do not explicitly register these files with the
# representation solely so that we can write multiple sequences
@@ -87,4 +87,4 @@ class ExtractYetiCache(publish.Extractor):
}
)
- self.log.info("Extracted {} to {}".format(instance, dirname))
+ self.log.debug("Extracted {} to {}".format(instance, dirname))
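The Yeti cache extractor writes its fur settings alongside the `.fur` sequence as a small JSON metadata file. A Maya-free sketch of that step, with an illustrative settings dict (the real one comes from `instance.data["fursettings"]`):

```python
import json
import os
import tempfile

settings = {"sampleTimes": [0.0, 0.5]}  # hypothetical fur settings

dirname = tempfile.mkdtemp()
fursettings_path = os.path.join(dirname, "yeti.fursettings")
with open(fursettings_path, "w") as fp:
    json.dump(settings, fp, ensure_ascii=False)
```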
diff --git a/openpype/hosts/maya/plugins/publish/extract_yeti_rig.py b/openpype/hosts/maya/plugins/publish/extract_yeti_rig.py
index 9a46c31177..da67cb911f 100644
--- a/openpype/hosts/maya/plugins/publish/extract_yeti_rig.py
+++ b/openpype/hosts/maya/plugins/publish/extract_yeti_rig.py
@@ -109,7 +109,7 @@ class ExtractYetiRig(publish.Extractor):
for family in self.families:
try:
self.scene_type = ext_mapping[family]
- self.log.info(
+ self.log.debug(
"Using {} as scene type".format(self.scene_type))
break
except KeyError:
@@ -127,7 +127,7 @@ class ExtractYetiRig(publish.Extractor):
maya_path = os.path.join(dirname,
"yeti_rig.{}".format(self.scene_type))
- self.log.info("Writing metadata file")
+ self.log.debug("Writing metadata file: {}".format(settings_path))
image_search_path = resources_dir = instance.data["resourcesDir"]
@@ -147,7 +147,7 @@ class ExtractYetiRig(publish.Extractor):
dst = os.path.join(image_search_path, os.path.basename(file))
instance.data['transfers'].append([src, dst])
- self.log.info("adding transfer {} -> {}". format(src, dst))
+ self.log.debug("Adding transfer {} -> {}".format(src, dst))
# Ensure the imageSearchPath is being remapped to the publish folder
attr_value = {"%s.imageSearchPath" % n: str(image_search_path) for
@@ -182,7 +182,7 @@ class ExtractYetiRig(publish.Extractor):
if "representations" not in instance.data:
instance.data["representations"] = []
- self.log.info("rig file: {}".format(maya_path))
+ self.log.debug("rig file: {}".format(maya_path))
instance.data["representations"].append(
{
'name': self.scene_type,
@@ -191,7 +191,7 @@ class ExtractYetiRig(publish.Extractor):
'stagingDir': dirname
}
)
- self.log.info("settings file: {}".format(settings_path))
+ self.log.debug("settings file: {}".format(settings_path))
instance.data["representations"].append(
{
'name': 'rigsettings',
@@ -201,6 +201,6 @@ class ExtractYetiRig(publish.Extractor):
}
)
- self.log.info("Extracted {} to {}".format(instance, dirname))
+ self.log.debug("Extracted {} to {}".format(instance, dirname))
cmds.select(clear=True)
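The Yeti rig extractor queues texture copies by appending `[src, dst]` pairs to the instance's `transfers` list; the integration step later performs the copies. A sketch with hypothetical paths:

```python
import os

# Hypothetical publish resources folder and source textures.
image_search_path = "/publish/resources"
texture_files = ["/work/textures/fur_diffuse.png"]

transfers = []
for src in texture_files:
    dst = os.path.join(image_search_path, os.path.basename(src))
    transfers.append([src, dst])
```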
diff --git a/openpype/hosts/maya/plugins/publish/reset_xgen_attributes.py b/openpype/hosts/maya/plugins/publish/reset_xgen_attributes.py
index d8e8554b68..759aa23258 100644
--- a/openpype/hosts/maya/plugins/publish/reset_xgen_attributes.py
+++ b/openpype/hosts/maya/plugins/publish/reset_xgen_attributes.py
@@ -23,7 +23,7 @@ class ResetXgenAttributes(pyblish.api.InstancePlugin):
for palette, data in xgen_attributes.items():
for attr, value in data.items():
node_attr = "{}.{}".format(palette, attr)
- self.log.info(
+ self.log.debug(
"Setting \"{}\" on \"{}\"".format(value, node_attr)
)
cmds.setAttr(node_attr, value, type="string")
@@ -32,5 +32,5 @@ class ResetXgenAttributes(pyblish.api.InstancePlugin):
# Need to save the scene, cause the attribute changes above does not
# mark the scene as modified so user can exit without committing the
# changes.
- self.log.info("Saving changes.")
+ self.log.debug("Saving changes.")
cmds.file(save=True)
diff --git a/openpype/hosts/maya/plugins/publish/submit_maya_muster.py b/openpype/hosts/maya/plugins/publish/submit_maya_muster.py
index 8e219eae85..c174fa7a33 100644
--- a/openpype/hosts/maya/plugins/publish/submit_maya_muster.py
+++ b/openpype/hosts/maya/plugins/publish/submit_maya_muster.py
@@ -215,9 +215,9 @@ class MayaSubmitMuster(pyblish.api.InstancePlugin):
:rtype: int
:raises: Exception if template ID isn't found
"""
- self.log.info("Trying to find template for [{}]".format(renderer))
+ self.log.debug("Trying to find template for [{}]".format(renderer))
mapped = _get_template_id(renderer)
- self.log.info("got id [{}]".format(mapped))
+ self.log.debug("got id [{}]".format(mapped))
return self._templates.get(mapped)
def _submit(self, payload):
@@ -249,7 +249,6 @@ class MayaSubmitMuster(pyblish.api.InstancePlugin):
Authenticate with Muster, collect all data, prepare path for post
render publish job and submit job to farm.
"""
- instance.data["toBeRenderedOn"] = "muster"
# setup muster environment
self.MUSTER_REST_URL = os.environ.get("MUSTER_REST_URL")
@@ -454,8 +453,8 @@ class MayaSubmitMuster(pyblish.api.InstancePlugin):
self.preflight_check(instance)
- self.log.info("Submitting ...")
- self.log.info(json.dumps(payload, indent=4, sort_keys=True))
+ self.log.debug("Submitting ...")
+ self.log.debug(json.dumps(payload, indent=4, sort_keys=True))
response = self._submit(payload)
# response = requests.post(url, json=payload)
diff --git a/openpype/hosts/maya/plugins/publish/validate_assembly_name.py b/openpype/hosts/maya/plugins/publish/validate_assembly_name.py
index bcc40760e0..00588cd300 100644
--- a/openpype/hosts/maya/plugins/publish/validate_assembly_name.py
+++ b/openpype/hosts/maya/plugins/publish/validate_assembly_name.py
@@ -20,7 +20,7 @@ class ValidateAssemblyName(pyblish.api.InstancePlugin):
@classmethod
def get_invalid(cls, instance):
- cls.log.info("Checking name of {}".format(instance.name))
+ cls.log.debug("Checking name of {}".format(instance.name))
content_instance = instance.data.get("setMembers", None)
if not content_instance:
diff --git a/openpype/hosts/maya/plugins/publish/validate_assembly_namespaces.py b/openpype/hosts/maya/plugins/publish/validate_assembly_namespaces.py
index 41ef78aab4..06577f38f7 100644
--- a/openpype/hosts/maya/plugins/publish/validate_assembly_namespaces.py
+++ b/openpype/hosts/maya/plugins/publish/validate_assembly_namespaces.py
@@ -23,7 +23,7 @@ class ValidateAssemblyNamespaces(pyblish.api.InstancePlugin):
def process(self, instance):
- self.log.info("Checking namespace for %s" % instance.name)
+ self.log.debug("Checking namespace for %s" % instance.name)
if self.get_invalid(instance):
raise PublishValidationError("Nested namespaces found")
diff --git a/openpype/hosts/maya/plugins/publish/validate_frame_range.py b/openpype/hosts/maya/plugins/publish/validate_frame_range.py
index c6184ed348..a7043b8407 100644
--- a/openpype/hosts/maya/plugins/publish/validate_frame_range.py
+++ b/openpype/hosts/maya/plugins/publish/validate_frame_range.py
@@ -47,10 +47,10 @@ class ValidateFrameRange(pyblish.api.InstancePlugin,
context = instance.context
if instance.data.get("tileRendering"):
- self.log.info((
+ self.log.debug(
"Skipping frame range validation because "
"tile rendering is enabled."
- ))
+ )
return
frame_start_handle = int(context.data.get("frameStartHandle"))
diff --git a/openpype/hosts/maya/plugins/publish/validate_glsl_material.py b/openpype/hosts/maya/plugins/publish/validate_glsl_material.py
index 10c48da404..3b386c3def 100644
--- a/openpype/hosts/maya/plugins/publish/validate_glsl_material.py
+++ b/openpype/hosts/maya/plugins/publish/validate_glsl_material.py
@@ -75,7 +75,7 @@ class ValidateGLSLMaterial(pyblish.api.InstancePlugin):
"""
meshes = cmds.ls(instance, type="mesh", long=True)
- cls.log.info("meshes: {}".format(meshes))
+ cls.log.debug("meshes: {}".format(meshes))
# load the glsl shader plugin
cmds.loadPlugin("glslShader", quiet=True)
@@ -96,8 +96,8 @@ class ValidateGLSLMaterial(pyblish.api.InstancePlugin):
cls.log.warning("ogsfx shader file "
"not found in {}".format(ogsfx_path))
- cls.log.info("Find the ogsfx shader file in "
- "default maya directory...")
+ cls.log.debug("Searching for the ogsfx shader file in "
+ "the default Maya directory...")
# re-direct to search the ogsfx path in maya_dir
ogsfx_path = os.getenv("MAYA_APP_DIR") + ogsfx_path
if not os.path.exists(ogsfx_path):
@@ -130,8 +130,8 @@ class ValidateGLSLMaterial(pyblish.api.InstancePlugin):
@classmethod
def pbs_shader_conversion(cls, main_shader, glsl):
- cls.log.info("StringrayPBS detected "
- "-> Can do texture conversion")
+ cls.log.debug("StingrayPBS detected "
+ "-> Can do texture conversion")
for shader in main_shader:
# get the file textures related to the PBS Shader
@@ -168,8 +168,8 @@ class ValidateGLSLMaterial(pyblish.api.InstancePlugin):
@classmethod
def arnold_shader_conversion(cls, main_shader, glsl):
- cls.log.info("aiStandardSurface detected "
- "-> Can do texture conversion")
+ cls.log.debug("aiStandardSurface detected "
+ "-> Can do texture conversion")
for shader in main_shader:
# get the file textures related to the PBS Shader
diff --git a/openpype/hosts/maya/plugins/publish/validate_instance_attributes.py b/openpype/hosts/maya/plugins/publish/validate_instance_attributes.py
deleted file mode 100644
index f870c9f8c4..0000000000
--- a/openpype/hosts/maya/plugins/publish/validate_instance_attributes.py
+++ /dev/null
@@ -1,60 +0,0 @@
-from maya import cmds
-
-import pyblish.api
-from openpype.pipeline.publish import (
- ValidateContentsOrder, PublishValidationError, RepairAction
-)
-from openpype.pipeline import discover_legacy_creator_plugins
-from openpype.hosts.maya.api.lib import imprint
-
-
-class ValidateInstanceAttributes(pyblish.api.InstancePlugin):
- """Validate Instance Attributes.
-
- New attributes can be introduced as new features come in. Old instances
- will need to be updated with these attributes for the documentation to make
- sense, and users do not have to recreate the instances.
- """
-
- order = ValidateContentsOrder
- hosts = ["maya"]
- families = ["*"]
- label = "Instance Attributes"
- plugins_by_family = {
- p.family: p for p in discover_legacy_creator_plugins()
- }
- actions = [RepairAction]
-
- @classmethod
- def get_missing_attributes(self, instance):
- plugin = self.plugins_by_family[instance.data["family"]]
- subset = instance.data["subset"]
- asset = instance.data["asset"]
- objset = instance.data["objset"]
-
- missing_attributes = {}
- for key, value in plugin(subset, asset).data.items():
- if not cmds.objExists("{}.{}".format(objset, key)):
- missing_attributes[key] = value
-
- return missing_attributes
-
- def process(self, instance):
- objset = instance.data.get("objset")
- if objset is None:
- self.log.debug(
- "Skipping {} because no objectset found.".format(instance)
- )
- return
-
- missing_attributes = self.get_missing_attributes(instance)
- if missing_attributes:
- raise PublishValidationError(
- "Missing attributes on {}:\n{}".format(
- objset, missing_attributes
- )
- )
-
- @classmethod
- def repair(cls, instance):
- imprint(instance.data["objset"], cls.get_missing_attributes(instance))
diff --git a/openpype/hosts/maya/plugins/publish/validate_instance_in_context.py b/openpype/hosts/maya/plugins/publish/validate_instance_in_context.py
index b257add7e8..4ded57137c 100644
--- a/openpype/hosts/maya/plugins/publish/validate_instance_in_context.py
+++ b/openpype/hosts/maya/plugins/publish/validate_instance_in_context.py
@@ -3,94 +3,19 @@
from __future__ import absolute_import
import pyblish.api
+import openpype.hosts.maya.api.action
from openpype.pipeline.publish import (
- ValidateContentsOrder, PublishValidationError
+ RepairAction,
+ ValidateContentsOrder,
+ PublishValidationError,
+ OptionalPyblishPluginMixin
)
from maya import cmds
-class SelectInvalidInstances(pyblish.api.Action):
- """Select invalid instances in Outliner."""
-
- label = "Select Instances"
- icon = "briefcase"
- on = "failed"
-
- def process(self, context, plugin):
- """Process invalid validators and select invalid instances."""
- # Get the errored instances
- failed = []
- for result in context.data["results"]:
- if (
- result["error"] is None
- or result["instance"] is None
- or result["instance"] in failed
- or result["plugin"] != plugin
- ):
- continue
-
- failed.append(result["instance"])
-
- # Apply pyblish.logic to get the instances for the plug-in
- instances = pyblish.api.instances_by_plugin(failed, plugin)
-
- if instances:
- self.log.info(
- "Selecting invalid nodes: %s" % ", ".join(
- [str(x) for x in instances]
- )
- )
- self.select(instances)
- else:
- self.log.info("No invalid nodes found.")
- self.deselect()
-
- def select(self, instances):
- cmds.select(instances, replace=True, noExpand=True)
-
- def deselect(self):
- cmds.select(deselect=True)
-
-
-class RepairSelectInvalidInstances(pyblish.api.Action):
- """Repair the instance asset."""
-
- label = "Repair"
- icon = "wrench"
- on = "failed"
-
- def process(self, context, plugin):
- # Get the errored instances
- failed = []
- for result in context.data["results"]:
- if result["error"] is None:
- continue
- if result["instance"] is None:
- continue
- if result["instance"] in failed:
- continue
- if result["plugin"] != plugin:
- continue
-
- failed.append(result["instance"])
-
- # Apply pyblish.logic to get the instances for the plug-in
- instances = pyblish.api.instances_by_plugin(failed, plugin)
-
- context_asset = context.data["assetEntity"]["name"]
- for instance in instances:
- self.set_attribute(instance, context_asset)
-
- def set_attribute(self, instance, context_asset):
- cmds.setAttr(
- instance.data.get("name") + ".asset",
- context_asset,
- type="string"
- )
-
-
-class ValidateInstanceInContext(pyblish.api.InstancePlugin):
+class ValidateInstanceInContext(pyblish.api.InstancePlugin,
+ OptionalPyblishPluginMixin):
"""Validator to check if instance asset match context asset.
When working in per-shot style you always publish data in context of
@@ -104,11 +29,49 @@ class ValidateInstanceInContext(pyblish.api.InstancePlugin):
label = "Instance in same Context"
optional = True
hosts = ["maya"]
- actions = [SelectInvalidInstances, RepairSelectInvalidInstances]
+ actions = [
+ openpype.hosts.maya.api.action.SelectInvalidAction, RepairAction
+ ]
def process(self, instance):
+ if not self.is_active(instance.data):
+ return
+
asset = instance.data.get("asset")
- context_asset = instance.context.data["assetEntity"]["name"]
- msg = "{} has asset {}".format(instance.name, asset)
+ context_asset = self.get_context_asset(instance)
if asset != context_asset:
- raise PublishValidationError(msg)
+ raise PublishValidationError(
+ message=(
+ "Instance '{}' publishes to different asset than current "
+ "context: {}. Current context: {}".format(
+ instance.name, asset, context_asset
+ )
+ ),
+ description=(
+ "## Publishing to a different asset\n"
+ "There are publish instances present which are publishing "
+ "into a different asset than your current context.\n\n"
+ "Usually this is not what you want but there can be cases "
+ "where you might want to publish into another asset or "
+ "shot. If that's the case you can disable the validation "
+ "on the instance to ignore it."
+ )
+ )
+
+ @classmethod
+ def get_invalid(cls, instance):
+ return [instance.data["instance_node"]]
+
+ @classmethod
+ def repair(cls, instance):
+ context_asset = cls.get_context_asset(instance)
+ instance_node = instance.data["instance_node"]
+ cmds.setAttr(
+ "{}.asset".format(instance_node),
+ context_asset,
+ type="string"
+ )
+
+ @staticmethod
+ def get_context_asset(instance):
+ return instance.context.data["assetEntity"]["name"]
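The rewrite above replaces the two bespoke actions with the shared `SelectInvalidAction`/`RepairAction` pair driven by `get_invalid` and `repair` classmethods. A Maya-free sketch of that validate/repair split, with dicts standing in for pyblish instance and context objects:

```python
class ValidateInstanceInContextSketch:
    """Illustrative stand-in for the pyblish validator above."""

    @staticmethod
    def get_context_asset(context):
        return context["assetEntity"]["name"]

    @classmethod
    def process(cls, instance, context):
        if instance["asset"] != cls.get_context_asset(context):
            raise ValueError(
                "Instance publishes to a different asset than the context")

    @classmethod
    def repair(cls, instance, context):
        # RepairAction would invoke this to rewrite the asset attribute.
        instance["asset"] = cls.get_context_asset(context)
```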
diff --git a/openpype/hosts/maya/plugins/publish/validate_instancer_content.py b/openpype/hosts/maya/plugins/publish/validate_instancer_content.py
index 2f14693ef2..236adfb03d 100644
--- a/openpype/hosts/maya/plugins/publish/validate_instancer_content.py
+++ b/openpype/hosts/maya/plugins/publish/validate_instancer_content.py
@@ -21,7 +21,7 @@ class ValidateInstancerContent(pyblish.api.InstancePlugin):
members = instance.data['setMembers']
export_members = instance.data['exactExportMembers']
- self.log.info("Contents {0}".format(members))
+ self.log.debug("Contents {0}".format(members))
if not len(members) == len(cmds.ls(members, type="instancer")):
self.log.error("Instancer can only contain instancers")
diff --git a/openpype/hosts/maya/plugins/publish/validate_instancer_frame_ranges.py b/openpype/hosts/maya/plugins/publish/validate_instancer_frame_ranges.py
index fcfcdce8b6..714c6229d6 100644
--- a/openpype/hosts/maya/plugins/publish/validate_instancer_frame_ranges.py
+++ b/openpype/hosts/maya/plugins/publish/validate_instancer_frame_ranges.py
@@ -5,8 +5,6 @@ import pyblish.api
from openpype.pipeline.publish import PublishValidationError
-VERBOSE = False
-
def is_cache_resource(resource):
"""Return whether resource is a cacheFile resource"""
@@ -73,9 +71,6 @@ class ValidateInstancerFrameRanges(pyblish.api.InstancePlugin):
xml = all_files.pop(0)
assert xml.endswith(".xml")
- if VERBOSE:
- cls.log.info("Checking: {0}".format(all_files))
-
# Ensure all files exist (including ticks)
# The remainder file paths should be the .mcx or .mcc files
valdidate_files(all_files)
@@ -129,8 +124,8 @@ class ValidateInstancerFrameRanges(pyblish.api.InstancePlugin):
# for the frames required by the time range.
if ticks:
ticks = list(sorted(ticks))
- cls.log.info("Found ticks: {0} "
- "(substeps: {1})".format(ticks, len(ticks)))
+ cls.log.debug("Found ticks: {0} "
+ "(substeps: {1})".format(ticks, len(ticks)))
# Check all frames except the last since we don't
# require subframes after our time range.
diff --git a/openpype/hosts/maya/plugins/publish/validate_maya_units.py b/openpype/hosts/maya/plugins/publish/validate_maya_units.py
index 1d5619795f..ae6dc093a9 100644
--- a/openpype/hosts/maya/plugins/publish/validate_maya_units.py
+++ b/openpype/hosts/maya/plugins/publish/validate_maya_units.py
@@ -37,7 +37,7 @@ class ValidateMayaUnits(pyblish.api.ContextPlugin):
)
@classmethod
- def apply_settings(cls, project_settings, system_settings):
+ def apply_settings(cls, project_settings):
"""Apply project settings to creator"""
settings = (
project_settings["maya"]["publish"]["ValidateMayaUnits"]
diff --git a/openpype/hosts/maya/plugins/publish/validate_mesh_arnold_attributes.py b/openpype/hosts/maya/plugins/publish/validate_mesh_arnold_attributes.py
index 55624726ea..bde78a98b8 100644
--- a/openpype/hosts/maya/plugins/publish/validate_mesh_arnold_attributes.py
+++ b/openpype/hosts/maya/plugins/publish/validate_mesh_arnold_attributes.py
@@ -36,28 +36,34 @@ class ValidateMeshArnoldAttributes(pyblish.api.InstancePlugin,
optional = True
- @classmethod
- def apply_settings(cls, project_settings, system_settings):
- # todo: this should not be done this way
- attr = "defaultRenderGlobals.currentRenderer"
- cls.active = cmds.getAttr(attr).lower() == "arnold"
+ # cache (will be `dict` when cached)
+ arnold_mesh_defaults = None
@classmethod
def get_default_attributes(cls):
+
+ if cls.arnold_mesh_defaults is not None:
+ # Use from cache
+ return cls.arnold_mesh_defaults
+
# Get default arnold attribute values for mesh type.
defaults = {}
with delete_after() as tmp:
- transform = cmds.createNode("transform")
+ transform = cmds.createNode("transform", skipSelect=True)
tmp.append(transform)
- mesh = cmds.createNode("mesh", parent=transform)
- for attr in cmds.listAttr(mesh, string="ai*"):
+ mesh = cmds.createNode("mesh", parent=transform, skipSelect=True)
+ arnold_attributes = cmds.listAttr(mesh,
+ string="ai*",
+ fromPlugin=True) or []
+ for attr in arnold_attributes:
plug = "{}.{}".format(mesh, attr)
try:
defaults[attr] = get_attribute(plug)
except PublishValidationError:
cls.log.debug("Ignoring arnold attribute: {}".format(attr))
+ cls.arnold_mesh_defaults = defaults # assign cache
return defaults
@classmethod
@@ -109,6 +115,10 @@ class ValidateMeshArnoldAttributes(pyblish.api.InstancePlugin,
if not self.is_active(instance.data):
return
+ if not cmds.pluginInfo("mtoa", query=True, loaded=True):
+ # Arnold attributes only exist if plug-in is loaded
+ return
+
invalid = self.get_invalid_attributes(instance, compute=True)
if invalid:
raise PublishValidationError(
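The `arnold_mesh_defaults` change above swaps a per-run scene query for a class-level cache: compute once, reuse for every subsequent instance. A sketch of the pattern, with `_compute_defaults` as a hypothetical stand-in for the Maya query:

```python
class ArnoldMeshDefaultsSketch:
    arnold_mesh_defaults = None  # cache (becomes a dict once populated)

    @classmethod
    def get_default_attributes(cls):
        if cls.arnold_mesh_defaults is None:
            # Expensive query runs only on first call.
            cls.arnold_mesh_defaults = cls._compute_defaults()
        return cls.arnold_mesh_defaults

    @classmethod
    def _compute_defaults(cls):
        return {"aiOpaque": True}  # illustrative Arnold mesh default
```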
diff --git a/openpype/hosts/maya/plugins/publish/validate_model_name.py b/openpype/hosts/maya/plugins/publish/validate_model_name.py
index 6948dcf724..f4c1aa39c7 100644
--- a/openpype/hosts/maya/plugins/publish/validate_model_name.py
+++ b/openpype/hosts/maya/plugins/publish/validate_model_name.py
@@ -125,7 +125,7 @@ class ValidateModelName(pyblish.api.InstancePlugin,
r = re.compile(regex)
for obj in filtered:
- cls.log.info("testing: {}".format(obj))
+ cls.log.debug("testing: {}".format(obj))
m = r.match(obj)
if m is None:
cls.log.error("invalid name on: {}".format(obj))
diff --git a/openpype/hosts/maya/plugins/publish/validate_mvlook_contents.py b/openpype/hosts/maya/plugins/publish/validate_mvlook_contents.py
index 68784a165d..ad0fcafc56 100644
--- a/openpype/hosts/maya/plugins/publish/validate_mvlook_contents.py
+++ b/openpype/hosts/maya/plugins/publish/validate_mvlook_contents.py
@@ -35,12 +35,12 @@ class ValidateMvLookContents(pyblish.api.InstancePlugin,
publishMipMap = instance.data["publishMipMap"]
enforced = True
if intent in self.enforced_intents:
- self.log.info("This validation will be enforced: '{}'"
- .format(intent))
+ self.log.debug("This validation will be enforced: '{}'"
+ .format(intent))
else:
enforced = False
- self.log.info("This validation will NOT be enforced: '{}'"
- .format(intent))
+ self.log.debug("This validation will NOT be enforced: '{}'"
+ .format(intent))
if not instance[:]:
raise PublishValidationError("Instance is empty")
@@ -75,8 +75,9 @@ class ValidateMvLookContents(pyblish.api.InstancePlugin,
self.log.warning(msg)
if invalid:
- raise PublishValidationError("'{}' has invalid look "
- "content".format(instance.name))
+ raise PublishValidationError(
+ "'{}' has invalid look content".format(instance.name)
+ )
def valid_file(self, fname):
self.log.debug("Checking validity of '{}'".format(fname))
diff --git a/openpype/hosts/maya/plugins/publish/validate_plugin_path_attributes.py b/openpype/hosts/maya/plugins/publish/validate_plugin_path_attributes.py
index 78334cd01f..9f47bf7a3d 100644
--- a/openpype/hosts/maya/plugins/publish/validate_plugin_path_attributes.py
+++ b/openpype/hosts/maya/plugins/publish/validate_plugin_path_attributes.py
@@ -4,6 +4,8 @@ from maya import cmds
import pyblish.api
+from openpype.hosts.maya.api.lib import pairwise
+from openpype.hosts.maya.api.action import SelectInvalidAction
from openpype.pipeline.publish import (
ValidateContentsOrder,
PublishValidationError
@@ -19,31 +21,33 @@ class ValidatePluginPathAttributes(pyblish.api.InstancePlugin):
hosts = ['maya']
families = ["workfile"]
label = "Plug-in Path Attributes"
+ actions = [SelectInvalidAction]
- def get_invalid(self, instance):
+ # Attributes are defined in project settings
+ attribute = []
+
+ @classmethod
+ def get_invalid(cls, instance):
invalid = list()
- # get the project setting
- validate_path = (
- instance.context.data["project_settings"]["maya"]["publish"]
- )
- file_attr = validate_path["ValidatePluginPathAttributes"]["attribute"]
+ file_attr = cls.attribute
if not file_attr:
return invalid
- # get the nodes and file attributes
- for node, attr in file_attr.items():
- # check the related nodes
- targets = cmds.ls(type=node)
+ # Consider only valid node types to avoid "Unknown object type" warning
+ all_node_types = set(cmds.allNodeTypes())
+ node_types = [key for key in file_attr.keys() if key in all_node_types]
- for target in targets:
- # get the filepath
- file_attr = "{}.{}".format(target, attr)
- filepath = cmds.getAttr(file_attr)
+ for node, node_type in pairwise(cmds.ls(type=node_types,
+ showType=True)):
+ # get the filepath
+ plug = "{}.{}".format(node, file_attr[node_type])
+ filepath = cmds.getAttr(plug)
- if filepath and not os.path.exists(filepath):
- self.log.error("File {0} not exists".format(filepath)) # noqa
- invalid.append(target)
+ if filepath and not os.path.exists(filepath):
+ cls.log.error("{} '{}' uses non-existent filepath: {}"
+ .format(node_type, node, filepath))
+ invalid.append(node)
return invalid
@@ -51,5 +55,16 @@ class ValidatePluginPathAttributes(pyblish.api.InstancePlugin):
"""Process all directories Set as Filenames in Non-Maya Nodes"""
invalid = self.get_invalid(instance)
if invalid:
- raise PublishValidationError("Non-existent Path "
- "found: {0}".format(invalid))
+ raise PublishValidationError(
+ title="Plug-in Path Attributes",
+ message="Non-existent filepath found on nodes: {}".format(
+ ", ".join(invalid)
+ ),
+ description=(
+ "## Plug-in nodes use invalid filepaths\n"
+ "The workfile contains nodes from plug-ins that use "
+ "filepaths which do not exist.\n\n"
+ "Please make sure their filepaths are correct and the "
+ "files exist on disk."
+ )
+ )
diff --git a/openpype/hosts/maya/plugins/publish/validate_rig_contents.py b/openpype/hosts/maya/plugins/publish/validate_rig_contents.py
index 7b5392f8f9..23f031a5db 100644
--- a/openpype/hosts/maya/plugins/publish/validate_rig_contents.py
+++ b/openpype/hosts/maya/plugins/publish/validate_rig_contents.py
@@ -2,7 +2,9 @@ import pyblish.api
from maya import cmds
from openpype.pipeline.publish import (
- PublishValidationError, ValidateContentsOrder)
+ PublishValidationError,
+ ValidateContentsOrder
+)
class ValidateRigContents(pyblish.api.InstancePlugin):
@@ -24,31 +26,45 @@ class ValidateRigContents(pyblish.api.InstancePlugin):
def process(self, instance):
- objectsets = ("controls_SET", "out_SET")
- missing = [obj for obj in objectsets if obj not in instance]
- assert not missing, ("%s is missing %s" % (instance, missing))
+ # Find required sets by suffix
+ required = ["controls_SET", "out_SET"]
+ missing = [
+ key for key in required if key not in instance.data["rig_sets"]
+ ]
+ if missing:
+ raise PublishValidationError(
+ "%s is missing sets: %s" % (instance, ", ".join(missing))
+ )
+
+ controls_set = instance.data["rig_sets"]["controls_SET"]
+ out_set = instance.data["rig_sets"]["out_SET"]
# Ensure there are at least some transforms or dag nodes
# in the rig instance
set_members = instance.data['setMembers']
if not cmds.ls(set_members, type="dagNode", long=True):
raise PublishValidationError(
- ("No dag nodes in the pointcache instance. "
- "(Empty instance?)"))
+ "No dag nodes in the pointcache instance. "
+ "(Empty instance?)"
+ )
# Ensure contents in sets and retrieve long path for all objects
- output_content = cmds.sets("out_SET", query=True) or []
- assert output_content, "Must have members in rig out_SET"
+ output_content = cmds.sets(out_set, query=True) or []
+ if not output_content:
+ raise PublishValidationError("Must have members in rig out_SET")
output_content = cmds.ls(output_content, long=True)
- controls_content = cmds.sets("controls_SET", query=True) or []
- assert controls_content, "Must have members in rig controls_SET"
+ controls_content = cmds.sets(controls_set, query=True) or []
+ if not controls_content:
+ raise PublishValidationError(
+ "Must have members in rig controls_SET"
+ )
controls_content = cmds.ls(controls_content, long=True)
# Validate members are inside the hierarchy from root node
- root_node = cmds.ls(set_members, assemblies=True)
- hierarchy = cmds.listRelatives(root_node, allDescendents=True,
- fullPath=True)
+ root_nodes = cmds.ls(set_members, assemblies=True, long=True)
+ hierarchy = (cmds.listRelatives(root_nodes, allDescendents=True,
+ fullPath=True) or []) + root_nodes
hierarchy = set(hierarchy)
invalid_hierarchy = []
diff --git a/openpype/hosts/maya/plugins/publish/validate_rig_controllers.py b/openpype/hosts/maya/plugins/publish/validate_rig_controllers.py
index 7bbf4257ab..a3828f871b 100644
--- a/openpype/hosts/maya/plugins/publish/validate_rig_controllers.py
+++ b/openpype/hosts/maya/plugins/publish/validate_rig_controllers.py
@@ -52,22 +52,30 @@ class ValidateRigControllers(pyblish.api.InstancePlugin):
def process(self, instance):
invalid = self.get_invalid(instance)
if invalid:
- raise PublishValidationError('{} failed, see log '
- 'information'.format(self.label))
+ raise PublishValidationError(
+ '{} failed, see log information'.format(self.label)
+ )
@classmethod
def get_invalid(cls, instance):
- controllers_sets = [i for i in instance if i == "controls_SET"]
- controls = cmds.sets(controllers_sets, query=True)
- assert controls, "Must have 'controls_SET' in rig instance"
+ controls_set = instance.data["rig_sets"].get("controls_SET")
+ if not controls_set:
+ cls.log.error(
+ "Must have 'controls_SET' in rig instance"
+ )
+ return [instance.data["instance_node"]]
+
+ controls = cmds.sets(controls_set, query=True)
# Ensure all controls are within the top group
lookup = set(instance[:])
- assert all(control in lookup for control in cmds.ls(controls,
- long=True)), (
- "All controls must be inside the rig's group."
- )
+ if not all(control in lookup for control in cmds.ls(controls,
+ long=True)):
+ cls.log.error(
+ "All controls must be inside the rig's group."
+ )
+ return [controls_set]
# Validate all controls
has_connections = list()
@@ -181,9 +189,17 @@ class ValidateRigControllers(pyblish.api.InstancePlugin):
@classmethod
def repair(cls, instance):
+ controls_set = instance.data["rig_sets"].get("controls_SET")
+ if not controls_set:
+ cls.log.error(
+ "Unable to repair because no 'controls_SET' found in rig "
+ "instance: {}".format(instance)
+ )
+ return
+
# Use a single undo chunk
with undo_chunk():
- controls = cmds.sets("controls_SET", query=True)
+ controls = cmds.sets(controls_set, query=True)
for control in controls:
# Lock visibility
diff --git a/openpype/hosts/maya/plugins/publish/validate_rig_controllers_arnold_attributes.py b/openpype/hosts/maya/plugins/publish/validate_rig_controllers_arnold_attributes.py
index 842c1de01b..03f6a5f1ab 100644
--- a/openpype/hosts/maya/plugins/publish/validate_rig_controllers_arnold_attributes.py
+++ b/openpype/hosts/maya/plugins/publish/validate_rig_controllers_arnold_attributes.py
@@ -56,11 +56,11 @@ class ValidateRigControllersArnoldAttributes(pyblish.api.InstancePlugin):
@classmethod
def get_invalid(cls, instance):
- controllers_sets = [i for i in instance if i == "controls_SET"]
- if not controllers_sets:
+ controls_set = instance.data["rig_sets"].get("controls_SET")
+ if not controls_set:
return []
- controls = cmds.sets(controllers_sets, query=True) or []
+ controls = cmds.sets(controls_set, query=True) or []
if not controls:
return []
diff --git a/openpype/hosts/maya/plugins/publish/validate_rig_out_set_node_ids.py b/openpype/hosts/maya/plugins/publish/validate_rig_out_set_node_ids.py
index 39f0941faa..fbd510c683 100644
--- a/openpype/hosts/maya/plugins/publish/validate_rig_out_set_node_ids.py
+++ b/openpype/hosts/maya/plugins/publish/validate_rig_out_set_node_ids.py
@@ -38,16 +38,19 @@ class ValidateRigOutSetNodeIds(pyblish.api.InstancePlugin):
# if a deformer has been created on the shape
invalid = self.get_invalid(instance)
if invalid:
- raise PublishValidationError("Nodes found with mismatching "
- "IDs: {0}".format(invalid))
+ raise PublishValidationError(
+ "Nodes found with mismatching IDs: {0}".format(invalid)
+ )
@classmethod
def get_invalid(cls, instance):
"""Get all nodes which do not match the criteria"""
- invalid = []
+ out_set = instance.data["rig_sets"].get("out_SET")
+ if not out_set:
+ return []
- out_set = next(x for x in instance if x.endswith("out_SET"))
+ invalid = []
members = cmds.sets(out_set, query=True)
shapes = cmds.ls(members,
dag=True,
diff --git a/openpype/hosts/maya/plugins/publish/validate_rig_output_ids.py b/openpype/hosts/maya/plugins/publish/validate_rig_output_ids.py
index cbc750bace..24fb36eb8b 100644
--- a/openpype/hosts/maya/plugins/publish/validate_rig_output_ids.py
+++ b/openpype/hosts/maya/plugins/publish/validate_rig_output_ids.py
@@ -47,7 +47,10 @@ class ValidateRigOutputIds(pyblish.api.InstancePlugin):
invalid = {}
if compute:
- out_set = next(x for x in instance if "out_SET" in x)
+ out_set = instance.data["rig_sets"].get("out_SET")
+ if not out_set:
+ instance.data["mismatched_output_ids"] = invalid
+ return invalid
instance_nodes = cmds.sets(out_set, query=True, nodesOnly=True)
instance_nodes = cmds.ls(instance_nodes, long=True)
diff --git a/openpype/hosts/maya/plugins/publish/validate_shape_zero.py b/openpype/hosts/maya/plugins/publish/validate_shape_zero.py
index 7a7e9a0aee..c7af6a60db 100644
--- a/openpype/hosts/maya/plugins/publish/validate_shape_zero.py
+++ b/openpype/hosts/maya/plugins/publish/validate_shape_zero.py
@@ -7,6 +7,7 @@ from openpype.hosts.maya.api import lib
from openpype.pipeline.publish import (
ValidateContentsOrder,
RepairAction,
+ PublishValidationError
)
@@ -67,5 +68,30 @@ class ValidateShapeZero(pyblish.api.Validator):
invalid = self.get_invalid(instance)
if invalid:
- raise ValueError("Shapes found with non-zero component tweaks: "
- "{0}".format(invalid))
+ raise PublishValidationError(
+ title="Shape Component Tweaks",
+ message="Shapes found with non-zero component tweaks: '{}'"
+ "".format(", ".join(invalid)),
+ description=(
+ "## Shapes found with component tweaks\n"
+ "Shapes were detected that have non-zero component "
+ "tweaks. Please remove the component tweaks to "
+ "continue.\n\n"
+ "### Repair\n"
+ "The repair action will try to *freeze* the component "
+ "tweaks into the shapes, which is usually the correct fix "
+ "if the mesh has no construction history (= has its "
+ "history deleted)."),
+ detail=(
+ "Maya allows storing component tweaks within shape nodes "
+ "which are applied between its `inMesh` and `outMesh` "
+ "connections resulting in the output of a shape node "
+ "differing from the input. We usually want to avoid this "
+ "for published meshes (in particular for Maya scenes) as "
+ "it can have unintended results when using these meshes "
+ "as intermediate meshes, since it applies positional "
+ "offsets that are not visible as edits in the node "
+ "graph.\n\n"
+ "These tweaks are traditionally stored in the `.pnts` "
+ "attribute of shapes.")
+ )
diff --git a/openpype/hosts/maya/plugins/publish/validate_skeletalmesh_hierarchy.py b/openpype/hosts/maya/plugins/publish/validate_skeletalmesh_hierarchy.py
index 398b6fb7bf..9084374c76 100644
--- a/openpype/hosts/maya/plugins/publish/validate_skeletalmesh_hierarchy.py
+++ b/openpype/hosts/maya/plugins/publish/validate_skeletalmesh_hierarchy.py
@@ -28,7 +28,7 @@ class ValidateSkeletalMeshHierarchy(pyblish.api.InstancePlugin):
parent.split("|")[1] for parent in (joints_parents + geo_parents)
}
- self.log.info(parents_set)
+ self.log.debug(parents_set)
if len(set(parents_set)) > 2:
raise PublishXmlValidationError(
diff --git a/openpype/hosts/maya/plugins/publish/validate_unreal_staticmesh_naming.py b/openpype/hosts/maya/plugins/publish/validate_unreal_staticmesh_naming.py
index b2cb2ebda2..5ba256f9f5 100644
--- a/openpype/hosts/maya/plugins/publish/validate_unreal_staticmesh_naming.py
+++ b/openpype/hosts/maya/plugins/publish/validate_unreal_staticmesh_naming.py
@@ -140,12 +140,12 @@ class ValidateUnrealStaticMeshName(pyblish.api.InstancePlugin,
return
if not self.validate_mesh and not self.validate_collision:
- self.log.info("Validation of both mesh and collision names"
- "is disabled.")
+ self.log.debug("Validation of both mesh and collision names "
+ "is disabled.")
return
if not instance.data.get("collisionMembers", None):
- self.log.info("There are no collision objects to validate")
+ self.log.debug("There are no collision objects to validate")
return
invalid = self.get_invalid(instance)
diff --git a/openpype/hosts/maya/plugins/publish/validate_vray_distributed_rendering.py b/openpype/hosts/maya/plugins/publish/validate_vray_distributed_rendering.py
index a71849da00..14571203ea 100644
--- a/openpype/hosts/maya/plugins/publish/validate_vray_distributed_rendering.py
+++ b/openpype/hosts/maya/plugins/publish/validate_vray_distributed_rendering.py
@@ -52,6 +52,6 @@ class ValidateVRayDistributedRendering(pyblish.api.InstancePlugin):
renderlayer = instance.data.get("renderlayer")
with lib.renderlayer(renderlayer):
- cls.log.info("Enabling Distributed Rendering "
- "ignore in batch mode..")
+ cls.log.debug("Enabling Distributed Rendering "
+ "ignore in batch mode..")
cmds.setAttr(cls.ignored_attr, True)
diff --git a/openpype/hosts/maya/plugins/publish/validate_yeti_renderscript_callbacks.py b/openpype/hosts/maya/plugins/publish/validate_yeti_renderscript_callbacks.py
index 06250f5779..a8085418e7 100644
--- a/openpype/hosts/maya/plugins/publish/validate_yeti_renderscript_callbacks.py
+++ b/openpype/hosts/maya/plugins/publish/validate_yeti_renderscript_callbacks.py
@@ -54,7 +54,7 @@ class ValidateYetiRenderScriptCallbacks(pyblish.api.InstancePlugin):
# has any yeti callback set or not since if the callback
# is there it wouldn't error and if it weren't then
# nothing happens because there are no yeti nodes.
- cls.log.info(
+ cls.log.debug(
"Yeti is loaded but no yeti nodes were found. "
"Callback validation skipped.."
)
@@ -62,7 +62,7 @@ class ValidateYetiRenderScriptCallbacks(pyblish.api.InstancePlugin):
renderer = instance.data["renderer"]
if renderer == "redshift":
- cls.log.info("Redshift ignores any pre and post render callbacks")
+ cls.log.debug("Redshift ignores any pre and post render callbacks")
return False
callback_lookup = cls.callbacks.get(renderer, {})
diff --git a/openpype/hosts/maya/plugins/publish/validate_yeti_rig_input_in_instance.py b/openpype/hosts/maya/plugins/publish/validate_yeti_rig_input_in_instance.py
index 96fb475a0a..50a27589ad 100644
--- a/openpype/hosts/maya/plugins/publish/validate_yeti_rig_input_in_instance.py
+++ b/openpype/hosts/maya/plugins/publish/validate_yeti_rig_input_in_instance.py
@@ -37,8 +37,8 @@ class ValidateYetiRigInputShapesInInstance(pyblish.api.Validator):
# Allow publish without input meshes.
if not shapes:
- cls.log.info("Found no input meshes for %s, skipping ..."
- % instance)
+ cls.log.debug("Found no input meshes for %s, skipping ..."
+ % instance)
return []
# check if input node is part of groomRig instance
diff --git a/openpype/hosts/maya/tools/mayalookassigner/widgets.py b/openpype/hosts/maya/tools/mayalookassigner/widgets.py
index f2df17e68c..82c37e2104 100644
--- a/openpype/hosts/maya/tools/mayalookassigner/widgets.py
+++ b/openpype/hosts/maya/tools/mayalookassigner/widgets.py
@@ -90,15 +90,13 @@ class AssetOutliner(QtWidgets.QWidget):
def get_all_assets(self):
"""Add all items from the current scene"""
- items = []
with preserve_expanded_rows(self.view):
with preserve_selection(self.view):
self.clear()
nodes = commands.get_all_asset_nodes()
items = commands.create_items_from_nodes(nodes)
self.add_items(items)
-
- return len(items) > 0
+ return len(items) > 0
def get_selected_assets(self):
"""Add all selected items from the current scene"""
diff --git a/openpype/hosts/nuke/api/lib.py b/openpype/hosts/nuke/api/lib.py
index 4a1e109b17..41e6a27cef 100644
--- a/openpype/hosts/nuke/api/lib.py
+++ b/openpype/hosts/nuke/api/lib.py
@@ -2041,6 +2041,7 @@ class WorkfileSettings(object):
)
workfile_settings = imageio_host["workfile"]
+ viewer_process_settings = imageio_host["viewer"]["viewerProcess"]
if not config_data:
# TODO: backward compatibility for old projects - remove later
@@ -2091,6 +2092,15 @@ class WorkfileSettings(object):
workfile_settings.pop("colorManagement", None)
workfile_settings.pop("OCIO_config", None)
+ # get monitor lut from settings respecting Nuke version differences
+ monitor_lut = workfile_settings.pop("monitorLut", None)
+ monitor_lut_data = self._get_monitor_settings(
+ viewer_process_settings, monitor_lut)
+
+ # set monitor related knobs luts (MonitorOut, Thumbnails)
+ for knob, value_ in monitor_lut_data.items():
+ workfile_settings[knob] = value_
+
# then set the rest
for knob, value_ in workfile_settings.items():
# skip unfilled ocio config path
@@ -2107,8 +2117,9 @@ class WorkfileSettings(object):
# set ocio config path
if config_data:
+ config_path = config_data["path"].replace("\\", "/")
log.info("OCIO config path found: `{}`".format(
- config_data["path"]))
+ config_path))
# check if there's a mismatch between environment and settings
correct_settings = self._is_settings_matching_environment(
@@ -2118,6 +2129,40 @@ class WorkfileSettings(object):
if correct_settings:
self._set_ocio_config_path_to_workfile(config_data)
+ def _get_monitor_settings(self, viewer_lut, monitor_lut):
+ """ Get monitor settings from viewer and monitor lut
+
+ Args:
+ viewer_lut (str): viewer lut string
+ monitor_lut (str): monitor lut string
+
+ Returns:
+ dict: monitor settings
+ """
+ output_data = {}
+ m_display, m_viewer = get_viewer_config_from_string(monitor_lut)
+ v_display, v_viewer = get_viewer_config_from_string(viewer_lut)
+
+ # set monitor lut differently for nuke version 14
+ if nuke.NUKE_VERSION_MAJOR >= 14:
+ output_data["monitorOutLUT"] = create_viewer_profile_string(
+ m_viewer, m_display, path_like=False)
+ # monitorLut=thumbnails - viewerProcess makes more sense
+ output_data["monitorLut"] = create_viewer_profile_string(
+ v_viewer, v_display, path_like=False)
+
+ if nuke.NUKE_VERSION_MAJOR == 13:
+ output_data["monitorOutLUT"] = create_viewer_profile_string(
+ m_viewer, m_display, path_like=False)
+ # monitorLut=thumbnails - viewerProcess makes more sense
+ output_data["monitorLut"] = create_viewer_profile_string(
+ v_viewer, v_display, path_like=True)
+ if nuke.NUKE_VERSION_MAJOR <= 12:
+ output_data["monitorLut"] = create_viewer_profile_string(
+ m_viewer, m_display, path_like=True)
+
+ return output_data
+
def _is_settings_matching_environment(self, config_data):
""" Check if OCIO config path is different from environment
@@ -2177,6 +2222,7 @@ Reopening Nuke should synchronize these paths and resolve any discrepancies.
"""
# replace path with env var if possible
ocio_path = self._replace_ocio_path_with_env_var(config_data)
+ ocio_path = ocio_path.replace("\\", "/")
log.info("Setting OCIO config path to: `{}`".format(
ocio_path))
@@ -2232,7 +2278,7 @@ Reopening Nuke should synchronize these paths and resolve any discrepancies.
Returns:
str: OCIO config path with environment variable TCL expression
"""
- config_path = config_data["path"]
+ config_path = config_data["path"].replace("\\", "/")
config_template = config_data["template"]
included_vars = self._get_included_vars(config_template)
@@ -3320,11 +3366,11 @@ def get_viewer_config_from_string(input_string):
display = split[0]
elif "(" in viewer:
pattern = r"([\w\d\s\.\-]+).*[(](.*)[)]"
- result = re.findall(pattern, viewer)
+ result_ = re.findall(pattern, viewer)
try:
- result = result.pop()
- display = str(result[1]).rstrip()
- viewer = str(result[0]).rstrip()
+ result_ = result_.pop()
+ display = str(result_[1]).rstrip()
+ viewer = str(result_[0]).rstrip()
except IndexError:
raise IndexError((
"Viewer Input string is not correct. "
@@ -3332,3 +3378,22 @@ def get_viewer_config_from_string(input_string):
).format(input_string))
return (display, viewer)
+
+
+def create_viewer_profile_string(viewer, display=None, path_like=False):
+ """Convert viewer and display to string
+
+ Args:
+ viewer (str): viewer name
+ display (Optional[str]): display name
+ path_like (Optional[bool]): if True, return path like string
+
+ Returns:
+ str: viewer config string
+ """
+ if not display:
+ return viewer
+
+ if path_like:
+ return "{}/{}".format(display, viewer)
+ return "{} ({})".format(viewer, display)
diff --git a/openpype/hosts/nuke/api/plugin.py b/openpype/hosts/nuke/api/plugin.py
index 6d48c09d60..a0e1525cd0 100644
--- a/openpype/hosts/nuke/api/plugin.py
+++ b/openpype/hosts/nuke/api/plugin.py
@@ -379,11 +379,7 @@ class NukeWriteCreator(NukeCreator):
sys.exc_info()[2]
)
- def apply_settings(
- self,
- project_settings,
- system_settings
- ):
+ def apply_settings(self, project_settings):
"""Method called on initialization of plugin to apply settings."""
# plugin settings
diff --git a/openpype/hosts/nuke/plugins/load/load_camera_abc.py b/openpype/hosts/nuke/plugins/load/load_camera_abc.py
index fec4ee556e..2939ceebae 100644
--- a/openpype/hosts/nuke/plugins/load/load_camera_abc.py
+++ b/openpype/hosts/nuke/plugins/load/load_camera_abc.py
@@ -112,8 +112,6 @@ class AlembicCameraLoader(load.LoaderPlugin):
version_doc = get_version_by_id(project_name, representation["parent"])
object_name = container['objectName']
- # get corresponding node
- camera_node = nuke.toNode(object_name)
# get main variables
version_data = version_doc.get("data", {})
diff --git a/openpype/hosts/nuke/plugins/publish/extract_review_data_lut.py b/openpype/hosts/nuke/plugins/publish/extract_review_data_lut.py
index e4b7b155cd..2a26ed82fb 100644
--- a/openpype/hosts/nuke/plugins/publish/extract_review_data_lut.py
+++ b/openpype/hosts/nuke/plugins/publish/extract_review_data_lut.py
@@ -20,7 +20,6 @@ class ExtractReviewDataLut(publish.Extractor):
hosts = ["nuke"]
def process(self, instance):
- families = instance.data["families"]
self.log.info("Creating staging dir...")
if "representations" in instance.data:
staging_dir = instance.data[
diff --git a/openpype/hosts/nuke/plugins/publish/extract_thumbnail.py b/openpype/hosts/nuke/plugins/publish/extract_thumbnail.py
index d57d55f85d..b20df4ffe2 100644
--- a/openpype/hosts/nuke/plugins/publish/extract_thumbnail.py
+++ b/openpype/hosts/nuke/plugins/publish/extract_thumbnail.py
@@ -91,8 +91,6 @@ class ExtractThumbnail(publish.Extractor):
if collection:
# get path
- fname = os.path.basename(collection.format(
- "{head}{padding}{tail}"))
fhead = collection.format("{head}")
thumb_fname = list(collection)[mid_frame]
diff --git a/openpype/hosts/nuke/plugins/publish/validate_rendered_frames.py b/openpype/hosts/nuke/plugins/publish/validate_rendered_frames.py
index 45c20412c8..9a35b61a0e 100644
--- a/openpype/hosts/nuke/plugins/publish/validate_rendered_frames.py
+++ b/openpype/hosts/nuke/plugins/publish/validate_rendered_frames.py
@@ -14,27 +14,26 @@ class RepairActionBase(pyblish.api.Action):
# Get the errored instances
return get_errored_instances_from_context(context, plugin=plugin)
- def repair_knob(self, instances, state):
+ def repair_knob(self, context, instances, state):
+ create_context = context.data["create_context"]
for instance in instances:
- node = instance.data["transientData"]["node"]
- files_remove = [os.path.join(instance.data["outputDir"], f)
- for r in instance.data.get("representations", [])
- for f in r.get("files", [])
- ]
- self.log.info("Files to be removed: {}".format(files_remove))
- for f in files_remove:
- os.remove(f)
- self.log.debug("removing file: {}".format(f))
- node["render"].setValue(state)
+ # Reset the render knob
+ instance_id = instance.data.get("instance_id")
+ created_instance = create_context.get_instance_by_id(
+ instance_id
+ )
+ created_instance.creator_attributes["render_target"] = state
self.log.info("Rendering toggled to `{}`".format(state))
+ create_context.save_changes()
+
class RepairCollectionActionToLocal(RepairActionBase):
label = "Repair - rerender with \"Local\""
def process(self, context, plugin):
instances = self.get_instance(context, plugin)
- self.repair_knob(instances, "Local")
+ self.repair_knob(context, instances, "local")
class RepairCollectionActionToFarm(RepairActionBase):
@@ -42,7 +41,7 @@ class RepairCollectionActionToFarm(RepairActionBase):
def process(self, context, plugin):
instances = self.get_instance(context, plugin)
- self.repair_knob(instances, "On farm")
+ self.repair_knob(context, instances, "farm")
class ValidateRenderedFrames(pyblish.api.InstancePlugin):
diff --git a/openpype/hosts/photoshop/plugins/create/create_flatten_image.py b/openpype/hosts/photoshop/plugins/create/create_flatten_image.py
index 3bc61c8184..9d4189a1a3 100644
--- a/openpype/hosts/photoshop/plugins/create/create_flatten_image.py
+++ b/openpype/hosts/photoshop/plugins/create/create_flatten_image.py
@@ -98,7 +98,7 @@ class AutoImageCreator(PSAutoCreator):
)
]
- def apply_settings(self, project_settings, system_settings):
+ def apply_settings(self, project_settings):
plugin_settings = (
project_settings["photoshop"]["create"]["AutoImageCreator"]
)
diff --git a/openpype/hosts/photoshop/plugins/create/create_image.py b/openpype/hosts/photoshop/plugins/create/create_image.py
index f3165fca57..8d3ac9f459 100644
--- a/openpype/hosts/photoshop/plugins/create/create_image.py
+++ b/openpype/hosts/photoshop/plugins/create/create_image.py
@@ -171,7 +171,7 @@ class ImageCreator(Creator):
)
]
- def apply_settings(self, project_settings, system_settings):
+ def apply_settings(self, project_settings):
plugin_settings = (
project_settings["photoshop"]["create"]["ImageCreator"]
)
diff --git a/openpype/hosts/photoshop/plugins/create/create_review.py b/openpype/hosts/photoshop/plugins/create/create_review.py
index 064485d465..63751d94e4 100644
--- a/openpype/hosts/photoshop/plugins/create/create_review.py
+++ b/openpype/hosts/photoshop/plugins/create/create_review.py
@@ -18,7 +18,7 @@ class ReviewCreator(PSAutoCreator):
it will get recreated in next publish either way).
"""
- def apply_settings(self, project_settings, system_settings):
+ def apply_settings(self, project_settings):
plugin_settings = (
project_settings["photoshop"]["create"]["ReviewCreator"]
)
diff --git a/openpype/hosts/photoshop/plugins/create/create_workfile.py b/openpype/hosts/photoshop/plugins/create/create_workfile.py
index d498f0549c..1b255de3a3 100644
--- a/openpype/hosts/photoshop/plugins/create/create_workfile.py
+++ b/openpype/hosts/photoshop/plugins/create/create_workfile.py
@@ -19,7 +19,7 @@ class WorkfileCreator(PSAutoCreator):
in next publish automatically).
"""
- def apply_settings(self, project_settings, system_settings):
+ def apply_settings(self, project_settings):
plugin_settings = (
project_settings["photoshop"]["create"]["WorkfileCreator"]
)
diff --git a/openpype/hosts/photoshop/plugins/publish/collect_auto_image.py b/openpype/hosts/photoshop/plugins/publish/collect_auto_image.py
index f1d8419608..77f1a3e91f 100644
--- a/openpype/hosts/photoshop/plugins/publish/collect_auto_image.py
+++ b/openpype/hosts/photoshop/plugins/publish/collect_auto_image.py
@@ -16,7 +16,6 @@ class CollectAutoImage(pyblish.api.ContextPlugin):
targets = ["automated"]
def process(self, context):
- family = "image"
for instance in context:
creator_identifier = instance.data.get("creator_identifier")
if creator_identifier and creator_identifier == "auto_image":
diff --git a/openpype/hosts/resolve/api/plugin.py b/openpype/hosts/resolve/api/plugin.py
index 59c27f29da..e2bd76ffa2 100644
--- a/openpype/hosts/resolve/api/plugin.py
+++ b/openpype/hosts/resolve/api/plugin.py
@@ -413,8 +413,6 @@ class ClipLoader:
if self.with_handles:
source_in -= handle_start
source_out += handle_end
- handle_start = 0
- handle_end = 0
# make track item from source in bin as item
timeline_item = lib.create_timeline_item(
@@ -433,14 +431,6 @@ class ClipLoader:
self.data["path"], self.active_bin)
_clip_property = media_pool_item.GetClipProperty
- # get handles
- handle_start = self.data["versionData"].get("handleStart")
- handle_end = self.data["versionData"].get("handleEnd")
- if handle_start is None:
- handle_start = int(self.data["assetData"]["handleStart"])
- if handle_end is None:
- handle_end = int(self.data["assetData"]["handleEnd"])
-
source_in = int(_clip_property("Start"))
source_out = int(_clip_property("End"))
diff --git a/openpype/hosts/standalonepublisher/plugins/publish/extract_thumbnail.py b/openpype/hosts/standalonepublisher/plugins/publish/extract_thumbnail.py
index b99503b3c8..a2afd160fa 100644
--- a/openpype/hosts/standalonepublisher/plugins/publish/extract_thumbnail.py
+++ b/openpype/hosts/standalonepublisher/plugins/publish/extract_thumbnail.py
@@ -49,8 +49,6 @@ class ExtractThumbnailSP(pyblish.api.InstancePlugin):
else:
first_filename = files
- staging_dir = None
-
# Convert to jpeg if not yet
full_input_path = os.path.join(
thumbnail_repre["stagingDir"], first_filename
diff --git a/openpype/hosts/traypublisher/plugins/create/create_movie_batch.py b/openpype/hosts/traypublisher/plugins/create/create_movie_batch.py
index 1bed07f785..3454b6e135 100644
--- a/openpype/hosts/traypublisher/plugins/create/create_movie_batch.py
+++ b/openpype/hosts/traypublisher/plugins/create/create_movie_batch.py
@@ -36,7 +36,7 @@ class BatchMovieCreator(TrayPublishCreator):
# Position batch creator after simple creators
order = 110
- def apply_settings(self, project_settings, system_settings):
+ def apply_settings(self, project_settings):
creator_settings = (
project_settings["traypublisher"]["create"]["BatchMovieCreator"]
)
diff --git a/openpype/hosts/traypublisher/plugins/publish/collect_frame_range_asset_entity.py b/openpype/hosts/traypublisher/plugins/publish/collect_missing_frame_range_asset_entity.py
similarity index 83%
rename from openpype/hosts/traypublisher/plugins/publish/collect_frame_range_asset_entity.py
rename to openpype/hosts/traypublisher/plugins/publish/collect_missing_frame_range_asset_entity.py
index c18e10e438..72379ea4e1 100644
--- a/openpype/hosts/traypublisher/plugins/publish/collect_frame_range_asset_entity.py
+++ b/openpype/hosts/traypublisher/plugins/publish/collect_missing_frame_range_asset_entity.py
@@ -2,16 +2,18 @@ import pyblish.api
from openpype.pipeline import OptionalPyblishPluginMixin
-class CollectFrameDataFromAssetEntity(pyblish.api.InstancePlugin,
- OptionalPyblishPluginMixin):
- """Collect Frame Range data From Asset Entity
+class CollectMissingFrameDataFromAssetEntity(
+ pyblish.api.InstancePlugin,
+ OptionalPyblishPluginMixin
+):
+ """Collect Missing Frame Range data From Asset Entity
Frame range data will only be collected if the keys
are not yet collected for the instance.
"""
order = pyblish.api.CollectorOrder + 0.491
- label = "Collect Frame Data From Asset Entity"
+ label = "Collect Missing Frame Data From Asset Entity"
families = ["plate", "pointcache",
"vdbcache", "online",
"render"]
diff --git a/openpype/hosts/traypublisher/plugins/publish/validate_frame_ranges.py b/openpype/hosts/traypublisher/plugins/publish/validate_frame_ranges.py
index b962ea464a..09de2d8db2 100644
--- a/openpype/hosts/traypublisher/plugins/publish/validate_frame_ranges.py
+++ b/openpype/hosts/traypublisher/plugins/publish/validate_frame_ranges.py
@@ -15,7 +15,7 @@ class ValidateFrameRange(OptionalPyblishPluginMixin,
label = "Validate Frame Range"
hosts = ["traypublisher"]
- families = ["render"]
+ families = ["render", "plate"]
order = ValidateContentsOrder
optional = True
diff --git a/openpype/hosts/tvpaint/plugins/create/create_render.py b/openpype/hosts/tvpaint/plugins/create/create_render.py
index 2369c7329f..b7a7c208d9 100644
--- a/openpype/hosts/tvpaint/plugins/create/create_render.py
+++ b/openpype/hosts/tvpaint/plugins/create/create_render.py
@@ -139,7 +139,7 @@ class CreateRenderlayer(TVPaintCreator):
# - Mark by default instance for review
mark_for_review = True
- def apply_settings(self, project_settings, system_settings):
+ def apply_settings(self, project_settings):
plugin_settings = (
project_settings["tvpaint"]["create"]["create_render_layer"]
)
@@ -387,7 +387,7 @@ class CreateRenderPass(TVPaintCreator):
# Settings
mark_for_review = True
- def apply_settings(self, project_settings, system_settings):
+ def apply_settings(self, project_settings):
plugin_settings = (
project_settings["tvpaint"]["create"]["create_render_pass"]
)
@@ -690,7 +690,7 @@ class TVPaintAutoDetectRenderCreator(TVPaintCreator):
group_idx_offset = 10
group_idx_padding = 3
- def apply_settings(self, project_settings, system_settings):
+ def apply_settings(self, project_settings):
plugin_settings = (
project_settings
["tvpaint"]
@@ -1029,7 +1029,7 @@ class TVPaintSceneRenderCreator(TVPaintAutoCreator):
mark_for_review = True
active_on_create = False
- def apply_settings(self, project_settings, system_settings):
+ def apply_settings(self, project_settings):
plugin_settings = (
project_settings["tvpaint"]["create"]["create_render_scene"]
)
diff --git a/openpype/hosts/tvpaint/plugins/create/create_review.py b/openpype/hosts/tvpaint/plugins/create/create_review.py
index 886dae7c39..7bb7510a8e 100644
--- a/openpype/hosts/tvpaint/plugins/create/create_review.py
+++ b/openpype/hosts/tvpaint/plugins/create/create_review.py
@@ -12,7 +12,7 @@ class TVPaintReviewCreator(TVPaintAutoCreator):
# Settings
active_on_create = True
- def apply_settings(self, project_settings, system_settings):
+ def apply_settings(self, project_settings):
plugin_settings = (
project_settings["tvpaint"]["create"]["create_review"]
)
diff --git a/openpype/hosts/tvpaint/plugins/create/create_workfile.py b/openpype/hosts/tvpaint/plugins/create/create_workfile.py
index 41347576d5..c3982c0eca 100644
--- a/openpype/hosts/tvpaint/plugins/create/create_workfile.py
+++ b/openpype/hosts/tvpaint/plugins/create/create_workfile.py
@@ -9,7 +9,7 @@ class TVPaintWorkfileCreator(TVPaintAutoCreator):
label = "Workfile"
icon = "fa.file-o"
- def apply_settings(self, project_settings, system_settings):
+ def apply_settings(self, project_settings):
plugin_settings = (
project_settings["tvpaint"]["create"]["create_workfile"]
)
diff --git a/openpype/hosts/tvpaint/plugins/load/load_reference_image.py b/openpype/hosts/tvpaint/plugins/load/load_reference_image.py
index edc116a8e4..3707ef97aa 100644
--- a/openpype/hosts/tvpaint/plugins/load/load_reference_image.py
+++ b/openpype/hosts/tvpaint/plugins/load/load_reference_image.py
@@ -171,7 +171,7 @@ class LoadImage(plugin.Loader):
george_script = "\n".join(george_script_lines)
execute_george_through_file(george_script)
- def _remove_container(self, container, members=None):
+ def _remove_container(self, container):
if not container:
return
representation = container["representation"]
diff --git a/openpype/hosts/tvpaint/plugins/publish/extract_sequence.py b/openpype/hosts/tvpaint/plugins/publish/extract_sequence.py
index 8a610cf388..a13a91de46 100644
--- a/openpype/hosts/tvpaint/plugins/publish/extract_sequence.py
+++ b/openpype/hosts/tvpaint/plugins/publish/extract_sequence.py
@@ -63,7 +63,6 @@ class ExtractSequence(pyblish.api.Extractor):
"ignoreLayersTransparency", False
)
- family_lowered = instance.data["family"].lower()
mark_in = instance.context.data["sceneMarkIn"]
mark_out = instance.context.data["sceneMarkOut"]
@@ -76,11 +75,9 @@ class ExtractSequence(pyblish.api.Extractor):
# Frame start/end may be stored as float
frame_start = int(instance.data["frameStart"])
- frame_end = int(instance.data["frameEnd"])
# Handles are not stored per instance but on Context
handle_start = instance.context.data["handleStart"]
- handle_end = instance.context.data["handleEnd"]
scene_bg_color = instance.context.data["sceneBgColor"]
diff --git a/openpype/hosts/unreal/hooks/pre_workfile_preparation.py b/openpype/hosts/unreal/hooks/pre_workfile_preparation.py
index 202d7854f6..a635bd4cab 100644
--- a/openpype/hosts/unreal/hooks/pre_workfile_preparation.py
+++ b/openpype/hosts/unreal/hooks/pre_workfile_preparation.py
@@ -2,6 +2,8 @@
"""Hook to launch Unreal and prepare projects."""
import os
import copy
+import shutil
+import tempfile
from pathlib import Path
from qtpy import QtCore
@@ -224,10 +226,24 @@ class UnrealPrelaunchHook(PreLaunchHook):
project_file = project_path / unreal_project_filename
if not project_file.is_file():
- self.exec_ue_project_gen(engine_version,
- unreal_project_name,
- engine_path,
- project_path)
+ with tempfile.TemporaryDirectory() as temp_dir:
+ self.exec_ue_project_gen(engine_version,
+ unreal_project_name,
+ engine_path,
+ Path(temp_dir))
+ try:
+ self.log.info((
+ f"Moving from {temp_dir} to "
+ f"{project_path.as_posix()}"
+ ))
+ shutil.copytree(
+ temp_dir, project_path, dirs_exist_ok=True)
+
+ except shutil.Error as e:
+ raise ApplicationLaunchFailed((
+ f"{self.signature} Cannot copy directory {temp_dir} "
+ f"to {project_path.as_posix()} - {e}"
+ )) from e
self.launch_context.env["AYON_UNREAL_VERSION"] = engine_version
# Append project file to launch arguments
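The prelaunch hook above now generates the Unreal project into a temporary directory and only then copies it into place, so a failed generation cannot leave a half-written project behind. A minimal sketch of that pattern, with a hypothetical stand-in for `exec_ue_project_gen`:

```python
import shutil
import tempfile
from pathlib import Path

def generate_project(target: Path) -> None:
    # Hypothetical stand-in for exec_ue_project_gen: write a dummy file.
    (target / "Project.uproject").write_text("{}")

def safe_generate(project_path: Path) -> None:
    # Generate into a temp dir first, then copy into the final location.
    with tempfile.TemporaryDirectory() as temp_dir:
        generate_project(Path(temp_dir))
        try:
            # dirs_exist_ok requires Python 3.8+
            shutil.copytree(temp_dir, project_path, dirs_exist_ok=True)
        except shutil.Error as exc:
            raise RuntimeError(
                "Cannot copy {} to {}".format(temp_dir, project_path)
            ) from exc

with tempfile.TemporaryDirectory() as dest_root:
    dest = Path(dest_root) / "MyProject"
    safe_generate(dest)
    generated_ok = (dest / "Project.uproject").is_file()
```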
diff --git a/openpype/hosts/unreal/plugins/publish/extract_uasset.py b/openpype/hosts/unreal/plugins/publish/extract_uasset.py
index 48b62faa97..0dd7ff4a0d 100644
--- a/openpype/hosts/unreal/plugins/publish/extract_uasset.py
+++ b/openpype/hosts/unreal/plugins/publish/extract_uasset.py
@@ -19,9 +19,8 @@ class ExtractUAsset(publish.Extractor):
"umap" if "umap" in instance.data.get("families") else "uasset")
ar = unreal.AssetRegistryHelpers.get_asset_registry()
- self.log.info("Performing extraction..")
+ self.log.debug("Performing extraction..")
staging_dir = self.staging_dir(instance)
- filename = f"{instance.name}.{extension}"
members = instance.data.get("members", [])
diff --git a/openpype/hosts/webpublisher/webserver_service/webpublish_routes.py b/openpype/hosts/webpublisher/webserver_service/webpublish_routes.py
index e56f245d27..20d585e906 100644
--- a/openpype/hosts/webpublisher/webserver_service/webpublish_routes.py
+++ b/openpype/hosts/webpublisher/webserver_service/webpublish_routes.py
@@ -280,13 +280,14 @@ class BatchPublishEndpoint(WebpublishApiEndpoint):
for key, value in add_args.items():
# Skip key values where value is None
- if value is not None:
- args.append("--{}".format(key))
- # Extend list into arguments (targets can be a list)
- if isinstance(value, (tuple, list)):
- args.extend(value)
- else:
- args.append(value)
+ if value is None:
+ continue
+ arg_key = "--{}".format(key)
+ if not isinstance(value, (tuple, list)):
+ value = [value]
+
+ for item in value:
+ args += [arg_key, item]
log.info("args:: {}".format(args))
if add_to_queue:
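The refactor above flattens each key/value pair into repeated `--key value` arguments, repeating the flag once per item for list values. Extracted as a standalone sketch:

```python
def build_cli_args(add_args):
    # Flatten a mapping into CLI arguments, repeating the flag for
    # list/tuple values and skipping keys whose value is None.
    args = []
    for key, value in add_args.items():
        if value is None:
            continue
        arg_key = "--{}".format(key)
        if not isinstance(value, (tuple, list)):
            value = [value]
        for item in value:
            args += [arg_key, item]
    return args

result = build_cli_args({"targets": ["a", "b"], "user": "jo", "skip": None})
```

Note the behavioral change from the old code: a list such as `targets` now produces `--targets a --targets b` instead of `--targets a b`.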
diff --git a/openpype/lib/attribute_definitions.py b/openpype/lib/attribute_definitions.py
index 6054d2a92a..a71d6cc72a 100644
--- a/openpype/lib/attribute_definitions.py
+++ b/openpype/lib/attribute_definitions.py
@@ -424,17 +424,25 @@ class TextDef(AbstractAttrDef):
class EnumDef(AbstractAttrDef):
- """Enumeration of single item from items.
+ """Enumeration of items.
+
+    Enumeration of a single item from items, or a list of items when
+    multiselection is enabled.
Args:
- items: Items definition that can be converted using
- 'prepare_enum_items'.
- default: Default value. Must be one key(value) from passed items.
+        items (Union[list[str], list[dict[str, Any]]]): Items definition
+            that can be converted using 'prepare_enum_items'.
+ default (Optional[Any]): Default value. Must be one key(value) from
+ passed items or list of values for multiselection.
+ multiselection (Optional[bool]): If True, multiselection is allowed.
+ Output is list of selected items.
"""
type = "enum"
- def __init__(self, key, items, default=None, **kwargs):
+ def __init__(
+ self, key, items, default=None, multiselection=False, **kwargs
+ ):
if not items:
raise ValueError((
"Empty 'items' value. {} must have"
@@ -443,30 +451,44 @@ class EnumDef(AbstractAttrDef):
items = self.prepare_enum_items(items)
item_values = [item["value"] for item in items]
- if default not in item_values:
- for value in item_values:
- default = value
- break
+ item_values_set = set(item_values)
+ if multiselection:
+ if default is None:
+ default = []
+ default = list(item_values_set.intersection(default))
+
+ elif default not in item_values:
+ default = next(iter(item_values), None)
super(EnumDef, self).__init__(key, default=default, **kwargs)
self.items = items
- self._item_values = set(item_values)
+ self._item_values = item_values_set
+ self.multiselection = multiselection
def __eq__(self, other):
if not super(EnumDef, self).__eq__(other):
return False
- return self.items == other.items
+ return (
+ self.items == other.items
+ and self.multiselection == other.multiselection
+ )
def convert_value(self, value):
- if value in self._item_values:
- return value
- return self.default
+ if not self.multiselection:
+ if value in self._item_values:
+ return value
+ return self.default
+
+ if value is None:
+ return copy.deepcopy(self.default)
+ return list(self._item_values.intersection(value))
def serialize(self):
data = super(EnumDef, self).serialize()
data["items"] = copy.deepcopy(self.items)
+ data["multiselection"] = self.multiselection
return data
@staticmethod
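The multiselection behaviour added to `EnumDef.convert_value` can be illustrated with a trimmed-down stand-in class (not the full `EnumDef`, which also handles item preparation and serialization):

```python
class MiniEnum:
    # Minimal stand-in mirroring the multiselection logic in EnumDef.
    def __init__(self, items, default=None, multiselection=False):
        self._item_values = set(items)
        if multiselection:
            # Keep only defaults that are actually valid items.
            default = list(self._item_values.intersection(default or []))
        elif default not in self._item_values:
            default = next(iter(items), None)
        self.default = default
        self.multiselection = multiselection

    def convert_value(self, value):
        if not self.multiselection:
            return value if value in self._item_values else self.default
        if value is None:
            return list(self.default)
        # Unknown selections are silently dropped.
        return list(self._item_values.intersection(value))

single = MiniEnum(["a", "b"], default="b")
multi = MiniEnum(["a", "b", "c"], multiselection=True)
```

Single-selection falls back to the default for unknown values, while multiselection filters the input list against the known items.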
diff --git a/openpype/lib/python_module_tools.py b/openpype/lib/python_module_tools.py
index a10263f991..bedf19562d 100644
--- a/openpype/lib/python_module_tools.py
+++ b/openpype/lib/python_module_tools.py
@@ -270,8 +270,8 @@ def is_func_signature_supported(func, *args, **kwargs):
Args:
func (function): A function where the signature should be tested.
- *args (tuple[Any]): Positional arguments for function signature.
- **kwargs (dict[str, Any]): Keyword arguments for function signature.
+ *args (Any): Positional arguments for function signature.
+ **kwargs (Any): Keyword arguments for function signature.
Returns:
bool: Function can pass in arguments.
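The docstring tweak above concerns a helper used to call plugin methods whose signatures changed (such as the TVPaint `apply_settings` methods earlier in this diff, which dropped `system_settings`). A rough sketch of such a check, assuming an `inspect`-based implementation:

```python
import inspect

def is_func_signature_supported(func, *args, **kwargs):
    # Report whether 'func' could be called with the given arguments,
    # without actually calling it.
    try:
        inspect.signature(func).bind(*args, **kwargs)
    except TypeError:
        return False
    return True

def new_style(project_settings):
    return project_settings

def old_style(project_settings, system_settings):
    return project_settings
```

A caller can probe both signatures and pass `system_settings` only to plugins that still accept it.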
diff --git a/openpype/lib/transcoding.py b/openpype/lib/transcoding.py
index 2bae28786e..6e323f55c1 100644
--- a/openpype/lib/transcoding.py
+++ b/openpype/lib/transcoding.py
@@ -724,7 +724,7 @@ def get_ffprobe_data(path_to_file, logger=None):
"""
if not logger:
logger = logging.getLogger(__name__)
- logger.info(
+ logger.debug(
"Getting information about input \"{}\".".format(path_to_file)
)
ffprobe_args = get_ffmpeg_tool_args("ffprobe")
diff --git a/openpype/modules/deadline/plugins/publish/collect_deadline_server_from_instance.py b/openpype/modules/deadline/plugins/publish/collect_deadline_server_from_instance.py
index 8a408d7f4f..9b4f89c129 100644
--- a/openpype/modules/deadline/plugins/publish/collect_deadline_server_from_instance.py
+++ b/openpype/modules/deadline/plugins/publish/collect_deadline_server_from_instance.py
@@ -24,7 +24,7 @@ class CollectDeadlineServerFromInstance(pyblish.api.InstancePlugin):
instance.data["deadlineUrl"] = self._collect_deadline_url(instance)
instance.data["deadlineUrl"] = \
instance.data["deadlineUrl"].strip().rstrip("/")
- self.log.info(
+ self.log.debug(
"Using {} for submission.".format(instance.data["deadlineUrl"]))
def _collect_deadline_url(self, render_instance):
diff --git a/openpype/modules/deadline/plugins/publish/submit_celaction_deadline.py b/openpype/modules/deadline/plugins/publish/submit_celaction_deadline.py
index ee28612b44..47a0a25755 100644
--- a/openpype/modules/deadline/plugins/publish/submit_celaction_deadline.py
+++ b/openpype/modules/deadline/plugins/publish/submit_celaction_deadline.py
@@ -27,7 +27,7 @@ class CelactionSubmitDeadline(pyblish.api.InstancePlugin):
deadline_job_delay = "00:00:08:00"
def process(self, instance):
- instance.data["toBeRenderedOn"] = "deadline"
+
context = instance.context
# get default deadline webservice url from deadline module
@@ -183,10 +183,10 @@ class CelactionSubmitDeadline(pyblish.api.InstancePlugin):
}
plugin = payload["JobInfo"]["Plugin"]
- self.log.info("using render plugin : {}".format(plugin))
+ self.log.debug("using render plugin : {}".format(plugin))
- self.log.info("Submitting..")
- self.log.info(json.dumps(payload, indent=4, sort_keys=True))
+ self.log.debug("Submitting..")
+ self.log.debug(json.dumps(payload, indent=4, sort_keys=True))
# adding expectied files to instance.data
self.expected_files(instance, render_path)
diff --git a/openpype/modules/deadline/plugins/publish/submit_fusion_deadline.py b/openpype/modules/deadline/plugins/publish/submit_fusion_deadline.py
index a48596c6bf..70aa12956d 100644
--- a/openpype/modules/deadline/plugins/publish/submit_fusion_deadline.py
+++ b/openpype/modules/deadline/plugins/publish/submit_fusion_deadline.py
@@ -233,8 +233,8 @@ class FusionSubmitDeadline(
) for index, key in enumerate(environment)
})
- self.log.info("Submitting..")
- self.log.info(json.dumps(payload, indent=4, sort_keys=True))
+ self.log.debug("Submitting..")
+ self.log.debug(json.dumps(payload, indent=4, sort_keys=True))
# E.g. http://192.168.0.1:8082/api/jobs
url = "{}/api/jobs".format(deadline_url)
diff --git a/openpype/modules/deadline/plugins/publish/submit_harmony_deadline.py b/openpype/modules/deadline/plugins/publish/submit_harmony_deadline.py
index 2c37268f04..17e672334c 100644
--- a/openpype/modules/deadline/plugins/publish/submit_harmony_deadline.py
+++ b/openpype/modules/deadline/plugins/publish/submit_harmony_deadline.py
@@ -265,7 +265,7 @@ class HarmonySubmitDeadline(
job_info.SecondaryPool = self._instance.data.get("secondaryPool")
job_info.ChunkSize = self.chunk_size
batch_name = os.path.basename(self._instance.data["source"])
- if is_in_tests:
+ if is_in_tests():
batch_name += datetime.now().strftime("%d%m%Y%H%M%S")
job_info.BatchName = batch_name
job_info.Department = self.department
@@ -369,7 +369,7 @@ class HarmonySubmitDeadline(
# rendering, we need to unzip it.
published_scene = Path(
self.from_published_scene(False))
- self.log.info(f"Processing {published_scene.as_posix()}")
+ self.log.debug(f"Processing {published_scene.as_posix()}")
xstage_path = self._unzip_scene_file(published_scene)
render_path = xstage_path.parent / "renders"
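The `is_in_tests` fix above corrects a classic truthiness bug: referencing a function object in a condition is always true, so the timestamp was appended to the batch name unconditionally. A self-contained illustration, with a stand-in helper:

```python
from datetime import datetime

def is_in_tests():
    # Stand-in for the real helper; pretend we are not in tests.
    return False

batch_name = "scene"

# Bug the diff fixes: a bare function reference is always truthy.
if is_in_tests:
    buggy_branch_taken = True

# Correct: call the function and branch on its return value.
if is_in_tests():
    batch_name += datetime.now().strftime("%d%m%Y%H%M%S")
```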
diff --git a/openpype/modules/deadline/plugins/publish/submit_houdini_remote_publish.py b/openpype/modules/deadline/plugins/publish/submit_houdini_remote_publish.py
index 68aa653804..39c0c3afe4 100644
--- a/openpype/modules/deadline/plugins/publish/submit_houdini_remote_publish.py
+++ b/openpype/modules/deadline/plugins/publish/submit_houdini_remote_publish.py
@@ -162,7 +162,7 @@ class HoudiniSubmitPublishDeadline(pyblish.api.ContextPlugin):
)
# Submit
- self.log.info("Submitting..")
+ self.log.debug("Submitting..")
self.log.debug(json.dumps(payload, indent=4, sort_keys=True))
# E.g. http://192.168.0.1:8082/api/jobs
diff --git a/openpype/modules/deadline/plugins/publish/submit_houdini_render_deadline.py b/openpype/modules/deadline/plugins/publish/submit_houdini_render_deadline.py
index 108c377078..8f21a920be 100644
--- a/openpype/modules/deadline/plugins/publish/submit_houdini_render_deadline.py
+++ b/openpype/modules/deadline/plugins/publish/submit_houdini_render_deadline.py
@@ -141,4 +141,3 @@ class HoudiniSubmitDeadline(abstract_submit_deadline.AbstractSubmitDeadline):
# Store output dir for unified publisher (filesequence)
output_dir = os.path.dirname(instance.data["files"][0])
instance.data["outputDir"] = output_dir
- instance.data["toBeRenderedOn"] = "deadline"
diff --git a/openpype/modules/deadline/plugins/publish/submit_max_deadline.py b/openpype/modules/deadline/plugins/publish/submit_max_deadline.py
index d8725e853c..63c6e4a0c7 100644
--- a/openpype/modules/deadline/plugins/publish/submit_max_deadline.py
+++ b/openpype/modules/deadline/plugins/publish/submit_max_deadline.py
@@ -176,7 +176,6 @@ class MaxSubmitDeadline(abstract_submit_deadline.AbstractSubmitDeadline,
first_file = next(self._iter_expected_files(files))
output_dir = os.path.dirname(first_file)
instance.data["outputDir"] = output_dir
- instance.data["toBeRenderedOn"] = "deadline"
filename = os.path.basename(filepath)
@@ -238,7 +237,10 @@ class MaxSubmitDeadline(abstract_submit_deadline.AbstractSubmitDeadline,
if renderer == "Redshift_Renderer":
plugin_data["redshift_SeparateAovFiles"] = instance.data.get(
"separateAovFiles")
-
+ if instance.data["cameras"]:
+ plugin_info["Camera0"] = None
+ plugin_info["Camera"] = instance.data["cameras"][0]
+ plugin_info["Camera1"] = instance.data["cameras"][0]
self.log.debug("plugin data:{}".format(plugin_data))
plugin_info.update(plugin_data)
diff --git a/openpype/modules/deadline/plugins/publish/submit_maya_deadline.py b/openpype/modules/deadline/plugins/publish/submit_maya_deadline.py
index 75d24b28f0..74ecdbe7bf 100644
--- a/openpype/modules/deadline/plugins/publish/submit_maya_deadline.py
+++ b/openpype/modules/deadline/plugins/publish/submit_maya_deadline.py
@@ -300,7 +300,6 @@ class MayaSubmitDeadline(abstract_submit_deadline.AbstractSubmitDeadline,
first_file = next(iter_expected_files(expected_files))
output_dir = os.path.dirname(first_file)
instance.data["outputDir"] = output_dir
- instance.data["toBeRenderedOn"] = "deadline"
# Patch workfile (only when use_published is enabled)
if self.use_published:
@@ -335,12 +334,6 @@ class MayaSubmitDeadline(abstract_submit_deadline.AbstractSubmitDeadline,
payload = self._get_vray_render_payload(payload_data)
- elif "assscene" in instance.data["families"]:
- self.log.debug("Submitting Arnold .ass standalone render..")
- ass_export_payload = self._get_arnold_export_payload(payload_data)
- export_job = self.submit(ass_export_payload)
-
- payload = self._get_arnold_render_payload(payload_data)
else:
self.log.debug("Submitting MayaBatch render..")
payload = self._get_maya_payload(payload_data)
@@ -434,7 +427,7 @@ class MayaSubmitDeadline(abstract_submit_deadline.AbstractSubmitDeadline,
new_job_info.update(tiles_data["JobInfo"])
new_plugin_info.update(tiles_data["PluginInfo"])
- self.log.info("hashing {} - {}".format(file_index, file))
+ self.log.debug("hashing {} - {}".format(file_index, file))
job_hash = hashlib.sha256(
("{}_{}".format(file_index, file)).encode("utf-8"))
@@ -450,7 +443,7 @@ class MayaSubmitDeadline(abstract_submit_deadline.AbstractSubmitDeadline,
)
file_index += 1
- self.log.info(
+ self.log.debug(
"Submitting tile job(s) [{}] ...".format(len(frame_payloads)))
# Submit frame tile jobs
@@ -560,7 +553,7 @@ class MayaSubmitDeadline(abstract_submit_deadline.AbstractSubmitDeadline,
assembly_job_ids = []
num_assemblies = len(assembly_payloads)
for i, payload in enumerate(assembly_payloads):
- self.log.info(
+ self.log.debug(
"submitting assembly job {} of {}".format(i + 1,
num_assemblies)
)
@@ -636,53 +629,6 @@ class MayaSubmitDeadline(abstract_submit_deadline.AbstractSubmitDeadline,
return job_info, attr.asdict(plugin_info)
- def _get_arnold_export_payload(self, data):
-
- try:
- from openpype.scripts import export_maya_ass_job
- except Exception:
- raise AssertionError(
- "Expected module 'export_maya_ass_job' to be available")
-
- module_path = export_maya_ass_job.__file__
- if module_path.endswith(".pyc"):
- module_path = module_path[: -len(".pyc")] + ".py"
-
- script = os.path.normpath(module_path)
-
- job_info = copy.deepcopy(self.job_info)
- job_info.Name = self._job_info_label("Export")
-
- # Force a single frame Python job
- job_info.Plugin = "Python"
- job_info.Frames = 1
-
- renderlayer = self._instance.data["setMembers"]
-
- # add required env vars for the export script
- envs = {
- "AVALON_APP_NAME": os.environ.get("AVALON_APP_NAME"),
- "OPENPYPE_ASS_EXPORT_RENDER_LAYER": renderlayer,
- "OPENPYPE_ASS_EXPORT_SCENE_FILE": self.scene_path,
- "OPENPYPE_ASS_EXPORT_OUTPUT": job_info.OutputFilename[0],
- "OPENPYPE_ASS_EXPORT_START": int(self._instance.data["frameStartHandle"]), # noqa
- "OPENPYPE_ASS_EXPORT_END": int(self._instance.data["frameEndHandle"]), # noqa
- "OPENPYPE_ASS_EXPORT_STEP": 1
- }
- for key, value in envs.items():
- if not value:
- continue
- job_info.EnvironmentKeyValue[key] = value
-
- plugin_info = PythonPluginInfo(
- ScriptFile=script,
- Version="3.6",
- Arguments="",
- SingleFrameOnly="True"
- )
-
- return job_info, attr.asdict(plugin_info)
-
def _get_vray_render_payload(self, data):
# Job Info
diff --git a/openpype/modules/deadline/plugins/publish/submit_nuke_deadline.py b/openpype/modules/deadline/plugins/publish/submit_nuke_deadline.py
index cfdeb4968b..0295c2b760 100644
--- a/openpype/modules/deadline/plugins/publish/submit_nuke_deadline.py
+++ b/openpype/modules/deadline/plugins/publish/submit_nuke_deadline.py
@@ -97,7 +97,6 @@ class NukeSubmitDeadline(pyblish.api.InstancePlugin,
instance.data["suspend_publish"] = instance.data["attributeValues"][
"suspend_publish"]
- instance.data["toBeRenderedOn"] = "deadline"
families = instance.data["families"]
node = instance.data["transientData"]["node"]
@@ -244,7 +243,7 @@ class NukeSubmitDeadline(pyblish.api.InstancePlugin,
# resolve any limit groups
limit_groups = self.get_limit_groups()
- self.log.info("Limit groups: `{}`".format(limit_groups))
+ self.log.debug("Limit groups: `{}`".format(limit_groups))
payload = {
"JobInfo": {
@@ -387,10 +386,10 @@ class NukeSubmitDeadline(pyblish.api.InstancePlugin,
})
plugin = payload["JobInfo"]["Plugin"]
- self.log.info("using render plugin : {}".format(plugin))
+ self.log.debug("using render plugin : {}".format(plugin))
- self.log.info("Submitting..")
- self.log.info(json.dumps(payload, indent=4, sort_keys=True))
+ self.log.debug("Submitting..")
+ self.log.debug(json.dumps(payload, indent=4, sort_keys=True))
# adding expectied files to instance.data
self.expected_files(
diff --git a/openpype/modules/deadline/plugins/publish/submit_publish_job.py b/openpype/modules/deadline/plugins/publish/submit_publish_job.py
index bf4411ef43..20bebe583f 100644
--- a/openpype/modules/deadline/plugins/publish/submit_publish_job.py
+++ b/openpype/modules/deadline/plugins/publish/submit_publish_job.py
@@ -317,7 +317,7 @@ class ProcessSubmittedJobOnFarm(pyblish.api.InstancePlugin,
# remove secondary pool
payload["JobInfo"].pop("SecondaryPool", None)
- self.log.info("Submitting Deadline job ...")
+ self.log.debug("Submitting Deadline publish job ...")
url = "{}/api/jobs".format(self.deadline_url)
response = requests.post(url, json=payload, timeout=10)
@@ -454,7 +454,7 @@ class ProcessSubmittedJobOnFarm(pyblish.api.InstancePlugin,
import getpass
render_job = {}
- self.log.info("Faking job data ...")
+ self.log.debug("Faking job data ...")
render_job["Props"] = {}
# Render job doesn't exist because we do not have prior submission.
# We still use data from it so lets fake it.
diff --git a/openpype/modules/deadline/plugins/publish/validate_expected_and_rendered_files.py b/openpype/modules/deadline/plugins/publish/validate_expected_and_rendered_files.py
index 9f1f7bc518..5d37e7357e 100644
--- a/openpype/modules/deadline/plugins/publish/validate_expected_and_rendered_files.py
+++ b/openpype/modules/deadline/plugins/publish/validate_expected_and_rendered_files.py
@@ -70,7 +70,10 @@ class ValidateExpectedFiles(pyblish.api.InstancePlugin):
# Update the representation expected files
self.log.info("Update range from actual job range "
"to frame list: {}".format(frame_list))
- repre["files"] = sorted(job_expected_files)
+            # a single-item "files" value must be a string, not a list
+ repre["files"] = (sorted(job_expected_files)
+ if len(job_expected_files) > 1 else
+ list(job_expected_files)[0])
# Update the expected files
expected_files = job_expected_files
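The hunk above normalizes the representation's `"files"` value: a plain string for a single file, a sorted list for sequences. Extracted as a small helper sketch:

```python
def normalize_repre_files(job_expected_files):
    # Representation "files" must be a plain string for a single file
    # and a sorted list for frame sequences.
    files = sorted(job_expected_files)
    if len(files) == 1:
        return files[0]
    return files

seq = normalize_repre_files({"shot.0002.exr", "shot.0001.exr"})
single = normalize_repre_files({"shot.mov"})
```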
diff --git a/openpype/modules/deadline/repository/custom/plugins/HarmonyOpenPype/HarmonyOpenPype.param b/openpype/modules/deadline/repository/custom/plugins/HarmonyOpenPype/HarmonyOpenPype.param
index ff2949766c..43a54a464e 100644
--- a/openpype/modules/deadline/repository/custom/plugins/HarmonyOpenPype/HarmonyOpenPype.param
+++ b/openpype/modules/deadline/repository/custom/plugins/HarmonyOpenPype/HarmonyOpenPype.param
@@ -77,4 +77,22 @@ CategoryOrder=0
Index=4
Label=Harmony 20 Render Executable
Description=The path to the Harmony Render executable file used for rendering. Enter alternative paths on separate lines.
-Default=c:\Program Files (x86)\Toon Boom Animation\Toon Boom Harmony 20 Premium\win64\bin\HarmonyPremium.exe;/Applications/Toon Boom Harmony 20 Premium/Harmony Premium.app/Contents/MacOS/Harmony Premium;/usr/local/ToonBoomAnimation/harmonyPremium_20/lnx86_64/bin/HarmonyPremium
\ No newline at end of file
+Default=c:\Program Files (x86)\Toon Boom Animation\Toon Boom Harmony 20 Premium\win64\bin\HarmonyPremium.exe;/Applications/Toon Boom Harmony 20 Premium/Harmony Premium.app/Contents/MacOS/Harmony Premium;/usr/local/ToonBoomAnimation/harmonyPremium_20/lnx86_64/bin/HarmonyPremium
+
+[Harmony_RenderExecutable_21]
+Type=multilinemultifilename
+Category=Render Executables
+CategoryOrder=0
+Index=4
+Label=Harmony 21 Render Executable
+Description=The path to the Harmony Render executable file used for rendering. Enter alternative paths on separate lines.
+Default=c:\Program Files (x86)\Toon Boom Animation\Toon Boom Harmony 21 Premium\win64\bin\HarmonyPremium.exe;/Applications/Toon Boom Harmony 21 Premium/Harmony Premium.app/Contents/MacOS/Harmony Premium;/usr/local/ToonBoomAnimation/harmonyPremium_21/lnx86_64/bin/HarmonyPremium
+
+[Harmony_RenderExecutable_22]
+Type=multilinemultifilename
+Category=Render Executables
+CategoryOrder=0
+Index=4
+Label=Harmony 22 Render Executable
+Description=The path to the Harmony Render executable file used for rendering. Enter alternative paths on separate lines.
+Default=c:\Program Files (x86)\Toon Boom Animation\Toon Boom Harmony 22 Premium\win64\bin\HarmonyPremium.exe;/Applications/Toon Boom Harmony 22 Premium/Harmony Premium.app/Contents/MacOS/Harmony Premium;/usr/local/ToonBoomAnimation/harmonyPremium_22/lnx86_64/bin/HarmonyPremium
diff --git a/openpype/modules/deadline/repository/custom/plugins/HarmonyOpenPype/HarmonyOpenPype.py b/openpype/modules/deadline/repository/custom/plugins/HarmonyOpenPype/HarmonyOpenPype.py
index 2f6e9cf379..32ed76b58d 100644
--- a/openpype/modules/deadline/repository/custom/plugins/HarmonyOpenPype/HarmonyOpenPype.py
+++ b/openpype/modules/deadline/repository/custom/plugins/HarmonyOpenPype/HarmonyOpenPype.py
@@ -1,3 +1,4 @@
+#!/usr/bin/env python3
from System import *
from System.Diagnostics import *
from System.IO import *
diff --git a/openpype/modules/ftrack/plugins/publish/integrate_ftrack_api.py b/openpype/modules/ftrack/plugins/publish/integrate_ftrack_api.py
index 4d474fab10..858c0bb2d6 100644
--- a/openpype/modules/ftrack/plugins/publish/integrate_ftrack_api.py
+++ b/openpype/modules/ftrack/plugins/publish/integrate_ftrack_api.py
@@ -27,8 +27,8 @@ class IntegrateFtrackApi(pyblish.api.InstancePlugin):
def process(self, instance):
component_list = instance.data.get("ftrackComponentsList")
if not component_list:
- self.log.info(
- "Instance don't have components to integrate to Ftrack."
+ self.log.debug(
+ "Instance doesn't have components to integrate to Ftrack."
" Skipping."
)
return
@@ -37,7 +37,7 @@ class IntegrateFtrackApi(pyblish.api.InstancePlugin):
task_entity, parent_entity = self.get_instance_entities(
instance, context)
if parent_entity is None:
- self.log.info((
+ self.log.debug((
"Skipping ftrack integration. Instance \"{}\" does not"
" have specified ftrack entities."
).format(str(instance)))
@@ -323,7 +323,7 @@ class IntegrateFtrackApi(pyblish.api.InstancePlugin):
"type_id": asset_type_id,
"context_id": parent_id
}
- self.log.info("Created new Asset with data: {}.".format(asset_data))
+ self.log.debug("Created new Asset with data: {}.".format(asset_data))
session.create("Asset", asset_data)
session.commit()
return self._query_asset(session, asset_name, asset_type_id, parent_id)
@@ -384,7 +384,7 @@ class IntegrateFtrackApi(pyblish.api.InstancePlugin):
if comment:
new_asset_version_data["comment"] = comment
- self.log.info("Created new AssetVersion with data {}".format(
+ self.log.debug("Created new AssetVersion with data {}".format(
new_asset_version_data
))
session.create("AssetVersion", new_asset_version_data)
@@ -555,7 +555,7 @@ class IntegrateFtrackApi(pyblish.api.InstancePlugin):
location=location
)
data["component"] = component_entity
- self.log.info(
+ self.log.debug(
(
"Created new Component with path: {0}, data: {1},"
" metadata: {2}, location: {3}"
diff --git a/openpype/modules/ftrack/plugins/publish/integrate_ftrack_description.py b/openpype/modules/ftrack/plugins/publish/integrate_ftrack_description.py
index 6ed02bc8b6..ceaff8ff54 100644
--- a/openpype/modules/ftrack/plugins/publish/integrate_ftrack_description.py
+++ b/openpype/modules/ftrack/plugins/publish/integrate_ftrack_description.py
@@ -40,7 +40,7 @@ class IntegrateFtrackDescription(pyblish.api.InstancePlugin):
comment = instance.data["comment"]
if not comment:
- self.log.info("Comment is not set.")
+ self.log.debug("Comment is not set.")
else:
self.log.debug("Comment is set to `{}`".format(comment))
diff --git a/openpype/modules/ftrack/plugins/publish/integrate_ftrack_note.py b/openpype/modules/ftrack/plugins/publish/integrate_ftrack_note.py
index 6e82897d89..10b7932cdf 100644
--- a/openpype/modules/ftrack/plugins/publish/integrate_ftrack_note.py
+++ b/openpype/modules/ftrack/plugins/publish/integrate_ftrack_note.py
@@ -47,7 +47,7 @@ class IntegrateFtrackNote(pyblish.api.InstancePlugin):
app_label = context.data["appLabel"]
comment = instance.data["comment"]
if not comment:
- self.log.info("Comment is not set.")
+ self.log.debug("Comment is not set.")
else:
self.log.debug("Comment is set to `{}`".format(comment))
@@ -127,14 +127,14 @@ class IntegrateFtrackNote(pyblish.api.InstancePlugin):
note_text = StringTemplate.format_template(template, format_data)
if not note_text.solved:
- self.log.warning((
+ self.log.debug((
"Note template require more keys then can be provided."
"\nTemplate: {}\nMissing values for keys:{}\nData: {}"
).format(template, note_text.missing_keys, format_data))
continue
if not note_text:
- self.log.info((
+ self.log.debug((
"Note for AssetVersion {} would be empty. Skipping."
"\nTemplate: {}\nData: {}"
).format(asset_version["id"], template, format_data))
diff --git a/openpype/modules/kitsu/plugins/publish/integrate_kitsu_note.py b/openpype/modules/kitsu/plugins/publish/integrate_kitsu_note.py
index 6e5dd056f3..b66e1f01e0 100644
--- a/openpype/modules/kitsu/plugins/publish/integrate_kitsu_note.py
+++ b/openpype/modules/kitsu/plugins/publish/integrate_kitsu_note.py
@@ -121,7 +121,7 @@ class IntegrateKitsuNote(pyblish.api.ContextPlugin):
publish_comment = self.format_publish_comment(instance)
if not publish_comment:
- self.log.info("Comment is not set.")
+ self.log.debug("Comment is not set.")
else:
self.log.debug("Comment is `{}`".format(publish_comment))
diff --git a/openpype/pipeline/colorspace.py b/openpype/pipeline/colorspace.py
index 731132911a..44e5cb6c47 100644
--- a/openpype/pipeline/colorspace.py
+++ b/openpype/pipeline/colorspace.py
@@ -13,12 +13,17 @@ from openpype.lib import (
Logger
)
from openpype.pipeline import Anatomy
+from openpype.lib.transcoding import VIDEO_EXTENSIONS, IMAGE_EXTENSIONS
+
log = Logger.get_logger(__name__)
-class CashedData:
- remapping = None
+class CachedData:
+ remapping = {}
+ allowed_exts = {
+ ext.lstrip(".") for ext in IMAGE_EXTENSIONS.union(VIDEO_EXTENSIONS)
+ }
@contextlib.contextmanager
@@ -546,15 +551,15 @@ def get_remapped_colorspace_to_native(
Union[str, None]: native colorspace name defined in remapping or None
"""
- CashedData.remapping.setdefault(host_name, {})
- if CashedData.remapping[host_name].get("to_native") is None:
+ CachedData.remapping.setdefault(host_name, {})
+ if CachedData.remapping[host_name].get("to_native") is None:
remapping_rules = imageio_host_settings["remapping"]["rules"]
- CashedData.remapping[host_name]["to_native"] = {
+ CachedData.remapping[host_name]["to_native"] = {
rule["ocio_name"]: rule["host_native_name"]
for rule in remapping_rules
}
- return CashedData.remapping[host_name]["to_native"].get(
+ return CachedData.remapping[host_name]["to_native"].get(
ocio_colorspace_name)
@@ -572,15 +577,15 @@ def get_remapped_colorspace_from_native(
Union[str, None]: Ocio colorspace name defined in remapping or None.
"""
- CashedData.remapping.setdefault(host_name, {})
- if CashedData.remapping[host_name].get("from_native") is None:
+ CachedData.remapping.setdefault(host_name, {})
+ if CachedData.remapping[host_name].get("from_native") is None:
remapping_rules = imageio_host_settings["remapping"]["rules"]
- CashedData.remapping[host_name]["from_native"] = {
+ CachedData.remapping[host_name]["from_native"] = {
rule["host_native_name"]: rule["ocio_name"]
for rule in remapping_rules
}
- return CashedData.remapping[host_name]["from_native"].get(
+ return CachedData.remapping[host_name]["from_native"].get(
host_native_colorspace_name)
@@ -601,3 +606,173 @@ def _get_imageio_settings(project_settings, host_name):
imageio_host = project_settings.get(host_name, {}).get("imageio", {})
return imageio_global, imageio_host
+
+
+def get_colorspace_settings_from_publish_context(context_data):
+ """Returns solved settings for the host context.
+
+ Args:
+ context_data (publish.Context.data): publishing context data
+
+ Returns:
+ tuple: (config_data, file_rules); file_rules is None when config data is not resolved.
+ """
+ if "imageioSettings" in context_data and context_data["imageioSettings"]:
+ return context_data["imageioSettings"]
+
+ project_name = context_data["projectName"]
+ host_name = context_data["hostName"]
+ anatomy_data = context_data["anatomyData"]
+ project_settings_ = context_data["project_settings"]
+
+ config_data = get_imageio_config(
+ project_name, host_name,
+ project_settings=project_settings_,
+ anatomy_data=anatomy_data
+ )
+
+ # caching invalid state, so it's not recalculated all the time
+ file_rules = None
+ if config_data:
+ file_rules = get_imageio_file_rules(
+ project_name, host_name,
+ project_settings=project_settings_
+ )
+
+ # caching settings for future instance processing
+ context_data["imageioSettings"] = (config_data, file_rules)
+
+ return config_data, file_rules
+
+
+def set_colorspace_data_to_representation(
+ representation, context_data,
+ colorspace=None,
+ log=None
+):
+ """Sets colorspace data to representation.
+
+ Args:
+ representation (dict): publishing representation
+ context_data (publish.Context.data): publishing context data
+ colorspace (str, optional): colorspace name. Defaults to None.
+ log (logging.Logger, optional): logger instance. Defaults to None.
+
+ Example:
+ ```
+ {
+ # for other publish plugins and loaders
+ "colorspace": "linear",
+ "config": {
+ # for future references in case need
+ "path": "/abs/path/to/config.ocio",
+ # for other plugins within remote publish cases
+ "template": "{project[root]}/path/to/config.ocio"
+ }
+ }
+ ```
+
+ """
+ log = log or Logger.get_logger(__name__)
+
+ file_ext = representation["ext"]
+
+ # check if `file_ext` in lower case is in CachedData.allowed_exts
+ if file_ext.lstrip(".").lower() not in CachedData.allowed_exts:
+ log.debug(
+ "Extension '{}' is not in allowed extensions.".format(file_ext)
+ )
+ return
+
+ # get colorspace settings
+ config_data, file_rules = get_colorspace_settings_from_publish_context(
+ context_data)
+
+ # in case host color management is not enabled
+ if not config_data:
+ log.warning("Host's colorspace management is disabled.")
+ return
+
+ log.debug("Config data is: `{}`".format(config_data))
+
+ project_name = context_data["projectName"]
+ host_name = context_data["hostName"]
+ project_settings = context_data["project_settings"]
+
+ # get one filename
+ filename = representation["files"]
+ if isinstance(filename, list):
+ filename = filename[0]
+
+ # get matching colorspace from rules
+ colorspace = colorspace or get_imageio_colorspace_from_filepath(
+ filename, host_name, project_name,
+ config_data=config_data,
+ file_rules=file_rules,
+ project_settings=project_settings
+ )
+
+ # infuse data to representation
+ if colorspace:
+ colorspace_data = {
+ "colorspace": colorspace,
+ "config": config_data
+ }
+
+ # update data key
+ representation["colorspaceData"] = colorspace_data
+
+
+def get_display_view_colorspace_name(config_path, display, view):
+ """Returns the colorspace attribute of the (display, view) pair.
+
+ Args:
+ config_path (str): path string leading to config.ocio
+ display (str): display name e.g. "ACES"
+ view (str): view name e.g. "sRGB"
+
+ Returns:
+ str: view colorspace name, e.g. "Output - sRGB"
+ """
+
+ if not compatibility_check():
+ # python environment is not compatible with PyOpenColorIO
+ # needs to be run in subprocess
+ return get_display_view_colorspace_subprocess(config_path,
+ display, view)
+
+ from openpype.scripts.ocio_wrapper import _get_display_view_colorspace_name # noqa
+
+ return _get_display_view_colorspace_name(config_path, display, view)
+
+
+def get_display_view_colorspace_subprocess(config_path, display, view):
+ """Returns the colorspace attribute of the (display, view) pair
+ via subprocess.
+
+ Args:
+ config_path (str): path string leading to config.ocio
+ display (str): display name e.g. "ACES"
+ view (str): view name e.g. "sRGB"
+
+ Returns:
+ str: view colorspace name, e.g. "Output - sRGB"
+ """
+
+ with _make_temp_json_file() as tmp_json_path:
+ # Prepare subprocess arguments
+ args = [
+ "run", get_ocio_config_script_path(),
+ "config", "get_display_view_colorspace_name",
+ "--in_path", config_path,
+ "--out_path", tmp_json_path,
+ "--display", display,
+ "--view", view
+ ]
+ log.debug("Executing: {}".format(" ".join(args)))
+
+ run_openpype_process(*args, logger=log)
+
+ # read the resolved view colorspace name from the temp json file
+ with open(tmp_json_path, "r") as f:
+ return json.load(f)
diff --git a/openpype/pipeline/create/context.py b/openpype/pipeline/create/context.py
index 3076efcde7..f9e3f86652 100644
--- a/openpype/pipeline/create/context.py
+++ b/openpype/pipeline/create/context.py
@@ -1774,7 +1774,7 @@ class CreateContext:
self.creator_discover_result = report
for creator_class in report.plugins:
if inspect.isabstract(creator_class):
- self.log.info(
+ self.log.debug(
"Skipping abstract Creator {}".format(str(creator_class))
)
continue
@@ -1804,6 +1804,7 @@ class CreateContext:
self,
self.headless
)
+
if not creator.enabled:
disabled_creators[creator_identifier] = creator
continue
diff --git a/openpype/pipeline/create/creator_plugins.py b/openpype/pipeline/create/creator_plugins.py
index 38d6b6f465..6aa08cae70 100644
--- a/openpype/pipeline/create/creator_plugins.py
+++ b/openpype/pipeline/create/creator_plugins.py
@@ -1,16 +1,12 @@
import copy
import collections
-from abc import (
- ABCMeta,
- abstractmethod,
- abstractproperty
-)
+from abc import ABCMeta, abstractmethod
import six
from openpype.settings import get_system_settings, get_project_settings
-from openpype.lib import Logger
+from openpype.lib import Logger, is_func_signature_supported
from openpype.pipeline.plugin_discover import (
discover,
register_plugin,
@@ -84,7 +80,8 @@ class SubsetConvertorPlugin(object):
def host(self):
return self._create_context.host
- @abstractproperty
+ @property
+ @abstractmethod
def identifier(self):
"""Converted identifier.
@@ -161,7 +158,6 @@ class BaseCreator:
Args:
project_settings (Dict[str, Any]): Project settings.
- system_settings (Dict[str, Any]): System settings.
create_context (CreateContext): Context which initialized creator.
headless (bool): Running in headless mode.
"""
@@ -208,10 +204,41 @@ class BaseCreator:
# - we may use UI inside processing this attribute should be checked
self.headless = headless
- self.apply_settings(project_settings, system_settings)
+ expect_system_settings = False
+ if is_func_signature_supported(
+ self.apply_settings, project_settings
+ ):
+ self.apply_settings(project_settings)
+ else:
+ expect_system_settings = True
+ # Backwards compatibility for system settings
+ self.apply_settings(project_settings, system_settings)
- def apply_settings(self, project_settings, system_settings):
- """Method called on initialization of plugin to apply settings."""
+ init_use_base = any(
+ self.__class__.__init__ is cls.__init__
+ for cls in {
+ BaseCreator,
+ Creator,
+ HiddenCreator,
+ AutoCreator,
+ }
+ )
+ if not init_use_base or expect_system_settings:
+ self.log.warning((
+ "WARNING: Source - Create plugin {}."
+ " System settings argument will not be passed to"
+ " '__init__' and 'apply_settings' methods in future versions"
+ " of OpenPype. Planned version to drop the support"
+ " is 3.16.6 or 3.17.0. Please contact Ynput core team if you"
+ " need to keep system settings."
+ ).format(self.__class__.__name__))
+
+ def apply_settings(self, project_settings):
+ """Method called on initialization of plugin to apply settings.
+
+ Args:
+ project_settings (dict[str, Any]): Project settings.
+ """
pass
@@ -224,7 +251,8 @@ class BaseCreator:
return self.family
- @abstractproperty
+ @property
+ @abstractmethod
def family(self):
"""Family that plugin represents."""
diff --git a/openpype/pipeline/load/plugins.py b/openpype/pipeline/load/plugins.py
index f87fb3312d..8acfcfdb6c 100644
--- a/openpype/pipeline/load/plugins.py
+++ b/openpype/pipeline/load/plugins.py
@@ -234,6 +234,19 @@ class LoaderPlugin(list):
"""
return cls.options or []
+ @property
+ def fname(self):
+ """Backwards compatibility with deprecation warning"""
+
+ self.log.warning((
+ "DEPRECATION WARNING: Source - Loader plugin {}."
+ " The 'fname' property on the Loader plugin will be removed in"
+ " future versions of OpenPype. Planned version to drop the support"
+ " is 3.16.6 or 3.17.0."
+ ).format(self.__class__.__name__))
+ if hasattr(self, "_fname"):
+ return self._fname
+
class SubsetLoaderPlugin(LoaderPlugin):
"""Load subset into host application
diff --git a/openpype/pipeline/load/utils.py b/openpype/pipeline/load/utils.py
index 42418be40e..b10d6032b3 100644
--- a/openpype/pipeline/load/utils.py
+++ b/openpype/pipeline/load/utils.py
@@ -318,7 +318,8 @@ def load_with_repre_context(
# Backwards compatibility: Originally the loader's __init__ required the
# representation context to set `fname` attribute to the filename to load
- loader.fname = get_representation_path_from_context(repre_context)
+ # Deprecated - to be removed in OpenPype 3.16.6 or 3.17.0.
+ loader._fname = get_representation_path_from_context(repre_context)
return loader.load(repre_context, name, namespace, options)
diff --git a/openpype/pipeline/publish/abstract_collect_render.py b/openpype/pipeline/publish/abstract_collect_render.py
index 6877d556c3..8a26402bd8 100644
--- a/openpype/pipeline/publish/abstract_collect_render.py
+++ b/openpype/pipeline/publish/abstract_collect_render.py
@@ -75,7 +75,6 @@ class RenderInstance(object):
tilesY = attr.ib(default=0) # number of tiles in Y
# submit_publish_job
- toBeRenderedOn = attr.ib(default=None)
deadlineSubmissionJob = attr.ib(default=None)
anatomyData = attr.ib(default=None)
outputDir = attr.ib(default=None)
diff --git a/openpype/pipeline/publish/lib.py b/openpype/pipeline/publish/lib.py
index 815761cd0f..1ae6ea43b2 100644
--- a/openpype/pipeline/publish/lib.py
+++ b/openpype/pipeline/publish/lib.py
@@ -952,6 +952,7 @@ def replace_with_published_scene_path(instance, replace_in_path=True):
return file_path
+
def add_repre_files_for_cleanup(instance, repre):
""" Explicitly mark repre files to be deleted.
@@ -960,7 +961,16 @@ def add_repre_files_for_cleanup(instance, repre):
"""
files = repre["files"]
staging_dir = repre.get("stagingDir")
- if not staging_dir or instance.data.get("stagingDir_persistent"):
+
+ # first make sure representation level is not persistent
+ if (
+ not staging_dir
+ or repre.get("stagingDir_persistent")
+ ):
+ return
+
+ # then look into instance level if it's not persistent
+ if instance.data.get("stagingDir_persistent"):
return
if isinstance(files, str):
diff --git a/openpype/pipeline/publish/publish_plugins.py b/openpype/pipeline/publish/publish_plugins.py
index ba3be6397e..ae6cbc42d1 100644
--- a/openpype/pipeline/publish/publish_plugins.py
+++ b/openpype/pipeline/publish/publish_plugins.py
@@ -1,6 +1,5 @@
import inspect
from abc import ABCMeta
-from pprint import pformat
import pyblish.api
from pyblish.plugin import MetaPlugin, ExplicitMetaPlugin
from openpype.lib.transcoding import VIDEO_EXTENSIONS, IMAGE_EXTENSIONS
@@ -14,9 +13,8 @@ from .lib import (
)
from openpype.pipeline.colorspace import (
- get_imageio_colorspace_from_filepath,
- get_imageio_config,
- get_imageio_file_rules
+ get_colorspace_settings_from_publish_context,
+ set_colorspace_data_to_representation
)
@@ -306,12 +304,8 @@ class ColormanagedPyblishPluginMixin(object):
matching colorspace from rules. Finally, it infuses this
data into the representation.
"""
- allowed_ext = set(
- ext.lstrip(".") for ext in IMAGE_EXTENSIONS.union(VIDEO_EXTENSIONS)
- )
- @staticmethod
- def get_colorspace_settings(context):
+ def get_colorspace_settings(self, context):
"""Returns solved settings for the host context.
Args:
@@ -320,50 +314,18 @@ class ColormanagedPyblishPluginMixin(object):
Returns:
tuple | bool: config, file rules or None
"""
- if "imageioSettings" in context.data:
- return context.data["imageioSettings"]
-
- project_name = context.data["projectName"]
- host_name = context.data["hostName"]
- anatomy_data = context.data["anatomyData"]
- project_settings_ = context.data["project_settings"]
-
- config_data = get_imageio_config(
- project_name, host_name,
- project_settings=project_settings_,
- anatomy_data=anatomy_data
- )
-
- # in case host color management is not enabled
- if not config_data:
- return None
-
- file_rules = get_imageio_file_rules(
- project_name, host_name,
- project_settings=project_settings_
- )
-
- # caching settings for future instance processing
- context.data["imageioSettings"] = (config_data, file_rules)
-
- return config_data, file_rules
+ return get_colorspace_settings_from_publish_context(context.data)
def set_representation_colorspace(
self, representation, context,
colorspace=None,
- colorspace_settings=None
):
"""Sets colorspace data to representation.
Args:
representation (dict): publishing representation
context (publish.Context): publishing context
- config_data (dict): host resolved config data
- file_rules (dict): host resolved file rules data
colorspace (str, optional): colorspace name. Defaults to None.
- colorspace_settings (tuple[dict, dict], optional):
- Settings for config_data and file_rules.
- Defaults to None.
Example:
```
@@ -380,64 +342,10 @@ class ColormanagedPyblishPluginMixin(object):
```
"""
- ext = representation["ext"]
- # check extension
- self.log.debug("__ ext: `{}`".format(ext))
- # check if ext in lower case is in self.allowed_ext
- if ext.lstrip(".").lower() not in self.allowed_ext:
- self.log.debug(
- "Extension '{}' is not in allowed extensions.".format(ext)
- )
- return
-
- if colorspace_settings is None:
- colorspace_settings = self.get_colorspace_settings(context)
-
- # in case host color management is not enabled
- if not colorspace_settings:
- self.log.warning("Host's colorspace management is disabled.")
- return
-
- # unpack colorspace settings
- config_data, file_rules = colorspace_settings
-
- if not config_data:
- # warn in case no colorspace path was defined
- self.log.warning("No colorspace management was defined")
- return
-
- self.log.debug("Config data is: `{}`".format(config_data))
-
- project_name = context.data["projectName"]
- host_name = context.data["hostName"]
- project_settings = context.data["project_settings"]
-
- # get one filename
- filename = representation["files"]
- if isinstance(filename, list):
- filename = filename[0]
-
- self.log.debug("__ filename: `{}`".format(filename))
-
- # get matching colorspace from rules
- colorspace = colorspace or get_imageio_colorspace_from_filepath(
- filename, host_name, project_name,
- config_data=config_data,
- file_rules=file_rules,
- project_settings=project_settings
+ # using cached settings if available
+ set_colorspace_data_to_representation(
+ representation, context.data,
+ colorspace,
+ log=self.log
)
- self.log.debug("__ colorspace: `{}`".format(colorspace))
-
- # infuse data to representation
- if colorspace:
- colorspace_data = {
- "colorspace": colorspace,
- "config": config_data
- }
-
- # update data key
- representation["colorspaceData"] = colorspace_data
-
- self.log.debug("__ colorspace_data: `{}`".format(
- pformat(colorspace_data)))
diff --git a/openpype/plugins/publish/cleanup.py b/openpype/plugins/publish/cleanup.py
index 573cd829e4..6c122ddf09 100644
--- a/openpype/plugins/publish/cleanup.py
+++ b/openpype/plugins/publish/cleanup.py
@@ -69,7 +69,7 @@ class CleanUp(pyblish.api.InstancePlugin):
skip_cleanup_filepaths.add(os.path.normpath(path))
if self.remove_temp_renders:
- self.log.info("Cleaning renders new...")
+ self.log.debug("Cleaning renders new...")
self.clean_renders(instance, skip_cleanup_filepaths)
if [ef for ef in self.exclude_families
@@ -95,10 +95,12 @@ class CleanUp(pyblish.api.InstancePlugin):
return
if instance.data.get("stagingDir_persistent"):
- self.log.info("Staging dir: %s should be persistent" % staging_dir)
+ self.log.debug(
+ "Staging dir {} should be persistent".format(staging_dir)
+ )
return
- self.log.info("Removing staging directory {}".format(staging_dir))
+ self.log.debug("Removing staging directory {}".format(staging_dir))
shutil.rmtree(staging_dir)
def clean_renders(self, instance, skip_cleanup_filepaths):
diff --git a/openpype/plugins/publish/cleanup_farm.py b/openpype/plugins/publish/cleanup_farm.py
index 8052f13734..e655437ced 100644
--- a/openpype/plugins/publish/cleanup_farm.py
+++ b/openpype/plugins/publish/cleanup_farm.py
@@ -26,10 +26,10 @@ class CleanUpFarm(pyblish.api.ContextPlugin):
# Skip process if is not in list of source hosts in which this
# plugin should run
if src_host_name not in self.allowed_hosts:
- self.log.info((
+ self.log.debug(
"Source host \"{}\" is not in list of enabled hosts {}."
- " Skipping"
- ).format(str(src_host_name), str(self.allowed_hosts)))
+ " Skipping".format(src_host_name, self.allowed_hosts)
+ )
return
self.log.debug("Preparing filepaths to remove")
@@ -47,7 +47,7 @@ class CleanUpFarm(pyblish.api.ContextPlugin):
dirpaths_to_remove.add(os.path.normpath(staging_dir))
if not dirpaths_to_remove:
- self.log.info("Nothing to remove. Skipping")
+ self.log.debug("Nothing to remove. Skipping")
return
self.log.debug("Filepaths to remove are:\n{}".format(
diff --git a/openpype/plugins/publish/collect_audio.py b/openpype/plugins/publish/collect_audio.py
index 3a0ddb3281..6aaadfc568 100644
--- a/openpype/plugins/publish/collect_audio.py
+++ b/openpype/plugins/publish/collect_audio.py
@@ -53,8 +53,8 @@ class CollectAudio(pyblish.api.ContextPlugin):
):
# Skip instances that already have audio filled
if instance.data.get("audio"):
- self.log.info(
- "Skipping Audio collecion. It is already collected"
+ self.log.debug(
+ "Skipping Audio collection. It is already collected"
)
continue
filtered_instances.append(instance)
@@ -70,7 +70,7 @@ class CollectAudio(pyblish.api.ContextPlugin):
instances_by_asset_name[asset_name].append(instance)
asset_names = set(instances_by_asset_name.keys())
- self.log.info((
+ self.log.debug((
"Searching for audio subset '{subset}' in assets {assets}"
).format(
subset=self.audio_subset_name,
@@ -100,7 +100,7 @@ class CollectAudio(pyblish.api.ContextPlugin):
"offset": 0,
"filename": repre_path
}]
- self.log.info("Audio Data added to instance ...")
+ self.log.debug("Audio Data added to instance ...")
def query_representations(self, project_name, asset_names):
"""Query representations related to audio subsets for passed assets.
diff --git a/openpype/plugins/publish/collect_current_context.py b/openpype/plugins/publish/collect_current_context.py
index 166d75e5de..8b12a3f77f 100644
--- a/openpype/plugins/publish/collect_current_context.py
+++ b/openpype/plugins/publish/collect_current_context.py
@@ -39,5 +39,12 @@ class CollectCurrentContext(pyblish.api.ContextPlugin):
# - 'task' -> 'taskName'
self.log.info((
- "Collected project context\nProject: {}\nAsset: {}\nTask: {}"
- ).format(project_name, asset_name, task_name))
+ "Collected project context\n"
+ "Project: {project_name}\n"
+ "Asset: {asset_name}\n"
+ "Task: {task_name}"
+ ).format(
+ project_name=context.data["projectName"],
+ asset_name=context.data["asset"],
+ task_name=context.data["task"]
+ ))
diff --git a/openpype/plugins/publish/collect_farm_target.py b/openpype/plugins/publish/collect_farm_target.py
new file mode 100644
index 0000000000..adcd842b48
--- /dev/null
+++ b/openpype/plugins/publish/collect_farm_target.py
@@ -0,0 +1,35 @@
+# -*- coding: utf-8 -*-
+import pyblish.api
+
+
+class CollectFarmTarget(pyblish.api.InstancePlugin):
+ """Collects the render target for the instance
+ """
+
+ order = pyblish.api.CollectorOrder + 0.499
+ label = "Collect Farm Target"
+ targets = ["local"]
+
+ def process(self, instance):
+ if not instance.data.get("farm"):
+ return
+
+ context = instance.context
+
+ farm_name = ""
+ op_modules = context.data.get("openPypeModules")
+
+ for farm_renderer in ["deadline", "royalrender", "muster"]:
+ op_module = op_modules.get(farm_renderer, False)
+
+ if op_module and op_module.enabled:
+ farm_name = farm_renderer
+ elif not op_module:
+ self.log.error("Cannot get OpenPype {0} module.".format(
+ farm_renderer))
+
+ if farm_name:
+ self.log.debug("Collected render target: {0}".format(farm_name))
+ instance.data["toBeRenderedOn"] = farm_name
+ else:
+ raise AssertionError("No OpenPype renderer module found")
diff --git a/openpype/plugins/publish/collect_hierarchy.py b/openpype/plugins/publish/collect_hierarchy.py
index 687397be8a..b5fd1e4bb9 100644
--- a/openpype/plugins/publish/collect_hierarchy.py
+++ b/openpype/plugins/publish/collect_hierarchy.py
@@ -24,7 +24,7 @@ class CollectHierarchy(pyblish.api.ContextPlugin):
final_context[project_name]['entity_type'] = 'Project'
for instance in context:
- self.log.info("Processing instance: `{}` ...".format(instance))
+ self.log.debug("Processing instance: `{}` ...".format(instance))
# shot data dict
shot_data = {}
diff --git a/openpype/plugins/publish/collect_input_representations_to_versions.py b/openpype/plugins/publish/collect_input_representations_to_versions.py
index 54a3214647..2b8c745d3d 100644
--- a/openpype/plugins/publish/collect_input_representations_to_versions.py
+++ b/openpype/plugins/publish/collect_input_representations_to_versions.py
@@ -46,3 +46,10 @@ class CollectInputRepresentationsToVersions(pyblish.api.ContextPlugin):
version_id = representation_id_to_version_id.get(repre_id)
if version_id:
input_versions.append(version_id)
+ else:
+ self.log.debug(
+ "Representation id {} skipped because its version is "
+ "not found in current project. Likely it is loaded "
+ "from a library project or uses a deleted "
+ "representation or version.".format(repre_id)
+ )
diff --git a/openpype/plugins/publish/collect_rendered_files.py b/openpype/plugins/publish/collect_rendered_files.py
index 6c8d1e9ca5..aaf290ace7 100644
--- a/openpype/plugins/publish/collect_rendered_files.py
+++ b/openpype/plugins/publish/collect_rendered_files.py
@@ -91,12 +91,12 @@ class CollectRenderedFiles(pyblish.api.ContextPlugin):
# now we can just add instances from json file and we are done
for instance_data in data.get("instances"):
- self.log.info(" - processing instance for {}".format(
+ self.log.debug(" - processing instance for {}".format(
instance_data.get("subset")))
instance = self._context.create_instance(
instance_data.get("subset")
)
- self.log.info("Filling stagingDir...")
+ self.log.debug("Filling stagingDir...")
self._fill_staging_dir(instance_data, anatomy)
instance.data.update(instance_data)
@@ -121,7 +121,7 @@ class CollectRenderedFiles(pyblish.api.ContextPlugin):
"offset": 0
}]
})
- self.log.info(
+ self.log.debug(
f"Adding audio to instance: {instance.data['audio']}")
def process(self, context):
@@ -137,11 +137,11 @@ class CollectRenderedFiles(pyblish.api.ContextPlugin):
# Using already collected Anatomy
anatomy = context.data["anatomy"]
- self.log.info("Getting root setting for project \"{}\"".format(
+ self.log.debug("Getting root setting for project \"{}\"".format(
anatomy.project_name
))
- self.log.info("anatomy: {}".format(anatomy.roots))
+ self.log.debug("anatomy: {}".format(anatomy.roots))
try:
session_is_set = False
for path in paths:
@@ -156,7 +156,7 @@ class CollectRenderedFiles(pyblish.api.ContextPlugin):
if remapped:
session_data["AVALON_WORKDIR"] = remapped
- self.log.info("Setting session using data from file")
+ self.log.debug("Setting session using data from file")
legacy_io.Session.update(session_data)
os.environ.update(session_data)
session_is_set = True
diff --git a/openpype/plugins/publish/collect_scene_version.py b/openpype/plugins/publish/collect_scene_version.py
index 70a0aca296..7920c1e82b 100644
--- a/openpype/plugins/publish/collect_scene_version.py
+++ b/openpype/plugins/publish/collect_scene_version.py
@@ -63,4 +63,6 @@ class CollectSceneVersion(pyblish.api.ContextPlugin):
"filename: {}".format(filename))
context.data['version'] = int(version)
- self.log.info('Scene Version: %s' % context.data.get('version'))
+ self.log.debug(
+ "Collected scene version: {}".format(context.data.get('version'))
+ )
diff --git a/openpype/plugins/publish/collect_sequence_frame_data.py b/openpype/plugins/publish/collect_sequence_frame_data.py
index c200b245e9..6c2bfbf358 100644
--- a/openpype/plugins/publish/collect_sequence_frame_data.py
+++ b/openpype/plugins/publish/collect_sequence_frame_data.py
@@ -50,4 +50,7 @@ class CollectSequenceFrameData(pyblish.api.InstancePlugin):
return {
"frameStart": repres_frames[0],
"frameEnd": repres_frames[-1],
+ "handleStart": 0,
+ "handleEnd": 0,
+ "fps": instance.context.data["assetEntity"]["data"]["fps"]
}
diff --git a/openpype/plugins/publish/extract_burnin.py b/openpype/plugins/publish/extract_burnin.py
index e5b37ee3b4..dc8aab6ce4 100644
--- a/openpype/plugins/publish/extract_burnin.py
+++ b/openpype/plugins/publish/extract_burnin.py
@@ -83,7 +83,7 @@ class ExtractBurnin(publish.Extractor):
return
if not instance.data.get("representations"):
- self.log.info(
+ self.log.debug(
"Instance does not have filled representations. Skipping")
return
@@ -135,11 +135,11 @@ class ExtractBurnin(publish.Extractor):
burnin_defs, repre["tags"]
)
if not repre_burnin_defs:
- self.log.info((
+ self.log.debug(
"Skipped representation. All burnin definitions from"
- " selected profile does not match to representation's"
- " tags. \"{}\""
- ).format(str(repre["tags"])))
+ " selected profile do not match to representation's"
+ " tags. \"{}\"".format(repre["tags"])
+ )
continue
filtered_repres.append((repre, repre_burnin_defs))
@@ -164,7 +164,7 @@ class ExtractBurnin(publish.Extractor):
logger=self.log)
if not profile:
- self.log.info((
+ self.log.debug((
"Skipped instance. None of profiles in presets are for"
" Host: \"{}\" | Families: \"{}\" | Task \"{}\""
" | Task type \"{}\" | Subset \"{}\" "
@@ -176,7 +176,7 @@ class ExtractBurnin(publish.Extractor):
# Pre-filter burnin definitions by instance families
burnin_defs = self.filter_burnins_defs(profile, instance)
if not burnin_defs:
- self.log.info((
+ self.log.debug((
"Skipped instance. Burnin definitions are not set for profile"
" Host: \"{}\" | Families: \"{}\" | Task \"{}\""
" | Profile \"{}\""
@@ -223,10 +223,10 @@ class ExtractBurnin(publish.Extractor):
# If result is None the requirement of conversion can't be
# determined
if do_convert is None:
- self.log.info((
+ self.log.debug(
"Can't determine if representation requires conversion."
" Skipped."
- ))
+ )
continue
# Do conversion if needed
diff --git a/openpype/plugins/publish/extract_color_transcode.py b/openpype/plugins/publish/extract_color_transcode.py
index f7c8af9318..dbf1b6c8a6 100644
--- a/openpype/plugins/publish/extract_color_transcode.py
+++ b/openpype/plugins/publish/extract_color_transcode.py
@@ -320,7 +320,7 @@ class ExtractOIIOTranscode(publish.Extractor):
logger=self.log)
if not profile:
- self.log.info((
+ self.log.debug((
"Skipped instance. None of profiles in presets are for"
" Host: \"{}\" | Families: \"{}\" | Task \"{}\""
" | Task type \"{}\" | Subset \"{}\" "
diff --git a/openpype/plugins/publish/extract_colorspace_data.py b/openpype/plugins/publish/extract_colorspace_data.py
index 363df28fb5..8873dcd637 100644
--- a/openpype/plugins/publish/extract_colorspace_data.py
+++ b/openpype/plugins/publish/extract_colorspace_data.py
@@ -30,7 +30,7 @@ class ExtractColorspaceData(publish.Extractor,
def process(self, instance):
representations = instance.data.get("representations")
if not representations:
- self.log.info("No representations at instance : `{}`".format(
+ self.log.debug("No representations at instance : `{}`".format(
instance))
return
diff --git a/openpype/plugins/publish/extract_hierarchy_avalon.py b/openpype/plugins/publish/extract_hierarchy_avalon.py
index 1d57545bc0..d70f0cbdd7 100644
--- a/openpype/plugins/publish/extract_hierarchy_avalon.py
+++ b/openpype/plugins/publish/extract_hierarchy_avalon.py
@@ -21,7 +21,7 @@ class ExtractHierarchyToAvalon(pyblish.api.ContextPlugin):
return
if "hierarchyContext" not in context.data:
- self.log.info("skipping IntegrateHierarchyToAvalon")
+ self.log.debug("skipping ExtractHierarchyToAvalon")
return
if not legacy_io.Session:
diff --git a/openpype/plugins/publish/extract_hierarchy_to_ayon.py b/openpype/plugins/publish/extract_hierarchy_to_ayon.py
index 915650ae41..0d9131718b 100644
--- a/openpype/plugins/publish/extract_hierarchy_to_ayon.py
+++ b/openpype/plugins/publish/extract_hierarchy_to_ayon.py
@@ -8,6 +8,11 @@ from ayon_api import slugify_string
from ayon_api.entity_hub import EntityHub
from openpype import AYON_SERVER_ENABLED
+from openpype.client import get_assets
+from openpype.pipeline.template_data import (
+ get_asset_template_data,
+ get_task_template_data,
+)
def _default_json_parse(value):
@@ -27,13 +32,51 @@ class ExtractHierarchyToAYON(pyblish.api.ContextPlugin):
hierarchy_context = context.data.get("hierarchyContext")
if not hierarchy_context:
- self.log.info("Skipping")
+ self.log.debug("Skipping ExtractHierarchyToAYON")
return
project_name = context.data["projectName"]
+ self._create_hierarchy(context, project_name)
+ self._fill_instance_entities(context, project_name)
+
+ def _fill_instance_entities(self, context, project_name):
+ instances_by_asset_name = collections.defaultdict(list)
+ for instance in context:
+ if instance.data.get("publish") is False:
+ continue
+
+ instance_entity = instance.data.get("assetEntity")
+ if instance_entity:
+ continue
+
+ # Skip if instance asset does not match
+ instance_asset_name = instance.data.get("asset")
+ instances_by_asset_name[instance_asset_name].append(instance)
+
+ project_doc = context.data["projectEntity"]
+ asset_docs = get_assets(
+ project_name, asset_names=instances_by_asset_name.keys()
+ )
+ asset_docs_by_name = {
+ asset_doc["name"]: asset_doc
+ for asset_doc in asset_docs
+ }
+ for asset_name, instances in instances_by_asset_name.items():
+ asset_doc = asset_docs_by_name[asset_name]
+ asset_data = get_asset_template_data(asset_doc, project_name)
+ for instance in instances:
+ task_name = instance.data.get("task")
+ template_data = get_task_template_data(
+ project_doc, asset_doc, task_name)
+ template_data.update(copy.deepcopy(asset_data))
+
+ instance.data["anatomyData"].update(template_data)
+ instance.data["assetEntity"] = asset_doc
+
+ def _create_hierarchy(self, context, project_name):
hierarchy_context = self._filter_hierarchy(context)
if not hierarchy_context:
- self.log.info("All folders were filtered out")
+ self.log.debug("All folders were filtered out")
return
self.log.debug("Hierarchy_context: {}".format(
diff --git a/openpype/plugins/publish/extract_review_slate.py b/openpype/plugins/publish/extract_review_slate.py
index 886384fee6..d89fbb90c4 100644
--- a/openpype/plugins/publish/extract_review_slate.py
+++ b/openpype/plugins/publish/extract_review_slate.py
@@ -15,6 +15,7 @@ from openpype.lib import (
get_ffmpeg_format_args,
)
from openpype.pipeline import publish
+from openpype.pipeline.publish import KnownPublishError
class ExtractReviewSlate(publish.Extractor):
@@ -46,7 +47,7 @@ class ExtractReviewSlate(publish.Extractor):
"*": inst_data["slateFrame"]
}
- self.log.info("_ slates_data: {}".format(pformat(slates_data)))
+ self.log.debug("_ slates_data: {}".format(pformat(slates_data)))
if "reviewToWidth" in inst_data:
use_legacy_code = True
@@ -76,7 +77,7 @@ class ExtractReviewSlate(publish.Extractor):
)
# get slate data
slate_path = self._get_slate_path(input_file, slates_data)
- self.log.info("_ slate_path: {}".format(slate_path))
+ self.log.debug("_ slate_path: {}".format(slate_path))
slate_width, slate_height = self._get_slates_resolution(slate_path)
@@ -93,9 +94,10 @@ class ExtractReviewSlate(publish.Extractor):
# Raise exception of any stream didn't define input resolution
if input_width is None:
- raise AssertionError((
+ raise KnownPublishError(
"FFprobe couldn't read resolution from input file: \"{}\""
- ).format(input_path))
+ .format(input_path)
+ )
(
audio_codec,
diff --git a/openpype/plugins/publish/extract_scanline_exr.py b/openpype/plugins/publish/extract_scanline_exr.py
index 9f22794a79..747155689b 100644
--- a/openpype/plugins/publish/extract_scanline_exr.py
+++ b/openpype/plugins/publish/extract_scanline_exr.py
@@ -29,24 +29,24 @@ class ExtractScanlineExr(pyblish.api.InstancePlugin):
representations_new = []
for repre in representations:
- self.log.info(
+ self.log.debug(
"Processing representation {}".format(repre.get("name")))
tags = repre.get("tags", [])
if "toScanline" not in tags:
- self.log.info(" - missing toScanline tag")
+ self.log.debug(" - missing toScanline tag")
continue
# run only on exrs
if repre.get("ext") != "exr":
- self.log.info("- not EXR files")
+ self.log.debug("- not EXR files")
continue
if not isinstance(repre['files'], (list, tuple)):
input_files = [repre['files']]
- self.log.info("We have a single frame")
+ self.log.debug("We have a single frame")
else:
input_files = repre['files']
- self.log.info("We have a sequence")
+ self.log.debug("We have a sequence")
stagingdir = os.path.normpath(repre.get("stagingDir"))
@@ -68,7 +68,7 @@ class ExtractScanlineExr(pyblish.api.InstancePlugin):
]
subprocess_exr = " ".join(oiio_cmd)
- self.log.info(f"running: {subprocess_exr}")
+ self.log.debug(f"running: {subprocess_exr}")
run_subprocess(subprocess_exr, logger=self.log)
    # raise error if there is no output
diff --git a/openpype/plugins/publish/extract_thumbnail.py b/openpype/plugins/publish/extract_thumbnail.py
index b72a6d02ad..de101ac7ac 100644
--- a/openpype/plugins/publish/extract_thumbnail.py
+++ b/openpype/plugins/publish/extract_thumbnail.py
@@ -43,12 +43,12 @@ class ExtractThumbnail(pyblish.api.InstancePlugin):
# Skip if instance have 'review' key in data set to 'False'
if not self._is_review_instance(instance):
- self.log.info("Skipping - no review set on instance.")
+ self.log.debug("Skipping - no review set on instance.")
return
# Check if already has thumbnail created
if self._already_has_thumbnail(instance_repres):
- self.log.info("Thumbnail representation already present.")
+ self.log.debug("Thumbnail representation already present.")
return
# skip crypto passes.
@@ -58,15 +58,15 @@ class ExtractThumbnail(pyblish.api.InstancePlugin):
# representation that can be determined much earlier and
# with better precision.
if "crypto" in subset_name.lower():
- self.log.info("Skipping crypto passes.")
+ self.log.debug("Skipping crypto passes.")
return
filtered_repres = self._get_filtered_repres(instance)
if not filtered_repres:
- self.log.info((
- "Instance don't have representations"
- " that can be used as source for thumbnail. Skipping"
- ))
+ self.log.info(
+ "Instance doesn't have representations that can be used "
+ "as source for thumbnail. Skipping thumbnail extraction."
+ )
return
# Create temp directory for thumbnail
@@ -107,10 +107,10 @@ class ExtractThumbnail(pyblish.api.InstancePlugin):
# oiiotool isn't available
if not thumbnail_created:
if oiio_supported:
- self.log.info((
+ self.log.debug(
"Converting with FFMPEG because input"
" can't be read by OIIO."
- ))
+ )
thumbnail_created = self.create_thumbnail_ffmpeg(
full_input_path, full_output_path
@@ -165,8 +165,8 @@ class ExtractThumbnail(pyblish.api.InstancePlugin):
continue
if not repre.get("files"):
- self.log.info((
- "Representation \"{}\" don't have files. Skipping"
+ self.log.debug((
+ "Representation \"{}\" doesn't have files. Skipping"
).format(repre["name"]))
continue
@@ -174,7 +174,7 @@ class ExtractThumbnail(pyblish.api.InstancePlugin):
return filtered_repres
def create_thumbnail_oiio(self, src_path, dst_path):
- self.log.info("Extracting thumbnail {}".format(dst_path))
+ self.log.debug("Extracting thumbnail with OIIO: {}".format(dst_path))
oiio_cmd = get_oiio_tool_args(
"oiiotool",
"-a", src_path,
@@ -192,7 +192,7 @@ class ExtractThumbnail(pyblish.api.InstancePlugin):
return False
def create_thumbnail_ffmpeg(self, src_path, dst_path):
- self.log.info("outputting {}".format(dst_path))
+ self.log.debug("Extracting thumbnail with FFMPEG: {}".format(dst_path))
ffmpeg_path_args = get_ffmpeg_tool_args("ffmpeg")
ffmpeg_args = self.ffmpeg_args or {}
@@ -225,7 +225,7 @@ class ExtractThumbnail(pyblish.api.InstancePlugin):
return True
except Exception:
self.log.warning(
- "Failed to create thubmnail using ffmpeg",
+ "Failed to create thumbnail using ffmpeg",
exc_info=True
)
return False
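The thumbnail extractor edited above tries oiiotool first and falls back to ffmpeg when OIIO cannot read the input, logging the fallback at debug level. The fallback chain, sketched with stand-in converter functions (the real plug-in shells out to the tools via `run_subprocess`):

```python
def create_thumbnail_oiio(src_path, dst_path):
    # Stand-in: pretend OIIO cannot read this source format.
    return False


def create_thumbnail_ffmpeg(src_path, dst_path):
    # Stand-in: ffmpeg handles the conversion.
    return True


def extract_thumbnail(src_path, dst_path, oiio_supported=True):
    # Try OIIO first when available, then fall back to ffmpeg,
    # matching the control flow in ExtractThumbnail.process.
    created = False
    if oiio_supported:
        created = create_thumbnail_oiio(src_path, dst_path)
    if not created:
        created = create_thumbnail_ffmpeg(src_path, dst_path)
    return created
```

Each converter returning a boolean (rather than raising) is what lets the plug-in degrade gracefully and only warn when both backends fail.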
diff --git a/openpype/plugins/publish/extract_thumbnail_from_source.py b/openpype/plugins/publish/extract_thumbnail_from_source.py
index 1b9f0a8bae..401a5d615d 100644
--- a/openpype/plugins/publish/extract_thumbnail_from_source.py
+++ b/openpype/plugins/publish/extract_thumbnail_from_source.py
@@ -49,7 +49,7 @@ class ExtractThumbnailFromSource(pyblish.api.InstancePlugin):
# Check if already has thumbnail created
if self._instance_has_thumbnail(instance):
- self.log.info("Thumbnail representation already present.")
+ self.log.debug("Thumbnail representation already present.")
return
dst_filepath = self._create_thumbnail(
@@ -98,7 +98,7 @@ class ExtractThumbnailFromSource(pyblish.api.InstancePlugin):
thumbnail_created = False
oiio_supported = is_oiio_supported()
- self.log.info("Thumbnail source: {}".format(thumbnail_source))
+ self.log.debug("Thumbnail source: {}".format(thumbnail_source))
src_basename = os.path.basename(thumbnail_source)
dst_filename = os.path.splitext(src_basename)[0] + "_thumb.jpg"
full_output_path = os.path.join(dst_staging, dst_filename)
@@ -115,10 +115,10 @@ class ExtractThumbnailFromSource(pyblish.api.InstancePlugin):
# oiiotool isn't available
if not thumbnail_created:
if oiio_supported:
- self.log.info((
+ self.log.info(
"Converting with FFMPEG because input"
" can't be read by OIIO."
- ))
+ )
thumbnail_created = self.create_thumbnail_ffmpeg(
thumbnail_source, full_output_path
@@ -143,20 +143,20 @@ class ExtractThumbnailFromSource(pyblish.api.InstancePlugin):
return False
def create_thumbnail_oiio(self, src_path, dst_path):
- self.log.info("outputting {}".format(dst_path))
+ self.log.debug("Outputting thumbnail with OIIO: {}".format(dst_path))
oiio_cmd = get_oiio_tool_args(
"oiiotool",
"-a", src_path,
"--ch", "R,G,B",
"-o", dst_path
)
- self.log.info("Running: {}".format(" ".join(oiio_cmd)))
+ self.log.debug("Running: {}".format(" ".join(oiio_cmd)))
try:
run_subprocess(oiio_cmd, logger=self.log)
return True
except Exception:
self.log.warning(
- "Failed to create thubmnail using oiiotool",
+ "Failed to create thumbnail using oiiotool",
exc_info=True
)
return False
@@ -173,13 +173,13 @@ class ExtractThumbnailFromSource(pyblish.api.InstancePlugin):
dst_path
)
- self.log.info("Running: {}".format(" ".join(ffmpeg_cmd)))
+ self.log.debug("Running: {}".format(" ".join(ffmpeg_cmd)))
try:
run_subprocess(ffmpeg_cmd, logger=self.log)
return True
except Exception:
self.log.warning(
- "Failed to create thubmnail using ffmpeg",
+ "Failed to create thumbnail using ffmpeg",
exc_info=True
)
return False
diff --git a/openpype/plugins/publish/extract_trim_video_audio.py b/openpype/plugins/publish/extract_trim_video_audio.py
index 2907ae1839..5e00cfc96f 100644
--- a/openpype/plugins/publish/extract_trim_video_audio.py
+++ b/openpype/plugins/publish/extract_trim_video_audio.py
@@ -36,7 +36,7 @@ class ExtractTrimVideoAudio(publish.Extractor):
# get staging dir
staging_dir = self.staging_dir(instance)
- self.log.info("Staging dir set to: `{}`".format(staging_dir))
+ self.log.debug("Staging dir set to: `{}`".format(staging_dir))
# Generate mov file.
fps = instance.data["fps"]
@@ -59,7 +59,7 @@ class ExtractTrimVideoAudio(publish.Extractor):
extensions = [output_file_type]
for ext in extensions:
- self.log.info("Processing ext: `{}`".format(ext))
+ self.log.debug("Processing ext: `{}`".format(ext))
if not ext.startswith("."):
ext = "." + ext
@@ -98,7 +98,7 @@ class ExtractTrimVideoAudio(publish.Extractor):
ffmpeg_args.append(clip_trimed_path)
joined_args = " ".join(ffmpeg_args)
- self.log.info(f"Processing: {joined_args}")
+ self.log.debug(f"Processing: {joined_args}")
run_subprocess(
ffmpeg_args, logger=self.log
)
diff --git a/openpype/plugins/publish/integrate.py b/openpype/plugins/publish/integrate.py
index be07cffe72..7e48155b9e 100644
--- a/openpype/plugins/publish/integrate.py
+++ b/openpype/plugins/publish/integrate.py
@@ -155,13 +155,13 @@ class IntegrateAsset(pyblish.api.InstancePlugin):
# Instance should be integrated on a farm
if instance.data.get("farm"):
- self.log.info(
+ self.log.debug(
"Instance is marked to be processed on farm. Skipping")
return
# Instance is marked to not get integrated
if not instance.data.get("integrate", True):
- self.log.info("Instance is marked to skip integrating. Skipping")
+ self.log.debug("Instance is marked to skip integrating. Skipping")
return
filtered_repres = self.filter_representations(instance)
@@ -306,7 +306,7 @@ class IntegrateAsset(pyblish.api.InstancePlugin):
# increase if the file transaction takes a long time.
op_session.commit()
- self.log.info("Subset {subset[name]} and Version {version[name]} "
+ self.log.info("Subset '{subset[name]}' version {version[name]} "
"written to database..".format(subset=subset,
version=version))
@@ -392,8 +392,13 @@ class IntegrateAsset(pyblish.api.InstancePlugin):
p["representation"]["_id"]: p for p in prepared_representations
}
- self.log.info("Registered {} representations"
- "".format(len(prepared_representations)))
+ self.log.info(
+ "Registered {} representations: {}".format(
+ len(prepared_representations),
+ ", ".join(p["representation"]["name"]
+ for p in prepared_representations)
+ )
+ )
def prepare_subset(self, instance, op_session, project_name):
asset_doc = instance.data["assetEntity"]
diff --git a/openpype/plugins/publish/integrate_hero_version.py b/openpype/plugins/publish/integrate_hero_version.py
index 6c21664b78..9f0f7fe7f3 100644
--- a/openpype/plugins/publish/integrate_hero_version.py
+++ b/openpype/plugins/publish/integrate_hero_version.py
@@ -275,10 +275,10 @@ class IntegrateHeroVersion(pyblish.api.InstancePlugin):
backup_hero_publish_dir = _backup_hero_publish_dir
break
except Exception:
- self.log.info((
+ self.log.info(
"Could not remove previous backup folder."
- " Trying to add index to folder name"
- ))
+ " Trying to add index to folder name."
+ )
_backup_hero_publish_dir = (
backup_hero_publish_dir + str(idx)
diff --git a/openpype/plugins/publish/integrate_thumbnail.py b/openpype/plugins/publish/integrate_thumbnail.py
index 9929d8f754..0c12255d38 100644
--- a/openpype/plugins/publish/integrate_thumbnail.py
+++ b/openpype/plugins/publish/integrate_thumbnail.py
@@ -41,7 +41,9 @@ class IntegrateThumbnails(pyblish.api.ContextPlugin):
def process(self, context):
if AYON_SERVER_ENABLED:
- self.log.info("AYON is enabled. Skipping v3 thumbnail integration")
+ self.log.debug(
+ "AYON is enabled. Skipping v3 thumbnail integration"
+ )
return
# Filter instances which can be used for integration
@@ -74,14 +76,14 @@ class IntegrateThumbnails(pyblish.api.ContextPlugin):
thumbnail_template = anatomy.templates["publish"]["thumbnail"]
if not thumbnail_template:
- self.log.info("Thumbnail template is not filled. Skipping.")
+ self.log.debug("Thumbnail template is not filled. Skipping.")
return
if (
not thumbnail_root
and thumbnail_root_format_key in thumbnail_template
):
- self.log.warning(("{} is not set. Skipping.").format(env_key))
+ self.log.warning("{} is not set. Skipping.".format(env_key))
return
    # Collect version ids from all filtered instances
diff --git a/openpype/plugins/publish/integrate_thumbnail_ayon.py b/openpype/plugins/publish/integrate_thumbnail_ayon.py
index ba5664c69f..cf05327ce8 100644
--- a/openpype/plugins/publish/integrate_thumbnail_ayon.py
+++ b/openpype/plugins/publish/integrate_thumbnail_ayon.py
@@ -35,13 +35,13 @@ class IntegrateThumbnailsAYON(pyblish.api.ContextPlugin):
def process(self, context):
if not AYON_SERVER_ENABLED:
- self.log.info("AYON is not enabled. Skipping")
+ self.log.debug("AYON is not enabled. Skipping")
return
# Filter instances which can be used for integration
filtered_instance_items = self._prepare_instances(context)
if not filtered_instance_items:
- self.log.info(
+ self.log.debug(
"All instances were filtered. Thumbnail integration skipped."
)
return
@@ -110,7 +110,7 @@ class IntegrateThumbnailsAYON(pyblish.api.ContextPlugin):
# Skip instance if thumbnail path is not available for it
if not thumbnail_path:
- self.log.info((
+ self.log.debug((
"Skipping thumbnail integration for instance \"{}\"."
" Instance and context"
" thumbnail paths are not available."
diff --git a/openpype/plugins/publish/validate_asset_docs.py b/openpype/plugins/publish/validate_asset_docs.py
index 9a1ca5b8de..8dfd783c39 100644
--- a/openpype/plugins/publish/validate_asset_docs.py
+++ b/openpype/plugins/publish/validate_asset_docs.py
@@ -22,11 +22,11 @@ class ValidateAssetDocs(pyblish.api.InstancePlugin):
return
if instance.data.get("assetEntity"):
- self.log.info("Instance has set asset document in its data.")
+ self.log.debug("Instance has set asset document in its data.")
elif instance.data.get("newAssetPublishing"):
# skip if it is editorial
- self.log.info("Editorial instance is no need to check...")
+ self.log.debug("Editorial instance does not need to be checked...")
else:
raise PublishValidationError((
diff --git a/openpype/plugins/publish/validate_editorial_asset_name.py b/openpype/plugins/publish/validate_editorial_asset_name.py
index 4f8a1abf2e..fca0d8e7f5 100644
--- a/openpype/plugins/publish/validate_editorial_asset_name.py
+++ b/openpype/plugins/publish/validate_editorial_asset_name.py
@@ -56,7 +56,7 @@ class ValidateEditorialAssetName(pyblish.api.ContextPlugin):
}
continue
- self.log.info("correct asset: {}".format(asset))
+ self.log.debug("correct asset: {}".format(asset))
if assets_missing_name:
wrong_names = {}
diff --git a/openpype/plugins/publish/validate_file_saved.py b/openpype/plugins/publish/validate_file_saved.py
index 448eaccf57..94aadc9358 100644
--- a/openpype/plugins/publish/validate_file_saved.py
+++ b/openpype/plugins/publish/validate_file_saved.py
@@ -1,5 +1,7 @@
import pyblish.api
+from openpype.pipeline.publish import PublishValidationError
+
class ValidateCurrentSaveFile(pyblish.api.ContextPlugin):
"""File must be saved before publishing"""
@@ -12,4 +14,4 @@ class ValidateCurrentSaveFile(pyblish.api.ContextPlugin):
current_file = context.data["currentFile"]
if not current_file:
- raise RuntimeError("File not saved")
+ raise PublishValidationError("File not saved")
diff --git a/openpype/plugins/publish/validate_filesequences.py b/openpype/plugins/publish/validate_filesequences.py
index 8a877d79bb..0ac281022d 100644
--- a/openpype/plugins/publish/validate_filesequences.py
+++ b/openpype/plugins/publish/validate_filesequences.py
@@ -1,5 +1,7 @@
import pyblish.api
+from openpype.pipeline.publish import PublishValidationError
+
class ValidateFileSequences(pyblish.api.ContextPlugin):
"""Validates whether any file sequences were collected."""
@@ -10,4 +12,5 @@ class ValidateFileSequences(pyblish.api.ContextPlugin):
label = "Validate File Sequences"
def process(self, context):
- assert context, "Nothing collected."
+ if not context:
+ raise PublishValidationError("Nothing collected.")
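Replacing `assert context, "Nothing collected."` with an explicit raise, as the hunk above does, matters beyond style: `assert` statements are stripped when Python runs with `-O`, so a validator relying on them can silently pass. A standalone sketch, with a stand-in for the pipeline's exception class:

```python
# Stand-in for openpype.pipeline.publish.PublishValidationError.
class PublishValidationError(Exception):
    """Validation failure surfaced in the publish report."""


def validate_collected(context):
    # Unlike an assert, this check survives `python -O`, which strips
    # assert statements entirely.
    if not context:
        raise PublishValidationError("Nothing collected.")
```

The same reasoning applies to the `validate_file_saved.py` and `validate_intent.py` changes in this diff: expected validation failures get a dedicated exception type instead of `assert` or `RuntimeError`.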
diff --git a/openpype/plugins/publish/validate_intent.py b/openpype/plugins/publish/validate_intent.py
index 23d57bb2b7..832c7cc0a1 100644
--- a/openpype/plugins/publish/validate_intent.py
+++ b/openpype/plugins/publish/validate_intent.py
@@ -1,7 +1,7 @@
-import os
import pyblish.api
from openpype.lib import filter_profiles
+from openpype.pipeline.publish import PublishValidationError
class ValidateIntent(pyblish.api.ContextPlugin):
@@ -51,12 +51,10 @@ class ValidateIntent(pyblish.api.ContextPlugin):
))
return
- msg = (
- "Please make sure that you select the intent of this publish."
- )
-
intent = context.data.get("intent") or {}
self.log.debug(str(intent))
intent_value = intent.get("value")
if not intent_value:
- raise AssertionError(msg)
+ raise PublishValidationError(
+ "Please make sure that you select the intent of this publish."
+ )
diff --git a/openpype/plugins/publish/validate_publish_dir.py b/openpype/plugins/publish/validate_publish_dir.py
index ad5fd34434..0eb93da583 100644
--- a/openpype/plugins/publish/validate_publish_dir.py
+++ b/openpype/plugins/publish/validate_publish_dir.py
@@ -47,15 +47,16 @@ class ValidatePublishDir(pyblish.api.InstancePlugin):
 # original_dirname must be convertible to rootless path
# in other case it is path inside of root folder for the project
success, _ = anatomy.find_root_template_from_path(original_dirname)
-
- formatting_data = {
- "original_dirname": original_dirname,
- }
- msg = "Path '{}' not in project folder.".format(original_dirname) + \
- " Please publish from inside of project folder."
if not success:
- raise PublishXmlValidationError(self, msg, key="not_in_dir",
- formatting_data=formatting_data)
+ raise PublishXmlValidationError(
+ plugin=self,
+ message=(
+ "Path '{}' not in project folder. Please publish from "
+ "inside of project folder.".format(original_dirname)
+ ),
+ key="not_in_dir",
+ formatting_data={"original_dirname": original_dirname}
+ )
def _get_template_name_from_instance(self, instance):
"""Find template which will be used during integration."""
diff --git a/openpype/scripts/export_maya_ass_job.py b/openpype/scripts/export_maya_ass_job.py
deleted file mode 100644
index 16e841ce96..0000000000
--- a/openpype/scripts/export_maya_ass_job.py
+++ /dev/null
@@ -1,105 +0,0 @@
-"""This module is used for command line exporting of ASS files.
-
-WARNING:
-This need to be rewriten to be able use it in Pype 3!
-"""
-
-import os
-import argparse
-import logging
-import subprocess
-import platform
-
-try:
- from shutil import which
-except ImportError:
- # we are in python < 3.3
- def which(command):
- path = os.getenv('PATH')
- for p in path.split(os.path.pathsep):
- p = os.path.join(p, command)
- if os.path.exists(p) and os.access(p, os.X_OK):
- return p
-
-handler = logging.basicConfig()
-log = logging.getLogger("Publish Image Sequences")
-log.setLevel(logging.DEBUG)
-
-error_format = "Failed {plugin.__name__}: {error} -- {error.traceback}"
-
-
-def __main__():
- parser = argparse.ArgumentParser()
- parser.add_argument("--paths",
- nargs="*",
- default=[],
- help="The filepaths to publish. This can be a "
- "directory or a path to a .json publish "
- "configuration.")
- parser.add_argument("--gui",
- default=False,
- action="store_true",
- help="Whether to run Pyblish in GUI mode.")
-
- parser.add_argument("--pype", help="Pype root")
-
- kwargs, args = parser.parse_known_args()
-
- print("Running pype ...")
- auto_pype_root = os.path.dirname(os.path.abspath(__file__))
- auto_pype_root = os.path.abspath(auto_pype_root + "../../../../..")
-
- auto_pype_root = os.environ.get('OPENPYPE_SETUP_PATH') or auto_pype_root
- if os.environ.get('OPENPYPE_SETUP_PATH'):
- print("Got Pype location from environment: {}".format(
- os.environ.get('OPENPYPE_SETUP_PATH')))
-
- pype_command = "openpype.ps1"
- if platform.system().lower() == "linux":
- pype_command = "pype"
- elif platform.system().lower() == "windows":
- pype_command = "openpype.bat"
-
- if kwargs.pype:
- pype_root = kwargs.pype
- else:
- # test if pype.bat / pype is in the PATH
- # if it is, which() will return its path and we use that.
- # if not, we use auto_pype_root path. Caveat of that one is
- # that it can be UNC path and that will not work on windows.
-
- pype_path = which(pype_command)
-
- if pype_path:
- pype_root = os.path.dirname(pype_path)
- else:
- pype_root = auto_pype_root
-
- print("Set pype root to: {}".format(pype_root))
- print("Paths: {}".format(kwargs.paths or [os.getcwd()]))
-
- # paths = kwargs.paths or [os.environ.get("OPENPYPE_METADATA_FILE")] or [os.getcwd()] # noqa
-
- mayabatch = os.environ.get("AVALON_APP_NAME").replace("maya", "mayabatch")
- args = [
- os.path.join(pype_root, pype_command),
- "launch",
- "--app",
- mayabatch,
- "-script",
- os.path.join(pype_root, "repos", "pype",
- "pype", "scripts", "export_maya_ass_sequence.mel")
- ]
-
- print("Pype command: {}".format(" ".join(args)))
- # Forcing forwaring the environment because environment inheritance does
- # not always work.
- # Cast all values in environment to str to be safe
- env = {k: str(v) for k, v in os.environ.items()}
- exit_code = subprocess.call(args, env=env)
- if exit_code != 0:
- raise RuntimeError("Publishing failed.")
-
-
-if __name__ == '__main__':
- __main__()
diff --git a/openpype/scripts/export_maya_ass_sequence.mel b/openpype/scripts/export_maya_ass_sequence.mel
deleted file mode 100644
index b3b9a8543e..0000000000
--- a/openpype/scripts/export_maya_ass_sequence.mel
+++ /dev/null
@@ -1,67 +0,0 @@
-/*
- Script to export specified layer as ass files.
-
-Attributes:
-
- scene_file (str): Name of the scene to load.
- start (int): Start frame.
- end (int): End frame.
- step (int): Step size.
- output_path (str): File output path.
- render_layer (str): Name of render layer.
-
-*/
-
-$scene_file=`getenv "OPENPYPE_ASS_EXPORT_SCENE_FILE"`;
-$step=`getenv "OPENPYPE_ASS_EXPORT_STEP"`;
-$start=`getenv "OPENPYPE_ASS_EXPORT_START"`;
-$end=`getenv "OPENPYPE_ASS_EXPORT_END"`;
-$file_path=`getenv "OPENPYPE_ASS_EXPORT_OUTPUT"`;
-$render_layer = `getenv "OPENPYPE_ASS_EXPORT_RENDER_LAYER"`;
-
-print("*** ASS Export Plugin\n");
-
-if ($scene_file == "") {
- print("!!! cannot determine scene file\n");
- quit -a -ex -1;
-}
-
-if ($step == "") {
- print("!!! cannot determine step size\n");
- quit -a -ex -1;
-}
-
-if ($start == "") {
- print("!!! cannot determine start frame\n");
- quit -a -ex -1;
-}
-
-if ($end == "") {
- print("!!! cannot determine end frame\n");
- quit -a -ex -1;
-}
-
-if ($file_path == "") {
- print("!!! cannot determine output file\n");
- quit -a -ex -1;
-}
-
-if ($render_layer == "") {
- print("!!! cannot determine render layer\n");
- quit -a -ex -1;
-}
-
-
-print(">>> Opening Scene [ " + $scene_file + " ]\n");
-
-// open scene
-file -o -f $scene_file;
-
-// switch to render layer
-print(">>> Switching layer [ "+ $render_layer + " ]\n");
-editRenderLayerGlobals -currentRenderLayer $render_layer;
-
-// export
-print(">>> Exporting to [ " + $file_path + " ]\n");
-arnoldExportAss -mask 255 -sl 1 -ll 1 -bb 1 -sf $start -se $end -b -fs $step;
-print("--- Done\n");
diff --git a/openpype/scripts/fusion_switch_shot.py b/openpype/scripts/fusion_switch_shot.py
deleted file mode 100644
index 1cc728226f..0000000000
--- a/openpype/scripts/fusion_switch_shot.py
+++ /dev/null
@@ -1,241 +0,0 @@
-import os
-import re
-import sys
-import logging
-
-from openpype.client import get_asset_by_name, get_versions
-
-# Pipeline imports
-from openpype.hosts.fusion import api
-import openpype.hosts.fusion.api.lib as fusion_lib
-
-# Config imports
-from openpype.lib import version_up
-from openpype.pipeline import (
- install_host,
- registered_host,
- legacy_io,
- get_current_project_name,
-)
-
-from openpype.pipeline.context_tools import get_workdir_from_session
-from openpype.pipeline.version_start import get_versioning_start
-
-log = logging.getLogger("Update Slap Comp")
-
-
-def _format_version_folder(folder):
- """Format a version folder based on the filepath
-
- Args:
- folder: file path to a folder
-
- Returns:
- str: new version folder name
- """
-
- new_version = get_versioning_start(
- get_current_project_name(),
- "fusion",
- family="workfile"
- )
- if os.path.isdir(folder):
- re_version = re.compile(r"v\d+$")
- versions = [i for i in os.listdir(folder) if os.path.isdir(i)
- and re_version.match(i)]
- if versions:
- # ensure the "v" is not included
- new_version = int(max(versions)[1:]) + 1
-
- version_folder = "v{:03d}".format(new_version)
-
- return version_folder
-
-
-def _get_fusion_instance():
- fusion = getattr(sys.modules["__main__"], "fusion", None)
- if fusion is None:
- try:
- # Support for FuScript.exe, BlackmagicFusion module for py2 only
- import BlackmagicFusion as bmf
- fusion = bmf.scriptapp("Fusion")
- except ImportError:
- raise RuntimeError("Could not find a Fusion instance")
- return fusion
-
-
-def _format_filepath(session):
-
- project = session["AVALON_PROJECT"]
- asset = session["AVALON_ASSET"]
-
- # Save updated slap comp
- work_path = get_workdir_from_session(session)
- walk_to_dir = os.path.join(work_path, "scenes", "slapcomp")
- slapcomp_dir = os.path.abspath(walk_to_dir)
-
- # Ensure destination exists
- if not os.path.isdir(slapcomp_dir):
- log.warning("Folder did not exist, creating folder structure")
- os.makedirs(slapcomp_dir)
-
- # Compute output path
- new_filename = "{}_{}_slapcomp_v001.comp".format(project, asset)
- new_filepath = os.path.join(slapcomp_dir, new_filename)
-
- # Create new unqiue filepath
- if os.path.exists(new_filepath):
- new_filepath = version_up(new_filepath)
-
- return new_filepath
-
-
-def _update_savers(comp, session):
- """Update all savers of the current comp to ensure the output is correct
-
- Args:
- comp (object): current comp instance
- session (dict): the current Avalon session
-
- Returns:
- None
- """
-
- new_work = get_workdir_from_session(session)
- renders = os.path.join(new_work, "renders")
- version_folder = _format_version_folder(renders)
- renders_version = os.path.join(renders, version_folder)
-
- comp.Print("New renders to: %s\n" % renders)
-
- with api.comp_lock_and_undo_chunk(comp):
- savers = comp.GetToolList(False, "Saver").values()
- for saver in savers:
- filepath = saver.GetAttrs("TOOLST_Clip_Name")[1.0]
- filename = os.path.basename(filepath)
- new_path = os.path.join(renders_version, filename)
- saver["Clip"] = new_path
-
-
-def update_frame_range(comp, representations):
- """Update the frame range of the comp and render length
-
- The start and end frame are based on the lowest start frame and the highest
- end frame
-
- Args:
- comp (object): current focused comp
- representations (list) collection of dicts
-
- Returns:
- None
-
- """
-
- version_ids = [r["parent"] for r in representations]
- project_name = get_current_project_name()
- versions = list(get_versions(project_name, version_ids=version_ids))
-
- start = min(v["data"]["frameStart"] for v in versions)
- end = max(v["data"]["frameEnd"] for v in versions)
-
- fusion_lib.update_frame_range(start, end, comp=comp)
-
-
-def switch(asset_name, filepath=None, new=True):
- """Switch the current containers of the file to the other asset (shot)
-
- Args:
- filepath (str): file path of the comp file
- asset_name (str): name of the asset (shot)
- new (bool): Save updated comp under a different name
-
- Returns:
- comp path (str): new filepath of the updated comp
-
- """
-
- # If filepath provided, ensure it is valid absolute path
- if filepath is not None:
- if not os.path.isabs(filepath):
- filepath = os.path.abspath(filepath)
-
- assert os.path.exists(filepath), "%s must exist " % filepath
-
- # Assert asset name exists
- # It is better to do this here then to wait till switch_shot does it
- project_name = get_current_project_name()
- asset = get_asset_by_name(project_name, asset_name)
- assert asset, "Could not find '%s' in the database" % asset_name
-
- # Go to comp
- if not filepath:
- current_comp = api.get_current_comp()
- assert current_comp is not None, "Could not find current comp"
- else:
- fusion = _get_fusion_instance()
- current_comp = fusion.LoadComp(filepath, quiet=True)
- assert current_comp is not None, "Fusion could not load '%s'" % filepath
-
- host = registered_host()
- containers = list(host.ls())
- assert containers, "Nothing to update"
-
- representations = []
- for container in containers:
- try:
- representation = fusion_lib.switch_item(container,
- asset_name=asset_name)
- representations.append(representation)
- except Exception as e:
- current_comp.Print("Error in switching! %s\n" % e.message)
-
- message = "Switched %i Loaders of the %i\n" % (len(representations),
- len(containers))
- current_comp.Print(message)
-
- # Build the session to switch to
- switch_to_session = legacy_io.Session.copy()
- switch_to_session["AVALON_ASSET"] = asset['name']
-
- if new:
- comp_path = _format_filepath(switch_to_session)
-
- # Update savers output based on new session
- _update_savers(current_comp, switch_to_session)
- else:
- comp_path = version_up(filepath)
-
- current_comp.Print(comp_path)
-
- current_comp.Print("\nUpdating frame range")
- update_frame_range(current_comp, representations)
-
- current_comp.Save(comp_path)
-
- return comp_path
-
-
-if __name__ == '__main__':
-
- import argparse
-
- parser = argparse.ArgumentParser(description="Switch to a shot within an"
- "existing comp file")
-
- parser.add_argument("--file_path",
- type=str,
- default=True,
- help="File path of the comp to use")
-
- parser.add_argument("--asset_name",
- type=str,
- default=True,
- help="Name of the asset (shot) to switch")
-
- args, unknown = parser.parse_args()
-
- install_host(api)
- switch(args.asset_name, args.file_path)
-
- sys.exit(0)
diff --git a/openpype/scripts/ocio_wrapper.py b/openpype/scripts/ocio_wrapper.py
index 16558642c6..40553d30f2 100644
--- a/openpype/scripts/ocio_wrapper.py
+++ b/openpype/scripts/ocio_wrapper.py
@@ -174,5 +174,79 @@ def _get_views_data(config_path):
return data
+def _get_display_view_colorspace_name(config_path, display, view):
+ """Returns the colorspace attribute of the (display, view) pair.
+
+ Args:
+ config_path (str): path string leading to config.ocio
+ display (str): display name e.g. "ACES"
+ view (str): view name e.g. "sRGB"
+
+
+ Raises:
+ IOError: Input config does not exist.
+
+ Returns:
+ view color space name (str) e.g. "Output - sRGB"
+ """
+
+ config_path = Path(config_path)
+
+ if not config_path.is_file():
+ raise IOError("Input path should be `config.ocio` file")
+
+ config = ocio.Config.CreateFromFile(str(config_path))
+ colorspace = config.getDisplayViewColorSpaceName(display, view)
+
+ return colorspace
+
+
+@config.command(
+ name="get_display_view_colorspace_name",
+ help=(
+ "return default view colorspace name "
+ "for the given display and view "
+ "--in_path and --out_path args are required"
+ )
+)
+@click.option("--in_path", required=True,
+ help="path where to read ocio config file",
+ type=click.Path(exists=True))
+@click.option("--out_path", required=True,
+ help="path where to write output json file",
+ type=click.Path())
+@click.option("--display", required=True,
+ help="display name",
+ type=click.STRING)
+@click.option("--view", required=True,
+ help="view name",
+ type=click.STRING)
+def get_display_view_colorspace_name(in_path, out_path,
+ display, view):
+ """Aggregate view colorspace name to file.
+
+ Wrapper command for processes without access to OpenColorIO
+
+ Args:
+ in_path (str): config file path string
+ out_path (str): temp json file path string
+ display (str): display name e.g. "ACES"
+ view (str): view name e.g. "sRGB"
+
+ Example of use:
+ > python.exe ./ocio_wrapper.py config \
+ get_display_view_colorspace_name --in_path= \
+ --out_path= --display= --view=
+ """
+
+ out_data = _get_display_view_colorspace_name(in_path,
+ display,
+ view)
+
+ with open(out_path, "w") as f:
+ json.dump(out_data, f)
+
+ print(f"Display view colorspace saved to '{out_path}'")
+
if __name__ == '__main__':
main()
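The new wrapper command exists so processes without OpenColorIO can resolve a display/view colorspace: the command writes the name as JSON to `--out_path`, and the caller reads it back. The round-trip, sketched without OCIO (the colorspace value and paths below are illustrative):

```python
import json
import os
import tempfile

# What _get_display_view_colorspace_name would return for, e.g.,
# display "ACES" and view "sRGB" (illustrative value).
colorspace = "Output - sRGB"

# The command side: dump the resolved name to the --out_path JSON file.
fd, out_path = tempfile.mkstemp(suffix=".json")
os.close(fd)
with open(out_path, "w") as f:
    json.dump(colorspace, f)

# The caller side: a process without OCIO reads the name back.
with open(out_path) as f:
    loaded = json.load(f)
print(loaded)
os.remove(out_path)
```

Going through a temp JSON file (rather than stdout) keeps the result unambiguous even when the subprocess prints other diagnostics.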
diff --git a/openpype/settings/ayon_settings.py b/openpype/settings/ayon_settings.py
index 50abfe4839..9a4f0607e0 100644
--- a/openpype/settings/ayon_settings.py
+++ b/openpype/settings/ayon_settings.py
@@ -616,6 +616,23 @@ def _convert_maya_project_settings(ayon_settings, output):
output["maya"] = ayon_maya
+def _convert_3dsmax_project_settings(ayon_settings, output):
+ if "max" not in ayon_settings:
+ return
+
+ ayon_max = ayon_settings["max"]
+ _convert_host_imageio(ayon_max)
+ if "PointCloud" in ayon_max:
+ point_cloud_attribute = ayon_max["PointCloud"]["attribute"]
+ new_point_cloud_attribute = {
+ item["name"]: item["value"]
+ for item in point_cloud_attribute
+ }
+ ayon_max["PointCloud"]["attribute"] = new_point_cloud_attribute
+
+ output["max"] = ayon_max
+
+
def _convert_nuke_knobs(knobs):
new_knobs = []
for knob in knobs:
@@ -737,6 +754,17 @@ def _convert_nuke_project_settings(ayon_settings, output):
item_filter["subsets"] = item_filter.pop("product_names")
item_filter["families"] = item_filter.pop("product_types")
+ reformat_nodes_config = item.get("reformat_nodes_config") or {}
+ reposition_nodes = reformat_nodes_config.get(
+ "reposition_nodes") or []
+
+ for reposition_node in reposition_nodes:
+ if "knobs" not in reposition_node:
+ continue
+ reposition_node["knobs"] = _convert_nuke_knobs(
+ reposition_node["knobs"]
+ )
+
name = item.pop("name")
new_review_data_outputs[name] = item
ayon_publish["ExtractReviewDataMov"]["outputs"] = new_review_data_outputs
@@ -1261,6 +1289,7 @@ def convert_project_settings(ayon_settings, default_settings):
_convert_flame_project_settings(ayon_settings, output)
_convert_fusion_project_settings(ayon_settings, output)
_convert_maya_project_settings(ayon_settings, output)
+ _convert_3dsmax_project_settings(ayon_settings, output)
_convert_nuke_project_settings(ayon_settings, output)
_convert_hiero_project_settings(ayon_settings, output)
_convert_photoshop_project_settings(ayon_settings, output)
diff --git a/openpype/settings/defaults/project_settings/houdini.json b/openpype/settings/defaults/project_settings/houdini.json
index 7673725831..6964db0013 100644
--- a/openpype/settings/defaults/project_settings/houdini.json
+++ b/openpype/settings/defaults/project_settings/houdini.json
@@ -106,6 +106,11 @@
"$JOB"
]
},
+ "ValidateReviewColorspace": {
+ "enabled": true,
+ "optional": true,
+ "active": true
+ },
"ValidateContainers": {
"enabled": true,
"optional": true,
diff --git a/openpype/settings/entities/schemas/projects_schema/schemas/schema_houdini_publish.json b/openpype/settings/entities/schemas/projects_schema/schemas/schema_houdini_publish.json
index 670b1a0bc2..d5f70b0312 100644
--- a/openpype/settings/entities/schemas/projects_schema/schemas/schema_houdini_publish.json
+++ b/openpype/settings/entities/schemas/projects_schema/schemas/schema_houdini_publish.json
@@ -40,6 +40,10 @@
"type": "schema_template",
"name": "template_publish_plugin",
"template_data": [
+ {
+ "key": "ValidateReviewColorspace",
+ "label": "Validate Review Colorspace"
+ },
{
"key": "ValidateContainers",
"label": "ValidateContainers"
diff --git a/openpype/tools/attribute_defs/widgets.py b/openpype/tools/attribute_defs/widgets.py
index 7967416e9f..d9c55f4a64 100644
--- a/openpype/tools/attribute_defs/widgets.py
+++ b/openpype/tools/attribute_defs/widgets.py
@@ -19,6 +19,7 @@ from openpype.tools.utils import (
CustomTextComboBox,
FocusSpinBox,
FocusDoubleSpinBox,
+ MultiSelectionComboBox,
)
from openpype.widgets.nice_checkbox import NiceCheckbox
@@ -412,10 +413,19 @@ class EnumAttrWidget(_BaseAttrDefWidget):
self._multivalue = False
super(EnumAttrWidget, self).__init__(*args, **kwargs)
+ @property
+ def multiselection(self):
+ return self.attr_def.multiselection
+
def _ui_init(self):
- input_widget = CustomTextComboBox(self)
- combo_delegate = QtWidgets.QStyledItemDelegate(input_widget)
- input_widget.setItemDelegate(combo_delegate)
+ if self.multiselection:
+ input_widget = MultiSelectionComboBox(self)
+
+ else:
+ input_widget = CustomTextComboBox(self)
+ combo_delegate = QtWidgets.QStyledItemDelegate(input_widget)
+ input_widget.setItemDelegate(combo_delegate)
+ self._combo_delegate = combo_delegate
if self.attr_def.tooltip:
input_widget.setToolTip(self.attr_def.tooltip)
@@ -427,9 +437,11 @@ class EnumAttrWidget(_BaseAttrDefWidget):
if idx >= 0:
input_widget.setCurrentIndex(idx)
- input_widget.currentIndexChanged.connect(self._on_value_change)
+ if self.multiselection:
+ input_widget.value_changed.connect(self._on_value_change)
+ else:
+ input_widget.currentIndexChanged.connect(self._on_value_change)
- self._combo_delegate = combo_delegate
self._input_widget = input_widget
self.main_layout.addWidget(input_widget, 0)
@@ -442,17 +454,40 @@ class EnumAttrWidget(_BaseAttrDefWidget):
self.value_changed.emit(new_value, self.attr_def.id)
def current_value(self):
+ if self.multiselection:
+ return self._input_widget.value()
idx = self._input_widget.currentIndex()
return self._input_widget.itemData(idx)
+ def _multiselection_multivalue_prep(self, values):
+ final = None
+ multivalue = False
+ for value in values:
+ value = set(value)
+ if final is None:
+ final = value
+ elif multivalue or final != value:
+ final |= value
+ multivalue = True
+ if final is None:
+ final = set()
+ return list(final), multivalue
+
def set_value(self, value, multivalue=False):
if multivalue:
- set_value = set(value)
- if len(set_value) == 1:
- multivalue = False
- value = tuple(set_value)[0]
+ if self.multiselection:
+ value, multivalue = self._multiselection_multivalue_prep(
+ value)
+ else:
+ set_value = set(value)
+ if len(set_value) == 1:
+ multivalue = False
+ value = tuple(set_value)[0]
- if not multivalue:
+ if self.multiselection:
+ self._input_widget.blockSignals(True)
+ self._input_widget.set_value(value)
+ self._input_widget.blockSignals(False)
+
+ elif not multivalue:
idx = self._input_widget.findData(value)
cur_idx = self._input_widget.currentIndex()
if idx != cur_idx and idx >= 0:
diff --git a/openpype/tools/publisher/widgets/screenshot_widget.py b/openpype/tools/publisher/widgets/screenshot_widget.py
index 4ccf920571..64cccece6c 100644
--- a/openpype/tools/publisher/widgets/screenshot_widget.py
+++ b/openpype/tools/publisher/widgets/screenshot_widget.py
@@ -31,7 +31,6 @@ class ScreenMarquee(QtWidgets.QDialog):
fade_anim.setEndValue(50)
fade_anim.setDuration(200)
fade_anim.setEasingCurve(QtCore.QEasingCurve.OutCubic)
- fade_anim.start(QtCore.QAbstractAnimation.DeleteWhenStopped)
fade_anim.valueChanged.connect(self._on_fade_anim)
@@ -46,7 +45,7 @@ class ScreenMarquee(QtWidgets.QDialog):
for screen in QtWidgets.QApplication.screens():
screen.geometryChanged.connect(self._fit_screen_geometry)
- self._opacity = fade_anim.currentValue()
+ self._opacity = fade_anim.startValue()
self._click_pos = None
self._capture_rect = None
diff --git a/openpype/tools/settings/settings/item_widgets.py b/openpype/tools/settings/settings/item_widgets.py
index 117eca7d6b..2fd13cbbd8 100644
--- a/openpype/tools/settings/settings/item_widgets.py
+++ b/openpype/tools/settings/settings/item_widgets.py
@@ -4,6 +4,7 @@ from qtpy import QtWidgets, QtCore, QtGui
from openpype.widgets.sliders import NiceSlider
from openpype.tools.settings import CHILD_OFFSET
+from openpype.tools.utils import MultiSelectionComboBox
from openpype.settings.entities.exceptions import BaseInvalidValue
from .widgets import (
@@ -15,7 +16,6 @@ from .widgets import (
SettingsNiceCheckbox,
SettingsLineEdit
)
-from .multiselection_combobox import MultiSelectionComboBox
from .wrapper_widgets import (
WrapperWidget,
CollapsibleWrapper,
diff --git a/openpype/tools/utils/__init__.py b/openpype/tools/utils/__init__.py
index f35bfaee70..d343353112 100644
--- a/openpype/tools/utils/__init__.py
+++ b/openpype/tools/utils/__init__.py
@@ -38,6 +38,7 @@ from .models import (
from .overlay_messages import (
MessageOverlayObject,
)
+from .multiselection_combobox import MultiSelectionComboBox
__all__ = (
@@ -78,4 +79,6 @@ __all__ = (
"RecursiveSortFilterProxyModel",
"MessageOverlayObject",
+
+ "MultiSelectionComboBox",
)
diff --git a/openpype/tools/utils/lib.py b/openpype/tools/utils/lib.py
index 2df46c1eae..723e71e7aa 100644
--- a/openpype/tools/utils/lib.py
+++ b/openpype/tools/utils/lib.py
@@ -170,8 +170,12 @@ def get_openpype_qt_app():
if attr is not None:
QtWidgets.QApplication.setAttribute(attr)
- if hasattr(
- QtWidgets.QApplication, "setHighDpiScaleFactorRoundingPolicy"
+ policy = os.getenv("QT_SCALE_FACTOR_ROUNDING_POLICY")
+ if (
+ hasattr(
+ QtWidgets.QApplication, "setHighDpiScaleFactorRoundingPolicy"
+ )
+ and not policy
):
QtWidgets.QApplication.setHighDpiScaleFactorRoundingPolicy(
QtCore.Qt.HighDpiScaleFactorRoundingPolicy.PassThrough
diff --git a/openpype/tools/settings/settings/multiselection_combobox.py b/openpype/tools/utils/multiselection_combobox.py
similarity index 84%
rename from openpype/tools/settings/settings/multiselection_combobox.py
rename to openpype/tools/utils/multiselection_combobox.py
index d64fc83745..34361fca17 100644
--- a/openpype/tools/settings/settings/multiselection_combobox.py
+++ b/openpype/tools/utils/multiselection_combobox.py
@@ -1,9 +1,10 @@
from qtpy import QtCore, QtGui, QtWidgets
-from openpype.tools.utils.lib import (
+
+from .lib import (
checkstate_int_to_enum,
checkstate_enum_to_int,
)
-from openpype.tools.utils.constants import (
+from .constants import (
CHECKED_INT,
UNCHECKED_INT,
ITEM_IS_USER_TRISTATE,
@@ -60,12 +61,25 @@ class MultiSelectionComboBox(QtWidgets.QComboBox):
self._block_mouse_release_timer = QtCore.QTimer(self, singleShot=True)
self._initial_mouse_pos = None
self._separator = separator
- self.placeholder_text = placeholder
- self.delegate = ComboItemDelegate(self)
- self.setItemDelegate(self.delegate)
+ self._placeholder_text = placeholder
+ delegate = ComboItemDelegate(self)
+ self.setItemDelegate(delegate)
- self.lines = {}
- self.item_height = None
+ self._lines = {}
+ self._item_height = None
+ self._custom_text = None
+ self._delegate = delegate
+
+ def get_placeholder_text(self):
+ return self._placeholder_text
+
+ def set_placeholder_text(self, text):
+ self._placeholder_text = text
+ self._update_size_hint()
+
+ def set_custom_text(self, text):
+ self._custom_text = text
+ self._update_size_hint()
def focusInEvent(self, event):
self.focused_in.emit()
@@ -158,7 +172,7 @@ class MultiSelectionComboBox(QtWidgets.QComboBox):
if new_state is not None:
model.setData(current_index, new_state, QtCore.Qt.CheckStateRole)
self.view().update(current_index)
- self.update_size_hint()
+ self._update_size_hint()
self.value_changed.emit()
return True
@@ -182,25 +196,33 @@ class MultiSelectionComboBox(QtWidgets.QComboBox):
self.initStyleOption(option)
painter.drawComplexControl(QtWidgets.QStyle.CC_ComboBox, option)
- # draw the icon and text
items = self.checked_items_text()
- if not items:
- option.currentText = self.placeholder_text
+ # draw the icon and text
+ draw_text = True
+ combotext = None
+ if self._custom_text is not None:
+ combotext = self._custom_text
+ elif not items:
+ combotext = self._placeholder_text
+ else:
+ draw_text = False
+ if draw_text:
+ option.currentText = combotext
option.palette.setCurrentColorGroup(QtGui.QPalette.Disabled)
painter.drawControl(QtWidgets.QStyle.CE_ComboBoxLabel, option)
return
font_metricts = self.fontMetrics()
- if self.item_height is None:
+ if self._item_height is None:
self.updateGeometry()
self.update()
return
- for line, items in self.lines.items():
+ for line, items in self._lines.items():
top_y = (
option.rect.top()
- + (line * self.item_height)
+ + (line * self._item_height)
+ self.top_bottom_margins
)
left_x = option.rect.left() + self.left_offset
@@ -210,7 +232,7 @@ class MultiSelectionComboBox(QtWidgets.QComboBox):
label_rect.moveTop(top_y)
label_rect.moveLeft(left_x)
- label_rect.setHeight(self.item_height)
+ label_rect.setHeight(self._item_height)
label_rect.setWidth(
label_rect.width() + self.left_right_padding
)
@@ -239,14 +261,18 @@ class MultiSelectionComboBox(QtWidgets.QComboBox):
def resizeEvent(self, *args, **kwargs):
super(MultiSelectionComboBox, self).resizeEvent(*args, **kwargs)
- self.update_size_hint()
+ self._update_size_hint()
- def update_size_hint(self):
- self.lines = {}
+ def _update_size_hint(self):
+ if self._custom_text is not None:
+ self.update()
+ return
+ self._lines = {}
items = self.checked_items_text()
if not items:
self.update()
+ self.repaint()
return
option = QtWidgets.QStyleOptionComboBox()
@@ -259,7 +285,7 @@ class MultiSelectionComboBox(QtWidgets.QComboBox):
total_width = option.rect.width() - btn_rect.width()
line = 0
- self.lines = {line: []}
+ self._lines = {line: []}
font_metricts = self.fontMetrics()
default_left_x = 0 + self.left_offset
@@ -270,18 +296,18 @@ class MultiSelectionComboBox(QtWidgets.QComboBox):
right_x = left_x + width
if right_x > total_width:
left_x = int(default_left_x)
- if self.lines.get(line):
+ if self._lines.get(line):
line += 1
- self.lines[line] = [item]
+ self._lines[line] = [item]
left_x += width
else:
- self.lines[line] = [item]
+ self._lines[line] = [item]
line += 1
else:
- if line in self.lines:
- self.lines[line].append(item)
+ if line in self._lines:
+ self._lines[line].append(item)
else:
- self.lines[line] = [item]
+ self._lines[line] = [item]
left_x = left_x + width + self.item_spacing
self.update()
@@ -289,18 +315,20 @@ class MultiSelectionComboBox(QtWidgets.QComboBox):
def sizeHint(self):
value = super(MultiSelectionComboBox, self).sizeHint()
- lines = len(self.lines)
- if lines == 0:
- lines = 1
+ lines = 1
+ if self._custom_text is None:
+ lines = len(self._lines)
+ if lines == 0:
+ lines = 1
- if self.item_height is None:
- self.item_height = (
+ if self._item_height is None:
+ self._item_height = (
self.fontMetrics().height()
+ (2 * self.top_bottom_padding)
+ (2 * self.top_bottom_margins)
)
value.setHeight(
- (lines * self.item_height)
+ (lines * self._item_height)
+ (2 * self.top_bottom_margins)
)
return value
@@ -316,7 +344,7 @@ class MultiSelectionComboBox(QtWidgets.QComboBox):
else:
check_state = UNCHECKED_INT
self.setItemData(idx, check_state, QtCore.Qt.CheckStateRole)
- self.update_size_hint()
+ self._update_size_hint()
def value(self):
items = list()
diff --git a/openpype/version.py b/openpype/version.py
index 12f797228b..d5d46bab0c 100644
--- a/openpype/version.py
+++ b/openpype/version.py
@@ -1,3 +1,3 @@
# -*- coding: utf-8 -*-
"""Package declaring Pype version."""
-__version__ = "3.16.5-nightly.3"
+__version__ = "3.16.5"
diff --git a/pyproject.toml b/pyproject.toml
index a07c547123..68fbf19c91 100644
--- a/pyproject.toml
+++ b/pyproject.toml
@@ -1,6 +1,6 @@
[tool.poetry]
name = "OpenPype"
-version = "3.16.4" # OpenPype
+version = "3.16.5" # OpenPype
description = "Open VFX and Animation pipeline with support."
authors = ["OpenPype Team "]
license = "MIT License"
diff --git a/server_addon/aftereffects/server/settings/creator_plugins.py b/server_addon/aftereffects/server/settings/creator_plugins.py
index ee52fadd40..9cb03b0b26 100644
--- a/server_addon/aftereffects/server/settings/creator_plugins.py
+++ b/server_addon/aftereffects/server/settings/creator_plugins.py
@@ -5,7 +5,7 @@ from ayon_server.settings import BaseSettingsModel
class CreateRenderPlugin(BaseSettingsModel):
mark_for_review: bool = Field(True, title="Review")
- defaults: list[str] = Field(
+ default_variants: list[str] = Field(
default_factory=list,
title="Default Variants"
)
diff --git a/server_addon/aftereffects/server/settings/main.py b/server_addon/aftereffects/server/settings/main.py
index 04d2e51cc9..4edc46d259 100644
--- a/server_addon/aftereffects/server/settings/main.py
+++ b/server_addon/aftereffects/server/settings/main.py
@@ -40,7 +40,7 @@ DEFAULT_AFTEREFFECTS_SETTING = {
"create": {
"RenderCreator": {
"mark_for_review": True,
- "defaults": [
+ "default_variants": [
"Main"
]
}
diff --git a/server_addon/aftereffects/server/version.py b/server_addon/aftereffects/server/version.py
index a242f0e757..df0c92f1e2 100644
--- a/server_addon/aftereffects/server/version.py
+++ b/server_addon/aftereffects/server/version.py
@@ -1,3 +1,3 @@
# -*- coding: utf-8 -*-
"""Package declaring addon version."""
-__version__ = "0.1.1"
+__version__ = "0.1.2"
diff --git a/server_addon/core/server/settings/main.py b/server_addon/core/server/settings/main.py
index d19d732e71..ca8f7e63ed 100644
--- a/server_addon/core/server/settings/main.py
+++ b/server_addon/core/server/settings/main.py
@@ -4,6 +4,7 @@ from ayon_server.settings import (
BaseSettingsModel,
MultiplatformPathListModel,
ensure_unique_names,
+ task_types_enum,
)
from ayon_server.exceptions import BadRequestException
@@ -38,13 +39,52 @@ class CoreImageIOConfigModel(BaseSettingsModel):
class CoreImageIOBaseModel(BaseSettingsModel):
activate_global_color_management: bool = Field(
False,
- title="Override global OCIO config"
+ title="Enable Color Management"
)
ocio_config: CoreImageIOConfigModel = Field(
- default_factory=CoreImageIOConfigModel, title="OCIO config"
+ default_factory=CoreImageIOConfigModel,
+ title="OCIO config"
)
file_rules: CoreImageIOFileRulesModel = Field(
- default_factory=CoreImageIOFileRulesModel, title="File Rules"
+ default_factory=CoreImageIOFileRulesModel,
+ title="File Rules"
+ )
+
+
+class VersionStartCategoryProfileModel(BaseSettingsModel):
+ _layout = "expanded"
+ host_names: list[str] = Field(
+ default_factory=list,
+ title="Host names"
+ )
+ task_types: list[str] = Field(
+ default_factory=list,
+ title="Task types",
+ enum_resolver=task_types_enum
+ )
+ task_names: list[str] = Field(
+ default_factory=list,
+ title="Task names"
+ )
+ product_types: list[str] = Field(
+ default_factory=list,
+ title="Product types"
+ )
+ product_names: list[str] = Field(
+ default_factory=list,
+ title="Product names"
+ )
+ version_start: int = Field(
+ 1,
+ title="Version Start",
+ ge=0
+ )
+
+
+class VersionStartCategoryModel(BaseSettingsModel):
+ profiles: list[VersionStartCategoryProfileModel] = Field(
+ default_factory=list,
+ title="Profiles"
)
@@ -61,6 +101,10 @@ class CoreSettings(BaseSettingsModel):
default_factory=GlobalToolsModel,
title="Tools"
)
+ version_start_category: VersionStartCategoryModel = Field(
+ default_factory=VersionStartCategoryModel,
+ title="Version start"
+ )
imageio: CoreImageIOBaseModel = Field(
default_factory=CoreImageIOBaseModel,
title="Color Management (ImageIO)"
@@ -131,6 +175,9 @@ DEFAULT_VALUES = {
"studio_code": "",
"environments": "{}",
"tools": DEFAULT_TOOLS_VALUES,
+ "version_start_category": {
+ "profiles": []
+ },
"publish": DEFAULT_PUBLISH_VALUES,
"project_folder_structure": json.dumps({
"__project_root__": {
diff --git a/server_addon/houdini/server/settings/publish_plugins.py b/server_addon/houdini/server/settings/publish_plugins.py
index b3e47d6948..528e847fce 100644
--- a/server_addon/houdini/server/settings/publish_plugins.py
+++ b/server_addon/houdini/server/settings/publish_plugins.py
@@ -151,7 +151,7 @@ class ValidateWorkfilePathsModel(BaseSettingsModel):
)
-class ValidateContainersModel(BaseSettingsModel):
+class BasicValidateModel(BaseSettingsModel):
enabled: bool = Field(title="Enabled")
optional: bool = Field(title="Optional")
active: bool = Field(title="Active")
@@ -161,8 +161,11 @@ class PublishPluginsModel(BaseSettingsModel):
ValidateWorkfilePaths: ValidateWorkfilePathsModel = Field(
default_factory=ValidateWorkfilePathsModel,
title="Validate workfile paths settings.")
- ValidateContainers: ValidateContainersModel = Field(
- default_factory=ValidateContainersModel,
+ ValidateReviewColorspace: BasicValidateModel = Field(
+ default_factory=BasicValidateModel,
+ title="Validate Review Colorspace.")
+ ValidateContainers: BasicValidateModel = Field(
+ default_factory=BasicValidateModel,
title="Validate Latest Containers.")
- ValidateSubsetName: ValidateContainersModel = Field(
- default_factory=ValidateContainersModel,
+ ValidateSubsetName: BasicValidateModel = Field(
+ default_factory=BasicValidateModel,
@@ -188,6 +191,11 @@ DEFAULT_HOUDINI_PUBLISH_SETTINGS = {
"$JOB"
]
},
+ "ValidateReviewColorspace": {
+ "enabled": True,
+ "optional": True,
+ "active": True
+ },
"ValidateContainers": {
"enabled": True,
"optional": True,
diff --git a/server_addon/max/server/settings/render_settings.py b/server_addon/max/server/settings/render_settings.py
index 6c236d9f12..c00cb5e436 100644
--- a/server_addon/max/server/settings/render_settings.py
+++ b/server_addon/max/server/settings/render_settings.py
@@ -44,6 +44,6 @@ class RenderSettingsModel(BaseSettingsModel):
DEFAULT_RENDER_SETTINGS = {
"default_render_image_folder": "renders/3dsmax",
"aov_separator": "underscore",
- "image_format": "png",
+ "image_format": "exr",
"multipass": True
}
diff --git a/server_addon/maya/server/settings/creators.py b/server_addon/maya/server/settings/creators.py
index 9b97b92e59..11e2b8a36c 100644
--- a/server_addon/maya/server/settings/creators.py
+++ b/server_addon/maya/server/settings/creators.py
@@ -252,7 +252,9 @@ DEFAULT_CREATORS_SETTINGS = {
},
"CreateUnrealSkeletalMesh": {
"enabled": True,
- "default_variants": [],
+ "default_variants": [
+ "Main",
+ ],
"joint_hints": "jnt_org"
},
"CreateMultiverseLook": {
diff --git a/server_addon/traypublisher/server/settings/simple_creators.py b/server_addon/traypublisher/server/settings/simple_creators.py
index 94d6602738..8335b9d34e 100644
--- a/server_addon/traypublisher/server/settings/simple_creators.py
+++ b/server_addon/traypublisher/server/settings/simple_creators.py
@@ -288,5 +288,22 @@ DEFAULT_SIMPLE_CREATORS = [
"allow_multiple_items": True,
"allow_version_control": False,
"extensions": []
+ },
+ {
+ "product_type": "audio",
+ "identifier": "",
+ "label": "Audio",
+ "icon": "fa5s.file-audio",
+ "default_variants": [
+ "Main"
+ ],
+ "description": "Audio product",
+ "detailed_description": "Audio files for review or final delivery",
+ "allow_sequences": False,
+ "allow_multiple_items": False,
+ "allow_version_control": False,
+ "extensions": [
+ ".wav"
+ ]
}
]
diff --git a/tests/unit/openpype/pipeline/test_colorspace.py b/tests/unit/openpype/pipeline/test_colorspace.py
index c22acee2d4..ac35a28303 100644
--- a/tests/unit/openpype/pipeline/test_colorspace.py
+++ b/tests/unit/openpype/pipeline/test_colorspace.py
@@ -28,10 +28,9 @@ class TestPipelineColorspace(TestPipeline):
cd to OpenPype repo root dir
poetry run python ./start.py runtests ../tests/unit/openpype/pipeline
"""
-
TEST_FILES = [
(
- "1Lf-mFxev7xiwZCWfImlRcw7Fj8XgNQMh",
+ "1csqimz8bbNcNgxtEXklLz6GRv91D3KgA",
"test_pipeline_colorspace.zip",
""
)