diff --git a/.github/ISSUE_TEMPLATE/bug_report.yml b/.github/ISSUE_TEMPLATE/bug_report.yml
index 9fcb69e2e9..090e28a951 100644
--- a/.github/ISSUE_TEMPLATE/bug_report.yml
+++ b/.github/ISSUE_TEMPLATE/bug_report.yml
@@ -35,6 +35,9 @@ body:
label: Version
description: What version are you running? Look to OpenPype Tray
options:
+ - 3.16.0-nightly.1
+ - 3.15.12
+ - 3.15.12-nightly.4
- 3.15.12-nightly.3
- 3.15.12-nightly.2
- 3.15.12-nightly.1
@@ -132,9 +135,6 @@ body:
- 3.14.5-nightly.3
- 3.14.5-nightly.2
- 3.14.5-nightly.1
- - 3.14.4
- - 3.14.4-nightly.4
- - 3.14.4-nightly.3
validations:
required: true
- type: dropdown
diff --git a/CHANGELOG.md b/CHANGELOG.md
index 095e0d96e4..b74fea7e3d 100644
--- a/CHANGELOG.md
+++ b/CHANGELOG.md
@@ -1,6 +1,609 @@
# Changelog
+## [3.15.12](https://github.com/ynput/OpenPype/tree/3.15.12)
+
+
+[Full Changelog](https://github.com/ynput/OpenPype/compare/3.15.11...3.15.12)
+
+### **🆕 New features**
+
+
+
+Tray Publisher: User can set colorspace per instance explicitly #4901
+
+With this feature a user can set/override the colorspace for the representations of an instance explicitly, instead of relying on the File Rules from project settings or the like. This way you can ingest any file and explicitly say "this file is colorspace X".
+
+
+___
+
+
+
+
+
+Review Family in Max #5001
+
+Adds a review feature by creating a preview animation in 3ds Max. (The code is still being cleaned up, so there will be some updates until it is ready for review.)
+
+
+___
+
+
+
+
+
+AfterEffects: support for workfile template builder #5163
+
+This PR adds templated workfile builder functionality. It allows someone to prepare an AE workfile with placeholders for automatically loading a particular representation of a particular subset of a particular asset from the context in which the workfile is opened. Selection from multiple prepared workfiles is provided through templates, so a specific task type can use a particular workfile template, etc. Artists can then build a workfile from the template when opening a new workfile.
+
+
+___
+
+
+
+
+
+CreatePlugin: Get next version helper #5242
+
+Implemented helper functions to get next available versions for create instances.
+
+
+___
+
+
+
+### **🚀 Enhancements**
+
+
+
+Maya: Improve Templates #4854
+
+Use library method for fetching reference node and support parent in hierarchy.
+
+
+___
+
+
+
+
+
+Bug: Maya - Xgen sidecar files aren't moved when saving the workfile as a new asset workfile (changing context) - OP-6222 #5215
+
+This PR manages the Xgen files when switching context in the Workfiles app.
+
+
+___
+
+
+
+
+
+node references to check for duplicates in Max #5192
+
+No duplicate node references in Max when users try to select nodes before publishing.
+
+
+___
+
+
+
+
+
+Tweak profiles logging to debug level #5194
+
+Tweak profiles logging to debug level since they aren't artist facing logs.
+
+
+___
+
+
+
+
+
+Enhancement: Reduce more visual clutter for artists in new publisher reports #5208
+
+Got this from one of our artists' reports - figured some of these logs were definitely not for the artist, reduced those logs to debug level.
+
+
+___
+
+
+
+
+
+Cosmetics: Tweak pyblish repair actions (icon, logs, docstring) #5213
+
+- Add icon to RepairContextAction
+- logs to debug level
+- also add attempt repair for RepairAction for consistency
+- fix RepairContextAction docstring to mention correct argument name
+
+#### Additional info
+
+We should not forget to remove this ["deprecated" actions.py file](https://github.com/ynput/OpenPype/blob/3501d0d23a78fbaef106da2fffe946cb49bef855/openpype/action.py) in 3.16 (next-minor)
+
+## Testing notes:
+
+1. Run some fabulous repairs!
+
+___
+
+
+
+
+
+Maya: fix save file prompt on launch last workfile with color management enabled + restructure `set_colorspace` #5225
+
+- Only set `configFilePath` when OCIO env var is not set since it doesn't do anything if OCIO var is set anyway.
+- Set the Maya 2022+ default OCIO path using the resources path instead of "" to avoid Maya Save File on new file after launch
+- **Bugfix: This is what fixes the Save prompt on open last workfile feature with Global color management enabled**
+- Move all code related to applying the maya settings together after querying the settings
+- Swap around the `if use_workfile_settings` since the check was reversed
+- Use `get_current_project_name()` instead of environment vars
+
+
+___
+
+
+
+
+
+Enhancement: More descriptive error messages for Loaders #5227
+
+Tweak raised errors and error messages for loader errors.
+
+
+___
+
+
+
+
+
+Houdini: add select invalid action for ValidateSopOutputNode #5231
+
+This PR adds a `SelectROPAction` action to `houdini\api\action.py`, and it is used in `Validate Output Node`. `SelectROPAction` selects the ROPs associated with the errored instances.
+
+
+___
+
+
+
+
+
+Remove new lines from the delivery template string #5235
+
+If the delivery template has a newline at the end, say it was copied from a text editor, the delivery process fails with an `OSError` due to an incorrect destination path. To avoid that, `rstrip()` was added to the `delivery_path` processing.
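+
+A minimal sketch of the idea (the template string below is hypothetical):
+
+```python
+# A trailing newline sneaks in when the template is pasted from a text editor.
+delivery_template = "{root}/{project}/delivery/{subset}_v{version}\n"  # hypothetical
+
+# Stripping trailing whitespace before building the destination path avoids
+# the OSError caused by the newline ending up inside the path.
+delivery_path = delivery_template.rstrip()
+```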
+
+
+___
+
+
+
+
+
+Houdini: better selection on pointcache creation #5250
+
+Houdini allows an `ObjNode` path as `sop_path` on the ROP, whereas OP/AYON require `sop_path` to be set to a SOP node path explicitly. This change uses better selection handling to filter out selections that are invalid from the OP/AYON point of view. Valid selections are:
+- a `SopNode` whose parent is of type `geo` or `subnet`
+- an `ObjNode` of type `geo` that has
+  - a `SopNode` of type `output`
+  - a `SopNode` with the render flag on (if there is no `SopNode` of type `output`)
+
+This effectively filters out:
+- empty `ObjNode`s
+- `ObjNode`s of other types like `cam` and `dopnet`
+- `SopNode`s whose parents are of other types like `cam` and `sop solver`
+
+
+___
+
+
+
+
+
+Update scene inventory even if any errors occurred during update #5252
+
+When selecting many items in the scene inventory to update versions and one of the items errors out, the updating stops. However, before this PR the scene inventory would also NOT refresh, making you think it did nothing. The update is also implemented as a method to allow some code deduplication.
+
+
+___
+
+
+
+### **🐛 Bug fixes**
+
+
+
+Maya: Convert frame values to integers #5188
+
+Convert frame values to integers.
+
+
+___
+
+
+
+
+
+Maya: fix the register_event_callback correctly collecting workfile save after #5214
+
+Fixes the bug where `register_event_callback` was not able to collect the "workfile_save_after" action for the lock file action.
+
+
+___
+
+
+
+
+
+Maya: aligning default settings to distributed aces 1.2 config #5233
+
+Maya colorspace settings defaults are set so that they align with our distributed ACES 1.2 config file set in the global colorspace configs.
+
+
+___
+
+
+
+
+
+RepairAction and SelectInvalidAction filter instances failed on the exact plugin #5240
+
+RepairAction and SelectInvalidAction actually filter to instances that failed on the exact plugin - not on "any failure"
+
+
+___
+
+
+
+
+
+Maya: Bugfix look update nodes by id with non-unique shape names (query with `fullPath`) #5257
+
+Fixes a bug where updating attributes on nodes with an assigned shader failed if a shape name existed more than once in the scene, due to the `cmds.listRelatives` call not being done with the `fullPath=True` flag. Original error:
+```python
+# Traceback (most recent call last):
+# File "E:\openpype\OpenPype\openpype\tools\sceneinventory\view.py", line 264, in
+# lambda: self._show_version_dialog(items))
+# File "E:\openpype\OpenPype\openpype\tools\sceneinventory\view.py", line 722, in _show_version_dialog
+# self._update_containers(items, version)
+# File "E:\openpype\OpenPype\openpype\tools\sceneinventory\view.py", line 849, in _update_containers
+# update_container(item, item_version)
+# File "E:\openpype\OpenPype\openpype\pipeline\load\utils.py", line 502, in update_container
+# return loader.update(container, new_representation)
+# File "E:\openpype\OpenPype\openpype\hosts\maya\plugins\load\load_look.py", line 119, in update
+# nodes_by_id[lib.get_id(n)].append(n)
+# File "E:\openpype\OpenPype\openpype\hosts\maya\api\lib.py", line 1420, in get_id
+# sel.add(node)
+```
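+
+Illustrative only (the node name is hypothetical), showing the difference the flag makes:
+
+```python
+from maya import cmds
+
+# Without fullPath=True Maya may return the short name "pSphereShape1",
+# which is ambiguous when the same shape name exists under several groups.
+shapes = cmds.listRelatives("pSphere1", shapes=True, fullPath=True) or []
+# e.g. ["|group1|pSphere1|pSphereShape1"] - an unambiguous DAG path
+```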
+
+
+___
+
+
+
+
+
+Nuke: Create nodes with inpanel=False #5051
+
+This PR is meant to remove the annoyance of the UI changing focus to the properties window, only for the property window of the newly created node to disappear. Instead of using `node.hideControlPanel`, the concealment is implemented during the creation of the node, which does not change the focus of the current window.
+___
+
+
+
+
+
+Fix the reset frame range not setting up the right timeline in Max #5187
+
+Resolve #5181
+
+
+___
+
+
+
+
+
+Resolve: after launch automatization fixes #5193
+
+The workfile is now correctly created and aligned with the actual project. The launching mechanism is also fixed, so even if no workfile has been saved yet, the OpenPype menu opens automatically.
+
+
+___
+
+
+
+
+
+General: Revert backward incompatible change of path to template to multiplatform #5197
+
+Multi-platform support is still handled by the usage of `work[root]` (or any other root that is accessible across platforms).
+
+
+___
+
+
+
+
+
+Nuke: root set format updating in node graph #5198
+
+The Nuke root node needs to be reset on some values so that knobs can be updated in the node graph. This works the same way as when a user changes the frame number, so expressions update their values in knobs.
+
+
+___
+
+
+
+
+
+Hiero: fixing otio current project and cosmetics #5200
+
+OTIO was not returning the correct current project once an additional Untitled project was open in the project manager stack.
+
+
+___
+
+
+
+
+
+Max: Publisher instances don't hold their enabled/disabled states when the Publisher is reopened #5202
+
+Resolves #5183, a general MAXScript-to-Python conversion issue (e.g. bool conversion: `true` in MAXScript vs `True` in Python). Also resolves the ValueError raised when changing the subset to publish in the list view menu.
+
+
+___
+
+
+
+
+
+Burnins: Filter script is defined only for video streams #5205
+
+Burnins are working for inputs with audio.
+
+
+___
+
+
+
+
+
+Colorspace lib fix compatible python version comparison #5212
+
+Fix python version comparison.
+
+
+___
+
+
+
+
+
+Houdini: Fix `get_color_management_preferences` #5217
+
+Fix the issue described here where the logic for retrieving the current OCIO display and view was incorrectly trying to apply a regex to it.
+
+
+___
+
+
+
+
+
+Houdini: Redshift ROP image format bug #5218
+
+Problem:
+The "RS_outputFileFormat" parm value was not being set,
+and `image_format_enum` listed more image formats than the Redshift ROP supports.
+
+Fix:
+1) removed the unsupported formats from `image_format_enum`
+2) set the selected format value on `RS_outputFileFormat`
+___
+
+
+
+
+
+Colorspace: check PyOpenColorIO rather than Python version #5223
+
+Fixes the previously merged PR (https://github.com/ynput/OpenPype/pull/5212) and applies a better way to check compatibility with the PyOpenColorIO Python API.
+
+
+___
+
+
+
+
+
+Validate delivery action representations status #5228
+
+- disable delivery button if no representations checked
+- fix macos combobox layout
+- add error message if no delivery templates found
+
+
+___
+
+
+
+
+
+Houdini: Add geometry check for pointcache family #5230
+
+When `sop_path` on the ABC ROP node points to a non-`SopNode`, the validators `validate_abc_primitive_to_detail.py` and `validate_primitive_hierarchy_paths.py` error and crash when this line is executed: `geo = output_node.geometryAtFrame(frame)`.
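+
+A hedged sketch of the kind of guard this adds (the helper name and frame value are illustrative):
+
+```python
+import hou
+
+
+def get_rop_geometry(rop_node, frame=1001):
+    """Resolve the ROP's sop_path and validate it before querying geometry."""
+    sop_path = rop_node.parm("sop_path").eval()
+    node = hou.node(sop_path)
+    if not isinstance(node, hou.SopNode):
+        # Raising here replaces the hard crash on geometryAtFrame()
+        raise RuntimeError(
+            "'%s' SOP Path does not point to a SOP node: %s"
+            % (rop_node.path(), sop_path))
+    return node.geometryAtFrame(frame)
+```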
+
+
+___
+
+
+
+
+
+Houdini: Add geometry check for VDB family #5232
+
+When `sop_path` on the Geometry ROP node points to a non-`SopNode`, the validator `validate_vdb_output_node.py` errors and crashes when this line is executed: `sop_node.geometryAtFrame(frame)`.
+
+
+___
+
+
+
+
+
+Substance Painter: Include the setting only in publish tab #5234
+
+Instead of having two settings in both the create and publish tabs, there is solely one setting in the publish tab for users to set up the parameters. Resolves #5172.
+
+
+___
+
+
+
+
+
+Maya: Fix collecting arnold prefix when none #5243
+
+When no prefix is specified in render settings, the renderlayer collector would error.
+
+
+___
+
+
+
+
+
+Deadline: OPENPYPE_VERSION should only be added when running from build #5244
+
+When running from source the environment variable `OPENPYPE_VERSION` should not be added. This is a bugfix for the feature #4489
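+
+A minimal sketch of the intended behavior, assuming the `is_running_from_build` helper from `openpype.lib`:
+
+```python
+import os
+from openpype.lib import is_running_from_build
+
+environment = {}
+op_version = os.environ.get("OPENPYPE_VERSION")
+
+# Only a build pins its own version; running from source should not inject it.
+if op_version and is_running_from_build():
+    environment["OPENPYPE_VERSION"] = op_version
+```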
+
+
+___
+
+
+
+
+
+Fix no prompt for "unsaved changes" showing when opening workfile in Houdini #5246
+
+Fix no prompt for "unsaved changes" showing when opening workfile in Houdini.
+
+
+___
+
+
+
+
+
+Fix no prompt for "unsaved changes" showing when opening workfile in Substance Painter #5248
+
+Fix no prompt for "unsaved changes" showing when opening workfile in Substance Painter.
+
+
+___
+
+
+
+
+
+General: add the os library before os.environ.get #5249
+
+Adds the `os` import to `creator_plugins.py`, which is needed by the `os.environ.get` call on line 667.
+
+
+___
+
+
+
+
+
+Maya: Fix set_attribute for enum attributes #5261
+
+Fix for #5260
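+
+A hedged usage sketch (the node and attribute names are hypothetical):
+
+```python
+from openpype.hosts.maya.api.lib import set_attribute
+
+# With the fix, a string value on an enum attribute is mapped to its index,
+# e.g. an enum attribute "quality" with the values "low:medium:high".
+set_attribute("quality", "high", "myNode")
+```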
+
+
+___
+
+
+
+
+
+Unreal: Move Qt imports away from module init #5268
+
+Importing `Window` creates errors in headless mode.
+```
+*** WRN: >>> { ModulesLoader }: [ FAILED to import host folder unreal ]
+=============================
+No Qt bindings could be found
+=============================
+Traceback (most recent call last):
+ File "C:\Users\tokejepsen\OpenPype\.venv\lib\site-packages\qtpy\__init__.py", line 252, in
+ from PySide6 import __version__ as PYSIDE_VERSION # analysis:ignore
+ModuleNotFoundError: No module named 'PySide6'
+
+During handling of the above exception, another exception occurred:
+
+Traceback (most recent call last):
+ File "C:\Users\tokejepsen\OpenPype\openpype\modules\base.py", line 385, in _load_modules
+ default_module = __import__(
+ File "C:\Users\tokejepsen\OpenPype\openpype\hosts\unreal\__init__.py", line 1, in
+ from .addon import UnrealAddon
+ File "C:\Users\tokejepsen\OpenPype\openpype\hosts\unreal\addon.py", line 4, in
+ from openpype.widgets.message_window import Window
+ File "C:\Users\tokejepsen\OpenPype\openpype\widgets\__init__.py", line 1, in
+ from .password_dialog import PasswordDialog
+ File "C:\Users\tokejepsen\OpenPype\openpype\widgets\password_dialog.py", line 1, in
+ from qtpy import QtWidgets, QtCore, QtGui
+ File "C:\Users\tokejepsen\OpenPype\.venv\lib\site-packages\qtpy\__init__.py", line 259, in
+    raise QtBindingsNotFoundError()
+qtpy.QtBindingsNotFoundError: No Qt bindings could be found
+```
+
+
+___
+
+
+
+### **🔀 Refactored code**
+
+
+
+Maya: Minor refactoring and code cleanup #5226
+
+Some small cleanup and refactoring of logic: removing old comments, unused imports and some minor optimization. Also removed the prints of the loader names of each container in the scene in `fix_incompatible_containers`, and optimized it by using a `set` defined only once. Moved some UI-related code/tweaks to run `on_init` only if not in headless mode. Removed an empty `obj.py` file. Each commit message describes why the change was made.
+
+
+___
+
+
+
+### **Merged pull requests**
+
+
+
+Bug: Template builder fails when loading data without outliner representation #5222
+
+Adds assertion handling for the case where the container does not have a representation in the outliner.
+
+
+___
+
+
+
+
+
+AfterEffects - add container check validator to AE settings #5203
+
+Adds a check that the scene contains only the latest versions of loaded containers.
+
+
+___
+
+
+
+
+
+
## [3.15.11](https://github.com/ynput/OpenPype/tree/3.15.11)
diff --git a/openpype/hosts/houdini/plugins/create/create_pointcache.py b/openpype/hosts/houdini/plugins/create/create_pointcache.py
index df74070fee..554d5f2016 100644
--- a/openpype/hosts/houdini/plugins/create/create_pointcache.py
+++ b/openpype/hosts/houdini/plugins/create/create_pointcache.py
@@ -1,7 +1,6 @@
# -*- coding: utf-8 -*-
"""Creator plugin for creating pointcache alembics."""
from openpype.hosts.houdini.api import plugin
-from openpype.pipeline import CreatedInstance
import hou
@@ -14,15 +13,13 @@ class CreatePointCache(plugin.HoudiniCreator):
icon = "gears"
def create(self, subset_name, instance_data, pre_create_data):
- import hou
-
instance_data.pop("active", None)
instance_data.update({"node_type": "alembic"})
instance = super(CreatePointCache, self).create(
subset_name,
instance_data,
- pre_create_data) # type: CreatedInstance
+ pre_create_data)
instance_node = hou.node(instance.get("instance_node"))
parms = {
@@ -37,13 +34,44 @@ class CreatePointCache(plugin.HoudiniCreator):
}
if self.selected_nodes:
- parms["sop_path"] = self.selected_nodes[0].path()
+ selected_node = self.selected_nodes[0]
- # try to find output node
- for child in self.selected_nodes[0].children():
- if child.type().name() == "output":
- parms["sop_path"] = child.path()
- break
+            # Although Houdini allows an ObjNode path as `sop_path` on the
+            # ROP node, we prefer it set to the SopNode path explicitly
+
+ # Allow sop level paths (e.g. /obj/geo1/box1)
+ if isinstance(selected_node, hou.SopNode):
+ parms["sop_path"] = selected_node.path()
+ self.log.debug(
+ "Valid SopNode selection, 'SOP Path' in ROP will be set to '%s'."
+ % selected_node.path()
+ )
+
+ # Allow object level paths to Geometry nodes (e.g. /obj/geo1)
+ # but do not allow other object level nodes types like cameras, etc.
+ elif isinstance(selected_node, hou.ObjNode) and \
+ selected_node.type().name() in ["geo"]:
+
+ # get the output node with the minimum
+ # 'outputidx' or the node with display flag
+ sop_path = self.get_obj_output(selected_node)
+
+ if sop_path:
+ parms["sop_path"] = sop_path.path()
+ self.log.debug(
+ "Valid ObjNode selection, 'SOP Path' in ROP will be set to "
+ "the child path '%s'."
+ % sop_path.path()
+ )
+
+ if not parms.get("sop_path", None):
+ self.log.debug(
+ "Selection isn't valid. 'SOP Path' in ROP will be empty."
+ )
+ else:
+ self.log.debug(
+ "No Selection. 'SOP Path' in ROP will be empty."
+ )
instance_node.setParms(parms)
instance_node.parm("trange").set(1)
@@ -57,3 +85,23 @@ class CreatePointCache(plugin.HoudiniCreator):
hou.ropNodeTypeCategory(),
hou.sopNodeTypeCategory()
]
+
+ def get_obj_output(self, obj_node):
+ """Find output node with the smallest 'outputidx'."""
+
+ outputs = obj_node.subnetOutputs()
+
+ # if obj_node is empty
+ if not outputs:
+ return
+
+    # if obj_node has one output child, whether it's a
+    # sop output node or a node with the render flag
+ elif len(outputs) == 1:
+ return outputs[0]
+
+    # if there is more than one, then it has multiple output nodes
+ # return the one with the minimum 'outputidx'
+ else:
+ return min(outputs,
+ key=lambda node: node.evalParm('outputidx'))
diff --git a/openpype/hosts/max/plugins/publish/extract_pointcache.py b/openpype/hosts/max/plugins/publish/extract_pointcache.py
index 6d1e8d03b4..5a99a8b845 100644
--- a/openpype/hosts/max/plugins/publish/extract_pointcache.py
+++ b/openpype/hosts/max/plugins/publish/extract_pointcache.py
@@ -56,7 +56,7 @@ class ExtractAlembic(publish.Extractor):
container = instance.data["instance_node"]
- self.log.info("Extracting pointcache ...")
+ self.log.debug("Extracting pointcache ...")
parent_dir = self.staging_dir(instance)
file_name = "{name}.abc".format(**instance.data)
diff --git a/openpype/hosts/maya/api/lib.py b/openpype/hosts/maya/api/lib.py
index 8569bbd38f..ef8ddf8bac 100644
--- a/openpype/hosts/maya/api/lib.py
+++ b/openpype/hosts/maya/api/lib.py
@@ -32,13 +32,11 @@ from openpype.pipeline import (
load_container,
registered_host,
)
-from openpype.pipeline.create import (
- legacy_create,
- get_legacy_creator_by_name,
-)
+from openpype.lib import NumberDef
+from openpype.pipeline.context_tools import get_current_project_asset
+from openpype.pipeline.create import CreateContext
from openpype.pipeline.context_tools import (
get_current_asset_name,
- get_current_project_asset,
get_current_project_name,
get_current_task_name
)
@@ -122,16 +120,14 @@ FLOAT_FPS = {23.98, 23.976, 29.97, 47.952, 59.94}
RENDERLIKE_INSTANCE_FAMILIES = ["rendering", "vrayscene"]
-DISPLAY_LIGHTS_VALUES = [
- "project_settings", "default", "all", "selected", "flat", "none"
-]
-DISPLAY_LIGHTS_LABELS = [
- "Use Project Settings",
- "Default Lighting",
- "All Lights",
- "Selected Lights",
- "Flat Lighting",
- "No Lights"
+
+DISPLAY_LIGHTS_ENUM = [
+ {"label": "Use Project Settings", "value": "project_settings"},
+ {"label": "Default Lighting", "value": "default"},
+ {"label": "All Lights", "value": "all"},
+ {"label": "Selected Lights", "value": "selected"},
+ {"label": "Flat Lighting", "value": "flat"},
+ {"label": "No Lights", "value": "none"}
]
@@ -343,8 +339,8 @@ def pairwise(iterable):
return zip(a, a)
-def collect_animation_data(fps=False):
- """Get the basic animation data
+def collect_animation_defs(fps=False):
+ """Get the basic animation attribute defintions for the publisher.
Returns:
OrderedDict
@@ -363,17 +359,42 @@ def collect_animation_data(fps=False):
handle_end = frame_end_handle - frame_end
# build attributes
- data = OrderedDict()
- data["frameStart"] = frame_start
- data["frameEnd"] = frame_end
- data["handleStart"] = handle_start
- data["handleEnd"] = handle_end
- data["step"] = 1.0
+ defs = [
+ NumberDef("frameStart",
+ label="Frame Start",
+ default=frame_start,
+ decimals=0),
+ NumberDef("frameEnd",
+ label="Frame End",
+ default=frame_end,
+ decimals=0),
+ NumberDef("handleStart",
+ label="Handle Start",
+ default=handle_start,
+ decimals=0),
+ NumberDef("handleEnd",
+ label="Handle End",
+ default=handle_end,
+ decimals=0),
+ NumberDef("step",
+ label="Step size",
+ tooltip="A smaller step size means more samples and larger "
+ "output files.\n"
+ "A 1.0 step size is a single sample every frame.\n"
+ "A 0.5 step size is two samples per frame.\n"
+ "A 0.2 step size is five samples per frame.",
+ default=1.0,
+ decimals=3),
+ ]
if fps:
- data["fps"] = mel.eval('currentTimeUnitToFPS()')
+ current_fps = mel.eval('currentTimeUnitToFPS()')
+ fps_def = NumberDef(
+ "fps", label="FPS", default=current_fps, decimals=5
+ )
+ defs.append(fps_def)
- return data
+ return defs
def imprint(node, data):
@@ -459,10 +480,10 @@ def lsattrs(attrs):
attrs (dict): Name and value pairs of expected matches
Example:
- >> # Return nodes with an `age` of five.
- >> lsattr({"age": "five"})
- >> # Return nodes with both `age` and `color` of five and blue.
- >> lsattr({"age": "five", "color": "blue"})
+ >>> # Return nodes with an `age` of five.
+ >>> lsattrs({"age": "five"})
+ >>> # Return nodes with both `age` and `color` of five and blue.
+ >>> lsattrs({"age": "five", "color": "blue"})
Return:
list: matching nodes.
@@ -1522,7 +1543,15 @@ def set_attribute(attribute, value, node):
cmds.addAttr(node, longName=attribute, **kwargs)
node_attr = "{}.{}".format(node, attribute)
- if "dataType" in kwargs:
+ enum_type = cmds.attributeQuery(attribute, node=node, enum=True)
+ if enum_type and value_type == "str":
+ enum_string_values = cmds.attributeQuery(
+ attribute, node=node, listEnum=True
+ )[0].split(":")
+ cmds.setAttr(
+ "{}.{}".format(node, attribute), enum_string_values.index(value)
+ )
+ elif "dataType" in kwargs:
attr_type = kwargs["dataType"]
cmds.setAttr(node_attr, value, type=attr_type)
else:
@@ -4078,12 +4107,10 @@ def create_rig_animation_instance(
)
assert roots, "No root nodes in rig, this is a bug."
- asset = legacy_io.Session["AVALON_ASSET"]
- dependency = str(context["representation"]["_id"])
-
custom_subset = options.get("animationSubsetName")
if custom_subset:
formatting_data = {
+ # TODO remove 'asset_type' and replace 'asset_name' with 'asset'
"asset_name": context['asset']['name'],
"asset_type": context['asset']['type'],
"subset": context['subset']['name'],
@@ -4101,14 +4128,17 @@ def create_rig_animation_instance(
if log:
log.info("Creating subset: {}".format(namespace))
+ # Fill creator identifier
+ creator_identifier = "io.openpype.creators.maya.animation"
+
+ host = registered_host()
+ create_context = CreateContext(host)
+
# Create the animation instance
- creator_plugin = get_legacy_creator_by_name("CreateAnimation")
with maintained_selection():
cmds.select([output, controls] + roots, noExpand=True)
- legacy_create(
- creator_plugin,
- name=namespace,
- asset=asset,
- options={"useSelection": True},
- data={"dependencies": dependency}
+ create_context.create(
+ creator_identifier=creator_identifier,
+ variant=namespace,
+ pre_create_data={"use_selection": True}
)
diff --git a/openpype/hosts/maya/api/lib_renderproducts.py b/openpype/hosts/maya/api/lib_renderproducts.py
index 7bfb53d500..b5b71a5a36 100644
--- a/openpype/hosts/maya/api/lib_renderproducts.py
+++ b/openpype/hosts/maya/api/lib_renderproducts.py
@@ -177,7 +177,7 @@ def get(layer, render_instance=None):
}.get(renderer_name.lower(), None)
if renderer is None:
raise UnsupportedRendererException(
- "unsupported {}".format(renderer_name)
+ "Unsupported renderer: {}".format(renderer_name)
)
return renderer(layer, render_instance)
diff --git a/openpype/hosts/maya/api/menu.py b/openpype/hosts/maya/api/menu.py
index 5284c0249d..645d6f5a1c 100644
--- a/openpype/hosts/maya/api/menu.py
+++ b/openpype/hosts/maya/api/menu.py
@@ -66,10 +66,12 @@ def install():
cmds.menuItem(divider=True)
- # Create default items
cmds.menuItem(
"Create...",
- command=lambda *args: host_tools.show_creator(parent=parent_widget)
+ command=lambda *args: host_tools.show_publisher(
+ parent=parent_widget,
+ tab="create"
+ )
)
cmds.menuItem(
@@ -82,8 +84,9 @@ def install():
cmds.menuItem(
"Publish...",
- command=lambda *args: host_tools.show_publish(
- parent=parent_widget
+ command=lambda *args: host_tools.show_publisher(
+ parent=parent_widget,
+ tab="publish"
),
image=pyblish_icon
)
diff --git a/openpype/hosts/maya/api/pipeline.py b/openpype/hosts/maya/api/pipeline.py
index e2d00b5bd7..2f2ab83f79 100644
--- a/openpype/hosts/maya/api/pipeline.py
+++ b/openpype/hosts/maya/api/pipeline.py
@@ -1,3 +1,5 @@
+import json
+import base64
import os
import errno
import logging
@@ -14,6 +16,7 @@ from openpype.host import (
HostBase,
IWorkfileHost,
ILoadHost,
+ IPublishHost,
HostDirmap,
)
from openpype.tools.utils import host_tools
@@ -64,7 +67,7 @@ INVENTORY_PATH = os.path.join(PLUGINS_DIR, "inventory")
AVALON_CONTAINERS = ":AVALON_CONTAINERS"
-class MayaHost(HostBase, IWorkfileHost, ILoadHost):
+class MayaHost(HostBase, IWorkfileHost, ILoadHost, IPublishHost):
name = "maya"
def __init__(self):
@@ -150,6 +153,20 @@ class MayaHost(HostBase, IWorkfileHost, ILoadHost):
with lib.maintained_selection():
yield
+ def get_context_data(self):
+ data = cmds.fileInfo("OpenPypeContext", query=True)
+ if not data:
+ return {}
+
+ data = data[0] # Maya seems to return a list
+ decoded = base64.b64decode(data).decode("utf-8")
+ return json.loads(decoded)
+
+ def update_context_data(self, data, changes):
+ json_str = json.dumps(data)
+ encoded = base64.b64encode(json_str.encode("utf-8"))
+ return cmds.fileInfo("OpenPypeContext", encoded)
+
def _register_callbacks(self):
for handler, event in self._op_events.copy().items():
if event is None:
diff --git a/openpype/hosts/maya/api/plugin.py b/openpype/hosts/maya/api/plugin.py
index 0971251469..9a67147eb4 100644
--- a/openpype/hosts/maya/api/plugin.py
+++ b/openpype/hosts/maya/api/plugin.py
@@ -1,29 +1,39 @@
+import json
import os
-
-from maya import cmds
+from abc import ABCMeta
import qargparse
+import six
+from maya import cmds
+from maya.app.renderSetup.model import renderSetup
-from openpype.lib import Logger
+from openpype.lib import BoolDef, Logger
+from openpype.pipeline import AVALON_CONTAINER_ID, Anatomy, CreatedInstance
+from openpype.pipeline import Creator as NewCreator
from openpype.pipeline import (
- LegacyCreator,
- LoaderPlugin,
- get_representation_path,
- AVALON_CONTAINER_ID,
- Anatomy,
-)
+ CreatorError, LegacyCreator, LoaderPlugin, get_representation_path,
+ legacy_io)
from openpype.pipeline.load import LoadError
from openpype.settings import get_project_settings
-from .pipeline import containerise
-from . import lib
+from . import lib
+from .lib import imprint, read
+from .pipeline import containerise
log = Logger.get_logger()
+def _get_attr(node, attr, default=None):
+ """Helper to get attribute which allows attribute to not exist."""
+ if not cmds.attributeQuery(attr, node=node, exists=True):
+ return default
+ return cmds.getAttr("{}.{}".format(node, attr))
+
+
# Backwards compatibility: these functions has been moved to lib.
def get_reference_node(*args, **kwargs):
- """
+ """Get the reference node from the container members
+
Deprecated:
This function was moved and will be removed in 3.16.x.
"""
@@ -60,6 +70,379 @@ class Creator(LegacyCreator):
return instance
+@six.add_metaclass(ABCMeta)
+class MayaCreatorBase(object):
+
+ @staticmethod
+ def cache_subsets(shared_data):
+ """Cache instances for Creators to shared data.
+
+ Create `maya_cached_subsets` key when needed in shared data and
+ fill it with all collected instances from the scene under its
+ respective creator identifiers.
+
+ If legacy instances are detected in the scene, create
+ `maya_cached_legacy_subsets` there and fill it with
+ all legacy subsets under family as a key.
+
+ Args:
+            shared_data (Dict[str, Any]): Shared data.
+
+ Return:
+ Dict[str, Any]: Shared data dictionary.
+
+ """
+ if shared_data.get("maya_cached_subsets") is None:
+ cache = dict()
+ cache_legacy = dict()
+
+ for node in cmds.ls(type="objectSet"):
+
+ if _get_attr(node, attr="id") != "pyblish.avalon.instance":
+ continue
+
+ creator_id = _get_attr(node, attr="creator_identifier")
+ if creator_id is not None:
+ # creator instance
+ cache.setdefault(creator_id, []).append(node)
+ else:
+ # legacy instance
+ family = _get_attr(node, attr="family")
+ if family is None:
+ # must be a broken instance
+ continue
+
+ cache_legacy.setdefault(family, []).append(node)
+
+ shared_data["maya_cached_subsets"] = cache
+ shared_data["maya_cached_legacy_subsets"] = cache_legacy
+ return shared_data
+
+ def imprint_instance_node(self, node, data):
+
+ # We never store the instance_node as value on the node since
+ # it's the node name itself
+ data.pop("instance_node", None)
+
+ # We store creator attributes at the root level and assume they
+ # will not clash in names with `subset`, `task`, etc. and other
+ # default names. This is just so these attributes in many cases
+ # are still editable in the maya UI by artists.
+ # pop to move to end of dict to sort attributes last on the node
+ creator_attributes = data.pop("creator_attributes", {})
+ data.update(creator_attributes)
+
+ # We know the "publish_attributes" will be complex data of
+ # settings per plugins, we'll store this as a flattened json structure
+ # pop to move to end of dict to sort attributes last on the node
+ data["publish_attributes"] = json.dumps(
+ data.pop("publish_attributes", {})
+ )
+
+ # Since we flattened the data structure for creator attributes we want
+ # to correctly detect which flattened attributes should end back in the
+ # creator attributes when reading the data from the node, so we store
+ # the relevant keys as a string
+ data["__creator_attributes_keys"] = ",".join(creator_attributes.keys())
+
+ # Kill any existing attributes just so we can imprint cleanly again
+ for attr in data.keys():
+ if cmds.attributeQuery(attr, node=node, exists=True):
+ cmds.deleteAttr("{}.{}".format(node, attr))
+
+ return imprint(node, data)
+
+ def read_instance_node(self, node):
+ node_data = read(node)
+
+ # Never care about a cbId attribute on the object set
+ # being read as 'data'
+ node_data.pop("cbId", None)
+
+ # Move the relevant attributes into "creator_attributes" that
+ # we flattened originally
+ node_data["creator_attributes"] = {}
+ creator_attribute_keys = node_data.pop("__creator_attributes_keys",
+ "").split(",")
+ for key in creator_attribute_keys:
+ if key in node_data:
+ node_data["creator_attributes"][key] = node_data.pop(key)
+
+ publish_attributes = node_data.get("publish_attributes")
+ if publish_attributes:
+ node_data["publish_attributes"] = json.loads(publish_attributes)
+
+ # Explicitly re-parse the node name
+ node_data["instance_node"] = node
+
+ return node_data
+
+
+@six.add_metaclass(ABCMeta)
+class MayaCreator(NewCreator, MayaCreatorBase):
+
+ def create(self, subset_name, instance_data, pre_create_data):
+
+ members = list()
+ if pre_create_data.get("use_selection"):
+ members = cmds.ls(selection=True)
+
+ with lib.undo_chunk():
+ instance_node = cmds.sets(members, name=subset_name)
+ instance_data["instance_node"] = instance_node
+ instance = CreatedInstance(
+ self.family,
+ subset_name,
+ instance_data,
+ self)
+ self._add_instance_to_context(instance)
+
+ self.imprint_instance_node(instance_node,
+ data=instance.data_to_store())
+ return instance
+
+ def collect_instances(self):
+ self.cache_subsets(self.collection_shared_data)
+ cached_subsets = self.collection_shared_data["maya_cached_subsets"]
+ for node in cached_subsets.get(self.identifier, []):
+ node_data = self.read_instance_node(node)
+
+ created_instance = CreatedInstance.from_existing(node_data, self)
+ self._add_instance_to_context(created_instance)
+
+ def update_instances(self, update_list):
+ for created_inst, _changes in update_list:
+ data = created_inst.data_to_store()
+ node = data.get("instance_node")
+
+ self.imprint_instance_node(node, data)
+
+ def remove_instances(self, instances):
+ """Remove specified instance from the scene.
+
+ This is only removing `id` parameter so instance is no longer
+ instance, because it might contain valuable data for artist.
+
+ """
+ for instance in instances:
+ node = instance.data.get("instance_node")
+ if node:
+ cmds.delete(node)
+
+ self._remove_instance_from_context(instance)
+
+ def get_pre_create_attr_defs(self):
+ return [
+ BoolDef("use_selection",
+ label="Use selection",
+ default=True)
+ ]
+
+
+def ensure_namespace(namespace):
+ """Make sure the namespace exists.
+
+ Args:
+ namespace (str): The preferred namespace name.
+
+ Returns:
+ str: The generated or existing namespace
+
+ """
+ exists = cmds.namespace(exists=namespace)
+ if exists:
+ return namespace
+ else:
+ return cmds.namespace(add=namespace)
+
+
+class RenderlayerCreator(NewCreator, MayaCreatorBase):
+ """Creator which creates an instance per renderlayer in the workfile.
+
+    Creates and manages a renderlayer subset per renderLayer in the workfile.
+ This generates a singleton node in the scene which, if it exists, tells the
+ Creator to collect Maya rendersetup renderlayers as individual instances.
+ As such, triggering create doesn't actually create the instance node per
+ layer but only the node which tells the Creator it may now collect
+ an instance per renderlayer.
+
+ """
+
+ # These are required to be overridden in subclass
+ singleton_node_name = ""
+
+ # These are optional to be overridden in subclass
+ layer_instance_prefix = None
+
+ def _get_singleton_node(self, return_all=False):
+ nodes = lib.lsattr("pre_creator_identifier", self.identifier)
+ if nodes:
+ return nodes if return_all else nodes[0]
+
+ def create(self, subset_name, instance_data, pre_create_data):
+ # A Renderlayer is never explicitly created using the create method.
+ # Instead, renderlayers from the scene are collected. Thus "create"
+ # would only ever be called to say, 'hey, please refresh collect'
+ self.create_singleton_node()
+
+ # if no render layers are present, create default one with
+ # asterisk selector
+ rs = renderSetup.instance()
+ if not rs.getRenderLayers():
+ render_layer = rs.createRenderLayer("Main")
+ collection = render_layer.createCollection("defaultCollection")
+ collection.getSelector().setPattern('*')
+
+ # By RenderLayerCreator.create we make it so that the renderlayer
+ # instances directly appear even though it just collects scene
+ # renderlayers. This doesn't actually 'create' any scene contents.
+ self.collect_instances()
+
+ def create_singleton_node(self):
+ if self._get_singleton_node():
+ raise CreatorError("A Render instance already exists - only "
+ "one can be configured.")
+
+ with lib.undo_chunk():
+ node = cmds.sets(empty=True, name=self.singleton_node_name)
+ lib.imprint(node, data={
+ "pre_creator_identifier": self.identifier
+ })
+
+ return node
+
+ def collect_instances(self):
+
+ # We only collect if the global render instance exists
+ if not self._get_singleton_node():
+ return
+
+ rs = renderSetup.instance()
+ layers = rs.getRenderLayers()
+ for layer in layers:
+ layer_instance_node = self.find_layer_instance_node(layer)
+ if layer_instance_node:
+ data = self.read_instance_node(layer_instance_node)
+ instance = CreatedInstance.from_existing(data, creator=self)
+ else:
+ # No existing scene instance node for this layer. Note that
+ # this instance will not have the `instance_node` data yet
+ # until it's been saved/persisted at least once.
+ # TODO: Correctly define the subset name using templates
+ prefix = self.layer_instance_prefix or self.family
+ subset_name = "{}{}".format(prefix, layer.name())
+ instance_data = {
+ "asset": legacy_io.Session["AVALON_ASSET"],
+ "task": legacy_io.Session["AVALON_TASK"],
+ "variant": layer.name(),
+ }
+ instance = CreatedInstance(
+ family=self.family,
+ subset_name=subset_name,
+ data=instance_data,
+ creator=self
+ )
+
+ instance.transient_data["layer"] = layer
+ self._add_instance_to_context(instance)
+
+ def find_layer_instance_node(self, layer):
+ connected_sets = cmds.listConnections(
+ "{}.message".format(layer.name()),
+ source=False,
+ destination=True,
+ type="objectSet"
+ ) or []
+
+ for node in connected_sets:
+ if not cmds.attributeQuery("creator_identifier",
+ node=node,
+ exists=True):
+ continue
+
+ creator_identifier = cmds.getAttr(node + ".creator_identifier")
+ if creator_identifier == self.identifier:
+ self.log.info(f"Found node: {node}")
+ return node
+
+ def _create_layer_instance_node(self, layer):
+
+ # We only collect if a CreateRender instance exists
+ create_render_set = self._get_singleton_node()
+ if not create_render_set:
+ raise CreatorError("Creating a renderlayer instance node is not "
+ "allowed if no 'CreateRender' instance exists")
+
+ namespace = "_{}".format(self.singleton_node_name)
+ namespace = ensure_namespace(namespace)
+
+ name = "{}:{}".format(namespace, layer.name())
+ render_set = cmds.sets(name=name, empty=True)
+
+ # Keep an active link with the renderlayer so we can retrieve it
+ # later by a physical maya connection instead of relying on the layer
+ # name
+ cmds.addAttr(render_set, longName="renderlayer", at="message")
+ cmds.connectAttr("{}.message".format(layer.name()),
+ "{}.renderlayer".format(render_set), force=True)
+
+ # Add the set to the 'CreateRender' set.
+ cmds.sets(render_set, forceElement=create_render_set)
+
+ return render_set
+
+ def update_instances(self, update_list):
+ # We only generate the persisting layer data into the scene once
+ # we save with the UI on e.g. validate or publish
+ for instance, _changes in update_list:
+ instance_node = instance.data.get("instance_node")
+
+ # Ensure a node exists to persist the data to
+ if not instance_node:
+ layer = instance.transient_data["layer"]
+ instance_node = self._create_layer_instance_node(layer)
+ instance.data["instance_node"] = instance_node
+
+ self.imprint_instance_node(instance_node,
+ data=instance.data_to_store())
+
+ def imprint_instance_node(self, node, data):
+ # Do not ever try to update the `renderlayer` since it'll try
+ # to remove the attribute and recreate it but fail to keep it a
+ # message attribute link. We only ever imprint that on the initial
+ # node creation.
+ # TODO: Improve how this is handled
+ data.pop("renderlayer", None)
+ data.get("creator_attributes", {}).pop("renderlayer", None)
+
+ return super(RenderlayerCreator, self).imprint_instance_node(node,
+ data=data)
+
+ def remove_instances(self, instances):
+ """Remove specified instances from the scene.
+
+        This removes the CreateRender singleton node and the stored
+        per-renderlayer settings nodes, so no instances are collected anymore.
+
+ """
+        # Instead of removing a single renderlayer instance, we remove the
+        # CreateRender node this creator relies on to decide whether
+        # it should collect anything at all.
+ nodes = self._get_singleton_node(return_all=True)
+ if nodes:
+ cmds.delete(nodes)
+
+ # Remove ALL the instances even if only one gets deleted
+ for instance in list(self.create_context.instances):
+ if instance.get("creator_identifier") == self.identifier:
+ self._remove_instance_from_context(instance)
+
+ # Remove the stored settings per renderlayer too
+ node = instance.data.get("instance_node")
+ if node and cmds.objExists(node):
+ cmds.delete(node)
+
+
class Loader(LoaderPlugin):
hosts = ["maya"]
@@ -186,6 +569,7 @@ class ReferenceLoader(Loader):
def update(self, container, representation):
from maya import cmds
+
from openpype.hosts.maya.api.lib import get_container_members
node = container["objectName"]
diff --git a/openpype/hosts/maya/plugins/create/convert_legacy.py b/openpype/hosts/maya/plugins/create/convert_legacy.py
new file mode 100644
index 0000000000..6133abc205
--- /dev/null
+++ b/openpype/hosts/maya/plugins/create/convert_legacy.py
@@ -0,0 +1,165 @@
+from openpype.pipeline.create.creator_plugins import SubsetConvertorPlugin
+from openpype.hosts.maya.api import plugin
+from openpype.hosts.maya.api.lib import read
+
+from maya import cmds
+from maya.app.renderSetup.model import renderSetup
+
+
+class MayaLegacyConvertor(SubsetConvertorPlugin,
+ plugin.MayaCreatorBase):
+ """Find and convert any legacy subsets in the scene.
+
+ This Convertor will find all legacy subsets in the scene and will
+    transform them to the current system. Since the old subsets don't
+ retain any information about their original creators, the only mapping
+ we can do is based on their families.
+
+    Its limitation is that you can have multiple creators creating subsets
+ of the same family and there is no way to handle it. This code should
+ nevertheless cover all creators that came with OpenPype.
+
+ """
+ identifier = "io.openpype.creators.maya.legacy"
+
+ # Cases where the identifier or new family doesn't correspond to the
+ # original family on the legacy instances
+ special_family_conversions = {
+ "rendering": "io.openpype.creators.maya.renderlayer",
+ }
+
+ def find_instances(self):
+
+ self.cache_subsets(self.collection_shared_data)
+ legacy = self.collection_shared_data.get("maya_cached_legacy_subsets")
+ if not legacy:
+ return
+
+ self.add_convertor_item("Convert legacy instances")
+
+ def convert(self):
+ self.remove_convertor_item()
+
+ # We can't use the collected shared data cache here
+ # we re-query it here directly to convert all found.
+ cache = {}
+ self.cache_subsets(cache)
+ legacy = cache.get("maya_cached_legacy_subsets")
+ if not legacy:
+ return
+
+ # From all current new style manual creators find the mapping
+ # from family to identifier
+ family_to_id = {}
+ for identifier, creator in self.create_context.manual_creators.items():
+ family = getattr(creator, "family", None)
+ if not family:
+ continue
+
+ if family in family_to_id:
+ # We have a clash of family -> identifier. Multiple
+ # new style creators use the same family
+ self.log.warning("Clash on family->identifier: "
+ "{}".format(identifier))
+ family_to_id[family] = identifier
+
+ family_to_id.update(self.special_family_conversions)
+
+ # We also embed the current 'task' into the instance since legacy
+ # instances didn't store that data on the instances. The old style
+ # logic was thus to be live to the current task to begin with.
+ data = dict()
+ data["task"] = self.create_context.get_current_task_name()
+
+ for family, instance_nodes in legacy.items():
+ if family not in family_to_id:
+ self.log.warning(
+ "Unable to convert legacy instance with family '{}'"
+ " because there is no matching new creator's family"
+ "".format(family)
+ )
+ continue
+
+ creator_id = family_to_id[family]
+ creator = self.create_context.manual_creators[creator_id]
+ data["creator_identifier"] = creator_id
+
+ if isinstance(creator, plugin.RenderlayerCreator):
+ self._convert_per_renderlayer(instance_nodes, data, creator)
+ else:
+ self._convert_regular(instance_nodes, data)
+
+ def _convert_regular(self, instance_nodes, data):
+ # We only imprint the creator identifier for it to identify
+ # as the new style creator
+ for instance_node in instance_nodes:
+ self.imprint_instance_node(instance_node,
+ data=data.copy())
+
+ def _convert_per_renderlayer(self, instance_nodes, data, creator):
+ # Split the instance into an instance per layer
+ rs = renderSetup.instance()
+ layers = rs.getRenderLayers()
+ if not layers:
+ self.log.error(
+ "Can't convert legacy renderlayer instance because no existing"
+ " renderSetup layers exist in the scene."
+ )
+ return
+
+ creator_attribute_names = {
+ attr_def.key for attr_def in creator.get_instance_attr_defs()
+ }
+
+ for instance_node in instance_nodes:
+
+ # Ensure we have the new style singleton node generated
+ # TODO: Make function public
+ singleton_node = creator._get_singleton_node()
+ if singleton_node:
+ self.log.error(
+ "Can't convert legacy renderlayer instance '{}' because"
+ " new style instance '{}' already exists".format(
+ instance_node,
+ singleton_node
+ )
+ )
+ continue
+
+ creator.create_singleton_node()
+
+ # We are creating new nodes to replace the original instance
+ # Copy the attributes of the original instance to the new node
+ original_data = read(instance_node)
+
+ # The family gets converted to the new family (this is due to
+ # "rendering" family being converted to "renderlayer" family)
+ original_data["family"] = creator.family
+
+ # Convert to creator attributes when relevant
+ creator_attributes = {}
+ for key in list(original_data.keys()):
+ # Iterate in order of the original attributes to preserve order
+ # in the output creator attributes
+ if key in creator_attribute_names:
+ creator_attributes[key] = original_data.pop(key)
+ original_data["creator_attributes"] = creator_attributes
+
+ # For layer in maya layers
+ for layer in layers:
+ layer_instance_node = creator.find_layer_instance_node(layer)
+ if not layer_instance_node:
+ # TODO: Make function public
+ layer_instance_node = creator._create_layer_instance_node(
+ layer
+ )
+
+ # Transfer the main attributes of the original instance
+ layer_data = original_data.copy()
+ layer_data.update(data)
+
+ self.imprint_instance_node(layer_instance_node,
+ data=layer_data)
+
+ # Delete the legacy instance node
+ cmds.delete(instance_node)
diff --git a/openpype/hosts/maya/plugins/create/create_animation.py b/openpype/hosts/maya/plugins/create/create_animation.py
index 095cbcdd64..cade8603ce 100644
--- a/openpype/hosts/maya/plugins/create/create_animation.py
+++ b/openpype/hosts/maya/plugins/create/create_animation.py
@@ -2,9 +2,13 @@ from openpype.hosts.maya.api import (
lib,
plugin
)
+from openpype.lib import (
+ BoolDef,
+ TextDef
+)
-class CreateAnimation(plugin.Creator):
+class CreateAnimation(plugin.MayaCreator):
"""Animation output for character rigs"""
# We hide the animation creator from the UI since the creation of it
@@ -13,48 +17,71 @@ class CreateAnimation(plugin.Creator):
# Note: This setting is actually applied from project settings
enabled = False
+ identifier = "io.openpype.creators.maya.animation"
name = "animationDefault"
label = "Animation"
family = "animation"
icon = "male"
+
write_color_sets = False
write_face_sets = False
include_parent_hierarchy = False
include_user_defined_attributes = False
- def __init__(self, *args, **kwargs):
- super(CreateAnimation, self).__init__(*args, **kwargs)
+ # TODO: Would be great if we could visually hide this from the creator
+ # by default but do allow to generate it through code.
- # create an ordered dict with the existing data first
+ def get_instance_attr_defs(self):
- # get basic animation data : start / end / handles / steps
- for key, value in lib.collect_animation_data().items():
- self.data[key] = value
+ defs = lib.collect_animation_defs()
- # Write vertex colors with the geometry.
- self.data["writeColorSets"] = self.write_color_sets
- self.data["writeFaceSets"] = self.write_face_sets
-
- # Include only renderable visible shapes.
- # Skips locators and empty transforms
- self.data["renderableOnly"] = False
-
- # Include only nodes that are visible at least once during the
- # frame range.
- self.data["visibleOnly"] = False
-
- # Include the groups above the out_SET content
- self.data["includeParentHierarchy"] = self.include_parent_hierarchy
-
- # Default to exporting world-space
- self.data["worldSpace"] = True
+ defs.extend([
+ BoolDef("writeColorSets",
+ label="Write vertex colors",
+ tooltip="Write vertex colors with the geometry",
+ default=self.write_color_sets),
+ BoolDef("writeFaceSets",
+ label="Write face sets",
+ tooltip="Write face sets with the geometry",
+ default=self.write_face_sets),
+ BoolDef("writeNormals",
+ label="Write normals",
+ tooltip="Write normals with the deforming geometry",
+ default=True),
+ BoolDef("renderableOnly",
+ label="Renderable Only",
+ tooltip="Only export renderable visible shapes",
+ default=False),
+ BoolDef("visibleOnly",
+ label="Visible Only",
+ tooltip="Only export dag objects visible during "
+ "frame range",
+ default=False),
+ BoolDef("includeParentHierarchy",
+ label="Include Parent Hierarchy",
+ tooltip="Whether to include parent hierarchy of nodes in "
+ "the publish instance",
+ default=self.include_parent_hierarchy),
+ BoolDef("worldSpace",
+ label="World-Space Export",
+ default=True),
+ BoolDef("includeUserDefinedAttributes",
+ label="Include User Defined Attributes",
+ default=self.include_user_defined_attributes),
+ TextDef("attr",
+ label="Custom Attributes",
+ default="",
+ placeholder="attr1, attr2"),
+ TextDef("attrPrefix",
+ label="Custom Attributes Prefix",
+ placeholder="prefix1, prefix2")
+ ])
+ # TODO: Implement these on a Deadline plug-in instead?
+ """
# Default to not send to farm.
self.data["farm"] = False
self.data["priority"] = 50
+ """
- # Default to write normals.
- self.data["writeNormals"] = True
-
- value = self.include_user_defined_attributes
- self.data["includeUserDefinedAttributes"] = value
+ return defs
diff --git a/openpype/hosts/maya/plugins/create/create_arnold_scene_source.py b/openpype/hosts/maya/plugins/create/create_arnold_scene_source.py
index 2afb897e94..0c8cf8d2bb 100644
--- a/openpype/hosts/maya/plugins/create/create_arnold_scene_source.py
+++ b/openpype/hosts/maya/plugins/create/create_arnold_scene_source.py
@@ -2,17 +2,20 @@ from openpype.hosts.maya.api import (
lib,
plugin
)
-
-from maya import cmds
+from openpype.lib import (
+ NumberDef,
+ BoolDef
+)
-class CreateArnoldSceneSource(plugin.Creator):
+class CreateArnoldSceneSource(plugin.MayaCreator):
"""Arnold Scene Source"""
- name = "ass"
+ identifier = "io.openpype.creators.maya.ass"
label = "Arnold Scene Source"
family = "ass"
icon = "cube"
+
expandProcedurals = False
motionBlur = True
motionBlurKeys = 2
@@ -28,39 +31,71 @@ class CreateArnoldSceneSource(plugin.Creator):
maskColor_manager = False
maskOperator = False
- def __init__(self, *args, **kwargs):
- super(CreateArnoldSceneSource, self).__init__(*args, **kwargs)
+ def get_instance_attr_defs(self):
- # Add animation data
- self.data.update(lib.collect_animation_data())
+ defs = lib.collect_animation_defs()
- self.data["expandProcedurals"] = self.expandProcedurals
- self.data["motionBlur"] = self.motionBlur
- self.data["motionBlurKeys"] = self.motionBlurKeys
- self.data["motionBlurLength"] = self.motionBlurLength
+ defs.extend([
+ BoolDef("expandProcedural",
+ label="Expand Procedural",
+ default=self.expandProcedurals),
+ BoolDef("motionBlur",
+ label="Motion Blur",
+ default=self.motionBlur),
+ NumberDef("motionBlurKeys",
+ label="Motion Blur Keys",
+ decimals=0,
+ default=self.motionBlurKeys),
+ NumberDef("motionBlurLength",
+ label="Motion Blur Length",
+ decimals=3,
+ default=self.motionBlurLength),
- # Masks
- self.data["maskOptions"] = self.maskOptions
- self.data["maskCamera"] = self.maskCamera
- self.data["maskLight"] = self.maskLight
- self.data["maskShape"] = self.maskShape
- self.data["maskShader"] = self.maskShader
- self.data["maskOverride"] = self.maskOverride
- self.data["maskDriver"] = self.maskDriver
- self.data["maskFilter"] = self.maskFilter
- self.data["maskColor_manager"] = self.maskColor_manager
- self.data["maskOperator"] = self.maskOperator
+ # Masks
+ BoolDef("maskOptions",
+ label="Export Options",
+ default=self.maskOptions),
+ BoolDef("maskCamera",
+ label="Export Cameras",
+ default=self.maskCamera),
+ BoolDef("maskLight",
+ label="Export Lights",
+ default=self.maskLight),
+ BoolDef("maskShape",
+ label="Export Shapes",
+ default=self.maskShape),
+ BoolDef("maskShader",
+ label="Export Shaders",
+ default=self.maskShader),
+ BoolDef("maskOverride",
+ label="Export Override Nodes",
+ default=self.maskOverride),
+ BoolDef("maskDriver",
+ label="Export Drivers",
+ default=self.maskDriver),
+ BoolDef("maskFilter",
+ label="Export Filters",
+ default=self.maskFilter),
+ BoolDef("maskOperator",
+ label="Export Operators",
+ default=self.maskOperator),
+ BoolDef("maskColor_manager",
+ label="Export Color Managers",
+ default=self.maskColor_manager),
+ ])
- def process(self):
- instance = super(CreateArnoldSceneSource, self).process()
+ return defs
- nodes = []
+ def create(self, subset_name, instance_data, pre_create_data):
- if (self.options or {}).get("useSelection"):
- nodes = cmds.ls(selection=True)
+ from maya import cmds
- cmds.sets(nodes, rm=instance)
+ instance = super(CreateArnoldSceneSource, self).create(
+ subset_name, instance_data, pre_create_data
+ )
- assContent = cmds.sets(name=instance + "_content_SET")
- assProxy = cmds.sets(name=instance + "_proxy_SET", empty=True)
- cmds.sets([assContent, assProxy], forceElement=instance)
+ instance_node = instance.get("instance_node")
+
+ content = cmds.sets(name=instance_node + "_content_SET", empty=True)
+ proxy = cmds.sets(name=instance_node + "_proxy_SET", empty=True)
+ cmds.sets([content, proxy], forceElement=instance_node)
diff --git a/openpype/hosts/maya/plugins/create/create_assembly.py b/openpype/hosts/maya/plugins/create/create_assembly.py
index ff5e1d45c4..813fe4da04 100644
--- a/openpype/hosts/maya/plugins/create/create_assembly.py
+++ b/openpype/hosts/maya/plugins/create/create_assembly.py
@@ -1,10 +1,10 @@
from openpype.hosts.maya.api import plugin
-class CreateAssembly(plugin.Creator):
+class CreateAssembly(plugin.MayaCreator):
"""A grouped package of loaded content"""
- name = "assembly"
+ identifier = "io.openpype.creators.maya.assembly"
label = "Assembly"
family = "assembly"
icon = "cubes"
diff --git a/openpype/hosts/maya/plugins/create/create_camera.py b/openpype/hosts/maya/plugins/create/create_camera.py
index 8b2c881036..0219f56330 100644
--- a/openpype/hosts/maya/plugins/create/create_camera.py
+++ b/openpype/hosts/maya/plugins/create/create_camera.py
@@ -2,33 +2,35 @@ from openpype.hosts.maya.api import (
lib,
plugin
)
+from openpype.lib import BoolDef
-class CreateCamera(plugin.Creator):
+class CreateCamera(plugin.MayaCreator):
"""Single baked camera"""
- name = "cameraMain"
+ identifier = "io.openpype.creators.maya.camera"
label = "Camera"
family = "camera"
icon = "video-camera"
- def __init__(self, *args, **kwargs):
- super(CreateCamera, self).__init__(*args, **kwargs)
+ def get_instance_attr_defs(self):
- # get basic animation data : start / end / handles / steps
- animation_data = lib.collect_animation_data()
- for key, value in animation_data.items():
- self.data[key] = value
+ defs = lib.collect_animation_defs()
- # Bake to world space by default, when this is False it will also
- # include the parent hierarchy in the baked results
- self.data['bakeToWorldSpace'] = True
+ defs.extend([
+ BoolDef("bakeToWorldSpace",
+ label="Bake to World-Space",
+ tooltip="Bake to World-Space",
+ default=True),
+ ])
+
+ return defs
-class CreateCameraRig(plugin.Creator):
+class CreateCameraRig(plugin.MayaCreator):
"""Complex hierarchy with camera."""
- name = "camerarigMain"
+ identifier = "io.openpype.creators.maya.camerarig"
label = "Camera Rig"
family = "camerarig"
icon = "video-camera"
diff --git a/openpype/hosts/maya/plugins/create/create_layout.py b/openpype/hosts/maya/plugins/create/create_layout.py
index 1768a3d49e..168743d4dc 100644
--- a/openpype/hosts/maya/plugins/create/create_layout.py
+++ b/openpype/hosts/maya/plugins/create/create_layout.py
@@ -1,16 +1,21 @@
from openpype.hosts.maya.api import plugin
+from openpype.lib import BoolDef
-class CreateLayout(plugin.Creator):
+class CreateLayout(plugin.MayaCreator):
"""A grouped package of loaded content"""
- name = "layoutMain"
+ identifier = "io.openpype.creators.maya.layout"
label = "Layout"
family = "layout"
icon = "cubes"
- def __init__(self, *args, **kwargs):
- super(CreateLayout, self).__init__(*args, **kwargs)
- # enable this when you want to
- # publish group of loaded asset
- self.data["groupLoadedAssets"] = False
+ def get_instance_attr_defs(self):
+
+ return [
+ BoolDef("groupLoadedAssets",
+ label="Group Loaded Assets",
+ tooltip="Enable this when you want to publish group of "
+ "loaded asset",
+ default=False)
+ ]
diff --git a/openpype/hosts/maya/plugins/create/create_look.py b/openpype/hosts/maya/plugins/create/create_look.py
index 51b0b8819a..385ae81e01 100644
--- a/openpype/hosts/maya/plugins/create/create_look.py
+++ b/openpype/hosts/maya/plugins/create/create_look.py
@@ -1,29 +1,53 @@
from openpype.hosts.maya.api import (
- lib,
- plugin
+ plugin,
+ lib
+)
+from openpype.lib import (
+ BoolDef,
+ TextDef
)
-class CreateLook(plugin.Creator):
+class CreateLook(plugin.MayaCreator):
"""Shader connections defining shape look"""
- name = "look"
+ identifier = "io.openpype.creators.maya.look"
label = "Look"
family = "look"
icon = "paint-brush"
+
make_tx = True
rs_tex = False
- def __init__(self, *args, **kwargs):
- super(CreateLook, self).__init__(*args, **kwargs)
+ def get_instance_attr_defs(self):
- self.data["renderlayer"] = lib.get_current_renderlayer()
+ return [
+ # TODO: This value should actually get set on create!
+ TextDef("renderLayer",
+ # TODO: Bug: Hidden attribute's label is still shown in UI?
+ hidden=True,
+ default=lib.get_current_renderlayer(),
+ label="Renderlayer",
+ tooltip="Renderlayer to extract the look from"),
+ BoolDef("maketx",
+ label="MakeTX",
+ tooltip="Whether to generate .tx files for your textures",
+ default=self.make_tx),
+ BoolDef("rstex",
+ label="Convert textures to .rstex",
+ tooltip="Whether to generate Redshift .rstex files for "
+ "your textures",
+ default=self.rs_tex),
+ BoolDef("forceCopy",
+ label="Force Copy",
+ tooltip="Enable users to force a copy instead of hardlink."
+ "\nNote: On Windows copy is always forced due to "
+ "bugs in windows' implementation of hardlinks.",
+ default=False)
+ ]
- # Whether to automatically convert the textures to .tx upon publish.
- self.data["maketx"] = self.make_tx
- # Whether to automatically convert the textures to .rstex upon publish.
- self.data["rstex"] = self.rs_tex
- # Enable users to force a copy.
- # - on Windows is "forceCopy" always changed to `True` because of
- # windows implementation of hardlinks
- self.data["forceCopy"] = False
+ def get_pre_create_attr_defs(self):
+ # Show same attributes on create but include use selection
+ defs = super(CreateLook, self).get_pre_create_attr_defs()
+ defs.extend(self.get_instance_attr_defs())
+ return defs
diff --git a/openpype/hosts/maya/plugins/create/create_mayaascii.py b/openpype/hosts/maya/plugins/create/create_mayascene.py
similarity index 65%
rename from openpype/hosts/maya/plugins/create/create_mayaascii.py
rename to openpype/hosts/maya/plugins/create/create_mayascene.py
index f54f2df812..b61c97aebf 100644
--- a/openpype/hosts/maya/plugins/create/create_mayaascii.py
+++ b/openpype/hosts/maya/plugins/create/create_mayascene.py
@@ -1,9 +1,10 @@
from openpype.hosts.maya.api import plugin
-class CreateMayaScene(plugin.Creator):
+class CreateMayaScene(plugin.MayaCreator):
"""Raw Maya Scene file export"""
+ identifier = "io.openpype.creators.maya.mayascene"
name = "mayaScene"
label = "Maya Scene"
family = "mayaScene"
diff --git a/openpype/hosts/maya/plugins/create/create_model.py b/openpype/hosts/maya/plugins/create/create_model.py
index 520e962f74..30f1a82281 100644
--- a/openpype/hosts/maya/plugins/create/create_model.py
+++ b/openpype/hosts/maya/plugins/create/create_model.py
@@ -1,26 +1,43 @@
from openpype.hosts.maya.api import plugin
+from openpype.lib import (
+ BoolDef,
+ TextDef
+)
-class CreateModel(plugin.Creator):
+class CreateModel(plugin.MayaCreator):
"""Polygonal static geometry"""
- name = "modelMain"
+ identifier = "io.openpype.creators.maya.model"
label = "Model"
family = "model"
icon = "cube"
defaults = ["Main", "Proxy", "_MD", "_HD", "_LD"]
+
write_color_sets = False
write_face_sets = False
- def __init__(self, *args, **kwargs):
- super(CreateModel, self).__init__(*args, **kwargs)
- # Vertex colors with the geometry
- self.data["writeColorSets"] = self.write_color_sets
- self.data["writeFaceSets"] = self.write_face_sets
+ def get_instance_attr_defs(self):
- # Include attributes by attribute name or prefix
- self.data["attr"] = ""
- self.data["attrPrefix"] = ""
-
- # Whether to include parent hierarchy of nodes in the instance
- self.data["includeParentHierarchy"] = False
+ return [
+ BoolDef("writeColorSets",
+ label="Write vertex colors",
+ tooltip="Write vertex colors with the geometry",
+ default=self.write_color_sets),
+ BoolDef("writeFaceSets",
+ label="Write face sets",
+ tooltip="Write face sets with the geometry",
+ default=self.write_face_sets),
+ BoolDef("includeParentHierarchy",
+ label="Include Parent Hierarchy",
+ tooltip="Whether to include parent hierarchy of nodes in "
+ "the publish instance",
+ default=False),
+ TextDef("attr",
+ label="Custom Attributes",
+ default="",
+ placeholder="attr1, attr2"),
+ TextDef("attrPrefix",
+ label="Custom Attributes Prefix",
+ placeholder="prefix1, prefix2")
+ ]
diff --git a/openpype/hosts/maya/plugins/create/create_multiverse_look.py b/openpype/hosts/maya/plugins/create/create_multiverse_look.py
index f47c88a93b..f27eb57fc1 100644
--- a/openpype/hosts/maya/plugins/create/create_multiverse_look.py
+++ b/openpype/hosts/maya/plugins/create/create_multiverse_look.py
@@ -1,15 +1,27 @@
from openpype.hosts.maya.api import plugin
+from openpype.lib import (
+ BoolDef,
+ EnumDef
+)
-class CreateMultiverseLook(plugin.Creator):
+class CreateMultiverseLook(plugin.MayaCreator):
"""Create Multiverse Look"""
- name = "mvLook"
+ identifier = "io.openpype.creators.maya.mvlook"
label = "Multiverse Look"
family = "mvLook"
icon = "cubes"
- def __init__(self, *args, **kwargs):
- super(CreateMultiverseLook, self).__init__(*args, **kwargs)
- self.data["fileFormat"] = ["usda", "usd"]
- self.data["publishMipMap"] = True
+ def get_instance_attr_defs(self):
+
+ return [
+ EnumDef("fileFormat",
+ label="File Format",
+ tooltip="USD export file format",
+ items=["usda", "usd"],
+ default="usda"),
+ BoolDef("publishMipMap",
+ label="Publish MipMap",
+ default=True),
+ ]
diff --git a/openpype/hosts/maya/plugins/create/create_multiverse_usd.py b/openpype/hosts/maya/plugins/create/create_multiverse_usd.py
index 8cd76b5f40..0b0ad3bccb 100644
--- a/openpype/hosts/maya/plugins/create/create_multiverse_usd.py
+++ b/openpype/hosts/maya/plugins/create/create_multiverse_usd.py
@@ -1,53 +1,135 @@
from openpype.hosts.maya.api import plugin, lib
+from openpype.lib import (
+ BoolDef,
+ NumberDef,
+ TextDef,
+ EnumDef
+)
-class CreateMultiverseUsd(plugin.Creator):
+class CreateMultiverseUsd(plugin.MayaCreator):
"""Create Multiverse USD Asset"""
- name = "mvUsdMain"
+ identifier = "io.openpype.creators.maya.mvusdasset"
label = "Multiverse USD Asset"
family = "usd"
icon = "cubes"
- def __init__(self, *args, **kwargs):
- super(CreateMultiverseUsd, self).__init__(*args, **kwargs)
+ def get_instance_attr_defs(self):
- # Add animation data first, since it maintains order.
- self.data.update(lib.collect_animation_data(True))
+ defs = lib.collect_animation_defs(fps=True)
+ defs.extend([
+ EnumDef("fileFormat",
+ label="File format",
+ items=["usd", "usda", "usdz"],
+ default="usd"),
+ BoolDef("stripNamespaces",
+ label="Strip Namespaces",
+ default=True),
+ BoolDef("mergeTransformAndShape",
+ label="Merge Transform and Shape",
+ default=False),
+ BoolDef("writeAncestors",
+ label="Write Ancestors",
+ default=True),
+ BoolDef("flattenParentXforms",
+ label="Flatten Parent Xforms",
+ default=False),
+ BoolDef("writeSparseOverrides",
+ label="Write Sparse Overrides",
+ default=False),
+ BoolDef("useMetaPrimPath",
+ label="Use Meta Prim Path",
+ default=False),
+ TextDef("customRootPath",
+ label="Custom Root Path",
+ default=''),
+ TextDef("customAttributes",
+ label="Custom Attributes",
+ tooltip="Comma-separated list of attribute names",
+ default=''),
+ TextDef("nodeTypesToIgnore",
+ label="Node Types to Ignore",
+ tooltip="Comma-separated list of node types to be ignored",
+ default=''),
+ BoolDef("writeMeshes",
+ label="Write Meshes",
+ default=True),
+ BoolDef("writeCurves",
+ label="Write Curves",
+ default=True),
+ BoolDef("writeParticles",
+ label="Write Particles",
+ default=True),
+ BoolDef("writeCameras",
+ label="Write Cameras",
+ default=False),
+ BoolDef("writeLights",
+ label="Write Lights",
+ default=False),
+ BoolDef("writeJoints",
+ label="Write Joints",
+ default=False),
+ BoolDef("writeCollections",
+ label="Write Collections",
+ default=False),
+ BoolDef("writePositions",
+ label="Write Positions",
+ default=True),
+ BoolDef("writeNormals",
+ label="Write Normals",
+ default=True),
+ BoolDef("writeUVs",
+ label="Write UVs",
+ default=True),
+ BoolDef("writeColorSets",
+ label="Write Color Sets",
+ default=False),
+ BoolDef("writeTangents",
+ label="Write Tangents",
+ default=False),
+ BoolDef("writeRefPositions",
+ label="Write Ref Positions",
+ default=True),
+ BoolDef("writeBlendShapes",
+ label="Write BlendShapes",
+ default=False),
+ BoolDef("writeDisplayColor",
+ label="Write Display Color",
+ default=True),
+ BoolDef("writeSkinWeights",
+ label="Write Skin Weights",
+ default=False),
+ BoolDef("writeMaterialAssignment",
+ label="Write Material Assignment",
+ default=False),
+ BoolDef("writeHardwareShader",
+ label="Write Hardware Shader",
+ default=False),
+ BoolDef("writeShadingNetworks",
+ label="Write Shading Networks",
+ default=False),
+ BoolDef("writeTransformMatrix",
+ label="Write Transform Matrix",
+ default=True),
+ BoolDef("writeUsdAttributes",
+ label="Write USD Attributes",
+ default=True),
+ BoolDef("writeInstancesAsReferences",
+ label="Write Instances as References",
+ default=False),
+ BoolDef("timeVaryingTopology",
+ label="Time Varying Topology",
+ default=False),
+ TextDef("customMaterialNamespace",
+ label="Custom Material Namespace",
+ default=''),
+ NumberDef("numTimeSamples",
+ label="Num Time Samples",
+ default=1),
+ NumberDef("timeSamplesSpan",
+ label="Time Samples Span",
+ default=0.0),
+ ])
- self.data["fileFormat"] = ["usd", "usda", "usdz"]
- self.data["stripNamespaces"] = True
- self.data["mergeTransformAndShape"] = False
- self.data["writeAncestors"] = True
- self.data["flattenParentXforms"] = False
- self.data["writeSparseOverrides"] = False
- self.data["useMetaPrimPath"] = False
- self.data["customRootPath"] = ''
- self.data["customAttributes"] = ''
- self.data["nodeTypesToIgnore"] = ''
- self.data["writeMeshes"] = True
- self.data["writeCurves"] = True
- self.data["writeParticles"] = True
- self.data["writeCameras"] = False
- self.data["writeLights"] = False
- self.data["writeJoints"] = False
- self.data["writeCollections"] = False
- self.data["writePositions"] = True
- self.data["writeNormals"] = True
- self.data["writeUVs"] = True
- self.data["writeColorSets"] = False
- self.data["writeTangents"] = False
- self.data["writeRefPositions"] = True
- self.data["writeBlendShapes"] = False
- self.data["writeDisplayColor"] = True
- self.data["writeSkinWeights"] = False
- self.data["writeMaterialAssignment"] = False
- self.data["writeHardwareShader"] = False
- self.data["writeShadingNetworks"] = False
- self.data["writeTransformMatrix"] = True
- self.data["writeUsdAttributes"] = True
- self.data["writeInstancesAsReferences"] = False
- self.data["timeVaryingTopology"] = False
- self.data["customMaterialNamespace"] = ''
- self.data["numTimeSamples"] = 1
- self.data["timeSamplesSpan"] = 0.0
+ return defs
diff --git a/openpype/hosts/maya/plugins/create/create_multiverse_usd_comp.py b/openpype/hosts/maya/plugins/create/create_multiverse_usd_comp.py
index ed466a8068..66ddd83eda 100644
--- a/openpype/hosts/maya/plugins/create/create_multiverse_usd_comp.py
+++ b/openpype/hosts/maya/plugins/create/create_multiverse_usd_comp.py
@@ -1,26 +1,48 @@
from openpype.hosts.maya.api import plugin, lib
+from openpype.lib import (
+ BoolDef,
+ NumberDef,
+ EnumDef
+)
-class CreateMultiverseUsdComp(plugin.Creator):
+class CreateMultiverseUsdComp(plugin.MayaCreator):
"""Create Multiverse USD Composition"""
- name = "mvUsdCompositionMain"
+ identifier = "io.openpype.creators.maya.mvusdcomposition"
label = "Multiverse USD Composition"
family = "mvUsdComposition"
icon = "cubes"
- def __init__(self, *args, **kwargs):
- super(CreateMultiverseUsdComp, self).__init__(*args, **kwargs)
+ def get_instance_attr_defs(self):
- # Add animation data first, since it maintains order.
- self.data.update(lib.collect_animation_data(True))
+ defs = lib.collect_animation_defs(fps=True)
+ defs.extend([
+ EnumDef("fileFormat",
+ label="File format",
+ items=["usd", "usda"],
+ default="usd"),
+ BoolDef("stripNamespaces",
+ label="Strip Namespaces",
+ default=False),
+ BoolDef("mergeTransformAndShape",
+ label="Merge Transform and Shape",
+ default=False),
+ BoolDef("flattenContent",
+ label="Flatten Content",
+ default=False),
+ BoolDef("writeAsCompoundLayers",
+ label="Write As Compound Layers",
+ default=False),
+ BoolDef("writePendingOverrides",
+ label="Write Pending Overrides",
+ default=False),
+ NumberDef("numTimeSamples",
+ label="Num Time Samples",
+ default=1),
+ NumberDef("timeSamplesSpan",
+ label="Time Samples Span",
+ default=0.0),
+ ])
- # Order of `fileFormat` must match extract_multiverse_usd_comp.py
- self.data["fileFormat"] = ["usda", "usd"]
- self.data["stripNamespaces"] = False
- self.data["mergeTransformAndShape"] = False
- self.data["flattenContent"] = False
- self.data["writeAsCompoundLayers"] = False
- self.data["writePendingOverrides"] = False
- self.data["numTimeSamples"] = 1
- self.data["timeSamplesSpan"] = 0.0
+ return defs
diff --git a/openpype/hosts/maya/plugins/create/create_multiverse_usd_over.py b/openpype/hosts/maya/plugins/create/create_multiverse_usd_over.py
index 06e22df295..e1534dd68c 100644
--- a/openpype/hosts/maya/plugins/create/create_multiverse_usd_over.py
+++ b/openpype/hosts/maya/plugins/create/create_multiverse_usd_over.py
@@ -1,30 +1,59 @@
from openpype.hosts.maya.api import plugin, lib
+from openpype.lib import (
+ BoolDef,
+ NumberDef,
+ EnumDef
+)
class CreateMultiverseUsdOver(plugin.Creator):
"""Create Multiverse USD Override"""
- name = "mvUsdOverrideMain"
+ identifier = "io.openpype.creators.maya.mvusdoverride"
label = "Multiverse USD Override"
family = "mvUsdOverride"
icon = "cubes"
- def __init__(self, *args, **kwargs):
- super(CreateMultiverseUsdOver, self).__init__(*args, **kwargs)
+ def get_instance_attr_defs(self):
+ defs = lib.collect_animation_defs(fps=True)
+ defs.extend([
+ EnumDef("fileFormat",
+ label="File format",
+ items=["usd", "usda"],
+ default="usd"),
+ BoolDef("writeAll",
+ label="Write All",
+ default=False),
+ BoolDef("writeTransforms",
+ label="Write Transforms",
+ default=True),
+ BoolDef("writeVisibility",
+ label="Write Visibility",
+ default=True),
+ BoolDef("writeAttributes",
+ label="Write Attributes",
+ default=True),
+ BoolDef("writeMaterials",
+ label="Write Materials",
+ default=True),
+ BoolDef("writeVariants",
+ label="Write Variants",
+ default=True),
+ BoolDef("writeVariantsDefinition",
+ label="Write Variants Definition",
+ default=True),
+ BoolDef("writeActiveState",
+ label="Write Active State",
+ default=True),
+ BoolDef("writeNamespaces",
+ label="Write Namespaces",
+ default=False),
+ NumberDef("numTimeSamples",
+ label="Num Time Samples",
+ default=1),
+ NumberDef("timeSamplesSpan",
+ label="Time Samples Span",
+ default=0.0),
+ ])
- # Add animation data first, since it maintains order.
- self.data.update(lib.collect_animation_data(True))
-
- # Order of `fileFormat` must match extract_multiverse_usd_over.py
- self.data["fileFormat"] = ["usda", "usd"]
- self.data["writeAll"] = False
- self.data["writeTransforms"] = True
- self.data["writeVisibility"] = True
- self.data["writeAttributes"] = True
- self.data["writeMaterials"] = True
- self.data["writeVariants"] = True
- self.data["writeVariantsDefinition"] = True
- self.data["writeActiveState"] = True
- self.data["writeNamespaces"] = False
- self.data["numTimeSamples"] = 1
- self.data["timeSamplesSpan"] = 0.0
+ return defs
diff --git a/openpype/hosts/maya/plugins/create/create_pointcache.py b/openpype/hosts/maya/plugins/create/create_pointcache.py
index 1b8d5e6850..f4e8cbfc9a 100644
--- a/openpype/hosts/maya/plugins/create/create_pointcache.py
+++ b/openpype/hosts/maya/plugins/create/create_pointcache.py
@@ -4,47 +4,85 @@ from openpype.hosts.maya.api import (
lib,
plugin
)
+from openpype.lib import (
+ BoolDef,
+ TextDef
+)
-class CreatePointCache(plugin.Creator):
+class CreatePointCache(plugin.MayaCreator):
"""Alembic pointcache for animated data"""
- name = "pointcache"
- label = "Point Cache"
+ identifier = "io.openpype.creators.maya.pointcache"
+ label = "Pointcache"
family = "pointcache"
icon = "gears"
write_color_sets = False
write_face_sets = False
include_user_defined_attributes = False
- def __init__(self, *args, **kwargs):
- super(CreatePointCache, self).__init__(*args, **kwargs)
+ def get_instance_attr_defs(self):
- # Add animation data
- self.data.update(lib.collect_animation_data())
+ defs = lib.collect_animation_defs()
- # Vertex colors with the geometry.
- self.data["writeColorSets"] = self.write_color_sets
- # Vertex colors with the geometry.
- self.data["writeFaceSets"] = self.write_face_sets
- self.data["renderableOnly"] = False # Only renderable visible shapes
- self.data["visibleOnly"] = False # only nodes that are visible
- self.data["includeParentHierarchy"] = False # Include parent groups
- self.data["worldSpace"] = True # Default to exporting world-space
- self.data["refresh"] = False # Default to suspend refresh.
-
- # Add options for custom attributes
- value = self.include_user_defined_attributes
- self.data["includeUserDefinedAttributes"] = value
- self.data["attr"] = ""
- self.data["attrPrefix"] = ""
+ defs.extend([
+ BoolDef("writeColorSets",
+ label="Write vertex colors",
+ tooltip="Write vertex colors with the geometry",
+ default=False),
+ BoolDef("writeFaceSets",
+ label="Write face sets",
+ tooltip="Write face sets with the geometry",
+ default=False),
+ BoolDef("renderableOnly",
+ label="Renderable Only",
+ tooltip="Only export renderable visible shapes",
+ default=False),
+ BoolDef("visibleOnly",
+ label="Visible Only",
+ tooltip="Only export dag objects visible during "
+ "frame range",
+ default=False),
+ BoolDef("includeParentHierarchy",
+ label="Include Parent Hierarchy",
+ tooltip="Whether to include parent hierarchy of nodes in "
+ "the publish instance",
+ default=False),
+ BoolDef("worldSpace",
+ label="World-Space Export",
+ default=True),
+ BoolDef("refresh",
+ label="Refresh viewport during export",
+ default=False),
+ BoolDef("includeUserDefinedAttributes",
+ label="Include User Defined Attributes",
+ default=self.include_user_defined_attributes),
+ TextDef("attr",
+ label="Custom Attributes",
+ default="",
+ placeholder="attr1, attr2"),
+ TextDef("attrPrefix",
+ label="Custom Attributes Prefix",
+ default="",
+ placeholder="prefix1, prefix2")
+ ])
+ # TODO: Implement these on a Deadline plug-in instead?
+ """
# Default to not send to farm.
self.data["farm"] = False
self.data["priority"] = 50
+ """
- def process(self):
- instance = super(CreatePointCache, self).process()
+ return defs
- assProxy = cmds.sets(name=instance + "_proxy_SET", empty=True)
- cmds.sets(assProxy, forceElement=instance)
+ def create(self, subset_name, instance_data, pre_create_data):
+
+ instance = super(CreatePointCache, self).create(
+ subset_name, instance_data, pre_create_data
+ )
+ instance_node = instance.get("instance_node")
+
+ # For Arnold standin proxy
+ proxy_set = cmds.sets(name=instance_node + "_proxy_SET", empty=True)
+ cmds.sets(proxy_set, forceElement=instance_node)
diff --git a/openpype/hosts/maya/plugins/create/create_proxy_abc.py b/openpype/hosts/maya/plugins/create/create_proxy_abc.py
index 2946f7b530..d89470ebee 100644
--- a/openpype/hosts/maya/plugins/create/create_proxy_abc.py
+++ b/openpype/hosts/maya/plugins/create/create_proxy_abc.py
@@ -2,34 +2,49 @@ from openpype.hosts.maya.api import (
lib,
plugin
)
+from openpype.lib import (
+ BoolDef,
+ TextDef
+)
-class CreateProxyAlembic(plugin.Creator):
+class CreateProxyAlembic(plugin.MayaCreator):
"""Proxy Alembic for animated data"""
- name = "proxyAbcMain"
+ identifier = "io.openpype.creators.maya.proxyabc"
label = "Proxy Alembic"
family = "proxyAbc"
icon = "gears"
write_color_sets = False
write_face_sets = False
- def __init__(self, *args, **kwargs):
- super(CreateProxyAlembic, self).__init__(*args, **kwargs)
+ def get_instance_attr_defs(self):
- # Add animation data
- self.data.update(lib.collect_animation_data())
+ defs = lib.collect_animation_defs()
- # Vertex colors with the geometry.
- self.data["writeColorSets"] = self.write_color_sets
- # Vertex colors with the geometry.
- self.data["writeFaceSets"] = self.write_face_sets
- # Default to exporting world-space
- self.data["worldSpace"] = True
+ defs.extend([
+ BoolDef("writeColorSets",
+ label="Write vertex colors",
+ tooltip="Write vertex colors with the geometry",
+ default=self.write_color_sets),
+ BoolDef("writeFaceSets",
+ label="Write face sets",
+ tooltip="Write face sets with the geometry",
+ default=self.write_face_sets),
+ BoolDef("worldSpace",
+ label="World-Space Export",
+ default=True),
+ TextDef("nameSuffix",
+ label="Name Suffix for Bounding Box",
+ default="_BBox",
+ placeholder="_BBox"),
+ TextDef("attr",
+ label="Custom Attributes",
+ default="",
+ placeholder="attr1, attr2"),
+ TextDef("attrPrefix",
+ label="Custom Attributes Prefix",
+ placeholder="prefix1, prefix2")
+ ])
- # name suffix for the bounding box
- self.data["nameSuffix"] = "_BBox"
-
- # Add options for custom attributes
- self.data["attr"] = ""
- self.data["attrPrefix"] = ""
+ return defs
diff --git a/openpype/hosts/maya/plugins/create/create_redshift_proxy.py b/openpype/hosts/maya/plugins/create/create_redshift_proxy.py
index 419a8d99d4..2490738e8f 100644
--- a/openpype/hosts/maya/plugins/create/create_redshift_proxy.py
+++ b/openpype/hosts/maya/plugins/create/create_redshift_proxy.py
@@ -2,22 +2,24 @@
"""Creator of Redshift proxy subset types."""
from openpype.hosts.maya.api import plugin, lib
+from openpype.lib import BoolDef
-class CreateRedshiftProxy(plugin.Creator):
+class CreateRedshiftProxy(plugin.MayaCreator):
"""Create instance of Redshift Proxy subset."""
- name = "redshiftproxy"
+ identifier = "io.openpype.creators.maya.redshiftproxy"
label = "Redshift Proxy"
family = "redshiftproxy"
icon = "gears"
- def __init__(self, *args, **kwargs):
- super(CreateRedshiftProxy, self).__init__(*args, **kwargs)
+ def get_instance_attr_defs(self):
- animation_data = lib.collect_animation_data()
+ defs = [
+ BoolDef("animation",
+ label="Export animation",
+ default=False)
+ ]
- self.data["animation"] = False
- self.data["proxyFrameStart"] = animation_data["frameStart"]
- self.data["proxyFrameEnd"] = animation_data["frameEnd"]
- self.data["proxyFrameStep"] = animation_data["step"]
+ defs.extend(lib.collect_animation_defs())
+ return defs
diff --git a/openpype/hosts/maya/plugins/create/create_render.py b/openpype/hosts/maya/plugins/create/create_render.py
index 4681175808..cc5c1eb205 100644
--- a/openpype/hosts/maya/plugins/create/create_render.py
+++ b/openpype/hosts/maya/plugins/create/create_render.py
@@ -1,425 +1,108 @@
# -*- coding: utf-8 -*-
"""Create ``Render`` instance in Maya."""
-import json
-import os
-import appdirs
-import requests
-
-from maya import cmds
-from maya.app.renderSetup.model import renderSetup
-
-from openpype.settings import (
- get_system_settings,
- get_project_settings,
-)
-from openpype.lib import requests_get
-from openpype.modules import ModulesManager
-from openpype.pipeline import legacy_io
from openpype.hosts.maya.api import (
- lib,
lib_rendersettings,
plugin
)
+from openpype.pipeline import CreatorError
+from openpype.lib import (
+ BoolDef,
+ NumberDef,
+)
-class CreateRender(plugin.Creator):
- """Create *render* instance.
+class CreateRenderlayer(plugin.RenderlayerCreator):
+ """Create and manages renderlayer subset per renderLayer in workfile.
- Render instances are not actually published, they hold options for
- collecting of render data. It render instance is present, it will trigger
- collection of render layers, AOVs, cameras for either direct submission
- to render farm or export as various standalone formats (like V-Rays
- ``vrscenes`` or Arnolds ``ass`` files) and then submitting them to render
- farm.
-
- Instance has following attributes::
-
- primaryPool (list of str): Primary list of slave machine pool to use.
- secondaryPool (list of str): Optional secondary list of slave pools.
- suspendPublishJob (bool): Suspend the job after it is submitted.
- extendFrames (bool): Use already existing frames from previous version
- to extend current render.
- overrideExistingFrame (bool): Overwrite already existing frames.
- priority (int): Submitted job priority
- framesPerTask (int): How many frames per task to render. This is
- basically job division on render farm.
- whitelist (list of str): White list of slave machines
- machineList (list of str): Specific list of slave machines to use
- useMayaBatch (bool): Use Maya batch mode to render as opposite to
- Maya interactive mode. This consumes different licenses.
- vrscene (bool): Submit as ``vrscene`` file for standalone V-Ray
- renderer.
- ass (bool): Submit as ``ass`` file for standalone Arnold renderer.
- tileRendering (bool): Instance is set to tile rendering mode. We
- won't submit actual render, but we'll make publish job to wait
- for Tile Assembly job done and then publish.
- strict_error_checking (bool): Enable/disable error checking on DL
-
- See Also:
- https://pype.club/docs/artist_hosts_maya#creating-basic-render-setup
+ This generates a single node in the scene which, if it exists, tells the
+ Creator to collect Maya rendersetup renderlayers as individual instances.
+ As such, triggering create doesn't actually create the instance node per
+ layer but only the node which tells the Creator it may now collect
+ the renderlayers.
"""
+ identifier = "io.openpype.creators.maya.renderlayer"
+ family = "renderlayer"
label = "Render"
- family = "rendering"
icon = "eye"
- _token = None
- _user = None
- _password = None
- _project_settings = None
+ layer_instance_prefix = "render"
+ singleton_node_name = "renderingMain"
- def __init__(self, *args, **kwargs):
- """Constructor."""
- super(CreateRender, self).__init__(*args, **kwargs)
+ render_settings = {}
- # Defaults
- self._project_settings = get_project_settings(
- legacy_io.Session["AVALON_PROJECT"])
- if self._project_settings["maya"]["RenderSettings"]["apply_render_settings"]: # noqa
+ @classmethod
+ def apply_settings(cls, project_settings, system_settings):
+ cls.render_settings = project_settings["maya"]["RenderSettings"]
+
+ def create(self, subset_name, instance_data, pre_create_data):
+ # Only allow a single render instance to exist
+ if self._get_singleton_node():
+ raise CreatorError("A Render instance already exists - only "
+ "one can be configured.")
+
+ # Apply default project render settings on create
+ if self.render_settings.get("apply_render_settings"):
lib_rendersettings.RenderSettings().set_default_renderer_settings()
- # Deadline-only
- manager = ModulesManager()
- deadline_settings = get_system_settings()["modules"]["deadline"]
- if not deadline_settings["enabled"]:
- self.deadline_servers = {}
- return
- self.deadline_module = manager.modules_by_name["deadline"]
- try:
- default_servers = deadline_settings["deadline_urls"]
- project_servers = (
- self._project_settings["deadline"]["deadline_servers"]
- )
- self.deadline_servers = {
- k: default_servers[k]
- for k in project_servers
- if k in default_servers
- }
+ super(CreateRenderlayer, self).create(subset_name,
+ instance_data,
+ pre_create_data)
- if not self.deadline_servers:
- self.deadline_servers = default_servers
-
- except AttributeError:
- # Handle situation were we had only one url for deadline.
- # get default deadline webservice url from deadline module
- self.deadline_servers = self.deadline_module.deadline_urls
-
- def process(self):
- """Entry point."""
- exists = cmds.ls(self.name)
- if exists:
- cmds.warning("%s already exists." % exists[0])
- return
-
- use_selection = self.options.get("useSelection")
- with lib.undo_chunk():
- self._create_render_settings()
- self.instance = super(CreateRender, self).process()
- # create namespace with instance
- index = 1
- namespace_name = "_{}".format(str(self.instance))
- try:
- cmds.namespace(rm=namespace_name)
- except RuntimeError:
- # namespace is not empty, so we leave it untouched
- pass
-
- while cmds.namespace(exists=namespace_name):
- namespace_name = "_{}{}".format(str(self.instance), index)
- index += 1
-
- namespace = cmds.namespace(add=namespace_name)
-
- # add Deadline server selection list
- if self.deadline_servers:
- cmds.scriptJob(
- attributeChange=[
- "{}.deadlineServers".format(self.instance),
- self._deadline_webservice_changed
- ])
-
- cmds.setAttr("{}.machineList".format(self.instance), lock=True)
- rs = renderSetup.instance()
- layers = rs.getRenderLayers()
- if use_selection:
- self.log.info("Processing existing layers")
- sets = []
- for layer in layers:
- self.log.info(" - creating set for {}:{}".format(
- namespace, layer.name()))
- render_set = cmds.sets(
- n="{}:{}".format(namespace, layer.name()))
- sets.append(render_set)
- cmds.sets(sets, forceElement=self.instance)
-
- # if no render layers are present, create default one with
- # asterisk selector
- if not layers:
- render_layer = rs.createRenderLayer('Main')
- collection = render_layer.createCollection("defaultCollection")
- collection.getSelector().setPattern('*')
-
- return self.instance
-
- def _deadline_webservice_changed(self):
- """Refresh Deadline server dependent options."""
- # get selected server
- webservice = self.deadline_servers[
- self.server_aliases[
- cmds.getAttr("{}.deadlineServers".format(self.instance))
- ]
- ]
- pools = self.deadline_module.get_deadline_pools(webservice, self.log)
- cmds.deleteAttr("{}.primaryPool".format(self.instance))
- cmds.deleteAttr("{}.secondaryPool".format(self.instance))
-
- pool_setting = (self._project_settings["deadline"]
- ["publish"]
- ["CollectDeadlinePools"])
-
- primary_pool = pool_setting["primary_pool"]
- sorted_pools = self._set_default_pool(list(pools), primary_pool)
- cmds.addAttr(
- self.instance,
- longName="primaryPool",
- attributeType="enum",
- enumName=":".join(sorted_pools)
- )
- cmds.setAttr(
- "{}.primaryPool".format(self.instance),
- 0,
- keyable=False,
- channelBox=True
- )
-
- pools = ["-"] + pools
- secondary_pool = pool_setting["secondary_pool"]
- sorted_pools = self._set_default_pool(list(pools), secondary_pool)
- cmds.addAttr(
- self.instance,
- longName="secondaryPool",
- attributeType="enum",
- enumName=":".join(sorted_pools)
- )
- cmds.setAttr(
- "{}.secondaryPool".format(self.instance),
- 0,
- keyable=False,
- channelBox=True
- )
-
- def _create_render_settings(self):
+ def get_instance_attr_defs(self):
"""Create instance settings."""
- # get pools (slave machines of the render farm)
- pool_names = []
- default_priority = 50
- self.data["suspendPublishJob"] = False
- self.data["review"] = True
- self.data["extendFrames"] = False
- self.data["overrideExistingFrame"] = True
- # self.data["useLegacyRenderLayers"] = True
- self.data["priority"] = default_priority
- self.data["tile_priority"] = default_priority
- self.data["framesPerTask"] = 1
- self.data["whitelist"] = False
- self.data["machineList"] = ""
- self.data["useMayaBatch"] = False
- self.data["tileRendering"] = False
- self.data["tilesX"] = 2
- self.data["tilesY"] = 2
- self.data["convertToScanline"] = False
- self.data["useReferencedAovs"] = False
- self.data["renderSetupIncludeLights"] = (
- self._project_settings.get(
- "maya", {}).get(
- "RenderSettings", {}).get(
- "enable_all_lights", False)
- )
- # Disable for now as this feature is not working yet
- # self.data["assScene"] = False
+ return [
+ BoolDef("review",
+ label="Review",
+ tooltip="Mark as reviewable",
+ default=True),
+ BoolDef("extendFrames",
+ label="Extend Frames",
+ tooltip="Extends the frames on top of the previous "
+ "publish.\nIf the previous was 1001-1050 and you "
+ "would now submit 1020-1070 only the new frames "
+ "1051-1070 would be rendered and published "
+ "together with the previously rendered frames.\n"
+ "If 'overrideExistingFrame' is enabled it *will* "
+ "render any existing frames.",
+ default=False),
+ BoolDef("overrideExistingFrame",
+ label="Override Existing Frame",
+ tooltip="Override existing rendered frames "
+ "(if they exist).",
+ default=True),
- system_settings = get_system_settings()["modules"]
+ # TODO: Should these move to submit_maya_deadline plugin?
+ # Tile rendering
+ BoolDef("tileRendering",
+ label="Enable tiled rendering",
+ default=False),
+ NumberDef("tilesX",
+ label="Tiles X",
+ default=2,
+ minimum=1,
+ decimals=0),
+ NumberDef("tilesY",
+ label="Tiles Y",
+ default=2,
+ minimum=1,
+ decimals=0),
- deadline_enabled = system_settings["deadline"]["enabled"]
- muster_enabled = system_settings["muster"]["enabled"]
- muster_url = system_settings["muster"]["MUSTER_REST_URL"]
+ # Additional settings
+ BoolDef("convertToScanline",
+ label="Convert to Scanline",
+ tooltip="Convert the output images to scanline images",
+ default=False),
+ BoolDef("useReferencedAovs",
+ label="Use Referenced AOVs",
+ tooltip="Consider the AOVs from referenced scenes as well",
+ default=False),
- if deadline_enabled and muster_enabled:
- self.log.error(
- "Both Deadline and Muster are enabled. " "Cannot support both."
- )
- raise RuntimeError("Both Deadline and Muster are enabled")
-
- if deadline_enabled:
- self.server_aliases = list(self.deadline_servers.keys())
- self.data["deadlineServers"] = self.server_aliases
-
- try:
- deadline_url = self.deadline_servers["default"]
- except KeyError:
- # if 'default' server is not between selected,
- # use first one for initial list of pools.
- deadline_url = next(iter(self.deadline_servers.values()))
- # Uses function to get pool machines from the assigned deadline
- # url in settings
- pool_names = self.deadline_module.get_deadline_pools(deadline_url,
- self.log)
- maya_submit_dl = self._project_settings.get(
- "deadline", {}).get(
- "publish", {}).get(
- "MayaSubmitDeadline", {})
- priority = maya_submit_dl.get("priority", default_priority)
- self.data["priority"] = priority
-
- tile_priority = maya_submit_dl.get("tile_priority",
- default_priority)
- self.data["tile_priority"] = tile_priority
-
- strict_error_checking = maya_submit_dl.get("strict_error_checking",
- True)
- self.data["strict_error_checking"] = strict_error_checking
-
- # Pool attributes should be last since they will be recreated when
- # the deadline server changes.
- pool_setting = (self._project_settings["deadline"]
- ["publish"]
- ["CollectDeadlinePools"])
- primary_pool = pool_setting["primary_pool"]
- self.data["primaryPool"] = self._set_default_pool(pool_names,
- primary_pool)
- # We add a string "-" to allow the user to not
- # set any secondary pools
- pool_names = ["-"] + pool_names
- secondary_pool = pool_setting["secondary_pool"]
- self.data["secondaryPool"] = self._set_default_pool(pool_names,
- secondary_pool)
-
- if muster_enabled:
- self.log.info(">>> Loading Muster credentials ...")
- self._load_credentials()
- self.log.info(">>> Getting pools ...")
- pools = []
- try:
- pools = self._get_muster_pools()
- except requests.exceptions.HTTPError as e:
- if e.startswith("401"):
- self.log.warning("access token expired")
- self._show_login()
- raise RuntimeError("Access token expired")
- except requests.exceptions.ConnectionError:
- self.log.error("Cannot connect to Muster API endpoint.")
- raise RuntimeError("Cannot connect to {}".format(muster_url))
- for pool in pools:
- self.log.info(" - pool: {}".format(pool["name"]))
- pool_names.append(pool["name"])
-
- self.options = {"useSelection": False} # Force no content
-
- def _set_default_pool(self, pool_names, pool_value):
- """Reorder pool names, default should come first"""
- if pool_value and pool_value in pool_names:
- pool_names.remove(pool_value)
- pool_names = [pool_value] + pool_names
- return pool_names
-
- def _load_credentials(self):
- """Load Muster credentials.
-
- Load Muster credentials from file and set ``MUSTER_USER``,
- ``MUSTER_PASSWORD``, ``MUSTER_REST_URL`` is loaded from settings.
-
- Raises:
- RuntimeError: If loaded credentials are invalid.
- AttributeError: If ``MUSTER_REST_URL`` is not set.
-
- """
- app_dir = os.path.normpath(appdirs.user_data_dir("pype-app", "pype"))
- file_name = "muster_cred.json"
- fpath = os.path.join(app_dir, file_name)
- file = open(fpath, "r")
- muster_json = json.load(file)
- self._token = muster_json.get("token", None)
- if not self._token:
- self._show_login()
- raise RuntimeError("Invalid access token for Muster")
- file.close()
- self.MUSTER_REST_URL = os.environ.get("MUSTER_REST_URL")
- if not self.MUSTER_REST_URL:
- raise AttributeError("Muster REST API url not set")
-
- def _get_muster_pools(self):
- """Get render pools from Muster.
-
- Raises:
- Exception: If pool list cannot be obtained from Muster.
-
- """
- params = {"authToken": self._token}
- api_entry = "/api/pools/list"
- response = requests_get(self.MUSTER_REST_URL + api_entry,
- params=params)
- if response.status_code != 200:
- if response.status_code == 401:
- self.log.warning("Authentication token expired.")
- self._show_login()
- else:
- self.log.error(
- ("Cannot get pools from "
- "Muster: {}").format(response.status_code)
- )
- raise Exception("Cannot get pools from Muster")
- try:
- pools = response.json()["ResponseData"]["pools"]
- except ValueError as e:
- self.log.error("Invalid response from Muster server {}".format(e))
- raise Exception("Invalid response from Muster server")
-
- return pools
-
- def _show_login(self):
- # authentication token expired so we need to login to Muster
- # again to get it. We use Pype API call to show login window.
- api_url = "{}/muster/show_login".format(
- os.environ["OPENPYPE_WEBSERVER_URL"])
- self.log.debug(api_url)
- login_response = requests_get(api_url, timeout=1)
- if login_response.status_code != 200:
- self.log.error("Cannot show login form to Muster")
- raise Exception("Cannot show login form to Muster")
-
- def _requests_post(self, *args, **kwargs):
- """Wrap request post method.
-
- Disabling SSL certificate validation if ``DONT_VERIFY_SSL`` environment
- variable is found. This is useful when Deadline or Muster server are
- running with self-signed certificates and their certificate is not
- added to trusted certificates on client machines.
-
- Warning:
- Disabling SSL certificate validation is defeating one line
- of defense SSL is providing and it is not recommended.
-
- """
- if "verify" not in kwargs:
- kwargs["verify"] = not os.getenv("OPENPYPE_DONT_VERIFY_SSL", True)
- return requests.post(*args, **kwargs)
-
- def _requests_get(self, *args, **kwargs):
- """Wrap request get method.
-
- Disabling SSL certificate validation if ``DONT_VERIFY_SSL`` environment
- variable is found. This is useful when Deadline or Muster server are
- running with self-signed certificates and their certificate is not
- added to trusted certificates on client machines.
-
- Warning:
- Disabling SSL certificate validation is defeating one line
- of defense SSL is providing and it is not recommended.
-
- """
- if "verify" not in kwargs:
- kwargs["verify"] = not os.getenv("OPENPYPE_DONT_VERIFY_SSL", True)
- return requests.get(*args, **kwargs)
+ BoolDef("renderSetupIncludeLights",
+ label="Render Setup Include Lights",
+ default=self.render_settings.get("enable_all_lights",
+ False))
+ ]
diff --git a/openpype/hosts/maya/plugins/create/create_rendersetup.py b/openpype/hosts/maya/plugins/create/create_rendersetup.py
index 494f90d87b..dd64a0a842 100644
--- a/openpype/hosts/maya/plugins/create/create_rendersetup.py
+++ b/openpype/hosts/maya/plugins/create/create_rendersetup.py
@@ -1,55 +1,31 @@
-from openpype.hosts.maya.api import (
- lib,
- plugin
-)
-from maya import cmds
+from openpype.hosts.maya.api import plugin
+from openpype.pipeline import CreatorError
-class CreateRenderSetup(plugin.Creator):
+class CreateRenderSetup(plugin.MayaCreator):
"""Create rendersetup template json data"""
- name = "rendersetup"
+ identifier = "io.openpype.creators.maya.rendersetup"
label = "Render Setup Preset"
family = "rendersetup"
icon = "tablet"
- def __init__(self, *args, **kwargs):
- super(CreateRenderSetup, self).__init__(*args, **kwargs)
+ def get_pre_create_attr_defs(self):
+ # Do not show the "use_selection" setting from parent class
+ return []
- # here we can pre-create renderSetup layers, possibly utlizing
- # settings for it.
+ def create(self, subset_name, instance_data, pre_create_data):
- # _____
- # / __\__
- # | / __\__
- # | | / \
- # | | | |
- # \__| | |
- # \__| |
- # \_____/
+ existing_instance = None
+ for instance in self.create_context.instances:
+ if instance.family == self.family:
+ existing_instance = instance
+ break
- # from pype.api import get_project_settings
- # import maya.app.renderSetup.model.renderSetup as renderSetup
- # settings = get_project_settings(os.environ['AVALON_PROJECT'])
- # layer = settings['maya']['create']['renderSetup']["layer"]
+ if existing_instance:
+ raise CreatorError("A RenderSetup instance already exists - only "
+ "one can be configured.")
- # rs = renderSetup.instance()
- # rs.createRenderLayer(layer)
-
- self.options = {"useSelection": False} # Force no content
-
- def process(self):
- exists = cmds.ls(self.name)
- assert len(exists) <= 1, (
- "More than one renderglobal exists, this is a bug"
- )
-
- if exists:
- return cmds.warning("%s already exists." % exists[0])
-
- with lib.undo_chunk():
- instance = super(CreateRenderSetup, self).process()
-
- self.data["renderSetup"] = "42"
- null = cmds.sets(name="null_SET", empty=True)
- cmds.sets([null], forceElement=instance)
+ super(CreateRenderSetup, self).create(subset_name,
+ instance_data,
+ pre_create_data)
diff --git a/openpype/hosts/maya/plugins/create/create_review.py b/openpype/hosts/maya/plugins/create/create_review.py
index 40ae99b57c..f60e2406bc 100644
--- a/openpype/hosts/maya/plugins/create/create_review.py
+++ b/openpype/hosts/maya/plugins/create/create_review.py
@@ -1,76 +1,142 @@
-import os
-from collections import OrderedDict
import json
+from maya import cmds
+
from openpype.hosts.maya.api import (
lib,
plugin
)
-from openpype.settings import get_project_settings
-from openpype.pipeline import get_current_project_name, get_current_task_name
+from openpype.lib import (
+ BoolDef,
+ NumberDef,
+ EnumDef
+)
+from openpype.pipeline import CreatedInstance
from openpype.client import get_asset_by_name
+TRANSPARENCIES = [
+ "preset",
+ "simple",
+ "object sorting",
+ "weighted average",
+ "depth peeling",
+ "alpha cut"
+]
-class CreateReview(plugin.Creator):
- """Single baked camera"""
- name = "reviewDefault"
+class CreateReview(plugin.MayaCreator):
+ """Playblast reviewable"""
+
+ identifier = "io.openpype.creators.maya.review"
label = "Review"
family = "review"
icon = "video-camera"
- keepImages = False
- isolate = False
- imagePlane = True
- Width = 0
- Height = 0
- transparency = [
- "preset",
- "simple",
- "object sorting",
- "weighted average",
- "depth peeling",
- "alpha cut"
- ]
+
useMayaTimeline = True
panZoom = False
- def __init__(self, *args, **kwargs):
- super(CreateReview, self).__init__(*args, **kwargs)
- data = OrderedDict(**self.data)
+ # Overriding "create" method to prefill values from settings.
+ def create(self, subset_name, instance_data, pre_create_data):
- project_name = get_current_project_name()
- asset_doc = get_asset_by_name(project_name, data["asset"])
- task_name = get_current_task_name()
+ members = list()
+ if pre_create_data.get("use_selection"):
+ members = cmds.ls(selection=True)
+
+ project_name = self.project_name
+ asset_doc = get_asset_by_name(project_name, instance_data["asset"])
+ task_name = instance_data["task"]
preset = lib.get_capture_preset(
task_name,
asset_doc["data"]["tasks"][task_name]["type"],
- data["subset"],
- get_project_settings(project_name),
+ subset_name,
+ self.project_settings,
self.log
)
- if os.environ.get("OPENPYPE_DEBUG") == "1":
- self.log.debug(
- "Using preset: {}".format(
- json.dumps(preset, indent=4, sort_keys=True)
- )
+ self.log.debug(
+ "Using preset: {}".format(
+ json.dumps(preset, indent=4, sort_keys=True)
)
+ )
+
+ with lib.undo_chunk():
+ instance_node = cmds.sets(members, name=subset_name)
+ instance_data["instance_node"] = instance_node
+ instance = CreatedInstance(
+ self.family,
+ subset_name,
+ instance_data,
+ self)
+
+ creator_attribute_defs_by_key = {
+ x.key: x for x in instance.creator_attribute_defs
+ }
+ mapping = {
+ "review_width": preset["Resolution"]["width"],
+ "review_height": preset["Resolution"]["height"],
+ "isolate": preset["Generic"]["isolate_view"],
+ "imagePlane": preset["Viewport Options"]["imagePlane"],
+ "panZoom": preset["Generic"]["pan_zoom"]
+ }
+ for key, value in mapping.items():
+ creator_attribute_defs_by_key[key].default = value
+
+ self._add_instance_to_context(instance)
+
+ self.imprint_instance_node(instance_node,
+ data=instance.data_to_store())
+ return instance
+
+ def get_instance_attr_defs(self):
+
+ defs = lib.collect_animation_defs()
# Option for using Maya or asset frame range in settings.
- frame_range = lib.get_frame_range()
- if self.useMayaTimeline:
- frame_range = lib.collect_animation_data(fps=True)
- for key, value in frame_range.items():
- data[key] = value
+ if not self.useMayaTimeline:
+ # Update the defaults to be the asset frame range
+ frame_range = lib.get_frame_range()
+ defs_by_key = {attr_def.key: attr_def for attr_def in defs}
+ for key, value in frame_range.items():
+ if key not in defs_by_key:
+ raise RuntimeError("Attribute definition not found to be "
+ "updated for key: {}".format(key))
+ attr_def = defs_by_key[key]
+ attr_def.default = value
- data["fps"] = lib.collect_animation_data(fps=True)["fps"]
+ defs.extend([
+ NumberDef("review_width",
+ label="Review width",
+ tooltip="A value of zero will use the asset resolution.",
+ decimals=0,
+ minimum=0,
+ default=0),
+ NumberDef("review_height",
+ label="Review height",
+ tooltip="A value of zero will use the asset resolution.",
+ decimals=0,
+ minimum=0,
+ default=0),
+ BoolDef("keepImages",
+ label="Keep Images",
+ tooltip="Whether to also publish along the image sequence "
+ "next to the video reviewable.",
+ default=False),
+ BoolDef("isolate",
+ label="Isolate render members of instance",
+ tooltip="When enabled only the members of the instance "
+ "will be included in the playblast review.",
+ default=False),
+ BoolDef("imagePlane",
+ label="Show Image Plane",
+ default=True),
+ EnumDef("transparency",
+ label="Transparency",
+ items=TRANSPARENCIES),
+ BoolDef("panZoom",
+ label="Enable camera pan/zoom",
+ default=True),
+ EnumDef("displayLights",
+ label="Display Lights",
+ items=lib.DISPLAY_LIGHTS_ENUM),
+ ])
- data["keepImages"] = self.keepImages
- data["transparency"] = self.transparency
- data["review_width"] = preset["Resolution"]["width"]
- data["review_height"] = preset["Resolution"]["height"]
- data["isolate"] = preset["Generic"]["isolate_view"]
- data["imagePlane"] = preset["Viewport Options"]["imagePlane"]
- data["panZoom"] = preset["Generic"]["pan_zoom"]
- data["displayLights"] = lib.DISPLAY_LIGHTS_LABELS
-
- self.data = data
+ return defs
diff --git a/openpype/hosts/maya/plugins/create/create_rig.py b/openpype/hosts/maya/plugins/create/create_rig.py
index 8032e5fbbd..04104cb7cb 100644
--- a/openpype/hosts/maya/plugins/create/create_rig.py
+++ b/openpype/hosts/maya/plugins/create/create_rig.py
@@ -1,25 +1,25 @@
from maya import cmds
-from openpype.hosts.maya.api import (
- lib,
- plugin
-)
+from openpype.hosts.maya.api import plugin
-class CreateRig(plugin.Creator):
+class CreateRig(plugin.MayaCreator):
"""Artist-friendly rig with controls to direct motion"""
- name = "rigDefault"
+ identifier = "io.openpype.creators.maya.rig"
label = "Rig"
family = "rig"
icon = "wheelchair"
- def process(self):
+ def create(self, subset_name, instance_data, pre_create_data):
- with lib.undo_chunk():
- instance = super(CreateRig, self).process()
+ instance = super(CreateRig, self).create(subset_name,
+ instance_data,
+ pre_create_data)
- self.log.info("Creating Rig instance set up ...")
- controls = cmds.sets(name="controls_SET", empty=True)
- pointcache = cmds.sets(name="out_SET", empty=True)
- cmds.sets([controls, pointcache], forceElement=instance)
+ instance_node = instance.get("instance_node")
+
+ self.log.info("Creating Rig instance set up ...")
+ controls = cmds.sets(name="controls_SET", empty=True)
+ pointcache = cmds.sets(name="out_SET", empty=True)
+ cmds.sets([controls, pointcache], forceElement=instance_node)
diff --git a/openpype/hosts/maya/plugins/create/create_setdress.py b/openpype/hosts/maya/plugins/create/create_setdress.py
index 4246183fdb..594a3dc46d 100644
--- a/openpype/hosts/maya/plugins/create/create_setdress.py
+++ b/openpype/hosts/maya/plugins/create/create_setdress.py
@@ -1,16 +1,19 @@
from openpype.hosts.maya.api import plugin
+from openpype.lib import BoolDef
-class CreateSetDress(plugin.Creator):
+class CreateSetDress(plugin.MayaCreator):
"""A grouped package of loaded content"""
- name = "setdressMain"
+ identifier = "io.openpype.creators.maya.setdress"
label = "Set Dress"
family = "setdress"
icon = "cubes"
defaults = ["Main", "Anim"]
- def __init__(self, *args, **kwargs):
- super(CreateSetDress, self).__init__(*args, **kwargs)
-
- self.data["exactSetMembersOnly"] = True
+ def get_instance_attr_defs(self):
+ return [
+ BoolDef("exactSetMembersOnly",
+ label="Exact Set Members Only",
+ default=True)
+ ]
diff --git a/openpype/hosts/maya/plugins/create/create_unreal_skeletalmesh.py b/openpype/hosts/maya/plugins/create/create_unreal_skeletalmesh.py
index 6e72bf5324..4e2a99eced 100644
--- a/openpype/hosts/maya/plugins/create/create_unreal_skeletalmesh.py
+++ b/openpype/hosts/maya/plugins/create/create_unreal_skeletalmesh.py
@@ -1,47 +1,63 @@
# -*- coding: utf-8 -*-
"""Creator for Unreal Skeletal Meshes."""
from openpype.hosts.maya.api import plugin, lib
-from openpype.pipeline import legacy_io
+from openpype.lib import (
+ BoolDef,
+ TextDef
+)
+
from maya import cmds # noqa
-class CreateUnrealSkeletalMesh(plugin.Creator):
+class CreateUnrealSkeletalMesh(plugin.MayaCreator):
"""Unreal Static Meshes with collisions."""
- name = "staticMeshMain"
+
+ identifier = "io.openpype.creators.maya.unrealskeletalmesh"
label = "Unreal - Skeletal Mesh"
family = "skeletalMesh"
icon = "thumbs-up"
dynamic_subset_keys = ["asset"]
- joint_hints = []
+ # Defined in settings
+ joint_hints = set()
- def __init__(self, *args, **kwargs):
- """Constructor."""
- super(CreateUnrealSkeletalMesh, self).__init__(*args, **kwargs)
-
- @classmethod
- def get_dynamic_data(
- cls, variant, task_name, asset_id, project_name, host_name
- ):
- dynamic_data = super(CreateUnrealSkeletalMesh, cls).get_dynamic_data(
- variant, task_name, asset_id, project_name, host_name
+ def apply_settings(self, project_settings, system_settings):
+ """Apply project settings to creator"""
+ settings = (
+ project_settings["maya"]["create"]["CreateUnrealSkeletalMesh"]
)
- dynamic_data["asset"] = legacy_io.Session.get("AVALON_ASSET")
+ self.joint_hints = set(settings.get("joint_hints", []))
+
+ def get_dynamic_data(
+ self, variant, task_name, asset_doc, project_name, host_name, instance
+ ):
+ """
+ The default subset name templates for Unreal include {asset} and thus
+ we should pass that along as dynamic data.
+ """
+ dynamic_data = super(CreateUnrealSkeletalMesh, self).get_dynamic_data(
+ variant, task_name, asset_doc, project_name, host_name, instance
+ )
+ dynamic_data["asset"] = asset_doc["name"]
return dynamic_data
- def process(self):
- self.name = "{}_{}".format(self.family, self.name)
- with lib.undo_chunk():
- instance = super(CreateUnrealSkeletalMesh, self).process()
- content = cmds.sets(instance, query=True)
+ def create(self, subset_name, instance_data, pre_create_data):
+
+ with lib.undo_chunk():
+ instance = super(CreateUnrealSkeletalMesh, self).create(
+ subset_name, instance_data, pre_create_data)
+ instance_node = instance.get("instance_node")
+
+ # We reorganize the geometry that was originally added into the
+ # set into either 'joints_SET' or 'geometry_SET' based on the
+ # joint_hints from project settings
+ members = cmds.sets(instance_node, query=True)
+ cmds.sets(clear=instance_node)
- # empty set and process its former content
- cmds.sets(content, rm=instance)
geometry_set = cmds.sets(name="geometry_SET", empty=True)
joints_set = cmds.sets(name="joints_SET", empty=True)
- cmds.sets([geometry_set, joints_set], forceElement=instance)
- members = cmds.ls(content) or []
+ cmds.sets([geometry_set, joints_set], forceElement=instance_node)
for node in members:
if node in self.joint_hints:
@@ -49,20 +65,38 @@ class CreateUnrealSkeletalMesh(plugin.Creator):
else:
cmds.sets(node, forceElement=geometry_set)
- # Add animation data
- self.data.update(lib.collect_animation_data())
+ def get_instance_attr_defs(self):
- # Only renderable visible shapes
- self.data["renderableOnly"] = False
- # only nodes that are visible
- self.data["visibleOnly"] = False
- # Include parent groups
- self.data["includeParentHierarchy"] = False
- # Default to exporting world-space
- self.data["worldSpace"] = True
- # Default to suspend refresh.
- self.data["refresh"] = False
+ defs = lib.collect_animation_defs()
- # Add options for custom attributes
- self.data["attr"] = ""
- self.data["attrPrefix"] = ""
+ defs.extend([
+ BoolDef("renderableOnly",
+ label="Renderable Only",
+ tooltip="Only export renderable visible shapes",
+ default=False),
+ BoolDef("visibleOnly",
+ label="Visible Only",
+ tooltip="Only export dag objects visible during "
+ "frame range",
+ default=False),
+ BoolDef("includeParentHierarchy",
+ label="Include Parent Hierarchy",
+ tooltip="Whether to include parent hierarchy of nodes in "
+ "the publish instance",
+ default=False),
+ BoolDef("worldSpace",
+ label="World-Space Export",
+ default=True),
+ BoolDef("refresh",
+ label="Refresh viewport during export",
+ default=False),
+ TextDef("attr",
+ label="Custom Attributes",
+ default="",
+ placeholder="attr1, attr2"),
+ TextDef("attrPrefix",
+ label="Custom Attributes Prefix",
+ placeholder="prefix1, prefix2")
+ ])
+
+ return defs
diff --git a/openpype/hosts/maya/plugins/create/create_unreal_staticmesh.py b/openpype/hosts/maya/plugins/create/create_unreal_staticmesh.py
index 44cbee0502..3f96d91a54 100644
--- a/openpype/hosts/maya/plugins/create/create_unreal_staticmesh.py
+++ b/openpype/hosts/maya/plugins/create/create_unreal_staticmesh.py
@@ -1,58 +1,90 @@
# -*- coding: utf-8 -*-
"""Creator for Unreal Static Meshes."""
from openpype.hosts.maya.api import plugin, lib
-from openpype.settings import get_project_settings
-from openpype.pipeline import legacy_io
from maya import cmds # noqa
-class CreateUnrealStaticMesh(plugin.Creator):
+class CreateUnrealStaticMesh(plugin.MayaCreator):
"""Unreal Static Meshes with collisions."""
- name = "staticMeshMain"
+
+ identifier = "io.openpype.creators.maya.unrealstaticmesh"
label = "Unreal - Static Mesh"
family = "staticMesh"
icon = "cube"
dynamic_subset_keys = ["asset"]
- def __init__(self, *args, **kwargs):
- """Constructor."""
- super(CreateUnrealStaticMesh, self).__init__(*args, **kwargs)
- self._project_settings = get_project_settings(
- legacy_io.Session["AVALON_PROJECT"])
+ # Defined in settings
+ collision_prefixes = []
+
+ def apply_settings(self, project_settings, system_settings):
+ """Apply project settings to creator"""
+ settings = project_settings["maya"]["create"]["CreateUnrealStaticMesh"]
+ self.collision_prefixes = settings["collision_prefixes"]
- @classmethod
def get_dynamic_data(
- cls, variant, task_name, asset_id, project_name, host_name
+ self, variant, task_name, asset_doc, project_name, host_name, instance
):
- dynamic_data = super(CreateUnrealStaticMesh, cls).get_dynamic_data(
- variant, task_name, asset_id, project_name, host_name
+ """
+ The default subset name templates for Unreal include {asset} and thus
+ we should pass that along as dynamic data.
+ """
+ dynamic_data = super(CreateUnrealStaticMesh, self).get_dynamic_data(
+ variant, task_name, asset_doc, project_name, host_name, instance
)
- dynamic_data["asset"] = legacy_io.Session.get("AVALON_ASSET")
+ dynamic_data["asset"] = asset_doc["name"]
return dynamic_data
- def process(self):
- self.name = "{}_{}".format(self.family, self.name)
- with lib.undo_chunk():
- instance = super(CreateUnrealStaticMesh, self).process()
- content = cmds.sets(instance, query=True)
+ def create(self, subset_name, instance_data, pre_create_data):
+
+ with lib.undo_chunk():
+ instance = super(CreateUnrealStaticMesh, self).create(
+ subset_name, instance_data, pre_create_data)
+ instance_node = instance.get("instance_node")
+
+ # We reorganize the geometry that was originally added into the
+ # set into either 'collisions_SET' or 'geometry_SET' based on the
+ # collision_prefixes from project settings
+ members = cmds.sets(instance_node, query=True)
+ cmds.sets(clear=instance_node)
- # empty set and process its former content
- cmds.sets(content, rm=instance)
geometry_set = cmds.sets(name="geometry_SET", empty=True)
collisions_set = cmds.sets(name="collisions_SET", empty=True)
- cmds.sets([geometry_set, collisions_set], forceElement=instance)
+ cmds.sets([geometry_set, collisions_set],
+ forceElement=instance_node)
- members = cmds.ls(content, long=True) or []
+ members = cmds.ls(members, long=True) or []
children = cmds.listRelatives(members, allDescendents=True,
fullPath=True) or []
- children = cmds.ls(children, type="transform")
- for node in children:
- if cmds.listRelatives(node, type="shape"):
- if [
- n for n in self.collision_prefixes
- if node.startswith(n)
- ]:
- cmds.sets(node, forceElement=collisions_set)
- else:
- cmds.sets(node, forceElement=geometry_set)
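+ # Sort transforms that have shapes into the collision or geometry set
+ # based on whether their name matches a collision prefix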
+ transforms = cmds.ls(members + children, type="transform")
+ for transform in transforms:
+
+ if not cmds.listRelatives(transform,
+ type="shape",
+ noIntermediate=True):
+ # Exclude all transforms that have no direct shapes
+ continue
+
+ if self.has_collision_prefix(transform):
+ cmds.sets(transform, forceElement=collisions_set)
+ else:
+ cmds.sets(transform, forceElement=geometry_set)
+
+ def has_collision_prefix(self, node_path):
+ """Return whether node name of path matches collision prefix.
+
+ If the node name matches the collision prefix we add it to the
+ `collisions_SET` instead of the `geometry_SET`.
+
+ Args:
+ node_path (str): Maya node path.
+
+ Returns:
+ bool: Whether the node should be considered a collision mesh.
+
+ """
+ node_name = node_path.rsplit("|", 1)[-1]
+ for prefix in self.collision_prefixes:
+ if node_name.startswith(prefix):
+ return True
+ return False
diff --git a/openpype/hosts/maya/plugins/create/create_vrayproxy.py b/openpype/hosts/maya/plugins/create/create_vrayproxy.py
index d135073e82..b0a95538e1 100644
--- a/openpype/hosts/maya/plugins/create/create_vrayproxy.py
+++ b/openpype/hosts/maya/plugins/create/create_vrayproxy.py
@@ -1,10 +1,14 @@
-from openpype.hosts.maya.api import plugin
+from openpype.hosts.maya.api import (
+ plugin,
+ lib
+)
+from openpype.lib import BoolDef
-class CreateVrayProxy(plugin.Creator):
+class CreateVrayProxy(plugin.MayaCreator):
"""Alembic pointcache for animated data"""
- name = "vrayproxy"
+ identifier = "io.openpype.creators.maya.vrayproxy"
label = "VRay Proxy"
family = "vrayproxy"
icon = "gears"
@@ -12,15 +16,35 @@ class CreateVrayProxy(plugin.Creator):
vrmesh = True
alembic = True
- def __init__(self, *args, **kwargs):
- super(CreateVrayProxy, self).__init__(*args, **kwargs)
+ def get_instance_attr_defs(self):
- self.data["animation"] = False
- self.data["frameStart"] = 1
- self.data["frameEnd"] = 1
+ defs = [
+ BoolDef("animation",
+ label="Export Animation",
+ default=False)
+ ]
- # Write vertex colors
- self.data["vertexColors"] = False
+ # Add time range attributes but remove some attributes
+ # which this instance actually doesn't use
+ defs.extend(lib.collect_animation_defs())
+ remove = {"handleStart", "handleEnd", "step"}
+ defs = [attr_def for attr_def in defs if attr_def.key not in remove]
- self.data["vrmesh"] = self.vrmesh
- self.data["alembic"] = self.alembic
+ defs.extend([
+ BoolDef("vertexColors",
+ label="Write vertex colors",
+ tooltip="Write vertex colors with the geometry",
+ default=False),
+ BoolDef("vrmesh",
+ label="Export VRayMesh",
+ tooltip="Publish a .vrmesh (VRayMesh) file for "
+ "this VRayProxy",
+ default=self.vrmesh),
+ BoolDef("alembic",
+ label="Export Alembic",
+ tooltip="Publish a .abc (Alembic) file for "
+ "this VRayProxy",
+ default=self.alembic),
+ ])
+
+ return defs
diff --git a/openpype/hosts/maya/plugins/create/create_vrayscene.py b/openpype/hosts/maya/plugins/create/create_vrayscene.py
index 59d80e6d5b..d601dceb54 100644
--- a/openpype/hosts/maya/plugins/create/create_vrayscene.py
+++ b/openpype/hosts/maya/plugins/create/create_vrayscene.py
@@ -1,266 +1,52 @@
# -*- coding: utf-8 -*-
"""Create instance of vrayscene."""
-import os
-import json
-import appdirs
-import requests
-
-from maya import cmds
-import maya.app.renderSetup.model.renderSetup as renderSetup
from openpype.hosts.maya.api import (
- lib,
+ lib_rendersettings,
plugin
)
-from openpype.settings import (
- get_system_settings,
- get_project_settings
-)
-
-from openpype.lib import requests_get
-from openpype.pipeline import (
- CreatorError,
- legacy_io,
-)
-from openpype.modules import ModulesManager
+from openpype.pipeline import CreatorError
+from openpype.lib import BoolDef
-class CreateVRayScene(plugin.Creator):
+class CreateVRayScene(plugin.RenderlayerCreator):
"""Create Vray Scene."""
- label = "VRay Scene"
+ identifier = "io.openpype.creators.maya.vrayscene"
+
family = "vrayscene"
+ label = "VRay Scene"
icon = "cubes"
- _project_settings = None
+ render_settings = {}
+ singleton_node_name = "vraysceneMain"
- def __init__(self, *args, **kwargs):
- """Entry."""
- super(CreateVRayScene, self).__init__(*args, **kwargs)
- self._rs = renderSetup.instance()
- self.data["exportOnFarm"] = False
- deadline_settings = get_system_settings()["modules"]["deadline"]
+ @classmethod
+ def apply_settings(cls, project_settings, system_settings):
+ cls.render_settings = project_settings["maya"]["RenderSettings"]
- manager = ModulesManager()
- self.deadline_module = manager.modules_by_name["deadline"]
+ def create(self, subset_name, instance_data, pre_create_data):
+ # Only allow a single render instance to exist
+ if self._get_singleton_node():
+ raise CreatorError("A Render instance already exists - only "
+ "one can be configured.")
- if not deadline_settings["enabled"]:
- self.deadline_servers = {}
- return
- self._project_settings = get_project_settings(
- legacy_io.Session["AVALON_PROJECT"])
+ super(CreateVRayScene, self).create(subset_name,
+ instance_data,
+ pre_create_data)
- try:
- default_servers = deadline_settings["deadline_urls"]
- project_servers = (
- self._project_settings["deadline"]["deadline_servers"]
- )
- self.deadline_servers = {
- k: default_servers[k]
- for k in project_servers
- if k in default_servers
- }
+ # Apply default project render settings on create
+ if self.render_settings.get("apply_render_settings"):
+ lib_rendersettings.RenderSettings().set_default_renderer_settings()
- if not self.deadline_servers:
- self.deadline_servers = default_servers
+ def get_instance_attr_defs(self):
+ """Create instance settings."""
- except AttributeError:
- # Handle situation were we had only one url for deadline.
- # get default deadline webservice url from deadline module
- self.deadline_servers = self.deadline_module.deadline_urls
-
- def process(self):
- """Entry point."""
- exists = cmds.ls(self.name)
- if exists:
- return cmds.warning("%s already exists." % exists[0])
-
- use_selection = self.options.get("useSelection")
- with lib.undo_chunk():
- self._create_vray_instance_settings()
- self.instance = super(CreateVRayScene, self).process()
-
- index = 1
- namespace_name = "_{}".format(str(self.instance))
- try:
- cmds.namespace(rm=namespace_name)
- except RuntimeError:
- # namespace is not empty, so we leave it untouched
- pass
-
- while(cmds.namespace(exists=namespace_name)):
- namespace_name = "_{}{}".format(str(self.instance), index)
- index += 1
-
- namespace = cmds.namespace(add=namespace_name)
-
- # add Deadline server selection list
- if self.deadline_servers:
- cmds.scriptJob(
- attributeChange=[
- "{}.deadlineServers".format(self.instance),
- self._deadline_webservice_changed
- ])
-
- # create namespace with instance
- layers = self._rs.getRenderLayers()
- if use_selection:
- print(">>> processing existing layers")
- sets = []
- for layer in layers:
- print(" - creating set for {}".format(layer.name()))
- render_set = cmds.sets(
- n="{}:{}".format(namespace, layer.name()))
- sets.append(render_set)
- cmds.sets(sets, forceElement=self.instance)
-
- # if no render layers are present, create default one with
- # asterix selector
- if not layers:
- render_layer = self._rs.createRenderLayer('Main')
- collection = render_layer.createCollection("defaultCollection")
- collection.getSelector().setPattern('*')
-
- def _deadline_webservice_changed(self):
- """Refresh Deadline server dependent options."""
- # get selected server
- from maya import cmds
- webservice = self.deadline_servers[
- self.server_aliases[
- cmds.getAttr("{}.deadlineServers".format(self.instance))
- ]
+ return [
+ BoolDef("vraySceneMultipleFiles",
+ label="V-Ray Scene Multiple Files",
+ default=False),
+ BoolDef("exportOnFarm",
+ label="Export on farm",
+ default=False)
]
- pools = self.deadline_module.get_deadline_pools(webservice)
- cmds.deleteAttr("{}.primaryPool".format(self.instance))
- cmds.deleteAttr("{}.secondaryPool".format(self.instance))
- cmds.addAttr(self.instance, longName="primaryPool",
- attributeType="enum",
- enumName=":".join(pools))
- cmds.addAttr(self.instance, longName="secondaryPool",
- attributeType="enum",
- enumName=":".join(["-"] + pools))
-
- def _create_vray_instance_settings(self):
- # get pools
- pools = []
-
- system_settings = get_system_settings()["modules"]
-
- deadline_enabled = system_settings["deadline"]["enabled"]
- muster_enabled = system_settings["muster"]["enabled"]
- muster_url = system_settings["muster"]["MUSTER_REST_URL"]
-
- if deadline_enabled and muster_enabled:
- self.log.error(
- "Both Deadline and Muster are enabled. " "Cannot support both."
- )
- raise CreatorError("Both Deadline and Muster are enabled")
-
- self.server_aliases = self.deadline_servers.keys()
- self.data["deadlineServers"] = self.server_aliases
-
- if deadline_enabled:
- # if default server is not between selected, use first one for
- # initial list of pools.
- try:
- deadline_url = self.deadline_servers["default"]
- except KeyError:
- deadline_url = [
- self.deadline_servers[k]
- for k in self.deadline_servers.keys()
- ][0]
-
- pool_names = self.deadline_module.get_deadline_pools(deadline_url)
-
- if muster_enabled:
- self.log.info(">>> Loading Muster credentials ...")
- self._load_credentials()
- self.log.info(">>> Getting pools ...")
- try:
- pools = self._get_muster_pools()
- except requests.exceptions.HTTPError as e:
- if e.startswith("401"):
- self.log.warning("access token expired")
- self._show_login()
- raise CreatorError("Access token expired")
- except requests.exceptions.ConnectionError:
- self.log.error("Cannot connect to Muster API endpoint.")
- raise CreatorError("Cannot connect to {}".format(muster_url))
- pool_names = []
- for pool in pools:
- self.log.info(" - pool: {}".format(pool["name"]))
- pool_names.append(pool["name"])
-
- self.data["primaryPool"] = pool_names
-
- self.data["suspendPublishJob"] = False
- self.data["priority"] = 50
- self.data["whitelist"] = False
- self.data["machineList"] = ""
- self.data["vraySceneMultipleFiles"] = False
- self.options = {"useSelection": False} # Force no content
-
- def _load_credentials(self):
- """Load Muster credentials.
-
- Load Muster credentials from file and set ``MUSTER_USER``,
- ``MUSTER_PASSWORD``, ``MUSTER_REST_URL`` is loaded from presets.
-
- Raises:
- CreatorError: If loaded credentials are invalid.
- AttributeError: If ``MUSTER_REST_URL`` is not set.
-
- """
- app_dir = os.path.normpath(appdirs.user_data_dir("pype-app", "pype"))
- file_name = "muster_cred.json"
- fpath = os.path.join(app_dir, file_name)
- file = open(fpath, "r")
- muster_json = json.load(file)
- self._token = muster_json.get("token", None)
- if not self._token:
- self._show_login()
- raise CreatorError("Invalid access token for Muster")
- file.close()
- self.MUSTER_REST_URL = os.environ.get("MUSTER_REST_URL")
- if not self.MUSTER_REST_URL:
- raise AttributeError("Muster REST API url not set")
-
- def _get_muster_pools(self):
- """Get render pools from Muster.
-
- Raises:
- CreatorError: If pool list cannot be obtained from Muster.
-
- """
- params = {"authToken": self._token}
- api_entry = "/api/pools/list"
- response = requests_get(self.MUSTER_REST_URL + api_entry,
- params=params)
- if response.status_code != 200:
- if response.status_code == 401:
- self.log.warning("Authentication token expired.")
- self._show_login()
- else:
- self.log.error(
- ("Cannot get pools from "
- "Muster: {}").format(response.status_code)
- )
- raise CreatorError("Cannot get pools from Muster")
- try:
- pools = response.json()["ResponseData"]["pools"]
- except ValueError as e:
- self.log.error("Invalid response from Muster server {}".format(e))
- raise CreatorError("Invalid response from Muster server")
-
- return pools
-
- def _show_login(self):
- # authentication token expired so we need to login to Muster
- # again to get it. We use Pype API call to show login window.
- api_url = "{}/muster/show_login".format(
- os.environ["OPENPYPE_WEBSERVER_URL"])
- self.log.debug(api_url)
- login_response = requests_get(api_url, timeout=1)
- if login_response.status_code != 200:
- self.log.error("Cannot show login form to Muster")
- raise CreatorError("Cannot show login form to Muster")
diff --git a/openpype/hosts/maya/plugins/create/create_workfile.py b/openpype/hosts/maya/plugins/create/create_workfile.py
new file mode 100644
index 0000000000..d84753cd7f
--- /dev/null
+++ b/openpype/hosts/maya/plugins/create/create_workfile.py
@@ -0,0 +1,88 @@
+# -*- coding: utf-8 -*-
+"""Creator plugin for creating workfiles."""
+from openpype.pipeline import CreatedInstance, AutoCreator
+from openpype.client import get_asset_by_name
+from openpype.hosts.maya.api import plugin
+from maya import cmds
+
+
+class CreateWorkfile(plugin.MayaCreatorBase, AutoCreator):
+ """Workfile auto-creator."""
+ identifier = "io.openpype.creators.maya.workfile"
+ label = "Workfile"
+ family = "workfile"
+ icon = "fa5.file"
+
+ default_variant = "Main"
+
+ def create(self):
+
+ variant = self.default_variant
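+ # Find the workfile instance in the create context if it already exists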
+ current_instance = next(
+ (
+ instance for instance in self.create_context.instances
+ if instance.creator_identifier == self.identifier
+ ), None)
+
+ project_name = self.project_name
+ asset_name = self.create_context.get_current_asset_name()
+ task_name = self.create_context.get_current_task_name()
+ host_name = self.create_context.host_name
+
+ if current_instance is None:
+ asset_doc = get_asset_by_name(project_name, asset_name)
+ subset_name = self.get_subset_name(
+ variant, task_name, asset_doc, project_name, host_name
+ )
+ data = {
+ "asset": asset_name,
+ "task": task_name,
+ "variant": variant
+ }
+ data.update(
+ self.get_dynamic_data(
+ variant, task_name, asset_doc,
+ project_name, host_name, current_instance)
+ )
+ self.log.info("Auto-creating workfile instance...")
+ current_instance = CreatedInstance(
+ self.family, subset_name, data, self
+ )
+ self._add_instance_to_context(current_instance)
+ elif (
+ current_instance["asset"] != asset_name
+ or current_instance["task"] != task_name
+ ):
+ # Update the instance context if it is not the same
+ asset_doc = get_asset_by_name(project_name, asset_name)
+ subset_name = self.get_subset_name(
+ variant, task_name, asset_doc, project_name, host_name
+ )
+ current_instance["asset"] = asset_name
+ current_instance["task"] = task_name
+ current_instance["subset"] = subset_name
+
+ def collect_instances(self):
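+ # Collect this creator's instances from the cached objectSets in the scene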
+ self.cache_subsets(self.collection_shared_data)
+ cached_subsets = self.collection_shared_data["maya_cached_subsets"]
+ for node in cached_subsets.get(self.identifier, []):
+ node_data = self.read_instance_node(node)
+
+ created_instance = CreatedInstance.from_existing(node_data, self)
+ self._add_instance_to_context(created_instance)
+
+ def update_instances(self, update_list):
+ for created_inst, _changes in update_list:
+ data = created_inst.data_to_store()
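+ # Create the backing objectSet on demand if the instance
+ # has not been imprinted in the scene yet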
+ node = data.get("instance_node")
+ if not node:
+ node = self.create_node()
+ created_inst["instance_node"] = node
+ data = created_inst.data_to_store()
+
+ self.imprint_instance_node(node, data)
+
+ def create_node(self):
+ node = cmds.sets(empty=True, name="workfileMain")
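+ # Hide the set from the outliner to avoid cluttering the artist's view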
+ cmds.setAttr(node + ".hiddenInOutliner", True)
+ return node
diff --git a/openpype/hosts/maya/plugins/create/create_xgen.py b/openpype/hosts/maya/plugins/create/create_xgen.py
index 70e23cf47b..eaafb0959a 100644
--- a/openpype/hosts/maya/plugins/create/create_xgen.py
+++ b/openpype/hosts/maya/plugins/create/create_xgen.py
@@ -1,10 +1,10 @@
from openpype.hosts.maya.api import plugin
-class CreateXgen(plugin.Creator):
+class CreateXgen(plugin.MayaCreator):
"""Xgen"""
- name = "xgen"
+ identifier = "io.openpype.creators.maya.xgen"
label = "Xgen"
family = "xgen"
icon = "pagelines"
diff --git a/openpype/hosts/maya/plugins/create/create_yeti_cache.py b/openpype/hosts/maya/plugins/create/create_yeti_cache.py
index e8c3203f21..395aa62325 100644
--- a/openpype/hosts/maya/plugins/create/create_yeti_cache.py
+++ b/openpype/hosts/maya/plugins/create/create_yeti_cache.py
@@ -1,15 +1,14 @@
-from collections import OrderedDict
-
from openpype.hosts.maya.api import (
lib,
plugin
)
+from openpype.lib import NumberDef
-class CreateYetiCache(plugin.Creator):
+class CreateYetiCache(plugin.MayaCreator):
"""Output for procedural plugin nodes of Yeti """
- name = "yetiDefault"
+ identifier = "io.openpype.creators.maya.yeticache"
label = "Yeti Cache"
family = "yeticache"
icon = "pagelines"
@@ -17,14 +16,23 @@ class CreateYetiCache(plugin.Creator):
- def __init__(self, *args, **kwargs):
- super(CreateYetiCache, self).__init__(*args, **kwargs)
+ def get_instance_attr_defs(self):
- self.data["preroll"] = 0
+ defs = [
+ NumberDef("preroll",
+ label="Preroll",
+ minimum=0,
+ default=0,
+ decimals=0)
+ ]
# Add animation data without step and handles
- anim_data = lib.collect_animation_data()
- anim_data.pop("step")
- anim_data.pop("handleStart")
- anim_data.pop("handleEnd")
- self.data.update(anim_data)
+ defs.extend(lib.collect_animation_defs())
+ remove = {"step", "handleStart", "handleEnd"}
+ defs = [attr_def for attr_def in defs if attr_def.key not in remove]
- # Add samples
- self.data["samples"] = 3
+ # Add samples after frame range
+ defs.append(
+ NumberDef("samples",
+ label="Samples",
+ default=3,
+ decimals=0)
+ )
+
+ return defs
diff --git a/openpype/hosts/maya/plugins/create/create_yeti_rig.py b/openpype/hosts/maya/plugins/create/create_yeti_rig.py
index 7abe2988cd..445bcf46d8 100644
--- a/openpype/hosts/maya/plugins/create/create_yeti_rig.py
+++ b/openpype/hosts/maya/plugins/create/create_yeti_rig.py
@@ -6,18 +6,22 @@ from openpype.hosts.maya.api import (
)
-class CreateYetiRig(plugin.Creator):
+class CreateYetiRig(plugin.MayaCreator):
"""Output for procedural plugin nodes ( Yeti / XGen / etc)"""
+ identifier = "io.openpype.creators.maya.yetirig"
label = "Yeti Rig"
family = "yetiRig"
icon = "usb"
- def process(self):
+ def create(self, subset_name, instance_data, pre_create_data):
with lib.undo_chunk():
- instance = super(CreateYetiRig, self).process()
+ instance = super(CreateYetiRig, self).create(subset_name,
+ instance_data,
+ pre_create_data)
+ instance_node = instance.get("instance_node")
self.log.info("Creating Rig instance set up ...")
input_meshes = cmds.sets(name="input_SET", empty=True)
- cmds.sets(input_meshes, forceElement=instance)
+ cmds.sets(input_meshes, forceElement=instance_node)
diff --git a/openpype/hosts/maya/plugins/load/load_look.py b/openpype/hosts/maya/plugins/load/load_look.py
index 8f3e017658..b060ae2b05 100644
--- a/openpype/hosts/maya/plugins/load/load_look.py
+++ b/openpype/hosts/maya/plugins/load/load_look.py
@@ -29,7 +29,7 @@ class LookLoader(openpype.hosts.maya.api.plugin.ReferenceLoader):
color = "orange"
def process_reference(self, context, name, namespace, options):
- import maya.cmds as cmds
+ from maya import cmds
with lib.maintained_selection():
file_url = self.prepare_root_value(self.fname,
@@ -113,8 +113,8 @@ class LookLoader(openpype.hosts.maya.api.plugin.ReferenceLoader):
# region compute lookup
nodes_by_id = defaultdict(list)
- for n in nodes:
- nodes_by_id[lib.get_id(n)].append(n)
+ for node in nodes:
+ nodes_by_id[lib.get_id(node)].append(node)
lib.apply_attributes(attributes, nodes_by_id)
def _get_nodes_with_shader(self, shader_nodes):
@@ -125,14 +125,16 @@ class LookLoader(openpype.hosts.maya.api.plugin.ReferenceLoader):
Returns
node names
"""
- import maya.cmds as cmds
+ from maya import cmds
- nodes_list = []
for shader in shader_nodes:
- connections = cmds.listConnections(cmds.listHistory(shader, f=1),
+ future = cmds.listHistory(shader, future=True)
+ connections = cmds.listConnections(future,
type='mesh')
if connections:
- for connection in connections:
- nodes_list.extend(cmds.listRelatives(connection,
- shapes=True))
- return nodes_list
+ # Ensure unique entries only to optimize query and results
+ connections = list(set(connections))
+ return cmds.listRelatives(connections,
+ shapes=True,
+ fullPath=True) or []
+ return []
diff --git a/openpype/hosts/maya/plugins/load/load_reference.py b/openpype/hosts/maya/plugins/load/load_reference.py
index 74ca27ff3c..deadd5b9d3 100644
--- a/openpype/hosts/maya/plugins/load/load_reference.py
+++ b/openpype/hosts/maya/plugins/load/load_reference.py
@@ -221,6 +221,7 @@ class ReferenceLoader(openpype.hosts.maya.api.plugin.ReferenceLoader):
self._lock_camera_transforms(members)
def _post_process_rig(self, name, namespace, context, options):
+
nodes = self[:]
create_rig_animation_instance(
nodes, context, namespace, options=options, log=self.log
diff --git a/openpype/hosts/maya/plugins/publish/collect_current_file.py b/openpype/hosts/maya/plugins/publish/collect_current_file.py
new file mode 100644
index 0000000000..e777a209d4
--- /dev/null
+++ b/openpype/hosts/maya/plugins/publish/collect_current_file.py
@@ -0,0 +1,17 @@
+
+import pyblish.api
+
+from maya import cmds
+
+
+class CollectCurrentFile(pyblish.api.ContextPlugin):
+ """Inject the current working file."""
+
+ order = pyblish.api.CollectorOrder - 0.4
+ label = "Maya Current File"
+ hosts = ['maya']
+ families = ["workfile"]
+
+ def process(self, context):
+ """Inject the current working file"""
+ context.data['currentFile'] = cmds.file(query=True, sceneName=True)
diff --git a/openpype/hosts/maya/plugins/publish/collect_inputs.py b/openpype/hosts/maya/plugins/publish/collect_inputs.py
index 895c92762b..30ed21da9c 100644
--- a/openpype/hosts/maya/plugins/publish/collect_inputs.py
+++ b/openpype/hosts/maya/plugins/publish/collect_inputs.py
@@ -172,7 +172,7 @@ class CollectUpstreamInputs(pyblish.api.InstancePlugin):
"""Collects inputs from nodes in renderlayer, incl. shaders + camera"""
# Get the renderlayer
- renderlayer = instance.data.get("setMembers")
+ renderlayer = instance.data.get("renderlayer")
if renderlayer == "defaultRenderLayer":
# Assume all loaded containers in the scene are inputs
diff --git a/openpype/hosts/maya/plugins/publish/collect_instances.py b/openpype/hosts/maya/plugins/publish/collect_instances.py
index 74bdc11a2c..5f914b40d7 100644
--- a/openpype/hosts/maya/plugins/publish/collect_instances.py
+++ b/openpype/hosts/maya/plugins/publish/collect_instances.py
@@ -1,12 +1,11 @@
from maya import cmds
import pyblish.api
-import json
from openpype.hosts.maya.api.lib import get_all_children
-class CollectInstances(pyblish.api.ContextPlugin):
- """Gather instances by objectSet and pre-defined attribute
+class CollectNewInstances(pyblish.api.InstancePlugin):
+ """Gather members for instances and pre-defined attribute
This collector takes into account assets that are associated with
an objectSet and marked with a unique identifier;
@@ -25,134 +24,70 @@ class CollectInstances(pyblish.api.ContextPlugin):
"""
- label = "Collect Instances"
+ label = "Collect New Instance Data"
order = pyblish.api.CollectorOrder
hosts = ["maya"]
- def process(self, context):
+ def process(self, instance):
- objectset = cmds.ls("*.id", long=True, type="objectSet",
- recursive=True, objectsOnly=True)
+ objset = instance.data.get("instance_node")
+ if not objset:
+ self.log.debug("Instance has no `instance_node` data")
- context.data['objectsets'] = objectset
- for objset in objectset:
-
- if not cmds.attributeQuery("id", node=objset, exists=True):
- continue
-
- id_attr = "{}.id".format(objset)
- if cmds.getAttr(id_attr) != "pyblish.avalon.instance":
- continue
-
- # The developer is responsible for specifying
- # the family of each instance.
- has_family = cmds.attributeQuery("family",
- node=objset,
- exists=True)
- assert has_family, "\"%s\" was missing a family" % objset
-
- members = cmds.sets(objset, query=True)
- if members is None:
- self.log.warning("Skipped empty instance: \"%s\" " % objset)
- continue
-
- self.log.info("Creating instance for {}".format(objset))
-
- data = dict()
-
- # Apply each user defined attribute as data
- for attr in cmds.listAttr(objset, userDefined=True) or list():
- try:
- value = cmds.getAttr("%s.%s" % (objset, attr))
- except Exception:
- # Some attributes cannot be read directly,
- # such as mesh and color attributes. These
- # are considered non-essential to this
- # particular publishing pipeline.
- value = None
- data[attr] = value
-
- # temporarily translation of `active` to `publish` till issue has
- # been resolved, https://github.com/pyblish/pyblish-base/issues/307
- if "active" in data:
- data["publish"] = data["active"]
+ # TODO: We might not want to do this in the future
+ # Merge creator attributes into instance.data just so backwards
+ # compatible code still runs as expected
+ creator_attributes = instance.data.get("creator_attributes", {})
+ if creator_attributes:
+ instance.data.update(creator_attributes)
+ members = cmds.sets(objset, query=True) or []
+ if members:
# Collect members
members = cmds.ls(members, long=True) or []
dag_members = cmds.ls(members, type="dagNode", long=True)
children = get_all_children(dag_members)
children = cmds.ls(children, noIntermediate=True, long=True)
-
- parents = []
- if data.get("includeParentHierarchy", True):
- # If `includeParentHierarchy` then include the parents
- # so they will also be picked up in the instance by validators
- parents = self.get_all_parents(members)
+ parents = (
+ self.get_all_parents(members)
+ if creator_attributes.get("includeParentHierarchy", True)
+ else []
+ )
members_hierarchy = list(set(members + children + parents))
- if 'families' not in data:
- data['families'] = [data.get('family')]
-
- # Create the instance
- instance = context.create_instance(objset)
instance[:] = members_hierarchy
- instance.data["objset"] = objset
- # Store the exact members of the object set
- instance.data["setMembers"] = members
+ elif instance.data["family"] != "workfile":
+ self.log.warning("Empty instance: \"%s\" " % objset)
+ # Store the exact members of the object set
+ instance.data["setMembers"] = members
- # Define nice label
- name = cmds.ls(objset, long=False)[0] # use short name
- label = "{0} ({1})".format(name, data["asset"])
+ # TODO: This might make more sense as a separate collector
+ # Convert frame values to integers
+ for attr_name in (
+ "handleStart", "handleEnd", "frameStart", "frameEnd",
+ ):
+ value = instance.data.get(attr_name)
+ if value is not None:
+ instance.data[attr_name] = int(value)
- # Convert frame values to integers
- for attr_name in (
- "handleStart", "handleEnd", "frameStart", "frameEnd",
- ):
- value = data.get(attr_name)
- if value is not None:
- data[attr_name] = int(value)
+ # Append start frame and end frame to label if present
+ if "frameStart" in instance.data and "frameEnd" in instance.data:
+ # Take handles from context if not set locally on the instance
+ for key in ["handleStart", "handleEnd"]:
+ if key not in instance.data:
+ value = instance.context.data[key]
+ if value is not None:
+ value = int(value)
+ instance.data[key] = value
- # Append start frame and end frame to label if present
- if "frameStart" in data and "frameEnd" in data:
- # Take handles from context if not set locally on the instance
- for key in ["handleStart", "handleEnd"]:
- if key not in data:
- value = context.data[key]
- if value is not None:
- value = int(value)
- data[key] = value
-
- data["frameStartHandle"] = int(
- data["frameStart"] - data["handleStart"]
- )
- data["frameEndHandle"] = int(
- data["frameEnd"] + data["handleEnd"]
- )
-
- label += " [{0}-{1}]".format(
- data["frameStartHandle"], data["frameEndHandle"]
- )
-
- instance.data["label"] = label
- instance.data.update(data)
- self.log.debug("{}".format(instance.data))
-
- # Produce diagnostic message for any graphical
- # user interface interested in visualising it.
- self.log.info("Found: \"%s\" " % instance.data["name"])
- self.log.debug(
- "DATA: {} ".format(json.dumps(instance.data, indent=4)))
-
- def sort_by_family(instance):
- """Sort by family"""
- return instance.data.get("families", instance.data.get("family"))
-
- # Sort/grouped by family (preserving local index)
- context[:] = sorted(context, key=sort_by_family)
-
- return context
+ instance.data["frameStartHandle"] = int(
+ instance.data["frameStart"] - instance.data["handleStart"]
+ )
+ instance.data["frameEndHandle"] = int(
+ instance.data["frameEnd"] + instance.data["handleEnd"]
+ )
def get_all_parents(self, nodes):
"""Get all parents by using string operations (optimization)
diff --git a/openpype/hosts/maya/plugins/publish/collect_look.py b/openpype/hosts/maya/plugins/publish/collect_look.py
index 287ddc228b..b3da920566 100644
--- a/openpype/hosts/maya/plugins/publish/collect_look.py
+++ b/openpype/hosts/maya/plugins/publish/collect_look.py
@@ -285,17 +285,17 @@ class CollectLook(pyblish.api.InstancePlugin):
instance: Instance to collect.
"""
- self.log.info("Looking for look associations "
+ self.log.debug("Looking for look associations "
"for %s" % instance.data['name'])
# Discover related object sets
- self.log.info("Gathering sets ...")
+ self.log.debug("Gathering sets ...")
sets = self.collect_sets(instance)
# Lookup set (optimization)
instance_lookup = set(cmds.ls(instance, long=True))
- self.log.info("Gathering set relations ...")
+ self.log.debug("Gathering set relations ...")
# Ensure iteration happen in a list so we can remove keys from the
# dict within the loop
@@ -308,7 +308,7 @@ class CollectLook(pyblish.api.InstancePlugin):
# if node is specified as renderer node type, it will be
# serialized with its attributes.
if cmds.nodeType(obj_set) in RENDERER_NODE_TYPES:
- self.log.info("- {} is {}".format(
+ self.log.debug("- {} is {}".format(
obj_set, cmds.nodeType(obj_set)))
node_attrs = []
@@ -354,13 +354,13 @@ class CollectLook(pyblish.api.InstancePlugin):
# Remove sets that didn't have any members assigned in the end
# Thus the data will be limited to only what we need.
- self.log.info("obj_set {}".format(sets[obj_set]))
+ self.log.debug("obj_set {}".format(sets[obj_set]))
if not sets[obj_set]["members"]:
self.log.info(
"Removing redundant set information: {}".format(obj_set))
sets.pop(obj_set, None)
- self.log.info("Gathering attribute changes to instance members..")
+ self.log.debug("Gathering attribute changes to instance members..")
attributes = self.collect_attributes_changed(instance)
# Store data on the instance
@@ -433,14 +433,14 @@ class CollectLook(pyblish.api.InstancePlugin):
for node_type in all_supported_nodes:
files.extend(cmds.ls(history, type=node_type, long=True))
- self.log.info("Collected file nodes:\n{}".format(files))
+ self.log.debug("Collected file nodes:\n{}".format(files))
# Collect textures if any file nodes are found
instance.data["resources"] = []
for n in files:
for res in self.collect_resources(n):
instance.data["resources"].append(res)
- self.log.info("Collected resources: {}".format(
+ self.log.debug("Collected resources: {}".format(
instance.data["resources"]))
# Log warning when no relevant sets were retrieved for the look.
@@ -536,7 +536,7 @@ class CollectLook(pyblish.api.InstancePlugin):
# Collect changes to "custom" attributes
node_attrs = get_look_attrs(node)
- self.log.info(
+ self.log.debug(
"Node \"{0}\" attributes: {1}".format(node, node_attrs)
)
diff --git a/openpype/hosts/maya/plugins/publish/collect_pointcache.py b/openpype/hosts/maya/plugins/publish/collect_pointcache.py
index d0430c5612..bb9065792f 100644
--- a/openpype/hosts/maya/plugins/publish/collect_pointcache.py
+++ b/openpype/hosts/maya/plugins/publish/collect_pointcache.py
@@ -16,14 +16,16 @@ class CollectPointcache(pyblish.api.InstancePlugin):
instance.data["families"].append("publish.farm")
proxy_set = None
- for node in instance.data["setMembers"]:
- if cmds.nodeType(node) != "objectSet":
- continue
- members = cmds.sets(node, query=True)
- if members is None:
- self.log.warning("Skipped empty objectset: \"%s\" " % node)
- continue
+ for node in cmds.ls(instance.data["setMembers"],
+ exactType="objectSet"):
+ # Find proxy_SET objectSet in the instance for proxy meshes
if node.endswith("proxy_SET"):
+ members = cmds.sets(node, query=True)
+ if members is None:
+ self.log.debug("Skipped empty proxy_SET: \"%s\" " % node)
+ continue
+ self.log.debug("Found proxy set: {}".format(node))
+
proxy_set = node
instance.data["proxy"] = []
instance.data["proxyRoots"] = []
@@ -36,8 +38,9 @@ class CollectPointcache(pyblish.api.InstancePlugin):
cmds.listRelatives(member, shapes=True, fullPath=True)
)
self.log.debug(
- "proxy members: {}".format(instance.data["proxy"])
+ "Found proxy members: {}".format(instance.data["proxy"])
)
+ break
if proxy_set:
instance.remove(proxy_set)
diff --git a/openpype/hosts/maya/plugins/publish/collect_render.py b/openpype/hosts/maya/plugins/publish/collect_render.py
index babd494758..2b31e61d6d 100644
--- a/openpype/hosts/maya/plugins/publish/collect_render.py
+++ b/openpype/hosts/maya/plugins/publish/collect_render.py
@@ -39,27 +39,29 @@ Provides:
instance -> pixelAspect
"""
-import re
import os
import platform
import json
from maya import cmds
-import maya.app.renderSetup.model.renderSetup as renderSetup
import pyblish.api
+from openpype.pipeline import KnownPublishError
from openpype.lib import get_formatted_current_time
-from openpype.pipeline import legacy_io
-from openpype.hosts.maya.api.lib_renderproducts import get as get_layer_render_products # noqa: E501
+from openpype.hosts.maya.api.lib_renderproducts import (
+ get as get_layer_render_products,
+ UnsupportedRendererException
+)
from openpype.hosts.maya.api import lib
-class CollectMayaRender(pyblish.api.ContextPlugin):
+class CollectMayaRender(pyblish.api.InstancePlugin):
"""Gather all publishable render layers from renderSetup."""
order = pyblish.api.CollectorOrder + 0.01
hosts = ["maya"]
+ families = ["renderlayer"]
label = "Collect Render Layers"
sync_workfile_version = False
@@ -69,388 +71,251 @@ class CollectMayaRender(pyblish.api.ContextPlugin):
"underscore": "_"
}
- def process(self, context):
- """Entry point to collector."""
- render_instance = None
+ def process(self, instance):
- for instance in context:
- if "rendering" in instance.data["families"]:
- render_instance = instance
- render_instance.data["remove"] = True
+ # TODO: Re-add force enable of workfile instance?
+ # TODO: Re-add legacy layer support with LAYER_ prefix but in Creator
+ # TODO: Set and collect active state of RenderLayer in Creator using
+ # renderlayer.isRenderable()
+ context = instance.context
- # make sure workfile instance publishing is enabled
- if "workfile" in instance.data["families"]:
- instance.data["publish"] = True
-
- if not render_instance:
- self.log.info(
- "No render instance found, skipping render "
- "layer collection."
- )
- return
-
- render_globals = render_instance
- collected_render_layers = render_instance.data["setMembers"]
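+ # Get the renderSetup layer (passed along via transientData) and the
+ # instance's objectSet node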
+ layer = instance.data["transientData"]["layer"]
+ objset = instance.data.get("instance_node")
filepath = context.data["currentFile"].replace("\\", "/")
- asset = legacy_io.Session["AVALON_ASSET"]
workspace = context.data["workspaceDir"]
- # Retrieve render setup layers
- rs = renderSetup.instance()
- maya_render_layers = {
- layer.name(): layer for layer in rs.getRenderLayers()
+ # check if layer is renderable
+ if not layer.isRenderable():
+ msg = "Render layer [ {} ] is not " "renderable".format(
+ layer.name()
+ )
+ self.log.warning(msg)
+
+ # detect if there are sets (subsets) to attach render to
+ sets = cmds.sets(objset, query=True) or []
+ attach_to = []
+ for s in sets:
+ if not cmds.attributeQuery("family", node=s, exists=True):
+ continue
+
+ attach_to.append(
+ {
+ "version": None, # we need integrator for that
+ "subset": s,
+ "family": cmds.getAttr("{}.family".format(s)),
+ }
+ )
+ self.log.info(" -> attach render to: {}".format(s))
+
+ layer_name = layer.name()
+
+ # collect all frames we are expecting to be rendered
+ # return all expected files for all cameras and aovs in given
+ # frame range
+ try:
+ layer_render_products = get_layer_render_products(layer.name())
+ except UnsupportedRendererException as exc:
+ raise KnownPublishError(exc)
+ render_products = layer_render_products.layer_data.products
+ assert render_products, "no render products generated"
+ expected_files = []
+ multipart = False
+ for product in render_products:
+ if product.multipart:
+ multipart = True
+ product_name = product.productName
+ if product.camera and layer_render_products.has_camera_token():
+ product_name = "{}{}".format(
+ product.camera,
+ "_{}".format(product_name) if product_name else "")
+ expected_files.append(
+ {
+ product_name: layer_render_products.get_files(
+ product)
+ })
+
+ has_cameras = any(product.camera for product in render_products)
+ assert has_cameras, "No render cameras found."
+
+ self.log.info("multipart: {}".format(
+ multipart))
+ assert expected_files, "no file names were generated, this is a bug"
+ self.log.info(
+ "expected files: {}".format(
+ json.dumps(expected_files, indent=4, sort_keys=True)
+ )
+ )
+
+ # if we want to attach render to subset, check if we have AOV's
+ # in expectedFiles. If so, raise error as we cannot attach AOV
+ # (considered to be subset on its own) to another subset
+ if attach_to:
+ assert isinstance(expected_files, list), (
+ "attaching multiple AOVs or renderable cameras to "
+ "subset is not supported"
+ )
+
+ # append full path
+ aov_dict = {}
+ default_render_folder = context.data.get("project_settings")\
+ .get("maya")\
+ .get("RenderSettings")\
+ .get("default_render_image_folder") or ""
+ # replace relative paths with absolute. Render products are
+ # returned as list of dictionaries.
+ publish_meta_path = None
+ for aov in expected_files:
+ full_paths = []
+ aov_first_key = list(aov.keys())[0]
+ for file in aov[aov_first_key]:
+ full_path = os.path.join(workspace, default_render_folder,
+ file)
+ full_path = full_path.replace("\\", "/")
+ full_paths.append(full_path)
+ publish_meta_path = os.path.dirname(full_path)
+ aov_dict[aov_first_key] = full_paths
+ full_exp_files = [aov_dict]
+ self.log.info(full_exp_files)
+
+ if publish_meta_path is None:
+ raise KnownPublishError("Unable to detect any expected output "
+ "images for: {}. Make sure you have a "
+ "renderable camera and a valid frame "
+ "range set for your renderlayer."
+ "".format(instance.name))
+
+ frame_start_render = int(self.get_render_attribute(
+ "startFrame", layer=layer_name))
+ frame_end_render = int(self.get_render_attribute(
+ "endFrame", layer=layer_name))
+
+ if (int(context.data["frameStartHandle"]) == frame_start_render
+ and int(context.data["frameEndHandle"]) == frame_end_render): # noqa: W503, E501
+
+ handle_start = context.data["handleStart"]
+ handle_end = context.data["handleEnd"]
+ frame_start = context.data["frameStart"]
+ frame_end = context.data["frameEnd"]
+ frame_start_handle = context.data["frameStartHandle"]
+ frame_end_handle = context.data["frameEndHandle"]
+ else:
+ handle_start = 0
+ handle_end = 0
+ frame_start = frame_start_render
+ frame_end = frame_end_render
+ frame_start_handle = frame_start_render
+ frame_end_handle = frame_end_render
+
+ # find common path to store metadata
+ # so if image prefix is branching to many directories
+ # metadata file will be located in top-most common
+ # directory.
+ # TODO: use `os.path.commonpath()` after switch to Python 3
+ publish_meta_path = os.path.normpath(publish_meta_path)
+ common_publish_meta_path = os.path.splitdrive(
+ publish_meta_path)[0]
+ if common_publish_meta_path:
+ common_publish_meta_path += os.path.sep
+ for part in publish_meta_path.replace(
+ common_publish_meta_path, "").split(os.path.sep):
+ common_publish_meta_path = os.path.join(
+ common_publish_meta_path, part)
+ if part == layer_name:
+ break
+
+ # TODO: replace this terrible linux hotfix with real solution :)
+ if platform.system().lower() in ["linux", "darwin"]:
+ common_publish_meta_path = "/" + common_publish_meta_path
+
+ self.log.info(
+ "Publish meta path: {}".format(common_publish_meta_path))
+
+ # Get layer specific settings, might be overrides
+ colorspace_data = lib.get_color_management_preferences()
+ data = {
+ "farm": True,
+ "attachTo": attach_to,
+
+ "multipartExr": multipart,
+ "review": instance.data.get("review") or False,
+
+ # Frame range
+ "handleStart": handle_start,
+ "handleEnd": handle_end,
+ "frameStart": frame_start,
+ "frameEnd": frame_end,
+ "frameStartHandle": frame_start_handle,
+ "frameEndHandle": frame_end_handle,
+ "byFrameStep": int(
+ self.get_render_attribute("byFrameStep",
+ layer=layer_name)),
+
+ # Renderlayer
+ "renderer": self.get_render_attribute(
+ "currentRenderer", layer=layer_name).lower(),
+ "setMembers": layer._getLegacyNodeName(), # legacy renderlayer
+ "renderlayer": layer_name,
+
+ # todo: is `time` and `author` still needed?
+ "time": get_formatted_current_time(),
+ "author": context.data["user"],
+
+ # Add source to allow tracing back to the scene from
+ # which was submitted originally
+ "source": filepath,
+ "expectedFiles": full_exp_files,
+ "publishRenderMetadataFolder": common_publish_meta_path,
+ "renderProducts": layer_render_products,
+ "resolutionWidth": lib.get_attr_in_layer(
+ "defaultResolution.width", layer=layer_name
+ ),
+ "resolutionHeight": lib.get_attr_in_layer(
+ "defaultResolution.height", layer=layer_name
+ ),
+ "pixelAspect": lib.get_attr_in_layer(
+ "defaultResolution.pixelAspect", layer=layer_name
+ ),
+
+ # todo: Following are likely not needed due to collecting from the
+ # instance itself if they are attribute definitions
+ "tileRendering": instance.data.get("tileRendering") or False, # noqa: E501
+ "tilesX": instance.data.get("tilesX") or 2,
+ "tilesY": instance.data.get("tilesY") or 2,
+ "convertToScanline": instance.data.get(
+ "convertToScanline") or False,
+ "useReferencedAovs": instance.data.get(
+ "useReferencedAovs") or instance.data.get(
+ "vrayUseReferencedAovs") or False,
+ "aovSeparator": layer_render_products.layer_data.aov_separator, # noqa: E501
+ "renderSetupIncludeLights": instance.data.get(
+ "renderSetupIncludeLights"
+ ),
+ "colorspaceConfig": colorspace_data["config"],
+ "colorspaceDisplay": colorspace_data["display"],
+ "colorspaceView": colorspace_data["view"],
}
- for layer in collected_render_layers:
- if layer.startswith("LAYER_"):
- # this is support for legacy mode where render layers
- # started with `LAYER_` prefix.
- layer_name_pattern = r"^LAYER_(.*)"
- else:
- # new way is to prefix render layer name with instance
- # namespace.
- layer_name_pattern = r"^.+:(.*)"
+ if self.sync_workfile_version:
+ data["version"] = context.data["version"]
+ for instance in context:
+ if instance.data['family'] == "workfile":
+ instance.data["version"] = context.data["version"]
- # todo: We should have a more explicit way to link the renderlayer
- match = re.match(layer_name_pattern, layer)
- if not match:
- msg = "Invalid layer name in set [ {} ]".format(layer)
- self.log.warning(msg)
- continue
-
- expected_layer_name = match.group(1)
- self.log.info("Processing '{}' as layer [ {} ]"
- "".format(layer, expected_layer_name))
-
- # check if layer is part of renderSetup
- if expected_layer_name not in maya_render_layers:
- msg = "Render layer [ {} ] is not in " "Render Setup".format(
- expected_layer_name
- )
- self.log.warning(msg)
- continue
-
- # check if layer is renderable
- if not maya_render_layers[expected_layer_name].isRenderable():
- msg = "Render layer [ {} ] is not " "renderable".format(
- expected_layer_name
- )
- self.log.warning(msg)
- continue
-
- # detect if there are sets (subsets) to attach render to
- sets = cmds.sets(layer, query=True) or []
- attach_to = []
- for s in sets:
- if not cmds.attributeQuery("family", node=s, exists=True):
- continue
-
- attach_to.append(
- {
- "version": None, # we need integrator for that
- "subset": s,
- "family": cmds.getAttr("{}.family".format(s)),
- }
- )
- self.log.info(" -> attach render to: {}".format(s))
-
- layer_name = "rs_{}".format(expected_layer_name)
-
- # collect all frames we are expecting to be rendered
- # return all expected files for all cameras and aovs in given
- # frame range
- layer_render_products = get_layer_render_products(layer_name)
- render_products = layer_render_products.layer_data.products
- assert render_products, "no render products generated"
- exp_files = []
- multipart = False
- for product in render_products:
- if product.multipart:
- multipart = True
- product_name = product.productName
- if product.camera and layer_render_products.has_camera_token():
- product_name = "{}{}".format(
- product.camera,
- "_" + product_name if product_name else "")
- exp_files.append(
- {
- product_name: layer_render_products.get_files(
- product)
- })
-
- has_cameras = any(product.camera for product in render_products)
- assert has_cameras, "No render cameras found."
-
- self.log.info("multipart: {}".format(
- multipart))
- assert exp_files, "no file names were generated, this is bug"
- self.log.info(
- "expected files: {}".format(
- json.dumps(exp_files, indent=4, sort_keys=True)
- )
- )
-
- # if we want to attach render to subset, check if we have AOV's
- # in expectedFiles. If so, raise error as we cannot attach AOV
- # (considered to be subset on its own) to another subset
- if attach_to:
- assert isinstance(exp_files, list), (
- "attaching multiple AOVs or renderable cameras to "
- "subset is not supported"
- )
-
- # append full path
- aov_dict = {}
- default_render_file = context.data.get('project_settings')\
- .get('maya')\
- .get('RenderSettings')\
- .get('default_render_image_folder') or ""
- # replace relative paths with absolute. Render products are
- # returned as list of dictionaries.
- publish_meta_path = None
- for aov in exp_files:
- full_paths = []
- aov_first_key = list(aov.keys())[0]
- for file in aov[aov_first_key]:
- full_path = os.path.join(workspace, default_render_file,
- file)
- full_path = full_path.replace("\\", "/")
- full_paths.append(full_path)
- publish_meta_path = os.path.dirname(full_path)
- aov_dict[aov_first_key] = full_paths
- full_exp_files = [aov_dict]
-
- frame_start_render = int(self.get_render_attribute(
- "startFrame", layer=layer_name))
- frame_end_render = int(self.get_render_attribute(
- "endFrame", layer=layer_name))
-
- if (int(context.data['frameStartHandle']) == frame_start_render
- and int(context.data['frameEndHandle']) == frame_end_render): # noqa: W503, E501
-
- handle_start = context.data['handleStart']
- handle_end = context.data['handleEnd']
- frame_start = context.data['frameStart']
- frame_end = context.data['frameEnd']
- frame_start_handle = context.data['frameStartHandle']
- frame_end_handle = context.data['frameEndHandle']
- else:
- handle_start = 0
- handle_end = 0
- frame_start = frame_start_render
- frame_end = frame_end_render
- frame_start_handle = frame_start_render
- frame_end_handle = frame_end_render
-
- # find common path to store metadata
- # so if image prefix is branching to many directories
- # metadata file will be located in top-most common
- # directory.
- # TODO: use `os.path.commonpath()` after switch to Python 3
- publish_meta_path = os.path.normpath(publish_meta_path)
- common_publish_meta_path = os.path.splitdrive(
- publish_meta_path)[0]
- if common_publish_meta_path:
- common_publish_meta_path += os.path.sep
- for part in publish_meta_path.replace(
- common_publish_meta_path, "").split(os.path.sep):
- common_publish_meta_path = os.path.join(
- common_publish_meta_path, part)
- if part == expected_layer_name:
- break
-
- # TODO: replace this terrible linux hotfix with real solution :)
- if platform.system().lower() in ["linux", "darwin"]:
- common_publish_meta_path = "/" + common_publish_meta_path
-
- self.log.info(
- "Publish meta path: {}".format(common_publish_meta_path))
-
- self.log.info(full_exp_files)
- self.log.info("collecting layer: {}".format(layer_name))
- # Get layer specific settings, might be overrides
- colorspace_data = lib.get_color_management_preferences()
- data = {
- "subset": expected_layer_name,
- "attachTo": attach_to,
- "setMembers": layer_name,
- "multipartExr": multipart,
- "review": render_instance.data.get("review") or False,
- "publish": True,
-
- "handleStart": handle_start,
- "handleEnd": handle_end,
- "frameStart": frame_start,
- "frameEnd": frame_end,
- "frameStartHandle": frame_start_handle,
- "frameEndHandle": frame_end_handle,
- "byFrameStep": int(
- self.get_render_attribute("byFrameStep",
- layer=layer_name)),
- "renderer": self.get_render_attribute(
- "currentRenderer", layer=layer_name).lower(),
- # instance subset
- "family": "renderlayer",
- "families": ["renderlayer"],
- "asset": asset,
- "time": get_formatted_current_time(),
- "author": context.data["user"],
- # Add source to allow tracing back to the scene from
- # which was submitted originally
- "source": filepath,
- "expectedFiles": full_exp_files,
- "publishRenderMetadataFolder": common_publish_meta_path,
- "renderProducts": layer_render_products,
- "resolutionWidth": lib.get_attr_in_layer(
- "defaultResolution.width", layer=layer_name
- ),
- "resolutionHeight": lib.get_attr_in_layer(
- "defaultResolution.height", layer=layer_name
- ),
- "pixelAspect": lib.get_attr_in_layer(
- "defaultResolution.pixelAspect", layer=layer_name
- ),
- "tileRendering": render_instance.data.get("tileRendering") or False, # noqa: E501
- "tilesX": render_instance.data.get("tilesX") or 2,
- "tilesY": render_instance.data.get("tilesY") or 2,
- "priority": render_instance.data.get("priority"),
- "convertToScanline": render_instance.data.get(
- "convertToScanline") or False,
- "useReferencedAovs": render_instance.data.get(
- "useReferencedAovs") or render_instance.data.get(
- "vrayUseReferencedAovs") or False,
- "aovSeparator": layer_render_products.layer_data.aov_separator, # noqa: E501
- "renderSetupIncludeLights": render_instance.data.get(
- "renderSetupIncludeLights"
- ),
- "colorspaceConfig": colorspace_data["config"],
- "colorspaceDisplay": colorspace_data["display"],
- "colorspaceView": colorspace_data["view"],
- "strict_error_checking": render_instance.data.get(
- "strict_error_checking", True
- )
- }
-
- # Collect Deadline url if Deadline module is enabled
- deadline_settings = (
- context.data["system_settings"]["modules"]["deadline"]
- )
- if deadline_settings["enabled"]:
- data["deadlineUrl"] = render_instance.data["deadlineUrl"]
-
- if self.sync_workfile_version:
- data["version"] = context.data["version"]
-
- for instance in context:
- if instance.data['family'] == "workfile":
- instance.data["version"] = context.data["version"]
-
- # handle standalone renderers
- if render_instance.data.get("vrayScene") is True:
- data["families"].append("vrayscene_render")
-
- if render_instance.data.get("assScene") is True:
- data["families"].append("assscene_render")
-
- # Include (optional) global settings
- # Get global overrides and translate to Deadline values
- overrides = self.parse_options(str(render_globals))
- data.update(**overrides)
-
- # get string values for pools
- primary_pool = overrides["renderGlobals"]["Pool"]
- secondary_pool = overrides["renderGlobals"].get("SecondaryPool")
- data["primaryPool"] = primary_pool
- data["secondaryPool"] = secondary_pool
-
- # Define nice label
- label = "{0} ({1})".format(expected_layer_name, data["asset"])
- label += " [{0}-{1}]".format(
- int(data["frameStartHandle"]), int(data["frameEndHandle"])
- )
-
- instance = context.create_instance(expected_layer_name)
- instance.data["label"] = label
- instance.data["farm"] = True
- instance.data.update(data)
-
- def parse_options(self, render_globals):
- """Get all overrides with a value, skip those without.
-
- Here's the kicker. These globals override defaults in the submission
- integrator, but an empty value means no overriding is made.
- Otherwise, Frames would override the default frames set under globals.
-
- Args:
- render_globals (str): collection of render globals
-
- Returns:
- dict: only overrides with values
-
- """
- attributes = lib.read(render_globals)
-
- options = {"renderGlobals": {}}
- options["renderGlobals"]["Priority"] = attributes["priority"]
-
- # Check for specific pools
- pool_a, pool_b = self._discover_pools(attributes)
- options["renderGlobals"].update({"Pool": pool_a})
- if pool_b:
- options["renderGlobals"].update({"SecondaryPool": pool_b})
-
- # Machine list
- machine_list = attributes["machineList"]
- if machine_list:
- key = "Whitelist" if attributes["whitelist"] else "Blacklist"
- options["renderGlobals"][key] = machine_list
-
- # Suspend publish job
- state = "Suspended" if attributes["suspendPublishJob"] else "Active"
- options["publishJobState"] = state
-
- chunksize = attributes.get("framesPerTask", 1)
- options["renderGlobals"]["ChunkSize"] = chunksize
+ # Define nice label
+ label = "{0} ({1})".format(layer_name, instance.data["asset"])
+ label += " [{0}-{1}]".format(
+ int(data["frameStartHandle"]), int(data["frameEndHandle"])
+ )
+ data["label"] = label
# Override frames should be False if extendFrames is False. This is
# to ensure it doesn't go off doing crazy unpredictable things
- override_frames = False
- extend_frames = attributes.get("extendFrames", False)
- if extend_frames:
- override_frames = attributes.get("overrideExistingFrame", False)
+ extend_frames = instance.data.get("extendFrames", False)
+ if not extend_frames:
+ instance.data["overrideExistingFrame"] = False
- options["extendFrames"] = extend_frames
- options["overrideExistingFrame"] = override_frames
-
- maya_render_plugin = "MayaBatch"
-
- options["mayaRenderPlugin"] = maya_render_plugin
-
- return options
-
- def _discover_pools(self, attributes):
-
- pool_a = None
- pool_b = None
-
- # Check for specific pools
- pool_b = []
- if "primaryPool" in attributes:
- pool_a = attributes["primaryPool"]
- if "secondaryPool" in attributes:
- pool_b = attributes["secondaryPool"]
-
- else:
- # Backwards compatibility
- pool_str = attributes.get("pools", None)
- if pool_str:
- pool_a, pool_b = pool_str.split(";")
-
- # Ensure empty entry token is caught
- if pool_b == "-":
- pool_b = None
-
- return pool_a, pool_b
+ # Update the instance
+ instance.data.update(data)
@staticmethod
def get_render_attribute(attr, layer):
diff --git a/openpype/hosts/maya/plugins/publish/collect_render_layer_aovs.py b/openpype/hosts/maya/plugins/publish/collect_render_layer_aovs.py
index 9666499c42..c3dc31ead9 100644
--- a/openpype/hosts/maya/plugins/publish/collect_render_layer_aovs.py
+++ b/openpype/hosts/maya/plugins/publish/collect_render_layer_aovs.py
@@ -50,7 +50,7 @@ class CollectRenderLayerAOVS(pyblish.api.InstancePlugin):
result = []
# Collect all AOVs / Render Elements
- layer = instance.data["setMembers"]
+ layer = instance.data["renderlayer"]
node_type = rp_node_types[renderer]
render_elements = cmds.ls(type=node_type)
diff --git a/openpype/hosts/maya/plugins/publish/collect_renderable_camera.py b/openpype/hosts/maya/plugins/publish/collect_renderable_camera.py
index 93a37d8693..d1c3cf3b2c 100644
--- a/openpype/hosts/maya/plugins/publish/collect_renderable_camera.py
+++ b/openpype/hosts/maya/plugins/publish/collect_renderable_camera.py
@@ -19,7 +19,7 @@ class CollectRenderableCamera(pyblish.api.InstancePlugin):
if "vrayscene_layer" in instance.data.get("families", []):
layer = instance.data.get("layer")
else:
- layer = instance.data["setMembers"]
+ layer = instance.data["renderlayer"]
self.log.info("layer: {}".format(layer))
cameras = cmds.ls(type="camera", long=True)
diff --git a/openpype/hosts/maya/plugins/publish/collect_review.py b/openpype/hosts/maya/plugins/publish/collect_review.py
index 5c190a4a7b..6cb10f9066 100644
--- a/openpype/hosts/maya/plugins/publish/collect_review.py
+++ b/openpype/hosts/maya/plugins/publish/collect_review.py
@@ -18,14 +18,10 @@ class CollectReview(pyblish.api.InstancePlugin):
def process(self, instance):
- self.log.debug('instance: {}'.format(instance))
-
- task = legacy_io.Session["AVALON_TASK"]
-
# Get panel.
instance.data["panel"] = cmds.playblast(
activeEditor=True
- ).split("|")[-1]
+ ).rsplit("|", 1)[-1]
# get cameras
members = instance.data['setMembers']
@@ -34,11 +30,12 @@ class CollectReview(pyblish.api.InstancePlugin):
camera = cameras[0] if cameras else None
context = instance.context
- objectset = context.data['objectsets']
+ objectset = {
+ i.data.get("instance_node") for i in context
+ }
- # Convert enum attribute index to string for Display Lights.
- index = instance.data.get("displayLights", 0)
- display_lights = lib.DISPLAY_LIGHTS_VALUES[index]
+ # Collect display lights.
+ display_lights = instance.data.get("displayLights", "default")
if display_lights == "project_settings":
settings = instance.context.data["project_settings"]
settings = settings["maya"]["publish"]["ExtractPlayblast"]
@@ -60,7 +57,7 @@ class CollectReview(pyblish.api.InstancePlugin):
burninDataMembers["focalLength"] = focal_length
# Account for nested instances like model.
- reviewable_subsets = list(set(members) & set(objectset))
+ reviewable_subsets = list(set(members) & objectset)
if reviewable_subsets:
if len(reviewable_subsets) > 1:
raise KnownPublishError(
@@ -97,7 +94,11 @@ class CollectReview(pyblish.api.InstancePlugin):
data["frameStart"] = instance.data["frameStart"]
data["frameEnd"] = instance.data["frameEnd"]
data['step'] = instance.data['step']
- data['fps'] = instance.data['fps']
+ # this (with other time-related data) should be set on
+ # representations. Once plugins like Extract Review start
+ # using representations, this should be removed from here
+ # as Extract Playblast already adds fps to the representation.
+ data['fps'] = context.data['fps']
data['review_width'] = instance.data['review_width']
data['review_height'] = instance.data['review_height']
data["isolate"] = instance.data["isolate"]
@@ -112,6 +113,7 @@ class CollectReview(pyblish.api.InstancePlugin):
instance.data['remove'] = True
else:
+ task = legacy_io.Session["AVALON_TASK"]
legacy_subset_name = task + 'Review'
asset_doc = instance.context.data['assetEntity']
project_name = legacy_io.active_project()
@@ -133,6 +135,11 @@ class CollectReview(pyblish.api.InstancePlugin):
instance.data["frameEndHandle"]
instance.data["displayLights"] = display_lights
instance.data["burninDataMembers"] = burninDataMembers
+ # this (with other time-related data) should be set on
+ # representations. Once plugins like Extract Review start
+ # using representations, this should be removed from here
+ # as Extract Playblast already adds fps to the representation.
+ instance.data["fps"] = instance.context.data["fps"]
# make ftrack publishable
instance.data.setdefault("families", []).append('ftrack')
diff --git a/openpype/hosts/maya/plugins/publish/collect_vrayscene.py b/openpype/hosts/maya/plugins/publish/collect_vrayscene.py
index 0bae9656f3..b9181337a9 100644
--- a/openpype/hosts/maya/plugins/publish/collect_vrayscene.py
+++ b/openpype/hosts/maya/plugins/publish/collect_vrayscene.py
@@ -24,129 +24,91 @@ class CollectVrayScene(pyblish.api.InstancePlugin):
def process(self, instance):
"""Collector entry point."""
- collected_render_layers = instance.data["setMembers"]
- instance.data["remove"] = True
+
context = instance.context
- _rs = renderSetup.instance()
- # current_layer = _rs.getVisibleRenderLayer()
+ layer = instance.data["transientData"]["layer"]
+ layer_name = layer.name()
+
+ renderer = self.get_render_attribute("currentRenderer",
+ layer=layer_name)
+ if renderer != "vray":
+ self.log.warning("Layer '{}' renderer is not set to V-Ray".format(
+ layer_name
+ ))
# collect all frames we are expecting to be rendered
- renderer = cmds.getAttr(
- "defaultRenderGlobals.currentRenderer"
- ).lower()
+ frame_start_render = int(self.get_render_attribute(
+ "startFrame", layer=layer_name))
+ frame_end_render = int(self.get_render_attribute(
+ "endFrame", layer=layer_name))
- if renderer != "vray":
- raise AssertionError("Vray is not enabled.")
+ if (int(context.data['frameStartHandle']) == frame_start_render
+ and int(context.data['frameEndHandle']) == frame_end_render): # noqa: W503, E501
- maya_render_layers = {
- layer.name(): layer for layer in _rs.getRenderLayers()
+ handle_start = context.data['handleStart']
+ handle_end = context.data['handleEnd']
+ frame_start = context.data['frameStart']
+ frame_end = context.data['frameEnd']
+ frame_start_handle = context.data['frameStartHandle']
+ frame_end_handle = context.data['frameEndHandle']
+ else:
+ handle_start = 0
+ handle_end = 0
+ frame_start = frame_start_render
+ frame_end = frame_end_render
+ frame_start_handle = frame_start_render
+ frame_end_handle = frame_end_render
+
+ # Get layer specific settings, might be overrides
+ data = {
+ "subset": layer_name,
+ "layer": layer_name,
+ # TODO: This likely needs fixing now
+ # Before refactor: cmds.sets(layer, q=True) or ["*"]
+ "setMembers": ["*"],
+ "review": False,
+ "publish": True,
+ "handleStart": handle_start,
+ "handleEnd": handle_end,
+ "frameStart": frame_start,
+ "frameEnd": frame_end,
+ "frameStartHandle": frame_start_handle,
+ "frameEndHandle": frame_end_handle,
+ "byFrameStep": int(
+ self.get_render_attribute("byFrameStep",
+ layer=layer_name)),
+ "renderer": renderer,
+ # instance subset
+ "family": "vrayscene_layer",
+ "families": ["vrayscene_layer"],
+ "time": get_formatted_current_time(),
+ "author": context.data["user"],
+ # Add source to allow tracing back to the scene from
+ # which was submitted originally
+ "source": context.data["currentFile"].replace("\\", "/"),
+ "resolutionWidth": lib.get_attr_in_layer(
+ "defaultResolution.height", layer=layer_name
+ ),
+ "resolutionHeight": lib.get_attr_in_layer(
+ "defaultResolution.width", layer=layer_name
+ ),
+ "pixelAspect": lib.get_attr_in_layer(
+ "defaultResolution.pixelAspect", layer=layer_name
+ ),
+ "priority": instance.data.get("priority"),
+ "useMultipleSceneFiles": instance.data.get(
+ "vraySceneMultipleFiles")
}
- layer_list = []
- for layer in collected_render_layers:
- # every layer in set should start with `LAYER_` prefix
- try:
- expected_layer_name = re.search(r"^.+:(.*)", layer).group(1)
- except IndexError:
- msg = "Invalid layer name in set [ {} ]".format(layer)
- self.log.warning(msg)
- continue
+ instance.data.update(data)
- self.log.info("processing %s" % layer)
- # check if layer is part of renderSetup
- if expected_layer_name not in maya_render_layers:
- msg = "Render layer [ {} ] is not in " "Render Setup".format(
- expected_layer_name
- )
- self.log.warning(msg)
- continue
-
- # check if layer is renderable
- if not maya_render_layers[expected_layer_name].isRenderable():
- msg = "Render layer [ {} ] is not " "renderable".format(
- expected_layer_name
- )
- self.log.warning(msg)
- continue
-
- layer_name = "rs_{}".format(expected_layer_name)
-
- self.log.debug(expected_layer_name)
- layer_list.append(expected_layer_name)
-
- frame_start_render = int(self.get_render_attribute(
- "startFrame", layer=layer_name))
- frame_end_render = int(self.get_render_attribute(
- "endFrame", layer=layer_name))
-
- if (int(context.data['frameStartHandle']) == frame_start_render
- and int(context.data['frameEndHandle']) == frame_end_render): # noqa: W503, E501
-
- handle_start = context.data['handleStart']
- handle_end = context.data['handleEnd']
- frame_start = context.data['frameStart']
- frame_end = context.data['frameEnd']
- frame_start_handle = context.data['frameStartHandle']
- frame_end_handle = context.data['frameEndHandle']
- else:
- handle_start = 0
- handle_end = 0
- frame_start = frame_start_render
- frame_end = frame_end_render
- frame_start_handle = frame_start_render
- frame_end_handle = frame_end_render
-
- # Get layer specific settings, might be overrides
- data = {
- "subset": expected_layer_name,
- "layer": layer_name,
- "setMembers": cmds.sets(layer, q=True) or ["*"],
- "review": False,
- "publish": True,
- "handleStart": handle_start,
- "handleEnd": handle_end,
- "frameStart": frame_start,
- "frameEnd": frame_end,
- "frameStartHandle": frame_start_handle,
- "frameEndHandle": frame_end_handle,
- "byFrameStep": int(
- self.get_render_attribute("byFrameStep",
- layer=layer_name)),
- "renderer": self.get_render_attribute("currentRenderer",
- layer=layer_name),
- # instance subset
- "family": "vrayscene_layer",
- "families": ["vrayscene_layer"],
- "asset": legacy_io.Session["AVALON_ASSET"],
- "time": get_formatted_current_time(),
- "author": context.data["user"],
- # Add source to allow tracing back to the scene from
- # which was submitted originally
- "source": context.data["currentFile"].replace("\\", "/"),
- "resolutionWidth": lib.get_attr_in_layer(
- "defaultResolution.height", layer=layer_name
- ),
- "resolutionHeight": lib.get_attr_in_layer(
- "defaultResolution.width", layer=layer_name
- ),
- "pixelAspect": lib.get_attr_in_layer(
- "defaultResolution.pixelAspect", layer=layer_name
- ),
- "priority": instance.data.get("priority"),
- "useMultipleSceneFiles": instance.data.get(
- "vraySceneMultipleFiles")
- }
-
- # Define nice label
- label = "{0} ({1})".format(expected_layer_name, data["asset"])
- label += " [{0}-{1}]".format(
- int(data["frameStartHandle"]), int(data["frameEndHandle"])
- )
-
- instance = context.create_instance(expected_layer_name)
- instance.data["label"] = label
- instance.data.update(data)
+ # Define nice label
+ label = "{0} ({1})".format(layer_name, instance.data["asset"])
+ label += " [{0}-{1}]".format(
+ int(data["frameStartHandle"]), int(data["frameEndHandle"])
+ )
+ instance.data["label"] = label
def get_render_attribute(self, attr, layer):
"""Get attribute from render options.
diff --git a/openpype/hosts/maya/plugins/publish/collect_workfile.py b/openpype/hosts/maya/plugins/publish/collect_workfile.py
index 12d86869ea..e2b64f1ebd 100644
--- a/openpype/hosts/maya/plugins/publish/collect_workfile.py
+++ b/openpype/hosts/maya/plugins/publish/collect_workfile.py
@@ -1,46 +1,30 @@
import os
import pyblish.api
-from maya import cmds
-from openpype.pipeline import legacy_io
-
-class CollectWorkfile(pyblish.api.ContextPlugin):
- """Inject the current working file into context"""
+class CollectWorkfileData(pyblish.api.InstancePlugin):
+ """Inject data into Workfile instance"""
order = pyblish.api.CollectorOrder - 0.01
label = "Maya Workfile"
hosts = ['maya']
+ families = ["workfile"]
- def process(self, context):
+ def process(self, instance):
"""Inject the current working file"""
- current_file = cmds.file(query=True, sceneName=True)
- context.data['currentFile'] = current_file
+ context = instance.context
+ current_file = instance.context.data['currentFile']
folder, file = os.path.split(current_file)
filename, ext = os.path.splitext(file)
- task = legacy_io.Session["AVALON_TASK"]
-
- data = {}
-
- # create instance
- instance = context.create_instance(name=filename)
- subset = 'workfile' + task.capitalize()
-
- data.update({
- "subset": subset,
- "asset": os.getenv("AVALON_ASSET", None),
- "label": subset,
- "publish": True,
- "family": 'workfile',
- "families": ['workfile'],
+ data = { # noqa
"setMembers": [current_file],
"frameStart": context.data['frameStart'],
"frameEnd": context.data['frameEnd'],
"handleStart": context.data['handleStart'],
"handleEnd": context.data['handleEnd']
- })
+ }
data['representations'] = [{
'name': ext.lstrip("."),
@@ -50,8 +34,3 @@ class CollectWorkfile(pyblish.api.ContextPlugin):
}]
instance.data.update(data)
-
- self.log.info('Collected instance: {}'.format(file))
- self.log.info('Scene path: {}'.format(current_file))
- self.log.info('staging Dir: {}'.format(folder))
- self.log.info('subset: {}'.format(subset))
diff --git a/openpype/hosts/maya/plugins/publish/extract_assembly.py b/openpype/hosts/maya/plugins/publish/extract_assembly.py
index 35932003ee..9b2978d192 100644
--- a/openpype/hosts/maya/plugins/publish/extract_assembly.py
+++ b/openpype/hosts/maya/plugins/publish/extract_assembly.py
@@ -31,7 +31,7 @@ class ExtractAssembly(publish.Extractor):
with open(json_path, "w") as filepath:
json.dump(instance.data["scenedata"], filepath, ensure_ascii=False)
- self.log.info("Extracting point cache ..")
+ self.log.debug("Extracting pointcache ..")
cmds.select(instance.data["nodesHierarchy"])
# Run basic alembic exporter
diff --git a/openpype/hosts/maya/plugins/publish/extract_camera_mayaScene.py b/openpype/hosts/maya/plugins/publish/extract_camera_mayaScene.py
index 7467fa027d..30e6b89f2f 100644
--- a/openpype/hosts/maya/plugins/publish/extract_camera_mayaScene.py
+++ b/openpype/hosts/maya/plugins/publish/extract_camera_mayaScene.py
@@ -106,7 +106,7 @@ class ExtractCameraMayaScene(publish.Extractor):
instance.context.data["project_settings"]["maya"]["ext_mapping"]
)
if ext_mapping:
- self.log.info("Looking in settings for scene type ...")
+ self.log.debug("Looking in settings for scene type ...")
# use extension mapping for first family found
for family in self.families:
try:
diff --git a/openpype/hosts/maya/plugins/publish/extract_import_reference.py b/openpype/hosts/maya/plugins/publish/extract_import_reference.py
index 51c82dde92..8bb82be9b6 100644
--- a/openpype/hosts/maya/plugins/publish/extract_import_reference.py
+++ b/openpype/hosts/maya/plugins/publish/extract_import_reference.py
@@ -8,10 +8,12 @@ import tempfile
from openpype.lib import run_subprocess
from openpype.pipeline import publish
+from openpype.pipeline.publish import OptionalPyblishPluginMixin
from openpype.hosts.maya.api import lib
-class ExtractImportReference(publish.Extractor):
+class ExtractImportReference(publish.Extractor,
+ OptionalPyblishPluginMixin):
"""
Extract the scene with imported reference.
@@ -32,11 +34,14 @@ class ExtractImportReference(publish.Extractor):
cls.active = project_setting["deadline"]["publish"]["MayaSubmitDeadline"]["import_reference"] # noqa
def process(self, instance):
+ if not self.is_active(instance.data):
+ return
+
ext_mapping = (
instance.context.data["project_settings"]["maya"]["ext_mapping"]
)
if ext_mapping:
- self.log.info("Looking in settings for scene type ...")
+ self.log.debug("Looking in settings for scene type ...")
# use extension mapping for first family found
for family in self.families:
try:
diff --git a/openpype/hosts/maya/plugins/publish/extract_look.py b/openpype/hosts/maya/plugins/publish/extract_look.py
index 3cc95a0b2e..e2c88ef44a 100644
--- a/openpype/hosts/maya/plugins/publish/extract_look.py
+++ b/openpype/hosts/maya/plugins/publish/extract_look.py
@@ -412,7 +412,7 @@ class ExtractLook(publish.Extractor):
instance.context.data["project_settings"]["maya"]["ext_mapping"]
)
if ext_mapping:
- self.log.info("Looking in settings for scene type ...")
+ self.log.debug("Looking in settings for scene type ...")
# use extension mapping for first family found
for family in self.families:
try:
@@ -444,12 +444,12 @@ class ExtractLook(publish.Extractor):
# Remove all members of the sets so they are not included in the
# exported file by accident
- self.log.info("Processing sets..")
+ self.log.debug("Processing sets..")
lookdata = instance.data["lookData"]
relationships = lookdata["relationships"]
sets = list(relationships.keys())
if not sets:
- self.log.info("No sets found")
+ self.log.info("No sets found for the look")
return
# Specify texture processing executables to activate
diff --git a/openpype/hosts/maya/plugins/publish/extract_maya_scene_raw.py b/openpype/hosts/maya/plugins/publish/extract_maya_scene_raw.py
index c2411ca651..d87d6c208a 100644
--- a/openpype/hosts/maya/plugins/publish/extract_maya_scene_raw.py
+++ b/openpype/hosts/maya/plugins/publish/extract_maya_scene_raw.py
@@ -29,7 +29,7 @@ class ExtractMayaSceneRaw(publish.Extractor):
instance.context.data["project_settings"]["maya"]["ext_mapping"]
)
if ext_mapping:
- self.log.info("Looking in settings for scene type ...")
+ self.log.debug("Looking in settings for scene type ...")
# use extension mapping for first family found
for family in self.families:
try:
diff --git a/openpype/hosts/maya/plugins/publish/extract_model.py b/openpype/hosts/maya/plugins/publish/extract_model.py
index 7c8c3a2981..5137dffd94 100644
--- a/openpype/hosts/maya/plugins/publish/extract_model.py
+++ b/openpype/hosts/maya/plugins/publish/extract_model.py
@@ -8,7 +8,8 @@ from openpype.pipeline import publish
from openpype.hosts.maya.api import lib
-class ExtractModel(publish.Extractor):
+class ExtractModel(publish.Extractor,
+ publish.OptionalPyblishPluginMixin):
"""Extract as Model (Maya Scene).
Only extracts contents based on the original "setMembers" data to ensure
@@ -31,11 +32,14 @@ class ExtractModel(publish.Extractor):
def process(self, instance):
"""Plugin entry point."""
+ if not self.is_active(instance.data):
+ return
+
ext_mapping = (
instance.context.data["project_settings"]["maya"]["ext_mapping"]
)
if ext_mapping:
- self.log.info("Looking in settings for scene type ...")
+ self.log.debug("Looking in settings for scene type ...")
# use extension mapping for first family found
for family in self.families:
try:
diff --git a/openpype/hosts/maya/plugins/publish/extract_pointcache.py b/openpype/hosts/maya/plugins/publish/extract_pointcache.py
index f44c13767c..9537a11ee4 100644
--- a/openpype/hosts/maya/plugins/publish/extract_pointcache.py
+++ b/openpype/hosts/maya/plugins/publish/extract_pointcache.py
@@ -45,7 +45,7 @@ class ExtractAlembic(publish.Extractor):
attr_prefixes = instance.data.get("attrPrefix", "").split(";")
attr_prefixes = [value for value in attr_prefixes if value.strip()]
- self.log.info("Extracting pointcache..")
+ self.log.debug("Extracting pointcache..")
dirname = self.staging_dir(instance)
parent_dir = self.staging_dir(instance)
@@ -86,7 +86,6 @@ class ExtractAlembic(publish.Extractor):
end=end))
suspend = not instance.data.get("refresh", False)
- self.log.info(nodes)
with suspended_refresh(suspend=suspend):
with maintained_selection():
cmds.select(nodes, noExpand=True)
diff --git a/openpype/hosts/maya/plugins/publish/extract_redshift_proxy.py b/openpype/hosts/maya/plugins/publish/extract_redshift_proxy.py
index 4377275635..834b335fc5 100644
--- a/openpype/hosts/maya/plugins/publish/extract_redshift_proxy.py
+++ b/openpype/hosts/maya/plugins/publish/extract_redshift_proxy.py
@@ -29,15 +29,21 @@ class ExtractRedshiftProxy(publish.Extractor):
if not anim_on:
# Remove animation information because it is not required for
# non-animated subsets
- instance.data.pop("proxyFrameStart", None)
- instance.data.pop("proxyFrameEnd", None)
+ keys = ["frameStart",
+ "frameEnd",
+ "handleStart",
+ "handleEnd",
+ "frameStartHandle",
+ "frameEndHandle"]
+ for key in keys:
+ instance.data.pop(key, None)
else:
- start_frame = instance.data["proxyFrameStart"]
- end_frame = instance.data["proxyFrameEnd"]
+ start_frame = instance.data["frameStartHandle"]
+ end_frame = instance.data["frameEndHandle"]
rs_options = "{}startFrame={};endFrame={};frameStep={};".format(
rs_options, start_frame,
- end_frame, instance.data["proxyFrameStep"]
+ end_frame, instance.data["step"]
)
root, ext = os.path.splitext(file_path)
@@ -48,7 +54,7 @@ class ExtractRedshiftProxy(publish.Extractor):
for frame in range(
int(start_frame),
int(end_frame) + 1,
- int(instance.data["proxyFrameStep"]),
+ int(instance.data["step"]),
)]
# vertex_colors = instance.data.get("vertexColors", False)
@@ -74,8 +80,6 @@ class ExtractRedshiftProxy(publish.Extractor):
'files': repr_files,
"stagingDir": staging_dir,
}
- if anim_on:
- representation["frameStart"] = instance.data["proxyFrameStart"]
instance.data["representations"].append(representation)
self.log.info("Extracted instance '%s' to: %s"
diff --git a/openpype/hosts/maya/plugins/publish/extract_rig.py b/openpype/hosts/maya/plugins/publish/extract_rig.py
index c71a2f710d..be57b9de07 100644
--- a/openpype/hosts/maya/plugins/publish/extract_rig.py
+++ b/openpype/hosts/maya/plugins/publish/extract_rig.py
@@ -22,13 +22,13 @@ class ExtractRig(publish.Extractor):
instance.context.data["project_settings"]["maya"]["ext_mapping"]
)
if ext_mapping:
- self.log.info("Looking in settings for scene type ...")
+ self.log.debug("Looking in settings for scene type ...")
# use extension mapping for first family found
for family in self.families:
try:
self.scene_type = ext_mapping[family]
self.log.info(
- "Using {} as scene type".format(self.scene_type))
+ "Using '.{}' as scene type".format(self.scene_type))
break
except AttributeError:
# no preset found
diff --git a/openpype/hosts/maya/plugins/publish/extract_unreal_skeletalmesh_abc.py b/openpype/hosts/maya/plugins/publish/extract_unreal_skeletalmesh_abc.py
index e1f847f31a..4a797eb462 100644
--- a/openpype/hosts/maya/plugins/publish/extract_unreal_skeletalmesh_abc.py
+++ b/openpype/hosts/maya/plugins/publish/extract_unreal_skeletalmesh_abc.py
@@ -32,7 +32,7 @@ class ExtractUnrealSkeletalMeshAbc(publish.Extractor):
optional = True
def process(self, instance):
- self.log.info("Extracting pointcache..")
+ self.log.debug("Extracting pointcache..")
geo = cmds.listRelatives(
instance.data.get("geometry"), allDescendents=True, fullPath=True)
diff --git a/openpype/hosts/maya/plugins/publish/extract_yeti_rig.py b/openpype/hosts/maya/plugins/publish/extract_yeti_rig.py
index 1d0c5e88c3..9a46c31177 100644
--- a/openpype/hosts/maya/plugins/publish/extract_yeti_rig.py
+++ b/openpype/hosts/maya/plugins/publish/extract_yeti_rig.py
@@ -104,7 +104,7 @@ class ExtractYetiRig(publish.Extractor):
instance.context.data["project_settings"]["maya"]["ext_mapping"]
)
if ext_mapping:
- self.log.info("Looking in settings for scene type ...")
+ self.log.debug("Looking in settings for scene type ...")
# use extension mapping for first family found
for family in self.families:
try:
diff --git a/openpype/hosts/maya/plugins/publish/help/validate_maya_units.xml b/openpype/hosts/maya/plugins/publish/help/validate_maya_units.xml
new file mode 100644
index 0000000000..40169b28f9
--- /dev/null
+++ b/openpype/hosts/maya/plugins/publish/help/validate_maya_units.xml
@@ -0,0 +1,21 @@
+<?xml version="1.0" encoding="UTF-8"?>
+<root>
+<error id="main">
+<title>Maya scene units</title>
+<description>## Invalid maya scene units
+
+Detected invalid maya scene units:
+
+{issues}
+</description>
+
+<detail>
+### How to repair?
+
+You can automatically repair the scene units by clicking the Repair action on
+the right.
+
+After that restart publishing with Reload button.
+</detail>
+</error>
+</root>
diff --git a/openpype/hosts/maya/plugins/publish/help/validate_node_ids.xml b/openpype/hosts/maya/plugins/publish/help/validate_node_ids.xml
new file mode 100644
index 0000000000..2ef4bc95c2
--- /dev/null
+++ b/openpype/hosts/maya/plugins/publish/help/validate_node_ids.xml
@@ -0,0 +1,29 @@
+<?xml version="1.0" encoding="UTF-8"?>
+<root>
+<error id="main">
+<title>Missing node ids</title>
+<description>## Nodes found with missing `cbId`
+
+Nodes were detected in your scene which are missing required `cbId`
+attributes for identification further in the pipeline.
+
+### How to repair?
+
+The node ids are auto-generated on scene save, and thus the easiest fix is to
+save your scene again.
+
+After that restart publishing with Reload button.
+</description>
+<detail>
+### Invalid nodes
+
+{nodes}
+
+
+### How could this happen?
+
+This often happens if you've generated new nodes but haven't saved your scene
+after creating the new nodes.
+</detail>
+</error>
+</root>
diff --git a/openpype/hosts/maya/plugins/publish/submit_maya_muster.py b/openpype/hosts/maya/plugins/publish/submit_maya_muster.py
index 1a6463fb9d..298c3bd345 100644
--- a/openpype/hosts/maya/plugins/publish/submit_maya_muster.py
+++ b/openpype/hosts/maya/plugins/publish/submit_maya_muster.py
@@ -288,7 +288,7 @@ class MayaSubmitMuster(pyblish.api.InstancePlugin):
comment = context.data.get("comment", "")
scene = os.path.splitext(filename)[0]
dirname = os.path.join(workspace, "renders")
- renderlayer = instance.data['setMembers'] # rs_beauty
+ renderlayer = instance.data['renderlayer'] # rs_beauty
renderlayer_name = instance.data['subset'] # beauty
renderglobals = instance.data["renderGlobals"]
# legacy_layers = renderlayer_globals["UseLegacyRenderLayers"]
@@ -546,3 +546,9 @@ class MayaSubmitMuster(pyblish.api.InstancePlugin):
"%f=%d was rounded off to nearest integer"
% (value, int(value))
)
+
+
+# TODO: Remove hack to avoid this plug-in in new publisher
+# This plug-in should actually be in a dedicated module
+if not os.environ.get("MUSTER_REST_URL"):
+ del MayaSubmitMuster
diff --git a/openpype/hosts/maya/plugins/publish/validate_animation_content.py b/openpype/hosts/maya/plugins/publish/validate_animation_content.py
index 9dbb09a046..99acdc7b8f 100644
--- a/openpype/hosts/maya/plugins/publish/validate_animation_content.py
+++ b/openpype/hosts/maya/plugins/publish/validate_animation_content.py
@@ -1,6 +1,9 @@
import pyblish.api
import openpype.hosts.maya.api.action
-from openpype.pipeline.publish import ValidateContentsOrder
+from openpype.pipeline.publish import (
+ PublishValidationError,
+ ValidateContentsOrder
+)
class ValidateAnimationContent(pyblish.api.InstancePlugin):
@@ -47,4 +50,5 @@ class ValidateAnimationContent(pyblish.api.InstancePlugin):
def process(self, instance):
invalid = self.get_invalid(instance)
if invalid:
- raise RuntimeError("Animation content is invalid. See log.")
+ raise PublishValidationError(
+ "Animation content is invalid. See log.")
diff --git a/openpype/hosts/maya/plugins/publish/validate_animation_out_set_related_node_ids.py b/openpype/hosts/maya/plugins/publish/validate_animation_out_set_related_node_ids.py
index 5a527031be..6f5f03ab39 100644
--- a/openpype/hosts/maya/plugins/publish/validate_animation_out_set_related_node_ids.py
+++ b/openpype/hosts/maya/plugins/publish/validate_animation_out_set_related_node_ids.py
@@ -6,6 +6,7 @@ from openpype.hosts.maya.api import lib
from openpype.pipeline.publish import (
RepairAction,
ValidateContentsOrder,
+ PublishValidationError
)
@@ -35,8 +36,10 @@ class ValidateOutRelatedNodeIds(pyblish.api.InstancePlugin):
# if a deformer has been created on the shape
invalid = self.get_invalid(instance)
if invalid:
- raise RuntimeError("Nodes found with mismatching "
- "IDs: {0}".format(invalid))
+ # TODO: Message formatting can be improved
+ raise PublishValidationError("Nodes found with mismatching "
+ "IDs: {0}".format(invalid),
+ title="Invalid node ids")
@classmethod
def get_invalid(cls, instance):
diff --git a/openpype/hosts/maya/plugins/publish/validate_ass_relative_paths.py b/openpype/hosts/maya/plugins/publish/validate_ass_relative_paths.py
index 6975d583bb..49913fa42b 100644
--- a/openpype/hosts/maya/plugins/publish/validate_ass_relative_paths.py
+++ b/openpype/hosts/maya/plugins/publish/validate_ass_relative_paths.py
@@ -23,11 +23,13 @@ class ValidateAssRelativePaths(pyblish.api.InstancePlugin):
def process(self, instance):
# we cannot ask this until user open render settings as
- # `defaultArnoldRenderOptions` doesn't exists
+ # `defaultArnoldRenderOptions` doesn't exist
+ errors = []
+
try:
- relative_texture = cmds.getAttr(
+ absolute_texture = cmds.getAttr(
"defaultArnoldRenderOptions.absolute_texture_paths")
- relative_procedural = cmds.getAttr(
+ absolute_procedural = cmds.getAttr(
"defaultArnoldRenderOptions.absolute_procedural_paths")
texture_search_path = cmds.getAttr(
"defaultArnoldRenderOptions.tspath"
@@ -42,10 +44,11 @@ class ValidateAssRelativePaths(pyblish.api.InstancePlugin):
scene_dir, scene_basename = os.path.split(cmds.file(q=True, loc=True))
scene_name, _ = os.path.splitext(scene_basename)
- assert self.maya_is_true(relative_texture) is not True, \
- ("Texture path is set to be absolute")
- assert self.maya_is_true(relative_procedural) is not True, \
- ("Procedural path is set to be absolute")
+
+ if self.maya_is_true(absolute_texture):
+ errors.append("Texture path is set to be absolute")
+ if self.maya_is_true(absolute_procedural):
+ errors.append("Procedural path is set to be absolute")
anatomy = instance.context.data["anatomy"]
@@ -57,15 +60,20 @@ class ValidateAssRelativePaths(pyblish.api.InstancePlugin):
for k in keys:
paths.append("[{}]".format(k))
- self.log.info("discovered roots: {}".format(":".join(paths)))
+ self.log.debug("discovered roots: {}".format(":".join(paths)))
- assert ":".join(paths) in texture_search_path, (
- "Project roots are not in texture_search_path"
- )
+ if ":".join(paths) not in texture_search_path:
+ errors.append((
+ "Project roots {} are not in texture_search_path: {}"
+ ).format(paths, texture_search_path))
- assert ":".join(paths) in procedural_search_path, (
- "Project roots are not in procedural_search_path"
- )
+ if ":".join(paths) not in procedural_search_path:
+ errors.append((
+ "Project roots {} are not in procedural_search_path: {}"
+ ).format(paths, procedural_search_path))
+
+ if errors:
+ raise PublishValidationError("\n".join(errors))
@classmethod
def repair(cls, instance):
diff --git a/openpype/hosts/maya/plugins/publish/validate_assembly_name.py b/openpype/hosts/maya/plugins/publish/validate_assembly_name.py
index 02464b2302..bcc40760e0 100644
--- a/openpype/hosts/maya/plugins/publish/validate_assembly_name.py
+++ b/openpype/hosts/maya/plugins/publish/validate_assembly_name.py
@@ -1,6 +1,9 @@
import pyblish.api
import maya.cmds as cmds
import openpype.hosts.maya.api.action
+from openpype.pipeline.publish import (
+ PublishValidationError
+)
class ValidateAssemblyName(pyblish.api.InstancePlugin):
@@ -47,5 +50,5 @@ class ValidateAssemblyName(pyblish.api.InstancePlugin):
invalid = self.get_invalid(instance)
if invalid:
- raise RuntimeError("Found {} invalid named assembly "
+ raise PublishValidationError("Found {} invalid named assembly "
"items".format(len(invalid)))
diff --git a/openpype/hosts/maya/plugins/publish/validate_assembly_namespaces.py b/openpype/hosts/maya/plugins/publish/validate_assembly_namespaces.py
index 229da63c42..41ef78aab4 100644
--- a/openpype/hosts/maya/plugins/publish/validate_assembly_namespaces.py
+++ b/openpype/hosts/maya/plugins/publish/validate_assembly_namespaces.py
@@ -1,6 +1,8 @@
import pyblish.api
import openpype.hosts.maya.api.action
-
+from openpype.pipeline.publish import (
+ PublishValidationError
+)
class ValidateAssemblyNamespaces(pyblish.api.InstancePlugin):
"""Ensure namespaces are not nested
@@ -23,7 +25,7 @@ class ValidateAssemblyNamespaces(pyblish.api.InstancePlugin):
self.log.info("Checking namespace for %s" % instance.name)
if self.get_invalid(instance):
- raise RuntimeError("Nested namespaces found")
+ raise PublishValidationError("Nested namespaces found")
@classmethod
def get_invalid(cls, instance):
diff --git a/openpype/hosts/maya/plugins/publish/validate_assembly_transforms.py b/openpype/hosts/maya/plugins/publish/validate_assembly_transforms.py
index d1bca4091b..a24455ebaa 100644
--- a/openpype/hosts/maya/plugins/publish/validate_assembly_transforms.py
+++ b/openpype/hosts/maya/plugins/publish/validate_assembly_transforms.py
@@ -1,9 +1,8 @@
import pyblish.api
-
from maya import cmds
import openpype.hosts.maya.api.action
-from openpype.pipeline.publish import RepairAction
+from openpype.pipeline.publish import PublishValidationError, RepairAction
class ValidateAssemblyModelTransforms(pyblish.api.InstancePlugin):
@@ -38,8 +37,9 @@ class ValidateAssemblyModelTransforms(pyblish.api.InstancePlugin):
def process(self, instance):
invalid = self.get_invalid(instance)
if invalid:
- raise RuntimeError("Found {} invalid transforms of assembly "
- "items".format(len(invalid)))
+ raise PublishValidationError(
+ ("Found {} invalid transforms of assembly "
+ "items").format(len(invalid)))
@classmethod
def get_invalid(cls, instance):
@@ -90,6 +90,7 @@ class ValidateAssemblyModelTransforms(pyblish.api.InstancePlugin):
"""
from qtpy import QtWidgets
+
from openpype.hosts.maya.api import lib
# Store namespace in variable, cosmetics thingy
diff --git a/openpype/hosts/maya/plugins/publish/validate_attributes.py b/openpype/hosts/maya/plugins/publish/validate_attributes.py
index 7ebd9d7d03..c76d979fbf 100644
--- a/openpype/hosts/maya/plugins/publish/validate_attributes.py
+++ b/openpype/hosts/maya/plugins/publish/validate_attributes.py
@@ -1,17 +1,16 @@
from collections import defaultdict
-from maya import cmds
-
import pyblish.api
+from maya import cmds
from openpype.hosts.maya.api.lib import set_attribute
from openpype.pipeline.publish import (
- RepairAction,
- ValidateContentsOrder,
-)
+ OptionalPyblishPluginMixin, PublishValidationError, RepairAction,
+ ValidateContentsOrder)
-class ValidateAttributes(pyblish.api.InstancePlugin):
+class ValidateAttributes(pyblish.api.InstancePlugin,
+ OptionalPyblishPluginMixin):
"""Ensure attributes are consistent.
Attributes to validate and their values comes from the
@@ -32,13 +31,16 @@ class ValidateAttributes(pyblish.api.InstancePlugin):
attributes = None
def process(self, instance):
+ if not self.is_active(instance.data):
+ return
+
# Check for preset existence.
if not self.attributes:
return
invalid = self.get_invalid(instance, compute=True)
if invalid:
- raise RuntimeError(
+ raise PublishValidationError(
"Found attributes with invalid values: {}".format(invalid)
)
diff --git a/openpype/hosts/maya/plugins/publish/validate_camera_attributes.py b/openpype/hosts/maya/plugins/publish/validate_camera_attributes.py
index 13ea53a357..e5745612e9 100644
--- a/openpype/hosts/maya/plugins/publish/validate_camera_attributes.py
+++ b/openpype/hosts/maya/plugins/publish/validate_camera_attributes.py
@@ -1,8 +1,9 @@
+import pyblish.api
from maya import cmds
-import pyblish.api
import openpype.hosts.maya.api.action
-from openpype.pipeline.publish import ValidateContentsOrder
+from openpype.pipeline.publish import (
+ PublishValidationError, ValidateContentsOrder)
class ValidateCameraAttributes(pyblish.api.InstancePlugin):
@@ -65,4 +66,5 @@ class ValidateCameraAttributes(pyblish.api.InstancePlugin):
invalid = self.get_invalid(instance)
if invalid:
- raise RuntimeError("Invalid camera attributes: %s" % invalid)
+ raise PublishValidationError(
+ "Invalid camera attributes: {}".format(invalid))
diff --git a/openpype/hosts/maya/plugins/publish/validate_camera_contents.py b/openpype/hosts/maya/plugins/publish/validate_camera_contents.py
index 1ce8026fc2..767ac55718 100644
--- a/openpype/hosts/maya/plugins/publish/validate_camera_contents.py
+++ b/openpype/hosts/maya/plugins/publish/validate_camera_contents.py
@@ -1,8 +1,9 @@
+import pyblish.api
from maya import cmds
-import pyblish.api
import openpype.hosts.maya.api.action
-from openpype.pipeline.publish import ValidateContentsOrder
+from openpype.pipeline.publish import (
+ PublishValidationError, ValidateContentsOrder)
class ValidateCameraContents(pyblish.api.InstancePlugin):
@@ -34,7 +35,7 @@ class ValidateCameraContents(pyblish.api.InstancePlugin):
cameras = cmds.ls(shapes, type='camera', long=True)
if len(cameras) != 1:
cls.log.error("Camera instance must have a single camera. "
- "Found {0}: {1}".format(len(cameras), cameras))
+ "Found {0}: {1}".format(len(cameras), cameras))
invalid.extend(cameras)
# We need to check this edge case because returning an extended
@@ -48,10 +49,12 @@ class ValidateCameraContents(pyblish.api.InstancePlugin):
"members: {}".format(members))
return members
- raise RuntimeError("No cameras found in empty instance.")
+ raise PublishValidationError(
+ "No cameras found in empty instance.")
if not cls.validate_shapes:
- cls.log.info("not validating shapes in the content")
+ cls.log.debug("Not validating shapes in the camera content"
+ " because 'validate shapes' is disabled")
return invalid
# non-camera shapes
@@ -60,13 +63,10 @@ class ValidateCameraContents(pyblish.api.InstancePlugin):
if shapes:
shapes = list(shapes)
cls.log.error("Camera instance should only contain camera "
- "shapes. Found: {0}".format(shapes))
+ "shapes. Found: {0}".format(shapes))
invalid.extend(shapes)
-
-
invalid = list(set(invalid))
-
return invalid
def process(self, instance):
@@ -74,5 +74,5 @@ class ValidateCameraContents(pyblish.api.InstancePlugin):
invalid = self.get_invalid(instance)
if invalid:
- raise RuntimeError("Invalid camera contents: "
+ raise PublishValidationError("Invalid camera contents: "
"{0}".format(invalid))
diff --git a/openpype/hosts/maya/plugins/publish/validate_color_sets.py b/openpype/hosts/maya/plugins/publish/validate_color_sets.py
index 7ce3cca61a..766124cd9e 100644
--- a/openpype/hosts/maya/plugins/publish/validate_color_sets.py
+++ b/openpype/hosts/maya/plugins/publish/validate_color_sets.py
@@ -5,10 +5,12 @@ import openpype.hosts.maya.api.action
from openpype.pipeline.publish import (
RepairAction,
ValidateMeshOrder,
+ OptionalPyblishPluginMixin
)
-class ValidateColorSets(pyblish.api.Validator):
+class ValidateColorSets(pyblish.api.Validator,
+ OptionalPyblishPluginMixin):
"""Validate all meshes in the instance have unlocked normals
These can be removed manually through:
@@ -40,6 +42,8 @@ class ValidateColorSets(pyblish.api.Validator):
def process(self, instance):
"""Raise invalid when any of the meshes have ColorSets"""
+ if not self.is_active(instance.data):
+ return
invalid = self.get_invalid(instance)
diff --git a/openpype/hosts/maya/plugins/publish/validate_cycle_error.py b/openpype/hosts/maya/plugins/publish/validate_cycle_error.py
index 210ee4127c..24da091246 100644
--- a/openpype/hosts/maya/plugins/publish/validate_cycle_error.py
+++ b/openpype/hosts/maya/plugins/publish/validate_cycle_error.py
@@ -1,13 +1,14 @@
-from maya import cmds
-
import pyblish.api
+from maya import cmds
import openpype.hosts.maya.api.action
from openpype.hosts.maya.api.lib import maintained_selection
-from openpype.pipeline.publish import ValidateContentsOrder
+from openpype.pipeline.publish import (
+ OptionalPyblishPluginMixin, PublishValidationError, ValidateContentsOrder)
-class ValidateCycleError(pyblish.api.InstancePlugin):
+class ValidateCycleError(pyblish.api.InstancePlugin,
+ OptionalPyblishPluginMixin):
"""Validate nodes produce no cycle errors."""
order = ValidateContentsOrder + 0.05
@@ -18,9 +19,13 @@ class ValidateCycleError(pyblish.api.InstancePlugin):
optional = True
def process(self, instance):
+ if not self.is_active(instance.data):
+ return
+
invalid = self.get_invalid(instance)
if invalid:
- raise RuntimeError("Nodes produce a cycle error: %s" % invalid)
+ raise PublishValidationError(
+ "Nodes produce a cycle error: {}".format(invalid))
@classmethod
def get_invalid(cls, instance):
diff --git a/openpype/hosts/maya/plugins/publish/validate_frame_range.py b/openpype/hosts/maya/plugins/publish/validate_frame_range.py
index ccb351c880..c6184ed348 100644
--- a/openpype/hosts/maya/plugins/publish/validate_frame_range.py
+++ b/openpype/hosts/maya/plugins/publish/validate_frame_range.py
@@ -4,7 +4,8 @@ from maya import cmds
from openpype.pipeline.publish import (
RepairAction,
ValidateContentsOrder,
- PublishValidationError
+ PublishValidationError,
+ OptionalPyblishPluginMixin
)
from openpype.hosts.maya.api.lib_rendersetup import (
get_attr_overrides,
@@ -13,7 +14,8 @@ from openpype.hosts.maya.api.lib_rendersetup import (
from maya.app.renderSetup.model.override import AbsOverride
-class ValidateFrameRange(pyblish.api.InstancePlugin):
+class ValidateFrameRange(pyblish.api.InstancePlugin,
+ OptionalPyblishPluginMixin):
"""Validates the frame ranges.
This is an optional validator checking if the frame range on instance
@@ -40,6 +42,9 @@ class ValidateFrameRange(pyblish.api.InstancePlugin):
exclude_families = []
def process(self, instance):
+ if not self.is_active(instance.data):
+ return
+
context = instance.context
if instance.data.get("tileRendering"):
self.log.info((
@@ -102,10 +107,12 @@ class ValidateFrameRange(pyblish.api.InstancePlugin):
"({}).".format(label.title(), values[1], values[0])
)
- for e in errors:
- self.log.error(e)
+ if errors:
+ report = "Frame range settings are incorrect.\n\n"
+ for error in errors:
+ report += "- {}\n\n".format(error)
- assert len(errors) == 0, ("Frame range settings are incorrect")
+ raise PublishValidationError(report, title="Frame Range incorrect")
@classmethod
def repair(cls, instance):
@@ -150,7 +157,7 @@ class ValidateFrameRange(pyblish.api.InstancePlugin):
def repair_renderlayer(cls, instance):
"""Apply frame range in render settings"""
- layer = instance.data["setMembers"]
+ layer = instance.data["renderlayer"]
context = instance.context
start_attr = "defaultRenderGlobals.startFrame"
diff --git a/openpype/hosts/maya/plugins/publish/validate_glsl_plugin.py b/openpype/hosts/maya/plugins/publish/validate_glsl_plugin.py
index 53c2cf548a..da065fcf94 100644
--- a/openpype/hosts/maya/plugins/publish/validate_glsl_plugin.py
+++ b/openpype/hosts/maya/plugins/publish/validate_glsl_plugin.py
@@ -4,7 +4,8 @@ from maya import cmds
import pyblish.api
from openpype.pipeline.publish import (
RepairAction,
- ValidateContentsOrder
+ ValidateContentsOrder,
+ PublishValidationError
)
@@ -21,7 +22,7 @@ class ValidateGLSLPlugin(pyblish.api.InstancePlugin):
def process(self, instance):
if not cmds.pluginInfo("maya2glTF", query=True, loaded=True):
- raise RuntimeError("maya2glTF is not loaded")
+ raise PublishValidationError("maya2glTF is not loaded")
@classmethod
def repair(cls, instance):
diff --git a/openpype/hosts/maya/plugins/publish/validate_instance_has_members.py b/openpype/hosts/maya/plugins/publish/validate_instance_has_members.py
index 63849cfd12..7234f5a025 100644
--- a/openpype/hosts/maya/plugins/publish/validate_instance_has_members.py
+++ b/openpype/hosts/maya/plugins/publish/validate_instance_has_members.py
@@ -1,6 +1,9 @@
import pyblish.api
import openpype.hosts.maya.api.action
-from openpype.pipeline.publish import ValidateContentsOrder
+from openpype.pipeline.publish import (
+ ValidateContentsOrder,
+ PublishValidationError
+)
class ValidateInstanceHasMembers(pyblish.api.InstancePlugin):
@@ -14,18 +17,23 @@ class ValidateInstanceHasMembers(pyblish.api.InstancePlugin):
@classmethod
def get_invalid(cls, instance):
invalid = list()
- if not instance.data["setMembers"]:
+ if not instance.data.get("setMembers"):
objectset_name = instance.data['name']
invalid.append(objectset_name)
return invalid
def process(self, instance):
- # Allow renderlayer and workfile to be empty
- skip_families = ["workfile", "renderlayer", "rendersetup"]
+ # Allow renderlayer, rendersetup and workfile to be empty
+ skip_families = {"workfile", "renderlayer", "rendersetup"}
if instance.data.get("family") in skip_families:
return
invalid = self.get_invalid(instance)
if invalid:
- raise RuntimeError("Empty instances found: {0}".format(invalid))
+ # Invalid will always be a single entry, so report just that name
+ name = invalid[0]
+ raise PublishValidationError(
+ title="Empty instance",
+ message="Instance '{0}' is empty".format(name)
+ )
diff --git a/openpype/hosts/maya/plugins/publish/validate_instance_subset.py b/openpype/hosts/maya/plugins/publish/validate_instance_subset.py
index bb3dde761c..69e16efe57 100644
--- a/openpype/hosts/maya/plugins/publish/validate_instance_subset.py
+++ b/openpype/hosts/maya/plugins/publish/validate_instance_subset.py
@@ -2,7 +2,10 @@ import pyblish.api
import string
import six
-from openpype.pipeline.publish import ValidateContentsOrder
+from openpype.pipeline.publish import (
+ ValidateContentsOrder,
+ PublishValidationError
+)
# Allow only characters, numbers and underscore
allowed = set(string.ascii_lowercase +
@@ -28,7 +31,7 @@ class ValidateSubsetName(pyblish.api.InstancePlugin):
# Ensure subset data
if subset is None:
- raise RuntimeError("Instance is missing subset "
+ raise PublishValidationError("Instance is missing subset "
"name: {0}".format(subset))
if not isinstance(subset, six.string_types):
diff --git a/openpype/hosts/maya/plugins/publish/validate_instancer_content.py b/openpype/hosts/maya/plugins/publish/validate_instancer_content.py
index 32abe91f48..2f14693ef2 100644
--- a/openpype/hosts/maya/plugins/publish/validate_instancer_content.py
+++ b/openpype/hosts/maya/plugins/publish/validate_instancer_content.py
@@ -1,7 +1,8 @@
import maya.cmds as cmds
-
import pyblish.api
+
from openpype.hosts.maya.api import lib
+from openpype.pipeline.publish import PublishValidationError
class ValidateInstancerContent(pyblish.api.InstancePlugin):
@@ -52,7 +53,8 @@ class ValidateInstancerContent(pyblish.api.InstancePlugin):
error = True
if error:
- raise RuntimeError("Instancer Content is invalid. See log.")
+ raise PublishValidationError(
+ "Instancer Content is invalid. See log.")
def check_geometry_hidden(self, export_members):
diff --git a/openpype/hosts/maya/plugins/publish/validate_instancer_frame_ranges.py b/openpype/hosts/maya/plugins/publish/validate_instancer_frame_ranges.py
index 3514cf0a98..fcfcdce8b6 100644
--- a/openpype/hosts/maya/plugins/publish/validate_instancer_frame_ranges.py
+++ b/openpype/hosts/maya/plugins/publish/validate_instancer_frame_ranges.py
@@ -1,7 +1,10 @@
import os
import re
+
import pyblish.api
+from openpype.pipeline.publish import PublishValidationError
+
VERBOSE = False
@@ -164,5 +167,6 @@ class ValidateInstancerFrameRanges(pyblish.api.InstancePlugin):
if invalid:
self.log.error("Invalid nodes: {0}".format(invalid))
- raise RuntimeError("Invalid particle caches in instance. "
- "See logs for details.")
+ raise PublishValidationError(
+ ("Invalid particle caches in instance. "
+ "See logs for details."))
diff --git a/openpype/hosts/maya/plugins/publish/validate_loaded_plugin.py b/openpype/hosts/maya/plugins/publish/validate_loaded_plugin.py
index 624074aaf9..eac13053db 100644
--- a/openpype/hosts/maya/plugins/publish/validate_loaded_plugin.py
+++ b/openpype/hosts/maya/plugins/publish/validate_loaded_plugin.py
@@ -2,7 +2,10 @@ import os
import pyblish.api
import maya.cmds as cmds
-from openpype.pipeline.publish import RepairContextAction
+from openpype.pipeline.publish import (
+ RepairContextAction,
+ PublishValidationError
+)
class ValidateLoadedPlugin(pyblish.api.ContextPlugin):
@@ -35,7 +38,7 @@ class ValidateLoadedPlugin(pyblish.api.ContextPlugin):
invalid = self.get_invalid(context)
if invalid:
- raise RuntimeError(
+ raise PublishValidationError(
"Found forbidden plugin name: {}".format(", ".join(invalid))
)
diff --git a/openpype/hosts/maya/plugins/publish/validate_look_contents.py b/openpype/hosts/maya/plugins/publish/validate_look_contents.py
index 2d38099f0f..433d997840 100644
--- a/openpype/hosts/maya/plugins/publish/validate_look_contents.py
+++ b/openpype/hosts/maya/plugins/publish/validate_look_contents.py
@@ -1,6 +1,11 @@
import pyblish.api
import openpype.hosts.maya.api.action
-from openpype.pipeline.publish import ValidateContentsOrder
+from openpype.pipeline.publish import (
+ PublishValidationError,
+ ValidateContentsOrder
+)
+
+
from maya import cmds # noqa
@@ -28,19 +33,16 @@ class ValidateLookContents(pyblish.api.InstancePlugin):
"""Process all the nodes in the instance"""
if not instance[:]:
- raise RuntimeError("Instance is empty")
+ raise PublishValidationError("Instance is empty")
invalid = self.get_invalid(instance)
if invalid:
- raise RuntimeError("'{}' has invalid look "
+ raise PublishValidationError("'{}' has invalid look "
"content".format(instance.name))
@classmethod
def get_invalid(cls, instance):
"""Get all invalid nodes"""
- cls.log.info("Validating look content for "
- "'{}'".format(instance.name))
-
# check if data has the right attributes and content
attributes = cls.validate_lookdata_attributes(instance)
# check the looks for ID
diff --git a/openpype/hosts/maya/plugins/publish/validate_look_default_shaders_connections.py b/openpype/hosts/maya/plugins/publish/validate_look_default_shaders_connections.py
index 20f561a892..0109f6ebd5 100644
--- a/openpype/hosts/maya/plugins/publish/validate_look_default_shaders_connections.py
+++ b/openpype/hosts/maya/plugins/publish/validate_look_default_shaders_connections.py
@@ -1,7 +1,10 @@
from maya import cmds
import pyblish.api
-from openpype.pipeline.publish import ValidateContentsOrder
+from openpype.pipeline.publish import (
+ ValidateContentsOrder,
+ PublishValidationError
+)
class ValidateLookDefaultShadersConnections(pyblish.api.InstancePlugin):
@@ -56,4 +59,4 @@ class ValidateLookDefaultShadersConnections(pyblish.api.InstancePlugin):
invalid.append(plug)
if invalid:
- raise RuntimeError("Invalid connections.")
+ raise PublishValidationError("Invalid connections.")
diff --git a/openpype/hosts/maya/plugins/publish/validate_look_id_reference_edits.py b/openpype/hosts/maya/plugins/publish/validate_look_id_reference_edits.py
index a266a0fd74..5075d4050d 100644
--- a/openpype/hosts/maya/plugins/publish/validate_look_id_reference_edits.py
+++ b/openpype/hosts/maya/plugins/publish/validate_look_id_reference_edits.py
@@ -6,6 +6,7 @@ import openpype.hosts.maya.api.action
from openpype.pipeline.publish import (
RepairAction,
ValidateContentsOrder,
+ PublishValidationError
)
@@ -30,7 +31,7 @@ class ValidateLookIdReferenceEdits(pyblish.api.InstancePlugin):
invalid = self.get_invalid(instance)
if invalid:
- raise RuntimeError("Invalid nodes %s" % (invalid,))
+ raise PublishValidationError("Invalid nodes %s" % (invalid,))
@staticmethod
def get_invalid(instance):
diff --git a/openpype/hosts/maya/plugins/publish/validate_look_members_unique.py b/openpype/hosts/maya/plugins/publish/validate_look_members_unique.py
index f81e511ff3..4e01b55249 100644
--- a/openpype/hosts/maya/plugins/publish/validate_look_members_unique.py
+++ b/openpype/hosts/maya/plugins/publish/validate_look_members_unique.py
@@ -1,8 +1,10 @@
from collections import defaultdict
import pyblish.api
+
import openpype.hosts.maya.api.action
-from openpype.pipeline.publish import ValidatePipelineOrder
+from openpype.pipeline.publish import (
+ PublishValidationError, ValidatePipelineOrder)
class ValidateUniqueRelationshipMembers(pyblish.api.InstancePlugin):
@@ -33,8 +35,9 @@ class ValidateUniqueRelationshipMembers(pyblish.api.InstancePlugin):
invalid = self.get_invalid(instance)
if invalid:
- raise RuntimeError("Members found without non-unique IDs: "
- "{0}".format(invalid))
+ raise PublishValidationError(
+ ("Members found without non-unique IDs: "
+ "{0}").format(invalid))
@staticmethod
def get_invalid(instance):
diff --git a/openpype/hosts/maya/plugins/publish/validate_look_no_default_shaders.py b/openpype/hosts/maya/plugins/publish/validate_look_no_default_shaders.py
index db6aadae8d..231331411b 100644
--- a/openpype/hosts/maya/plugins/publish/validate_look_no_default_shaders.py
+++ b/openpype/hosts/maya/plugins/publish/validate_look_no_default_shaders.py
@@ -2,7 +2,10 @@ from maya import cmds
import pyblish.api
import openpype.hosts.maya.api.action
-from openpype.pipeline.publish import ValidateContentsOrder
+from openpype.pipeline.publish import (
+ ValidateContentsOrder,
+ PublishValidationError
+)
class ValidateLookNoDefaultShaders(pyblish.api.InstancePlugin):
@@ -37,7 +40,7 @@ class ValidateLookNoDefaultShaders(pyblish.api.InstancePlugin):
invalid = self.get_invalid(instance)
if invalid:
- raise RuntimeError("Invalid node relationships found: "
+ raise PublishValidationError("Invalid node relationships found: "
"{0}".format(invalid))
@classmethod
diff --git a/openpype/hosts/maya/plugins/publish/validate_look_sets.py b/openpype/hosts/maya/plugins/publish/validate_look_sets.py
index 8434ddde04..657bab0479 100644
--- a/openpype/hosts/maya/plugins/publish/validate_look_sets.py
+++ b/openpype/hosts/maya/plugins/publish/validate_look_sets.py
@@ -1,7 +1,10 @@
import pyblish.api
import openpype.hosts.maya.api.action
from openpype.hosts.maya.api import lib
-from openpype.pipeline.publish import ValidateContentsOrder
+from openpype.pipeline.publish import (
+ ValidateContentsOrder,
+ PublishValidationError
+)
class ValidateLookSets(pyblish.api.InstancePlugin):
@@ -48,16 +51,13 @@ class ValidateLookSets(pyblish.api.InstancePlugin):
invalid = self.get_invalid(instance)
if invalid:
- raise RuntimeError("'{}' has invalid look "
+ raise PublishValidationError("'{}' has invalid look "
"content".format(instance.name))
@classmethod
def get_invalid(cls, instance):
"""Get all invalid nodes"""
- cls.log.info("Validating look content for "
- "'{}'".format(instance.name))
-
relationships = instance.data["lookData"]["relationships"]
invalid = []
diff --git a/openpype/hosts/maya/plugins/publish/validate_look_shading_group.py b/openpype/hosts/maya/plugins/publish/validate_look_shading_group.py
index 9b57b06ee7..dbe7a70e6a 100644
--- a/openpype/hosts/maya/plugins/publish/validate_look_shading_group.py
+++ b/openpype/hosts/maya/plugins/publish/validate_look_shading_group.py
@@ -5,6 +5,7 @@ import openpype.hosts.maya.api.action
from openpype.pipeline.publish import (
RepairAction,
ValidateContentsOrder,
+ PublishValidationError
)
@@ -27,7 +28,7 @@ class ValidateShadingEngine(pyblish.api.InstancePlugin):
invalid = self.get_invalid(instance)
if invalid:
- raise RuntimeError(
+ raise PublishValidationError(
"Found shading engines with incorrect naming:"
"\n{}".format(invalid)
)
diff --git a/openpype/hosts/maya/plugins/publish/validate_look_single_shader.py b/openpype/hosts/maya/plugins/publish/validate_look_single_shader.py
index 788e440d12..acd761a944 100644
--- a/openpype/hosts/maya/plugins/publish/validate_look_single_shader.py
+++ b/openpype/hosts/maya/plugins/publish/validate_look_single_shader.py
@@ -1,8 +1,9 @@
+import pyblish.api
from maya import cmds
-import pyblish.api
import openpype.hosts.maya.api.action
-from openpype.pipeline.publish import ValidateContentsOrder
+from openpype.pipeline.publish import (
+ PublishValidationError, ValidateContentsOrder)
class ValidateSingleShader(pyblish.api.InstancePlugin):
@@ -23,9 +24,9 @@ class ValidateSingleShader(pyblish.api.InstancePlugin):
invalid = self.get_invalid(instance)
if invalid:
- raise RuntimeError("Found shapes which don't have a single shader "
- "assigned: "
- "\n{}".format(invalid))
+ raise PublishValidationError(
+ ("Found shapes which don't have a single shader "
+ "assigned:\n{}").format(invalid))
@classmethod
def get_invalid(cls, instance):
diff --git a/openpype/hosts/maya/plugins/publish/validate_maya_units.py b/openpype/hosts/maya/plugins/publish/validate_maya_units.py
index 011df0846c..1d5619795f 100644
--- a/openpype/hosts/maya/plugins/publish/validate_maya_units.py
+++ b/openpype/hosts/maya/plugins/publish/validate_maya_units.py
@@ -7,6 +7,7 @@ from openpype.pipeline.context_tools import get_current_project_asset
from openpype.pipeline.publish import (
RepairContextAction,
ValidateSceneOrder,
+ PublishXmlValidationError
)
@@ -26,6 +27,30 @@ class ValidateMayaUnits(pyblish.api.ContextPlugin):
validate_fps = True
+ nice_message_format = (
+ "- {setting} must be {required_value}. "
+ "Your scene is set to {current_value}"
+ )
+ log_message_format = (
+ "Maya scene {setting} must be '{required_value}'. "
+ "Current value is '{current_value}'."
+ )
+
+ @classmethod
+ def apply_settings(cls, project_settings, system_settings):
+ """Apply project settings to creator"""
+ settings = (
+ project_settings["maya"]["publish"]["ValidateMayaUnits"]
+ )
+
+ cls.validate_linear_units = settings.get("validate_linear_units",
+ cls.validate_linear_units)
+ cls.linear_units = settings.get("linear_units", cls.linear_units)
+ cls.validate_angular_units = settings.get("validate_angular_units",
+ cls.validate_angular_units)
+ cls.angular_units = settings.get("angular_units", cls.angular_units)
+ cls.validate_fps = settings.get("validate_fps", cls.validate_fps)
+
def process(self, context):
# Collected units
@@ -34,15 +59,14 @@ class ValidateMayaUnits(pyblish.api.ContextPlugin):
fps = context.data.get('fps')
- # TODO replace query with using 'context.data["assetEntity"]'
- asset_doc = get_current_project_asset()
+ asset_doc = context.data["assetEntity"]
asset_fps = mayalib.convert_to_maya_fps(asset_doc["data"]["fps"])
self.log.info('Units (linear): {0}'.format(linearunits))
self.log.info('Units (angular): {0}'.format(angularunits))
self.log.info('Units (time): {0} FPS'.format(fps))
- valid = True
+ invalid = []
# Check if units are correct
if (
@@ -50,26 +74,43 @@ class ValidateMayaUnits(pyblish.api.ContextPlugin):
and linearunits
and linearunits != self.linear_units
):
- self.log.error("Scene linear units must be {}".format(
- self.linear_units))
- valid = False
+ invalid.append({
+ "setting": "Linear units",
+ "required_value": self.linear_units,
+ "current_value": linearunits
+ })
if (
self.validate_angular_units
and angularunits
and angularunits != self.angular_units
):
- self.log.error("Scene angular units must be {}".format(
- self.angular_units))
- valid = False
+ invalid.append({
+ "setting": "Angular units",
+ "required_value": self.angular_units,
+ "current_value": angularunits
+ })
if self.validate_fps and fps and fps != asset_fps:
- self.log.error(
- "Scene must be {} FPS (now is {})".format(asset_fps, fps))
- valid = False
+ invalid.append({
+ "setting": "FPS",
+ "required_value": asset_fps,
+ "current_value": fps
+ })
- if not valid:
- raise RuntimeError("Invalid units set.")
+ if invalid:
+
+ issues = []
+ for data in invalid:
+ self.log.error(self.log_message_format.format(**data))
+ issues.append(self.nice_message_format.format(**data))
+ issues = "\n".join(issues)
+
+ raise PublishXmlValidationError(
+ plugin=self,
+ message="Invalid maya scene units",
+ formatting_data={"issues": issues}
+ )
@classmethod
def repair(cls, context):
diff --git a/openpype/hosts/maya/plugins/publish/validate_mesh_arnold_attributes.py b/openpype/hosts/maya/plugins/publish/validate_mesh_arnold_attributes.py
index a580a1c787..55624726ea 100644
--- a/openpype/hosts/maya/plugins/publish/validate_mesh_arnold_attributes.py
+++ b/openpype/hosts/maya/plugins/publish/validate_mesh_arnold_attributes.py
@@ -10,12 +10,15 @@ from openpype.hosts.maya.api.lib import (
set_attribute
)
from openpype.pipeline.publish import (
+ OptionalPyblishPluginMixin,
RepairAction,
ValidateMeshOrder,
+ PublishValidationError
)
-class ValidateMeshArnoldAttributes(pyblish.api.InstancePlugin):
+class ValidateMeshArnoldAttributes(pyblish.api.InstancePlugin,
+ OptionalPyblishPluginMixin):
"""Validate the mesh has default Arnold attributes.
It compares all Arnold attributes from a default mesh. This is to ensure
@@ -30,12 +33,14 @@ class ValidateMeshArnoldAttributes(pyblish.api.InstancePlugin):
openpype.hosts.maya.api.action.SelectInvalidAction,
RepairAction
]
+
optional = True
- if cmds.getAttr(
- "defaultRenderGlobals.currentRenderer").lower() == "arnold":
- active = True
- else:
- active = False
+
+ @classmethod
+ def apply_settings(cls, project_settings, system_settings):
+ # todo: this should not be done this way
+ attr = "defaultRenderGlobals.currentRenderer"
+ cls.active = cmds.getAttr(attr).lower() == "arnold"
@classmethod
def get_default_attributes(cls):
@@ -50,7 +55,7 @@ class ValidateMeshArnoldAttributes(pyblish.api.InstancePlugin):
plug = "{}.{}".format(mesh, attr)
try:
defaults[attr] = get_attribute(plug)
- except RuntimeError:
+ except PublishValidationError:
cls.log.debug("Ignoring arnold attribute: {}".format(attr))
return defaults
@@ -101,10 +106,12 @@ class ValidateMeshArnoldAttributes(pyblish.api.InstancePlugin):
)
def process(self, instance):
+ if not self.is_active(instance.data):
+ return
invalid = self.get_invalid_attributes(instance, compute=True)
if invalid:
- raise RuntimeError(
+ raise PublishValidationError(
"Non-default Arnold attributes found in instance:"
" {0}".format(invalid)
)
diff --git a/openpype/hosts/maya/plugins/publish/validate_mesh_empty.py b/openpype/hosts/maya/plugins/publish/validate_mesh_empty.py
index 848d66c4ae..c3264f3d98 100644
--- a/openpype/hosts/maya/plugins/publish/validate_mesh_empty.py
+++ b/openpype/hosts/maya/plugins/publish/validate_mesh_empty.py
@@ -4,7 +4,8 @@ import pyblish.api
import openpype.hosts.maya.api.action
from openpype.pipeline.publish import (
RepairAction,
- ValidateMeshOrder
+ ValidateMeshOrder,
+ PublishValidationError
)
@@ -49,6 +50,6 @@ class ValidateMeshEmpty(pyblish.api.InstancePlugin):
invalid = self.get_invalid(instance)
if invalid:
- raise RuntimeError(
+ raise PublishValidationError(
"Meshes found in instance without any vertices: %s" % invalid
)
diff --git a/openpype/hosts/maya/plugins/publish/validate_mesh_has_uv.py b/openpype/hosts/maya/plugins/publish/validate_mesh_has_uv.py
index b7836b3e92..c382d1b983 100644
--- a/openpype/hosts/maya/plugins/publish/validate_mesh_has_uv.py
+++ b/openpype/hosts/maya/plugins/publish/validate_mesh_has_uv.py
@@ -2,11 +2,16 @@ from maya import cmds
import pyblish.api
import openpype.hosts.maya.api.action
-from openpype.pipeline.publish import ValidateMeshOrder
+from openpype.pipeline.publish import (
+ ValidateMeshOrder,
+ OptionalPyblishPluginMixin,
+ PublishValidationError
+)
from openpype.hosts.maya.api.lib import len_flattened
-class ValidateMeshHasUVs(pyblish.api.InstancePlugin):
+class ValidateMeshHasUVs(pyblish.api.InstancePlugin,
+ OptionalPyblishPluginMixin):
"""Validate the current mesh has UVs.
It validates whether the current UV set has non-zero UVs and
@@ -66,8 +71,19 @@ class ValidateMeshHasUVs(pyblish.api.InstancePlugin):
return invalid
def process(self, instance):
+ if not self.is_active(instance.data):
+ return
invalid = self.get_invalid(instance)
if invalid:
- raise RuntimeError("Meshes found in instance without "
- "valid UVs: {0}".format(invalid))
+
+ names = "
".join(
+ " - {}".format(node) for node in invalid
+ )
+
+ raise PublishValidationError(
+ title="Mesh has missing UVs",
+ message="Model meshes are required to have UVs.
"
+ "Meshes detected with invalid or missing UVs:
"
+ "{0}".format(names)
+ )
diff --git a/openpype/hosts/maya/plugins/publish/validate_mesh_no_negative_scale.py b/openpype/hosts/maya/plugins/publish/validate_mesh_no_negative_scale.py
index 664e2b5772..48b4d0f557 100644
--- a/openpype/hosts/maya/plugins/publish/validate_mesh_no_negative_scale.py
+++ b/openpype/hosts/maya/plugins/publish/validate_mesh_no_negative_scale.py
@@ -2,7 +2,17 @@ from maya import cmds
import pyblish.api
import openpype.hosts.maya.api.action
-from openpype.pipeline.publish import ValidateMeshOrder
+from openpype.pipeline.publish import (
+ ValidateMeshOrder,
+ PublishValidationError
+)
+
+
+def _as_report_list(values, prefix="- ", suffix="\n"):
+ """Return list as bullet point list for a report"""
+ if not values:
+ return ""
+ return prefix + (suffix + prefix).join(values)
class ValidateMeshNoNegativeScale(pyblish.api.Validator):
@@ -46,5 +56,9 @@ class ValidateMeshNoNegativeScale(pyblish.api.Validator):
invalid = self.get_invalid(instance)
if invalid:
- raise ValueError("Meshes found with negative "
- "scale: {0}".format(invalid))
+ raise PublishValidationError(
+ "Meshes found with negative scale:\n\n{0}".format(
+ _as_report_list(sorted(invalid))
+ ),
+ title="Negative scale"
+ )
diff --git a/openpype/hosts/maya/plugins/publish/validate_mesh_non_manifold.py b/openpype/hosts/maya/plugins/publish/validate_mesh_non_manifold.py
index d7711da722..6fd63fb29f 100644
--- a/openpype/hosts/maya/plugins/publish/validate_mesh_non_manifold.py
+++ b/openpype/hosts/maya/plugins/publish/validate_mesh_non_manifold.py
@@ -2,7 +2,17 @@ from maya import cmds
import pyblish.api
import openpype.hosts.maya.api.action
-from openpype.pipeline.publish import ValidateMeshOrder
+from openpype.pipeline.publish import (
+ ValidateMeshOrder,
+ PublishValidationError
+)
+
+
+def _as_report_list(values, prefix="- ", suffix="\n"):
+ """Return list as bullet point list for a report"""
+ if not values:
+ return ""
+ return prefix + (suffix + prefix).join(values)
class ValidateMeshNonManifold(pyblish.api.Validator):
@@ -16,7 +26,7 @@ class ValidateMeshNonManifold(pyblish.api.Validator):
order = ValidateMeshOrder
hosts = ['maya']
families = ['model']
- label = 'Mesh Non-Manifold Vertices/Edges'
+ label = 'Mesh Non-Manifold Edges/Vertices'
actions = [openpype.hosts.maya.api.action.SelectInvalidAction]
@staticmethod
@@ -38,5 +48,9 @@ class ValidateMeshNonManifold(pyblish.api.Validator):
invalid = self.get_invalid(instance)
if invalid:
- raise ValueError("Meshes found with non-manifold "
- "edges/vertices: {0}".format(invalid))
+ raise PublishValidationError(
+ "Meshes found with non-manifold edges/vertices:\n\n{0}".format(
+ _as_report_list(sorted(invalid))
+ ),
+ title="Non-Manifold Edges/Vertices"
+ )
diff --git a/openpype/hosts/maya/plugins/publish/validate_mesh_non_zero_edge.py b/openpype/hosts/maya/plugins/publish/validate_mesh_non_zero_edge.py
index b49ba85648..5ec6e5779b 100644
--- a/openpype/hosts/maya/plugins/publish/validate_mesh_non_zero_edge.py
+++ b/openpype/hosts/maya/plugins/publish/validate_mesh_non_zero_edge.py
@@ -3,10 +3,15 @@ from maya import cmds
import pyblish.api
import openpype.hosts.maya.api.action
from openpype.hosts.maya.api import lib
-from openpype.pipeline.publish import ValidateMeshOrder
+from openpype.pipeline.publish import (
+ ValidateMeshOrder,
+ OptionalPyblishPluginMixin,
+ PublishValidationError
+)
-class ValidateMeshNonZeroEdgeLength(pyblish.api.InstancePlugin):
+class ValidateMeshNonZeroEdgeLength(pyblish.api.InstancePlugin,
+ OptionalPyblishPluginMixin):
"""Validate meshes don't have edges with a zero length.
Based on Maya's polyCleanup 'Edges with zero length'.
@@ -65,7 +70,14 @@ class ValidateMeshNonZeroEdgeLength(pyblish.api.InstancePlugin):
def process(self, instance):
"""Process all meshes"""
+ if not self.is_active(instance.data):
+ return
+
invalid = self.get_invalid(instance)
if invalid:
- raise RuntimeError("Meshes found with zero "
- "edge length: {0}".format(invalid))
+ label = "Meshes found with zero edge length"
+ raise PublishValidationError(
+ message="{}: {}".format(label, invalid),
+ title=label,
+ description="{}:\n- ".format(label) + "\n- ".join(invalid)
+ )
diff --git a/openpype/hosts/maya/plugins/publish/validate_mesh_normals_unlocked.py b/openpype/hosts/maya/plugins/publish/validate_mesh_normals_unlocked.py
index 1b754a9829..7855e79119 100644
--- a/openpype/hosts/maya/plugins/publish/validate_mesh_normals_unlocked.py
+++ b/openpype/hosts/maya/plugins/publish/validate_mesh_normals_unlocked.py
@@ -6,10 +6,20 @@ import openpype.hosts.maya.api.action
from openpype.pipeline.publish import (
RepairAction,
ValidateMeshOrder,
+ OptionalPyblishPluginMixin,
+ PublishValidationError
)
-class ValidateMeshNormalsUnlocked(pyblish.api.Validator):
+def _as_report_list(values, prefix="- ", suffix="\n"):
+ """Return list as bullet point list for a report"""
+ if not values:
+ return ""
+ return prefix + (suffix + prefix).join(values)
+
+
+class ValidateMeshNormalsUnlocked(pyblish.api.Validator,
+ OptionalPyblishPluginMixin):
"""Validate all meshes in the instance have unlocked normals
These can be unlocked manually through:
@@ -47,12 +57,18 @@ class ValidateMeshNormalsUnlocked(pyblish.api.Validator):
def process(self, instance):
"""Raise invalid when any of the meshes have locked normals"""
+ if not self.is_active(instance.data):
+ return
invalid = self.get_invalid(instance)
if invalid:
- raise ValueError("Meshes found with "
- "locked normals: {0}".format(invalid))
+ raise PublishValidationError(
+ "Meshes found with locked normals:\n\n{0}".format(
+ _as_report_list(sorted(invalid))
+ ),
+ title="Locked normals"
+ )
@classmethod
def repair(cls, instance):
diff --git a/openpype/hosts/maya/plugins/publish/validate_mesh_overlapping_uvs.py b/openpype/hosts/maya/plugins/publish/validate_mesh_overlapping_uvs.py
index 7dd66eed6c..88e1507dd3 100644
--- a/openpype/hosts/maya/plugins/publish/validate_mesh_overlapping_uvs.py
+++ b/openpype/hosts/maya/plugins/publish/validate_mesh_overlapping_uvs.py
@@ -6,7 +6,18 @@ import maya.api.OpenMaya as om
import pyblish.api
import openpype.hosts.maya.api.action
-from openpype.pipeline.publish import ValidateMeshOrder
+from openpype.pipeline.publish import (
+ ValidateMeshOrder,
+ OptionalPyblishPluginMixin,
+ PublishValidationError
+)
+
+
+def _as_report_list(values, prefix="- ", suffix="\n"):
+ """Return list as bullet point list for a report"""
+ if not values:
+ return ""
+ return prefix + (suffix + prefix).join(values)
class GetOverlappingUVs(object):
@@ -225,7 +236,8 @@ class GetOverlappingUVs(object):
return faces
-class ValidateMeshHasOverlappingUVs(pyblish.api.InstancePlugin):
+class ValidateMeshHasOverlappingUVs(pyblish.api.InstancePlugin,
+ OptionalPyblishPluginMixin):
""" Validate the current mesh overlapping UVs.
It validates whether the current UVs are overlapping or not.
@@ -281,9 +293,14 @@ class ValidateMeshHasOverlappingUVs(pyblish.api.InstancePlugin):
return instance.data.get("overlapping_faces", [])
def process(self, instance):
+ if not self.is_active(instance.data):
+ return
invalid = self.get_invalid(instance, compute=True)
if invalid:
- raise RuntimeError(
- "Meshes found with overlapping UVs: {0}".format(invalid)
+ raise PublishValidationError(
+ "Meshes found with overlapping UVs:\n\n{0}".format(
+ _as_report_list(sorted(invalid))
+ ),
+ title="Overlapping UVs"
)
diff --git a/openpype/hosts/maya/plugins/publish/validate_mesh_shader_connections.py b/openpype/hosts/maya/plugins/publish/validate_mesh_shader_connections.py
index 2a0abe975c..1db7613999 100644
--- a/openpype/hosts/maya/plugins/publish/validate_mesh_shader_connections.py
+++ b/openpype/hosts/maya/plugins/publish/validate_mesh_shader_connections.py
@@ -5,6 +5,7 @@ import openpype.hosts.maya.api.action
from openpype.pipeline.publish import (
RepairAction,
ValidateMeshOrder,
+ PublishValidationError
)
@@ -102,7 +103,7 @@ class ValidateMeshShaderConnections(pyblish.api.InstancePlugin):
invalid = self.get_invalid(instance)
if invalid:
- raise RuntimeError("Shapes found with invalid shader "
+ raise PublishValidationError("Shapes found with invalid shader "
"connections: {0}".format(invalid))
@staticmethod
diff --git a/openpype/hosts/maya/plugins/publish/validate_mesh_single_uv_set.py b/openpype/hosts/maya/plugins/publish/validate_mesh_single_uv_set.py
index faa360380e..46364735b9 100644
--- a/openpype/hosts/maya/plugins/publish/validate_mesh_single_uv_set.py
+++ b/openpype/hosts/maya/plugins/publish/validate_mesh_single_uv_set.py
@@ -6,10 +6,12 @@ from openpype.hosts.maya.api import lib
from openpype.pipeline.publish import (
RepairAction,
ValidateMeshOrder,
+ OptionalPyblishPluginMixin
)
-class ValidateMeshSingleUVSet(pyblish.api.InstancePlugin):
+class ValidateMeshSingleUVSet(pyblish.api.InstancePlugin,
+ OptionalPyblishPluginMixin):
"""Warn on multiple UV sets existing for each polygon mesh.
On versions prior to Maya 2017 this will force no multiple uv sets because
@@ -47,6 +49,8 @@ class ValidateMeshSingleUVSet(pyblish.api.InstancePlugin):
def process(self, instance):
"""Process all the nodes in the instance 'objectSet'"""
+ if not self.is_active(instance.data):
+ return
invalid = self.get_invalid(instance)
diff --git a/openpype/hosts/maya/plugins/publish/validate_mesh_uv_set_map1.py b/openpype/hosts/maya/plugins/publish/validate_mesh_uv_set_map1.py
index 40ddb916ca..116fecbcba 100644
--- a/openpype/hosts/maya/plugins/publish/validate_mesh_uv_set_map1.py
+++ b/openpype/hosts/maya/plugins/publish/validate_mesh_uv_set_map1.py
@@ -5,10 +5,12 @@ import openpype.hosts.maya.api.action
from openpype.pipeline.publish import (
RepairAction,
ValidateMeshOrder,
+ OptionalPyblishPluginMixin
)
-class ValidateMeshUVSetMap1(pyblish.api.InstancePlugin):
+class ValidateMeshUVSetMap1(pyblish.api.InstancePlugin,
+ OptionalPyblishPluginMixin):
"""Validate model's default set exists and is named 'map1'.
In Maya meshes by default have a uv set named "map1" that cannot be
@@ -48,6 +50,8 @@ class ValidateMeshUVSetMap1(pyblish.api.InstancePlugin):
def process(self, instance):
"""Process all the nodes in the instance 'objectSet'"""
+ if not self.is_active(instance.data):
+ return
invalid = self.get_invalid(instance)
if invalid:
diff --git a/openpype/hosts/maya/plugins/publish/validate_mesh_vertices_have_edges.py b/openpype/hosts/maya/plugins/publish/validate_mesh_vertices_have_edges.py
index d885158004..7167859444 100644
--- a/openpype/hosts/maya/plugins/publish/validate_mesh_vertices_have_edges.py
+++ b/openpype/hosts/maya/plugins/publish/validate_mesh_vertices_have_edges.py
@@ -1,12 +1,10 @@
+import pyblish.api
from maya import cmds
-import pyblish.api
import openpype.hosts.maya.api.action
-from openpype.pipeline.publish import (
- RepairAction,
- ValidateMeshOrder,
-)
from openpype.hosts.maya.api.lib import len_flattened
+from openpype.pipeline.publish import (
+ PublishValidationError, RepairAction, ValidateMeshOrder)
class ValidateMeshVerticesHaveEdges(pyblish.api.InstancePlugin):
@@ -40,8 +38,9 @@ class ValidateMeshVerticesHaveEdges(pyblish.api.InstancePlugin):
# This fix only works in Maya 2016 EXT2 and newer
if float(cmds.about(version=True)) <= 2016.0:
- raise RuntimeError("Repair not supported in Maya version below "
- "2016 EXT 2")
+ raise PublishValidationError(
+ ("Repair not supported in Maya version below "
+ "2016 EXT 2"))
invalid = cls.get_invalid(instance)
for node in invalid:
@@ -76,5 +75,6 @@ class ValidateMeshVerticesHaveEdges(pyblish.api.InstancePlugin):
invalid = self.get_invalid(instance)
if invalid:
- raise RuntimeError("Meshes found in instance with vertices that "
- "have no edges: %s" % invalid)
+ raise PublishValidationError(
+ ("Meshes found in instance with vertices that "
+ "have no edges: {}").format(invalid))
diff --git a/openpype/hosts/maya/plugins/publish/validate_model_content.py b/openpype/hosts/maya/plugins/publish/validate_model_content.py
index 723346a285..9ba458a416 100644
--- a/openpype/hosts/maya/plugins/publish/validate_model_content.py
+++ b/openpype/hosts/maya/plugins/publish/validate_model_content.py
@@ -3,7 +3,10 @@ from maya import cmds
import pyblish.api
import openpype.hosts.maya.api.action
from openpype.hosts.maya.api import lib
-from openpype.pipeline.publish import ValidateContentsOrder
+from openpype.pipeline.publish import (
+ ValidateContentsOrder,
+ PublishValidationError
+)
class ValidateModelContent(pyblish.api.InstancePlugin):
@@ -28,7 +31,7 @@ class ValidateModelContent(pyblish.api.InstancePlugin):
content_instance = instance.data.get("setMembers", None)
if not content_instance:
cls.log.error("Instance has no nodes!")
- return True
+ return [instance.data["name"]]
# All children will be included in the extracted export so we also
# validate *all* descendents of the set members and we skip any
@@ -97,4 +100,7 @@ class ValidateModelContent(pyblish.api.InstancePlugin):
invalid = self.get_invalid(instance)
if invalid:
- raise RuntimeError("Model content is invalid. See log.")
+ raise PublishValidationError(
+ title="Model content is invalid",
+ message="See log for more details"
+ )
diff --git a/openpype/hosts/maya/plugins/publish/validate_model_name.py b/openpype/hosts/maya/plugins/publish/validate_model_name.py
index 0e7adc640f..6948dcf724 100644
--- a/openpype/hosts/maya/plugins/publish/validate_model_name.py
+++ b/openpype/hosts/maya/plugins/publish/validate_model_name.py
@@ -1,22 +1,24 @@
# -*- coding: utf-8 -*-
"""Validate model nodes names."""
import os
-import re
import platform
+import re
+import gridfs
+import pyblish.api
from maya import cmds
-import pyblish.api
-from openpype.pipeline import legacy_io
-from openpype.pipeline.publish import ValidateContentsOrder
import openpype.hosts.maya.api.action
+from openpype.client.mongo import OpenPypeMongoConnection
from openpype.hosts.maya.api.shader_definition_editor import (
DEFINITION_FILENAME)
-from openpype.client.mongo import OpenPypeMongoConnection
-import gridfs
+from openpype.pipeline import legacy_io
+from openpype.pipeline.publish import (
+ OptionalPyblishPluginMixin, PublishValidationError, ValidateContentsOrder)
-class ValidateModelName(pyblish.api.InstancePlugin):
+class ValidateModelName(pyblish.api.InstancePlugin,
+ OptionalPyblishPluginMixin):
"""Validate name of model
starts with (somename)_###_(materialID)_GEO
@@ -148,7 +150,11 @@ class ValidateModelName(pyblish.api.InstancePlugin):
def process(self, instance):
"""Plugin entry point."""
+ if not self.is_active(instance.data):
+ return
+
invalid = self.get_invalid(instance)
if invalid:
- raise RuntimeError("Model naming is invalid. See the log.")
+ raise PublishValidationError(
+ "Model naming is invalid. See the log.")
diff --git a/openpype/hosts/maya/plugins/publish/validate_mvlook_contents.py b/openpype/hosts/maya/plugins/publish/validate_mvlook_contents.py
index 04db5a061b..68784a165d 100644
--- a/openpype/hosts/maya/plugins/publish/validate_mvlook_contents.py
+++ b/openpype/hosts/maya/plugins/publish/validate_mvlook_contents.py
@@ -1,14 +1,19 @@
import os
import pyblish.api
import openpype.hosts.maya.api.action
-from openpype.pipeline.publish import ValidateContentsOrder
+from openpype.pipeline.publish import (
+ ValidateContentsOrder,
+ OptionalPyblishPluginMixin,
+ PublishValidationError
+)
COLOUR_SPACES = ['sRGB', 'linear', 'auto']
MIPMAP_EXTENSIONS = ['tdl']
-class ValidateMvLookContents(pyblish.api.InstancePlugin):
+class ValidateMvLookContents(pyblish.api.InstancePlugin,
+ OptionalPyblishPluginMixin):
order = ValidateContentsOrder
families = ['mvLook']
hosts = ['maya']
@@ -23,6 +28,9 @@ class ValidateMvLookContents(pyblish.api.InstancePlugin):
enforced_intents = ['-', 'Final']
def process(self, instance):
+ if not self.is_active(instance.data):
+ return
+
intent = instance.context.data['intent']['value']
publishMipMap = instance.data["publishMipMap"]
enforced = True
@@ -35,7 +43,7 @@ class ValidateMvLookContents(pyblish.api.InstancePlugin):
.format(intent))
if not instance[:]:
- raise RuntimeError("Instance is empty")
+ raise PublishValidationError("Instance is empty")
invalid = set()
@@ -62,12 +70,12 @@ class ValidateMvLookContents(pyblish.api.InstancePlugin):
if enforced:
invalid.add(node)
self.log.error(msg)
- raise RuntimeError(msg)
+ raise PublishValidationError(msg)
else:
self.log.warning(msg)
if invalid:
- raise RuntimeError("'{}' has invalid look "
+ raise PublishValidationError("'{}' has invalid look "
"content".format(instance.name))
def valid_file(self, fname):
diff --git a/openpype/hosts/maya/plugins/publish/validate_no_animation.py b/openpype/hosts/maya/plugins/publish/validate_no_animation.py
index 2e7cafe4ab..9ff189cf83 100644
--- a/openpype/hosts/maya/plugins/publish/validate_no_animation.py
+++ b/openpype/hosts/maya/plugins/publish/validate_no_animation.py
@@ -2,10 +2,22 @@ from maya import cmds
import pyblish.api
import openpype.hosts.maya.api.action
-from openpype.pipeline.publish import ValidateContentsOrder
+from openpype.pipeline.publish import (
+ ValidateContentsOrder,
+ OptionalPyblishPluginMixin,
+ PublishValidationError
+)
-class ValidateNoAnimation(pyblish.api.Validator):
+def _as_report_list(values, prefix="- ", suffix="\n"):
+ """Return list as bullet point list for a report"""
+ if not values:
+ return ""
+ return prefix + (suffix + prefix).join(values)
+
+
+class ValidateNoAnimation(pyblish.api.Validator,
+ OptionalPyblishPluginMixin):
"""Ensure no keyframes on nodes in the Instance.
Even though a Model would extract without animCurves correctly this avoids
@@ -22,10 +34,17 @@ class ValidateNoAnimation(pyblish.api.Validator):
actions = [openpype.hosts.maya.api.action.SelectInvalidAction]
def process(self, instance):
+ if not self.is_active(instance.data):
+ return
invalid = self.get_invalid(instance)
if invalid:
- raise RuntimeError("Keyframes found: {0}".format(invalid))
+ raise PublishValidationError(
+ "Keyframes found on:\n\n{0}".format(
+ _as_report_list(sorted(invalid))
+ ),
+ title="Keyframes on model"
+ )
@staticmethod
def get_invalid(instance):
diff --git a/openpype/hosts/maya/plugins/publish/validate_no_default_camera.py b/openpype/hosts/maya/plugins/publish/validate_no_default_camera.py
index a4fb938d43..f0aa9261f7 100644
--- a/openpype/hosts/maya/plugins/publish/validate_no_default_camera.py
+++ b/openpype/hosts/maya/plugins/publish/validate_no_default_camera.py
@@ -2,7 +2,17 @@ from maya import cmds
import pyblish.api
import openpype.hosts.maya.api.action
-from openpype.pipeline.publish import ValidateContentsOrder
+from openpype.pipeline.publish import (
+ ValidateContentsOrder,
+ PublishValidationError
+)
+
+
+def _as_report_list(values, prefix="- ", suffix="\n"):
+ """Return list as bullet point list for a report"""
+ if not values:
+ return ""
+ return prefix + (suffix + prefix).join(values)
class ValidateNoDefaultCameras(pyblish.api.InstancePlugin):
@@ -28,4 +38,10 @@ class ValidateNoDefaultCameras(pyblish.api.InstancePlugin):
def process(self, instance):
"""Process all the cameras in the instance"""
invalid = self.get_invalid(instance)
- assert not invalid, "Default cameras found: {0}".format(invalid)
+ if invalid:
+ raise PublishValidationError(
+ "Default cameras found:\n\n{0}".format(
+ _as_report_list(sorted(invalid))
+ ),
+ title="Default cameras"
+ )
diff --git a/openpype/hosts/maya/plugins/publish/validate_no_namespace.py b/openpype/hosts/maya/plugins/publish/validate_no_namespace.py
index 0ff03f9165..13eeae5859 100644
--- a/openpype/hosts/maya/plugins/publish/validate_no_namespace.py
+++ b/openpype/hosts/maya/plugins/publish/validate_no_namespace.py
@@ -4,11 +4,19 @@ import pyblish.api
from openpype.pipeline.publish import (
RepairAction,
ValidateContentsOrder,
+ PublishValidationError
)
import openpype.hosts.maya.api.action
+def _as_report_list(values, prefix="- ", suffix="\n"):
+ """Return list as bullet point list for a report"""
+ if not values:
+ return ""
+ return prefix + (suffix + prefix).join(values)
+
+
def get_namespace(node_name):
# ensure only node's name (not parent path)
node_name = node_name.rsplit("|", 1)[-1]
@@ -36,7 +44,12 @@ class ValidateNoNamespace(pyblish.api.InstancePlugin):
invalid = self.get_invalid(instance)
if invalid:
- raise ValueError("Namespaces found: {0}".format(invalid))
+ raise PublishValidationError(
+ "Namespaces found:\n\n{0}".format(
+ _as_report_list(sorted(invalid))
+ ),
+ title="Namespaces in model"
+ )
@classmethod
def repair(cls, instance):
diff --git a/openpype/hosts/maya/plugins/publish/validate_no_null_transforms.py b/openpype/hosts/maya/plugins/publish/validate_no_null_transforms.py
index f77fc81dc1..187135fdf3 100644
--- a/openpype/hosts/maya/plugins/publish/validate_no_null_transforms.py
+++ b/openpype/hosts/maya/plugins/publish/validate_no_null_transforms.py
@@ -5,9 +5,17 @@ import openpype.hosts.maya.api.action
from openpype.pipeline.publish import (
RepairAction,
ValidateContentsOrder,
+ PublishValidationError
)
+def _as_report_list(values, prefix="- ", suffix="\n"):
+ """Return list as bullet point list for a report"""
+ if not values:
+ return ""
+ return prefix + (suffix + prefix).join(values)
+
+
def has_shape_children(node):
# Check if any descendants
allDescendents = cmds.listRelatives(node,
@@ -64,7 +72,12 @@ class ValidateNoNullTransforms(pyblish.api.InstancePlugin):
"""Process all the transform nodes in the instance """
invalid = self.get_invalid(instance)
if invalid:
- raise ValueError("Empty transforms found: {0}".format(invalid))
+ raise PublishValidationError(
+ "Empty transforms found without shapes:\n\n{0}".format(
+ _as_report_list(sorted(invalid))
+ ),
+ title="Empty transforms"
+ )
@classmethod
def repair(cls, instance):
diff --git a/openpype/hosts/maya/plugins/publish/validate_no_unknown_nodes.py b/openpype/hosts/maya/plugins/publish/validate_no_unknown_nodes.py
index 2cfdc28128..6ae634be24 100644
--- a/openpype/hosts/maya/plugins/publish/validate_no_unknown_nodes.py
+++ b/openpype/hosts/maya/plugins/publish/validate_no_unknown_nodes.py
@@ -2,10 +2,22 @@ from maya import cmds
import pyblish.api
import openpype.hosts.maya.api.action
-from openpype.pipeline.publish import ValidateContentsOrder
+from openpype.pipeline.publish import (
+ ValidateContentsOrder,
+ OptionalPyblishPluginMixin,
+ PublishValidationError
+)
-class ValidateNoUnknownNodes(pyblish.api.InstancePlugin):
+def _as_report_list(values, prefix="- ", suffix="\n"):
+ """Return list as bullet point list for a report"""
+ if not values:
+ return ""
+ return prefix + (suffix + prefix).join(values)
+
+
+class ValidateNoUnknownNodes(pyblish.api.InstancePlugin,
+ OptionalPyblishPluginMixin):
"""Checks to see if there are any unknown nodes in the instance.
This often happens if nodes from plug-ins are used but are not available
@@ -29,7 +41,14 @@ class ValidateNoUnknownNodes(pyblish.api.InstancePlugin):
def process(self, instance):
"""Process all the nodes in the instance"""
+ if not self.is_active(instance.data):
+ return
invalid = self.get_invalid(instance)
if invalid:
- raise ValueError("Unknown nodes found: {0}".format(invalid))
+ raise PublishValidationError(
+ "Unknown nodes found:\n\n{0}".format(
+ _as_report_list(sorted(invalid))
+ ),
+ title="Unknown nodes"
+ )
diff --git a/openpype/hosts/maya/plugins/publish/validate_no_vraymesh.py b/openpype/hosts/maya/plugins/publish/validate_no_vraymesh.py
index 27e5e6a006..22fd1edc29 100644
--- a/openpype/hosts/maya/plugins/publish/validate_no_vraymesh.py
+++ b/openpype/hosts/maya/plugins/publish/validate_no_vraymesh.py
@@ -1,5 +1,13 @@
import pyblish.api
from maya import cmds
+from openpype.pipeline.publish import PublishValidationError
+
+
+def _as_report_list(values, prefix="- ", suffix="\n"):
+ """Return list as bullet point list for a report"""
+ if not values:
+ return ""
+ return prefix + (suffix + prefix).join(values)
class ValidateNoVRayMesh(pyblish.api.InstancePlugin):
@@ -11,6 +19,9 @@ class ValidateNoVRayMesh(pyblish.api.InstancePlugin):
def process(self, instance):
+ if not cmds.pluginInfo("vrayformaya", query=True, loaded=True):
+ return
+
shapes = cmds.ls(instance,
shapes=True,
type="mesh")
@@ -20,5 +31,11 @@ class ValidateNoVRayMesh(pyblish.api.InstancePlugin):
source=True) or []
vray_meshes = cmds.ls(inputs, type='VRayMesh')
if vray_meshes:
- raise RuntimeError("Meshes that are VRayMeshes shouldn't "
- "be pointcached: {0}".format(vray_meshes))
+ raise PublishValidationError(
+ "Meshes that are V-Ray Proxies should not be in an Alembic "
+ "pointcache.\n"
+ "Found V-Ray proxies:\n\n{}".format(
+ _as_report_list(sorted(vray_meshes))
+ ),
+ title="V-Ray Proxies in pointcache"
+ )
diff --git a/openpype/hosts/maya/plugins/publish/validate_node_ids.py b/openpype/hosts/maya/plugins/publish/validate_node_ids.py
index 796f4c8d76..0c7d647014 100644
--- a/openpype/hosts/maya/plugins/publish/validate_node_ids.py
+++ b/openpype/hosts/maya/plugins/publish/validate_node_ids.py
@@ -1,6 +1,9 @@
import pyblish.api
-from openpype.pipeline.publish import ValidatePipelineOrder
+from openpype.pipeline.publish import (
+ ValidatePipelineOrder,
+ PublishXmlValidationError
+)
import openpype.hosts.maya.api.action
from openpype.hosts.maya.api import lib
@@ -34,8 +37,14 @@ class ValidateNodeIDs(pyblish.api.InstancePlugin):
# Ensure all nodes have a cbId
invalid = self.get_invalid(instance)
if invalid:
- raise RuntimeError("Nodes found without "
- "IDs: {0}".format(invalid))
+ names = "\n".join(
+ "- {}".format(node) for node in invalid
+ )
+ raise PublishXmlValidationError(
+ plugin=self,
+ message="Nodes found without IDs: {}".format(invalid),
+ formatting_data={"nodes": names}
+ )
@classmethod
def get_invalid(cls, instance):
diff --git a/openpype/hosts/maya/plugins/publish/validate_node_ids_deformed_shapes.py b/openpype/hosts/maya/plugins/publish/validate_node_ids_deformed_shapes.py
index 68c47f3a96..643c970463 100644
--- a/openpype/hosts/maya/plugins/publish/validate_node_ids_deformed_shapes.py
+++ b/openpype/hosts/maya/plugins/publish/validate_node_ids_deformed_shapes.py
@@ -1,12 +1,10 @@
+import pyblish.api
from maya import cmds
-import pyblish.api
import openpype.hosts.maya.api.action
from openpype.hosts.maya.api import lib
from openpype.pipeline.publish import (
- RepairAction,
- ValidateContentsOrder,
-)
+ PublishValidationError, RepairAction, ValidateContentsOrder)
class ValidateNodeIdsDeformedShape(pyblish.api.InstancePlugin):
@@ -35,8 +33,9 @@ class ValidateNodeIdsDeformedShape(pyblish.api.InstancePlugin):
# if a deformer has been created on the shape
invalid = self.get_invalid(instance)
if invalid:
- raise RuntimeError("Shapes found that are considered 'Deformed'"
- "without object ids: {0}".format(invalid))
+ raise PublishValidationError(
+ ("Shapes found that are considered 'Deformed'"
+ "without object ids: {0}").format(invalid))
@classmethod
def get_invalid(cls, instance):
diff --git a/openpype/hosts/maya/plugins/publish/validate_node_ids_in_database.py b/openpype/hosts/maya/plugins/publish/validate_node_ids_in_database.py
index b2f28fd4e5..f15aa2efa8 100644
--- a/openpype/hosts/maya/plugins/publish/validate_node_ids_in_database.py
+++ b/openpype/hosts/maya/plugins/publish/validate_node_ids_in_database.py
@@ -1,10 +1,11 @@
import pyblish.api
-from openpype.client import get_assets
-from openpype.pipeline import legacy_io
-from openpype.pipeline.publish import ValidatePipelineOrder
import openpype.hosts.maya.api.action
+from openpype.client import get_assets
from openpype.hosts.maya.api import lib
+from openpype.pipeline import legacy_io
+from openpype.pipeline.publish import (
+ PublishValidationError, ValidatePipelineOrder)
class ValidateNodeIdsInDatabase(pyblish.api.InstancePlugin):
@@ -29,9 +30,9 @@ class ValidateNodeIdsInDatabase(pyblish.api.InstancePlugin):
def process(self, instance):
invalid = self.get_invalid(instance)
if invalid:
- raise RuntimeError("Found asset IDs which are not related to "
- "current project in instance: "
- "`%s`" % instance.name)
+ raise PublishValidationError(
+ ("Found asset IDs which are not related to "
+ "current project in instance: `{}`").format(instance.name))
@classmethod
def get_invalid(cls, instance):
diff --git a/openpype/hosts/maya/plugins/publish/validate_node_ids_related.py b/openpype/hosts/maya/plugins/publish/validate_node_ids_related.py
index f901dc58c4..52e706fec9 100644
--- a/openpype/hosts/maya/plugins/publish/validate_node_ids_related.py
+++ b/openpype/hosts/maya/plugins/publish/validate_node_ids_related.py
@@ -1,11 +1,13 @@
import pyblish.api
-from openpype.pipeline.publish import ValidatePipelineOrder
import openpype.hosts.maya.api.action
from openpype.hosts.maya.api import lib
+from openpype.pipeline.publish import (
+ OptionalPyblishPluginMixin, PublishValidationError, ValidatePipelineOrder)
-class ValidateNodeIDsRelated(pyblish.api.InstancePlugin):
+class ValidateNodeIDsRelated(pyblish.api.InstancePlugin,
+ OptionalPyblishPluginMixin):
"""Validate nodes have a related Colorbleed Id to the instance.data[asset]
"""
@@ -23,12 +25,15 @@ class ValidateNodeIDsRelated(pyblish.api.InstancePlugin):
def process(self, instance):
"""Process all nodes in instance (including hierarchy)"""
+ if not self.is_active(instance.data):
+ return
+
# Ensure all nodes have a cbId
invalid = self.get_invalid(instance)
if invalid:
- raise RuntimeError("Nodes IDs found that are not related to asset "
- "'{}' : {}".format(instance.data['asset'],
- invalid))
+ raise PublishValidationError(
+ ("Nodes IDs found that are not related to asset "
+ "'{}' : {}").format(instance.data['asset'], invalid))
@classmethod
def get_invalid(cls, instance):
diff --git a/openpype/hosts/maya/plugins/publish/validate_node_ids_unique.py b/openpype/hosts/maya/plugins/publish/validate_node_ids_unique.py
index f7a5e6e292..61386fc939 100644
--- a/openpype/hosts/maya/plugins/publish/validate_node_ids_unique.py
+++ b/openpype/hosts/maya/plugins/publish/validate_node_ids_unique.py
@@ -1,7 +1,10 @@
from collections import defaultdict
import pyblish.api
-from openpype.pipeline.publish import ValidatePipelineOrder
+from openpype.pipeline.publish import (
+ ValidatePipelineOrder,
+ PublishValidationError
+)
import openpype.hosts.maya.api.action
from openpype.hosts.maya.api import lib
@@ -29,8 +32,13 @@ class ValidateNodeIdsUnique(pyblish.api.InstancePlugin):
# Ensure all nodes have a cbId
invalid = self.get_invalid(instance)
if invalid:
- raise RuntimeError("Nodes found with non-unique "
- "asset IDs: {0}".format(invalid))
+ label = "Nodes found with non-unique asset IDs"
+ raise PublishValidationError(
+ message="{}: {}".format(label, invalid),
+ title="Non-unique asset ids on nodes",
+ description="{}\n- {}".format(label,
+ "\n- ".join(sorted(invalid)))
+ )
@classmethod
def get_invalid(cls, instance):
diff --git a/openpype/hosts/maya/plugins/publish/validate_plugin_path_attributes.py b/openpype/hosts/maya/plugins/publish/validate_plugin_path_attributes.py
index 6135c9c695..78334cd01f 100644
--- a/openpype/hosts/maya/plugins/publish/validate_plugin_path_attributes.py
+++ b/openpype/hosts/maya/plugins/publish/validate_plugin_path_attributes.py
@@ -4,7 +4,10 @@ from maya import cmds
import pyblish.api
-from openpype.pipeline.publish import ValidateContentsOrder
+from openpype.pipeline.publish import (
+ ValidateContentsOrder,
+ PublishValidationError
+)
class ValidatePluginPathAttributes(pyblish.api.InstancePlugin):
@@ -48,5 +51,5 @@ class ValidatePluginPathAttributes(pyblish.api.InstancePlugin):
"""Process all directories Set as Filenames in Non-Maya Nodes"""
invalid = self.get_invalid(instance)
if invalid:
- raise RuntimeError("Non-existent Path "
+ raise PublishValidationError("Non-existent Path "
"found: {0}".format(invalid))
diff --git a/openpype/hosts/maya/plugins/publish/validate_render_image_rule.py b/openpype/hosts/maya/plugins/publish/validate_render_image_rule.py
index 78bb022785..f9aa7f82d0 100644
--- a/openpype/hosts/maya/plugins/publish/validate_render_image_rule.py
+++ b/openpype/hosts/maya/plugins/publish/validate_render_image_rule.py
@@ -1,10 +1,8 @@
+import pyblish.api
from maya import cmds
-import pyblish.api
from openpype.pipeline.publish import (
- RepairAction,
- ValidateContentsOrder,
-)
+ PublishValidationError, RepairAction, ValidateContentsOrder)
class ValidateRenderImageRule(pyblish.api.InstancePlugin):
@@ -27,12 +25,12 @@ class ValidateRenderImageRule(pyblish.api.InstancePlugin):
required_images_rule = self.get_default_render_image_folder(instance)
current_images_rule = cmds.workspace(fileRuleEntry="images")
- assert current_images_rule == required_images_rule, (
- "Invalid workspace `images` file rule value: '{}'. "
- "Must be set to: '{}'".format(
- current_images_rule, required_images_rule
- )
- )
+ if current_images_rule != required_images_rule:
+ raise PublishValidationError(
+ (
+ "Invalid workspace `images` file rule value: '{}'. "
+ "Must be set to: '{}'"
+ ).format(current_images_rule, required_images_rule))
@classmethod
def repair(cls, instance):
diff --git a/openpype/hosts/maya/plugins/publish/validate_render_no_default_cameras.py b/openpype/hosts/maya/plugins/publish/validate_render_no_default_cameras.py
index 67ece75af8..9d4410186b 100644
--- a/openpype/hosts/maya/plugins/publish/validate_render_no_default_cameras.py
+++ b/openpype/hosts/maya/plugins/publish/validate_render_no_default_cameras.py
@@ -3,7 +3,10 @@ from maya import cmds
import pyblish.api
import openpype.hosts.maya.api.action
-from openpype.pipeline.publish import ValidateContentsOrder
+from openpype.pipeline.publish import (
+ ValidateContentsOrder,
+ PublishValidationError,
+)
class ValidateRenderNoDefaultCameras(pyblish.api.InstancePlugin):
@@ -31,5 +34,7 @@ class ValidateRenderNoDefaultCameras(pyblish.api.InstancePlugin):
"""Process all the cameras in the instance"""
invalid = self.get_invalid(instance)
if invalid:
- raise RuntimeError("Renderable default cameras "
- "found: {0}".format(invalid))
+ raise PublishValidationError(
+ title="Rendering default cameras",
+ message="Renderable default cameras "
+ "found: {0}".format(invalid))
diff --git a/openpype/hosts/maya/plugins/publish/validate_render_single_camera.py b/openpype/hosts/maya/plugins/publish/validate_render_single_camera.py
index 77322fefd5..2c0d604175 100644
--- a/openpype/hosts/maya/plugins/publish/validate_render_single_camera.py
+++ b/openpype/hosts/maya/plugins/publish/validate_render_single_camera.py
@@ -5,7 +5,10 @@ from maya import cmds
import openpype.hosts.maya.api.action
from openpype.hosts.maya.api.lib_rendersettings import RenderSettings
-from openpype.pipeline.publish import ValidateContentsOrder
+from openpype.pipeline.publish import (
+ ValidateContentsOrder,
+ PublishValidationError
+)
class ValidateRenderSingleCamera(pyblish.api.InstancePlugin):
@@ -28,7 +31,7 @@ class ValidateRenderSingleCamera(pyblish.api.InstancePlugin):
"""Process all the cameras in the instance"""
invalid = self.get_invalid(instance)
if invalid:
- raise RuntimeError("Invalid cameras for render.")
+ raise PublishValidationError("Invalid cameras for render.")
@classmethod
def get_invalid(cls, instance):
diff --git a/openpype/hosts/maya/plugins/publish/validate_renderlayer_aovs.py b/openpype/hosts/maya/plugins/publish/validate_renderlayer_aovs.py
index 7919a6eaa1..f8de983e06 100644
--- a/openpype/hosts/maya/plugins/publish/validate_renderlayer_aovs.py
+++ b/openpype/hosts/maya/plugins/publish/validate_renderlayer_aovs.py
@@ -1,8 +1,9 @@
import pyblish.api
-from openpype.client import get_subset_by_name
import openpype.hosts.maya.api.action
+from openpype.client import get_subset_by_name
from openpype.pipeline import legacy_io
+from openpype.pipeline.publish import PublishValidationError
class ValidateRenderLayerAOVs(pyblish.api.InstancePlugin):
@@ -30,7 +31,8 @@ class ValidateRenderLayerAOVs(pyblish.api.InstancePlugin):
def process(self, instance):
invalid = self.get_invalid(instance)
if invalid:
- raise RuntimeError("Found unregistered subsets: {}".format(invalid))
+ raise PublishValidationError(
+ "Found unregistered subsets: {}".format(invalid))
def get_invalid(self, instance):
invalid = []
diff --git a/openpype/hosts/maya/plugins/publish/validate_rendersettings.py b/openpype/hosts/maya/plugins/publish/validate_rendersettings.py
index 71b91b8e54..dccb4ade78 100644
--- a/openpype/hosts/maya/plugins/publish/validate_rendersettings.py
+++ b/openpype/hosts/maya/plugins/publish/validate_rendersettings.py
@@ -9,6 +9,7 @@ import pyblish.api
from openpype.pipeline.publish import (
RepairAction,
ValidateContentsOrder,
+ PublishValidationError,
)
from openpype.hosts.maya.api import lib
@@ -112,17 +113,20 @@ class ValidateRenderSettings(pyblish.api.InstancePlugin):
def process(self, instance):
invalid = self.get_invalid(instance)
- assert invalid is False, ("Invalid render settings "
- "found for '{}'!".format(instance.name))
+ if invalid:
+ raise PublishValidationError(
+ title="Invalid Render Settings",
+ message=("Invalid render settings found "
+ "for '{}'!".format(instance.name))
+ )
@classmethod
def get_invalid(cls, instance):
invalid = False
- multipart = False
renderer = instance.data['renderer']
- layer = instance.data['setMembers']
+ layer = instance.data['renderlayer']
cameras = instance.data.get("cameras", [])
# Get the node attributes for current renderer
@@ -280,7 +284,7 @@ class ValidateRenderSettings(pyblish.api.InstancePlugin):
render_value = cmds.getAttr(
"{}.{}".format(node, data["attribute"])
)
- except RuntimeError:
+ except PublishValidationError:
invalid = True
cls.log.error(
"Cannot get value of {}.{}".format(
diff --git a/openpype/hosts/maya/plugins/publish/validate_resources.py b/openpype/hosts/maya/plugins/publish/validate_resources.py
index b7bd47ad0a..7d894a2bef 100644
--- a/openpype/hosts/maya/plugins/publish/validate_resources.py
+++ b/openpype/hosts/maya/plugins/publish/validate_resources.py
@@ -2,7 +2,10 @@ import os
from collections import defaultdict
import pyblish.api
-from openpype.pipeline.publish import ValidateContentsOrder
+from openpype.pipeline.publish import (
+ ValidateContentsOrder,
+ PublishValidationError
+)
class ValidateResources(pyblish.api.InstancePlugin):
@@ -54,4 +57,4 @@ class ValidateResources(pyblish.api.InstancePlugin):
)
if invalid_resources:
- raise RuntimeError("Invalid resources in instance.")
+ raise PublishValidationError("Invalid resources in instance.")
diff --git a/openpype/hosts/maya/plugins/publish/validate_rig_contents.py b/openpype/hosts/maya/plugins/publish/validate_rig_contents.py
index 1096c95486..7b5392f8f9 100644
--- a/openpype/hosts/maya/plugins/publish/validate_rig_contents.py
+++ b/openpype/hosts/maya/plugins/publish/validate_rig_contents.py
@@ -1,7 +1,8 @@
+import pyblish.api
from maya import cmds
-import pyblish.api
-from openpype.pipeline.publish import ValidateContentsOrder
+from openpype.pipeline.publish import (
+ PublishValidationError, ValidateContentsOrder)
class ValidateRigContents(pyblish.api.InstancePlugin):
@@ -31,8 +32,9 @@ class ValidateRigContents(pyblish.api.InstancePlugin):
# in the rig instance
set_members = instance.data['setMembers']
if not cmds.ls(set_members, type="dagNode", long=True):
- raise RuntimeError("No dag nodes in the pointcache instance. "
- "(Empty instance?)")
+ raise PublishValidationError(
+ ("No dag nodes in the pointcache instance. "
+ "(Empty instance?)"))
# Ensure contents in sets and retrieve long path for all objects
output_content = cmds.sets("out_SET", query=True) or []
@@ -79,7 +81,8 @@ class ValidateRigContents(pyblish.api.InstancePlugin):
error = True
if error:
- raise RuntimeError("Invalid rig content. See log for details.")
+ raise PublishValidationError(
+ "Invalid rig content. See log for details.")
def validate_geometry(self, set_members):
"""Check if the out set passes the validations
diff --git a/openpype/hosts/maya/plugins/publish/validate_rig_controllers.py b/openpype/hosts/maya/plugins/publish/validate_rig_controllers.py
index 1e42abdcd9..7bbf4257ab 100644
--- a/openpype/hosts/maya/plugins/publish/validate_rig_controllers.py
+++ b/openpype/hosts/maya/plugins/publish/validate_rig_controllers.py
@@ -5,6 +5,7 @@ import pyblish.api
from openpype.pipeline.publish import (
ValidateContentsOrder,
RepairAction,
+ PublishValidationError
)
import openpype.hosts.maya.api.action
from openpype.hosts.maya.api.lib import undo_chunk
@@ -51,7 +52,7 @@ class ValidateRigControllers(pyblish.api.InstancePlugin):
def process(self, instance):
invalid = self.get_invalid(instance)
if invalid:
- raise RuntimeError('{} failed, see log '
+ raise PublishValidationError('{} failed, see log '
'information'.format(self.label))
@classmethod
diff --git a/openpype/hosts/maya/plugins/publish/validate_rig_controllers_arnold_attributes.py b/openpype/hosts/maya/plugins/publish/validate_rig_controllers_arnold_attributes.py
index 55b2ebd6d8..842c1de01b 100644
--- a/openpype/hosts/maya/plugins/publish/validate_rig_controllers_arnold_attributes.py
+++ b/openpype/hosts/maya/plugins/publish/validate_rig_controllers_arnold_attributes.py
@@ -5,7 +5,9 @@ import pyblish.api
from openpype.pipeline.publish import (
ValidateContentsOrder,
RepairAction,
+ PublishValidationError
)
+
from openpype.hosts.maya.api import lib
import openpype.hosts.maya.api.action
@@ -48,7 +50,7 @@ class ValidateRigControllersArnoldAttributes(pyblish.api.InstancePlugin):
def process(self, instance):
invalid = self.get_invalid(instance)
if invalid:
- raise RuntimeError('{} failed, see log '
+ raise PublishValidationError('{} failed, see log '
'information'.format(self.label))
@classmethod
diff --git a/openpype/hosts/maya/plugins/publish/validate_rig_out_set_node_ids.py b/openpype/hosts/maya/plugins/publish/validate_rig_out_set_node_ids.py
index 03ba381f8d..39f0941faa 100644
--- a/openpype/hosts/maya/plugins/publish/validate_rig_out_set_node_ids.py
+++ b/openpype/hosts/maya/plugins/publish/validate_rig_out_set_node_ids.py
@@ -7,6 +7,7 @@ from openpype.hosts.maya.api import lib
from openpype.pipeline.publish import (
RepairAction,
ValidateContentsOrder,
+ PublishValidationError
)
@@ -37,7 +38,7 @@ class ValidateRigOutSetNodeIds(pyblish.api.InstancePlugin):
# if a deformer has been created on the shape
invalid = self.get_invalid(instance)
if invalid:
- raise RuntimeError("Nodes found with mismatching "
+ raise PublishValidationError("Nodes found with mismatching "
"IDs: {0}".format(invalid))
@classmethod
diff --git a/openpype/hosts/maya/plugins/publish/validate_rig_output_ids.py b/openpype/hosts/maya/plugins/publish/validate_rig_output_ids.py
index cba70a21b7..75447fdfea 100644
--- a/openpype/hosts/maya/plugins/publish/validate_rig_output_ids.py
+++ b/openpype/hosts/maya/plugins/publish/validate_rig_output_ids.py
@@ -9,6 +9,7 @@ from openpype.hosts.maya.api.lib import get_id, set_id
from openpype.pipeline.publish import (
RepairAction,
ValidateContentsOrder,
+ PublishValidationError
)
@@ -34,7 +35,7 @@ class ValidateRigOutputIds(pyblish.api.InstancePlugin):
def process(self, instance):
invalid = self.get_invalid(instance, compute=True)
if invalid:
- raise RuntimeError("Found nodes with mismatched IDs.")
+ raise PublishValidationError("Found nodes with mismatched IDs.")
@classmethod
def get_invalid(cls, instance, compute=False):
@@ -107,7 +108,7 @@ class ValidateRigOutputIds(pyblish.api.InstancePlugin):
set_id(instance_node, id_to_set, overwrite=True)
if multiple_ids_match:
- raise RuntimeError(
+ raise PublishValidationError(
"Multiple matched ids found. Please repair manually: "
"{}".format(multiple_ids_match)
)
diff --git a/openpype/hosts/maya/plugins/publish/validate_scene_set_workspace.py b/openpype/hosts/maya/plugins/publish/validate_scene_set_workspace.py
index f1fa4d3c4c..b48d67e416 100644
--- a/openpype/hosts/maya/plugins/publish/validate_scene_set_workspace.py
+++ b/openpype/hosts/maya/plugins/publish/validate_scene_set_workspace.py
@@ -1,10 +1,10 @@
import os
import maya.cmds as cmds
-
import pyblish.api
-from openpype.pipeline.publish import ValidatePipelineOrder
+from openpype.pipeline.publish import (
+ PublishValidationError, ValidatePipelineOrder)
def is_subdir(path, root_dir):
@@ -37,10 +37,11 @@ class ValidateSceneSetWorkspace(pyblish.api.ContextPlugin):
scene_name = cmds.file(query=True, sceneName=True)
if not scene_name:
- raise RuntimeError("Scene hasn't been saved. Workspace can't be "
- "validated.")
+ raise PublishValidationError(
+ "Scene hasn't been saved. Workspace can't be validated.")
root_dir = cmds.workspace(query=True, rootDirectory=True)
if not is_subdir(scene_name, root_dir):
- raise RuntimeError("Maya workspace is not set correctly.")
+ raise PublishValidationError(
+ "Maya workspace is not set correctly.")
diff --git a/openpype/hosts/maya/plugins/publish/validate_shader_name.py b/openpype/hosts/maya/plugins/publish/validate_shader_name.py
index 034db471da..36bb2c1fee 100644
--- a/openpype/hosts/maya/plugins/publish/validate_shader_name.py
+++ b/openpype/hosts/maya/plugins/publish/validate_shader_name.py
@@ -1,13 +1,15 @@
import re
-from maya import cmds
import pyblish.api
+from maya import cmds
import openpype.hosts.maya.api.action
-from openpype.pipeline.publish import ValidateContentsOrder
+from openpype.pipeline.publish import (
+ OptionalPyblishPluginMixin, PublishValidationError, ValidateContentsOrder)
-class ValidateShaderName(pyblish.api.InstancePlugin):
+class ValidateShaderName(pyblish.api.InstancePlugin,
+ OptionalPyblishPluginMixin):
"""Validate shader name assigned.
It should be _<*>_SHD
@@ -23,12 +25,14 @@ class ValidateShaderName(pyblish.api.InstancePlugin):
# The default connections to check
def process(self, instance):
+ if not self.is_active(instance.data):
+ return
invalid = self.get_invalid(instance)
if invalid:
- raise RuntimeError("Found shapes with invalid shader names "
- "assigned: "
- "\n{}".format(invalid))
+ raise PublishValidationError(
+ ("Found shapes with invalid shader names "
+ "assigned:\n{}").format(invalid))
@classmethod
def get_invalid(cls, instance):
diff --git a/openpype/hosts/maya/plugins/publish/validate_shape_default_names.py b/openpype/hosts/maya/plugins/publish/validate_shape_default_names.py
index 4ab669f46b..d8ad366ed8 100644
--- a/openpype/hosts/maya/plugins/publish/validate_shape_default_names.py
+++ b/openpype/hosts/maya/plugins/publish/validate_shape_default_names.py
@@ -8,6 +8,7 @@ import openpype.hosts.maya.api.action
from openpype.pipeline.publish import (
ValidateContentsOrder,
RepairAction,
+ OptionalPyblishPluginMixin
)
@@ -15,7 +16,8 @@ def short_name(node):
return node.rsplit("|", 1)[-1].rsplit(":", 1)[-1]
-class ValidateShapeDefaultNames(pyblish.api.InstancePlugin):
+class ValidateShapeDefaultNames(pyblish.api.InstancePlugin,
+ OptionalPyblishPluginMixin):
"""Validates that Shape names are using Maya's default format.
When you create a new polygon cube Maya will name the transform
@@ -77,6 +79,8 @@ class ValidateShapeDefaultNames(pyblish.api.InstancePlugin):
def process(self, instance):
"""Process all the shape nodes in the instance"""
+ if not self.is_active(instance.data):
+ return
invalid = self.get_invalid(instance)
if invalid:
diff --git a/openpype/hosts/maya/plugins/publish/validate_skeletalmesh_triangulated.py b/openpype/hosts/maya/plugins/publish/validate_skeletalmesh_triangulated.py
index c0a9ddcf69..701c80a8af 100644
--- a/openpype/hosts/maya/plugins/publish/validate_skeletalmesh_triangulated.py
+++ b/openpype/hosts/maya/plugins/publish/validate_skeletalmesh_triangulated.py
@@ -7,8 +7,10 @@ from openpype.hosts.maya.api.action import (
from openpype.pipeline.publish import (
RepairAction,
ValidateContentsOrder,
+ PublishValidationError
)
+
from maya import cmds
@@ -28,7 +30,7 @@ class ValidateSkeletalMeshTriangulated(pyblish.api.InstancePlugin):
def process(self, instance):
invalid = self.get_invalid(instance)
if invalid:
- raise RuntimeError(
+ raise PublishValidationError(
"The following objects needs to be triangulated: "
"{}".format(invalid))
diff --git a/openpype/hosts/maya/plugins/publish/validate_step_size.py b/openpype/hosts/maya/plugins/publish/validate_step_size.py
index 294458f63c..493a6ee65c 100644
--- a/openpype/hosts/maya/plugins/publish/validate_step_size.py
+++ b/openpype/hosts/maya/plugins/publish/validate_step_size.py
@@ -1,7 +1,10 @@
import pyblish.api
import openpype.hosts.maya.api.action
-from openpype.pipeline.publish import ValidateContentsOrder
+from openpype.pipeline.publish import (
+ PublishValidationError,
+ ValidateContentsOrder
+)
class ValidateStepSize(pyblish.api.InstancePlugin):
@@ -40,4 +43,5 @@ class ValidateStepSize(pyblish.api.InstancePlugin):
invalid = self.get_invalid(instance)
if invalid:
- raise RuntimeError("Invalid instances found: {0}".format(invalid))
+ raise PublishValidationError(
+ "Invalid instances found: {0}".format(invalid))
diff --git a/openpype/hosts/maya/plugins/publish/validate_transform_naming_suffix.py b/openpype/hosts/maya/plugins/publish/validate_transform_naming_suffix.py
index b2a83a80fb..cbc7ee9d5c 100644
--- a/openpype/hosts/maya/plugins/publish/validate_transform_naming_suffix.py
+++ b/openpype/hosts/maya/plugins/publish/validate_transform_naming_suffix.py
@@ -5,10 +5,15 @@ from maya import cmds
import pyblish.api
import openpype.hosts.maya.api.action
-from openpype.pipeline.publish import ValidateContentsOrder
+from openpype.pipeline.publish import (
+ ValidateContentsOrder,
+ OptionalPyblishPluginMixin,
+ PublishValidationError
+)
-class ValidateTransformNamingSuffix(pyblish.api.InstancePlugin):
+class ValidateTransformNamingSuffix(pyblish.api.InstancePlugin,
+ OptionalPyblishPluginMixin):
"""Validates transform suffix based on the type of its children shapes.
Suffices must be:
@@ -47,8 +52,8 @@ class ValidateTransformNamingSuffix(pyblish.api.InstancePlugin):
def get_table_for_invalid(cls):
ss = []
for k, v in cls.SUFFIX_NAMING_TABLE.items():
- ss.append(" - {}: {}".format(k, ", ".join(v)))
- return "\n".join(ss)
+ ss.append(" - {}: {}".format(k, ", ".join(v)))
+ return "
".join(ss)
@staticmethod
def is_valid_name(node_name, shape_type,
@@ -110,9 +115,20 @@ class ValidateTransformNamingSuffix(pyblish.api.InstancePlugin):
instance (:class:`pyblish.api.Instance`): published instance.
"""
+ if not self.is_active(instance.data):
+ return
+
invalid = self.get_invalid(instance)
if invalid:
valid = self.get_table_for_invalid()
- raise ValueError("Incorrectly named geometry "
- "transforms: {0}, accepted suffixes are: "
- "\n{1}".format(invalid, valid))
+
+ names = "
".join(
+ " - {}".format(node) for node in invalid
+ )
+ valid = valid.replace("\n", "
")
+
+ raise PublishValidationError(
+ title="Invalid naming suffix",
+ message="Valid suffixes are:
{0}
"
+ "Incorrectly named geometry transforms:
{1}"
+ "".format(valid, names))
diff --git a/openpype/hosts/maya/plugins/publish/validate_transform_zero.py b/openpype/hosts/maya/plugins/publish/validate_transform_zero.py
index abd9e00af1..906ff17ec9 100644
--- a/openpype/hosts/maya/plugins/publish/validate_transform_zero.py
+++ b/openpype/hosts/maya/plugins/publish/validate_transform_zero.py
@@ -3,7 +3,10 @@ from maya import cmds
import pyblish.api
import openpype.hosts.maya.api.action
-from openpype.pipeline.publish import ValidateContentsOrder
+from openpype.pipeline.publish import (
+ ValidateContentsOrder,
+ PublishValidationError
+)
class ValidateTransformZero(pyblish.api.Validator):
@@ -62,5 +65,14 @@ class ValidateTransformZero(pyblish.api.Validator):
invalid = self.get_invalid(instance)
if invalid:
- raise ValueError("Nodes found with transform "
- "values: {0}".format(invalid))
+
+ names = "
".join(
+ " - {}".format(node) for node in invalid
+ )
+
+ raise PublishValidationError(
+ title="Transform Zero",
+ message="The model publish allows no transformations. You must"
+ " freeze transformations to continue.
"
+ "Nodes found with transform values: "
+ "{0}".format(names))
diff --git a/openpype/hosts/maya/plugins/publish/validate_unreal_staticmesh_naming.py b/openpype/hosts/maya/plugins/publish/validate_unreal_staticmesh_naming.py
index 1425190b82..b2cb2ebda2 100644
--- a/openpype/hosts/maya/plugins/publish/validate_unreal_staticmesh_naming.py
+++ b/openpype/hosts/maya/plugins/publish/validate_unreal_staticmesh_naming.py
@@ -7,10 +7,15 @@ import pyblish.api
import openpype.hosts.maya.api.action
from openpype.pipeline import legacy_io
from openpype.settings import get_project_settings
-from openpype.pipeline.publish import ValidateContentsOrder
+from openpype.pipeline.publish import (
+ ValidateContentsOrder,
+ OptionalPyblishPluginMixin,
+ PublishValidationError
+)
-class ValidateUnrealStaticMeshName(pyblish.api.InstancePlugin):
+class ValidateUnrealStaticMeshName(pyblish.api.InstancePlugin,
+ OptionalPyblishPluginMixin):
"""Validate name of Unreal Static Mesh
Unreals naming convention states that staticMesh should start with `SM`
@@ -131,6 +136,9 @@ class ValidateUnrealStaticMeshName(pyblish.api.InstancePlugin):
return invalid
def process(self, instance):
+ if not self.is_active(instance.data):
+ return
+
if not self.validate_mesh and not self.validate_collision:
self.log.info("Validation of both mesh and collision names"
"is disabled.")
@@ -143,4 +151,4 @@ class ValidateUnrealStaticMeshName(pyblish.api.InstancePlugin):
invalid = self.get_invalid(instance)
if invalid:
- raise RuntimeError("Model naming is invalid. See log.")
+ raise PublishValidationError("Model naming is invalid. See log.")
diff --git a/openpype/hosts/maya/plugins/publish/validate_unreal_up_axis.py b/openpype/hosts/maya/plugins/publish/validate_unreal_up_axis.py
index dd699735d9..a420dcb900 100644
--- a/openpype/hosts/maya/plugins/publish/validate_unreal_up_axis.py
+++ b/openpype/hosts/maya/plugins/publish/validate_unreal_up_axis.py
@@ -6,10 +6,12 @@ import pyblish.api
from openpype.pipeline.publish import (
ValidateContentsOrder,
RepairAction,
+ OptionalPyblishPluginMixin
)
-class ValidateUnrealUpAxis(pyblish.api.ContextPlugin):
+class ValidateUnrealUpAxis(pyblish.api.ContextPlugin,
+ OptionalPyblishPluginMixin):
"""Validate if Z is set as up axis in Maya"""
optional = True
@@ -21,6 +23,9 @@ class ValidateUnrealUpAxis(pyblish.api.ContextPlugin):
actions = [RepairAction]
def process(self, context):
+ if not self.is_active(context.data):
+ return
+
assert cmds.upAxis(q=True, axis=True) == "z", (
"Invalid axis set as up axis"
)
diff --git a/openpype/hosts/maya/plugins/publish/validate_visible_only.py b/openpype/hosts/maya/plugins/publish/validate_visible_only.py
index faf634f258..e72782e552 100644
--- a/openpype/hosts/maya/plugins/publish/validate_visible_only.py
+++ b/openpype/hosts/maya/plugins/publish/validate_visible_only.py
@@ -2,7 +2,10 @@ import pyblish.api
from openpype.hosts.maya.api.lib import iter_visible_nodes_in_range
import openpype.hosts.maya.api.action
-from openpype.pipeline.publish import ValidateContentsOrder
+from openpype.pipeline.publish import (
+ ValidateContentsOrder,
+ PublishValidationError
+)
class ValidateAlembicVisibleOnly(pyblish.api.InstancePlugin):
@@ -27,7 +30,7 @@ class ValidateAlembicVisibleOnly(pyblish.api.InstancePlugin):
invalid = self.get_invalid(instance)
if invalid:
start, end = self.get_frame_range(instance)
- raise RuntimeError("No visible nodes found in "
+ raise PublishValidationError("No visible nodes found in "
"frame range {}-{}.".format(start, end))
@classmethod
diff --git a/openpype/hosts/maya/plugins/publish/validate_vray.py b/openpype/hosts/maya/plugins/publish/validate_vray.py
index 045ac258a1..bef5967cc9 100644
--- a/openpype/hosts/maya/plugins/publish/validate_vray.py
+++ b/openpype/hosts/maya/plugins/publish/validate_vray.py
@@ -1,7 +1,7 @@
from maya import cmds
import pyblish.api
-from openpype.pipeline import PublishValidationError
+from openpype.pipeline.publish import PublishValidationError
class ValidateVray(pyblish.api.InstancePlugin):
diff --git a/openpype/hosts/maya/plugins/publish/validate_vray_distributed_rendering.py b/openpype/hosts/maya/plugins/publish/validate_vray_distributed_rendering.py
index 366f3bd10e..a71849da00 100644
--- a/openpype/hosts/maya/plugins/publish/validate_vray_distributed_rendering.py
+++ b/openpype/hosts/maya/plugins/publish/validate_vray_distributed_rendering.py
@@ -1,11 +1,9 @@
import pyblish.api
+from maya import cmds
+
from openpype.hosts.maya.api import lib
from openpype.pipeline.publish import (
- ValidateContentsOrder,
- RepairAction,
-)
-
-from maya import cmds
+ PublishValidationError, RepairAction, ValidateContentsOrder)
class ValidateVRayDistributedRendering(pyblish.api.InstancePlugin):
@@ -36,7 +34,7 @@ class ValidateVRayDistributedRendering(pyblish.api.InstancePlugin):
vray_settings = cmds.ls("vraySettings", type="VRaySettingsNode")
assert vray_settings, "Please ensure a VRay Settings Node is present"
- renderlayer = instance.data['setMembers']
+ renderlayer = instance.data['renderlayer']
if not lib.get_attr_in_layer(self.enabled_attr, layer=renderlayer):
# If not distributed rendering enabled, ignore..
@@ -45,13 +43,14 @@ class ValidateVRayDistributedRendering(pyblish.api.InstancePlugin):
# If distributed rendering is enabled but it is *not* set to ignore
# during batch mode we invalidate the instance
if not lib.get_attr_in_layer(self.ignored_attr, layer=renderlayer):
- raise RuntimeError("Renderlayer has distributed rendering enabled "
- "but is not set to ignore in batch mode.")
+ raise PublishValidationError(
+ ("Renderlayer has distributed rendering enabled "
+ "but is not set to ignore in batch mode."))
@classmethod
def repair(cls, instance):
- renderlayer = instance.data.get("setMembers")
+ renderlayer = instance.data.get("renderlayer")
with lib.renderlayer(renderlayer):
cls.log.info("Enabling Distributed Rendering "
"ignore in batch mode..")
diff --git a/openpype/hosts/maya/plugins/publish/validate_vray_translator_settings.py b/openpype/hosts/maya/plugins/publish/validate_vray_translator_settings.py
index f49811c2c0..4474f08ba4 100644
--- a/openpype/hosts/maya/plugins/publish/validate_vray_translator_settings.py
+++ b/openpype/hosts/maya/plugins/publish/validate_vray_translator_settings.py
@@ -5,6 +5,7 @@ from openpype.pipeline.publish import (
context_plugin_should_run,
RepairContextAction,
ValidateContentsOrder,
+ PublishValidationError
)
from maya import cmds
@@ -26,7 +27,10 @@ class ValidateVRayTranslatorEnabled(pyblish.api.ContextPlugin):
invalid = self.get_invalid(context)
if invalid:
- raise RuntimeError("Found invalid VRay Translator settings!")
+ raise PublishValidationError(
+ message="Found invalid VRay Translator settings",
+ title=self.label
+ )
@classmethod
def get_invalid(cls, context):
@@ -35,7 +39,11 @@ class ValidateVRayTranslatorEnabled(pyblish.api.ContextPlugin):
# Get vraySettings node
vray_settings = cmds.ls(type="VRaySettingsNode")
- assert vray_settings, "Please ensure a VRay Settings Node is present"
+ if not vray_settings:
+ raise PublishValidationError(
+ "Please ensure a VRay Settings Node is present",
+ title=cls.label
+ )
node = vray_settings[0]
diff --git a/openpype/hosts/maya/plugins/publish/validate_vrayproxy_members.py b/openpype/hosts/maya/plugins/publish/validate_vrayproxy_members.py
index 855a96e6b9..7b726de3a8 100644
--- a/openpype/hosts/maya/plugins/publish/validate_vrayproxy_members.py
+++ b/openpype/hosts/maya/plugins/publish/validate_vrayproxy_members.py
@@ -3,6 +3,10 @@ import pyblish.api
from maya import cmds
import openpype.hosts.maya.api.action
+from openpype.pipeline.publish import (
+ PublishValidationError
+)
+
class ValidateVrayProxyMembers(pyblish.api.InstancePlugin):
@@ -19,7 +23,7 @@ class ValidateVrayProxyMembers(pyblish.api.InstancePlugin):
invalid = self.get_invalid(instance)
if invalid:
- raise RuntimeError("'%s' is invalid VRay Proxy for "
+ raise PublishValidationError("'%s' is invalid VRay Proxy for "
"export!" % instance.name)
@classmethod
diff --git a/openpype/hosts/maya/plugins/publish/validate_yeti_rig_cache_state.py b/openpype/hosts/maya/plugins/publish/validate_yeti_rig_cache_state.py
index 4842134b12..2b7249ad94 100644
--- a/openpype/hosts/maya/plugins/publish/validate_yeti_rig_cache_state.py
+++ b/openpype/hosts/maya/plugins/publish/validate_yeti_rig_cache_state.py
@@ -1,7 +1,11 @@
import pyblish.api
import maya.cmds as cmds
import openpype.hosts.maya.api.action
-from openpype.pipeline.publish import RepairAction
+from openpype.pipeline.publish import (
+ RepairAction,
+ PublishValidationError
+)
+
class ValidateYetiRigCacheState(pyblish.api.InstancePlugin):
@@ -23,7 +27,7 @@ class ValidateYetiRigCacheState(pyblish.api.InstancePlugin):
def process(self, instance):
invalid = self.get_invalid(instance)
if invalid:
- raise RuntimeError("Nodes have incorrect I/O settings")
+ raise PublishValidationError("Nodes have incorrect I/O settings")
@classmethod
def get_invalid(cls, instance):
diff --git a/openpype/hosts/maya/plugins/publish/validate_yeti_rig_input_in_instance.py b/openpype/hosts/maya/plugins/publish/validate_yeti_rig_input_in_instance.py
index ebef44774d..96fb475a0a 100644
--- a/openpype/hosts/maya/plugins/publish/validate_yeti_rig_input_in_instance.py
+++ b/openpype/hosts/maya/plugins/publish/validate_yeti_rig_input_in_instance.py
@@ -3,7 +3,10 @@ from maya import cmds
import pyblish.api
import openpype.hosts.maya.api.action
-from openpype.pipeline.publish import ValidateContentsOrder
+from openpype.pipeline.publish import (
+ ValidateContentsOrder,
+ PublishValidationError
+)
class ValidateYetiRigInputShapesInInstance(pyblish.api.Validator):
@@ -19,7 +22,7 @@ class ValidateYetiRigInputShapesInInstance(pyblish.api.Validator):
invalid = self.get_invalid(instance)
if invalid:
- raise RuntimeError("Yeti Rig has invalid input meshes")
+ raise PublishValidationError("Yeti Rig has invalid input meshes")
@classmethod
def get_invalid(cls, instance):
diff --git a/openpype/hosts/maya/plugins/publish/validate_yeti_rig_settings.py b/openpype/hosts/maya/plugins/publish/validate_yeti_rig_settings.py
index 9914277721..455bf5291a 100644
--- a/openpype/hosts/maya/plugins/publish/validate_yeti_rig_settings.py
+++ b/openpype/hosts/maya/plugins/publish/validate_yeti_rig_settings.py
@@ -1,5 +1,7 @@
import pyblish.api
+from openpype.pipeline.publish import PublishValidationError
+
class ValidateYetiRigSettings(pyblish.api.InstancePlugin):
"""Validate Yeti Rig Settings have collected input connections.
@@ -18,8 +20,9 @@ class ValidateYetiRigSettings(pyblish.api.InstancePlugin):
invalid = self.get_invalid(instance)
if invalid:
- raise RuntimeError("Detected invalid Yeti Rig data. (See log) "
- "Tip: Save the scene")
+ raise PublishValidationError(
+ ("Detected invalid Yeti Rig data. (See log) "
+ "Tip: Save the scene"))
@classmethod
def get_invalid(cls, instance):
diff --git a/openpype/hosts/nuke/api/lib.py b/openpype/hosts/nuke/api/lib.py
index 685ab3019d..0efc46edaf 100644
--- a/openpype/hosts/nuke/api/lib.py
+++ b/openpype/hosts/nuke/api/lib.py
@@ -553,7 +553,9 @@ def add_write_node_legacy(name, **kwarg):
w = nuke.createNode(
"Write",
- "name {}".format(name))
+ "name {}".format(name),
+ inpanel=False
+ )
w["file"].setValue(kwarg["file"])
@@ -589,7 +591,9 @@ def add_write_node(name, file_path, knobs, **kwarg):
w = nuke.createNode(
"Write",
- "name {}".format(name))
+ "name {}".format(name),
+ inpanel=False
+ )
w["file"].setValue(file_path)
@@ -1192,8 +1196,10 @@ def create_prenodes(
# create node
now_node = nuke.createNode(
- nodeclass, "name {}".format(name))
- now_node.hideControlPanel()
+ nodeclass,
+ "name {}".format(name),
+ inpanel=False
+ )
# add for dependency linking
for_dependency[name] = {
@@ -1317,12 +1323,17 @@ def create_write_node(
input_name = str(input.name()).replace(" ", "")
# if connected input node was defined
prev_node = nuke.createNode(
- "Input", "name {}".format(input_name))
+ "Input",
+ "name {}".format(input_name),
+ inpanel=False
+ )
else:
# generic input node connected to nothing
prev_node = nuke.createNode(
- "Input", "name {}".format("rgba"))
- prev_node.hideControlPanel()
+ "Input",
+ "name {}".format("rgba"),
+ inpanel=False
+ )
# creating pre-write nodes `prenodes`
last_prenode = create_prenodes(
@@ -1342,15 +1353,13 @@ def create_write_node(
imageio_writes["knobs"],
**data
)
- write_node.hideControlPanel()
# connect to previous node
now_node.setInput(0, prev_node)
# switch actual node to previous
prev_node = now_node
- now_node = nuke.createNode("Output", "name Output1")
- now_node.hideControlPanel()
+ now_node = nuke.createNode("Output", "name Output1", inpanel=False)
# connect to previous node
now_node.setInput(0, prev_node)
@@ -1517,8 +1526,10 @@ def create_write_node_legacy(
else:
# generic input node connected to nothing
prev_node = nuke.createNode(
- "Input", "name {}".format("rgba"))
- prev_node.hideControlPanel()
+ "Input",
+ "name {}".format("rgba"),
+ inpanel=False
+ )
# creating pre-write nodes `prenodes`
if prenodes:
for node in prenodes:
@@ -1530,8 +1541,10 @@ def create_write_node_legacy(
# create node
now_node = nuke.createNode(
- klass, "name {}".format(pre_node_name))
- now_node.hideControlPanel()
+ klass,
+ "name {}".format(pre_node_name),
+ inpanel=False
+ )
# add data to knob
for _knob in knobs:
@@ -1561,14 +1574,18 @@ def create_write_node_legacy(
if isinstance(dependent, (tuple or list)):
for i, node_name in enumerate(dependent):
input_node = nuke.createNode(
- "Input", "name {}".format(node_name))
- input_node.hideControlPanel()
+ "Input",
+ "name {}".format(node_name),
+ inpanel=False
+ )
now_node.setInput(1, input_node)
elif isinstance(dependent, str):
input_node = nuke.createNode(
- "Input", "name {}".format(node_name))
- input_node.hideControlPanel()
+ "Input",
+ "name {}".format(node_name),
+ inpanel=False
+ )
now_node.setInput(0, input_node)
else:
@@ -1583,15 +1600,13 @@ def create_write_node_legacy(
"inside_{}".format(name),
**_data
)
- write_node.hideControlPanel()
# connect to previous node
now_node.setInput(0, prev_node)
# switch actual node to previous
prev_node = now_node
- now_node = nuke.createNode("Output", "name Output1")
- now_node.hideControlPanel()
+ now_node = nuke.createNode("Output", "name Output1", inpanel=False)
# connect to previous node
now_node.setInput(0, prev_node)
diff --git a/openpype/hosts/nuke/plugins/load/load_camera_abc.py b/openpype/hosts/nuke/plugins/load/load_camera_abc.py
index 11cc63d25c..40822c9eb7 100644
--- a/openpype/hosts/nuke/plugins/load/load_camera_abc.py
+++ b/openpype/hosts/nuke/plugins/load/load_camera_abc.py
@@ -66,8 +66,6 @@ class AlembicCameraLoader(load.LoaderPlugin):
object_name, file),
inpanel=False
)
- # hide property panel
- camera_node.hideControlPanel()
camera_node.forceValidate()
camera_node["frame_rate"].setValue(float(fps))
diff --git a/openpype/hosts/nuke/plugins/load/load_clip.py b/openpype/hosts/nuke/plugins/load/load_clip.py
index cb3da79ef5..8f24fe6861 100644
--- a/openpype/hosts/nuke/plugins/load/load_clip.py
+++ b/openpype/hosts/nuke/plugins/load/load_clip.py
@@ -144,10 +144,9 @@ class LoadClip(plugin.NukeLoader):
# Create the Loader with the filename path set
read_node = nuke.createNode(
"Read",
- "name {}".format(read_name))
-
- # hide property panel
- read_node.hideControlPanel()
+ "name {}".format(read_name),
+ inpanel=False
+ )
# to avoid multiple undo steps for rest of process
# we will switch off undo-ing
@@ -165,8 +164,8 @@ class LoadClip(plugin.NukeLoader):
"handleStart", "handleEnd"]
data_imprint = {}
- for k in add_keys:
- if k == 'version':
+ for key in add_keys:
+ if key == 'version':
version_doc = context["version"]
if version_doc["type"] == "hero_version":
version = "hero"
@@ -174,17 +173,20 @@ class LoadClip(plugin.NukeLoader):
version = version_doc.get("name")
if version:
- data_imprint[k] = version
+ data_imprint[key] = version
- elif k == 'colorspace':
- colorspace = representation["data"].get(k)
- colorspace = colorspace or version_data.get(k)
+ elif key == 'colorspace':
+ colorspace = representation["data"].get(key)
+ colorspace = colorspace or version_data.get(key)
data_imprint["db_colorspace"] = colorspace
if used_colorspace:
data_imprint["used_colorspace"] = used_colorspace
else:
- data_imprint[k] = context["version"]['data'].get(
- k, str(None))
+ value_ = context["version"]['data'].get(
+ key, str(None))
+ if isinstance(value_, (str)):
+ value_ = value_.replace("\\", "/")
+ data_imprint[key] = value_
data_imprint["objectName"] = read_name
diff --git a/openpype/hosts/nuke/plugins/load/load_effects.py b/openpype/hosts/nuke/plugins/load/load_effects.py
index d49f87a094..eb1c905c4d 100644
--- a/openpype/hosts/nuke/plugins/load/load_effects.py
+++ b/openpype/hosts/nuke/plugins/load/load_effects.py
@@ -88,10 +88,9 @@ class LoadEffects(load.LoaderPlugin):
GN = nuke.createNode(
"Group",
- "name {}_1".format(object_name))
-
- # hide property panel
- GN.hideControlPanel()
+ "name {}_1".format(object_name),
+ inpanel=False
+ )
# adding content to the group node
with GN:
diff --git a/openpype/hosts/nuke/plugins/load/load_effects_ip.py b/openpype/hosts/nuke/plugins/load/load_effects_ip.py
index bfe32c1ed9..03be8654ed 100644
--- a/openpype/hosts/nuke/plugins/load/load_effects_ip.py
+++ b/openpype/hosts/nuke/plugins/load/load_effects_ip.py
@@ -89,10 +89,9 @@ class LoadEffectsInputProcess(load.LoaderPlugin):
GN = nuke.createNode(
"Group",
- "name {}_1".format(object_name))
-
- # hide property panel
- GN.hideControlPanel()
+ "name {}_1".format(object_name),
+ inpanel=False
+ )
# adding content to the group node
with GN:
diff --git a/openpype/hosts/nuke/plugins/load/load_image.py b/openpype/hosts/nuke/plugins/load/load_image.py
index f82ee4db88..0a79ddada7 100644
--- a/openpype/hosts/nuke/plugins/load/load_image.py
+++ b/openpype/hosts/nuke/plugins/load/load_image.py
@@ -119,10 +119,9 @@ class LoadImage(load.LoaderPlugin):
with viewer_update_and_undo_stop():
r = nuke.createNode(
"Read",
- "name {}".format(read_name))
-
- # hide property panel
- r.hideControlPanel()
+ "name {}".format(read_name),
+ inpanel=False
+ )
r["file"].setValue(file)
diff --git a/openpype/hosts/nuke/plugins/load/load_model.py b/openpype/hosts/nuke/plugins/load/load_model.py
index f968da8475..36781993ea 100644
--- a/openpype/hosts/nuke/plugins/load/load_model.py
+++ b/openpype/hosts/nuke/plugins/load/load_model.py
@@ -65,9 +65,6 @@ class AlembicModelLoader(load.LoaderPlugin):
inpanel=False
)
- # hide property panel
- model_node.hideControlPanel()
-
model_node.forceValidate()
# Ensure all items are imported and selected.
diff --git a/openpype/hosts/nuke/plugins/load/load_script_precomp.py b/openpype/hosts/nuke/plugins/load/load_script_precomp.py
index 53e9a76003..b74fdf481a 100644
--- a/openpype/hosts/nuke/plugins/load/load_script_precomp.py
+++ b/openpype/hosts/nuke/plugins/load/load_script_precomp.py
@@ -70,10 +70,9 @@ class LinkAsGroup(load.LoaderPlugin):
# P = nuke.nodes.LiveGroup("file {}".format(file))
P = nuke.createNode(
"Precomp",
- "file {}".format(file))
-
- # hide property panel
- P.hideControlPanel()
+ "file {}".format(file),
+ inpanel=False
+ )
# Set colorspace defined in version data
colorspace = context["version"]["data"].get("colorspace", None)
diff --git a/openpype/hosts/unreal/addon.py b/openpype/hosts/unreal/addon.py
index ed23950b35..b5c978d98f 100644
--- a/openpype/hosts/unreal/addon.py
+++ b/openpype/hosts/unreal/addon.py
@@ -1,7 +1,6 @@
import os
import re
from openpype.modules import IHostAddon, OpenPypeModule
-from openpype.widgets.message_window import Window
UNREAL_ROOT_DIR = os.path.dirname(os.path.abspath(__file__))
@@ -21,6 +20,8 @@ class UnrealAddon(OpenPypeModule, IHostAddon):
from .lib import get_compatible_integration
+ from openpype.widgets.message_window import Window
+
pattern = re.compile(r'^\d+-\d+$')
if not pattern.match(app.name):
diff --git a/openpype/modules/deadline/abstract_submit_deadline.py b/openpype/modules/deadline/abstract_submit_deadline.py
index e3e94d50cd..551a2f7373 100644
--- a/openpype/modules/deadline/abstract_submit_deadline.py
+++ b/openpype/modules/deadline/abstract_submit_deadline.py
@@ -19,7 +19,8 @@ import requests
import pyblish.api
from openpype.pipeline.publish import (
AbstractMetaInstancePlugin,
- KnownPublishError
+ KnownPublishError,
+ OpenPypePyblishPluginMixin
)
JSONDecodeError = getattr(json.decoder, "JSONDecodeError", ValueError)
@@ -35,7 +36,7 @@ def requests_post(*args, **kwargs):
Warning:
Disabling SSL certificate validation is defeating one line
- of defense SSL is providing and it is not recommended.
+ of defense SSL is providing, and it is not recommended.
"""
if 'verify' not in kwargs:
@@ -56,7 +57,7 @@ def requests_get(*args, **kwargs):
Warning:
Disabling SSL certificate validation is defeating one line
- of defense SSL is providing and it is not recommended.
+ of defense SSL is providing, and it is not recommended.
"""
if 'verify' not in kwargs:
@@ -395,14 +396,17 @@ class DeadlineJobInfo(object):
@six.add_metaclass(AbstractMetaInstancePlugin)
-class AbstractSubmitDeadline(pyblish.api.InstancePlugin):
+class AbstractSubmitDeadline(pyblish.api.InstancePlugin,
+ OpenPypePyblishPluginMixin):
"""Class abstracting access to Deadline."""
label = "Submit to Deadline"
order = pyblish.api.IntegratorOrder + 0.1
+
import_reference = False
use_published = True
asset_dependencies = False
+ default_priority = 50
def __init__(self, *args, **kwargs):
super(AbstractSubmitDeadline, self).__init__(*args, **kwargs)
@@ -651,18 +655,18 @@ class AbstractSubmitDeadline(pyblish.api.InstancePlugin):
@staticmethod
def _get_workfile_instance(context):
"""Find workfile instance in context"""
- for i in context:
+ for instance in context:
is_workfile = (
- "workfile" in i.data.get("families", []) or
- i.data["family"] == "workfile"
+ "workfile" in instance.data.get("families", []) or
+ instance.data["family"] == "workfile"
)
if not is_workfile:
continue
# test if there is instance of workfile waiting
# to be published.
- assert i.data.get("publish", True) is True, (
+ assert instance.data.get("publish", True) is True, (
"Workfile (scene) must be published along")
- return i
+ return instance
diff --git a/openpype/modules/deadline/plugins/publish/submit_maya_deadline.py b/openpype/modules/deadline/plugins/publish/submit_maya_deadline.py
index a6cdcb7e71..159ac43289 100644
--- a/openpype/modules/deadline/plugins/publish/submit_maya_deadline.py
+++ b/openpype/modules/deadline/plugins/publish/submit_maya_deadline.py
@@ -30,8 +30,16 @@ import attr
from maya import cmds
-from openpype.pipeline import legacy_io
-
+from openpype.pipeline import (
+ legacy_io,
+ OpenPypePyblishPluginMixin
+)
+from openpype.lib import (
+ BoolDef,
+ NumberDef,
+ TextDef,
+ EnumDef
+)
from openpype.hosts.maya.api.lib_rendersettings import RenderSettings
from openpype.hosts.maya.api.lib import get_attr_in_layer
@@ -92,7 +100,8 @@ class ArnoldPluginInfo(object):
ArnoldFile = attr.ib(default=None)
-class MayaSubmitDeadline(abstract_submit_deadline.AbstractSubmitDeadline):
+class MayaSubmitDeadline(abstract_submit_deadline.AbstractSubmitDeadline,
+ OpenPypePyblishPluginMixin):
label = "Submit Render to Deadline"
hosts = ["maya"]
@@ -106,6 +115,24 @@ class MayaSubmitDeadline(abstract_submit_deadline.AbstractSubmitDeadline):
jobInfo = {}
pluginInfo = {}
group = "none"
+ strict_error_checking = True
+
+ @classmethod
+ def apply_settings(cls, project_settings, system_settings):
+ settings = project_settings["deadline"]["publish"]["MayaSubmitDeadline"] # noqa
+
+ # Take some defaults from settings
+ cls.asset_dependencies = settings.get("asset_dependencies",
+ cls.asset_dependencies)
+ cls.import_reference = settings.get("import_reference",
+ cls.import_reference)
+ cls.use_published = settings.get("use_published", cls.use_published)
+ cls.priority = settings.get("priority", cls.priority)
+ cls.tile_priority = settings.get("tile_priority", cls.tile_priority)
+ cls.limit = settings.get("limit", cls.limit)
+ cls.group = settings.get("group", cls.group)
+ cls.strict_error_checking = settings.get("strict_error_checking",
+ cls.strict_error_checking)
def get_job_info(self):
job_info = DeadlineJobInfo(Plugin="MayaBatch")
@@ -151,6 +178,19 @@ class MayaSubmitDeadline(abstract_submit_deadline.AbstractSubmitDeadline):
if self.limit:
job_info.LimitGroups = ",".join(self.limit)
+ attr_values = self.get_attr_values_from_data(instance.data)
+ render_globals = instance.data.setdefault("renderGlobals", dict())
+ machine_list = attr_values.get("machineList", "")
+ if machine_list:
+ if attr_values.get("whitelist", True):
+ machine_list_key = "Whitelist"
+ else:
+ machine_list_key = "Blacklist"
+ render_globals[machine_list_key] = machine_list
+
+ job_info.Priority = attr_values.get("priority")
+ job_info.ChunkSize = attr_values.get("chunkSize")
+
# Add options from RenderGlobals
render_globals = instance.data.get("renderGlobals", {})
job_info.update(render_globals)
@@ -223,8 +263,10 @@ class MayaSubmitDeadline(abstract_submit_deadline.AbstractSubmitDeadline):
"renderSetupIncludeLights", default_rs_include_lights)
if rs_include_lights not in {"1", "0", True, False}:
rs_include_lights = default_rs_include_lights
- strict_error_checking = instance.data.get("strict_error_checking",
- True)
+
+ attr_values = self.get_attr_values_from_data(instance.data)
+ strict_error_checking = attr_values.get("strict_error_checking",
+ self.strict_error_checking)
plugin_info = MayaPluginInfo(
SceneFile=self.scene_path,
Version=cmds.about(version=True),
@@ -422,11 +464,13 @@ class MayaSubmitDeadline(abstract_submit_deadline.AbstractSubmitDeadline):
assembly_job_info.Name += " - Tile Assembly Job"
assembly_job_info.Frames = 1
assembly_job_info.MachineLimit = 1
- assembly_job_info.Priority = instance.data.get(
- "tile_priority", self.tile_priority
- )
+
+ attr_values = self.get_attr_values_from_data(instance.data)
+ assembly_job_info.Priority = attr_values.get("tile_priority",
+ self.tile_priority)
assembly_job_info.TileJob = False
+ # TODO: This should be a new publisher attribute definition
pool = instance.context.data["project_settings"]["deadline"]
pool = pool["publish"]["ProcessSubmittedJobOnFarm"]["deadline_pool"]
assembly_job_info.Pool = pool or instance.data.get("primaryPool", "")
@@ -519,7 +563,6 @@ class MayaSubmitDeadline(abstract_submit_deadline.AbstractSubmitDeadline):
"submitting assembly job {} of {}".format(i + 1,
num_assemblies)
)
- self.log.info(payload)
assembly_job_id = self.submit(payload)
assembly_job_ids.append(assembly_job_id)
@@ -782,6 +825,44 @@ class MayaSubmitDeadline(abstract_submit_deadline.AbstractSubmitDeadline):
for file in exp:
yield file
+ @classmethod
+ def get_attribute_defs(cls):
+ defs = super(MayaSubmitDeadline, cls).get_attribute_defs()
+
+ defs.extend([
+ NumberDef("priority",
+ label="Priority",
+ default=cls.default_priority,
+ decimals=0),
+ NumberDef("chunkSize",
+ label="Frames Per Task",
+ default=1,
+ decimals=0,
+ minimum=1,
+ maximum=1000),
+ TextDef("machineList",
+ label="Machine List",
+ default="",
+ placeholder="machine1,machine2"),
+ EnumDef("whitelist",
+ label="Machine List (Allow/Deny)",
+ items={
+ True: "Allow List",
+ False: "Deny List",
+ },
+ default=False),
+ NumberDef("tile_priority",
+ label="Tile Assembler Priority",
+ decimals=0,
+ default=cls.tile_priority),
+ BoolDef("strict_error_checking",
+ label="Strict Error Checking",
+ default=cls.strict_error_checking),
+
+ ])
+
+ return defs
+
def _format_tiles(
filename,
diff --git a/openpype/modules/deadline/plugins/publish/submit_maya_remote_publish_deadline.py b/openpype/modules/deadline/plugins/publish/submit_maya_remote_publish_deadline.py
index 25f859554f..3b04f6d3bc 100644
--- a/openpype/modules/deadline/plugins/publish/submit_maya_remote_publish_deadline.py
+++ b/openpype/modules/deadline/plugins/publish/submit_maya_remote_publish_deadline.py
@@ -1,18 +1,33 @@
import os
-import requests
+import attr
from datetime import datetime
from maya import cmds
from openpype.pipeline import legacy_io, PublishXmlValidationError
-from openpype.settings import get_project_settings
from openpype.tests.lib import is_in_tests
from openpype.lib import is_running_from_build
+from openpype_modules.deadline import abstract_submit_deadline
+from openpype_modules.deadline.abstract_submit_deadline import DeadlineJobInfo
import pyblish.api
-class MayaSubmitRemotePublishDeadline(pyblish.api.InstancePlugin):
+@attr.s
+class MayaPluginInfo(object):
+ Build = attr.ib(default=None) # Don't force build
+ StrictErrorChecking = attr.ib(default=True)
+
+ SceneFile = attr.ib(default=None) # Input scene
+ Version = attr.ib(default=None) # Mandatory for Deadline
+ ProjectPath = attr.ib(default=None)
+
+ ScriptJob = attr.ib(default=True)
+ ScriptFilename = attr.ib(default=None)
+
+
+class MayaSubmitRemotePublishDeadline(
+ abstract_submit_deadline.AbstractSubmitDeadline):
"""Submit Maya scene to perform a local publish in Deadline.
Publishing in Deadline can be helpful for scenes that publish very slow.
@@ -36,13 +51,6 @@ class MayaSubmitRemotePublishDeadline(pyblish.api.InstancePlugin):
targets = ["local"]
def process(self, instance):
- project_name = instance.context.data["projectName"]
- # TODO settings can be received from 'context.data["project_settings"]'
- settings = get_project_settings(project_name)
- # use setting for publish job on farm, no reason to have it separately
- deadline_publish_job_sett = (settings["deadline"]
- ["publish"]
- ["ProcessSubmittedJobOnFarm"])
# Ensure no errors so far
if not (all(result["success"]
@@ -54,54 +62,39 @@ class MayaSubmitRemotePublishDeadline(pyblish.api.InstancePlugin):
"Skipping submission..")
return
+ super(MayaSubmitRemotePublishDeadline, self).process(instance)
+
+ def get_job_info(self):
+ instance = self._instance
+ context = instance.context
+
+ project_name = instance.context.data["projectName"]
scene = instance.context.data["currentFile"]
scenename = os.path.basename(scene)
job_name = "{scene} [PUBLISH]".format(scene=scenename)
batch_name = "{code} - {scene}".format(code=project_name,
scene=scenename)
+
if is_in_tests():
batch_name += datetime.now().strftime("%d%m%Y%H%M%S")
- # Generate the payload for Deadline submission
- payload = {
- "JobInfo": {
- "Plugin": "MayaBatch",
- "BatchName": batch_name,
- "Name": job_name,
- "UserName": instance.context.data["user"],
- "Comment": instance.context.data.get("comment", ""),
- # "InitialStatus": state
- "Department": deadline_publish_job_sett["deadline_department"],
- "ChunkSize": deadline_publish_job_sett["deadline_chunk_size"],
- "Priority": deadline_publish_job_sett["deadline_priority"],
- "Group": deadline_publish_job_sett["deadline_group"],
- "Pool": deadline_publish_job_sett["deadline_pool"],
- },
- "PluginInfo": {
+ job_info = DeadlineJobInfo(Plugin="MayaBatch")
+ job_info.BatchName = batch_name
+ job_info.Name = job_name
+ job_info.UserName = context.data.get("user")
+ job_info.Comment = context.data.get("comment", "")
- "Build": None, # Don't force build
- "StrictErrorChecking": True,
- "ScriptJob": True,
+ # use setting for publish job on farm, no reason to have it separately
+ project_settings = context.data["project_settings"]
+ deadline_publish_job_sett = project_settings["deadline"]["publish"]["ProcessSubmittedJobOnFarm"] # noqa
+ job_info.Department = deadline_publish_job_sett["deadline_department"]
+ job_info.ChunkSize = deadline_publish_job_sett["deadline_chunk_size"]
+ job_info.Priority = deadline_publish_job_sett["deadline_priority"]
+ job_info.Group = deadline_publish_job_sett["deadline_group"]
+ job_info.Pool = deadline_publish_job_sett["deadline_pool"]
- # Inputs
- "SceneFile": scene,
- "ScriptFilename": "{OPENPYPE_REPOS_ROOT}/openpype/scripts/remote_publish.py", # noqa
-
- # Mandatory for Deadline
- "Version": cmds.about(version=True),
-
- # Resolve relative references
- "ProjectPath": cmds.workspace(query=True,
- rootDirectory=True),
-
- },
-
- # Mandatory for Deadline, may be empty
- "AuxFiles": []
- }
-
- # Include critical environment variables with submission + api.Session
+ # Include critical environment variables with submission + Session
keys = [
"FTRACK_API_USER",
"FTRACK_API_KEY",
@@ -126,20 +119,18 @@ class MayaSubmitRemotePublishDeadline(pyblish.api.InstancePlugin):
environment["OPENPYPE_PUBLISH_SUBSET"] = instance.data["subset"]
environment["OPENPYPE_REMOTE_PUBLISH"] = "1"
- payload["JobInfo"].update({
- "EnvironmentKeyValue%d" % index: "{key}={value}".format(
- key=key,
- value=environment[key]
- ) for index, key in enumerate(environment)
- })
+ for key, value in environment.items():
+ job_info.EnvironmentKeyValue[key] = value
- self.log.info("Submitting Deadline job ...")
- deadline_url = instance.context.data["defaultDeadline"]
- # if custom one is set in instance, use that
- if instance.data.get("deadlineUrl"):
- deadline_url = instance.data.get("deadlineUrl")
- assert deadline_url, "Requires Deadline Webservice URL"
- url = "{}/api/jobs".format(deadline_url)
- response = requests.post(url, json=payload, timeout=10)
- if not response.ok:
- raise Exception(response.text)
+ def get_plugin_info(self):
+
+ scene = self._instance.context.data["currentFile"]
+
+ plugin_info = MayaPluginInfo()
+ plugin_info.SceneFile = scene
+ plugin_info.ScriptFilename = "{OPENPYPE_REPOS_ROOT}/openpype/scripts/remote_publish.py" # noqa
+ plugin_info.Version = cmds.about(version=True)
+ plugin_info.ProjectPath = cmds.workspace(query=True,
+ rootDirectory=True)
+
+ return attr.asdict(plugin_info)
diff --git a/openpype/modules/deadline/plugins/publish/submit_publish_job.py b/openpype/modules/deadline/plugins/publish/submit_publish_job.py
index 292fe58cca..c893f43c4c 100644
--- a/openpype/modules/deadline/plugins/publish/submit_publish_job.py
+++ b/openpype/modules/deadline/plugins/publish/submit_publish_job.py
@@ -18,6 +18,8 @@ from openpype.pipeline import (
get_representation_path,
legacy_io,
)
+from openpype.pipeline.publish import OpenPypePyblishPluginMixin
+from openpype.lib import EnumDef
from openpype.tests.lib import is_in_tests
from openpype.pipeline.farm.patterning import match_aov_pattern
from openpype.lib import is_running_from_build
@@ -81,7 +83,7 @@ def get_resource_files(resources, frame_range=None):
class ProcessSubmittedJobOnFarm(pyblish.api.InstancePlugin,
- publish.ColormanagedPyblishPluginMixin):
+ OpenPypePyblishPluginMixin):
"""Process Job submitted on farm.
These jobs are dependent on a deadline or muster job
@@ -278,6 +280,12 @@ class ProcessSubmittedJobOnFarm(pyblish.api.InstancePlugin,
priority = self.deadline_priority or instance.data.get("priority", 50)
+ instance_settings = self.get_attr_values_from_data(instance.data)
+ initial_status = instance_settings.get("publishJobState", "Active")
+ # TODO: Remove this backwards compatibility of `suspend_publish`
+ if instance.data.get("suspend_publish"):
+ initial_status = "Suspended"
+
args = [
"--headless",
'publish',
@@ -304,6 +312,7 @@ class ProcessSubmittedJobOnFarm(pyblish.api.InstancePlugin,
"Department": self.deadline_department,
"ChunkSize": self.deadline_chunk_size,
"Priority": priority,
+ "InitialStatus": initial_status,
"Group": self.deadline_group,
"Pool": self.deadline_pool or instance.data.get("primaryPool"),
@@ -336,9 +345,6 @@ class ProcessSubmittedJobOnFarm(pyblish.api.InstancePlugin,
else:
payload["JobInfo"]["JobDependency0"] = job["_id"]
- if instance.data.get("suspend_publish"):
- payload["JobInfo"]["InitialStatus"] = "Suspended"
-
for index, (key_, value_) in enumerate(environment.items()):
payload["JobInfo"].update(
{
@@ -1251,3 +1257,12 @@ class ProcessSubmittedJobOnFarm(pyblish.api.InstancePlugin,
publish_folder = os.path.dirname(file_path)
return publish_folder
+
+ @classmethod
+ def get_attribute_defs(cls):
+ return [
+ EnumDef("publishJobState",
+ label="Publish Job State",
+ items=["Active", "Suspended"],
+ default="Active")
+ ]
diff --git a/openpype/hosts/maya/plugins/publish/validate_muster_connection.py b/openpype/modules/muster/plugins/publish/validate_muster_connection.py
similarity index 100%
rename from openpype/hosts/maya/plugins/publish/validate_muster_connection.py
rename to openpype/modules/muster/plugins/publish/validate_muster_connection.py
diff --git a/openpype/pipeline/create/__init__.py b/openpype/pipeline/create/__init__.py
index 5eee18df0f..6755224c19 100644
--- a/openpype/pipeline/create/__init__.py
+++ b/openpype/pipeline/create/__init__.py
@@ -74,6 +74,8 @@ __all__ = (
"register_creator_plugin_path",
"deregister_creator_plugin_path",
+ "cache_and_get_instances",
+
"CreatedInstance",
"CreateContext",
diff --git a/openpype/pipeline/delivery.py b/openpype/pipeline/delivery.py
index 500f54040a..ddde45d4da 100644
--- a/openpype/pipeline/delivery.py
+++ b/openpype/pipeline/delivery.py
@@ -157,6 +157,8 @@ def deliver_single_file(
delivery_path = delivery_path.replace("..", ".")
# Make sure path is valid for all platforms
delivery_path = os.path.normpath(delivery_path.replace("\\", "/"))
+ # Remove newlines from the end of the string to avoid OSError during copy
+ delivery_path = delivery_path.rstrip()
delivery_folder = os.path.dirname(delivery_path)
if not os.path.exists(delivery_folder):
diff --git a/openpype/plugins/publish/collect_scene_version.py b/openpype/plugins/publish/collect_scene_version.py
index cd3231a07d..70a0aca296 100644
--- a/openpype/plugins/publish/collect_scene_version.py
+++ b/openpype/plugins/publish/collect_scene_version.py
@@ -3,6 +3,7 @@ import pyblish.api
from openpype.lib import get_version_from_path
from openpype.tests.lib import is_in_tests
+from openpype.pipeline import KnownPublishError
class CollectSceneVersion(pyblish.api.ContextPlugin):
@@ -38,11 +39,15 @@ class CollectSceneVersion(pyblish.api.ContextPlugin):
if (
os.environ.get("HEADLESS_PUBLISH")
and not is_in_tests()
- and context.data["hostName"] in self.skip_hosts_headless_publish):
+ and context.data["hostName"] in self.skip_hosts_headless_publish
+ ):
self.log.debug("Skipping for headless publishing")
return
- assert context.data.get('currentFile'), "Cannot get current file"
+ if not context.data.get('currentFile'):
+ raise KnownPublishError("Cannot get current workfile path. "
+ "Make sure your scene is saved.")
+
filename = os.path.basename(context.data.get('currentFile'))
if '' in filename:
@@ -53,8 +58,9 @@ class CollectSceneVersion(pyblish.api.ContextPlugin):
)
version = get_version_from_path(filename)
- assert version, "Cannot determine version"
+ if version is None:
+ raise KnownPublishError("Unable to retrieve version number from "
+ "filename: {}".format(filename))
- rootVersion = int(version)
- context.data['version'] = rootVersion
+ context.data['version'] = int(version)
self.log.info('Scene Version: %s' % context.data.get('version'))
diff --git a/openpype/pype_commands.py b/openpype/pype_commands.py
index 56a0fe60cd..042324ef8b 100644
--- a/openpype/pype_commands.py
+++ b/openpype/pype_commands.py
@@ -291,7 +291,14 @@ class PypeCommands:
folder = "../tests"
# disable warnings and show captured stdout even if success
- args = ["--disable-pytest-warnings", "-rP", folder]
+ args = [
+ "--disable-pytest-warnings",
+ "--capture=sys",
+ "--print",
+ "-W ignore::DeprecationWarning",
+ "-rP",
+ folder
+ ]
if mark:
args.extend(["-m", mark])
diff --git a/openpype/settings/defaults/project_settings/maya.json b/openpype/settings/defaults/project_settings/maya.json
index e3fc5f0723..a25775e592 100644
--- a/openpype/settings/defaults/project_settings/maya.json
+++ b/openpype/settings/defaults/project_settings/maya.json
@@ -615,8 +615,8 @@
"maskOverride": false,
"maskDriver": false,
"maskFilter": false,
- "maskColor_manager": false,
- "maskOperator": false
+ "maskOperator": false,
+ "maskColor_manager": false
},
"CreateVrayProxy": {
"enabled": true,
diff --git a/openpype/settings/defaults/project_settings/substancepainter.json b/openpype/settings/defaults/project_settings/substancepainter.json
index 4adeff98ef..2f9344d435 100644
--- a/openpype/settings/defaults/project_settings/substancepainter.json
+++ b/openpype/settings/defaults/project_settings/substancepainter.json
@@ -2,11 +2,11 @@
"imageio": {
"activate_host_color_management": true,
"ocio_config": {
- "override_global_config": true,
+ "override_global_config": false,
"filepath": []
},
"file_rules": {
- "activate_host_rules": true,
+ "activate_host_rules": false,
"rules": {}
}
},
diff --git a/openpype/settings/entities/schemas/projects_schema/schemas/schema_maya_create.json b/openpype/settings/entities/schemas/projects_schema/schemas/schema_maya_create.json
index a8b76a0331..1c37638c90 100644
--- a/openpype/settings/entities/schemas/projects_schema/schemas/schema_maya_create.json
+++ b/openpype/settings/entities/schemas/projects_schema/schemas/schema_maya_create.json
@@ -318,52 +318,52 @@
{
"type": "boolean",
"key": "maskOptions",
- "label": "Mask Options"
+ "label": "Export Options"
},
{
"type": "boolean",
"key": "maskCamera",
- "label": "Mask Camera"
+ "label": "Export Cameras"
},
{
"type": "boolean",
"key": "maskLight",
- "label": "Mask Light"
+ "label": "Export Lights"
},
{
"type": "boolean",
"key": "maskShape",
- "label": "Mask Shape"
+ "label": "Export Shapes"
},
{
"type": "boolean",
"key": "maskShader",
- "label": "Mask Shader"
+ "label": "Export Shaders"
},
{
"type": "boolean",
"key": "maskOverride",
- "label": "Mask Override"
+ "label": "Export Override Nodes"
},
{
"type": "boolean",
"key": "maskDriver",
- "label": "Mask Driver"
+ "label": "Export Drivers"
},
{
"type": "boolean",
"key": "maskFilter",
- "label": "Mask Filter"
- },
- {
- "type": "boolean",
- "key": "maskColor_manager",
- "label": "Mask Color Manager"
+ "label": "Export Filters"
},
{
"type": "boolean",
"key": "maskOperator",
- "label": "Mask Operator"
+ "label": "Export Operators"
+ },
+ {
+ "type": "boolean",
+ "key": "maskColor_manager",
+ "label": "Export Color Managers"
}
]
},
diff --git a/openpype/tools/sceneinventory/view.py b/openpype/tools/sceneinventory/view.py
index 73d33392b9..57e6e24411 100644
--- a/openpype/tools/sceneinventory/view.py
+++ b/openpype/tools/sceneinventory/view.py
@@ -1,5 +1,6 @@
import collections
import logging
+import itertools
from functools import partial
from qtpy import QtWidgets, QtCore
@@ -195,20 +196,17 @@ class SceneInventoryView(QtWidgets.QTreeView):
version_name_by_id[version_doc["_id"]] = \
version_doc["name"]
+ # Specify version per item to update to
+ update_items = []
+ update_versions = []
for item in items:
repre_id = item["representation"]
version_id = version_id_by_repre_id.get(repre_id)
version_name = version_name_by_id.get(version_id)
if version_name is not None:
- try:
- update_container(item, version_name)
- except AssertionError:
- self._show_version_error_dialog(
- version_name, [item]
- )
- log.warning("Update failed", exc_info=True)
-
- self.data_changed.emit()
+ update_items.append(item)
+ update_versions.append(version_name)
+ self._update_containers(update_items, update_versions)
update_icon = qtawesome.icon(
"fa.asterisk",
@@ -225,16 +223,6 @@ class SceneInventoryView(QtWidgets.QTreeView):
update_to_latest_action = None
if has_outdated or has_loaded_hero_versions:
- # update to latest version
- def _on_update_to_latest(items):
- for item in items:
- try:
- update_container(item, -1)
- except AssertionError:
- self._show_version_error_dialog(None, [item])
- log.warning("Update failed", exc_info=True)
- self.data_changed.emit()
-
update_icon = qtawesome.icon(
"fa.angle-double-up",
color=DEFAULT_COLOR
@@ -245,21 +233,11 @@ class SceneInventoryView(QtWidgets.QTreeView):
menu
)
update_to_latest_action.triggered.connect(
- lambda: _on_update_to_latest(items)
+ lambda: self._update_containers(items, version=-1)
)
change_to_hero = None
if has_available_hero_version:
- # change to hero version
- def _on_update_to_hero(items):
- for item in items:
- try:
- update_container(item, HeroVersionType(-1))
- except AssertionError:
- self._show_version_error_dialog('hero', [item])
- log.warning("Update failed", exc_info=True)
- self.data_changed.emit()
-
# TODO change icon
change_icon = qtawesome.icon(
"fa.asterisk",
@@ -271,7 +249,8 @@ class SceneInventoryView(QtWidgets.QTreeView):
menu
)
change_to_hero.triggered.connect(
- lambda: _on_update_to_hero(items)
+ lambda: self._update_containers(items,
+ version=HeroVersionType(-1))
)
# set version
@@ -740,14 +719,7 @@ class SceneInventoryView(QtWidgets.QTreeView):
if label:
version = versions_by_label[label]
- for item in items:
- try:
- update_container(item, version)
- except AssertionError:
- self._show_version_error_dialog(version, [item])
- log.warning("Update failed", exc_info=True)
- # refresh model when done
- self.data_changed.emit()
+ self._update_containers(items, version)
def _show_switch_dialog(self, items):
"""Display Switch dialog"""
@@ -782,9 +754,9 @@ class SceneInventoryView(QtWidgets.QTreeView):
Args:
version: str or int or None
"""
- if not version:
+ if version == -1:
version_str = "latest"
- elif version == "hero":
+ elif isinstance(version, HeroVersionType):
version_str = "hero"
elif isinstance(version, int):
version_str = "v{:03d}".format(version)
@@ -841,10 +813,43 @@ class SceneInventoryView(QtWidgets.QTreeView):
return
# Trigger update to latest
- for item in outdated_items:
- try:
- update_container(item, -1)
- except AssertionError:
- self._show_version_error_dialog(None, [item])
- log.warning("Update failed", exc_info=True)
- self.data_changed.emit()
+ self._update_containers(outdated_items, version=-1)
+
+ def _update_containers(self, items, version):
+ """Helper to update items to given version (or version per item)
+
+ If at least one item is specified this will always try to refresh
+ the inventory even if errors occurred on any of the items.
+
+ Arguments:
+ items (list): Items to update
+ version (int or list): Version to set to.
+ This can be a list specifying a version for each item.
+ Like `update_container` version -1 sets the latest version
+ and HeroTypeVersion instances set the hero version.
+
+ """
+
+ if isinstance(version, (list, tuple)):
+ # We allow a unique version to be specified per item. In that case
+ # the length must match with the items
+ assert len(items) == len(version), (
+ "Number of items mismatches number of versions: "
+ "{} items - {} versions".format(len(items), len(version))
+ )
+ versions = version
+ else:
+ # Repeat the same version infinitely
+ versions = itertools.repeat(version)
+
+ # Trigger update to latest
+ try:
+ for item, item_version in zip(items, versions):
+ try:
+ update_container(item, item_version)
+ except AssertionError:
+ self._show_version_error_dialog(item_version, [item])
+ log.warning("Update failed", exc_info=True)
+ finally:
+ # Always update the scene inventory view, even if errors occurred
+ self.data_changed.emit()
diff --git a/openpype/version.py b/openpype/version.py
index 4a6131a26a..03f618ff75 100644
--- a/openpype/version.py
+++ b/openpype/version.py
@@ -1,3 +1,3 @@
# -*- coding: utf-8 -*-
"""Package declaring Pype version."""
-__version__ = "3.15.12-nightly.3"
+__version__ = "3.16.0-nightly.1"
diff --git a/pyproject.toml b/pyproject.toml
index 06a74d9126..1d124574c7 100644
--- a/pyproject.toml
+++ b/pyproject.toml
@@ -1,6 +1,6 @@
[tool.poetry]
name = "OpenPype"
-version = "3.15.11" # OpenPype
+version = "3.15.12" # OpenPype
description = "Open VFX and Animation pipeline with support."
authors = ["OpenPype Team "]
license = "MIT License"