Merge branch 'develop' into enhancement/OP-6227_3dsMax-delete-from-container

This commit is contained in:
Kayla Man 2023-07-12 11:57:54 +08:00
commit 38cb6c4be0
171 changed files with 4073 additions and 2379 deletions

View file

@ -35,6 +35,9 @@ body:
label: Version
description: What version are you running? Look to OpenPype Tray
options:
- 3.16.0-nightly.1
- 3.15.12
- 3.15.12-nightly.4
- 3.15.12-nightly.3
- 3.15.12-nightly.2
- 3.15.12-nightly.1
@ -132,9 +135,6 @@ body:
- 3.14.5-nightly.3
- 3.14.5-nightly.2
- 3.14.5-nightly.1
- 3.14.4
- 3.14.4-nightly.4
- 3.14.4-nightly.3
validations:
required: true
- type: dropdown

View file

@ -1,6 +1,609 @@
# Changelog
## [3.15.12](https://github.com/ynput/OpenPype/tree/3.15.12)
[Full Changelog](https://github.com/ynput/OpenPype/compare/3.15.11...3.15.12)
### **🆕 New features**
<details>
<summary>Tray Publisher: User can set colorspace per instance explicitly <a href="https://github.com/ynput/OpenPype/pull/4901">#4901</a></summary>
With this feature a user can explicitly set/override the colorspace for the representations of an instance, instead of relying on the File Rules from project settings or the like. This way you can ingest any file and explicitly say "this file is colorspace X".
___
</details>
<details>
<summary>Review Family in Max <a href="https://github.com/ynput/OpenPype/pull/5001">#5001</a></summary>
Adds a review feature that creates a preview animation in 3ds Max. (The code is still being cleaned up, so there will be further updates until it is ready for review.)
___
</details>
<details>
<summary>AfterEffects: support for workfile template builder <a href="https://github.com/ynput/OpenPype/pull/5163">#5163</a></summary>
This PR adds templated workfile builder functionality. It allows preparing an AE workfile with placeholders that automatically load a particular representation of a particular subset of a particular asset from the context in which the workfile is opened. Selection from multiple prepared workfiles is provided through templates, so specific task types can use a particular workfile template, etc. Artists can then build a workfile from a template when opening a new workfile.
___
</details>
<details>
<summary>CreatePlugin: Get next version helper <a href="https://github.com/ynput/OpenPype/pull/5242">#5242</a></summary>
Implemented helper functions to get the next available versions for created instances (see the sketch below).
___
</details>
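A minimal sketch of what such a helper can look like, assuming the existing `openpype.client.get_last_version_by_subset_name` query; the helper name and the fallback to version 1 are illustrative, not the exact implementation:

```python
from openpype.client import get_last_version_by_subset_name

def get_next_version(project_name, subset_name, asset_id):
    """Return the next available version number for a subset (sketch)."""
    last_version = get_last_version_by_subset_name(
        project_name, subset_name, asset_id)
    if last_version is None:
        # Nothing published yet; start at 1 (assumed default)
        return 1
    # OpenPype version documents store the version number under "name"
    return last_version["name"] + 1
```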
### **🚀 Enhancements**
<details>
<summary>Maya: Improve Templates <a href="https://github.com/ynput/OpenPype/pull/4854">#4854</a></summary>
Use library method for fetching reference node and support parent in hierarchy.
___
</details>
<details>
<summary>Bug: Maya - Xgen sidecar files aren't moved when saving workfile as a new asset workfile / changing context - OP-6222 <a href="https://github.com/ynput/OpenPype/pull/5215">#5215</a></summary>
This PR manages the Xgen files when switching context in the Workfiles app.
___
</details>
<details>
<summary>Check node references for duplicates in Max <a href="https://github.com/ynput/OpenPype/pull/5192">#5192</a></summary>
Prevents duplicate node references in Max when users try to select nodes before publishing.
___
</details>
<details>
<summary>Tweak profiles logging to debug level <a href="https://github.com/ynput/OpenPype/pull/5194">#5194</a></summary>
Tweak profiles logging to debug level since they aren't artist facing logs.
___
</details>
<details>
<summary>Enhancement: Reduce more visual clutter for artists in new publisher reports <a href="https://github.com/ynput/OpenPype/pull/5208">#5208</a></summary>
Got this from one of our artists' reports - figured some of these logs were definitely not for the artist, reduced those logs to debug level.
___
</details>
<details>
<summary>Cosmetics: Tweak pyblish repair actions (icon, logs, docstring) <a href="https://github.com/ynput/OpenPype/pull/5213">#5213</a></summary>
- Add icon to RepairContextAction
- Set logs to debug level
- Also add "attempt repair" to RepairAction for consistency
- Fix the RepairContextAction docstring to mention the correct argument name
#### Additional info
We should not forget to remove this ["deprecated" actions.py file](https://github.com/ynput/OpenPype/blob/3501d0d23a78fbaef106da2fffe946cb49bef855/openpype/action.py) in 3.16 (next-minor)
## Testing notes:
1. Run some fabulous repairs!
___
</details>
<details>
<summary>Maya: fix save file prompt on launch last workfile with color management enabled + restructure `set_colorspace` <a href="https://github.com/ynput/OpenPype/pull/5225">#5225</a></summary>
- Only set `configFilePath` when OCIO env var is not set since it doesn't do anything if OCIO var is set anyway.
- Set the Maya 2022+ default OCIO path using the resources path instead of "" to avoid Maya Save File on new file after launch
- **Bugfix: This is what fixes the Save prompt on open last workfile feature with Global color management enabled**
- Move all code related to applying the maya settings together after querying the settings
- Swap around the `if use_workfile_settings` since the check was reversed
- Use `get_current_project_name()` instead of environment vars
___
</details>
<details>
<summary>Enhancement: More descriptive error messages for Loaders <a href="https://github.com/ynput/OpenPype/pull/5227">#5227</a></summary>
Tweak raised errors and error messages for loader errors.
___
</details>
<details>
<summary>Houdini: add select invalid action for ValidateSopOutputNode <a href="https://github.com/ynput/OpenPype/pull/5231">#5231</a></summary>
This PR adds a `SelectROPAction` action to `houdini\api\action.py`, used in `Validate Output Node`. `SelectROPAction` selects the ROPs associated with the errored instances; a sketch of such an action follows below.
___
</details>
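A hedged sketch of such a pyblish action; where the ROP path is stored on the instance (`instance.data["instance_node"]`) is an assumption here:

```python
import pyblish.api
import hou

class SelectROPAction(pyblish.api.Action):
    """Select the ROP nodes associated with errored instances (sketch)."""
    label = "Select ROP"
    on = "failed"  # only offer the action when the plug-in failed

    def process(self, context, plugin):
        hou.clearAllSelected()
        for result in context.data.get("results", []):
            # Only consider results of the exact plug-in that failed
            if result["plugin"] is not plugin or not result["error"]:
                continue
            instance = result["instance"]
            if instance is None:
                continue
            # Assumption: the instance stores its ROP node path here
            rop = hou.node(instance.data["instance_node"])
            if rop:
                rop.setSelected(True)
```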
<details>
<summary>Remove new lines from the delivery template string <a href="https://github.com/ynput/OpenPype/pull/5235">#5235</a></summary>
If the delivery template has a newline at the end, say it was copied from a text editor, the delivery process will fail with an `OSError` due to an incorrect destination path. To avoid that, `rstrip()` was added to the `delivery_path` processing (illustrated below).
___
</details>
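The gist of the fix in one line (the template path is illustrative):

```python
# A template copied from a text editor may carry a trailing newline,
# which ends up inside the destination path and raises OSError:
delivery_path = "{root}/delivery/{asset}/v{version:0>3}\n"
delivery_path = delivery_path.rstrip()  # strip trailing whitespace/newlines
```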
<details>
<summary>Houdini: better selection on pointcache creation <a href="https://github.com/ynput/OpenPype/pull/5250">#5250</a></summary>
Houdini allows an `ObjNode` path as the `sop_path` in the `ROP`, unlike OP/Ayon, which require `sop_path` to be set to a SOP node path explicitly. In this code, better selection logic is used to filter out selections that are invalid from the OP/Ayon point of view. Valid selections are:
- a `SopNode` that has a parent of type `geo` or `subnet`
- an `ObjNode` of type `geo` that has
  - a `SopNode` of type `output`
  - a `SopNode` with its render flag on (if there is no `SopNode` of type `output`)

This effectively filters out:
- empty `ObjNode`s
- `ObjNode`s of other types, such as `cam` and `dopnet`
- `SopNode`s whose parents are of other types, such as `cam` and `sop solver`
___
</details>
<details>
<summary>Update scene inventory even if any errors occurred during update <a href="https://github.com/ynput/OpenPype/pull/5252">#5252</a></summary>
When selecting many items in the scene inventory to update versions, the updating stops as soon as one item errors. Before this PR, however, the scene inventory would also NOT refresh, making it look like nothing happened. The logic is also implemented as a method to allow some code deduplication. A sketch of the pattern follows below.
___
</details>
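A hedged sketch of the pattern as a view method; the method and attribute names are illustrative rather than the actual view code:

```python
from openpype.pipeline.load import update_container

def _update_containers(self, items, version):
    """Update containers, refreshing the view even when some updates fail."""
    errors = []
    for item in items:
        try:
            update_container(item, version)
        except Exception as exc:
            # Collect the failure instead of aborting the whole loop
            errors.append((item, exc))
    # Refresh regardless of failures so the UI reflects what did update
    self.refresh()
    if errors:
        self.log.warning("%s container update(s) failed", len(errors))
```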
### **🐛 Bug fixes**
<details>
<summary>Maya: Convert frame values to integers <a href="https://github.com/ynput/OpenPype/pull/5188">#5188</a></summary>
Convert frame values to integers.
___
</details>
<details>
<summary>Maya: fix the register_event_callback correctly collecting workfile save after <a href="https://github.com/ynput/OpenPype/pull/5214">#5214</a></summary>
Fixes a bug where `register_event_callback` could not collect the "workfile_save_after" action for the lock-file action.
___
</details>
<details>
<summary>Maya: aligning default settings to distributed aces 1.2 config <a href="https://github.com/ynput/OpenPype/pull/5233">#5233</a></summary>
Maya colorspace settings defaults are now aligned with our distributed ACES 1.2 config file set in the global colorspace configs.
___
</details>
<details>
<summary>RepairAction and SelectInvalidAction filter instances failed on the exact plugin <a href="https://github.com/ynput/OpenPype/pull/5240">#5240</a></summary>
RepairAction and SelectInvalidAction now filter to instances that failed on the exact plugin, not on "any failure" (see the sketch below).
___
</details>
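pyblish records one result dict per processed plug-in in `context.data["results"]`, so the filtering can be sketched like this (the function name is illustrative):

```python
def get_instances_failed_on_plugin(context, plugin):
    """Return instances whose error came from this exact plug-in (sketch)."""
    instances = []
    for result in context.data.get("results", []):
        if result["plugin"] is not plugin:
            continue  # failed on a different plug-in, skip
        if result["error"] is not None and result["instance"] is not None:
            instances.append(result["instance"])
    return instances
```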
<details>
<summary>Maya: Bugfix look update nodes by id with non-unique shape names (query with `fullPath`) <a href="https://github.com/ynput/OpenPype/pull/5257">#5257</a></summary>
Fixes a bug where updating attributes on nodes with an assigned shader failed if a shape name existed more than once in the scene, because the `cmds.listRelatives` call was not done with the `fullPath=True` flag. Original error:
```python
# Traceback (most recent call last):
# File "E:\openpype\OpenPype\openpype\tools\sceneinventory\view.py", line 264, in <lambda>
# lambda: self._show_version_dialog(items))
# File "E:\openpype\OpenPype\openpype\tools\sceneinventory\view.py", line 722, in _show_version_dialog
# self._update_containers(items, version)
# File "E:\openpype\OpenPype\openpype\tools\sceneinventory\view.py", line 849, in _update_containers
# update_container(item, item_version)
# File "E:\openpype\OpenPype\openpype\pipeline\load\utils.py", line 502, in update_container
# return loader.update(container, new_representation)
# File "E:\openpype\OpenPype\openpype\hosts\maya\plugins\load\load_look.py", line 119, in update
# nodes_by_id[lib.get_id(n)].append(n)
# File "E:\openpype\OpenPype\openpype\hosts\maya\api\lib.py", line 1420, in get_id
# sel.add(node)
```
___
</details>
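The core of the fix, shown in isolation; the node names are illustrative:

```python
from maya import cmds

# Short names are ambiguous when a shape name exists more than once:
cmds.listRelatives("pCube1", shapes=True)
# -> ['pCubeShape1']  (which 'pCubeShape1'?)

# Full DAG paths stay unique even with clashing names:
cmds.listRelatives("pCube1", shapes=True, fullPath=True)
# -> ['|pCube1|pCubeShape1']
```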
<details>
<summary>Nuke: Create nodes with inpanel=False <a href="https://github.com/ynput/OpenPype/pull/5051">#5051</a></summary>
This PR removes the annoyance of the UI changing focus to the Properties window, only for the property panel of the newly created node to disappear again. Instead of using `node.hideControlPanel()`, the panel is concealed during the creation of the node, which does not change the focus of the current window.
___
</details>
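The change in a nutshell:

```python
import nuke

# Before: the Properties panel opens (stealing focus), then is hidden again
node = nuke.createNode("Blur")
node.hideControlPanel()

# After: never open the panel in the first place; focus is untouched
node = nuke.createNode("Blur", inpanel=False)
```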
<details>
<summary>Fix the reset frame range not setting up the right timeline in Max <a href="https://github.com/ynput/OpenPype/pull/5187">#5187</a></summary>
Resolve #5181
___
</details>
<details>
<summary>Resolve: after-launch automation fixes <a href="https://github.com/ynput/OpenPype/pull/5193">#5193</a></summary>
The workfile is now correctly created and aligned with the actual project. The launching mechanism is also fixed, so even if no workfile has been saved yet, the OpenPype menu opens automatically.
___
</details>
<details>
<summary>General: Revert backward incompatible change of path to template to multiplatform <a href="https://github.com/ynput/OpenPype/pull/5197">#5197</a></summary>
Platform independence is still handled by use of `work[root]` (or any other root that is accessible across platforms).
___
</details>
<details>
<summary>Nuke: root set format updating in node graph <a href="https://github.com/ynput/OpenPype/pull/5198">#5198</a></summary>
The Nuke root node needs some of its values reset so that knobs update in the node graph. This works the same way as when a user changes the frame number, causing expressions to update their values in knobs.
___
</details>
<details>
<summary>Hiero: fixing otio current project and cosmetics <a href="https://github.com/ynput/OpenPype/pull/5200">#5200</a></summary>
OTIO was not returning the correct current project once an additional Untitled project was open in the project manager stack.
___
</details>
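A sketch of the idea, assuming the transient project shows up under the name "Untitled"; the PR's actual filtering criterion may differ:

```python
import hiero.core

def get_current_project():
    """Return the last real user project, skipping transient ones (sketch)."""
    projects = [
        project for project in hiero.core.projects()
        if project.name() != "Untitled"
    ]
    return projects[-1] if projects else None
```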
<details>
<summary>Max: Publisher instances don't hold their enabled/disabled states when Publisher is reopened <a href="https://github.com/ynput/OpenPype/pull/5202">#5202</a></summary>
Resolves #5183, a general MaxScript-to-Python conversion issue (e.g. bool conversion: `true` in MaxScript vs `True` in Python). Also resolves the ValueError when changing the subset to publish in the list view menu.
___
</details>
<details>
<summary>Burnins: Filter script is defined only for video streams <a href="https://github.com/ynput/OpenPype/pull/5205">#5205</a></summary>
Burnins are working for inputs with audio.
___
</details>
<details>
<summary>Colorspace lib fix compatible python version comparison <a href="https://github.com/ynput/OpenPype/pull/5212">#5212</a></summary>
Fix python version comparison.
___
</details>
<details>
<summary>Houdini: Fix `get_color_management_preferences` <a href="https://github.com/ynput/OpenPype/pull/5217">#5217</a></summary>
Fix the issue described here where the logic for retrieving the current OCIO display and view was incorrectly trying to apply a regex to it.
___
</details>
<details>
<summary>Houdini: Redshift ROP image format bug <a href="https://github.com/ynput/OpenPype/pull/5218">#5218</a></summary>
Problem:
The "RS_outputFileFormat" parm value was missing, and `image_format_enum` listed more image formats than the Redshift ROP supports.
Fix:
1) Removed the unsupported formats from `image_format_enum`.
2) Set the selected format value on `RS_outputFileFormat`.
___
</details>
<details>
<summary>Colorspace: check PyOpenColorIO rather than python version <a href="https://github.com/ynput/OpenPype/pull/5223">#5223</a></summary>
Fixes the previously merged PR (https://github.com/ynput/OpenPype/pull/5212) and applies a better way to check compatibility with the PyOpenColorIO Python API.
___
</details>
<details>
<summary>Validate delivery action representations status <a href="https://github.com/ynput/OpenPype/pull/5228">#5228</a></summary>
- disable delivery button if no representations checked
- fix macos combobox layout
- add error message if no delivery templates found
___
</details>
<details>
<summary> Houdini: Add geometry check for pointcache family <a href="https://github.com/ynput/OpenPype/pull/5230">#5230</a></summary>
When `sop_path` on the ABC ROP node points to a non-`SopNode`, the validators `validate_abc_primitive_to_detail.py` and `validate_primitive_hierarchy_paths.py` error and crash when this line is executed: `geo = output_node.geometryAtFrame(frame)`
___
</details>
<details>
<summary>Houdini: Add geometry check for VDB family <a href="https://github.com/ynput/OpenPype/pull/5232">#5232</a></summary>
When `sop_path` on the Geometry ROP node points to a non-`SopNode`, the validator `validate_vdb_output_node.py` errors and crashes when this line is executed: `sop_node.geometryAtFrame(frame)`. A sketch of the guard both validators need follows below.
___
</details>
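Both fixes come down to the same guard before calling `geometryAtFrame()`, sketched here; how the ROP node is obtained and the error type raised are assumptions:

```python
import hou

def get_output_geometry(rop_node, frame):
    """Return the ROP's output geometry, validating the node type first."""
    output_node = hou.node(rop_node.parm("sop_path").eval())
    # geometryAtFrame() only exists on SOP nodes; fail with a clear
    # message instead of crashing inside the validator.
    if not isinstance(output_node, hou.SopNode):
        raise RuntimeError(
            "SOP Path '{}' does not point to a SOP node".format(
                rop_node.parm("sop_path").eval()))
    return output_node.geometryAtFrame(frame)
```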
<details>
<summary>Substance Painter: Include the setting only in publish tab <a href="https://github.com/ynput/OpenPype/pull/5234">#5234</a></summary>
Instead of having two settings in both the create and publish tabs, there is a single setting in the publish tab for users to set up the parameters. Resolves #5172.
___
</details>
<details>
<summary>Maya: Fix collecting arnold prefix when none <a href="https://github.com/ynput/OpenPype/pull/5243">#5243</a></summary>
When no prefix is specified in render settings, the renderlayer collector would error.
___
</details>
<details>
<summary>Deadline: OPENPYPE_VERSION should only be added when running from build <a href="https://github.com/ynput/OpenPype/pull/5244">#5244</a></summary>
When running from source, the environment variable `OPENPYPE_VERSION` should not be added. This is a bugfix for feature #4489; a sketch of the guard follows below.
___
</details>
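A sketch of the guard; `is_running_from_build` is an existing `openpype.lib` helper, while `job_environment` is illustrative:

```python
import os
from openpype.lib import is_running_from_build

if is_running_from_build():
    # Pin the version for the farm job only when running from a build;
    # a source checkout has no installed version to pin to.
    # `job_environment` stands in for the Deadline job's env dict.
    job_environment["OPENPYPE_VERSION"] = os.environ["OPENPYPE_VERSION"]
```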
<details>
<summary>Fix no prompt for "unsaved changes" showing when opening workfile in Houdini <a href="https://github.com/ynput/OpenPype/pull/5246">#5246</a></summary>
Fix no prompt for "unsaved changes" showing when opening workfile in Houdini.
___
</details>
<details>
<summary>Fix no prompt for "unsaved changes" showing when opening workfile in Substance Painter <a href="https://github.com/ynput/OpenPype/pull/5248">#5248</a></summary>
Fix no prompt for "unsaved changes" showing when opening workfile in Substance Painter.
___
</details>
<details>
<summary>General: add the os library before os.environ.get <a href="https://github.com/ynput/OpenPype/pull/5249">#5249</a></summary>
Adds the `os` import to `creator_plugins.py`, needed for the `os.environ.get` call on line 667.
___
</details>
<details>
<summary>Maya: Fix set_attribute for enum attributes <a href="https://github.com/ynput/OpenPype/pull/5261">#5261</a></summary>
Fix for #5260
___
</details>
<details>
<summary>Unreal: Move Qt imports away from module init <a href="https://github.com/ynput/OpenPype/pull/5268">#5268</a></summary>
Importing `Window` creates errors in headless mode.
```
*** WRN: >>> { ModulesLoader }: [ FAILED to import host folder unreal ]
=============================
No Qt bindings could be found
=============================
Traceback (most recent call last):
File "C:\Users\tokejepsen\OpenPype\.venv\lib\site-packages\qtpy\__init__.py", line 252, in <module>
from PySide6 import __version__ as PYSIDE_VERSION # analysis:ignore
ModuleNotFoundError: No module named 'PySide6'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "C:\Users\tokejepsen\OpenPype\openpype\modules\base.py", line 385, in _load_modules
default_module = __import__(
File "C:\Users\tokejepsen\OpenPype\openpype\hosts\unreal\__init__.py", line 1, in <module>
from .addon import UnrealAddon
File "C:\Users\tokejepsen\OpenPype\openpype\hosts\unreal\addon.py", line 4, in <module>
from openpype.widgets.message_window import Window
File "C:\Users\tokejepsen\OpenPype\openpype\widgets\__init__.py", line 1, in <module>
from .password_dialog import PasswordDialog
File "C:\Users\tokejepsen\OpenPype\openpype\widgets\password_dialog.py", line 1, in <module>
from qtpy import QtWidgets, QtCore, QtGui
File "C:\Users\tokejepsen\OpenPype\.venv\lib\site-packages\qtpy\__init__.py", line 259, in <module>
raise QtBindingsNotFoundError()
qtpy.QtBindingsNotFoundError: No Qt bindings could be found
```
___
</details>
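The fix pattern, sketched: keep module import time Qt-free and defer Qt-dependent imports into the function that needs them (the `Window` call signature here is illustrative):

```python
# Before (import at module level crashes headless runs without Qt bindings):
# from openpype.widgets.message_window import Window

def show_message(title, message):
    # After: deferred import, only evaluated when a UI is actually requested
    from openpype.widgets.message_window import Window
    Window(parent=None, title=title, message=message)
```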
### **🔀 Refactored code**
<details>
<summary>Maya: Minor refactoring and code cleanup <a href="https://github.com/ynput/OpenPype/pull/5226">#5226</a></summary>
Some small cleanup and refactoring of logic: removing old comments, unused imports and some minor optimization. Also removed the prints of the loader names of each container in the scene in `fix_incompatible_containers`, plus optimized it by using a `set` defined only once. Moved some UI-related code/tweaks to run `on_init` only if not in headless mode. Removed an empty `obj.py` file. Each commit message describes why the change was made.
___
</details>
### **Merged pull requests**
<details>
<summary>Bug: Template builder fails when loading data without outliner representation <a href="https://github.com/ynput/OpenPype/pull/5222">#5222</a></summary>
Adds assertion handling for the case where the container does not have a representation in the outliner.
___
</details>
<details>
<summary>AfterEffects - add container check validator to AE settings <a href="https://github.com/ynput/OpenPype/pull/5203">#5203</a></summary>
Adds a check that the scene contains only the latest versions of loaded containers.
___
</details>
## [3.15.11](https://github.com/ynput/OpenPype/tree/3.15.11)

View file

@ -1,7 +1,6 @@
# -*- coding: utf-8 -*-
"""Creator plugin for creating pointcache alembics."""
from openpype.hosts.houdini.api import plugin
from openpype.pipeline import CreatedInstance
import hou
@ -14,15 +13,13 @@ class CreatePointCache(plugin.HoudiniCreator):
icon = "gears"
def create(self, subset_name, instance_data, pre_create_data):
import hou
instance_data.pop("active", None)
instance_data.update({"node_type": "alembic"})
instance = super(CreatePointCache, self).create(
subset_name,
instance_data,
pre_create_data) # type: CreatedInstance
pre_create_data)
instance_node = hou.node(instance.get("instance_node"))
parms = {
@ -37,13 +34,44 @@ class CreatePointCache(plugin.HoudiniCreator):
}
if self.selected_nodes:
parms["sop_path"] = self.selected_nodes[0].path()
selected_node = self.selected_nodes[0]
# try to find output node
for child in self.selected_nodes[0].children():
if child.type().name() == "output":
parms["sop_path"] = child.path()
break
# Although Houdini allows an ObjNode path on `sop_path` for the
# ROP node, we prefer it set to the SopNode path explicitly
# Allow sop level paths (e.g. /obj/geo1/box1)
if isinstance(selected_node, hou.SopNode):
parms["sop_path"] = selected_node.path()
self.log.debug(
"Valid SopNode selection, 'SOP Path' in ROP will be set to '%s'."
% selected_node.path()
)
# Allow object level paths to Geometry nodes (e.g. /obj/geo1)
# but do not allow other object level nodes types like cameras, etc.
elif isinstance(selected_node, hou.ObjNode) and \
selected_node.type().name() in ["geo"]:
# get the output node with the minimum
# 'outputidx' or the node with display flag
sop_path = self.get_obj_output(selected_node)
if sop_path:
parms["sop_path"] = sop_path.path()
self.log.debug(
"Valid ObjNode selection, 'SOP Path' in ROP will be set to "
"the child path '%s'."
% sop_path.path()
)
if not parms.get("sop_path", None):
self.log.debug(
"Selection isn't valid. 'SOP Path' in ROP will be empty."
)
else:
self.log.debug(
"No Selection. 'SOP Path' in ROP will be empty."
)
instance_node.setParms(parms)
instance_node.parm("trange").set(1)
@ -57,3 +85,23 @@ class CreatePointCache(plugin.HoudiniCreator):
hou.ropNodeTypeCategory(),
hou.sopNodeTypeCategory()
]
def get_obj_output(self, obj_node):
"""Find output node with the smallest 'outputidx'."""
outputs = obj_node.subnetOutputs()
# if obj_node is empty
if not outputs:
return
# if obj_node has one output child, whether it's a
# sop output node or a node with the render flag
elif len(outputs) == 1:
return outputs[0]
# if there is more than one, the node has multiple output nodes;
# return the one with the minimum 'outputidx'
else:
return min(outputs,
key=lambda node: node.evalParm('outputidx'))

View file

@ -56,7 +56,7 @@ class ExtractAlembic(publish.Extractor):
container = instance.data["instance_node"]
self.log.info("Extracting pointcache ...")
self.log.debug("Extracting pointcache ...")
parent_dir = self.staging_dir(instance)
file_name = "{name}.abc".format(**instance.data)

View file

@ -32,13 +32,11 @@ from openpype.pipeline import (
load_container,
registered_host,
)
from openpype.pipeline.create import (
legacy_create,
get_legacy_creator_by_name,
)
from openpype.lib import NumberDef
from openpype.pipeline.context_tools import get_current_project_asset
from openpype.pipeline.create import CreateContext
from openpype.pipeline.context_tools import (
get_current_asset_name,
get_current_project_asset,
get_current_project_name,
get_current_task_name
)
@ -122,16 +120,14 @@ FLOAT_FPS = {23.98, 23.976, 29.97, 47.952, 59.94}
RENDERLIKE_INSTANCE_FAMILIES = ["rendering", "vrayscene"]
DISPLAY_LIGHTS_VALUES = [
"project_settings", "default", "all", "selected", "flat", "none"
]
DISPLAY_LIGHTS_LABELS = [
"Use Project Settings",
"Default Lighting",
"All Lights",
"Selected Lights",
"Flat Lighting",
"No Lights"
DISPLAY_LIGHTS_ENUM = [
{"label": "Use Project Settings", "value": "project_settings"},
{"label": "Default Lighting", "value": "default"},
{"label": "All Lights", "value": "all"},
{"label": "Selected Lights", "value": "selected"},
{"label": "Flat Lighting", "value": "flat"},
{"label": "No Lights", "value": "none"}
]
@ -343,8 +339,8 @@ def pairwise(iterable):
return zip(a, a)
def collect_animation_data(fps=False):
"""Get the basic animation data
def collect_animation_defs(fps=False):
"""Get the basic animation attribute defintions for the publisher.
Returns:
List[NumberDef]: Animation attribute definitions.
@ -363,17 +359,42 @@ def collect_animation_data(fps=False):
handle_end = frame_end_handle - frame_end
# build attributes
data = OrderedDict()
data["frameStart"] = frame_start
data["frameEnd"] = frame_end
data["handleStart"] = handle_start
data["handleEnd"] = handle_end
data["step"] = 1.0
defs = [
NumberDef("frameStart",
label="Frame Start",
default=frame_start,
decimals=0),
NumberDef("frameEnd",
label="Frame End",
default=frame_end,
decimals=0),
NumberDef("handleStart",
label="Handle Start",
default=handle_start,
decimals=0),
NumberDef("handleEnd",
label="Handle End",
default=handle_end,
decimals=0),
NumberDef("step",
label="Step size",
tooltip="A smaller step size means more samples and larger "
"output files.\n"
"A 1.0 step size is a single sample every frame.\n"
"A 0.5 step size is two samples per frame.\n"
"A 0.2 step size is five samples per frame.",
default=1.0,
decimals=3),
]
if fps:
data["fps"] = mel.eval('currentTimeUnitToFPS()')
current_fps = mel.eval('currentTimeUnitToFPS()')
fps_def = NumberDef(
"fps", label="FPS", default=current_fps, decimals=5
)
defs.append(fps_def)
return data
return defs
def imprint(node, data):
@ -459,10 +480,10 @@ def lsattrs(attrs):
attrs (dict): Name and value pairs of expected matches
Example:
>> # Return nodes with an `age` of five.
>> lsattr({"age": "five"})
>> # Return nodes with both `age` and `color` of five and blue.
>> lsattr({"age": "five", "color": "blue"})
>>> # Return nodes with an `age` of five.
>>> lsattrs({"age": "five"})
>>> # Return nodes with both `age` and `color` of five and blue.
>>> lsattrs({"age": "five", "color": "blue"})
Return:
list: matching nodes.
@ -1522,7 +1543,15 @@ def set_attribute(attribute, value, node):
cmds.addAttr(node, longName=attribute, **kwargs)
node_attr = "{}.{}".format(node, attribute)
if "dataType" in kwargs:
enum_type = cmds.attributeQuery(attribute, node=node, enum=True)
if enum_type and value_type == "str":
enum_string_values = cmds.attributeQuery(
attribute, node=node, listEnum=True
)[0].split(":")
cmds.setAttr(
"{}.{}".format(node, attribute), enum_string_values.index(value)
)
elif "dataType" in kwargs:
attr_type = kwargs["dataType"]
cmds.setAttr(node_attr, value, type=attr_type)
else:
@ -4078,12 +4107,10 @@ def create_rig_animation_instance(
)
assert roots, "No root nodes in rig, this is a bug."
asset = legacy_io.Session["AVALON_ASSET"]
dependency = str(context["representation"]["_id"])
custom_subset = options.get("animationSubsetName")
if custom_subset:
formatting_data = {
# TODO remove 'asset_type' and replace 'asset_name' with 'asset'
"asset_name": context['asset']['name'],
"asset_type": context['asset']['type'],
"subset": context['subset']['name'],
@ -4101,14 +4128,17 @@ def create_rig_animation_instance(
if log:
log.info("Creating subset: {}".format(namespace))
# Fill creator identifier
creator_identifier = "io.openpype.creators.maya.animation"
host = registered_host()
create_context = CreateContext(host)
# Create the animation instance
creator_plugin = get_legacy_creator_by_name("CreateAnimation")
with maintained_selection():
cmds.select([output, controls] + roots, noExpand=True)
legacy_create(
creator_plugin,
name=namespace,
asset=asset,
options={"useSelection": True},
data={"dependencies": dependency}
create_context.create(
creator_identifier=creator_identifier,
variant=namespace,
pre_create_data={"use_selection": True}
)

View file

@ -177,7 +177,7 @@ def get(layer, render_instance=None):
}.get(renderer_name.lower(), None)
if renderer is None:
raise UnsupportedRendererException(
"unsupported {}".format(renderer_name)
"Unsupported renderer: {}".format(renderer_name)
)
return renderer(layer, render_instance)

View file

@ -66,10 +66,12 @@ def install():
cmds.menuItem(divider=True)
# Create default items
cmds.menuItem(
"Create...",
command=lambda *args: host_tools.show_creator(parent=parent_widget)
command=lambda *args: host_tools.show_publisher(
parent=parent_widget,
tab="create"
)
)
cmds.menuItem(
@ -82,8 +84,9 @@ def install():
cmds.menuItem(
"Publish...",
command=lambda *args: host_tools.show_publish(
parent=parent_widget
command=lambda *args: host_tools.show_publisher(
parent=parent_widget,
tab="publish"
),
image=pyblish_icon
)

View file

@ -1,3 +1,5 @@
import json
import base64
import os
import errno
import logging
@ -14,6 +16,7 @@ from openpype.host import (
HostBase,
IWorkfileHost,
ILoadHost,
IPublishHost,
HostDirmap,
)
from openpype.tools.utils import host_tools
@ -64,7 +67,7 @@ INVENTORY_PATH = os.path.join(PLUGINS_DIR, "inventory")
AVALON_CONTAINERS = ":AVALON_CONTAINERS"
class MayaHost(HostBase, IWorkfileHost, ILoadHost):
class MayaHost(HostBase, IWorkfileHost, ILoadHost, IPublishHost):
name = "maya"
def __init__(self):
@ -150,6 +153,20 @@ class MayaHost(HostBase, IWorkfileHost, ILoadHost):
with lib.maintained_selection():
yield
def get_context_data(self):
data = cmds.fileInfo("OpenPypeContext", query=True)
if not data:
return {}
data = data[0] # Maya seems to return a list
decoded = base64.b64decode(data).decode("utf-8")
return json.loads(decoded)
def update_context_data(self, data, changes):
json_str = json.dumps(data)
encoded = base64.b64encode(json_str.encode("utf-8"))
return cmds.fileInfo("OpenPypeContext", encoded)
def _register_callbacks(self):
for handler, event in self._op_events.copy().items():
if event is None:

View file

@ -1,29 +1,39 @@
import json
import os
from maya import cmds
from abc import ABCMeta
import qargparse
import six
from maya import cmds
from maya.app.renderSetup.model import renderSetup
from openpype.lib import Logger
from openpype.lib import BoolDef, Logger
from openpype.pipeline import AVALON_CONTAINER_ID, Anatomy, CreatedInstance
from openpype.pipeline import Creator as NewCreator
from openpype.pipeline import (
LegacyCreator,
LoaderPlugin,
get_representation_path,
AVALON_CONTAINER_ID,
Anatomy,
)
CreatorError, LegacyCreator, LoaderPlugin, get_representation_path,
legacy_io)
from openpype.pipeline.load import LoadError
from openpype.settings import get_project_settings
from .pipeline import containerise
from . import lib
from . import lib
from .lib import imprint, read
from .pipeline import containerise
log = Logger.get_logger()
def _get_attr(node, attr, default=None):
"""Helper to get attribute which allows attribute to not exist."""
if not cmds.attributeQuery(attr, node=node, exists=True):
return default
return cmds.getAttr("{}.{}".format(node, attr))
# Backwards compatibility: these functions have been moved to lib.
def get_reference_node(*args, **kwargs):
"""
"""Get the reference node from the container members
Deprecated:
This function was moved and will be removed in 3.16.x.
"""
@ -60,6 +70,379 @@ class Creator(LegacyCreator):
return instance
@six.add_metaclass(ABCMeta)
class MayaCreatorBase(object):
@staticmethod
def cache_subsets(shared_data):
"""Cache instances for Creators to shared data.
Create `maya_cached_subsets` key when needed in shared data and
fill it with all collected instances from the scene under its
respective creator identifiers.
If legacy instances are detected in the scene, create
`maya_cached_legacy_subsets` there and fill it with
all legacy subsets under family as a key.
Args:
shared_data (Dict[str, Any]): Shared data.
Returns:
Dict[str, Any]: Shared data dictionary.
"""
if shared_data.get("maya_cached_subsets") is None:
cache = dict()
cache_legacy = dict()
for node in cmds.ls(type="objectSet"):
if _get_attr(node, attr="id") != "pyblish.avalon.instance":
continue
creator_id = _get_attr(node, attr="creator_identifier")
if creator_id is not None:
# creator instance
cache.setdefault(creator_id, []).append(node)
else:
# legacy instance
family = _get_attr(node, attr="family")
if family is None:
# must be a broken instance
continue
cache_legacy.setdefault(family, []).append(node)
shared_data["maya_cached_subsets"] = cache
shared_data["maya_cached_legacy_subsets"] = cache_legacy
return shared_data
def imprint_instance_node(self, node, data):
# We never store the instance_node as value on the node since
# it's the node name itself
data.pop("instance_node", None)
# We store creator attributes at the root level and assume they
# will not clash in names with `subset`, `task`, etc. and other
# default names. This is just so these attributes in many cases
# are still editable in the maya UI by artists.
# pop to move to end of dict to sort attributes last on the node
creator_attributes = data.pop("creator_attributes", {})
data.update(creator_attributes)
# We know the "publish_attributes" will be complex data of
# settings per plugins, we'll store this as a flattened json structure
# pop to move to end of dict to sort attributes last on the node
data["publish_attributes"] = json.dumps(
data.pop("publish_attributes", {})
)
# Since we flattened the data structure for creator attributes we want
# to correctly detect which flattened attributes should end back in the
# creator attributes when reading the data from the node, so we store
# the relevant keys as a string
data["__creator_attributes_keys"] = ",".join(creator_attributes.keys())
# Kill any existing attributes just so we can imprint cleanly again
for attr in data.keys():
if cmds.attributeQuery(attr, node=node, exists=True):
cmds.deleteAttr("{}.{}".format(node, attr))
return imprint(node, data)
def read_instance_node(self, node):
node_data = read(node)
# Never care about a cbId attribute on the object set
# being read as 'data'
node_data.pop("cbId", None)
# Move the relevant attributes into "creator_attributes" that
# we flattened originally
node_data["creator_attributes"] = {}
creator_attribute_keys = node_data.pop("__creator_attributes_keys",
"").split(",")
for key in creator_attribute_keys:
if key in node_data:
node_data["creator_attributes"][key] = node_data.pop(key)
publish_attributes = node_data.get("publish_attributes")
if publish_attributes:
node_data["publish_attributes"] = json.loads(publish_attributes)
# Explicitly re-parse the node name
node_data["instance_node"] = node
return node_data
@six.add_metaclass(ABCMeta)
class MayaCreator(NewCreator, MayaCreatorBase):
def create(self, subset_name, instance_data, pre_create_data):
members = list()
if pre_create_data.get("use_selection"):
members = cmds.ls(selection=True)
with lib.undo_chunk():
instance_node = cmds.sets(members, name=subset_name)
instance_data["instance_node"] = instance_node
instance = CreatedInstance(
self.family,
subset_name,
instance_data,
self)
self._add_instance_to_context(instance)
self.imprint_instance_node(instance_node,
data=instance.data_to_store())
return instance
def collect_instances(self):
self.cache_subsets(self.collection_shared_data)
cached_subsets = self.collection_shared_data["maya_cached_subsets"]
for node in cached_subsets.get(self.identifier, []):
node_data = self.read_instance_node(node)
created_instance = CreatedInstance.from_existing(node_data, self)
self._add_instance_to_context(created_instance)
def update_instances(self, update_list):
for created_inst, _changes in update_list:
data = created_inst.data_to_store()
node = data.get("instance_node")
self.imprint_instance_node(node, data)
def remove_instances(self, instances):
"""Remove specified instance from the scene.
This is only removing `id` parameter so instance is no longer
instance, because it might contain valuable data for artist.
"""
for instance in instances:
node = instance.data.get("instance_node")
if node:
cmds.delete(node)
self._remove_instance_from_context(instance)
def get_pre_create_attr_defs(self):
return [
BoolDef("use_selection",
label="Use selection",
default=True)
]
def ensure_namespace(namespace):
"""Make sure the namespace exists.
Args:
namespace (str): The preferred namespace name.
Returns:
str: The generated or existing namespace
"""
exists = cmds.namespace(exists=namespace)
if exists:
return namespace
else:
return cmds.namespace(add=namespace)
class RenderlayerCreator(NewCreator, MayaCreatorBase):
"""Creator which creates an instance per renderlayer in the workfile.
Creates and manages a renderlayer subset per renderLayer in the workfile.
This generates a singleton node in the scene which, if it exists, tells the
Creator to collect Maya rendersetup renderlayers as individual instances.
As such, triggering create doesn't actually create the instance node per
layer but only the node which tells the Creator it may now collect
an instance per renderlayer.
"""
# These are required to be overridden in subclass
singleton_node_name = ""
# These are optional to be overridden in subclass
layer_instance_prefix = None
def _get_singleton_node(self, return_all=False):
nodes = lib.lsattr("pre_creator_identifier", self.identifier)
if nodes:
return nodes if return_all else nodes[0]
def create(self, subset_name, instance_data, pre_create_data):
# A Renderlayer is never explicitly created using the create method.
# Instead, renderlayers from the scene are collected. Thus "create"
# would only ever be called to say, 'hey, please refresh collect'
self.create_singleton_node()
# if no render layers are present, create default one with
# asterisk selector
rs = renderSetup.instance()
if not rs.getRenderLayers():
render_layer = rs.createRenderLayer("Main")
collection = render_layer.createCollection("defaultCollection")
collection.getSelector().setPattern('*')
# By RenderLayerCreator.create we make it so that the renderlayer
# instances directly appear even though it just collects scene
# renderlayers. This doesn't actually 'create' any scene contents.
self.collect_instances()
def create_singleton_node(self):
if self._get_singleton_node():
raise CreatorError("A Render instance already exists - only "
"one can be configured.")
with lib.undo_chunk():
node = cmds.sets(empty=True, name=self.singleton_node_name)
lib.imprint(node, data={
"pre_creator_identifier": self.identifier
})
return node
def collect_instances(self):
# We only collect if the global render instance exists
if not self._get_singleton_node():
return
rs = renderSetup.instance()
layers = rs.getRenderLayers()
for layer in layers:
layer_instance_node = self.find_layer_instance_node(layer)
if layer_instance_node:
data = self.read_instance_node(layer_instance_node)
instance = CreatedInstance.from_existing(data, creator=self)
else:
# No existing scene instance node for this layer. Note that
# this instance will not have the `instance_node` data yet
# until it's been saved/persisted at least once.
# TODO: Correctly define the subset name using templates
prefix = self.layer_instance_prefix or self.family
subset_name = "{}{}".format(prefix, layer.name())
instance_data = {
"asset": legacy_io.Session["AVALON_ASSET"],
"task": legacy_io.Session["AVALON_TASK"],
"variant": layer.name(),
}
instance = CreatedInstance(
family=self.family,
subset_name=subset_name,
data=instance_data,
creator=self
)
instance.transient_data["layer"] = layer
self._add_instance_to_context(instance)
def find_layer_instance_node(self, layer):
connected_sets = cmds.listConnections(
"{}.message".format(layer.name()),
source=False,
destination=True,
type="objectSet"
) or []
for node in connected_sets:
if not cmds.attributeQuery("creator_identifier",
node=node,
exists=True):
continue
creator_identifier = cmds.getAttr(node + ".creator_identifier")
if creator_identifier == self.identifier:
self.log.info(f"Found node: {node}")
return node
def _create_layer_instance_node(self, layer):
# We only collect if a CreateRender instance exists
create_render_set = self._get_singleton_node()
if not create_render_set:
raise CreatorError("Creating a renderlayer instance node is not "
"allowed if no 'CreateRender' instance exists")
namespace = "_{}".format(self.singleton_node_name)
namespace = ensure_namespace(namespace)
name = "{}:{}".format(namespace, layer.name())
render_set = cmds.sets(name=name, empty=True)
# Keep an active link with the renderlayer so we can retrieve it
# later by a physical maya connection instead of relying on the layer
# name
cmds.addAttr(render_set, longName="renderlayer", at="message")
cmds.connectAttr("{}.message".format(layer.name()),
"{}.renderlayer".format(render_set), force=True)
# Add the set to the 'CreateRender' set.
cmds.sets(render_set, forceElement=create_render_set)
return render_set
def update_instances(self, update_list):
# We only generate the persisting layer data into the scene once
# we save with the UI on e.g. validate or publish
for instance, _changes in update_list:
instance_node = instance.data.get("instance_node")
# Ensure a node exists to persist the data to
if not instance_node:
layer = instance.transient_data["layer"]
instance_node = self._create_layer_instance_node(layer)
instance.data["instance_node"] = instance_node
self.imprint_instance_node(instance_node,
data=instance.data_to_store())
def imprint_instance_node(self, node, data):
# Do not ever try to update the `renderlayer` since it'll try
# to remove the attribute and recreate it but fail to keep it a
# message attribute link. We only ever imprint that on the initial
# node creation.
# TODO: Improve how this is handled
data.pop("renderlayer", None)
data.get("creator_attributes", {}).pop("renderlayer", None)
return super(RenderlayerCreator, self).imprint_instance_node(node,
data=data)
def remove_instances(self, instances):
"""Remove specified instances from the scene.
This is only removing `id` parameter so instance is no longer
instance, because it might contain valuable data for artist.
"""
# Instead of removing the single instance or renderlayers, we remove
# the CreateRender node this creator relies on to decide whether
# it should collect anything at all.
nodes = self._get_singleton_node(return_all=True)
if nodes:
cmds.delete(nodes)
# Remove ALL the instances even if only one gets deleted
for instance in list(self.create_context.instances):
if instance.get("creator_identifier") == self.identifier:
self._remove_instance_from_context(instance)
# Remove the stored settings per renderlayer too
node = instance.data.get("instance_node")
if node and cmds.objExists(node):
cmds.delete(node)
class Loader(LoaderPlugin):
hosts = ["maya"]
@ -186,6 +569,7 @@ class ReferenceLoader(Loader):
def update(self, container, representation):
from maya import cmds
from openpype.hosts.maya.api.lib import get_container_members
node = container["objectName"]

View file

@ -0,0 +1,165 @@
from openpype.pipeline.create.creator_plugins import SubsetConvertorPlugin
from openpype.hosts.maya.api import plugin
from openpype.hosts.maya.api.lib import read
from maya import cmds
from maya.app.renderSetup.model import renderSetup
class MayaLegacyConvertor(SubsetConvertorPlugin,
plugin.MayaCreatorBase):
"""Find and convert any legacy subsets in the scene.
This Convertor will find all legacy subsets in the scene and will
transform them to the current system. Since the old subsets don't
retain any information about their original creators, the only mapping
we can do is based on their families.
Its limitation is that multiple creators can create subsets of the
same family and there is no way to tell them apart. This code should
nevertheless cover all creators that ship with OpenPype.
"""
identifier = "io.openpype.creators.maya.legacy"
# Cases where the identifier or new family doesn't correspond to the
# original family on the legacy instances
special_family_conversions = {
"rendering": "io.openpype.creators.maya.renderlayer",
}
def find_instances(self):
self.cache_subsets(self.collection_shared_data)
legacy = self.collection_shared_data.get("maya_cached_legacy_subsets")
if not legacy:
return
self.add_convertor_item("Convert legacy instances")
def convert(self):
self.remove_convertor_item()
# We can't use the collected shared data cache here, so
# we re-query it directly to convert all found.
cache = {}
self.cache_subsets(cache)
legacy = cache.get("maya_cached_legacy_subsets")
if not legacy:
return
# From all current new style manual creators find the mapping
# from family to identifier
family_to_id = {}
for identifier, creator in self.create_context.manual_creators.items():
family = getattr(creator, "family", None)
if not family:
continue
if family in family_to_id:
# We have a clash of family -> identifier. Multiple
# new style creators use the same family
self.log.warning("Clash on family->identifier: "
"{}".format(identifier))
family_to_id[family] = identifier
family_to_id.update(self.special_family_conversions)
# We also embed the current 'task' into the instance since legacy
# instances didn't store that data on the instances. The old-style
# logic was effectively live to the current task to begin with.
data = dict()
data["task"] = self.create_context.get_current_task_name()
for family, instance_nodes in legacy.items():
if family not in family_to_id:
self.log.warning(
"Unable to convert legacy instance with family '{}'"
" because there is no matching new creator's family"
"".format(family)
)
continue
creator_id = family_to_id[family]
creator = self.create_context.manual_creators[creator_id]
data["creator_identifier"] = creator_id
if isinstance(creator, plugin.RenderlayerCreator):
self._convert_per_renderlayer(instance_nodes, data, creator)
else:
self._convert_regular(instance_nodes, data)
def _convert_regular(self, instance_nodes, data):
# We only imprint the creator identifier for it to identify
# as the new style creator
for instance_node in instance_nodes:
self.imprint_instance_node(instance_node,
data=data.copy())
def _convert_per_renderlayer(self, instance_nodes, data, creator):
# Split the instance into an instance per layer
rs = renderSetup.instance()
layers = rs.getRenderLayers()
if not layers:
self.log.error(
"Can't convert legacy renderlayer instance because no existing"
" renderSetup layers exist in the scene."
)
return
creator_attribute_names = {
attr_def.key for attr_def in creator.get_instance_attr_defs()
}
for instance_node in instance_nodes:
# Ensure we have the new style singleton node generated
# TODO: Make function public
singleton_node = creator._get_singleton_node()
if singleton_node:
self.log.error(
"Can't convert legacy renderlayer instance '{}' because"
" new style instance '{}' already exists".format(
instance_node,
singleton_node
)
)
continue
creator.create_singleton_node()
# We are creating new nodes to replace the original instance
# Copy the attributes of the original instance to the new node
original_data = read(instance_node)
# The family gets converted to the new family (this is due to
# "rendering" family being converted to "renderlayer" family)
original_data["family"] = creator.family
# Convert to creator attributes when relevant
creator_attributes = {}
for key in list(original_data.keys()):
# Iterate in order of the original attributes to preserve order
# in the output creator attributes
if key in creator_attribute_names:
creator_attributes[key] = original_data.pop(key)
original_data["creator_attributes"] = creator_attributes
# Create or update an instance per renderSetup layer
for layer in layers:
layer_instance_node = creator.find_layer_instance_node(layer)
if not layer_instance_node:
# TODO: Make function public
layer_instance_node = creator._create_layer_instance_node(
layer
)
# Transfer the main attributes of the original instance
layer_data = original_data.copy()
layer_data.update(data)
self.imprint_instance_node(layer_instance_node,
data=layer_data)
# Delete the legacy instance node
cmds.delete(instance_node)

View file

@ -2,9 +2,13 @@ from openpype.hosts.maya.api import (
lib,
plugin
)
from openpype.lib import (
BoolDef,
TextDef
)
class CreateAnimation(plugin.Creator):
class CreateAnimation(plugin.MayaCreator):
"""Animation output for character rigs"""
# We hide the animation creator from the UI since the creation of it
@ -13,48 +17,71 @@ class CreateAnimation(plugin.Creator):
# Note: This setting is actually applied from project settings
enabled = False
identifier = "io.openpype.creators.maya.animation"
name = "animationDefault"
label = "Animation"
family = "animation"
icon = "male"
write_color_sets = False
write_face_sets = False
include_parent_hierarchy = False
include_user_defined_attributes = False
def __init__(self, *args, **kwargs):
super(CreateAnimation, self).__init__(*args, **kwargs)
# TODO: Would be great if we could visually hide this from the creator
# by default but do allow to generate it through code.
# create an ordered dict with the existing data first
def get_instance_attr_defs(self):
# get basic animation data : start / end / handles / steps
for key, value in lib.collect_animation_data().items():
self.data[key] = value
defs = lib.collect_animation_defs()
# Write vertex colors with the geometry.
self.data["writeColorSets"] = self.write_color_sets
self.data["writeFaceSets"] = self.write_face_sets
# Include only renderable visible shapes.
# Skips locators and empty transforms
self.data["renderableOnly"] = False
# Include only nodes that are visible at least once during the
# frame range.
self.data["visibleOnly"] = False
# Include the groups above the out_SET content
self.data["includeParentHierarchy"] = self.include_parent_hierarchy
# Default to exporting world-space
self.data["worldSpace"] = True
defs.extend([
BoolDef("writeColorSets",
label="Write vertex colors",
tooltip="Write vertex colors with the geometry",
default=self.write_color_sets),
BoolDef("writeFaceSets",
label="Write face sets",
tooltip="Write face sets with the geometry",
default=self.write_face_sets),
BoolDef("writeNormals",
label="Write normals",
tooltip="Write normals with the deforming geometry",
default=True),
BoolDef("renderableOnly",
label="Renderable Only",
tooltip="Only export renderable visible shapes",
default=False),
BoolDef("visibleOnly",
label="Visible Only",
tooltip="Only export dag objects visible during "
"frame range",
default=False),
BoolDef("includeParentHierarchy",
label="Include Parent Hierarchy",
tooltip="Whether to include parent hierarchy of nodes in "
"the publish instance",
default=self.include_parent_hierarchy),
BoolDef("worldSpace",
label="World-Space Export",
default=True),
BoolDef("includeUserDefinedAttributes",
label="Include User Defined Attributes",
default=self.include_user_defined_attributes),
TextDef("attr",
label="Custom Attributes",
default="",
placeholder="attr1, attr2"),
TextDef("attrPrefix",
label="Custom Attributes Prefix",
placeholder="prefix1, prefix2")
])
# TODO: Implement these on a Deadline plug-in instead?
"""
# Default to not send to farm.
self.data["farm"] = False
self.data["priority"] = 50
"""
# Default to write normals.
self.data["writeNormals"] = True
value = self.include_user_defined_attributes
self.data["includeUserDefinedAttributes"] = value
return defs

View file

@ -2,17 +2,20 @@ from openpype.hosts.maya.api import (
lib,
plugin
)
from maya import cmds
from openpype.lib import (
NumberDef,
BoolDef
)
class CreateArnoldSceneSource(plugin.Creator):
class CreateArnoldSceneSource(plugin.MayaCreator):
"""Arnold Scene Source"""
name = "ass"
identifier = "io.openpype.creators.maya.ass"
label = "Arnold Scene Source"
family = "ass"
icon = "cube"
expandProcedurals = False
motionBlur = True
motionBlurKeys = 2
@ -28,39 +31,71 @@ class CreateArnoldSceneSource(plugin.Creator):
maskColor_manager = False
maskOperator = False
def __init__(self, *args, **kwargs):
super(CreateArnoldSceneSource, self).__init__(*args, **kwargs)
def get_instance_attr_defs(self):
# Add animation data
self.data.update(lib.collect_animation_data())
defs = lib.collect_animation_defs()
self.data["expandProcedurals"] = self.expandProcedurals
self.data["motionBlur"] = self.motionBlur
self.data["motionBlurKeys"] = self.motionBlurKeys
self.data["motionBlurLength"] = self.motionBlurLength
defs.extend([
BoolDef("expandProcedural",
label="Expand Procedural",
default=self.expandProcedurals),
BoolDef("motionBlur",
label="Motion Blur",
default=self.motionBlur),
NumberDef("motionBlurKeys",
label="Motion Blur Keys",
decimals=0,
default=self.motionBlurKeys),
NumberDef("motionBlurLength",
label="Motion Blur Length",
decimals=3,
default=self.motionBlurLength),
# Masks
self.data["maskOptions"] = self.maskOptions
self.data["maskCamera"] = self.maskCamera
self.data["maskLight"] = self.maskLight
self.data["maskShape"] = self.maskShape
self.data["maskShader"] = self.maskShader
self.data["maskOverride"] = self.maskOverride
self.data["maskDriver"] = self.maskDriver
self.data["maskFilter"] = self.maskFilter
self.data["maskColor_manager"] = self.maskColor_manager
self.data["maskOperator"] = self.maskOperator
# Masks
BoolDef("maskOptions",
label="Export Options",
default=self.maskOptions),
BoolDef("maskCamera",
label="Export Cameras",
default=self.maskCamera),
BoolDef("maskLight",
label="Export Lights",
default=self.maskLight),
BoolDef("maskShape",
label="Export Shapes",
default=self.maskShape),
BoolDef("maskShader",
label="Export Shaders",
default=self.maskShader),
BoolDef("maskOverride",
label="Export Override Nodes",
default=self.maskOverride),
BoolDef("maskDriver",
label="Export Drivers",
default=self.maskDriver),
BoolDef("maskFilter",
label="Export Filters",
default=self.maskFilter),
BoolDef("maskOperator",
label="Export Operators",
default=self.maskOperator),
BoolDef("maskColor_manager",
label="Export Color Managers",
default=self.maskColor_manager),
])
def process(self):
instance = super(CreateArnoldSceneSource, self).process()
return defs
nodes = []
def create(self, subset_name, instance_data, pre_create_data):
if (self.options or {}).get("useSelection"):
nodes = cmds.ls(selection=True)
from maya import cmds
cmds.sets(nodes, rm=instance)
instance = super(CreateArnoldSceneSource, self).create(
subset_name, instance_data, pre_create_data
)
assContent = cmds.sets(name=instance + "_content_SET")
assProxy = cmds.sets(name=instance + "_proxy_SET", empty=True)
cmds.sets([assContent, assProxy], forceElement=instance)
instance_node = instance.get("instance_node")
content = cmds.sets(name=instance_node + "_content_SET", empty=True)
proxy = cmds.sets(name=instance_node + "_proxy_SET", empty=True)
cmds.sets([content, proxy], forceElement=instance_node)

View file

@ -1,10 +1,10 @@
from openpype.hosts.maya.api import plugin
class CreateAssembly(plugin.Creator):
class CreateAssembly(plugin.MayaCreator):
"""A grouped package of loaded content"""
name = "assembly"
identifier = "io.openpype.creators.maya.assembly"
label = "Assembly"
family = "assembly"
icon = "cubes"

View file

@ -2,33 +2,35 @@ from openpype.hosts.maya.api import (
lib,
plugin
)
from openpype.lib import BoolDef
class CreateCamera(plugin.Creator):
class CreateCamera(plugin.MayaCreator):
"""Single baked camera"""
name = "cameraMain"
identifier = "io.openpype.creators.maya.camera"
label = "Camera"
family = "camera"
icon = "video-camera"
def __init__(self, *args, **kwargs):
super(CreateCamera, self).__init__(*args, **kwargs)
def get_instance_attr_defs(self):
# get basic animation data : start / end / handles / steps
animation_data = lib.collect_animation_data()
for key, value in animation_data.items():
self.data[key] = value
defs = lib.collect_animation_defs()
# Bake to world space by default, when this is False it will also
# include the parent hierarchy in the baked results
self.data['bakeToWorldSpace'] = True
defs.extend([
BoolDef("bakeToWorldSpace",
label="Bake to World-Space",
tooltip="Bake to World-Space",
default=True),
])
return defs
class CreateCameraRig(plugin.Creator):
class CreateCameraRig(plugin.MayaCreator):
"""Complex hierarchy with camera."""
name = "camerarigMain"
identifier = "io.openpype.creators.maya.camerarig"
label = "Camera Rig"
family = "camerarig"
icon = "video-camera"

View file

@ -1,16 +1,21 @@
from openpype.hosts.maya.api import plugin
from openpype.lib import BoolDef
class CreateLayout(plugin.Creator):
class CreateLayout(plugin.MayaCreator):
"""A grouped package of loaded content"""
name = "layoutMain"
identifier = "io.openpype.creators.maya.layout"
label = "Layout"
family = "layout"
icon = "cubes"
def __init__(self, *args, **kwargs):
super(CreateLayout, self).__init__(*args, **kwargs)
# enable this when you want to
# publish group of loaded asset
self.data["groupLoadedAssets"] = False
def get_instance_attr_defs(self):
return [
BoolDef("groupLoadedAssets",
label="Group Loaded Assets",
tooltip="Enable this when you want to publish group of "
"loaded asset",
default=False)
]

View file

@ -1,29 +1,53 @@
from openpype.hosts.maya.api import (
lib,
plugin
plugin,
lib
)
from openpype.lib import (
BoolDef,
TextDef
)
class CreateLook(plugin.Creator):
class CreateLook(plugin.MayaCreator):
"""Shader connections defining shape look"""
name = "look"
identifier = "io.openpype.creators.maya.look"
label = "Look"
family = "look"
icon = "paint-brush"
make_tx = True
rs_tex = False
def __init__(self, *args, **kwargs):
super(CreateLook, self).__init__(*args, **kwargs)
def get_instance_attr_defs(self):
self.data["renderlayer"] = lib.get_current_renderlayer()
return [
# TODO: This value should actually get set on create!
TextDef("renderLayer",
# TODO: Bug: Hidden attribute's label is still shown in UI?
hidden=True,
default=lib.get_current_renderlayer(),
label="Renderlayer",
tooltip="Renderlayer to extract the look from"),
BoolDef("maketx",
label="MakeTX",
tooltip="Whether to generate .tx files for your textures",
default=self.make_tx),
BoolDef("rstex",
label="Convert textures to .rstex",
tooltip="Whether to generate Redshift .rstex files for "
"your textures",
default=self.rs_tex),
BoolDef("forceCopy",
label="Force Copy",
tooltip="Enable users to force a copy instead of hardlink."
"\nNote: On Windows copy is always forced due to "
"bugs in windows' implementation of hardlinks.",
default=False)
]
# Whether to automatically convert the textures to .tx upon publish.
self.data["maketx"] = self.make_tx
# Whether to automatically convert the textures to .rstex upon publish.
self.data["rstex"] = self.rs_tex
# Enable users to force a copy.
# - on Windows "forceCopy" is always changed to `True` because of
# Windows' implementation of hardlinks
self.data["forceCopy"] = False
def get_pre_create_attr_defs(self):
# Show same attributes on create but include use selection
defs = super(CreateLook, self).get_pre_create_attr_defs()
defs.extend(self.get_instance_attr_defs())
return defs

View file

@ -1,9 +1,10 @@
from openpype.hosts.maya.api import plugin
class CreateMayaScene(plugin.Creator):
class CreateMayaScene(plugin.MayaCreator):
"""Raw Maya Scene file export"""
identifier = "io.openpype.creators.maya.mayascene"
name = "mayaScene"
label = "Maya Scene"
family = "mayaScene"

View file

@ -1,26 +1,43 @@
from openpype.hosts.maya.api import plugin
from openpype.lib import (
BoolDef,
TextDef
)
class CreateModel(plugin.Creator):
class CreateModel(plugin.MayaCreator):
"""Polygonal static geometry"""
name = "modelMain"
identifier = "io.openpype.creators.maya.model"
label = "Model"
family = "model"
icon = "cube"
defaults = ["Main", "Proxy", "_MD", "_HD", "_LD"]
write_color_sets = False
write_face_sets = False
def __init__(self, *args, **kwargs):
super(CreateModel, self).__init__(*args, **kwargs)
# Vertex colors with the geometry
self.data["writeColorSets"] = self.write_color_sets
self.data["writeFaceSets"] = self.write_face_sets
def get_instance_attr_defs(self):
# Include attributes by attribute name or prefix
self.data["attr"] = ""
self.data["attrPrefix"] = ""
# Whether to include parent hierarchy of nodes in the instance
self.data["includeParentHierarchy"] = False
return [
BoolDef("writeColorSets",
label="Write vertex colors",
tooltip="Write vertex colors with the geometry",
default=self.write_color_sets),
BoolDef("writeFaceSets",
label="Write face sets",
tooltip="Write face sets with the geometry",
default=self.write_face_sets),
BoolDef("includeParentHierarchy",
label="Include Parent Hierarchy",
tooltip="Whether to include parent hierarchy of nodes in "
"the publish instance",
default=False),
TextDef("attr",
label="Custom Attributes",
default="",
placeholder="attr1, attr2"),
TextDef("attrPrefix",
label="Custom Attributes Prefix",
placeholder="prefix1, prefix2")
]
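
Several creators above expose `attr` and `attrPrefix` as comma-separated text fields. The consuming collector is not part of this diff; presumably it splits the values along these lines (a standalone sketch, not OpenPype's actual collector code):

```python
def split_comma_separated(value):
    """Split a comma-separated attribute list into clean names."""
    return [part.strip() for part in value.split(",") if part.strip()]


print(split_comma_separated("attr1, attr2"))  # ['attr1', 'attr2']
print(split_comma_separated(""))              # []
```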

View file

@ -1,15 +1,27 @@
from openpype.hosts.maya.api import plugin
from openpype.lib import (
BoolDef,
EnumDef
)
class CreateMultiverseLook(plugin.Creator):
class CreateMultiverseLook(plugin.MayaCreator):
"""Create Multiverse Look"""
name = "mvLook"
identifier = "io.openpype.creators.maya.mvlook"
label = "Multiverse Look"
family = "mvLook"
icon = "cubes"
def __init__(self, *args, **kwargs):
super(CreateMultiverseLook, self).__init__(*args, **kwargs)
self.data["fileFormat"] = ["usda", "usd"]
self.data["publishMipMap"] = True
def get_instance_attr_defs(self):
return [
EnumDef("fileFormat",
label="File Format",
tooltip="USD export file format",
items=["usda", "usd"],
default="usda"),
BoolDef("publishMipMap",
label="Publish MipMap",
default=True),
]

View file

@ -1,53 +1,135 @@
from openpype.hosts.maya.api import plugin, lib
from openpype.lib import (
BoolDef,
NumberDef,
TextDef,
EnumDef
)
class CreateMultiverseUsd(plugin.Creator):
class CreateMultiverseUsd(plugin.MayaCreator):
"""Create Multiverse USD Asset"""
name = "mvUsdMain"
identifier = "io.openpype.creators.maya.mvusdasset"
label = "Multiverse USD Asset"
family = "usd"
icon = "cubes"
def __init__(self, *args, **kwargs):
super(CreateMultiverseUsd, self).__init__(*args, **kwargs)
def get_instance_attr_defs(self):
# Add animation data first, since it maintains order.
self.data.update(lib.collect_animation_data(True))
defs = lib.collect_animation_defs(fps=True)
defs.extend([
EnumDef("fileFormat",
label="File format",
items=["usd", "usda", "usdz"],
default="usd"),
BoolDef("stripNamespaces",
label="Strip Namespaces",
default=True),
BoolDef("mergeTransformAndShape",
label="Merge Transform and Shape",
default=False),
BoolDef("writeAncestors",
label="Write Ancestors",
default=True),
BoolDef("flattenParentXforms",
label="Flatten Parent Xforms",
default=False),
BoolDef("writeSparseOverrides",
label="Write Sparse Overrides",
default=False),
BoolDef("useMetaPrimPath",
label="Use Meta Prim Path",
default=False),
TextDef("customRootPath",
label="Custom Root Path",
default=''),
TextDef("customAttributes",
label="Custom Attributes",
tooltip="Comma-separated list of attribute names",
default=''),
TextDef("nodeTypesToIgnore",
label="Node Types to Ignore",
tooltip="Comma-separated list of node types to be ignored",
default=''),
BoolDef("writeMeshes",
label="Write Meshes",
default=True),
BoolDef("writeCurves",
label="Write Curves",
default=True),
BoolDef("writeParticles",
label="Write Particles",
default=True),
BoolDef("writeCameras",
label="Write Cameras",
default=False),
BoolDef("writeLights",
label="Write Lights",
default=False),
BoolDef("writeJoints",
label="Write Joints",
default=False),
BoolDef("writeCollections",
label="Write Collections",
default=False),
BoolDef("writePositions",
label="Write Positions",
default=True),
BoolDef("writeNormals",
label="Write Normals",
default=True),
BoolDef("writeUVs",
label="Write UVs",
default=True),
BoolDef("writeColorSets",
label="Write Color Sets",
default=False),
BoolDef("writeTangents",
label="Write Tangents",
default=False),
BoolDef("writeRefPositions",
label="Write Ref Positions",
default=True),
BoolDef("writeBlendShapes",
label="Write BlendShapes",
default=False),
BoolDef("writeDisplayColor",
label="Write Display Color",
default=True),
BoolDef("writeSkinWeights",
label="Write Skin Weights",
default=False),
BoolDef("writeMaterialAssignment",
label="Write Material Assignment",
default=False),
BoolDef("writeHardwareShader",
label="Write Hardware Shader",
default=False),
BoolDef("writeShadingNetworks",
label="Write Shading Networks",
default=False),
BoolDef("writeTransformMatrix",
label="Write Transform Matrix",
default=True),
BoolDef("writeUsdAttributes",
label="Write USD Attributes",
default=True),
BoolDef("writeInstancesAsReferences",
label="Write Instances as References",
default=False),
BoolDef("timeVaryingTopology",
label="Time Varying Topology",
default=False),
TextDef("customMaterialNamespace",
label="Custom Material Namespace",
default=''),
NumberDef("numTimeSamples",
label="Num Time Samples",
default=1),
NumberDef("timeSamplesSpan",
label="Time Samples Span",
default=0.0),
])
self.data["fileFormat"] = ["usd", "usda", "usdz"]
self.data["stripNamespaces"] = True
self.data["mergeTransformAndShape"] = False
self.data["writeAncestors"] = True
self.data["flattenParentXforms"] = False
self.data["writeSparseOverrides"] = False
self.data["useMetaPrimPath"] = False
self.data["customRootPath"] = ''
self.data["customAttributes"] = ''
self.data["nodeTypesToIgnore"] = ''
self.data["writeMeshes"] = True
self.data["writeCurves"] = True
self.data["writeParticles"] = True
self.data["writeCameras"] = False
self.data["writeLights"] = False
self.data["writeJoints"] = False
self.data["writeCollections"] = False
self.data["writePositions"] = True
self.data["writeNormals"] = True
self.data["writeUVs"] = True
self.data["writeColorSets"] = False
self.data["writeTangents"] = False
self.data["writeRefPositions"] = True
self.data["writeBlendShapes"] = False
self.data["writeDisplayColor"] = True
self.data["writeSkinWeights"] = False
self.data["writeMaterialAssignment"] = False
self.data["writeHardwareShader"] = False
self.data["writeShadingNetworks"] = False
self.data["writeTransformMatrix"] = True
self.data["writeUsdAttributes"] = True
self.data["writeInstancesAsReferences"] = False
self.data["timeVaryingTopology"] = False
self.data["customMaterialNamespace"] = ''
self.data["numTimeSamples"] = 1
self.data["timeSamplesSpan"] = 0.0
return defs
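
One detail worth spelling out for the Multiverse creators: the legacy creators expressed an enum as a plain list in `self.data`, with the first item acting as the implicit default, while `EnumDef` makes both the items and the default explicit. A sketch using the names from the diff above:

```python
from openpype.lib import EnumDef

# Legacy form: self.data["fileFormat"] = ["usd", "usda", "usdz"]
# New-style equivalent with the default stated explicitly:
file_format_def = EnumDef("fileFormat",
                          label="File format",
                          items=["usd", "usda", "usdz"],
                          default="usd")
```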

View file

@ -1,26 +1,48 @@
from openpype.hosts.maya.api import plugin, lib
from openpype.lib import (
BoolDef,
NumberDef,
EnumDef
)
class CreateMultiverseUsdComp(plugin.Creator):
class CreateMultiverseUsdComp(plugin.MayaCreator):
"""Create Multiverse USD Composition"""
name = "mvUsdCompositionMain"
identifier = "io.openpype.creators.maya.mvusdcomposition"
label = "Multiverse USD Composition"
family = "mvUsdComposition"
icon = "cubes"
def __init__(self, *args, **kwargs):
super(CreateMultiverseUsdComp, self).__init__(*args, **kwargs)
def get_instance_attr_defs(self):
# Add animation data first, since it maintains order.
self.data.update(lib.collect_animation_data(True))
defs = lib.collect_animation_defs(fps=True)
defs.extend([
EnumDef("fileFormat",
label="File format",
items=["usd", "usda"],
default="usd"),
BoolDef("stripNamespaces",
label="Strip Namespaces",
default=False),
BoolDef("mergeTransformAndShape",
label="Merge Transform and Shape",
default=False),
BoolDef("flattenContent",
label="Flatten Content",
default=False),
BoolDef("writeAsCompoundLayers",
label="Write As Compound Layers",
default=False),
BoolDef("writePendingOverrides",
label="Write Pending Overrides",
default=False),
NumberDef("numTimeSamples",
label="Num Time Samples",
default=1),
NumberDef("timeSamplesSpan",
label="Time Samples Span",
default=0.0),
])
# Order of `fileFormat` must match extract_multiverse_usd_comp.py
self.data["fileFormat"] = ["usda", "usd"]
self.data["stripNamespaces"] = False
self.data["mergeTransformAndShape"] = False
self.data["flattenContent"] = False
self.data["writeAsCompoundLayers"] = False
self.data["writePendingOverrides"] = False
self.data["numTimeSamples"] = 1
self.data["timeSamplesSpan"] = 0.0
return defs

View file

@ -1,30 +1,59 @@
from openpype.hosts.maya.api import plugin, lib
from openpype.lib import (
BoolDef,
NumberDef,
EnumDef
)
class CreateMultiverseUsdOver(plugin.Creator):
"""Create Multiverse USD Override"""
name = "mvUsdOverrideMain"
identifier = "io.openpype.creators.maya.mvusdoverride"
label = "Multiverse USD Override"
family = "mvUsdOverride"
icon = "cubes"
def __init__(self, *args, **kwargs):
super(CreateMultiverseUsdOver, self).__init__(*args, **kwargs)
def get_instance_attr_defs(self):
defs = lib.collect_animation_defs(fps=True)
defs.extend([
EnumDef("fileFormat",
label="File format",
items=["usd", "usda"],
default="usd"),
BoolDef("writeAll",
label="Write All",
default=False),
BoolDef("writeTransforms",
label="Write Transforms",
default=True),
BoolDef("writeVisibility",
label="Write Visibility",
default=True),
BoolDef("writeAttributes",
label="Write Attributes",
default=True),
BoolDef("writeMaterials",
label="Write Materials",
default=True),
BoolDef("writeVariants",
label="Write Variants",
default=True),
BoolDef("writeVariantsDefinition",
label="Write Variants Definition",
default=True),
BoolDef("writeActiveState",
label="Write Active State",
default=True),
BoolDef("writeNamespaces",
label="Write Namespaces",
default=False),
NumberDef("numTimeSamples",
label="Num Time Samples",
default=1),
NumberDef("timeSamplesSpan",
label="Time Samples Span",
default=0.0),
])
# Add animation data first, since it maintains order.
self.data.update(lib.collect_animation_data(True))
# Order of `fileFormat` must match extract_multiverse_usd_over.py
self.data["fileFormat"] = ["usda", "usd"]
self.data["writeAll"] = False
self.data["writeTransforms"] = True
self.data["writeVisibility"] = True
self.data["writeAttributes"] = True
self.data["writeMaterials"] = True
self.data["writeVariants"] = True
self.data["writeVariantsDefinition"] = True
self.data["writeActiveState"] = True
self.data["writeNamespaces"] = False
self.data["numTimeSamples"] = 1
self.data["timeSamplesSpan"] = 0.0
return defs

View file

@ -4,47 +4,85 @@ from openpype.hosts.maya.api import (
lib,
plugin
)
from openpype.lib import (
BoolDef,
TextDef
)
class CreatePointCache(plugin.Creator):
class CreatePointCache(plugin.MayaCreator):
"""Alembic pointcache for animated data"""
name = "pointcache"
label = "Point Cache"
identifier = "io.openpype.creators.maya.pointcache"
label = "Pointcache"
family = "pointcache"
icon = "gears"
write_color_sets = False
write_face_sets = False
include_user_defined_attributes = False
def __init__(self, *args, **kwargs):
super(CreatePointCache, self).__init__(*args, **kwargs)
def get_instance_attr_defs(self):
# Add animation data
self.data.update(lib.collect_animation_data())
defs = lib.collect_animation_defs()
# Vertex colors with the geometry.
self.data["writeColorSets"] = self.write_color_sets
# Face sets with the geometry.
self.data["writeFaceSets"] = self.write_face_sets
self.data["renderableOnly"] = False # Only renderable visible shapes
self.data["visibleOnly"] = False # only nodes that are visible
self.data["includeParentHierarchy"] = False # Include parent groups
self.data["worldSpace"] = True # Default to exporting world-space
self.data["refresh"] = False # Default to suspend refresh.
# Add options for custom attributes
value = self.include_user_defined_attributes
self.data["includeUserDefinedAttributes"] = value
self.data["attr"] = ""
self.data["attrPrefix"] = ""
defs.extend([
BoolDef("writeColorSets",
label="Write vertex colors",
tooltip="Write vertex colors with the geometry",
default=False),
BoolDef("writeFaceSets",
label="Write face sets",
tooltip="Write face sets with the geometry",
default=False),
BoolDef("renderableOnly",
label="Renderable Only",
tooltip="Only export renderable visible shapes",
default=False),
BoolDef("visibleOnly",
label="Visible Only",
tooltip="Only export dag objects visible during "
"frame range",
default=False),
BoolDef("includeParentHierarchy",
label="Include Parent Hierarchy",
tooltip="Whether to include parent hierarchy of nodes in "
"the publish instance",
default=False),
BoolDef("worldSpace",
label="World-Space Export",
default=True),
BoolDef("refresh",
label="Refresh viewport during export",
default=False),
BoolDef("includeUserDefinedAttributes",
label="Include User Defined Attributes",
default=self.include_user_defined_attributes),
TextDef("attr",
label="Custom Attributes",
default="",
placeholder="attr1, attr2"),
TextDef("attrPrefix",
label="Custom Attributes Prefix",
default="",
placeholder="prefix1, prefix2")
])
# TODO: Implement these on a Deadline plug-in instead?
"""
# Default to not send to farm.
self.data["farm"] = False
self.data["priority"] = 50
"""
def process(self):
instance = super(CreatePointCache, self).process()
return defs
assProxy = cmds.sets(name=instance + "_proxy_SET", empty=True)
cmds.sets(assProxy, forceElement=instance)
def create(self, subset_name, instance_data, pre_create_data):
instance = super(CreatePointCache, self).create(
subset_name, instance_data, pre_create_data
)
instance_node = instance.get("instance_node")
# For Arnold standin proxy
proxy_set = cmds.sets(name=instance_node + "_proxy_SET", empty=True)
cmds.sets(proxy_set, forceElement=instance_node)
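
Because the diff above is shown without indentation, here is the same `create()` override re-indented as a sketch: the instance is built through the base class, then a companion objectSet is parented under the instance node (returning the instance at the end is an addition for clarity, not part of the diff):

```python
from maya import cmds


def create(self, subset_name, instance_data, pre_create_data):
    instance = super(CreatePointCache, self).create(
        subset_name, instance_data, pre_create_data
    )
    instance_node = instance.get("instance_node")

    # Companion set for the Arnold standin proxy, parented under the
    # instance set so it travels with the instance
    proxy_set = cmds.sets(name=instance_node + "_proxy_SET", empty=True)
    cmds.sets(proxy_set, forceElement=instance_node)
    return instance
```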

View file

@ -2,34 +2,49 @@ from openpype.hosts.maya.api import (
lib,
plugin
)
from openpype.lib import (
BoolDef,
TextDef
)
class CreateProxyAlembic(plugin.Creator):
class CreateProxyAlembic(plugin.MayaCreator):
"""Proxy Alembic for animated data"""
name = "proxyAbcMain"
identifier = "io.openpype.creators.maya.proxyabc"
label = "Proxy Alembic"
family = "proxyAbc"
icon = "gears"
write_color_sets = False
write_face_sets = False
def __init__(self, *args, **kwargs):
super(CreateProxyAlembic, self).__init__(*args, **kwargs)
def get_instance_attr_defs(self):
# Add animation data
self.data.update(lib.collect_animation_data())
defs = lib.collect_animation_defs()
# Vertex colors with the geometry.
self.data["writeColorSets"] = self.write_color_sets
# Face sets with the geometry.
self.data["writeFaceSets"] = self.write_face_sets
# Default to exporting world-space
self.data["worldSpace"] = True
defs.extend([
BoolDef("writeColorSets",
label="Write vertex colors",
tooltip="Write vertex colors with the geometry",
default=self.write_color_sets),
BoolDef("writeFaceSets",
label="Write face sets",
tooltip="Write face sets with the geometry",
default=self.write_face_sets),
BoolDef("worldSpace",
label="World-Space Export",
default=True),
TextDef("nameSuffix",
label="Name Suffix for Bounding Box",
default="_BBox",
placeholder="_BBox"),
TextDef("attr",
label="Custom Attributes",
default="",
placeholder="attr1, attr2"),
TextDef("attrPrefix",
label="Custom Attributes Prefix",
placeholder="prefix1, prefix2")
])
# name suffix for the bounding box
self.data["nameSuffix"] = "_BBox"
# Add options for custom attributes
self.data["attr"] = ""
self.data["attrPrefix"] = ""
return defs

View file

@ -2,22 +2,24 @@
"""Creator of Redshift proxy subset types."""
from openpype.hosts.maya.api import plugin, lib
from openpype.lib import BoolDef
class CreateRedshiftProxy(plugin.Creator):
class CreateRedshiftProxy(plugin.MayaCreator):
"""Create instance of Redshift Proxy subset."""
name = "redshiftproxy"
identifier = "io.openpype.creators.maya.redshiftproxy"
label = "Redshift Proxy"
family = "redshiftproxy"
icon = "gears"
def __init__(self, *args, **kwargs):
super(CreateRedshiftProxy, self).__init__(*args, **kwargs)
def get_instance_attr_defs(self):
animation_data = lib.collect_animation_data()
defs = [
BoolDef("animation",
label="Export animation",
default=False)
]
self.data["animation"] = False
self.data["proxyFrameStart"] = animation_data["frameStart"]
self.data["proxyFrameEnd"] = animation_data["frameEnd"]
self.data["proxyFrameStep"] = animation_data["step"]
defs.extend(lib.collect_animation_defs())
return defs

View file

@ -1,425 +1,108 @@
# -*- coding: utf-8 -*-
"""Create ``Render`` instance in Maya."""
import json
import os
import appdirs
import requests
from maya import cmds
from maya.app.renderSetup.model import renderSetup
from openpype.settings import (
get_system_settings,
get_project_settings,
)
from openpype.lib import requests_get
from openpype.modules import ModulesManager
from openpype.pipeline import legacy_io
from openpype.hosts.maya.api import (
lib,
lib_rendersettings,
plugin
)
from openpype.pipeline import CreatorError
from openpype.lib import (
BoolDef,
NumberDef,
)
class CreateRender(plugin.Creator):
"""Create *render* instance.
class CreateRenderlayer(plugin.RenderlayerCreator):
"""Create and manages renderlayer subset per renderLayer in workfile.
Render instances are not actually published; they hold options for
collecting render data. If a render instance is present, it will trigger
collection of render layers, AOVs and cameras for either direct submission
to the render farm or export to various standalone formats (like V-Ray's
``vrscene`` or Arnold's ``ass`` files) which are then submitted to the
render farm.
Instance has the following attributes::
primaryPool (list of str): Primary list of slave machine pools to use.
secondaryPool (list of str): Optional secondary list of slave pools.
suspendPublishJob (bool): Suspend the job after it is submitted.
extendFrames (bool): Use already existing frames from previous version
to extend current render.
overrideExistingFrame (bool): Overwrite already existing frames.
priority (int): Submitted job priority
framesPerTask (int): How many frames per task to render. This is
basically job division on render farm.
whitelist (list of str): White list of slave machines
machineList (list of str): Specific list of slave machines to use
useMayaBatch (bool): Use Maya batch mode to render as opposed to
Maya interactive mode. This consumes different licenses.
vrscene (bool): Submit as ``vrscene`` file for standalone V-Ray
renderer.
ass (bool): Submit as ``ass`` file for standalone Arnold renderer.
tileRendering (bool): Instance is set to tile rendering mode. We
won't submit an actual render, but we'll make the publish job wait
for the Tile Assembly job to finish and then publish.
strict_error_checking (bool): Enable/disable error checking on Deadline.
See Also:
https://pype.club/docs/artist_hosts_maya#creating-basic-render-setup
This generates a single node in the scene which, if it exists, tells the
Creator to collect Maya render setup render layers as individual instances.
As such, triggering create doesn't actually create an instance node per
layer but only the node which tells the Creator it may now collect
the render layers.
"""
identifier = "io.openpype.creators.maya.renderlayer"
family = "renderlayer"
label = "Render"
family = "rendering"
icon = "eye"
_token = None
_user = None
_password = None
_project_settings = None
layer_instance_prefix = "render"
singleton_node_name = "renderingMain"
def __init__(self, *args, **kwargs):
"""Constructor."""
super(CreateRender, self).__init__(*args, **kwargs)
render_settings = {}
# Defaults
self._project_settings = get_project_settings(
legacy_io.Session["AVALON_PROJECT"])
if self._project_settings["maya"]["RenderSettings"]["apply_render_settings"]: # noqa
@classmethod
def apply_settings(cls, project_settings, system_settings):
cls.render_settings = project_settings["maya"]["RenderSettings"]
def create(self, subset_name, instance_data, pre_create_data):
# Only allow a single render instance to exist
if self._get_singleton_node():
raise CreatorError("A Render instance already exists - only "
"one can be configured.")
# Apply default project render settings on create
if self.render_settings.get("apply_render_settings"):
lib_rendersettings.RenderSettings().set_default_renderer_settings()
# Deadline-only
manager = ModulesManager()
deadline_settings = get_system_settings()["modules"]["deadline"]
if not deadline_settings["enabled"]:
self.deadline_servers = {}
return
self.deadline_module = manager.modules_by_name["deadline"]
try:
default_servers = deadline_settings["deadline_urls"]
project_servers = (
self._project_settings["deadline"]["deadline_servers"]
)
self.deadline_servers = {
k: default_servers[k]
for k in project_servers
if k in default_servers
}
super(CreateRenderlayer, self).create(subset_name,
instance_data,
pre_create_data)
if not self.deadline_servers:
self.deadline_servers = default_servers
except AttributeError:
# Handle the situation where we had only one URL for Deadline.
# Get the default Deadline webservice URL from the Deadline module.
self.deadline_servers = self.deadline_module.deadline_urls
def process(self):
"""Entry point."""
exists = cmds.ls(self.name)
if exists:
cmds.warning("%s already exists." % exists[0])
return
use_selection = self.options.get("useSelection")
with lib.undo_chunk():
self._create_render_settings()
self.instance = super(CreateRender, self).process()
# create namespace with instance
index = 1
namespace_name = "_{}".format(str(self.instance))
try:
cmds.namespace(rm=namespace_name)
except RuntimeError:
# namespace is not empty, so we leave it untouched
pass
while cmds.namespace(exists=namespace_name):
namespace_name = "_{}{}".format(str(self.instance), index)
index += 1
namespace = cmds.namespace(add=namespace_name)
# add Deadline server selection list
if self.deadline_servers:
cmds.scriptJob(
attributeChange=[
"{}.deadlineServers".format(self.instance),
self._deadline_webservice_changed
])
cmds.setAttr("{}.machineList".format(self.instance), lock=True)
rs = renderSetup.instance()
layers = rs.getRenderLayers()
if use_selection:
self.log.info("Processing existing layers")
sets = []
for layer in layers:
self.log.info(" - creating set for {}:{}".format(
namespace, layer.name()))
render_set = cmds.sets(
n="{}:{}".format(namespace, layer.name()))
sets.append(render_set)
cmds.sets(sets, forceElement=self.instance)
# if no render layers are present, create default one with
# asterisk selector
if not layers:
render_layer = rs.createRenderLayer('Main')
collection = render_layer.createCollection("defaultCollection")
collection.getSelector().setPattern('*')
return self.instance
def _deadline_webservice_changed(self):
"""Refresh Deadline server dependent options."""
# get selected server
webservice = self.deadline_servers[
self.server_aliases[
cmds.getAttr("{}.deadlineServers".format(self.instance))
]
]
pools = self.deadline_module.get_deadline_pools(webservice, self.log)
cmds.deleteAttr("{}.primaryPool".format(self.instance))
cmds.deleteAttr("{}.secondaryPool".format(self.instance))
pool_setting = (self._project_settings["deadline"]
["publish"]
["CollectDeadlinePools"])
primary_pool = pool_setting["primary_pool"]
sorted_pools = self._set_default_pool(list(pools), primary_pool)
cmds.addAttr(
self.instance,
longName="primaryPool",
attributeType="enum",
enumName=":".join(sorted_pools)
)
cmds.setAttr(
"{}.primaryPool".format(self.instance),
0,
keyable=False,
channelBox=True
)
pools = ["-"] + pools
secondary_pool = pool_setting["secondary_pool"]
sorted_pools = self._set_default_pool(list(pools), secondary_pool)
cmds.addAttr(
self.instance,
longName="secondaryPool",
attributeType="enum",
enumName=":".join(sorted_pools)
)
cmds.setAttr(
"{}.secondaryPool".format(self.instance),
0,
keyable=False,
channelBox=True
)
def _create_render_settings(self):
def get_instance_attr_defs(self):
"""Create instance settings."""
# get pools (slave machines of the render farm)
pool_names = []
default_priority = 50
self.data["suspendPublishJob"] = False
self.data["review"] = True
self.data["extendFrames"] = False
self.data["overrideExistingFrame"] = True
# self.data["useLegacyRenderLayers"] = True
self.data["priority"] = default_priority
self.data["tile_priority"] = default_priority
self.data["framesPerTask"] = 1
self.data["whitelist"] = False
self.data["machineList"] = ""
self.data["useMayaBatch"] = False
self.data["tileRendering"] = False
self.data["tilesX"] = 2
self.data["tilesY"] = 2
self.data["convertToScanline"] = False
self.data["useReferencedAovs"] = False
self.data["renderSetupIncludeLights"] = (
self._project_settings.get(
"maya", {}).get(
"RenderSettings", {}).get(
"enable_all_lights", False)
)
# Disable for now as this feature is not working yet
# self.data["assScene"] = False
return [
BoolDef("review",
label="Review",
tooltip="Mark as reviewable",
default=True),
BoolDef("extendFrames",
label="Extend Frames",
tooltip="Extends the frames on top of the previous "
"publish.\nIf the previous was 1001-1050 and you "
"would now submit 1020-1070 only the new frames "
"1051-1070 would be rendered and published "
"together with the previously rendered frames.\n"
"If 'overrideExistingFrame' is enabled it *will* "
"render any existing frames.",
default=False),
BoolDef("overrideExistingFrame",
label="Override Existing Frame",
tooltip="Override existing rendered frames "
"(if they exist).",
default=True),
system_settings = get_system_settings()["modules"]
# TODO: Should these move to submit_maya_deadline plugin?
# Tile rendering
BoolDef("tileRendering",
label="Enable tiled rendering",
default=False),
NumberDef("tilesX",
label="Tiles X",
default=2,
minimum=1,
decimals=0),
NumberDef("tilesY",
label="Tiles Y",
default=2,
minimum=1,
decimals=0),
deadline_enabled = system_settings["deadline"]["enabled"]
muster_enabled = system_settings["muster"]["enabled"]
muster_url = system_settings["muster"]["MUSTER_REST_URL"]
# Additional settings
BoolDef("convertToScanline",
label="Convert to Scanline",
tooltip="Convert the output images to scanline images",
default=False),
BoolDef("useReferencedAovs",
label="Use Referenced AOVs",
tooltip="Consider the AOVs from referenced scenes as well",
default=False),
if deadline_enabled and muster_enabled:
self.log.error(
"Both Deadline and Muster are enabled. " "Cannot support both."
)
raise RuntimeError("Both Deadline and Muster are enabled")
if deadline_enabled:
self.server_aliases = list(self.deadline_servers.keys())
self.data["deadlineServers"] = self.server_aliases
try:
deadline_url = self.deadline_servers["default"]
except KeyError:
# if 'default' server is not between selected,
# use first one for initial list of pools.
deadline_url = next(iter(self.deadline_servers.values()))
# Uses function to get pool machines from the assigned deadline
# url in settings
pool_names = self.deadline_module.get_deadline_pools(deadline_url,
self.log)
maya_submit_dl = self._project_settings.get(
"deadline", {}).get(
"publish", {}).get(
"MayaSubmitDeadline", {})
priority = maya_submit_dl.get("priority", default_priority)
self.data["priority"] = priority
tile_priority = maya_submit_dl.get("tile_priority",
default_priority)
self.data["tile_priority"] = tile_priority
strict_error_checking = maya_submit_dl.get("strict_error_checking",
True)
self.data["strict_error_checking"] = strict_error_checking
# Pool attributes should be last since they will be recreated when
# the deadline server changes.
pool_setting = (self._project_settings["deadline"]
["publish"]
["CollectDeadlinePools"])
primary_pool = pool_setting["primary_pool"]
self.data["primaryPool"] = self._set_default_pool(pool_names,
primary_pool)
# We add a string "-" to allow the user to not
# set any secondary pools
pool_names = ["-"] + pool_names
secondary_pool = pool_setting["secondary_pool"]
self.data["secondaryPool"] = self._set_default_pool(pool_names,
secondary_pool)
if muster_enabled:
self.log.info(">>> Loading Muster credentials ...")
self._load_credentials()
self.log.info(">>> Getting pools ...")
pools = []
try:
pools = self._get_muster_pools()
except requests.exceptions.HTTPError as e:
if e.startswith("401"):
self.log.warning("access token expired")
self._show_login()
raise RuntimeError("Access token expired")
except requests.exceptions.ConnectionError:
self.log.error("Cannot connect to Muster API endpoint.")
raise RuntimeError("Cannot connect to {}".format(muster_url))
for pool in pools:
self.log.info(" - pool: {}".format(pool["name"]))
pool_names.append(pool["name"])
self.options = {"useSelection": False} # Force no content
def _set_default_pool(self, pool_names, pool_value):
"""Reorder pool names, default should come first"""
if pool_value and pool_value in pool_names:
pool_names.remove(pool_value)
pool_names = [pool_value] + pool_names
return pool_names
def _load_credentials(self):
"""Load Muster credentials.
Load Muster credentials from file and set ``MUSTER_USER`` and
``MUSTER_PASSWORD``; ``MUSTER_REST_URL`` is loaded from settings.
Raises:
RuntimeError: If loaded credentials are invalid.
AttributeError: If ``MUSTER_REST_URL`` is not set.
"""
app_dir = os.path.normpath(appdirs.user_data_dir("pype-app", "pype"))
file_name = "muster_cred.json"
fpath = os.path.join(app_dir, file_name)
file = open(fpath, "r")
muster_json = json.load(file)
self._token = muster_json.get("token", None)
if not self._token:
self._show_login()
raise RuntimeError("Invalid access token for Muster")
file.close()
self.MUSTER_REST_URL = os.environ.get("MUSTER_REST_URL")
if not self.MUSTER_REST_URL:
raise AttributeError("Muster REST API url not set")
def _get_muster_pools(self):
"""Get render pools from Muster.
Raises:
Exception: If pool list cannot be obtained from Muster.
"""
params = {"authToken": self._token}
api_entry = "/api/pools/list"
response = requests_get(self.MUSTER_REST_URL + api_entry,
params=params)
if response.status_code != 200:
if response.status_code == 401:
self.log.warning("Authentication token expired.")
self._show_login()
else:
self.log.error(
("Cannot get pools from "
"Muster: {}").format(response.status_code)
)
raise Exception("Cannot get pools from Muster")
try:
pools = response.json()["ResponseData"]["pools"]
except ValueError as e:
self.log.error("Invalid response from Muster server {}".format(e))
raise Exception("Invalid response from Muster server")
return pools
def _show_login(self):
# authentication token expired so we need to login to Muster
# again to get it. We use Pype API call to show login window.
api_url = "{}/muster/show_login".format(
os.environ["OPENPYPE_WEBSERVER_URL"])
self.log.debug(api_url)
login_response = requests_get(api_url, timeout=1)
if login_response.status_code != 200:
self.log.error("Cannot show login form to Muster")
raise Exception("Cannot show login form to Muster")
def _requests_post(self, *args, **kwargs):
"""Wrap request post method.
Disables SSL certificate validation if the ``DONT_VERIFY_SSL`` environment
variable is found. This is useful when the Deadline or Muster server is
running with a self-signed certificate and its certificate is not
added to the trusted certificates on client machines.
Warning:
Disabling SSL certificate validation defeats one line
of defense SSL provides and is not recommended.
"""
if "verify" not in kwargs:
kwargs["verify"] = not os.getenv("OPENPYPE_DONT_VERIFY_SSL", True)
return requests.post(*args, **kwargs)
def _requests_get(self, *args, **kwargs):
"""Wrap request get method.
Disables SSL certificate validation if the ``DONT_VERIFY_SSL`` environment
variable is found. This is useful when the Deadline or Muster server is
running with a self-signed certificate and its certificate is not
added to the trusted certificates on client machines.
Warning:
Disabling SSL certificate validation defeats one line
of defense SSL provides and is not recommended.
"""
if "verify" not in kwargs:
kwargs["verify"] = not os.getenv("OPENPYPE_DONT_VERIFY_SSL", True)
return requests.get(*args, **kwargs)
BoolDef("renderSetupIncludeLights",
label="Render Setup Include Lights",
default=self.render_settings.get("enable_all_lights",
False))
]
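
The two hooks that carry most of this rewrite are `apply_settings()`, which reads class-wide defaults from project settings once, and `create()`, which guards the singleton node. A re-indented sketch of both in context, using only calls that appear in the diff:

```python
from openpype.hosts.maya.api import lib_rendersettings, plugin
from openpype.pipeline import CreatorError


class CreateRenderlayer(plugin.RenderlayerCreator):
    """Sketch of the singleton render creator above."""

    identifier = "io.openpype.creators.maya.renderlayer"
    family = "renderlayer"
    label = "Render"
    singleton_node_name = "renderingMain"

    @classmethod
    def apply_settings(cls, project_settings, system_settings):
        # Class-wide defaults, read once from project settings
        cls.render_settings = project_settings["maya"]["RenderSettings"]

    def create(self, subset_name, instance_data, pre_create_data):
        # Only allow a single render instance to exist
        if self._get_singleton_node():
            raise CreatorError("A Render instance already exists - only "
                               "one can be configured.")

        # Apply default project render settings on create
        if self.render_settings.get("apply_render_settings"):
            lib_rendersettings.RenderSettings() \
                .set_default_renderer_settings()

        super(CreateRenderlayer, self).create(subset_name,
                                              instance_data,
                                              pre_create_data)
```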

View file

@ -1,55 +1,31 @@
from openpype.hosts.maya.api import (
lib,
plugin
)
from maya import cmds
from openpype.hosts.maya.api import plugin
from openpype.pipeline import CreatorError
class CreateRenderSetup(plugin.Creator):
class CreateRenderSetup(plugin.MayaCreator):
"""Create rendersetup template json data"""
name = "rendersetup"
identifier = "io.openpype.creators.maya.rendersetup"
label = "Render Setup Preset"
family = "rendersetup"
icon = "tablet"
def __init__(self, *args, **kwargs):
super(CreateRenderSetup, self).__init__(*args, **kwargs)
def get_pre_create_attr_defs(self):
# Do not show the "use_selection" setting from parent class
return []
# Here we can pre-create renderSetup layers, possibly utilizing
# settings for it.
def create(self, subset_name, instance_data, pre_create_data):
# _____
# / __\__
# | / __\__
# | | / \
# | | | |
# \__| | |
# \__| |
# \_____/
existing_instance = None
for instance in self.create_context.instances:
if instance.family == self.family:
existing_instance = instance
break
# from pype.api import get_project_settings
# import maya.app.renderSetup.model.renderSetup as renderSetup
# settings = get_project_settings(os.environ['AVALON_PROJECT'])
# layer = settings['maya']['create']['renderSetup']["layer"]
if existing_instance:
raise CreatorError("A RenderSetup instance already exists - only "
"one can be configured.")
# rs = renderSetup.instance()
# rs.createRenderLayer(layer)
self.options = {"useSelection": False} # Force no content
def process(self):
exists = cmds.ls(self.name)
assert len(exists) <= 1, (
"More than one renderglobal exists, this is a bug"
)
if exists:
return cmds.warning("%s already exists." % exists[0])
with lib.undo_chunk():
instance = super(CreateRenderSetup, self).process()
self.data["renderSetup"] = "42"
null = cmds.sets(name="null_SET", empty=True)
cmds.sets([null], forceElement=instance)
super(CreateRenderSetup, self).create(subset_name,
instance_data,
pre_create_data)

View file

@ -1,76 +1,142 @@
import os
from collections import OrderedDict
import json
from maya import cmds
from openpype.hosts.maya.api import (
lib,
plugin
)
from openpype.settings import get_project_settings
from openpype.pipeline import get_current_project_name, get_current_task_name
from openpype.lib import (
BoolDef,
NumberDef,
EnumDef
)
from openpype.pipeline import CreatedInstance
from openpype.client import get_asset_by_name
TRANSPARENCIES = [
"preset",
"simple",
"object sorting",
"weighted average",
"depth peeling",
"alpha cut"
]
class CreateReview(plugin.Creator):
"""Single baked camera"""
name = "reviewDefault"
class CreateReview(plugin.MayaCreator):
"""Playblast reviewable"""
identifier = "io.openpype.creators.maya.review"
label = "Review"
family = "review"
icon = "video-camera"
keepImages = False
isolate = False
imagePlane = True
Width = 0
Height = 0
transparency = [
"preset",
"simple",
"object sorting",
"weighted average",
"depth peeling",
"alpha cut"
]
useMayaTimeline = True
panZoom = False
def __init__(self, *args, **kwargs):
super(CreateReview, self).__init__(*args, **kwargs)
data = OrderedDict(**self.data)
# Overriding "create" method to prefill values from settings.
def create(self, subset_name, instance_data, pre_create_data):
project_name = get_current_project_name()
asset_doc = get_asset_by_name(project_name, data["asset"])
task_name = get_current_task_name()
members = list()
if pre_create_data.get("use_selection"):
members = cmds.ls(selection=True)
project_name = self.project_name
asset_doc = get_asset_by_name(project_name, instance_data["asset"])
task_name = instance_data["task"]
preset = lib.get_capture_preset(
task_name,
asset_doc["data"]["tasks"][task_name]["type"],
data["subset"],
get_project_settings(project_name),
subset_name,
self.project_settings,
self.log
)
if os.environ.get("OPENPYPE_DEBUG") == "1":
self.log.debug(
"Using preset: {}".format(
json.dumps(preset, indent=4, sort_keys=True)
)
self.log.debug(
"Using preset: {}".format(
json.dumps(preset, indent=4, sort_keys=True)
)
)
with lib.undo_chunk():
instance_node = cmds.sets(members, name=subset_name)
instance_data["instance_node"] = instance_node
instance = CreatedInstance(
self.family,
subset_name,
instance_data,
self)
creator_attribute_defs_by_key = {
x.key: x for x in instance.creator_attribute_defs
}
mapping = {
"review_width": preset["Resolution"]["width"],
"review_height": preset["Resolution"]["height"],
"isolate": preset["Generic"]["isolate_view"],
"imagePlane": preset["Viewport Options"]["imagePlane"],
"panZoom": preset["Generic"]["pan_zoom"]
}
for key, value in mapping.items():
creator_attribute_defs_by_key[key].default = value
self._add_instance_to_context(instance)
self.imprint_instance_node(instance_node,
data=instance.data_to_store())
return instance
def get_instance_attr_defs(self):
defs = lib.collect_animation_defs()
# Option for using Maya or asset frame range in settings.
frame_range = lib.get_frame_range()
if self.useMayaTimeline:
frame_range = lib.collect_animation_data(fps=True)
for key, value in frame_range.items():
data[key] = value
if not self.useMayaTimeline:
# Update the defaults to be the asset frame range
frame_range = lib.get_frame_range()
defs_by_key = {attr_def.key: attr_def for attr_def in defs}
for key, value in frame_range.items():
if key not in defs_by_key:
raise RuntimeError("Attribute definition not found to be "
"updated for key: {}".format(key))
attr_def = defs_by_key[key]
attr_def.default = value
data["fps"] = lib.collect_animation_data(fps=True)["fps"]
defs.extend([
NumberDef("review_width",
label="Review width",
tooltip="A value of zero will use the asset resolution.",
decimals=0,
minimum=0,
default=0),
NumberDef("review_height",
label="Review height",
tooltip="A value of zero will use the asset resolution.",
decimals=0,
minimum=0,
default=0),
BoolDef("keepImages",
label="Keep Images",
tooltip="Whether to also publish along the image sequence "
"next to the video reviewable.",
default=False),
BoolDef("isolate",
label="Isolate render members of instance",
tooltip="When enabled only the members of the instance "
"will be included in the playblast review.",
default=False),
BoolDef("imagePlane",
label="Show Image Plane",
default=True),
EnumDef("transparency",
label="Transparency",
items=TRANSPARENCIES),
BoolDef("panZoom",
label="Enable camera pan/zoom",
default=True),
EnumDef("displayLights",
label="Display Lights",
items=lib.DISPLAY_LIGHTS_ENUM),
])
data["keepImages"] = self.keepImages
data["transparency"] = self.transparency
data["review_width"] = preset["Resolution"]["width"]
data["review_height"] = preset["Resolution"]["height"]
data["isolate"] = preset["Generic"]["isolate_view"]
data["imagePlane"] = preset["Viewport Options"]["imagePlane"]
data["panZoom"] = preset["Generic"]["pan_zoom"]
data["displayLights"] = lib.DISPLAY_LIGHTS_LABELS
self.data = data
return defs
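
The notable change in `CreateReview.create()` is that settings presets no longer overwrite instance data directly; the preset values become the *defaults* of the matching attribute definitions before the instance node is imprinted. The mapping step, re-indented (``preset`` and ``instance`` come from the surrounding method, as in the diff):

```python
# Map capture-preset values onto creator attribute definition defaults
mapping = {
    "review_width": preset["Resolution"]["width"],
    "review_height": preset["Resolution"]["height"],
    "isolate": preset["Generic"]["isolate_view"],
    "imagePlane": preset["Viewport Options"]["imagePlane"],
    "panZoom": preset["Generic"]["pan_zoom"],
}
defs_by_key = {d.key: d for d in instance.creator_attribute_defs}
for key, value in mapping.items():
    defs_by_key[key].default = value
```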

View file

@ -1,25 +1,25 @@
from maya import cmds
from openpype.hosts.maya.api import (
lib,
plugin
)
from openpype.hosts.maya.api import plugin
class CreateRig(plugin.Creator):
class CreateRig(plugin.MayaCreator):
"""Artist-friendly rig with controls to direct motion"""
name = "rigDefault"
identifier = "io.openpype.creators.maya.rig"
label = "Rig"
family = "rig"
icon = "wheelchair"
def process(self):
def create(self, subset_name, instance_data, pre_create_data):
with lib.undo_chunk():
instance = super(CreateRig, self).process()
instance = super(CreateRig, self).create(subset_name,
instance_data,
pre_create_data)
self.log.info("Creating Rig instance set up ...")
controls = cmds.sets(name="controls_SET", empty=True)
pointcache = cmds.sets(name="out_SET", empty=True)
cmds.sets([controls, pointcache], forceElement=instance)
instance_node = instance.get("instance_node")
self.log.info("Creating Rig instance set up ...")
controls = cmds.sets(name="controls_SET", empty=True)
pointcache = cmds.sets(name="out_SET", empty=True)
cmds.sets([controls, pointcache], forceElement=instance_node)

View file

@ -1,16 +1,19 @@
from openpype.hosts.maya.api import plugin
from openpype.lib import BoolDef
class CreateSetDress(plugin.Creator):
class CreateSetDress(plugin.MayaCreator):
"""A grouped package of loaded content"""
name = "setdressMain"
identifier = "io.openpype.creators.maya.setdress"
label = "Set Dress"
family = "setdress"
icon = "cubes"
defaults = ["Main", "Anim"]
def __init__(self, *args, **kwargs):
super(CreateSetDress, self).__init__(*args, **kwargs)
self.data["exactSetMembersOnly"] = True
def get_instance_attr_defs(self):
return [
BoolDef("exactSetMembersOnly",
label="Exact Set Members Only",
default=True)
]

View file

@ -1,47 +1,63 @@
# -*- coding: utf-8 -*-
"""Creator for Unreal Skeletal Meshes."""
from openpype.hosts.maya.api import plugin, lib
from openpype.pipeline import legacy_io
from openpype.lib import (
BoolDef,
TextDef
)
from maya import cmds # noqa
class CreateUnrealSkeletalMesh(plugin.Creator):
class CreateUnrealSkeletalMesh(plugin.MayaCreator):
"""Unreal Static Meshes with collisions."""
name = "staticMeshMain"
identifier = "io.openpype.creators.maya.unrealskeletalmesh"
label = "Unreal - Skeletal Mesh"
family = "skeletalMesh"
icon = "thumbs-up"
dynamic_subset_keys = ["asset"]
joint_hints = []
# Defined in settings
joint_hints = set()
def __init__(self, *args, **kwargs):
"""Constructor."""
super(CreateUnrealSkeletalMesh, self).__init__(*args, **kwargs)
@classmethod
def get_dynamic_data(
cls, variant, task_name, asset_id, project_name, host_name
):
dynamic_data = super(CreateUnrealSkeletalMesh, cls).get_dynamic_data(
variant, task_name, asset_id, project_name, host_name
def apply_settings(self, project_settings, system_settings):
"""Apply project settings to creator"""
settings = (
project_settings["maya"]["create"]["CreateUnrealSkeletalMesh"]
)
dynamic_data["asset"] = legacy_io.Session.get("AVALON_ASSET")
self.joint_hints = set(settings.get("joint_hints", []))
def get_dynamic_data(
self, variant, task_name, asset_doc, project_name, host_name, instance
):
"""
The default subset name templates for Unreal include {asset} and thus
we should pass that along as dynamic data.
"""
dynamic_data = super(CreateUnrealSkeletalMesh, self).get_dynamic_data(
variant, task_name, asset_doc, project_name, host_name, instance
)
dynamic_data["asset"] = asset_doc["name"]
return dynamic_data
def process(self):
self.name = "{}_{}".format(self.family, self.name)
with lib.undo_chunk():
instance = super(CreateUnrealSkeletalMesh, self).process()
content = cmds.sets(instance, query=True)
def create(self, subset_name, instance_data, pre_create_data):
with lib.undo_chunk():
instance = super(CreateUnrealSkeletalMesh, self).create(
subset_name, instance_data, pre_create_data)
instance_node = instance.get("instance_node")
# We reorganize the geometry that was originally added into the
# set into either 'joints_SET' or 'geometry_SET' based on the
# joint_hints from project settings
members = cmds.sets(instance_node, query=True)
cmds.sets(clear=instance_node)
# empty set and process its former content
cmds.sets(content, rm=instance)
geometry_set = cmds.sets(name="geometry_SET", empty=True)
joints_set = cmds.sets(name="joints_SET", empty=True)
cmds.sets([geometry_set, joints_set], forceElement=instance)
members = cmds.ls(content) or []
cmds.sets([geometry_set, joints_set], forceElement=instance_node)
for node in members:
if node in self.joint_hints:
@ -49,20 +65,38 @@ class CreateUnrealSkeletalMesh(plugin.Creator):
else:
cmds.sets(node, forceElement=geometry_set)
# Add animation data
self.data.update(lib.collect_animation_data())
def get_instance_attr_defs(self):
# Only renderable visible shapes
self.data["renderableOnly"] = False
# only nodes that are visible
self.data["visibleOnly"] = False
# Include parent groups
self.data["includeParentHierarchy"] = False
# Default to exporting world-space
self.data["worldSpace"] = True
# Default to suspend refresh.
self.data["refresh"] = False
defs = lib.collect_animation_defs()
# Add options for custom attributes
self.data["attr"] = ""
self.data["attrPrefix"] = ""
defs.extend([
BoolDef("renderableOnly",
label="Renderable Only",
tooltip="Only export renderable visible shapes",
default=False),
BoolDef("visibleOnly",
label="Visible Only",
tooltip="Only export dag objects visible during "
"frame range",
default=False),
BoolDef("includeParentHierarchy",
label="Include Parent Hierarchy",
tooltip="Whether to include parent hierarchy of nodes in "
"the publish instance",
default=False),
BoolDef("worldSpace",
label="World-Space Export",
default=True),
BoolDef("refresh",
label="Refresh viewport during export",
default=False),
TextDef("attr",
label="Custom Attributes",
default="",
placeholder="attr1, attr2"),
TextDef("attrPrefix",
label="Custom Attributes Prefix",
placeholder="prefix1, prefix2")
])
return defs
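
The member reorganization in `create()` above boils down to routing nodes by the configured `joint_hints`. A standalone sketch of that sorting logic with hypothetical example data (the real creator does this with Maya objectSets):

```python
# Hypothetical example data; joint_hints comes from project settings
joint_hints = {"root_JNT", "spine_JNT"}
members = ["root_JNT", "body_GEO", "spine_JNT", "eyes_GEO"]

joints = [node for node in members if node in joint_hints]
geometry = [node for node in members if node not in joint_hints]

print(joints)    # ['root_JNT', 'spine_JNT']
print(geometry)  # ['body_GEO', 'eyes_GEO']
```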

View file

@ -1,58 +1,90 @@
# -*- coding: utf-8 -*-
"""Creator for Unreal Static Meshes."""
from openpype.hosts.maya.api import plugin, lib
from openpype.settings import get_project_settings
from openpype.pipeline import legacy_io
from maya import cmds # noqa
class CreateUnrealStaticMesh(plugin.Creator):
class CreateUnrealStaticMesh(plugin.MayaCreator):
"""Unreal Static Meshes with collisions."""
name = "staticMeshMain"
identifier = "io.openpype.creators.maya.unrealstaticmesh"
label = "Unreal - Static Mesh"
family = "staticMesh"
icon = "cube"
dynamic_subset_keys = ["asset"]
def __init__(self, *args, **kwargs):
"""Constructor."""
super(CreateUnrealStaticMesh, self).__init__(*args, **kwargs)
self._project_settings = get_project_settings(
legacy_io.Session["AVALON_PROJECT"])
# Defined in settings
collision_prefixes = []
def apply_settings(self, project_settings, system_settings):
"""Apply project settings to creator"""
settings = project_settings["maya"]["create"]["CreateUnrealStaticMesh"]
self.collision_prefixes = settings["collision_prefixes"]
@classmethod
def get_dynamic_data(
cls, variant, task_name, asset_id, project_name, host_name
self, variant, task_name, asset_doc, project_name, host_name, instance
):
dynamic_data = super(CreateUnrealStaticMesh, cls).get_dynamic_data(
variant, task_name, asset_id, project_name, host_name
"""
The default subset name templates for Unreal include {asset} and thus
we should pass that along as dynamic data.
"""
dynamic_data = super(CreateUnrealStaticMesh, self).get_dynamic_data(
variant, task_name, asset_doc, project_name, host_name, instance
)
dynamic_data["asset"] = legacy_io.Session.get("AVALON_ASSET")
dynamic_data["asset"] = asset_doc["name"]
return dynamic_data
def process(self):
self.name = "{}_{}".format(self.family, self.name)
with lib.undo_chunk():
instance = super(CreateUnrealStaticMesh, self).process()
content = cmds.sets(instance, query=True)
def create(self, subset_name, instance_data, pre_create_data):
with lib.undo_chunk():
instance = super(CreateUnrealStaticMesh, self).create(
subset_name, instance_data, pre_create_data)
instance_node = instance.get("instance_node")
# We reorganize the geometry that was originally added into the
# set into either 'collision_SET' or 'geometry_SET' based on the
# collision_prefixes from project settings
members = cmds.sets(instance_node, query=True)
cmds.sets(clear=instance_node)
# empty set and process its former content
cmds.sets(content, rm=instance)
geometry_set = cmds.sets(name="geometry_SET", empty=True)
collisions_set = cmds.sets(name="collisions_SET", empty=True)
cmds.sets([geometry_set, collisions_set], forceElement=instance)
cmds.sets([geometry_set, collisions_set],
forceElement=instance_node)
members = cmds.ls(content, long=True) or []
members = cmds.ls(members, long=True) or []
children = cmds.listRelatives(members, allDescendents=True,
fullPath=True) or []
children = cmds.ls(children, type="transform")
for node in children:
if cmds.listRelatives(node, type="shape"):
if [
n for n in self.collision_prefixes
if node.startswith(n)
]:
cmds.sets(node, forceElement=collisions_set)
else:
cmds.sets(node, forceElement=geometry_set)
transforms = cmds.ls(members + children, type="transform")
for transform in transforms:
if not cmds.listRelatives(transform,
type="shape",
noIntermediate=True):
# Exclude all transforms that have no direct shapes
continue
if self.has_collision_prefix(transform):
cmds.sets(transform, forceElement=collisions_set)
else:
cmds.sets(transform, forceElement=geometry_set)
def has_collision_prefix(self, node_path):
"""Return whether node name of path matches collision prefix.
If the node name matches the collision prefix we add it to the
`collisions_SET` instead of the `geometry_SET`.
Args:
node_path (str): Maya node path.
Returns:
bool: Whether the node should be considered a collision mesh.
"""
node_name = node_path.rsplit("|", 1)[-1]
for prefix in self.collision_prefixes:
if node_name.startswith(prefix):
return True
return False
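
A usage sketch for `has_collision_prefix()` above, with hypothetical prefixes and node paths (the actual prefixes come from the `collision_prefixes` project setting):

```python
collision_prefixes = ["UBX_", "UCX_"]  # hypothetical example prefixes


def has_collision_prefix(node_path):
    # Only the short node name, not the full DAG path, is compared
    node_name = node_path.rsplit("|", 1)[-1]
    return any(node_name.startswith(prefix)
               for prefix in collision_prefixes)


print(has_collision_prefix("|staticMeshMain|UBX_wall"))  # True
print(has_collision_prefix("|staticMeshMain|wall_GEO"))  # False
```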

View file

@ -1,10 +1,14 @@
from openpype.hosts.maya.api import plugin
from openpype.hosts.maya.api import (
plugin,
lib
)
from openpype.lib import BoolDef
class CreateVrayProxy(plugin.Creator):
class CreateVrayProxy(plugin.MayaCreator):
"""Alembic pointcache for animated data"""
name = "vrayproxy"
identifier = "io.openpype.creators.maya.vrayproxy"
label = "VRay Proxy"
family = "vrayproxy"
icon = "gears"
@ -12,15 +16,35 @@ class CreateVrayProxy(plugin.Creator):
vrmesh = True
alembic = True
def __init__(self, *args, **kwargs):
super(CreateVrayProxy, self).__init__(*args, **kwargs)
def get_instance_attr_defs(self):
self.data["animation"] = False
self.data["frameStart"] = 1
self.data["frameEnd"] = 1
defs = [
BoolDef("animation",
label="Export Animation",
default=False)
]
# Write vertex colors
self.data["vertexColors"] = False
# Add time range attributes but remove some attributes
# which this instance doesn't actually use
defs.extend(lib.collect_animation_defs())
remove = {"handleStart", "handleEnd", "step"}
defs = [attr_def for attr_def in defs if attr_def.key not in remove]
self.data["vrmesh"] = self.vrmesh
self.data["alembic"] = self.alembic
defs.extend([
BoolDef("vertexColors",
label="Write vertex colors",
tooltip="Write vertex colors with the geometry",
default=False),
BoolDef("vrmesh",
label="Export VRayMesh",
tooltip="Publish a .vrmesh (VRayMesh) file for "
"this VRayProxy",
default=self.vrmesh),
BoolDef("alembic",
label="Export Alembic",
tooltip="Publish a .abc (Alembic) file for "
"this VRayProxy",
default=self.alembic),
])
return defs
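
The filtering idiom above, where unused animation attribute definitions are dropped by their `key`, works on any list of definition objects. A standalone sketch with a stand-in class (``FakeDef`` is illustrative only, not an OpenPype type):

```python
class FakeDef:
    """Stand-in for an OpenPype attribute definition."""

    def __init__(self, key):
        self.key = key


defs = [FakeDef(key) for key in ("frameStart", "frameEnd",
                                 "handleStart", "handleEnd", "step")]
remove = {"handleStart", "handleEnd", "step"}
defs = [attr_def for attr_def in defs if attr_def.key not in remove]

print([attr_def.key for attr_def in defs])  # ['frameStart', 'frameEnd']
```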

View file

@ -1,266 +1,52 @@
# -*- coding: utf-8 -*-
"""Create instance of vrayscene."""
import os
import json
import appdirs
import requests
from maya import cmds
import maya.app.renderSetup.model.renderSetup as renderSetup
from openpype.hosts.maya.api import (
lib,
lib_rendersettings,
plugin
)
from openpype.settings import (
get_system_settings,
get_project_settings
)
from openpype.lib import requests_get
from openpype.pipeline import (
CreatorError,
legacy_io,
)
from openpype.modules import ModulesManager
from openpype.pipeline import CreatorError
from openpype.lib import BoolDef
class CreateVRayScene(plugin.Creator):
class CreateVRayScene(plugin.RenderlayerCreator):
"""Create Vray Scene."""
label = "VRay Scene"
identifier = "io.openpype.creators.maya.vrayscene"
family = "vrayscene"
label = "VRay Scene"
icon = "cubes"
_project_settings = None
render_settings = {}
singleton_node_name = "vraysceneMain"
def __init__(self, *args, **kwargs):
"""Entry."""
super(CreateVRayScene, self).__init__(*args, **kwargs)
self._rs = renderSetup.instance()
self.data["exportOnFarm"] = False
deadline_settings = get_system_settings()["modules"]["deadline"]
@classmethod
def apply_settings(cls, project_settings, system_settings):
cls.render_settings = project_settings["maya"]["RenderSettings"]
manager = ModulesManager()
self.deadline_module = manager.modules_by_name["deadline"]
def create(self, subset_name, instance_data, pre_create_data):
# Only allow a single render instance to exist
if self._get_singleton_node():
raise CreatorError("A Render instance already exists - only "
"one can be configured.")
if not deadline_settings["enabled"]:
self.deadline_servers = {}
return
self._project_settings = get_project_settings(
legacy_io.Session["AVALON_PROJECT"])
super(CreateVRayScene, self).create(subset_name,
instance_data,
pre_create_data)
try:
default_servers = deadline_settings["deadline_urls"]
project_servers = (
self._project_settings["deadline"]["deadline_servers"]
)
self.deadline_servers = {
k: default_servers[k]
for k in project_servers
if k in default_servers
}
# Apply default project render settings on create
if self.render_settings.get("apply_render_settings"):
lib_rendersettings.RenderSettings().set_default_renderer_settings()
if not self.deadline_servers:
self.deadline_servers = default_servers
def get_instance_attr_defs(self):
"""Create instance settings."""
except AttributeError:
# Handle the situation where we had only one URL for Deadline.
# Get the default Deadline webservice URL from the Deadline module.
self.deadline_servers = self.deadline_module.deadline_urls
def process(self):
"""Entry point."""
exists = cmds.ls(self.name)
if exists:
return cmds.warning("%s already exists." % exists[0])
use_selection = self.options.get("useSelection")
with lib.undo_chunk():
self._create_vray_instance_settings()
self.instance = super(CreateVRayScene, self).process()
index = 1
namespace_name = "_{}".format(str(self.instance))
try:
cmds.namespace(rm=namespace_name)
except RuntimeError:
# namespace is not empty, so we leave it untouched
pass
while(cmds.namespace(exists=namespace_name)):
namespace_name = "_{}{}".format(str(self.instance), index)
index += 1
namespace = cmds.namespace(add=namespace_name)
# add Deadline server selection list
if self.deadline_servers:
cmds.scriptJob(
attributeChange=[
"{}.deadlineServers".format(self.instance),
self._deadline_webservice_changed
])
# create namespace with instance
layers = self._rs.getRenderLayers()
if use_selection:
print(">>> processing existing layers")
sets = []
for layer in layers:
print(" - creating set for {}".format(layer.name()))
render_set = cmds.sets(
n="{}:{}".format(namespace, layer.name()))
sets.append(render_set)
cmds.sets(sets, forceElement=self.instance)
# if no render layers are present, create default one with
# asterisk selector
if not layers:
render_layer = self._rs.createRenderLayer('Main')
collection = render_layer.createCollection("defaultCollection")
collection.getSelector().setPattern('*')
def _deadline_webservice_changed(self):
"""Refresh Deadline server dependent options."""
# get selected server
from maya import cmds
webservice = self.deadline_servers[
self.server_aliases[
cmds.getAttr("{}.deadlineServers".format(self.instance))
]
return [
BoolDef("vraySceneMultipleFiles",
label="V-Ray Scene Multiple Files",
default=False),
BoolDef("exportOnFarm",
label="Export on farm",
default=False)
]
pools = self.deadline_module.get_deadline_pools(webservice)
cmds.deleteAttr("{}.primaryPool".format(self.instance))
cmds.deleteAttr("{}.secondaryPool".format(self.instance))
cmds.addAttr(self.instance, longName="primaryPool",
attributeType="enum",
enumName=":".join(pools))
cmds.addAttr(self.instance, longName="secondaryPool",
attributeType="enum",
enumName=":".join(["-"] + pools))
def _create_vray_instance_settings(self):
# get pools
pools = []
system_settings = get_system_settings()["modules"]
deadline_enabled = system_settings["deadline"]["enabled"]
muster_enabled = system_settings["muster"]["enabled"]
muster_url = system_settings["muster"]["MUSTER_REST_URL"]
if deadline_enabled and muster_enabled:
self.log.error(
"Both Deadline and Muster are enabled. " "Cannot support both."
)
raise CreatorError("Both Deadline and Muster are enabled")
self.server_aliases = self.deadline_servers.keys()
self.data["deadlineServers"] = self.server_aliases
if deadline_enabled:
# if default server is not between selected, use first one for
# initial list of pools.
try:
deadline_url = self.deadline_servers["default"]
except KeyError:
deadline_url = [
self.deadline_servers[k]
for k in self.deadline_servers.keys()
][0]
pool_names = self.deadline_module.get_deadline_pools(deadline_url)
if muster_enabled:
self.log.info(">>> Loading Muster credentials ...")
self._load_credentials()
self.log.info(">>> Getting pools ...")
try:
pools = self._get_muster_pools()
except requests.exceptions.HTTPError as e:
if str(e).startswith("401"):
self.log.warning("access token expired")
self._show_login()
raise CreatorError("Access token expired")
except requests.exceptions.ConnectionError:
self.log.error("Cannot connect to Muster API endpoint.")
raise CreatorError("Cannot connect to {}".format(muster_url))
pool_names = []
for pool in pools:
self.log.info(" - pool: {}".format(pool["name"]))
pool_names.append(pool["name"])
self.data["primaryPool"] = pool_names
self.data["suspendPublishJob"] = False
self.data["priority"] = 50
self.data["whitelist"] = False
self.data["machineList"] = ""
self.data["vraySceneMultipleFiles"] = False
self.options = {"useSelection": False} # Force no content
def _load_credentials(self):
"""Load Muster credentials.
Load Muster credentials from file and set ``MUSTER_USER`` and
``MUSTER_PASSWORD``. ``MUSTER_REST_URL`` is loaded from presets.
Raises:
CreatorError: If loaded credentials are invalid.
AttributeError: If ``MUSTER_REST_URL`` is not set.
"""
app_dir = os.path.normpath(appdirs.user_data_dir("pype-app", "pype"))
file_name = "muster_cred.json"
fpath = os.path.join(app_dir, file_name)
cred_file = open(fpath, "r")
muster_json = json.load(cred_file)
cred_file.close()
self._token = muster_json.get("token", None)
if not self._token:
self._show_login()
raise CreatorError("Invalid access token for Muster")
self.MUSTER_REST_URL = os.environ.get("MUSTER_REST_URL")
if not self.MUSTER_REST_URL:
raise AttributeError("Muster REST API url not set")
def _get_muster_pools(self):
"""Get render pools from Muster.
Raises:
CreatorError: If pool list cannot be obtained from Muster.
"""
params = {"authToken": self._token}
api_entry = "/api/pools/list"
response = requests_get(self.MUSTER_REST_URL + api_entry,
params=params)
if response.status_code != 200:
if response.status_code == 401:
self.log.warning("Authentication token expired.")
self._show_login()
else:
self.log.error(
("Cannot get pools from "
"Muster: {}").format(response.status_code)
)
raise CreatorError("Cannot get pools from Muster")
try:
pools = response.json()["ResponseData"]["pools"]
except ValueError as e:
self.log.error("Invalid response from Muster server {}".format(e))
raise CreatorError("Invalid response from Muster server")
return pools
def _show_login(self):
# authentication token expired so we need to login to Muster
# again to get it. We use Pype API call to show login window.
api_url = "{}/muster/show_login".format(
os.environ["OPENPYPE_WEBSERVER_URL"])
self.log.debug(api_url)
login_response = requests_get(api_url, timeout=1)
if login_response.status_code != 200:
self.log.error("Cannot show login form to Muster")
raise CreatorError("Cannot show login form to Muster")

View file

@ -0,0 +1,88 @@
# -*- coding: utf-8 -*-
"""Creator plugin for creating workfiles."""
from openpype.pipeline import CreatedInstance, AutoCreator
from openpype.client import get_asset_by_name
from openpype.hosts.maya.api import plugin
from maya import cmds
class CreateWorkfile(plugin.MayaCreatorBase, AutoCreator):
"""Workfile auto-creator."""
identifier = "io.openpype.creators.maya.workfile"
label = "Workfile"
family = "workfile"
icon = "fa5.file"
default_variant = "Main"
def create(self):
variant = self.default_variant
current_instance = next(
(
instance for instance in self.create_context.instances
if instance.creator_identifier == self.identifier
), None)
project_name = self.project_name
asset_name = self.create_context.get_current_asset_name()
task_name = self.create_context.get_current_task_name()
host_name = self.create_context.host_name
if current_instance is None:
asset_doc = get_asset_by_name(project_name, asset_name)
subset_name = self.get_subset_name(
variant, task_name, asset_doc, project_name, host_name
)
data = {
"asset": asset_name,
"task": task_name,
"variant": variant
}
data.update(
self.get_dynamic_data(
variant, task_name, asset_doc,
project_name, host_name, current_instance)
)
self.log.info("Auto-creating workfile instance...")
current_instance = CreatedInstance(
self.family, subset_name, data, self
)
self._add_instance_to_context(current_instance)
elif (
current_instance["asset"] != asset_name
or current_instance["task"] != task_name
):
# Update instance context if is not the same
asset_doc = get_asset_by_name(project_name, asset_name)
subset_name = self.get_subset_name(
variant, task_name, asset_doc, project_name, host_name
)
current_instance["asset"] = asset_name
current_instance["task"] = task_name
current_instance["subset"] = subset_name
def collect_instances(self):
self.cache_subsets(self.collection_shared_data)
cached_subsets = self.collection_shared_data["maya_cached_subsets"]
for node in cached_subsets.get(self.identifier, []):
node_data = self.read_instance_node(node)
created_instance = CreatedInstance.from_existing(node_data, self)
self._add_instance_to_context(created_instance)
def update_instances(self, update_list):
for created_inst, _changes in update_list:
data = created_inst.data_to_store()
node = data.get("instance_node")
if not node:
node = self.create_node()
created_inst["instance_node"] = node
data = created_inst.data_to_store()
self.imprint_instance_node(node, data)
def create_node(self):
node = cmds.sets(empty=True, name="workfileMain")
cmds.setAttr(node + ".hiddenInOutliner", True)
return node

View file

@ -1,10 +1,10 @@
from openpype.hosts.maya.api import plugin
class CreateXgen(plugin.Creator):
class CreateXgen(plugin.MayaCreator):
"""Xgen"""
name = "xgen"
identifier = "io.openpype.creators.maya.xgen"
label = "Xgen"
family = "xgen"
icon = "pagelines"

View file

@ -1,15 +1,14 @@
from collections import OrderedDict
from openpype.hosts.maya.api import (
lib,
plugin
)
from openpype.lib import NumberDef
class CreateYetiCache(plugin.Creator):
class CreateYetiCache(plugin.MayaCreator):
"""Output for procedural plugin nodes of Yeti """
name = "yetiDefault"
identifier = "io.openpype.creators.maya.yeticache"
label = "Yeti Cache"
family = "yeticache"
icon = "pagelines"
@ -17,14 +16,23 @@ class CreateYetiCache(plugin.Creator):
def __init__(self, *args, **kwargs):
super(CreateYetiCache, self).__init__(*args, **kwargs)
self.data["preroll"] = 0
defs = [
NumberDef("preroll",
label="Preroll",
minimum=0,
default=0,
decimals=0)
]
# Add animation data without step and handles
anim_data = lib.collect_animation_data()
anim_data.pop("step")
anim_data.pop("handleStart")
anim_data.pop("handleEnd")
self.data.update(anim_data)
defs.extend(lib.collect_animation_defs())
remove = {"step", "handleStart", "handleEnd"}
defs = [attr_def for attr_def in defs if attr_def.key not in remove]
# Add samples
self.data["samples"] = 3
# Add samples after frame range
defs.append(
NumberDef("samples",
label="Samples",
default=3,
decimals=0)
)

View file

@ -6,18 +6,22 @@ from openpype.hosts.maya.api import (
)
class CreateYetiRig(plugin.Creator):
class CreateYetiRig(plugin.MayaCreator):
"""Output for procedural plugin nodes ( Yeti / XGen / etc)"""
identifier = "io.openpype.creators.maya.yetirig"
label = "Yeti Rig"
family = "yetiRig"
icon = "usb"
def process(self):
def create(self, subset_name, instance_data, pre_create_data):
with lib.undo_chunk():
instance = super(CreateYetiRig, self).process()
instance = super(CreateYetiRig, self).create(subset_name,
instance_data,
pre_create_data)
instance_node = instance.get("instance_node")
self.log.info("Creating Rig instance set up ...")
input_meshes = cmds.sets(name="input_SET", empty=True)
cmds.sets(input_meshes, forceElement=instance)
cmds.sets(input_meshes, forceElement=instance_node)

View file

@ -29,7 +29,7 @@ class LookLoader(openpype.hosts.maya.api.plugin.ReferenceLoader):
color = "orange"
def process_reference(self, context, name, namespace, options):
import maya.cmds as cmds
from maya import cmds
with lib.maintained_selection():
file_url = self.prepare_root_value(self.fname,
@ -113,8 +113,8 @@ class LookLoader(openpype.hosts.maya.api.plugin.ReferenceLoader):
# region compute lookup
nodes_by_id = defaultdict(list)
for n in nodes:
nodes_by_id[lib.get_id(n)].append(n)
for node in nodes:
nodes_by_id[lib.get_id(node)].append(node)
lib.apply_attributes(attributes, nodes_by_id)
def _get_nodes_with_shader(self, shader_nodes):
@ -125,14 +125,16 @@ class LookLoader(openpype.hosts.maya.api.plugin.ReferenceLoader):
Returns
<list> node names
"""
import maya.cmds as cmds
from maya import cmds
nodes_list = []
for shader in shader_nodes:
connections = cmds.listConnections(cmds.listHistory(shader, f=1),
future = cmds.listHistory(shader, future=True)
connections = cmds.listConnections(future,
type='mesh')
if connections:
for connection in connections:
nodes_list.extend(cmds.listRelatives(connection,
shapes=True))
return nodes_list
# Ensure unique entries only to optimize query and results
connections = list(set(connections))
return cmds.listRelatives(connections,
shapes=True,
fullPath=True) or []
return []
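
A condensed sketch of the shader-to-shape chain used above, assuming a Maya session; the shader name is hypothetical:

from maya import cmds

def shapes_using_shader(shader):
    # Downstream history of a shading node leads to the assigned meshes.
    future = cmds.listHistory(shader, future=True)
    meshes = cmds.listConnections(future, type="mesh")
    if not meshes:
        return []
    meshes = list(set(meshes))  # unique entries keep the query small
    return cmds.listRelatives(meshes, shapes=True, fullPath=True) or []

print(shapes_using_shader("lambert1"))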

View file

@ -221,6 +221,7 @@ class ReferenceLoader(openpype.hosts.maya.api.plugin.ReferenceLoader):
self._lock_camera_transforms(members)
def _post_process_rig(self, name, namespace, context, options):
nodes = self[:]
create_rig_animation_instance(
nodes, context, namespace, options=options, log=self.log

View file

@ -0,0 +1,17 @@
import pyblish.api
from maya import cmds
class CollectCurrentFile(pyblish.api.ContextPlugin):
"""Inject the current working file."""
order = pyblish.api.CollectorOrder - 0.4
label = "Maya Current File"
hosts = ['maya']
families = ["workfile"]
def process(self, context):
"""Inject the current working file"""
context.data['currentFile'] = cmds.file(query=True, sceneName=True)

View file

@ -172,7 +172,7 @@ class CollectUpstreamInputs(pyblish.api.InstancePlugin):
"""Collects inputs from nodes in renderlayer, incl. shaders + camera"""
# Get the renderlayer
renderlayer = instance.data.get("setMembers")
renderlayer = instance.data.get("renderlayer")
if renderlayer == "defaultRenderLayer":
# Assume all loaded containers in the scene are inputs

View file

@ -1,12 +1,11 @@
from maya import cmds
import pyblish.api
import json
from openpype.hosts.maya.api.lib import get_all_children
class CollectInstances(pyblish.api.ContextPlugin):
"""Gather instances by objectSet and pre-defined attribute
class CollectNewInstances(pyblish.api.InstancePlugin):
"""Gather members for instances and pre-defined attribute
This collector takes into account assets that are associated with
an objectSet and marked with a unique identifier;
@ -25,134 +24,70 @@ class CollectInstances(pyblish.api.ContextPlugin):
"""
label = "Collect Instances"
label = "Collect New Instance Data"
order = pyblish.api.CollectorOrder
hosts = ["maya"]
def process(self, context):
def process(self, instance):
objectset = cmds.ls("*.id", long=True, type="objectSet",
recursive=True, objectsOnly=True)
objset = instance.data.get("instance_node")
if not objset:
self.log.debug("Instance has no `instance_node` data")
context.data['objectsets'] = objectset
for objset in objectset:
if not cmds.attributeQuery("id", node=objset, exists=True):
continue
id_attr = "{}.id".format(objset)
if cmds.getAttr(id_attr) != "pyblish.avalon.instance":
continue
# The developer is responsible for specifying
# the family of each instance.
has_family = cmds.attributeQuery("family",
node=objset,
exists=True)
assert has_family, "\"%s\" was missing a family" % objset
members = cmds.sets(objset, query=True)
if members is None:
self.log.warning("Skipped empty instance: \"%s\" " % objset)
continue
self.log.info("Creating instance for {}".format(objset))
data = dict()
# Apply each user defined attribute as data
for attr in cmds.listAttr(objset, userDefined=True) or list():
try:
value = cmds.getAttr("%s.%s" % (objset, attr))
except Exception:
# Some attributes cannot be read directly,
# such as mesh and color attributes. These
# are considered non-essential to this
# particular publishing pipeline.
value = None
data[attr] = value
# temporarily translation of `active` to `publish` till issue has
# been resolved, https://github.com/pyblish/pyblish-base/issues/307
if "active" in data:
data["publish"] = data["active"]
# TODO: We might not want to do this in the future
# Merge creator attributes into instance.data just backwards compatible
# code still runs as expected
creator_attributes = instance.data.get("creator_attributes", {})
if creator_attributes:
instance.data.update(creator_attributes)
members = cmds.sets(objset, query=True) or []
if members:
# Collect members
members = cmds.ls(members, long=True) or []
dag_members = cmds.ls(members, type="dagNode", long=True)
children = get_all_children(dag_members)
children = cmds.ls(children, noIntermediate=True, long=True)
parents = []
if data.get("includeParentHierarchy", True):
# If `includeParentHierarchy` then include the parents
# so they will also be picked up in the instance by validators
parents = self.get_all_parents(members)
parents = (
self.get_all_parents(members)
if creator_attributes.get("includeParentHierarchy", True)
else []
)
members_hierarchy = list(set(members + children + parents))
if 'families' not in data:
data['families'] = [data.get('family')]
# Create the instance
instance = context.create_instance(objset)
instance[:] = members_hierarchy
instance.data["objset"] = objset
# Store the exact members of the object set
instance.data["setMembers"] = members
elif instance.data["family"] != "workfile":
self.log.warning("Empty instance: \"%s\" " % objset)
# Store the exact members of the object set
instance.data["setMembers"] = members
# Define nice label
name = cmds.ls(objset, long=False)[0] # use short name
label = "{0} ({1})".format(name, data["asset"])
# TODO: This might make more sense as a separate collector
# Convert frame values to integers
for attr_name in (
"handleStart", "handleEnd", "frameStart", "frameEnd",
):
value = instance.data.get(attr_name)
if value is not None:
instance.data[attr_name] = int(value)
# Convert frame values to integers
for attr_name in (
"handleStart", "handleEnd", "frameStart", "frameEnd",
):
value = data.get(attr_name)
if value is not None:
data[attr_name] = int(value)
# Append start frame and end frame to label if present
if "frameStart" in instance.data and "frameEnd" in instance.data:
# Take handles from context if not set locally on the instance
for key in ["handleStart", "handleEnd"]:
if key not in instance.data:
value = instance.context.data[key]
if value is not None:
value = int(value)
instance.data[key] = value
# Append start frame and end frame to label if present
if "frameStart" in data and "frameEnd" in data:
# Take handles from context if not set locally on the instance
for key in ["handleStart", "handleEnd"]:
if key not in data:
value = context.data[key]
if value is not None:
value = int(value)
data[key] = value
data["frameStartHandle"] = int(
data["frameStart"] - data["handleStart"]
)
data["frameEndHandle"] = int(
data["frameEnd"] + data["handleEnd"]
)
label += " [{0}-{1}]".format(
data["frameStartHandle"], data["frameEndHandle"]
)
instance.data["label"] = label
instance.data.update(data)
self.log.debug("{}".format(instance.data))
# Produce diagnostic message for any graphical
# user interface interested in visualising it.
self.log.info("Found: \"%s\" " % instance.data["name"])
self.log.debug(
"DATA: {} ".format(json.dumps(instance.data, indent=4)))
def sort_by_family(instance):
"""Sort by family"""
return instance.data.get("families", instance.data.get("family"))
# Sort/grouped by family (preserving local index)
context[:] = sorted(context, key=sort_by_family)
return context
instance.data["frameStartHandle"] = int(
instance.data["frameStart"] - instance.data["handleStart"]
)
instance.data["frameEndHandle"] = int(
instance.data["frameEnd"] + instance.data["handleEnd"]
)
def get_all_parents(self, nodes):
"""Get all parents by using string operations (optimization)

View file

@ -285,17 +285,17 @@ class CollectLook(pyblish.api.InstancePlugin):
instance: Instance to collect.
"""
self.log.info("Looking for look associations "
self.log.debug("Looking for look associations "
"for %s" % instance.data['name'])
# Discover related object sets
self.log.info("Gathering sets ...")
self.log.debug("Gathering sets ...")
sets = self.collect_sets(instance)
# Lookup set (optimization)
instance_lookup = set(cmds.ls(instance, long=True))
self.log.info("Gathering set relations ...")
self.log.debug("Gathering set relations ...")
# Ensure iteration happen in a list so we can remove keys from the
# dict within the loop
@ -308,7 +308,7 @@ class CollectLook(pyblish.api.InstancePlugin):
# if node is specified as renderer node type, it will be
# serialized with its attributes.
if cmds.nodeType(obj_set) in RENDERER_NODE_TYPES:
self.log.info("- {} is {}".format(
self.log.debug("- {} is {}".format(
obj_set, cmds.nodeType(obj_set)))
node_attrs = []
@ -354,13 +354,13 @@ class CollectLook(pyblish.api.InstancePlugin):
# Remove sets that didn't have any members assigned in the end
# Thus the data will be limited to only what we need.
self.log.info("obj_set {}".format(sets[obj_set]))
self.log.debug("obj_set {}".format(sets[obj_set]))
if not sets[obj_set]["members"]:
self.log.info(
"Removing redundant set information: {}".format(obj_set))
sets.pop(obj_set, None)
self.log.info("Gathering attribute changes to instance members..")
self.log.debug("Gathering attribute changes to instance members..")
attributes = self.collect_attributes_changed(instance)
# Store data on the instance
@ -433,14 +433,14 @@ class CollectLook(pyblish.api.InstancePlugin):
for node_type in all_supported_nodes:
files.extend(cmds.ls(history, type=node_type, long=True))
self.log.info("Collected file nodes:\n{}".format(files))
self.log.debug("Collected file nodes:\n{}".format(files))
# Collect textures if any file nodes are found
instance.data["resources"] = []
for n in files:
for res in self.collect_resources(n):
instance.data["resources"].append(res)
self.log.info("Collected resources: {}".format(
self.log.debug("Collected resources: {}".format(
instance.data["resources"]))
# Log warning when no relevant sets were retrieved for the look.
@ -536,7 +536,7 @@ class CollectLook(pyblish.api.InstancePlugin):
# Collect changes to "custom" attributes
node_attrs = get_look_attrs(node)
self.log.info(
self.log.debug(
"Node \"{0}\" attributes: {1}".format(node, node_attrs)
)

View file

@ -16,14 +16,16 @@ class CollectPointcache(pyblish.api.InstancePlugin):
instance.data["families"].append("publish.farm")
proxy_set = None
for node in instance.data["setMembers"]:
if cmds.nodeType(node) != "objectSet":
continue
members = cmds.sets(node, query=True)
if members is None:
self.log.warning("Skipped empty objectset: \"%s\" " % node)
continue
for node in cmds.ls(instance.data["setMembers"],
exactType="objectSet"):
# Find proxy_SET objectSet in the instance for proxy meshes
if node.endswith("proxy_SET"):
members = cmds.sets(node, query=True)
if members is None:
self.log.debug("Skipped empty proxy_SET: \"%s\" " % node)
continue
self.log.debug("Found proxy set: {}".format(node))
proxy_set = node
instance.data["proxy"] = []
instance.data["proxyRoots"] = []
@ -36,8 +38,9 @@ class CollectPointcache(pyblish.api.InstancePlugin):
cmds.listRelatives(member, shapes=True, fullPath=True)
)
self.log.debug(
"proxy members: {}".format(instance.data["proxy"])
"Found proxy members: {}".format(instance.data["proxy"])
)
break
if proxy_set:
instance.remove(proxy_set)
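
A tiny sketch of the proxy lookup above: narrow the set members to object sets first, then match by suffix (hypothetical names; assumes a Maya session):

from maya import cmds

set_members = ["|char_GRP", "charMain_proxy_SET"]  # hypothetical members
for node in cmds.ls(set_members, exactType="objectSet"):
    if node.endswith("proxy_SET"):
        print("Found proxy set: {}".format(node))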

View file

@ -39,27 +39,29 @@ Provides:
instance -> pixelAspect
"""
import re
import os
import platform
import json
from maya import cmds
import maya.app.renderSetup.model.renderSetup as renderSetup
import pyblish.api
from openpype.pipeline import KnownPublishError
from openpype.lib import get_formatted_current_time
from openpype.pipeline import legacy_io
from openpype.hosts.maya.api.lib_renderproducts import get as get_layer_render_products # noqa: E501
from openpype.hosts.maya.api.lib_renderproducts import (
get as get_layer_render_products,
UnsupportedRendererException
)
from openpype.hosts.maya.api import lib
class CollectMayaRender(pyblish.api.ContextPlugin):
class CollectMayaRender(pyblish.api.InstancePlugin):
"""Gather all publishable render layers from renderSetup."""
order = pyblish.api.CollectorOrder + 0.01
hosts = ["maya"]
families = ["renderlayer"]
label = "Collect Render Layers"
sync_workfile_version = False
@ -69,388 +71,251 @@ class CollectMayaRender(pyblish.api.ContextPlugin):
"underscore": "_"
}
def process(self, context):
"""Entry point to collector."""
render_instance = None
def process(self, instance):
for instance in context:
if "rendering" in instance.data["families"]:
render_instance = instance
render_instance.data["remove"] = True
# TODO: Re-add force enable of workfile instance?
# TODO: Re-add legacy layer support with LAYER_ prefix but in Creator
# TODO: Set and collect active state of RenderLayer in Creator using
# renderlayer.isRenderable()
context = instance.context
# make sure workfile instance publishing is enabled
if "workfile" in instance.data["families"]:
instance.data["publish"] = True
if not render_instance:
self.log.info(
"No render instance found, skipping render "
"layer collection."
)
return
render_globals = render_instance
collected_render_layers = render_instance.data["setMembers"]
layer = instance.data["transientData"]["layer"]
objset = instance.data.get("instance_node")
filepath = context.data["currentFile"].replace("\\", "/")
asset = legacy_io.Session["AVALON_ASSET"]
workspace = context.data["workspaceDir"]
# Retrieve render setup layers
rs = renderSetup.instance()
maya_render_layers = {
layer.name(): layer for layer in rs.getRenderLayers()
# check if layer is renderable
if not layer.isRenderable():
msg = "Render layer [ {} ] is not " "renderable".format(
layer.name()
)
self.log.warning(msg)
# detect if there are sets (subsets) to attach render to
sets = cmds.sets(objset, query=True) or []
attach_to = []
for s in sets:
if not cmds.attributeQuery("family", node=s, exists=True):
continue
attach_to.append(
{
"version": None, # we need integrator for that
"subset": s,
"family": cmds.getAttr("{}.family".format(s)),
}
)
self.log.info(" -> attach render to: {}".format(s))
layer_name = layer.name()
# collect all frames we are expecting to be rendered
# return all expected files for all cameras and aovs in given
# frame range
try:
layer_render_products = get_layer_render_products(layer.name())
except UnsupportedRendererException as exc:
raise KnownPublishError(exc)
render_products = layer_render_products.layer_data.products
assert render_products, "no render products generated"
expected_files = []
multipart = False
for product in render_products:
if product.multipart:
multipart = True
product_name = product.productName
if product.camera and layer_render_products.has_camera_token():
product_name = "{}{}".format(
product.camera,
"_{}".format(product_name) if product_name else "")
expected_files.append(
{
product_name: layer_render_products.get_files(
product)
})
has_cameras = any(product.camera for product in render_products)
assert has_cameras, "No render cameras found."
self.log.info("multipart: {}".format(
multipart))
assert expected_files, "no file names were generated, this is a bug"
self.log.info(
"expected files: {}".format(
json.dumps(expected_files, indent=4, sort_keys=True)
)
)
# if we want to attach render to subset, check if we have AOV's
# in expectedFiles. If so, raise error as we cannot attach AOV
# (considered to be subset on its own) to another subset
if attach_to:
assert isinstance(expected_files, list), (
"attaching multiple AOVs or renderable cameras to "
"subset is not supported"
)
# append full path
aov_dict = {}
default_render_folder = context.data.get("project_settings")\
.get("maya")\
.get("RenderSettings")\
.get("default_render_image_folder") or ""
# replace relative paths with absolute. Render products are
# returned as list of dictionaries.
publish_meta_path = None
for aov in expected_files:
full_paths = []
aov_first_key = list(aov.keys())[0]
for file in aov[aov_first_key]:
full_path = os.path.join(workspace, default_render_folder,
file)
full_path = full_path.replace("\\", "/")
full_paths.append(full_path)
publish_meta_path = os.path.dirname(full_path)
aov_dict[aov_first_key] = full_paths
full_exp_files = [aov_dict]
self.log.info(full_exp_files)
if publish_meta_path is None:
raise KnownPublishError("Unable to detect any expected output "
"images for: {}. Make sure you have a "
"renderable camera and a valid frame "
"range set for your renderlayer."
"".format(instance.name))
frame_start_render = int(self.get_render_attribute(
"startFrame", layer=layer_name))
frame_end_render = int(self.get_render_attribute(
"endFrame", layer=layer_name))
if (int(context.data["frameStartHandle"]) == frame_start_render
and int(context.data["frameEndHandle"]) == frame_end_render): # noqa: W503, E501
handle_start = context.data["handleStart"]
handle_end = context.data["handleEnd"]
frame_start = context.data["frameStart"]
frame_end = context.data["frameEnd"]
frame_start_handle = context.data["frameStartHandle"]
frame_end_handle = context.data["frameEndHandle"]
else:
handle_start = 0
handle_end = 0
frame_start = frame_start_render
frame_end = frame_end_render
frame_start_handle = frame_start_render
frame_end_handle = frame_end_render
# find common path to store metadata
# so if image prefix is branching to many directories
# metadata file will be located in top-most common
# directory.
# TODO: use `os.path.commonpath()` after switch to Python 3
publish_meta_path = os.path.normpath(publish_meta_path)
common_publish_meta_path = os.path.splitdrive(
publish_meta_path)[0]
if common_publish_meta_path:
common_publish_meta_path += os.path.sep
for part in publish_meta_path.replace(
common_publish_meta_path, "").split(os.path.sep):
common_publish_meta_path = os.path.join(
common_publish_meta_path, part)
if part == layer_name:
break
# TODO: replace this terrible linux hotfix with real solution :)
if platform.system().lower() in ["linux", "darwin"]:
common_publish_meta_path = "/" + common_publish_meta_path
self.log.info(
"Publish meta path: {}".format(common_publish_meta_path))
# Get layer specific settings, might be overrides
colorspace_data = lib.get_color_management_preferences()
data = {
"farm": True,
"attachTo": attach_to,
"multipartExr": multipart,
"review": instance.data.get("review") or False,
# Frame range
"handleStart": handle_start,
"handleEnd": handle_end,
"frameStart": frame_start,
"frameEnd": frame_end,
"frameStartHandle": frame_start_handle,
"frameEndHandle": frame_end_handle,
"byFrameStep": int(
self.get_render_attribute("byFrameStep",
layer=layer_name)),
# Renderlayer
"renderer": self.get_render_attribute(
"currentRenderer", layer=layer_name).lower(),
"setMembers": layer._getLegacyNodeName(), # legacy renderlayer
"renderlayer": layer_name,
# todo: is `time` and `author` still needed?
"time": get_formatted_current_time(),
"author": context.data["user"],
# Add source to allow tracing back to the scene from
# which was submitted originally
"source": filepath,
"expectedFiles": full_exp_files,
"publishRenderMetadataFolder": common_publish_meta_path,
"renderProducts": layer_render_products,
"resolutionWidth": lib.get_attr_in_layer(
"defaultResolution.width", layer=layer_name
),
"resolutionHeight": lib.get_attr_in_layer(
"defaultResolution.height", layer=layer_name
),
"pixelAspect": lib.get_attr_in_layer(
"defaultResolution.pixelAspect", layer=layer_name
),
# todo: Following are likely not needed due to collecting from the
# instance itself if they are attribute definitions
"tileRendering": instance.data.get("tileRendering") or False, # noqa: E501
"tilesX": instance.data.get("tilesX") or 2,
"tilesY": instance.data.get("tilesY") or 2,
"convertToScanline": instance.data.get(
"convertToScanline") or False,
"useReferencedAovs": instance.data.get(
"useReferencedAovs") or instance.data.get(
"vrayUseReferencedAovs") or False,
"aovSeparator": layer_render_products.layer_data.aov_separator, # noqa: E501
"renderSetupIncludeLights": instance.data.get(
"renderSetupIncludeLights"
),
"colorspaceConfig": colorspace_data["config"],
"colorspaceDisplay": colorspace_data["display"],
"colorspaceView": colorspace_data["view"],
}
for layer in collected_render_layers:
if layer.startswith("LAYER_"):
# this is support for legacy mode where render layers
# started with `LAYER_` prefix.
layer_name_pattern = r"^LAYER_(.*)"
else:
# new way is to prefix render layer name with instance
# namespace.
layer_name_pattern = r"^.+:(.*)"
if self.sync_workfile_version:
data["version"] = context.data["version"]
for instance in context:
if instance.data['family'] == "workfile":
instance.data["version"] = context.data["version"]
# todo: We should have a more explicit way to link the renderlayer
match = re.match(layer_name_pattern, layer)
if not match:
msg = "Invalid layer name in set [ {} ]".format(layer)
self.log.warning(msg)
continue
expected_layer_name = match.group(1)
self.log.info("Processing '{}' as layer [ {} ]"
"".format(layer, expected_layer_name))
# check if layer is part of renderSetup
if expected_layer_name not in maya_render_layers:
msg = "Render layer [ {} ] is not in " "Render Setup".format(
expected_layer_name
)
self.log.warning(msg)
continue
# check if layer is renderable
if not maya_render_layers[expected_layer_name].isRenderable():
msg = "Render layer [ {} ] is not " "renderable".format(
expected_layer_name
)
self.log.warning(msg)
continue
# detect if there are sets (subsets) to attach render to
sets = cmds.sets(layer, query=True) or []
attach_to = []
for s in sets:
if not cmds.attributeQuery("family", node=s, exists=True):
continue
attach_to.append(
{
"version": None, # we need integrator for that
"subset": s,
"family": cmds.getAttr("{}.family".format(s)),
}
)
self.log.info(" -> attach render to: {}".format(s))
layer_name = "rs_{}".format(expected_layer_name)
# collect all frames we are expecting to be rendered
# return all expected files for all cameras and aovs in given
# frame range
layer_render_products = get_layer_render_products(layer_name)
render_products = layer_render_products.layer_data.products
assert render_products, "no render products generated"
exp_files = []
multipart = False
for product in render_products:
if product.multipart:
multipart = True
product_name = product.productName
if product.camera and layer_render_products.has_camera_token():
product_name = "{}{}".format(
product.camera,
"_" + product_name if product_name else "")
exp_files.append(
{
product_name: layer_render_products.get_files(
product)
})
has_cameras = any(product.camera for product in render_products)
assert has_cameras, "No render cameras found."
self.log.info("multipart: {}".format(
multipart))
assert exp_files, "no file names were generated, this is a bug"
self.log.info(
"expected files: {}".format(
json.dumps(exp_files, indent=4, sort_keys=True)
)
)
# if we want to attach render to subset, check if we have AOV's
# in expectedFiles. If so, raise error as we cannot attach AOV
# (considered to be subset on its own) to another subset
if attach_to:
assert isinstance(exp_files, list), (
"attaching multiple AOVs or renderable cameras to "
"subset is not supported"
)
# append full path
aov_dict = {}
default_render_file = context.data.get('project_settings')\
.get('maya')\
.get('RenderSettings')\
.get('default_render_image_folder') or ""
# replace relative paths with absolute. Render products are
# returned as list of dictionaries.
publish_meta_path = None
for aov in exp_files:
full_paths = []
aov_first_key = list(aov.keys())[0]
for file in aov[aov_first_key]:
full_path = os.path.join(workspace, default_render_file,
file)
full_path = full_path.replace("\\", "/")
full_paths.append(full_path)
publish_meta_path = os.path.dirname(full_path)
aov_dict[aov_first_key] = full_paths
full_exp_files = [aov_dict]
frame_start_render = int(self.get_render_attribute(
"startFrame", layer=layer_name))
frame_end_render = int(self.get_render_attribute(
"endFrame", layer=layer_name))
if (int(context.data['frameStartHandle']) == frame_start_render
and int(context.data['frameEndHandle']) == frame_end_render): # noqa: W503, E501
handle_start = context.data['handleStart']
handle_end = context.data['handleEnd']
frame_start = context.data['frameStart']
frame_end = context.data['frameEnd']
frame_start_handle = context.data['frameStartHandle']
frame_end_handle = context.data['frameEndHandle']
else:
handle_start = 0
handle_end = 0
frame_start = frame_start_render
frame_end = frame_end_render
frame_start_handle = frame_start_render
frame_end_handle = frame_end_render
# find common path to store metadata
# so if image prefix is branching to many directories
# metadata file will be located in top-most common
# directory.
# TODO: use `os.path.commonpath()` after switch to Python 3
publish_meta_path = os.path.normpath(publish_meta_path)
common_publish_meta_path = os.path.splitdrive(
publish_meta_path)[0]
if common_publish_meta_path:
common_publish_meta_path += os.path.sep
for part in publish_meta_path.replace(
common_publish_meta_path, "").split(os.path.sep):
common_publish_meta_path = os.path.join(
common_publish_meta_path, part)
if part == expected_layer_name:
break
# TODO: replace this terrible linux hotfix with real solution :)
if platform.system().lower() in ["linux", "darwin"]:
common_publish_meta_path = "/" + common_publish_meta_path
self.log.info(
"Publish meta path: {}".format(common_publish_meta_path))
self.log.info(full_exp_files)
self.log.info("collecting layer: {}".format(layer_name))
# Get layer specific settings, might be overrides
colorspace_data = lib.get_color_management_preferences()
data = {
"subset": expected_layer_name,
"attachTo": attach_to,
"setMembers": layer_name,
"multipartExr": multipart,
"review": render_instance.data.get("review") or False,
"publish": True,
"handleStart": handle_start,
"handleEnd": handle_end,
"frameStart": frame_start,
"frameEnd": frame_end,
"frameStartHandle": frame_start_handle,
"frameEndHandle": frame_end_handle,
"byFrameStep": int(
self.get_render_attribute("byFrameStep",
layer=layer_name)),
"renderer": self.get_render_attribute(
"currentRenderer", layer=layer_name).lower(),
# instance subset
"family": "renderlayer",
"families": ["renderlayer"],
"asset": asset,
"time": get_formatted_current_time(),
"author": context.data["user"],
# Add source to allow tracing back to the scene from
# which was submitted originally
"source": filepath,
"expectedFiles": full_exp_files,
"publishRenderMetadataFolder": common_publish_meta_path,
"renderProducts": layer_render_products,
"resolutionWidth": lib.get_attr_in_layer(
"defaultResolution.width", layer=layer_name
),
"resolutionHeight": lib.get_attr_in_layer(
"defaultResolution.height", layer=layer_name
),
"pixelAspect": lib.get_attr_in_layer(
"defaultResolution.pixelAspect", layer=layer_name
),
"tileRendering": render_instance.data.get("tileRendering") or False, # noqa: E501
"tilesX": render_instance.data.get("tilesX") or 2,
"tilesY": render_instance.data.get("tilesY") or 2,
"priority": render_instance.data.get("priority"),
"convertToScanline": render_instance.data.get(
"convertToScanline") or False,
"useReferencedAovs": render_instance.data.get(
"useReferencedAovs") or render_instance.data.get(
"vrayUseReferencedAovs") or False,
"aovSeparator": layer_render_products.layer_data.aov_separator, # noqa: E501
"renderSetupIncludeLights": render_instance.data.get(
"renderSetupIncludeLights"
),
"colorspaceConfig": colorspace_data["config"],
"colorspaceDisplay": colorspace_data["display"],
"colorspaceView": colorspace_data["view"],
"strict_error_checking": render_instance.data.get(
"strict_error_checking", True
)
}
# Collect Deadline url if Deadline module is enabled
deadline_settings = (
context.data["system_settings"]["modules"]["deadline"]
)
if deadline_settings["enabled"]:
data["deadlineUrl"] = render_instance.data["deadlineUrl"]
if self.sync_workfile_version:
data["version"] = context.data["version"]
for instance in context:
if instance.data['family'] == "workfile":
instance.data["version"] = context.data["version"]
# handle standalone renderers
if render_instance.data.get("vrayScene") is True:
data["families"].append("vrayscene_render")
if render_instance.data.get("assScene") is True:
data["families"].append("assscene_render")
# Include (optional) global settings
# Get global overrides and translate to Deadline values
overrides = self.parse_options(str(render_globals))
data.update(**overrides)
# get string values for pools
primary_pool = overrides["renderGlobals"]["Pool"]
secondary_pool = overrides["renderGlobals"].get("SecondaryPool")
data["primaryPool"] = primary_pool
data["secondaryPool"] = secondary_pool
# Define nice label
label = "{0} ({1})".format(expected_layer_name, data["asset"])
label += " [{0}-{1}]".format(
int(data["frameStartHandle"]), int(data["frameEndHandle"])
)
instance = context.create_instance(expected_layer_name)
instance.data["label"] = label
instance.data["farm"] = True
instance.data.update(data)
def parse_options(self, render_globals):
"""Get all overrides with a value, skip those without.
Here's the kicker. These globals override defaults in the submission
integrator, but an empty value means no overriding is made.
Otherwise, Frames would override the default frames set under globals.
Args:
render_globals (str): collection of render globals
Returns:
dict: only overrides with values
"""
attributes = lib.read(render_globals)
options = {"renderGlobals": {}}
options["renderGlobals"]["Priority"] = attributes["priority"]
# Check for specific pools
pool_a, pool_b = self._discover_pools(attributes)
options["renderGlobals"].update({"Pool": pool_a})
if pool_b:
options["renderGlobals"].update({"SecondaryPool": pool_b})
# Machine list
machine_list = attributes["machineList"]
if machine_list:
key = "Whitelist" if attributes["whitelist"] else "Blacklist"
options["renderGlobals"][key] = machine_list
# Suspend publish job
state = "Suspended" if attributes["suspendPublishJob"] else "Active"
options["publishJobState"] = state
chunksize = attributes.get("framesPerTask", 1)
options["renderGlobals"]["ChunkSize"] = chunksize
# Define nice label
label = "{0} ({1})".format(layer_name, instance.data["asset"])
label += " [{0}-{1}]".format(
int(data["frameStartHandle"]), int(data["frameEndHandle"])
)
data["label"] = label
# Override frames should be False if extendFrames is False. This is
# to ensure it doesn't go off doing crazy unpredictable things
override_frames = False
extend_frames = attributes.get("extendFrames", False)
if extend_frames:
override_frames = attributes.get("overrideExistingFrame", False)
extend_frames = instance.data.get("extendFrames", False)
if not extend_frames:
instance.data["overrideExistingFrame"] = False
options["extendFrames"] = extend_frames
options["overrideExistingFrame"] = override_frames
maya_render_plugin = "MayaBatch"
options["mayaRenderPlugin"] = maya_render_plugin
return options
def _discover_pools(self, attributes):
pool_a = None
pool_b = None
# Check for specific pools
pool_b = []
if "primaryPool" in attributes:
pool_a = attributes["primaryPool"]
if "secondaryPool" in attributes:
pool_b = attributes["secondaryPool"]
else:
# Backwards compatibility
pool_str = attributes.get("pools", None)
if pool_str:
pool_a, pool_b = pool_str.split(";")
# Ensure empty entry token is caught
if pool_b == "-":
pool_b = None
return pool_a, pool_b
# Update the instance
instance.data.update(data)
@staticmethod
def get_render_attribute(attr, layer):
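
On the `os.path.commonpath()` TODO above: once Python 3 is guaranteed, the hand-rolled drive-split walk can collapse to a one-liner over the expected output paths. A sketch with hypothetical render paths:

import os

paths = [
    "/projects/show/renders/rs_main/beauty/beauty.1001.exr",
    "/projects/show/renders/rs_main/ao/ao.1001.exr",
]
publish_meta_path = os.path.commonpath(
    [os.path.dirname(path) for path in paths])
print(publish_meta_path)  # /projects/show/renders/rs_main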

View file

@ -50,7 +50,7 @@ class CollectRenderLayerAOVS(pyblish.api.InstancePlugin):
result = []
# Collect all AOVs / Render Elements
layer = instance.data["setMembers"]
layer = instance.data["renderlayer"]
node_type = rp_node_types[renderer]
render_elements = cmds.ls(type=node_type)

View file

@ -19,7 +19,7 @@ class CollectRenderableCamera(pyblish.api.InstancePlugin):
if "vrayscene_layer" in instance.data.get("families", []):
layer = instance.data.get("layer")
else:
layer = instance.data["setMembers"]
layer = instance.data["renderlayer"]
self.log.info("layer: {}".format(layer))
cameras = cmds.ls(type="camera", long=True)

View file

@ -18,14 +18,10 @@ class CollectReview(pyblish.api.InstancePlugin):
def process(self, instance):
self.log.debug('instance: {}'.format(instance))
task = legacy_io.Session["AVALON_TASK"]
# Get panel.
instance.data["panel"] = cmds.playblast(
activeEditor=True
).split("|")[-1]
).rsplit("|", 1)[-1]
# get cameras
members = instance.data['setMembers']
@ -34,11 +30,12 @@ class CollectReview(pyblish.api.InstancePlugin):
camera = cameras[0] if cameras else None
context = instance.context
objectset = context.data['objectsets']
objectset = {
i.data.get("instance_node") for i in context
}
# Convert enum attribute index to string for Display Lights.
index = instance.data.get("displayLights", 0)
display_lights = lib.DISPLAY_LIGHTS_VALUES[index]
# Collect display lights.
display_lights = instance.data.get("displayLights", "default")
if display_lights == "project_settings":
settings = instance.context.data["project_settings"]
settings = settings["maya"]["publish"]["ExtractPlayblast"]
@ -60,7 +57,7 @@ class CollectReview(pyblish.api.InstancePlugin):
burninDataMembers["focalLength"] = focal_length
# Account for nested instances like model.
reviewable_subsets = list(set(members) & set(objectset))
reviewable_subsets = list(set(members) & objectset)
if reviewable_subsets:
if len(reviewable_subsets) > 1:
raise KnownPublishError(
@ -97,7 +94,11 @@ class CollectReview(pyblish.api.InstancePlugin):
data["frameStart"] = instance.data["frameStart"]
data["frameEnd"] = instance.data["frameEnd"]
data['step'] = instance.data['step']
data['fps'] = instance.data['fps']
# this (with other time related data) should be set on
# representations. Once plugins like Extract Review start
# using representations, this should be removed from here
# as Extract Playblast is already adding fps to representation.
data['fps'] = context.data['fps']
data['review_width'] = instance.data['review_width']
data['review_height'] = instance.data['review_height']
data["isolate"] = instance.data["isolate"]
@ -112,6 +113,7 @@ class CollectReview(pyblish.api.InstancePlugin):
instance.data['remove'] = True
else:
task = legacy_io.Session["AVALON_TASK"]
legacy_subset_name = task + 'Review'
asset_doc = instance.context.data['assetEntity']
project_name = legacy_io.active_project()
@ -133,6 +135,11 @@ class CollectReview(pyblish.api.InstancePlugin):
instance.data["frameEndHandle"]
instance.data["displayLights"] = display_lights
instance.data["burninDataMembers"] = burninDataMembers
# this (with other time related data) should be set on
# representations. Once plugins like Extract Review start
# using representations, this should be removed from here
# as Extract Playblast is already adding fps to representation.
instance.data["fps"] = instance.context.data["fps"]
# make ftrack publishable
instance.data.setdefault("families", []).append('ftrack')

View file

@ -24,129 +24,91 @@ class CollectVrayScene(pyblish.api.InstancePlugin):
def process(self, instance):
"""Collector entry point."""
collected_render_layers = instance.data["setMembers"]
instance.data["remove"] = True
context = instance.context
_rs = renderSetup.instance()
# current_layer = _rs.getVisibleRenderLayer()
layer = instance.data["transientData"]["layer"]
layer_name = layer.name()
renderer = self.get_render_attribute("currentRenderer",
layer=layer_name)
if renderer != "vray":
self.log.warning("Layer '{}' renderer is not set to V-Ray".format(
layer_name
))
# collect all frames we are expecting to be rendered
renderer = cmds.getAttr(
"defaultRenderGlobals.currentRenderer"
).lower()
frame_start_render = int(self.get_render_attribute(
"startFrame", layer=layer_name))
frame_end_render = int(self.get_render_attribute(
"endFrame", layer=layer_name))
if renderer != "vray":
raise AssertionError("Vray is not enabled.")
if (int(context.data['frameStartHandle']) == frame_start_render
and int(context.data['frameEndHandle']) == frame_end_render): # noqa: W503, E501
maya_render_layers = {
layer.name(): layer for layer in _rs.getRenderLayers()
handle_start = context.data['handleStart']
handle_end = context.data['handleEnd']
frame_start = context.data['frameStart']
frame_end = context.data['frameEnd']
frame_start_handle = context.data['frameStartHandle']
frame_end_handle = context.data['frameEndHandle']
else:
handle_start = 0
handle_end = 0
frame_start = frame_start_render
frame_end = frame_end_render
frame_start_handle = frame_start_render
frame_end_handle = frame_end_render
# Get layer specific settings, might be overrides
data = {
"subset": layer_name,
"layer": layer_name,
# TODO: This likely needs fixing now
# Before refactor: cmds.sets(layer, q=True) or ["*"]
"setMembers": ["*"],
"review": False,
"publish": True,
"handleStart": handle_start,
"handleEnd": handle_end,
"frameStart": frame_start,
"frameEnd": frame_end,
"frameStartHandle": frame_start_handle,
"frameEndHandle": frame_end_handle,
"byFrameStep": int(
self.get_render_attribute("byFrameStep",
layer=layer_name)),
"renderer": renderer,
# instance subset
"family": "vrayscene_layer",
"families": ["vrayscene_layer"],
"time": get_formatted_current_time(),
"author": context.data["user"],
# Add source to allow tracing back to the scene from
# which was submitted originally
"source": context.data["currentFile"].replace("\\", "/"),
"resolutionWidth": lib.get_attr_in_layer(
"defaultResolution.height", layer=layer_name
),
"resolutionHeight": lib.get_attr_in_layer(
"defaultResolution.width", layer=layer_name
),
"pixelAspect": lib.get_attr_in_layer(
"defaultResolution.pixelAspect", layer=layer_name
),
"priority": instance.data.get("priority"),
"useMultipleSceneFiles": instance.data.get(
"vraySceneMultipleFiles")
}
layer_list = []
for layer in collected_render_layers:
# every layer in set should start with `LAYER_` prefix
try:
expected_layer_name = re.search(r"^.+:(.*)", layer).group(1)
except AttributeError:
msg = "Invalid layer name in set [ {} ]".format(layer)
self.log.warning(msg)
continue
instance.data.update(data)
self.log.info("processing %s" % layer)
# check if layer is part of renderSetup
if expected_layer_name not in maya_render_layers:
msg = "Render layer [ {} ] is not in " "Render Setup".format(
expected_layer_name
)
self.log.warning(msg)
continue
# check if layer is renderable
if not maya_render_layers[expected_layer_name].isRenderable():
msg = "Render layer [ {} ] is not " "renderable".format(
expected_layer_name
)
self.log.warning(msg)
continue
layer_name = "rs_{}".format(expected_layer_name)
self.log.debug(expected_layer_name)
layer_list.append(expected_layer_name)
frame_start_render = int(self.get_render_attribute(
"startFrame", layer=layer_name))
frame_end_render = int(self.get_render_attribute(
"endFrame", layer=layer_name))
if (int(context.data['frameStartHandle']) == frame_start_render
and int(context.data['frameEndHandle']) == frame_end_render): # noqa: W503, E501
handle_start = context.data['handleStart']
handle_end = context.data['handleEnd']
frame_start = context.data['frameStart']
frame_end = context.data['frameEnd']
frame_start_handle = context.data['frameStartHandle']
frame_end_handle = context.data['frameEndHandle']
else:
handle_start = 0
handle_end = 0
frame_start = frame_start_render
frame_end = frame_end_render
frame_start_handle = frame_start_render
frame_end_handle = frame_end_render
# Get layer specific settings, might be overrides
data = {
"subset": expected_layer_name,
"layer": layer_name,
"setMembers": cmds.sets(layer, q=True) or ["*"],
"review": False,
"publish": True,
"handleStart": handle_start,
"handleEnd": handle_end,
"frameStart": frame_start,
"frameEnd": frame_end,
"frameStartHandle": frame_start_handle,
"frameEndHandle": frame_end_handle,
"byFrameStep": int(
self.get_render_attribute("byFrameStep",
layer=layer_name)),
"renderer": self.get_render_attribute("currentRenderer",
layer=layer_name),
# instance subset
"family": "vrayscene_layer",
"families": ["vrayscene_layer"],
"asset": legacy_io.Session["AVALON_ASSET"],
"time": get_formatted_current_time(),
"author": context.data["user"],
# Add source to allow tracing back to the scene from
# which was submitted originally
"source": context.data["currentFile"].replace("\\", "/"),
"resolutionWidth": lib.get_attr_in_layer(
"defaultResolution.height", layer=layer_name
),
"resolutionHeight": lib.get_attr_in_layer(
"defaultResolution.width", layer=layer_name
),
"pixelAspect": lib.get_attr_in_layer(
"defaultResolution.pixelAspect", layer=layer_name
),
"priority": instance.data.get("priority"),
"useMultipleSceneFiles": instance.data.get(
"vraySceneMultipleFiles")
}
# Define nice label
label = "{0} ({1})".format(expected_layer_name, data["asset"])
label += " [{0}-{1}]".format(
int(data["frameStartHandle"]), int(data["frameEndHandle"])
)
instance = context.create_instance(expected_layer_name)
instance.data["label"] = label
instance.data.update(data)
# Define nice label
label = "{0} ({1})".format(layer_name, instance.data["asset"])
label += " [{0}-{1}]".format(
int(data["frameStartHandle"]), int(data["frameEndHandle"])
)
instance.data["label"] = label
def get_render_attribute(self, attr, layer):
"""Get attribute from render options.

View file

@ -1,46 +1,30 @@
import os
import pyblish.api
from maya import cmds
from openpype.pipeline import legacy_io
class CollectWorkfile(pyblish.api.ContextPlugin):
"""Inject the current working file into context"""
class CollectWorkfileData(pyblish.api.InstancePlugin):
"""Inject data into Workfile instance"""
order = pyblish.api.CollectorOrder - 0.01
label = "Maya Workfile"
hosts = ['maya']
families = ["workfile"]
def process(self, context):
def process(self, instance):
"""Inject the current working file"""
current_file = cmds.file(query=True, sceneName=True)
context.data['currentFile'] = current_file
context = instance.context
current_file = instance.context.data['currentFile']
folder, file = os.path.split(current_file)
filename, ext = os.path.splitext(file)
task = legacy_io.Session["AVALON_TASK"]
data = {}
# create instance
instance = context.create_instance(name=filename)
subset = 'workfile' + task.capitalize()
data.update({
"subset": subset,
"asset": os.getenv("AVALON_ASSET", None),
"label": subset,
"publish": True,
"family": 'workfile',
"families": ['workfile'],
data = { # noqa
"setMembers": [current_file],
"frameStart": context.data['frameStart'],
"frameEnd": context.data['frameEnd'],
"handleStart": context.data['handleStart'],
"handleEnd": context.data['handleEnd']
})
}
data['representations'] = [{
'name': ext.lstrip("."),
@ -50,8 +34,3 @@ class CollectWorkfile(pyblish.api.ContextPlugin):
}]
instance.data.update(data)
self.log.info('Collected instance: {}'.format(file))
self.log.info('Scene path: {}'.format(current_file))
self.log.info('staging Dir: {}'.format(folder))
self.log.info('subset: {}'.format(subset))
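
For reference, the single-file workfile representation assembled above comes out shaped roughly like this; the path is hypothetical and the extra keys follow the representation dicts seen elsewhere in this diff:

import os

current_file = "/work/shot010/maya/scene_v001.ma"  # hypothetical
folder, file = os.path.split(current_file)
filename, ext = os.path.splitext(file)
representation = {
    "name": ext.lstrip("."),  # "ma"
    "ext": ext.lstrip("."),
    "files": file,            # "scene_v001.ma"
    "stagingDir": folder,
}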

View file

@ -31,7 +31,7 @@ class ExtractAssembly(publish.Extractor):
with open(json_path, "w") as filepath:
json.dump(instance.data["scenedata"], filepath, ensure_ascii=False)
self.log.info("Extracting point cache ..")
self.log.debug("Extracting pointcache ..")
cmds.select(instance.data["nodesHierarchy"])
# Run basic alembic exporter

View file

@ -106,7 +106,7 @@ class ExtractCameraMayaScene(publish.Extractor):
instance.context.data["project_settings"]["maya"]["ext_mapping"]
)
if ext_mapping:
self.log.info("Looking in settings for scene type ...")
self.log.debug("Looking in settings for scene type ...")
# use extension mapping for first family found
for family in self.families:
try:

View file

@ -8,10 +8,12 @@ import tempfile
from openpype.lib import run_subprocess
from openpype.pipeline import publish
from openpype.pipeline.publish import OptionalPyblishPluginMixin
from openpype.hosts.maya.api import lib
class ExtractImportReference(publish.Extractor):
class ExtractImportReference(publish.Extractor,
OptionalPyblishPluginMixin):
"""
Extract the scene with imported reference.
@ -32,11 +34,14 @@ class ExtractImportReference(publish.Extractor):
cls.active = project_setting["deadline"]["publish"]["MayaSubmitDeadline"]["import_reference"] # noqa
def process(self, instance):
if not self.is_active(instance.data):
return
ext_mapping = (
instance.context.data["project_settings"]["maya"]["ext_mapping"]
)
if ext_mapping:
self.log.info("Looking in settings for scene type ...")
self.log.debug("Looking in settings for scene type ...")
# use extension mapping for first family found
for family in self.families:
try:

View file

@ -412,7 +412,7 @@ class ExtractLook(publish.Extractor):
instance.context.data["project_settings"]["maya"]["ext_mapping"]
)
if ext_mapping:
self.log.info("Looking in settings for scene type ...")
self.log.debug("Looking in settings for scene type ...")
# use extension mapping for first family found
for family in self.families:
try:
@ -444,12 +444,12 @@ class ExtractLook(publish.Extractor):
# Remove all members of the sets so they are not included in the
# exported file by accident
self.log.info("Processing sets..")
self.log.debug("Processing sets..")
lookdata = instance.data["lookData"]
relationships = lookdata["relationships"]
sets = list(relationships.keys())
if not sets:
self.log.info("No sets found")
self.log.info("No sets found for the look")
return
# Specify texture processing executables to activate

View file

@ -29,7 +29,7 @@ class ExtractMayaSceneRaw(publish.Extractor):
instance.context.data["project_settings"]["maya"]["ext_mapping"]
)
if ext_mapping:
self.log.info("Looking in settings for scene type ...")
self.log.debug("Looking in settings for scene type ...")
# use extension mapping for first family found
for family in self.families:
try:

View file

@ -8,7 +8,8 @@ from openpype.pipeline import publish
from openpype.hosts.maya.api import lib
class ExtractModel(publish.Extractor):
class ExtractModel(publish.Extractor,
publish.OptionalPyblishPluginMixin):
"""Extract as Model (Maya Scene).
Only extracts contents based on the original "setMembers" data to ensure
@ -31,11 +32,14 @@ class ExtractModel(publish.Extractor):
def process(self, instance):
"""Plugin entry point."""
if not self.is_active(instance.data):
return
ext_mapping = (
instance.context.data["project_settings"]["maya"]["ext_mapping"]
)
if ext_mapping:
self.log.info("Looking in settings for scene type ...")
self.log.debug("Looking in settings for scene type ...")
# use extension mapping for first family found
for family in self.families:
try:

View file

@ -45,7 +45,7 @@ class ExtractAlembic(publish.Extractor):
attr_prefixes = instance.data.get("attrPrefix", "").split(";")
attr_prefixes = [value for value in attr_prefixes if value.strip()]
self.log.info("Extracting pointcache..")
self.log.debug("Extracting pointcache..")
dirname = self.staging_dir(instance)
parent_dir = self.staging_dir(instance)
@ -86,7 +86,6 @@ class ExtractAlembic(publish.Extractor):
end=end))
suspend = not instance.data.get("refresh", False)
self.log.info(nodes)
with suspended_refresh(suspend=suspend):
with maintained_selection():
cmds.select(nodes, noExpand=True)

View file

@ -29,15 +29,21 @@ class ExtractRedshiftProxy(publish.Extractor):
if not anim_on:
# Remove animation information because it is not required for
# non-animated subsets
instance.data.pop("proxyFrameStart", None)
instance.data.pop("proxyFrameEnd", None)
keys = ["frameStart",
"frameEnd",
"handleStart",
"handleEnd",
"frameStartHandle",
"frameEndHandle"]
for key in keys:
instance.data.pop(key, None)
else:
start_frame = instance.data["proxyFrameStart"]
end_frame = instance.data["proxyFrameEnd"]
start_frame = instance.data["frameStartHandle"]
end_frame = instance.data["frameEndHandle"]
rs_options = "{}startFrame={};endFrame={};frameStep={};".format(
rs_options, start_frame,
end_frame, instance.data["proxyFrameStep"]
end_frame, instance.data["step"]
)
root, ext = os.path.splitext(file_path)
@ -48,7 +54,7 @@ class ExtractRedshiftProxy(publish.Extractor):
for frame in range(
int(start_frame),
int(end_frame) + 1,
int(instance.data["proxyFrameStep"]),
int(instance.data["step"]),
)]
# vertex_colors = instance.data.get("vertexColors", False)
@ -74,8 +80,6 @@ class ExtractRedshiftProxy(publish.Extractor):
'files': repr_files,
"stagingDir": staging_dir,
}
if anim_on:
representation["frameStart"] = instance.data["proxyFrameStart"]
instance.data["representations"].append(representation)
self.log.info("Extracted instance '%s' to: %s"

View file

@@ -22,13 +22,13 @@ class ExtractRig(publish.Extractor):
instance.context.data["project_settings"]["maya"]["ext_mapping"]
)
if ext_mapping:
self.log.info("Looking in settings for scene type ...")
self.log.debug("Looking in settings for scene type ...")
# use extension mapping for first family found
for family in self.families:
try:
self.scene_type = ext_mapping[family]
self.log.info(
"Using {} as scene type".format(self.scene_type))
"Using '.{}' as scene type".format(self.scene_type))
break
except AttributeError:
# no preset found
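The `ext_mapping` lookup repeated across these extractors follows one pattern: the first family with a configured extension wins. A standalone sketch with a plain dict standing in for the project settings (the real plug-ins guard the lookup with try/except instead of a membership test):

```python
# Sketch of the extension-mapping lookup used by the extractors above.
ext_mapping = {"model": "ma", "rig": "ma"}  # hypothetical settings value

scene_type = "mb"  # default scene type
for family in ("rig", "model"):
    if family in ext_mapping:
        scene_type = ext_mapping[family]
        print("Using '.{}' as scene type".format(scene_type))
        break
```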

View file

@@ -32,7 +32,7 @@ class ExtractUnrealSkeletalMeshAbc(publish.Extractor):
optional = True
def process(self, instance):
self.log.info("Extracting pointcache..")
self.log.debug("Extracting pointcache..")
geo = cmds.listRelatives(
instance.data.get("geometry"), allDescendents=True, fullPath=True)

View file

@@ -104,7 +104,7 @@ class ExtractYetiRig(publish.Extractor):
instance.context.data["project_settings"]["maya"]["ext_mapping"]
)
if ext_mapping:
self.log.info("Looking in settings for scene type ...")
self.log.debug("Looking in settings for scene type ...")
# use extension mapping for first family found
for family in self.families:
try:

View file

@@ -0,0 +1,21 @@
<?xml version="1.0" encoding="UTF-8"?>
<root>
<error id="main">
<title>Maya scene units</title>
<description>## Invalid maya scene units
Detected invalid maya scene units:
{issues}
</description>
<detail>
### How to repair?
You can automatically repair the scene units by clicking the Repair action on
the right.
After that, restart publishing with the Reload button.
</detail>
</error>
</root>

View file

@@ -0,0 +1,29 @@
<?xml version="1.0" encoding="UTF-8"?>
<root>
<error id="main">
<title>Missing node ids</title>
<description>## Nodes found with missing `cbId`
Nodes were detected in your scene which are missing required `cbId`
attributes for identification further in the pipeline.
### How to repair?
The node ids are auto-generated on scene save, and thus the easiest fix is to
save your scene again.
After that, restart publishing with the Reload button.
</description>
<detail>
### Invalid nodes
{nodes}
### How could this happen?
This often happens if you've generated new nodes but haven't saved your scene
after creating the new nodes.
</detail>
</error>
</root>

View file

@@ -288,7 +288,7 @@ class MayaSubmitMuster(pyblish.api.InstancePlugin):
comment = context.data.get("comment", "")
scene = os.path.splitext(filename)[0]
dirname = os.path.join(workspace, "renders")
renderlayer = instance.data['setMembers'] # rs_beauty
renderlayer = instance.data['renderlayer'] # rs_beauty
renderlayer_name = instance.data['subset'] # beauty
renderglobals = instance.data["renderGlobals"]
# legacy_layers = renderlayer_globals["UseLegacyRenderLayers"]
@@ -546,3 +546,9 @@ class MayaSubmitMuster(pyblish.api.InstancePlugin):
"%f=%d was rounded off to nearest integer"
% (value, int(value))
)
# TODO: Remove hack to avoid this plug-in in new publisher
# This plug-in should actually be in dedicated module
if not os.environ.get("MUSTER_REST_URL"):
del MayaSubmitMuster
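The `del` guard above works because pyblish discovers plug-ins by scanning module-level names; a hedged sketch of the same pattern with hypothetical names (not OpenPype's):

```python
# Sketch: hide a plug-in from pyblish discovery when its backend
# is not configured. Class and env var names are hypothetical.
import os

import pyblish.api


class SubmitToFarm(pyblish.api.InstancePlugin):
    label = "Submit to Farm"

    def process(self, instance):
        pass


# pyblish only registers classes still bound at module scope, so
# deleting the name effectively disables the plug-in.
if not os.environ.get("FARM_REST_URL"):
    del SubmitToFarm
```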

View file

@@ -1,6 +1,9 @@
import pyblish.api
import openpype.hosts.maya.api.action
from openpype.pipeline.publish import ValidateContentsOrder
from openpype.pipeline.publish import (
PublishValidationError,
ValidateContentsOrder
)
class ValidateAnimationContent(pyblish.api.InstancePlugin):
@@ -47,4 +50,5 @@ class ValidateAnimationContent(pyblish.api.InstancePlugin):
def process(self, instance):
invalid = self.get_invalid(instance)
if invalid:
raise RuntimeError("Animation content is invalid. See log.")
raise PublishValidationError(
"Animation content is invalid. See log.")

View file

@@ -6,6 +6,7 @@ from openpype.hosts.maya.api import lib
from openpype.pipeline.publish import (
RepairAction,
ValidateContentsOrder,
PublishValidationError
)
@@ -35,8 +36,10 @@ class ValidateOutRelatedNodeIds(pyblish.api.InstancePlugin):
# if a deformer has been created on the shape
invalid = self.get_invalid(instance)
if invalid:
raise RuntimeError("Nodes found with mismatching "
"IDs: {0}".format(invalid))
# TODO: Message formatting can be improved
raise PublishValidationError("Nodes found with mismatching "
"IDs: {0}".format(invalid),
title="Invalid node ids")
@classmethod
def get_invalid(cls, instance):

View file

@@ -23,11 +23,13 @@ class ValidateAssRelativePaths(pyblish.api.InstancePlugin):
def process(self, instance):
# we cannot ask this until user open render settings as
# `defaultArnoldRenderOptions` doesn't exists
# `defaultArnoldRenderOptions` doesn't exist
errors = []
try:
relative_texture = cmds.getAttr(
absolute_texture = cmds.getAttr(
"defaultArnoldRenderOptions.absolute_texture_paths")
relative_procedural = cmds.getAttr(
absolute_procedural = cmds.getAttr(
"defaultArnoldRenderOptions.absolute_procedural_paths")
texture_search_path = cmds.getAttr(
"defaultArnoldRenderOptions.tspath"
@@ -42,10 +44,11 @@ class ValidateAssRelativePaths(pyblish.api.InstancePlugin):
scene_dir, scene_basename = os.path.split(cmds.file(q=True, loc=True))
scene_name, _ = os.path.splitext(scene_basename)
assert self.maya_is_true(relative_texture) is not True, \
("Texture path is set to be absolute")
assert self.maya_is_true(relative_procedural) is not True, \
("Procedural path is set to be absolute")
if self.maya_is_true(absolute_texture):
errors.append("Texture path is set to be absolute")
if self.maya_is_true(absolute_procedural):
errors.append("Procedural path is set to be absolute")
anatomy = instance.context.data["anatomy"]
@@ -57,15 +60,20 @@ class ValidateAssRelativePaths(pyblish.api.InstancePlugin):
for k in keys:
paths.append("[{}]".format(k))
self.log.info("discovered roots: {}".format(":".join(paths)))
self.log.debug("discovered roots: {}".format(":".join(paths)))
assert ":".join(paths) in texture_search_path, (
"Project roots are not in texture_search_path"
)
if ":".join(paths) not in texture_search_path:
errors.append((
"Project roots {} are not in texture_search_path: {}"
).format(paths, texture_search_path))
assert ":".join(paths) in procedural_search_path, (
"Project roots are not in procedural_search_path"
)
if ":".join(paths) not in procedural_search_path:
errors.append((
"Project roots {} are not in procedural_search_path: {}"
).format(paths, procedural_search_path))
if errors:
raise PublishValidationError("\n".join(errors))
@classmethod
def repair(cls, instance):
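This hunk also replaces hard `assert`s with an error accumulator so the artist sees every problem at once instead of failing on the first. The shape of that pattern, reduced to a self-contained sketch:

```python
# Sketch of the collect-then-raise pattern replacing the asserts above.
from openpype.pipeline.publish import PublishValidationError


def validate_paths(absolute_texture, absolute_procedural):
    errors = []
    if absolute_texture:
        errors.append("Texture path is set to be absolute")
    if absolute_procedural:
        errors.append("Procedural path is set to be absolute")
    if errors:
        raise PublishValidationError("\n".join(errors))
```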

View file

@@ -1,6 +1,9 @@
import pyblish.api
import maya.cmds as cmds
import openpype.hosts.maya.api.action
from openpype.pipeline.publish import (
PublishValidationError
)
class ValidateAssemblyName(pyblish.api.InstancePlugin):
@@ -47,5 +50,5 @@ class ValidateAssemblyName(pyblish.api.InstancePlugin):
invalid = self.get_invalid(instance)
if invalid:
raise RuntimeError("Found {} invalid named assembly "
raise PublishValidationError("Found {} invalid named assembly "
"items".format(len(invalid)))

View file

@@ -1,6 +1,8 @@
import pyblish.api
import openpype.hosts.maya.api.action
from openpype.pipeline.publish import (
PublishValidationError
)
class ValidateAssemblyNamespaces(pyblish.api.InstancePlugin):
"""Ensure namespaces are not nested
@@ -23,7 +25,7 @@ class ValidateAssemblyNamespaces(pyblish.api.InstancePlugin):
self.log.info("Checking namespace for %s" % instance.name)
if self.get_invalid(instance):
raise RuntimeError("Nested namespaces found")
raise PublishValidationError("Nested namespaces found")
@classmethod
def get_invalid(cls, instance):

View file

@@ -1,9 +1,8 @@
import pyblish.api
from maya import cmds
import openpype.hosts.maya.api.action
from openpype.pipeline.publish import RepairAction
from openpype.pipeline.publish import PublishValidationError, RepairAction
class ValidateAssemblyModelTransforms(pyblish.api.InstancePlugin):
@@ -38,8 +37,9 @@ class ValidateAssemblyModelTransforms(pyblish.api.InstancePlugin):
def process(self, instance):
invalid = self.get_invalid(instance)
if invalid:
raise RuntimeError("Found {} invalid transforms of assembly "
"items".format(len(invalid)))
raise PublishValidationError(
("Found {} invalid transforms of assembly "
"items").format(len(invalid)))
@classmethod
def get_invalid(cls, instance):
@@ -90,6 +90,7 @@ class ValidateAssemblyModelTransforms(pyblish.api.InstancePlugin):
"""
from qtpy import QtWidgets
from openpype.hosts.maya.api import lib
# Store namespace in variable, cosmetics thingy

View file

@@ -1,17 +1,16 @@
from collections import defaultdict
from maya import cmds
import pyblish.api
from maya import cmds
from openpype.hosts.maya.api.lib import set_attribute
from openpype.pipeline.publish import (
RepairAction,
ValidateContentsOrder,
)
OptionalPyblishPluginMixin, PublishValidationError, RepairAction,
ValidateContentsOrder)
class ValidateAttributes(pyblish.api.InstancePlugin):
class ValidateAttributes(pyblish.api.InstancePlugin,
OptionalPyblishPluginMixin):
"""Ensure attributes are consistent.
Attributes to validate and their values comes from the
@@ -32,13 +31,16 @@ class ValidateAttributes(pyblish.api.InstancePlugin):
attributes = None
def process(self, instance):
if not self.is_active(instance.data):
return
# Check for preset existence.
if not self.attributes:
return
invalid = self.get_invalid(instance, compute=True)
if invalid:
raise RuntimeError(
raise PublishValidationError(
"Found attributes with invalid values: {}".format(invalid)
)
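`OptionalPyblishPluginMixin` recurs through the rest of this diff; the `is_active` early return is what lets artists toggle a plug-in off in the publisher UI. A minimal sketch of the pattern (the validator name is hypothetical):

```python
# Sketch of the optional-plugin pattern added across these validators.
import pyblish.api
from openpype.pipeline.publish import OptionalPyblishPluginMixin


class ValidateSomething(pyblish.api.InstancePlugin,
                        OptionalPyblishPluginMixin):
    label = "Validate Something"  # hypothetical validator
    optional = True

    def process(self, instance):
        # Respect the artist's toggle from the publisher UI.
        if not self.is_active(instance.data):
            return
        # ... actual checks would go here
```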

View file

@@ -1,8 +1,9 @@
import pyblish.api
from maya import cmds
import pyblish.api
import openpype.hosts.maya.api.action
from openpype.pipeline.publish import ValidateContentsOrder
from openpype.pipeline.publish import (
PublishValidationError, ValidateContentsOrder)
class ValidateCameraAttributes(pyblish.api.InstancePlugin):
@@ -65,4 +66,5 @@ class ValidateCameraAttributes(pyblish.api.InstancePlugin):
invalid = self.get_invalid(instance)
if invalid:
raise RuntimeError("Invalid camera attributes: %s" % invalid)
raise PublishValidationError(
"Invalid camera attributes: {}".format(invalid))

View file

@@ -1,8 +1,9 @@
import pyblish.api
from maya import cmds
import pyblish.api
import openpype.hosts.maya.api.action
from openpype.pipeline.publish import ValidateContentsOrder
from openpype.pipeline.publish import (
PublishValidationError, ValidateContentsOrder)
class ValidateCameraContents(pyblish.api.InstancePlugin):
@@ -34,7 +35,7 @@ class ValidateCameraContents(pyblish.api.InstancePlugin):
cameras = cmds.ls(shapes, type='camera', long=True)
if len(cameras) != 1:
cls.log.error("Camera instance must have a single camera. "
"Found {0}: {1}".format(len(cameras), cameras))
"Found {0}: {1}".format(len(cameras), cameras))
invalid.extend(cameras)
# We need to check this edge case because returning an extended
@@ -48,10 +49,12 @@ class ValidateCameraContents(pyblish.api.InstancePlugin):
"members: {}".format(members))
return members
raise RuntimeError("No cameras found in empty instance.")
raise PublishValidationError(
"No cameras found in empty instance.")
if not cls.validate_shapes:
cls.log.info("not validating shapes in the content")
cls.log.debug("Not validating shapes in the camera content"
" because 'validate shapes' is disabled")
return invalid
# non-camera shapes
@@ -60,13 +63,10 @@ class ValidateCameraContents(pyblish.api.InstancePlugin):
if shapes:
shapes = list(shapes)
cls.log.error("Camera instance should only contain camera "
"shapes. Found: {0}".format(shapes))
"shapes. Found: {0}".format(shapes))
invalid.extend(shapes)
invalid = list(set(invalid))
return invalid
def process(self, instance):
@@ -74,5 +74,5 @@ class ValidateCameraContents(pyblish.api.InstancePlugin):
invalid = self.get_invalid(instance)
if invalid:
raise RuntimeError("Invalid camera contents: "
raise PublishValidationError("Invalid camera contents: "
"{0}".format(invalid))

View file

@@ -5,10 +5,12 @@ import openpype.hosts.maya.api.action
from openpype.pipeline.publish import (
RepairAction,
ValidateMeshOrder,
OptionalPyblishPluginMixin
)
class ValidateColorSets(pyblish.api.Validator):
class ValidateColorSets(pyblish.api.Validator,
OptionalPyblishPluginMixin):
"""Validate all meshes in the instance have unlocked normals
These can be removed manually through:
@@ -40,6 +42,8 @@ class ValidateColorSets(pyblish.api.Validator):
def process(self, instance):
"""Raise invalid when any of the meshes have ColorSets"""
if not self.is_active(instance.data):
return
invalid = self.get_invalid(instance)

View file

@@ -1,13 +1,14 @@
from maya import cmds
import pyblish.api
from maya import cmds
import openpype.hosts.maya.api.action
from openpype.hosts.maya.api.lib import maintained_selection
from openpype.pipeline.publish import ValidateContentsOrder
from openpype.pipeline.publish import (
OptionalPyblishPluginMixin, PublishValidationError, ValidateContentsOrder)
class ValidateCycleError(pyblish.api.InstancePlugin):
class ValidateCycleError(pyblish.api.InstancePlugin,
OptionalPyblishPluginMixin):
"""Validate nodes produce no cycle errors."""
order = ValidateContentsOrder + 0.05
@@ -18,9 +19,13 @@ class ValidateCycleError(pyblish.api.InstancePlugin):
optional = True
def process(self, instance):
if not self.is_active(instance.data):
return
invalid = self.get_invalid(instance)
if invalid:
raise RuntimeError("Nodes produce a cycle error: %s" % invalid)
raise PublishValidationError(
"Nodes produce a cycle error: {}".format(invalid))
@classmethod
def get_invalid(cls, instance):

View file

@@ -4,7 +4,8 @@ from maya import cmds
from openpype.pipeline.publish import (
RepairAction,
ValidateContentsOrder,
PublishValidationError
PublishValidationError,
OptionalPyblishPluginMixin
)
from openpype.hosts.maya.api.lib_rendersetup import (
get_attr_overrides,
@@ -13,7 +14,8 @@ from openpype.hosts.maya.api.lib_rendersetup import (
from maya.app.renderSetup.model.override import AbsOverride
class ValidateFrameRange(pyblish.api.InstancePlugin):
class ValidateFrameRange(pyblish.api.InstancePlugin,
OptionalPyblishPluginMixin):
"""Validates the frame ranges.
This is an optional validator checking if the frame range on instance
@@ -40,6 +42,9 @@ class ValidateFrameRange(pyblish.api.InstancePlugin):
exclude_families = []
def process(self, instance):
if not self.is_active(instance.data):
return
context = instance.context
if instance.data.get("tileRendering"):
self.log.info((
@@ -102,10 +107,12 @@ class ValidateFrameRange(pyblish.api.InstancePlugin):
"({}).".format(label.title(), values[1], values[0])
)
for e in errors:
self.log.error(e)
if errors:
report = "Frame range settings are incorrect.\n\n"
for error in errors:
report += "- {}\n\n".format(error)
assert len(errors) == 0, ("Frame range settings are incorrect")
raise PublishValidationError(report, title="Frame Range incorrect")
@classmethod
def repair(cls, instance):
@@ -150,7 +157,7 @@ class ValidateFrameRange(pyblish.api.InstancePlugin):
def repair_renderlayer(cls, instance):
"""Apply frame range in render settings"""
layer = instance.data["setMembers"]
layer = instance.data["renderlayer"]
context = instance.context
start_attr = "defaultRenderGlobals.startFrame"

View file

@@ -4,7 +4,8 @@ from maya import cmds
import pyblish.api
from openpype.pipeline.publish import (
RepairAction,
ValidateContentsOrder
ValidateContentsOrder,
PublishValidationError
)
@@ -21,7 +22,7 @@ class ValidateGLSLPlugin(pyblish.api.InstancePlugin):
def process(self, instance):
if not cmds.pluginInfo("maya2glTF", query=True, loaded=True):
raise RuntimeError("maya2glTF is not loaded")
raise PublishValidationError("maya2glTF is not loaded")
@classmethod
def repair(cls, instance):

View file

@@ -1,6 +1,9 @@
import pyblish.api
import openpype.hosts.maya.api.action
from openpype.pipeline.publish import ValidateContentsOrder
from openpype.pipeline.publish import (
ValidateContentsOrder,
PublishValidationError
)
class ValidateInstanceHasMembers(pyblish.api.InstancePlugin):
@@ -14,18 +17,23 @@ class ValidateInstanceHasMembers(pyblish.api.InstancePlugin):
@classmethod
def get_invalid(cls, instance):
invalid = list()
if not instance.data["setMembers"]:
if not instance.data.get("setMembers"):
objectset_name = instance.data['name']
invalid.append(objectset_name)
return invalid
def process(self, instance):
# Allow renderlayer and workfile to be empty
skip_families = ["workfile", "renderlayer", "rendersetup"]
# Allow renderlayer, rendersetup and workfile to be empty
skip_families = {"workfile", "renderlayer", "rendersetup"}
if instance.data.get("family") in skip_families:
return
invalid = self.get_invalid(instance)
if invalid:
raise RuntimeError("Empty instances found: {0}".format(invalid))
# Invalid will always be a single entry, we log the single name
name = invalid[0]
raise PublishValidationError(
title="Empty instance",
message="Instance '{0}' is empty".format(name)
)

View file

@@ -2,7 +2,10 @@ import pyblish.api
import string
import six
from openpype.pipeline.publish import ValidateContentsOrder
from openpype.pipeline.publish import (
ValidateContentsOrder,
PublishValidationError
)
# Allow only characters, numbers and underscore
allowed = set(string.ascii_lowercase +
@@ -28,7 +31,7 @@ class ValidateSubsetName(pyblish.api.InstancePlugin):
# Ensure subset data
if subset is None:
raise RuntimeError("Instance is missing subset "
raise PublishValidationError("Instance is missing subset "
"name: {0}".format(subset))
if not isinstance(subset, six.string_types):

View file

@@ -1,7 +1,8 @@
import maya.cmds as cmds
import pyblish.api
from openpype.hosts.maya.api import lib
from openpype.pipeline.publish import PublishValidationError
class ValidateInstancerContent(pyblish.api.InstancePlugin):
@@ -52,7 +53,8 @@ class ValidateInstancerContent(pyblish.api.InstancePlugin):
error = True
if error:
raise RuntimeError("Instancer Content is invalid. See log.")
raise PublishValidationError(
"Instancer Content is invalid. See log.")
def check_geometry_hidden(self, export_members):

View file

@@ -1,7 +1,10 @@
import os
import re
import pyblish.api
from openpype.pipeline.publish import PublishValidationError
VERBOSE = False
@@ -164,5 +167,6 @@ class ValidateInstancerFrameRanges(pyblish.api.InstancePlugin):
if invalid:
self.log.error("Invalid nodes: {0}".format(invalid))
raise RuntimeError("Invalid particle caches in instance. "
"See logs for details.")
raise PublishValidationError(
("Invalid particle caches in instance. "
"See logs for details."))

View file

@@ -2,7 +2,10 @@ import os
import pyblish.api
import maya.cmds as cmds
from openpype.pipeline.publish import RepairContextAction
from openpype.pipeline.publish import (
RepairContextAction,
PublishValidationError
)
class ValidateLoadedPlugin(pyblish.api.ContextPlugin):
@@ -35,7 +38,7 @@ class ValidateLoadedPlugin(pyblish.api.ContextPlugin):
invalid = self.get_invalid(context)
if invalid:
raise RuntimeError(
raise PublishValidationError(
"Found forbidden plugin name: {}".format(", ".join(invalid))
)

View file

@@ -1,6 +1,11 @@
import pyblish.api
import openpype.hosts.maya.api.action
from openpype.pipeline.publish import ValidateContentsOrder
from openpype.pipeline.publish import (
PublishValidationError,
ValidateContentsOrder
)
from maya import cmds # noqa
@@ -28,19 +33,16 @@ class ValidateLookContents(pyblish.api.InstancePlugin):
"""Process all the nodes in the instance"""
if not instance[:]:
raise RuntimeError("Instance is empty")
raise PublishValidationError("Instance is empty")
invalid = self.get_invalid(instance)
if invalid:
raise RuntimeError("'{}' has invalid look "
raise PublishValidationError("'{}' has invalid look "
"content".format(instance.name))
@classmethod
def get_invalid(cls, instance):
"""Get all invalid nodes"""
cls.log.info("Validating look content for "
"'{}'".format(instance.name))
# check if data has the right attributes and content
attributes = cls.validate_lookdata_attributes(instance)
# check the looks for ID

View file

@@ -1,7 +1,10 @@
from maya import cmds
import pyblish.api
from openpype.pipeline.publish import ValidateContentsOrder
from openpype.pipeline.publish import (
ValidateContentsOrder,
PublishValidationError
)
class ValidateLookDefaultShadersConnections(pyblish.api.InstancePlugin):
@@ -56,4 +59,4 @@ class ValidateLookDefaultShadersConnections(pyblish.api.InstancePlugin):
invalid.append(plug)
if invalid:
raise RuntimeError("Invalid connections.")
raise PublishValidationError("Invalid connections.")

View file

@@ -6,6 +6,7 @@ import openpype.hosts.maya.api.action
from openpype.pipeline.publish import (
RepairAction,
ValidateContentsOrder,
PublishValidationError
)
@@ -30,7 +31,7 @@ class ValidateLookIdReferenceEdits(pyblish.api.InstancePlugin):
invalid = self.get_invalid(instance)
if invalid:
raise RuntimeError("Invalid nodes %s" % (invalid,))
raise PublishValidationError("Invalid nodes %s" % (invalid,))
@staticmethod
def get_invalid(instance):

View file

@@ -1,8 +1,10 @@
from collections import defaultdict
import pyblish.api
import openpype.hosts.maya.api.action
from openpype.pipeline.publish import ValidatePipelineOrder
from openpype.pipeline.publish import (
PublishValidationError, ValidatePipelineOrder)
class ValidateUniqueRelationshipMembers(pyblish.api.InstancePlugin):
@@ -33,8 +35,9 @@ class ValidateUniqueRelationshipMembers(pyblish.api.InstancePlugin):
invalid = self.get_invalid(instance)
if invalid:
raise RuntimeError("Members found without non-unique IDs: "
"{0}".format(invalid))
raise PublishValidationError(
("Members found without non-unique IDs: "
"{0}").format(invalid))
@staticmethod
def get_invalid(instance):

View file

@@ -2,7 +2,10 @@ from maya import cmds
import pyblish.api
import openpype.hosts.maya.api.action
from openpype.pipeline.publish import ValidateContentsOrder
from openpype.pipeline.publish import (
ValidateContentsOrder,
PublishValidationError
)
class ValidateLookNoDefaultShaders(pyblish.api.InstancePlugin):
@@ -37,7 +40,7 @@ class ValidateLookNoDefaultShaders(pyblish.api.InstancePlugin):
invalid = self.get_invalid(instance)
if invalid:
raise RuntimeError("Invalid node relationships found: "
raise PublishValidationError("Invalid node relationships found: "
"{0}".format(invalid))
@classmethod

View file

@@ -1,7 +1,10 @@
import pyblish.api
import openpype.hosts.maya.api.action
from openpype.hosts.maya.api import lib
from openpype.pipeline.publish import ValidateContentsOrder
from openpype.pipeline.publish import (
ValidateContentsOrder,
PublishValidationError
)
class ValidateLookSets(pyblish.api.InstancePlugin):
@@ -48,16 +51,13 @@ class ValidateLookSets(pyblish.api.InstancePlugin):
invalid = self.get_invalid(instance)
if invalid:
raise RuntimeError("'{}' has invalid look "
raise PublishValidationError("'{}' has invalid look "
"content".format(instance.name))
@classmethod
def get_invalid(cls, instance):
"""Get all invalid nodes"""
cls.log.info("Validating look content for "
"'{}'".format(instance.name))
relationships = instance.data["lookData"]["relationships"]
invalid = []

View file

@@ -5,6 +5,7 @@ import openpype.hosts.maya.api.action
from openpype.pipeline.publish import (
RepairAction,
ValidateContentsOrder,
PublishValidationError
)
@@ -27,7 +28,7 @@ class ValidateShadingEngine(pyblish.api.InstancePlugin):
invalid = self.get_invalid(instance)
if invalid:
raise RuntimeError(
raise PublishValidationError(
"Found shading engines with incorrect naming:"
"\n{}".format(invalid)
)

View file

@@ -1,8 +1,9 @@
import pyblish.api
from maya import cmds
import pyblish.api
import openpype.hosts.maya.api.action
from openpype.pipeline.publish import ValidateContentsOrder
from openpype.pipeline.publish import (
PublishValidationError, ValidateContentsOrder)
class ValidateSingleShader(pyblish.api.InstancePlugin):
@@ -23,9 +24,9 @@ class ValidateSingleShader(pyblish.api.InstancePlugin):
invalid = self.get_invalid(instance)
if invalid:
raise RuntimeError("Found shapes which don't have a single shader "
"assigned: "
"\n{}".format(invalid))
raise PublishValidationError(
("Found shapes which don't have a single shader "
"assigned:\n{}").format(invalid))
@classmethod
def get_invalid(cls, instance):

View file

@@ -7,6 +7,7 @@ from openpype.pipeline.context_tools import get_current_project_asset
from openpype.pipeline.publish import (
RepairContextAction,
ValidateSceneOrder,
PublishXmlValidationError
)
@@ -26,6 +27,30 @@ class ValidateMayaUnits(pyblish.api.ContextPlugin):
validate_fps = True
nice_message_format = (
"- <b>{setting}</b> must be <b>{required_value}</b>. "
"Your scene is set to <b>{current_value}</b>"
)
log_message_format = (
"Maya scene {setting} must be '{required_value}'. "
"Current value is '{current_value}'."
)
@classmethod
def apply_settings(cls, project_settings, system_settings):
"""Apply project settings to creator"""
settings = (
project_settings["maya"]["publish"]["ValidateMayaUnits"]
)
cls.validate_linear_units = settings.get("validate_linear_units",
cls.validate_linear_units)
cls.linear_units = settings.get("linear_units", cls.linear_units)
cls.validate_angular_units = settings.get("validate_angular_units",
cls.validate_angular_units)
cls.angular_units = settings.get("angular_units", cls.angular_units)
cls.validate_fps = settings.get("validate_fps", cls.validate_fps)
def process(self, context):
# Collected units
@@ -34,15 +59,14 @@ class ValidateMayaUnits(pyblish.api.ContextPlugin):
fps = context.data.get('fps')
# TODO replace query with using 'context.data["assetEntity"]'
asset_doc = get_current_project_asset()
asset_doc = context.data["assetEntity"]
asset_fps = mayalib.convert_to_maya_fps(asset_doc["data"]["fps"])
self.log.info('Units (linear): {0}'.format(linearunits))
self.log.info('Units (angular): {0}'.format(angularunits))
self.log.info('Units (time): {0} FPS'.format(fps))
valid = True
invalid = []
# Check if units are correct
if (
@@ -50,26 +74,43 @@ class ValidateMayaUnits(pyblish.api.ContextPlugin):
and linearunits
and linearunits != self.linear_units
):
self.log.error("Scene linear units must be {}".format(
self.linear_units))
valid = False
invalid.append({
"setting": "Linear units",
"required_value": self.linear_units,
"current_value": linearunits
})
if (
self.validate_angular_units
and angularunits
and angularunits != self.angular_units
):
self.log.error("Scene angular units must be {}".format(
self.angular_units))
valid = False
invalid.append({
"setting": "Angular units",
"required_value": self.angular_units,
"current_value": angularunits
})
if self.validate_fps and fps and fps != asset_fps:
self.log.error(
"Scene must be {} FPS (now is {})".format(asset_fps, fps))
valid = False
invalid.append({
"setting": "FPS",
"required_value": asset_fps,
"current_value": fps
})
if not valid:
raise RuntimeError("Invalid units set.")
if invalid:
issues = []
for data in invalid:
self.log.error(self.log_message_format.format(**data))
issues.append(self.nice_message_format.format(**data))
issues = "\n".join(issues)
raise PublishXmlValidationError(
plugin=self,
message="Invalid maya scene units",
formatting_data={"issues": issues}
)
@classmethod
def repair(cls, context):
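This is where the scene-units XML template earlier in the diff gets consumed: `PublishXmlValidationError` resolves the XML by plug-in and substitutes `formatting_data` into placeholders like `{issues}`. A reduced sketch of the raise, assuming the `nice_message_format` attribute and `invalid` dicts from the hunk above:

```python
# Sketch of how the XML template is filled: formatting_data replaces
# the {issues} placeholder in the template's description.
from openpype.pipeline.publish import PublishXmlValidationError


def raise_units_error(plugin, invalid):
    # 'invalid' mirrors the list of dicts built in process() above
    issues = "\n".join(
        plugin.nice_message_format.format(**data) for data in invalid
    )
    raise PublishXmlValidationError(
        plugin=plugin,  # resolves the XML file for this plug-in
        message="Invalid maya scene units",
        formatting_data={"issues": issues},
    )
```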

View file

@@ -10,12 +10,15 @@ from openpype.hosts.maya.api.lib import (
set_attribute
)
from openpype.pipeline.publish import (
OptionalPyblishPluginMixin,
RepairAction,
ValidateMeshOrder,
PublishValidationError
)
class ValidateMeshArnoldAttributes(pyblish.api.InstancePlugin):
class ValidateMeshArnoldAttributes(pyblish.api.InstancePlugin,
OptionalPyblishPluginMixin):
"""Validate the mesh has default Arnold attributes.
It compares all Arnold attributes from a default mesh. This is to ensure
@@ -30,12 +33,14 @@ class ValidateMeshArnoldAttributes(pyblish.api.InstancePlugin):
openpype.hosts.maya.api.action.SelectInvalidAction,
RepairAction
]
optional = True
if cmds.getAttr(
"defaultRenderGlobals.currentRenderer").lower() == "arnold":
active = True
else:
active = False
@classmethod
def apply_settings(cls, project_settings, system_settings):
# todo: this should not be done this way
attr = "defaultRenderGlobals.currentRenderer"
cls.active = cmds.getAttr(attr).lower() == "arnold"
@classmethod
def get_default_attributes(cls):
@@ -50,7 +55,7 @@ class ValidateMeshArnoldAttributes(pyblish.api.InstancePlugin):
plug = "{}.{}".format(mesh, attr)
try:
defaults[attr] = get_attribute(plug)
except RuntimeError:
except PublishValidationError:
cls.log.debug("Ignoring arnold attribute: {}".format(attr))
return defaults
@@ -101,10 +106,12 @@
)
def process(self, instance):
if not self.is_active(instance.data):
return
invalid = self.get_invalid_attributes(instance, compute=True)
if invalid:
raise RuntimeError(
raise PublishValidationError(
"Non-default Arnold attributes found in instance:"
" {0}".format(invalid)
)
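The `apply_settings` classmethod introduced here moves the renderer check out of the class body, so it runs when settings are applied rather than at import time. A sketch of that hook under the two-argument signature used in this diff; the class and settings keys are hypothetical:

```python
# Sketch of the apply_settings hook: class defaults are overridden
# from project settings when the plug-in is registered.
import pyblish.api


class ValidateExample(pyblish.api.InstancePlugin):  # hypothetical
    label = "Validate Example"
    active = True

    @classmethod
    def apply_settings(cls, project_settings, system_settings):
        settings = project_settings.get("maya", {}).get("publish", {})
        cls.active = settings.get("ValidateExample", {}).get(
            "active", cls.active)
```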

View file

@@ -4,7 +4,8 @@ import pyblish.api
import openpype.hosts.maya.api.action
from openpype.pipeline.publish import (
RepairAction,
ValidateMeshOrder
ValidateMeshOrder,
PublishValidationError
)
@@ -49,6 +50,6 @@ class ValidateMeshEmpty(pyblish.api.InstancePlugin):
invalid = self.get_invalid(instance)
if invalid:
raise RuntimeError(
raise PublishValidationError(
"Meshes found in instance without any vertices: %s" % invalid
)

View file

@@ -2,11 +2,16 @@ from maya import cmds
import pyblish.api
import openpype.hosts.maya.api.action
from openpype.pipeline.publish import ValidateMeshOrder
from openpype.pipeline.publish import (
ValidateMeshOrder,
OptionalPyblishPluginMixin,
PublishValidationError
)
from openpype.hosts.maya.api.lib import len_flattened
class ValidateMeshHasUVs(pyblish.api.InstancePlugin):
class ValidateMeshHasUVs(pyblish.api.InstancePlugin,
OptionalPyblishPluginMixin):
"""Validate the current mesh has UVs.
It validates whether the current UV set has non-zero UVs and
@@ -66,8 +71,19 @@ class ValidateMeshHasUVs(pyblish.api.InstancePlugin):
return invalid
def process(self, instance):
if not self.is_active(instance.data):
return
invalid = self.get_invalid(instance)
if invalid:
raise RuntimeError("Meshes found in instance without "
"valid UVs: {0}".format(invalid))
names = "<br>".join(
" - {}".format(node) for node in invalid
)
raise PublishValidationError(
title="Mesh has missing UVs",
message="Model meshes are required to have UVs.<br><br>"
"Meshes detected with invalid or missing UVs:<br>"
"{0}".format(names)
)

View file

@@ -2,7 +2,17 @@ from maya import cmds
import pyblish.api
import openpype.hosts.maya.api.action
from openpype.pipeline.publish import ValidateMeshOrder
from openpype.pipeline.publish import (
ValidateMeshOrder,
PublishValidationError
)
def _as_report_list(values, prefix="- ", suffix="\n"):
"""Return list as bullet point list for a report"""
if not values:
return ""
return prefix + (suffix + prefix).join(values)
class ValidateMeshNoNegativeScale(pyblish.api.Validator):
@@ -46,5 +56,9 @@ class ValidateMeshNoNegativeScale(pyblish.api.Validator):
invalid = self.get_invalid(instance)
if invalid:
raise ValueError("Meshes found with negative "
"scale: {0}".format(invalid))
raise PublishValidationError(
"Meshes found with negative scale:\n\n{0}".format(
_as_report_list(sorted(invalid))
),
title="Negative scale"
)
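The small `_as_report_list` helper introduced in several of these files just renders a bullet list for the report body; a runnable usage example of the helper as defined above:

```python
def _as_report_list(values, prefix="- ", suffix="\n"):
    """Return list as bullet point list for a report"""
    if not values:
        return ""
    return prefix + (suffix + prefix).join(values)


print(_as_report_list(sorted(["pSphere1", "pCube1"])))
# - pCube1
# - pSphere1
```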

View file

@@ -2,7 +2,17 @@ from maya import cmds
import pyblish.api
import openpype.hosts.maya.api.action
from openpype.pipeline.publish import ValidateMeshOrder
from openpype.pipeline.publish import (
ValidateMeshOrder,
PublishValidationError
)
def _as_report_list(values, prefix="- ", suffix="\n"):
"""Return list as bullet point list for a report"""
if not values:
return ""
return prefix + (suffix + prefix).join(values)
class ValidateMeshNonManifold(pyblish.api.Validator):
@@ -16,7 +26,7 @@ class ValidateMeshNonManifold(pyblish.api.Validator):
order = ValidateMeshOrder
hosts = ['maya']
families = ['model']
label = 'Mesh Non-Manifold Vertices/Edges'
label = 'Mesh Non-Manifold Edges/Vertices'
actions = [openpype.hosts.maya.api.action.SelectInvalidAction]
@staticmethod
@@ -38,5 +48,9 @@ class ValidateMeshNonManifold(pyblish.api.Validator):
invalid = self.get_invalid(instance)
if invalid:
raise ValueError("Meshes found with non-manifold "
"edges/vertices: {0}".format(invalid))
raise PublishValidationError(
"Meshes found with non-manifold edges/vertices:\n\n{0}".format(
_as_report_list(sorted(invalid))
),
title="Non-Manifold Edges/Vertices"
)

View file

@@ -3,10 +3,15 @@ from maya import cmds
import pyblish.api
import openpype.hosts.maya.api.action
from openpype.hosts.maya.api import lib
from openpype.pipeline.publish import ValidateMeshOrder
from openpype.pipeline.publish import (
ValidateMeshOrder,
OptionalPyblishPluginMixin,
PublishValidationError
)
class ValidateMeshNonZeroEdgeLength(pyblish.api.InstancePlugin):
class ValidateMeshNonZeroEdgeLength(pyblish.api.InstancePlugin,
OptionalPyblishPluginMixin):
"""Validate meshes don't have edges with a zero length.
Based on Maya's polyCleanup 'Edges with zero length'.
@@ -65,7 +70,14 @@ class ValidateMeshNonZeroEdgeLength(pyblish.api.InstancePlugin):
def process(self, instance):
"""Process all meshes"""
if not self.is_active(instance.data):
return
invalid = self.get_invalid(instance)
if invalid:
raise RuntimeError("Meshes found with zero "
"edge length: {0}".format(invalid))
label = "Meshes found with zero edge length"
raise PublishValidationError(
message="{}: {}".format(label, invalid),
title=label,
description="{}:\n- ".format(label) + "\n- ".join(invalid)
)

View file

@@ -6,10 +6,20 @@ import openpype.hosts.maya.api.action
from openpype.pipeline.publish import (
RepairAction,
ValidateMeshOrder,
OptionalPyblishPluginMixin,
PublishValidationError
)
class ValidateMeshNormalsUnlocked(pyblish.api.Validator):
def _as_report_list(values, prefix="- ", suffix="\n"):
"""Return list as bullet point list for a report"""
if not values:
return ""
return prefix + (suffix + prefix).join(values)
class ValidateMeshNormalsUnlocked(pyblish.api.Validator,
OptionalPyblishPluginMixin):
"""Validate all meshes in the instance have unlocked normals
These can be unlocked manually through:
@@ -47,12 +57,18 @@ class ValidateMeshNormalsUnlocked(pyblish.api.Validator):
def process(self, instance):
"""Raise invalid when any of the meshes have locked normals"""
if not self.is_active(instance.data):
return
invalid = self.get_invalid(instance)
if invalid:
raise ValueError("Meshes found with "
"locked normals: {0}".format(invalid))
raise PublishValidationError(
"Meshes found with locked normals:\n\n{0}".format(
_as_report_list(sorted(invalid))
),
title="Locked normals"
)
@classmethod
def repair(cls, instance):

View file

@@ -6,7 +6,18 @@ import maya.api.OpenMaya as om
import pyblish.api
import openpype.hosts.maya.api.action
from openpype.pipeline.publish import ValidateMeshOrder
from openpype.pipeline.publish import (
ValidateMeshOrder,
OptionalPyblishPluginMixin,
PublishValidationError
)
def _as_report_list(values, prefix="- ", suffix="\n"):
"""Return list as bullet point list for a report"""
if not values:
return ""
return prefix + (suffix + prefix).join(values)
class GetOverlappingUVs(object):
@@ -225,7 +236,8 @@ class GetOverlappingUVs(object):
return faces
class ValidateMeshHasOverlappingUVs(pyblish.api.InstancePlugin):
class ValidateMeshHasOverlappingUVs(pyblish.api.InstancePlugin,
OptionalPyblishPluginMixin):
""" Validate the current mesh overlapping UVs.
It validates whether the current UVs are overlapping or not.
@@ -281,9 +293,14 @@ class ValidateMeshHasOverlappingUVs(pyblish.api.InstancePlugin):
return instance.data.get("overlapping_faces", [])
def process(self, instance):
if not self.is_active(instance.data):
return
invalid = self.get_invalid(instance, compute=True)
if invalid:
raise RuntimeError(
"Meshes found with overlapping UVs: {0}".format(invalid)
raise PublishValidationError(
"Meshes found with overlapping UVs:\n\n{0}".format(
_as_report_list(sorted(invalid))
),
title="Overlapping UVs"
)

Some files were not shown because too many files have changed in this diff Show more