mirror of https://github.com/ynput/ayon-core.git
synced 2025-12-26 13:52:15 +01:00

Commit 4b7303d87c: add debug message for custom attributes and resolved conflict from the latest develop

67 changed files with 2871 additions and 1114 deletions
.github/ISSUE_TEMPLATE/bug_report.yml (vendored): 6 lines changed

@@ -35,6 +35,9 @@ body:
       label: Version
       description: What version are you running? Look to OpenPype Tray
       options:
+        - 3.17.5-nightly.1
+        - 3.17.4
+        - 3.17.4-nightly.2
         - 3.17.4-nightly.1
         - 3.17.3
         - 3.17.3-nightly.2

@@ -132,9 +135,6 @@ body:
         - 3.15.1-nightly.3
         - 3.15.1-nightly.2
         - 3.15.1-nightly.1
         - 3.15.0
         - 3.15.0-nightly.1
         - 3.14.11-nightly.4
       validations:
         required: true
   - type: dropdown
CHANGELOG.md: 268 lines changed

@@ -1,6 +1,274 @@
# Changelog

## [3.17.4](https://github.com/ynput/OpenPype/tree/3.17.4)

[Full Changelog](https://github.com/ynput/OpenPype/compare/3.17.3...3.17.4)

### **🆕 New features**

<details>
<summary>Add Support for Husk-AYON Integration <a href="https://github.com/ynput/OpenPype/pull/5816">#5816</a></summary>

This draft pull request introduces support for integrating Husk with AYON within the OpenPype repository.

___

</details>

<details>
<summary>Push to project tool: Prepare push to project tool for AYON <a href="https://github.com/ynput/OpenPype/pull/5770">#5770</a></summary>

Cloned the Push to project tool for AYON and modified it.

___

</details>

### **🚀 Enhancements**

<details>
<summary>Max: tycache family support <a href="https://github.com/ynput/OpenPype/pull/5624">#5624</a></summary>

TyCache family support for the tyFlow plugin in Max.

___

</details>

<details>
<summary>Unreal: Changed behaviour for updating assets <a href="https://github.com/ynput/OpenPype/pull/5670">#5670</a></summary>

Changed how assets are updated in Unreal.

___

</details>

<details>
<summary>Unreal: Improved error reporting for Sequence Frame Validator <a href="https://github.com/ynput/OpenPype/pull/5730">#5730</a></summary>

Improved error reporting for the Sequence Frame Validator.

___

</details>

<details>
<summary>Max: Setting tweaks on Review Family <a href="https://github.com/ynput/OpenPype/pull/5744">#5744</a></summary>

- Fix not being able to publish the preferred visual style when creating a preview animation
- Expose the parameters after creating an instance
- Add quality settings and viewport texture settings for preview animation
- Add "use selection" for create review

___

</details>

<details>
<summary>Max: Add families with frame range extractions back to the frame range validator <a href="https://github.com/ynput/OpenPype/pull/5757">#5757</a></summary>

In 3dsMax, some instances export files over a frame range but were not covered by the optional frame range validator. This PR adds the optional frame range validator to these instances so users can check whether the frame range aligns with the context data from the DB. The following families now have the optional frame range validator:
- maxrender
- review
- camera
- redshift proxy
- pointcache
- point cloud (tyFlow PRT)

___

</details>

<details>
<summary>TimersManager: Use available data to get context info <a href="https://github.com/ynput/OpenPype/pull/5804">#5804</a></summary>

Get context information from pyblish context data instead of using `legacy_io`.

___

</details>

<details>
<summary>Chore: Removed unused variable from `AbstractCollectRender` <a href="https://github.com/ynput/OpenPype/pull/5805">#5805</a></summary>

Removed the unused `_asset` variable from `RenderInstance`.

___

</details>

### **🐛 Bug fixes**

<details>
<summary>Bugfix/houdini: wrong frame calculation with handles <a href="https://github.com/ynput/OpenPype/pull/5698">#5698</a></summary>

This PR makes collect plugins consider `handleStart` and `handleEnd` when collecting the frame range. It affects three parts:
- getting the frame range in collect plugins
- expected files in render plugins
- the submit Houdini job Deadline plugin

___

</details>

<details>
<summary>Nuke: ayon server settings improvements <a href="https://github.com/ynput/OpenPype/pull/5746">#5746</a></summary>

Nuke settings were not aligned with OpenPype settings. Labels also needed to be improved.

___

</details>

<details>
<summary>Blender: Fix pointcache family and fix alembic extractor <a href="https://github.com/ynput/OpenPype/pull/5747">#5747</a></summary>

Fixed the `pointcache` family and fixed the behaviour of the alembic extractor.

___

</details>

<details>
<summary>AYON: Remove 'shotgun_api3' from dependencies <a href="https://github.com/ynput/OpenPype/pull/5803">#5803</a></summary>

Removed the `shotgun_api3` dependency from the OpenPype dependencies for the AYON launcher. The dependency is already defined in the shotgrid addon, and a change of version causes clashes.

___

</details>

<details>
<summary>Chore: Fix typo in filename <a href="https://github.com/ynput/OpenPype/pull/5807">#5807</a></summary>

Move the content of `contants.py` into `constants.py`.

___

</details>

<details>
<summary>Chore: Create context respects instance changes <a href="https://github.com/ynput/OpenPype/pull/5809">#5809</a></summary>

Fix an issue with unrespected change propagation in `CreateContext`. All successfully saved instances are marked as saved, so they have no changes. Origin data of an instance is explicitly handled by the attribute wrappers rather than directly by the object.

___

</details>

<details>
<summary>Blender: Fix tools handling in AYON mode <a href="https://github.com/ynput/OpenPype/pull/5811">#5811</a></summary>

Skip the logic in `before_window_show` in Blender when in AYON mode. Most of what is called there happens automatically on show.

___

</details>

<details>
<summary>Blender: Include Grease Pencil in review and thumbnails <a href="https://github.com/ynput/OpenPype/pull/5812">#5812</a></summary>

Include Grease Pencil in review and thumbnails.

___

</details>

<details>
<summary>Workfiles tool AYON: Fix double click of workfile <a href="https://github.com/ynput/OpenPype/pull/5813">#5813</a></summary>

Fix double click on workfiles in the workfiles tool to open the file.

___

</details>

<details>
<summary>Webpublisher: removal of usage of no_of_frames in error message <a href="https://github.com/ynput/OpenPype/pull/5819">#5819</a></summary>

If an exception is thrown, the `no_of_frames` value won't be available, so it doesn't make sense to log it.

___

</details>

<details>
<summary>Attribute Defs: Hide multivalue widget in Number by default <a href="https://github.com/ynput/OpenPype/pull/5821">#5821</a></summary>

Fixed the default look of `NumberAttrWidget` by hiding its multiselection widget.

___

</details>

### **Merged pull requests**

<details>
<summary>Corrected a typo in Readme.md (Top -> To) <a href="https://github.com/ynput/OpenPype/pull/5800">#5800</a></summary>

___

</details>

<details>
<summary>Photoshop: Removed redundant copy of extension.zxp <a href="https://github.com/ynput/OpenPype/pull/5802">#5802</a></summary>

`extension.zxp` shouldn't be inside the extension folder.

___

</details>

## [3.17.3](https://github.com/ynput/OpenPype/tree/3.17.3)
@@ -568,29 +568,64 @@ def get_template_from_value(key, value):
     return parm


-def get_frame_data(node):
-    """Get the frame data: start frame, end frame and steps.
+def get_frame_data(node, handle_start=0, handle_end=0, log=None):
+    """Get the frame data: start frame, end frame, steps,
+    start frame with start handle and end frame with end handle.
+
+    This function uses the Houdini node's `trange`, `f1`, `f2` and `f3`
+    parameters as the source of truth for the full inclusive frame
+    range to render; as such these are considered the frame
+    range including the handles.
+
+    The non-inclusive frame start and frame end without handles
+    are computed by subtracting the handles from the inclusive
+    frame range.

     Args:
-        node(hou.Node)
+        node (hou.Node): ROP node to retrieve frame range from,
+            the frame range is assumed to be the frame range
+            *including* the start and end handles.
+        handle_start (int): Start handles.
+        handle_end (int): End handles.
+        log (logging.Logger): Logger to log to.

     Returns:
-        dict: frame data for star, end and steps.
+        dict: frame data for start, end, steps,
+            start with handle and end with handle

     """
+    if log is None:
+        # 'self' is not available in this module-level function;
+        # fall back to the module logger.
+        log = logging.getLogger(__name__)

     data = {}

     if node.parm("trange") is None:
+        log.debug(
+            "Node has no 'trange' parameter: {}".format(node.path())
+        )
         return data

     if node.evalParm("trange") == 0:
-        self.log.debug("trange is 0")
-        return data
+        data["frameStartHandle"] = hou.intFrame()
+        data["frameEndHandle"] = hou.intFrame()
+        data["byFrameStep"] = 1.0

-    data["frameStart"] = node.evalParm("f1")
-    data["frameEnd"] = node.evalParm("f2")
-    data["steps"] = node.evalParm("f3")
+        log.info(
+            "Node '{}' has 'Render current frame' set.\n"
+            "Asset Handles are ignored.\n"
+            "frameStart and frameEnd are set to the "
+            "current frame.".format(node.path())
+        )
+    else:
+        data["frameStartHandle"] = int(node.evalParm("f1"))
+        data["frameEndHandle"] = int(node.evalParm("f2"))
+        data["byFrameStep"] = node.evalParm("f3")
+
+    data["handleStart"] = handle_start
+    data["handleEnd"] = handle_end
+    data["frameStart"] = data["frameStartHandle"] + data["handleStart"]
+    data["frameEnd"] = data["frameEndHandle"] - data["handleEnd"]

     return data
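For orientation, a minimal usage sketch of the updated helper. The node path, frame values and logger are illustrative; in the collectors below the node comes from `instance.data["instance_node"]` and the handles from the asset data:

    import logging

    import hou

    from openpype.hosts.houdini.api import lib

    # Hypothetical ROP whose trange covers 995-1105 *including* handles.
    rop = hou.node("/out/mantra1")

    data = lib.get_frame_data(
        rop, handle_start=5, handle_end=5, log=logging.getLogger(__name__)
    )
    # data["frameStartHandle"] == 995, data["frameEndHandle"] == 1105
    # data["frameStart"] == 1000,      data["frameEnd"] == 1100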
@@ -20,7 +20,9 @@ class CollectArnoldROPRenderProducts(pyblish.api.InstancePlugin):
     """

     label = "Arnold ROP Render Products"
-    order = pyblish.api.CollectorOrder + 0.4
+    # This specific order value is used so that
+    # this plugin runs after CollectRopFrameRange
+    order = pyblish.api.CollectorOrder + 0.4999
     hosts = ["houdini"]
     families = ["arnold_rop"]

@@ -126,8 +128,9 @@ class CollectArnoldROPRenderProducts(pyblish.api.InstancePlugin):
             return path

         expected_files = []
-        start = instance.data["frameStart"]
-        end = instance.data["frameEnd"]
+        start = instance.data["frameStartHandle"]
+        end = instance.data["frameEndHandle"]

         for i in range(int(start), (int(end) + 1)):
             expected_files.append(
                 os.path.join(dir, (file % i)).replace("\\", "/"))
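The `file % i` expansion relies on a printf-style frame token in the product filename; a small self-contained sketch (template and directory are hypothetical):

    import os

    # Hypothetical printf-style template as the collector sees it.
    dir = "/proj/renders"
    file = "shot010_beauty.%04d.exr"
    start, end = 995, 1105  # frameStartHandle / frameEndHandle

    expected_files = [
        os.path.join(dir, (file % i)).replace("\\", "/")
        for i in range(int(start), int(end) + 1)
    ]
    # ['/proj/renders/shot010_beauty.0995.exr', ...,
    #  '/proj/renders/shot010_beauty.1105.exr']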
@@ -1,56 +0,0 @@
-import hou
-
-import pyblish.api
-
-
-class CollectInstanceNodeFrameRange(pyblish.api.InstancePlugin):
-    """Collect time range frame data for the instance node."""
-
-    order = pyblish.api.CollectorOrder + 0.001
-    label = "Instance Node Frame Range"
-    hosts = ["houdini"]
-
-    def process(self, instance):
-        node_path = instance.data.get("instance_node")
-        node = hou.node(node_path) if node_path else None
-        if not node_path or not node:
-            self.log.debug("No instance node found for instance: "
-                           "{}".format(instance))
-            return
-
-        frame_data = self.get_frame_data(node)
-        if not frame_data:
-            return
-
-        self.log.info("Collected time data: {}".format(frame_data))
-        instance.data.update(frame_data)
-
-    def get_frame_data(self, node):
-        """Get the frame data: start frame, end frame and steps
-
-        Args:
-            node(hou.Node)
-
-        Returns:
-            dict
-
-        """
-        data = {}
-
-        if node.parm("trange") is None:
-            self.log.debug("Node has no 'trange' parameter: "
-                           "{}".format(node.path()))
-            return data
-
-        if node.evalParm("trange") == 0:
-            # Ignore 'render current frame'
-            self.log.debug("Node '{}' has 'Render current frame' set. "
-                           "Time range data ignored.".format(node.path()))
-            return data
-
-        data["frameStart"] = node.evalParm("f1")
-        data["frameEnd"] = node.evalParm("f2")
-        data["byFrameStep"] = node.evalParm("f3")
-
-        return data
@@ -91,27 +91,3 @@ class CollectInstances(pyblish.api.ContextPlugin):
         context[:] = sorted(context, key=sort_by_family)

         return context
-
-    def get_frame_data(self, node):
-        """Get the frame data: start frame, end frame and steps
-
-        Args:
-            node(hou.Node)
-
-        Returns:
-            dict
-
-        """
-        data = {}
-
-        if node.parm("trange") is None:
-            return data
-
-        if node.evalParm("trange") == 0:
-            return data
-
-        data["frameStart"] = node.evalParm("f1")
-        data["frameEnd"] = node.evalParm("f2")
-        data["byFrameStep"] = node.evalParm("f3")
-
-        return data
@@ -24,7 +24,9 @@ class CollectKarmaROPRenderProducts(pyblish.api.InstancePlugin):
     """

     label = "Karma ROP Render Products"
-    order = pyblish.api.CollectorOrder + 0.4
+    # This specific order value is used so that
+    # this plugin runs after CollectRopFrameRange
+    order = pyblish.api.CollectorOrder + 0.4999
     hosts = ["houdini"]
     families = ["karma_rop"]

@@ -95,8 +97,9 @@ class CollectKarmaROPRenderProducts(pyblish.api.InstancePlugin):
             return path

         expected_files = []
-        start = instance.data["frameStart"]
-        end = instance.data["frameEnd"]
+        start = instance.data["frameStartHandle"]
+        end = instance.data["frameEndHandle"]

         for i in range(int(start), (int(end) + 1)):
             expected_files.append(
                 os.path.join(dir, (file % i)).replace("\\", "/"))
@@ -24,7 +24,9 @@ class CollectMantraROPRenderProducts(pyblish.api.InstancePlugin):
     """

     label = "Mantra ROP Render Products"
-    order = pyblish.api.CollectorOrder + 0.4
+    # This specific order value is used so that
+    # this plugin runs after CollectRopFrameRange
+    order = pyblish.api.CollectorOrder + 0.4999
     hosts = ["houdini"]
     families = ["mantra_rop"]

@@ -118,8 +120,9 @@ class CollectMantraROPRenderProducts(pyblish.api.InstancePlugin):
             return path

         expected_files = []
-        start = instance.data["frameStart"]
-        end = instance.data["frameEnd"]
+        start = instance.data["frameStartHandle"]
+        end = instance.data["frameEndHandle"]

         for i in range(int(start), (int(end) + 1)):
             expected_files.append(
                 os.path.join(dir, (file % i)).replace("\\", "/"))
@@ -24,7 +24,9 @@ class CollectRedshiftROPRenderProducts(pyblish.api.InstancePlugin):
     """

     label = "Redshift ROP Render Products"
-    order = pyblish.api.CollectorOrder + 0.4
+    # This specific order value is used so that
+    # this plugin runs after CollectRopFrameRange
+    order = pyblish.api.CollectorOrder + 0.4999
     hosts = ["houdini"]
     families = ["redshift_rop"]

@@ -132,8 +134,9 @@ class CollectRedshiftROPRenderProducts(pyblish.api.InstancePlugin):
             return path

         expected_files = []
-        start = instance.data["frameStart"]
-        end = instance.data["frameEnd"]
+        start = instance.data["frameStartHandle"]
+        end = instance.data["frameEndHandle"]

         for i in range(int(start), (int(end) + 1)):
             expected_files.append(
                 os.path.join(dir, (file % i)).replace("\\", "/"))
@@ -2,40 +2,106 @@
 """Collector plugin for frames data on ROP instances."""
 import hou  # noqa
 import pyblish.api
+from openpype.lib import BoolDef
 from openpype.hosts.houdini.api import lib
+from openpype.pipeline import OpenPypePyblishPluginMixin


-class CollectRopFrameRange(pyblish.api.InstancePlugin):
+class CollectRopFrameRange(pyblish.api.InstancePlugin,
+                           OpenPypePyblishPluginMixin):
     """Collect all frames which would be saved from the ROP nodes"""

-    order = pyblish.api.CollectorOrder
     hosts = ["houdini"]
+    # This specific order value is used so that
+    # this plugin runs after CollectAnatomyInstanceData
+    order = pyblish.api.CollectorOrder + 0.499
     label = "Collect RopNode Frame Range"
+    use_asset_handles = True

     def process(self, instance):
         node_path = instance.data.get("instance_node")
         if node_path is None:
             # Instance without instance node like a workfile instance
             self.log.debug(
                 "No instance node found for instance: {}".format(instance)
             )
             return

         ropnode = hou.node(node_path)
-        frame_data = lib.get_frame_data(ropnode)
-
-        if "frameStart" in frame_data and "frameEnd" in frame_data:
+        attr_values = self.get_attr_values_from_data(instance.data)

-            # Log artist friendly message about the collected frame range
-            message = (
-                "Frame range {0[frameStart]} - {0[frameEnd]}"
-            ).format(frame_data)
-            if frame_data.get("step", 1.0) != 1.0:
-                message += " with step {0[step]}".format(frame_data)
-            self.log.info(message)
+        if attr_values.get("use_handles", self.use_asset_handles):
+            asset_data = instance.data["assetEntity"]["data"]
+            handle_start = asset_data.get("handleStart", 0)
+            handle_end = asset_data.get("handleEnd", 0)
+        else:
+            handle_start = 0
+            handle_end = 0

-        instance.data.update(frame_data)
+        frame_data = lib.get_frame_data(
+            ropnode, handle_start, handle_end, self.log
+        )

-        # Add frame range to label if the instance has a frame range.
-        label = instance.data.get("label", instance.data["name"])
-        instance.data["label"] = (
-            "{0} [{1[frameStart]} - {1[frameEnd]}]".format(label,
-                                                           frame_data)
+        if not frame_data:
+            return

+        # Log debug message about the collected frame range
+        frame_start = frame_data["frameStart"]
+        frame_end = frame_data["frameEnd"]
+
+        if attr_values.get("use_handles", self.use_asset_handles):
+            self.log.debug(
+                "Full Frame range with Handles "
+                "[{frame_start_handle} - {frame_end_handle}]"
+                .format(
+                    frame_start_handle=frame_data["frameStartHandle"],
+                    frame_end_handle=frame_data["frameEndHandle"]
+                )
+            )
+        else:
+            self.log.debug(
+                "Use handles is deactivated for this instance, "
+                "start and end handles are set to 0."
+            )
+
+        # Log collected frame range to the user
+        message = "Frame range [{frame_start} - {frame_end}]".format(
+            frame_start=frame_start,
+            frame_end=frame_end
+        )
+        if handle_start or handle_end:
+            message += " with handles [{handle_start}]-[{handle_end}]".format(
+                handle_start=handle_start,
+                handle_end=handle_end
+            )
+        self.log.info(message)
+
+        if frame_data.get("byFrameStep", 1.0) != 1.0:
+            self.log.info("Frame steps {}".format(frame_data["byFrameStep"]))
+
+        instance.data.update(frame_data)
+
+        # Add frame range to label if the instance has a frame range.
+        label = instance.data.get("label", instance.data["name"])
+        instance.data["label"] = (
+            "{label} [{frame_start} - {frame_end}]"
+            .format(
+                label=label,
+                frame_start=frame_start,
+                frame_end=frame_end
+            )
+        )
+
+    @classmethod
+    def get_attribute_defs(cls):
+        return [
+            BoolDef("use_handles",
+                    tooltip="Disable this if you want the publisher to"
+                            " ignore start and end handles specified in the"
+                            " asset data for this publish instance",
+                    default=cls.use_asset_handles,
+                    label="Use asset handles")
+        ]
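Pyblish runs plugins in ascending `order`, so the fractional offsets in these collectors encode dependencies; a quick sketch of the relationship the comments describe:

    import pyblish.api

    # CollectRopFrameRange at +0.499 runs before the render product
    # collectors at +0.4999; both run after plain collectors at 0.0.
    orders = [
        pyblish.api.CollectorOrder,           # generic collectors
        pyblish.api.CollectorOrder + 0.499,   # CollectRopFrameRange
        pyblish.api.CollectorOrder + 0.4999,  # *ROPRenderProducts plugins
    ]
    assert orders == sorted(orders)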
@@ -24,7 +24,9 @@ class CollectVrayROPRenderProducts(pyblish.api.InstancePlugin):
     """

     label = "VRay ROP Render Products"
-    order = pyblish.api.CollectorOrder + 0.4
+    # This specific order value is used so that
+    # this plugin runs after CollectRopFrameRange
+    order = pyblish.api.CollectorOrder + 0.4999
     hosts = ["houdini"]
     families = ["vray_rop"]

@@ -115,8 +117,9 @@ class CollectVrayROPRenderProducts(pyblish.api.InstancePlugin):
             return path

         expected_files = []
-        start = instance.data["frameStart"]
-        end = instance.data["frameEnd"]
+        start = instance.data["frameStartHandle"]
+        end = instance.data["frameEndHandle"]

         for i in range(int(start), (int(end) + 1)):
             expected_files.append(
                 os.path.join(dir, (file % i)).replace("\\", "/"))
@@ -0,0 +1,96 @@ (new file)
# -*- coding: utf-8 -*-
import pyblish.api
from openpype.pipeline import PublishValidationError
from openpype.pipeline.publish import RepairAction
from openpype.hosts.houdini.api.action import SelectInvalidAction

import hou


class DisableUseAssetHandlesAction(RepairAction):
    label = "Disable use asset handles"
    icon = "mdi.toggle-switch-off"


class ValidateFrameRange(pyblish.api.InstancePlugin):
    """Validate Frame Range.

    Due to the usage of start and end handles,
    the frame range must be >= (start handle + end handle),
    otherwise frameEnd ends up smaller than frameStart.
    """

    order = pyblish.api.ValidatorOrder - 0.1
    hosts = ["houdini"]
    label = "Validate Frame Range"
    actions = [DisableUseAssetHandlesAction, SelectInvalidAction]

    def process(self, instance):
        invalid = self.get_invalid(instance)
        if invalid:
            raise PublishValidationError(
                title="Invalid Frame Range",
                message=(
                    "Invalid frame range because the instance "
                    "start frame ({0[frameStart]}) is higher than "
                    "the end frame ({0[frameEnd]})"
                    .format(instance.data)
                ),
                description=(
                    "## Invalid Frame Range\n"
                    "The frame range for the instance is invalid because "
                    "the start frame is higher than the end frame.\n\nThis "
                    "is likely due to asset handles being applied to your "
                    "instance or the ROP node's start frame "
                    "is set higher than the end frame.\n\nIf your ROP frame "
                    "range is correct and you do not want to apply asset "
                    "handles make sure to disable Use asset handles on the "
                    "publish instance."
                )
            )

    @classmethod
    def get_invalid(cls, instance):
        if not instance.data.get("instance_node"):
            return

        rop_node = hou.node(instance.data["instance_node"])
        if instance.data["frameStart"] > instance.data["frameEnd"]:
            cls.log.info(
                "The ROP node render range is set to "
                "{0[frameStartHandle]} - {0[frameEndHandle]} "
                "The asset handles applied to the instance are start handle "
                "{0[handleStart]} and end handle {0[handleEnd]}"
                .format(instance.data)
            )
            return [rop_node]

    @classmethod
    def repair(cls, instance):
        if not cls.get_invalid(instance):
            # Already fixed
            return

        # Disable use asset handles
        context = instance.context
        create_context = context.data["create_context"]
        instance_id = instance.data.get("instance_id")
        if not instance_id:
            cls.log.debug("'{}' must have instance id"
                          .format(instance))
            return

        created_instance = create_context.get_instance_by_id(instance_id)
        # Note: check the looked-up instance, not the id again
        if not created_instance:
            cls.log.debug("Unable to find instance '{}' by id"
                          .format(instance))
            return

        created_instance.publish_attributes["CollectRopFrameRange"]["use_handles"] = False  # noqa

        create_context.save_changes()
        cls.log.debug("use asset handles is turned off for '{}'"
                      .format(instance))
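A worked example of the failure mode this validator guards against (numbers are illustrative): when the ROP range is shorter than the combined handles, the handle arithmetic from `get_frame_data` inverts the range:

    # ROP 'f1'/'f2' range including handles: 1001-1005 (5 frames)
    frame_start_handle, frame_end_handle = 1001, 1005
    handle_start, handle_end = 5, 5  # from the asset data

    frame_start = frame_start_handle + handle_start  # 1006
    frame_end = frame_end_handle - handle_end        # 1000
    assert frame_start > frame_end  # -> ValidateFrameRange reports invalid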
@@ -333,21 +333,6 @@ def is_headless():
     return rt.maxops.isInNonInteractiveMode()


-@contextlib.contextmanager
-def viewport_camera(camera):
-    original = rt.viewport.getCamera()
-    if not original:
-        # if there is no original camera
-        # use the current camera as original
-        original = rt.getNodeByName(camera)
-    review_camera = rt.getNodeByName(camera)
-    try:
-        rt.viewport.setCamera(review_camera)
-        yield
-    finally:
-        rt.viewport.setCamera(original)
-
-
 def set_timeline(frameStart, frameEnd):
     """Set frame range for timeline editor in Max
     """
@@ -511,6 +496,25 @@ def get_plugins() -> list:
     return plugin_info_list


+@contextlib.contextmanager
+def render_resolution(width, height):
+    """Set render resolution option during context
+
+    Args:
+        width (int): render width
+        height (int): render height
+    """
+    current_renderWidth = rt.renderWidth
+    current_renderHeight = rt.renderHeight
+    try:
+        rt.renderWidth = width
+        rt.renderHeight = height
+        yield
+    finally:
+        rt.renderWidth = current_renderWidth
+        rt.renderHeight = current_renderHeight
+
+
 @contextlib.contextmanager
 def suspended_refresh():
     """Suspended refresh for scene and modify panel redraw.
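A usage sketch for the new context manager (this only runs inside 3ds Max, where `pymxs` is available):

    from pymxs import runtime as rt

    from openpype.hosts.max.api.lib import render_resolution

    with render_resolution(1920, 1080):
        # Render size is 1920x1080 within the block...
        print(rt.renderWidth, rt.renderHeight)
    # ...and restored to the previous values afterwards, even on error.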
openpype/hosts/max/api/preview_animation.py (new file): 309 lines

@@ -0,0 +1,309 @@
import logging
import contextlib
from pymxs import runtime as rt
from .lib import get_max_version, render_resolution

log = logging.getLogger("openpype.hosts.max")


@contextlib.contextmanager
def play_preview_when_done(has_autoplay):
    """Set preview playback option during context

    Args:
        has_autoplay (bool): autoplay during creating
            preview animation
    """
    current_playback = rt.preferences.playPreviewWhenDone
    try:
        rt.preferences.playPreviewWhenDone = has_autoplay
        yield
    finally:
        rt.preferences.playPreviewWhenDone = current_playback


@contextlib.contextmanager
def viewport_camera(camera):
    """Set viewport camera during context

    Args:
        camera (str): viewport camera
    """
    original = rt.viewport.getCamera()
    if not original:
        # if there is no original camera
        # use the current camera as original
        original = rt.getNodeByName(camera)
    review_camera = rt.getNodeByName(camera)
    try:
        rt.viewport.setCamera(review_camera)
        yield
    finally:
        rt.viewport.setCamera(original)


@contextlib.contextmanager
def viewport_preference_setting(general_viewport,
                                nitrous_viewport,
                                vp_button_mgr):
    """Set viewport settings during context
    ***For Max Version < 2024

    Args:
        general_viewport (dict): General viewport setting
        nitrous_viewport (dict): Nitrous setting for
            preview animation
        vp_button_mgr (dict): Viewport button manager setting
    """
    orig_vp_grid = rt.viewport.getGridVisibility(1)
    orig_vp_bkg = rt.viewport.IsSolidBackgroundColorMode()

    nitrousGraphicMgr = rt.NitrousGraphicsManager
    viewport_setting = nitrousGraphicMgr.GetActiveViewportSetting()
    vp_button_mgr_original = {
        key: getattr(rt.ViewportButtonMgr, key) for key in vp_button_mgr
    }
    nitrous_viewport_original = {
        key: getattr(viewport_setting, key) for key in nitrous_viewport
    }

    try:
        rt.viewport.setGridVisibility(1, general_viewport["dspGrid"])
        rt.viewport.EnableSolidBackgroundColorMode(general_viewport["dspBkg"])
        for key, value in vp_button_mgr.items():
            setattr(rt.ViewportButtonMgr, key, value)
        for key, value in nitrous_viewport.items():
            if nitrous_viewport[key] != nitrous_viewport_original[key]:
                setattr(viewport_setting, key, value)
        yield

    finally:
        rt.viewport.setGridVisibility(1, orig_vp_grid)
        rt.viewport.EnableSolidBackgroundColorMode(orig_vp_bkg)
        for key, value in vp_button_mgr_original.items():
            setattr(rt.ViewportButtonMgr, key, value)
        for key, value in nitrous_viewport_original.items():
            setattr(viewport_setting, key, value)


def _render_preview_animation_max_2024(
        filepath, start, end, percentSize, ext, viewport_options):
    """Render viewport preview with MaxScript using `CreatePreview`.
    ****For 3dsMax 2024+

    Args:
        filepath (str): filepath for render output without frame number and
            extension, for example: /path/to/file
        start (int): start frame
        end (int): end frame
        percentSize (float): render resolution multiplier by 100,
            e.g. 100.0 is 1x, 50.0 is 0.5x, 150.0 is 1.5x
        ext (str): image extension
        viewport_options (dict): viewport setting options, e.g.
            {"vpStyle": "defaultshading", "vpPreset": "highquality"}

    Returns:
        list: Created files
    """
    # the percentSize argument must be an integer
    percent = int(percentSize)
    filepath = filepath.replace("\\", "/")
    preview_output = f"{filepath}..{ext}"
    frame_template = f"{filepath}.{{:04d}}.{ext}"
    job_args = []
    for key, value in viewport_options.items():
        if isinstance(value, bool):
            if value:
                job_args.append(f"{key}:{value}")
        elif isinstance(value, str):
            if key == "vpStyle":
                if value == "Realistic":
                    value = "defaultshading"
                elif value == "Shaded":
                    log.warning(
                        "'Shaded' Mode not supported in "
                        "preview animation in Max 2024.\n"
                        "Using 'defaultshading' instead.")
                    value = "defaultshading"
                elif value == "ConsistentColors":
                    value = "flatcolor"
                else:
                    value = value.lower()
            elif key == "vpPreset":
                if value == "Quality":
                    value = "highquality"
                elif value == "Customize":
                    value = "userdefined"
                else:
                    value = value.lower()
            job_args.append(f"{key}: #{value}")

    job_str = (
        f'CreatePreview filename:"{preview_output}" outputAVI:false '
        f"percentSize:{percent} start:{start} end:{end} "
        f"{' '.join(job_args)} "
        "autoPlay:false"
    )
    rt.completeRedraw()
    rt.execute(job_str)
    # Return the created files
    return [frame_template.format(frame) for frame in range(start, end + 1)]


def _render_preview_animation_max_pre_2024(
        filepath, startFrame, endFrame, percentSize, ext):
    """Render viewport animation by creating bitmaps
    ***For 3dsMax Version < 2024

    Args:
        filepath (str): filepath without frame numbers and extension
        startFrame (int): start frame
        endFrame (int): end frame
        percentSize (float): render resolution multiplier by 100,
            e.g. 100.0 is 1x, 50.0 is 0.5x, 150.0 is 1.5x
        ext (str): image extension

    Returns:
        list: Created filepaths
    """
    # get the screenshot
    percent = percentSize / 100.0
    res_width = int(round(rt.renderWidth * percent))
    res_height = int(round(rt.renderHeight * percent))
    viewportRatio = float(res_width / res_height)
    frame_template = "{}.{{:04}}.{}".format(filepath, ext)
    frame_template = frame_template.replace("\\", "/")
    files = []
    user_cancelled = False
    for frame in range(startFrame, endFrame + 1):
        rt.sliderTime = frame
        filepath = frame_template.format(frame)
        preview_res = rt.bitmap(
            res_width, res_height, filename=filepath
        )
        dib = rt.gw.getViewportDib()
        dib_width = float(dib.width)
        dib_height = float(dib.height)
        renderRatio = float(dib_width / dib_height)
        if viewportRatio <= renderRatio:
            heightCrop = (dib_width / renderRatio)
            topEdge = int((dib_height - heightCrop) / 2.0)
            tempImage_bmp = rt.bitmap(dib_width, heightCrop)
            src_box_value = rt.Box2(0, topEdge, dib_width, heightCrop)
        else:
            widthCrop = dib_height * renderRatio
            leftEdge = int((dib_width - widthCrop) / 2.0)
            tempImage_bmp = rt.bitmap(widthCrop, dib_height)
            src_box_value = rt.Box2(0, leftEdge, dib_width, dib_height)
        rt.pasteBitmap(dib, tempImage_bmp, src_box_value, rt.Point2(0, 0))
        # copy the bitmap and close it
        rt.copy(tempImage_bmp, preview_res)
        rt.close(tempImage_bmp)
        rt.save(preview_res)
        rt.close(preview_res)
        rt.close(dib)
        files.append(filepath)
        if rt.keyboard.escPressed:
            user_cancelled = True
            break
        # clean up the cache
        rt.gc(delayed=True)
    if user_cancelled:
        raise RuntimeError("User cancelled rendering of viewport animation.")
    return files


def render_preview_animation(
        filepath,
        ext,
        camera,
        start_frame=None,
        end_frame=None,
        percentSize=100.0,
        width=1920,
        height=1080,
        viewport_options=None):
    """Render camera review animation

    Args:
        filepath (str): filepath to render to, without frame number and
            extension
        ext (str): output file extension
        camera (str): viewport camera for preview render
        start_frame (int): start frame
        end_frame (int): end frame
        percentSize (float): render resolution multiplier by 100,
            e.g. 100.0 is 1x, 50.0 is 0.5x, 150.0 is 1.5x
        width (int): render resolution width
        height (int): render resolution height
        viewport_options (dict): viewport setting options

    Returns:
        list: Rendered output files
    """
    if start_frame is None:
        start_frame = int(rt.animationRange.start)
    if end_frame is None:
        end_frame = int(rt.animationRange.end)

    if viewport_options is None:
        viewport_options = viewport_options_for_preview_animation()
    with play_preview_when_done(False):
        with viewport_camera(camera):
            with render_resolution(width, height):
                if int(get_max_version()) < 2024:
                    with viewport_preference_setting(
                        viewport_options["general_viewport"],
                        viewport_options["nitrous_viewport"],
                        viewport_options["vp_btn_mgr"]
                    ):
                        return _render_preview_animation_max_pre_2024(
                            filepath,
                            start_frame,
                            end_frame,
                            percentSize,
                            ext
                        )
                else:
                    return _render_preview_animation_max_2024(
                        filepath,
                        start_frame,
                        end_frame,
                        percentSize,
                        ext,
                        viewport_options
                    )


def viewport_options_for_preview_animation():
    """Get default viewport options for `render_preview_animation`.

    Returns:
        dict: viewport setting options
    """
    if int(get_max_version()) >= 2024:
        # Flat options consumed by _render_preview_animation_max_2024
        return {
            "vpStyle": "defaultshading",
            "vpPreset": "highquality",
            "vpTexture": False,
            "dspGeometry": True,
            "dspShapes": False,
            "dspLights": False,
            "dspCameras": False,
            "dspHelpers": False,
            "dspParticles": True,
            "dspBones": False,
            "dspBkg": True,
            "dspGrid": False,
            "dspSafeFrame": False,
            "dspFrameNums": False
        }
    else:
        # Structured options consumed by viewport_preference_setting
        viewport_options = {}
        viewport_options["general_viewport"] = {
            "dspBkg": True,
            "dspGrid": False
        }
        viewport_options["nitrous_viewport"] = {
            "VisualStyleMode": "defaultshading",
            "ViewportPreset": "highquality",
            "UseTextureEnabled": False
        }
        viewport_options["vp_btn_mgr"] = {
            "EnableButtons": False}
        return viewport_options
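A usage sketch for the module's entry point (this must run inside 3ds Max; the path and camera name are hypothetical, and in the extractors below all values come from instance data):

    from openpype.hosts.max.api.preview_animation import (
        render_preview_animation
    )

    files = render_preview_animation(
        filepath="C:/temp/reviewMain",  # no frame number, no extension
        ext="png",
        camera="Camera001",
        start_frame=1001,
        end_frame=1010,
        percentSize=100.0,
        width=1920,
        height=1080,
    )
    # files -> ["C:/temp/reviewMain.1001.png", ...,
    #           "C:/temp/reviewMain.1010.png"]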
@@ -13,31 +13,50 @@ class CreateReview(plugin.MaxCreator):
     icon = "video-camera"

     def create(self, subset_name, instance_data, pre_create_data):
-        instance_data["imageFormat"] = pre_create_data.get("imageFormat")
-        instance_data["keepImages"] = pre_create_data.get("keepImages")
-        instance_data["percentSize"] = pre_create_data.get("percentSize")
-        instance_data["rndLevel"] = pre_create_data.get("rndLevel")
+        # Transfer settings from pre create to instance
+        creator_attributes = instance_data.setdefault(
+            "creator_attributes", dict())
+        for key in ["imageFormat",
+                    "keepImages",
+                    "review_width",
+                    "review_height",
+                    "percentSize",
+                    "visualStyleMode",
+                    "viewportPreset",
+                    "vpTexture"]:
+            if key in pre_create_data:
+                creator_attributes[key] = pre_create_data[key]

         super(CreateReview, self).create(
             subset_name,
             instance_data,
             pre_create_data)

-    def get_pre_create_attr_defs(self):
-        attrs = super(CreateReview, self).get_pre_create_attr_defs()
+    def get_instance_attr_defs(self):
+        image_format_enum = ["exr", "jpg", "png"]

-        image_format_enum = [
-            "bmp", "cin", "exr", "jpg", "hdr", "rgb", "png",
-            "rla", "rpf", "dds", "sgi", "tga", "tif", "vrimg"
-        ]
+        visual_style_preset_enum = [
+            "Realistic", "Shaded", "Facets",
+            "ConsistentColors", "HiddenLine",
+            "Wireframe", "BoundingBox", "Ink",
+            "ColorInk", "Acrylic", "Tech", "Graphite",
+            "ColorPencil", "Pastel", "Clay", "ModelAssist"
+        ]
+        preview_preset_enum = [
+            "Quality", "Standard", "Performance",
+            "DXMode", "Customize"]

-        rndLevel_enum = [
-            "smoothhighlights", "smooth", "facethighlights",
-            "facet", "flat", "litwireframe", "wireframe", "box"
-        ]

-        return attrs + [
+        return [
             NumberDef("review_width",
                       label="Review width",
                       decimals=0,
                       minimum=0,
                       default=1920),
             NumberDef("review_height",
                       label="Review height",
                       decimals=0,
                       minimum=0,
                       default=1080),
             BoolDef("keepImages",
                     label="Keep Image Sequences",
                     default=False),

@@ -50,8 +69,20 @@ class CreateReview(plugin.MaxCreator):
                       default=100,
                       minimum=1,
                       decimals=0),
-            EnumDef("rndLevel",
-                    rndLevel_enum,
-                    default="smoothhighlights",
-                    label="Preference")
+            EnumDef("visualStyleMode",
+                    visual_style_preset_enum,
+                    default="Realistic",
+                    label="Preference"),
+            EnumDef("viewportPreset",
+                    preview_preset_enum,
+                    default="Quality",
+                    label="Pre-View Preset"),
+            BoolDef("vpTexture",
+                    label="Viewport Texture",
+                    default=False)
         ]

+    def get_pre_create_attr_defs(self):
+        # Use same attributes as for instance attributes
+        attrs = super().get_pre_create_attr_defs()
+        return attrs + self.get_instance_attr_defs()
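To make the transfer concrete, here is a hypothetical `pre_create_data` and where the values end up (all values illustrative):

    pre_create_data = {
        "imageFormat": "png",
        "keepImages": False,
        "review_width": 1920,
        "review_height": 1080,
        "percentSize": 100,
        "visualStyleMode": "Realistic",
        "viewportPreset": "Quality",
        "vpTexture": False,
    }
    # After create(), each key is copied into
    # instance_data["creator_attributes"], e.g.:
    #   instance_data["creator_attributes"]["review_width"] == 1920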
openpype/hosts/max/plugins/create/create_tycache.py (new file): 11 lines

@@ -0,0 +1,11 @@
# -*- coding: utf-8 -*-
"""Creator plugin for creating TyCache."""
from openpype.hosts.max.api import plugin


class CreateTyCache(plugin.MaxCreator):
    """Creator plugin for TyCache."""
    identifier = "io.openpype.creators.max.tycache"
    label = "TyCache"
    family = "tycache"
    icon = "gear"
openpype/hosts/max/plugins/load/load_tycache.py (new file): 64 lines

@@ -0,0 +1,64 @@
import os
from openpype.hosts.max.api import lib, maintained_selection
from openpype.hosts.max.api.lib import (
    unique_namespace,
)
from openpype.hosts.max.api.pipeline import (
    containerise,
    get_previous_loaded_object,
    update_custom_attribute_data
)
from openpype.pipeline import get_representation_path, load


class TyCacheLoader(load.LoaderPlugin):
    """TyCache Loader."""

    families = ["tycache"]
    representations = ["tyc"]
    order = -8
    icon = "code-fork"
    color = "green"

    def load(self, context, name=None, namespace=None, data=None):
        """Load tyCache"""
        from pymxs import runtime as rt
        filepath = os.path.normpath(self.filepath_from_context(context))
        obj = rt.tyCache()
        obj.filename = filepath

        namespace = unique_namespace(
            name + "_",
            suffix="_",
        )
        obj.name = f"{namespace}:{obj.name}"

        return containerise(
            name, [obj], context,
            namespace, loader=self.__class__.__name__)

    def update(self, container, representation):
        """Update the container"""
        from pymxs import runtime as rt

        path = get_representation_path(representation)
        node = rt.GetNodeByName(container["instance_node"])
        node_list = get_previous_loaded_object(node)
        update_custom_attribute_data(node, node_list)
        with maintained_selection():
            for tyc in node_list:
                tyc.filename = path
        lib.imprint(container["instance_node"], {
            "representation": str(representation["_id"])
        })

    def switch(self, container, representation):
        self.update(container, representation)

    def remove(self, container):
        """Remove the container"""
        from pymxs import runtime as rt

        node = rt.GetNodeByName(container["instance_node"])
        rt.Delete(node)
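For example, loading with `name="tycacheMain"` produces a namespace such as `tycacheMain_01_` (assuming `unique_namespace` appends an incrementing counter), so the created node is renamed to something like `tycacheMain_01_:tyCache001` before being containerised.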
@@ -5,7 +5,10 @@ import pyblish.api
 from pymxs import runtime as rt
 from openpype.lib import BoolDef
 from openpype.hosts.max.api.lib import get_max_version
-from openpype.pipeline.publish import OpenPypePyblishPluginMixin
+from openpype.pipeline.publish import (
+    OpenPypePyblishPluginMixin,
+    KnownPublishError
+)


 class CollectReview(pyblish.api.InstancePlugin,

@@ -19,30 +22,41 @@ class CollectReview(pyblish.api.InstancePlugin,

     def process(self, instance):
         nodes = instance.data["members"]
-        focal_length = None
-        camera_name = None
-        for node in nodes:
-            if rt.classOf(node) in rt.Camera.classes:
-                camera_name = node.name
-                focal_length = node.fov
+
+        def is_camera(node):
+            is_camera_class = rt.classOf(node) in rt.Camera.classes
+            return is_camera_class and rt.isProperty(node, "fov")
+
+        # Use first camera in instance
+        cameras = [node for node in nodes if is_camera(node)]
+        if cameras:
+            if len(cameras) > 1:
+                self.log.warning(
+                    "Found more than one camera in instance, using first "
+                    f"one found: {cameras[0]}"
+                )
+            camera = cameras[0]
+            camera_name = camera.name
+            focal_length = camera.fov
+        else:
+            raise KnownPublishError(
+                "Unable to find a valid camera in 'Review' container."
+                " Only native max Camera supported. "
+                f"Found objects: {nodes}"
+            )

         creator_attrs = instance.data["creator_attributes"]
         attr_values = self.get_attr_values_from_data(instance.data)
-        data = {
+
+        general_preview_data = {
             "review_camera": camera_name,
             "frameStart": instance.data["frameStartHandle"],
             "frameEnd": instance.data["frameEndHandle"],
             "percentSize": creator_attrs["percentSize"],
             "imageFormat": creator_attrs["imageFormat"],
             "keepImages": creator_attrs["keepImages"],
             "fps": instance.context.data["fps"],
-            "dspGeometry": attr_values.get("dspGeometry"),
-            "dspShapes": attr_values.get("dspShapes"),
-            "dspLights": attr_values.get("dspLights"),
-            "dspCameras": attr_values.get("dspCameras"),
-            "dspHelpers": attr_values.get("dspHelpers"),
-            "dspParticles": attr_values.get("dspParticles"),
-            "dspBones": attr_values.get("dspBones"),
-            "dspBkg": attr_values.get("dspBkg"),
-            "dspGrid": attr_values.get("dspGrid"),
-            "dspSafeFrame": attr_values.get("dspSafeFrame"),
-            "dspFrameNums": attr_values.get("dspFrameNums")
+            "review_width": creator_attrs["review_width"],
+            "review_height": creator_attrs["review_height"],
         }

         if int(get_max_version()) >= 2024:

@@ -55,14 +69,46 @@ class CollectReview(pyblish.api.InstancePlugin,
             instance.data["colorspaceDisplay"] = display
             instance.data["colorspaceView"] = view_transform

+            preview_data = {
+                "vpStyle": creator_attrs["visualStyleMode"],
+                "vpPreset": creator_attrs["viewportPreset"],
+                "vpTextures": creator_attrs["vpTexture"],
+                "dspGeometry": attr_values.get("dspGeometry"),
+                "dspShapes": attr_values.get("dspShapes"),
+                "dspLights": attr_values.get("dspLights"),
+                "dspCameras": attr_values.get("dspCameras"),
+                "dspHelpers": attr_values.get("dspHelpers"),
+                "dspParticles": attr_values.get("dspParticles"),
+                "dspBones": attr_values.get("dspBones"),
+                "dspBkg": attr_values.get("dspBkg"),
+                "dspGrid": attr_values.get("dspGrid"),
+                "dspSafeFrame": attr_values.get("dspSafeFrame"),
+                "dspFrameNums": attr_values.get("dspFrameNums")
+            }
+        else:
+            general_viewport = {
+                "dspBkg": attr_values.get("dspBkg"),
+                "dspGrid": attr_values.get("dspGrid")
+            }
+            nitrous_viewport = {
+                "VisualStyleMode": creator_attrs["visualStyleMode"],
+                "ViewportPreset": creator_attrs["viewportPreset"],
+                "UseTextureEnabled": creator_attrs["vpTexture"]
+            }
+            preview_data = {
+                "general_viewport": general_viewport,
+                "nitrous_viewport": nitrous_viewport,
+                "vp_btn_mgr": {"EnableButtons": False}
+            }

         # Enable ftrack functionality
         instance.data.setdefault("families", []).append('ftrack')

         burnin_members = instance.data.setdefault("burninDataMembers", {})
         burnin_members["focalLength"] = focal_length

-        self.log.debug(f"data:{data}")
-        instance.data.update(data)
+        instance.data.update(general_preview_data)
+        instance.data["viewport_options"] = preview_data

     @classmethod
     def get_attribute_defs(cls):
@@ -0,0 +1,76 @@ (new file)
import pyblish.api

from openpype.lib import EnumDef, TextDef
from openpype.pipeline.publish import OpenPypePyblishPluginMixin


class CollectTyCacheData(pyblish.api.InstancePlugin,
                         OpenPypePyblishPluginMixin):
    """Collect Channel Attributes for TyCache Export"""

    order = pyblish.api.CollectorOrder + 0.02
    label = "Collect tyCache attribute Data"
    hosts = ['max']
    families = ["tycache"]

    def process(self, instance):
        attr_values = self.get_attr_values_from_data(instance.data)
        attributes = {}
        for attr_key in attr_values.get("tycacheAttributes", []):
            attributes[attr_key] = True

        for key in ["tycacheLayer", "tycacheObjectName"]:
            attributes[key] = attr_values.get(key, "")

        # Collect the selected channel data before exporting
        instance.data["tyc_attrs"] = attributes
        self.log.debug(
            f"Found tycache attributes: {attributes}"
        )

    @classmethod
    def get_attribute_defs(cls):
        # TODO: Support the attributes with maxObject array
        tyc_attr_enum = ["tycacheChanAge", "tycacheChanGroups",
                         "tycacheChanPos", "tycacheChanRot",
                         "tycacheChanScale", "tycacheChanVel",
                         "tycacheChanSpin", "tycacheChanShape",
                         "tycacheChanMatID", "tycacheChanMapping",
                         # note the comma: without it the two adjacent
                         # strings would merge into one entry
                         "tycacheChanMaterials", "tycacheChanCustomFloat",
                         "tycacheChanCustomVector", "tycacheChanCustomTM",
                         "tycacheChanPhysX", "tycacheMeshBackup",
                         "tycacheCreateObject",
                         "tycacheCreateObjectIfNotCreated",
                         "tycacheAdditionalCloth",
                         "tycacheAdditionalSkin",
                         "tycacheAdditionalSkinID",
                         "tycacheAdditionalSkinIDValue",
                         "tycacheAdditionalTerrain",
                         "tycacheAdditionalVDB",
                         "tycacheAdditionalSplinePaths",
                         "tycacheAdditionalGeo",
                         "tycacheAdditionalGeoActivateModifiers",
                         "tycacheSplines",
                         "tycacheSplinesAdditionalSplines"
                         ]
        tyc_default_attrs = ["tycacheChanGroups", "tycacheChanPos",
                             "tycacheChanRot", "tycacheChanScale",
                             "tycacheChanVel", "tycacheChanShape",
                             "tycacheChanMatID", "tycacheChanMapping",
                             "tycacheChanMaterials",
                             "tycacheCreateObjectIfNotCreated"]
        return [
            EnumDef("tycacheAttributes",
                    tyc_attr_enum,
                    default=tyc_default_attrs,
                    multiselection=True,
                    label="TyCache Attributes"),
            TextDef("tycacheLayer",
                    label="TyCache Layer",
                    tooltip="Name of tycache layer",
                    default="$(tyFlowLayer)"),
            TextDef("tycacheObjectName",
                    label="TyCache Object Name",
                    tooltip="TyCache Object Name",
                    default="$(tyFlowName)_tyCache")
        ]
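For illustration, with the default selection above the collector stores a mapping like this on the instance (abbreviated):

    instance.data["tyc_attrs"] = {
        "tycacheChanGroups": True,
        "tycacheChanPos": True,
        # ...one True entry per selected channel...
        "tycacheCreateObjectIfNotCreated": True,
        "tycacheLayer": "$(tyFlowLayer)",
        "tycacheObjectName": "$(tyFlowName)_tyCache",
    }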
@@ -91,6 +91,9 @@ class ExtractAlembic(publish.Extractor,
         end = instance.data["frameEndHandle"]
         attr_values = self.get_attr_values_from_data(instance.data)
         custom_attrs = attr_values.get("custom_attrs", False)
+        if not custom_attrs:
+            self.log.debug(
+                "No Custom Attributes included in this abc export...")
         rt.AlembicExport.ArchiveType = rt.Name("ogawa")
         rt.AlembicExport.CoordinateSystem = rt.Name("maya")
         rt.AlembicExport.StartFrame = start

@@ -121,6 +124,9 @@ class ExtractModel(ExtractAlembic):
     def _set_abc_attributes(self, instance):
         attr_values = self.get_attr_values_from_data(instance.data)
         custom_attrs = attr_values.get("custom_attrs", False)
+        if not custom_attrs:
+            self.log.debug(
+                "No Custom Attributes included in this abc export...")
         rt.AlembicExport.ArchiveType = rt.name("ogawa")
         rt.AlembicExport.CoordinateSystem = rt.name("maya")
         rt.AlembicExport.CustomAttributes = custom_attrs
@@ -1,8 +1,9 @@
 import os
 import pyblish.api
-from pymxs import runtime as rt
 from openpype.pipeline import publish
-from openpype.hosts.max.api.lib import viewport_camera, get_max_version
+from openpype.hosts.max.api.preview_animation import (
+    render_preview_animation
+)


 class ExtractReviewAnimation(publish.Extractor):

@@ -18,24 +19,26 @@ class ExtractReviewAnimation(publish.Extractor):
     def process(self, instance):
         staging_dir = self.staging_dir(instance)
         ext = instance.data.get("imageFormat")
-        filename = "{0}..{1}".format(instance.name, ext)
         start = int(instance.data["frameStart"])
         end = int(instance.data["frameEnd"])
-        fps = int(instance.data["fps"])
-        filepath = os.path.join(staging_dir, filename)
-        filepath = filepath.replace("\\", "/")
-        filenames = self.get_files(
-            instance.name, start, end, ext)
+        filepath = os.path.join(staging_dir, instance.name)
         self.log.debug(
-            "Writing Review Animation to"
-            " '%s' to '%s'" % (filename, staging_dir))
+            "Writing Review Animation to '{}'".format(filepath))

         review_camera = instance.data["review_camera"]
-        with viewport_camera(review_camera):
-            preview_arg = self.set_preview_arg(
-                instance, filepath, start, end, fps)
-            rt.execute(preview_arg)
+        viewport_options = instance.data.get("viewport_options", {})
+        files = render_preview_animation(
+            filepath,
+            ext,
+            review_camera,
+            start,
+            end,
+            percentSize=instance.data["percentSize"],
+            width=instance.data["review_width"],
+            height=instance.data["review_height"],
+            viewport_options=viewport_options)
+
+        filenames = [os.path.basename(path) for path in files]

         tags = ["review"]
         if not instance.data.get("keepImages"):

@@ -59,44 +62,3 @@ class ExtractReviewAnimation(publish.Extractor):
         if "representations" not in instance.data:
             instance.data["representations"] = []
         instance.data["representations"].append(representation)
-
-    def get_files(self, filename, start, end, ext):
-        file_list = []
-        for frame in range(int(start), int(end) + 1):
-            actual_name = "{}.{:04}.{}".format(
-                filename, frame, ext)
-            file_list.append(actual_name)
-
-        return file_list
-
-    def set_preview_arg(self, instance, filepath,
-                        start, end, fps):
-        job_args = list()
-        default_option = f'CreatePreview filename:"{filepath}"'
-        job_args.append(default_option)
-        frame_option = f"outputAVI:false start:{start} end:{end} fps:{fps}"  # noqa
-        job_args.append(frame_option)
-        rndLevel = instance.data.get("rndLevel")
-        if rndLevel:
-            option = f"rndLevel:#{rndLevel}"
-            job_args.append(option)
-        options = [
-            "percentSize", "dspGeometry", "dspShapes",
-            "dspLights", "dspCameras", "dspHelpers", "dspParticles",
-            "dspBones", "dspBkg", "dspGrid", "dspSafeFrame", "dspFrameNums"
-        ]
-
-        for key in options:
-            enabled = instance.data.get(key)
-            if enabled:
-                job_args.append(f"{key}:{enabled}")
-
-        if get_max_version() == 2024:
-            # hardcoded for current stage
-            auto_play_option = "autoPlay:false"
-            job_args.append(auto_play_option)
-
-        job_str = " ".join(job_args)
-        self.log.debug(job_str)
-
-        return job_str
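The representation file list now comes straight from `render_preview_animation` rather than from the removed `get_files` helper; for an instance named `reviewMain` with frames 1001-1003 and `ext="png"`, `filenames` would end up as:

    ["reviewMain.1001.png", "reviewMain.1002.png", "reviewMain.1003.png"]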
|
|
|||
|
|
@@ -1,14 +1,11 @@
import os
import tempfile
import pyblish.api
from pymxs import runtime as rt
from openpype.pipeline import publish
from openpype.hosts.max.api.lib import viewport_camera, get_max_version
from openpype.hosts.max.api.preview_animation import render_preview_animation


class ExtractThumbnail(publish.Extractor):
    """
    Extract Thumbnail for Review
    """Extract Thumbnail for Review
    """

    order = pyblish.api.ExtractorOrder

@@ -17,34 +14,33 @@ class ExtractThumbnail(publish.Extractor):
    families = ["review"]

    def process(self, instance):
        # TODO: Create temp directory for thumbnail
        # - this is to avoid "override" of source file
        tmp_staging = tempfile.mkdtemp(prefix="pyblish_tmp_")
        self.log.debug(
            f"Create temp directory {tmp_staging} for thumbnail"
        )
        fps = int(instance.data["fps"])
        frame = int(instance.data["frameStartHandle"])
        instance.context.data["cleanupFullPaths"].append(tmp_staging)
        filename = "{name}_thumbnail..png".format(**instance.data)
        filepath = os.path.join(tmp_staging, filename)
        filepath = filepath.replace("\\", "/")
        thumbnail = self.get_filename(instance.name, frame)
        ext = instance.data.get("imageFormat")
        frame = int(instance.data["frameStart"])
        staging_dir = self.staging_dir(instance)
        filepath = os.path.join(
            staging_dir, f"{instance.name}_thumbnail")
        self.log.debug("Writing Thumbnail to '{}'".format(filepath))

        self.log.debug(
            "Writing Thumbnail to"
            " '%s' to '%s'" % (filename, tmp_staging))
        review_camera = instance.data["review_camera"]
        with viewport_camera(review_camera):
            preview_arg = self.set_preview_arg(
                instance, filepath, fps, frame)
            rt.execute(preview_arg)
        viewport_options = instance.data.get("viewport_options", {})
        files = render_preview_animation(
            filepath,
            ext,
            review_camera,
            start_frame=frame,
            end_frame=frame,
            percentSize=instance.data["percentSize"],
            width=instance.data["review_width"],
            height=instance.data["review_height"],
            viewport_options=viewport_options)

        thumbnail = next(os.path.basename(path) for path in files)

        representation = {
            "name": "thumbnail",
            "ext": "png",
            "ext": ext,
            "files": thumbnail,
            "stagingDir": tmp_staging,
            "stagingDir": staging_dir,
            "thumbnail": True
        }

@@ -53,39 +49,3 @@ class ExtractThumbnail(publish.Extractor):
        if "representations" not in instance.data:
            instance.data["representations"] = []
        instance.data["representations"].append(representation)

    def get_filename(self, filename, target_frame):
        thumbnail_name = "{}_thumbnail.{:04}.png".format(
            filename, target_frame
        )
        return thumbnail_name

    def set_preview_arg(self, instance, filepath, fps, frame):
        job_args = list()
        default_option = f'CreatePreview filename:"{filepath}"'
        job_args.append(default_option)
        frame_option = f"outputAVI:false start:{frame} end:{frame} fps:{fps}"  # noqa
        job_args.append(frame_option)
        rndLevel = instance.data.get("rndLevel")
        if rndLevel:
            option = f"rndLevel:#{rndLevel}"
            job_args.append(option)
        options = [
            "percentSize", "dspGeometry", "dspShapes",
            "dspLights", "dspCameras", "dspHelpers", "dspParticles",
            "dspBones", "dspBkg", "dspGrid", "dspSafeFrame", "dspFrameNums"
        ]

        for key in options:
            enabled = instance.data.get(key)
            if enabled:
                job_args.append(f"{key}:{enabled}")
        if get_max_version() == 2024:
            # hardcoded for current stage
            auto_play_option = "autoPlay:false"
            job_args.append(auto_play_option)

        job_str = " ".join(job_args)
        self.log.debug(job_str)

        return job_str
157 openpype/hosts/max/plugins/publish/extract_tycache.py (new file)

@@ -0,0 +1,157 @@
import os

import pyblish.api
from pymxs import runtime as rt

from openpype.hosts.max.api import maintained_selection
from openpype.pipeline import publish


class ExtractTyCache(publish.Extractor):
    """Extract tycache format with tyFlow operators.

    Notes:
        - TyCache only works for the TyFlow Pro plugin.

    Methods:
        self.get_export_particles_job_args(): sets up all job arguments
            for attributes to be exported in MAXScript

        self.get_operators(): get the export_particle operator

        self.get_files(): get the files with the tyFlow naming convention
            before publishing
    """

    order = pyblish.api.ExtractorOrder - 0.2
    label = "Extract TyCache"
    hosts = ["max"]
    families = ["tycache"]

    def process(self, instance):
        # TODO: let user decide the param
        start = int(instance.context.data["frameStart"])
        end = int(instance.context.data.get("frameEnd"))
        self.log.debug("Extracting Tycache...")

        stagingdir = self.staging_dir(instance)
        filename = "{name}.tyc".format(**instance.data)
        path = os.path.join(stagingdir, filename)
        filenames = self.get_files(instance, start, end)
        additional_attributes = instance.data.get("tyc_attrs", {})

        with maintained_selection():
            job_args = self.get_export_particles_job_args(
                instance.data["members"],
                start, end, path,
                additional_attributes)
            for job in job_args:
                rt.Execute(job)
        representations = instance.data.setdefault("representations", [])
        representation = {
            'name': 'tyc',
            'ext': 'tyc',
            'files': filenames if len(filenames) > 1 else filenames[0],
            "stagingDir": stagingdir,
        }
        representations.append(representation)

        # Get the tyMesh filename for extraction
        mesh_filename = f"{instance.name}__tyMesh.tyc"
        mesh_repres = {
            'name': 'tyMesh',
            'ext': 'tyc',
            'files': mesh_filename,
            "stagingDir": stagingdir,
            "outputName": '__tyMesh'
        }
        representations.append(mesh_repres)
        self.log.debug(f"Extracted instance '{instance.name}' to: {filenames}")

    def get_files(self, instance, start_frame, end_frame):
        """Get file names for tyFlow in tyCache format.

        Sets the filenames according to the tyCache file
        naming extension (.tyc) for publishing purposes.

        Actual file output from tyFlow in tyCache format:
            <InstanceName>__tyPart_<frame>.tyc
            e.g. tycacheMain__tyPart_00000.tyc

        Args:
            instance (pyblish.api.Instance): instance.
            start_frame (int): Start frame.
            end_frame (int): End frame.

        Returns:
            filenames (list): list of filenames
        """
        filenames = []
        for frame in range(int(start_frame), int(end_frame) + 1):
            filename = f"{instance.name}__tyPart_{frame:05}.tyc"
            filenames.append(filename)
        return filenames

    def get_export_particles_job_args(self, members, start, end,
                                      filepath, additional_attributes):
        """Sets up all job arguments for attributes.

        Those attributes are to be exported in MAXScript.

        Args:
            members (list): Member nodes of the instance.
            start (int): Start frame.
            end (int): End frame.
            filepath (str): Output path of the TyCache file.
            additional_attributes (dict): channel attributes data
                which need to be exported

        Returns:
            list of arguments for MAXScript.
        """
        settings = {
            "exportMode": 2,
            "frameStart": start,
            "frameEnd": end,
            "tyCacheFilename": filepath.replace("\\", "/")
        }
        settings.update(additional_attributes)

        job_args = []
        for operator in self.get_operators(members):
            for key, value in settings.items():
                if isinstance(value, str):
                    # embed in quotes
                    value = f'"{value}"'

                job_args.append(f"{operator}.{key}={value}")
            job_args.append(f"{operator}.exportTyCache()")
        return job_args

    @staticmethod
    def get_operators(members):
        """Get Export Particles Operator.

        Args:
            members (list): Instance members.

        Returns:
            list of particle operators
        """
        opt_list = []
        for member in members:
            obj = member.baseobject
            # TODO: see if it can use maxscript instead
            anim_names = rt.GetSubAnimNames(obj)
            for anim_name in anim_names:
                sub_anim = rt.GetSubAnim(obj, anim_name)
                boolean = rt.IsProperty(sub_anim, "Export_Particles")
                if boolean:
                    event_name = sub_anim.Name
                    opt = f"${member.Name}.{event_name}.export_particles"
                    opt_list.append(opt)

        return opt_list
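To make the generated MAXScript concrete, this is roughly what `get_export_particles_job_args()` emits for a single Export Particles operator. The node and event names, frame range, and path below are hypothetical:

```python
# Assuming a member "$PF_Main" whose event "Event_001" carries an
# Export Particles operator, the extractor executes statements like:
operator = "$PF_Main.Event_001.export_particles"
job_args = [
    f"{operator}.exportMode=2",
    f"{operator}.frameStart=1001",
    f"{operator}.frameEnd=1100",
    f'{operator}.tyCacheFilename="C:/staging/tycacheMain.tyc"',
    f"{operator}.exportTyCache()",  # triggers the actual export last
]
```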
@@ -14,29 +14,16 @@ class ValidatePointCloud(pyblish.api.InstancePlugin):
    def process(self, instance):
        """
        Notes:

            1. Validate the container only includes tyFlow objects
            2. Validate if the tyFlow operator Export Particle exists
            3. Validate if the export mode of Export Particle is at PRT format
            4. Validate the partition count and range set as default value
            1. Validate if the export mode of Export Particle is at PRT format
            2. Validate the partition count and range set as default value
                Partition Count : 100
                Partition Range : 1 to 1
            5. Validate if the custom attribute(s) exist as parameter(s)
            3. Validate if the custom attribute(s) exist as parameter(s)
                of the export_particle operator

        """
        report = []

        invalid_object = self.get_tyflow_object(instance)
        if invalid_object:
            report.append(f"Non tyFlow object found: {invalid_object}")

        invalid_operator = self.get_tyflow_operator(instance)
        if invalid_operator:
            report.append((
                "tyFlow ExportParticle operator not "
                f"found: {invalid_operator}"))

        if self.validate_export_mode(instance):
            report.append("The export mode is not at PRT")

@@ -52,46 +39,6 @@ class ValidatePointCloud(pyblish.api.InstancePlugin):
        if report:
            raise PublishValidationError(f"{report}")

    def get_tyflow_object(self, instance):
        invalid = []
        container = instance.data["instance_node"]
        self.log.info(f"Validating tyFlow container for {container}")

        selection_list = instance.data["members"]
        for sel in selection_list:
            sel_tmp = str(sel)
            if rt.ClassOf(sel) in [rt.tyFlow,
                                   rt.Editable_Mesh]:
                if "tyFlow" not in sel_tmp:
                    invalid.append(sel)
            else:
                invalid.append(sel)

        return invalid

    def get_tyflow_operator(self, instance):
        invalid = []
        container = instance.data["instance_node"]
        self.log.info(f"Validating tyFlow object for {container}")
        selection_list = instance.data["members"]
        bool_list = []
        for sel in selection_list:
            obj = sel.baseobject
            anim_names = rt.GetSubAnimNames(obj)
            for anim_name in anim_names:
                # get all the names of the related tyFlow nodes
                sub_anim = rt.GetSubAnim(obj, anim_name)
                # check if there is an export particle operator
                boolean = rt.IsProperty(sub_anim, "Export_Particles")
                bool_list.append(str(boolean))
            # if the export_particles property is not there
            # it means there is no "Export Particle" operator
            if "True" not in bool_list:
                self.log.error("Operator 'Export Particles' not found!")
                invalid.append(sel)

        return invalid

    def validate_custom_attribute(self, instance):
        invalid = []
        container = instance.data["instance_node"]
@@ -21,7 +21,7 @@ class ValidateResolutionSetting(pyblish.api.InstancePlugin,
        if not self.is_active(instance.data):
            return
        width, height = self.get_db_resolution(instance)
        current_width = rt.renderwidth
        current_width = rt.renderWidth
        current_height = rt.renderHeight
        if current_width != width and current_height != height:
            raise PublishValidationError("Resolution Setting "

88 openpype/hosts/max/plugins/publish/validate_tyflow_data.py (new file)
@@ -0,0 +1,88 @@
import pyblish.api
from openpype.pipeline import PublishValidationError
from pymxs import runtime as rt


class ValidateTyFlowData(pyblish.api.InstancePlugin):
    """Validate TyFlow plugins or relevant operators are set correctly."""

    order = pyblish.api.ValidatorOrder
    families = ["pointcloud", "tycache"]
    hosts = ["max"]
    label = "TyFlow Data"

    def process(self, instance):
        """
        Notes:
            1. Validate the container only includes tyFlow objects
            2. Validate that the tyFlow operator Export Particle exists

        """

        invalid_object = self.get_tyflow_object(instance)
        if invalid_object:
            self.log.error(f"Non tyFlow object found: {invalid_object}")

        invalid_operator = self.get_tyflow_operator(instance)
        if invalid_operator:
            self.log.error(
                "Operator 'Export Particles' not found in tyFlow editor.")
        if invalid_object or invalid_operator:
            raise PublishValidationError(
                "issues occurred",
                description="Container should only include tyFlow objects "
                "and the tyFlow operator 'Export Particle' should be in "
                "the tyFlow editor.")

    def get_tyflow_object(self, instance):
        """Get the nodes which are neither tyFlow object(s)
        nor editable mesh(es).

        Args:
            instance (pyblish.api.Instance): instance

        Returns:
            list: invalid nodes which are neither tyFlow
                object(s) nor editable mesh(es).
        """
        container = instance.data["instance_node"]
        self.log.debug(f"Validating tyFlow container for {container}")

        allowed_classes = [rt.tyFlow, rt.Editable_Mesh]
        return [
            member for member in instance.data["members"]
            if rt.ClassOf(member) not in allowed_classes
        ]

    def get_tyflow_operator(self, instance):
        """Check whether the Export Particle operators are in the node
        connections.

        Args:
            instance (pyblish.api.Instance): instance

        Returns:
            invalid (list): list of invalid nodes which do
                not have Export Particle operators as part
                of their node connections
        """
        invalid = []
        members = instance.data["members"]
        for member in members:
            obj = member.baseobject

            # There must be at least one animation with export
            # particles enabled
            has_export_particles = False
            anim_names = rt.GetSubAnimNames(obj)
            for anim_name in anim_names:
                # get name of the related tyFlow node
                sub_anim = rt.GetSubAnim(obj, anim_name)
                # check if there is an export particle operator
                if rt.IsProperty(sub_anim, "Export_Particles"):
                    has_export_particles = True
                    break

            if not has_export_particles:
                invalid.append(member)
        return invalid
@@ -204,8 +204,6 @@ class LoadImage(load.LoaderPlugin):
        last = first = int(frame_number)

        # Set the global in to the start frame of the sequence
        read_name = self._get_node_name(representation)
        node["name"].setValue(read_name)
        node["file"].setValue(file)
        node["origfirst"].setValue(first)
        node["first"].setValue(first)
@@ -13,8 +13,10 @@ from openpype.client import get_asset_by_name, get_assets
from openpype.pipeline import (
    register_loader_plugin_path,
    register_creator_plugin_path,
    register_inventory_action_path,
    deregister_loader_plugin_path,
    deregister_creator_plugin_path,
    deregister_inventory_action_path,
    AYON_CONTAINER_ID,
    legacy_io,
)

@@ -28,6 +30,7 @@ import unreal  # noqa
logger = logging.getLogger("openpype.hosts.unreal")

AYON_CONTAINERS = "AyonContainers"
AYON_ASSET_DIR = "/Game/Ayon/Assets"
CONTEXT_CONTAINER = "Ayon/context.json"
UNREAL_VERSION = semver.VersionInfo(
    *os.getenv("AYON_UNREAL_VERSION").split(".")

@@ -127,6 +130,7 @@ def install():
    pyblish.api.register_plugin_path(str(PUBLISH_PATH))
    register_loader_plugin_path(str(LOAD_PATH))
    register_creator_plugin_path(str(CREATE_PATH))
    register_inventory_action_path(str(INVENTORY_PATH))
    _register_callbacks()
    _register_events()

@@ -136,6 +140,7 @@ def uninstall():
    pyblish.api.deregister_plugin_path(str(PUBLISH_PATH))
    deregister_loader_plugin_path(str(LOAD_PATH))
    deregister_creator_plugin_path(str(CREATE_PATH))
    deregister_inventory_action_path(str(INVENTORY_PATH))


def _register_callbacks():
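Since `install()` now registers an inventory action path, here is a minimal sketch of the kind of plugin discovered on that path. The class, label, and body are hypothetical and for illustration only; the real actions this change introduces appear in the inventory files further below:

```python
from openpype.pipeline import InventoryAction


class LogContainers(InventoryAction):
    """Hypothetical action shown in the scene inventory context menu."""

    label = "Log selected containers"  # menu entry text (made up)
    icon = "info"

    def process(self, containers):
        # `containers` holds the inventory items the artist selected
        for container in containers:
            print(container.get("objectName"))
```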
@@ -649,6 +654,141 @@ def generate_sequence(h, h_dir):
    return sequence, (min_frame, max_frame)


def _get_comps_and_assets(
    component_class, asset_class, old_assets, new_assets, selected
):
    eas = unreal.get_editor_subsystem(unreal.EditorActorSubsystem)

    components = []
    if selected:
        sel_actors = eas.get_selected_level_actors()
        for actor in sel_actors:
            comps = actor.get_components_by_class(component_class)
            components.extend(comps)
    else:
        comps = eas.get_all_level_actors_components()
        components = [
            c for c in comps if isinstance(c, component_class)
        ]

    # Get all the static meshes among the old assets in a dictionary with
    # the name as key
    selected_old_assets = {}
    for a in old_assets:
        asset = unreal.EditorAssetLibrary.load_asset(a)
        if isinstance(asset, asset_class):
            selected_old_assets[asset.get_name()] = asset

    # Get all the static meshes among the new assets in a dictionary with
    # the name as key
    selected_new_assets = {}
    for a in new_assets:
        asset = unreal.EditorAssetLibrary.load_asset(a)
        if isinstance(asset, asset_class):
            selected_new_assets[asset.get_name()] = asset

    return components, selected_old_assets, selected_new_assets


def replace_static_mesh_actors(old_assets, new_assets, selected):
    smes = unreal.get_editor_subsystem(unreal.StaticMeshEditorSubsystem)

    static_mesh_comps, old_meshes, new_meshes = _get_comps_and_assets(
        unreal.StaticMeshComponent,
        unreal.StaticMesh,
        old_assets,
        new_assets,
        selected
    )

    for old_name, old_mesh in old_meshes.items():
        new_mesh = new_meshes.get(old_name)

        if not new_mesh:
            continue

        smes.replace_mesh_components_meshes(
            static_mesh_comps, old_mesh, new_mesh)


def replace_skeletal_mesh_actors(old_assets, new_assets, selected):
    skeletal_mesh_comps, old_meshes, new_meshes = _get_comps_and_assets(
        unreal.SkeletalMeshComponent,
        unreal.SkeletalMesh,
        old_assets,
        new_assets,
        selected
    )

    for old_name, old_mesh in old_meshes.items():
        new_mesh = new_meshes.get(old_name)

        if not new_mesh:
            continue

        for comp in skeletal_mesh_comps:
            if comp.get_skeletal_mesh_asset() == old_mesh:
                comp.set_skeletal_mesh_asset(new_mesh)


def replace_geometry_cache_actors(old_assets, new_assets, selected):
    geometry_cache_comps, old_caches, new_caches = _get_comps_and_assets(
        unreal.GeometryCacheComponent,
        unreal.GeometryCache,
        old_assets,
        new_assets,
        selected
    )

    for old_name, old_mesh in old_caches.items():
        new_mesh = new_caches.get(old_name)

        if not new_mesh:
            continue

        for comp in geometry_cache_comps:
            if comp.get_editor_property("geometry_cache") == old_mesh:
                comp.set_geometry_cache(new_mesh)


def delete_asset_if_unused(container, asset_content):
    ar = unreal.AssetRegistryHelpers.get_asset_registry()

    references = set()

    for asset_path in asset_content:
        asset = ar.get_asset_by_object_path(asset_path)
        refs = ar.get_referencers(
            asset.package_name,
            unreal.AssetRegistryDependencyOptions(
                include_soft_package_references=False,
                include_hard_package_references=True,
                include_searchable_names=False,
                include_soft_management_references=False,
                include_hard_management_references=False
            ))
        if not refs:
            continue
        references = references.union(set(refs))

    # Filter out references that are in the Temp folder
    cleaned_references = {
        ref for ref in references if not str(ref).startswith("/Temp/")}

    # Check which of the references are Levels
    for ref in cleaned_references:
        loaded_asset = unreal.EditorAssetLibrary.load_asset(ref)
        if isinstance(loaded_asset, unreal.World):
            # If there is at least a level, we can stop; we don't want to
            # delete the container
            return

    unreal.log("Previous version unused, deleting...")

    # No levels, delete the asset
    unreal.EditorAssetLibrary.delete_directory(container["namespace"])


@contextmanager
def maintained_selection():
    """Stub to be either implemented or replaced.
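An engine-free sketch of the pruning rule `delete_asset_if_unused` applies: gather hard referencers, ignore transient packages, and keep the container if any level still points at it. The package names below are made up for illustration:

```python
# Hypothetical referencer package names gathered from the asset registry.
references = {
    "/Temp/Untitled_Autosave",
    "/Game/Maps/ShotA",
    "/Game/Ayon/Assets/propB",
}

# Transient /Temp/ packages are dropped first, as in the function above.
cleaned = {ref for ref in references if not ref.startswith("/Temp/")}

# Deletion only proceeds when no remaining referencer is a level
# (unreal.World); here "/Game/Maps/ShotA" would veto it.
print(sorted(cleaned))
```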
@@ -0,0 +1,66 @@
import unreal

from openpype.hosts.unreal.api.tools_ui import qt_app_context
from openpype.hosts.unreal.api.pipeline import delete_asset_if_unused
from openpype.pipeline import InventoryAction


class DeleteUnusedAssets(InventoryAction):
    """Delete all the assets that are not used in any level."""

    label = "Delete Unused Assets"
    icon = "trash"
    color = "red"
    order = 1

    dialog = None

    def _delete_unused_assets(self, containers):
        allowed_families = ["model", "rig"]

        for container in containers:
            container_dir = container.get("namespace")
            if container.get("family") not in allowed_families:
                unreal.log_warning(
                    f"Container {container_dir} is not supported.")
                continue

            asset_content = unreal.EditorAssetLibrary.list_assets(
                container_dir, recursive=True, include_folder=False
            )

            delete_asset_if_unused(container, asset_content)

    def _show_confirmation_dialog(self, containers):
        from qtpy import QtCore
        from openpype.widgets import popup
        from openpype.style import load_stylesheet

        dialog = popup.Popup()
        dialog.setWindowFlags(
            QtCore.Qt.Window
            | QtCore.Qt.WindowStaysOnTopHint
        )
        dialog.setFocusPolicy(QtCore.Qt.StrongFocus)
        dialog.setWindowTitle("Delete all unused assets")
        dialog.setMessage(
            "You are about to delete all the assets in the project that \n"
            "are not used in any level. Are you sure you want to continue?"
        )
        dialog.setButtonText("Delete")

        dialog.on_clicked.connect(
            lambda: self._delete_unused_assets(containers)
        )

        dialog.show()
        dialog.raise_()
        dialog.activateWindow()
        dialog.setStyleSheet(load_stylesheet())

        self.dialog = dialog

    def process(self, containers):
        with qt_app_context():
            self._show_confirmation_dialog(containers)

84 openpype/hosts/unreal/plugins/inventory/update_actors.py (new file)
@@ -0,0 +1,84 @@
import unreal

from openpype.hosts.unreal.api.pipeline import (
    ls,
    replace_static_mesh_actors,
    replace_skeletal_mesh_actors,
    replace_geometry_cache_actors,
)
from openpype.pipeline import InventoryAction


def update_assets(containers, selected):
    allowed_families = ["model", "rig"]

    # Get all the containers in the Unreal Project
    all_containers = ls()

    for container in containers:
        container_dir = container.get("namespace")
        if container.get("family") not in allowed_families:
            unreal.log_warning(
                f"Container {container_dir} is not supported.")
            continue

        # Get all containers with same asset_name but different objectName.
        # These are the containers that need to be updated in the level.
        sa_containers = [
            i
            for i in all_containers
            if (
                i.get("asset_name") == container.get("asset_name") and
                i.get("objectName") != container.get("objectName")
            )
        ]

        asset_content = unreal.EditorAssetLibrary.list_assets(
            container_dir, recursive=True, include_folder=False
        )

        # Update all actors in level
        for sa_cont in sa_containers:
            sa_dir = sa_cont.get("namespace")
            old_content = unreal.EditorAssetLibrary.list_assets(
                sa_dir, recursive=True, include_folder=False
            )

            if container.get("family") == "rig":
                replace_skeletal_mesh_actors(
                    old_content, asset_content, selected)
                replace_static_mesh_actors(
                    old_content, asset_content, selected)
            elif container.get("family") == "model":
                if container.get("loader") == "PointCacheAlembicLoader":
                    replace_geometry_cache_actors(
                        old_content, asset_content, selected)
                else:
                    replace_static_mesh_actors(
                        old_content, asset_content, selected)

        unreal.EditorLevelLibrary.save_current_level()


class UpdateAllActors(InventoryAction):
    """Update all the Actors in the current level to the version of the asset
    selected in the scene manager.
    """

    label = "Replace all Actors in level to this version"
    icon = "arrow-up"

    def process(self, containers):
        update_assets(containers, False)


class UpdateSelectedActors(InventoryAction):
    """Update only the selected Actors in the current level to the version
    of the asset selected in the scene manager.
    """

    label = "Replace selected Actors in level to this version"
    icon = "arrow-up"

    def process(self, containers):
        update_assets(containers, True)
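The update helpers above only consult a handful of keys on each container dict returned by `ls()`. A sketch of the shape they assume, with illustrative values (the actual dicts carry more fields):

```python
# Illustrative container data; every value here is made up.
container = {
    "family": "model",                      # gates allowed_families
    "namespace": "/Game/Ayon/Assets/chairA/modelMain_v002",  # content dir
    "asset_name": "chairA_modelMain",       # groups versions of one asset
    "objectName": "chairA_modelMain_CON",   # distinguishes the versions
    "loader": "PointCacheAlembicLoader",    # selects the replace helper
}
```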
@@ -69,7 +69,7 @@ class AnimationAlembicLoader(plugin.Loader):
        """

        # Create directory for asset and ayon container
        root = "/Game/Ayon/Assets"
        root = unreal_pipeline.AYON_ASSET_DIR
        asset = context.get('asset').get('name')
        suffix = "_CON"
        if asset:
@@ -7,7 +7,11 @@ from openpype.pipeline import (
    AYON_CONTAINER_ID
)
from openpype.hosts.unreal.api import plugin
from openpype.hosts.unreal.api import pipeline as unreal_pipeline
from openpype.hosts.unreal.api.pipeline import (
    AYON_ASSET_DIR,
    create_container,
    imprint,
)

import unreal  # noqa

@@ -21,8 +25,11 @@ class PointCacheAlembicLoader(plugin.Loader):
    icon = "cube"
    color = "orange"

    root = AYON_ASSET_DIR

    @staticmethod
    def get_task(
        self, filename, asset_dir, asset_name, replace,
        filename, asset_dir, asset_name, replace,
        frame_start=None, frame_end=None
    ):
        task = unreal.AssetImportTask()

@@ -38,8 +45,6 @@ class PointCacheAlembicLoader(plugin.Loader):
        task.set_editor_property('automated', True)
        task.set_editor_property('save', True)

        # set import options here
        # Unreal 4.24 ignores the settings. It works with Unreal 4.26
        options.set_editor_property(
            'import_type', unreal.AlembicImportType.GEOMETRY_CACHE)
@@ -64,13 +69,42 @@ class PointCacheAlembicLoader(plugin.Loader):

        return task

    def load(self, context, name, namespace, data):
        """Load and containerise representation into Content Browser.
    def import_and_containerize(
        self, filepath, asset_dir, asset_name, container_name,
        frame_start, frame_end
    ):
        unreal.EditorAssetLibrary.make_directory(asset_dir)

        This is a two-step process. First, import the FBX to a temporary
        path, then call `containerise()` on it - this moves all content to
        a new directory, creates an AssetContainer there and imprints it
        with metadata. This marks the path as a container.
        task = self.get_task(
            filepath, asset_dir, asset_name, False, frame_start, frame_end)

        unreal.AssetToolsHelpers.get_asset_tools().import_asset_tasks([task])

        # Create Asset Container
        create_container(container=container_name, path=asset_dir)

    def imprint(
        self, asset, asset_dir, container_name, asset_name, representation,
        frame_start, frame_end
    ):
        data = {
            "schema": "ayon:container-2.0",
            "id": AYON_CONTAINER_ID,
            "asset": asset,
            "namespace": asset_dir,
            "container_name": container_name,
            "asset_name": asset_name,
            "loader": str(self.__class__.__name__),
            "representation": representation["_id"],
            "parent": representation["parent"],
            "family": representation["context"]["family"],
            "frame_start": frame_start,
            "frame_end": frame_end
        }
        imprint(f"{asset_dir}/{container_name}", data)

    def load(self, context, name, namespace, options):
        """Load and containerise representation into Content Browser.

        Args:
            context (dict): application context

@@ -79,30 +113,28 @@ class PointCacheAlembicLoader(plugin.Loader):
                This is not passed here, so namespace is set
                by `containerise()` because only then we know
                the real path.
            data (dict): Those would be data to be imprinted. This is not used
                now, data are imprinted by `containerise()`.
            data (dict): Those would be data to be imprinted.

        Returns:
            list(str): list of container content

        """
        # Create directory for asset and Ayon container
        root = "/Game/Ayon/Assets"
        asset = context.get('asset').get('name')
        suffix = "_CON"
        if asset:
            asset_name = "{}_{}".format(asset, name)
        asset_name = f"{asset}_{name}" if asset else f"{name}"
        version = context.get('version')
        # Check if version is hero version and use different name
        if not version.get("name") and version.get('type') == "hero_version":
            name_version = f"{name}_hero"
        else:
            asset_name = "{}".format(name)
            name_version = f"{name}_v{version.get('name'):03d}"

        tools = unreal.AssetToolsHelpers().get_asset_tools()
        asset_dir, container_name = tools.create_unique_asset_name(
            "{}/{}/{}".format(root, asset, name), suffix="")
            f"{self.root}/{asset}/{name_version}", suffix="")

        container_name += suffix

        unreal.EditorAssetLibrary.make_directory(asset_dir)

        frame_start = context.get('asset').get('data').get('frameStart')
        frame_end = context.get('asset').get('data').get('frameEnd')
@@ -111,30 +143,16 @@ class PointCacheAlembicLoader(plugin.Loader):
        if frame_start == frame_end:
            frame_end += 1

        path = self.filepath_from_context(context)
        task = self.get_task(
            path, asset_dir, asset_name, False, frame_start, frame_end)
        if not unreal.EditorAssetLibrary.does_directory_exist(asset_dir):
            path = self.filepath_from_context(context)

        unreal.AssetToolsHelpers.get_asset_tools().import_asset_tasks([task])  # noqa: E501
            self.import_and_containerize(
                path, asset_dir, asset_name, container_name,
                frame_start, frame_end)

        # Create Asset Container
        unreal_pipeline.create_container(
            container=container_name, path=asset_dir)

        data = {
            "schema": "ayon:container-2.0",
            "id": AYON_CONTAINER_ID,
            "asset": asset,
            "namespace": asset_dir,
            "container_name": container_name,
            "asset_name": asset_name,
            "loader": str(self.__class__.__name__),
            "representation": context["representation"]["_id"],
            "parent": context["representation"]["parent"],
            "family": context["representation"]["context"]["family"]
        }
        unreal_pipeline.imprint(
            "{}/{}".format(asset_dir, container_name), data)
        self.imprint(
            asset, asset_dir, container_name, asset_name,
            context["representation"], frame_start, frame_end)

        asset_content = unreal.EditorAssetLibrary.list_assets(
            asset_dir, recursive=True, include_folder=True

@@ -146,27 +164,43 @@ class PointCacheAlembicLoader(plugin.Loader):
        return asset_content

    def update(self, container, representation):
        name = container["asset_name"]
        source_path = get_representation_path(representation)
        destination_path = container["namespace"]
        representation["context"]
        context = representation.get("context", {})

        task = self.get_task(source_path, destination_path, name, False)
        # do import fbx and replace existing data
        unreal.AssetToolsHelpers.get_asset_tools().import_asset_tasks([task])
        unreal.log_warning(context)

        container_path = "{}/{}".format(container["namespace"],
                                        container["objectName"])
        # update metadata
        unreal_pipeline.imprint(
            container_path,
            {
                "representation": str(representation["_id"]),
                "parent": str(representation["parent"])
            })
        if not context:
            raise RuntimeError("No context found in representation")

        # Create directory for asset and Ayon container
        asset = context.get('asset')
        name = context.get('subset')
        suffix = "_CON"
        asset_name = f"{asset}_{name}" if asset else f"{name}"
        version = context.get('version')
        # Check if version is hero version and use different name
        name_version = f"{name}_v{version:03d}" if version else f"{name}_hero"
        tools = unreal.AssetToolsHelpers().get_asset_tools()
        asset_dir, container_name = tools.create_unique_asset_name(
            f"{self.root}/{asset}/{name_version}", suffix="")

        container_name += suffix

        frame_start = int(container.get("frame_start"))
        frame_end = int(container.get("frame_end"))

        if not unreal.EditorAssetLibrary.does_directory_exist(asset_dir):
            path = get_representation_path(representation)

            self.import_and_containerize(
                path, asset_dir, asset_name, container_name,
                frame_start, frame_end)

        self.imprint(
            asset, asset_dir, container_name, asset_name, representation,
            frame_start, frame_end)

        asset_content = unreal.EditorAssetLibrary.list_assets(
            destination_path, recursive=True, include_folder=True
            asset_dir, recursive=True, include_folder=False
        )

        for a in asset_content:
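The versioned folder naming shared by these loaders is easiest to read with concrete values. A runnable sketch (the subset name is hypothetical):

```python
name = "pointcacheMain"

# A numbered version lands in its own directory...
version = 3
print(f"{name}_v{version:03d}" if version else f"{name}_hero")
# -> pointcacheMain_v003

# ...while a hero version (no version number) gets a stable one:
version = None
print(f"{name}_v{version:03d}" if version else f"{name}_hero")
# -> pointcacheMain_hero
```

Because each version gets its own directory under the loader's `root`, `update()` can skip the import entirely when that directory already exists.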
@@ -7,7 +7,11 @@ from openpype.pipeline import (
    AYON_CONTAINER_ID
)
from openpype.hosts.unreal.api import plugin
from openpype.hosts.unreal.api import pipeline as unreal_pipeline
from openpype.hosts.unreal.api.pipeline import (
    AYON_ASSET_DIR,
    create_container,
    imprint,
)
import unreal  # noqa


@@ -20,10 +24,12 @@ class SkeletalMeshAlembicLoader(plugin.Loader):
    icon = "cube"
    color = "orange"

    def get_task(self, filename, asset_dir, asset_name, replace):
    root = AYON_ASSET_DIR

    @staticmethod
    def get_task(filename, asset_dir, asset_name, replace, default_conversion):
        task = unreal.AssetImportTask()
        options = unreal.AbcImportSettings()
        sm_settings = unreal.AbcStaticMeshSettings()
        conversion_settings = unreal.AbcConversionSettings(
            preset=unreal.AbcConversionPreset.CUSTOM,
            flip_u=False, flip_v=False,
@@ -37,72 +43,38 @@ class SkeletalMeshAlembicLoader(plugin.Loader):
        task.set_editor_property('automated', True)
        task.set_editor_property('save', True)

        # set import options here
        # Unreal 4.24 ignores the settings. It works with Unreal 4.26
        options.set_editor_property(
            'import_type', unreal.AlembicImportType.SKELETAL)

        options.static_mesh_settings = sm_settings
        options.conversion_settings = conversion_settings
        if not default_conversion:
            conversion_settings = unreal.AbcConversionSettings(
                preset=unreal.AbcConversionPreset.CUSTOM,
                flip_u=False, flip_v=False,
                rotation=[0.0, 0.0, 0.0],
                scale=[1.0, 1.0, 1.0])
            options.conversion_settings = conversion_settings

        task.options = options

        return task

    def load(self, context, name, namespace, data):
        """Load and containerise representation into Content Browser.
    def import_and_containerize(
        self, filepath, asset_dir, asset_name, container_name,
        default_conversion=False
    ):
        unreal.EditorAssetLibrary.make_directory(asset_dir)

        This is a two-step process. First, import the FBX to a temporary
        path, then call `containerise()` on it - this moves all content to
        a new directory, creates an AssetContainer there and imprints it
        with metadata. This marks the path as a container.
        task = self.get_task(
            filepath, asset_dir, asset_name, False, default_conversion)

        Args:
            context (dict): application context
            name (str): subset name
            namespace (str): in Unreal this is basically path to container.
                This is not passed here, so namespace is set
                by `containerise()` because only then we know
                the real path.
            data (dict): Those would be data to be imprinted. This is not
                used now, data are imprinted by `containerise()`.
        unreal.AssetToolsHelpers.get_asset_tools().import_asset_tasks([task])

        Returns:
            list(str): list of container content
        """

        # Create directory for asset and ayon container
        root = "/Game/Ayon/Assets"
        asset = context.get('asset').get('name')
        suffix = "_CON"
        if asset:
            asset_name = "{}_{}".format(asset, name)
        else:
            asset_name = "{}".format(name)
        version = context.get('version')
        # Check if version is hero version and use different name
        if not version.get("name") and version.get('type') == "hero_version":
            name_version = f"{name}_hero"
        else:
            name_version = f"{name}_v{version.get('name'):03d}"

        tools = unreal.AssetToolsHelpers().get_asset_tools()
        asset_dir, container_name = tools.create_unique_asset_name(
            f"{root}/{asset}/{name_version}", suffix="")

        container_name += suffix

        if not unreal.EditorAssetLibrary.does_directory_exist(asset_dir):
            unreal.EditorAssetLibrary.make_directory(asset_dir)

            path = self.filepath_from_context(context)
            task = self.get_task(path, asset_dir, asset_name, False)

            unreal.AssetToolsHelpers.get_asset_tools().import_asset_tasks([task])  # noqa: E501

            # Create Asset Container
            unreal_pipeline.create_container(
                container=container_name, path=asset_dir)
        # Create Asset Container
        create_container(container=container_name, path=asset_dir)

    def imprint(
        self, asset, asset_dir, container_name, asset_name, representation
    ):
        data = {
            "schema": "ayon:container-2.0",
            "id": AYON_CONTAINER_ID,
@@ -111,12 +83,57 @@ class SkeletalMeshAlembicLoader(plugin.Loader):
            "container_name": container_name,
            "asset_name": asset_name,
            "loader": str(self.__class__.__name__),
            "representation": context["representation"]["_id"],
            "parent": context["representation"]["parent"],
            "family": context["representation"]["context"]["family"]
            "representation": representation["_id"],
            "parent": representation["parent"],
            "family": representation["context"]["family"]
        }
        unreal_pipeline.imprint(
            f"{asset_dir}/{container_name}", data)
        imprint(f"{asset_dir}/{container_name}", data)

    def load(self, context, name, namespace, options):
        """Load and containerise representation into Content Browser.

        Args:
            context (dict): application context
            name (str): subset name
            namespace (str): in Unreal this is basically path to container.
                This is not passed here, so namespace is set
                by `containerise()` because only then we know
                the real path.
            data (dict): Those would be data to be imprinted.

        Returns:
            list(str): list of container content
        """
        # Create directory for asset and ayon container
        asset = context.get('asset').get('name')
        suffix = "_CON"
        asset_name = f"{asset}_{name}" if asset else f"{name}"
        version = context.get('version')
        # Check if version is hero version and use different name
        if not version.get("name") and version.get('type') == "hero_version":
            name_version = f"{name}_hero"
        else:
            name_version = f"{name}_v{version.get('name'):03d}"

        default_conversion = False
        if options.get("default_conversion"):
            default_conversion = options.get("default_conversion")

        tools = unreal.AssetToolsHelpers().get_asset_tools()
        asset_dir, container_name = tools.create_unique_asset_name(
            f"{self.root}/{asset}/{name_version}", suffix="")

        container_name += suffix

        if not unreal.EditorAssetLibrary.does_directory_exist(asset_dir):
            path = self.filepath_from_context(context)

            self.import_and_containerize(path, asset_dir, asset_name,
                                         container_name, default_conversion)

        self.imprint(
            asset, asset_dir, container_name, asset_name,
            context["representation"])

        asset_content = unreal.EditorAssetLibrary.list_assets(
            asset_dir, recursive=True, include_folder=True
@@ -128,26 +145,36 @@ class SkeletalMeshAlembicLoader(plugin.Loader):
        return asset_content

    def update(self, container, representation):
        name = container["asset_name"]
        source_path = get_representation_path(representation)
        destination_path = container["namespace"]
        context = representation.get("context", {})

        task = self.get_task(source_path, destination_path, name, True)
        if not context:
            raise RuntimeError("No context found in representation")

        # do import fbx and replace existing data
        unreal.AssetToolsHelpers.get_asset_tools().import_asset_tasks([task])
        container_path = "{}/{}".format(container["namespace"],
                                        container["objectName"])
        # update metadata
        unreal_pipeline.imprint(
            container_path,
            {
                "representation": str(representation["_id"]),
                "parent": str(representation["parent"])
            })
        # Create directory for asset and Ayon container
        asset = context.get('asset')
        name = context.get('subset')
        suffix = "_CON"
        asset_name = f"{asset}_{name}" if asset else f"{name}"
        version = context.get('version')
        # Check if version is hero version and use different name
        name_version = f"{name}_v{version:03d}" if version else f"{name}_hero"
        tools = unreal.AssetToolsHelpers().get_asset_tools()
        asset_dir, container_name = tools.create_unique_asset_name(
            f"{self.root}/{asset}/{name_version}", suffix="")

        container_name += suffix

        if not unreal.EditorAssetLibrary.does_directory_exist(asset_dir):
            path = get_representation_path(representation)

            self.import_and_containerize(path, asset_dir, asset_name,
                                         container_name)

        self.imprint(
            asset, asset_dir, container_name, asset_name, representation)

        asset_content = unreal.EditorAssetLibrary.list_assets(
            destination_path, recursive=True, include_folder=True
            asset_dir, recursive=True, include_folder=False
        )

        for a in asset_content:
@@ -7,7 +7,11 @@ from openpype.pipeline import (
    AYON_CONTAINER_ID
)
from openpype.hosts.unreal.api import plugin
from openpype.hosts.unreal.api import pipeline as unreal_pipeline
from openpype.hosts.unreal.api.pipeline import (
    AYON_ASSET_DIR,
    create_container,
    imprint,
)
import unreal  # noqa
@@ -20,14 +24,79 @@ class SkeletalMeshFBXLoader(plugin.Loader):
    icon = "cube"
    color = "orange"

    root = AYON_ASSET_DIR

    @staticmethod
    def get_task(filename, asset_dir, asset_name, replace):
        task = unreal.AssetImportTask()
        options = unreal.FbxImportUI()

        task.set_editor_property('filename', filename)
        task.set_editor_property('destination_path', asset_dir)
        task.set_editor_property('destination_name', asset_name)
        task.set_editor_property('replace_existing', replace)
        task.set_editor_property('automated', True)
        task.set_editor_property('save', True)

        options.set_editor_property(
            'automated_import_should_detect_type', False)
        options.set_editor_property('import_as_skeletal', True)
        options.set_editor_property('import_animations', False)
        options.set_editor_property('import_mesh', True)
        options.set_editor_property('import_materials', False)
        options.set_editor_property('import_textures', False)
        options.set_editor_property('skeleton', None)
        options.set_editor_property('create_physics_asset', False)

        options.set_editor_property(
            'mesh_type_to_import',
            unreal.FBXImportType.FBXIT_SKELETAL_MESH)

        options.skeletal_mesh_import_data.set_editor_property(
            'import_content_type',
            unreal.FBXImportContentType.FBXICT_ALL)

        options.skeletal_mesh_import_data.set_editor_property(
            'normal_import_method',
            unreal.FBXNormalImportMethod.FBXNIM_IMPORT_NORMALS)

        task.options = options

        return task

    def import_and_containerize(
        self, filepath, asset_dir, asset_name, container_name
    ):
        unreal.EditorAssetLibrary.make_directory(asset_dir)

        task = self.get_task(
            filepath, asset_dir, asset_name, False)

        unreal.AssetToolsHelpers.get_asset_tools().import_asset_tasks([task])

        # Create Asset Container
        create_container(container=container_name, path=asset_dir)

    def imprint(
        self, asset, asset_dir, container_name, asset_name, representation
    ):
        data = {
            "schema": "ayon:container-2.0",
            "id": AYON_CONTAINER_ID,
            "asset": asset,
            "namespace": asset_dir,
            "container_name": container_name,
            "asset_name": asset_name,
            "loader": str(self.__class__.__name__),
            "representation": representation["_id"],
            "parent": representation["parent"],
            "family": representation["context"]["family"]
        }
        imprint(f"{asset_dir}/{container_name}", data)

    def load(self, context, name, namespace, options):
        """Load and containerise representation into Content Browser.

        This is a two-step process. First, import the FBX to a temporary
        path, then call `containerise()` on it - this moves all content to
        a new directory, creates an AssetContainer there and imprints it
        with metadata. This marks the path as a container.

        Args:
            context (dict): application context
            name (str): subset name
@@ -35,23 +104,15 @@ class SkeletalMeshFBXLoader(plugin.Loader):
                This is not passed here, so namespace is set
                by `containerise()` because only then we know
                the real path.
            options (dict): Those would be data to be imprinted. This is not
                used now, data are imprinted by `containerise()`.
            data (dict): Those would be data to be imprinted.

        Returns:
            list(str): list of container content

        """
        # Create directory for asset and Ayon container
        root = "/Game/Ayon/Assets"
        if options and options.get("asset_dir"):
            root = options["asset_dir"]
        asset = context.get('asset').get('name')
        suffix = "_CON"
        if asset:
            asset_name = "{}_{}".format(asset, name)
        else:
            asset_name = "{}".format(name)
        asset_name = f"{asset}_{name}" if asset else f"{name}"
        version = context.get('version')
        # Check if version is hero version and use different name
        if not version.get("name") and version.get('type') == "hero_version":
@@ -61,67 +122,20 @@ class SkeletalMeshFBXLoader(plugin.Loader):

        tools = unreal.AssetToolsHelpers().get_asset_tools()
        asset_dir, container_name = tools.create_unique_asset_name(
            f"{root}/{asset}/{name_version}", suffix="")
            f"{self.root}/{asset}/{name_version}", suffix=""
        )

        container_name += suffix

        if not unreal.EditorAssetLibrary.does_directory_exist(asset_dir):
            unreal.EditorAssetLibrary.make_directory(asset_dir)

            task = unreal.AssetImportTask()

            path = self.filepath_from_context(context)
            task.set_editor_property('filename', path)
            task.set_editor_property('destination_path', asset_dir)
            task.set_editor_property('destination_name', asset_name)
            task.set_editor_property('replace_existing', False)
            task.set_editor_property('automated', True)
            task.set_editor_property('save', False)

            # set import options here
            options = unreal.FbxImportUI()
            options.set_editor_property('import_as_skeletal', True)
            options.set_editor_property('import_animations', False)
            options.set_editor_property('import_mesh', True)
            options.set_editor_property('import_materials', False)
            options.set_editor_property('import_textures', False)
            options.set_editor_property('skeleton', None)
            options.set_editor_property('create_physics_asset', False)
            self.import_and_containerize(
                path, asset_dir, asset_name, container_name)

            options.set_editor_property(
                'mesh_type_to_import',
                unreal.FBXImportType.FBXIT_SKELETAL_MESH)

            options.skeletal_mesh_import_data.set_editor_property(
                'import_content_type',
                unreal.FBXImportContentType.FBXICT_ALL)
            # set to import normals, otherwise Unreal will compute them
            # and it will take a long time, depending on the size of the mesh
            options.skeletal_mesh_import_data.set_editor_property(
                'normal_import_method',
                unreal.FBXNormalImportMethod.FBXNIM_IMPORT_NORMALS)

            task.options = options
            unreal.AssetToolsHelpers.get_asset_tools().import_asset_tasks([task])  # noqa: E501

            # Create Asset Container
            unreal_pipeline.create_container(
                container=container_name, path=asset_dir)

        data = {
            "schema": "ayon:container-2.0",
            "id": AYON_CONTAINER_ID,
            "asset": asset,
            "namespace": asset_dir,
            "container_name": container_name,
            "asset_name": asset_name,
            "loader": str(self.__class__.__name__),
            "representation": context["representation"]["_id"],
            "parent": context["representation"]["parent"],
            "family": context["representation"]["context"]["family"]
        }
        unreal_pipeline.imprint(
            f"{asset_dir}/{container_name}", data)
        self.imprint(
            asset, asset_dir, container_name, asset_name,
            context["representation"])

        asset_content = unreal.EditorAssetLibrary.list_assets(
            asset_dir, recursive=True, include_folder=True
@@ -133,58 +147,36 @@ class SkeletalMeshFBXLoader(plugin.Loader):
        return asset_content

    def update(self, container, representation):
        name = container["asset_name"]
        source_path = get_representation_path(representation)
        destination_path = container["namespace"]
        context = representation.get("context", {})

        task = unreal.AssetImportTask()
        if not context:
            raise RuntimeError("No context found in representation")

        task.set_editor_property('filename', source_path)
        task.set_editor_property('destination_path', destination_path)
        task.set_editor_property('destination_name', name)
        task.set_editor_property('replace_existing', True)
        task.set_editor_property('automated', True)
        task.set_editor_property('save', True)
        # Create directory for asset and Ayon container
        asset = context.get('asset')
        name = context.get('subset')
        suffix = "_CON"
        asset_name = f"{asset}_{name}" if asset else f"{name}"
        version = context.get('version')
        # Check if version is hero version and use different name
        name_version = f"{name}_v{version:03d}" if version else f"{name}_hero"
        tools = unreal.AssetToolsHelpers().get_asset_tools()
        asset_dir, container_name = tools.create_unique_asset_name(
            f"{self.root}/{asset}/{name_version}", suffix="")

        # set import options here
        options = unreal.FbxImportUI()
        options.set_editor_property('import_as_skeletal', True)
        options.set_editor_property('import_animations', False)
        options.set_editor_property('import_mesh', True)
        options.set_editor_property('import_materials', True)
        options.set_editor_property('import_textures', True)
        options.set_editor_property('skeleton', None)
        options.set_editor_property('create_physics_asset', False)
        container_name += suffix

        options.set_editor_property('mesh_type_to_import',
                                    unreal.FBXImportType.FBXIT_SKELETAL_MESH)
        if not unreal.EditorAssetLibrary.does_directory_exist(asset_dir):
            path = get_representation_path(representation)

        options.skeletal_mesh_import_data.set_editor_property(
            'import_content_type',
            unreal.FBXImportContentType.FBXICT_ALL
        )
        # set to import normals, otherwise Unreal will compute them
        # and it will take a long time, depending on the size of the mesh
        options.skeletal_mesh_import_data.set_editor_property(
            'normal_import_method',
            unreal.FBXNormalImportMethod.FBXNIM_IMPORT_NORMALS
        )
            self.import_and_containerize(
                path, asset_dir, asset_name, container_name)

        task.options = options
        # do import fbx and replace existing data
        unreal.AssetToolsHelpers.get_asset_tools().import_asset_tasks([task])  # noqa: E501
        container_path = "{}/{}".format(container["namespace"],
                                        container["objectName"])
        # update metadata
        unreal_pipeline.imprint(
            container_path,
            {
                "representation": str(representation["_id"]),
                "parent": str(representation["parent"])
            })
        self.imprint(
            asset, asset_dir, container_name, asset_name, representation)

        asset_content = unreal.EditorAssetLibrary.list_assets(
            destination_path, recursive=True, include_folder=True
            asset_dir, recursive=True, include_folder=False
        )

        for a in asset_content:
@@ -7,7 +7,11 @@ from openpype.pipeline import (
    AYON_CONTAINER_ID
)
from openpype.hosts.unreal.api import plugin
from openpype.hosts.unreal.api import pipeline as unreal_pipeline
from openpype.hosts.unreal.api.pipeline import (
    AYON_ASSET_DIR,
    create_container,
    imprint,
)
import unreal  # noqa
@@ -20,6 +24,8 @@ class StaticMeshAlembicLoader(plugin.Loader):
     icon = "cube"
     color = "orange"

+    root = AYON_ASSET_DIR
+
     @staticmethod
     def get_task(filename, asset_dir, asset_name, replace, default_conversion):
         task = unreal.AssetImportTask()
@@ -53,14 +59,40 @@ class StaticMeshAlembicLoader(plugin.Loader):

         return task

+    def import_and_containerize(
+        self, filepath, asset_dir, asset_name, container_name,
+        default_conversion=False
+    ):
+        unreal.EditorAssetLibrary.make_directory(asset_dir)
+
+        task = self.get_task(
+            filepath, asset_dir, asset_name, False, default_conversion)
+
+        unreal.AssetToolsHelpers.get_asset_tools().import_asset_tasks([task])
+
+        # Create Asset Container
+        create_container(container=container_name, path=asset_dir)
+
+    def imprint(
+        self, asset, asset_dir, container_name, asset_name, representation
+    ):
+        data = {
+            "schema": "ayon:container-2.0",
+            "id": AYON_CONTAINER_ID,
+            "asset": asset,
+            "namespace": asset_dir,
+            "container_name": container_name,
+            "asset_name": asset_name,
+            "loader": str(self.__class__.__name__),
+            "representation": representation["_id"],
+            "parent": representation["parent"],
+            "family": representation["context"]["family"]
+        }
+        imprint(f"{asset_dir}/{container_name}", data)
+
     def load(self, context, name, namespace, options):
         """Load and containerise representation into Content Browser.

         This is two step process. First, import FBX to temporary path and
         then call `containerise()` on it - this moves all content to new
         directory and then it will create AssetContainer there and imprint it
         with metadata. This will mark this path as container.

         Args:
             context (dict): application context
             name (str): subset name
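The two helpers added above centralize what every load and update pass needs: import into a version-specific directory, then stamp AYON container metadata next to the asset. A minimal sketch of the intended call order follows; `loader`, `filepath` and the naming values are illustrative stand-ins, not part of the commit:

```python
import unreal  # only available inside an Unreal Editor session

def load_version(loader, filepath, asset, name, version):
    # Version is baked into the folder name, so each version of a
    # subset lands in its own directory under the loader root.
    name_version = f"{name}_v{version:03d}" if version else f"{name}_hero"
    asset_dir = f"{loader.root}/{asset}/{name_version}"

    # Importing only when the directory is missing makes updates
    # idempotent: switching back to an already-loaded version reuses
    # the existing content instead of re-importing it.
    if not unreal.EditorAssetLibrary.does_directory_exist(asset_dir):
        loader.import_and_containerize(
            filepath, asset_dir, f"{asset}_{name}", f"{asset}_{name}_CON")
```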
@@ -68,15 +100,12 @@ class StaticMeshAlembicLoader(plugin.Loader):
                        This is not passed here, so namespace is set
                        by `containerise()` because only then we know
                        real path.
-            data (dict): Those would be data to be imprinted. This is not used
-                now, data are imprinted by `containerise()`.
+            data (dict): Those would be data to be imprinted.

         Returns:
             list(str): list of container content

         """
         # Create directory for asset and Ayon container
-        root = "/Game/Ayon/Assets"
         asset = context.get('asset').get('name')
         suffix = "_CON"
         asset_name = f"{asset}_{name}" if asset else f"{name}"
@@ -93,39 +122,22 @@ class StaticMeshAlembicLoader(plugin.Loader):

         tools = unreal.AssetToolsHelpers().get_asset_tools()
         asset_dir, container_name = tools.create_unique_asset_name(
-            f"{root}/{asset}/{name_version}", suffix="")
+            f"{self.root}/{asset}/{name_version}", suffix="")

         container_name += suffix

-        unreal.EditorAssetLibrary.make_directory(asset_dir)
+        if not unreal.EditorAssetLibrary.does_directory_exist(asset_dir):
+            path = self.filepath_from_context(context)

-        path = self.filepath_from_context(context)
-        task = self.get_task(
-            path, asset_dir, asset_name, False, default_conversion)
-
-        unreal.AssetToolsHelpers.get_asset_tools().import_asset_tasks([task])  # noqa: E501
+            self.import_and_containerize(path, asset_dir, asset_name,
+                                         container_name, default_conversion)

-        # Create Asset Container
-        unreal_pipeline.create_container(
-            container=container_name, path=asset_dir)
-
-        data = {
-            "schema": "ayon:container-2.0",
-            "id": AYON_CONTAINER_ID,
-            "asset": asset,
-            "namespace": asset_dir,
-            "container_name": container_name,
-            "asset_name": asset_name,
-            "loader": str(self.__class__.__name__),
-            "representation": context["representation"]["_id"],
-            "parent": context["representation"]["parent"],
-            "family": context["representation"]["context"]["family"]
-        }
-        unreal_pipeline.imprint(f"{asset_dir}/{container_name}", data)
+        self.imprint(
+            asset, asset_dir, container_name, asset_name,
+            context["representation"])

         asset_content = unreal.EditorAssetLibrary.list_assets(
-            asset_dir, recursive=True, include_folder=True
+            asset_dir, recursive=True, include_folder=False
         )

         for a in asset_content:
@@ -134,27 +146,36 @@ class StaticMeshAlembicLoader(plugin.Loader):
         return asset_content

     def update(self, container, representation):
-        name = container["asset_name"]
-        source_path = get_representation_path(representation)
-        destination_path = container["namespace"]
+        context = representation.get("context", {})

-        task = self.get_task(source_path, destination_path, name, True, False)
+        if not context:
+            raise RuntimeError("No context found in representation")

-        # do import fbx and replace existing data
-        unreal.AssetToolsHelpers.get_asset_tools().import_asset_tasks([task])
+        # Create directory for asset and Ayon container
+        asset = context.get('asset')
+        name = context.get('subset')
+        suffix = "_CON"
+        asset_name = f"{asset}_{name}" if asset else f"{name}"
+        version = context.get('version')
+        # Check if version is hero version and use different name
+        name_version = f"{name}_v{version:03d}" if version else f"{name}_hero"
+        tools = unreal.AssetToolsHelpers().get_asset_tools()
+        asset_dir, container_name = tools.create_unique_asset_name(
+            f"{self.root}/{asset}/{name_version}", suffix="")

-        container_path = "{}/{}".format(container["namespace"],
-                                        container["objectName"])
-        # update metadata
-        unreal_pipeline.imprint(
-            container_path,
-            {
-                "representation": str(representation["_id"]),
-                "parent": str(representation["parent"])
-            })
+        container_name += suffix
+
+        if not unreal.EditorAssetLibrary.does_directory_exist(asset_dir):
+            path = get_representation_path(representation)
+
+            self.import_and_containerize(path, asset_dir, asset_name,
+                                         container_name)
+
+        self.imprint(
+            asset, asset_dir, container_name, asset_name, representation)

         asset_content = unreal.EditorAssetLibrary.list_assets(
-            destination_path, recursive=True, include_folder=True
+            asset_dir, recursive=True, include_folder=False
         )

         for a in asset_content:
@@ -7,7 +7,11 @@ from openpype.pipeline import (
     AYON_CONTAINER_ID
 )
 from openpype.hosts.unreal.api import plugin
-from openpype.hosts.unreal.api import pipeline as unreal_pipeline
+from openpype.hosts.unreal.api.pipeline import (
+    AYON_ASSET_DIR,
+    create_container,
+    imprint,
+)
 import unreal  # noqa
@@ -20,6 +24,8 @@ class StaticMeshFBXLoader(plugin.Loader):
     icon = "cube"
     color = "orange"

+    root = AYON_ASSET_DIR
+
     @staticmethod
     def get_task(filename, asset_dir, asset_name, replace):
         task = unreal.AssetImportTask()
@@ -46,14 +52,39 @@ class StaticMeshFBXLoader(plugin.Loader):

         return task

+    def import_and_containerize(
+        self, filepath, asset_dir, asset_name, container_name
+    ):
+        unreal.EditorAssetLibrary.make_directory(asset_dir)
+
+        task = self.get_task(
+            filepath, asset_dir, asset_name, False)
+
+        unreal.AssetToolsHelpers.get_asset_tools().import_asset_tasks([task])
+
+        # Create Asset Container
+        create_container(container=container_name, path=asset_dir)
+
+    def imprint(
+        self, asset, asset_dir, container_name, asset_name, representation
+    ):
+        data = {
+            "schema": "ayon:container-2.0",
+            "id": AYON_CONTAINER_ID,
+            "asset": asset,
+            "namespace": asset_dir,
+            "container_name": container_name,
+            "asset_name": asset_name,
+            "loader": str(self.__class__.__name__),
+            "representation": representation["_id"],
+            "parent": representation["parent"],
+            "family": representation["context"]["family"]
+        }
+        imprint(f"{asset_dir}/{container_name}", data)
+
     def load(self, context, name, namespace, options):
         """Load and containerise representation into Content Browser.

         This is two step process. First, import FBX to temporary path and
         then call `containerise()` on it - this moves all content to new
         directory and then it will create AssetContainer there and imprint it
         with metadata. This will mark this path as container.

         Args:
             context (dict): application context
             name (str): subset name
@@ -61,23 +92,15 @@ class StaticMeshFBXLoader(plugin.Loader):
                        This is not passed here, so namespace is set
                        by `containerise()` because only then we know
                        real path.
-            options (dict): Those would be data to be imprinted. This is not
-                used now, data are imprinted by `containerise()`.
+            options (dict): Those would be data to be imprinted.

         Returns:
             list(str): list of container content
         """

         # Create directory for asset and Ayon container
-        root = "/Game/Ayon/Assets"
-        if options and options.get("asset_dir"):
-            root = options["asset_dir"]
         asset = context.get('asset').get('name')
         suffix = "_CON"
-        if asset:
-            asset_name = "{}_{}".format(asset, name)
-        else:
-            asset_name = "{}".format(name)
+        asset_name = f"{asset}_{name}" if asset else f"{name}"
         version = context.get('version')
         # Check if version is hero version and use different name
         if not version.get("name") and version.get('type') == "hero_version":
@@ -87,35 +110,20 @@ class StaticMeshFBXLoader(plugin.Loader):

         tools = unreal.AssetToolsHelpers().get_asset_tools()
         asset_dir, container_name = tools.create_unique_asset_name(
-            f"{root}/{asset}/{name_version}", suffix=""
+            f"{self.root}/{asset}/{name_version}", suffix=""
         )

         container_name += suffix

-        unreal.EditorAssetLibrary.make_directory(asset_dir)
+        if not unreal.EditorAssetLibrary.does_directory_exist(asset_dir):
+            path = self.filepath_from_context(context)

-        path = self.filepath_from_context(context)
-        task = self.get_task(path, asset_dir, asset_name, False)
+            self.import_and_containerize(
+                path, asset_dir, asset_name, container_name)

-        unreal.AssetToolsHelpers.get_asset_tools().import_asset_tasks([task])  # noqa: E501
-
-        # Create Asset Container
-        unreal_pipeline.create_container(
-            container=container_name, path=asset_dir)
-
-        data = {
-            "schema": "ayon:container-2.0",
-            "id": AYON_CONTAINER_ID,
-            "asset": asset,
-            "namespace": asset_dir,
-            "container_name": container_name,
-            "asset_name": asset_name,
-            "loader": str(self.__class__.__name__),
-            "representation": context["representation"]["_id"],
-            "parent": context["representation"]["parent"],
-            "family": context["representation"]["context"]["family"]
-        }
-        unreal_pipeline.imprint(f"{asset_dir}/{container_name}", data)
+        self.imprint(
+            asset, asset_dir, container_name, asset_name,
+            context["representation"])

         asset_content = unreal.EditorAssetLibrary.list_assets(
             asset_dir, recursive=True, include_folder=True
@@ -127,27 +135,36 @@ class StaticMeshFBXLoader(plugin.Loader):
         return asset_content

     def update(self, container, representation):
-        name = container["asset_name"]
-        source_path = get_representation_path(representation)
-        destination_path = container["namespace"]
+        context = representation.get("context", {})

-        task = self.get_task(source_path, destination_path, name, True)
+        if not context:
+            raise RuntimeError("No context found in representation")

-        # do import fbx and replace existing data
-        unreal.AssetToolsHelpers.get_asset_tools().import_asset_tasks([task])
+        # Create directory for asset and Ayon container
+        asset = context.get('asset')
+        name = context.get('subset')
+        suffix = "_CON"
+        asset_name = f"{asset}_{name}" if asset else f"{name}"
+        version = context.get('version')
+        # Check if version is hero version and use different name
+        name_version = f"{name}_v{version:03d}" if version else f"{name}_hero"
+        tools = unreal.AssetToolsHelpers().get_asset_tools()
+        asset_dir, container_name = tools.create_unique_asset_name(
+            f"{self.root}/{asset}/{name_version}", suffix="")

-        container_path = "{}/{}".format(container["namespace"],
-                                        container["objectName"])
-        # update metadata
-        unreal_pipeline.imprint(
-            container_path,
-            {
-                "representation": str(representation["_id"]),
-                "parent": str(representation["parent"])
-            })
+        container_name += suffix
+
+        if not unreal.EditorAssetLibrary.does_directory_exist(asset_dir):
+            path = get_representation_path(representation)
+
+            self.import_and_containerize(
+                path, asset_dir, asset_name, container_name)
+
+        self.imprint(
+            asset, asset_dir, container_name, asset_name, representation)

         asset_content = unreal.EditorAssetLibrary.list_assets(
-            destination_path, recursive=True, include_folder=True
+            asset_dir, recursive=True, include_folder=False
         )

         for a in asset_content:
@@ -41,7 +41,7 @@ class UAssetLoader(plugin.Loader):
         """

         # Create directory for asset and Ayon container
-        root = "/Game/Ayon/Assets"
+        root = unreal_pipeline.AYON_ASSET_DIR
         asset = context.get('asset').get('name')
         suffix = "_CON"
         asset_name = f"{asset}_{name}" if asset else f"{name}"
@@ -86,7 +86,7 @@ class YetiLoader(plugin.Loader):
             raise RuntimeError("Groom plugin is not activated.")

         # Create directory for asset and Ayon container
-        root = "/Game/Ayon/Assets"
+        root = unreal_pipeline.AYON_ASSET_DIR
         asset = context.get('asset').get('name')
         suffix = "_CON"
         asset_name = f"{asset}_{name}" if asset else f"{name}"
@@ -3,6 +3,7 @@ import os
 import re

 import pyblish.api
+from openpype.pipeline.publish import PublishValidationError


 class ValidateSequenceFrames(pyblish.api.InstancePlugin):
@@ -39,8 +40,22 @@ class ValidateSequenceFrames(pyblish.api.InstancePlugin):
         collections, remainder = clique.assemble(
             repr["files"], minimum_items=1, patterns=patterns)

-        assert not remainder, "Must not have remainder"
-        assert len(collections) == 1, "Must detect single collection"
+        if remainder:
+            raise PublishValidationError(
+                "Some files have been found outside a sequence. "
+                f"Invalid files: {remainder}")
+        if not collections:
+            raise PublishValidationError(
+                "We have been unable to find a sequence in the "
+                "files. Please ensure the files are named "
+                "appropriately. "
+                f"Files: {repr_files}")
+        if len(collections) > 1:
+            raise PublishValidationError(
+                "Multiple collections detected. There should be a single "
+                "collection per representation. "
+                f"Collections identified: {collections}")

         collection = collections[0]
         frames = list(collection.indexes)
@@ -53,8 +68,12 @@ class ValidateSequenceFrames(pyblish.api.InstancePlugin):
                               data["clipOut"])

             if current_range != required_range:
-                raise ValueError(f"Invalid frame range: {current_range} - "
-                                 f"expected: {required_range}")
+                raise PublishValidationError(
+                    f"Invalid frame range: {current_range} - "
+                    f"expected: {required_range}")

         missing = collection.holes().indexes
-        assert not missing, "Missing frames: %s" % (missing,)
+        if missing:
+            raise PublishValidationError(
+                "Missing frames have been detected. "
+                f"Missing frames: {missing}")
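The validator leans on the `clique` library for sequence detection; a minimal standalone sketch of the conditions it now reports through `PublishValidationError` (the file names are invented):

```python
import clique

files = ["render.1001.exr", "render.1002.exr", "render.1004.exr"]
collections, remainder = clique.assemble(files, minimum_items=1)

# remainder      -> files that match no sequence pattern
# len(collections) != 1 -> zero or multiple sequences detected
collection = collections[0]
frames = list(collection.indexes)      # [1001, 1002, 1004]
missing = collection.holes().indexes   # {1003}: reported as missing frames
```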
@@ -156,8 +156,7 @@ class CollectPublishedFiles(pyblish.api.ContextPlugin):
                 self.log.debug("frameEnd:: {}".format(
                     instance.data["frameEnd"]))
             except Exception:
-                self.log.warning("Unable to count frames "
-                                 "duration {}".format(no_of_frames))
+                self.log.warning("Unable to count frames duration.")

         instance.data["handleStart"] = asset_doc["data"]["handleStart"]
         instance.data["handleEnd"] = asset_doc["data"]["handleEnd"]
@@ -65,9 +65,11 @@ class HoudiniSubmitDeadline(abstract_submit_deadline.AbstractSubmitDeadline):
         job_info.BatchName += datetime.now().strftime("%d%m%Y%H%M%S")

         # Deadline requires integers in frame range
+        start = instance.data["frameStartHandle"]
+        end = instance.data["frameEndHandle"]
         frames = "{start}-{end}x{step}".format(
-            start=int(instance.data["frameStart"]),
-            end=int(instance.data["frameEnd"]),
+            start=int(start),
+            end=int(end),
             step=int(instance.data["byFrameStep"]),
         )
         job_info.Frames = frames
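The effect of the change in isolation: the submitted Deadline range now comes from the handle-inclusive keys, so handle frames render as well. Example values are invented:

```python
instance_data = {
    "frameStartHandle": 996,   # frameStart 1001 minus 5 handle frames
    "frameEndHandle": 1105,    # frameEnd 1100 plus 5 handle frames
    "byFrameStep": 1,
}

frames = "{start}-{end}x{step}".format(
    start=int(instance_data["frameStartHandle"]),
    end=int(instance_data["frameEndHandle"]),
    step=int(instance_data["byFrameStep"]),
)
print(frames)  # 996-1105x1
```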
@@ -63,7 +63,8 @@ class CollectResourcesPath(pyblish.api.InstancePlugin):
         "staticMesh",
         "skeletalMesh",
         "xgen",
-        "yeticacheUE"
+        "yeticacheUE",
+        "tycache"
     ]

     def process(self, instance):
@@ -141,7 +141,8 @@ class IntegrateAsset(pyblish.api.InstancePlugin):
         "online",
         "uasset",
         "blendScene",
-        "yeticacheUE"
+        "yeticacheUE",
+        "tycache"
     ]

     default_template_name = "publish"
@@ -829,9 +829,43 @@ def _convert_nuke_project_settings(ayon_settings, output):
     # NOTE 'monitorOutLut' is maybe not yet in v3 (ut should be)
     _convert_host_imageio(ayon_nuke)
     ayon_imageio = ayon_nuke["imageio"]
-    for item in ayon_imageio["nodes"]["requiredNodes"]:
+
+    # workfile
+    imageio_workfile = ayon_imageio["workfile"]
+    workfile_keys_mapping = (
+        ("color_management", "colorManagement"),
+        ("native_ocio_config", "OCIO_config"),
+        ("working_space", "workingSpaceLUT"),
+        ("thumbnail_space", "monitorLut"),
+    )
+    for src, dst in workfile_keys_mapping:
+        if (
+            src in imageio_workfile
+            and dst not in imageio_workfile
+        ):
+            imageio_workfile[dst] = imageio_workfile.pop(src)
+
+    # regex inputs
+    if "regex_inputs" in ayon_imageio:
+        ayon_imageio["regexInputs"] = ayon_imageio.pop("regex_inputs")
+
+    # nodes
+    ayon_imageio_nodes = ayon_imageio["nodes"]
+    if "required_nodes" in ayon_imageio_nodes:
+        ayon_imageio_nodes["requiredNodes"] = (
+            ayon_imageio_nodes.pop("required_nodes"))
+    if "override_nodes" in ayon_imageio_nodes:
+        ayon_imageio_nodes["overrideNodes"] = (
+            ayon_imageio_nodes.pop("override_nodes"))
+
+    for item in ayon_imageio_nodes["requiredNodes"]:
         if "nuke_node_class" in item:
             item["nukeNodeClass"] = item.pop("nuke_node_class")
         item["knobs"] = _convert_nuke_knobs(item["knobs"])
-    for item in ayon_imageio["nodes"]["overrideNodes"]:
+
+    for item in ayon_imageio_nodes["overrideNodes"]:
         if "nuke_node_class" in item:
             item["nukeNodeClass"] = item.pop("nuke_node_class")
         item["knobs"] = _convert_nuke_knobs(item["knobs"])

     output["nuke"] = ayon_nuke
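The conversion above repeats one idiom: move a value to the legacy key only when the new key is present and the legacy key is not already set. A standalone sketch with invented data:

```python
imageio_workfile = {"color_management": "OCIO", "working_space": "linear"}
mapping = (
    ("color_management", "colorManagement"),
    ("working_space", "workingSpaceLUT"),
)
for src, dst in mapping:
    # pop() removes the new-style key and reassigns it under the old name,
    # so downstream OpenPype code keeps reading the keys it expects
    if src in mapping and False:
        pass  # (guard shown expanded below)
    if src in imageio_workfile and dst not in imageio_workfile:
        imageio_workfile[dst] = imageio_workfile.pop(src)

print(imageio_workfile)
# {'colorManagement': 'OCIO', 'workingSpaceLUT': 'linear'}
```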
@@ -53,6 +53,11 @@
         "file": "{originalBasename}<.{@frame}><_{udim}>.{ext}",
         "path": "{@folder}/{@file}"
     },
+    "tycache": {
+        "folder": "{root[work]}/{project[name]}/{hierarchy}/{asset}/publish/{family}/{subset}/{@version}",
+        "file": "{originalBasename}.{ext}",
+        "path": "{@folder}/{@file}"
+    },
     "source": {
         "folder": "{root[work]}/{originalDirname}",
         "file": "{originalBasename}.{ext}",
@@ -66,6 +71,7 @@
     "simpleUnrealTextureHero": "Simple Unreal Texture - Hero",
     "simpleUnrealTexture": "Simple Unreal Texture",
     "online": "online",
+    "tycache": "tycache",
     "source": "source",
     "transient": "transient"
 }
@@ -546,6 +546,17 @@
             "task_types": [],
             "task_names": [],
             "template_name": "online"
         },
+        {
+            "families": [
+                "tycache"
+            ],
+            "hosts": [
+                "max"
+            ],
+            "task_types": [],
+            "task_names": [],
+            "template_name": "tycache"
+        }
     ],
     "hero_template_name_profiles": [
@@ -25,6 +25,12 @@
     },
     "shelves": [],
     "create": {
+        "CreateAlembicCamera": {
+            "enabled": true,
+            "default_variants": [
+                "Main"
+            ]
+        },
         "CreateArnoldAss": {
             "enabled": true,
             "default_variants": [
@@ -32,6 +38,66 @@
             ],
             "ext": ".ass"
         },
+        "CreateArnoldRop": {
+            "enabled": true,
+            "default_variants": [
+                "Main"
+            ]
+        },
+        "CreateCompositeSequence": {
+            "enabled": true,
+            "default_variants": [
+                "Main"
+            ]
+        },
+        "CreateHDA": {
+            "enabled": true,
+            "default_variants": [
+                "Main"
+            ]
+        },
+        "CreateKarmaROP": {
+            "enabled": true,
+            "default_variants": [
+                "Main"
+            ]
+        },
+        "CreateMantraROP": {
+            "enabled": true,
+            "default_variants": [
+                "Main"
+            ]
+        },
+        "CreatePointCache": {
+            "enabled": true,
+            "default_variants": [
+                "Main"
+            ]
+        },
+        "CreateBGEO": {
+            "enabled": true,
+            "default_variants": [
+                "Main"
+            ]
+        },
+        "CreateRedshiftProxy": {
+            "enabled": true,
+            "default_variants": [
+                "Main"
+            ]
+        },
+        "CreateRedshiftROP": {
+            "enabled": true,
+            "default_variants": [
+                "Main"
+            ]
+        },
+        "CreateReview": {
+            "enabled": true,
+            "default_variants": [
+                "Main"
+            ]
+        },
         "CreateStaticMesh": {
             "enabled": true,
             "default_variants": [
@@ -45,31 +111,13 @@
                 "UCX"
             ]
         },
-        "CreateAlembicCamera": {
+        "CreateUSD": {
             "enabled": true,
             "default_variants": [
                 "Main"
             ]
         },
-        "CreateCompositeSequence": {
-            "enabled": true,
-            "default_variants": [
-                "Main"
-            ]
-        },
-        "CreatePointCache": {
-            "enabled": true,
-            "default_variants": [
-                "Main"
-            ]
-        },
-        "CreateRedshiftROP": {
-            "enabled": true,
-            "default_variants": [
-                "Main"
-            ]
-        },
-        "CreateRemotePublish": {
+        "CreateUSDRender": {
             "enabled": true,
             "default_variants": [
                 "Main"
@@ -81,32 +129,42 @@
                 "Main"
             ]
         },
-        "CreateUSD": {
-            "enabled": false,
-            "default_variants": [
-                "Main"
-            ]
-        },
-        "CreateUSDModel": {
-            "enabled": false,
-            "default_variants": [
-                "Main"
-            ]
-        },
-        "USDCreateShadingWorkspace": {
-            "enabled": false,
-            "default_variants": [
-                "Main"
-            ]
-        },
-        "CreateUSDRender": {
-            "enabled": false,
+        "CreateVrayROP": {
+            "enabled": true,
             "default_variants": [
                 "Main"
             ]
         }
     },
     "publish": {
+        "CollectRopFrameRange": {
+            "use_asset_handles": true
+        },
+        "ValidateContainers": {
+            "enabled": true,
+            "optional": true,
+            "active": true
+        },
+        "ValidateMeshIsStatic": {
+            "enabled": true,
+            "optional": true,
+            "active": true
+        },
+        "ValidateReviewColorspace": {
+            "enabled": true,
+            "optional": true,
+            "active": true
+        },
+        "ValidateSubsetName": {
+            "enabled": true,
+            "optional": true,
+            "active": true
+        },
+        "ValidateUnrealStaticMeshName": {
+            "enabled": false,
+            "optional": true,
+            "active": true
+        },
         "ValidateWorkfilePaths": {
             "enabled": true,
             "optional": true,
@@ -118,31 +176,6 @@
                 "$HIP",
                 "$JOB"
             ]
-        },
-        "ValidateReviewColorspace": {
-            "enabled": true,
-            "optional": true,
-            "active": true
-        },
-        "ValidateContainers": {
-            "enabled": true,
-            "optional": true,
-            "active": true
-        },
-        "ValidateSubsetName": {
-            "enabled": true,
-            "optional": true,
-            "active": true
-        },
-        "ValidateMeshIsStatic": {
-            "enabled": true,
-            "optional": true,
-            "active": true
-        },
-        "ValidateUnrealStaticMeshName": {
-            "enabled": false,
-            "optional": true,
-            "active": true
         }
     }
 }
@@ -374,7 +374,7 @@
         "optional": true,
         "active": true
     },
-    "ValidateScript": {
+    "ValidateScriptAttributes": {
         "enabled": true,
         "optional": true,
         "active": true
@@ -4,6 +4,16 @@
     "key": "create",
     "label": "Creator plugins",
     "children": [
+        {
+            "type": "schema_template",
+            "name": "template_create_plugin",
+            "template_data": [
+                {
+                    "key": "CreateAlembicCamera",
+                    "label": "Create Alembic Camera"
+                }
+            ]
+        },
         {
             "type": "dict",
             "collapsible": true,
@@ -39,6 +49,52 @@
             ]

         },
+        {
+            "type": "schema_template",
+            "name": "template_create_plugin",
+            "template_data": [
+                {
+                    "key": "CreateArnoldRop",
+                    "label": "Create Arnold ROP"
+                },
+                {
+                    "key": "CreateCompositeSequence",
+                    "label": "Create Composite (Image Sequence)"
+                },
+                {
+                    "key": "CreateHDA",
+                    "label": "Create Houdini Digital Asset"
+                },
+                {
+                    "key": "CreateKarmaROP",
+                    "label": "Create Karma ROP"
+                },
+                {
+                    "key": "CreateMantraROP",
+                    "label": "Create Mantra ROP"
+                },
+                {
+                    "key": "CreatePointCache",
+                    "label": "Create PointCache (Abc)"
+                },
+                {
+                    "key": "CreateBGEO",
+                    "label": "Create PointCache (Bgeo)"
+                },
+                {
+                    "key": "CreateRedshiftProxy",
+                    "label": "Create Redshift Proxy"
+                },
+                {
+                    "key": "CreateRedshiftROP",
+                    "label": "Create Redshift ROP"
+                },
+                {
+                    "key": "CreateReview",
+                    "label": "Create Review"
+                }
+            ]
+        },
         {
             "type": "dict",
             "collapsible": true,
@@ -75,44 +131,20 @@
             "name": "template_create_plugin",
             "template_data": [
                 {
-                    "key": "CreateAlembicCamera",
-                    "label": "Create Alembic Camera"
+                    "key": "CreateUSD",
+                    "label": "Create USD (experimental)"
                 },
                 {
-                    "key": "CreateCompositeSequence",
-                    "label": "Create Composite (Image Sequence)"
-                },
-                {
-                    "key": "CreatePointCache",
-                    "label": "Create Point Cache"
-                },
-                {
-                    "key": "CreateRedshiftROP",
-                    "label": "Create Redshift ROP"
-                },
-                {
-                    "key": "CreateRemotePublish",
-                    "label": "Create Remote Publish"
+                    "key": "CreateUSDRender",
+                    "label": "Create USD render (experimental)"
                 },
                 {
                     "key": "CreateVDBCache",
                     "label": "Create VDB Cache"
                 },
                 {
-                    "key": "CreateUSD",
-                    "label": "Create USD"
-                },
-                {
-                    "key": "CreateUSDModel",
-                    "label": "Create USD Model"
-                },
-                {
-                    "key": "USDCreateShadingWorkspace",
-                    "label": "Create USD Shading Workspace"
-                },
-                {
-                    "key": "CreateUSDRender",
-                    "label": "Create USD Render"
+                    "key": "CreateVrayROP",
+                    "label": "Create VRay ROP"
                 }
             ]
         }
@@ -4,6 +4,57 @@
     "key": "publish",
     "label": "Publish plugins",
     "children": [
+        {
+            "type": "label",
+            "label": "Collectors"
+        },
+        {
+            "type": "dict",
+            "collapsible": true,
+            "key": "CollectRopFrameRange",
+            "label": "Collect Rop Frame Range",
+            "children": [
+                {
+                    "type": "label",
+                    "label": "Disable this if you want the publisher to ignore start and end handles specified in the asset data for publish instances"
+                },
+                {
+                    "type": "boolean",
+                    "key": "use_asset_handles",
+                    "label": "Use asset handles"
+                }
+            ]
+        },
+        {
+            "type": "label",
+            "label": "Validators"
+        },
+        {
+            "type": "schema_template",
+            "name": "template_publish_plugin",
+            "template_data": [
+                {
+                    "key": "ValidateContainers",
+                    "label": "Validate Containers"
+                },
+                {
+                    "key": "ValidateMeshIsStatic",
+                    "label": "Validate Mesh is Static"
+                },
+                {
+                    "key": "ValidateReviewColorspace",
+                    "label": "Validate Review Colorspace"
+                },
+                {
+                    "key": "ValidateSubsetName",
+                    "label": "Validate Subset Name"
+                },
+                {
+                    "key": "ValidateUnrealStaticMeshName",
+                    "label": "Validate Unreal Static Mesh Name"
+                }
+            ]
+        },
         {
             "type": "dict",
             "collapsible": true,
@@ -35,32 +86,6 @@
                     "object_type": "text"
                 }
             ]
-        },
-        {
-            "type": "schema_template",
-            "name": "template_publish_plugin",
-            "template_data": [
-                {
-                    "key": "ValidateReviewColorspace",
-                    "label": "Validate Review Colorspace"
-                },
-                {
-                    "key": "ValidateContainers",
-                    "label": "ValidateContainers"
-                },
-                {
-                    "key": "ValidateSubsetName",
-                    "label": "Validate Subset Name"
-                },
-                {
-                    "key": "ValidateMeshIsStatic",
-                    "label": "Validate Mesh is Static"
-                },
-                {
-                    "key": "ValidateUnrealStaticMeshName",
-                    "label": "Validate Unreal Static Mesh Name"
-                }
-            ]
         }
     ]
 }
@@ -40,6 +40,10 @@
     "object_type": {
         "type": "dict",
         "children": [
+            {
+                "type": "label",
+                "label": "Name and Script Path are mandatory."
+            },
             {
                 "type": "text",
                 "key": "label",
@@ -68,4 +72,4 @@
             }
         ]
     }
 }
 }
@@ -113,8 +113,8 @@
             "label": "Validate Gizmo (Group)"
         },
         {
-            "key": "ValidateScript",
-            "label": "Validate script settings"
+            "key": "ValidateScriptAttributes",
+            "label": "Validate workfile attributes"
         }
     ]
 },
@@ -298,6 +298,7 @@ class NumberAttrWidget(_BaseAttrDefWidget):
         input_widget.installEventFilter(self)

         multisel_widget = ClickableLineEdit("< Multiselection >", self)
         multisel_widget.setVisible(False)

         input_widget.valueChanged.connect(self._on_value_change)
         multisel_widget.clicked.connect(self._on_multi_click)
@@ -334,7 +334,7 @@ class WorkAreaFilesWidget(QtWidgets.QWidget):

     def _on_mouse_double_click(self, event):
         if event.button() == QtCore.Qt.LeftButton:
-            self.save_as_requested.emit()
+            self.open_current_requested.emit()

     def _on_context_menu(self, point):
         index = self._view.indexAt(point)
@@ -1,3 +1,3 @@
 # -*- coding: utf-8 -*-
 """Package declaring Pype version."""
-__version__ = "3.17.4-nightly.1"
+__version__ = "3.17.5-nightly.1"
@@ -1,6 +1,6 @@
 [tool.poetry]
 name = "OpenPype"
-version = "3.17.3" # OpenPype
+version = "3.17.4" # OpenPype
 description = "Open VFX and Animation pipeline with support."
 authors = ["OpenPype Team <info@openpype.io>"]
 license = "MIT License"
@@ -487,6 +487,17 @@ DEFAULT_TOOLS_VALUES = {
             "task_types": [],
             "task_names": [],
             "template_name": "publish_online"
         },
+        {
+            "families": [
+                "tycache"
+            ],
+            "hosts": [
+                "max"
+            ],
+            "task_types": [],
+            "task_names": [],
+            "template_name": "publish_tycache"
+        }
     ],
     "hero_template_name_profiles": [
@@ -1 +1 @@
-__version__ = "0.1.2"
+__version__ = "0.1.3"
@@ -3,6 +3,7 @@ import sys
 import re
 import json
 import shutil
+import argparse
 import zipfile
 import platform
 import collections
@@ -184,9 +185,12 @@ def create_openpype_package(

     addon_output_dir = output_dir / "openpype" / addon_version
     private_dir = addon_output_dir / "private"
+    if addon_output_dir.exists():
+        shutil.rmtree(str(addon_output_dir))

     # Make sure dir exists
-    addon_output_dir.mkdir(parents=True)
-    private_dir.mkdir(parents=True)
+    addon_output_dir.mkdir(parents=True, exist_ok=True)
+    private_dir.mkdir(parents=True, exist_ok=True)

     # Copy version
     shutil.copy(str(version_path), str(addon_output_dir))
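Why `exist_ok=True` matters here, in a few lines: re-running the packaging script against an existing output tree no longer raises. A self-contained illustration with a placeholder path:

```python
from pathlib import Path

private_dir = Path("/tmp/addon_output/private")  # placeholder path
private_dir.mkdir(parents=True, exist_ok=True)   # creates the tree
private_dir.mkdir(parents=True, exist_ok=True)   # no-op instead of FileExistsError
```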
@@ -268,22 +272,37 @@ def create_addon_package(
     )


-def main(create_zip=True, keep_source=False):
+def main(
+    output_dir=None,
+    skip_zip=True,
+    keep_source=False,
+    clear_output_dir=False,
+    addons=None,
+):
     current_dir = Path(os.path.dirname(os.path.abspath(__file__)))
     root_dir = current_dir.parent
-    output_dir = current_dir / "packages"
+    print("Package creation started...")
+    create_zip = not skip_zip

-    # Make sure package dir is empty
-    if output_dir.exists():
+    if output_dir:
+        output_dir = Path(output_dir)
+    else:
+        output_dir = current_dir / "packages"
+
+    if output_dir.exists() and clear_output_dir:
         shutil.rmtree(str(output_dir))
-    # Make sure output dir is created
-    output_dir.mkdir(parents=True)

-    print("Package creation started...")
+    print(f"Output directory: {output_dir}")

+    # Make sure output dir is created
+    output_dir.mkdir(parents=True, exist_ok=True)
     for addon_dir in current_dir.iterdir():
         if not addon_dir.is_dir():
             continue

+        if addons and addon_dir.name not in addons:
+            continue
+
         server_dir = addon_dir / "server"
         if not server_dir.exists():
             continue
@@ -303,6 +322,54 @@ def main(create_zip=True, keep_source=False):


 if __name__ == "__main__":
-    create_zip = "--skip-zip" not in sys.argv
-    keep_sources = "--keep-sources" in sys.argv
-    main(create_zip, keep_sources)
+    parser = argparse.ArgumentParser()
+    parser.add_argument(
+        "--skip-zip",
+        dest="skip_zip",
+        action="store_true",
+        help=(
+            "Skip zipping server package and create only"
+            " server folder structure."
+        )
+    )
+    parser.add_argument(
+        "--keep-sources",
+        dest="keep_sources",
+        action="store_true",
+        help=(
+            "Keep folder structure when server package is created."
+        )
+    )
+    parser.add_argument(
+        "-o", "--output",
+        dest="output_dir",
+        default=None,
+        help=(
+            "Directory path where package will be created"
+            " (Will be purged if already exists!)"
+        )
+    )
+    parser.add_argument(
+        "-c", "--clear-output-dir",
+        dest="clear_output_dir",
+        action="store_true",
+        help=(
+            "Clear output directory before package creation."
+        )
+    )
+    parser.add_argument(
+        "-a",
+        "--addon",
+        dest="addons",
+        action="append",
+        help="Limit addon creation to given addon name",
+    )
+
+    args = parser.parse_args(sys.argv[1:])
+    main(
+        args.output_dir,
+        args.skip_zip,
+        args.keep_sources,
+        args.clear_output_dir,
+        args.addons,
+    )
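Example invocations of the reworked entry point; paths and addon names are placeholders, and the flags map positionally onto the new `main()` signature:

```python
# python create_package.py                 # zip every addon into ./packages
# python create_package.py --skip-zip      # folder structure only, no zip
# python create_package.py -o /tmp/pkg -c  # custom output dir, cleared first
# python create_package.py -a openpype     # limit packaging to one addon
#
# equivalent direct call:
# main(output_dir="/tmp/pkg", skip_zip=False, keep_source=False,
#      clear_output_dir=True, addons=["openpype"])
```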
@@ -1,5 +1,4 @@
 from pydantic import Field
-
 from ayon_server.settings import BaseSettingsModel

@@ -35,52 +34,110 @@ class CreateStaticMeshModel(BaseSettingsModel):


 class CreatePluginsModel(BaseSettingsModel):
-    CreateArnoldAss: CreateArnoldAssModel = Field(
-        default_factory=CreateArnoldAssModel,
-        title="Create Alembic Camera")
-    # "-" is not compatible in the new model
-    CreateStaticMesh: CreateStaticMeshModel = Field(
-        default_factory=CreateStaticMeshModel,
-        title="Create Static Mesh"
-    )
+    CreateAlembicCamera: CreatorModel = Field(
+        default_factory=CreatorModel,
+        title="Create Alembic Camera")
+    CreateArnoldAss: CreateArnoldAssModel = Field(
+        default_factory=CreateArnoldAssModel,
+        title="Create Arnold Ass")
+    CreateArnoldRop: CreatorModel = Field(
+        default_factory=CreatorModel,
+        title="Create Arnold ROP")
     CreateCompositeSequence: CreatorModel = Field(
         default_factory=CreatorModel,
-        title="Create Composite Sequence")
+        title="Create Composite (Image Sequence)")
+    CreateHDA: CreatorModel = Field(
+        default_factory=CreatorModel,
+        title="Create Houdini Digital Asset")
+    CreateKarmaROP: CreatorModel = Field(
+        default_factory=CreatorModel,
+        title="Create Karma ROP")
+    CreateMantraROP: CreatorModel = Field(
+        default_factory=CreatorModel,
+        title="Create Mantra ROP")
     CreatePointCache: CreatorModel = Field(
         default_factory=CreatorModel,
-        title="Create Point Cache")
+        title="Create PointCache (Abc)")
+    CreateBGEO: CreatorModel = Field(
+        default_factory=CreatorModel,
+        title="Create PointCache (Bgeo)")
+    CreateRedshiftProxy: CreatorModel = Field(
+        default_factory=CreatorModel,
+        title="Create Redshift Proxy")
     CreateRedshiftROP: CreatorModel = Field(
         default_factory=CreatorModel,
-        title="Create RedshiftROP")
-    CreateRemotePublish: CreatorModel = Field(
+        title="Create Redshift ROP")
+    CreateReview: CreatorModel = Field(
         default_factory=CreatorModel,
-        title="Create Remote Publish")
+        title="Create Review")
+    # "-" is not compatible in the new model
+    CreateStaticMesh: CreateStaticMeshModel = Field(
+        default_factory=CreateStaticMeshModel,
+        title="Create Static Mesh")
+    CreateUSD: CreatorModel = Field(
+        default_factory=CreatorModel,
+        title="Create USD (experimental)")
+    CreateUSDRender: CreatorModel = Field(
+        default_factory=CreatorModel,
+        title="Create USD render (experimental)")
     CreateVDBCache: CreatorModel = Field(
         default_factory=CreatorModel,
         title="Create VDB Cache")
-    CreateUSD: CreatorModel = Field(
+    CreateVrayROP: CreatorModel = Field(
         default_factory=CreatorModel,
-        title="Create USD")
-    CreateUSDModel: CreatorModel = Field(
-        default_factory=CreatorModel,
-        title="Create USD model")
-    USDCreateShadingWorkspace: CreatorModel = Field(
-        default_factory=CreatorModel,
-        title="Create USD shading workspace")
-    CreateUSDRender: CreatorModel = Field(
-        default_factory=CreatorModel,
-        title="Create USD render")
+        title="Create VRay ROP")


 DEFAULT_HOUDINI_CREATE_SETTINGS = {
     "CreateAlembicCamera": {
         "enabled": True,
         "default_variants": ["Main"]
     },
     "CreateArnoldAss": {
         "enabled": True,
         "default_variants": ["Main"],
         "ext": ".ass"
     },
     "CreateArnoldRop": {
         "enabled": True,
         "default_variants": ["Main"]
     },
     "CreateCompositeSequence": {
         "enabled": True,
         "default_variants": ["Main"]
     },
     "CreateHDA": {
         "enabled": True,
         "default_variants": ["Main"]
     },
     "CreateKarmaROP": {
         "enabled": True,
         "default_variants": ["Main"]
     },
     "CreateMantraROP": {
         "enabled": True,
         "default_variants": ["Main"]
     },
     "CreatePointCache": {
         "enabled": True,
         "default_variants": ["Main"]
     },
     "CreateBGEO": {
         "enabled": True,
         "default_variants": ["Main"]
     },
     "CreateRedshiftProxy": {
         "enabled": True,
         "default_variants": ["Main"]
     },
     "CreateRedshiftROP": {
         "enabled": True,
         "default_variants": ["Main"]
     },
     "CreateReview": {
         "enabled": True,
         "default_variants": ["Main"]
     },
     "CreateStaticMesh": {
         "enabled": True,
         "default_variants": [
@@ -94,126 +151,20 @@ DEFAULT_HOUDINI_CREATE_SETTINGS = {
             "UCX"
         ]
     },
-    "CreateAlembicCamera": {
-        "enabled": True,
-        "default_variants": ["Main"]
-    },
-    "CreateCompositeSequence": {
-        "enabled": True,
-        "default_variants": ["Main"]
-    },
-    "CreatePointCache": {
-        "enabled": True,
-        "default_variants": ["Main"]
-    },
-    "CreateRedshiftROP": {
-        "enabled": True,
-        "default_variants": ["Main"]
-    },
-    "CreateRemotePublish": {
-        "enabled": True,
-        "default_variants": ["Main"]
-    },
     "CreateVDBCache": {
         "enabled": True,
         "default_variants": ["Main"]
     },
-    "CreateUSD": {
-        "enabled": False,
-        "default_variants": ["Main"]
-    },
-    "CreateUSDModel": {
-        "enabled": False,
-        "default_variants": ["Main"]
-    },
-    "USDCreateShadingWorkspace": {
-        "enabled": False,
-        "default_variants": ["Main"]
-    },
-    "CreateUSDRender": {
-        "enabled": False,
-        "default_variants": ["Main"]
-    },
-}
-
-
-# Publish Plugins
-class ValidateWorkfilePathsModel(BaseSettingsModel):
-    enabled: bool = Field(title="Enabled")
-    optional: bool = Field(title="Optional")
-    node_types: list[str] = Field(
-        default_factory=list,
-        title="Node Types"
-    )
-    prohibited_vars: list[str] = Field(
-        default_factory=list,
-        title="Prohibited Variables"
-    )
-
-
-class BasicValidateModel(BaseSettingsModel):
-    enabled: bool = Field(title="Enabled")
-    optional: bool = Field(title="Optional")
-    active: bool = Field(title="Active")
-
-
-class PublishPluginsModel(BaseSettingsModel):
-    ValidateWorkfilePaths: ValidateWorkfilePathsModel = Field(
-        default_factory=ValidateWorkfilePathsModel,
-        title="Validate workfile paths settings.")
-    ValidateReviewColorspace: BasicValidateModel = Field(
-        default_factory=BasicValidateModel,
-        title="Validate Review Colorspace.")
-    ValidateContainers: BasicValidateModel = Field(
-        default_factory=BasicValidateModel,
-        title="Validate Latest Containers.")
-    ValidateSubsetName: BasicValidateModel = Field(
-        default_factory=BasicValidateModel,
-        title="Validate Subset Name.")
-    ValidateMeshIsStatic: BasicValidateModel = Field(
-        default_factory=BasicValidateModel,
-        title="Validate Mesh is Static.")
-    ValidateUnrealStaticMeshName: BasicValidateModel = Field(
-        default_factory=BasicValidateModel,
-        title="Validate Unreal Static Mesh Name.")
-
-
-DEFAULT_HOUDINI_PUBLISH_SETTINGS = {
-    "ValidateWorkfilePaths": {
-        "enabled": True,
-        "optional": True,
-        "node_types": [
-            "file",
-            "alembic"
-        ],
-        "prohibited_vars": [
-            "$HIP",
-            "$JOB"
-        ]
-    },
-    "ValidateReviewColorspace": {
-        "enabled": True,
-        "optional": True,
-        "active": True
-    },
-    "ValidateContainers": {
-        "enabled": True,
-        "optional": True,
-        "active": True
-    },
-    "ValidateSubsetName": {
-        "enabled": True,
-        "optional": True,
-        "active": True
-    },
-    "ValidateMeshIsStatic": {
-        "enabled": True,
-        "optional": True,
-        "active": True
-    },
-    "ValidateUnrealStaticMeshName": {
-        "enabled": False,
-        "optional": True,
-        "active": True
-    }
+    "CreateVrayROP": {
+        "enabled": True,
+        "default_variants": ["Main"]
+    },
 }
@@ -1,57 +1,19 @@
 from pydantic import Field
-from ayon_server.settings import (
-    BaseSettingsModel,
-    MultiplatformPathModel,
-    MultiplatformPathListModel,
-)
+from ayon_server.settings import BaseSettingsModel
 from .general import (
     GeneralSettingsModel,
     DEFAULT_GENERAL_SETTINGS
 )
 from .imageio import HoudiniImageIOModel
-from .publish_plugins import (
-    PublishPluginsModel,
-    DEFAULT_HOUDINI_PUBLISH_SETTINGS,
+from .shelves import ShelvesModel
+from .create import (
+    CreatePluginsModel,
     DEFAULT_HOUDINI_CREATE_SETTINGS
 )
-
-
-class ShelfToolsModel(BaseSettingsModel):
-    name: str = Field(title="Name")
-    help: str = Field(title="Help text")
-    script: MultiplatformPathModel = Field(
-        default_factory=MultiplatformPathModel,
-        title="Script Path "
-    )
-    icon: MultiplatformPathModel = Field(
-        default_factory=MultiplatformPathModel,
-        title="Icon Path "
-    )
-
-
-class ShelfDefinitionModel(BaseSettingsModel):
-    _layout = "expanded"
-    shelf_name: str = Field(title="Shelf name")
-    tools_list: list[ShelfToolsModel] = Field(
-        default_factory=list,
-        title="Shelf Tools"
-    )
-
-
-class ShelvesModel(BaseSettingsModel):
-    _layout = "expanded"
-    shelf_set_name: str = Field(title="Shelfs set name")
-
-    shelf_set_source_path: MultiplatformPathListModel = Field(
-        default_factory=MultiplatformPathListModel,
-        title="Shelf Set Path (optional)"
-    )
-
-    shelf_definition: list[ShelfDefinitionModel] = Field(
-        default_factory=list,
-        title="Shelf Definitions"
-    )
+from .publish import (
+    PublishPluginsModel,
+    DEFAULT_HOUDINI_PUBLISH_SETTINGS,
+)


 class HoudiniSettings(BaseSettingsModel):
@@ -65,18 +27,16 @@ class HoudiniSettings(BaseSettingsModel):
     )
     shelves: list[ShelvesModel] = Field(
         default_factory=list,
-        title="Houdini Scripts Shelves",
+        title="Shelves Manager",
     )
-
-    publish: PublishPluginsModel = Field(
-        default_factory=PublishPluginsModel,
-        title="Publish Plugins",
-    )
-
     create: CreatePluginsModel = Field(
         default_factory=CreatePluginsModel,
         title="Creator Plugins",
     )
+    publish: PublishPluginsModel = Field(
+        default_factory=PublishPluginsModel,
+        title="Publish Plugins",
+    )


 DEFAULT_VALUES = {
server_addon/houdini/server/settings/publish.py (new file, 103 lines)
@@ -0,0 +1,103 @@
+from pydantic import Field
+from ayon_server.settings import BaseSettingsModel
+
+
+# Publish Plugins
+class CollectRopFrameRangeModel(BaseSettingsModel):
+    """Collect Frame Range
+
+    Disable this if you want the publisher to
+    ignore start and end handles specified in the
+    asset data for publish instances
+    """
+    use_asset_handles: bool = Field(
+        title="Use asset handles")
+
+
+class ValidateWorkfilePathsModel(BaseSettingsModel):
+    enabled: bool = Field(title="Enabled")
+    optional: bool = Field(title="Optional")
+    node_types: list[str] = Field(
+        default_factory=list,
+        title="Node Types"
+    )
+    prohibited_vars: list[str] = Field(
+        default_factory=list,
+        title="Prohibited Variables"
+    )
+
+
+class BasicValidateModel(BaseSettingsModel):
+    enabled: bool = Field(title="Enabled")
+    optional: bool = Field(title="Optional")
+    active: bool = Field(title="Active")
+
+
+class PublishPluginsModel(BaseSettingsModel):
+    CollectRopFrameRange: CollectRopFrameRangeModel = Field(
+        default_factory=CollectRopFrameRangeModel,
+        title="Collect Rop Frame Range.",
+        section="Collectors"
+    )
+    ValidateContainers: BasicValidateModel = Field(
+        default_factory=BasicValidateModel,
+        title="Validate Latest Containers.",
+        section="Validators")
+    ValidateMeshIsStatic: BasicValidateModel = Field(
+        default_factory=BasicValidateModel,
+        title="Validate Mesh is Static.")
+    ValidateReviewColorspace: BasicValidateModel = Field(
+        default_factory=BasicValidateModel,
+        title="Validate Review Colorspace.")
+    ValidateSubsetName: BasicValidateModel = Field(
+        default_factory=BasicValidateModel,
+        title="Validate Subset Name.")
+    ValidateUnrealStaticMeshName: BasicValidateModel = Field(
+        default_factory=BasicValidateModel,
+        title="Validate Unreal Static Mesh Name.")
+    ValidateWorkfilePaths: ValidateWorkfilePathsModel = Field(
+        default_factory=ValidateWorkfilePathsModel,
+        title="Validate workfile paths settings.")
+
+
+DEFAULT_HOUDINI_PUBLISH_SETTINGS = {
+    "CollectRopFrameRange": {
+        "use_asset_handles": True
+    },
+    "ValidateContainers": {
+        "enabled": True,
+        "optional": True,
+        "active": True
+    },
+    "ValidateMeshIsStatic": {
+        "enabled": True,
+        "optional": True,
+        "active": True
+    },
+    "ValidateReviewColorspace": {
+        "enabled": True,
+        "optional": True,
+        "active": True
+    },
+    "ValidateSubsetName": {
+        "enabled": True,
+        "optional": True,
+        "active": True
+    },
+    "ValidateUnrealStaticMeshName": {
+        "enabled": False,
+        "optional": True,
+        "active": True
+    },
+    "ValidateWorkfilePaths": {
+        "enabled": True,
+        "optional": True,
+        "node_types": [
+            "file",
+            "alembic"
+        ],
+        "prohibited_vars": [
+            "$HIP",
+            "$JOB"
+        ]
+    }
+}
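The file above reuses one small pattern, `BasicValidateModel`, for every toggleable validator. It is plain pydantic underneath (`ayon_server.settings` wraps pydantic models); a runnable sketch outside ayon_server:

```python
from pydantic import BaseModel, Field

class BasicValidate(BaseModel):
    # enabled/optional/active is the standard trio OpenPype uses to
    # expose a publish validator in studio settings
    enabled: bool = Field(True, title="Enabled")
    optional: bool = Field(True, title="Optional")
    active: bool = Field(True, title="Active")

print(BasicValidate().dict())
# {'enabled': True, 'optional': True, 'active': True}
```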
server_addon/houdini/server/settings/shelves.py (new file, 37 lines)
@@ -0,0 +1,37 @@
+from pydantic import Field
+from ayon_server.settings import (
+    BaseSettingsModel,
+    MultiplatformPathModel
+)
+
+
+class ShelfToolsModel(BaseSettingsModel):
+    """Name and Script Path are mandatory."""
+    label: str = Field(title="Name")
+    script: str = Field(title="Script Path")
+    icon: str = Field("", title="Icon Path")
+    help: str = Field("", title="Help text")
+
+
+class ShelfDefinitionModel(BaseSettingsModel):
+    _layout = "expanded"
+    shelf_name: str = Field(title="Shelf name")
+    tools_list: list[ShelfToolsModel] = Field(
+        default_factory=list,
+        title="Shelf Tools"
+    )
+
+
+class ShelvesModel(BaseSettingsModel):
+    _layout = "expanded"
+    shelf_set_name: str = Field("", title="Shelfs set name")
+
+    shelf_set_source_path: MultiplatformPathModel = Field(
+        default_factory=MultiplatformPathModel,
+        title="Shelf Set Path (optional)"
+    )
+
+    shelf_definition: list[ShelfDefinitionModel] = Field(
+        default_factory=list,
+        title="Shelf Definitions"
+    )
@@ -1 +1 @@
-__version__ = "0.1.5"
+__version__ = "0.2.6"
@@ -14,10 +14,30 @@ class NodesModel(BaseSettingsModel):
         default_factory=list,
         title="Used in plugins"
     )
-    nukeNodeClass: str = Field(
+    nuke_node_class: str = Field(
         title="Nuke Node Class",
     )


+class RequiredNodesModel(NodesModel):
+    knobs: list[KnobModel] = Field(
+        default_factory=list,
+        title="Knobs",
+    )
+
+    @validator("knobs")
+    def ensure_unique_names(cls, value):
+        """Ensure name fields within the lists have unique names."""
+        ensure_unique_names(value)
+        return value
+
+
+class OverrideNodesModel(NodesModel):
+    subsets: list[str] = Field(
+        default_factory=list,
+        title="Subsets"
+    )
+
     knobs: list[KnobModel] = Field(
         default_factory=list,
         title="Knobs",
@@ -31,13 +51,11 @@ class NodesModel(BaseSettingsModel):


 class NodesSetting(BaseSettingsModel):
-    # TODO: rename `requiredNodes` to `required_nodes`
-    requiredNodes: list[NodesModel] = Field(
+    required_nodes: list[RequiredNodesModel] = Field(
         title="Plugin required",
         default_factory=list
     )
-    # TODO: rename `overrideNodes` to `override_nodes`
-    overrideNodes: list[NodesModel] = Field(
+    override_nodes: list[OverrideNodesModel] = Field(
         title="Plugin's node overrides",
         default_factory=list
     )
@@ -46,38 +64,40 @@ class NodesSetting(BaseSettingsModel):
 def ocio_configs_switcher_enum():
     return [
         {"value": "nuke-default", "label": "nuke-default"},
-        {"value": "spi-vfx", "label": "spi-vfx"},
-        {"value": "spi-anim", "label": "spi-anim"},
-        {"value": "aces_0.1.1", "label": "aces_0.1.1"},
-        {"value": "aces_0.7.1", "label": "aces_0.7.1"},
-        {"value": "aces_1.0.1", "label": "aces_1.0.1"},
-        {"value": "aces_1.0.3", "label": "aces_1.0.3"},
-        {"value": "aces_1.1", "label": "aces_1.1"},
-        {"value": "aces_1.2", "label": "aces_1.2"},
-        {"value": "aces_1.3", "label": "aces_1.3"},
-        {"value": "custom", "label": "custom"}
+        {"value": "spi-vfx", "label": "spi-vfx (11)"},
+        {"value": "spi-anim", "label": "spi-anim (11)"},
+        {"value": "aces_0.1.1", "label": "aces_0.1.1 (11)"},
+        {"value": "aces_0.7.1", "label": "aces_0.7.1 (11)"},
+        {"value": "aces_1.0.1", "label": "aces_1.0.1 (11)"},
+        {"value": "aces_1.0.3", "label": "aces_1.0.3 (11, 12)"},
+        {"value": "aces_1.1", "label": "aces_1.1 (12, 13)"},
+        {"value": "aces_1.2", "label": "aces_1.2 (13, 14)"},
+        {"value": "studio-config-v1.0.0_aces-v1.3_ocio-v2.1",
+         "label": "studio-config-v1.0.0_aces-v1.3_ocio-v2.1 (14)"},
+        {"value": "cg-config-v1.0.0_aces-v1.3_ocio-v2.1",
+         "label": "cg-config-v1.0.0_aces-v1.3_ocio-v2.1 (14)"},
     ]


 class WorkfileColorspaceSettings(BaseSettingsModel):
     """Nuke workfile colorspace preset. """

-    colorManagement: Literal["Nuke", "OCIO"] = Field(
-        title="Color Management"
+    color_management: Literal["Nuke", "OCIO"] = Field(
+        title="Color Management Workflow"
     )

-    OCIO_config: str = Field(
-        title="OpenColorIO Config",
-        description="Switch between OCIO configs",
+    native_ocio_config: str = Field(
+        title="Native OpenColorIO Config",
+        description="Switch between native OCIO configs",
         enum_resolver=ocio_configs_switcher_enum,
         conditionalEnum=True
     )

-    workingSpaceLUT: str = Field(
+    working_space: str = Field(
         title="Working Space"
     )
-    monitorLut: str = Field(
-        title="Monitor"
+    thumbnail_space: str = Field(
+        title="Thumbnail Space"
     )

@@ -182,12 +202,10 @@ class ImageIOSettings(BaseSettingsModel):
         title="Nodes"
     )
     """# TODO: enhance settings with host api:
-    - [ ] old settings are using `regexInputs` key but we
-        need to rename to `regex_inputs`
     - [ ] no need for `inputs` middle part. It can stay
         directly on `regex_inputs`
     """
-    regexInputs: RegexInputsModel = Field(
+    regex_inputs: RegexInputsModel = Field(
         default_factory=RegexInputsModel,
         title="Assign colorspace to read nodes via rules"
     )
@@ -201,18 +219,18 @@ DEFAULT_IMAGEIO_SETTINGS = {
         "viewerProcess": "rec709"
     },
     "workfile": {
-        "colorManagement": "Nuke",
-        "OCIO_config": "nuke-default",
-        "workingSpaceLUT": "linear",
-        "monitorLut": "sRGB",
+        "color_management": "Nuke",
+        "native_ocio_config": "nuke-default",
+        "working_space": "linear",
+        "thumbnail_space": "sRGB",
     },
     "nodes": {
-        "requiredNodes": [
+        "required_nodes": [
             {
                 "plugins": [
                     "CreateWriteRender"
                 ],
-                "nukeNodeClass": "Write",
+                "nuke_node_class": "Write",
                 "knobs": [
                     {
                         "type": "text",
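Because the stored keys change shape, project overrides saved under the old camelCase keys would no longer match the new model. A hypothetical one-shot migration for such data — the key map is read off the renames in these hunks; the helper itself is not part of this change:

```python
# Hypothetical migration helper: rewrites old camelCase imageio keys
# to the new snake_case ones. The mapping mirrors the renames in the
# hunks above; the function is illustrative, not from the codebase.
KEY_MAP = {
    "colorManagement": "color_management",
    "OCIO_config": "native_ocio_config",
    "workingSpaceLUT": "working_space",
    "monitorLut": "thumbnail_space",
    "requiredNodes": "required_nodes",
    "overrideNodes": "override_nodes",
    "nukeNodeClass": "nuke_node_class",
    "regexInputs": "regex_inputs",
}


def migrate_keys(data):
    """Recursively rename dict keys according to KEY_MAP."""
    if isinstance(data, dict):
        return {
            KEY_MAP.get(key, key): migrate_keys(value)
            for key, value in data.items()
        }
    if isinstance(data, list):
        return [migrate_keys(item) for item in data]
    return data


old_settings = {"workfile": {"colorManagement": "Nuke", "monitorLut": "sRGB"}}
print(migrate_keys(old_settings))
# -> {"workfile": {"color_management": "Nuke", "thumbnail_space": "sRGB"}}
```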
|
@ -264,7 +282,7 @@ DEFAULT_IMAGEIO_SETTINGS = {
|
|||
"plugins": [
|
||||
"CreateWritePrerender"
|
||||
],
|
||||
"nukeNodeClass": "Write",
|
||||
"nuke_node_class": "Write",
|
||||
"knobs": [
|
||||
{
|
||||
"type": "text",
|
||||
|
|
@ -316,7 +334,7 @@ DEFAULT_IMAGEIO_SETTINGS = {
|
|||
"plugins": [
|
||||
"CreateWriteImage"
|
||||
],
|
||||
"nukeNodeClass": "Write",
|
||||
"nuke_node_class": "Write",
|
||||
"knobs": [
|
||||
{
|
||||
"type": "text",
|
||||
|
|
@ -360,9 +378,9 @@ DEFAULT_IMAGEIO_SETTINGS = {
|
|||
]
|
||||
}
|
||||
],
|
||||
"overrideNodes": []
|
||||
"override_nodes": []
|
||||
},
|
||||
"regexInputs": {
|
||||
"regex_inputs": {
|
||||
"inputs": [
|
||||
{
|
||||
"regex": "(beauty).*(?=.exr)",
|
||||
|
|
|
|||
|
|
@@ -155,8 +155,10 @@ class IntermediateOutputModel(BaseSettingsModel):
         title="Filter", default_factory=BakingStreamFilterModel)
     read_raw: bool = Field(title="Read raw switch")
     viewer_process_override: str = Field(title="Viewer process override")
-    bake_viewer_process: bool = Field(title="Bake view process")
-    bake_viewer_input_process: bool = Field(title="Bake viewer input process")
+    bake_viewer_process: bool = Field(title="Bake viewer process")
+    bake_viewer_input_process: bool = Field(
+        title="Bake viewer input process node (LUT)"
+    )
     reformat_nodes_config: ReformatNodesConfigModel = Field(
         default_factory=ReformatNodesConfigModel,
         title="Reformat Nodes")
@@ -261,8 +263,8 @@ class PublishPuginsModel(BaseSettingsModel):
         title="Validate Backdrop",
         default_factory=OptionalPluginModel
     )
-    ValidateScript: OptionalPluginModel = Field(
-        title="Validate Script",
+    ValidateScriptAttributes: OptionalPluginModel = Field(
+        title="Validate workfile attributes",
         default_factory=OptionalPluginModel
     )
     ExtractThumbnail: ExtractThumbnailModel = Field(
@@ -343,7 +345,7 @@ DEFAULT_PUBLISH_PLUGIN_SETTINGS = {
         "optional": True,
         "active": True
     },
-    "ValidateScript": {
+    "ValidateScriptAttributes": {
         "enabled": True,
         "optional": True,
         "active": True
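The renamed `ValidateScriptAttributes` entry keeps the usual optional-plugin shape visible in these defaults (`enabled` / `optional` / `active`). A sketch of that shape as a standalone model — plain pydantic stands in for the AYON settings base class, and the field set is inferred from the defaults above, not from the model hunk:

```python
# Sketch of the optional-plugin settings shape used by entries such as
# "ValidateScriptAttributes" in DEFAULT_PUBLISH_PLUGIN_SETTINGS. Field
# set inferred from the defaults; plain pydantic for self-containment.
from pydantic import BaseModel


class OptionalPluginModel(BaseModel):
    enabled: bool = True
    optional: bool = True
    active: bool = True


plugin = OptionalPluginModel.parse_obj(
    {"enabled": True, "optional": True, "active": True}
)
print(plugin.optional)  # -> True
```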
@@ -407,12 +409,12 @@ DEFAULT_PUBLISH_PLUGIN_SETTINGS = {
                 "text": "Lanczos6"
             },
             {
-                "type": "bool",
+                "type": "boolean",
                 "name": "black_outside",
                 "boolean": True
             },
             {
-                "type": "bool",
+                "type": "boolean",
                 "name": "pbb",
                 "boolean": False
             }
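The knob `type` token changes from `bool` to `boolean`, which makes it match the key that actually carries the value (`"boolean": True`). A small illustrative resolver for these typed knob entries — the lookup-by-type convention is read off the defaults (`"text"` → `"text"`, `"boolean"` → `"boolean"`); the function itself is not from the codebase:

```python
# Illustrative resolver for typed knob entries as they appear in the
# defaults: each entry stores its value under a key matching its
# "type" token, which is why "bool" had to become "boolean".
def knob_value(knob: dict):
    value_key = knob["type"]
    try:
        return knob[value_key]
    except KeyError:
        raise ValueError(
            f"Knob {knob.get('name')!r} of type {value_key!r} "
            f"has no {value_key!r} value key"
        )


print(knob_value({"type": "boolean", "name": "pbb", "boolean": False}))
# -> False
print(knob_value({"type": "text", "name": "filter", "text": "Lanczos6"}))
# -> "Lanczos6"
```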
@@ -427,7 +429,7 @@ DEFAULT_PUBLISH_PLUGIN_SETTINGS = {
         "enabled": False
     },
     "ExtractReviewDataMov": {
-        "enabled": True,
+        "enabled": False,
         "viewer_lut_raw": False,
         "outputs": [
             {
@@ -463,12 +465,12 @@ DEFAULT_PUBLISH_PLUGIN_SETTINGS = {
                 "text": "Lanczos6"
             },
             {
-                "type": "bool",
+                "type": "boolean",
                 "name": "black_outside",
                 "boolean": True
             },
             {
-                "type": "bool",
+                "type": "boolean",
                 "name": "pbb",
                 "boolean": False
             }
@@ -518,12 +520,12 @@ DEFAULT_PUBLISH_PLUGIN_SETTINGS = {
                 "text": "Lanczos6"
             },
             {
-                "type": "bool",
+                "type": "boolean",
                 "name": "black_outside",
                 "boolean": True
             },
             {
-                "type": "bool",
+                "type": "boolean",
                 "name": "pbb",
                 "boolean": False
             }

@@ -1 +1 @@
-__version__ = "0.1.3"
+__version__ = "0.1.4"