mirror of https://github.com/ynput/ayon-core.git (synced 2025-12-24 21:04:40 +01:00)

Merge branch 'maya_new_publisher' of github.com:BigRoy/OpenPype into maya_new_publisher

commit 23c4a1dda2
198 changed files with 5206 additions and 2015 deletions
14  .github/ISSUE_TEMPLATE/bug_report.yml (vendored)
@@ -35,6 +35,13 @@ body:
      label: Version
      description: What version are you running? Look to OpenPype Tray
      options:
        - 3.15.11-nightly.4
        - 3.15.11-nightly.3
        - 3.15.11-nightly.2
        - 3.15.11-nightly.1
        - 3.15.10
        - 3.15.10-nightly.2
        - 3.15.10-nightly.1
        - 3.15.9
        - 3.15.9-nightly.2
        - 3.15.9-nightly.1
@@ -128,13 +135,6 @@ body:
        - 3.14.3
        - 3.14.3-nightly.7
        - 3.14.3-nightly.6
        - 3.14.3-nightly.5
        - 3.14.3-nightly.4
        - 3.14.3-nightly.3
        - 3.14.3-nightly.2
        - 3.14.3-nightly.1
        - 3.14.2
        - 3.14.2-nightly.5
      validations:
        required: true
  - type: dropdown
393  CHANGELOG.md
@@ -1,6 +1,399 @@
# Changelog

## [3.15.10](https://github.com/ynput/OpenPype/tree/3.15.10)

[Full Changelog](https://github.com/ynput/OpenPype/compare/3.15.9...3.15.10)

### **🆕 New features**

<details>
<summary>ImageIO: Adding ImageIO activation toggle to all hosts <a href="https://github.com/ynput/OpenPype/pull/4700">#4700</a></summary>

Colorspace management can now be enabled at the project level, although it is disabled by default. Once enabled, all hosts use the OCIO config file defined in the settings. If the setting is disabled, the system switches to the DCC's native color management and no colorspace information is stored at the representation level.

___
</details>

<details>
<summary>Redshift Proxy Support in 3dsMax <a href="https://github.com/ynput/OpenPype/pull/4625">#4625</a></summary>

Redshift Proxy support for 3dsMax:
- [x] Creator
- [x] Loader
- [x] Extractor
- [x] Validator
- [x] Add documentation

___
</details>

<details>
<summary>Houdini farm publishing and rendering <a href="https://github.com/ynput/OpenPype/pull/4825">#4825</a></summary>

Deadline farm publishing and rendering for Houdini:
- [x] Mantra
- [x] Karma (including USD renders)
- [x] Arnold
- [x] Elaborate Redshift ROP for Deadline submission
- [x] Fix the existing bug in Redshift ROP
- [x] Vray
- [x] Add docs

___
</details>

<details>
<summary>Feature: Blender hook to execute python scripts at launch <a href="https://github.com/ynput/OpenPype/pull/4905">#4905</a></summary>

A hook that allows other hooks to add paths to Python scripts that will be executed when Blender starts.

___
</details>

<details>
<summary>Feature: Resolve: Open last workfile on launch through .scriptlib <a href="https://github.com/ynput/OpenPype/pull/5047">#5047</a></summary>

Added an implementation to the Resolve integration to open the last workfile on launch.

___
</details>

<details>
<summary>General: Remove default windowFlags from publisher <a href="https://github.com/ynput/OpenPype/pull/5089">#5089</a></summary>

The default windowFlags makes the publisher window (on Linux at least) show only the close button, which is frustrating because you often just want to minimize the window and come back to the validation later. After removing that line it behaves as expected. Before and after screenshots are in the PR.

___
</details>

<details>
<summary>General: Show user who created the workfile on the details pane of workfile manager <a href="https://github.com/ynput/OpenPype/pull/5093">#5093</a></summary>

New PR for https://github.com/ynput/OpenPype/pull/5087, which was closed after merging the `next-minor` branch and then realizing we didn't need to target it, as it was decided it's not required to support windows; more info in that PR's discussion. A small addition that shows the name of the `user` who created the workfile on the details pane of the workfile manager.

___
</details>

<details>
<summary>Loader: Hide inactive versions in UI <a href="https://github.com/ynput/OpenPype/pull/5100">#5100</a></summary>

Hide versions with `active` set to `False` in the Loader UI.

___
</details>

### **🚀 Enhancements**

<details>
<summary>Maya: Repair RenderPass token when merging AOVs. <a href="https://github.com/ynput/OpenPype/pull/5055">#5055</a></summary>

The validator flagged that `<RenderPass>` was in the image prefix but did not repair the issue.

___
</details>

<details>
<summary>Maya: Improve error feedback when no renderable cameras exist for ASS family. <a href="https://github.com/ynput/OpenPype/pull/5092">#5092</a></summary>

When collecting cameras for the `ass` family, this improves the error message shown when no cameras are renderable.

___
</details>

<details>
<summary>Nuke: Custom script to set frame range of read nodes <a href="https://github.com/ynput/OpenPype/pull/5039">#5039</a></summary>

Adds an option to set the frame range specifically for read nodes in the OpenPype panel. Users can set their preferred frame range in the frame range dialog, which is shown after clicking `Set Frame Range (Read Node)` in OpenPype Tools.

___
</details>

<details>
<summary>Update extract review letterbox docs <a href="https://github.com/ynput/OpenPype/pull/5074">#5074</a></summary>

Updates the Extract Review "Letter Box" section in the docs. The letterbox type description is removed.

___
</details>

<details>
<summary>Project pack: Documents only skips roots validation <a href="https://github.com/ynput/OpenPype/pull/5082">#5082</a></summary>

Single roots validation is skipped if only documents are extracted.

___
</details>

<details>
<summary>Nuke: custom settings for write node without publish <a href="https://github.com/ynput/OpenPype/pull/5084">#5084</a></summary>

Sets the render output and other settings on write nodes for non-publish purposes.

___
</details>

### **🐛 Bug fixes**

<details>
<summary>Maya: Deadline servers <a href="https://github.com/ynput/OpenPype/pull/5052">#5052</a></summary>

Fix working with multiple Deadline servers in Maya.
- Pool (primary and secondary) attributes were not recreated correctly.
- The order of collector plugins was wrong, so collected data was not injected into render instances.
- The server attribute was not converted to a string, so comparing it with settings was incorrect.
- Improved debug logging of where the webservice URL is fetched from.

___
</details>

<details>
<summary>Maya: Fix Load Reference. <a href="https://github.com/ynput/OpenPype/pull/5091">#5091</a></summary>

Fix a bug introduced with https://github.com/ynput/OpenPype/pull/4751 where `cmds.ls` returns a list.

___
</details>

<details>
<summary>3dsmax: Publishing Deadline jobs from RedShift <a href="https://github.com/ynput/OpenPype/pull/4960">#4960</a></summary>

Fixes the bug of being unable to publish Deadline jobs from Redshift; uses "Current File" instead of "Published Scene" for Redshift only.
- Add a scene save before rendering to ensure the scene is saved after modification.
- Add a "separated AOV files" option to let users choose to have AOVs in the render output.
- Add a validator for render publishes to avoid overriding previous renders.

___
</details>

<details>
<summary>Houdini: Fix missing frame range for pointcache and camera exports <a href="https://github.com/ynput/OpenPype/pull/5026">#5026</a></summary>

Fix missing frame range for pointcache and camera exports on the published version.

___
</details>

<details>
<summary>Global: collect_frame_fix plugin fix and cleanup <a href="https://github.com/ynput/OpenPype/pull/5064">#5064</a></summary>

The previous implementation (https://github.com/ynput/OpenPype/pull/5036) was broken; this fixes the issue where the attribute was found in instance data even though the settings for the plugin were disabled.

___
</details>

<details>
<summary>Hiero: Fix apply settings Clip Load <a href="https://github.com/ynput/OpenPype/pull/5073">#5073</a></summary>

Changed `apply_settings` to a classmethod, which fixes the issue with settings.

___
</details>

<details>
<summary>Resolve: Make sure scripts dir exists <a href="https://github.com/ynput/OpenPype/pull/5078">#5078</a></summary>

Make sure the scripts directory exists before looping over its contents.

___
</details>

<details>
<summary>removing info knob from nuke creators <a href="https://github.com/ynput/OpenPype/pull/5083">#5083</a></summary>

- Remove the instance node if it is removed via the publisher.
- Remove the info knob, since it is no longer needed (it was there only for the transition phase).

___
</details>

<details>
<summary>Tray: Fix restart arguments on update <a href="https://github.com/ynput/OpenPype/pull/5085">#5085</a></summary>

Fix arguments on restart.

___
</details>

<details>
<summary>Maya: bug fix on repair action in Arnold Scene Source CBID Validator <a href="https://github.com/ynput/OpenPype/pull/5096">#5096</a></summary>

Fixes the bug of not being able to use the repair action in the Arnold Scene Source CBID Validator.

___
</details>

<details>
<summary>Nuke: batch of small fixes <a href="https://github.com/ynput/OpenPype/pull/5103">#5103</a></summary>

- Default settings for `imageio.requiredNodes` in **CreateWriteImage**.
- Default settings for **LoadImage** representations.
- **Create** and **Publish** menu items with `parent=main_window` (version > 14).

___
</details>

<details>
<summary>Deadline: make prerender check safer <a href="https://github.com/ynput/OpenPype/pull/5104">#5104</a></summary>

Prerender wasn't correctly recognized and was replaced with just the 'render' family. In Nuke it is correctly `prerender.farm` in families, which wasn't handled here. This resulted in using `render` in templates even when the `render` and `prerender` templates were split. A minimal sketch of the safer check follows.
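
A minimal sketch of the safer check this describes (the helper name and exact matching rule are assumptions, not the actual patch):

```python
def get_template_family(families):
    """Pick the template family for a Deadline submission.

    Treats Nuke's "prerender.farm" (and plain "prerender") as prerender
    jobs instead of letting them fall through to "render".
    """
    if any(family.startswith("prerender") for family in families):
        return "prerender"
    return "render"


print(get_template_family(["prerender.farm"]))  # -> prerender
print(get_template_family(["render.farm"]))     # -> render
```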
___
</details>

<details>
<summary>General: Sort launcher actions alphabetically <a href="https://github.com/ynput/OpenPype/pull/5106">#5106</a></summary>

The launcher actions weren't being sorted by their label but by their name (which, in the case of apps, is the version number), so the order wasn't consistent and we kept getting a different order on every launch. From my debugging session, this is what the `actions` variable held after the `filter_compatible_actions` function before these changes:

```
(Pdb) for p in actions: print(p.order, p.name)
0 14-02
0 14-02
0 14-02
0 14-02
0 14-02
0 19-5-493
0 2023
0 3-41
0 6-01
```

This has already caused a couple of bugs: artists thought they had launched Nuke X, had actually launched Nuke, and told us their Nuke was missing nodes. Before and after screenshots are in the PR.
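
A minimal sketch of the fix this implies, assuming each action exposes `order`, `name`, and `label` attributes (the sample labels are made up):

```python
from types import SimpleNamespace

actions = [
    SimpleNamespace(order=0, name="14-02", label="Nuke X 14.0v2"),  # hypothetical
    SimpleNamespace(order=0, name="14-02", label="Nuke 14.0v2"),    # hypothetical
]

# Sort by plugin order first, then case-insensitively by the visible label,
# falling back to the internal name when no label is set.
actions.sort(key=lambda a: (a.order, (a.label or a.name).lower()))
print([a.label for a in actions])  # -> ['Nuke 14.0v2', 'Nuke X 14.0v2']
```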
___
</details>

<details>
<summary>TrayPublisher: Editorial video stream discovery <a href="https://github.com/ynput/OpenPype/pull/5120">#5120</a></summary>

The editorial create plugin in TrayPublisher no longer expects the first stream of the input to be video.

___
</details>

### **🔀 Refactored code**

<details>
<summary>3dsmax: Move from deprecated interface <a href="https://github.com/ynput/OpenPype/pull/5117">#5117</a></summary>

The `INewPublisher` interface is deprecated; this PR changes the code to use `IPublishHost` instead.

___
</details>

### **Merged pull requests**

<details>
<summary>add movalex as a contributor for code <a href="https://github.com/ynput/OpenPype/pull/5076">#5076</a></summary>

Adds @movalex as a contributor for code.

This was requested by mkolar [in this comment](https://github.com/ynput/OpenPype/pull/4916#issuecomment-1571498425)

[skip ci]
___
</details>

<details>
<summary>3dsmax: refactor load plugins <a href="https://github.com/ynput/OpenPype/pull/5079">#5079</a></summary>

___
</details>

## [3.15.9](https://github.com/ynput/OpenPype/tree/3.15.9)
@@ -855,12 +855,13 @@ def get_output_link_versions(project_name, version_id, fields=None):
    return conn.find(query_filter, _prepare_fields(fields))


def get_last_versions(project_name, subset_ids, fields=None):
def get_last_versions(project_name, subset_ids, active=None, fields=None):
    """Latest versions for entered subset_ids.

    Args:
        project_name (str): Name of project where to look for queried entities.
        subset_ids (Iterable[Union[str, ObjectId]]): List of subset ids.
        active (Optional[bool]): If True only active versions are returned.
        fields (Optional[Iterable[str]]): Fields that should be returned. All
            fields are returned if 'None' is passed.

@@ -899,12 +900,21 @@ def get_last_versions(project_name, subset_ids, fields=None):
    if name_needed:
        group_item["name"] = {"$last": "$name"}

    aggregate_filter = {
        "type": "version",
        "parent": {"$in": subset_ids}
    }
    if active is False:
        aggregate_filter["data.active"] = active
    elif active is True:
        aggregate_filter["$or"] = [
            {"data.active": {"$exists": 0}},
            {"data.active": active},
        ]

    aggregation_pipeline = [
        # Find all versions of those subsets
        {"$match": {
            "type": "version",
            "parent": {"$in": subset_ids}
        }},
        {"$match": aggregate_filter},
        # Sorting versions all together
        {"$sort": {"name": 1}},
        # Group them by "parent", but only take the last
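
For context, a minimal usage sketch of the new `active` argument (the project name and field list are illustrative, and the import assumes the function is exported from `openpype.client` like the other getters used in this diff):

```python
from openpype.client import get_last_versions

# Versions without a "data.active" key count as active, matching the
# "$or" branch of the aggregate filter above.
last_versions = get_last_versions(
    "my_project",              # hypothetical project name
    subset_ids=subset_ids,     # subset ObjectIds gathered earlier
    active=True,
    fields=["name", "parent"],
)
```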
@@ -220,7 +220,6 @@ def new_representation_doc(
        "parent": version_id,
        "name": name,
        "data": data,

        # Imprint shortcut to context for performance reasons.
        "context": context
    }

@@ -708,7 +707,11 @@ class OperationsSession(object):
        return operation


def create_project(project_name, project_code, library_project=False):
def create_project(
    project_name,
    project_code,
    library_project=False,
):
    """Create project using OpenPype settings.

    This project creation function is not validating project document on

@@ -752,7 +755,7 @@ def create_project(project_name, project_code, library_project=False):
        "name": project_name,
        "data": {
            "code": project_code,
            "library_project": library_project
            "library_project": library_project,
        },
        "schema": CURRENT_PROJECT_SCHEMA
    }
@@ -1,37 +0,0 @@
from openpype.lib import PreLaunchHook
from openpype.pipeline.colorspace import get_imageio_config
from openpype.pipeline.template_data import get_template_data


class PreLaunchHostSetOCIO(PreLaunchHook):
    """Set OCIO environment for the host"""

    order = 0
    app_groups = ["substancepainter"]

    def execute(self):
        """Hook entry method."""

        anatomy_data = get_template_data(
            project_doc=self.data["project_doc"],
            asset_doc=self.data["asset_doc"],
            task_name=self.data["task_name"],
            host_name=self.host_name,
            system_settings=self.data["system_settings"]
        )

        ocio_config = get_imageio_config(
            project_name=self.data["project_doc"]["name"],
            host_name=self.host_name,
            project_settings=self.data["project_settings"],
            anatomy_data=anatomy_data,
            anatomy=self.data["anatomy"]
        )

        if ocio_config:
            ocio_path = ocio_config["path"]
            self.log.info(f"Setting OCIO config path: {ocio_path}")
            self.launch_context.env["OCIO"] = ocio_path
        else:
            self.log.debug("OCIO not set or enabled")
@@ -1,12 +1,27 @@
from openpype.lib import PreLaunchHook
from openpype.pipeline.colorspace import get_imageio_config
from openpype.pipeline.colorspace import (
    get_imageio_config
)
from openpype.pipeline.template_data import get_template_data_with_names


class FusionPreLaunchOCIO(PreLaunchHook):
    """Set OCIO environment variable for Fusion"""
    app_groups = ["fusion"]
class OCIOEnvHook(PreLaunchHook):
    """Set OCIO environment variable for hosts that use OpenColorIO."""

    order = 0
    hosts = [
        "substancepainter",
        "fusion",
        "blender",
        "aftereffects",
        "max",
        "houdini",
        "maya",
        "nuke",
        "hiero",
        "resolve"
    ]

    def execute(self):
        """Hook entry method."""

@@ -26,7 +41,13 @@ class FusionPreLaunchOCIO(PreLaunchHook):
            anatomy_data=template_data,
            anatomy=self.data["anatomy"]
        )
        ocio_path = config_data["path"]

        self.log.info(f"Setting OCIO config path: {ocio_path}")
        self.launch_context.env["OCIO"] = ocio_path
        if config_data:
            ocio_path = config_data["path"]

            self.log.info(
                f"Setting OCIO environment to config path: {ocio_path}")

            self.launch_context.env["OCIO"] = ocio_path
        else:
            self.log.debug("OCIO not set or enabled")
@@ -134,6 +134,27 @@ def append_user_scripts():
        traceback.print_exc()


def set_app_templates_path():
    # Blender requires the app templates to be in `BLENDER_USER_SCRIPTS`.
    # After running Blender, we set that variable to our custom path, so
    # that the user can use their custom app templates.

    # We look among the scripts paths for one of the paths that contains
    # the app templates. The path must contain the subfolder
    # `startup/bl_app_templates_user`.
    paths = os.environ.get("OPENPYPE_BLENDER_USER_SCRIPTS").split(os.pathsep)

    app_templates_path = None
    for path in paths:
        if os.path.isdir(
                os.path.join(path, "startup", "bl_app_templates_user")):
            app_templates_path = path
            break

    if app_templates_path and os.path.isdir(app_templates_path):
        os.environ["BLENDER_USER_SCRIPTS"] = app_templates_path


def imprint(node: bpy.types.bpy_struct_meta_idprop, data: Dict):
    r"""Write `data` to `node` as userDefined attributes

@@ -60,6 +60,7 @@ def install():
    register_creator_plugin_path(str(CREATE_PATH))

    lib.append_user_scripts()
    lib.set_app_templates_path()

    register_event_callback("new", on_new)
    register_event_callback("open", on_open)
@@ -10,6 +10,7 @@ from qtpy import QtCore, QtWidgets
from openpype import style
from openpype.lib import Logger, StringTemplate
from openpype.pipeline import LegacyCreator, LoaderPlugin
from openpype.pipeline.colorspace import get_remapped_colorspace_to_native
from openpype.settings import get_current_project_settings

from . import constants

@@ -701,6 +702,7 @@ class ClipLoader(LoaderPlugin):
    ]

    _mapping = None
    _host_settings = None

    def apply_settings(cls, project_settings, system_settings):

@@ -769,15 +771,26 @@ class ClipLoader(LoaderPlugin):
        Returns:
            str: native colorspace name defined in mapping or None
        """
        # TODO: rewrite to support only pipeline's remapping
        if not cls._host_settings:
            cls._host_settings = get_current_project_settings()["flame"]

        # [Deprecated] way of remapping
        if not cls._mapping:
            settings = get_current_project_settings()["flame"]
            mapping = settings["imageio"]["profilesMapping"]["inputs"]
            mapping = (
                cls._host_settings["imageio"]["profilesMapping"]["inputs"])
            cls._mapping = {
                input["ocioName"]: input["flameName"]
                for input in mapping
            }

        return cls._mapping.get(input_colorspace)
        native_name = cls._mapping.get(input_colorspace)

        if not native_name:
            native_name = get_remapped_colorspace_to_native(
                input_colorspace, "flame", cls._host_settings["imageio"])

        return native_name


class OpenClipSolver(flib.MediaInfoFile):
@@ -47,6 +47,17 @@ class FlamePrelaunch(PreLaunchHook):

        imageio_flame = project_settings["flame"]["imageio"]

        # Check whether 'enabled' key from host imageio settings exists
        # so we can tell if host is using the new colormanagement framework.
        # If the 'enabled' isn't found we want 'colormanaged' set to True
        # because prior to the key existing we always did colormanagement for
        # Flame
        colormanaged = imageio_flame.get("enabled")
        # if key was not found, set to True
        # ensuring backward compatibility
        if colormanaged is None:
            colormanaged = True

        # get user name and host name
        user_name = get_openpype_username()
        user_name = user_name.replace(".", "_")

@@ -68,9 +79,7 @@ class FlamePrelaunch(PreLaunchHook):
            "FrameWidth": int(width),
            "FrameHeight": int(height),
            "AspectRatio": float((width / height) * _db_p_data["pixelAspect"]),
            "FrameRate": self._get_flame_fps(fps),
            "FrameDepth": str(imageio_flame["project"]["frameDepth"]),
            "FieldDominance": str(imageio_flame["project"]["fieldDominance"])
            "FrameRate": self._get_flame_fps(fps)
        }

        data_to_script = {

@@ -78,7 +87,6 @@ class FlamePrelaunch(PreLaunchHook):
            "host_name": _env.get("FLAME_WIRETAP_HOSTNAME") or hostname,
            "volume_name": volume_name,
            "group_name": _env.get("FLAME_WIRETAP_GROUP"),
            "color_policy": str(imageio_flame["project"]["colourPolicy"]),

            # from project
            "project_name": project_name,

@@ -86,6 +94,16 @@ class FlamePrelaunch(PreLaunchHook):
            "project_data": project_data
        }

        # add color management data
        if colormanaged:
            project_data.update({
                "FrameDepth": str(imageio_flame["project"]["frameDepth"]),
                "FieldDominance": str(
                    imageio_flame["project"]["fieldDominance"])
            })
            data_to_script["color_policy"] = str(
                imageio_flame["project"]["colourPolicy"])

        self.log.info(pformat(dict(_env)))
        self.log.info(pformat(data_to_script))
@@ -23,11 +23,17 @@ except ImportError:

from openpype.client import get_project
from openpype.settings import get_project_settings
from openpype.pipeline import legacy_io, Anatomy
from openpype.pipeline import (
    get_current_project_name, legacy_io, Anatomy
)
from openpype.pipeline.load import filter_containers
from openpype.lib import Logger
from . import tags

from openpype.pipeline.colorspace import (
    get_imageio_config
)


class DeprecatedWarning(DeprecationWarning):
    pass

@@ -1047,6 +1053,18 @@ def apply_colorspace_project():
    imageio = get_project_settings(project_name)["hiero"]["imageio"]
    presets = imageio.get("workfile")

    # backward compatibility layer
    # TODO: remove this after some time
    config_data = get_imageio_config(
        project_name=get_current_project_name(),
        host_name="hiero"
    )

    if config_data:
        presets.update({
            "ocioConfigName": "custom"
        })

    # save the workfile as subversion "comment:_colorspaceChange"
    split_current_file = os.path.splitext(current_file)
    copy_current_file = current_file
56  openpype/hosts/houdini/api/colorspace.py (new file)

@@ -0,0 +1,56 @@
import attr
import hou
from openpype.hosts.houdini.api.lib import get_color_management_preferences


@attr.s
class LayerMetadata(object):
    """Data class for Render Layer metadata."""
    frameStart = attr.ib()
    frameEnd = attr.ib()


@attr.s
class RenderProduct(object):
    """Getting Colorspace as
    Specific Render Product Parameter for submitting
    publish job.

    """
    colorspace = attr.ib()  # colorspace
    view = attr.ib()
    productName = attr.ib(default=None)


class ARenderProduct(object):

    def __init__(self):
        """Constructor."""
        # Initialize
        self.layer_data = self._get_layer_data()
        self.layer_data.products = self.get_colorspace_data()

    def _get_layer_data(self):
        return LayerMetadata(
            frameStart=int(hou.playbar.frameRange()[0]),
            frameEnd=int(hou.playbar.frameRange()[1]),
        )

    def get_colorspace_data(self):
        """To be implemented by renderer class.

        This should return a list of RenderProducts.

        Returns:
            list: List of RenderProduct

        """
        data = get_color_management_preferences()
        colorspace_data = [
            RenderProduct(
                colorspace=data["display"],
                view=data["view"],
                productName=""
            )
        ]
        return colorspace_data
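
For orientation, the publish collectors later in this diff consume this class like so:

```python
# From the Arnold/Karma collector plugins below:
instance.data["renderProducts"] = colorspace.ARenderProduct()
```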
@@ -1,6 +1,7 @@
# -*- coding: utf-8 -*-
import sys
import os
import re
import uuid
import logging
from contextlib import contextmanager

@@ -581,3 +582,74 @@ def splitext(name, allowed_multidot_extensions):
            return name[:-len(ext)], ext

    return os.path.splitext(name)


def get_top_referenced_parm(parm):

    processed = set()  # disallow infinite loop
    while True:
        if parm.path() in processed:
            raise RuntimeError("Parameter references result in cycle.")

        processed.add(parm.path())

        ref = parm.getReferencedParm()
        if ref.path() == parm.path():
            # It returns itself when it doesn't reference
            # another parameter
            return ref
        else:
            parm = ref


def evalParmNoFrame(node, parm, pad_character="#"):

    parameter = node.parm(parm)
    assert parameter, "Parameter does not exist: %s.%s" % (node, parm)

    # If the parameter has a parameter reference, then get that
    # parameter instead as otherwise `unexpandedString()` fails.
    parameter = get_top_referenced_parm(parameter)

    # Substitute out the frame numbering with padded characters
    try:
        raw = parameter.unexpandedString()
    except hou.Error as exc:
        print("Failed: %s" % parameter)
        raise RuntimeError(exc)

    def replace(match):
        padding = 1
        n = match.group(2)
        if n and int(n):
            padding = int(n)
        return pad_character * padding

    expression = re.sub(r"(\$F([0-9]*))", replace, raw)

    with hou.ScriptEvalContext(parameter):
        return hou.expandStringAtFrame(expression, 0)


def get_color_management_preferences():
    """Get default OCIO preferences"""
    data = {
        "config": hou.Color.ocio_configPath()
    }

    # Get default display and view from OCIO
    display = hou.Color.ocio_defaultDisplay()
    disp_regex = re.compile(r"^(?P<name>.+-)(?P<display>.+)$")
    disp_match = disp_regex.match(display)

    view = hou.Color.ocio_defaultView()
    view_regex = re.compile(r"^(?P<name>.+- )(?P<view>.+)$")
    view_match = view_regex.match(view)
    data.update({
        "display": disp_match.group("display"),
        "view": view_match.group("view")
    })

    return data
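
For illustration, a hypothetical use of `evalParmNoFrame` from the hunk above (the node path and parameter value are made up):

```python
import hou

# Assuming /out/mantra1's "vm_picture" is set to "$HIP/render/shot.$F4.exr",
# the $F4 token comes back as "####" instead of the current frame number:
node = hou.node("/out/mantra1")            # hypothetical node
print(evalParmNoFrame(node, "vm_picture"))
# -> e.g. /path/to/hip/render/shot.####.exr
```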
71  openpype/hosts/houdini/plugins/create/create_arnold_rop.py (new file)

@@ -0,0 +1,71 @@
from openpype.hosts.houdini.api import plugin
from openpype.lib import EnumDef


class CreateArnoldRop(plugin.HoudiniCreator):
    """Arnold ROP"""

    identifier = "io.openpype.creators.houdini.arnold_rop"
    label = "Arnold ROP"
    family = "arnold_rop"
    icon = "magic"
    defaults = ["master"]

    # Default extension
    ext = "exr"

    def create(self, subset_name, instance_data, pre_create_data):
        import hou

        # Remove the active, we are checking the bypass flag of the nodes
        instance_data.pop("active", None)
        instance_data.update({"node_type": "arnold"})

        # Add chunk size attribute
        instance_data["chunkSize"] = 1
        # Submit for job publishing
        instance_data["farm"] = True

        instance = super(CreateArnoldRop, self).create(
            subset_name,
            instance_data,
            pre_create_data)  # type: plugin.CreatedInstance

        instance_node = hou.node(instance.get("instance_node"))

        ext = pre_create_data.get("image_format")

        filepath = "{renders_dir}{subset_name}/{subset_name}.$F4.{ext}".format(
            renders_dir=hou.text.expandString("$HIP/pyblish/renders/"),
            subset_name=subset_name,
            ext=ext,
        )
        parms = {
            # Render frame range
            "trange": 1,

            # Arnold ROP settings
            "ar_picture": filepath,
            "ar_exr_half_precision": 1  # half precision
        }

        instance_node.setParms(parms)

        # Lock any parameters in this list
        to_lock = ["family", "id"]
        self.lock_parameters(instance_node, to_lock)

    def get_pre_create_attr_defs(self):
        attrs = super(CreateArnoldRop, self).get_pre_create_attr_defs()

        image_format_enum = [
            "bmp", "cin", "exr", "jpg", "pic", "pic.gz", "png",
            "rad", "rat", "rta", "sgi", "tga", "tif",
        ]

        return attrs + [
            EnumDef("image_format",
                    image_format_enum,
                    default=self.ext,
                    label="Image Format Options")
        ]
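
For a sense of the defaults above: with a subset named `arnoldMain` and the default `exr` format, the `ar_picture` parameter would resolve to something like the path below (the `$HIP` expansion is illustrative):

```python
# $HIP is expanded via hou.text.expandString, so the parameter ends up e.g.:
#   /path/to/hip/pyblish/renders/arnoldMain/arnoldMain.$F4.exr
```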
114  openpype/hosts/houdini/plugins/create/create_karma_rop.py (new file)

@@ -0,0 +1,114 @@
# -*- coding: utf-8 -*-
"""Creator plugin to create Karma ROP."""
from openpype.hosts.houdini.api import plugin
from openpype.pipeline import CreatedInstance
from openpype.lib import BoolDef, EnumDef, NumberDef


class CreateKarmaROP(plugin.HoudiniCreator):
    """Karma ROP"""
    identifier = "io.openpype.creators.houdini.karma_rop"
    label = "Karma ROP"
    family = "karma_rop"
    icon = "magic"
    defaults = ["master"]

    def create(self, subset_name, instance_data, pre_create_data):
        import hou  # noqa

        instance_data.pop("active", None)
        instance_data.update({"node_type": "karma"})
        # Add chunk size attribute
        instance_data["chunkSize"] = 10
        # Submit for job publishing
        instance_data["farm"] = True

        instance = super(CreateKarmaROP, self).create(
            subset_name,
            instance_data,
            pre_create_data)  # type: CreatedInstance

        instance_node = hou.node(instance.get("instance_node"))

        ext = pre_create_data.get("image_format")

        filepath = "{renders_dir}{subset_name}/{subset_name}.$F4.{ext}".format(
            renders_dir=hou.text.expandString("$HIP/pyblish/renders/"),
            subset_name=subset_name,
            ext=ext,
        )
        checkpoint = "{cp_dir}{subset_name}.$F4.checkpoint".format(
            cp_dir=hou.text.expandString("$HIP/pyblish/"),
            subset_name=subset_name
        )

        usd_directory = "{usd_dir}{subset_name}_$RENDERID".format(
            usd_dir=hou.text.expandString("$HIP/pyblish/renders/usd_renders/"),  # noqa
            subset_name=subset_name
        )

        parms = {
            # Render Frame Range
            "trange": 1,
            # Karma ROP Setting
            "picture": filepath,
            # Karma Checkpoint Setting
            "productName": checkpoint,
            # USD Output Directory
            "savetodirectory": usd_directory,
        }

        res_x = pre_create_data.get("res_x")
        res_y = pre_create_data.get("res_y")

        if self.selected_nodes:
            # If camera found in selection
            # we will use as render camera
            camera = None
            for node in self.selected_nodes:
                if node.type().name() == "cam":
                    has_camera = pre_create_data.get("cam_res")
                    if has_camera:
                        res_x = node.evalParm("resx")
                        res_y = node.evalParm("resy")

            if not camera:
                self.log.warning("No render camera found in selection")

            parms.update({
                "camera": camera or "",
                "resolutionx": res_x,
                "resolutiony": res_y,
            })

        instance_node.setParms(parms)

        # Lock some Avalon attributes
        to_lock = ["family", "id"]
        self.lock_parameters(instance_node, to_lock)

    def get_pre_create_attr_defs(self):
        attrs = super(CreateKarmaROP, self).get_pre_create_attr_defs()

        image_format_enum = [
            "bmp", "cin", "exr", "jpg", "pic", "pic.gz", "png",
            "rad", "rat", "rta", "sgi", "tga", "tif",
        ]

        return attrs + [
            EnumDef("image_format",
                    image_format_enum,
                    default="exr",
                    label="Image Format Options"),
            NumberDef("res_x",
                      label="width",
                      default=1920,
                      decimals=0),
            NumberDef("res_y",
                      label="height",
                      default=720,
                      decimals=0),
            BoolDef("cam_res",
                    label="Camera Resolution",
                    default=False)
        ]
88  openpype/hosts/houdini/plugins/create/create_mantra_rop.py (new file)

@@ -0,0 +1,88 @@
# -*- coding: utf-8 -*-
"""Creator plugin to create Mantra ROP."""
from openpype.hosts.houdini.api import plugin
from openpype.pipeline import CreatedInstance
from openpype.lib import EnumDef, BoolDef


class CreateMantraROP(plugin.HoudiniCreator):
    """Mantra ROP"""
    identifier = "io.openpype.creators.houdini.mantra_rop"
    label = "Mantra ROP"
    family = "mantra_rop"
    icon = "magic"
    defaults = ["master"]

    def create(self, subset_name, instance_data, pre_create_data):
        import hou  # noqa

        instance_data.pop("active", None)
        instance_data.update({"node_type": "ifd"})
        # Add chunk size attribute
        instance_data["chunkSize"] = 10
        # Submit for job publishing
        instance_data["farm"] = True

        instance = super(CreateMantraROP, self).create(
            subset_name,
            instance_data,
            pre_create_data)  # type: CreatedInstance

        instance_node = hou.node(instance.get("instance_node"))

        ext = pre_create_data.get("image_format")

        filepath = "{renders_dir}{subset_name}/{subset_name}.$F4.{ext}".format(
            renders_dir=hou.text.expandString("$HIP/pyblish/renders/"),
            subset_name=subset_name,
            ext=ext,
        )

        parms = {
            # Render Frame Range
            "trange": 1,
            # Mantra ROP Setting
            "vm_picture": filepath,
        }

        if self.selected_nodes:
            # If camera found in selection
            # we will use as render camera
            camera = None
            for node in self.selected_nodes:
                if node.type().name() == "cam":
                    camera = node.path()

            if not camera:
                self.log.warning("No render camera found in selection")

            parms.update({"camera": camera or ""})

        custom_res = pre_create_data.get("override_resolution")
        if custom_res:
            parms.update({"override_camerares": 1})
        instance_node.setParms(parms)

        # Lock some Avalon attributes
        to_lock = ["family", "id"]
        self.lock_parameters(instance_node, to_lock)

    def get_pre_create_attr_defs(self):
        attrs = super(CreateMantraROP, self).get_pre_create_attr_defs()

        image_format_enum = [
            "bmp", "cin", "exr", "jpg", "pic", "pic.gz", "png",
            "rad", "rat", "rta", "sgi", "tga", "tif",
        ]

        return attrs + [
            EnumDef("image_format",
                    image_format_enum,
                    default="exr",
                    label="Image Format Options"),
            BoolDef("override_resolution",
                    label="Override Camera Resolution",
                    tooltip="Override the current camera "
                            "resolution, recommended for IPR.",
                    default=False)
        ]
@@ -1,7 +1,10 @@
# -*- coding: utf-8 -*-
"""Creator plugin to create Redshift ROP."""
import hou  # noqa

from openpype.hosts.houdini.api import plugin
from openpype.pipeline import CreatedInstance
from openpype.lib import EnumDef


class CreateRedshiftROP(plugin.HoudiniCreator):

@@ -11,20 +14,16 @@ class CreateRedshiftROP(plugin.HoudiniCreator):
    family = "redshift_rop"
    icon = "magic"
    defaults = ["master"]
    ext = "exr"

    def create(self, subset_name, instance_data, pre_create_data):
        import hou  # noqa

        instance_data.pop("active", None)
        instance_data.update({"node_type": "Redshift_ROP"})
        # Add chunk size attribute
        instance_data["chunkSize"] = 10

        # Clear the family prefix from the subset
        subset = subset_name
        subset_no_prefix = subset[len(self.family):]
        subset_no_prefix = subset_no_prefix[0].lower() + subset_no_prefix[1:]
        subset_name = subset_no_prefix
        # Submit for job publishing
        instance_data["farm"] = True

        instance = super(CreateRedshiftROP, self).create(
            subset_name,

@@ -34,11 +33,10 @@ class CreateRedshiftROP(plugin.HoudiniCreator):
        instance_node = hou.node(instance.get("instance_node"))

        basename = instance_node.name()
        instance_node.setName(basename + "_ROP", unique_name=True)

        # Also create the linked Redshift IPR Rop
        try:
            ipr_rop = self.parent.createNode(
            ipr_rop = instance_node.parent().createNode(
                "Redshift_IPR", node_name=basename + "_IPR"
            )
        except hou.OperationFailed:

@@ -50,19 +48,58 @@ class CreateRedshiftROP(plugin.HoudiniCreator):
        ipr_rop.setPosition(instance_node.position() + hou.Vector2(0, -1))

        # Set the linked rop to the Redshift ROP
        ipr_rop.parm("linked_rop").set(ipr_rop.relativePathTo(instance))
        ipr_rop.parm("linked_rop").set(instance_node.path())

        ext = pre_create_data.get("image_format")
        filepath = "{renders_dir}{subset_name}/{subset_name}.{fmt}".format(
            renders_dir=hou.text.expandString("$HIP/pyblish/renders/"),
            subset_name=subset_name,
            fmt="${aov}.$F4.{ext}".format(aov="AOV", ext=ext)
        )

        prefix = '${HIP}/render/${HIPNAME}/`chs("subset")`.${AOV}.$F4.exr'
        parms = {
            # Render frame range
            "trange": 1,

            # Redshift ROP settings
            "RS_outputFileNamePrefix": prefix,
            "RS_outputMultilayerMode": 0,  # no multi-layered exr
            "RS_outputFileNamePrefix": filepath,
            "RS_outputMultilayerMode": "1",  # no multi-layered exr
            "RS_outputBeautyAOVSuffix": "beauty",
        }

        if self.selected_nodes:
            # set up the render camera from the selected node
            camera = None
            for node in self.selected_nodes:
                if node.type().name() == "cam":
                    camera = node.path()
            parms.update({
                "RS_renderCamera": camera or ""})
        instance_node.setParms(parms)

        # Lock some Avalon attributes
        to_lock = ["family", "id"]
        self.lock_parameters(instance_node, to_lock)

    def remove_instances(self, instances):
        for instance in instances:
            node = instance.data.get("instance_node")

            ipr_node = hou.node(f"{node}_IPR")
            if ipr_node:
                ipr_node.destroy()

        return super(CreateRedshiftROP, self).remove_instances(instances)

    def get_pre_create_attr_defs(self):
        attrs = super(CreateRedshiftROP, self).get_pre_create_attr_defs()
        image_format_enum = [
            "bmp", "cin", "exr", "jpg", "pic", "pic.gz", "png",
            "rad", "rat", "rta", "sgi", "tga", "tif",
        ]

        return attrs + [
            EnumDef("image_format",
                    image_format_enum,
                    default=self.ext,
                    label="Image Format Options")
        ]
156  openpype/hosts/houdini/plugins/create/create_vray_rop.py (new file)

@@ -0,0 +1,156 @@
# -*- coding: utf-8 -*-
"""Creator plugin to create VRay ROP."""
import hou

from openpype.hosts.houdini.api import plugin
from openpype.pipeline import CreatedInstance
from openpype.lib import EnumDef, BoolDef


class CreateVrayROP(plugin.HoudiniCreator):
    """VRay ROP"""

    identifier = "io.openpype.creators.houdini.vray_rop"
    label = "VRay ROP"
    family = "vray_rop"
    icon = "magic"
    defaults = ["master"]

    ext = "exr"

    def create(self, subset_name, instance_data, pre_create_data):

        instance_data.pop("active", None)
        instance_data.update({"node_type": "vray_renderer"})
        # Add chunk size attribute
        instance_data["chunkSize"] = 10
        # Submit for job publishing
        instance_data["farm"] = True

        instance = super(CreateVrayROP, self).create(
            subset_name,
            instance_data,
            pre_create_data)  # type: CreatedInstance

        instance_node = hou.node(instance.get("instance_node"))

        # Add IPR for Vray
        basename = instance_node.name()
        try:
            ipr_rop = instance_node.parent().createNode(
                "vray", node_name=basename + "_IPR"
            )
        except hou.OperationFailed:
            raise plugin.OpenPypeCreatorError(
                "Cannot create Vray render node. "
                "Make sure Vray installed and enabled!"
            )

        ipr_rop.setPosition(instance_node.position() + hou.Vector2(0, -1))
        ipr_rop.parm("rop").set(instance_node.path())

        parms = {
            "trange": 1,
            "SettingsEXR_bits_per_channel": "16"  # half precision
        }

        if self.selected_nodes:
            # set up the render camera from the selected node
            camera = None
            for node in self.selected_nodes:
                if node.type().name() == "cam":
                    camera = node.path()
            parms.update({
                "render_camera": camera or ""
            })

        # Enable render element
        ext = pre_create_data.get("image_format")
        instance_data["RenderElement"] = pre_create_data.get("render_element_enabled")  # noqa
        if pre_create_data.get("render_element_enabled", True):
            # Vray has its own tag for AOV file output
            filepath = "{renders_dir}{subset_name}/{subset_name}.{fmt}".format(
                renders_dir=hou.text.expandString("$HIP/pyblish/renders/"),
                subset_name=subset_name,
                fmt="${aov}.$F4.{ext}".format(aov="AOV",
                                              ext=ext)
            )
            filepath = "{}{}".format(
                hou.text.expandString("$HIP/pyblish/renders/"),
                "{}/{}.${}.$F4.{}".format(subset_name,
                                          subset_name,
                                          "AOV",
                                          ext)
            )
            re_rop = instance_node.parent().createNode(
                "vray_render_channels",
                node_name=basename + "_render_element"
            )
            # move the render element node next to the vray renderer node
            re_rop.setPosition(instance_node.position() + hou.Vector2(0, 1))
            re_path = re_rop.path()
            parms.update({
                "use_render_channels": 1,
                "SettingsOutput_img_file_path": filepath,
                "render_network_render_channels": re_path
            })

        else:
            filepath = "{renders_dir}{subset_name}/{subset_name}.{fmt}".format(
                renders_dir=hou.text.expandString("$HIP/pyblish/renders/"),
                subset_name=subset_name,
                fmt="$F4.{ext}".format(ext=ext)
            )
            parms.update({
                "use_render_channels": 0,
                "SettingsOutput_img_file_path": filepath
            })

        custom_res = pre_create_data.get("override_resolution")
        if custom_res:
            parms.update({"override_camerares": 1})

        instance_node.setParms(parms)

        # lock parameters from AVALON
        to_lock = ["family", "id"]
        self.lock_parameters(instance_node, to_lock)

    def remove_instances(self, instances):
        for instance in instances:
            node = instance.data.get("instance_node")
            # for the extra render node from the plugins
            # such as vray and redshift
            ipr_node = hou.node("{}{}".format(node, "_IPR"))
            if ipr_node:
                ipr_node.destroy()
            re_node = hou.node("{}{}".format(node,
                                             "_render_element"))
            if re_node:
                re_node.destroy()

        return super(CreateVrayROP, self).remove_instances(instances)

    def get_pre_create_attr_defs(self):
        attrs = super(CreateVrayROP, self).get_pre_create_attr_defs()
        image_format_enum = [
            "bmp", "cin", "exr", "jpg", "pic", "pic.gz", "png",
            "rad", "rat", "rta", "sgi", "tga", "tif",
        ]

        return attrs + [
            EnumDef("image_format",
                    image_format_enum,
                    default=self.ext,
                    label="Image Format Options"),
            BoolDef("override_resolution",
                    label="Override Camera Resolution",
                    tooltip="Override the current camera "
                            "resolution, recommended for IPR.",
                    default=False),
            BoolDef("render_element_enabled",
                    label="Render Element",
                    tooltip="Create Render Element Node "
                            "if enabled",
                    default=False)
        ]
135  openpype/hosts/houdini/plugins/publish/collect_arnold_rop.py (new file)

@@ -0,0 +1,135 @@
import os
import re

import hou
import pyblish.api

from openpype.hosts.houdini.api import colorspace
from openpype.hosts.houdini.api.lib import (
    evalParmNoFrame, get_color_management_preferences)


class CollectArnoldROPRenderProducts(pyblish.api.InstancePlugin):
    """Collect Arnold ROP Render Products

    Collects the instance.data["files"] for the render products.

    Provides:
        instance -> files

    """

    label = "Arnold ROP Render Products"
    order = pyblish.api.CollectorOrder + 0.4
    hosts = ["houdini"]
    families = ["arnold_rop"]

    def process(self, instance):

        rop = hou.node(instance.data.get("instance_node"))

        # Collect chunkSize
        chunk_size_parm = rop.parm("chunkSize")
        if chunk_size_parm:
            chunk_size = int(chunk_size_parm.eval())
            instance.data["chunkSize"] = chunk_size
            self.log.debug("Chunk Size: %s" % chunk_size)

        default_prefix = evalParmNoFrame(rop, "ar_picture")
        render_products = []

        # Default beauty AOV
        beauty_product = self.get_render_product_name(prefix=default_prefix,
                                                      suffix=None)
        render_products.append(beauty_product)

        files_by_aov = {
            "": self.generate_expected_files(instance, beauty_product)
        }

        num_aovs = rop.evalParm("ar_aovs")
        for index in range(1, num_aovs + 1):
            # Skip disabled AOVs
            if not rop.evalParm("ar_enable_aovP{}".format(index)):
                continue

            if rop.evalParm("ar_aov_exr_enable_layer_name{}".format(index)):
                label = rop.evalParm("ar_aov_exr_layer_name{}".format(index))
            else:
                label = evalParmNoFrame(rop, "ar_aov_label{}".format(index))

            aov_product = self.get_render_product_name(default_prefix,
                                                       suffix=label)
            render_products.append(aov_product)
            files_by_aov[label] = self.generate_expected_files(instance,
                                                               aov_product)

        for product in render_products:
            self.log.debug("Found render product: {}".format(product))

        instance.data["files"] = list(render_products)
        instance.data["renderProducts"] = colorspace.ARenderProduct()

        # For now by default do NOT try to publish the rendered output
        instance.data["publishJobState"] = "Suspended"
        instance.data["attachTo"] = []  # stub required data

        if "expectedFiles" not in instance.data:
            instance.data["expectedFiles"] = list()
        instance.data["expectedFiles"].append(files_by_aov)

        # update the colorspace data
        colorspace_data = get_color_management_preferences()
        instance.data["colorspaceConfig"] = colorspace_data["config"]
        instance.data["colorspaceDisplay"] = colorspace_data["display"]
        instance.data["colorspaceView"] = colorspace_data["view"]

    def get_render_product_name(self, prefix, suffix):
        """Return the output filename using the AOV prefix and suffix"""

        # When AOV is explicitly defined in prefix we just swap it out
        # directly with the AOV suffix to embed it.
        # Note: ${AOV} seems to be evaluated in the parameter as %AOV%
        if "%AOV%" in prefix:
            # It seems that when some special separator characters are present
            # before the %AOV% token that Redshift will secretly remove it if
            # there is no suffix for the current product, for example:
            # foo_%AOV% -> foo.exr
            pattern = "%AOV%" if suffix else "[._-]?%AOV%"
            product_name = re.sub(pattern,
                                  suffix,
                                  prefix,
                                  flags=re.IGNORECASE)
        else:
            if suffix:
                # Add ".{suffix}" before the extension
                prefix_base, ext = os.path.splitext(prefix)
                product_name = prefix_base + "." + suffix + ext
            else:
                product_name = prefix

        return product_name
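
Illustrative expansions of `get_render_product_name`, following the two branches above (paths are made up):

```python
# With an explicit %AOV% token the suffix is swapped in directly:
#   prefix="render/shot_%AOV%.exr", suffix="diffuse"
#   -> "render/shot_diffuse.exr"
#
# Without the token the suffix is injected before the extension:
#   prefix="render/shot.exr", suffix="diffuse"
#   -> "render/shot.diffuse.exr"
```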
    def generate_expected_files(self, instance, path):
        """Create expected files in instance data"""

        dir = os.path.dirname(path)
        file = os.path.basename(path)

        if "#" in file:
            def replace(match):
                return "%0{}d".format(len(match.group()))

            file = re.sub("#+", replace, file)

        if "%" not in file:
            return path

        expected_files = []
        start = instance.data["frameStart"]
        end = instance.data["frameEnd"]
        for i in range(int(start), (int(end) + 1)):
            expected_files.append(
                os.path.join(dir, (file % i)).replace("\\", "/"))

        return expected_files
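
Illustrative result of `generate_expected_files`, assuming frameStart=1001 and frameEnd=1003:

```python
# "renders/beauty.####.exr" first becomes "renders/beauty.%04d.exr",
# then expands per frame to:
#   ["renders/beauty.1001.exr",
#    "renders/beauty.1002.exr",
#    "renders/beauty.1003.exr"]
```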
@@ -11,15 +11,13 @@ from openpype.hosts.houdini.api import lib
class CollectFrames(pyblish.api.InstancePlugin):
    """Collect all frames which would be saved from the ROP nodes"""

    order = pyblish.api.CollectorOrder
    order = pyblish.api.CollectorOrder + 0.01
    label = "Collect Frames"
    families = ["vdbcache", "imagesequence", "ass", "redshiftproxy", "review"]

    def process(self, instance):

        ropnode = hou.node(instance.data["instance_node"])
        frame_data = lib.get_frame_data(ropnode)
        instance.data.update(frame_data)

        start_frame = instance.data.get("frameStart", None)
        end_frame = instance.data.get("frameEnd", None)
@@ -0,0 +1,56 @@
import hou

import pyblish.api


class CollectInstanceNodeFrameRange(pyblish.api.InstancePlugin):
    """Collect time range frame data for the instance node."""

    order = pyblish.api.CollectorOrder + 0.001
    label = "Instance Node Frame Range"
    hosts = ["houdini"]

    def process(self, instance):

        node_path = instance.data.get("instance_node")
        node = hou.node(node_path) if node_path else None
        if not node_path or not node:
            self.log.debug("No instance node found for instance: "
                           "{}".format(instance))
            return

        frame_data = self.get_frame_data(node)
        if not frame_data:
            return

        self.log.info("Collected time data: {}".format(frame_data))
        instance.data.update(frame_data)

    def get_frame_data(self, node):
        """Get the frame data: start frame, end frame and steps

        Args:
            node(hou.Node)

        Returns:
            dict

        """

        data = {}

        if node.parm("trange") is None:
            self.log.debug("Node has no 'trange' parameter: "
                           "{}".format(node.path()))
            return data

        if node.evalParm("trange") == 0:
            # Ignore 'render current frame'
            self.log.debug("Node '{}' has 'Render current frame' set. "
                           "Time range data ignored.".format(node.path()))
            return data

        data["frameStart"] = node.evalParm("f1")
        data["frameEnd"] = node.evalParm("f2")
        data["byFrameStep"] = node.evalParm("f3")

        return data
@@ -70,16 +70,10 @@ class CollectInstances(pyblish.api.ContextPlugin):
            if "active" in data:
                data["publish"] = data["active"]

            data.update(self.get_frame_data(node))

            # Create nice name if the instance has a frame range.
            label = data.get("name", node.name())
            label += " (%s)" % data["asset"]  # include asset in name

            if "frameStart" in data and "frameEnd" in data:
                frames = "[{frameStart} - {frameEnd}]".format(**data)
                label = "{} {}".format(label, frames)

            instance = context.create_instance(label)

            # Include `families` using `family` data

@@ -118,6 +112,6 @@ class CollectInstances(pyblish.api.ContextPlugin):

        data["frameStart"] = node.evalParm("f1")
        data["frameEnd"] = node.evalParm("f2")
        data["steps"] = node.evalParm("f3")
        data["byFrameStep"] = node.evalParm("f3")

        return data
104 openpype/hosts/houdini/plugins/publish/collect_karma_rop.py Normal file
@@ -0,0 +1,104 @@
import re
import os

import hou
import pyblish.api

from openpype.hosts.houdini.api.lib import (
    evalParmNoFrame,
    get_color_management_preferences
)
from openpype.hosts.houdini.api import (
    colorspace
)


class CollectKarmaROPRenderProducts(pyblish.api.InstancePlugin):
    """Collect Karma Render Products

    Collects the instance.data["files"] for the multipart render product.

    Provides:
        instance -> files

    """

    label = "Karma ROP Render Products"
    order = pyblish.api.CollectorOrder + 0.4
    hosts = ["houdini"]
    families = ["karma_rop"]

    def process(self, instance):

        rop = hou.node(instance.data.get("instance_node"))

        # Collect chunkSize
        chunk_size_parm = rop.parm("chunkSize")
        if chunk_size_parm:
            chunk_size = int(chunk_size_parm.eval())
            instance.data["chunkSize"] = chunk_size
            self.log.debug("Chunk Size: %s" % chunk_size)

        default_prefix = evalParmNoFrame(rop, "picture")
        render_products = []

        # Default beauty AOV
        beauty_product = self.get_render_product_name(
            prefix=default_prefix, suffix=None
        )
        render_products.append(beauty_product)

        files_by_aov = {
            "beauty": self.generate_expected_files(instance,
                                                   beauty_product)
        }

        filenames = list(render_products)
        instance.data["files"] = filenames
        instance.data["renderProducts"] = colorspace.ARenderProduct()

        for product in render_products:
            self.log.debug("Found render product: %s" % product)

        if "expectedFiles" not in instance.data:
            instance.data["expectedFiles"] = list()
        instance.data["expectedFiles"].append(files_by_aov)

        # update the colorspace data
        colorspace_data = get_color_management_preferences()
        instance.data["colorspaceConfig"] = colorspace_data["config"]
        instance.data["colorspaceDisplay"] = colorspace_data["display"]
        instance.data["colorspaceView"] = colorspace_data["view"]

    def get_render_product_name(self, prefix, suffix):
        product_name = prefix
        if suffix:
            # Add ".{suffix}" before the extension
            prefix_base, ext = os.path.splitext(prefix)
            product_name = "{}.{}{}".format(prefix_base, suffix, ext)

        return product_name

    def generate_expected_files(self, instance, path):
        """Create expected files in instance data"""

        dir = os.path.dirname(path)
        file = os.path.basename(path)

        if "#" in file:
            def replace(match):
                return "%0{}d".format(len(match.group()))

            file = re.sub("#+", replace, file)

        if "%" not in file:
            return path

        expected_files = []
        start = instance.data["frameStart"]
        end = instance.data["frameEnd"]
        for i in range(int(start), (int(end) + 1)):
            expected_files.append(
                os.path.join(dir, (file % i)).replace("\\", "/"))

        return expected_files
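The expectedFiles convention these collectors feed the farm publish job is a list with one dict per render layer, mapping AOV name to its per-frame file list. A hypothetical example of the resulting instance data shape (values illustrative only; real ones come from the ROP parameters):

```python
instance_data = {
    "files": ["/renders/shot010/beauty.####.exr"],
    "expectedFiles": [
        {
            "beauty": [
                "/renders/shot010/beauty.1001.exr",
                "/renders/shot010/beauty.1002.exr",
            ],
        }
    ],
}
```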
127 openpype/hosts/houdini/plugins/publish/collect_mantra_rop.py Normal file
@@ -0,0 +1,127 @@
import re
import os

import hou
import pyblish.api

from openpype.hosts.houdini.api.lib import (
    evalParmNoFrame,
    get_color_management_preferences
)
from openpype.hosts.houdini.api import (
    colorspace
)


class CollectMantraROPRenderProducts(pyblish.api.InstancePlugin):
    """Collect Mantra Render Products

    Collects the instance.data["files"] for the render products.

    Provides:
        instance -> files

    """

    label = "Mantra ROP Render Products"
    order = pyblish.api.CollectorOrder + 0.4
    hosts = ["houdini"]
    families = ["mantra_rop"]

    def process(self, instance):

        rop = hou.node(instance.data.get("instance_node"))

        # Collect chunkSize
        chunk_size_parm = rop.parm("chunkSize")
        if chunk_size_parm:
            chunk_size = int(chunk_size_parm.eval())
            instance.data["chunkSize"] = chunk_size
            self.log.debug("Chunk Size: %s" % chunk_size)

        default_prefix = evalParmNoFrame(rop, "vm_picture")
        render_products = []

        # Default beauty AOV
        beauty_product = self.get_render_product_name(
            prefix=default_prefix, suffix=None
        )
        render_products.append(beauty_product)

        files_by_aov = {
            "beauty": self.generate_expected_files(instance,
                                                   beauty_product)
        }

        aov_numbers = rop.evalParm("vm_numaux")
        if aov_numbers > 0:
            # get the filenames of the AOVs
            for i in range(1, aov_numbers + 1):
                var = rop.evalParm("vm_variable_plane%d" % i)
                if var:
                    aov_name = "vm_filename_plane%d" % i
                    aov_boolean = "vm_usefile_plane%d" % i
                    aov_enabled = rop.evalParm(aov_boolean)
                    has_aov_path = rop.evalParm(aov_name)
                    if has_aov_path and aov_enabled == 1:
                        aov_prefix = evalParmNoFrame(rop, aov_name)
                        aov_product = self.get_render_product_name(
                            prefix=aov_prefix, suffix=None
                        )
                        render_products.append(aov_product)

                        files_by_aov[var] = self.generate_expected_files(instance, aov_product)  # noqa

        for product in render_products:
            self.log.debug("Found render product: %s" % product)

        filenames = list(render_products)
        instance.data["files"] = filenames
        instance.data["renderProducts"] = colorspace.ARenderProduct()

        # For now by default do NOT try to publish the rendered output
        instance.data["publishJobState"] = "Suspended"
        instance.data["attachTo"] = []  # stub required data

        if "expectedFiles" not in instance.data:
            instance.data["expectedFiles"] = list()
        instance.data["expectedFiles"].append(files_by_aov)

        # update the colorspace data
        colorspace_data = get_color_management_preferences()
        instance.data["colorspaceConfig"] = colorspace_data["config"]
        instance.data["colorspaceDisplay"] = colorspace_data["display"]
        instance.data["colorspaceView"] = colorspace_data["view"]

    def get_render_product_name(self, prefix, suffix):
        product_name = prefix
        if suffix:
            # Add ".{suffix}" before the extension
            prefix_base, ext = os.path.splitext(prefix)
            product_name = prefix_base + "." + suffix + ext

        return product_name

    def generate_expected_files(self, instance, path):
        """Create expected files in instance data"""

        dir = os.path.dirname(path)
        file = os.path.basename(path)

        if "#" in file:
            def replace(match):
                return "%0{}d".format(len(match.group()))

            file = re.sub("#+", replace, file)

        if "%" not in file:
            return path

        expected_files = []
        start = instance.data["frameStart"]
        end = instance.data["frameEnd"]
        for i in range(int(start), (int(end) + 1)):
            expected_files.append(
                os.path.join(dir, (file % i)).replace("\\", "/"))

        return expected_files
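Mantra exposes its extra image planes through numbered parameters (vm_numaux, vm_variable_planeN, vm_filename_planeN, vm_usefile_planeN). A stand-in sketch of the collection loop above, with a plain dict in place of the ROP node and hypothetical parameter values:

```python
def collect_aov_files(parms):
    """Return {aov_variable: file_prefix} for enabled, file-backed planes."""
    files_by_aov = {}
    for i in range(1, parms.get("vm_numaux", 0) + 1):
        var = parms.get("vm_variable_plane%d" % i)
        path = parms.get("vm_filename_plane%d" % i)
        enabled = parms.get("vm_usefile_plane%d" % i)
        if var and path and enabled == 1:
            files_by_aov[var] = path
    return files_by_aov


print(collect_aov_files({
    "vm_numaux": 2,
    "vm_variable_plane1": "N",
    "vm_filename_plane1": "/renders/N.####.exr",
    "vm_usefile_plane1": 1,
    "vm_variable_plane2": "Pz",
    "vm_filename_plane2": "",
    "vm_usefile_plane2": 0,
}))
# {'N': '/renders/N.####.exr'}
```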
@@ -4,52 +4,13 @@ import os

 import hou
 import pyblish.api

-
-def get_top_referenced_parm(parm):
-
-    processed = set()  # disallow infinite loop
-    while True:
-        if parm.path() in processed:
-            raise RuntimeError("Parameter references result in cycle.")
-
-        processed.add(parm.path())
-
-        ref = parm.getReferencedParm()
-        if ref.path() == parm.path():
-            # It returns itself when it doesn't reference
-            # another parameter
-            return ref
-        else:
-            parm = ref
-
-
-def evalParmNoFrame(node, parm, pad_character="#"):
-
-    parameter = node.parm(parm)
-    assert parameter, "Parameter does not exist: %s.%s" % (node, parm)
-
-    # If the parameter has a parameter reference, then get that
-    # parameter instead as otherwise `unexpandedString()` fails.
-    parameter = get_top_referenced_parm(parameter)
-
-    # Substitute out the frame numbering with padded characters
-    try:
-        raw = parameter.unexpandedString()
-    except hou.Error as exc:
-        print("Failed: %s" % parameter)
-        raise RuntimeError(exc)
-
-    def replace(match):
-        padding = 1
-        n = match.group(2)
-        if n and int(n):
-            padding = int(n)
-        return pad_character * padding
-
-    expression = re.sub(r"(\$F([0-9]*))", replace, raw)
-
-    with hou.ScriptEvalContext(parameter):
-        return hou.expandStringAtFrame(expression, 0)
+from openpype.hosts.houdini.api.lib import (
+    evalParmNoFrame,
+    get_color_management_preferences
+)
+from openpype.hosts.houdini.api import (
+    colorspace
+)


 class CollectRedshiftROPRenderProducts(pyblish.api.InstancePlugin):
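The helper moved into openpype.hosts.houdini.api.lib replaces Houdini's $F tokens with hash padding before expansion. The substitution itself is plain Python and can be sketched standalone ($F4 means a four-digit padded frame number; the sample path is hypothetical):

```python
import re


def pad_frame_tokens(raw, pad_character="#"):
    """Replace $F / $F<N> tokens with '#' padding of matching width."""

    def replace(match):
        digits = match.group(2)
        padding = int(digits) if digits and int(digits) else 1
        return pad_character * padding

    return re.sub(r"(\$F([0-9]*))", replace, raw)


print(pad_frame_tokens("$HIP/render/beauty.$F4.exr"))
# $HIP/render/beauty.####.exr  (expanding $HIP itself needs Houdini)
```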
@@ -87,6 +48,9 @@ class CollectRedshiftROPRenderProducts(pyblish.api.InstancePlugin):
             prefix=default_prefix, suffix=beauty_suffix
         )
         render_products.append(beauty_product)
+        files_by_aov = {
+            "_": self.generate_expected_files(instance,
+                                              beauty_product)}

         num_aovs = rop.evalParm("RS_aov")
         for index in range(num_aovs):
@@ -104,11 +68,29 @@ class CollectRedshiftROPRenderProducts(pyblish.api.InstancePlugin):
             aov_product = self.get_render_product_name(aov_prefix, aov_suffix)
             render_products.append(aov_product)

+            files_by_aov[aov_suffix] = self.generate_expected_files(instance,
+                                                                    aov_product)  # noqa

         for product in render_products:
             self.log.debug("Found render product: %s" % product)

         filenames = list(render_products)
         instance.data["files"] = filenames
+        instance.data["renderProducts"] = colorspace.ARenderProduct()
+
+        # For now by default do NOT try to publish the rendered output
+        instance.data["publishJobState"] = "Suspended"
+        instance.data["attachTo"] = []  # stub required data
+
+        if "expectedFiles" not in instance.data:
+            instance.data["expectedFiles"] = list()
+        instance.data["expectedFiles"].append(files_by_aov)
+
+        # update the colorspace data
+        colorspace_data = get_color_management_preferences()
+        instance.data["colorspaceConfig"] = colorspace_data["config"]
+        instance.data["colorspaceDisplay"] = colorspace_data["display"]
+        instance.data["colorspaceView"] = colorspace_data["view"]

     def get_render_product_name(self, prefix, suffix):
         """Return the output filename using the AOV prefix and suffix"""
@@ -133,3 +115,27 @@ class CollectRedshiftROPRenderProducts(pyblish.api.InstancePlugin):
             product_name = prefix

         return product_name
+
+    def generate_expected_files(self, instance, path):
+        """Create expected files in instance data"""
+
+        dir = os.path.dirname(path)
+        file = os.path.basename(path)
+
+        if "#" in file:
+            def replace(match):
+                return "%0{}d".format(len(match.group()))
+
+            file = re.sub("#+", replace, file)
+
+        if "%" not in file:
+            return path
+
+        expected_files = []
+        start = instance.data["frameStart"]
+        end = instance.data["frameEnd"]
+        for i in range(int(start), (int(end) + 1)):
+            expected_files.append(
+                os.path.join(dir, (file % i)).replace("\\", "/"))
+
+        return expected_files
@@ -0,0 +1,41 @@
# -*- coding: utf-8 -*-
"""Collector plugin for frames data on ROP instances."""
import hou  # noqa
import pyblish.api
from openpype.hosts.houdini.api import lib


class CollectRopFrameRange(pyblish.api.InstancePlugin):
    """Collect all frames which would be saved from the ROP nodes"""

    order = pyblish.api.CollectorOrder
    label = "Collect RopNode Frame Range"

    def process(self, instance):

        node_path = instance.data.get("instance_node")
        if node_path is None:
            # Instance without instance node like a workfile instance
            return

        ropnode = hou.node(node_path)
        frame_data = lib.get_frame_data(ropnode)

        if "frameStart" in frame_data and "frameEnd" in frame_data:

            # Log artist friendly message about the collected frame range
            message = (
                "Frame range {0[frameStart]} - {0[frameEnd]}"
            ).format(frame_data)
            if frame_data.get("step", 1.0) != 1.0:
                message += " with step {0[step]}".format(frame_data)
            self.log.info(message)

            instance.data.update(frame_data)

            # Add frame range to label if the instance has a frame range.
            label = instance.data.get("label", instance.data["name"])
            instance.data["label"] = (
                "{0} [{1[frameStart]} - {1[frameEnd]}]".format(label,
                                                               frame_data)
            )
129 openpype/hosts/houdini/plugins/publish/collect_vray_rop.py Normal file
@@ -0,0 +1,129 @@
import re
import os

import hou
import pyblish.api

from openpype.hosts.houdini.api.lib import (
    evalParmNoFrame,
    get_color_management_preferences
)
from openpype.hosts.houdini.api import (
    colorspace
)


class CollectVrayROPRenderProducts(pyblish.api.InstancePlugin):
    """Collect Vray Render Products

    Collects the instance.data["files"] for the render products.

    Provides:
        instance -> files

    """

    label = "VRay ROP Render Products"
    order = pyblish.api.CollectorOrder + 0.4
    hosts = ["houdini"]
    families = ["vray_rop"]

    def process(self, instance):

        rop = hou.node(instance.data.get("instance_node"))

        # Collect chunkSize
        chunk_size_parm = rop.parm("chunkSize")
        if chunk_size_parm:
            chunk_size = int(chunk_size_parm.eval())
            instance.data["chunkSize"] = chunk_size
            self.log.debug("Chunk Size: %s" % chunk_size)

        default_prefix = evalParmNoFrame(rop, "SettingsOutput_img_file_path")
        render_products = []
        # TODO: add render elements if render element

        beauty_product = self.get_beauty_render_product(default_prefix)
        render_products.append(beauty_product)
        files_by_aov = {
            "RGB Color": self.generate_expected_files(instance,
                                                      beauty_product)}

        if instance.data.get("RenderElement", True):
            render_element = self.get_render_element_name(rop, default_prefix)
            if render_element:
                for aov, renderpass in render_element.items():
                    render_products.append(renderpass)
                    files_by_aov[aov] = self.generate_expected_files(instance, renderpass)  # noqa

        for product in render_products:
            self.log.debug("Found render product: %s" % product)
        filenames = list(render_products)
        instance.data["files"] = filenames
        instance.data["renderProducts"] = colorspace.ARenderProduct()

        # For now by default do NOT try to publish the rendered output
        instance.data["publishJobState"] = "Suspended"
        instance.data["attachTo"] = []  # stub required data

        if "expectedFiles" not in instance.data:
            instance.data["expectedFiles"] = list()
        instance.data["expectedFiles"].append(files_by_aov)
        self.log.debug("expectedFiles:{}".format(files_by_aov))

        # update the colorspace data
        colorspace_data = get_color_management_preferences()
        instance.data["colorspaceConfig"] = colorspace_data["config"]
        instance.data["colorspaceDisplay"] = colorspace_data["display"]
        instance.data["colorspaceView"] = colorspace_data["view"]

    def get_beauty_render_product(self, prefix, suffix="<reName>"):
        """Return the beauty output filename if render element enabled
        """
        aov_parm = ".{}".format(suffix)
        beauty_product = None
        if aov_parm in prefix:
            beauty_product = prefix.replace(aov_parm, "")
        else:
            beauty_product = prefix

        return beauty_product

    def get_render_element_name(self, node, prefix, suffix="<reName>"):
        """Return the output filename using the AOV prefix and suffix
        """
        render_element_dict = {}
        # need a rewrite
        re_path = node.evalParm("render_network_render_channels")
        if re_path:
            node_children = hou.node(re_path).children()
            for element in node_children:
                if element.shaderName() != "vray:SettingsRenderChannels":
                    aov = str(element)
                    render_product = prefix.replace(suffix, aov)
                    render_element_dict[aov] = render_product
        return render_element_dict

    def generate_expected_files(self, instance, path):
        """Create expected files in instance data"""

        dir = os.path.dirname(path)
        file = os.path.basename(path)

        if "#" in file:
            def replace(match):
                return "%0{}d".format(len(match.group()))

            file = re.sub("#+", replace, file)

        if "%" not in file:
            return path

        expected_files = []
        start = instance.data["frameStart"]
        end = instance.data["frameEnd"]
        for i in range(int(start), (int(end) + 1)):
            expected_files.append(
                os.path.join(dir, (file % i)).replace("\\", "/"))

        return expected_files
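V-Ray's output path can carry a <reName> token that is substituted per render element; stripping it yields the beauty path. A standalone sketch of that substitution (paths and element names hypothetical):

```python
def split_render_products(prefix, elements, token="<reName>"):
    """Return (beauty_path, {element: path}) from a tokenized prefix."""
    beauty = prefix.replace(".{}".format(token), "")
    per_element = {aov: prefix.replace(token, aov) for aov in elements}
    return beauty, per_element


beauty, aovs = split_render_products(
    "/renders/shot010/vray.<reName>.####.exr", ["diffuse", "specular"])
print(beauty)           # /renders/shot010/vray.####.exr
print(aovs["diffuse"])  # /renders/shot010/vray.diffuse.####.exr
```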
@@ -2,7 +2,10 @@ import pyblish.api

 from openpype.lib import version_up
 from openpype.pipeline import registered_host
+from openpype.action import get_errored_plugins_from_data
+from openpype.hosts.houdini.api import HoudiniHost
+from openpype.pipeline.publish import KnownPublishError


 class IncrementCurrentFile(pyblish.api.ContextPlugin):
     """Increment the current file.

@@ -14,17 +17,32 @@ class IncrementCurrentFile(pyblish.api.ContextPlugin):
     label = "Increment current file"
     order = pyblish.api.IntegratorOrder + 9.0
     hosts = ["houdini"]
-    families = ["workfile"]
+    families = ["workfile",
+                "redshift_rop",
+                "arnold_rop",
+                "mantra_rop",
+                "karma_rop",
+                "usdrender"]
     optional = True

     def process(self, context):

+        errored_plugins = get_errored_plugins_from_data(context)
+        if any(
+            plugin.__name__ == "HoudiniSubmitPublishDeadline"
+            for plugin in errored_plugins
+        ):
+            raise KnownPublishError(
+                "Skipping incrementing current file because "
+                "submission to deadline failed."
+            )
+
         # Filename must not have changed since collecting
         host = registered_host()  # type: HoudiniHost
         current_file = host.current_file()
         assert (
             context.data["currentFile"] == current_file
-        ), "Collected filename from current scene name."
+        ), "Collected filename mismatches from current scene name."

         new_filepath = version_up(current_file)
         host.save_workfile(new_filepath)
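openpype.lib.version_up bumps the version token embedded in the workfile name. As a rough, illustrative stand-in only (the real helper handles more naming schemes than this regex-only sketch, which is an assumption about its behavior):

```python
import re


def version_up(filepath):
    """Naive sketch: increment the first 'v###' token in a filename."""
    def bump(match):
        digits = match.group(1)
        return "v{:0{}d}".format(int(digits) + 1, len(digits))

    new_path, count = re.subn(r"v(\d+)", bump, filepath, count=1)
    if not count:
        raise ValueError("No version token found: %s" % filepath)
    return new_path


print(version_up("/work/shot010_v003.hip"))  # /work/shot010_v004.hip
```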
@@ -1,30 +1,27 @@
 # -*- coding: utf-8 -*-
 """Library of functions useful for 3dsmax pipeline."""
-import json
-import six
-from pymxs import runtime as rt
-from typing import Union
 import contextlib
+import json
+from typing import Any, Dict, Union
+
+import six
 from openpype.pipeline.context_tools import (
-    get_current_project_asset,
-    get_current_project
-)
-
+    get_current_project, get_current_project_asset,)
+from pymxs import runtime as rt

 JSON_PREFIX = "JSON::"


 def imprint(node_name: str, data: dict) -> bool:
-    node = rt.getNodeByName(node_name)
+    node = rt.GetNodeByName(node_name)
     if not node:
         return False

     for k, v in data.items():
         if isinstance(v, (dict, list)):
-            rt.setUserProp(node, k, f'{JSON_PREFIX}{json.dumps(v)}')
+            rt.SetUserProp(node, k, f"{JSON_PREFIX}{json.dumps(v)}")
         else:
-            rt.setUserProp(node, k, v)
+            rt.SetUserProp(node, k, v)

     return True
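The imprint/read pair stores rich values on node user properties by prefixing JSON payloads with "JSON::". The encode/decode convention is plain Python and can be exercised with a dict standing in for the 3ds Max user-property buffer:

```python
import json

JSON_PREFIX = "JSON::"
_props = {}  # stand-in for a node's user properties


def set_user_prop(key, value):
    # dicts and lists are serialized with the JSON marker prefix
    if isinstance(value, (dict, list)):
        _props[key] = f"{JSON_PREFIX}{json.dumps(value)}"
    else:
        _props[key] = value


def get_user_prop(key):
    value = _props[key]
    if isinstance(value, str) and value.startswith(JSON_PREFIX):
        return json.loads(value[len(JSON_PREFIX):])
    return value


set_user_prop("families", ["model", "pointcache"])
print(_props["families"])         # JSON::["model", "pointcache"]
print(get_user_prop("families"))  # ['model', 'pointcache']
```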
@@ -44,7 +41,7 @@ def lsattr(
     Returns:
         list of nodes.
     """
-    root = rt.rootnode if root is None else rt.getNodeByName(root)
+    root = rt.RootNode if root is None else rt.GetNodeByName(root)

     def output_node(node, nodes):
         nodes.append(node)
@@ -55,16 +52,16 @@ def lsattr(
     output_node(root, nodes)
     return [
         n for n in nodes
-        if rt.getUserProp(n, attr) == value
+        if rt.GetUserProp(n, attr) == value
     ] if value else [
         n for n in nodes
-        if rt.getUserProp(n, attr)
+        if rt.GetUserProp(n, attr)
     ]


 def read(container) -> dict:
     data = {}
-    props = rt.getUserPropBuffer(container)
+    props = rt.GetUserPropBuffer(container)
     # this shouldn't happen but let's guard against it anyway
     if not props:
         return data
@@ -79,29 +76,25 @@ def read(container) -> dict:
         value = value.strip()
         if isinstance(value.strip(), six.string_types) and \
                 value.startswith(JSON_PREFIX):
-            try:
+            with contextlib.suppress(json.JSONDecodeError):
                 value = json.loads(value[len(JSON_PREFIX):])
-            except json.JSONDecodeError:
-                # not a json
-                pass

         data[key.strip()] = value

-    data["instance_node"] = container.name
+    data["instance_node"] = container.Name

     return data


 @contextlib.contextmanager
 def maintained_selection():
-    previous_selection = rt.getCurrentSelection()
+    previous_selection = rt.GetCurrentSelection()
     try:
         yield
     finally:
         if previous_selection:
-            rt.select(previous_selection)
+            rt.Select(previous_selection)
         else:
-            rt.select()
+            rt.Select()


 def get_all_children(parent, node_type=None):
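maintained_selection is the usual save/restore context-manager pattern. A generic, runnable sketch of the same idea with a plain list standing in for the 3ds Max selection:

```python
import contextlib

_selection = ["Box001"]  # stand-in for the scene selection


@contextlib.contextmanager
def maintained_selection():
    previous = list(_selection)
    try:
        yield
    finally:
        _selection[:] = previous  # restore whatever was selected before


with maintained_selection():
    _selection[:] = ["Sphere001"]  # temporary selection inside the block
print(_selection)  # ['Box001']
```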
@@ -123,7 +116,7 @@ def get_all_children(parent, node_type=None):
         return children
     child_list = list_children(parent)

-    return ([x for x in child_list if rt.superClassOf(x) == node_type]
+    return ([x for x in child_list if rt.SuperClassOf(x) == node_type]
             if node_type else child_list)
@@ -182,7 +175,7 @@ def set_scene_resolution(width: int, height: int):
     """
-    # make sure the render dialog is closed
-    # for the update of resolution
-    # Changing the Render Setup dialog settingsshould be done
+    # Changing the Render Setup dialog settings should be done
     # with the actual Render Setup dialog in a closed state.
     if rt.renderSceneDialog.isOpen():
         rt.renderSceneDialog.close()
@@ -190,6 +183,7 @@ def set_scene_resolution(width: int, height: int):
     rt.renderWidth = width
     rt.renderHeight = height

+
 def reset_scene_resolution():
     """Apply the scene resolution from the project definition

@@ -212,7 +206,7 @@ def reset_scene_resolution():
     set_scene_resolution(width, height)


-def get_frame_range() -> dict:
+def get_frame_range() -> Union[Dict[str, Any], None]:
    """Get the current assets frame range and handles.

    Returns:
@@ -259,7 +253,7 @@ def reset_frame_range(fps: bool = True):
     frange_cmd = (
         f"animationRange = interval {frame_start_handle} {frame_end_handle}"
     )
-    rt.execute(frange_cmd)
+    rt.Execute(frange_cmd)
     set_render_frame_range(frame_start_handle, frame_end_handle)
@@ -289,5 +283,5 @@ def get_max_version():
     #(25000, 62, 0, 25, 0, 0, 997, 2023, "")
     max_info[7] = max version date
     """
-    max_info = rt.maxversion()
+    max_info = rt.MaxVersion()
     return max_info[7]
@@ -124,7 +124,7 @@ class RenderProducts(object):
         """Get all the Arnold AOVs name"""
         aov_name = []

-        amw = rt.MaxtoAOps.AOVsManagerWindow()
+        amw = rt.MaxToAOps.AOVsManagerWindow()
         aov_mgr = rt.renderers.current.AOVManager
         # Check if there is any aov group set in AOV manager
         aov_group_num = len(aov_mgr.drivers)
@@ -6,7 +6,7 @@ from operator import attrgetter

 import json

-from openpype.host import HostBase, IWorkfileHost, ILoadHost, INewPublisher
+from openpype.host import HostBase, IWorkfileHost, ILoadHost, IPublishHost
 import pyblish.api
 from openpype.pipeline import (
     register_creator_plugin_path,

@@ -28,7 +28,7 @@ CREATE_PATH = os.path.join(PLUGINS_DIR, "create")
 INVENTORY_PATH = os.path.join(PLUGINS_DIR, "inventory")


-class MaxHost(HostBase, IWorkfileHost, ILoadHost, INewPublisher):
+class MaxHost(HostBase, IWorkfileHost, ILoadHost, IPublishHost):

     name = "max"
     menu = None
@@ -1,15 +1,105 @@
 # -*- coding: utf-8 -*-
 """3dsmax specific Avalon/Pyblish plugin definitions."""
-from pymxs import runtime as rt
-import six
 from abc import ABCMeta
-from openpype.pipeline import (
-    CreatorError,
-    Creator,
-    CreatedInstance
-)
-from .lib import imprint, read, lsattr
+
+import six
+from pymxs import runtime as rt
+
+from openpype.lib import BoolDef
+from openpype.pipeline import CreatedInstance, Creator, CreatorError
+
+from .lib import imprint, lsattr, read
+
+MS_CUSTOM_ATTRIB = """attributes "openPypeData"
+(
+    parameters main rollout:OPparams
+    (
+        all_handles type:#maxObjectTab tabSize:0 tabSizeVariable:on
+    )
+
+    rollout OPparams "OP Parameters"
+    (
+        listbox list_node "Node References" items:#()
+        button button_add "Add to Container"
+        button button_del "Delete from Container"
+
+        fn node_to_name the_node =
+        (
+            handle = the_node.handle
+            obj_name = the_node.name
+            handle_name = obj_name + "<" + handle as string + ">"
+            return handle_name
+        )
+
+        on button_add pressed do
+        (
+            current_selection = selectByName title:"Select Objects to add to
+the Container" buttontext:"Add"
+            if current_selection == undefined then return False
+            temp_arr = #()
+            i_node_arr = #()
+            for c in current_selection do
+            (
+                handle_name = node_to_name c
+                node_ref = NodeTransformMonitor node:c
+                append temp_arr handle_name
+                append i_node_arr node_ref
+            )
+            all_handles = join i_node_arr all_handles
+            list_node.items = join temp_arr list_node.items
+        )
+
+        on button_del pressed do
+        (
+            current_selection = selectByName title:"Select Objects to remove
+from the Container" buttontext:"Remove"
+            if current_selection == undefined then return False
+            temp_arr = #()
+            i_node_arr = #()
+            new_i_node_arr = #()
+            new_temp_arr = #()
+
+            for c in current_selection do
+            (
+                node_ref = NodeTransformMonitor node:c as string
+                handle_name = node_to_name c
+                tmp_all_handles = #()
+                for i in all_handles do
+                (
+                    tmp = i as string
+                    append tmp_all_handles tmp
+                )
+                idx = finditem tmp_all_handles node_ref
+                if idx do
+                (
+                    new_i_node_arr = DeleteItem all_handles idx
+                )
+                idx = finditem list_node.items handle_name
+                if idx do
+                (
+                    new_temp_arr = DeleteItem list_node.items idx
+                )
+            )
+            all_handles = join i_node_arr new_i_node_arr
+            list_node.items = join temp_arr new_temp_arr
+        )
+
+        on OPparams open do
+        (
+            if all_handles.count != 0 then
+            (
+                temp_arr = #()
+                for x in all_handles do
+                (
+                    handle_name = node_to_name x.node
+                    append temp_arr handle_name
+                )
+                list_node.items = temp_arr
+            )
+        )
+    )
+)"""


 class OpenPypeCreatorError(CreatorError):
@@ -20,28 +110,40 @@ class MaxCreatorBase(object):

     @staticmethod
     def cache_subsets(shared_data):
-        if shared_data.get("max_cached_subsets") is None:
-            shared_data["max_cached_subsets"] = {}
-            cached_instances = lsattr("id", "pyblish.avalon.instance")
-            for i in cached_instances:
-                creator_id = rt.getUserProp(i, "creator_identifier")
-                if creator_id not in shared_data["max_cached_subsets"]:
-                    shared_data["max_cached_subsets"][creator_id] = [i.name]
-                else:
-                    shared_data[
-                        "max_cached_subsets"][creator_id].append(i.name)  # noqa
+        if shared_data.get("max_cached_subsets") is not None:
+            return shared_data
+
+        shared_data["max_cached_subsets"] = {}
+        cached_instances = lsattr("id", "pyblish.avalon.instance")
+        for i in cached_instances:
+            creator_id = rt.GetUserProp(i, "creator_identifier")
+            if creator_id not in shared_data["max_cached_subsets"]:
+                shared_data["max_cached_subsets"][creator_id] = [i.name]
+            else:
+                shared_data[
+                    "max_cached_subsets"][creator_id].append(i.name)
+        return shared_data

     @staticmethod
-    def create_instance_node(node_name: str, parent: str = ""):
-        parent_node = rt.getNodeByName(parent) if parent else rt.rootScene
-        if not parent_node:
-            raise OpenPypeCreatorError(f"Specified parent {parent} not found")
-
-        container = rt.container(name=node_name)
-        container.Parent = parent_node
-
-        return container
+    def create_instance_node(node):
+        """Create instance node.
+
+        If the supplied node is existing node, it will be used to hold the
+        instance, otherwise new node of type Dummy will be created.
+
+        Args:
+            node (rt.MXSWrapperBase, str): Node or node name to use.
+
+        Returns:
+            instance
+        """
+        if isinstance(node, str):
+            node = rt.Container(name=node)
+
+        attrs = rt.Execute(MS_CUSTOM_ATTRIB)
+        rt.custAttributes.add(node.baseObject, attrs)
+
+        return node


 @six.add_metaclass(ABCMeta)
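cache_subsets follows the usual shared-data memoization pattern for collectors: compute once per publish session, return early on later calls. Stripped of the 3ds Max specifics, a runnable sketch (the scan_scene callable and its identifiers are hypothetical):

```python
def cache_subsets(shared_data, scan_scene):
    """Fill the cache on first call; later calls reuse it unchanged."""
    if shared_data.get("max_cached_subsets") is not None:
        return shared_data  # already cached this session

    cache = {}
    for creator_id, name in scan_scene():
        cache.setdefault(creator_id, []).append(name)
    shared_data["max_cached_subsets"] = cache
    return shared_data


# Hypothetical scene scan yielding (creator_identifier, node_name) pairs:
shared = {}
cache_subsets(shared, lambda: [("io.openpype.creators.max.model", "modelMain")])
print(shared["max_cached_subsets"])
# {'io.openpype.creators.max.model': ['modelMain']}
```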
@@ -50,7 +152,7 @@ class MaxCreator(Creator, MaxCreatorBase):

     def create(self, subset_name, instance_data, pre_create_data):
         if pre_create_data.get("use_selection"):
-            self.selected_nodes = rt.getCurrentSelection()
+            self.selected_nodes = rt.GetCurrentSelection()

         instance_node = self.create_instance_node(subset_name)
         instance_data["instance_node"] = instance_node.name
@@ -60,8 +162,16 @@ class MaxCreator(Creator, MaxCreatorBase):
             instance_data,
             self
         )
-        for node in self.selected_nodes:
-            node.Parent = instance_node
+        if pre_create_data.get("use_selection"):
+
+            node_list = []
+            for i in self.selected_nodes:
+                node_ref = rt.NodeTransformMonitor(node=i)
+                node_list.append(node_ref)
+
+            # Setting the property
+            rt.setProperty(
+                instance_node.openPypeData, "all_handles", node_list)

         self._add_instance_to_context(instance)
         imprint(instance_node.name, instance.data_to_store())
@@ -70,10 +180,9 @@ class MaxCreator(Creator, MaxCreatorBase):

     def collect_instances(self):
         self.cache_subsets(self.collection_shared_data)
-        for instance in self.collection_shared_data[
-                "max_cached_subsets"].get(self.identifier, []):
+        for instance in self.collection_shared_data["max_cached_subsets"].get(self.identifier, []):  # noqa
             created_instance = CreatedInstance.from_existing(
-                read(rt.getNodeByName(instance)), self
+                read(rt.GetNodeByName(instance)), self
             )
             self._add_instance_to_context(created_instance)
@@ -98,12 +207,12 @@ class MaxCreator(Creator, MaxCreatorBase):

         """
         for instance in instances:
-            instance_node = rt.getNodeByName(
+            instance_node = rt.GetNodeByName(
                 instance.data.get("instance_node"))
             if instance_node:
-                rt.select(instance_node)
-                rt.execute(f'for o in selection do for c in o.children do c.parent = undefined')  # noqa
-                rt.delete(instance_node)
+                count = rt.custAttributes.count(instance_node)
+                rt.custAttributes.delete(instance_node, count)
+                rt.Delete(instance_node)

             self._remove_instance_from_context(instance)
@@ -1,26 +1,11 @@
 # -*- coding: utf-8 -*-
 """Creator plugin for creating camera."""
 from openpype.hosts.max.api import plugin
-from openpype.pipeline import CreatedInstance


 class CreateCamera(plugin.MaxCreator):
+    """Creator plugin for Camera."""
     identifier = "io.openpype.creators.max.camera"
     label = "Camera"
     family = "camera"
     icon = "gear"
-
-    def create(self, subset_name, instance_data, pre_create_data):
-        from pymxs import runtime as rt
-        sel_obj = list(rt.selection)
-        instance = super(CreateCamera, self).create(
-            subset_name,
-            instance_data,
-            pre_create_data)  # type: CreatedInstance
-        container = rt.getNodeByName(instance.data.get("instance_node"))
-        # TODO: Disable "Add to Containers?" Panel
-        # parent the selected cameras into the container
-        for obj in sel_obj:
-            obj.parent = container
-        # for additional work on the node:
-        # instance_node = rt.getNodeByName(instance.get("instance_node"))
@@ -1,26 +1,11 @@
 # -*- coding: utf-8 -*-
 """Creator plugin for creating raw max scene."""
 from openpype.hosts.max.api import plugin
-from openpype.pipeline import CreatedInstance


 class CreateMaxScene(plugin.MaxCreator):
+    """Creator plugin for 3ds max scenes."""
     identifier = "io.openpype.creators.max.maxScene"
     label = "Max Scene"
     family = "maxScene"
     icon = "gear"
-
-    def create(self, subset_name, instance_data, pre_create_data):
-        from pymxs import runtime as rt
-        sel_obj = list(rt.selection)
-        instance = super(CreateMaxScene, self).create(
-            subset_name,
-            instance_data,
-            pre_create_data)  # type: CreatedInstance
-        container = rt.getNodeByName(instance.data.get("instance_node"))
-        # TODO: Disable "Add to Containers?" Panel
-        # parent the selected cameras into the container
-        for obj in sel_obj:
-            obj.parent = container
-        # for additional work on the node:
-        # instance_node = rt.getNodeByName(instance.get("instance_node"))
@@ -1,28 +1,11 @@
 # -*- coding: utf-8 -*-
 """Creator plugin for model."""
 from openpype.hosts.max.api import plugin
-from openpype.pipeline import CreatedInstance


 class CreateModel(plugin.MaxCreator):
+    """Creator plugin for Model."""
     identifier = "io.openpype.creators.max.model"
     label = "Model"
     family = "model"
     icon = "gear"
-
-    def create(self, subset_name, instance_data, pre_create_data):
-        from pymxs import runtime as rt
-        instance = super(CreateModel, self).create(
-            subset_name,
-            instance_data,
-            pre_create_data)  # type: CreatedInstance
-        container = rt.getNodeByName(instance.data.get("instance_node"))
-        # TODO: Disable "Add to Containers?" Panel
-        # parent the selected cameras into the container
-        sel_obj = None
-        if self.selected_nodes:
-            sel_obj = list(self.selected_nodes)
-        for obj in sel_obj:
-            obj.parent = container
-        # for additional work on the node:
-        # instance_node = rt.getNodeByName(instance.get("instance_node"))
@@ -1,22 +1,11 @@
 # -*- coding: utf-8 -*-
 """Creator plugin for creating pointcache alembics."""
 from openpype.hosts.max.api import plugin
-from openpype.pipeline import CreatedInstance


 class CreatePointCache(plugin.MaxCreator):
+    """Creator plugin for Point caches."""
     identifier = "io.openpype.creators.max.pointcache"
     label = "Point Cache"
     family = "pointcache"
     icon = "gear"
-
-    def create(self, subset_name, instance_data, pre_create_data):
-        # from pymxs import runtime as rt
-
-        _ = super(CreatePointCache, self).create(
-            subset_name,
-            instance_data,
-            pre_create_data)  # type: CreatedInstance
-
-        # for additional work on the node:
-        # instance_node = rt.getNodeByName(instance.get("instance_node"))
@@ -1,26 +1,11 @@
 # -*- coding: utf-8 -*-
 """Creator plugin for creating point cloud."""
 from openpype.hosts.max.api import plugin
-from openpype.pipeline import CreatedInstance


 class CreatePointCloud(plugin.MaxCreator):
+    """Creator plugin for Point Clouds."""
     identifier = "io.openpype.creators.max.pointcloud"
     label = "Point Cloud"
     family = "pointcloud"
     icon = "gear"
-
-    def create(self, subset_name, instance_data, pre_create_data):
-        from pymxs import runtime as rt
-        sel_obj = list(rt.selection)
-        instance = super(CreatePointCloud, self).create(
-            subset_name,
-            instance_data,
-            pre_create_data)  # type: CreatedInstance
-        container = rt.getNodeByName(instance.data.get("instance_node"))
-        # TODO: Disable "Add to Containers?" Panel
-        # parent the selected cameras into the container
-        for obj in sel_obj:
-            obj.parent = container
-        # for additional work on the node:
-        # instance_node = rt.getNodeByName(instance.get("instance_node"))
@@ -9,10 +9,3 @@ class CreateRedshiftProxy(plugin.MaxCreator):
     label = "Redshift Proxy"
     family = "redshiftproxy"
     icon = "gear"
-
-    def create(self, subset_name, instance_data, pre_create_data):
-
-        _ = super(CreateRedshiftProxy, self).create(
-            subset_name,
-            instance_data,
-            pre_create_data)  # type: CreatedInstance
@@ -2,11 +2,11 @@
 """Creator plugin for creating camera."""
 import os
 from openpype.hosts.max.api import plugin
-from openpype.pipeline import CreatedInstance
 from openpype.hosts.max.api.lib_rendersettings import RenderSettings


 class CreateRender(plugin.MaxCreator):
+    """Creator plugin for Renders."""
     identifier = "io.openpype.creators.max.render"
     label = "Render"
     family = "maxrender"
@@ -22,22 +22,11 @@ class CreateRender(plugin.MaxCreator):
         instance = super(CreateRender, self).create(
             subset_name,
             instance_data,
-            pre_create_data)  # type: CreatedInstance
+            pre_create_data)
         container_name = instance.data.get("instance_node")
-        container = rt.getNodeByName(container_name)
-        # TODO: Disable "Add to Containers?" Panel
-        # parent the selected cameras into the container
-        for obj in sel_obj:
-            obj.parent = container
-        # for additional work on the node:
-        # instance_node = rt.getNodeByName(instance.get("instance_node"))
-
-        # make sure the render dialog is closed
-        # for the update of resolution
-        # Changing the Render Setup dialog settings should be done
-        # with the actual Render Setup dialog in a closed state.
-
-        # set viewport camera for rendering(mandatory for deadline)
-        RenderSettings().set_render_camera(sel_obj)
+        sel_obj = self.selected_nodes
+        if sel_obj:
+            # set viewport camera for rendering(mandatory for deadline)
+            RenderSettings(self.project_settings).set_render_camera(sel_obj)
         # set output paths for rendering(mandatory for deadline)
         RenderSettings().render_output(container_name)
@@ -1,14 +1,12 @@
 import os
-from openpype.pipeline import (
-    load,
-    get_representation_path
-)
-
+from openpype.hosts.max.api import lib, maintained_selection
 from openpype.hosts.max.api.pipeline import containerise
-from openpype.hosts.max.api import lib
+from openpype.pipeline import get_representation_path, load


 class FbxLoader(load.LoaderPlugin):
-    """Fbx Loader"""
+    """Fbx Loader."""

     families = ["camera"]
     representations = ["fbx"]
@@ -24,17 +22,17 @@ class FbxLoader(load.LoaderPlugin):
         rt.FBXImporterSetParam("Camera", True)
         rt.FBXImporterSetParam("AxisConversionMethod", True)
         rt.FBXImporterSetParam("Preserveinstances", True)
-        rt.importFile(
+        rt.ImportFile(
             filepath,
             rt.name("noPrompt"),
             using=rt.FBXIMP)

-        container = rt.getNodeByName(f"{name}")
+        container = rt.GetNodeByName(f"{name}")
         if not container:
-            container = rt.container()
+            container = rt.Container()
             container.name = f"{name}"

-        for selection in rt.getCurrentSelection():
+        for selection in rt.GetCurrentSelection():
             selection.Parent = container

         return containerise(
@@ -44,18 +42,33 @@ class FbxLoader(load.LoaderPlugin):
         from pymxs import runtime as rt

         path = get_representation_path(representation)
-        node = rt.getNodeByName(container["instance_node"])
+        node = rt.GetNodeByName(container["instance_node"])
+        rt.Select(node.Children)

-        fbx_objects = self.get_container_children(node)
-        for fbx_object in fbx_objects:
-            fbx_object.source = path
+        fbx_reimport_cmd = (
+            f"""
+FBXImporterSetParam "Animation" true
+FBXImporterSetParam "Cameras" true
+FBXImporterSetParam "AxisConversionMethod" true
+FbxExporterSetParam "UpAxis" "Y"
+FbxExporterSetParam "Preserveinstances" true
+
+importFile @"{path}" #noPrompt using:FBXIMP
+        """)
+        rt.Execute(fbx_reimport_cmd)
+
+        with maintained_selection():
+            rt.Select(node)

         lib.imprint(container["instance_node"], {
             "representation": str(representation["_id"])
         })

     def switch(self, container, representation):
         self.update(container, representation)

     def remove(self, container):
         from pymxs import runtime as rt

-        node = rt.getNodeByName(container["instance_node"])
-        rt.delete(node)
+        node = rt.GetNodeByName(container["instance_node"])
+        rt.Delete(node)
@@ -1,13 +1,12 @@
 import os
-from openpype.pipeline import (
-    load, get_representation_path
-)
-from openpype.hosts.max.api.pipeline import containerise
-
 from openpype.hosts.max.api import lib
+from openpype.hosts.max.api.pipeline import containerise
+from openpype.pipeline import get_representation_path, load


 class MaxSceneLoader(load.LoaderPlugin):
-    """Max Scene Loader"""
+    """Max Scene Loader."""

     families = ["camera",
                 "maxScene",
@@ -23,23 +22,11 @@ class MaxSceneLoader(load.LoaderPlugin):
         path = os.path.normpath(self.fname)
         # import the max scene by using "merge file"
         path = path.replace('\\', '/')
-
-        merge_before = {
-            c for c in rt.rootNode.Children
-            if rt.classOf(c) == rt.Container
-        }
-        rt.mergeMaxFile(path)
-
-        merge_after = {
-            c for c in rt.rootNode.Children
-            if rt.classOf(c) == rt.Container
-        }
-        max_containers = merge_after.difference(merge_before)
-
-        if len(max_containers) != 1:
-            self.log.error("Something failed when loading.")
-
-        max_container = max_containers.pop()
+        rt.MergeMaxFile(path)
+        max_objects = rt.getLastMergedNodes()
+        max_container = rt.Container(name=f"{name}")
+        for max_object in max_objects:
+            max_object.Parent = max_container

         return containerise(
             name, [max_container], context, loader=self.__class__.__name__)
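The removed implementation detected the merged-in container by diffing scene children before and after the merge; the new one asks 3ds Max directly via getLastMergedNodes(). The set-difference trick is worth noting, since it works in any DCC that lacks such a query. A runnable sketch with a plain list standing in for the scene:

```python
def detect_new_nodes(list_children, merge):
    """Return nodes that appeared under the root after running merge()."""
    before = set(list_children())
    merge()  # the operation whose side effects we want to capture
    return set(list_children()) - before


# Stand-in scene: merge() appends what a file merge would have created.
scene = ["Box001"]
new_nodes = detect_new_nodes(lambda: scene, lambda: scene.append("shot010_CON"))
print(new_nodes)  # {'shot010_CON'}
```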
@@ -48,17 +35,27 @@ class MaxSceneLoader(load.LoaderPlugin):
         from pymxs import runtime as rt

         path = get_representation_path(representation)
-        node = rt.getNodeByName(container["instance_node"])
-        max_objects = node.Children
-        for max_object in max_objects:
-            max_object.source = path
+        node_name = container["instance_node"]
+
+        rt.MergeMaxFile(path,
+                        rt.Name("noRedraw"),
+                        rt.Name("deleteOldDups"),
+                        rt.Name("useSceneMtlDups"))
+
+        max_objects = rt.getLastMergedNodes()
+        container_node = rt.GetNodeByName(node_name)
+        for max_object in max_objects:
+            max_object.Parent = container_node

         lib.imprint(container["instance_node"], {
             "representation": str(representation["_id"])
         })

     def switch(self, container, representation):
         self.update(container, representation)

     def remove(self, container):
         from pymxs import runtime as rt

-        node = rt.getNodeByName(container["instance_node"])
-        rt.delete(node)
+        node = rt.GetNodeByName(container["instance_node"])
+        rt.Delete(node)
@@ -1,8 +1,5 @@
 import os
-from openpype.pipeline import (
-    load, get_representation_path
-)
+from openpype.pipeline import load, get_representation_path
 from openpype.hosts.max.api.pipeline import containerise
 from openpype.hosts.max.api import lib
 from openpype.hosts.max.api.lib import maintained_selection
@@ -24,24 +21,20 @@ class ModelAbcLoader(load.LoaderPlugin):
         file_path = os.path.normpath(self.fname)

         abc_before = {
-            c for c in rt.rootNode.Children
+            c
+            for c in rt.rootNode.Children
             if rt.classOf(c) == rt.AlembicContainer
         }

-        abc_import_cmd = (f"""
-AlembicImport.ImportToRoot = false
-AlembicImport.CustomAttributes = true
-AlembicImport.UVs = true
-AlembicImport.VertexColors = true
-
-importFile @"{file_path}" #noPrompt
-""")
-
-        self.log.debug(f"Executing command: {abc_import_cmd}")
-        rt.execute(abc_import_cmd)
+        rt.AlembicImport.ImportToRoot = False
+        rt.AlembicImport.CustomAttributes = True
+        rt.AlembicImport.UVs = True
+        rt.AlembicImport.VertexColors = True
+        rt.importFile(file_path, rt.name("noPrompt"))

         abc_after = {
-            c for c in rt.rootNode.Children
+            c
+            for c in rt.rootNode.Children
             if rt.classOf(c) == rt.AlembicContainer
         }
@@ -54,31 +47,34 @@ class ModelAbcLoader(load.LoaderPlugin):
         abc_container = abc_containers.pop()

         return containerise(
-            name, [abc_container], context, loader=self.__class__.__name__)
+            name, [abc_container], context, loader=self.__class__.__name__
+        )

     def update(self, container, representation):
         from pymxs import runtime as rt

         path = get_representation_path(representation)
-        node = rt.getNodeByName(container["instance_node"])
-        rt.select(node.Children)
+        node = rt.GetNodeByName(container["instance_node"])
+        rt.Select(node.Children)

-        for alembic in rt.selection:
-            abc = rt.getNodeByName(alembic.name)
-            rt.select(abc.Children)
-            for abc_con in rt.selection:
-                container = rt.getNodeByName(abc_con.name)
+        for alembic in rt.Selection:
+            abc = rt.GetNodeByName(alembic.name)
+            rt.Select(abc.Children)
+            for abc_con in rt.Selection:
+                container = rt.GetNodeByName(abc_con.name)
                 container.source = path
-                rt.select(container.Children)
-                for abc_obj in rt.selection:
-                    alembic_obj = rt.getNodeByName(abc_obj.name)
+                rt.Select(container.Children)
+                for abc_obj in rt.Selection:
+                    alembic_obj = rt.GetNodeByName(abc_obj.name)
                     alembic_obj.source = path

         with maintained_selection():
-            rt.select(node)
+            rt.Select(node)

-        lib.imprint(container["instance_node"], {
-            "representation": str(representation["_id"])
-        })
+        lib.imprint(
+            container["instance_node"],
+            {"representation": str(representation["_id"])},
+        )

     def switch(self, container, representation):
         self.update(container, representation)
@@ -86,8 +82,8 @@ class ModelAbcLoader(load.LoaderPlugin):
     def remove(self, container):
         from pymxs import runtime as rt

-        node = rt.getNodeByName(container["instance_node"])
-        rt.delete(node)
+        node = rt.GetNodeByName(container["instance_node"])
+        rt.Delete(node)

     @staticmethod
     def get_container_children(parent, type_name):
@@ -102,7 +98,7 @@ class ModelAbcLoader(load.LoaderPlugin):

         filtered = []
         for child in list_children(parent):
-            class_type = str(rt.classOf(child.baseObject))
+            class_type = str(rt.ClassOf(child.baseObject))
             if class_type == type_name:
                 filtered.append(child)
@@ -1,15 +1,12 @@
 import os
-from openpype.pipeline import (
-    load,
-    get_representation_path
-)
+from openpype.pipeline import load, get_representation_path
 from openpype.hosts.max.api.pipeline import containerise
 from openpype.hosts.max.api import lib
 from openpype.hosts.max.api.lib import maintained_selection


 class FbxModelLoader(load.LoaderPlugin):
-    """Fbx Model Loader"""
+    """Fbx Model Loader."""

     families = ["model"]
     representations = ["fbx"]
@@ -24,46 +21,40 @@ class FbxModelLoader(load.LoaderPlugin):
         rt.FBXImporterSetParam("Animation", False)
         rt.FBXImporterSetParam("Cameras", False)
         rt.FBXImporterSetParam("Preserveinstances", True)
-        rt.importFile(
-            filepath,
-            rt.name("noPrompt"),
-            using=rt.FBXIMP)
+        rt.importFile(filepath, rt.name("noPrompt"), using=rt.FBXIMP)

-        container = rt.getNodeByName(f"{name}")
+        container = rt.GetNodeByName(name)
         if not container:
-            container = rt.container()
-            container.name = f"{name}"
+            container = rt.Container()
+            container.name = name

-        for selection in rt.getCurrentSelection():
+        for selection in rt.GetCurrentSelection():
             selection.Parent = container

         return containerise(
-            name, [container], context, loader=self.__class__.__name__)
+            name, [container], context, loader=self.__class__.__name__
+        )

     def update(self, container, representation):
         from pymxs import runtime as rt

         path = get_representation_path(representation)
-        node = rt.getNodeByName(container["instance_node"])
-        rt.select(node.Children)
-        fbx_reimport_cmd = (
-            f"""
-FBXImporterSetParam "Animation" false
-FBXImporterSetParam "Cameras" false
-FBXImporterSetParam "AxisConversionMethod" true
-FbxExporterSetParam "UpAxis" "Y"
-FbxExporterSetParam "Preserveinstances" true
-
-importFile @"{path}" #noPrompt using:FBXIMP
-""")
-        rt.execute(fbx_reimport_cmd)
+        node = rt.GetNodeByName(container["instance_node"])
+        rt.FBXImporterSetParam("Animation", False)
+        rt.FBXImporterSetParam("Cameras", False)
+        rt.FBXImporterSetParam("AxisConversionMethod", True)
+        rt.FBXImporterSetParam("UpAxis", "Y")
+        rt.FBXImporterSetParam("Preserveinstances", True)
+        rt.importFile(path, rt.name("noPrompt"), using=rt.FBXIMP)

         with maintained_selection():
-            rt.select(node)
+            rt.Select(node)

-        lib.imprint(container["instance_node"], {
-            "representation": str(representation["_id"])
-        })
+        lib.imprint(
+            container["instance_node"],
+            {"representation": str(representation["_id"])},
+        )

     def switch(self, container, representation):
         self.update(container, representation)
@@ -71,5 +62,5 @@ class FbxModelLoader(load.LoaderPlugin):
     def remove(self, container):
         from pymxs import runtime as rt

-        node = rt.getNodeByName(container["instance_node"])
-        rt.delete(node)
+        node = rt.GetNodeByName(container["instance_node"])
+        rt.Delete(node)
@@ -1,15 +1,13 @@
 import os
-from openpype.pipeline import (
-    load,
-    get_representation_path
-)
-from openpype.hosts.max.api.pipeline import containerise
-
 from openpype.hosts.max.api import lib
 from openpype.hosts.max.api.lib import maintained_selection
+from openpype.hosts.max.api.pipeline import containerise
+from openpype.pipeline import get_representation_path, load


 class ObjLoader(load.LoaderPlugin):
-    """Obj Loader"""
+    """Obj Loader."""

     families = ["model"]
     representations = ["obj"]
@@ -21,18 +19,18 @@ class ObjLoader(load.LoaderPlugin):
         from pymxs import runtime as rt

         filepath = os.path.normpath(self.fname)
-        self.log.debug(f"Executing command to import..")
+        self.log.debug("Executing command to import..")

-        rt.execute(f'importFile @"{filepath}" #noPrompt using:ObjImp')
+        rt.Execute(f'importFile @"{filepath}" #noPrompt using:ObjImp')
         # create "missing" container for obj import
-        container = rt.container()
-        container.name = f"{name}"
+        container = rt.Container()
+        container.name = name

         # get current selection
-        for selection in rt.getCurrentSelection():
+        for selection in rt.GetCurrentSelection():
             selection.Parent = container

-        asset = rt.getNodeByName(f"{name}")
+        asset = rt.GetNodeByName(name)

         return containerise(
             name, [asset], context, loader=self.__class__.__name__)
@@ -42,27 +40,30 @@ class ObjLoader(load.LoaderPlugin):

         path = get_representation_path(representation)
         node_name = container["instance_node"]
-        node = rt.getNodeByName(node_name)
+        node = rt.GetNodeByName(node_name)

         instance_name, _ = node_name.split("_")
-        container = rt.getNodeByName(instance_name)
-        for n in container.Children:
-            rt.delete(n)
+        container = rt.GetNodeByName(instance_name)
+        for child in container.Children:
+            rt.Delete(child)

-        rt.execute(f'importFile @"{path}" #noPrompt using:ObjImp')
+        rt.Execute(f'importFile @"{path}" #noPrompt using:ObjImp')
         # get current selection
-        for selection in rt.getCurrentSelection():
+        for selection in rt.GetCurrentSelection():
             selection.Parent = container

         with maintained_selection():
-            rt.select(node)
+            rt.Select(node)

         lib.imprint(node_name, {
             "representation": str(representation["_id"])
         })

     def switch(self, container, representation):
         self.update(container, representation)

     def remove(self, container):
         from pymxs import runtime as rt

-        node = rt.getNodeByName(container["instance_node"])
-        rt.delete(node)
+        node = rt.GetNodeByName(container["instance_node"])
+        rt.Delete(node)
@@ -1,10 +1,9 @@
 import os
-from openpype.pipeline import (
-    load, get_representation_path
-)
-from openpype.hosts.max.api.pipeline import containerise
-
 from openpype.hosts.max.api import lib
 from openpype.hosts.max.api.lib import maintained_selection
+from openpype.hosts.max.api.pipeline import containerise
+from openpype.pipeline import get_representation_path, load


 class ModelUSDLoader(load.LoaderPlugin):
@ -19,6 +18,7 @@ class ModelUSDLoader(load.LoaderPlugin):
|
|||
|
||||
def load(self, context, name=None, namespace=None, data=None):
|
||||
from pymxs import runtime as rt
|
||||
|
||||
# asset_filepath
|
||||
filepath = os.path.normpath(self.fname)
|
||||
import_options = rt.USDImporter.CreateOptions()
|
||||
|
|
@ -27,11 +27,11 @@ class ModelUSDLoader(load.LoaderPlugin):
|
|||
log_filepath = filepath.replace(ext, "txt")
|
||||
|
||||
rt.LogPath = log_filepath
|
||||
rt.LogLevel = rt.name('info')
|
||||
rt.LogLevel = rt.Name("info")
|
||||
rt.USDImporter.importFile(filepath,
|
||||
importOptions=import_options)
|
||||
|
||||
asset = rt.getNodeByName(f"{name}")
|
||||
asset = rt.GetNodeByName(name)
|
||||
|
||||
return containerise(
|
||||
name, [asset], context, loader=self.__class__.__name__)
|
||||
|
|
@ -41,11 +41,11 @@ class ModelUSDLoader(load.LoaderPlugin):
|
|||
|
||||
path = get_representation_path(representation)
|
||||
node_name = container["instance_node"]
|
||||
node = rt.getNodeByName(node_name)
|
||||
node = rt.GetNodeByName(node_name)
|
||||
for n in node.Children:
|
||||
for r in n.Children:
|
||||
rt.delete(r)
|
||||
rt.delete(n)
|
||||
rt.Delete(r)
|
||||
rt.Delete(n)
|
||||
instance_name, _ = node_name.split("_")
|
||||
|
||||
import_options = rt.USDImporter.CreateOptions()
|
||||
|
|
@ -54,15 +54,15 @@ class ModelUSDLoader(load.LoaderPlugin):
|
|||
log_filepath = path.replace(ext, "txt")
|
||||
|
||||
rt.LogPath = log_filepath
|
||||
rt.LogLevel = rt.name('info')
|
||||
rt.LogLevel = rt.Name("info")
|
||||
rt.USDImporter.importFile(path,
|
||||
importOptions=import_options)
|
||||
|
||||
asset = rt.getNodeByName(f"{instance_name}")
|
||||
asset = rt.GetNodeByName(instance_name)
|
||||
asset.Parent = node
|
||||
|
||||
with maintained_selection():
|
||||
rt.select(node)
|
||||
rt.Select(node)
|
||||
|
||||
lib.imprint(node_name, {
|
||||
"representation": str(representation["_id"])
|
||||
|
|
@ -74,5 +74,5 @@ class ModelUSDLoader(load.LoaderPlugin):
|
|||
def remove(self, container):
|
||||
from pymxs import runtime as rt
|
||||
|
||||
node = rt.getNodeByName(container["instance_node"])
|
||||
rt.delete(node)
|
||||
node = rt.GetNodeByName(container["instance_node"])
|
||||
rt.Delete(node)
|
||||
|
|
|
|||
|
|
@@ -5,19 +5,15 @@ Because of limited api, alembics can be only loaded, but not easily updated.

"""
import os
from openpype.pipeline import (
    load, get_representation_path
)
from openpype.pipeline import load, get_representation_path
from openpype.hosts.max.api import lib, maintained_selection
from openpype.hosts.max.api.pipeline import containerise
from openpype.hosts.max.api import lib


class AbcLoader(load.LoaderPlugin):
    """Alembic loader."""

    families = ["camera",
                "animation",
                "pointcache"]
    families = ["camera", "animation", "pointcache"]
    label = "Load Alembic"
    representations = ["abc"]
    order = -10
@@ -30,21 +26,17 @@ class AbcLoader(load.LoaderPlugin):
        file_path = os.path.normpath(self.fname)

        abc_before = {
            c for c in rt.rootNode.Children
            c
            for c in rt.rootNode.Children
            if rt.classOf(c) == rt.AlembicContainer
        }

        abc_export_cmd = (f"""
AlembicImport.ImportToRoot = false

importFile @"{file_path}" #noPrompt
""")

        self.log.debug(f"Executing command: {abc_export_cmd}")
        rt.execute(abc_export_cmd)
        rt.AlembicImport.ImportToRoot = False
        rt.importFile(file_path, rt.name("noPrompt"))

        abc_after = {
            c for c in rt.rootNode.Children
            c
            for c in rt.rootNode.Children
            if rt.classOf(c) == rt.AlembicContainer
        }

@@ -56,22 +48,42 @@ importFile @"{file_path}" #noPrompt
        abc_container = abc_containers.pop()

        for abc in rt.GetCurrentSelection():
            for cam_shape in abc.Children:
                cam_shape.playbackType = 2

        return containerise(
            name, [abc_container], context, loader=self.__class__.__name__)
            name, [abc_container], context, loader=self.__class__.__name__
        )

    def update(self, container, representation):
        from pymxs import runtime as rt

        path = get_representation_path(representation)
        node = rt.getNodeByName(container["instance_node"])
        node = rt.GetNodeByName(container["instance_node"])

        alembic_objects = self.get_container_children(node, "AlembicObject")
        for alembic_object in alembic_objects:
            alembic_object.source = path

        lib.imprint(container["instance_node"], {
            "representation": str(representation["_id"])
        })
        lib.imprint(
            container["instance_node"],
            {"representation": str(representation["_id"])},
        )

        with maintained_selection():
            rt.Select(node.Children)

            for alembic in rt.Selection:
                abc = rt.GetNodeByName(alembic.name)
                rt.Select(abc.Children)
                for abc_con in rt.Selection:
                    container = rt.GetNodeByName(abc_con.name)
                    container.source = path
                    rt.Select(container.Children)
                    for abc_obj in rt.Selection:
                        alembic_obj = rt.GetNodeByName(abc_obj.name)
                        alembic_obj.source = path

    def switch(self, container, representation):
        self.update(container, representation)
@@ -79,8 +91,8 @@ importFile @"{file_path}" #noPrompt
    def remove(self, container):
        from pymxs import runtime as rt

        node = rt.getNodeByName(container["instance_node"])
        rt.delete(node)
        node = rt.GetNodeByName(container["instance_node"])
        rt.Delete(node)

    @staticmethod
    def get_container_children(parent, type_name):
@@ -1,13 +1,12 @@
import os
from openpype.pipeline import (
    load, get_representation_path
)

from openpype.hosts.max.api import lib, maintained_selection
from openpype.hosts.max.api.pipeline import containerise
from openpype.hosts.max.api import lib
from openpype.pipeline import get_representation_path, load


class PointCloudLoader(load.LoaderPlugin):
    """Point Cloud Loader"""
    """Point Cloud Loader."""

    families = ["pointcloud"]
    representations = ["prt"]
@@ -23,7 +22,7 @@ class PointCloudLoader(load.LoaderPlugin):
        obj = rt.tyCache()
        obj.filename = filepath

        prt_container = rt.getNodeByName(f"{obj.name}")
        prt_container = rt.GetNodeByName(obj.name)

        return containerise(
            name, [prt_container], context, loader=self.__class__.__name__)
@@ -33,19 +32,23 @@ class PointCloudLoader(load.LoaderPlugin):
        from pymxs import runtime as rt

        path = get_representation_path(representation)
        node = rt.getNodeByName(container["instance_node"])
        node = rt.GetNodeByName(container["instance_node"])
        with maintained_selection():
            rt.Select(node.Children)
            for prt in rt.Selection:
                prt_object = rt.GetNodeByName(prt.name)
                prt_object.filename = path

        prt_objects = self.get_container_children(node)
        for prt_object in prt_objects:
            prt_object.source = path
        lib.imprint(container["instance_node"], {
            "representation": str(representation["_id"])
        })

        lib.imprint(container["instance_node"], {
            "representation": str(representation["_id"])
        })
    def switch(self, container, representation):
        self.update(container, representation)

    def remove(self, container):
        """remove the container"""
        from pymxs import runtime as rt

        node = rt.getNodeByName(container["instance_node"])
        rt.delete(node)
        node = rt.GetNodeByName(container["instance_node"])
        rt.Delete(node)
22
openpype/hosts/max/plugins/publish/collect_members.py
Normal file
@@ -0,0 +1,22 @@
# -*- coding: utf-8 -*-
"""Collect instance members."""
import pyblish.api
from pymxs import runtime as rt


class CollectMembers(pyblish.api.InstancePlugin):
    """Collect Set Members."""

    order = pyblish.api.CollectorOrder + 0.01
    label = "Collect Instance Members"
    hosts = ['max']

    def process(self, instance):

        if instance.data.get("instance_node"):
            container = rt.GetNodeByName(instance.data["instance_node"])
            instance.data["members"] = [
                member.node for member
                in container.openPypeData.all_handles
            ]
            self.log.debug("{}".format(instance.data["members"]))
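This new collector is the pivot of the 3ds Max refactor below: it resolves the instance's member nodes once, via the container's `openPypeData.all_handles`, and caches them on `instance.data["members"]`. The extractors and validators that follow then select those members directly instead of re-querying the scene with `get_all_children(rt.getNodeByName(...))`. A hedged sketch of the consuming side (the function name is hypothetical; the calls mirror the extractor changes below):

```python
from pymxs import runtime as rt

from openpype.hosts.max.api import maintained_selection


def export_selected_members(instance, filepath):
    # Filled in by CollectMembers, which runs at CollectorOrder + 0.01.
    node_list = instance.data["members"]
    with maintained_selection():
        rt.Select(node_list)
        rt.ExportFile(filepath, rt.Name("noPrompt"), selectedOnly=True)
```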
@@ -1,14 +1,14 @@
import os

import pyblish.api
from openpype.pipeline import publish, OptionalPyblishPluginMixin
from pymxs import runtime as rt
from openpype.hosts.max.api import maintained_selection, get_all_children

from openpype.hosts.max.api import maintained_selection
from openpype.pipeline import OptionalPyblishPluginMixin, publish


class ExtractCameraAlembic(publish.Extractor, OptionalPyblishPluginMixin):
    """
    Extract Camera with AlembicExport
    """
    """Extract Camera with AlembicExport."""

    order = pyblish.api.ExtractorOrder - 0.1
    label = "Extract Alembic Camera"
@@ -31,20 +31,21 @@ class ExtractCameraAlembic(publish.Extractor, OptionalPyblishPluginMixin):
        path = os.path.join(stagingdir, filename)

        # We run the render
        self.log.info("Writing alembic '%s' to '%s'" % (filename, stagingdir))
        self.log.info(f"Writing alembic '{filename}' to '{stagingdir}'")

        rt.AlembicExport.ArchiveType = rt.name("ogawa")
        rt.AlembicExport.CoordinateSystem = rt.name("maya")
        rt.AlembicExport.ArchiveType = rt.Name("ogawa")
        rt.AlembicExport.CoordinateSystem = rt.Name("maya")
        rt.AlembicExport.StartFrame = start
        rt.AlembicExport.EndFrame = end
        rt.AlembicExport.CustomAttributes = True

        with maintained_selection():
            # select and export
            rt.select(get_all_children(rt.getNodeByName(container)))
            rt.exportFile(
            node_list = instance.data["members"]
            rt.Select(node_list)
            rt.ExportFile(
                path,
                rt.name("noPrompt"),
                rt.Name("noPrompt"),
                selectedOnly=True,
                using=rt.AlembicExport,
            )
@@ -58,6 +59,8 @@ class ExtractCameraAlembic(publish.Extractor, OptionalPyblishPluginMixin):
            "ext": "abc",
            "files": filename,
            "stagingDir": stagingdir,
            "frameStart": start,
            "frameEnd": end,
        }
        instance.data["representations"].append(representation)
        self.log.info("Extracted instance '%s' to: %s" % (instance.name, path))
        self.log.info(f"Extracted instance '{instance.name}' to: {path}")
|
|||
import os
|
||||
|
||||
import pyblish.api
|
||||
from openpype.pipeline import publish, OptionalPyblishPluginMixin
|
||||
from pymxs import runtime as rt
|
||||
from openpype.hosts.max.api import maintained_selection, get_all_children
|
||||
|
||||
from openpype.hosts.max.api import maintained_selection
|
||||
from openpype.pipeline import OptionalPyblishPluginMixin, publish
|
||||
|
||||
|
||||
class ExtractCameraFbx(publish.Extractor, OptionalPyblishPluginMixin):
|
||||
"""
|
||||
Extract Camera with FbxExporter
|
||||
"""
|
||||
"""Extract Camera with FbxExporter."""
|
||||
|
||||
order = pyblish.api.ExtractorOrder - 0.2
|
||||
label = "Extract Fbx Camera"
|
||||
|
|
@ -26,7 +26,7 @@ class ExtractCameraFbx(publish.Extractor, OptionalPyblishPluginMixin):
|
|||
filename = "{name}.fbx".format(**instance.data)
|
||||
|
||||
filepath = os.path.join(stagingdir, filename)
|
||||
self.log.info("Writing fbx file '%s' to '%s'" % (filename, filepath))
|
||||
self.log.info(f"Writing fbx file '{filename}' to '{filepath}'")
|
||||
|
||||
rt.FBXExporterSetParam("Animation", True)
|
||||
rt.FBXExporterSetParam("Cameras", True)
|
||||
|
|
@ -36,10 +36,11 @@ class ExtractCameraFbx(publish.Extractor, OptionalPyblishPluginMixin):
|
|||
|
||||
with maintained_selection():
|
||||
# select and export
|
||||
rt.select(get_all_children(rt.getNodeByName(container)))
|
||||
rt.exportFile(
|
||||
node_list = instance.data["members"]
|
||||
rt.Select(node_list)
|
||||
rt.ExportFile(
|
||||
filepath,
|
||||
rt.name("noPrompt"),
|
||||
rt.Name("noPrompt"),
|
||||
selectedOnly=True,
|
||||
using=rt.FBXEXP,
|
||||
)
|
||||
|
|
@ -55,6 +56,4 @@ class ExtractCameraFbx(publish.Extractor, OptionalPyblishPluginMixin):
|
|||
"stagingDir": stagingdir,
|
||||
}
|
||||
instance.data["representations"].append(representation)
|
||||
self.log.info(
|
||||
"Extracted instance '%s' to: %s" % (instance.name, filepath)
|
||||
)
|
||||
self.log.info(f"Extracted instance '{instance.name}' to: {filepath}")
|
||||
|
|
|
|||
|
|
@@ -2,7 +2,6 @@ import os
import pyblish.api
from openpype.pipeline import publish, OptionalPyblishPluginMixin
from pymxs import runtime as rt
from openpype.hosts.max.api import get_all_children


class ExtractMaxSceneRaw(publish.Extractor, OptionalPyblishPluginMixin):
@@ -33,7 +32,7 @@ class ExtractMaxSceneRaw(publish.Extractor, OptionalPyblishPluginMixin):
        if "representations" not in instance.data:
            instance.data["representations"] = []

        nodes = get_all_children(rt.getNodeByName(container))
        nodes = instance.data["members"]
        rt.saveNodes(nodes, max_path, quiet=True)

        self.log.info("Performing Extraction ...")
@@ -2,7 +2,7 @@ import os
import pyblish.api
from openpype.pipeline import publish, OptionalPyblishPluginMixin
from pymxs import runtime as rt
from openpype.hosts.max.api import maintained_selection, get_all_children
from openpype.hosts.max.api import maintained_selection


class ExtractModel(publish.Extractor, OptionalPyblishPluginMixin):
@@ -40,7 +40,8 @@ class ExtractModel(publish.Extractor, OptionalPyblishPluginMixin):

        with maintained_selection():
            # select and export
            rt.select(get_all_children(rt.getNodeByName(container)))
            node_list = instance.data["members"]
            rt.Select(node_list)
            rt.exportFile(
                filepath,
                rt.name("noPrompt"),
@@ -2,7 +2,7 @@ import os
import pyblish.api
from openpype.pipeline import publish, OptionalPyblishPluginMixin
from pymxs import runtime as rt
from openpype.hosts.max.api import maintained_selection, get_all_children
from openpype.hosts.max.api import maintained_selection


class ExtractModelFbx(publish.Extractor, OptionalPyblishPluginMixin):
@@ -22,6 +22,7 @@ class ExtractModelFbx(publish.Extractor, OptionalPyblishPluginMixin):

        container = instance.data["instance_node"]


        self.log.info("Extracting Geometry ...")

        stagingdir = self.staging_dir(instance)
@@ -39,7 +40,8 @@ class ExtractModelFbx(publish.Extractor, OptionalPyblishPluginMixin):

        with maintained_selection():
            # select and export
            rt.select(get_all_children(rt.getNodeByName(container)))
            node_list = instance.data["members"]
            rt.Select(node_list)
            rt.exportFile(
                filepath,
                rt.name("noPrompt"),
@@ -2,7 +2,7 @@ import os
import pyblish.api
from openpype.pipeline import publish, OptionalPyblishPluginMixin
from pymxs import runtime as rt
from openpype.hosts.max.api import maintained_selection, get_all_children
from openpype.hosts.max.api import maintained_selection


class ExtractModelObj(publish.Extractor, OptionalPyblishPluginMixin):
@@ -31,7 +31,8 @@ class ExtractModelObj(publish.Extractor, OptionalPyblishPluginMixin):

        with maintained_selection():
            # select and export
            rt.select(get_all_children(rt.getNodeByName(container)))
            node_list = instance.data["members"]
            rt.Select(node_list)
            rt.exportFile(
                filepath,
                rt.name("noPrompt"),
@@ -1,20 +1,15 @@
import os

import pyblish.api
from openpype.pipeline import (
    publish,
    OptionalPyblishPluginMixin
)
from pymxs import runtime as rt
from openpype.hosts.max.api import (
    maintained_selection
)

from openpype.hosts.max.api import maintained_selection
from openpype.pipeline import OptionalPyblishPluginMixin, publish


class ExtractModelUSD(publish.Extractor,
                      OptionalPyblishPluginMixin):
    """
    Extract Geometry in USDA Format
    """
    """Extract Geometry in USDA Format."""

    order = pyblish.api.ExtractorOrder - 0.05
    label = "Extract Geometry (USD)"
@@ -26,31 +21,28 @@ class ExtractModelUSD(publish.Extractor,
        if not self.is_active(instance.data):
            return

        container = instance.data["instance_node"]

        self.log.info("Extracting Geometry ...")

        stagingdir = self.staging_dir(instance)
        asset_filename = "{name}.usda".format(**instance.data)
        asset_filepath = os.path.join(stagingdir,
                                      asset_filename)
        self.log.info("Writing USD '%s' to '%s'" % (asset_filepath,
                                                    stagingdir))
        self.log.info(f"Writing USD '{asset_filepath}' to '{stagingdir}'")

        log_filename = "{name}.txt".format(**instance.data)
        log_filepath = os.path.join(stagingdir,
                                    log_filename)
        self.log.info("Writing log '%s' to '%s'" % (log_filepath,
                                                    stagingdir))
        self.log.info(f"Writing log '{log_filepath}' to '{stagingdir}'")

        # get the nodes which need to be exported
        export_options = self.get_export_options(log_filepath)
        with maintained_selection():
            # select and export
            node_list = self.get_node_list(container)
            node_list = instance.data["members"]
            rt.Select(node_list)
            rt.USDExporter.ExportFile(asset_filepath,
                                      exportOptions=export_options,
                                      contentSource=rt.name("selected"),
                                      contentSource=rt.Name("selected"),
                                      nodeList=node_list)

        self.log.info("Performing Extraction ...")
@@ -73,25 +65,11 @@ class ExtractModelUSD(publish.Extractor,
        }
        instance.data["representations"].append(log_representation)

        self.log.info("Extracted instance '%s' to: %s" % (instance.name,
                                                          asset_filepath))
        self.log.info(
            f"Extracted instance '{instance.name}' to: {asset_filepath}")

    def get_node_list(self, container):
        """
        Get the target nodes which are
        the children of the container
        """
        node_list = []

        container_node = rt.getNodeByName(container)
        target_node = container_node.Children
        rt.select(target_node)
        for sel in rt.selection:
            node_list.append(sel)

        return node_list

    def get_export_options(self, log_path):
    @staticmethod
    def get_export_options(log_path):
        """Set Export Options for USD Exporter"""

        export_options = rt.USDExporter.createOptions()
@@ -101,13 +79,13 @@ class ExtractModelUSD(publish.Extractor,
        export_options.Lights = False
        export_options.Cameras = False
        export_options.Materials = False
        export_options.MeshFormat = rt.name('fromScene')
        export_options.FileFormat = rt.name('ascii')
        export_options.UpAxis = rt.name('y')
        export_options.LogLevel = rt.name('info')
        export_options.MeshFormat = rt.Name('fromScene')
        export_options.FileFormat = rt.Name('ascii')
        export_options.UpAxis = rt.Name('y')
        export_options.LogLevel = rt.Name('info')
        export_options.LogPath = log_path
        export_options.PreserveEdgeOrientation = True
        export_options.TimeMode = rt.name('current')
        export_options.TimeMode = rt.Name('current')

        rt.USDexporter.UIOptions = export_options
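The repeated `rt.name(...)` → `rt.Name(...)` swap is again only a casing change: both construct a MAXScript name literal, the Python-side spelling of `#fromScene`, `#ascii` and friends. For example:

```python
from pymxs import runtime as rt

export_options = rt.USDExporter.createOptions()
export_options.FileFormat = rt.Name("ascii")  # MAXScript: #ascii
export_options.LogLevel = rt.Name("info")     # MAXScript: #info
```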
@@ -41,7 +41,7 @@ import os
import pyblish.api
from openpype.pipeline import publish
from pymxs import runtime as rt
from openpype.hosts.max.api import maintained_selection, get_all_children
from openpype.hosts.max.api import maintained_selection


class ExtractAlembic(publish.Extractor):
@@ -72,7 +72,8 @@ class ExtractAlembic(publish.Extractor):

        with maintained_selection():
            # select and export
            rt.select(get_all_children(rt.getNodeByName(container)))
            node_list = instance.data["members"]
            rt.Select(node_list)
            rt.exportFile(
                path,
                rt.name("noPrompt"),
@@ -1,42 +1,34 @@
import os

import pyblish.api
from openpype.pipeline import publish
from pymxs import runtime as rt
from openpype.hosts.max.api import (
    maintained_selection
)
from openpype.settings import get_project_settings
from openpype.pipeline import legacy_io


def get_setting(project_setting=None):
    project_setting = get_project_settings(
        legacy_io.Session["AVALON_PROJECT"]
    )
    return (project_setting["max"]["PointCloud"])
from openpype.hosts.max.api import maintained_selection
from openpype.pipeline import publish


class ExtractPointCloud(publish.Extractor):
    """
    Extract PRT format with tyFlow operators
    Extract PRT format with tyFlow operators.

    Notes:
        Currently only works for the default partition setting

    Args:
        export_particle(): sets up all job arguments for attributes
        to be exported in MAXscript
        self.export_particle(): sets up all job arguments for attributes
            to be exported in MAXscript

        get_operators(): get the export_particle operator
        self.get_operators(): get the export_particle operator

        get_custom_attr(): get all custom channel attributes from Openpype
        setting and sets it as job arguments before exporting
        self.get_custom_attr(): get all custom channel attributes from Openpype
            setting and sets it as job arguments before exporting

        get_files(): get the files with tyFlow naming convention
        before publishing
        self.get_files(): get the files with tyFlow naming convention
            before publishing

        partition_output_name(): get the naming with partition settings.
        get_partition(): get partition value
        self.partition_output_name(): get the naming with partition settings.

        self.get_partition(): get partition value

    """

@@ -46,9 +38,9 @@ class ExtractPointCloud(publish.Extractor):
    families = ["pointcloud"]

    def process(self, instance):
        self.settings = self.get_setting(instance)
        start = int(instance.context.data.get("frameStart"))
        end = int(instance.context.data.get("frameEnd"))
        container = instance.data["instance_node"]
        self.log.info("Extracting PRT...")

        stagingdir = self.staging_dir(instance)
@@ -56,22 +48,25 @@ class ExtractPointCloud(publish.Extractor):
        path = os.path.join(stagingdir, filename)

        with maintained_selection():
            job_args = self.export_particle(container,
            job_args = self.export_particle(instance.data["members"],
                                            start,
                                            end,
                                            path)

            for job in job_args:
                rt.execute(job)
                rt.Execute(job)

        self.log.info("Performing Extraction ...")
        if "representations" not in instance.data:
            instance.data["representations"] = []

        self.log.info("Writing PRT with TyFlow Plugin...")
        filenames = self.get_files(container, path, start, end)
        self.log.debug("filenames: {0}".format(filenames))
        filenames = self.get_files(
            instance.data["members"], path, start, end)
        self.log.debug(f"filenames: {filenames}")

        partition = self.partition_output_name(container)
        partition = self.partition_output_name(
            instance.data["members"])

        representation = {
            'name': 'prt',
@@ -81,67 +76,84 @@ class ExtractPointCloud(publish.Extractor):
            "outputName": partition  # partition value
        }
        instance.data["representations"].append(representation)
        self.log.info("Extracted instance '%s' to: %s" % (instance.name,
                                                          path))
        self.log.info(f"Extracted instance '{instance.name}' to: {path}")

    def export_particle(self,
                        container,
                        members,
                        start,
                        end,
                        filepath):
        """Sets up all job arguments for attributes.

        Those attributes are to be exported in MAX Script.

        Args:
            members (list): Member nodes of the instance.
            start (int): Start frame.
            end (int): End frame.
            filepath (str): Path to PRT file.

        Returns:
            list of arguments for MAX Script.

        """
        job_args = []
        opt_list = self.get_operators(container)
        opt_list = self.get_operators(members)
        for operator in opt_list:
            start_frame = "{0}.frameStart={1}".format(operator,
                                                      start)
            start_frame = f"{operator}.frameStart={start}"
            job_args.append(start_frame)
            end_frame = "{0}.frameEnd={1}".format(operator,
                                                  end)
            end_frame = f"{operator}.frameEnd={end}"
            job_args.append(end_frame)
            filepath = filepath.replace("\\", "/")
            prt_filename = '{0}.PRTFilename="{1}"'.format(operator,
                                                          filepath)

            prt_filename = f'{operator}.PRTFilename="{filepath}"'
            job_args.append(prt_filename)
            # Partition
            mode = "{0}.PRTPartitionsMode=2".format(operator)
            mode = f"{operator}.PRTPartitionsMode=2"
            job_args.append(mode)

            additional_args = self.get_custom_attr(operator)
            for args in additional_args:
                job_args.append(args)

            prt_export = "{0}.exportPRT()".format(operator)
            job_args.extend(iter(additional_args))
            prt_export = f"{operator}.exportPRT()"
            job_args.append(prt_export)

        return job_args

    def get_operators(self, container):
        """Get Export Particles Operator"""
    @staticmethod
    def get_operators(members):
        """Get Export Particles Operator.

        Args:
            members (list): Instance members.

        Returns:
            list of particle operators

        """
        opt_list = []
        node = rt.getNodebyName(container)
        selection_list = list(node.Children)
        for sel in selection_list:
            obj = sel.baseobject
            # TODO: to see if it can be used maxscript instead
            anim_names = rt.getsubanimnames(obj)
        for member in members:
            obj = member.baseobject
            # TODO: to see if it can be used maxscript instead
            anim_names = rt.GetSubAnimNames(obj)
            for anim_name in anim_names:
                sub_anim = rt.getsubanim(obj, anim_name)
                boolean = rt.isProperty(sub_anim, "Export_Particles")
                event_name = sub_anim.name
                sub_anim = rt.GetSubAnim(obj, anim_name)
                boolean = rt.IsProperty(sub_anim, "Export_Particles")
                if boolean:
                    opt = "${0}.{1}.export_particles".format(sel.name,
                                                             event_name)
                    opt_list.append(opt)
                    event_name = sub_anim.Name
                    opt = f"${member.Name}.{event_name}.export_particles"
                    opt_list.append(opt)

        return opt_list

    @staticmethod
    def get_setting(instance):
        project_setting = instance.context.data["project_settings"]
        return project_setting["max"]["PointCloud"]

    def get_custom_attr(self, operator):
        """Get Custom Attributes"""

        custom_attr_list = []
        attr_settings = get_setting()["attribute"]
        attr_settings = self.settings["attribute"]
        for key, value in attr_settings.items():
            custom_attr = "{0}.PRTChannels_{1}=True".format(operator,
                                                            value)
@@ -157,14 +169,25 @@ class ExtractPointCloud(publish.Extractor):
                  path,
                  start_frame,
                  end_frame):
        """
        Note:
            Set the filenames accordingly to the tyFlow file
            naming extension for the publishing purpose
        """Get file names for tyFlow.

        Actual File Output from tyFlow:
        Set the filenames accordingly to the tyFlow file
        naming extension for the publishing purpose

        Actual File Output from tyFlow::
        <SceneFile>__part<PartitionStart>of<PartitionCount>.<frame>.prt

        e.g. tyFlow_cloth_CCCS_blobbyFill_001__part1of1_00004.prt

        Args:
            container: Instance node.
            path (str): Output directory.
            start_frame (int): Start frame.
            end_frame (int): End frame.

        Returns:
            list of filenames

        """
        filenames = []
        filename = os.path.basename(path)
@@ -181,27 +204,36 @@ class ExtractPointCloud(publish.Extractor):
        return filenames

    def partition_output_name(self, container):
        """
        Notes:
            Partition output name set for mapping
            the published file output
        """Get partition output name.

        Partition output name set for mapping
        the published file output.

        Todo:
            Customizes the setting for the output.

        Args:
            container: Instance node.

        Returns:
            str: Partition name.

        todo:
            Customizes the setting for the output
        """
        partition_count, partition_start = self.get_partition(container)
        partition = "_part{:03}of{}".format(partition_start,
                                            partition_count)

        return partition
        return f"_part{partition_start:03}of{partition_count}"

    def get_partition(self, container):
        """
        Get Partition Value
        """Get Partition value.

        Args:
            container: Instance node.

        """
        opt_list = self.get_operators(container)
        # TODO: This looks strange? Iterating over
        # the opt_list but returning from inside?
        for operator in opt_list:
            count = rt.execute(f'{operator}.PRTPartitionsCount')
            start = rt.execute(f'{operator}.PRTPartitionsFrom')
            count = rt.Execute(f'{operator}.PRTPartitionsCount')
            start = rt.Execute(f'{operator}.PRTPartitionsFrom')

            return count, start
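`export_particle()` never calls the tyFlow API directly; it assembles plain MAXScript statements as strings and hands each one to `rt.Execute`. For a single Export Particles operator, the generated job list looks roughly like this (operator path, frames, and file path are hypothetical):

```python
job_args = [
    '$tyFlow001.Event001.export_particles.frameStart=1001',
    '$tyFlow001.Event001.export_particles.frameEnd=1100',
    '$tyFlow001.Event001.export_particles.PRTFilename="C:/publish/out.prt"',
    '$tyFlow001.Event001.export_particles.PRTPartitionsMode=2',
    '$tyFlow001.Event001.export_particles.exportPRT()',
]
# Each statement then runs inside 3ds Max:
# for job in job_args:
#     rt.Execute(job)
```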
@@ -30,8 +30,8 @@ class ExtractRedshiftProxy(publish.Extractor):

        with maintained_selection():
            # select and export
            con = rt.getNodeByName(container)
            rt.select(con.Children)
            node_list = instance.data["members"]
            rt.Select(node_list)
            # Redshift rsProxy command
            # rsProxy fp selected compress connectivity startFrame endFrame
            # camera warnExisting transformPivotToOrigin
@@ -20,28 +20,23 @@ class ValidateCameraContent(pyblish.api.InstancePlugin):
    def process(self, instance):
        invalid = self.get_invalid(instance)
        if invalid:
            raise PublishValidationError("Camera instance must only include"
                                         "camera (and camera target)")
            raise PublishValidationError(("Camera instance must only include"
                                          "camera (and camera target). "
                                          f"Invalid content {invalid}"))

    def get_invalid(self, instance):
        """
        Get invalid nodes if the instance is not camera
        """
        invalid = list()
        invalid = []
        container = instance.data["instance_node"]
        self.log.info("Validating look content for "
                      "{}".format(container))
        self.log.info(f"Validating camera content for {container}")

        con = rt.getNodeByName(container)
        selection_list = list(con.Children)
        selection_list = instance.data["members"]
        for sel in selection_list:
            # to avoid Attribute Error from pymxs wrapper
            sel_tmp = str(sel)
            found = False
            for cam in self.camera_type:
                if sel_tmp.startswith(cam):
                    found = True
                    break
            found = any(sel_tmp.startswith(cam) for cam in self.camera_type)
            if not found:
                self.log.error("Camera not found")
                invalid.append(sel)
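The `any()` expression is a direct condensation of the removed flag-and-break loop; both forms answer "does the node name start with one of the known camera type prefixes". Reduced to its core (prefix values are hypothetical):

```python
camera_type = ("$Free_Camera", "$Target_Camera")  # hypothetical prefixes


def is_camera(node_name):
    # Equivalent to the removed found = False / for / break pattern.
    return any(node_name.startswith(cam) for cam in camera_type)
```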
@@ -1,8 +1,9 @@
# -*- coding: utf-8 -*-
import pyblish.api
from openpype.pipeline import PublishValidationError
from pymxs import runtime as rt

from openpype.pipeline import PublishValidationError


class ValidateModelContent(pyblish.api.InstancePlugin):
    """Validates Model instance contents.
@@ -19,26 +20,25 @@ class ValidateModelContent(pyblish.api.InstancePlugin):
    def process(self, instance):
        invalid = self.get_invalid(instance)
        if invalid:
            raise PublishValidationError("Model instance must only include"
                                         "Geometry and Editable Mesh")
            raise PublishValidationError(("Model instance must only include"
                                          "Geometry and Editable Mesh. "
                                          f"Invalid types on: {invalid}"))

    def get_invalid(self, instance):
        """
        Get invalid nodes if the instance is not camera
        """
        invalid = list()
        invalid = []
        container = instance.data["instance_node"]
        self.log.info("Validating look content for "
                      "{}".format(container))
        self.log.info(f"Validating model content for {container}")

        con = rt.getNodeByName(container)
        selection_list = list(con.Children) or rt.getCurrentSelection()
        selection_list = instance.data["members"]
        for sel in selection_list:
            if rt.classOf(sel) in rt.Camera.classes:
            if rt.ClassOf(sel) in rt.Camera.classes:
                invalid.append(sel)
            if rt.classOf(sel) in rt.Light.classes:
            if rt.ClassOf(sel) in rt.Light.classes:
                invalid.append(sel)
            if rt.classOf(sel) in rt.Shape.classes:
            if rt.ClassOf(sel) in rt.Shape.classes:
                invalid.append(sel)

        return invalid
@@ -18,6 +18,5 @@ class ValidateMaxContents(pyblish.api.InstancePlugin):
    label = "Max Scene Contents"

    def process(self, instance):
        container = rt.getNodeByName(instance.data["instance_node"])
        if not list(container.Children):
        if not instance.data["members"]:
            raise PublishValidationError("No content found in the container")
@@ -9,11 +9,11 @@ def get_setting(project_setting=None):
    project_setting = get_project_settings(
        legacy_io.Session["AVALON_PROJECT"]
    )
    return (project_setting["max"]["PointCloud"])
    return project_setting["max"]["PointCloud"]


class ValidatePointCloud(pyblish.api.InstancePlugin):
    """Validate that workfile was saved."""
    """Validate that work file was saved."""

    order = pyblish.api.ValidatorOrder
    families = ["pointcloud"]
@@ -34,39 +34,42 @@ class ValidatePointCloud(pyblish.api.InstancePlugin):
        of export_particle operator

        """
        invalid = self.get_tyFlow_object(instance)
        if invalid:
            raise PublishValidationError("Non tyFlow object "
                                         "found: {}".format(invalid))
        invalid = self.get_tyFlow_operator(instance)
        if invalid:
            raise PublishValidationError("tyFlow ExportParticle operator "
                                         "not found: {}".format(invalid))
        report = []

        invalid = self.validate_export_mode(instance)
        if invalid:
            raise PublishValidationError("The export mode is not at PRT")
        invalid_object = self.get_tyflow_object(instance)
        if invalid_object:
            report.append(f"Non tyFlow object found: {invalid_object}")

        invalid = self.validate_partition_value(instance)
        if invalid:
            raise PublishValidationError("tyFlow Partition setting is "
                                         "not at the default value")
        invalid = self.validate_custom_attribute(instance)
        if invalid:
            raise PublishValidationError("Custom Attribute not found "
                                         ":{}".format(invalid))
        invalid_operator = self.get_tyflow_operator(instance)
        if invalid_operator:
            report.append(("tyFlow ExportParticle operator not "
                           f"found: {invalid_operator}"))

    def get_tyFlow_object(self, instance):
        if self.validate_export_mode(instance):
            report.append("The export mode is not at PRT")

        if self.validate_partition_value(instance):
            report.append(("tyFlow Partition setting is "
                           "not at the default value"))

        invalid_attribute = self.validate_custom_attribute(instance)
        if invalid_attribute:
            report.append(("Custom Attribute not found "
                           f":{invalid_attribute}"))

        if report:
            raise PublishValidationError(f"{report}")

    def get_tyflow_object(self, instance):
        invalid = []
        container = instance.data["instance_node"]
        self.log.info("Validating tyFlow container "
                      "for {}".format(container))
        self.log.info(f"Validating tyFlow container for {container}")

        con = rt.getNodeByName(container)
        selection_list = list(con.Children)
        selection_list = instance.data["members"]
        for sel in selection_list:
            sel_tmp = str(sel)
            if rt.classOf(sel) in [rt.tyFlow,
            if rt.ClassOf(sel) in [rt.tyFlow,
                                   rt.Editable_Mesh]:
                if "tyFlow" not in sel_tmp:
                    invalid.append(sel)
@@ -75,23 +78,20 @@ class ValidatePointCloud(pyblish.api.InstancePlugin):

        return invalid

    def get_tyFlow_operator(self, instance):
    def get_tyflow_operator(self, instance):
        invalid = []
        container = instance.data["instance_node"]
        self.log.info("Validating tyFlow object "
                      "for {}".format(container))

        con = rt.getNodeByName(container)
        selection_list = list(con.Children)
        self.log.info(f"Validating tyFlow object for {container}")
        selection_list = instance.data["members"]
        bool_list = []
        for sel in selection_list:
            obj = sel.baseobject
            anim_names = rt.getsubanimnames(obj)
            anim_names = rt.GetSubAnimNames(obj)
            for anim_name in anim_names:
                # get all the names of the related tyFlow nodes
                sub_anim = rt.getsubanim(obj, anim_name)
                sub_anim = rt.GetSubAnim(obj, anim_name)
                # check if there is export particle operator
                boolean = rt.isProperty(sub_anim, "Export_Particles")
                boolean = rt.IsProperty(sub_anim, "Export_Particles")
                bool_list.append(str(boolean))
        # if the export_particles property is not there
        # it means there is not a "Export Particle" operator
@@ -104,21 +104,18 @@ class ValidatePointCloud(pyblish.api.InstancePlugin):
    def validate_custom_attribute(self, instance):
        invalid = []
        container = instance.data["instance_node"]
        self.log.info("Validating tyFlow custom "
                      "attributes for {}".format(container))
        self.log.info(
            f"Validating tyFlow custom attributes for {container}")

        con = rt.getNodeByName(container)
        selection_list = list(con.Children)
        selection_list = instance.data["members"]
        for sel in selection_list:
            obj = sel.baseobject
            anim_names = rt.getsubanimnames(obj)
            anim_names = rt.GetSubAnimNames(obj)
            for anim_name in anim_names:
                # get all the names of the related tyFlow nodes
                sub_anim = rt.getsubanim(obj, anim_name)
                # check if there is export particle operator
                boolean = rt.isProperty(sub_anim, "Export_Particles")
                event_name = sub_anim.name
                if boolean:
                sub_anim = rt.GetSubAnim(obj, anim_name)
                if rt.IsProperty(sub_anim, "Export_Particles"):
                    event_name = sub_anim.name
                    opt = "${0}.{1}.export_particles".format(sel.name,
                                                             event_name)
                    attributes = get_setting()["attribute"]
@@ -126,39 +123,36 @@ class ValidatePointCloud(pyblish.api.InstancePlugin):
                        custom_attr = "{0}.PRTChannels_{1}".format(opt,
                                                                   value)
                        try:
                            rt.execute(custom_attr)
                            rt.Execute(custom_attr)
                        except RuntimeError:
                            invalid.add(key)
                            invalid.append(key)

        return invalid

    def validate_partition_value(self, instance):
        invalid = []
        container = instance.data["instance_node"]
        self.log.info("Validating tyFlow partition "
                      "value for {}".format(container))
        self.log.info(
            f"Validating tyFlow partition value for {container}")

        con = rt.getNodeByName(container)
        selection_list = list(con.Children)
        selection_list = instance.data["members"]
        for sel in selection_list:
            obj = sel.baseobject
            anim_names = rt.getsubanimnames(obj)
            anim_names = rt.GetSubAnimNames(obj)
            for anim_name in anim_names:
                # get all the names of the related tyFlow nodes
                sub_anim = rt.getsubanim(obj, anim_name)
                # check if there is export particle operator
                boolean = rt.isProperty(sub_anim, "Export_Particles")
                event_name = sub_anim.name
                if boolean:
                sub_anim = rt.GetSubAnim(obj, anim_name)
                if rt.IsProperty(sub_anim, "Export_Particles"):
                    event_name = sub_anim.name
                    opt = "${0}.{1}.export_particles".format(sel.name,
                                                             event_name)
                    count = rt.execute(f'{opt}.PRTPartitionsCount')
                    count = rt.Execute(f'{opt}.PRTPartitionsCount')
                    if count != 100:
                        invalid.append(count)
                    start = rt.execute(f'{opt}.PRTPartitionsFrom')
                    start = rt.Execute(f'{opt}.PRTPartitionsFrom')
                    if start != 1:
                        invalid.append(start)
                    end = rt.execute(f'{opt}.PRTPartitionsTo')
                    end = rt.Execute(f'{opt}.PRTPartitionsTo')
                    if end != 1:
                        invalid.append(end)

@@ -167,24 +161,23 @@ class ValidatePointCloud(pyblish.api.InstancePlugin):
    def validate_export_mode(self, instance):
        invalid = []
        container = instance.data["instance_node"]
        self.log.info("Validating tyFlow export "
                      "mode for {}".format(container))
        self.log.info(
            f"Validating tyFlow export mode for {container}")

        con = rt.getNodeByName(container)
        con = rt.GetNodeByName(container)
        selection_list = list(con.Children)
        for sel in selection_list:
            obj = sel.baseobject
            anim_names = rt.getsubanimnames(obj)
            anim_names = rt.GetSubAnimNames(obj)
            for anim_name in anim_names:
                # get all the names of the related tyFlow nodes
                sub_anim = rt.getsubanim(obj, anim_name)
                sub_anim = rt.GetSubAnim(obj, anim_name)
                # check if there is export particle operator
                boolean = rt.isProperty(sub_anim, "Export_Particles")
                boolean = rt.IsProperty(sub_anim, "Export_Particles")
                event_name = sub_anim.name
                if boolean:
                    opt = "${0}.{1}.export_particles".format(sel.name,
                                                             event_name)
                    export_mode = rt.execute(f'{opt}.exportMode')
                    opt = f"${sel.name}.{event_name}.export_particles"
                    export_mode = rt.Execute(f'{opt}.exportMode')
                    if export_mode != 1:
                        invalid.append(export_mode)

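The rewritten `process()` swaps raise-on-first-failure for aggregate-then-raise, so a single publish run surfaces every tyFlow problem at once. The shape of that pattern, stripped of the plugin specifics:

```python
def process_checks(checks):
    # checks: iterable of (failed, message) pairs, evaluated up front.
    report = [message for failed, message in checks if failed]
    if report:
        # The plugin raises PublishValidationError(f"{report}") here.
        raise ValueError(f"{report}")
```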
@@ -1,36 +1,37 @@
# -*- coding: utf-8 -*-
import pyblish.api
"""Validator for USD plugin."""
from openpype.pipeline import PublishValidationError
from pyblish.api import InstancePlugin, ValidatorOrder
from pymxs import runtime as rt


class ValidateUSDPlugin(pyblish.api.InstancePlugin):
    """Validates if USD plugin is installed or loaded in Max
    """
def get_plugins() -> list:
    """Get plugin list from 3ds max."""
    manager = rt.PluginManager
    count = manager.pluginDllCount
    plugin_info_list = []
    for p in range(1, count + 1):
        plugin_info = manager.pluginDllName(p)
        plugin_info_list.append(plugin_info)

    order = pyblish.api.ValidatorOrder - 0.01
    return plugin_info_list


class ValidateUSDPlugin(InstancePlugin):
    """Validates if USD plugin is installed or loaded in 3ds max."""

    order = ValidatorOrder - 0.01
    families = ["model"]
    hosts = ["max"]
    label = "USD Plugin"

    def process(self, instance):
        plugin_mgr = rt.pluginManager
        plugin_count = plugin_mgr.pluginDllCount
        plugin_info = self.get_plugins(plugin_mgr,
                                       plugin_count)
        """Plugin entry point."""

        plugin_info = get_plugins()
        usd_import = "usdimport.dli"
        if usd_import not in plugin_info:
            raise PublishValidationError("USD Plugin {}"
                                         " not found".format(usd_import))
            raise PublishValidationError(f"USD Plugin {usd_import} not found")
        usd_export = "usdexport.dle"
        if usd_export not in plugin_info:
            raise PublishValidationError("USD Plugin {}"
                                         " not found".format(usd_export))

    def get_plugins(self, manager, count):
        plugin_info_list = list()
        for p in range(1, count + 1):
            plugin_info = manager.pluginDllName(p)
            plugin_info_list.append(plugin_info)

        return plugin_info_list
            raise PublishValidationError(f"USD Plugin {usd_export} not found")
@@ -1,6 +1,7 @@
"""Standalone helper functions"""

import os
from pprint import pformat
import sys
import platform
import uuid
@@ -3262,75 +3263,6 @@ def iter_shader_edits(relationships, shader_nodes, nodes_by_id, label=None):
def set_colorspace():
    """Set Colorspace from project configuration
    """
    project_name = os.getenv("AVALON_PROJECT")
    imageio = get_project_settings(project_name)["maya"]["imageio"]

    # Maya 2022+ introduces new OCIO v2 color management settings that
    # can override the old color managenement preferences. OpenPype has
    # separate settings for both so we fall back when necessary.
    use_ocio_v2 = imageio["colorManagementPreference_v2"]["enabled"]
    required_maya_version = 2022
    maya_version = int(cmds.about(version=True))
    maya_supports_ocio_v2 = maya_version >= required_maya_version
    if use_ocio_v2 and not maya_supports_ocio_v2:
        # Fallback to legacy behavior with a warning
        log.warning("Color Management Preference v2 is enabled but not "
                    "supported by current Maya version: {} (< {}). Falling "
                    "back to legacy settings.".format(
                        maya_version, required_maya_version)
                    )
        use_ocio_v2 = False

    if use_ocio_v2:
        root_dict = imageio["colorManagementPreference_v2"]
    else:
        root_dict = imageio["colorManagementPreference"]

    if not isinstance(root_dict, dict):
        msg = "set_colorspace(): argument should be dictionary"
        log.error(msg)

    log.debug(">> root_dict: {}".format(root_dict))

    # enable color management
    cmds.colorManagementPrefs(e=True, cmEnabled=True)
    cmds.colorManagementPrefs(e=True, ocioRulesEnabled=True)

    # set config path
    custom_ocio_config = False
    if root_dict.get("configFilePath"):
        unresolved_path = root_dict["configFilePath"]
        ocio_paths = unresolved_path[platform.system().lower()]

        resolved_path = None
        for ocio_p in ocio_paths:
            resolved_path = str(ocio_p).format(**os.environ)
            if not os.path.exists(resolved_path):
                continue

        if resolved_path:
            filepath = str(resolved_path).replace("\\", "/")
            cmds.colorManagementPrefs(e=True, configFilePath=filepath)
            cmds.colorManagementPrefs(e=True, cmConfigFileEnabled=True)
            log.debug("maya '{}' changed to: {}".format(
                "configFilePath", resolved_path))
            custom_ocio_config = True
        else:
            cmds.colorManagementPrefs(e=True, cmConfigFileEnabled=False)
            cmds.colorManagementPrefs(e=True, configFilePath="")

    # If no custom OCIO config file was set we make sure that Maya 2022+
    # either chooses between Maya's newer default v2 or legacy config based
    # on OpenPype setting to use ocio v2 or not.
    if maya_supports_ocio_v2 and not custom_ocio_config:
        if use_ocio_v2:
            # Use Maya 2022+ default OCIO v2 config
            log.info("Setting default Maya OCIO v2 config")
            cmds.colorManagementPrefs(edit=True, configFilePath="")
        else:
            # Set the Maya default config file path
            log.info("Setting default Maya OCIO v1 legacy config")
            cmds.colorManagementPrefs(edit=True, configFilePath="legacy")

    # set color spaces for rendering space and view transforms
    def _colormanage(**kwargs):
@@ -3347,17 +3279,74 @@ def set_colorspace():
        except RuntimeError as exc:
            log.error(exc)

    if use_ocio_v2:
        _colormanage(renderingSpaceName=root_dict["renderSpace"])
        _colormanage(displayName=root_dict["displayName"])
        _colormanage(viewName=root_dict["viewName"])
    else:
        _colormanage(renderingSpaceName=root_dict["renderSpace"])
        if maya_supports_ocio_v2:
            _colormanage(viewName=root_dict["viewTransform"])
            _colormanage(displayName="legacy")
    project_name = os.getenv("AVALON_PROJECT")
    imageio = get_project_settings(project_name)["maya"]["imageio"]

    # ocio compatibility variables
    ocio_v2_maya_version = 2022
    maya_version = int(cmds.about(version=True))
    ocio_v2_support = use_ocio_v2 = maya_version >= ocio_v2_maya_version

    root_dict = {}
    use_workfile_settings = imageio.get("workfile", {}).get("enabled")

    if use_workfile_settings:
        # TODO: deprecated code from 3.15.5 - remove
        # Maya 2022+ introduces new OCIO v2 color management settings that
        # can override the old color management preferences. OpenPype has
        # separate settings for both so we fall back when necessary.
        use_ocio_v2 = imageio["colorManagementPreference_v2"]["enabled"]
        if use_ocio_v2 and not ocio_v2_support:
            # Fallback to legacy behavior with a warning
            log.warning(
                "Color Management Preference v2 is enabled but not "
                "supported by current Maya version: {} (< {}). Falling "
                "back to legacy settings.".format(
                    maya_version, ocio_v2_maya_version)
            )

        if use_ocio_v2:
            root_dict = imageio["colorManagementPreference_v2"]
        else:
            _colormanage(viewTransformName=root_dict["viewTransform"])
            root_dict = imageio["colorManagementPreference"]

        if not isinstance(root_dict, dict):
            msg = "set_colorspace(): argument should be dictionary"
            log.error(msg)

    else:
        root_dict = imageio["workfile"]

    log.debug(">> root_dict: {}".format(pformat(root_dict)))

    if root_dict:
        # enable color management
        cmds.colorManagementPrefs(e=True, cmEnabled=True)
        cmds.colorManagementPrefs(e=True, ocioRulesEnabled=True)

        # backward compatibility
        # TODO: deprecated code from 3.15.5 - refactor to use new settings
        view_name = root_dict.get("viewTransform")
        if view_name is None:
            view_name = root_dict.get("viewName")

        if use_ocio_v2:
            # Use Maya 2022+ default OCIO v2 config
            log.info("Setting default Maya OCIO v2 config")
            cmds.colorManagementPrefs(edit=True, configFilePath="")

            # set rendering space and view transform
            _colormanage(renderingSpaceName=root_dict["renderSpace"])
            _colormanage(viewName=view_name)
            _colormanage(displayName=root_dict["displayName"])
        else:
            # Set the Maya default config file path
            log.info("Setting default Maya OCIO v1 legacy config")
            cmds.colorManagementPrefs(edit=True, configFilePath="legacy")

            # set rendering space and view transform
            _colormanage(renderingSpaceName=root_dict["renderSpace"])
            _colormanage(viewTransformName=view_name)


@contextlib.contextmanager
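The reworked `set_colorspace()` is now settings-driven: it keeps the deprecated `colorManagementPreference*` branch only for backward compatibility and otherwise reads the new `imageio["workfile"]` block, while `view_name` falls back from the new `viewTransform` key to the old `viewName`. A small sketch of that fallback, using the key names shown in the diff:

```python
def resolve_view_name(root_dict):
    # Backward-compatible lookup used before applying colorManagementPrefs.
    view_name = root_dict.get("viewTransform")
    if view_name is None:
        view_name = root_dict.get("viewName")
    return view_name
```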
@@ -14,7 +14,7 @@ from openpype.tools.workfile_template_build import (
    WorkfileBuildPlaceholderDialog,
)

from .lib import read, imprint
from .lib import read, imprint, get_main_window

PLACEHOLDER_SET = "PLACEHOLDERS_SET"

@@ -173,44 +173,37 @@ class MayaPlaceholderLoadPlugin(PlaceholderPlugin, PlaceholderLoadMixin):

    def create_placeholder(self, placeholder_data):
        selection = cmds.ls(selection=True)
        if not selection:
            raise ValueError("Nothing is selected")
        if len(selection) > 1:
            raise ValueError("More then one item are selected")

        parent = selection[0] if selection else None

        placeholder_data["plugin_identifier"] = self.identifier

        placeholder_name = self._create_placeholder_name(placeholder_data)

        placeholder = cmds.spaceLocator(name=placeholder_name)[0]
        # TODO: this can crash if selection can't be used
        cmds.parent(placeholder, selection[0])
        if parent:
            placeholder = cmds.parent(placeholder, selection[0])[0]

        # get the long name of the placeholder (with the groups)
        placeholder_full_name = (
            cmds.ls(selection[0], long=True)[0]
            + "|"
            + placeholder.replace("|", "")
        )

        imprint(placeholder_full_name, placeholder_data)
        imprint(placeholder, placeholder_data)

        # Add helper attributes to keep placeholder info
        cmds.addAttr(
            placeholder_full_name,
            placeholder,
            longName="parent",
            hidden=True,
            dataType="string"
        )
        cmds.addAttr(
            placeholder_full_name,
            placeholder,
            longName="index",
            hidden=True,
            attributeType="short",
            defaultValue=-1
        )

        cmds.setAttr(placeholder_full_name + ".parent", "", type="string")
        cmds.setAttr(placeholder + ".parent", "", type="string")

    def update_placeholder(self, placeholder_item, placeholder_data):
        node_name = placeholder_item.scene_identifier
@@ -233,7 +226,7 @@ class MayaPlaceholderLoadPlugin(PlaceholderPlugin, PlaceholderLoadMixin):
            if placeholder_data.get("plugin_identifier") != self.identifier:
                continue

            # TODO do data validations and maybe updgrades if are invalid
            # TODO do data validations and maybe upgrades if they are invalid
            output.append(
                LoadPlaceholderItem(node_name, placeholder_data, self)
            )
@@ -319,8 +312,9 @@ def update_workfile_template(*args):
def create_placeholder(*args):
    host = registered_host()
    builder = MayaTemplateBuilder(host)
    window = WorkfileBuildPlaceholderDialog(host, builder)
    window.exec_()
    window = WorkfileBuildPlaceholderDialog(host, builder,
                                            parent=get_main_window())
    window.show()


def update_placeholder(*args):
@@ -343,6 +337,7 @@ def update_placeholder(*args):
        raise ValueError("Too many selected nodes")

    placeholder_item = placeholder_items[0]
    window = WorkfileBuildPlaceholderDialog(host, builder)
    window = WorkfileBuildPlaceholderDialog(host, builder,
                                            parent=get_main_window())
    window.set_update_mode(placeholder_item)
    window.exec_()
@@ -98,4 +98,4 @@ class CreateArnoldSceneSource(plugin.MayaCreator):

        content = cmds.sets(name=instance_node + "_content_SET", empty=True)
        proxy = cmds.sets(name=instance_node + "_proxy_SET", empty=True)
        cmds.sets([content, proxy], forceElement=instance)
        cmds.sets([content, proxy], forceElement=instance_node)
@@ -105,7 +105,8 @@ class ImportMayaLoader(load.LoaderPlugin):
         "camera",
         "rig",
         "camerarig",
-        "staticMesh"
+        "staticMesh",
+        "workfile"
     ]
 
     label = "Import"

@@ -6,23 +6,29 @@ import maya.cmds as cmds
 from openpype.settings import get_project_settings
 from openpype.pipeline import (
     load,
+    legacy_io,
     get_representation_path
 )
 from openpype.hosts.maya.api.lib import (
-    unique_namespace, get_attribute_input, maintained_selection
+    unique_namespace,
+    get_attribute_input,
+    maintained_selection,
+    convert_to_maya_fps
 )
 from openpype.hosts.maya.api.pipeline import containerise
 
 
 def is_sequence(files):
     sequence = False
-    collections, remainder = clique.assemble(files)
+    collections, remainder = clique.assemble(files, minimum_items=1)
     if collections:
         sequence = True
 
     return sequence
 
 
+def get_current_session_fps():
+    session_fps = float(legacy_io.Session.get('AVALON_FPS', 25))
+    return convert_to_maya_fps(session_fps)
+
+
 class ArnoldStandinLoader(load.LoaderPlugin):
     """Load as Arnold standin"""

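A quick aside on the `minimum_items=1` change above: by default `clique.assemble()` only forms a collection from two or more items, so a single-frame cache would never be detected as a sequence. A minimal sketch of the behavior difference (the file name is made up for illustration):

```python
import clique

files = ["standin.0001.ass"]  # hypothetical single-frame cache

# Default: at least two items are needed to form a collection.
collections, remainder = clique.assemble(files)
print(bool(collections))  # False -> a lone frame is not a sequence

# With minimum_items=1 a single frame still yields a collection.
collections, remainder = clique.assemble(files, minimum_items=1)
print(bool(collections))  # True -> loader can enable frame extension
```
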
@@ -35,9 +41,15 @@ class ArnoldStandinLoader(load.LoaderPlugin):
     color = "orange"
 
     def load(self, context, name, namespace, options):
-        if not cmds.pluginInfo("mtoa", query=True, loaded=True):
-            cmds.loadPlugin("mtoa")
+        # Create defaultArnoldRenderOptions before creating aiStandin
+        # which tries to connect it. Since we load the plugin and directly
+        # create aiStandin without the defaultArnoldRenderOptions,
+        # we need to create the render options for aiStandin creation.
+        from mtoa.core import createOptions
+        createOptions()
+
+        # Make sure to load arnold before importing `mtoa.ui.arnoldmenu`
+        cmds.loadPlugin("mtoa", quiet=True)
+        import mtoa.ui.arnoldmenu
 
         version = context['version']

@@ -84,6 +96,9 @@ class ArnoldStandinLoader(load.LoaderPlugin):
         sequence = is_sequence(os.listdir(os.path.dirname(self.fname)))
         cmds.setAttr(standin_shape + ".useFrameExtension", sequence)
 
+        fps = float(version["data"].get("fps")) or get_current_session_fps()
+        cmds.setAttr(standin_shape + ".abcFPS", fps)
+
         nodes = [root, standin, standin_shape]
         if operator is not None:
             nodes.append(operator)

@@ -273,6 +273,11 @@ class FileNodeLoader(load.LoaderPlugin):
             project_name, host_name,
             project_settings=project_settings
         )
+
+        # ignore if host imageio is not enabled
+        if not config_data:
+            return
+
         file_rules = get_imageio_file_rules(
             project_name, host_name,
             project_settings=project_settings

@@ -33,7 +33,7 @@ def preserve_modelpanel_cameras(container, log=None):
     panel_cameras = {}
     for panel in cmds.getPanel(type="modelPanel"):
         cam = cmds.ls(cmds.modelPanel(panel, query=True, camera=True),
-                      long=True)
+                      long=True)[0]
 
         # Often but not always maya returns the transform from the
         # modelPanel as opposed to the camera shape, so we convert it

@@ -1,4 +1,5 @@
 import os
+import shutil
 
 import maya.cmds as cmds
 import xgenm

@@ -116,8 +117,8 @@ class XgenLoader(openpype.hosts.maya.api.plugin.ReferenceLoader):
     def update(self, container, representation):
         """Workflow for updating Xgen.
 
-        - Copy and potentially overwrite the workspace .xgen file.
         - Export changes to delta file.
+        - Copy and overwrite the workspace .xgen file.
         - Set collection attributes to not include delta files.
         - Update xgen maya file reference.
         - Apply the delta file changes.

@@ -130,6 +131,10 @@ class XgenLoader(openpype.hosts.maya.api.plugin.ReferenceLoader):
         There is an implicit increment of the xgen and delta files, due to
         using the workfile basename.
         """
+        # Storing current description to try and maintain later.
+        current_description = (
+            xgenm.xgGlobal.DescriptionEditor.currentDescription()
+        )
+
         container_node = container["objectName"]
         members = get_container_members(container_node)

@@ -160,6 +165,7 @@ class XgenLoader(openpype.hosts.maya.api.plugin.ReferenceLoader):
             data_path
         )
         data = {"xgProjectPath": project_path, "xgDataPath": data_path}
+        shutil.copy(new_xgen_file, xgen_file)
         write_xgen_file(data, xgen_file)
 
         attribute_data = {

@@ -171,3 +177,11 @@ class XgenLoader(openpype.hosts.maya.api.plugin.ReferenceLoader):
         super().update(container, representation)
 
         xgenm.applyDelta(xgen_palette.replace("|", ""), xgd_file)
+
+        # Restore current selected description if it exists.
+        if cmds.objExists(current_description):
+            xgenm.xgGlobal.DescriptionEditor.setCurrentDescription(
+                current_description
+            )
+        # Full UI refresh.
+        xgenm.xgGlobal.DescriptionEditor.refresh("Full")

@@ -18,30 +18,39 @@ class CollectArnoldSceneSource(pyblish.api.InstancePlugin):
         for objset in objsets:
             objset = str(objset)
             members = cmds.sets(objset, query=True)
-            members = cmds.ls(members, long=True)
             if members is None:
                 self.log.warning("Skipped empty instance: \"%s\" " % objset)
                 continue
             if objset.endswith("content_SET"):
-                children = get_all_children(members)
-                instance.data["contentMembers"] = children
-                self.log.debug("content members: {}".format(children))
-            elif objset.endswith("proxy_SET"):
-                set_members = get_all_children(cmds.ls(members, long=True))
-                instance.data["proxy"] = set_members
-                self.log.debug("proxy members: {}".format(set_members))
+                instance.data["contentMembers"] = self.get_hierarchy(members)
+            if objset.endswith("proxy_SET"):
+                instance.data["proxy"] = self.get_hierarchy(members)
 
         # Use camera in object set if present else default to render globals
         # camera.
         cameras = cmds.ls(type="camera", long=True)
         renderable = [c for c in cameras if cmds.getAttr("%s.renderable" % c)]
-        camera = renderable[0]
-        for node in instance.data["contentMembers"]:
-            camera_shapes = cmds.listRelatives(
-                node, shapes=True, type="camera"
-            )
-            if camera_shapes:
-                camera = node
-        instance.data["camera"] = camera
+        if renderable:
+            camera = renderable[0]
+            for node in instance.data["contentMembers"]:
+                camera_shapes = cmds.listRelatives(
+                    node, shapes=True, type="camera"
+                )
+                if camera_shapes:
+                    camera = node
+            instance.data["camera"] = camera
+        else:
+            self.log.debug("No renderable cameras found.")
 
         self.log.debug("data: {}".format(instance.data))
 
+    def get_hierarchy(self, nodes):
+        """Return nodes with all their children"""
+        nodes = cmds.ls(nodes, long=True)
+        if not nodes:
+            return []
+        children = get_all_children(nodes)
+        # Make sure nodes merged with children only
+        # contains unique entries
+        return list(set(nodes + children))

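The new `get_hierarchy()` helper above folds the two diverging set branches into one path: long names in, long names plus all descendants out, deduplicated. A tiny sketch of why the `set()` union matters, with invented node names (`get_all_children` can return nodes that are already in the input):

```python
# Hypothetical set members and their resolved children.
nodes = ["|char_GRP"]
children = ["|char_GRP|body_GEO", "|char_GRP"]  # overlap is possible

# The set() union drops the duplicate before the export list is built.
merged = list(set(nodes + children))
print(sorted(merged))  # ['|char_GRP', '|char_GRP|body_GEO']
```
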
@@ -30,11 +30,12 @@ class CollectReview(pyblish.api.InstancePlugin):
         camera = cameras[0] if cameras else None
 
         context = instance.context
-        objectset = context.data['objectsets']
+        objectset = {
+            i.data.get("instance_node") for i in context
+        }
 
-        # Convert enum attribute index to string for Display Lights.
-        index = instance.data.get("displayLights", 0)
-        display_lights = lib.DISPLAY_LIGHTS_VALUES[index]
+        # Collect display lights.
+        display_lights = instance.data.get("displayLights", "default")
         if display_lights == "project_settings":
             settings = instance.context.data["project_settings"]
             settings = settings["maya"]["publish"]["ExtractPlayblast"]

@@ -56,7 +57,7 @@ class CollectReview(pyblish.api.InstancePlugin):
             burninDataMembers["focalLength"] = focal_length
 
         # Account for nested instances like model.
-        reviewable_subsets = list(set(members) & set(objectset))
+        reviewable_subsets = list(set(members) & objectset)
         if reviewable_subsets:
             if len(reviewable_subsets) > 1:
                 raise KnownPublishError(

@@ -30,12 +30,12 @@ class CollectXgen(pyblish.api.InstancePlugin):
         if data["xgmPalettes"]:
             data["xgmPalette"] = data["xgmPalettes"][0]
 
-        data["xgenConnections"] = {}
+        data["xgenConnections"] = set()
         for node in data["xgmSubdPatches"]:
-            data["xgenConnections"][node] = {}
-            for attr in ["transform", "geometry"]:
-                input = get_attribute_input("{}.{}".format(node, attr))
-                data["xgenConnections"][node][attr] = input
+            connected_transform = get_attribute_input(
+                node + ".transform"
+            ).split(".")[0]
+            data["xgenConnections"].add(connected_transform)
 
         # Collect all files under palette root as resources.
         import xgenm

@@ -109,6 +109,7 @@ class ExtractArnoldSceneSource(publish.Extractor):
             return
 
         kwargs["filename"] = file_path.replace(".ass", "_proxy.ass")
+
         filenames, _ = self._extract(
             instance.data["proxy"], attribute_data, kwargs
         )

@@ -57,7 +57,7 @@ class ExtractWorkfileXgen(publish.Extractor):
                 continue
 
             render_start_frame = instance.data["frameStart"]
-            render_end_frame = instance.data["frameStart"]
+            render_end_frame = instance.data["frameEnd"]
 
             if start_frame is None:
                 start_frame = render_start_frame

@@ -51,11 +51,9 @@ class ExtractXgen(publish.Extractor):
         with delete_after() as delete_bin:
             duplicate_nodes = []
             # Collect nodes to export.
-            for _, connections in instance.data["xgenConnections"].items():
-                transform_name = connections["transform"].split(".")[0]
-
+            for node in instance.data["xgenConnections"]:
                 # Duplicate_transform subd patch geometry.
-                duplicate_transform = cmds.duplicate(transform_name)[0]
+                duplicate_transform = cmds.duplicate(node)[0]
                 delete_bin.append(duplicate_transform)
 
                 # Discard the children.

@@ -88,6 +86,18 @@ class ExtractXgen(publish.Extractor):
 
             delete_bin.append(palette)
 
+            # Copy shading assignments.
+            nodes = (
+                instance.data["xgmDescriptions"] +
+                instance.data["xgmSubdPatches"]
+            )
+            for node in nodes:
+                target_node = node.split(":")[-1]
+                shading_engine = cmds.listConnections(
+                    node, type="shadingEngine"
+                )[0]
+                cmds.sets(target_node, edit=True, forceElement=shading_engine)
+
             # Export duplicated palettes.
             xgenm.exportPalette(palette, xgen_path)

@@ -70,5 +70,5 @@ class ValidateArnoldSceneSourceCbid(pyblish.api.InstancePlugin):
 
     @classmethod
     def repair(cls, instance):
-        for content_node, proxy_node in cls.get_invalid_couples(cls, instance):
-            lib.set_id(proxy_node, lib.get_id(content_node), overwrite=False)
+        for content_node, proxy_node in cls.get_invalid_couples(instance):
+            lib.set_id(proxy_node, lib.get_id(content_node), overwrite=True)

@@ -23,11 +23,13 @@ class ValidateAssRelativePaths(pyblish.api.InstancePlugin):
 
     def process(self, instance):
         # we cannot ask this until user open render settings as
-        # `defaultArnoldRenderOptions` doesn't exists
+        # `defaultArnoldRenderOptions` doesn't exist
+        errors = []
+
         try:
-            relative_texture = cmds.getAttr(
+            absolute_texture = cmds.getAttr(
                 "defaultArnoldRenderOptions.absolute_texture_paths")
-            relative_procedural = cmds.getAttr(
+            absolute_procedural = cmds.getAttr(
                 "defaultArnoldRenderOptions.absolute_procedural_paths")
             texture_search_path = cmds.getAttr(
                 "defaultArnoldRenderOptions.tspath"

@@ -42,10 +44,11 @@ class ValidateAssRelativePaths(pyblish.api.InstancePlugin):
 
         scene_dir, scene_basename = os.path.split(cmds.file(q=True, loc=True))
         scene_name, _ = os.path.splitext(scene_basename)
-        assert self.maya_is_true(relative_texture) is not True, \
-            ("Texture path is set to be absolute")
-        assert self.maya_is_true(relative_procedural) is not True, \
-            ("Procedural path is set to be absolute")
+
+        if self.maya_is_true(absolute_texture):
+            errors.append("Texture path is set to be absolute")
+        if self.maya_is_true(absolute_procedural):
+            errors.append("Procedural path is set to be absolute")
 
         anatomy = instance.context.data["anatomy"]

@@ -57,15 +60,20 @@ class ValidateAssRelativePaths(pyblish.api.InstancePlugin):
         for k in keys:
             paths.append("[{}]".format(k))
 
-        self.log.info("discovered roots: {}".format(":".join(paths)))
+        self.log.debug("discovered roots: {}".format(":".join(paths)))
 
-        assert ":".join(paths) in texture_search_path, (
-            "Project roots are not in texture_search_path"
-        )
+        if ":".join(paths) not in texture_search_path:
+            errors.append((
+                "Project roots {} are not in texture_search_path: {}"
+            ).format(paths, texture_search_path))
 
-        assert ":".join(paths) in procedural_search_path, (
-            "Project roots are not in procedural_search_path"
-        )
+        if ":".join(paths) not in procedural_search_path:
+            errors.append((
+                "Project roots {} are not in procedural_search_path: {}"
+            ).format(paths, procedural_search_path))
+
+        if errors:
+            raise PublishValidationError("\n".join(errors))
 
     @classmethod
     def repair(cls, instance):

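This validator now follows a collect-all-then-raise pattern instead of asserting on the first failure, so the artist sees every path problem in a single report. A minimal, self-contained sketch of the pattern (the checks are placeholders, and `ValueError` stands in for `PublishValidationError` to keep it runnable anywhere):

```python
# Hypothetical stand-ins for the real Maya attribute checks.
checks = {
    "Texture path is set to be absolute": True,
    "Procedural path is set to be absolute": False,
}

errors = [message for message, failed in checks.items() if failed]
if errors:
    # One exception carrying every failure, not just the first one hit.
    raise ValueError("\n".join(errors))
```
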
@@ -1,10 +1,8 @@
-import pyblish.api
 from maya import cmds
 
+import pyblish.api
 from openpype.pipeline.publish import (
-    RepairAction,
-    ValidateContentsOrder,
-)
+    PublishValidationError, RepairAction, ValidateContentsOrder)
 
 
 class ValidateRenderImageRule(pyblish.api.InstancePlugin):

@@ -27,12 +25,12 @@ class ValidateRenderImageRule(pyblish.api.InstancePlugin):
         required_images_rule = self.get_default_render_image_folder(instance)
         current_images_rule = cmds.workspace(fileRuleEntry="images")
 
-        assert current_images_rule == required_images_rule, (
-            "Invalid workspace `images` file rule value: '{}'. "
-            "Must be set to: '{}'".format(
-                current_images_rule, required_images_rule
-            )
-        )
+        if current_images_rule != required_images_rule:
+            raise PublishValidationError(
+                (
+                    "Invalid workspace `images` file rule value: '{}'. "
+                    "Must be set to: '{}'"
+                ).format(current_images_rule, required_images_rule))
 
     @classmethod
     def repair(cls, instance):

@@ -278,16 +278,18 @@ class ValidateRenderSettings(pyblish.api.InstancePlugin):
 
         # go through definitions and test if such node.attribute exists.
         # if so, compare its value from the one required.
-        for attribute, data in cls.get_nodes(instance, renderer).items():
+        for data in cls.get_nodes(instance, renderer):
             for node in data["nodes"]:
                 try:
                     render_value = cmds.getAttr(
-                        "{}.{}".format(node, attribute)
+                        "{}.{}".format(node, data["attribute"])
                     )
                 except PublishValidationError:
                     invalid = True
                     cls.log.error(
-                        "Cannot get value of {}.{}".format(node, attribute)
+                        "Cannot get value of {}.{}".format(
+                            node, data["attribute"]
+                        )
                     )
                 else:
                     if render_value not in data["values"]:

@@ -295,7 +297,10 @@ class ValidateRenderSettings(pyblish.api.InstancePlugin):
                         cls.log.error(
                             "Invalid value {} set on {}.{}. Expecting "
                             "{}".format(
-                                render_value, node, attribute, data["values"]
+                                render_value,
+                                node,
+                                data["attribute"],
+                                data["values"]
                             )
                         )

@@ -309,7 +314,7 @@ class ValidateRenderSettings(pyblish.api.InstancePlugin):
             "{}_render_attributes".format(renderer)
-        ) or []
-        result = {}
+        )
+        result = []
         for attr, values in OrderedDict(validation_settings).items():
             values = [convert_to_int_or_float(v) for v in values if v]
 

@@ -339,7 +344,13 @@ class ValidateRenderSettings(pyblish.api.InstancePlugin):
                 )
                 continue
 
-            result[attribute_name] = {"nodes": nodes, "values": values}
+            result.append(
+                {
+                    "attribute": attribute_name,
+                    "nodes": nodes,
+                    "values": values
+                }
+            )
 
         return result

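Why a list of dicts instead of the old `result[attribute_name] = {...}` mapping: a dict keyed by attribute name silently drops a second rule for the same attribute (for example the same attribute validated on two different node types), while a list keeps every entry. A small illustrative sketch with invented rule data:

```python
# Two hypothetical rules sharing one attribute name.
rules = [
    ("mergeAOVs", {"nodes": ["driverA"], "values": [1]}),
    ("mergeAOVs", {"nodes": ["driverB"], "values": [0]}),
]

as_dict = {attr: data for attr, data in rules}
print(len(as_dict))  # 1 -> the first rule was overwritten

as_list = [dict(data, attribute=attr) for attr, data in rules]
print(len(as_list))  # 2 -> both rules survive validation
```
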
@@ -354,11 +365,11 @@ class ValidateRenderSettings(pyblish.api.InstancePlugin):
             "{aov_separator}", instance.data.get("aovSeparator", "_")
         )
 
-        for attribute, data in cls.get_nodes(instance, renderer).items():
+        for data in cls.get_nodes(instance, renderer):
             if not data["values"]:
                 continue
             for node in data["nodes"]:
-                lib.set_attribute(attribute, data["values"][0], node)
+                lib.set_attribute(data["attribute"], data["values"][0], node)
 
         with lib.renderlayer(layer_node):
             default = lib.RENDER_ATTRS['default']

@@ -368,6 +379,17 @@ class ValidateRenderSettings(pyblish.api.InstancePlugin):
             cmds.setAttr("defaultRenderGlobals.animation", True)
 
+            # Repair prefix
+            if renderer == "arnold":
+                multipart = cmds.getAttr("defaultArnoldDriver.mergeAOVs")
+                if multipart:
+                    separator_variations = [
+                        "_<RenderPass>",
+                        "<RenderPass>_",
+                        "<RenderPass>",
+                    ]
+                    for variant in separator_variations:
+                        default_prefix = default_prefix.replace(variant, "")
+
             if renderer != "renderman":
                 node = render_attrs["node"]
                 prefix_attr = render_attrs["prefix"]

@@ -61,9 +61,7 @@ class ValidateXgen(pyblish.api.InstancePlugin):
         # We need a namespace else there will be a naming conflict when
         # extracting because of stripping namespaces and parenting to world.
         node_names = [instance.data["xgmPalette"]]
-        for _, connections in instance.data["xgenConnections"].items():
-            node_names.append(connections["transform"].split(".")[0])
-
+        node_names.extend(instance.data["xgenConnections"])
         non_namespaced_nodes = [n for n in node_names if ":" not in n]
         if non_namespaced_nodes:
             raise PublishValidationError(

@@ -30,6 +30,7 @@ from openpype.lib import (
     env_value_to_bool,
     Logger,
     get_version_from_path,
+    StringTemplate,
 )
 
 from openpype.settings import (

@@ -39,6 +40,7 @@ from openpype.settings import (
 from openpype.modules import ModulesManager
 from openpype.pipeline.template_data import get_template_data_with_names
 from openpype.pipeline import (
+    get_current_project_name,
     discover_legacy_creator_plugins,
     legacy_io,
     Anatomy,

@@ -1299,13 +1301,8 @@ def create_write_node(
 
     # build file path to workfiles
     fdir = str(anatomy_filled["work"]["folder"]).replace("\\", "/")
-    fpath = data["fpath_template"].format(
-        work=fdir,
-        version=data["version"],
-        subset=data["subset"],
-        frame=data["frame"],
-        ext=ext
-    )
+    data["work"] = fdir
+    fpath = StringTemplate(data["fpath_template"]).format_strict(data)
 
     # create directory
     if not os.path.isdir(os.path.dirname(fpath)):

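The switch to `StringTemplate.format_strict` above means the write-node path template is filled from one data dict and fails loudly on a missing key instead of silently producing a broken path. A rough approximation of that "strict" behavior using only plain Python formatting (the template matches the one used elsewhere in this commit; the values are made up):

```python
template = "{work}/renders/nuke/{subset}/{subset}.{frame}.{ext}"
data = {"work": "/proj/shot/work", "subset": "renderMain",
        "frame": "####", "ext": "exr"}

print(template.format(**data))
# /proj/shot/work/renders/nuke/renderMain/renderMain.####.exr

del data["ext"]
try:
    template.format(**data)
except KeyError as exc:  # strict: a missing key raises instead of passing
    print("missing template key:", exc)
```
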
@@ -1403,8 +1400,6 @@ def create_write_node(
     # adding write to read button
     add_button_clear_rendered(GN, os.path.dirname(fpath))
 
-    GN.addKnob(nuke.Text_Knob('', ''))
-
     # set tile color
     tile_color = next(
         iter(

@@ -2003,63 +1998,104 @@ class WorkfileSettings(object):
                 "Attention! Viewer nodes {} were erased."
                 "It had wrong color profile".format(erased_viewers))
 
-    def set_root_colorspace(self, nuke_colorspace):
+    def set_root_colorspace(self, imageio_host):
         ''' Adds correct colorspace to root
 
         Arguments:
-            nuke_colorspace (dict): adjustmensts from presets
+            imageio_host (dict): host colorspace configurations
 
         '''
-        workfile_settings = nuke_colorspace["workfile"]
+        config_data = get_imageio_config(
+            project_name=get_current_project_name(),
+            host_name="nuke"
+        )
 
-        # resolve config data if they are enabled in host
-        config_data = None
-        if nuke_colorspace.get("ocio_config", {}).get("enabled"):
-            # switch ocio config to custom config
-            workfile_settings["OCIO_config"] = "custom"
-            workfile_settings["colorManagement"] = "OCIO"
+        workfile_settings = imageio_host["workfile"]
 
-            # get resolved ocio config path
-            config_data = get_imageio_config(
-                legacy_io.active_project(), "nuke"
-            )
+        if not config_data:
+            # TODO: backward compatibility for old projects - remove later
+            # perhaps old project overrides is having it set to older version
+            # with use of `customOCIOConfigPath`
+            resolved_path = None
+            if workfile_settings.get("customOCIOConfigPath"):
+                unresolved_path = workfile_settings["customOCIOConfigPath"]
+                ocio_paths = unresolved_path[platform.system().lower()]
 
-        # first set OCIO
-        if self._root_node["colorManagement"].value() \
-                not in str(workfile_settings["colorManagement"]):
-            self._root_node["colorManagement"].setValue(
-                str(workfile_settings["colorManagement"]))
+                for ocio_p in ocio_paths:
+                    resolved_path = str(ocio_p).format(**os.environ)
+                    if not os.path.exists(resolved_path):
+                        continue
 
-            # we dont need the key anymore
-            workfile_settings.pop("colorManagement")
+            if resolved_path:
+                # set values to root
+                self._root_node["colorManagement"].setValue("OCIO")
+                self._root_node["OCIO_config"].setValue("custom")
+                self._root_node["customOCIOConfigPath"].setValue(
+                    resolved_path)
+            else:
+                # no ocio config found and no custom path used
+                if self._root_node["colorManagement"].value() \
+                        not in str(workfile_settings["colorManagement"]):
+                    self._root_node["colorManagement"].setValue(
+                        str(workfile_settings["colorManagement"]))
 
-        # second set ocio version
-        if self._root_node["OCIO_config"].value() \
-                not in str(workfile_settings["OCIO_config"]):
-            self._root_node["OCIO_config"].setValue(
-                str(workfile_settings["OCIO_config"]))
+                # second set ocio version
+                if self._root_node["OCIO_config"].value() \
+                        not in str(workfile_settings["OCIO_config"]):
+                    self._root_node["OCIO_config"].setValue(
+                        str(workfile_settings["OCIO_config"]))
 
-            # we dont need the key anymore
-            workfile_settings.pop("OCIO_config")
+        else:
+            # set values to root
+            self._root_node["colorManagement"].setValue("OCIO")
 
-        # third set ocio custom path
-        if config_data:
             self._root_node["customOCIOConfigPath"].setValue(
                 str(config_data["path"]).replace("\\", "/")
             )
-            # backward compatibility, remove in case it exists
-            workfile_settings.pop("customOCIOConfigPath")
+
+        # we dont need the key anymore
+        workfile_settings.pop("customOCIOConfigPath", None)
+        workfile_settings.pop("colorManagement", None)
+        workfile_settings.pop("OCIO_config", None)
 
         # then set the rest
-        for knob, value in workfile_settings.items():
+        for knob, value_ in workfile_settings.items():
             # skip unfilled ocio config path
             # it will be dict in value
-            if isinstance(value, dict):
+            if isinstance(value_, dict):
                 continue
-            if self._root_node[knob].value() not in value:
-                self._root_node[knob].setValue(str(value))
+            # skip empty values
+            if not value_:
+                continue
+            if self._root_node[knob].value() not in value_:
+                self._root_node[knob].setValue(str(value_))
                 log.debug("nuke.root()['{}'] changed to: {}".format(
-                    knob, value))
+                    knob, value_))
+
+        # set ocio config path
+        if config_data:
+            current_ocio_path = os.getenv("OCIO")
+            if current_ocio_path != config_data["path"]:
+                message = """
+It seems like there's a mismatch between the OCIO config path set in your Nuke
+settings and the actual path set in your OCIO environment.
+
+To resolve this, please follow these steps:
+1. Close Nuke if it's currently open.
+2. Reopen Nuke.
+
+Please note the paths for your reference:
+
+- The OCIO environment path currently set:
+  `{env_path}`
+
+- The path in your current Nuke settings:
+  `{settings_path}`
+
+Reopening Nuke should synchronize these paths and resolve any discrepancies.
+"""
+                nuke.message(
+                    message.format(
+                        env_path=current_ocio_path,
+                        settings_path=config_data["path"]
+                    )
+                )
 
     def set_writes_colorspace(self):
         ''' Adds correct colorspace to write node dict

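One detail worth calling out from the backward-compatibility branch above: legacy `customOCIOConfigPath` entries are stored per platform and may embed environment variables, which `str(ocio_p).format(**os.environ)` expands. A tiny standalone sketch of that resolution step (the paths and the `STUDIO_ROOT` variable are illustrative, not from the settings schema):

```python
import os
import platform

# Hypothetical legacy settings value: paths per platform, with env tokens.
unresolved_path = {
    "windows": ["{STUDIO_ROOT}/ocio/config.ocio"],
    "linux": ["{STUDIO_ROOT}/ocio/config.ocio"],
    "darwin": ["{STUDIO_ROOT}/ocio/config.ocio"],
}

os.environ.setdefault("STUDIO_ROOT", "/mnt/studio")
ocio_paths = unresolved_path[platform.system().lower()]
for ocio_p in ocio_paths:
    resolved = str(ocio_p).format(**os.environ)
    print(resolved)  # /mnt/studio/ocio/config.ocio
```
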
@@ -2156,7 +2192,7 @@ class WorkfileSettings(object):
 
         log.debug(changes)
         if changes:
-            msg = "Read nodes are not set to correct colospace:\n\n"
+            msg = "Read nodes are not set to correct colorspace:\n\n"
             for nname, knobs in changes.items():
                 msg += (
                     " - node: '{0}' is now '{1}' but should be '{2}'\n"

@@ -237,15 +237,25 @@ def _install_menu():
 
     menu.addSeparator()
     if not ASSIST:
+        # only add parent if nuke version is 14 or higher
+        # known issue with no solution yet
         menu.addCommand(
             "Create...",
             lambda: host_tools.show_publisher(
+                parent=(
+                    main_window if nuke.NUKE_VERSION_RELEASE >= 14 else None
+                ),
                 tab="create"
             )
         )
+        # only add parent if nuke version is 14 or higher
+        # known issue with no solution yet
         menu.addCommand(
             "Publish...",
             lambda: host_tools.show_publisher(
+                parent=(
+                    main_window if nuke.NUKE_VERSION_RELEASE >= 14 else None
+                ),
                 tab="publish"
             )
         )

@@ -564,6 +574,7 @@ def remove_instance(instance):
     instance_node = instance.transient_data["node"]
     instance_knob = instance_node.knobs()[INSTANCE_DATA_KNOB]
     instance_node.removeKnob(instance_knob)
+    nuke.delete(instance_node)
 
 
 def select_instance(instance):

@@ -75,20 +75,6 @@ class NukeCreator(NewCreator):
         for pass_key in keys:
             creator_attrs[pass_key] = pre_create_data[pass_key]
 
-    def add_info_knob(self, node):
-        if "OP_info" in node.knobs().keys():
-            return
-
-        # add info text
-        info_knob = nuke.Text_Knob("OP_info", "")
-        info_knob.setValue("""
-<span style=\"color:#fc0303\">
-<p>This node is maintained by <b>OpenPype Publisher</b>.</p>
-<p>To remove it use Publisher gui.</p>
-</span>
-""")
-        node.addKnob(info_knob)
-
     def check_existing_subset(self, subset_name):
         """Make sure subset name is unique.

@@ -153,8 +139,6 @@ class NukeCreator(NewCreator):
             created_node = nuke.createNode(node_type)
             created_node["name"].setValue(node_name)
 
-            self.add_info_knob(created_node)
-
             for key, values in node_knobs.items():
                 if key in created_node.knobs():
                     created_node["key"].setValue(values)

@@ -25,6 +25,7 @@ from .lib import (
     select_nodes,
     duplicate_node,
     node_tempfile,
+    get_main_window
 )
 
 PLACEHOLDER_SET = "PLACEHOLDERS_SET"

@@ -963,8 +964,9 @@ def update_workfile_template(*args):
 def create_placeholder(*args):
     host = registered_host()
     builder = NukeTemplateBuilder(host)
-    window = WorkfileBuildPlaceholderDialog(host, builder)
-    window.exec_()
+    window = WorkfileBuildPlaceholderDialog(host, builder,
+                                            parent=get_main_window())
+    window.show()
 
 
 def update_placeholder(*args):

@@ -988,6 +990,7 @@ def update_placeholder(*args):
         raise ValueError("Too many selected nodes")
 
     placeholder_item = placeholder_items[0]
-    window = WorkfileBuildPlaceholderDialog(host, builder)
+    window = WorkfileBuildPlaceholderDialog(host, builder,
+                                            parent=get_main_window())
    window.set_update_mode(placeholder_item)
    window.exec_()

@@ -36,8 +36,6 @@ class CreateBackdrop(NukeCreator):
             created_node["note_font_size"].setValue(24)
             created_node["label"].setValue("[{}]".format(node_name))
 
-            self.add_info_knob(created_node)
-
             return created_node
 
     def create(self, subset_name, instance_data, pre_create_data):

@@ -39,8 +39,6 @@ class CreateCamera(NukeCreator):
 
             created_node["name"].setValue(node_name)
 
-            self.add_info_knob(created_node)
-
             return created_node
 
     def create(self, subset_name, instance_data, pre_create_data):

@@ -40,8 +40,6 @@ class CreateGizmo(NukeCreator):
 
             created_node["name"].setValue(node_name)
 
-            self.add_info_knob(created_node)
-
             return created_node
 
     def create(self, subset_name, instance_data, pre_create_data):

@@ -40,8 +40,6 @@ class CreateModel(NukeCreator):
 
             created_node["name"].setValue(node_name)
 
-            self.add_info_knob(created_node)
-
             return created_node
 
     def create(self, subset_name, instance_data, pre_create_data):

@@ -32,7 +32,7 @@ class CreateSource(NukeCreator):
             read_node["tile_color"].setValue(
                 int(self.node_color, 16))
             read_node["name"].setValue(node_name)
-            self.add_info_knob(read_node)
 
             return read_node
 
     def create(self, subset_name, instance_data, pre_create_data):

@@ -86,7 +86,6 @@ class CreateWriteImage(napi.NukeWriteCreator):
                 "frame": nuke.frame()
             }
         )
-        self.add_info_knob(created_node)
 
         self._add_frame_range_limit(created_node, instance_data)

@@ -74,7 +74,6 @@ class CreateWritePrerender(napi.NukeWriteCreator):
                 "height": height
             }
         )
-        self.add_info_knob(created_node)
 
         self._add_frame_range_limit(created_node)

@@ -66,7 +66,6 @@ class CreateWriteRender(napi.NukeWriteCreator):
                 "height": height
             }
         )
-        self.add_info_knob(created_node)
 
         self.integrate_links(created_node, outputs=False)

@@ -23,7 +23,7 @@ class NukeRenderLocal(publish.Extractor,
     order = pyblish.api.ExtractorOrder
     label = "Render Local"
     hosts = ["nuke"]
-    families = ["render.local", "prerender.local", "still.local"]
+    families = ["render.local", "prerender.local", "image.local"]
 
     def process(self, instance):
         child_nodes = (

@@ -136,9 +136,9 @@ class NukeRenderLocal(publish.Extractor,
             families.remove('prerender.local')
             families.insert(0, "prerender")
             instance.data["anatomyData"]["family"] = "prerender"
-        elif "still.local" in families:
+        elif "image.local" in families:
             instance.data['family'] = 'image'
-            families.remove('still.local')
+            families.remove('image.local')
             instance.data["anatomyData"]["family"] = "image"
         instance.data["families"] = families

151 openpype/hosts/nuke/startup/custom_write_node.py Normal file

@@ -0,0 +1,151 @@
+""" OpenPype custom script for setting up write nodes for non-publish """
+import os
+import nuke
+import nukescripts
+from openpype.pipeline import Anatomy
+from openpype.hosts.nuke.api.lib import (
+    set_node_knobs_from_settings,
+    get_nuke_imageio_settings
+)
+
+
+temp_rendering_path_template = (
+    "{work}/renders/nuke/{subset}/{subset}.{frame}.{ext}")
+
+knobs_setting = {
+    "knobs": [
+        {
+            "type": "text",
+            "name": "file_type",
+            "value": "exr"
+        },
+        {
+            "type": "text",
+            "name": "datatype",
+            "value": "16 bit half"
+        },
+        {
+            "type": "text",
+            "name": "compression",
+            "value": "Zip (1 scanline)"
+        },
+        {
+            "type": "bool",
+            "name": "autocrop",
+            "value": True
+        },
+        {
+            "type": "color_gui",
+            "name": "tile_color",
+            "value": [
+                186,
+                35,
+                35,
+                255
+            ]
+        },
+        {
+            "type": "text",
+            "name": "channels",
+            "value": "rgb"
+        },
+        {
+            "type": "bool",
+            "name": "create_directories",
+            "value": True
+        }
+    ]
+}
+
+
+class WriteNodeKnobSettingPanel(nukescripts.PythonPanel):
+    """ Write Node's Knobs Settings Panel """
+    def __init__(self):
+        nukescripts.PythonPanel.__init__(self, "Set Knobs Value(Write Node)")
+
+        preset_name, _ = self.get_node_knobs_setting()
+        # create knobs
+
+        self.selected_preset_name = nuke.Enumeration_Knob(
+            'preset_selector', 'presets', preset_name)
+        # add knobs to panel
+        self.addKnob(self.selected_preset_name)
+
+    def process(self):
+        """ Process the panel values. """
+        write_selected_nodes = [
+            selected_nodes for selected_nodes in nuke.selectedNodes()
+            if selected_nodes.Class() == "Write"]
+
+        selected_preset = self.selected_preset_name.value()
+        ext = None
+        knobs = knobs_setting["knobs"]
+        preset_name, node_knobs_presets = (
+            self.get_node_knobs_setting(selected_preset)
+        )
+
+        if selected_preset and preset_name:
+            if not node_knobs_presets:
+                nuke.message(
+                    "No knobs value found in subset group.."
+                    "\nDefault setting will be used..")
+            else:
+                knobs = node_knobs_presets
+
+        ext_knob_list = [knob for knob in knobs if knob["name"] == "file_type"]
+        if not ext_knob_list:
+            nuke.message(
+                "ERROR: No file type found in the subset's knobs."
+                "\nPlease add one to complete setting up the node")
+            return
+        else:
+            for knob in ext_knob_list:
+                ext = knob["value"]
+
+        anatomy = Anatomy()
+
+        frame_padding = int(
+            anatomy.templates["render"].get(
+                "frame_padding"
+            )
+        )
+        for write_node in write_selected_nodes:
+            # data for mapping the path
+            data = {
+                "work": os.getenv("AVALON_WORKDIR"),
+                "subset": write_node["name"].value(),
+                "frame": "#" * frame_padding,
+                "ext": ext
+            }
+            file_path = temp_rendering_path_template.format(**data)
+            file_path = file_path.replace("\\", "/")
+            write_node["file"].setValue(file_path)
+            set_node_knobs_from_settings(write_node, knobs)
+
+    def get_node_knobs_setting(self, selected_preset=None):
+        preset_name = []
+        knobs_nodes = []
+        settings = [
+            node_settings for node_settings
+            in get_nuke_imageio_settings()["nodes"]["overrideNodes"]
+            if node_settings["nukeNodeClass"] == "Write"
+            and node_settings["subsets"]
+        ]
+        if not settings:
+            return
+
+        for i, _ in enumerate(settings):
+            if selected_preset in settings[i]["subsets"]:
+                knobs_nodes = settings[i]["knobs"]
+
+        for setting in settings:
+            for subset in setting["subsets"]:
+                preset_name.append(subset)
+
+        return preset_name, knobs_nodes
+
+
+def main():
+    p_ = WriteNodeKnobSettingPanel()
+    if p_.showModalDialog():
+        print(p_.process())

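For context, a startup script like this is typically exposed from the host's `menu.py` so artists can trigger the panel on demand. A hedged sketch of such wiring — the menu label and the assumption that `custom_write_node` is importable from Nuke's plugin path are illustrative, not part of this commit:

```python
# Hypothetical snippet for a Nuke menu.py; not part of this commit.
import nuke

menubar = nuke.menu("Nuke")
menu = menubar.addMenu("OpenPype Tools")
menu.addCommand(
    "Set Write Node Knobs...",
    "import custom_write_node; custom_write_node.main()"
)
```
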
@@ -222,7 +222,6 @@ class CollectContextDataSAPublish(pyblish.api.ContextPlugin):
             "label": subset,
             "name": subset,
             "family": in_data["family"],
-            # "version": in_data.get("version", 1),
             "frameStart": in_data.get("representations", [None])[0].get(
                 "frameStart", None
             ),

@@ -232,6 +231,14 @@ class CollectContextDataSAPublish(pyblish.api.ContextPlugin):
                 "families": instance_families
             }
         )
+        # Fill version only if 'use_next_available_version' is disabled
+        # and version is filled in instance data
+        version = in_data.get("version")
+        use_next_available_version = in_data.get(
+            "use_next_available_version", True)
+        if not use_next_available_version and version is not None:
+            instance.data["version"] = version
 
         self.log.info("collected instance: {}".format(pformat(instance.data)))
         self.log.info("parsing data: {}".format(pformat(in_data)))

@@ -1,4 +1,14 @@
-from openpype.lib.attribute_definitions import FileDef
+from openpype.client import (
+    get_assets,
+    get_subsets,
+    get_last_versions,
+)
+from openpype.lib.attribute_definitions import (
+    FileDef,
+    BoolDef,
+    NumberDef,
+    UISeparatorDef,
+)
 from openpype.lib.transcoding import IMAGE_EXTENSIONS, VIDEO_EXTENSIONS
 from openpype.pipeline.create import (
     Creator,

@@ -94,6 +104,7 @@ class TrayPublishCreator(Creator):
 class SettingsCreator(TrayPublishCreator):
     create_allow_context_change = True
     create_allow_thumbnail = True
+    allow_version_control = False
 
     extensions = []

@@ -101,8 +112,18 @@ class SettingsCreator(TrayPublishCreator):
         # Pass precreate data to creator attributes
         thumbnail_path = pre_create_data.pop(PRE_CREATE_THUMBNAIL_KEY, None)
 
+        # Fill 'version_to_use' if version control is enabled
+        if self.allow_version_control:
+            asset_name = data["asset"]
+            subset_docs_by_asset_id = self._prepare_next_versions(
+                [asset_name], [subset_name])
+            version = subset_docs_by_asset_id[asset_name].get(subset_name)
+            pre_create_data["version_to_use"] = version
+            data["_previous_last_version"] = version
+
         data["creator_attributes"] = pre_create_data
         data["settings_creator"] = True
 
         # Create new instance
         new_instance = CreatedInstance(self.family, subset_name, data, self)

@@ -111,7 +132,158 @@ class SettingsCreator(TrayPublishCreator):
         if thumbnail_path:
             self.set_instance_thumbnail_path(new_instance.id, thumbnail_path)
 
+    def _prepare_next_versions(self, asset_names, subset_names):
+        """Prepare next versions for given asset and subset names.
+
+        Todos:
+            Expect combination of subset names by asset name to avoid
+                unnecessary server calls for unused subsets.
+
+        Args:
+            asset_names (Iterable[str]): Asset names.
+            subset_names (Iterable[str]): Subset names.
+
+        Returns:
+            dict[str, dict[str, int]]: Last versions by asset
+                and subset names.
+        """
+
+        # Prepare all versions for all combinations to '1'
+        subset_docs_by_asset_id = {
+            asset_name: {
+                subset_name: 1
+                for subset_name in subset_names
+            }
+            for asset_name in asset_names
+        }
+        if not asset_names or not subset_names:
+            return subset_docs_by_asset_id
+
+        asset_docs = get_assets(
+            self.project_name,
+            asset_names=asset_names,
+            fields=["_id", "name"]
+        )
+        asset_names_by_id = {
+            asset_doc["_id"]: asset_doc["name"]
+            for asset_doc in asset_docs
+        }
+        subset_docs = list(get_subsets(
+            self.project_name,
+            asset_ids=asset_names_by_id.keys(),
+            subset_names=subset_names,
+            fields=["_id", "name", "parent"]
+        ))
+
+        subset_ids = {subset_doc["_id"] for subset_doc in subset_docs}
+        last_versions = get_last_versions(
+            self.project_name,
+            subset_ids,
+            fields=["name", "parent"])
+
+        for subset_doc in subset_docs:
+            asset_id = subset_doc["parent"]
+            asset_name = asset_names_by_id[asset_id]
+            subset_name = subset_doc["name"]
+            subset_id = subset_doc["_id"]
+            last_version = last_versions.get(subset_id)
+            version = 0
+            if last_version is not None:
+                version = last_version["name"]
+            subset_docs_by_asset_id[asset_name][subset_name] += version
+        return subset_docs_by_asset_id
+
+    def _fill_next_versions(self, instances_data):
+        """Fill next version for instances.
+
+        Instances have also stored previous next version to be able to
+        recognize if user did enter different version. If version was
+        not changed by user, or user set it to '0' the next version will be
+        updated by current database state.
+        """
+
+        filtered_instance_data = []
+        for instance in instances_data:
+            previous_last_version = instance.get("_previous_last_version")
+            creator_attributes = instance["creator_attributes"]
+            use_next_version = creator_attributes.get(
+                "use_next_version", True)
+            version = creator_attributes.get("version_to_use", 0)
+            if (
+                use_next_version
+                or version == 0
+                or version == previous_last_version
+            ):
+                filtered_instance_data.append(instance)
+
+        asset_names = {
+            instance["asset"]
+            for instance in filtered_instance_data}
+        subset_names = {
+            instance["subset"]
+            for instance in filtered_instance_data}
+        subset_docs_by_asset_id = self._prepare_next_versions(
+            asset_names, subset_names
+        )
+        for instance in filtered_instance_data:
+            asset_name = instance["asset"]
+            subset_name = instance["subset"]
+            version = subset_docs_by_asset_id[asset_name][subset_name]
+            instance["creator_attributes"]["version_to_use"] = version
+            instance["_previous_last_version"] = version
+
+    def collect_instances(self):
+        """Collect instances from host.
+
+        Overriden to be able to manage version control attributes. If version
+        control is disabled, the attributes will be removed from instances,
+        and next versions are filled if is version control enabled.
+        """
+
+        instances_by_identifier = cache_and_get_instances(
+            self, SHARED_DATA_KEY, list_instances
+        )
+        instances = instances_by_identifier[self.identifier]
+        if not instances:
+            return
+
+        if self.allow_version_control:
+            self._fill_next_versions(instances)
+
+        for instance_data in instances:
+            # Make sure that there are not data related to version control
+            # if plugin does not support it
+            if not self.allow_version_control:
+                instance_data.pop("_previous_last_version", None)
+                creator_attributes = instance_data["creator_attributes"]
+                creator_attributes.pop("version_to_use", None)
+                creator_attributes.pop("use_next_version", None)
+
+            instance = CreatedInstance.from_existing(instance_data, self)
+            self._add_instance_to_context(instance)
+
     def get_instance_attr_defs(self):
+        defs = self.get_pre_create_attr_defs()
+        if self.allow_version_control:
+            defs += [
+                UISeparatorDef(),
+                BoolDef(
+                    "use_next_version",
+                    default=True,
+                    label="Use next version",
+                ),
+                NumberDef(
+                    "version_to_use",
+                    default=1,
+                    minimum=0,
+                    maximum=999,
+                    label="Version to use",
+                )
+            ]
+        return defs
+
+    def get_pre_create_attr_defs(self):
+        # Use same attributes as for instance attributes
         return [
             FileDef(
                 "representation_files",

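The flow above boils down to: ask the server for the last version per (asset, subset) pair, then offer `last + 1` as the default while still letting the artist pin an explicit number. A compact sketch of that arithmetic without the OpenPype client calls (the data is invented):

```python
# Hypothetical last versions as the server might report them.
last_versions = {("sh010", "reviewMain"): 4}  # (asset, subset) -> version

def next_version(asset, subset):
    # Seed of 1 plus the last known version mirrors the creator logic:
    # no existing version -> 1, last version 4 -> 5.
    return 1 + last_versions.get((asset, subset), 0)

print(next_version("sh010", "reviewMain"))  # 5
print(next_version("sh020", "reviewMain"))  # 1
```
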
@@ -132,10 +304,6 @@ class SettingsCreator(TrayPublishCreator):
             )
         ]
 
-    def get_pre_create_attr_defs(self):
-        # Use same attributes as for instance attrobites
-        return self.get_instance_attr_defs()
-
     @classmethod
     def from_settings(cls, item_data):
         identifier = item_data["identifier"]

@@ -155,6 +323,8 @@ class SettingsCreator(TrayPublishCreator):
                 "extensions": item_data["extensions"],
                 "allow_sequences": item_data["allow_sequences"],
                 "allow_multiple_items": item_data["allow_multiple_items"],
-                "default_variants": item_data["default_variants"]
+                "allow_version_control": item_data.get(
+                    "allow_version_control", False),
+                "default_variants": item_data["default_variants"],
             }
         )

@@ -487,7 +487,22 @@ or updating already created. Publishing will create OTIO file.
             )
 
             # get video stream data
-            video_stream = media_data["streams"][0]
+            video_streams = []
+            audio_streams = []
+            for stream in media_data["streams"]:
+                codec_type = stream.get("codec_type")
+                if codec_type == "audio":
+                    audio_streams.append(stream)
+
+                elif codec_type == "video":
+                    video_streams.append(stream)
+
+            if not video_streams:
+                raise ValueError(
+                    "Could not find video stream in source file."
+                )
+
+            video_stream = video_streams[0]
             return_data = {
                 "video": True,
                 "start_frame": 0,

@@ -500,12 +515,7 @@ or updating already created. Publishing will create OTIO file.
             }
 
             # get audio streams data
-            audio_stream = [
-                stream for stream in media_data["streams"]
-                if stream["codec_type"] == "audio"
-            ]
-
-            if audio_stream:
+            if audio_streams:
                 return_data["audio"] = True
 
         except Exception as exc:

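The change above partitions ffprobe's stream list by `codec_type` up front instead of assuming stream zero is video, which breaks on files where an audio stream is listed first. A standalone sketch of the same partitioning over a hypothetical ffprobe-style payload:

```python
# Hypothetical ffprobe-style payload; real data comes from ffprobe JSON.
media_data = {"streams": [
    {"codec_type": "audio", "codec_name": "aac"},
    {"codec_type": "video", "codec_name": "h264"},
]}

video_streams = []
audio_streams = []
for stream in media_data["streams"]:
    codec_type = stream.get("codec_type")
    if codec_type == "audio":
        audio_streams.append(stream)
    elif codec_type == "video":
        video_streams.append(stream)

if not video_streams:
    raise ValueError("Could not find video stream in source file.")

video_stream = video_streams[0]  # h264, not the leading aac stream
print(video_stream["codec_name"], "audio:", bool(audio_streams))
```
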
@@ -47,6 +47,8 @@ class CollectSettingsSimpleInstances(pyblish.api.InstancePlugin):
                 "Created temp staging directory for instance {}. {}"
             ).format(instance_label, tmp_folder))
 
+        self._fill_version(instance, instance_label)
+
         # Store filepaths for validation of their existence
         source_filepaths = []
         # Make sure there are no representations with same name

@@ -93,6 +95,28 @@ class CollectSettingsSimpleInstances(pyblish.api.InstancePlugin):
             )
         )
 
+    def _fill_version(self, instance, instance_label):
+        """Fill instance version under which will be instance integrated.
+
+        Instance must have set 'use_next_version' to 'False'
+        and 'version_to_use' to version to use.
+
+        Args:
+            instance (pyblish.api.Instance): Instance to fill version for.
+            instance_label (str): Label of instance to fill version for.
+        """
+
+        creator_attributes = instance.data["creator_attributes"]
+        use_next_version = creator_attributes.get("use_next_version", True)
+        # If 'version_to_use' is '0' it means that next version should be used
+        version_to_use = creator_attributes.get("version_to_use", 0)
+        if use_next_version or not version_to_use:
+            return
+        instance.data["version"] = version_to_use
+        self.log.debug(
+            "Version for instance \"{}\" was set to \"{}\"".format(
+                instance_label, version_to_use))
+
     def _create_main_representations(
         self,
         instance,

Some files were not shown because too many files have changed in this diff.