mirror of
https://github.com/ynput/ayon-core.git
synced 2025-12-24 21:04:40 +01:00
Merge branch 'develop' into tests/publish_process
This commit is contained in:
commit
83d5bff73e
260 changed files with 3033 additions and 1979 deletions
6 .github/ISSUE_TEMPLATE/bug_report.yml (vendored)

@@ -35,6 +35,9 @@ body:
      label: Version
      description: What version are you running? Look to OpenPype Tray
      options:
        - 3.16.5
        - 3.16.5-nightly.5
        - 3.16.5-nightly.4
        - 3.16.5-nightly.3
        - 3.16.5-nightly.2
        - 3.16.5-nightly.1

@@ -132,9 +135,6 @@ body:
        - 3.14.9-nightly.3
        - 3.14.9-nightly.2
        - 3.14.9-nightly.1
        - 3.14.8
        - 3.14.8-nightly.4
        - 3.14.8-nightly.3
    validations:
      required: true
  - type: dropdown
673 CHANGELOG.md

@@ -1,6 +1,679 @@
# Changelog

## [3.16.5](https://github.com/ynput/OpenPype/tree/3.16.5)

[Full Changelog](https://github.com/ynput/OpenPype/compare/3.16.4...3.16.5)

### **🆕 New features**
<details>
<summary>Attribute Definitions: Multiselection enum def <a href="https://github.com/ynput/OpenPype/pull/5547">#5547</a></summary>

Added `multiselection` option to `EnumDef`.

___

</details>

### **🚀 Enhancements**
<details>
<summary>Farm: adding target collector <a href="https://github.com/ynput/OpenPype/pull/5494">#5494</a></summary>

Enhancing farm publishing workflow.

___

</details>

<details>
<summary>Maya: Optimize validate plug-in path attributes <a href="https://github.com/ynput/OpenPype/pull/5522">#5522</a></summary>

- Optimize query (use `cmds.ls` once)
- Add Select Invalid action
- Improve validation report
- Avoid "Unknown object type" errors

___

</details>

<details>
<summary>Maya: Remove Validate Instance Attributes plug-in <a href="https://github.com/ynput/OpenPype/pull/5525">#5525</a></summary>

Remove Validate Instance Attributes plug-in.

___

</details>
<details>
<summary>Enhancement: Tweak logging for artist facing reports <a href="https://github.com/ynput/OpenPype/pull/5537">#5537</a></summary>

Tweak the publishing logging for global, Deadline, Maya and a Fusion plugin to produce a cleaner artist-facing report.

- Fix context being reported correctly from CollectContext
- Fix ValidateMeshArnoldAttributes: fix when Arnold is not loaded, fix applying settings, fix for when ai attributes do not exist

___

</details>
<details>
<summary>AYON: Update settings <a href="https://github.com/ynput/OpenPype/pull/5544">#5544</a></summary>

Updated settings in AYON addons and conversion of AYON settings in OpenPype.

___

</details>

<details>
<summary>Chore: Removed Ass export script <a href="https://github.com/ynput/OpenPype/pull/5560">#5560</a></summary>

Removed Arnold render script, which was obsolete and unused.

___

</details>
<details>
<summary>Nuke: Allow for knob values to be validated against multiple values. <a href="https://github.com/ynput/OpenPype/pull/5042">#5042</a></summary>

Knob values can now be validated against multiple values, so you can allow write nodes to be `exr` and `png`, or `16-bit` and `32-bit`.

___

</details>
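The multi-value validation idea in #5042 can be sketched as a small helper. This is a minimal illustration only; `validate_knob_value` is a hypothetical name, not the actual plugin API:

```python
def validate_knob_value(current_value, expected):
    """Return True if ``current_value`` matches ``expected``.

    ``expected`` may be a single value or a collection of allowed
    values, mirroring "validate against multiple values".
    """
    # Strings are treated as single values, not character collections.
    if isinstance(expected, (list, tuple, set)):
        return current_value in expected
    return current_value == expected
```

For example, a write node's `file_type` knob could then be allowed to be either `exr` or `png` by passing `["exr", "png"]` as the expected value.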
<details>
<summary>Enhancement: Cosmetics for Higher version of publish already exists validation error <a href="https://github.com/ynput/OpenPype/pull/5190">#5190</a></summary>

Fix double spaces in the message. Example output **after** the PR:

___

</details>
<details>
<summary>Nuke: publish existing frames on farm <a href="https://github.com/ynput/OpenPype/pull/5409">#5409</a></summary>

This PR proposes adding a fourth option to Nuke render publishing called "Use Existing Frames - Farm". This is useful when the farm is busy or when the artist lacks enough farm licenses. Additionally, some artists prefer rendering on the farm but still want to check frames before publishing. By adding the "Use Existing Frames - Farm" option, artists gain more flexibility and control over their render publishing process. This enhancement streamlines the workflow and improves efficiency for Nuke users.

___

</details>
<details>
<summary>Unreal: Create project in temp location and move to final when done <a href="https://github.com/ynput/OpenPype/pull/5476">#5476</a></summary>

Create the Unreal project in a local temporary folder and, when done, move it to its final destination.

___

</details>

<details>
<summary>TrayPublisher: adding audio product type into default presets <a href="https://github.com/ynput/OpenPype/pull/5489">#5489</a></summary>

Adding the Audio product type into default presets so anybody can publish audio to their shots.

___

</details>
<details>
<summary>Global: avoiding cleanup of flagged representation <a href="https://github.com/ynput/OpenPype/pull/5502">#5502</a></summary>

Publishing folder can be flagged as persistent at representation level.

___

</details>

<details>
<summary>General: missing tag could raise error <a href="https://github.com/ynput/OpenPype/pull/5511">#5511</a></summary>

- Avoid a potential situation where a missing Tag key could raise an error

___

</details>
<details>
<summary>Chore: Queued event system <a href="https://github.com/ynput/OpenPype/pull/5514">#5514</a></summary>

Implemented a queued event system with more predictable behavior. If an event is triggered during another event's callback, it is not processed immediately but waits until all callbacks of the previous events are done. The event system also allows events not to be triggered directly when `emit_event` is called, which gives the option to process events in custom loops.

___

</details>
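The queuing behavior described in #5514 can be sketched in a few lines. This is a minimal illustration under stated assumptions, not the actual OpenPype implementation; `QueuedEventSystem` and its method names are hypothetical:

```python
from collections import deque


class QueuedEventSystem:
    """Minimal sketch of a queued event system.

    Events emitted while another event's callbacks are running are
    queued and processed only after the current callbacks finish.
    """

    def __init__(self):
        self._callbacks = {}
        self._queue = deque()
        self._processing = False

    def add_callback(self, topic, callback):
        self._callbacks.setdefault(topic, []).append(callback)

    def emit_event(self, topic, data=None):
        self._queue.append((topic, data))
        if self._processing:
            # Already inside a callback: the event waits in the queue.
            return
        self._processing = True
        try:
            while self._queue:
                queued_topic, queued_data = self._queue.popleft()
                for callback in self._callbacks.get(queued_topic, []):
                    callback(queued_data)
        finally:
            self._processing = False
```

With this shape, an event emitted from inside a callback is only dispatched after the emitting callback has returned, which is the "waits until all callbacks of previous events are done" behavior.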
<details>
<summary>Publisher: Tweak log message to provide plugin name after "Plugin" <a href="https://github.com/ynput/OpenPype/pull/5521">#5521</a></summary>

Fix logged message for settings automatically applied to plugin attributes.

___

</details>

<details>
<summary>Houdini: Improve VDB Selection <a href="https://github.com/ynput/OpenPype/pull/5523">#5523</a></summary>

Improves VDB selection. If the selection is a `SopNode`: return the selected SOP node. If the selection is an `ObjNode`: get the output node with the minimum `outputidx`, or the node with the display flag.

___

</details>
<details>
<summary>Maya: Refactor/tweak Validate Instance In same Context plug-in <a href="https://github.com/ynput/OpenPype/pull/5526">#5526</a></summary>

- Chore/Refactor: Re-use existing select invalid and repair actions
- Enhancement: provide more elaborate PublishValidationError report
- Bugfix: fix "optional" support by using `OptionalPyblishPluginMixin` base class.

___

</details>

<details>
<summary>Enhancement: Update houdini main menu <a href="https://github.com/ynput/OpenPype/pull/5527">#5527</a></summary>

This PR adds two updates:

- dynamic main menu
- dynamic asset name and task

___

</details>
<details>
<summary>Houdini: Reset FPS when clicking Set Frame Range <a href="https://github.com/ynput/OpenPype/pull/5528">#5528</a></summary>

_Similar to Maya,_ make `Set Frame Range` reset the FPS as well. Issue: https://github.com/ynput/OpenPype/issues/5516

___

</details>
<details>
<summary>Enhancement: Deadline plugins optimize, cleanup and fix optional support for validate deadline pools <a href="https://github.com/ynput/OpenPype/pull/5531">#5531</a></summary>

- Fix optional support of validate deadline pools
- Query deadline webservice only once per URL for verification, and once for available deadline pools instead of for every instance
- Use `deadlineUrl` in `instance.data` when validating pools if it is set.
- Code cleanup: Re-use existing `requests_get` implementation

___

</details>
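The "query once per URL" optimization in #5531 can be sketched as a per-URL cache. This is an illustrative sketch only: `fetch_pools` stands in for the real Deadline webservice call, and `make_cached_pool_getter` is a hypothetical name, not the plugin's actual code:

```python
def make_cached_pool_getter(fetch_pools):
    """Return a getter that queries each Deadline URL only once.

    ``fetch_pools`` is any callable taking a webservice URL and
    returning the available pools; results are cached per URL so
    validating many instances hits the webservice once per URL.
    """
    cache = {}

    def get_pools(url):
        if url not in cache:
            cache[url] = fetch_pools(url)
        return cache[url]

    return get_pools
```

The same pattern covers the per-instance `deadlineUrl` case: instances pointing at the same URL share one cached query, while a different URL triggers exactly one new query.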
<details>
<summary>Chore: PowerShell script for docker build <a href="https://github.com/ynput/OpenPype/pull/5535">#5535</a></summary>

Added PowerShell script to run docker build.

___

</details>

<details>
<summary>AYON: Deadline expand userpaths in executables list <a href="https://github.com/ynput/OpenPype/pull/5540">#5540</a></summary>

Expand `~` paths in executables list.

___

</details>
<details>
<summary>Chore: Use correct git url <a href="https://github.com/ynput/OpenPype/pull/5542">#5542</a></summary>

Fixed GitHub URL in README.md.

___

</details>

<details>
<summary>Chore: Create plugin does not expect system settings <a href="https://github.com/ynput/OpenPype/pull/5553">#5553</a></summary>

System settings are no longer passed to create plugin initialization (and `apply_settings`).

___

</details>
<details>
<summary>Chore: Allow custom Qt scale factor rounding policy <a href="https://github.com/ynput/OpenPype/pull/5555">#5555</a></summary>

Do not force the `PassThrough` rounding policy if a different policy is defined via environment variable.

___

</details>

<details>
<summary>Houdini: Fix outdated containers pop-up on opening last workfile on launch <a href="https://github.com/ynput/OpenPype/pull/5567">#5567</a></summary>

Fix Houdini not showing the outdated containers pop-up on scene open when launching with the last-workfile argument.

___

</details>
<details>
<summary>Houdini: Improve errors e.g. raise PublishValidationError or cosmetics <a href="https://github.com/ynput/OpenPype/pull/5568">#5568</a></summary>

Improve errors, e.g. raise PublishValidationError, plus cosmetics. This also fixes the Increment Current File plug-in, which was previously broken due to an invalid import.

___

</details>

<details>
<summary>Fusion: Code updates <a href="https://github.com/ynput/OpenPype/pull/5569">#5569</a></summary>

Update Fusion code that contained obsolete code. Removed the `switch_ui.py` script from Fusion together with its related script in scripts.

___

</details>
### **🐛 Bug fixes**

<details>
<summary>Maya: Validate Shape Zero fix repair action + provide informational artist-facing report <a href="https://github.com/ynput/OpenPype/pull/5524">#5524</a></summary>

Refactor to PublishValidationError to allow the RepairAction to work + provide informational report message.

___

</details>
<details>
<summary>Maya: Fix attribute definitions for `CreateYetiCache` <a href="https://github.com/ynput/OpenPype/pull/5574">#5574</a></summary>

Fix attribute definitions for `CreateYetiCache`.

___

</details>

<details>
<summary>Max: Optional Renderable Camera Validator for Render Instance <a href="https://github.com/ynput/OpenPype/pull/5286">#5286</a></summary>

Optional validation to check that the renderable camera is set up correctly for Deadline submission. If it is not set up correctly, it won't pass validation and the user can perform repair actions.

___

</details>
<details>
<summary>Max: Adding custom modifiers back to the loaded objects <a href="https://github.com/ynput/OpenPype/pull/5378">#5378</a></summary>

The custom parameters OpenpypeData don't show in the loaded container when it is loaded through the loader.

___

</details>

<details>
<summary>Houdini: Use default_variant to Houdini Node TAB Creator <a href="https://github.com/ynput/OpenPype/pull/5421">#5421</a></summary>

Use the default variant of the creator plugins on the interactive creator from the TAB node search instead of hard-coding it to `Main`.

___

</details>
<details>
<summary>Nuke: adding inherited colorspace from instance <a href="https://github.com/ynput/OpenPype/pull/5454">#5454</a></summary>

Thumbnails are extracted with the inherited colorspace collected from the rendering write node.

___

</details>

<details>
<summary>Add kitsu credentials to deadline publish job <a href="https://github.com/ynput/OpenPype/pull/5455">#5455</a></summary>

This PR hopefully fixes issue #5440.

___

</details>
<details>
<summary>AYON: Fill entities during editorial <a href="https://github.com/ynput/OpenPype/pull/5475">#5475</a></summary>

Fill entities and update template data on instances during extract AYON hierarchy.

___

</details>

<details>
<summary>Ftrack: Fix version 0 when integrating to Ftrack - OP-6595 <a href="https://github.com/ynput/OpenPype/pull/5477">#5477</a></summary>

Fix publishing version 0 to Ftrack.

___

</details>
<details>
<summary>OCIO: windows unc path support in Nuke and Hiero <a href="https://github.com/ynput/OpenPype/pull/5479">#5479</a></summary>

Hiero and Nuke do not support Windows UNC path formatting in the OCIO environment variable.

___

</details>

<details>
<summary>Deadline: Added super call to init <a href="https://github.com/ynput/OpenPype/pull/5480">#5480</a></summary>

DL 10.3 requires plugins inheriting from DeadlinePlugin to call super's `__init__` explicitly.

___

</details>
<details>
<summary>Nuke: fixing thumbnail and monitor out root attributes <a href="https://github.com/ynput/OpenPype/pull/5483">#5483</a></summary>

The Nuke Root colorspace settings schema for Thumbnail and Monitor Out changed gradually between versions 12, 13 and 14, and we needed to address those changes individually per version.

___

</details>

<details>
<summary>Nuke: fixing missing `instance_id` error <a href="https://github.com/ynput/OpenPype/pull/5484">#5484</a></summary>

Workfiles with instances created in the old publisher workflow were raising errors during the conversion method, since they were missing the `instance_id` key introduced in the new publisher workflow.

___

</details>
<details>
<summary>Nuke: existing frames validator is repairing render target <a href="https://github.com/ynput/OpenPype/pull/5486">#5486</a></summary>

Nuke is now correctly repairing the render target after the existing frames validator finds missing frames and the repair action is used.

___

</details>

<details>
<summary>added UE to extract burnins families <a href="https://github.com/ynput/OpenPype/pull/5487">#5487</a></summary>

This PR fixes missing burnins in reviewables when rendering from UE.

___

</details>
<details>
<summary>Harmony: refresh code for current Deadline <a href="https://github.com/ynput/OpenPype/pull/5493">#5493</a></summary>

- Added support in Deadline plug-in for new versions of Harmony, in particular versions 21 and 22.
- Remove review=False flag on render instance
- Add farm=True flag on render instance
- Fix is_in_tests function call in Harmony Deadline submission plugin
- Force HarmonyOpenPype.py Deadline Python plug-in to py3
- Fix cosmetics/hound in HarmonyOpenPype.py Deadline Python plug-in

___

</details>
<details>
<summary>Publisher: Fix multiselection value <a href="https://github.com/ynput/OpenPype/pull/5505">#5505</a></summary>

Selecting multiple instances in the Publisher no longer causes all instances to change all publish attributes to the same value.

___

</details>
<details>
<summary>Publisher: Avoid warnings on thumbnails if source image also has alpha channel <a href="https://github.com/ynput/OpenPype/pull/5510">#5510</a></summary>

Avoids the following warning from `ExtractThumbnailFromSource`:

```
// pyblish.ExtractThumbnailFromSource : oiiotool WARNING: -o : Can't save 4 channels to jpeg... saving only R,G,B
```

___

</details>
<details>
<summary>Update ayon-python-api <a href="https://github.com/ynput/OpenPype/pull/5512">#5512</a></summary>

Update ayon python api and related callbacks.

___

</details>

<details>
<summary>Max: Fixing the bug of falling back to use workfile for Arnold or any renderers except Redshift <a href="https://github.com/ynput/OpenPype/pull/5520">#5520</a></summary>

Fix the bug of falling back to using the workfile for Arnold.

___

</details>
<details>
<summary>General: Fix Validate Publish Dir Validator <a href="https://github.com/ynput/OpenPype/pull/5534">#5534</a></summary>

A nonsensical "family" key was used instead of the real value (such as 'render'), which would result in wrong translation of intermediate family names. Updated docstring.

___

</details>
<details>
<summary>have the addons loading respect a custom AYON_ADDONS_DIR <a href="https://github.com/ynput/OpenPype/pull/5539">#5539</a></summary>

When using a custom AYON_ADDONS_DIR environment variable, the launcher uses it correctly and downloads and extracts addons there; however, when running, AYON does not respect this environment variable.

___

</details>
<details>
<summary>Deadline: files on representation cannot be single item list <a href="https://github.com/ynput/OpenPype/pull/5545">#5545</a></summary>

Further logic expects that single-item files will be only a 'string', not a 'list' (e.g. `repre["files"] = "abc.exr"`, not `repre["files"] = ["abc.exr"]`). This would cause an issue in ExtractReview later. It could happen if Deadline rendered a single frame file with a different frame value.

___

</details>
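The normalization expected by #5545 can be sketched as a tiny helper. This is illustrative only; `normalize_representation_files` is a hypothetical name, not the plugin's actual code:

```python
def normalize_representation_files(files):
    """Collapse a single-item file list to a plain string.

    Downstream plugins (e.g. ExtractReview) expect a single frame to
    be a string such as "abc.exr", not a one-item list ["abc.exr"].
    Multi-frame sequences stay as lists.
    """
    if isinstance(files, (list, tuple)) and len(files) == 1:
        return files[0]
    return files
```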
<details>
<summary>Webpublisher: better encode list values for click <a href="https://github.com/ynput/OpenPype/pull/5546">#5546</a></summary>

Targets could be a list; the original implementation pushed it as separate items, but it must be added as `--targets webpublish --targets filepublish`. `wepublish_routes` handles triggering from the UI; changes in `publish_functions` handle triggering from the command line (for tests, API access).

___

</details>
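The repeated-flag encoding for click-style multi-value options can be sketched as follows. `build_target_args` is an illustrative helper, not the real implementation:

```python
def build_target_args(targets):
    """Expand a list of publish targets into repeated CLI arguments.

    Click-style commands take multi-value options as repeated flags,
    e.g. ``--targets webpublish --targets filepublish``, rather than
    a single flag followed by several bare values.
    """
    args = []
    for target in targets:
        args.extend(["--targets", target])
    return args
```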
<details>
<summary>Houdini: Introduce imprint function for correct version in hda loader <a href="https://github.com/ynput/OpenPype/pull/5548">#5548</a></summary>

Resolve #5478

___

</details>

<details>
<summary>AYON: Fill entities during editorial (2) <a href="https://github.com/ynput/OpenPype/pull/5549">#5549</a></summary>

Fix changes made in https://github.com/ynput/OpenPype/pull/5475.

___

</details>
<details>
<summary>Max: OP Data updates in Loaders <a href="https://github.com/ynput/OpenPype/pull/5563">#5563</a></summary>

Fix the bug of the loaders not being able to load the objects when iterating keys and values of the dict. Max prefers a list over a dict.

___

</details>
<details>
<summary>Create Plugins: Better check of overriden '__init__' method <a href="https://github.com/ynput/OpenPype/pull/5571">#5571</a></summary>

Create plugins no longer log warning messages about each create plugin caused by a wrong `__init__` method check.

___

</details>
### **Merged pull requests**

<details>
<summary>Tests: fix unit tests <a href="https://github.com/ynput/OpenPype/pull/5533">#5533</a></summary>

Fixed failing tests. Updated Unreal's validator to match the removed general one, which had a couple of issues fixed.

___

</details>
## [3.16.4](https://github.com/ynput/OpenPype/tree/3.16.4)
@@ -663,10 +663,13 @@ def convert_v4_representation_to_v3(representation):
    if isinstance(context, six.string_types):
        context = json.loads(context)

    if "folder" in context:
        _c_folder = context.pop("folder")
    if "asset" not in context and "folder" in context:
        _c_folder = context["folder"]
        context["asset"] = _c_folder["name"]

    elif "asset" in context and "folder" not in context:
        context["folder"] = {"name": context["asset"]}

    if "product" in context:
        _c_product = context.pop("product")
        context["family"] = _c_product["type"]
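The folder/asset key handling shown in the hunk above can be summarized as a standalone sketch. This is illustrative only, not the exact library code; `convert_context_keys` is a hypothetical name:

```python
def convert_context_keys(context):
    """Fill whichever of ``asset``/``folder`` is missing from the other.

    Unlike the earlier pop-based conversion, keys that are already
    present are left untouched instead of being removed.
    """
    if "asset" not in context and "folder" in context:
        context["asset"] = context["folder"]["name"]
    elif "asset" in context and "folder" not in context:
        context["folder"] = {"name": context["asset"]}
    return context
```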

@@ -959,9 +962,11 @@ def convert_create_representation_to_v4(representation, con):
    converted_representation["files"] = new_files

    context = representation["context"]
    context["folder"] = {
        "name": context.pop("asset", None)
    }
    if "folder" not in context:
        context["folder"] = {
            "name": context.get("asset")
        }

    context["product"] = {
        "type": context.pop("family", None),
        "name": context.pop("subset", None),

@@ -1285,7 +1290,7 @@ def convert_update_representation_to_v4(

    if "context" in update_data:
        context = update_data["context"]
        if "asset" in context:
        if "folder" not in context and "asset" in context:
            context["folder"] = {"name": context.pop("asset")}

        if "family" in context or "subset" in context:

@@ -45,6 +45,9 @@ class OCIOEnvHook(PreLaunchHook):
        if config_data:
            ocio_path = config_data["path"]

            if self.host_name in ["nuke", "hiero"]:
                ocio_path = ocio_path.replace("\\", "/")

            self.log.info(
                f"Setting OCIO environment to config path: {ocio_path}")

@@ -164,7 +164,7 @@ class RenderCreator(Creator):
            api.get_stub().rename_item(comp_id,
                                       new_comp_name)

    def apply_settings(self, project_settings, system_settings):
    def apply_settings(self, project_settings):
        plugin_settings = (
            project_settings["aftereffects"]["create"]["RenderCreator"]
        )

@@ -138,7 +138,6 @@ class CollectAERender(publish.AbstractCollectRender):
            fam = "render.farm"
            if fam not in instance.families:
                instance.families.append(fam)
            instance.toBeRenderedOn = "deadline"
            instance.renderer = "aerender"
            instance.farm = True  # to skip integrate
            if "review" in instance.families:

@@ -119,7 +119,7 @@ class BlendLoader(plugin.AssetLoader):
            context: Full parenthood of representation to load
            options: Additional settings dictionary
        """
        libpath = self.fname
        libpath = self.filepath_from_context(context)
        asset = context["asset"]["name"]
        subset = context["subset"]["name"]

@@ -100,7 +100,7 @@ class AbcCameraLoader(plugin.AssetLoader):
        asset_group = bpy.data.objects.new(group_name, object_data=None)
        avalon_container.objects.link(asset_group)

        objects = self._process(libpath, asset_group, group_name)
        self._process(libpath, asset_group, group_name)

        objects = []
        nodes = list(asset_group.children)

@@ -103,7 +103,7 @@ class FbxCameraLoader(plugin.AssetLoader):
        asset_group = bpy.data.objects.new(group_name, object_data=None)
        avalon_container.objects.link(asset_group)

        objects = self._process(libpath, asset_group, group_name)
        self._process(libpath, asset_group, group_name)

        objects = []
        nodes = list(asset_group.children)

@@ -21,8 +21,6 @@ class ExtractABC(publish.Extractor):
        filename = f"{instance.name}.abc"
        filepath = os.path.join(stagingdir, filename)

        context = bpy.context

        # Perform extraction
        self.log.info("Performing extraction..")

@@ -20,8 +20,6 @@ class ExtractAnimationABC(publish.Extractor):
        filename = f"{instance.name}.abc"
        filepath = os.path.join(stagingdir, filename)

        context = bpy.context

        # Perform extraction
        self.log.info("Performing extraction..")

@@ -21,16 +21,11 @@ class ExtractCameraABC(publish.Extractor):
        filename = f"{instance.name}.abc"
        filepath = os.path.join(stagingdir, filename)

        context = bpy.context

        # Perform extraction
        self.log.info("Performing extraction..")

        plugin.deselect_all()

        selected = []
        active = None

        asset_group = None
        for obj in instance:
            if obj.get(AVALON_PROPERTY):

@@ -48,7 +48,6 @@ class LoadClip(opfapi.ClipLoader):
        self.fpd = fproject.current_workspace.desktop

        # load clip to timeline and get main variables
        namespace = namespace
        version = context['version']
        version_data = version.get("data", {})
        version_name = version.get("name", None)

@@ -45,7 +45,6 @@ class LoadClipBatch(opfapi.ClipLoader):
        self.batch = options.get("batch") or flame.batch

        # load clip to timeline and get main variables
        namespace = namespace
        version = context['version']
        version_data = version.get("data", {})
        version_name = version.get("name", None)

@@ -325,7 +325,6 @@ class CollectTimelineInstances(pyblish.api.ContextPlugin):
    def _create_shot_instance(self, context, clip_name, **data):
        master_layer = data.get("heroTrack")
        hierarchy_data = data.get("hierarchyData")
        asset = data.get("asset")

        if not master_layer:
            return

@@ -1,16 +0,0 @@
from openpype.hosts.fusion.api import (
    comp_lock_and_undo_chunk,
    get_current_comp
)


def main():
    comp = get_current_comp()
    """Set all selected backgrounds to 32 bit"""
    with comp_lock_and_undo_chunk(comp, 'Selected Backgrounds to 32bit'):
        tools = comp.GetToolList(True, "Background").values()
        for tool in tools:
            tool.Depth = 5


main()

@@ -1,16 +0,0 @@
from openpype.hosts.fusion.api import (
    comp_lock_and_undo_chunk,
    get_current_comp
)


def main():
    comp = get_current_comp()
    """Set all backgrounds to 32 bit"""
    with comp_lock_and_undo_chunk(comp, 'Backgrounds to 32bit'):
        tools = comp.GetToolList(False, "Background").values()
        for tool in tools:
            tool.Depth = 5


main()

@@ -1,16 +0,0 @@
from openpype.hosts.fusion.api import (
    comp_lock_and_undo_chunk,
    get_current_comp
)


def main():
    comp = get_current_comp()
    """Set all selected loaders to 32 bit"""
    with comp_lock_and_undo_chunk(comp, 'Selected Loaders to 32bit'):
        tools = comp.GetToolList(True, "Loader").values()
        for tool in tools:
            tool.Depth = 5


main()

@@ -1,16 +0,0 @@
from openpype.hosts.fusion.api import (
    comp_lock_and_undo_chunk,
    get_current_comp
)


def main():
    comp = get_current_comp()
    """Set all loaders to 32 bit"""
    with comp_lock_and_undo_chunk(comp, 'Loaders to 32bit'):
        tools = comp.GetToolList(False, "Loader").values()
        for tool in tools:
            tool.Depth = 5


main()
|
|
@@ -1,200 +0,0 @@
import os
import sys
import glob
import logging

from qtpy import QtWidgets, QtCore

import qtawesome as qta

from openpype.client import get_assets
from openpype import style
from openpype.pipeline import (
    install_host,
    get_current_project_name,
)
from openpype.hosts.fusion import api
from openpype.pipeline.context_tools import get_workdir_from_session

log = logging.getLogger("Fusion Switch Shot")


class App(QtWidgets.QWidget):

    def __init__(self, parent=None):

        ################################################
        # |---------------------| |------------------| #
        # |Comp                 | |Asset             | #
        # |[..][              v]| |[               v]| #
        # |---------------------| |------------------| #
        # | Update existing comp [ ]                  | #
        # |------------------------------------------| #
        # |                  Switch                   | #
        # |------------------------------------------| #
        ################################################

        QtWidgets.QWidget.__init__(self, parent)

        layout = QtWidgets.QVBoxLayout()

        # Comp related input
        comp_hlayout = QtWidgets.QHBoxLayout()
        comp_label = QtWidgets.QLabel("Comp file")
        comp_label.setFixedWidth(50)
        comp_box = QtWidgets.QComboBox()

        button_icon = qta.icon("fa.folder", color="white")
        open_from_dir = QtWidgets.QPushButton()
        open_from_dir.setIcon(button_icon)

        comp_box.setFixedHeight(25)
        open_from_dir.setFixedWidth(25)
        open_from_dir.setFixedHeight(25)

        comp_hlayout.addWidget(comp_label)
        comp_hlayout.addWidget(comp_box)
        comp_hlayout.addWidget(open_from_dir)

        # Asset related input
        asset_hlayout = QtWidgets.QHBoxLayout()
        asset_label = QtWidgets.QLabel("Shot")
        asset_label.setFixedWidth(50)

        asset_box = QtWidgets.QComboBox()
        asset_box.setLineEdit(QtWidgets.QLineEdit())
        asset_box.setFixedHeight(25)

        refresh_icon = qta.icon("fa.refresh", color="white")
        refresh_btn = QtWidgets.QPushButton()
        refresh_btn.setIcon(refresh_icon)

        asset_box.setFixedHeight(25)
        refresh_btn.setFixedWidth(25)
        refresh_btn.setFixedHeight(25)

        asset_hlayout.addWidget(asset_label)
        asset_hlayout.addWidget(asset_box)
        asset_hlayout.addWidget(refresh_btn)

        # Options
        options = QtWidgets.QHBoxLayout()
        options.setAlignment(QtCore.Qt.AlignLeft)

        current_comp_check = QtWidgets.QCheckBox()
        current_comp_check.setChecked(True)
        current_comp_label = QtWidgets.QLabel("Use current comp")

        options.addWidget(current_comp_label)
        options.addWidget(current_comp_check)

        accept_btn = QtWidgets.QPushButton("Switch")

        layout.addLayout(options)
        layout.addLayout(comp_hlayout)
        layout.addLayout(asset_hlayout)
        layout.addWidget(accept_btn)

        self._open_from_dir = open_from_dir
        self._comps = comp_box
        self._assets = asset_box
        self._use_current = current_comp_check
        self._accept_btn = accept_btn
        self._refresh_btn = refresh_btn

        self.setWindowTitle("Fusion Switch Shot")
        self.setLayout(layout)

        self.resize(260, 140)
        self.setMinimumWidth(260)
        self.setFixedHeight(140)

        self.connections()

        # Update ui to correct state
        self._on_use_current_comp()
        self._refresh()

    def connections(self):
        self._use_current.clicked.connect(self._on_use_current_comp)
        self._open_from_dir.clicked.connect(self._on_open_from_dir)
        self._refresh_btn.clicked.connect(self._refresh)
        self._accept_btn.clicked.connect(self._on_switch)

    def _on_use_current_comp(self):
        state = self._use_current.isChecked()
        self._open_from_dir.setEnabled(not state)
        self._comps.setEnabled(not state)

    def _on_open_from_dir(self):

        start_dir = get_workdir_from_session()
        comp_file, _ = QtWidgets.QFileDialog.getOpenFileName(
            self, "Choose comp", start_dir)

        if not comp_file:
            return

        # Create completer
        self.populate_comp_box([comp_file])
        self._refresh()

    def _refresh(self):
        # Clear any existing items
        self._assets.clear()

        asset_names = self.collect_asset_names()
        completer = QtWidgets.QCompleter(asset_names)

        self._assets.setCompleter(completer)
        self._assets.addItems(asset_names)

    def _on_switch(self):

        if not self._use_current.isChecked():
            file_name = self._comps.itemData(self._comps.currentIndex())
        else:
            comp = api.get_current_comp()
            file_name = comp.GetAttrs("COMPS_FileName")

        asset = self._assets.currentText()

        import colorbleed.scripts.fusion_switch_shot as switch_shot
        switch_shot.switch(asset_name=asset, filepath=file_name, new=True)

    def collect_slap_comps(self, directory):
        items = glob.glob("{}/*.comp".format(directory))
        return items

    def collect_asset_names(self):
        project_name = get_current_project_name()
        asset_docs = get_assets(project_name, fields=["name"])
        asset_names = {
            asset_doc["name"]
            for asset_doc in asset_docs
        }
        return list(asset_names)

    def populate_comp_box(self, files):
        """Ensure we display the filename only but the path is stored as well

        Args:
            files (list): list of full file path [path/to/item/item.ext,]

        Returns:
            None
        """

        for f in files:
            filename = os.path.basename(f)
            self._comps.addItem(filename, userData=f)


if __name__ == '__main__':
    install_host(api)

    app = QtWidgets.QApplication(sys.argv)
    window = App()
    window.setStyleSheet(style.load_stylesheet())
    window.show()
    sys.exit(app.exec_())
@@ -1,40 +0,0 @@
"""Forces Fusion to 'retrigger' the Loader to update.

Warning:
    This might change settings like 'Reverse', 'Loop', trims and other
    settings of the Loader. So use this at your own risk.

"""
from openpype.hosts.fusion.api.pipeline import (
    get_current_comp,
    comp_lock_and_undo_chunk
)


def update_loader_ranges():
    comp = get_current_comp()
    with comp_lock_and_undo_chunk(comp, "Reload clip time ranges"):
        tools = comp.GetToolList(True, "Loader").values()
        for tool in tools:

            # Get tool attributes
            tool_a = tool.GetAttrs()
            clipTable = tool_a['TOOLST_Clip_Name']
            altclipTable = tool_a['TOOLST_AltClip_Name']
            startTime = tool_a['TOOLNT_Clip_Start']
            old_global_in = tool.GlobalIn[comp.CurrentTime]

            # Reapply
            for index, _ in clipTable.items():
                time = startTime[index]
                tool.Clip[time] = tool.Clip[time]

            for index, _ in altclipTable.items():
                time = startTime[index]
                tool.ProxyFilename[time] = tool.ProxyFilename[time]

            tool.GlobalIn[comp.CurrentTime] = old_global_in


if __name__ == '__main__':
    update_loader_ranges()
@@ -5,7 +5,7 @@ Global = {
        Map = {
            ["OpenPype:"] = "$(OPENPYPE_FUSION)/deploy",
            ["Config:"] = "UserPaths:Config;OpenPype:Config",
            ["Scripts:"] = "UserPaths:Scripts;Reactor:System/Scripts;OpenPype:Scripts",
            ["Scripts:"] = "UserPaths:Scripts;Reactor:System/Scripts",
        },
    },
    Script = {
@@ -30,10 +30,6 @@ class CreateSaver(NewCreator):
    instance_attributes = [
        "reviewable"
    ]
    default_variants = [
        "Main",
        "Mask"
    ]

    # TODO: This should be renamed together with Nuke so it is aligned
    temp_rendering_path_template = (
@@ -250,11 +246,7 @@ class CreateSaver(NewCreator):
            label="Review",
        )

    def apply_settings(
        self,
        project_settings,
        system_settings
    ):
    def apply_settings(self, project_settings):
        """Method called on initialization of plugin to apply settings."""

        # plugin settings
@@ -85,5 +85,5 @@ class CollectInstanceData(pyblish.api.InstancePlugin):
        # Add review family if the instance is marked as 'review'
        # This could be done through a 'review' Creator attribute.
        if instance.data.get("review", False):
            self.log.info("Adding review family..")
            self.log.debug("Adding review family..")
            instance.data["families"].append("review")
@@ -108,7 +108,6 @@ class CollectFusionRender(
            fam = "render.farm"
            if fam not in instance.families:
                instance.families.append(fam)
            instance.toBeRenderedOn = "deadline"
            instance.farm = True  # to skip integrate
            if "review" in instance.families:
                # to skip ExtractReview locally
@@ -82,7 +82,6 @@ class TemplateLoader(load.LoaderPlugin):
        node = harmony.find_node_by_name(node_name, "GROUP")
        self_name = self.__class__.__name__

        update_and_replace = False
        if is_representation_from_latest(representation):
            self._set_green(node)
        else:
@@ -147,13 +147,13 @@ class CollectFarmRender(publish.AbstractCollectRender):
                attachTo=False,
                setMembers=[node],
                publish=info[4],
                review=False,
                renderer=None,
                priority=50,
                name=node.split("/")[1],

                family="render.farm",
                families=["render.farm"],
                farm=True,

                resolutionWidth=context.data["resolutionWidth"],
                resolutionHeight=context.data["resolutionHeight"],
@@ -174,7 +174,6 @@ class CollectFarmRender(publish.AbstractCollectRender):
                outputFormat=info[1],
                outputStartFrame=info[3],
                leadingZeros=info[2],
                toBeRenderedOn='deadline',
                ignoreFrameHandleCheck=True

            )
@@ -317,20 +317,6 @@ class Spacer(QtWidgets.QWidget):
        self.setLayout(layout)


def get_reference_node_parents(ref):
    """Return all parent reference nodes of reference node

    Args:
        ref (str): reference node.

    Returns:
        list: The upstream parent reference nodes.

    """
    parents = []
    return parents


class SequenceLoader(LoaderPlugin):
    """A basic SequenceLoader for Resolve

@@ -43,7 +43,6 @@ class CollectClipEffects(pyblish.api.InstancePlugin):
            if review and review_track_index == _track_index:
                continue
            for sitem in sub_track_items:
                effect = None
                # make sure this subtrack item is relative of track item
                if ((track_item not in sitem.linkedItems())
                        and (len(sitem.linkedItems()) > 0)):

@@ -53,7 +52,6 @@ class CollectClipEffects(pyblish.api.InstancePlugin):
                    continue

                effect = self.add_effect(_track_index, sitem)

                if effect:
                    effects.update(effect)
@@ -1,7 +1,7 @@
import attr
import hou
from openpype.hosts.houdini.api.lib import get_color_management_preferences

from openpype.pipeline.colorspace import get_display_view_colorspace_name

@attr.s
class LayerMetadata(object):

@@ -54,3 +54,16 @@ class ARenderProduct(object):
            )
        ]
        return colorspace_data


def get_default_display_view_colorspace():
    """Returns the colorspace attribute of the default (display, view) pair.

    It's used for 'ociocolorspace' parm in OpenGL Node."""

    prefs = get_color_management_preferences()
    return get_display_view_colorspace_name(
        config_path=prefs["config"],
        display=prefs["display"],
        view=prefs["view"]
    )
@@ -57,28 +57,31 @@ def create_interactive(creator_identifier, **kwargs):
        list: The created instances.

    """

    # TODO Use Qt instead
    result, variant = hou.ui.readInput('Define variant name',
                                       buttons=("Ok", "Cancel"),
                                       initial_contents='Main',
                                       title="Define variant",
                                       help="Set the variant for the "
                                            "publish instance",
                                       close_choice=1)
    if result == 1:
        # User interrupted
        return
    variant = variant.strip()
    if not variant:
        raise RuntimeError("Empty variant value entered.")

    host = registered_host()
    context = CreateContext(host)
    creator = context.manual_creators.get(creator_identifier)
    if not creator:
        raise RuntimeError("Invalid creator identifier: "
                           "{}".format(creator_identifier))
        raise RuntimeError("Invalid creator identifier: {}".format(
            creator_identifier)
        )

    # TODO Use Qt instead
    result, variant = hou.ui.readInput(
        "Define variant name",
        buttons=("Ok", "Cancel"),
        initial_contents=creator.get_default_variant(),
        title="Define variant",
        help="Set the variant for the publish instance",
        close_choice=1
    )

    if result == 1:
        # User interrupted
        return

    variant = variant.strip()
    if not variant:
        raise RuntimeError("Empty variant value entered.")

    # TODO: Once more elaborate unique create behavior should exist per Creator
    # instead of per network editor area then we should move this from here
@@ -303,6 +303,28 @@ def on_save():
        lib.set_id(node, new_id, overwrite=False)


def _show_outdated_content_popup():
    # Get main window
    parent = lib.get_main_window()
    if parent is None:
        log.info("Skipping outdated content pop-up "
                 "because Houdini window can't be found.")
    else:
        from openpype.widgets import popup

        # Show outdated pop-up
        def _on_show_inventory():
            from openpype.tools.utils import host_tools
            host_tools.show_scene_inventory(parent=parent)

        dialog = popup.Popup(parent=parent)
        dialog.setWindowTitle("Houdini scene has outdated content")
        dialog.setMessage("There are outdated containers in "
                          "your Houdini scene.")
        dialog.on_clicked.connect(_on_show_inventory)
        dialog.show()


def on_open():

    if not hou.isUIAvailable():

@@ -316,28 +338,18 @@ def on_open():
    lib.validate_fps()

    if any_outdated_containers():
        from openpype.widgets import popup

        log.warning("Scene has outdated content.")

        # Get main window
        parent = lib.get_main_window()
        if parent is None:
            log.info("Skipping outdated content pop-up "
                     "because Houdini window can't be found.")
        # When opening Houdini with last workfile on launch the UI hasn't
        # initialized yet completely when the `on_open` callback triggers.
        # We defer the dialog popup to wait for the UI to become available.
        # We assume it will open because `hou.isUIAvailable()` returns True
        import hdefereval
        hdefereval.executeDeferred(_show_outdated_content_popup)
        else:
            _show_outdated_content_popup()

        # Show outdated pop-up
        def _on_show_inventory():
            from openpype.tools.utils import host_tools
            host_tools.show_scene_inventory(parent=parent)

        dialog = popup.Popup(parent=parent)
        dialog.setWindowTitle("Houdini scene has outdated content")
        dialog.setMessage("There are outdated containers in "
                          "your Houdini scene.")
        dialog.on_clicked.connect(_on_show_inventory)
        dialog.show()
        log.warning("Scene has outdated content.")


def on_new():
@@ -296,7 +296,7 @@ class HoudiniCreator(NewCreator, HoudiniCreatorBase):
        """
        return [hou.ropNodeTypeCategory()]

    def apply_settings(self, project_settings, system_settings):
    def apply_settings(self, project_settings):
        """Method called on initialization of plugin to apply settings."""

        settings_name = self.settings_name
@@ -3,6 +3,9 @@
from openpype.hosts.houdini.api import plugin
from openpype.lib import EnumDef, BoolDef, NumberDef

import os
import hou


class CreateReview(plugin.HoudiniCreator):
    """Review with OpenGL ROP"""

@@ -13,7 +16,6 @@ class CreateReview(plugin.HoudiniCreator):
    icon = "video-camera"

    def create(self, subset_name, instance_data, pre_create_data):
        import hou

        instance_data.pop("active", None)
        instance_data.update({"node_type": "opengl"})

@@ -82,6 +84,11 @@ class CreateReview(plugin.HoudiniCreator):

        instance_node.setParms(parms)

        # Set OCIO Colorspace to the default output colorspace
        # if there's OCIO
        if os.getenv("OCIO"):
            self.set_colorcorrect_to_default_view_space(instance_node)

        to_lock = ["id", "family"]

        self.lock_parameters(instance_node, to_lock)

@@ -123,3 +130,23 @@ class CreateReview(plugin.HoudiniCreator):
                minimum=0.0001,
                decimals=3)
        ]

    def set_colorcorrect_to_default_view_space(self,
                                               instance_node):
        """Set ociocolorspace to the default output space."""
        from openpype.hosts.houdini.api.colorspace import get_default_display_view_colorspace  # noqa

        # set Color Correction parameter to OpenColorIO
        instance_node.setParms({"colorcorrect": 2})

        # Get default view space for ociocolorspace parm.
        default_view_space = get_default_display_view_colorspace()
        instance_node.setParms(
            {"ociocolorspace": default_view_space}
        )

        self.log.debug(
            "'OCIO Colorspace' parm on '{}' has been set to "
            "the default view color space '{}'"
            .format(instance_node, default_view_space)
        )
@@ -34,7 +34,6 @@ class BgeoLoader(load.LoaderPlugin):

        # Create a new geo node
        container = obj.createNode("geo", node_name=node_name)
        is_sequence = bool(context["representation"]["context"].get("frame"))

        # Remove the file node, it only loads static meshes
        # Houdini 17 has removed the file node from the geo node
@@ -59,6 +59,9 @@ class HdaLoader(load.LoaderPlugin):
        def_paths = [d.libraryFilePath() for d in defs]
        new = def_paths.index(file_path)
        defs[new].setIsPreferred(True)
        hda_node.setParms({
            "representation": str(representation["_id"])
        })

    def remove(self, container):
        node = container["node"]
@@ -1,5 +1,7 @@
import pyblish.api

from openpype.pipeline.publish import KnownPublishError


class CollectOutputSOPPath(pyblish.api.InstancePlugin):
    """Collect the out node's SOP/COP Path value."""

@@ -63,8 +65,8 @@ class CollectOutputSOPPath(pyblish.api.InstancePlugin):
            out_node = node.parm("startnode").evalAsNode()

        else:
            raise ValueError(
                "ROP node type '%s' is" " not supported." % node_type
            raise KnownPublishError(
                "ROP node type '{}' is not supported.".format(node_type)
            )

        if not out_node:
@@ -80,14 +80,9 @@ class CollectVrayROPRenderProducts(pyblish.api.InstancePlugin):
    def get_beauty_render_product(self, prefix, suffix="<reName>"):
        """Return the beauty output filename if render element enabled
        """
        # Remove aov suffix from the product: `prefix.aov_suffix` -> `prefix`
        aov_parm = ".{}".format(suffix)
        beauty_product = None
        if aov_parm in prefix:
            beauty_product = prefix.replace(aov_parm, "")
        else:
            beauty_product = prefix

        return beauty_product
        return prefix.replace(aov_parm, "")

    def get_render_element_name(self, node, prefix, suffix="<reName>"):
        """Return the output filename using the AOV prefix and suffix
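The V-Ray hunk above folds the `if aov_parm in prefix` branch away because `str.replace` already returns the string unchanged when the substring is absent, so the explicit check and the intermediate variable were redundant. A minimal standalone sketch of that behaviour (the sample prefixes are illustrative, not taken from the repository):

```python
def get_beauty_render_product(prefix, suffix="<reName>"):
    # Strip the AOV suffix token (".<reName>") from the render product
    # prefix; str.replace is a no-op when the token is not present.
    aov_parm = ".{}".format(suffix)
    return prefix.replace(aov_parm, "")


# Token present: it is removed.
print(get_beauty_render_product("render/beauty.<reName>"))  # render/beauty
# Token absent: the prefix passes through unchanged.
print(get_beauty_render_product("render/beauty"))  # render/beauty
```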
@@ -2,7 +2,7 @@ import pyblish.api

from openpype.lib import version_up
from openpype.pipeline import registered_host
from openpype.action import get_errored_plugins_from_data
from openpype.pipeline.publish import get_errored_plugins_from_context
from openpype.hosts.houdini.api import HoudiniHost
from openpype.pipeline.publish import KnownPublishError


@@ -27,7 +27,7 @@ class IncrementCurrentFile(pyblish.api.ContextPlugin):

    def process(self, context):

        errored_plugins = get_errored_plugins_from_data(context)
        errored_plugins = get_errored_plugins_from_context(context)
        if any(
            plugin.__name__ == "HoudiniSubmitPublishDeadline"
            for plugin in errored_plugins

@@ -40,9 +40,10 @@ class IncrementCurrentFile(pyblish.api.ContextPlugin):
        # Filename must not have changed since collecting
        host = registered_host()  # type: HoudiniHost
        current_file = host.current_file()
        assert (
            context.data["currentFile"] == current_file
        ), "Collected filename mismatches from current scene name."
        if context.data["currentFile"] != current_file:
            raise KnownPublishError(
                "Collected filename mismatches from current scene name."
            )

        new_filepath = version_up(current_file)
        host.save_workfile(new_filepath)
@@ -1,5 +1,6 @@
import pyblish.api

from openpype.pipeline.publish import PublishValidationError
from openpype.hosts.houdini.api import lib
import hou


@@ -30,7 +31,7 @@ class ValidateAnimationSettings(pyblish.api.InstancePlugin):

        invalid = self.get_invalid(instance)
        if invalid:
            raise RuntimeError(
            raise PublishValidationError(
                "Output settings do no match for '%s'" % instance
            )

@@ -36,11 +36,11 @@ class ValidateRemotePublishOutNode(pyblish.api.ContextPlugin):
        if node.parm("shellexec").eval():
            self.raise_error("Must not execute in shell")
        if node.parm("prerender").eval() != cmd:
            self.raise_error(("REMOTE_PUBLISH node does not have "
                              "correct prerender script."))
            self.raise_error("REMOTE_PUBLISH node does not have "
                             "correct prerender script.")
        if node.parm("lprerender").eval() != "python":
            self.raise_error(("REMOTE_PUBLISH node prerender script "
                              "type not set to 'python'"))
            self.raise_error("REMOTE_PUBLISH node prerender script "
                             "type not set to 'python'")

    @classmethod
    def repair(cls, context):

@@ -48,5 +48,4 @@ class ValidateRemotePublishOutNode(pyblish.api.ContextPlugin):
        lib.create_remote_publish_node(force=True)

    def raise_error(self, message):
        self.log.error(message)
        raise PublishValidationError(message, title=self.label)
        raise PublishValidationError(message)
@@ -0,0 +1,90 @@
# -*- coding: utf-8 -*-
import pyblish.api
from openpype.pipeline import (
    PublishValidationError,
    OptionalPyblishPluginMixin
)
from openpype.pipeline.publish import RepairAction
from openpype.hosts.houdini.api.action import SelectROPAction

import os
import hou


class SetDefaultViewSpaceAction(RepairAction):
    label = "Set default view colorspace"
    icon = "mdi.monitor"


class ValidateReviewColorspace(pyblish.api.InstancePlugin,
                               OptionalPyblishPluginMixin):
    """Validate Review Colorspace parameters.

    It checks if 'OCIO Colorspace' parameter was set to valid value.
    """

    order = pyblish.api.ValidatorOrder + 0.1
    families = ["review"]
    hosts = ["houdini"]
    label = "Validate Review Colorspace"
    actions = [SetDefaultViewSpaceAction, SelectROPAction]

    optional = True

    def process(self, instance):

        if not self.is_active(instance.data):
            return

        if os.getenv("OCIO") is None:
            self.log.debug(
                "Using Houdini's Default Color Management, "
                " skipping check.."
            )
            return

        rop_node = hou.node(instance.data["instance_node"])
        if rop_node.evalParm("colorcorrect") != 2:
            # any colorspace settings other than default requires
            # 'Color Correct' parm to be set to 'OpenColorIO'
            raise PublishValidationError(
                "'Color Correction' parm on '{}' ROP must be set to"
                " 'OpenColorIO'".format(rop_node.path())
            )

        if rop_node.evalParm("ociocolorspace") not in \
                hou.Color.ocio_spaces():

            raise PublishValidationError(
                "Invalid value: Colorspace name doesn't exist.\n"
                "Check 'OCIO Colorspace' parameter on '{}' ROP"
                .format(rop_node.path())
            )

    @classmethod
    def repair(cls, instance):
        """Set Default View Space Action.

        It is a helper action more than a repair action,
        used to set colorspace on opengl node to the default view.
        """
        from openpype.hosts.houdini.api.colorspace import get_default_display_view_colorspace  # noqa

        rop_node = hou.node(instance.data["instance_node"])

        if rop_node.evalParm("colorcorrect") != 2:
            rop_node.setParms({"colorcorrect": 2})
            cls.log.debug(
                "'Color Correction' parm on '{}' has been set to"
                " 'OpenColorIO'".format(rop_node.path())
            )

        # Get default view colorspace name
        default_view_space = get_default_display_view_colorspace()

        rop_node.setParms({"ociocolorspace": default_view_space})
        cls.log.info(
            "'OCIO Colorspace' parm on '{}' has been set to "
            "the default view color space '{}'"
            .format(rop_node, default_view_space)
        )
@@ -24,7 +24,7 @@ class ValidateUSDRenderProductNames(pyblish.api.InstancePlugin):

        if not os.path.isabs(filepath):
            invalid.append(
                "Output file path is not " "absolute path: %s" % filepath
                "Output file path is not absolute path: %s" % filepath
            )

        if invalid:
@@ -6,7 +6,7 @@ from typing import Any, Dict, Union

import six
from openpype.pipeline.context_tools import (
    get_current_project, get_current_project_asset,)
    get_current_project, get_current_project_asset)
from pymxs import runtime as rt

JSON_PREFIX = "JSON::"

@@ -312,3 +312,98 @@ def set_timeline(frameStart, frameEnd):
    """
    rt.animationRange = rt.interval(frameStart, frameEnd)
    return rt.animationRange


def unique_namespace(namespace, format="%02d",
                     prefix="", suffix="", con_suffix="CON"):
    """Return unique namespace

    Arguments:
        namespace (str): Name of namespace to consider
        format (str, optional): Formatting of the given iteration number
        suffix (str, optional): Only consider namespaces with this suffix.
        con_suffix: max only, for finding the name of the master container

    >>> unique_namespace("bar")
    # bar01
    >>> unique_namespace(":hello")
    # :hello01
    >>> unique_namespace("bar:", suffix="_NS")
    # bar01_NS:

    """

    def current_namespace():
        current = namespace
        # When inside a namespace Max adds no trailing :
        if not current.endswith(":"):
            current += ":"
        return current

    # Always check against the absolute namespace root
    # There's no clash with :x if we're defining namespace :a:x
    ROOT = ":" if namespace.startswith(":") else current_namespace()

    # Strip trailing `:` tokens since we might want to add a suffix
    start = ":" if namespace.startswith(":") else ""
    end = ":" if namespace.endswith(":") else ""
    namespace = namespace.strip(":")
    if ":" in namespace:
        # Split off any nesting that we don't uniqify anyway.
        parents, namespace = namespace.rsplit(":", 1)
        start += parents + ":"
        ROOT += start

    iteration = 1
    increment_version = True
    while increment_version:
        nr_namespace = namespace + format % iteration
        unique = prefix + nr_namespace + suffix
        container_name = f"{unique}:{namespace}{con_suffix}"
        if not rt.getNodeByName(container_name):
            name_space = start + unique + end
            increment_version = False
            return name_space
        else:
            increment_version = True
            iteration += 1


def get_namespace(container_name):
    """Get the namespace and name of the sub-container

    Args:
        container_name (str): the name of master container

    Raises:
        RuntimeError: when there is no master container found

    Returns:
        namespace (str): namespace of the sub-container
        name (str): name of the sub-container
    """
    node = rt.getNodeByName(container_name)
    if not node:
        raise RuntimeError("Master Container Not Found..")
    name = rt.getUserProp(node, "name")
    namespace = rt.getUserProp(node, "namespace")
    return namespace, name


def object_transform_set(container_children):
    """A function which allows to store the transform of
    previous loaded object(s)
    Args:
        container_children(list): A list of nodes

    Returns:
        transform_set (dict): A dict with all transform data of
            the previous loaded object(s)
    """
    transform_set = {}
    for node in container_children:
        name = f"{node.name}.transform"
        transform_set[name] = node.pos
        name = f"{node.name}.scale"
        transform_set[name] = node.scale
    return transform_set
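The `unique_namespace` helper added in the lib.py hunk above can be exercised without a running 3ds Max session by swapping the `rt.getNodeByName` scene lookup for an in-memory set of taken container names. This sketch reproduces the numbering behaviour shown in the docstring examples; the `taken` parameter is an assumption introduced here for testability and is not part of the repository API (the nested-namespace handling is also omitted):

```python
def unique_namespace(namespace, taken=frozenset(), format="%02d",
                     prefix="", suffix="", con_suffix="CON"):
    # Same numbering scheme as the 3ds Max helper above; `taken` stands
    # in for the rt.getNodeByName lookup against the scene.
    start = ":" if namespace.startswith(":") else ""
    end = ":" if namespace.endswith(":") else ""
    namespace = namespace.strip(":")
    iteration = 1
    while True:
        unique = prefix + namespace + format % iteration + suffix
        # The helper probes for the master container "<unique>:<name>CON"
        if f"{unique}:{namespace}{con_suffix}" not in taken:
            return start + unique + end
        iteration += 1


print(unique_namespace("bar"))                          # bar01
print(unique_namespace(":hello"))                       # :hello01
print(unique_namespace("bar:", suffix="_NS"))           # bar01_NS:
print(unique_namespace("bar", taken={"bar01:barCON"}))  # bar02
```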
@@ -37,13 +37,10 @@ class RenderSettings(object):
    def set_render_camera(self, selection):
        for sel in selection:
            # to avoid Attribute Error from pymxs wrapper
            found = False
            if rt.classOf(sel) in rt.Camera.classes:
                found = True
                rt.viewport.setCamera(sel)
                break
        if not found:
            raise RuntimeError("Camera not found")
                return
        raise RuntimeError("Active Camera not found")

    def render_output(self, container):
        folder = rt.maxFilePath

@@ -113,7 +110,8 @@ class RenderSettings(object):
        # for setting up renderable camera
        arv = rt.MAXToAOps.ArnoldRenderView()
        render_camera = rt.viewport.GetCamera()
        arv.setOption("Camera", str(render_camera))
        if render_camera:
            arv.setOption("Camera", str(render_camera))

        # TODO: add AOVs and extension
        img_fmt = self._project_settings["max"]["RenderSettings"]["image_format"]  # noqa
@@ -15,8 +15,10 @@ from openpype.pipeline import (
 )
 from openpype.hosts.max.api.menu import OpenPypeMenu
 from openpype.hosts.max.api import lib
+from openpype.hosts.max.api.plugin import MS_CUSTOM_ATTRIB
 from openpype.hosts.max import MAX_HOST_DIR
+
 
 from pymxs import runtime as rt  # noqa
 
 log = logging.getLogger("openpype.hosts.max")

@@ -152,17 +154,18 @@ def ls() -> list:
         yield lib.read(container)
 
 
-def containerise(name: str, nodes: list, context, loader=None, suffix="_CON"):
+def containerise(name: str, nodes: list, context,
+                 namespace=None, loader=None, suffix="_CON"):
     data = {
         "schema": "openpype:container-2.0",
         "id": AVALON_CONTAINER_ID,
         "name": name,
-        "namespace": "",
+        "namespace": namespace or "",
        "loader": loader,
         "representation": context["representation"]["_id"],
     }
 
-    container_name = f"{name}{suffix}"
+    container_name = f"{namespace}:{name}{suffix}"
     container = rt.container(name=container_name)
     for node in nodes:
         node.Parent = container

@@ -170,3 +173,53 @@ def containerise(name: str, nodes: list, context, loader=None, suffix="_CON"):
     if not lib.imprint(container_name, data):
         print(f"imprinting of {container_name} failed.")
     return container
+
+
+def load_custom_attribute_data():
+    """Re-load the OpenPype/AYON custom parameter built by the creator.
+
+    Returns:
+        attribute: the custom OP attributes set in Maxscript
+    """
+    return rt.Execute(MS_CUSTOM_ATTRIB)
+
+
+def import_custom_attribute_data(container: str, selections: list):
+    """Import the OpenPype/AYON custom parameter built by the creator.
+
+    Args:
+        container (str): target container which adds custom attributes
+        selections (list): nodes to be added into
+            group in custom attributes
+    """
+    attrs = load_custom_attribute_data()
+    modifier = rt.EmptyModifier()
+    rt.addModifier(container, modifier)
+    container.modifiers[0].name = "OP Data"
+    rt.custAttributes.add(container.modifiers[0], attrs)
+    node_list = []
+    sel_list = []
+    for i in selections:
+        node_ref = rt.NodeTransformMonitor(node=i)
+        node_list.append(node_ref)
+        sel_list.append(str(i))
+
+    # Setting the property
+    rt.setProperty(
+        container.modifiers[0].openPypeData,
+        "all_handles", node_list)
+    rt.setProperty(
+        container.modifiers[0].openPypeData,
+        "sel_list", sel_list)
+
+
+def update_custom_attribute_data(container: str, selections: list):
+    """Update the OpenPype/AYON custom parameter built by the creator.
+
+    Args:
+        container (str): target container which adds custom attributes
+        selections (list): nodes to be added into
+            group in custom attributes
+    """
+    if container.modifiers[0].name == "OP Data":
+        rt.deleteModifier(container, container.modifiers[0])
+    import_custom_attribute_data(container, selections)

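For readers outside 3ds Max, the naming convention `containerise` now uses can be sketched in plain Python. The helper below is illustrative only (it is not part of this commit); it shows how the new `namespace` argument changes the container name:

```python
def container_name(name, namespace=None, suffix="_CON"):
    # With a namespace, containerise() now names the container
    # "<namespace>:<name><suffix>" instead of the old "<name><suffix>".
    if namespace:
        return f"{namespace}:{name}{suffix}"
    return f"{name}{suffix}"
```

This is why every loader in the diff passes the `unique_namespace(...)` result through to `containerise`: the instance node name itself now carries the namespace.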
@@ -14,7 +14,6 @@ class CreateRender(plugin.MaxCreator):
 
     def create(self, subset_name, instance_data, pre_create_data):
         from pymxs import runtime as rt
-        sel_obj = list(rt.selection)
         file = rt.maxFileName
         filename, _ = os.path.splitext(file)
         instance_data["AssetName"] = filename

@@ -1,7 +1,16 @@
 import os
 
 from openpype.hosts.max.api import lib, maintained_selection
-from openpype.hosts.max.api.pipeline import containerise
+from openpype.hosts.max.api.lib import (
+    unique_namespace,
+    get_namespace,
+    object_transform_set
+)
+from openpype.hosts.max.api.pipeline import (
+    containerise,
+    import_custom_attribute_data,
+    update_custom_attribute_data
+)
 from openpype.pipeline import get_representation_path, load
 
 

@@ -13,50 +22,76 @@ class FbxLoader(load.LoaderPlugin):
     order = -9
     icon = "code-fork"
     color = "white"
+    postfix = "param"
 
     def load(self, context, name=None, namespace=None, data=None):
         from pymxs import runtime as rt
 
         filepath = self.filepath_from_context(context)
         filepath = os.path.normpath(filepath)
         rt.FBXImporterSetParam("Animation", True)
         rt.FBXImporterSetParam("Camera", True)
         rt.FBXImporterSetParam("AxisConversionMethod", True)
+        rt.FBXImporterSetParam("Mode", rt.Name("create"))
         rt.FBXImporterSetParam("Preserveinstances", True)
         rt.ImportFile(
             filepath,
             rt.name("noPrompt"),
             using=rt.FBXIMP)
 
-        container = rt.GetNodeByName(f"{name}")
-        if not container:
-            container = rt.Container()
-            container.name = f"{name}"
+        namespace = unique_namespace(
+            name + "_",
+            suffix="_",
+        )
+        container = rt.container(
+            name=f"{namespace}:{name}_{self.postfix}")
+        selections = rt.GetCurrentSelection()
+        import_custom_attribute_data(container, selections)
 
-        for selection in rt.GetCurrentSelection():
+        for selection in selections:
             selection.Parent = container
+            selection.name = f"{namespace}:{selection.name}"
 
         return containerise(
-            name, [container], context, loader=self.__class__.__name__)
+            name, [container], context,
+            namespace, loader=self.__class__.__name__)
 
     def update(self, container, representation):
         from pymxs import runtime as rt
 
         path = get_representation_path(representation)
-        node = rt.GetNodeByName(container["instance_node"])
-        rt.Select(node.Children)
-        fbx_reimport_cmd = (
-            f"""
-
-            FBXImporterSetParam "Animation" true
-            FBXImporterSetParam "Cameras" true
-            FBXImporterSetParam "AxisConversionMethod" true
-            FbxExporterSetParam "UpAxis" "Y"
-            FbxExporterSetParam "Preserveinstances" true
-
-            importFile @"{path}" #noPrompt using:FBXIMP
-            """)
-        rt.Execute(fbx_reimport_cmd)
+        node_name = container["instance_node"]
+        node = rt.getNodeByName(node_name)
+        namespace, name = get_namespace(node_name)
+        sub_node_name = f"{namespace}:{name}_{self.postfix}"
+        inst_container = rt.getNodeByName(sub_node_name)
+        rt.Select(inst_container.Children)
+        transform_data = object_transform_set(inst_container.Children)
+        for prev_fbx_obj in rt.selection:
+            if rt.isValidNode(prev_fbx_obj):
+                rt.Delete(prev_fbx_obj)
+
+        rt.FBXImporterSetParam("Animation", True)
+        rt.FBXImporterSetParam("Camera", True)
+        rt.FBXImporterSetParam("Mode", rt.Name("merge"))
+        rt.FBXImporterSetParam("AxisConversionMethod", True)
+        rt.FBXImporterSetParam("Preserveinstances", True)
+        rt.ImportFile(
+            path, rt.name("noPrompt"), using=rt.FBXIMP)
+        current_fbx_objects = rt.GetCurrentSelection()
+        for fbx_object in current_fbx_objects:
+            if fbx_object.Parent != inst_container:
+                fbx_object.Parent = inst_container
+            fbx_object.name = f"{namespace}:{fbx_object.name}"
+            fbx_object.pos = transform_data[
+                f"{fbx_object.name}.transform"]
+            fbx_object.scale = transform_data[
+                f"{fbx_object.name}.scale"]
+
+        for children in node.Children:
+            if rt.classOf(children) == rt.Container:
+                if children.name == sub_node_name:
+                    update_custom_attribute_data(
+                        children, current_fbx_objects)
 
         with maintained_selection():
             rt.Select(node)

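Every loader in this commit calls `unique_namespace(name + "_", suffix="_")` from `openpype.hosts.max.api.lib`, whose implementation is not part of this diff. A minimal pure-Python sketch of the idea, assuming an incrementing three-digit counter and an explicit `existing` set standing in for the scene's node names (the real helper inspects the 3ds Max scene instead):

```python
import itertools

def unique_namespace(prefix, suffix="_", existing=()):
    # Sketch only: return the first "<prefix><NNN><suffix>" candidate
    # not already taken. The real helper checks live scene node names.
    for i in itertools.count(1):
        candidate = f"{prefix}{i:03d}{suffix}"
        if candidate not in existing:
            return candidate
```

So loading the same product twice yields namespaces like `camMain_001_` and `camMain_002_`, keeping the renamed nodes (`f"{namespace}:{node.name}"`) from colliding.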
@@ -1,7 +1,15 @@
 import os
 
 from openpype.hosts.max.api import lib
-from openpype.hosts.max.api.pipeline import containerise
+from openpype.hosts.max.api.lib import (
+    unique_namespace,
+    get_namespace,
+    object_transform_set
+)
+from openpype.hosts.max.api.pipeline import (
+    containerise, import_custom_attribute_data,
+    update_custom_attribute_data
+)
 from openpype.pipeline import get_representation_path, load
 
 

@@ -16,22 +24,34 @@ class MaxSceneLoader(load.LoaderPlugin):
     order = -8
     icon = "code-fork"
     color = "green"
+    postfix = "param"
 
     def load(self, context, name=None, namespace=None, data=None):
         from pymxs import runtime as rt
 
         path = self.filepath_from_context(context)
         path = os.path.normpath(path)
         # import the max scene by using "merge file"
         path = path.replace('\\', '/')
-        rt.MergeMaxFile(path)
+        rt.MergeMaxFile(path, quiet=True, includeFullGroup=True)
         max_objects = rt.getLastMergedNodes()
-        max_container = rt.Container(name=f"{name}")
-        for max_object in max_objects:
-            max_object.Parent = max_container
+        max_object_names = [obj.name for obj in max_objects]
+        # implement the OP/AYON custom attributes before load
+        max_container = []
+        namespace = unique_namespace(
+            name + "_",
+            suffix="_",
+        )
+        container_name = f"{namespace}:{name}_{self.postfix}"
+        container = rt.Container(name=container_name)
+        import_custom_attribute_data(container, max_objects)
+        max_container.append(container)
+        max_container.extend(max_objects)
+        for max_obj, obj_name in zip(max_objects, max_object_names):
+            max_obj.name = f"{namespace}:{obj_name}"
         return containerise(
-            name, [max_container], context, loader=self.__class__.__name__)
+            name, max_container, context,
+            namespace, loader=self.__class__.__name__)
 
     def update(self, container, representation):
         from pymxs import runtime as rt

@@ -39,15 +59,32 @@ class MaxSceneLoader(load.LoaderPlugin):
         path = get_representation_path(representation)
         node_name = container["instance_node"]
-
-        rt.MergeMaxFile(path,
-                        rt.Name("noRedraw"),
-                        rt.Name("deleteOldDups"),
-                        rt.Name("useSceneMtlDups"))
+        node = rt.getNodeByName(node_name)
+        namespace, name = get_namespace(node_name)
+        sub_container_name = f"{namespace}:{name}_{self.postfix}"
+        # delete the old container with attribute
+        # delete old duplicate
+        rt.Select(node.Children)
+        transform_data = object_transform_set(node.Children)
+        for prev_max_obj in rt.GetCurrentSelection():
+            if rt.isValidNode(prev_max_obj) and prev_max_obj.name != sub_container_name:  # noqa
+                rt.Delete(prev_max_obj)
+        rt.MergeMaxFile(path, rt.Name("deleteOldDups"))
 
-        max_objects = rt.getLastMergedNodes()
-        container_node = rt.GetNodeByName(node_name)
-        for max_object in max_objects:
-            max_object.Parent = container_node
+        current_max_objects = rt.getLastMergedNodes()
+        current_max_object_names = [obj.name for obj
+                                    in current_max_objects]
+        sub_container = rt.getNodeByName(sub_container_name)
+        update_custom_attribute_data(sub_container, current_max_objects)
+        for max_object in current_max_objects:
+            max_object.Parent = node
+        for max_obj, obj_name in zip(current_max_objects,
+                                     current_max_object_names):
+            max_obj.name = f"{namespace}:{obj_name}"
+            max_obj.pos = transform_data[
+                f"{max_obj.name}.transform"]
+            max_obj.scale = transform_data[
+                f"{max_obj.name}.scale"]
 
         lib.imprint(container["instance_node"], {
             "representation": str(representation["_id"])

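The `update()` methods above snapshot transforms with `object_transform_set` before deleting the old nodes, then restore them on the re-imported, renamed nodes. The helper's implementation is not in this diff; a pure-Python stand-in, assuming any object with `name`/`pos`/`scale` attributes in place of real pymxs nodes:

```python
from types import SimpleNamespace

def object_transform_set(children):
    # Illustrative sketch: snapshot each node's position and scale,
    # keyed "<node name>.transform" / "<node name>.scale" -- the exact
    # key pattern the update() methods read back after re-import.
    transforms = {}
    for child in children:
        transforms[f"{child.name}.transform"] = child.pos
        transforms[f"{child.name}.scale"] = child.scale
    return transforms

box = SimpleNamespace(name="geo_001_:box", pos=(0, 0, 0), scale=(1, 1, 1))
data = object_transform_set([box])
```

Because the snapshot is keyed by the already-namespaced node name, the lookups only succeed after the re-imported nodes have been renamed back into the same namespace.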
@@ -1,8 +1,14 @@
 import os
 from openpype.pipeline import load, get_representation_path
-from openpype.hosts.max.api.pipeline import containerise
+from openpype.hosts.max.api.pipeline import (
+    containerise,
+    import_custom_attribute_data,
+    update_custom_attribute_data
+)
 from openpype.hosts.max.api import lib
-from openpype.hosts.max.api.lib import maintained_selection
+from openpype.hosts.max.api.lib import (
+    maintained_selection, unique_namespace
+)
 
 
 class ModelAbcLoader(load.LoaderPlugin):

@@ -14,6 +20,7 @@ class ModelAbcLoader(load.LoaderPlugin):
     order = -10
     icon = "code-fork"
     color = "orange"
+    postfix = "param"
 
     def load(self, context, name=None, namespace=None, data=None):
         from pymxs import runtime as rt

@@ -30,7 +37,7 @@ class ModelAbcLoader(load.LoaderPlugin):
         rt.AlembicImport.CustomAttributes = True
         rt.AlembicImport.UVs = True
         rt.AlembicImport.VertexColors = True
-        rt.importFile(file_path, rt.name("noPrompt"))
+        rt.importFile(file_path, rt.name("noPrompt"), using=rt.AlembicImport)
 
         abc_after = {
             c

@@ -45,9 +52,22 @@ class ModelAbcLoader(load.LoaderPlugin):
             self.log.error("Something failed when loading.")
 
         abc_container = abc_containers.pop()
+        import_custom_attribute_data(
+            abc_container, abc_container.Children)
+
+        namespace = unique_namespace(
+            name + "_",
+            suffix="_",
+        )
+        for abc_object in abc_container.Children:
+            abc_object.name = f"{namespace}:{abc_object.name}"
+        # rename the abc container with namespace
+        abc_container_name = f"{namespace}:{name}_{self.postfix}"
+        abc_container.name = abc_container_name
 
         return containerise(
-            name, [abc_container], context, loader=self.__class__.__name__
+            name, [abc_container], context,
+            namespace, loader=self.__class__.__name__
         )
 
     def update(self, container, representation):

@@ -55,21 +75,19 @@ class ModelAbcLoader(load.LoaderPlugin):
 
         path = get_representation_path(representation)
         node = rt.GetNodeByName(container["instance_node"])
-        rt.Select(node.Children)
-
-        for alembic in rt.Selection:
-            abc = rt.GetNodeByName(alembic.name)
-            rt.Select(abc.Children)
-            for abc_con in rt.Selection:
-                container = rt.GetNodeByName(abc_con.name)
-                container.source = path
-                rt.Select(container.Children)
-                for abc_obj in rt.Selection:
-                    alembic_obj = rt.GetNodeByName(abc_obj.name)
-                    alembic_obj.source = path
 
         with maintained_selection():
-            rt.Select(node)
+            rt.Select(node.Children)
+
+            for alembic in rt.Selection:
+                abc = rt.GetNodeByName(alembic.name)
+                update_custom_attribute_data(abc, abc.Children)
+                rt.Select(abc.Children)
+                for abc_con in abc.Children:
+                    abc_con.source = path
+                    rt.Select(abc_con.Children)
+                    for abc_obj in abc_con.Children:
+                        abc_obj.source = path
 
         lib.imprint(
             container["instance_node"],

@@ -1,7 +1,15 @@
 import os
 from openpype.pipeline import load, get_representation_path
-from openpype.hosts.max.api.pipeline import containerise
+from openpype.hosts.max.api.pipeline import (
+    containerise, import_custom_attribute_data,
+    update_custom_attribute_data
+)
 from openpype.hosts.max.api import lib
+from openpype.hosts.max.api.lib import (
+    unique_namespace,
+    get_namespace,
+    object_transform_set
+)
 from openpype.hosts.max.api.lib import maintained_selection
 
 

@@ -13,6 +21,7 @@ class FbxModelLoader(load.LoaderPlugin):
     order = -9
     icon = "code-fork"
     color = "white"
+    postfix = "param"
 
     def load(self, context, name=None, namespace=None, data=None):
         from pymxs import runtime as rt

@@ -20,39 +29,69 @@ class FbxModelLoader(load.LoaderPlugin):
         filepath = os.path.normpath(self.filepath_from_context(context))
         rt.FBXImporterSetParam("Animation", False)
         rt.FBXImporterSetParam("Cameras", False)
+        rt.FBXImporterSetParam("Mode", rt.Name("create"))
         rt.FBXImporterSetParam("Preserveinstances", True)
         rt.importFile(filepath, rt.name("noPrompt"), using=rt.FBXIMP)
 
-        container = rt.GetNodeByName(name)
-        if not container:
-            container = rt.Container()
-            container.name = name
+        namespace = unique_namespace(
+            name + "_",
+            suffix="_",
+        )
+        container = rt.container(
+            name=f"{namespace}:{name}_{self.postfix}")
+        selections = rt.GetCurrentSelection()
+        import_custom_attribute_data(container, selections)
 
-        for selection in rt.GetCurrentSelection():
+        for selection in selections:
             selection.Parent = container
+            selection.name = f"{namespace}:{selection.name}"
 
         return containerise(
-            name, [container], context, loader=self.__class__.__name__
+            name, [container], context,
+            namespace, loader=self.__class__.__name__
         )
 
     def update(self, container, representation):
         from pymxs import runtime as rt
         path = get_representation_path(representation)
-        node = rt.getNodeByName(container["instance_node"])
-        rt.select(node.Children)
+        node_name = container["instance_node"]
+        node = rt.getNodeByName(node_name)
+        namespace, name = get_namespace(node_name)
+        sub_node_name = f"{namespace}:{name}_{self.postfix}"
+        inst_container = rt.getNodeByName(sub_node_name)
+        rt.Select(inst_container.Children)
+        transform_data = object_transform_set(inst_container.Children)
+        for prev_fbx_obj in rt.selection:
+            if rt.isValidNode(prev_fbx_obj):
+                rt.Delete(prev_fbx_obj)
 
         rt.FBXImporterSetParam("Animation", False)
         rt.FBXImporterSetParam("Cameras", False)
+        rt.FBXImporterSetParam("Mode", rt.Name("merge"))
+        rt.FBXImporterSetParam("AxisConversionMethod", True)
+        rt.FBXImporterSetParam("UpAxis", "Y")
+        rt.FBXImporterSetParam("Preserveinstances", True)
         rt.importFile(path, rt.name("noPrompt"), using=rt.FBXIMP)
+        current_fbx_objects = rt.GetCurrentSelection()
+        for fbx_object in current_fbx_objects:
+            if fbx_object.Parent != inst_container:
+                fbx_object.Parent = inst_container
+            fbx_object.name = f"{namespace}:{fbx_object.name}"
+            fbx_object.pos = transform_data[
+                f"{fbx_object.name}.transform"]
+            fbx_object.scale = transform_data[
+                f"{fbx_object.name}.scale"]
+
+        for children in node.Children:
+            if rt.classOf(children) == rt.Container:
+                if children.name == sub_node_name:
+                    update_custom_attribute_data(
+                        children, current_fbx_objects)
 
         with maintained_selection():
             rt.Select(node)
 
         lib.imprint(
-            container["instance_node"],
+            node_name,
             {"representation": str(representation["_id"])},
         )

@@ -1,8 +1,18 @@
 import os
 
 from openpype.hosts.max.api import lib
-from openpype.hosts.max.api.lib import maintained_selection
-from openpype.hosts.max.api.pipeline import containerise
+from openpype.hosts.max.api.lib import (
+    unique_namespace,
+    get_namespace,
+    maintained_selection,
+    object_transform_set
+)
+from openpype.hosts.max.api.pipeline import (
+    containerise,
+    import_custom_attribute_data,
+    update_custom_attribute_data
+)
 from openpype.pipeline import get_representation_path, load
 
 

@@ -14,6 +24,7 @@ class ObjLoader(load.LoaderPlugin):
     order = -9
     icon = "code-fork"
     color = "white"
+    postfix = "param"
 
     def load(self, context, name=None, namespace=None, data=None):
         from pymxs import runtime as rt

@@ -22,36 +33,49 @@ class ObjLoader(load.LoaderPlugin):
         self.log.debug("Executing command to import..")
 
         rt.Execute(f'importFile @"{filepath}" #noPrompt using:ObjImp')
 
+        namespace = unique_namespace(
+            name + "_",
+            suffix="_",
+        )
         # create "missing" container for obj import
-        container = rt.Container()
-        container.name = name
-
+        container = rt.Container(name=f"{namespace}:{name}_{self.postfix}")
+        selections = rt.GetCurrentSelection()
+        import_custom_attribute_data(container, selections)
         # get current selection
-        for selection in rt.GetCurrentSelection():
+        for selection in selections:
             selection.Parent = container
-
-        asset = rt.GetNodeByName(name)
-
+            selection.name = f"{namespace}:{selection.name}"
         return containerise(
-            name, [asset], context, loader=self.__class__.__name__)
+            name, [container], context,
+            namespace, loader=self.__class__.__name__)
 
     def update(self, container, representation):
         from pymxs import runtime as rt
 
         path = get_representation_path(representation)
         node_name = container["instance_node"]
-        node = rt.GetNodeByName(node_name)
-
-        instance_name, _ = node_name.split("_")
-        container = rt.GetNodeByName(instance_name)
-        for child in container.Children:
-            rt.Delete(child)
+        node = rt.getNodeByName(node_name)
+        namespace, name = get_namespace(node_name)
+        sub_node_name = f"{namespace}:{name}_{self.postfix}"
+        inst_container = rt.getNodeByName(sub_node_name)
+        rt.Select(inst_container.Children)
+        transform_data = object_transform_set(inst_container.Children)
+        for prev_obj in rt.selection:
+            if rt.isValidNode(prev_obj):
+                rt.Delete(prev_obj)
 
         rt.Execute(f'importFile @"{path}" #noPrompt using:ObjImp')
-        # get current selection
-        for selection in rt.GetCurrentSelection():
-            selection.Parent = container
+        selections = rt.GetCurrentSelection()
+        update_custom_attribute_data(inst_container, selections)
+        for selection in selections:
+            selection.Parent = inst_container
+            selection.name = f"{namespace}:{selection.name}"
+            selection.pos = transform_data[
+                f"{selection.name}.transform"]
+            selection.scale = transform_data[
+                f"{selection.name}.scale"]
+        with maintained_selection():
+            rt.Select(node)

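The replacement for the fragile `node_name.split("_")` pattern is `get_namespace`, imported from `openpype.hosts.max.api.lib` but not shown in this diff. A hedged sketch of the split the `update()` methods rely on, assuming instance nodes named like `modelMain_001_:modelMain_CON` and a trailing `_CON`-style suffix:

```python
def get_namespace(container_name):
    # Sketch only: split "<namespace>:<name><suffix>" into its parts so
    # update() can rebuild the "<namespace>:<name>_param" sub-container
    # name. The real helper lives in openpype.hosts.max.api.lib.
    namespace, _, rest = container_name.partition(":")
    name = rest.rsplit("_", 1)[0]  # drop the "_CON"-style suffix
    return namespace, name
```

Unlike `split("_")`, this survives product names that themselves contain underscores, since only the namespace separator `:` and the final suffix are significant.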
@@ -1,8 +1,16 @@
 import os
 
 from openpype.hosts.max.api import lib
+from openpype.hosts.max.api.lib import (
+    unique_namespace,
+    get_namespace,
+    object_transform_set
+)
 from openpype.hosts.max.api.lib import maintained_selection
-from openpype.hosts.max.api.pipeline import containerise
+from openpype.hosts.max.api.pipeline import (
+    containerise,
+    import_custom_attribute_data
+)
 from openpype.pipeline import get_representation_path, load
 
 

@@ -15,6 +23,7 @@ class ModelUSDLoader(load.LoaderPlugin):
     order = -10
     icon = "code-fork"
     color = "orange"
+    postfix = "param"
 
     def load(self, context, name=None, namespace=None, data=None):
         from pymxs import runtime as rt

@@ -30,11 +39,24 @@ class ModelUSDLoader(load.LoaderPlugin):
         rt.LogLevel = rt.Name("info")
         rt.USDImporter.importFile(filepath,
                                   importOptions=import_options)
 
+        namespace = unique_namespace(
+            name + "_",
+            suffix="_",
+        )
         asset = rt.GetNodeByName(name)
+        import_custom_attribute_data(asset, asset.Children)
+        for usd_asset in asset.Children:
+            usd_asset.name = f"{namespace}:{usd_asset.name}"
+
+        asset_name = f"{namespace}:{name}_{self.postfix}"
+        asset.name = asset_name
+        # need to get the correct container after renamed
+        asset = rt.GetNodeByName(asset_name)
 
         return containerise(
-            name, [asset], context, loader=self.__class__.__name__)
+            name, [asset], context,
+            namespace, loader=self.__class__.__name__)
 
     def update(self, container, representation):
         from pymxs import runtime as rt

@@ -42,11 +64,16 @@ class ModelUSDLoader(load.LoaderPlugin):
         path = get_representation_path(representation)
         node_name = container["instance_node"]
         node = rt.GetNodeByName(node_name)
+        namespace, name = get_namespace(node_name)
+        sub_node_name = f"{namespace}:{name}_{self.postfix}"
+        transform_data = None
         for n in node.Children:
-            for r in n.Children:
-                rt.Delete(r)
+            rt.Select(n.Children)
+            transform_data = object_transform_set(n.Children)
+            for prev_usd_asset in rt.selection:
+                if rt.isValidNode(prev_usd_asset):
+                    rt.Delete(prev_usd_asset)
             rt.Delete(n)
-        instance_name, _ = node_name.split("_")
 
         import_options = rt.USDImporter.CreateOptions()
         base_filename = os.path.basename(path)

@@ -55,11 +82,20 @@ class ModelUSDLoader(load.LoaderPlugin):
 
         rt.LogPath = log_filepath
         rt.LogLevel = rt.Name("info")
-        rt.USDImporter.importFile(path,
-                                  importOptions=import_options)
+        rt.USDImporter.importFile(
+            path, importOptions=import_options)
 
-        asset = rt.GetNodeByName(instance_name)
+        asset = rt.GetNodeByName(name)
         asset.Parent = node
+        import_custom_attribute_data(asset, asset.Children)
+        for children in asset.Children:
+            children.name = f"{namespace}:{children.name}"
+            children.pos = transform_data[
+                f"{children.name}.transform"]
+            children.scale = transform_data[
+                f"{children.name}.scale"]
+
+        asset.name = sub_node_name
 
         with maintained_selection():
             rt.Select(node)

@@ -7,7 +7,12 @@ Because of limited api, alembics can be only loaded, but not easily updated.
 import os
 from openpype.pipeline import load, get_representation_path
 from openpype.hosts.max.api import lib, maintained_selection
-from openpype.hosts.max.api.pipeline import containerise
+from openpype.hosts.max.api.lib import unique_namespace
+from openpype.hosts.max.api.pipeline import (
+    containerise,
+    import_custom_attribute_data,
+    update_custom_attribute_data
+)
 
 
 class AbcLoader(load.LoaderPlugin):

@@ -19,6 +24,7 @@ class AbcLoader(load.LoaderPlugin):
     order = -10
     icon = "code-fork"
     color = "orange"
+    postfix = "param"
 
     def load(self, context, name=None, namespace=None, data=None):
         from pymxs import runtime as rt

@@ -33,7 +39,7 @@ class AbcLoader(load.LoaderPlugin):
         }
 
         rt.AlembicImport.ImportToRoot = False
-        rt.importFile(file_path, rt.name("noPrompt"))
+        rt.importFile(file_path, rt.name("noPrompt"), using=rt.AlembicImport)
 
         abc_after = {
             c

@@ -48,13 +54,27 @@ class AbcLoader(load.LoaderPlugin):
             self.log.error("Something failed when loading.")
 
         abc_container = abc_containers.pop()
 
-        for abc in rt.GetCurrentSelection():
+        selections = rt.GetCurrentSelection()
+        import_custom_attribute_data(
+            abc_container, abc_container.Children)
+        for abc in selections:
             for cam_shape in abc.Children:
                 cam_shape.playbackType = 2
 
+        namespace = unique_namespace(
+            name + "_",
+            suffix="_",
+        )
+        for abc_object in abc_container.Children:
+            abc_object.name = f"{namespace}:{abc_object.name}"
+        # rename the abc container with namespace
+        abc_container_name = f"{namespace}:{name}_{self.postfix}"
+        abc_container.name = abc_container_name
 
         return containerise(
-            name, [abc_container], context, loader=self.__class__.__name__
+            name, [abc_container], context,
+            namespace, loader=self.__class__.__name__
         )
 
     def update(self, container, representation):

@@ -63,28 +83,23 @@ class AbcLoader(load.LoaderPlugin):
         path = get_representation_path(representation)
         node = rt.GetNodeByName(container["instance_node"])
 
-        alembic_objects = self.get_container_children(node, "AlembicObject")
-        for alembic_object in alembic_objects:
-            alembic_object.source = path
-
-        lib.imprint(
-            container["instance_node"],
-            {"representation": str(representation["_id"])},
-        )
-
         with maintained_selection():
             rt.Select(node.Children)
 
         for alembic in rt.Selection:
             abc = rt.GetNodeByName(alembic.name)
+            update_custom_attribute_data(abc, abc.Children)
             rt.Select(abc.Children)
-            for abc_con in rt.Selection:
-                container = rt.GetNodeByName(abc_con.name)
-                container.source = path
-                rt.Select(container.Children)
-                for abc_obj in rt.Selection:
-                    alembic_obj = rt.GetNodeByName(abc_obj.name)
-                    alembic_obj.source = path
+            for abc_con in abc.Children:
+                abc_con.source = path
+                rt.Select(abc_con.Children)
+                for abc_obj in abc_con.Children:
+                    abc_obj.source = path
 
+        lib.imprint(
+            container["instance_node"],
+            {"representation": str(representation["_id"])},
+        )
 
     def switch(self, container, representation):
         self.update(container, representation)

@@ -1,7 +1,14 @@
 import os
 
 from openpype.hosts.max.api import lib, maintained_selection
-from openpype.hosts.max.api.pipeline import containerise
+from openpype.hosts.max.api.lib import (
+    unique_namespace, get_namespace
+)
+from openpype.hosts.max.api.pipeline import (
+    containerise,
+    import_custom_attribute_data,
+    update_custom_attribute_data
+)
 from openpype.pipeline import get_representation_path, load
 
 

@@ -13,6 +20,7 @@ class PointCloudLoader(load.LoaderPlugin):
     order = -8
     icon = "code-fork"
     color = "green"
+    postfix = "param"
 
     def load(self, context, name=None, namespace=None, data=None):
         """load point cloud by tyCache"""

@@ -22,10 +30,19 @@ class PointCloudLoader(load.LoaderPlugin):
         obj = rt.tyCache()
         obj.filename = filepath
 
-        prt_container = rt.GetNodeByName(obj.name)
+        namespace = unique_namespace(
+            name + "_",
+            suffix="_",
+        )
+        prt_container = rt.Container(
+            name=f"{namespace}:{name}_{self.postfix}")
+        import_custom_attribute_data(prt_container, [obj])
+        obj.Parent = prt_container
+        obj.name = f"{namespace}:{obj.name}"
 
         return containerise(
-            name, [prt_container], context, loader=self.__class__.__name__)
+            name, [prt_container], context,
+            namespace, loader=self.__class__.__name__)
 
     def update(self, container, representation):
         """update the container"""

@@ -33,15 +50,18 @@ class PointCloudLoader(load.LoaderPlugin):
 
         path = get_representation_path(representation)
-        node = rt.GetNodeByName(container["instance_node"])
-        with maintained_selection():
-            rt.Select(node.Children)
-            for prt in rt.Selection:
-                prt_object = rt.GetNodeByName(prt.name)
-                prt_object.filename = path
-
-        lib.imprint(container["instance_node"], {
-            "representation": str(representation["_id"])
-        })
+        namespace, name = get_namespace(container["instance_node"])
+        sub_node_name = f"{namespace}:{name}_{self.postfix}"
+        inst_container = rt.getNodeByName(sub_node_name)
+        update_custom_attribute_data(
+            inst_container, inst_container.Children)
+        for prt in inst_container.Children:
+            prt.filename = path
+        lib.imprint(container["instance_node"], {
+            "representation": str(representation["_id"])
+        })
 
     def switch(self, container, representation):
         self.update(container, representation)

@@ -5,8 +5,15 @@ from openpype.pipeline import (
     load,
     get_representation_path
 )
-from openpype.hosts.max.api.pipeline import containerise
+from openpype.hosts.max.api.pipeline import (
+    containerise,
+    import_custom_attribute_data,
+    update_custom_attribute_data
+)
 from openpype.hosts.max.api import lib
+from openpype.hosts.max.api.lib import (
+    unique_namespace, get_namespace
+)
 
 
 class RedshiftProxyLoader(load.LoaderPlugin):

@@ -18,6 +25,7 @@ class RedshiftProxyLoader(load.LoaderPlugin):
     order = -9
     icon = "code-fork"
     color = "white"
+    postfix = "param"
 
     def load(self, context, name=None, namespace=None, data=None):
         from pymxs import runtime as rt

@@ -30,24 +38,32 @@ class RedshiftProxyLoader(load.LoaderPlugin):
         if collections:
             rs_proxy.is_sequence = True
 
-        container = rt.container()
-        container.name = name
+        namespace = unique_namespace(
+            name + "_",
+            suffix="_",
+        )
+        container = rt.Container(
+            name=f"{namespace}:{name}_{self.postfix}")
         rs_proxy.Parent = container
 
-        asset = rt.getNodeByName(name)
+        rs_proxy.name = f"{namespace}:{rs_proxy.name}"
+        import_custom_attribute_data(container, [rs_proxy])
 
         return containerise(
-            name, [asset], context, loader=self.__class__.__name__)
+            name, [container], context,
+            namespace, loader=self.__class__.__name__)
 
     def update(self, container, representation):
         from pymxs import runtime as rt
 
         path = get_representation_path(representation)
-        node = rt.getNodeByName(container["instance_node"])
-        for children in node.Children:
-            children_node = rt.getNodeByName(children.name)
-            for proxy in children_node.Children:
-                proxy.file = path
+        namespace, name = get_namespace(container["instance_node"])
+        sub_node_name = f"{namespace}:{name}_{self.postfix}"
+        inst_container = rt.getNodeByName(sub_node_name)
+
+        update_custom_attribute_data(
+            inst_container, inst_container.Children)
+        for proxy in inst_container.Children:
+            proxy.file = path
 
         lib.imprint(container["instance_node"], {
             "representation": str(representation["_id"])

@@ -30,10 +30,12 @@ class CollectRender(pyblish.api.InstancePlugin):
         asset = get_current_asset_name()

         files_by_aov = RenderProducts().get_beauty(instance.name)
-        folder = folder.replace("\\", "/")
         aovs = RenderProducts().get_aovs(instance.name)
         files_by_aov.update(aovs)

+        camera = rt.viewport.GetCamera()
+        instance.data["cameras"] = [camera.name] if camera else None  # noqa
+
         if "expectedFiles" not in instance.data:
             instance.data["expectedFiles"] = list()
             instance.data["files"] = list()
@@ -22,8 +22,6 @@ class ExtractCameraAlembic(publish.Extractor, OptionalPyblishPluginMixin):
         start = float(instance.data.get("frameStartHandle", 1))
         end = float(instance.data.get("frameEndHandle", 1))

-        container = instance.data["instance_node"]
-
         self.log.info("Extracting Camera ...")

         stagingdir = self.staging_dir(instance)
@@ -19,9 +19,8 @@ class ExtractCameraFbx(publish.Extractor, OptionalPyblishPluginMixin):
     def process(self, instance):
         if not self.is_active(instance.data):
             return
-        container = instance.data["instance_node"]

-        self.log.info("Extracting Camera ...")
+        self.log.debug("Extracting Camera ...")
         stagingdir = self.staging_dir(instance)
         filename = "{name}.fbx".format(**instance.data)
@@ -18,10 +18,9 @@ class ExtractMaxSceneRaw(publish.Extractor, OptionalPyblishPluginMixin):
     def process(self, instance):
         if not self.is_active(instance.data):
             return
-        container = instance.data["instance_node"]

         # publish the raw scene for camera
-        self.log.info("Extracting Raw Max Scene ...")
+        self.log.debug("Extracting Raw Max Scene ...")

         stagingdir = self.staging_dir(instance)
         filename = "{name}.max".format(**instance.data)
@@ -20,9 +20,7 @@ class ExtractModel(publish.Extractor, OptionalPyblishPluginMixin):
         if not self.is_active(instance.data):
             return

-        container = instance.data["instance_node"]
-
-        self.log.info("Extracting Geometry ...")
+        self.log.debug("Extracting Geometry ...")

         stagingdir = self.staging_dir(instance)
         filename = "{name}.abc".format(**instance.data)
@@ -20,10 +20,7 @@ class ExtractModelFbx(publish.Extractor, OptionalPyblishPluginMixin):
         if not self.is_active(instance.data):
             return

-        container = instance.data["instance_node"]
-
-
-        self.log.info("Extracting Geometry ...")
+        self.log.debug("Extracting Geometry ...")

         stagingdir = self.staging_dir(instance)
         filename = "{name}.fbx".format(**instance.data)
@@ -20,9 +20,7 @@ class ExtractModelObj(publish.Extractor, OptionalPyblishPluginMixin):
         if not self.is_active(instance.data):
             return

-        container = instance.data["instance_node"]
-
-        self.log.info("Extracting Geometry ...")
+        self.log.debug("Extracting Geometry ...")

         stagingdir = self.staging_dir(instance)
         filename = "{name}.obj".format(**instance.data)
@@ -54,8 +54,6 @@ class ExtractAlembic(publish.Extractor):
         start = float(instance.data.get("frameStartHandle", 1))
         end = float(instance.data.get("frameEndHandle", 1))

-        container = instance.data["instance_node"]
-
         self.log.debug("Extracting pointcache ...")

         parent_dir = self.staging_dir(instance)
@@ -16,11 +16,10 @@ class ExtractRedshiftProxy(publish.Extractor):
     families = ["redshiftproxy"]

     def process(self, instance):
-        container = instance.data["instance_node"]
         start = int(instance.context.data.get("frameStart"))
         end = int(instance.context.data.get("frameEnd"))

-        self.log.info("Extracting Redshift Proxy...")
+        self.log.debug("Extracting Redshift Proxy...")
         stagingdir = self.staging_dir(instance)
         rs_filename = "{name}.rs".format(**instance.data)
         rs_filepath = os.path.join(stagingdir, rs_filename)
@@ -13,7 +13,6 @@ class ValidateMaxContents(pyblish.api.InstancePlugin):
     order = pyblish.api.ValidatorOrder
     families = ["camera",
                 "maxScene",
-                "maxrender",
                 "review"]
     hosts = ["max"]
     label = "Max Scene Contents"
@@ -0,0 +1,46 @@
+# -*- coding: utf-8 -*-
+import pyblish.api
+from openpype.pipeline import (
+    PublishValidationError,
+    OptionalPyblishPluginMixin)
+from openpype.pipeline.publish import RepairAction
+from openpype.hosts.max.api.lib import get_current_renderer
+
+from pymxs import runtime as rt
+
+
+class ValidateRenderableCamera(pyblish.api.InstancePlugin,
+                               OptionalPyblishPluginMixin):
+    """Validates Renderable Camera
+
+    Check if the renderable camera used for rendering
+    """
+
+    order = pyblish.api.ValidatorOrder
+    families = ["maxrender"]
+    hosts = ["max"]
+    label = "Renderable Camera"
+    optional = True
+    actions = [RepairAction]
+
+    def process(self, instance):
+        if not self.is_active(instance.data):
+            return
+        if not instance.data["cameras"]:
+            raise PublishValidationError(
+                "No renderable Camera found in scene."
+            )
+
+    @classmethod
+    def repair(cls, instance):
+
+        rt.viewport.setType(rt.Name("view_camera"))
+        camera = rt.viewport.GetCamera()
+        cls.log.info(f"Camera {camera} set as renderable camera")
+        renderer_class = get_current_renderer()
+        renderer = str(renderer_class).split(":")[0]
+        if renderer == "Arnold":
+            arv = rt.MAXToAOps.ArnoldRenderView()
+            arv.setOption("Camera", str(camera))
+            arv.close()
+        instance.data["cameras"] = [camera.name]
@@ -6,11 +6,6 @@ from openpype.pipeline import (
 from pymxs import runtime as rt
 from openpype.hosts.max.api.lib import reset_scene_resolution

-from openpype.pipeline.context_tools import (
-    get_current_project_asset,
-    get_current_project
-)
-

 class ValidateResolutionSetting(pyblish.api.InstancePlugin,
                                 OptionalPyblishPluginMixin):
@@ -43,22 +38,16 @@ class ValidateResolutionSetting(pyblish.api.InstancePlugin,
                 "on asset or shot.")

     def get_db_resolution(self, instance):
-        data = ["data.resolutionWidth", "data.resolutionHeight"]
-        project_resolution = get_current_project(fields=data)
-        project_resolution_data = project_resolution["data"]
-        asset_resolution = get_current_project_asset(fields=data)
-        asset_resolution_data = asset_resolution["data"]
-        # Set project resolution
-        project_width = int(
-            project_resolution_data.get("resolutionWidth", 1920))
-        project_height = int(
-            project_resolution_data.get("resolutionHeight", 1080))
-        width = int(
-            asset_resolution_data.get("resolutionWidth", project_width))
-        height = int(
-            asset_resolution_data.get("resolutionHeight", project_height))
+        asset_doc = instance.data["assetEntity"]
+        project_doc = instance.context.data["projectEntity"]
+        for data in [asset_doc["data"], project_doc["data"]]:
+            if "resolutionWidth" in data and "resolutionHeight" in data:
+                width = data["resolutionWidth"]
+                height = data["resolutionHeight"]
+                return int(width), int(height)

-        return width, height
+        # Defaults if not found in asset document or project document
+        return 1920, 1080

     @classmethod
     def repair(cls, instance):
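The asset-then-project fallback introduced in `get_db_resolution` can be sketched host-independently. This is an illustrative standalone function (the name `get_resolution` and the plain-dict inputs are assumptions, not the plugin's API) showing the same lookup order and defaults:

```python
def get_resolution(asset_data, project_data):
    """Return (width, height), preferring asset data over project data."""
    # Prefer the asset document, then the project document
    for data in [asset_data, project_data]:
        if "resolutionWidth" in data and "resolutionHeight" in data:
            return int(data["resolutionWidth"]), int(data["resolutionHeight"])
    # Defaults if not found in asset document or project document
    return 1920, 1080
```

Because the loop returns on the first document that carries both keys, an asset override always wins over the project-wide setting.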
@@ -177,12 +177,7 @@ class RenderSettings(object):
         # list all the aovs
         all_rs_aovs = cmds.ls(type='RedshiftAOV')
         for rs_aov in redshift_aovs:
-            rs_layername = rs_aov
-            if " " in rs_aov:
-                rs_renderlayer = rs_aov.replace(" ", "")
-                rs_layername = "rsAov_{}".format(rs_renderlayer)
-            else:
-                rs_layername = "rsAov_{}".format(rs_aov)
+            rs_layername = "rsAov_{}".format(rs_aov.replace(" ", ""))
             if rs_layername in all_rs_aovs:
                 continue
             cmds.rsCreateAov(type=rs_aov)

@@ -317,7 +312,7 @@ class RenderSettings(object):
         separators = [cmds.menuItem(i, query=True, label=True) for i in items]  # noqa: E501
         try:
             sep_idx = separators.index(aov_separator)
-        except ValueError as e:
+        except ValueError:
             six.reraise(
                 CreatorError,
                 CreatorError(
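The branchy AOV-name construction collapses to a single expression because `str.replace` is a no-op when the substring is absent. A quick standalone check of the equivalence (plain Python, no Maya required; the helper names are illustrative only):

```python
def old_layer_name(rs_aov):
    # Original branching logic from before the change
    rs_layername = rs_aov
    if " " in rs_aov:
        rs_renderlayer = rs_aov.replace(" ", "")
        rs_layername = "rsAov_{}".format(rs_renderlayer)
    else:
        rs_layername = "rsAov_{}".format(rs_aov)
    return rs_layername


def new_layer_name(rs_aov):
    # Simplified one-liner from the diff
    return "rsAov_{}".format(rs_aov.replace(" ", ""))
```

Both produce identical names for inputs with and without spaces, e.g. `"Depth Pass"` maps to `"rsAov_DepthPass"` either way.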
@@ -260,7 +260,7 @@ class MayaCreator(NewCreator, MayaCreatorBase):
                       default=True)
         ]

-    def apply_settings(self, project_settings, system_settings):
+    def apply_settings(self, project_settings):
         """Method called on initialization of plugin to apply settings."""

         settings_name = self.settings_name

@@ -683,7 +683,6 @@ class ReferenceLoader(Loader):
             loaded_containers.append(container)
             self._organize_containers(nodes, container)
             c += 1
-            namespace = None

         return loaded_containers
@@ -81,10 +81,8 @@ class CreateAnimation(plugin.MayaHiddenCreator):

         return defs

-    def apply_settings(self, project_settings, system_settings):
-        super(CreateAnimation, self).apply_settings(
-            project_settings, system_settings
-        )
+    def apply_settings(self, project_settings):
+        super(CreateAnimation, self).apply_settings(project_settings)
        # Hardcoding creator to be enabled due to existing settings would
        # disable the creator causing the creator plugin to not be
        # discoverable.
@@ -34,7 +34,7 @@ class CreateRenderlayer(plugin.RenderlayerCreator):
     render_settings = {}

     @classmethod
-    def apply_settings(cls, project_settings, system_settings):
+    def apply_settings(cls, project_settings):
         cls.render_settings = project_settings["maya"]["RenderSettings"]

     def create(self, subset_name, instance_data, pre_create_data):

@@ -21,7 +21,7 @@ class CreateUnrealSkeletalMesh(plugin.MayaCreator):
     # Defined in settings
     joint_hints = set()

-    def apply_settings(self, project_settings, system_settings):
+    def apply_settings(self, project_settings):
         """Apply project settings to creator"""
         settings = (
             project_settings["maya"]["create"]["CreateUnrealSkeletalMesh"]

@@ -16,7 +16,7 @@ class CreateUnrealStaticMesh(plugin.MayaCreator):
     # Defined in settings
     collision_prefixes = []

-    def apply_settings(self, project_settings, system_settings):
+    def apply_settings(self, project_settings):
         """Apply project settings to creator"""
         settings = project_settings["maya"]["create"]["CreateUnrealStaticMesh"]
         self.collision_prefixes = settings["collision_prefixes"]

@@ -22,7 +22,7 @@ class CreateVRayScene(plugin.RenderlayerCreator):
     singleton_node_name = "vraysceneMain"

     @classmethod
-    def apply_settings(cls, project_settings, system_settings):
+    def apply_settings(cls, project_settings):
         cls.render_settings = project_settings["maya"]["RenderSettings"]

     def create(self, subset_name, instance_data, pre_create_data):
@@ -13,8 +13,7 @@ class CreateYetiCache(plugin.MayaCreator):
     family = "yeticache"
     icon = "pagelines"

-    def __init__(self, *args, **kwargs):
-        super(CreateYetiCache, self).__init__(*args, **kwargs)
+    def get_instance_attr_defs(self):

         defs = [
             NumberDef("preroll",

@@ -36,3 +35,5 @@ class CreateYetiCache(plugin.MayaCreator):
                       default=3,
                       decimals=0)
         )
+
+        return defs
@@ -12,7 +12,6 @@ class ImportReference(InventoryAction):
     color = "#d8d8d8"

     def process(self, containers):
-        references = cmds.ls(type="reference")
         for container in containers:
             if container["loader"] != "ReferenceLoader":
                 print("Not a reference, skipping")

@@ -43,8 +43,6 @@ class MultiverseUsdLoader(load.LoaderPlugin):
         import multiverse

-        # Create the shape
-        shape = None
         transform = None
         with maintained_selection():
             cmds.namespace(addNamespace=namespace)
             with namespaced(namespace, new=False):
@@ -205,7 +205,7 @@ class ReferenceLoader(openpype.hosts.maya.api.plugin.ReferenceLoader):
             cmds.setAttr("{}.selectHandleZ".format(group_name), cz)

         if family == "rig":
-            self._post_process_rig(name, namespace, context, options)
+            self._post_process_rig(namespace, context, options)
         else:
             if "translate" in options:
                 if not attach_to_root and new_nodes:

@@ -229,7 +229,7 @@ class ReferenceLoader(openpype.hosts.maya.api.plugin.ReferenceLoader):
         members = get_container_members(container)
         self._lock_camera_transforms(members)

-    def _post_process_rig(self, name, namespace, context, options):
+    def _post_process_rig(self, namespace, context, options):

         nodes = self[:]
         create_rig_animation_instance(
@@ -53,8 +53,6 @@ class XgenLoader(openpype.hosts.maya.api.plugin.ReferenceLoader):
         )

-        # Reference xgen. Xgen does not like being referenced in under a group.
         new_nodes = []
-
         with maintained_selection():
             nodes = cmds.file(
                 maya_filepath,
@@ -15,6 +15,16 @@ from openpype.hosts.maya.api import lib
 from openpype.hosts.maya.api.pipeline import containerise


+# Do not reset these values on update but only apply on first load
+# to preserve any potential local overrides
+SKIP_UPDATE_ATTRS = {
+    "displayOutput",
+    "viewportDensity",
+    "viewportWidth",
+    "viewportLength",
+}
+
+
 def set_attribute(node, attr, value):
     """Wrapper of set attribute which ignores None values"""
     if value is None:

@@ -205,6 +215,8 @@ class YetiCacheLoader(load.LoaderPlugin):
             yeti_node = yeti_nodes[0]

             for attr, value in node_settings["attrs"].items():
+                if attr in SKIP_UPDATE_ATTRS:
+                    continue
                 set_attribute(attr, value, yeti_node)

         cmds.setAttr("{}.representation".format(container_node),

@@ -311,7 +323,6 @@ class YetiCacheLoader(load.LoaderPlugin):
         # Update attributes with defaults
         attributes = node_settings["attrs"]
         attributes.update({
-            "viewportDensity": 0.1,
             "verbosity": 2,
             "fileMode": 1,

@@ -321,6 +332,9 @@ class YetiCacheLoader(load.LoaderPlugin):
             "visibleInRefractions": True
         })

+        if "viewportDensity" not in attributes:
+            attributes["viewportDensity"] = 0.1
+
         # Apply attributes to pgYetiMaya node
         for attr, value in attributes.items():
             set_attribute(attr, value, yeti_node)
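The intent of `SKIP_UPDATE_ATTRS` is that stored attributes are re-applied on update while viewer-preference attributes keep any local override. Reduced to plain dictionaries, the filtering step looks like this (an illustrative sketch; the helper name `attrs_to_apply_on_update` is an assumption, not the loader's API):

```python
# Do not reset these values on update, only apply them on first load
SKIP_UPDATE_ATTRS = {
    "displayOutput",
    "viewportDensity",
    "viewportWidth",
    "viewportLength",
}


def attrs_to_apply_on_update(attrs):
    """Drop attributes whose local overrides must survive an update."""
    return {
        attr: value for attr, value in attrs.items()
        if attr not in SKIP_UPDATE_ATTRS
    }
```

Anything in the skip set, like `viewportDensity`, is left at whatever value the artist set in the scene.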
@@ -35,14 +35,11 @@ class CollectAssembly(pyblish.api.InstancePlugin):
         # Get all content from the instance
         instance_lookup = set(cmds.ls(instance, type="transform", long=True))
         data = defaultdict(list)
-        self.log.info(instance_lookup)

         hierarchy_nodes = []
         for container in containers:

-            self.log.info(container)
             root = lib.get_container_transforms(container, root=True)
-            self.log.info(root)
             if not root or root not in instance_lookup:
                 continue

@@ -18,7 +18,6 @@ class CollectMayaHistory(pyblish.api.InstancePlugin):
     hosts = ["maya"]
     label = "Maya History"
     families = ["rig"]
-    verbose = False

     def process(self, instance):

@@ -28,6 +28,8 @@ class CollectNewInstances(pyblish.api.InstancePlugin):
     order = pyblish.api.CollectorOrder
     hosts = ["maya"]

+    valid_empty_families = {"workfile", "renderlayer"}
+
     def process(self, instance):

         objset = instance.data.get("instance_node")

@@ -58,7 +60,7 @@ class CollectNewInstances(pyblish.api.InstancePlugin):

             instance[:] = members_hierarchy

-        elif instance.data["family"] != "workfile":
+        elif instance.data["family"] not in self.valid_empty_families:
             self.log.warning("Empty instance: \"%s\" " % objset)
         # Store the exact members of the object set
         instance.data["setMembers"] = members
@@ -356,8 +356,9 @@ class CollectLook(pyblish.api.InstancePlugin):
             # Thus the data will be limited to only what we need.
             self.log.debug("obj_set {}".format(sets[obj_set]))
             if not sets[obj_set]["members"]:
-                self.log.info(
-                    "Removing redundant set information: {}".format(obj_set))
+                self.log.debug(
+                    "Removing redundant set information: {}".format(obj_set)
+                )
                 sets.pop(obj_set, None)

         self.log.debug("Gathering attribute changes to instance members..")

@@ -396,9 +397,9 @@ class CollectLook(pyblish.api.InstancePlugin):
             if con:
                 materials.extend(con)

-        self.log.info("Found materials:\n{}".format(materials))
+        self.log.debug("Found materials:\n{}".format(materials))

-        self.log.info("Found the following sets:\n{}".format(look_sets))
+        self.log.debug("Found the following sets:\n{}".format(look_sets))
         # Get the entire node chain of the look sets
         # history = cmds.listHistory(look_sets)
         history = []

@@ -456,7 +457,7 @@ class CollectLook(pyblish.api.InstancePlugin):
         instance.extend(shader for shader in look_sets if shader
                         not in instance_lookup)

-        self.log.info("Collected look for %s" % instance)
+        self.log.debug("Collected look for %s" % instance)

     def collect_sets(self, instance):
         """Collect all objectSets which are of importance for publishing

@@ -593,7 +594,7 @@ class CollectLook(pyblish.api.InstancePlugin):
             if attribute == "fileTextureName":
                 computed_attribute = node + ".computedFileTextureNamePattern"

-            self.log.info(" - file source: {}".format(source))
+            self.log.debug(" - file source: {}".format(source))
             color_space_attr = "{}.colorSpace".format(node)
             try:
                 color_space = cmds.getAttr(color_space_attr)

@@ -621,7 +622,7 @@ class CollectLook(pyblish.api.InstancePlugin):
                               dependNode=True)
             )
             if not source and cmds.nodeType(node) in pxr_nodes:
-                self.log.info("Renderman: source is empty, skipping...")
+                self.log.debug("Renderman: source is empty, skipping...")
                 continue
             # We replace backslashes with forward slashes because V-Ray
             # can't handle the UDIM files with the backslashes in the

@@ -630,14 +631,14 @@ class CollectLook(pyblish.api.InstancePlugin):

             files = get_file_node_files(node)
             if len(files) == 0:
-                self.log.error("No valid files found from node `%s`" % node)
+                self.log.debug("No valid files found from node `%s`" % node)

-            self.log.info("collection of resource done:")
-            self.log.info(" - node: {}".format(node))
-            self.log.info(" - attribute: {}".format(attribute))
-            self.log.info(" - source: {}".format(source))
-            self.log.info(" - file: {}".format(files))
-            self.log.info(" - color space: {}".format(color_space))
+            self.log.debug("collection of resource done:")
+            self.log.debug(" - node: {}".format(node))
+            self.log.debug(" - attribute: {}".format(attribute))
+            self.log.debug(" - source: {}".format(source))
+            self.log.debug(" - file: {}".format(files))
+            self.log.debug(" - color space: {}".format(color_space))

             # Define the resource
             yield {
@@ -268,7 +268,7 @@ class CollectMultiverseLookData(pyblish.api.InstancePlugin):
         cmds.loadPlugin("MultiverseForMaya", quiet=True)
         import multiverse

-        self.log.info("Processing mvLook for '{}'".format(instance))
+        self.log.debug("Processing mvLook for '{}'".format(instance))

         nodes = set()
         for node in instance:

@@ -281,13 +281,12 @@ class CollectMultiverseLookData(pyblish.api.InstancePlugin):
                 long=True)
             nodes.update(nodes_of_interest)

-        files = []
         sets = {}
         instance.data["resources"] = []
         publishMipMap = instance.data["publishMipMap"]

         for node in nodes:
-            self.log.info("Getting resources for '{}'".format(node))
+            self.log.debug("Getting resources for '{}'".format(node))

             # We know what nodes need to be collected, now we need to
             # extract the materials overrides.

@@ -380,12 +379,12 @@ class CollectMultiverseLookData(pyblish.api.InstancePlugin):
         if len(files) == 0:
             self.log.error("No valid files found from node `%s`" % node)

-        self.log.info("collection of resource done:")
-        self.log.info(" - node: {}".format(node))
-        self.log.info(" - attribute: {}".format(fname_attrib))
-        self.log.info(" - source: {}".format(source))
-        self.log.info(" - file: {}".format(files))
-        self.log.info(" - color space: {}".format(color_space))
+        self.log.debug("collection of resource done:")
+        self.log.debug(" - node: {}".format(node))
+        self.log.debug(" - attribute: {}".format(fname_attrib))
+        self.log.debug(" - source: {}".format(source))
+        self.log.debug(" - file: {}".format(files))
+        self.log.debug(" - color space: {}".format(color_space))

         # Define the resource
         resource = {"node": node,

@@ -406,14 +405,14 @@ class CollectMultiverseLookData(pyblish.api.InstancePlugin):
             extra_files = []
             self.log.debug("Expecting MipMaps, going to look for them.")
             for fname in files:
-                self.log.info("Checking '{}' for mipmaps".format(fname))
+                self.log.debug("Checking '{}' for mipmaps".format(fname))
                 if is_mipmap(fname):
                     self.log.debug(" - file is already MipMap, skipping.")
                     continue

                 mipmap = get_mipmap(fname)
                 if mipmap:
-                    self.log.info(" mipmap found for '{}'".format(fname))
+                    self.log.debug(" mipmap found for '{}'".format(fname))
                     extra_files.append(mipmap)
                 else:
                     self.log.warning(" no mipmap found for '{}'".format(fname))
@@ -105,7 +105,7 @@ class CollectMayaRender(pyblish.api.InstancePlugin):
                         "family": cmds.getAttr("{}.family".format(s)),
                     }
                 )
-                self.log.info(" -> attach render to: {}".format(s))
+                self.log.debug(" -> attach render to: {}".format(s))

         layer_name = layer.name()

@@ -137,10 +137,10 @@ class CollectMayaRender(pyblish.api.InstancePlugin):
         has_cameras = any(product.camera for product in render_products)
         assert has_cameras, "No render cameras found."

-        self.log.info("multipart: {}".format(
+        self.log.debug("multipart: {}".format(
             multipart))
         assert expected_files, "no file names were generated, this is a bug"
-        self.log.info(
+        self.log.debug(
             "expected files: {}".format(
                 json.dumps(expected_files, indent=4, sort_keys=True)
             )

@@ -175,7 +175,7 @@ class CollectMayaRender(pyblish.api.InstancePlugin):
             publish_meta_path = os.path.dirname(full_path)
             aov_dict[aov_first_key] = full_paths
         full_exp_files = [aov_dict]
-        self.log.info(full_exp_files)
+        self.log.debug(full_exp_files)

         if publish_meta_path is None:
             raise KnownPublishError("Unable to detect any expected output "

@@ -227,7 +227,7 @@ class CollectMayaRender(pyblish.api.InstancePlugin):
         if platform.system().lower() in ["linux", "darwin"]:
             common_publish_meta_path = "/" + common_publish_meta_path

-        self.log.info(
+        self.log.debug(
             "Publish meta path: {}".format(common_publish_meta_path))

         # Get layer specific settings, might be overrides

@@ -300,7 +300,7 @@ class CollectMayaRender(pyblish.api.InstancePlugin):
         )
         if rr_settings["enabled"]:
             data["rrPathName"] = instance.data.get("rrPathName")
-            self.log.info(data["rrPathName"])
+            self.log.debug(data["rrPathName"])

         if self.sync_workfile_version:
             data["version"] = context.data["version"]
@@ -37,7 +37,7 @@ class CollectRenderLayerAOVS(pyblish.api.InstancePlugin):

         # Get renderer
         renderer = instance.data["renderer"]
-        self.log.info("Renderer found: {}".format(renderer))
+        self.log.debug("Renderer found: {}".format(renderer))

         rp_node_types = {"vray": ["VRayRenderElement", "VRayRenderElementSet"],
                          "arnold": ["aiAOV"],

@@ -66,8 +66,8 @@ class CollectRenderLayerAOVS(pyblish.api.InstancePlugin):

             result.append(render_pass)

-        self.log.info("Found {} render elements / AOVs for "
-                      "'{}'".format(len(result), instance.data["subset"]))
+        self.log.debug("Found {} render elements / AOVs for "
+                       "'{}'".format(len(result), instance.data["subset"]))

         instance.data["renderPasses"] = result
@@ -21,11 +21,12 @@ class CollectRenderableCamera(pyblish.api.InstancePlugin):
         else:
             layer = instance.data["renderlayer"]

-        self.log.info("layer: {}".format(layer))
         cameras = cmds.ls(type="camera", long=True)
-        renderable = [c for c in cameras if
-                      get_attr_in_layer("%s.renderable" % c, layer)]
+        renderable = [cam for cam in cameras if
+                      get_attr_in_layer("{}.renderable".format(cam), layer)]

-        self.log.info("Found cameras %s: %s" % (len(renderable), renderable))
+        self.log.debug(
+            "Found renderable cameras %s: %s", len(renderable), renderable
+        )

         instance.data["cameras"] = renderable
39 openpype/hosts/maya/plugins/publish/collect_rig_sets.py Normal file
@@ -0,0 +1,39 @@
+import pyblish.api
+from maya import cmds
+
+
+class CollectRigSets(pyblish.api.InstancePlugin):
+    """Ensure rig contains pipeline-critical content
+
+    Every rig must contain at least two object sets:
+        "controls_SET" - Set of all animatable controls
+        "out_SET" - Set of all cacheable meshes
+
+    """
+
+    order = pyblish.api.CollectorOrder + 0.05
+    label = "Collect Rig Sets"
+    hosts = ["maya"]
+    families = ["rig"]
+
+    accepted_output = ["mesh", "transform"]
+    accepted_controllers = ["transform"]
+
+    def process(self, instance):
+
+        # Find required sets by suffix
+        searching = {"controls_SET", "out_SET"}
+        found = {}
+        for node in cmds.ls(instance, exactType="objectSet"):
+            for suffix in searching:
+                if node.endswith(suffix):
+                    found[suffix] = node
+                    searching.remove(suffix)
+                    break
+            if not searching:
+                break
+
+        self.log.debug("Found sets: {}".format(found))
+        rig_sets = instance.data.setdefault("rig_sets", {})
+        for name, objset in found.items():
+            rig_sets[name] = objset
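The suffix lookup in `CollectRigSets` stops as soon as both sets are found, and the inner `break` right after `searching.remove(suffix)` keeps the set-mutation-during-iteration safe. The same search, extracted to a standalone function over plain strings (the name `find_rig_sets` is a hypothetical helper, not part of the plugin):

```python
def find_rig_sets(object_sets, suffixes=("controls_SET", "out_SET")):
    """Map each required suffix to the first object set ending with it."""
    searching = set(suffixes)
    found = {}
    for node in object_sets:
        for suffix in searching:
            if node.endswith(suffix):
                found[suffix] = node
                # Break immediately after removing so we never continue
                # iterating a set we just mutated.
                searching.remove(suffix)
                break
        if not searching:
            # Both sets found, no need to scan further nodes
            break
    return found
```

Only the first match per suffix is kept, mirroring the collector's behavior when a scene contains multiple candidate sets.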
@@ -19,7 +19,7 @@ class CollectUnrealStaticMesh(pyblish.api.InstancePlugin):
         instance.data["geometryMembers"] = cmds.sets(
             geometry_set, query=True)

-        self.log.info("geometry: {}".format(
+        self.log.debug("geometry: {}".format(
             pformat(instance.data.get("geometryMembers"))))

         collision_set = [

@@ -29,7 +29,7 @@ class CollectUnrealStaticMesh(pyblish.api.InstancePlugin):
         instance.data["collisionMembers"] = cmds.sets(
             collision_set, query=True)

-        self.log.info("collisions: {}".format(
+        self.log.debug("collisions: {}".format(
             pformat(instance.data.get("collisionMembers"))))

         frame = cmds.currentTime(query=True)
@@ -67,5 +67,5 @@ class CollectXgen(pyblish.api.InstancePlugin):

         data["transfers"] = transfers

-        self.log.info(data)
+        self.log.debug(data)
         instance.data.update(data)
@@ -4,12 +4,23 @@ import pyblish.api

 from openpype.hosts.maya.api import lib

-SETTINGS = {"renderDensity",
-            "renderWidth",
-            "renderLength",
-            "increaseRenderBounds",
-            "imageSearchPath",
-            "cbId"}
+SETTINGS = {
+    # Preview
+    "displayOutput",
+    "colorR", "colorG", "colorB",
+    "viewportDensity",
+    "viewportWidth",
+    "viewportLength",
+    # Render attributes
+    "renderDensity",
+    "renderWidth",
+    "renderLength",
+    "increaseRenderBounds",
+    "imageSearchPath",
+    # Pipeline specific
+    "cbId"
+}


 class CollectYetiCache(pyblish.api.InstancePlugin):

@@ -39,10 +50,6 @@ class CollectYetiCache(pyblish.api.InstancePlugin):
         # Get yeti nodes and their transforms
         yeti_shapes = cmds.ls(instance, type="pgYetiMaya")
         for shape in yeti_shapes:
-            shape_data = {"transform": None,
-                          "name": shape,
-                          "cbId": lib.get_id(shape),
-                          "attrs": None}

             # Get specific node attributes
             attr_data = {}

@@ -58,9 +65,12 @@ class CollectYetiCache(pyblish.api.InstancePlugin):
             parent = cmds.listRelatives(shape, parent=True)[0]
             transform_data = {"name": parent, "cbId": lib.get_id(parent)}

-            # Store collected data
-            shape_data["attrs"] = attr_data
-            shape_data["transform"] = transform_data
+            shape_data = {
+                "transform": transform_data,
+                "name": shape,
+                "cbId": lib.get_id(shape),
+                "attrs": attr_data,
+            }

             settings["nodes"].append(shape_data)
@@ -119,7 +119,6 @@ class CollectYetiRig(pyblish.api.InstancePlugin):
         texture_filenames = []
         if image_search_paths:
-
             # TODO: Somehow this uses OS environment path separator, `:` vs `;`
             # Later on check whether this is pipeline OS cross-compatible.
             image_search_paths = [p for p in

@@ -130,13 +129,13 @@ class CollectYetiRig(pyblish.api.InstancePlugin):

             # List all related textures
             texture_filenames = cmds.pgYetiCommand(node, listTextures=True)
-            self.log.info("Found %i texture(s)" % len(texture_filenames))
+            self.log.debug("Found %i texture(s)" % len(texture_filenames))

             # Get all reference nodes
             reference_nodes = cmds.pgYetiGraph(node,
                                                listNodes=True,
                                                type="reference")
-            self.log.info("Found %i reference node(s)" % len(reference_nodes))
+            self.log.debug("Found %i reference node(s)" % len(reference_nodes))

         if texture_filenames and not image_search_paths:
             raise ValueError("pgYetiMaya node '%s' is missing the path to the "
@@ -100,7 +100,7 @@ class ExtractArnoldSceneSource(publish.Extractor):
 
         instance.data["representations"].append(representation)
 
-        self.log.info(
+        self.log.debug(
             "Extracted instance {} to: {}".format(instance.name, staging_dir)
         )
 
@@ -126,7 +126,7 @@ class ExtractArnoldSceneSource(publish.Extractor):
         instance.data["representations"].append(representation)
 
     def _extract(self, nodes, attribute_data, kwargs):
-        self.log.info(
+        self.log.debug(
             "Writing {} with:\n{}".format(kwargs["filename"], kwargs)
         )
         filenames = []
@@ -180,12 +180,12 @@ class ExtractArnoldSceneSource(publish.Extractor):
 
         with lib.attribute_values(attribute_data):
             with lib.maintained_selection():
-                self.log.info(
+                self.log.debug(
                     "Writing: {}".format(duplicate_nodes)
                 )
                 cmds.select(duplicate_nodes, noExpand=True)
 
-                self.log.info(
+                self.log.debug(
                     "Extracting ass sequence with: {}".format(kwargs)
                 )
 
@@ -194,6 +194,6 @@ class ExtractArnoldSceneSource(publish.Extractor):
         for file in exported_files:
             filenames.append(os.path.split(file)[1])
 
-        self.log.info("Exported: {}".format(filenames))
+        self.log.debug("Exported: {}".format(filenames))
 
         return filenames, nodes_by_id
@@ -27,7 +27,7 @@ class ExtractAssembly(publish.Extractor):
         json_filename = "{}.json".format(instance.name)
         json_path = os.path.join(staging_dir, json_filename)
 
-        self.log.info("Dumping scene data for debugging ..")
+        self.log.debug("Dumping scene data for debugging ..")
         with open(json_path, "w") as filepath:
             json.dump(instance.data["scenedata"], filepath, ensure_ascii=False)
 
@@ -94,7 +94,7 @@ class ExtractCameraAlembic(publish.Extractor):
                 "Attributes to bake must be specified as a list"
             )
             for attr in self.bake_attributes:
-                self.log.info("Adding {} attribute".format(attr))
+                self.log.debug("Adding {} attribute".format(attr))
                 job_str += " -attr {0}".format(attr)
 
         with lib.evaluation("off"):
@@ -112,5 +112,5 @@ class ExtractCameraAlembic(publish.Extractor):
         }
         instance.data["representations"].append(representation)
 
-        self.log.info("Extracted instance '{0}' to: {1}".format(
+        self.log.debug("Extracted instance '{0}' to: {1}".format(
             instance.name, path))
@@ -111,7 +111,7 @@ class ExtractCameraMayaScene(publish.Extractor):
         for family in self.families:
             try:
                 self.scene_type = ext_mapping[family]
-                self.log.info(
+                self.log.debug(
                     "Using {} as scene type".format(self.scene_type))
                 break
             except KeyError:
@@ -151,7 +151,7 @@ class ExtractCameraMayaScene(publish.Extractor):
         with lib.evaluation("off"):
             with lib.suspended_refresh():
                 if bake_to_worldspace:
-                    self.log.info(
+                    self.log.debug(
                         "Performing camera bakes: {}".format(transform))
                     baked = lib.bake_to_world_space(
                         transform,
@@ -186,7 +186,7 @@ class ExtractCameraMayaScene(publish.Extractor):
                     unlock(plug)
                 cmds.setAttr(plug, value)
 
-            self.log.info("Performing extraction..")
+            self.log.debug("Performing extraction..")
             cmds.select(cmds.ls(members, dag=True,
                                 shapes=True, long=True), noExpand=True)
             cmds.file(path,
@@ -217,5 +217,5 @@ class ExtractCameraMayaScene(publish.Extractor):
         }
         instance.data["representations"].append(representation)
 
-        self.log.info("Extracted instance '{0}' to: {1}".format(
+        self.log.debug("Extracted instance '{0}' to: {1}".format(
             instance.name, path))
@@ -33,11 +33,11 @@ class ExtractFBX(publish.Extractor):
         # to format it into a string in a mel expression
         path = path.replace('\\', '/')
 
-        self.log.info("Extracting FBX to: {0}".format(path))
+        self.log.debug("Extracting FBX to: {0}".format(path))
 
         members = instance.data["setMembers"]
-        self.log.info("Members: {0}".format(members))
-        self.log.info("Instance: {0}".format(instance[:]))
+        self.log.debug("Members: {0}".format(members))
+        self.log.debug("Instance: {0}".format(instance[:]))
 
         fbx_exporter.set_options_from_instance(instance)
 
@@ -58,4 +58,4 @@ class ExtractFBX(publish.Extractor):
         }
         instance.data["representations"].append(representation)
 
-        self.log.info("Extract FBX successful to: {0}".format(path))
+        self.log.debug("Extract FBX successful to: {0}".format(path))
@@ -20,14 +20,10 @@ class ExtractGLB(publish.Extractor):
         filename = "{0}.glb".format(instance.name)
         path = os.path.join(staging_dir, filename)
 
-        self.log.info("Extracting GLB to: {}".format(path))
-
         cmds.loadPlugin("maya2glTF", quiet=True)
 
         nodes = instance[:]
 
-        self.log.info("Instance: {0}".format(nodes))
-
         start_frame = instance.data('frameStart') or \
             int(cmds.playbackOptions(query=True,
                                      animationStartTime=True))  # noqa
@@ -48,6 +44,7 @@ class ExtractGLB(publish.Extractor):
             "vno": True  # visibleNodeOnly
         }
 
+        self.log.debug("Extracting GLB to: {}".format(path))
         with lib.maintained_selection():
             cmds.select(nodes, hi=True, noExpand=True)
             extract_gltf(staging_dir,
@@ -65,4 +62,4 @@ class ExtractGLB(publish.Extractor):
         }
         instance.data["representations"].append(representation)
 
-        self.log.info("Extract GLB successful to: {0}".format(path))
+        self.log.debug("Extract GLB successful to: {0}".format(path))